- In the Vertica database, what is a namespace?
- Why does `pg_lsclusters` not list my Postgres cluster?
- Need to replace only first occurrence of string
- Keeping version history of functions in PostgreSQL
- Insert into table from a query to a linked database hangs
- Reset IDENTITY value
- What are some good packages to install before configuring postgres on Ubuntu?
- Having trouble connecting tungsten replicator from EC2 to RDS
- In creating view use SQL Security definer or invoker?
- SQL Server 2008 log file growing and won't shrink. Hard drive space running out [duplicate]
- Too Many database connections on Amazon RDS
- Relational database primary key requirements
- mysql replication goes out of sync for some tables
- Find procedures that haven't been called in <n> days
- Force View's query plan to update?
- What is the best way to migrate multiple databases?
- Updating an FTS indexed column returns "Invalid InnoDB FTS Doc ID"
- Synchronization between master and backup server
- Can I restore an uncompressed differential backup after a compressed full backup?
- Cannot Connect to Oracle Database running on Windows 8 Hyper V Virtual Machine
- MySQL+Web-App Performance Issues using different servers
- Database running out of space
- SQL Server database schema (and likely, some data changes) - how to auto-distribute over many database instances
- MySQL slap with custom query
- Overview of how MongoDB uses its various threads
- MySQL information_schema doesn't update
- How to successfully run a batch file in an SQL Agent job?
- MySQL partitioned tables?
- Is there a SQL Server equivalent to "OVERRIDING USER VALUE"
- MySQL user defined rollback procedure
Posted: 05 Jun 2013 09:09 PM PDT
In the Vertica database, what does the term "namespace" mean?
I have reviewed the entire Vertica documentation and cannot find what this means.
Posted: 05 Jun 2013 09:11 PM PDT
Posted: 05 Jun 2013 08:47 PM PDT
I hope I worded the question right. Here are the specifics:
This is regarding a MySQL database. I've inherited several hundred posts with custom fields and a unique excerpt. At this point I think I can solve the problem by inserting the
1 - Search each
Unfortunately each post has multiple
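Since MySQL's `REPLACE()` changes every occurrence, replacing only the first one needs string functions. A minimal sketch, assuming a hypothetical `posts` table and `excerpt` column:

```sql
-- Sketch: replace only the FIRST occurrence of a needle in MySQL.
-- Table/column names (posts, excerpt) are assumptions for illustration.
SET @needle = 'old', @replacement = 'new';

UPDATE posts
SET excerpt = CONCAT(
        LEFT(excerpt, LOCATE(@needle, excerpt) - 1),
        @replacement,
        SUBSTRING(excerpt, LOCATE(@needle, excerpt) + CHAR_LENGTH(@needle)))
WHERE LOCATE(@needle, excerpt) > 0;
```

`LOCATE` returns the 1-based position of the first match, so the `LEFT`/`SUBSTRING` pair splices the replacement in exactly once and the `WHERE` clause skips rows with no match.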
Posted: 05 Jun 2013 07:45 PM PDT
For many of my tables, I've added auditing triggers (based on this wiki page). It works very well and has helped me several times figure out who changed what in the system. We have a Python/Django application that sits on top of the data and that code is tracked in Git. However, there seems to be one area of our system where changes are not tracked very well. And that's the function/triggers, etc in PostgreSQL. I wish there was a way I could add a similar audit capability to the schema as I have with the data itself.
How do DBAs track these changes? Note: I'm in a position where more than one person has sufficient privileges to "CREATE OR REPLACE" a function, so I can't necessarily count on each person to write a script and check it into Git, and I certainly can't force them to. If possible, it needs to be automatic.
I've thought about writing a Python script that dumps each function definition to a file and then programmatically commits the changes to a Git repo.
Another option would be to add some tables to my audit schema, query each night looking for changes, and write them to a table. That could certainly work, but it's not quite as nice as being able to do a diff/blame via Git.
This may not be the greatest question because I'm not exactly sure what I'm looking for and there may not be an exact "right" answer, but I would like to know what people do. I've done some googling on the topic, but I'm not finding much (maybe I'm just using the wrong terms).
Finally, I'm currently using PostgreSQL 9.1.
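One automatic approach on 9.1 is a scheduled job that snapshots every function body into files and commits them to Git; the definitions themselves can be pulled with a catalog query like this sketch (it skips aggregates, for which `pg_get_functiondef` raises an error):

```sql
-- Dump the source of every user-defined function, so a cron job can
-- write each definition to a file and commit the result to Git.
SELECT n.nspname AS schema,
       p.proname AS function,
       pg_get_functiondef(p.oid) AS definition
FROM pg_proc p
JOIN pg_namespace n ON n.oid = p.pronamespace
WHERE NOT p.proisagg
  AND n.nspname NOT IN ('pg_catalog', 'information_schema');
```

Run nightly, this gives exactly the diff/blame-via-Git workflow described above without depending on each person checking in their own script.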
Posted: 05 Jun 2013 08:50 PM PDT
I have a schema with some database links to other schemas that we use to pull data from. We pull the data into a staging table, massage it, and then do some merges into our schema/database.
The data in the linked schema is a little much to pull into test systems, so I want to only get the first 10000 records, which is easy enough.
This returns very quickly as in under 2 seconds.
However, when I try to insert this into my staging table, it causes SQL Developer to hang. Not a Windows hang: the Script Output window just shows "ScriptRunner Task" with the progress bar moving back and forth, showing that it's doing something.
When it does this I can see that there are DW locks, but nothing else is using the linked databases that I'm trying to query.
Any help would be greatly appreciated as I am very much a novice with Oracle.
Posted: 05 Jun 2013 08:04 PM PDT
I have a table with an IDENTITY column. While developing, I delete the rows from time to time and add them again. But the IDENTITY values keep increasing and don't start from 1 when I re-add the rows. Now my IDs run from 68 to 92, and this breaks my code.
How do I reset the IDENTITY value?
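Assuming SQL Server (where IDENTITY and this behavior match), two sketches with a placeholder table name:

```sql
-- TRUNCATE resets the IDENTITY seed automatically:
TRUNCATE TABLE dbo.MyTable;

-- If you must use DELETE (e.g. foreign keys prevent TRUNCATE),
-- reseed so the next inserted row gets IDENTITY value 1:
DELETE FROM dbo.MyTable;
DBCC CHECKIDENT ('dbo.MyTable', RESEED, 0);
```

This gap-on-delete behavior is by design, though, so code that depends on IDENTITY values being contiguous is usually the thing to fix.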
Posted: 05 Jun 2013 02:14 PM PDT
I can find a hundred examples of compiling Postgres 9.2 from source. None of them make mention of any options to pass to
Posted: 05 Jun 2013 01:08 PM PDT
I am testing Tungsten Replicator to help migrate large databases to RDS. I have no trouble setting up the service on the EC2 "master". I am using a second EC2 instance as a kind of relay server host to an RDS target. I tested the connection to MySQL on RDS from both EC2 instances (fine) and can SSH between the EC2 servers fine. The replicator service status is good on the EC2 master, but the relay slave -> RDS stays perpetually in the "GOING-ONLINE:SYNCHRONIZING" state. Does anyone have experience trying this?
Posted: 05 Jun 2013 06:03 PM PDT
Posted: 05 Jun 2013 02:24 PM PDT
This question already has an answer here:
I have a SQL Server 2008 database which has a log file (
My problem is that I don't have enough hard drive space to take a log backup.
Is there any way I can shrink this file?
The database is on
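When there is genuinely no disk left for a log backup, the usual emergency sketch (database and log file names are placeholders) is to break the log chain deliberately:

```sql
-- Emergency only: switching to SIMPLE recovery discards the log
-- backup chain, so take a full backup immediately afterwards.
ALTER DATABASE MyDb SET RECOVERY SIMPLE;
DBCC SHRINKFILE (MyDb_log, 1024);   -- target size in MB
ALTER DATABASE MyDb SET RECOVERY FULL;
```

The longer-term fix is scheduled log backups (or SIMPLE recovery if point-in-time restore isn't needed), since the log only grows like this when it is never backed up.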
Posted: 05 Jun 2013 12:38 PM PDT
We are having problems with users running queries/views in Drupal that occasionally cause our site to freeze. The freeze occurs because the query drives the number of database connections up to 400+, and basically any time the site goes over 100 database connections it slows down terribly and just doesn't respond.
We are running Amazon RDS using MySQL Red Hat Linux
We have a large enough EC2 on the front end app server, and a large enough RDS.
The way we are fixing this issue now is to find the offending query and kill it. Once the query is killed, our database connections drop to around 20, which is the normal number we see when monitoring site statistics.
Is there a way to stop the offending query and kill it before it runs too long and consumes the connections? I am trying to automate killing the bad query before it happens, or at least to recognize after 30 seconds that it's a bad query and kill it.
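One automated approach, as a sketch only: RDS does not grant the KILL privilege, but Amazon provides the `mysql.rds_kill` procedure, and the event scheduler can call it for queries that have run too long. The 30-second threshold and the excluded users are assumptions:

```sql
-- Requires event_scheduler=ON (set via the RDS parameter group).
DELIMITER //
CREATE EVENT kill_long_queries
ON SCHEDULE EVERY 10 SECOND
DO
BEGIN
    DECLARE done INT DEFAULT 0;
    DECLARE qid BIGINT;
    DECLARE cur CURSOR FOR
        SELECT id
        FROM information_schema.processlist
        WHERE command = 'Query'
          AND time > 30                               -- assumed threshold
          AND user NOT IN ('rdsadmin', 'rdsrepladmin');
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
    OPEN cur;
    kill_loop: LOOP
        FETCH cur INTO qid;
        IF done THEN LEAVE kill_loop; END IF;
        CALL mysql.rds_kill(qid);   -- RDS replacement for KILL
    END LOOP;
    CLOSE cur;
END //
DELIMITER ;
```

Test this carefully on a staging instance first: an aggressive threshold will also kill legitimate long-running reports and backups.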
Posted: 05 Jun 2013 01:12 PM PDT
I am trying to put together an ER Diagram to design a fairly simple asset tag tracking/inventory management system for our servers/switches/UPS/etc. The top-level entity corresponds to the business location, followed by entities for Room, Rack, RackU, and finally device. Now, I know that primary keys are supposed to be unique, but is that unique over the entire database, or unique for the individual branch of the system?
By that I mean, there are one to many rooms in a location, and one to many racks in each room. Do the primary keys for the racks in one room need to be unique from those of another room in the same location, or even from a room in a different location?
I probably didn't describe this very well, so feel free to ask for clarifications on or at any point.
Posted: 05 Jun 2013 11:40 AM PDT
We are running MySQL 5.1.61 on Red Hat systems and have the following setup:
One master and four slaves replicating from it. We recently added a new slave, and over a few days we noticed that on the newly added slave some tables (not all) lose some records. This happens only on this slave, and not regularly; over a period of 3 weeks it seems to have happened on 5-7 days.
We use statement-based replication. I am not sure why this happens on only one slave. There seem to be no errors in the MySQL error logs. The only difference between the old slaves and the new one is that the new slave has slightly less RAM, but it is not being used for anything right now.
Is there a way to troubleshoot this issue to see why it happens on only one slave? Could it be network related, or anything else? Any pointers on where to start looking?
Here is the memory info Old slave
Posted: 05 Jun 2013 02:12 PM PDT
We are deleting old stored procedures and tables.
How can I know what procedures haven't been called recently?
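Assuming SQL Server 2008 or later, a sketch using the procedure-stats DMV. Note the big caveat: it only sees plans still in cache since the last service restart, so the absence of a row is not proof a procedure is unused:

```sql
-- Procedures with no recorded execution in the last 30 days
-- (or no cached plan at all). 30 is an assumed threshold.
SELECT p.name, s.last_execution_time
FROM sys.procedures AS p
LEFT JOIN sys.dm_exec_procedure_stats AS s
       ON s.object_id = p.object_id
WHERE s.last_execution_time IS NULL
   OR s.last_execution_time < DATEADD(DAY, -30, GETDATE());
```

For a reliable answer over months, you would need to persist these DMV snapshots to a table on a schedule, or add auditing, before deciding what to delete.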
Posted: 05 Jun 2013 01:50 PM PDT
I have a View whose query plan appears to be cached, is there a way to force the View's plan to be recalculated on each access?
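If this is SQL Server (an assumption; the question doesn't say), two common ways to get a fresh plan, with a placeholder view name:

```sql
-- Per query: compile a fresh plan each time the view is read.
SELECT * FROM dbo.MyView OPTION (RECOMPILE);

-- Or flag the view so every module referencing it recompiles
-- on its next execution:
EXEC sp_recompile N'dbo.MyView';
```

If the real problem is a stale plan from skewed parameters, updating statistics on the underlying tables often removes the need to force recompiles at all.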
Posted: 05 Jun 2013 08:21 PM PDT
I want to migrate my database server to a new server. The current database server runs Windows Server 2008, and I am migrating to a new, separate server running Windows Server 2012.
There are around 50 to 100 databases.
What is the best way to migrate the database server without affecting clients (meaning no downtime)?
Posted: 05 Jun 2013 10:26 AM PDT
I have a table which has a full text index on it (real table has much more columns and rows):
The MySQL documentation suggests creating the FTS_DOC_ID column (with the right syntax) to prevent a full table rebuild.
So far all is good and I can query using the
I get a:
If I manually take care of this column like this:
Then the update is done. But this is not acceptable because I have several processes in parallel that update this table (though all different rows) and the risk is to get the same
successfully update the database.
Is it the expected behaviour? Or is it a bug?
I found a bug reported in the MySQL buglist about the opposite case (cannot update a non-fts indexed column but can on an fts indexed one) but not this case.
Posted: 05 Jun 2013 08:25 PM PDT
I am searching for an option to synchronize data between production server and backup server automatically. Does somebody know a method, how I can do this:
Posted: 05 Jun 2013 06:22 PM PDT
If we enable compression on full database backups (because the full backup is huge) but don't enable it on the differentials, which are relatively small, will a restore work, given that the full backup is compressed but the differential isn't?
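Yes: compression is a property of each individual backup set, and RESTORE detects it automatically, so mixing compressed and uncompressed backups in one restore sequence works. A sketch with placeholder names and paths:

```sql
-- RESTORE needs no compression-related options; each backup set
-- is decompressed (or not) transparently.
RESTORE DATABASE MyDb FROM DISK = N'D:\backup\full_compressed.bak'
    WITH NORECOVERY;
RESTORE DATABASE MyDb FROM DISK = N'D:\backup\diff_uncompressed.bak'
    WITH RECOVERY;
```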
Posted: 05 Jun 2013 02:48 PM PDT
I have an Oracle server (Standard Edition) installed on a guest VM (Windows 8 Hyper-V). I have mapped a host entry to the VM as "brettvm". I am able to log into the VM and connect to the default instance using sqlplus. So far so good.
EDIT: Here's the listener.ora file on the server. I'm starting to suspect the trouble may lie here:
The tnsnames.ora file on the server looks like this:
The Problem: When I attempt to connect from the HOST machine, sqlplus prompts for a password and then hangs. Here is my tnsnames.ora on the host machine:
Some things I have tried:
I'm going to start digging through the installation guide...
Any other advice? Thanks.
Posted: 05 Jun 2013 10:52 AM PDT
We are having a performance issue with our MySQL servers that does not make any sense. I have read countless articles by many people (mostly at Percona) and have made my.cnf tweaks; we even managed to squeeze out another 30% more TPS thanks to those Percona articles. However, our problem is with our in-house web app (a Tomcat/Java/Apache stack). It performs poorly when connected to certain servers - the better-hardware servers.
Here is the symptom:
If we point our test application server (Ubuntu, Apache, Tomcat, Java) at server MYSQL02, the application's performance is acceptable. However, if we point the application at MYSQL01 or MYSQL03 (and these two boxes are idle!), performance is poor, with high latencies. Example:
We cannot figure out why! The MySQL servers and MONyog do NOT report any problems. If we execute the statements (hundreds of them) manually, they return instant results, and their explain plans show they are all using indexes. We do NOT get any slow-query, deadlock, or contention notifications.
Here is some basic information about our MySQL systems. They are all DEDICATED MySQL servers:
PROD (current production, not in replication farm, standalone)
We used sysbench to test and tweak all the above systems and here are the test results with notes.
NOTE: TPS = Transactions Per Second
Results before applying any new tweaks:
Results after my.cnf tweaks:
We are unsure why MYSQL01's performance is so poor. We can only surmise that there is an OS, RAID card, or BIOS setting that may be improperly set. I am leaning towards the RAID card/configuration. The only way to know for sure is to shut down this server and scrutinize the configuration. A reload may be necessary. However, since our ultimate plan is to make the current PROD hardware the primary production MySQL server, we may leave MYSQL01 alone for now and re-purpose the hardware after migrating to the 5.5 farm. Still, we can't migrate until we figure out why our application behaves so poorly on certain hardware.
Anyone have any suggestions?
Posted: 05 Jun 2013 01:09 PM PDT
My database has 16MB of space left.
I used to just truncate as I was taught but I found these links that advise against truncating:
Is there anything else I can do on my database to reduce the size other than deleting table records? I am new to the DBA forum and I probably should have looked around for other questions before posting but I am desperate as I am worried about my database going down.
Posted: 05 Jun 2013 10:24 AM PDT
Our development involves a SQL Server database (and might involve Oracle or Postgres later), and we sometimes make database schema changes or other interventions in the database.
What solutions exist to create a "patch" or "script" to distribute these changes to other installations of the same database (to which we have no direct access)? It needs to alter the database schema and execute SQL and/or other complex, pre-programmed data alterations as defined by the person who designs the change. On the other instances, a system admin should be able to just run a program or press a button so the changes are applied automatically.
In addition, it is a plus if such a solution can take a database snapshot and derive a "difference" of the contents of a particular table that would then be distributed.
The solution can be commercial.
Thanks in advance!
Posted: 05 Jun 2013 06:24 PM PDT
I want to run a stress test on our MySQL DB. I have the list of queries I need to execute. I tried Apache JMeter for this, but it is very time consuming. Is it possible to run mysqlslap with a custom .sql file containing INSERT, UPDATE, and SELECT queries against a specified MySQL database?
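mysqlslap can replay a file of statements via `--query`. A sketch, where the credentials, path, and concurrency numbers are placeholders:

```shell
# --create-schema names the schema the statements run against;
# --delimiter must match the separator used inside the .sql file.
mysqlslap --user=app --password \
  --create-schema=mydb \
  --query=/path/to/queries.sql \
  --delimiter=";" \
  --concurrency=10 \
  --iterations=5
```

Run it first against a throwaway schema to confirm the behavior, since mysqlslap manages the test schema itself in some modes.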
Posted: 05 Jun 2013 01:24 PM PDT
On one instance I have MongoDB using ~85 threads. Lacking the time to investigate directly, I am curious:
Posted: 05 Jun 2013 08:24 PM PDT
I have a database, say
After I run the query
The strange thing is that the database size doesn't decrease at all, even though the data in "test" is gone.
I've done this kind of test many times, this strange behavior happens sometimes.
Can anybody tell me what is wrong?
Actually, I use another thread to check the database size periodically.
Posted: 05 Jun 2013 05:24 PM PDT
I have a SQL Agent Job which generates a specific report in PDF-file and then copies the PDF to a network directory and then deletes the PDF file in the source directory.
The SQL Agent job consists of 2 steps: 1. Generate the report. 2. Copy the report to the network location.
For step 2 I made a bat-file which handles the copying and removal of the pdf file.
The bat-file is as follows:
However, when I run the job, it hangs on the second step. The status just stays at "Executing".
This is the line which I stated in the step (location of the bat-file to execute):
My job settings are as follows:

Step 1:
- Type: Operating system (CmdExec)
- On Success: Go to the next step
- On Failure: Quit the job reporting failure

Step 2:
- Type: Operating system (CmdExec)
- On Success: Quit the job reporting success
- On Failure: Quit the job reporting failure
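A step that stays at "Executing" is usually waiting on something interactive or on a resource the SQL Agent service account can't reach (an overwrite prompt, a mapped drive, a permissions dialog). A defensive sketch of the bat file, with assumed paths, that cannot prompt and reports a proper exit code:

```bat
@echo off
rem Use UNC paths, not mapped drives: the Agent service account has
rem no drive mappings. /Y suppresses the overwrite confirmation that
rem would otherwise hang a non-interactive job. Paths are assumptions.
copy /Y "D:\Reports\report.pdf" "\\fileserver\reports\"
if errorlevel 1 exit /b 1
del /Q "D:\Reports\report.pdf"
exit /b 0
```

Also check that the SQL Agent service account actually has write permission on the network share; a hidden credentials prompt hangs exactly this way.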
Posted: 05 Jun 2013 03:24 PM PDT
I have a database that supports a web application with several large tables. I'm wondering if partitioned tables will help speed up certain queries. Each of these tables has a column called client_id. Data for each client_id is independent of every other client_id. In other words, web queries will always contain a WHERE clause with a single client_id. I'm thinking this may be a good column on which to partition my large tables.
After reading up on partitioned tables, I'm still a little unsure as to how best to partition. For example, a typical table may have 50 million rows distributed more or less evenly across 35 client_ids. We add new client_ids periodically but in the short term the number of client_ids is relatively fixed.
I was thinking something along these lines:
My question: Is this an optimal strategy for partitioning these types of tables? My tests indicate a considerable speedup over indexing on client_id, but can I do better with some other form of partitioning (e.g. hash or range)?
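For comparison, sketches of the two forms that fit a single-valued client_id predicate (the table name is assumed, and MySQL requires every unique key, including the primary key, to contain the partitioning column):

```sql
-- KEY/HASH spreads the ~35 client_ids across a fixed number of
-- partitions; pruning still applies for WHERE client_id = N.
ALTER TABLE big_table
    PARTITION BY KEY (client_id)
    PARTITIONS 16;

-- LIST (integer client_id assumed) gives one partition per known
-- client, but each new client needs an ALTER ... ADD PARTITION.
ALTER TABLE big_table
    PARTITION BY LIST (client_id) (
        PARTITION p1 VALUES IN (1),
        PARTITION p2 VALUES IN (2)
        -- ... one entry per client_id
    );
```

With a slowly growing, fixed-ish set of clients, KEY/HASH avoids the maintenance burden of LIST while still pruning to one partition per query.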
Posted: 05 Jun 2013 01:50 PM PDT
While googling I found about the OVERRIDING USER VALUE parameter for an
Is there an equivalent command for Microsoft SQL Server versions 2005 or newer to allow you to insert into a table that has an
In this instance, the main goal is to help convince a superior to move us off of using a GUID as the clustered primary key (the schema was developed pre-SQL 2000, when that was a decent idea, versus now, when it is a horrible one), add an identity column to the tables, move the clustered primary key to that, and convert the old index to a non-clustered unique index.
The biggest push back is:
So that is why
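For reference, a hedged sketch of the contrast (table and column names are placeholders):

```sql
-- Standard-SQL / DB2 form the question refers to (NOT valid T-SQL):
--   INSERT INTO t (id, col) OVERRIDING USER VALUE VALUES (99, 'x');

-- SQL Server has no direct equivalent; the generated value is used
-- whenever the identity column is simply omitted from the insert:
INSERT INTO dbo.t (col) VALUES ('x');

-- The reverse direction (keep a user-supplied value) does exist:
SET IDENTITY_INSERT dbo.t ON;
INSERT INTO dbo.t (id, col) VALUES (99, 'x');
SET IDENTITY_INSERT dbo.t OFF;
```

So existing INSERT statements that name the column would need either editing or an INSTEAD OF trigger to discard the supplied value; there is no one-keyword switch.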
Posted: 05 Jun 2013 04:24 PM PDT
I'm attempting to write my own mini-rollback procedure. I have a table that tracks any updates or deletes to another table using a trigger. I am attempting to make it possible to restore one or more of these tracked changes through the use of a procedure. However, I'm receiving a syntax error with the following:
The syntax error is in my UPDATE statement; any help or suggestions would be appreciated.
You are subscribed to email updates from Recent Questions - Database Administrators Stack Exchange.