[how to] List DB2 backups
- List DB2 backups
- How do I remove duplicate records in a join table in psql?
- Strategies for organising SQL Server with large amount of RAM
- Leaf nodes for averages
- Specify Server for DBMS_Scheduler Job in Policy Managed RAC
- After streaming replication has failed, how to get it back again?
- What is a good, repeatable way to calculate MAXDOP on SQL Server?
- UNION is slow but both queries are fast separately
- Are there any good open source tools for DB objects end user manipulation?
- Using pgAdmin SQL Editor to execute a master file containing multiple sql files
- Sum Up values in a query based on other information
- Install PostgreSQL 9.2 on Windows using WIN1252 encoding
- Can the same database be log-shipping secondary and primary at the same time?
- Performing SELECT on EACH ROW in CTE or Nested QUERY?
- SQL Server 2008 can't repair consistency
- How can I store a pdf in PostgreSQL
- How do I prevent deadlocks in my application?
- Breaking Semisynchronous Replication in MySQL 5.5
- Script to get duration
- Is there an execution difference between a JOIN condition and a WHERE condition?
- DB2 Server Table Space Locked
- Relational database for address model
- Unable to log in to Oracle as sysdba
- Is there a combination of columns in sys.dm_exec_sessions that is unique per the server?
- "connectivity libraries needed are not found" error in IBM Data Studio
- How to insert into junction table using triggers
- Data dictionary best practices in SQL Server 2008 r2
- InnoDB - High disk write I/O on ibdata1 file and ib_logfile0
- Performance implications of MySQL VARCHAR sizes
- Comfortable sqlplus interface?
List DB2 backups Posted: 12 Mar 2013 08:22 PM PDT Is there a way to list the DB2 backups? All I can find is |
How do I remove duplicate records in a join table in psql? Posted: 12 Mar 2013 08:13 PM PDT I have a table that has a schema like this: I would like to remove records that are duplicates, i.e. they have both the same What does the SQL look like for that? |
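The asker's schema and duplicated columns are elided above, so the following is a sketch under assumptions: a hypothetical join table `user_group(user_id, group_id)` with duplicate pairs. It keeps one row per pair by deleting every row whose internal `rowid` is not the minimum for its pair; in PostgreSQL the system column `ctid` plays the same role as SQLite's `rowid` here. Run in SQLite only as a convenient stand-in.

```python
import sqlite3

# Hypothetical join table; the asker's real schema was elided from the digest.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user_group (user_id INTEGER, group_id INTEGER)")
conn.executemany("INSERT INTO user_group VALUES (?, ?)",
                 [(1, 10), (1, 10), (2, 10), (2, 20), (2, 20), (2, 20)])

# Keep one row per (user_id, group_id) pair: delete every row whose rowid
# is not the minimum rowid within its group. In PostgreSQL, substitute the
# system column ctid for rowid.
conn.execute("""
    DELETE FROM user_group
    WHERE rowid NOT IN (
        SELECT MIN(rowid) FROM user_group GROUP BY user_id, group_id
    )
""")
rows = conn.execute(
    "SELECT user_id, group_id FROM user_group ORDER BY user_id, group_id"
).fetchall()
```

After the delete, only the three distinct pairs remain.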
Strategies for organising SQL Server with large amount of RAM Posted: 12 Mar 2013 05:24 PM PDT We now have a new server for our DB and, amongst other things, we have 128GB of RAM available (previously I had 16GB). I know SQL Server is very good at managing its resources, but I was wondering if there are any special settings or strategies that I should employ in either the server/db settings or processing code (stored procs/indexes etc.) to ensure that SS takes best advantage of the available RAM. The DB is about 70GB and it's a non-transactional db (it's a data warehouse). So basically a large WRITE followed by massive READs is the normal flow of things. |
Leaf nodes for averages Posted: 12 Mar 2013 03:53 PM PDT I have the following MDX tuple calculation for my KPI in Dashboard Designer: This works perfectly when, for instance, I view it by Team name. However, when I view it by the |
Specify Server for DBMS_Scheduler Job in Policy Managed RAC Posted: 12 Mar 2013 02:33 PM PDT A unit test requires a dbms_scheduler job to run on the same RAC node as the unit test is being run from. I know that with an Admin-managed database this could be done by creating a service that limited the available instances and then using that service in a job class the job uses. My question is, how can this be done in 11.2 with policy management? Pools can be created that have only a single server in them and databases can be assigned to multiple pools, but as I understand it, a server can only be assigned to a single pool. Therefore, a service can't be created that uses a single server while still having other services that use a pool defined with multiple servers including that one. I also know that services can be created as either SINGLETON or UNIFORM, but since SINGLETON doesn't provide for allowed servers or even preferred servers, I'm not sure how this would help. Surely I am missing something that makes this all possible. |
After streaming replication has failed, how to get it back again? Posted: 12 Mar 2013 01:08 PM PDT I have a similar problem to this: Replication has failed; how to get going on again? Essentially my slave failed, and now complains "requested WAL segment 0000000100000135000000E4 has already been removed" In my case I HAVE done a full base backup again as per the instructions http://wiki.postgresql.org/wiki/Binary_Replication_Tutorial#Binary_Replication_in_6_Steps. So I shut the master down, did a full rsync, started up the slave, then started the master. And I still get the same error. |
What is a good, repeatable way to calculate MAXDOP on SQL Server? Posted: 12 Mar 2013 05:48 PM PDT When setting up a new SQL Server 2012, I use the following code to determine a good starting point for the I realize this is a bit subjective, and can vary based on many things; however I'm attempting to create a tight catch-all piece of code to use as a starting point for a new server. Does anyone have any input on this code? |
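The asker's T-SQL script is elided from the digest. As a hedged sketch of the commonly cited Microsoft guideline (cap MAXDOP at 8 and do not span NUMA nodes), the starting-point value can be expressed as a tiny helper; the function name and parameters are illustrative, not the asker's code.

```python
def suggested_maxdop(numa_nodes: int, logical_cpus: int) -> int:
    """Starting-point MAXDOP per the commonly cited guideline:
    use the logical processors of one NUMA node, capped at 8.
    A sketch only - workload testing should refine the value."""
    if numa_nodes < 1 or logical_cpus < 1:
        raise ValueError("counts must be positive")
    per_node = logical_cpus // numa_nodes  # logical CPUs in one NUMA node
    return min(per_node, 8)
```

For example, a single-node 4-core box suggests 4, while a 2-node, 32-core box still suggests 8.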
UNION is slow but both queries are fast separately Posted: 12 Mar 2013 02:57 PM PDT Dunno what else to do about this one. I have one table that has a start and a stop column, and I want to return the results of it joined both by start and by stop, with a clear distinction between the two. Now both queries run fast separately: So this one takes 0.063 seconds. But if I combine them in a UNION (doesn't matter if it's UNION ALL or DISTINCT or whatever) it takes about 0.400 seconds. Here is the EXPLAIN on a single query: And here is the EXPLAIN for the UNION: Help on this one would be greatly appreciated. :) EDIT: I'm getting inconsistent results. If I remove the convert_tz, for example, and try to get the timezone outside the union, I get very fast results, but if I rename the result it goes back down to the same underperforming query: this takes 0.374s while this takes 0.078s (mostly the lag from the db to my machine). |
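The asker's tables and the convert_tz call are elided, so this is only a generic sketch of the "clear distinction between the two" requirement: tag each branch with a literal discriminator column and combine with UNION ALL, which skips the deduplication sort that plain UNION performs. The `events` table and column names are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER, start_ts TEXT, stop_ts TEXT)")
conn.executemany("INSERT INTO events VALUES (?, ?, ?)",
                 [(1, "2013-03-12 08:00", "2013-03-12 09:00"),
                  (2, "2013-03-12 10:00", "2013-03-12 11:00")])

# UNION ALL keeps every row from both branches without a dedup sort,
# and the literal 'kind' column records which branch a row came from.
rows = conn.execute("""
    SELECT id, start_ts AS ts, 'start' AS kind FROM events
    UNION ALL
    SELECT id, stop_ts  AS ts, 'stop'  AS kind FROM events
    ORDER BY id, kind
""").fetchall()
```

Each event appears twice, once per branch, unambiguously labelled.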
Are there any good open source tools for DB objects end user manipulation? Posted: 12 Mar 2013 12:15 PM PDT I've been recently tasked to provide some object adjustment features for our end users: simple things like changing the value of two or three fields in some specific, business-known tables without the need to call the IT department, and with some logging and auditing for our most paranoid managers. Is there any software that already does this? (gathers the table schema and data from another DB and provides a user with really simple adjusting capabilities) Feature requests keep piling up: support for SQL Server and Oracle databases, auditing automatically and by user choice, running processes and what not. Any good tools that can provide this kind of meta/high-level simple database interaction? |
Using pgAdmin SQL Editor to execute a master file containing multiple sql files Posted: 12 Mar 2013 08:59 PM PDT I'm using pgAdminIII SQL Editor to develop a really long script. I'd like to break the script into smaller, more manageable scripts and include each sql file in a master file, then just execute the master file. example: contents (I don't know the syntax to use): I've found several tutorials about using Is this possible? Any references/documentation would be very helpful. EDIT: For clarity, I'm not asking about better products to use other than pgAdmin (unless the product can do what I'm asking above), nor am I asking how to do this in psql - I already have documentation for that and I don't want to use the command line. Preference is for executing the |
Sum Up values in a query based on other information Posted: 12 Mar 2013 03:05 PM PDT I am trying to grab the sum of 2 columns if 1 column is the same. I currently have a record-set that looks like this: I get these results by running this statement: How do I get the sum of I'd like to return 1 and only 1 row for each event number, and if there is more than 1 step, I'd like to add the I've tried and it just returns the results as a new column I also tried ...but that just wouldn't run at all. Here are some statements to set everything up |
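The asker's record-set and statements are elided, but "1 and only 1 row for each event number" with the per-step values added together is the classic GROUP BY + SUM shape. A sketch under assumptions: hypothetical table `steps(event_no, step, hours)`, run in SQLite as a stand-in.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE steps (event_no INTEGER, step INTEGER, hours REAL)")
conn.executemany("INSERT INTO steps VALUES (?, ?, ?)",
                 [(100, 1, 2.5), (100, 2, 1.5), (101, 1, 3.0)])

# GROUP BY collapses the per-step rows to one row per event;
# SUM adds the step values within each group.
rows = conn.execute("""
    SELECT event_no, SUM(hours) AS total_hours
    FROM steps
    GROUP BY event_no
    ORDER BY event_no
""").fetchall()
```

Event 100's two steps collapse into a single 4.0-hour row.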
Install PostgreSQL 9.2 on Windows using WIN1252 encoding Posted: 12 Mar 2013 01:26 PM PDT I had installed PostgreSQL 9.2 earlier and it always installed with the encoding being WIN1252 (the default database was WIN1252). Some time ago I then reinstalled it with the encoding being UTF8 (I don't exactly remember what I did). I am now trying to re-install PostgreSQL again, this time with the encoding set to WIN1252. I am installing PostgreSQL 9.2.2 from the installer executable and using an options file. I am setting the locale to "English, United States" and the installer-language to "en". Are these the wrong values to use? Is there some internal variable I must have set to UTF8 that PostgreSQL is reading to know to use UTF8? I don't see any reference to UTF8 anywhere when I install PostgreSQL. After I install Postgres, it shows my database is UTF8 and the 'client_encoding' variable is set to WIN1252. |
Can the same database be log-shipping secondary and primary at the same time? Posted: 12 Mar 2013 11:15 AM PDT Here is my scenario: Database DB1 on Server1 is log shipping primary in data center. Database DB1 on Server2 is log shipping secondary; Server2 is in remote location. Logs are shipped from data center to remote location via shared virtual Jungle Disk drive accessible both from data center and remote location via internet. In case I fail over to Server2 I would like to have log backups as well. So my thinking is after configuring DB1 on Server2 as log shipping secondary I would then also configure it as a log shipping primary (even though these log backups won't get shipped anywhere from Server2). When database DB1 on Server2 is in "secondary" mode log backup job will probably be disabled. Is this a valid use for log shipping? |
Performing SELECT on EACH ROW in CTE or Nested QUERY? Posted: 12 Mar 2013 11:45 AM PDT This is a problem in PostgreSQL. I have a table which stores the tree of users:

+------+---------+
|  id  | parent  |
+------+---------+
|   1  |    0    |
|   2  |    1    |
|   3  |    1    |
|   4  |    2    |
|   5  |    2    |
|   6  |    4    |
|   7  |    6    |
|   8  |    6    |
+------+---------+

I can query a complete tree from any node by using the connectby function, and I can separately query the size of the tree in terms of total nodes in it, for example
Now I want to do something like selecting all possible trees from this table (again via connectby), count the size of each, and create another dataset with records of ID and size of the underlying tree, like this:

+----------------+-----------+
| tree_root_node | tree_size |
+----------------+-----------+
|        1       |     7     |
|        2       |     5     |
|        3       |     0     |
|        4       |     3     |
|        5       |     0     |
|        6       |     2     |
|        7       |     0     |
|        8       |     0     |
+----------------+-----------+

The problem is, I am unable to perform the same SELECT statement for every available row in the original table in order to fetch the tree and calculate the size, and even if I could, I don't know how to create a separate dataset using the fetched and calculated data. I am not sure if this could be a simple use of some functions available in Postgres, whether I'd have to write a function for it, or simply what this kind of query is called, but googling for hours and searching for another hour here at dba.stackexchange returned nothing. Can someone please point me in the right direction? |
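One way to get every subtree size in a single statement, without calling connectby per row, is a recursive CTE that pairs every node with each of its descendants and then groups by the root. PostgreSQL supports this via WITH RECURSIVE; the sketch below runs the same query in SQLite as a stand-in, using the exact tree from the question (tree_size counts descendants, excluding the node itself).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, parent INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, 0), (2, 1), (3, 1), (4, 2),
                  (5, 2), (6, 4), (7, 6), (8, 6)])

# sub(root, id) holds one row for every (ancestor, descendant-or-self) pair.
# Seeding with (id, id) lets COUNT(*) - 1 give the descendant count per root.
rows = conn.execute("""
    WITH RECURSIVE sub(root, id) AS (
        SELECT id, id FROM users
        UNION ALL
        SELECT s.root, u.id
        FROM users u JOIN sub s ON u.parent = s.id
    )
    SELECT root AS tree_root_node, COUNT(*) - 1 AS tree_size
    FROM sub
    GROUP BY root
    ORDER BY root
""").fetchall()
```

Leaves (3, 5, 7, 8) report size 0; the root's subtree covers all seven other nodes.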
SQL Server 2008 can't repair consistency Posted: 12 Mar 2013 05:52 PM PDT I have a problem with a SQL Server 2008 database. Launching I get this error:
I found the table causing the problem:
So I tried with the repair operations: but I get the same error:
I've also tried setting the DB in SINGLE_USER mode, but with no result. I am not able to delete or truncate the table, as I always get the same error. The table does not have any constraints. It has one PK and one index, but I can't drop either of them. |
How can I store a pdf in PostgreSQL Posted: 12 Mar 2013 04:45 PM PDT I have to store .pdf files in a table. I have a table with a pdffile column, and I want to store the pdf files in that column. How can I do this? |
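In PostgreSQL the usual choice for this is a bytea column, passing the file's bytes through a parameterized INSERT (psycopg2 adapts Python bytes to bytea automatically). As a self-contained sketch, the same round-trip is shown below with SQLite's BLOB type standing in; the table name and stand-in bytes are hypothetical.

```python
import sqlite3

# Stand-in for the real file contents, which would come from
# open("document.pdf", "rb").read().
pdf_bytes = b"%PDF-1.4 minimal stand-in content"

conn = sqlite3.connect(":memory:")
# In PostgreSQL the column type would be bytea instead of BLOB.
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, pdffile BLOB)")
conn.execute("INSERT INTO docs (pdffile) VALUES (?)", (pdf_bytes,))
stored = conn.execute("SELECT pdffile FROM docs WHERE id = 1").fetchone()[0]
```

The bytes come back unchanged, so the file can be written back to disk as-is.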
How do I prevent deadlocks in my application? Posted: 12 Mar 2013 11:53 AM PDT I am developing an LMS application in a PHP framework (CodeIgniter 2.1.0). I am using a MySQL database. All the tables in the database use the InnoDB engine, and I have created indexes on each table. Now I am doing load testing using JMeter 2.9 locally with 200 concurrent users. During the load testing, on a specific page action I got a "Deadlock found" error. I changed my original query to a new one, but the same error keeps occurring. I have written a save_interactions function which takes four parameters (interaction array, module_id, course_id, user_id) and is called many times by the AJAX script. The following script inserts the record if the specific interaction_id is not present in the table; otherwise the update query fires. I got this type of error: Deadlock found when trying to get lock; try restarting transaction. Can anyone please suggest how to avoid deadlocks? |
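Beyond accessing rows in a consistent order, InnoDB's own advice is in the error text: the victim transaction is rolled back, so it is safe to retry it. A hedged sketch of a retry wrapper follows; `DeadlockError` stands in for the real driver's deadlock exception (e.g. MySQL error 1213), and the backoff values are illustrative.

```python
import random
import time

class DeadlockError(Exception):
    """Stand-in for the driver's 'Deadlock found; try restarting
    transaction' error (MySQL error code 1213)."""

def run_with_retry(txn, attempts=3, base_delay=0.01):
    """Re-run a self-contained transaction when it loses a deadlock.
    InnoDB rolls the victim back, so a clean retry is safe. Jittered
    exponential backoff reduces the chance of colliding again."""
    for attempt in range(attempts):
        try:
            return txn()
        except DeadlockError:
            if attempt == attempts - 1:
                raise  # give up after the last attempt
            time.sleep(base_delay * (2 ** attempt) * random.random())

# Demonstration: a transaction that deadlocks once, then commits.
calls = {"n": 0}
def flaky_txn():
    calls["n"] += 1
    if calls["n"] == 1:
        raise DeadlockError()
    return "committed"

result = run_with_retry(flaky_txn)
```

In real code the whole unit of work (insert-or-update plus commit) goes inside `txn`, never a partial transaction.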
Breaking Semisynchronous Replication in MySQL 5.5 Posted: 12 Mar 2013 12:25 PM PDT I've set up semisynchronous replication between two MySQL 5.5 servers running on Windows 7. My application is running and updating the database of the master server, and the same is being updated in the slave database server. But due to some unknown reason, replication sometimes breaks. On running the command: It gives this status: Ideally, in semi-synchronous replication, when the sync breaks the status should be OFF, since the master is not able to receive any acknowledgement from the slave. Please help us in this regard. |
Script to get duration Posted: 12 Mar 2013 02:53 PM PDT I am new to PostgreSQL. I am trying to write a query which can give me the duration of time. The fields are in the format |
Is there an execution difference between a JOIN condition and a WHERE condition? Posted: 12 Mar 2013 02:40 PM PDT Is there a performance difference between these two example queries? Query 1: Query 2: Notice the only difference is the placement of the supplemental condition; the first uses a When I run these queries on my Teradata system, the explain plans are identical and the JOIN step shows the additional condition in each case. However, on this SO question regarding MySQL, one of the answers suggested that the second style is preferred because Is there a general rule to follow when coding queries like this? I'm guessing it must be platform-dependent, since it obviously makes no difference on my database, but perhaps that is just a feature of Teradata. And if it is platform-dependent, I'd very much like to get a few documentation references; I really don't know what to look for. |
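For an INNER join the two placements are logically equivalent, which is why most optimizers produce identical plans (the semantics only diverge for OUTER joins, where a condition in ON filters before the join preserves unmatched rows, while the same condition in WHERE filters after). The asker's queries are elided, so this sketch uses hypothetical `orders`/`custs` tables in SQLite to show the inner-join equivalence.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, cust_id INTEGER, total REAL);
    CREATE TABLE custs  (id INTEGER, region TEXT);
    INSERT INTO custs  VALUES (1, 'east'), (2, 'west');
    INSERT INTO orders VALUES (10, 1, 5.0), (11, 2, 7.0), (12, 2, 9.0);
""")

# Supplemental condition inside the JOIN's ON clause...
in_join = conn.execute("""
    SELECT o.id FROM orders o
    JOIN custs c ON o.cust_id = c.id AND c.region = 'west'
    ORDER BY o.id
""").fetchall()

# ...versus the same condition in WHERE: identical result for INNER joins.
in_where = conn.execute("""
    SELECT o.id FROM orders o
    JOIN custs c ON o.cust_id = c.id
    WHERE c.region = 'west'
    ORDER BY o.id
""").fetchall()
```

Swap the JOIN for a LEFT JOIN and the two placements stop being interchangeable, which is the one general rule that holds across platforms.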
DB2 Server Table Space Locked Posted: 12 Mar 2013 02:11 PM PDT At work we keep receiving the following DataException, seemingly at random, when one of our processes tries to write to/access a table for one of our clients: Has anyone encountered this? I'm one of the developers in my office, but I don't primarily handle the administrative tasks on our databases, and even the person who does seems to be having difficulty finding the root of this problem. Any suggestions? This error comes up for only one of our clients at a time, and it generally seems to rotate. We have Rackspace service, but they won't be of much help unless we can provide screenshots, etc. at the exact moment this occurs. Apologies if this post is too vague; please let me know what information to supply to clarify things. edit: We spoke with IBM, and this could possibly be caused by some sort of virus scan being run by IBM/Rackspace as part of maintenance? They said this kind of dubiously though, so I doubt this is the culprit, because tables remained locked for variable amounts of time. |
Relational database for address model Posted: 12 Mar 2013 12:07 PM PDT I want to design an "Address" model for all types of entities, like users, businesses, etc. I have two main types of models: one is User and the other is Business. Each one has different address types, like below. User Business So I created an address model with an Address Relationships: |
Now, using the above relations, how can I design this in an efficient way? |
Unable to log in to Oracle as sysdba Posted: 12 Mar 2013 06:38 PM PDT I just got 11gR2 running and was able to connect as sysdba. I shut down and started up the database to mount a new pfile. Now I cannot log in as sysdba. My parameter for the password file is: I am using sqlplus within the server; this is not a remote connection. Here's some more information: |
Is there a combination of columns in sys.dm_exec_sessions that is unique per the server? Posted: 12 Mar 2013 04:42 PM PDT In SQL Server, each session has its own spid. Spids are unique at any given moment, but spids, like process and thread identifiers in the OS, are recycled. However, sys.dm_exec_sessions has other columns with session metadata. Is there a combination of columns that is guaranteed to be unique for a server instance? |
"connectivity libraries needed are not found" error in IBM Data Studio Posted: 12 Mar 2013 02:37 PM PDT UPDATE: I am getting the following error when I try to create a new database in IBM Data Studio v3.1.1.0. I have already started the instance using command. After searching exhaustively, I am not able to find any help on the internet regarding this error. |
How to insert into junction table using triggers Posted: 12 Mar 2013 01:39 PM PDT Sorry in advance if this is "basic SQL." I wanted to know how to update my junction tables automatically. For example, these are my tables: Artist and Song are base tables and SongArtist is the junction table. Everything in SongArtist is a PK and FK. I created some triggers like this, but they don't seem to work: I can't do an INSERT INTO and add a new row when I only know one field of the junction table, because both columns are part of the PK. What should I do? |
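The asker's tables and triggers are elided, but the core problem is structural: a trigger firing on one base table cannot know the other table's key, so the junction row is usually inserted explicitly after both generated keys are in hand, inside one transaction. A sketch under assumptions (hypothetical column names, SQLite as a stand-in; `cur.lastrowid` plays the role of MySQL's LAST_INSERT_ID()):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE artist (artist_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE song   (song_id   INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE song_artist (
        song_id   INTEGER REFERENCES song,
        artist_id INTEGER REFERENCES artist,
        PRIMARY KEY (song_id, artist_id)   -- composite PK, both FKs
    );
""")

cur = conn.cursor()
cur.execute("INSERT INTO artist (name) VALUES (?)", ("The Hypotheticals",))
artist_id = cur.lastrowid          # generated key of the new artist
cur.execute("INSERT INTO song (title) VALUES (?)", ("Demo Track",))
song_id = cur.lastrowid            # generated key of the new song

# A trigger on song or artist alone cannot see the other key, so the
# junction row is inserted explicitly once both keys are known.
cur.execute("INSERT INTO song_artist (song_id, artist_id) VALUES (?, ?)",
            (song_id, artist_id))
conn.commit()

pairs = conn.execute("SELECT song_id, artist_id FROM song_artist").fetchall()
```

Wrapping the three inserts in one transaction keeps the junction table consistent if any step fails.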
Data dictionary best practices in SQL Server 2008 r2 Posted: 12 Mar 2013 07:38 PM PDT We are interested in sharing the meta data and data dictionary among the team. I know that we can use the Extended Properties for this purpose, but based on my experience I've seen it gets out of date easily, because team members tend to forget to update them or skip this step. I'm wondering if there is a more convenient way to create the data dictionary which can be maintained with the least amount of effort and time. Thank you. |
InnoDB - High disk write I/O on ibdata1 file and ib_logfile0 Posted: 12 Mar 2013 12:12 PM PDT Server Specification: VPS with following info We are running IP.Board from Invision Power Services, we are using From my inspection, I believe that it is caused by high I/O usage on Running My question is: why do we have high write I/O on Note that I'm only a programmer who just by chance has the duty to maintain this, so please feel free to ask for more info. I've done a lot of things to this server, but please don't assume that I have already done everything I should have. Additional info: and From @RolandoMySQLDBA: Please run this and show the output. This will tell you how many bytes per hour are written to ib_logfile0/ib_logfile1 based on the next 5 minutes. The above SQL query result (at 8am local time, while the members online are about 25% of the count during the day): |
Performance implications of MySQL VARCHAR sizes Posted: 12 Mar 2013 01:33 PM PDT Is there a performance difference in MySQL between varchar sizes? For example, |
Comfortable sqlplus interface? Posted: 12 Mar 2013 11:26 AM PDT I found the question "What is a good replacement / extension for sqlplus?" SQL*Plus is the main command-line tool for operating the Oracle Database. |