[how to] Totals Column - Blank if Zero or NULL
- Totals Column - Blank if Zero or NULL
- Displaying Parent Child Information, With Certain Parent Columns Only Shown Once
- Message file not found when starting SQLPlus
- Should I be concerned by large SERIAL values?
- SQL Query too slow on SUM function
- Oracle DB Recovery
- CTE query doesn't print the right structure?
- Transaction log has grown huge after maintenance plan to rebuild index
- trigger in trigger or trigger before table creation (Oracle)
- In PostgreSQL 9.3, UNION VIEW with WHERE clause not taken into account
- Selecting from an excel spreadsheet into SQL Server table
- Permissions using .NET SqlDependency
- FTS catalog in SQL Server for multiple FT indexes
- SQL Server Database grew too much too fast
- Reading CSV files located on LINUX server and updating the tables in SQL Server Database
- How do I track why my SQL Azure occupied space pulsates?
- pros and cons of installing same instances of SQL Server on two different servers [on hold]
- Getting the information that exists in "Job properties -> Notifications" page?
- How do I unload data from a database to a flat file?
- Changing autogrow settings with database mirroring
- Two primary keys to one foreign key
- MySQL Continue Handler Problems inside a Function
- Data sharing between client and server two ways
- How can row estimates be improved in order to reduce chances of spills to tempdb
- TimesTen performance as compared to 10g
- MySQL performance tuning + queries stuck on "Copying to tmp table"
- How to find when last reindexing happened in SQL Server?
- innobackupex is failing while exporting backup
- MySQL 1264 warning from libmysql
- Views - Indexes/Keys/Execution Plans
Totals Column - Blank if Zero or NULL Posted: 05 Aug 2013 05:40 PM PDT A report currently computes a "totals" column like so: How can I get a blank cell when a total equals zero, instead of NULL or "0", and avoid doing the computation twice?
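A sketch of one usual answer, assuming the total is a SUM over a hypothetical Amount column: NULLIF turns a zero total into NULL, and ISNULL over a varchar cast turns NULL into an empty string, so the aggregate is written only once.

```sql
-- Hedged sketch; table and column names are hypothetical.
SELECT AccountId,
       ISNULL(CAST(NULLIF(SUM(Amount), 0) AS varchar(20)), '') AS Total
FROM dbo.ReportData
GROUP BY AccountId;
```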
Displaying Parent Child Information, With Certain Parent Columns Only Shown Once Posted: 05 Aug 2013 04:44 PM PDT For a TSQL report, I have a one-to-many parent-child relationship. I want to report both parent and child relationships in the same report, like so, with the parent columns on the left and child columns on the right: I want to avoid duplicating certain columns, like ParentIncome, because users dump the report output into Microsoft Excel and I want to minimize the chances of them summing columns incorrectly. Therefore, for each parent, I want to display ParentIncome only once (ParentNum and ParentName are OK to repeat); for rows other than the first row, display a blank cell (i.e., no NULL values). Currently, I produce this report using a wrapper around a subquery. The wrapper looks like this for all the columns: The subquery orders things using OVER with PARTITION BY and ORDER BY, so when ROW_NUMBER = 1, it's always a new parent. It seems like this is something others have run into, though... in TSQL, can I get the same results in a simpler way, without the extra outer wrapper query to format things?
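One simpler shape worth trying, as a hedged sketch with hypothetical table and column names: in T-SQL a window function may appear directly inside a CASE expression in the SELECT list, so the blanking can be done in a single query level without the outer wrapper.

```sql
SELECT p.ParentNum,
       p.ParentName,
       CASE WHEN ROW_NUMBER() OVER (PARTITION BY p.ParentNum
                                    ORDER BY c.ChildNum) = 1
            THEN CAST(p.ParentIncome AS varchar(20))
            ELSE '' END AS ParentIncome,   -- blank on repeated rows
       c.ChildNum,
       c.ChildName
FROM dbo.Parent AS p
JOIN dbo.Child  AS c ON c.ParentNum = p.ParentNum
ORDER BY p.ParentNum, c.ChildNum;
```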
Message file not found when starting SQLPlus Posted: 05 Aug 2013 02:01 PM PDT I've installed Oracle XE 11.2 on Fedora 18 but have trouble starting SQLPlus. It quits with the following error: All of the environment variables are properly set (copied from the installation guide).
Any suggestions?
Should I be concerned by large SERIAL values? Posted: 05 Aug 2013 03:45 PM PDT I have a Django application that uses PostgreSQL to analyze data from tweets. The data set increases by thousands of records with each request. I am using the database primarily as a cache, so I had planned to delete all records every 24 hours to permit new requests without needlessly increasing the size of the database. Django uses the serial type for its auto-incrementing primary keys, so the deletions don't reset the sequence and the ID values keep climbing.
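Since deletes never rewind a sequence, it may help to check the headroom directly. A hedged sketch; the table and sequence names are hypothetical (Django names sequences <table>_<column>_seq):

```sql
-- How far the sequence has advanced:
SELECT last_value FROM tweets_tweet_id_seq;

-- A plain SERIAL is a 32-bit integer, topping out at 2,147,483,647.
-- If that ceiling is realistic, widening the column is the usual fix:
ALTER TABLE tweets_tweet ALTER COLUMN id TYPE bigint;
```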
SQL Query too slow on SUM function Posted: 05 Aug 2013 01:23 PM PDT I have a table with around 32 million rows and a clustered unique index on CountryID, RetailerID, ProductID, DateID, EventID, TypeID, and the query is: The query plan shows 88% of the time spent on a clustered index seek on the EventPL table, but the query still takes around 15 seconds to complete. Is there any way I can optimise it down to one or two seconds?
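Since the query text was lost in the digest, here is only a hedged sketch of one commonly suggested approach for heavy SUM scans: an indexed view that pre-aggregates by the leading key columns, so the seek touches far fewer rows. The measure column name (Amount) is an assumption.

```sql
CREATE VIEW dbo.EventPL_Summary
WITH SCHEMABINDING
AS
SELECT CountryID, RetailerID, ProductID,
       SUM(ISNULL(Amount, 0)) AS TotalAmount, -- indexed views need non-nullable aggregates
       COUNT_BIG(*) AS RowCnt                 -- required when GROUP BY is present
FROM dbo.EventPL
GROUP BY CountryID, RetailerID, ProductID;
GO
CREATE UNIQUE CLUSTERED INDEX IX_EventPL_Summary
    ON dbo.EventPL_Summary (CountryID, RetailerID, ProductID);
```

On non-Enterprise editions the view must be referenced with WITH (NOEXPAND) for its index to be used.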
Oracle DB Recovery Posted: 05 Aug 2013 01:30 PM PDT Before I start, I just want to mention that I am not a DBA; I am a programmer/developer who also happens to look after the database. So if I say something wrong, my apologies in advance. We recently experienced a massive city-wide power surge which killed our database server. We have multiple databases running on the server (RHEL 6, Oracle 11gR2). I back up all the data using RMAN and the command "backup database plus archivelog delete input", and then take an expdp dump of one particular schema so that we can import it into our test environments overnight. After our server was destroyed, the only thing we could recover from the hard drives was the backupset folder, which contains a lot of bkp files. And that's about it. I don't have a control file or the DBID of the database. I do, however, have a data dump that's three days old. Is there any way we can recover the database onto a new host? Regards, Alex
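Losing the control file and DBID is usually survivable as long as the backup pieces are intact. A rough sketch of the standard RMAN sequence on a new host; the DBID and paths are placeholders, and this assumes the control file was included in the backupset (e.g. via control file autobackup):

```
RMAN> SET DBID 1234567890;                 -- placeholder; the DBID must be recovered first
RMAN> STARTUP NOMOUNT;                     -- from a minimal init.ora on the new host
RMAN> RESTORE CONTROLFILE FROM '/restore/backupset/ctl_piece.bkp';
RMAN> ALTER DATABASE MOUNT;
RMAN> CATALOG START WITH '/restore/backupset/';
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN RESETLOGS;
```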
CTE query doesn't print the right structure? Posted: 05 Aug 2013 12:23 PM PDT I have a table which represents a hierarchy. I already have a solution which uses a recursive CTE and sorts the rows by a path column. The problem is that I must(!) put explicit values into the ordering column; otherwise, sorting by the path doesn't produce the right structure. The only reason I added that column was to control the order of siblings. Question: is it possible to enhance the query so that it prints the correct structure even if I don't put values in the ordering column, i.e., if I don't care about the order of the siblings?
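A hedged sketch of the usual path-based alternative (table and column names are hypothetical): zero-padding each id inside a materialized path makes plain string ordering reproduce the tree, with no manual ordering column. It assumes ids fit in four digits.

```sql
WITH tree AS (
    SELECT id, parent_id, name,
           CAST(RIGHT('0000' + CAST(id AS varchar(4)), 4) AS varchar(4000)) AS path
    FROM nodes
    WHERE parent_id IS NULL
    UNION ALL
    SELECT n.id, n.parent_id, n.name,
           CAST(t.path + '/' + RIGHT('0000' + CAST(n.id AS varchar(4)), 4) AS varchar(4000))
    FROM nodes AS n
    JOIN tree AS t ON n.parent_id = t.id
)
SELECT REPLICATE('  ', LEN(path) / 5) + name AS indented_name  -- each level adds 5 chars
FROM tree
ORDER BY path;
```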
Transaction log has grown huge after maintenance plan to rebuild index Posted: 05 Aug 2013 11:02 AM PDT A couple of days ago I was testing some stuff using the Maintenance Plans in SQL Server 2008. I created one to rebuild indexes and take a full backup every week. (I know; I've found out today what a bad idea that was.) The thing is, since the transaction log grew a lot (around 80 GB, and the DB is 60 GB), the backup did not run. I've been googling around all day to see if there is a way to shrink the transaction log to the size it used to be, which was something like 200 MB. Is this possible? Why does this happen? I know that while rebuilding, it writes a copy of the index to the transaction log, or something like that, but is it possible to get rid of this copy? Thanks, Federico
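Yes: the rebuild is fully logged in the full recovery model, which is why the log ballooned, and the space can be reclaimed once a log backup has run. A hedged sketch; the database and logical log file names are placeholders:

```sql
BACKUP LOG MyDatabase TO DISK = N'D:\Backup\MyDatabase_log.trn'; -- lets log space be reused
USE MyDatabase;
-- The logical log file name: SELECT name FROM sys.database_files WHERE type_desc = 'LOG';
DBCC SHRINKFILE (MyDatabase_log, 500);   -- target size in MB
```

Switching to BULK_LOGGED during the rebuild, or running log backups during the maintenance window, keeps this from recurring.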
trigger in trigger or trigger before table creation (Oracle) Posted: 05 Aug 2013 11:56 AM PDT I need to create a trigger inside a trigger, as follows: After the trigger is created, I create a table to see if it works, and I get the error as above. Could you please advise me what the problem could be and how to proceed here? Thanks! Piotr
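For what it's worth, DDL such as CREATE TRIGGER cannot be issued directly from inside a system trigger (it fails with ORA-30511), so a commonly cited workaround is to queue the DDL as a job that runs just after the CREATE commits. A hedged sketch; the generated trigger body here is a trivial placeholder:

```sql
CREATE OR REPLACE TRIGGER trg_after_create
AFTER CREATE ON SCHEMA
DECLARE
    v_job BINARY_INTEGER;
BEGIN
    IF ora_dict_obj_type = 'TABLE' THEN
        DBMS_JOB.SUBMIT(
            v_job,
            'BEGIN EXECUTE IMMEDIATE ''CREATE OR REPLACE TRIGGER t_' ||
            ora_dict_obj_name || ' BEFORE INSERT ON ' || ora_dict_obj_name ||
            ' FOR EACH ROW BEGIN NULL; END;''; END;');
    END IF;
END;
/
```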
In PostgreSQL 9.3, UNION VIEW with WHERE clause not taken into account Posted: 05 Aug 2013 07:19 AM PDT We use Postgres 9.3, and we want to split a large database (estimated final size will be > 100 TB) across many servers. As our users already have a set of existing queries, we cannot use solutions like PL/Proxy. We would like to use the postgres_fdw contrib module to externalize some of the data. To illustrate our problem, we created two foreign tables, foreign_table_1 and foreign_table_2. As inheritance is not available on foreign tables, we created a union view like this: The problem is that Postgres does not push the constraints into the query plan. For instance, here both foreign tables are scanned:
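A hedged workaround sketch: foreign tables in 9.3 cannot carry CHECK constraints, but the partition bounds can be written directly into each branch of the view; the planner's default constraint_exclusion = partition setting does consider UNION ALL subqueries, so contradictory branches can be pruned. The column region_id is hypothetical.

```sql
CREATE VIEW all_data AS
SELECT * FROM foreign_table_1 WHERE region_id = 1
UNION ALL
SELECT * FROM foreign_table_2 WHERE region_id = 2;

-- With the bounds visible in the view, this should scan only foreign_table_1:
SELECT * FROM all_data WHERE region_id = 1;
```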
Selecting from an excel spreadsheet into SQL Server table Posted: 05 Aug 2013 10:43 AM PDT This question has been asked before, but I tried giving full admin rights to the SQL Server user on So for the following code snippet, I get this error:
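Both the statement and the error text were lost above, so here is only a hedged sketch of the usual shape of such an import; the provider, file path, and sheet name are placeholders:

```sql
-- Ad hoc distributed queries must be enabled first:
EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sp_configure 'Ad Hoc Distributed Queries', 1; RECONFIGURE;

SELECT *
INTO dbo.ImportedSheet
FROM OPENROWSET(
    'Microsoft.ACE.OLEDB.12.0',
    'Excel 12.0;Database=C:\Data\Book1.xlsx;HDR=YES',
    'SELECT * FROM [Sheet1$]');
```

The registered-provider and access-denied errors in this area usually come down to the bitness of the ACE provider and the rights of the SQL Server service account, not the SQL login.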
Permissions using .NET SqlDependency Posted: 05 Aug 2013 08:58 AM PDT I created a DB login, user and role as follows: Furthermore, I added some permissions following this link, and also tried changing the db owner as described in this article. This DBA question (aiming in the same direction) does not provide any help. The permissions to execute the query and do a SELECT on the table via the SqlDependency are also set; I verified this using SSMS, executing the query with the user/password combination created above. Now I try to use the SqlDependency via .NET (the connection to the DB is established with the login/user described above). Executing SqlDependency.Start(myConnectionString) throws an exception (translated from German):
Any ideas what is missing?
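A hedged checklist of the grants SqlDependency is usually documented to need, beyond SELECT on the table (database and user names are placeholders):

```sql
USE MyDb;
GRANT CREATE PROCEDURE TO myUser;
GRANT CREATE QUEUE TO myUser;
GRANT CREATE SERVICE TO myUser;
GRANT REFERENCES ON CONTRACT::[http://schemas.microsoft.com/SQL/Notifications/PostQueryNotification] TO myUser;
GRANT VIEW DEFINITION TO myUser;
GRANT SUBSCRIBE QUERY NOTIFICATIONS TO myUser;
GRANT RECEIVE ON QueryNotificationErrorsQueue TO myUser;

-- Service Broker must also be enabled on the database:
ALTER DATABASE MyDb SET ENABLE_BROKER;
```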
FTS catalog in SQL Server for multiple FT indexes Posted: 05 Aug 2013 09:00 AM PDT I have created one catalog for a full-text (FTS) index; can the same catalog be used by a full-text index on another table?
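Yes, a catalog can host several full-text indexes. A minimal sketch with hypothetical names; each table still needs its own single full-text index keyed on a unique index:

```sql
CREATE FULLTEXT CATALOG SharedCatalog;
CREATE FULLTEXT INDEX ON dbo.Articles (Body)
    KEY INDEX PK_Articles ON SharedCatalog;
CREATE FULLTEXT INDEX ON dbo.Comments (CommentText)
    KEY INDEX PK_Comments ON SharedCatalog;
```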
SQL Server Database grew too much too fast Posted: 05 Aug 2013 09:22 AM PDT I have an issue with one database. The issue is as follows: the database has been running on SQL Server 2000 Standard for the last 6 years in Full Recovery mode. At the beginning of this summer the database was about 5 GB. Since then, the only thing we did beyond normal usage was some extensive deletions. This week there was a problem with the PC, and I was forced to take a backup and restore it on a SQL Server 2008 R2 Express instance, setting the database to 2008 compatibility mode. The backup file was about 1 GB. When I restored it, the MDF was 9 GB in size! I checked the old MDF and it was the same size. I checked the size of the tables and they cannot account for the 9 GB reported! I did a shrink, but the size did not change. Any clues, or where should I check? Is there a chance that Full Recovery can affect the size of the MDF files? I am thinking of setting the recovery model to Simple, backing it up and restoring it. Is that going to make a difference? Can I do it on a live database? Thanks in advance!
UPDATE: The initial size of the database is 1306 MB.
UPDATE 2, sp_spaceused reports:
database size: 8646.88 MB; unallocated space: 0.00 MB; reserved: 1336984 KB; data: 1020376 KB; index_size: 210408 KB; unused: 106200 KB
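A hedged sketch for locating the space by object, which usually shows quickly whether it sits in tables, indexes, or unused allocations:

```sql
SELECT t.name,
       SUM(a.total_pages) * 8 / 1024 AS total_mb,
       SUM(a.used_pages)  * 8 / 1024 AS used_mb
FROM sys.tables           AS t
JOIN sys.indexes          AS i ON t.object_id = i.object_id
JOIN sys.partitions       AS p ON i.object_id = p.object_id
                              AND i.index_id  = p.index_id
JOIN sys.allocation_units AS a ON p.partition_id = a.container_id
GROUP BY t.name
ORDER BY total_mb DESC;
```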
Reading CSV files located on LINUX server and updating the tables in SQL Server Database Posted: 05 Aug 2013 03:38 AM PDT I was wondering how we can ingest CSV files located on a Red Hat Linux server into a SQL Server database table. I know we can write a stored procedure or BULK INSERT to read files that are located on the same Windows server as SQL Server and update the database, but I'm not sure how to do it when the files are on Linux. Any help would be greatly appreciated.
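One hedged approach: export the Linux directory over Samba (or copy the files across on a schedule) so the SQL Server service account can reach them as a UNC path, then bulk load. Server, share, and table names are placeholders:

```sql
BULK INSERT dbo.StagingTable
FROM '\\linuxhost\share\data.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);
```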
How do I track why my SQL Azure occupied space pulsates? Posted: 05 Aug 2013 11:43 AM PDT This is inspired by this StackOverflow question. I have code for detecting database space consumption, and it returns exactly the same result as the SQL Azure management portal pie chart. The problem is that the number fluctuates greatly without apparent reason: sometimes it's around 35% and sometimes around 54%. This is worrying: I don't understand what is happening, and the major concern is that we suddenly consume all the space and our service comes to a halt. I have seriously tried to look into it. What data should I use to find the root cause of the fluctuations?
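A hedged starting point: per-object reserved space from sys.dm_db_partition_stats, sampled over time, can show which table or index is swelling and shrinking:

```sql
SELECT o.name,
       SUM(p.reserved_page_count) * 8 / 1024.0 AS reserved_mb
FROM sys.dm_db_partition_stats AS p
JOIN sys.objects AS o ON p.object_id = o.object_id
GROUP BY o.name
ORDER BY reserved_mb DESC;
```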
pros and cons of installing same instances of SQL Server on two different servers [on hold] Posted: 05 Aug 2013 04:45 AM PDT What are some of the pros and cons of installing the same instances of SQL Server on two different servers, as related to SSRS configuration?
Getting the information that exists in "Job properties -> Notifications" page? Posted: 05 Aug 2013 02:26 PM PDT I use SQL Server 2008 R2 and I need to get some properties from the msdb database. I need a query to get the information that exists in the "Job properties -> Notifications" page. In which table can I find this information?
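A hedged sketch: the notify levels live in msdb.dbo.sysjobs, and the operators in msdb.dbo.sysoperators:

```sql
SELECT j.name AS job_name,
       j.notify_level_email,      -- 0 = never, 2 = on failure, 3 = on completion
       o.name AS email_operator
FROM msdb.dbo.sysjobs AS j
LEFT JOIN msdb.dbo.sysoperators AS o
       ON j.notify_email_operator_id = o.id;
```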
How do I unload data from a database to a flat file? Posted: 05 Aug 2013 08:40 AM PDT I am aware of the concept of bulk insert, wherein I load data from a flat file into a database. Now I want to do the reverse: unload data from a table to a flat file, i.e. export the data in a table to a flat file. Can anyone tell me how to do this? I am using SQL Server 2008 R2.
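The usual counterpart to BULK INSERT is the bcp utility. A hedged sketch (server, table, and path are placeholders); it can also be wrapped in T-SQL if xp_cmdshell is enabled:

```sql
-- From a command prompt:
--   bcp MyDb.dbo.MyTable out C:\export\mytable.csv -S MYSERVER -T -c -t","
-- Or from T-SQL:
EXEC xp_cmdshell 'bcp MyDb.dbo.MyTable out C:\export\mytable.csv -S MYSERVER -T -c -t","';
```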
Changing autogrow settings with database mirroring Posted: 05 Aug 2013 10:54 AM PDT Recently I changed the autogrow settings for some of the databases on our SQL Server 2008 R2 server. These are involved in a database mirroring configuration, with the principal on SERVER1 and the mirror on SERVER2. This week I failed over three databases, so now SERVER1 is the mirror and SERVER2 is the principal for these databases. It appears that the autogrow settings did not move over to SERVER2 properly, as the databases now show that they grow by a percentage (two are set to 32768%, the other to 131072%). This is different from the settings that used to be there (I believe it was the default of 10%), and also different from the 256 MB I set on SERVER1. To make things more confusing, this is only happening on the primary file; the secondary files and the log file have retained the settings I set on SERVER1. My suspicion is that this is a bug; I did patch SQL Server after changing the autogrow settings. My question is: has anyone seen this scenario before? Is there a method to make sure all of the settings are correct on both servers without failing all the databases over?
UPDATE: Using a fourth database that I'll call DB1, I set the autogrow on SERVER1 to 512 MB (after failing the database over, witnessing the same issue, and failing it back). When I failed it over to SERVER2 after that, it showed growth of 65536%. The takeaway is that the
UPDATE 2: With DB1 on SERVER1, I changed the autogrow to 10% and failed it to SERVER2. The 10% value remained. I then repeated this, setting autogrow back to 256 MB this time. Again, the
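A hedged sketch for auditing the settings on both servers without failovers; mirrored copies are not readable, but the file metadata is visible in sys.master_files on each instance:

```sql
SELECT DB_NAME(database_id) AS db,
       name AS logical_file,
       CASE is_percent_growth
            WHEN 1 THEN CAST(growth AS varchar(12)) + '%'
            ELSE CAST(growth * 8 / 1024 AS varchar(12)) + ' MB'
       END AS growth_setting
FROM sys.master_files
WHERE database_id = DB_ID('DB1');   -- placeholder database name
```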
Two primary keys to one foreign key Posted: 05 Aug 2013 04:43 PM PDT I have two IDs from two different tables (racing_couple and breeding_couple), and they are primary keys. In a third table (egg) I have IDpar, which is a foreign key that references the ID in racing_couple and breeding_couple. The tables represent racing and breeding couples of pigeons, and the egg table represents an egg of a racing or breeding couple. I'm using IDpar to identify which egg belongs to which couple. Note: there are other fields in both tables, but they are mostly varchar and not very relevant to this problem. If I have something like this in both tables, how do I know which IDpar holds a value from racing_couple and which holds a value from breeding_couple? I think I made a mistake by designing my database like this, but is there any way to make it work?
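One common redesign sketch: give egg two nullable foreign keys, one per parent table, plus a check that exactly one is set. Column names are shortened placeholders (in MySQL before 8.0.16 the CHECK is parsed but not enforced):

```sql
CREATE TABLE egg (
    id INT PRIMARY KEY,
    racing_couple_id   INT NULL REFERENCES racing_couple (id),
    breeding_couple_id INT NULL REFERENCES breeding_couple (id),
    CHECK ((racing_couple_id IS NULL) <> (breeding_couple_id IS NULL))
);
```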
MySQL Continue Handler Problems inside a Function Posted: 05 Aug 2013 08:13 PM PDT I am in the process of writing a simple MySQL function that looks up a value in a table and returns it; if no value is found, it returns NULL. However, even with a continue handler defined, I still end up with the warning "No data - zero rows fetched, selected, or processed". My code is below; what am I doing wrong? I really want to get rid of this warning :) Update: MySQL version 5.5.25
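For reference, a hedged sketch of the usual shape; warning 1329 ("No data") is raised by SELECT ... INTO when nothing matches, and the NOT FOUND continue handler is what should absorb it:

```sql
DELIMITER //
CREATE FUNCTION lookup_value(p_key INT) RETURNS VARCHAR(64)
    READS SQL DATA
BEGIN
    DECLARE v_result VARCHAR(64) DEFAULT NULL;
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET v_result = NULL;
    SELECT val INTO v_result FROM lookup_table WHERE id = p_key;
    RETURN v_result;
END //
DELIMITER ;
```

On 5.5 the warning may still be reported to the client even though the handler fires, which has been discussed as a server-side bug.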
Data sharing between client and server two ways Posted: 05 Aug 2013 08:55 AM PDT One of my Silverlight applications is running. This application uses one central database (SQL Server 2008) and three client databases (SQL Server 2008) on servers in other locations. Our requirement is that, with minor changes or without changing the Silverlight application, when we make changes in the central database (insert, update and delete in multiple tables), these changes occur automatically in all client databases. And when any change is made in any client database, all clients as well as the central database should be updated automatically. For this I have read a bit about the Sync Framework and SQL Server 2008 Change Tracking, but I have no idea how I would do this. Please help me: how can we achieve this?
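A hedged starting point with SQL Server 2008 Change Tracking, which the Sync Framework can use as its change-detection layer (database and table names are placeholders):

```sql
ALTER DATABASE CentralDb SET CHANGE_TRACKING = ON
    (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

ALTER TABLE dbo.Orders
    ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = ON);
```

The Sync Framework is designed to handle the bidirectional exchange on top of this, so the Silverlight application itself should need little or no change.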
How can row estimates be improved in order to reduce chances of spills to tempdb Posted: 05 Aug 2013 03:38 PM PDT I notice that when there are spill-to-tempdb events (causing slow queries), the row estimates are often way off for a particular join. I've seen spill events occur with merge and hash joins, and they often increase the runtime 3x to 10x. This question concerns how to improve row estimates, under the assumption that better estimates will reduce the chances of spill events. The actual number of rows is 40k. For this query, the plan shows a bad row estimate (11.3 rows): For this query, the plan shows a good row estimate (56k rows): Can statistics or hints be added to improve the row estimates for the first case? I tried adding statistics with particular filter values (property = 3330), but either I could not get the combination correct, or perhaps they are being ignored because the ObjectId is unknown at compile time and it might be choosing an average over all ObjectIds. Is there any mode where it would run the probe query first and use that to determine the row estimates, or must it fly blindly? This particular property has many values (40k) on a few objects and zero on the vast majority. I would be happy with a hint where the maximum expected number of rows for a given join could be specified. This is a generally haunting problem because some parameters may be determined dynamically as part of the join, or would be better placed within a view (no support for variables). Are there any parameters that can be adjusted to minimize the chance of spills to tempdb (e.g. min memory per query)? A robust plan hint had no effect on the estimate.
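A few hedged levers for this situation; object and column names are hypothetical:

```sql
-- 1. Compile with the actual values so the estimate can use the histogram:
DECLARE @ObjectId int = 1001;
SELECT v.ObjectId, v.Value
FROM dbo.PropertyValues AS v
WHERE v.Property = 3330 AND v.ObjectId = @ObjectId
OPTION (RECOMPILE);

-- 2. Filtered statistics aimed at the skewed predicate:
CREATE STATISTICS st_property_3330
ON dbo.PropertyValues (ObjectId)
WHERE Property = 3330;

-- 3. Raising the per-query memory floor (server-wide, in KB):
EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sp_configure 'min memory per query (KB)', 2048; RECONFIGURE;
```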
TimesTen performance as compared to 10g Posted: 05 Aug 2013 01:25 PM PDT I am new here, so please be nice... I have a question regarding TimesTen and Oracle. I ran an experiment with both products and found that the query speed on a table is slower on TimesTen than on Oracle. The experiment setup was as follows:
Table definition:
Query: The query speed for different ranges is consistently slower on TimesTen compared to Oracle. No indexes were built on the search key. Is there a particular reason for this? Other notes: I ran the same experiment but built an index on the search key on both TimesTen and 10g, and the difference is starkly in TimesTen's favour.
MySQL performance tuning + queries stuck on "Copying to tmp table" Posted: 05 Aug 2013 09:55 AM PDT The latter part of the question's title (queries stuck on "Copying to tmp table") has been addressed many times, and I have spent a fair amount of time researching this. I would appreciate it if you could help me come to a conclusion, especially with my particular server setup in mind. Quick overview of the server: The setup is in no way tweaked, so the current config is to some extent way off. Hopefully your scolding will provide knowledge. The web application running on the server is a Magento site with 25,000+ products. The query giving the most headache is the one generating sitemaps. Currently, the following query has been stuck for a little over an hour on "Copying to tmp table":
Relevant config: Server buffers: Per thread buffers:
The above memory limits will allow me to use 130% more RAM than is physically installed on my system. Obviously, something must change. According to the MySQLTuner Perl script, I am currently in danger of using:
The two variables that seem to affect "Copying to tmp table" the most are tmp_table_size and max_heap_table_size. Mine are both set to 0.25G.
There are two suggested fixes that seem to get mentioned more than others:
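For the in-memory temp table ceiling specifically, both variables have to move together, since the effective limit is the smaller of the two. A hedged sketch; the values are illustrative, not recommendations:

```sql
SET GLOBAL tmp_table_size      = 512 * 1024 * 1024;
SET GLOBAL max_heap_table_size = 512 * 1024 * 1024;

-- How often temp tables have been spilling to disk:
SHOW GLOBAL STATUS LIKE 'Created_tmp%tables';
```

Note that temporary tables containing TEXT/BLOB columns go to disk regardless of these settings, which may well be the case with large Magento joins.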
How to find when last reindexing happened in SQL Server? Posted: 05 Aug 2013 04:55 AM PDT I have set up a job for re-indexing, and I want to know when the last re-indexing happened, as a datetime.
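SQL Server keeps no first-class history of rebuilds, but a hedged proxy exists: a rebuild updates the index's statistics, so STATS_DATE gives an approximate last-rebuild time (ordinary statistics updates move it too):

```sql
SELECT OBJECT_NAME(i.object_id) AS table_name,
       i.name                   AS index_name,
       STATS_DATE(i.object_id, i.index_id) AS stats_updated
FROM sys.indexes AS i
WHERE i.object_id = OBJECT_ID('dbo.MyTable');  -- placeholder table
```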
innobackupex is failing while exporting backup Posted: 05 Aug 2013 03:55 AM PDT I am using a Perl script which works fine on another of my boxes; it takes individual schema backups and finally backs up all schemas together using Percona innobackupex. I have the following information in the log (posting one day's log details): Full details are in /apps/dbbackup/backup.log. The named pipes still exist in /tmp even after the backup fails: Any idea why the backup is failing?
MySQL 1264 warning from libmysql Posted: 05 Aug 2013 04:23 AM PDT I'm using the libmysql client library. I managed to connect, and things are going almost fine. Questions / doubts:
I would like to know what is causing this warning. Any pointers?
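Warning 1264 is "Out of range value for column", raised when an inserted value is clamped to the column's range. A hedged sketch for inspecting it server-side; strict mode turns the truncation into a hard error instead:

```sql
-- Right after the offending statement:
SHOW WARNINGS;

-- Fail loudly rather than clamp silently:
SET SESSION sql_mode = 'STRICT_ALL_TABLES';
```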
Views - Indexes/Keys/Execution Plans Posted: 05 Aug 2013 09:17 AM PDT I am working with a SQL Server 2005 database server that contains multiple databases and tables replicated from various sources. No one but administrators has access to this server. There is a SQL Server 2008 R2 database server that exposes all of the tables from the 2005 databases using views (and only views) within a single database. There is no row-level filtering done in these views; they contain the same data as the original tables. Regardless of why this architecture was chosen, it is causing three specific problems:
I have done a little research into indexed views. The idea would be basically to replace the existing views with indexed views that have the same unique index as the base table. However, I don't think this is possible, due to the following:
Does anyone have any thoughts on how to alleviate the above three problems?
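For reference, a hedged sketch of the indexed-view idea, which also shows the usual blocker: the view must be created WITH SCHEMABINDING, and schema binding only allows two-part names inside the same database, so views over another server's (or database's) tables do not qualify.

```sql
CREATE VIEW dbo.v_Customers
WITH SCHEMABINDING
AS
SELECT CustomerId, Name, Region
FROM dbo.Customers;   -- must be a two-part name in the SAME database
GO
CREATE UNIQUE CLUSTERED INDEX IX_v_Customers
    ON dbo.v_Customers (CustomerId);
```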