Monday, August 5, 2013

[how to] Totals Column - Blank if Zero or NULL



Totals Column - Blank if Zero or NULL

Posted: 05 Aug 2013 05:40 PM PDT

A report currently computes a "totals" column like so:

,CASE WHEN ISNULL(CLM_BREAKFAST_TYPEA.MealsA, 0)
        + ISNULL(CLM_BREAKFAST_TYPEB.MealsB, 0) = 0
      THEN ''
      ELSE CONVERT(VARCHAR(15),
                   ISNULL(CLM_BREAKFAST_TYPEA.MealsA, 0)
                 + ISNULL(CLM_BREAKFAST_TYPEB.MealsB, 0))
 END AS 'MealsTotal'

How can I get a blank cell when a total equals zero, instead of NULL or "0", and avoid doing the computation twice?
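One hedged way to do it (a sketch only: the FROM clause below is a placeholder join shape, since the rest of the report query isn't shown): compute the sum once in a CROSS APPLY, then let NULLIF turn a zero into NULL and ISNULL turn that into a blank string:

-- Sketch: CLM_BREAKFAST_TYPEA/B stand in for the report's real sources;
-- t.MealsTotal is computed exactly once.
SELECT ISNULL(CONVERT(VARCHAR(15), NULLIF(t.MealsTotal, 0)), '') AS MealsTotal
FROM CLM_BREAKFAST_TYPEA AS a
CROSS JOIN CLM_BREAKFAST_TYPEB AS b   -- placeholder join; use the report's real joins
CROSS APPLY (SELECT ISNULL(a.MealsA, 0) + ISNULL(b.MealsB, 0)) AS t(MealsTotal);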

Displaying Parent Child Information, With Certain Parent Columns Only Shown Once

Posted: 05 Aug 2013 04:44 PM PDT

For a TSQL report, I have a one-to-many parent-child relationship.

I want to report both parent and child relationships in the same report, like so, with the parent columns on the left and child columns on the right:

ParentName | ParentNum | ParentIncome | ChildName | ChildNum | ChildAllowance
John       | 1         | 50000        | Johnny    | 1        | 5
Jane       | 2         | 55000        | Jackie    | 2        | 10
Jane       | 2         |              | Billy     | 3        | 5
Jane       | 2         |              | Sally     | 4        | 5
Jackie     | 3         | 90000        | Monique   | 5        | 0

I want to avoid duplicating certain columns, like ParentIncome, because users dump the report output into Microsoft Excel and I want to minimize the chances of them summing columns incorrectly.

Therefore, for each parent, I want to display ParentIncome only once (ParentNum and ParentName are OK to repeat). For the remaining rows, display a blank cell (i.e., no NULL values).

Currently, I produce this report using a wrapper around a subquery. The wrapper looks like this for all the columns:

CASE WHEN DATA.ROW_NUMBER = 1
     THEN DATA.Field1
     ELSE '' END AS 'AColumn'

The subquery assigns ROW_NUMBER() using OVER with PARTITION BY and ORDER BY, so when ROW_NUMBER = 1, it's always a new parent.

This seems like something others have run into, though. In T-SQL, can I get the same results in a simpler way, without the extra outer wrapper query to format things?
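A hedged sketch (Parent/Child table and column names are assumptions): the ROW_NUMBER() can live directly inside the CASE, so no wrapper query is needed, provided the outer ORDER BY matches the window's partitioning and ordering:

SELECT p.ParentName,
       p.ParentNum,
       CASE WHEN ROW_NUMBER() OVER (PARTITION BY p.ParentNum
                                    ORDER BY c.ChildNum) = 1
            THEN CONVERT(VARCHAR(15), p.ParentIncome)
            ELSE '' END AS ParentIncome,   -- blank, not NULL, on repeat rows
       c.ChildName, c.ChildNum, c.ChildAllowance
FROM Parent AS p
JOIN Child  AS c ON c.ParentNum = p.ParentNum
ORDER BY p.ParentNum, c.ChildNum;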

Message file not found when starting SQLPlus

Posted: 05 Aug 2013 02:01 PM PDT

I've installed Oracle XE 11.2 on Fedora 18 but have trouble starting SQLPlus. It quits with the following error:

Error 6 initializing SQL*Plus
SP2-0667: Message file sp1<lang>.msb not found
SP2-0750: You may need to set ORACLE_HOME to your Oracle software directory

All of the environment variables are properly set (copied from the installation guide):

# ~/.zshrc
ORACLE_HOME=/u01/app/oracle/product/11.2.0/xe # It is in fact installed there
ORACLE_SID=XE
NLS_LANG=`$ORACLE_HOME/bin/nls_lang.sh`
PATH=$ORACLE_HOME/bin:$PATH

NLS_LANG is AMERICAN_AMERICA.AL32UTF8. As far as I understand, Oracle ignores the system language-related variables, but here they are anyway:

LC_CTYPE=en_US.UTF-8
LANG=en_US.UTF-8

Any suggestions?

Should I be concerned by large SERIAL values?

Posted: 05 Aug 2013 03:45 PM PDT

I have a Django application that uses PostgreSQL to analyze data from tweets. The data set increases by thousands of records with each request. I am using the database primarily as a cache, so I had planned to delete all records every 24 hours to permit new requests without needlessly increasing the size of the database.

Django uses the SERIAL type for storing the ids; a new item is given the next value after the last item that was created, rather than the first available number. I don't use the ids for anything outside of the ORM. My concern is that I will eventually run out of key space on my 32-bit VM. What does PostgreSQL do when the next value is too large for SERIAL? Does it raise an error? Does it wrap around to one?
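A sandbox sketch of what actually happens (assuming stock defaults): the sequence behind a SERIAL is 64-bit and created without CYCLE, so it neither wraps nor rolls back to one; once it passes 2^31-1, the INSERT fails because the value no longer fits the 4-byte integer column:

CREATE TABLE t (id SERIAL PRIMARY KEY);
-- Push the sequence to the top of the INT range, then ask for one more:
SELECT setval(pg_get_serial_sequence('t', 'id'), 2147483647);
INSERT INTO t DEFAULT VALUES;  -- ERROR:  integer out of range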

SQL Query too slow on SUM function

Posted: 05 Aug 2013 01:23 PM PDT

I have a table with around 32 million rows and a clustered unique index on CountryID, RetailerID, ProductID, DateID, EventID, TypeID, and the query is:

SELECT  f.RetailerID
,       TypeID                                 AS TypeID
,       c.CalendarMonth                        AS CalendarValue
,       SUM(ISNULL([Volume], 0))               AS Volume
,       SUM([VolumeBox])                       AS VolumeBox
,       SUM([VolumeKG])                        AS VolumeKG
,       SUM([VolumeUnit])                      AS VolumeUnit
,       SUM(ISNULL([R_Turnover], 0.0))         AS R_Turnover
,       SUM(ISNULL([R_VAT], 0.0))              AS R_VAT
,       SUM(ISNULL([R_TurnoverExVAT], 0.0))    AS R_TurnoverExVAT
,       SUM([SupplierRealisation_Amt])         AS [SupplierRealisation_Amt]
,       SUM([SupplierDiscount1_Amt])           AS [SupplierDiscount1_Amt]
,       SUM([Supplier_1NetSales_Amt])          AS [Supplier_1NetSales_Amt]
,       SUM([SupplierDiscount2_Amt])           AS [SupplierDiscount2_Amt]
,       SUM([Supplier_2NetSales_Amt])          AS [Supplier_2NetSales_Amt]
,       SUM([SupplierDiscount3_Amt])           AS [SupplierDiscount3_Amt]
,       SUM([Supplier_3NetSales_Amt])          AS [Supplier_3NetSales_Amt]
,       SUM(ISNULL(S_CostofGoodsSold, 0))      AS [S_CostofGoodsSold]
,       SUM(ISNULL(S_Profit, 0))               AS S_Profit
,       SUM(0.0)                               AS AdditionalCostofGoodsSold
,       SUM(ISNULL([R_DistributionCost], 0.0)) AS R_DistributionCost
,       SUM(ISNULL([R_Profit], 0.0))           AS R_Profit
FROM [dbo].[EventScenarios] es
JOIN dbo.[Event] e ON es.[EventID] = e.ID
JOIN dbo.EventProduct ep ON es.EventID = ep.EventID
JOIN [dbo].Product p ON p.ID = ep.ProductID
JOIN dbo.EventPL f
  ON  e.CountryID = f.CountryID
  AND f.RetailerID = e.RetailerID
  AND f.EventID = e.ID
  AND ep.ProductID = f.ProductID
INNER JOIN Calendar c ON c.DateID = f.DateID
WHERE f.CountryID = 14
  AND c.CalendarMonth BETWEEN 201301 AND 201312
GROUP BY f.RetailerID, c.CalendarMonth, TypeID

The query plan shows 88% of the time on a clustered index seek on the EventPL table, but it is still taking around 15 seconds to complete. Is there any way I can optimise it down to around 1-2 seconds?
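One hedged avenue (a sketch only; whether it helps depends on the actual plan, and the INCLUDE list is abbreviated, it would need every column the query sums): a covering nonclustered index keyed on the filter/join columns, so the seek reads narrow rows instead of the wide clustered index:

CREATE NONCLUSTERED INDEX IX_EventPL_CountryID_DateID
ON dbo.EventPL (CountryID, DateID)
INCLUDE (RetailerID, EventID, ProductID,
         Volume, R_Turnover, R_Profit /* ...plus the remaining SUM columns */);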

Oracle DB Recovery

Posted: 05 Aug 2013 01:30 PM PDT

Before I start, I just want to mention that I am not a DBA, I am a programmer/developer who also happens to look after the database. So if I say something wrong, my apologies in advance.

We recently experienced a massive city-wide power surge which killed our database server. We have multiple databases running on the server (RHEL 6, Oracle 11gR2). I back up all the data using RMAN and the command "backup database plus archivelog delete input", and then take a Data Pump (expdp) dump of one particular schema so that we can import it into our test environments overnight. After our server was destroyed, the only thing we could recover from the hard drives was the backupset folder, which contains a lot of bkp files. And that's about it. I don't have a control file or the DBID of the database. I do, however, have a data dump that's 3 days old.

Is there any way we can recover the database on to a new host?

Regards

Alex

CTE query doesn't print the right structure?

Posted: 05 Aug 2013 12:23 PM PDT

I have this table which represents hierarchy :

childID    parentID    NAME    siblingOrder
1          1           a       0
212        1           ab      1
112        1           ac      2
412        1           ad      3
-912       112         aca     0

The structure is:

a
+----ab
+----ac
     +---aca
+----ad

(The siblingOrder column is for controlling the order of ab, ac, ad.)

I already have this solution, which uses the siblingOrder:

;WITH CTE AS(
    SELECT childID, parentID, 0 AS depth, NAME, siblingOrder,
           CAST(RIGHT('00000' + CAST(siblingOrder AS VARCHAR(6)), 6) AS VARCHAR(1024)) AS PATH
    FROM   @myTable
    WHERE  childID = parentID

    UNION ALL

    SELECT TBL.childID, TBL.parentID,
           CTE.depth + 1, TBL.name, TBL.siblingOrder,
           CAST(cte.Path + '.' + RIGHT('00000' + CAST(Tbl.siblingOrder AS VARCHAR(6)), 6) AS VARCHAR(1024))
    FROM   @myTable AS TBL
           INNER JOIN CTE ON TBL.parentID = CTE.childID
    WHERE  TBL.childID <> TBL.parentID
)
SELECT path, depth, childID, parentID, siblingOrder, REPLICATE('----', depth) + name
FROM CTE
ORDER BY PATH

So ORDER BY PATH actually does the job:

[screenshot: the result set ordered correctly by PATH, each child indented under its parent]

The problem:

The problem is that I must(!) put values in siblingOrder for it to work!

Otherwise, for example:

If I put 0 in all siblingOrder rows, this is the result:

[screenshot: the result set where aca is no longer grouped under ac]

(yes, sorting by path no longer works...)

I need aca to always be under ac.

(The only reason I added siblingOrder is to order siblings!) And I don't want to force adding siblingOrder when it isn't needed.

Question :

Is it possible to enhance the query so that siblingOrder affects only siblings?

I mean, if I don't care about the order of the siblings (by putting 0), I still expect aca to be under ac.

Sqlonline: with siblingOrder

Sqlonline: without siblingOrder
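A hedged sketch of one fix: append a unique per-row suffix (childID here) to every path segment. When all siblingOrder values are 0, sibling paths tie exactly, so rows from different subtrees can interleave; a unique suffix keeps every child sorted under its own parent, while siblingOrder still dominates whenever it is set. Only the two PATH expressions change:

-- anchor member:
CAST(RIGHT('00000' + CAST(siblingOrder AS VARCHAR(6)), 6)
   + RIGHT('00000' + CAST(childID AS VARCHAR(6)), 6) AS VARCHAR(1024)) AS PATH
-- recursive member:
CAST(CTE.PATH + '.'
   + RIGHT('00000' + CAST(TBL.siblingOrder AS VARCHAR(6)), 6)
   + RIGHT('00000' + CAST(TBL.childID AS VARCHAR(6)), 6) AS VARCHAR(1024))

(With negative childIDs like -912 the suffix ordering among siblings is arbitrary but still deterministic and still confined to the right parent; any unique, fixed-width, non-negative value would give cleaner ordering.)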

Transaction log has grown huge after maintenance plan to rebuild index

Posted: 05 Aug 2013 11:02 AM PDT

A couple of days ago I was testing some stuff using the Maintenance Plans in SQL Server 2008.

I created one to rebuild indexes and take a full backup every week. (I know, bad thing I've found out today).

The thing is, since the transaction log grew a lot (around 80 GB, and the DB is 60 GB), the backup did not run. Now I've been googling around all day to see if there is a way to shrink the transaction log to the size it used to be before, which was something like 200 MB.

Is this possible? Why does this happen? I know that while rebuilding, it writes a copy of the index into the transaction log or something like that, but is it possible to get rid of this copy?
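For reference, a hedged sketch of the usual sequence (database, file, and path names are placeholders; this assumes the full recovery model, where the log is only truncated by a log backup):

BACKUP LOG MyDatabase TO DISK = N'D:\Backup\MyDatabase_log.trn';  -- marks inactive log as reusable
DBCC SHRINKFILE (MyDatabase_log, 512);  -- logical log file name; target size in MB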

Thanks, Federico

trigger in trigger or trigger before table creation oracle

Posted: 05 Aug 2013 11:56 AM PDT

I need to create a trigger inside a trigger, as follows:

SQL> CREATE OR REPLACE TRIGGER table_account_modify AFTER CREATE ON SCHEMA
  2  DECLARE
  3  NAZWA_TABELI varchar(250);
  4  sq0 VARCHAR2 (30000);
  5
  6
  7  BEGIN
  8  select ora_dict_obj_name into NAZWA_TABELI from DUAL;
  9
 10  sq0:= 'CREATE OR REPLACE TRIGGER new_trigger2; BEFORE INSERT on ACCOUNT FOR EACH ROW; BEGIN INSERT INTO pstepien VALUES (:NEW.DN,:NEW.CONTAINERDN,:NEW.SUPERVISOR,:NEW.OWNERDN,:NEW.SELFDN,:NEW.TARGETCLASS,:NEW.ERACCOUNTCOMPLIANCE,:NEW.ERACCOUNTSTATUS,:NEW.ERLASTACCESSDATE,:NEW.ERPARENT,:NEW.ERSERVICE,:NEW.ERUID); END;';
 11
 12
 13  IF NAZWA_TABELI = 'ACCOUNT' THEN
 14  execute immediate sq0;
 15  END IF;
 16  END table_account_modify;
 17  /

Trigger created.

SQL> create table ACCOUNT (
  2    DN VARCHAR2(255),
  3    CONTAINERDN VARCHAR2(255),
  4    SUPERVISOR VARCHAR2(255),
  5    OWNERDN VARCHAR2(255),
  6    SELFDN VARCHAR2(255),
  7    TARGETCLASS VARCHAR2(255),
  8    ERACCOUNTCOMPLIANCE VARCHAR2(255),
  9    ERACCOUNTSTATUS VARCHAR2(255),
 10    ERLASTACCESSDATE VARCHAR2(255),
 11    ERPARENT VARCHAR2(255),
 12    ERSERVICE VARCHAR2(255),
 13    ERUID VARCHAR2(255)
 14  );
create table ACCOUNT (
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-00911: invalid character
ORA-06512: at line 13

SQL>

After the trigger is created, I create the table to see if it works, and I get the error above. Could you please advise me what the problem could be and how to proceed?
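For what it's worth, the ORA-00911 most likely comes from the stray semicolons inside the dynamic string (after new_trigger2 and after FOR EACH ROW); a hedged sketch of the string as one valid CREATE TRIGGER statement:

sq0 := 'CREATE OR REPLACE TRIGGER new_trigger2
        BEFORE INSERT ON ACCOUNT FOR EACH ROW
        BEGIN
          INSERT INTO pstepien VALUES (:NEW.DN, :NEW.CONTAINERDN, :NEW.SUPERVISOR,
            :NEW.OWNERDN, :NEW.SELFDN, :NEW.TARGETCLASS, :NEW.ERACCOUNTCOMPLIANCE,
            :NEW.ERACCOUNTSTATUS, :NEW.ERLASTACCESSDATE, :NEW.ERPARENT,
            :NEW.ERSERVICE, :NEW.ERUID);
        END;';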

Thanks! Piotr

In PostgreSQL 9.3, UNION VIEW with WHERE clause not taken into account

Posted: 05 Aug 2013 07:19 AM PDT

We use Postgres 9.3, and we want to split a large database (estimated final size > 100 TB) across many servers. As our users already have a set of existing queries, we cannot use solutions like PL/Proxy. We would like to use the postgres_fdw contrib module to externalize some of the data.

To illustrate our problem, we created 2 foreign tables, foreign_table_1 and foreign_table_2. As inheritance seems not to be available on foreign tables, we created a union view like this:

CREATE VIEW union_table (id, name)
  AS
  (          SELECT id, name FROM foreign_table_1 WHERE id BETWEEN 1 AND 100000
   UNION ALL SELECT id, name FROM foreign_table_2 WHERE id > 100000
  );

The problem is that Postgres does not push the constraints into the query plan. For instance, here both foreign tables are scanned:

postgres=# explain analyze select * from union_table where id = 1234;
 Append  (cost=100.00..303.35 rows=5 width=8) (actual time=1.113..1.113 rows=0 loops=1)
   ->  Foreign Scan on master_table_1  (cost=100.00..154.82 rows=1 width=8) (actual time=0.567..0.567 rows=0 loops=1)
   ->  Foreign Scan on master_table_2  (cost=100.00..148.48 rows=4 width=8) (actual time=0.538..0.538 rows=0 loops=1)
 Total runtime: 1.876 ms
(4 rows)
  • Is it possible to push the WHERE clause into the query plan?
  • Is it possible to use inheritance with postgres_fdw?
  • Is there any other way to obtain a sharded table with Postgres?

Selecting from an excel spreadsheet into SQL Server table

Posted: 05 Aug 2013 10:43 AM PDT

This question has been asked before, but I have already tried giving full admin rights to the SQL Server user on the C:\temp\ folder location (I am using Windows authentication into SQL Server Express).

So for the following code snippet:

DECLARE @strSQL as varchar(200)
DECLARE @file as varchar(200)

SET @file = 'C:\temp\file.xlsx'
SET @strSQL = N'SELECT * INTO ##mytemptable FROM OPENROWSET(''Microsoft.Jet.OLEDB.4.0'', ''Excel 8.0;Database=' + @file + ';HDR=YES'', ''SELECT * FROM [Sheet1$]'');'
SET @strSQL = @strSQL + N'SELECT * FROM ##mytemptable'
PRINT @strSQL
EXEC (@strSQL)

EXPANDED SQL STATEMENT

SELECT *
INTO ##mytemptable
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0', 'Excel 8.0;Database=C:\temp\file.xlsx;HDR=YES', 'SELECT * FROM [Sheet1$]');

SELECT * FROM ##mytemptable

I get this error:

Msg 7399, Level 16, State 1, Line 1
The OLE DB provider "Microsoft.Jet.OLEDB.4.0" for linked server "(null)" reported an error. The provider did not give any information about the error.
Msg 7303, Level 16, State 1, Line 1
Cannot initialize the data source object of OLE DB provider "Microsoft.Jet.OLEDB.4.0" for linked server "(null)".
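One hedged avenue: the Jet 4.0 provider cannot read .xlsx workbooks (it predates the format), so the ACE provider is the usual substitute, assuming the Microsoft Access Database Engine redistributable is installed on the SQL Server machine:

SELECT *
INTO ##mytemptable
FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                'Excel 12.0;Database=C:\temp\file.xlsx;HDR=YES',
                'SELECT * FROM [Sheet1$]');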

Permissions using .NET SqlDependency

Posted: 05 Aug 2013 08:58 AM PDT

I created a db login, user and role as follows:

CREATE LOGIN myLogin WITH PASSWORD = 'somePW', CHECK_POLICY = OFF;
CREATE USER myUser FOR LOGIN myLogin;
CREATE ROLE "myRole" AUTHORIZATION [dbo];
EXECUTE sp_addrolemember N'myRole', N'myUser';

Furthermore, I added some permissions, following this link:

-- Database level permissions
GRANT CREATE PROCEDURE TO [myRole];
GRANT CREATE QUEUE TO [myRole];
GRANT CREATE SERVICE TO [myRole];
GRANT SUBSCRIBE QUERY NOTIFICATIONS TO [myRole];
GRANT VIEW DEFINITION TO [myRole];

-- Service Broker permissions
GRANT REFERENCES ON CONTRACT::[http://schemas.microsoft.com/SQL/Notifications/PostQueryNotification] TO [myRole];
GRANT RECEIVE ON QueryNotificationErrorsQueue TO [myRole];

and also tried changing the db-owner as described in this article

This DBA question (aiming in the same direction) does not provide any help.

The permissions to execute the query and do a select on the table via the SqlDependency are also set. I verified this using SSMS by executing the query with the user/password combination created above.

Now I try to use the SqlDependency via .NET (the connection to the DB is established with the login/user described above). Executing SqlDependency.Start( myConnectionString ) throws an exception (translated from German):

The specified user 'owner' was not found. Searching for the queue object 'SqlQueryNotificationService-9741490d-4f04-1f4e-AD70-b198e85b2812' is not possible because the object does not exist or you do not have the required permission. Invalid object name 'SqlQueryNotificationService-9741490d-4f04-1f4e-AD70-b198e85b2812'.

Any ideas what is missing?

FTS catalog in SQL Server for multiple FT indexes

Posted: 05 Aug 2013 09:00 AM PDT

I have created one catalog for an FTS index. Can the same catalog be used by another table's full-text index?
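In principle yes: one catalog can hold full-text indexes from many tables (each table gets at most one full-text index, but any number of indexes can share a catalog). A hedged sketch with assumed table/key names:

CREATE FULLTEXT CATALOG MyCatalog;
CREATE FULLTEXT INDEX ON dbo.TableA (ColA) KEY INDEX PK_TableA ON MyCatalog;
CREATE FULLTEXT INDEX ON dbo.TableB (ColB) KEY INDEX PK_TableB ON MyCatalog;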

SQL Server Database grew too much too fast

Posted: 05 Aug 2013 09:22 AM PDT

I have an issue with one database. The issue is as follows: the database has been running on SQL Server 2000 Standard for the last 6 years in Full Recovery mode.

In the beginning of this summer the database was about 5GB.

Since then, the only thing we did beyond normal usage is some extensive deletions.

This week, there was a problem with the PC, and I was forced to take a backup, restore it on a SQL Server 2008 R2 Express instance, and set the database to 2008 compatibility mode.

The backup file was about 1 GB. When I restored it, the MDF was 9 GB in size! I checked the old MDF, and it was the same size.

I checked the size of the tables, and they do not come close to the 9 GB reported!

I did a shrink but the size did not change.

Any clues or where to check?

Is there a chance that Full Recovery can affect the size of the MDF files?

I am thinking of setting the recovery model to Simple, backing it up, and restoring it. Is it going to make a difference? Can I do it on a live database?

Thanks in advance!

UPDATE: The initial size of the database is 1306 MB.

UPDATE 2, sp_spaceused output:

Database Size = 8646.88 MB, Unallocated Space = 0.00 MB
reserved = 1336984 KB, data = 1020376 KB, index_size = 210408 KB, unused = 106200 KB
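A hedged first check: compare allocated vs. used space per file, to see whether the extra gigabytes are empty reserved space in the data file or in the log:

SELECT name, type_desc,
       size / 128 AS size_mb,                              -- size is in 8 KB pages
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb
FROM sys.database_files;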

Reading CSV files located on LINUX server and updating the tables in SQL Server Database

Posted: 05 Aug 2013 03:38 AM PDT

I was wondering how to ingest CSV files located on a Red Hat Linux server into a SQL Server database table.

I know we can write a stored procedure / BULK INSERT to read files located on the same Windows server as SQL Server and update the database, but I am not sure how to do it when the files are on Linux.
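One hedged route, assuming the Linux directory can be exported (Samba/NFS) and reached from the Windows host as a UNC share that the SQL Server service account can read; share, file, and table names are placeholders:

BULK INSERT dbo.MyTable
FROM '\\linuxhost\export\data.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);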

Any help would be greatly appreciated

How do I track why my SQL Azure occupied space pulsates?

Posted: 05 Aug 2013 11:43 AM PDT

This is inspired by this StackOverflow question. I have this code:

SELECT CONVERT(INT, SUM(reserved_page_count) * 8 / 1024) FROM sys.dm_db_partition_stats  

for detecting database space consumption, and it returns exactly the same result as the SQL Azure management portal pie chart. The problem is that the number fluctuates greatly without apparent reason - sometimes it's around 35% and sometimes around 54%.

This is kind of worrying - I don't understand what is happening, and the major concern is that we suddenly consume all the space and our service comes to a halt.

I seriously tried to look into sys.dm_db_partition_stats and sys.partitions but they look like a pile of magic numbers to me.

What data should I use to find the root cause of the fluctuations?
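As a starting point, a hedged sketch that breaks the same counter down per table (same DMV, so it runs on SQL Azure) to see which objects drive the swings:

SELECT OBJECT_NAME(p.object_id) AS table_name,
       SUM(p.reserved_page_count) * 8 / 1024 AS reserved_mb
FROM sys.dm_db_partition_stats AS p
GROUP BY p.object_id
ORDER BY reserved_mb DESC;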

pros and cons of installing same instances of SQL Server on two different servers [on hold]

Posted: 05 Aug 2013 04:45 AM PDT

What are some of the pros and cons of installing identical instances of SQL Server on two different servers, as related to SSRS configuration?

Getting the information that exists in "Job properties -> Notifications" page?

Posted: 05 Aug 2013 02:26 PM PDT

I use SQL Server 2008 R2 and I need to get some properties from the msdb database.

I need a query to get information that exists in the Job properties -> Notifications page.

In which table can I find this information?
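The Notifications page is backed by the notify_* columns of msdb.dbo.sysjobs; a hedged sketch that also resolves the operator names:

SELECT j.name,
       j.notify_level_email,    e.name AS email_operator,
       j.notify_level_page,     p.name AS page_operator,
       j.notify_level_eventlog
FROM msdb.dbo.sysjobs AS j
LEFT JOIN msdb.dbo.sysoperators AS e ON e.id = j.notify_email_operator_id
LEFT JOIN msdb.dbo.sysoperators AS p ON p.id = j.notify_page_operator_id;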

How do I unload data from a database to a flat file?

Posted: 05 Aug 2013 08:40 AM PDT

I am aware of the concept of bulk insert wherein I load data from a flat file to a database.

Now I want to unload data from a table to a flat file, i.e. I want to export data in a table to a flat file. Can anyone tell me how to do this?

I am using SQL Server 2008 R2.
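One common route is the bcp utility, the export-side counterpart of BULK INSERT; a hedged example run from a command prompt (server, database, table, and path are placeholders):

bcp MyDb.dbo.MyTable out C:\temp\MyTable.csv -c -t, -T -S MYSERVER
rem -c = character mode, -t, = comma field terminator, -T = Windows authentication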

Changing autogrow settings with database mirroring

Posted: 05 Aug 2013 10:54 AM PDT

Recently I changed the Autogrow settings for some of the databases on our SQL Server 2008 R2 server. These are involved in a database mirroring configuration, with the principal on SERVER1 and the mirror on SERVER2.

This week I failed over three databases; now SERVER1 is the mirror and SERVER2 is the principal for these databases. It appears that the autogrow settings did not move over to SERVER2 properly, as the databases now show that they grow by a percentage (two are set to 32768%, the other to 131072%).

This is different from the settings that used to be there (I believe it was the default, 10%), and also different from the 256 MB I set on SERVER1.

To make things more confusing, this is only happening on the primary file; the secondary files and the log file have retained the settings I set on SERVER1.

My suspicion is that this is a bug; I did patch SQL Server after changing the autogrow settings. My question is: has anyone seen this scenario before? Is there a method to make sure all of the settings are correct on both servers without failing all the databases over?

UPDATE: Using a 4th database that I'll call DB1, I set the autogrow on SERVER1 to 512MB (after failing the database over, witnessing the same issue, and failing it back). When I failed it over to SERVER2 after that, it shows growth of 65536%. The takeaway is that the is_autogrow_percent value in sys.master_files is not moving to SERVER2, but the growth value is.

UPDATE2: With DB1 on SERVER1, I changed the autogrow to 10% and failed it to SERVER2. The 10% value remained. I then repeated this, setting autogrow back to 256MB this time. Again, the growth value changed but the is_autogrow_percent did not.
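For auditing both sides without failing over, a hedged sketch (run on each server; sys.master_files covers every database on the instance):

SELECT DB_NAME(database_id) AS db, name, type_desc,
       growth, is_percent_growth
FROM sys.master_files
ORDER BY db, file_id;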

Two primary keys to one foreign key

Posted: 05 Aug 2013 04:43 PM PDT

racing_couple
ID (PRIMARY)

breeding_couple
ID (PRIMARY)

egg
IDpar (FOREIGN KEY)

I have 2 IDs from 2 different tables (racing_couple and breeding_couple), and they are primary keys. In the third table (egg) I have IDpar, which is a foreign key that references ID in racing_couple and breeding_couple. The tables represent racing and breeding couples of pigeons; the egg table represents an egg of a racing or breeding couple. I'm using IDpar to identify which egg belongs to which couple.

Note: There are other fields in both tables, but they are mostly varchar and not so relevant to this problem.

If I have something like the following in both tables, how do I know which IDpar references racing_couple and which references breeding_couple? I think I made a mistake by designing my database like this, but is there any way to make it work?

ID (racing_couple)
1
2

ID (breeding_couple)
1
2

IDpar (egg)
1 (ID racing_couple)
1 (ID breeding_couple)
2 (ID racing_couple)
2 (ID breeding_couple)
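One hedged redesign (generic SQL; note that MySQL before 8.0.16 parses but ignores CHECK constraints): give egg two nullable foreign keys and require exactly one of them to be set, so every egg points unambiguously at one couple type:

CREATE TABLE egg (
    ID INT PRIMARY KEY,
    racing_couple_ID   INT NULL REFERENCES racing_couple(ID),
    breeding_couple_ID INT NULL REFERENCES breeding_couple(ID),
    CHECK (   (racing_couple_ID IS NULL     AND breeding_couple_ID IS NOT NULL)
           OR (racing_couple_ID IS NOT NULL AND breeding_couple_ID IS NULL))
);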

MySQL Continue Handler Problems inside a Function

Posted: 05 Aug 2013 08:13 PM PDT

I am in the process of writing a simple MySQL function that looks up a value in a table and returns it. If no value is found, it returns NULL. However, even with a continue handler defined, I still end up with the warning "No data - zero rows fetched, selected, or processed". My code is below; what am I doing wrong? I really want to get rid of this warning :)

DROP FUNCTION IF EXISTS `getCompanySetting`;
DELIMITER |

CREATE FUNCTION `getCompanySetting`(setting_company_id INT, setting_name VARCHAR(255))
    RETURNS TEXT
BEGIN
    DECLARE setting_value TEXT DEFAULT NULL;
    DECLARE CONTINUE HANDLER FOR SQLWARNING BEGIN END;
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET setting_value = NULL;

    SELECT
        value
    FROM company_settings
    WHERE
        company_id = `setting_company_id`
        AND
        name       = `setting_name`
    INTO setting_value;

    RETURN setting_value;
END|

DELIMITER ;

mysql> SELECT getCompanySetting(24, 'observers_active');
+-------------------------------------------+
| getCompanySetting(24, 'observers_active') |
+-------------------------------------------+
| NULL                                      |
+-------------------------------------------+
1 row in set, 1 warning (0.00 sec)

mysql> show warnings;
+---------+------+-----------------------------------------------------+
| Level   | Code | Message                                             |
+---------+------+-----------------------------------------------------+
| Warning | 1329 | No data - zero rows fetched, selected, or processed |
+---------+------+-----------------------------------------------------+
1 row in set (0.00 sec)

Update: MySQL Version 5.5.25
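One hedged workaround that sidesteps the handler entirely: assign through a scalar subquery, which simply yields NULL when nothing matches and raises no NOT FOUND condition:

SET setting_value = (SELECT value
                       FROM company_settings
                      WHERE company_id = setting_company_id
                        AND name       = setting_name
                      LIMIT 1);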

Data sharing between client and server two ways

Posted: 05 Aug 2013 08:55 AM PDT

One of my Silverlight applications is in production. The application uses one central database (SQL Server 2008) and three client databases (SQL Server 2008) on servers in other locations.

Our requirement is that, with minor changes (or none) to the Silverlight application, when we make changes in the central database (inserts, updates, and deletes in multiple tables), those changes automatically occur in all client databases. And when any change is made in any client database, all clients as well as the central database should be updated automatically.

For this I have read a little about the Sync Framework and SQL Server 2008 Change Tracking, but I have no idea how to go about it. Please help me: how can we achieve this?
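If Change Tracking turns out to be the route, a hedged sketch of the DDL that switches it on (database and table names are placeholders; a sync layer such as the Sync Framework then consumes the tracked changes):

ALTER DATABASE CentralDb
SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

ALTER TABLE dbo.Orders
ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = ON);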

How can row estimates be improved in order to reduce chances of spills to tempdb

Posted: 05 Aug 2013 03:38 PM PDT

I notice that when there are spill to tempdb events (causing slow queries) that often the row estimates are way off for a particular join. I've seen spill events occur with merge and hash joins and they often increase the runtime 3x to 10x. This question concerns how to improve row estimates under the assumption that it will reduce chances of spill events.

The actual number of rows is 40k.

For this query, the plan shows bad row estimate (11.3 rows):

select Value
  from Oav.ValueArray
 where ObjectId = (select convert(bigint, Value) NodeId
                     from Oav.ValueArray
                    where PropertyId = 3331
                      and ObjectId = 3540233
                      and Sequence = 2)
   and PropertyId = 3330
option (recompile);

For this query, the plan shows good row estimate (56k rows):

declare @a bigint = (select convert(bigint, Value) NodeId
                       from Oav.ValueArray
                      where PropertyId = 3331
                        and ObjectId = 3540233
                        and Sequence = 2);

select Value
  from Oav.ValueArray
 where ObjectId = @a
   and PropertyId = 3330    -- Workpiece Side Active Objects
option (recompile);

Can statistics or hints be added to improve the row estimates for the first case? I tried adding statistics with particular filter values (property = 3330), but either I could not get the combination correct, or perhaps it is being ignored because the ObjectId is unknown at compile time and it might be choosing an average over all ObjectIds.

Is there any mode where it would run the probe query first and use the result to determine the row estimates, or must it fly blind?

This particular property has many values (40k) on a few objects and zero on the vast majority. I would be happy with a hint where the max expected number of rows for a given join could be specified. This is a generally haunting problem because some parameters may be determined dynamically as part of the join or would be better placed within a view (no support for variables).

Are there any parameters that can be adjusted to minimize the chance of spills to tempdb (e.g. min memory per query)? The robust plan hint had no effect on the estimate.
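One more hedged option besides the variable: materialize the probe result in a #temp table, which gets real statistics at compile time (unlike a variable or the nested subquery), so the join estimate is based on the actual NodeId:

SELECT CONVERT(bigint, Value) AS NodeId
INTO #node
FROM Oav.ValueArray
WHERE PropertyId = 3331 AND ObjectId = 3540233 AND Sequence = 2;

SELECT v.Value
FROM Oav.ValueArray AS v
JOIN #node AS n ON v.ObjectId = n.NodeId
WHERE v.PropertyId = 3330
OPTION (RECOMPILE);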

TimesTen performance as compared to 10g

Posted: 05 Aug 2013 01:25 PM PDT

I am new here, so please be nice...

I have a question regarding TimesTen and Oracle. I ran an experiment with both products and found that query speed on a table is slower in TimesTen than in Oracle.

The experiment setup was as follows:

  • 500,000 rows of data
  • count(*) query on a column which is NOT a candidate key

Table definition:

CREATE TABLE menu(DISH_ID NUMBER NOT NULL PRIMARY KEY, price NUMBER(7))

Query: SELECT COUNT(*) FROM menu WHERE price <= 50. The price value increases uniformly up to the largest value in the DB. The time taken to execute the query 1000 times was recorded.

The query speed for different ranges is consistently slower on TimesTen compared to Oracle. No indexes were built on the search key. Is there a particular reason for this?

Other notes: I ran the same experiment, but built an index on the search key on both TimesTen and 10g, and the difference is stark in TimesTen's favour.

MySQL performance tuning + queries stuck on "Copying to tmp table"

Posted: 05 Aug 2013 09:55 AM PDT

The latter part of the question's title (queries stuck on "Copying to tmp table") has been addressed many times, and I have spent a fair amount of time researching this. I would appreciate it if you guys could help me come to a conclusion, especially with my particular server setup in mind.

Quick overview of server:
- Dedicated server with 2 cores and 64 GB RAM
- Only runs MySQL

The setup has not been tuned at all, so the current config may be way off. Hopefully your scolding will be instructive.

The web application running on the server is a Magento site with 25.000+ products. The query giving the most headache is the one generating sitemaps.

Currently, the following query has been stuck for a little over an hour on "Copying to tmp table":

Note: I do not really need input on how to increase performance by optimizing this query; I would rather see how much time I can shave off the queries that already exist.

SELECT DISTINCT `e`.`entity_id`, `ur`.`request_path` AS `url`, `stk`.`is_in_stock`
FROM `catalog_product_entity` AS `e`
INNER JOIN `catalog_product_website` AS `w`
    ON e.entity_id = w.product_id
LEFT JOIN `core_url_rewrite` AS `ur`
    ON e.entity_id = ur.product_id AND ur.category_id IS NULL AND ur.store_id = '1' AND ur.is_system = 1
INNER JOIN `catalog_category_product_index` AS `cat_index`
    ON e.entity_id = cat_index.product_id AND cat_index.store_id = '1'
   AND cat_index.category_id IN ('2', '3', '68', '86', '145', '163', '182', '196', '198', '214', '249', '252', '285', '286', '288', '289', '290', '292', '549')
   AND cat_index.position != 0
INNER JOIN `cataloginventory_stock_item` AS `stk`
    ON e.entity_id = stk.product_id AND stk.is_in_stock = 1
INNER JOIN `catalog_product_entity_int` AS `t1_visibility`
    ON e.entity_id = t1_visibility.entity_id AND t1_visibility.store_id = 0
LEFT JOIN `catalog_product_entity_int` AS `t2_visibility`
    ON t1_visibility.entity_id = t2_visibility.entity_id AND t1_visibility.attribute_id = t2_visibility.attribute_id AND t2_visibility.store_id = '1'
INNER JOIN `catalog_product_entity_int` AS `t1_status`
    ON e.entity_id = t1_status.entity_id AND t1_status.store_id = 0
LEFT JOIN `catalog_product_entity_int` AS `t2_status`
    ON t1_status.entity_id = t2_status.entity_id AND t1_status.attribute_id = t2_status.attribute_id AND t2_status.store_id = '1'
WHERE (w.website_id = '1')
  AND (t1_visibility.attribute_id = '102')
  AND ((IF(t2_visibility.value_id > 0, t2_visibility.value, t1_visibility.value)) IN (3, 2, 4))
  AND (t1_status.attribute_id = '96')
  AND ((IF(t2_status.value_id > 0, t2_status.value, t1_status.value)) IN (1))

Relevant config:

Server buffers:

max_connections = 1500
key_buffer_size = 22G
innodb_buffer_pool_size = 16G
innodb_additional_mem_pool_size = 2G
innodb_log_buffer_size = 400M
query_cache_size = 64M

Per thread buffers:

read_buffer_size = 2M
read_rnd_buffer_size = 16M
sort_buffer_size = 128M
thread_stack = 192K
join_buffer_size = 8M

Question: Do any of these variables seem way off to you?

The above memory limits would allow MySQL to use 130% of the RAM physically installed on the system. Obviously, something must change.

According to the MySQLTuner Perl script, I am currently in danger of using:
Total buffers: 40.7G global + 28.2M per thread (1500 max threads)
Maximum possible memory usage: 82.0G (130% of installed RAM)

Question: Which of the above variables would give the biggest performance increase? Or, put differently: will it be more useful to increase the total server buffer limits or the per-thread buffer limits?

The two variables that seem to affect "Copying to tmp table" the most are:
- tmp_table_size
- max_heap_table_size

Mine are both set to 0.25G

Question: Any recommendations specifically for those two variables?
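For what it's worth, both caps can be raised for only the session that runs the sitemap query rather than globally (the effective in-memory temp-table limit is the lower of the two values):

SET SESSION tmp_table_size      = 1024 * 1024 * 1024;  -- 1G, for this connection only
SET SESSION max_heap_table_size = 1024 * 1024 * 1024;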

There are two suggested fixes that seem to get mentioned more than others:
- Kill process. Do flush table.
- Kill process. Do repair / optimize tables.

Question: Which of the two suggested solutions above do you think is more viable?

How to find when last reindexing happened in sql server?

Posted: 05 Aug 2013 04:55 AM PDT

I have set up a job for re-indexing, and I want to know when the last re-indexing happened, as a datetime.
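A hedged sketch against the msdb job history (the job name is a placeholder; agent_datetime() is the msdb helper that combines the integer run_date/run_time columns, and step_id = 0 is the job outcome row):

SELECT TOP (1) j.name,
       msdb.dbo.agent_datetime(h.run_date, h.run_time) AS last_run
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.sysjobhistory AS h ON h.job_id = j.job_id
WHERE j.name = 'MyReindexJob' AND h.step_id = 0
ORDER BY h.run_date DESC, h.run_time DESC;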

innobackupex is failing while exporting backup

Posted: 05 Aug 2013 03:55 AM PDT

I am using a Perl script (which works fine on another of my boxes) that takes individual schema backups and finally backs up all schemas together using Percona innobackupex.

I have the following information in the log (posting one day's log details):

--slave-info is used with --no-lock but without --safe-slave-backup. The
binlog position cannot be consistent with the backup data.
Died at /apps/mysql/scripts/mysql_backup.pl line 214.

[Fri Oct 26 04:20:01 2012] [14670]
[Fri Oct 26 04:20:01 2012] [14670] Starting individual backups
[Fri Oct 26 04:20:01 2012] [14670]
[Fri Oct 26 04:20:01 2012] [14670] Backing up proddb
[Fri Oct 26 04:20:01 2012] [14670] Creating /tmp/mysql_backup-14670.fifo
[Fri Oct 26 04:20:01 2012] [14670] Starting /usr/local/bin/zipmt -s -t 4 -c - < /tmp/mysql_backup-14670.fifo > /apps/dbbackup/proddb02.backup-2012-10-26-042001-proddb.dmp.bz2
[Fri Oct 26 04:20:01 2012] [14670] Starting /usr/bin/innobackupex --slave-info --no-lock --stream=tar --databases proddb ./ > /tmp/mysql_backup-14670.fifo
[Fri Oct 26 04:20:02 2012] [14670] FAIL: /usr/bin/innobackupex --slave-info --no-lock --stream=tar --databases proddb ./ > /tmp/mysql_backup-14670.fifo failed 256

Full details are in /apps/dbbackup/backup.log.

Cause:
    /usr/bin/innobackupex --slave-info --no-lock --stream=tar --databases proddb ./ > /tmp/mysql_backup-14670.fifo failed 256

The named pipes still exist in /tmp even after the backup fails:

ls -lrth /tmp
=============
prw-r--r--. 1 root root 0 Oct 25 04:20 mysql_backup-18215.fifo
prw-r--r--. 1 root root 0 Oct 26 04:20 mysql_backup-14670.fifo
prw-r--r--. 1 root root 0 Oct 27 04:20 mysql_backup-11278.fifo
prw-r--r--. 1 root root 0 Oct 28 04:20 mysql_backup-7163.fifo
prw-r--r--. 1 root root 0 Oct 29 04:20 mysql_backup-4191.fifo
prw-r--r--. 1 root root 0 Oct 30 04:20 mysql_backup-595.fifo

Any idea why the backup is failing?

MySQL 1264 warning from libmysql

Posted: 05 Aug 2013 04:23 AM PDT

I'm using the libmysql.dll library to connect to my database from a 3rd party application (metatrader).

I managed to connect and things are going almost fine. I use mysql_real_connect and mysql_errno for connecting and capturing possible errors. I'm using the InnoDB engine and STRICT_TRANS_TABLES in SQL mode.

Questions / Doubts:

  1. I thought mysql_errno only captured errors and not warnings, but I'm getting a 1264 MySQL warning in Metatrader (1264 SQLSTATE: 22003 (ER_WARN_DATA_OUT_OF_RANGE)); however, it doesn't say where it comes from. How can I resolve this?

  2. If I manually run the same UPDATE or INSERT statements against my database, I get no warnings and/or errors.

I would like to know whether this is a libmysql bug, or otherwise find out what's going on. Perhaps a specific table/field is responsible?

Any pointers?
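One hedged way to pin it down: issue SHOW WARNINGS on the same connection immediately after each statement the application sends. A sandbox repro of 1264 for reference (under STRICT_TRANS_TABLES the insert becomes an error; with strict mode off it is only a warning):

CREATE TABLE t1264 (v TINYINT);   -- range -128..127
INSERT INTO t1264 VALUES (999);   -- 1264: Out of range value for column 'v'
SHOW WARNINGS;                    -- must run right after, on the same connection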

Views - Indexes/Keys/Execution Plans

Posted: 05 Aug 2013 09:17 AM PDT

I am working with a SQL Server 2005 database server that contains multiple databases and tables replicated from various sources. No one but administrators has access to this server.

There is a SQL 2008 R2 database server that exposes all of the tables from the 2005 databases using views (and only views) within a single database. There is no row-level filtering done on these views, they contain the same data as the original tables.

Regardless of why this architecture was chosen, it is causing three specific problems:

  1. In SQL Server Management Studio, when browsing the views, there is no way to determine the indexes or primary keys on the base table.

  2. When using a tool such as SSRS to build a drag-and-drop report, it does not automatically generate the relationships between the views (probably linked to #1).

  3. Running an execution plan for a query written against the views is virtually useless, as many of the nodes end up being "Remote Query".

I have done a little research into indexed views. The idea would be to replace the existing views with indexed views that have the same unique index as the base table. However, I don't think this is possible due to the following:

All base tables referenced by the view must be in the same database as the view and have the same owner as the view. http://msdn.microsoft.com/en-us/library/ms191432.aspx

Does anyone have any thoughts on how to alleviate the above three problems?
