Wednesday, September 4, 2013

Problem using merge into a remote table over dblink

Posted: 04 Sep 2013 08:02 PM PDT

I'm attempting to perform a merge into a remote table, let's say on server A (over a dblink), using a local table on server B. Source and target databases are both on Oracle 10.2.0.3.

We already have a similar existing merge that works fine (but with the query running on server A and merging into a table on server B) so I'm a bit flummoxed as to what could be causing the problem.

I've reduced the case to the simplest form: identical tables, using the entire content of the local table for the merge. Still, the merge fails with ORA-02064 "distributed operation not supported":

MERGE INTO attendance@dblnk tgt
USING (
    SELECT *
    FROM attendance
    -- WHERE TRUNC(in_date) = TO_DATE('01.09.2013', 'DD.MM.YYYY')
) src
ON (tgt.emp_no = src.emp_no AND tgt.in_date = src.in_date)
WHEN NOT MATCHED THEN
    INSERT (emp_no, in_date, out_date)
    VALUES (src.emp_no, src.in_date, src.out_date)

Any idea what I should be checking next? Many thanks in advance
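Since this MERGE only has a WHEN NOT MATCHED branch, one thing worth trying is to rewrite it as an INSERT ... SELECT with a NOT EXISTS filter, which avoids the distributed MERGE restriction entirely. A minimal sketch, assuming the same table and dblink names as above:

INSERT INTO attendance@dblnk (emp_no, in_date, out_date)
SELECT src.emp_no, src.in_date, src.out_date
FROM attendance src
WHERE NOT EXISTS (
    SELECT 1
    FROM attendance@dblnk tgt
    WHERE tgt.emp_no = src.emp_no
      AND tgt.in_date = src.in_date
);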

Software for working with SQL Server databases in Windows Server 2008

Posted: 04 Sep 2013 08:41 PM PDT

We have an ASP .NET project that uses SQL Server that we host on a Windows Server 2008 machine. What software can I use to interact with this database manually? I'm willing to use built-in tools and command line tools. (Command line tools would even be preferable.)
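For command line work, sqlcmd ships with SQL Server and covers ad hoc queries well; SQL Server Management Studio is the usual GUI if one is wanted later. A minimal sqlcmd session using Windows authentication (the server name is a placeholder):

sqlcmd -S localhost -E
1> SELECT name FROM sys.databases;
2> GO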

Export from phpMyAdmin and import later from another phpMyAdmin fail

Posted: 04 Sep 2013 03:46 PM PDT

I have MySQL Server and phpMyAdmin on a server at the office. I exported a database using the phpMyAdmin export feature. When I come home and try to import the exported .sql file, I get this error:

-- Constraints for table product_detail_has_category
ALTER TABLE product_detail_has_category
    ADD CONSTRAINT fk_product_detail_has_category_category1
        FOREIGN KEY (category) REFERENCES category (id) ON UPDATE CASCADE,
    ADD CONSTRAINT fk_product_detail_has_category_product_detail1
        FOREIGN KEY (detail) REFERENCES product_detail (id) ON UPDATE CASCADE;

MySQL said:

1452 - Cannot add or update a child row: a foreign key constraint fails (kraken.#sql-6f3_141, CONSTRAINT fk_product_detail_has_category_category1 FOREIGN KEY (category) REFERENCES category (id) ON UPDATE CASCADE)

What should I do in order to import the file on my server without problems?
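The usual cause is that the dump adds the constraints before the referenced data is fully loaded, or the data contains orphaned rows. Assuming the data itself is consistent, disabling foreign key checks for the import session works around the ordering problem:

SET FOREIGN_KEY_CHECKS = 0;
-- run the contents of the exported .sql file here (e.g. via the mysql client's SOURCE command)
SET FOREIGN_KEY_CHECKS = 1;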

MySQL randomly refuses to use index

Posted: 04 Sep 2013 09:00 PM PDT

MySQL randomly refuses to use indexes for even simple queries. One example from the slow query log:

# Time: 130904 12:19:59
# User@Host: cron[cron] @ [10.71.41.38]
# Query_time: 58.488760 Lock_time: 0.000029 Rows_sent: 0 Rows_examined: 13634899
SET timestamp=1378311599;
UPDATE dial_list set status='AB' where lead_id = '3335602076';

This is a MyISAM table with lead_id as the primary key:

| Field   | Type            | Null | Key | Default | Extra          |
| lead_id | int(9) unsigned | NO   | PRI | NULL    | auto_increment |

The same query run on an identical server with similar data size uses the index. Also, not all such queries on this server turn up as table scans. What could be the problem?

mysql> show create table dial_list\G
************* 1. row *************
Table: dial_list
Create Table: CREATE TABLE dial_list (
lead_id int(9) unsigned NOT NULL AUTO_INCREMENT,
entry_date datetime DEFAULT NULL,
modify_date timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
status varchar(6) DEFAULT NULL,
user varchar(20) DEFAULT NULL,
vendor_lead_code varchar(20) DEFAULT NULL,
source_id varchar(50) DEFAULT NULL,
list_id bigint(14) unsigned NOT NULL DEFAULT '0',
gmt_offset_now decimal(4,2) DEFAULT '0.00',
called_since_last_reset enum('Y','N','Y1','Y2','Y3','Y4','Y5','Y6','Y7','Y8','Y9','Y10') DEFAULT 'N',
phone_code varchar(10) DEFAULT NULL,
phone_number varchar(18) NOT NULL,
title varchar(4) DEFAULT NULL,
first_name varchar(30) DEFAULT NULL,
middle_initial varchar(1) DEFAULT NULL,
last_name varchar(30) DEFAULT NULL,
address1 varchar(100) DEFAULT NULL,
address2 varchar(100) DEFAULT NULL,
address3 varchar(100) DEFAULT NULL,
city varchar(50) DEFAULT NULL,
state varchar(2) DEFAULT NULL,
province varchar(50) DEFAULT NULL,
postal_code varchar(10) DEFAULT NULL,
country_code varchar(3) DEFAULT NULL,
gender enum('M','F','U') DEFAULT 'U',
date_of_birth date DEFAULT NULL,
alt_phone varchar(12) DEFAULT NULL,
email varchar(70) DEFAULT NULL,
security_phrase varchar(100) DEFAULT NULL,
comments varchar(255) DEFAULT NULL,
called_count smallint(5) unsigned DEFAULT '0',
last_local_call_time datetime DEFAULT NULL,
rank smallint(5) NOT NULL DEFAULT '0',
owner varchar(20) DEFAULT '',
entry_list_id bigint(14) unsigned NOT NULL DEFAULT '0',
PRIMARY KEY (lead_id),
KEY phone_number (phone_number),
KEY list_id (list_id),
KEY called_since_last_reset (called_since_last_reset),
KEY status (status),
KEY gmt_offset_now (gmt_offset_now),
KEY postal_code (postal_code),
KEY last_local_call_time (last_local_call_time),
KEY rank (rank),
KEY owner (owner),
KEY phone_list (phone_number,list_id),
KEY list_phone (list_id,phone_number),
KEY list_status (list_id,status)
) ENGINE=MyISAM AUTO_INCREMENT=35830810 DEFAULT CHARSET=latin1
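One thing worth ruling out first is damaged or stale index statistics on the MyISAM table, which can make the optimizer skip an otherwise obvious index. A purely diagnostic sketch:

CHECK TABLE dial_list;    -- verifies the table and its indexes are intact
ANALYZE TABLE dial_list;  -- rebuilds the key distribution statistics

EXPLAIN SELECT * FROM dial_list WHERE lead_id = '3335602076';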

Publishing database changes with SSDT that include altering data types of columns

Posted: 04 Sep 2013 02:04 PM PDT

I have a SQL Server Data Tools (VS2012) project that's published automatically during the build process. A column was recently changed from an int to decimal(18,4). As a result of this change, publishing fails with the error:

(49,1): SQL72014: .Net SqlClient Data Provider: Msg 50000, Level 16, State 127, Line 6
Rows were detected. The schema update is terminating because data loss might occur.
(44,0): SQL72045: Script execution error. The executed script:

/* The type for column QuantityReceived in table [dbo].[Reconciliation_Receiving] is currently INT NOT NULL but is being changed to DECIMAL (18, 4) NOT NULL. Data loss could occur. */
IF EXISTS (select top 1 1 from [dbo].[Reconciliation_Receiving])
    RAISERROR (N'Rows were detected. The schema update is terminating because data loss might occur.', 16, 127) WITH NOWAIT

An error occurred while the batch was being executed.

I understand why I'm receiving that error, and I know it could be resolved by disabling the "Block Incremental Deploy if Data Loss May Occur" flag. However, there's very strong opposition to disabling that feature, so it's not going to be an acceptable solution.

The only other solution I can think of is to do the following:

  1. Make a temporary table and copy the existing table's contents into the temporary table
  2. Truncate the existing table
  3. Let SSDT update the data type
  4. Fill the data back in from the temporary table

That seems horribly clunky and inefficient, though.

Is there a better alternative?
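One alternative: int to decimal(18,4) is a widening conversion, so no data can actually be lost; the block is a false positive here. If the column is widened in a pre-deployment script, the dacpac comparison then sees matching types and the data-loss check never fires. A sketch, assuming the table and column names from the error message:

-- Pre-deployment script: widen the column before SSDT compares schemas
IF EXISTS (SELECT 1
           FROM sys.columns c
           WHERE c.object_id = OBJECT_ID(N'dbo.Reconciliation_Receiving')
             AND c.name = N'QuantityReceived'
             AND TYPE_NAME(c.system_type_id) = 'int')
BEGIN
    ALTER TABLE dbo.Reconciliation_Receiving
        ALTER COLUMN QuantityReceived DECIMAL(18, 4) NOT NULL;
END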

Error On Collection_set_2_upload, SQL Server 2008 R2

Posted: 04 Sep 2013 11:29 AM PDT

The following error appears in the job history log in SQL Server:

An error occured while trying to access the cache directory: "D:\". Inner Error ------------------> Access is denied. Process Exit Code 4. The step failed.,00:00:00,0,0,,,,0

Has anyone experienced this error before?

This is occurring during the MDW collection_set_2_upload.

Because of this, the database log becomes full.

Any suggestions?

Thanks!

BEFORE INSERT trigger to insert duplicate rows into another table

Posted: 04 Sep 2013 04:14 PM PDT

I have a table called tblspmaster with a unique index on the sp column, so no duplicates can be inserted. I want to insert the duplicate rows into tblspduplicate instead, so I decided to write a trigger for this. Records are inserted into the master table, tblspmaster, using MySQL's LOAD DATA INFILE.

create trigger tblspmaster_noduplicate
before insert on tblspmaster
for each row
begin
  if ( select count(sp) from tblspmaster where sp=new.sp > 0 ) then
    insert into tblspduplicate (sp,FileImported,AMZFileName)
    values (NEW.sp,NEW.FileImported,NEW.AMZFileName)
  END
END

I have a list of questions:

  1. Is this the right approach to stop duplicates and insert them into another table?

  2. My trigger is not executing; it's showing some syntax errors.

The error response is: Error Code: 1064. You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'END END' at line 7
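For reference, a syntax-corrected sketch of the trigger: the > 0 comparison belongs outside the subquery, the INSERT needs a terminating semicolon, IF needs a matching END IF, and a DELIMITER change keeps the client from splitting the body at the first semicolon. Note that the duplicate row itself will still be rejected by the unique index afterwards unless the LOAD DATA uses IGNORE.

DELIMITER //
CREATE TRIGGER tblspmaster_noduplicate
BEFORE INSERT ON tblspmaster
FOR EACH ROW
BEGIN
    IF (SELECT COUNT(sp) FROM tblspmaster WHERE sp = NEW.sp) > 0 THEN
        INSERT INTO tblspduplicate (sp, FileImported, AMZFileName)
        VALUES (NEW.sp, NEW.FileImported, NEW.AMZFileName);
    END IF;
END//
DELIMITER ;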

MySQL looking up more rows than needed (indexing issue)

Posted: 04 Sep 2013 04:15 PM PDT

In our MySQL 5.5 database, we have the following InnoDB table with 30M+ rows:

+----------------+-------------+-----------------+--------+
| participant_id | question_id | given_answer_id | status |
+----------------+-------------+-----------------+--------+
|       500      |      12     |        25       |    0   |
+----------------+-------------+-----------------+--------+

The combination participant_id + question_id + given_answer_id is unique.

At the moment we have the following keys:

PRIMARY KEY (in this order)

  • participant_id
  • question_id
  • given_answer_id

INDEX KEY

  • question_id

For this table, we have two kinds of select queries in our application:

[...] WHERE participant_id = x AND question_id = y AND given_answer_id = z;  

and

[...] WHERE question_id = x;  

Normally, every participant_id has between 0 and <100 rows with different question_ids. The other way around, every question_id can have unlimited (usually not more than 100 000) rows with different participant_ids. The first query is executed way more often than the second one.

When we execute the following query, it shows us that 32096 rows were looked up:

EXPLAIN SELECT * FROM example WHERE question_id = 500;

+----+-------------+---------+------+---------------+-------------+---------+-------+-------+-------+
| id | select_type | table   | type | possible_keys | key         | key_len | ref   | rows  | Extra |
+----+-------------+---------+------+---------------+-------------+---------+-------+-------+-------+
| 1  | SIMPLE      | example | ref  | question_id   | question_id | 8       | const | 32096 |       |
+----+-------------+---------+------+---------------+-------------+---------+-------+-------+-------+

Whereas when we execute the same query without EXPLAIN, only 18732 rows are returned.

What indexes do we need on this table in order to prevent this overhead while still performing well for both kinds of queries?


Here is the code to create this table:

CREATE TABLE `example` (
    `participant_id` BIGINT(20) UNSIGNED NOT NULL,
    `question_id` BIGINT(20) UNSIGNED NOT NULL,
    `given_answer_id` BIGINT(20) UNSIGNED NOT NULL,
    `status` TINYINT(3) UNSIGNED NOT NULL DEFAULT '0',
    PRIMARY KEY (`participant_id`, `question_id`, `given_answer_id`),
    INDEX `question_id` (`question_id`)
)
ENGINE=InnoDB;
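Two observations, offered as a sketch rather than a definitive fix. First, the rows column in EXPLAIN is an optimizer estimate, so the gap between 32096 and 18732 does not by itself prove extra rows were read. Second, because InnoDB secondary indexes implicitly carry the primary key columns, extending the secondary index to (question_id, status) would make it cover every column in this table, letting the second query be answered from the index alone:

ALTER TABLE `example`
    DROP INDEX `question_id`,
    ADD INDEX `question_status` (`question_id`, `status`);

The first query is already served optimally by the primary key.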

Data structure for storing synonyms

Posted: 04 Sep 2013 10:50 AM PDT

Hello all, I am new to this forum and also a fresher in the computer science industry, so this question may seem unstructured to you. I am asking this question as I have to include it in my project. Even if you cannot help me with exact answers, I request you to show me a path toward my answer.

Question

I am in the process of finding a self-evolving data structure for the alignment of terms and phrases (synonyms under some rules). For example, mobile and cell will be synonyms if the query fired to find synonyms of mobile also includes some calling-related part (the rule here is calling). However, the meanings of mobile and cell can also differ: mobile can also mean mobile library, mobile hospital, etc., and cell can be a biological term.

Work by Me

I have explored NoSQL somewhat, and the Neo4j graph database. They are indeed great, but I was unable to find any useful concepts that relate to my question above.

Thanks in advance. Any help would be appreciated.

Best way to handle temporary data

Posted: 04 Sep 2013 10:56 AM PDT

Say I have a table prices. Users can see the prices in this table (I'm simplifying it). Users can also send prices to the server, so the server collects this data in another table (temp_prices).

When the server detects that there are 50 or more prices, it calculates a median of those temporary prices and updates the table prices with the calculated value (again, very simplified).

The first idea that came to my mind was to have two tables with the same schema but different names (prices and temp_prices).

I don't know if this is a good/common practice. Am I missing something? Also, what mechanism should I use to trigger the server to inspect the table temp_prices? I'm thinking about cron, but again, I don't know if there is something I'm missing (another tool or common practice for this kind of thing).
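If the backend happens to be MySQL, the built-in event scheduler is the usual cron alternative for this pattern. A hypothetical sketch (the procedure name and schedule are assumptions):

-- requires the scheduler to be on: SET GLOBAL event_scheduler = ON;
CREATE EVENT IF NOT EXISTS fold_temp_prices
ON SCHEDULE EVERY 1 MINUTE
DO
    CALL recalculate_prices();  -- hypothetical procedure holding the median/update logic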

Thanks.

Server Wide DDL Trigger Permissions Issue

Posted: 04 Sep 2013 11:03 AM PDT

I am trying to implement a server-wide DDL trigger that will audit DDL changes across the databases on the server, using a script very similar to the one in the article found here.

The trigger logs the DDL events into one table within one database; however, I am coming up against permissions issues, and some users, even myself with sysadmin privileges, receive the following error message.

Msg 297, Level 16, State 1, Procedure LogDDLEvent, Line 19
The user does not have permission to perform this action.

I've read about using the WITH EXECUTE AS clause and creating a login specifically for the purpose of executing the trigger though this doesn't appear to have resolved the issue.

Could someone advise on the correct method to implement a trigger of this kind, and what permissions are required, either for the users or for a specific login to be used in the WITH EXECUTE AS clause?

EDIT: Some further info

The server is running SQL Server 2008 R2 and the DDL Trigger is set to DDL_EVENTS as follows:

CREATE TRIGGER LogDDLEvent
ON ALL SERVER
FOR DDL_EVENTS
AS

DECLARE @eventInfo XML
SET @eventInfo = EVENTDATA()

INSERT INTO Tools.audit.DDLEvent
VALUES
(
      REPLACE(CONVERT(VARCHAR(128), @eventInfo.query('data(/EVENT_INSTANCE/PostTime)')), 'T', ' ') -- EventTime
    , CONVERT(VARCHAR(128), @eventInfo.query('data(/EVENT_INSTANCE/EventType)')) -- EventType
    , CONVERT(VARCHAR(128), @eventInfo.query('data(/EVENT_INSTANCE/LoginName)')) -- LoginName
    , CONVERT(VARCHAR(128), @eventInfo.query('data(/EVENT_INSTANCE/UserName)')) -- UserName
    , CAST(HOST_NAME() AS VARCHAR(128)) -- MachineName
    , (SELECT CAST(client_net_address AS VARCHAR(128))
       FROM sys.dm_exec_connections
       WHERE session_id = CONVERT(INT, @eventInfo.value('data(/EVENT_INSTANCE/SPID)[1]', 'int'))) -- IPAddress
    , CONVERT(VARCHAR(128), @eventInfo.query('data(/EVENT_INSTANCE/DatabaseName)')) -- DatabaseName
    , CONVERT(VARCHAR(128), @eventInfo.query('data(/EVENT_INSTANCE/SchemaName)')) -- SchemaName
    , CONVERT(VARCHAR(128), @eventInfo.query('data(/EVENT_INSTANCE/ObjectName)')) -- ObjectName
    , CONVERT(VARCHAR(128), @eventInfo.query('data(/EVENT_INSTANCE/ObjectType)')) -- ObjectType
    , CONVERT(VARCHAR(MAX), @eventInfo.query('data(/EVENT_INSTANCE/TSQLCommand/CommandText)')) -- DDLCommand
    , @eventInfo -- DDLEventXML
)

The above script is how I originally created the trigger, and this worked fine for me and some other users, though a user who had the db_owner role on a particular database kept receiving the error message above.

After trying to implement WITH EXECUTE AS, I found that the trigger would not work even for me.
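In this trigger, Msg 297 most plausibly comes from the sys.dm_exec_connections lookup, which requires VIEW SERVER STATE; whichever principal the trigger runs as needs that permission plus INSERT on the audit table. A sketch under that assumption (the login name and password are placeholders):

CREATE LOGIN DDLAuditLogin WITH PASSWORD = 'ChangeMe123!';  -- placeholder
GRANT VIEW SERVER STATE TO DDLAuditLogin;

USE Tools;
CREATE USER DDLAuditLogin FOR LOGIN DDLAuditLogin;
GRANT INSERT ON audit.DDLEvent TO DDLAuditLogin;

-- then recreate the trigger with:
-- CREATE TRIGGER LogDDLEvent ON ALL SERVER
-- WITH EXECUTE AS 'DDLAuditLogin' FOR DDL_EVENTS AS ...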

The ordinal position for a column changed without my knowledge

Posted: 04 Sep 2013 10:00 AM PDT

I have a column that is out of order (ordinally, not in terms of select statement), and I just can't figure out how it got that way.

I've been researching this for quite some time and am coming up empty. I also could not reproduce it in testing.

I have informed the developers that they are not allowed to run queries that do not explicitly name columns, for exactly this reason.

In the meantime, though, it's driving me crazy that I have no idea how we came to this state of affairs.
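For what it's worth, the usual culprit is a tool that drops and recreates the table behind the scenes (the SSMS table designer does this when a column is repositioned or retyped). Current ordinal positions can be checked against source control with a quick catalog query; the table name below is a placeholder:

SELECT name, column_id
FROM sys.columns
WHERE object_id = OBJECT_ID(N'dbo.YourTable')  -- hypothetical table name
ORDER BY column_id;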

How to avoid locks by large transactions?

Posted: 04 Sep 2013 10:03 AM PDT

We have a large application running that requires cleanup for performance reasons. This was, however, not foreseen when the application was designed.

Basically, the delete is executed from a stored procedure, which first does a couple of selects to define the data to be deleted, after which it starts deleting from different tables. As the link between these rows is essential for the deletion, we must avoid deleting, for example, an order without deleting its dependencies. Therefore it has to run in one transaction.

The problem is that whenever the script is running, the application itself becomes unusable: timeouts when getting data from the web, or when trying to update a certain record. All those queries are blocked by the session that runs the transaction.

The data being deleted is not relevant anymore, and should thus not be updated by the application.

I've tried running the transaction in different isolation levels, including snapshot, but it still doesn't work.

How can I avoid these locks? Should I use READ_COMMITTED_SNAPSHOT?
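READ_COMMITTED_SNAPSHOT does stop readers from blocking behind the deleter, but writers touching the same rows will still block. Since the doomed data is logically independent per order, one option is to delete in many small transactions, one logical unit at a time, so locks are held only briefly while each unit stays internally consistent. A sketch with assumed table and column names:

DECLARE @OrderId INT;

-- dbo.OrdersToDelete is a hypothetical staging table holding the IDs selected for cleanup
WHILE EXISTS (SELECT 1 FROM dbo.OrdersToDelete)
BEGIN
    SELECT TOP (1) @OrderId = OrderId FROM dbo.OrdersToDelete;

    BEGIN TRANSACTION;
        DELETE FROM dbo.OrderLines     WHERE OrderId = @OrderId;  -- dependents first
        DELETE FROM dbo.Orders         WHERE OrderId = @OrderId;
        DELETE FROM dbo.OrdersToDelete WHERE OrderId = @OrderId;
    COMMIT TRANSACTION;
END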

Thanks in advance...

Securely remove columns (and traces thereof) from a SQL database

Posted: 04 Sep 2013 09:47 AM PDT

I've come upon a database that has previously held sensitive data (think "passwords/credit-card-numbers" kind of sensitive) in plain text. All of the sensitive data in question has been deleted, but I'm concerned that if accessed by someone with malicious intentions, the data might be recoverable through backups and/or transaction logs.

Is there a way to permanently and securely delete certain columns out of transaction logs and/or backup files while maintaining the integrity of all other data in a SQL database?

Picking consecutive or sequential numbers in a result set

Posted: 04 Sep 2013 08:07 AM PDT

I have a MySQL database containing tables that hold all of the possible 5-number combinations for the Take 5 lottery (pick 5 numbers, 1-39), organized by the sums of the 5-number combinations. Ex. { (sum table: 15 - 1,2,3,4,5), (sum table: 17 - 1,2,3,4,7 | 1,2,3,5,6) } ... for sums 15 through 185.

I need a way to filter my queried results down to those that contain at least two sequential numbers, for a selected sum table. Ex. for sum 17 this would be:
RESULT: 1,2,3,4,7 (contains 1,2 or 2,3 or 3,4) | 1,2,3,5,6 (contains 1,2 or 2,3 or 5,6)

First I query the table for everything. Then, using a query of a query, I query the above result set for rows containing numbers that are consecutive or sequential.

I need to put some kind of code or regular expression in the query to select just those rows containing 2 or more consecutive numbers.

The table is organized as follows: columns num1 to num5 are integers and are currently in ascending numerical order. The tables have an auto-increment row ID as a primary key, and a regular index on the entire row.
There are no duplicate sets in the table; e.g. 12345, 21354, 54321, etc. would all be considered the same set of numbers since they all add up to 15, so only one of them exists in the table.

Q) Is it possible to use a regular expression? If not, what is the best approach?
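No regular expression is needed: since num1 through num5 are stored in ascending order, two numbers in a row are consecutive exactly when some adjacent column pair differs by one. A sketch, with the sum table's name assumed:

SELECT num1, num2, num3, num4, num5
FROM sum_17  -- hypothetical name for the sum-17 table
WHERE num2 = num1 + 1
   OR num3 = num2 + 1
   OR num4 = num3 + 1
   OR num5 = num4 + 1;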

Data Gateway for ODBC Connect Problems with ORA-28513

Posted: 04 Sep 2013 07:54 AM PDT

We are trying to connect from an Oracle 11g (11.2.0.3.0) database server to a MS SQL Server running on another machine. Our machine has Windows 2008 R2 Enterprise 64-bit.

We use dg4odbc for our link to MS-SQL, and configured it as follows:

Before anything else, we ran the script @ORA_HOME\RDBMS\ADMIN\caths.sql as SYSDBA. Next we configured the ODBC data source with the program odbcad32.exe located in the system32 folder of Windows (we also tried odbcad32.exe from the sysWOW64 folder) and named it MSQL_ODBC.

As a third step, we created a new file for our heterogeneous service (dg4odbc) and added a suitable configuration to Oracle's listener and TNS config files.

The configuration of our ODBC data source for dg4odbc is contained in a file named ORA_HOME\hs\admin\initMSQL_ODBC.ora and looks like this

HS_FDS_CONNECT_INFO = MSQL_ODBC
HS_FDS_TRACE_LEVEL = Debug
HS_FDS_TRACE_FILE_NAME = ORA_HOME\hs\trace\MSQL_ODBC.trc

Our ORA_HOME\network\admin\Listener.ora contains the definition of a listener for the data gateway (dg4odbc). The entry we added is

(SID_DESC =
    (SID_NAME = MSQL_ODBC)
    (ORACLE_HOME = ORA_HOME)
    (PROGRAM = dg4odbc)
)

Also in this directory we edited the tnsnames.ora to contain an entry for the ODBC connection which looks like this

MSQL_LNK =
(DESCRIPTION =
    (ADDRESS =
        (PROTOCOL = TCP)
        (Host = LOCALSVRNAME)
        (Port = 1521)
    )
    (CONNECT_DATA =
        (SID = MSQL_ODBC)
        (SERVICE_NAME = MSQL_ODBC)
    )
    (HS = OK)
)

We receive an error message when we try to execute a select statement (we only tried to select so far) which also creates a trace file in the ORA_HOME\hs\trace folder. The error message is

ORA-28513: internal error in heterogeneous remote agent
ORA-02063: preceding line from MSSQL_LINK

and the log file contains the following lines

Oracle Corporation --- MITTWOCH SEP 04 2013 16:49:04.007
Heterogeneous Agent Release 11.2.0.3.0
Unable to open init file initMSQL_ODBC.ora in directory ORA_HOME/hs/admin

We have already checked the file access rights of various users, but cannot single out the real reason for this problem. Any hints are appreciated.

How to track if employees are late when they have more than one work schedule or shift?

Posted: 04 Sep 2013 05:43 PM PDT

I have designed a MySQL database for an employee monitoring system; it was working perfectly well until I was asked to add a new work schedule for some of the employees.

The new schedule is what they called a broken schedule, because an employee can have different shifts in one day. For example, an employee could be given a work schedule for a given day starting from 7:00AM-11:00AM and then continuing at 3:00PM-7:00PM...

The problem I have encountered is that I am confused about how to keep track of whether the employees are late or not, now that some employees have two or more work schedules...

Could anyone help me, please?

addendum:

For more clarity, let's say we have employee A. Employee A, for example, has the following work schedule for a given day, 5 days a week:
7:00AM-11:00AM - shift 1
3:00PM-7:00PM - shift 2
9:00PM-11:30PM - shift 3

Then let's say employee A logged in under these different scenarios:
scenario #1: logged in at 7:10AM
scenario #2: logged in at 2:30PM

How do I check that employee A's log info in scenario #1 does not reference shift 2 or shift 3?
Likewise, how do I check that employee A's log info in scenario #2 references shift 2 only, and does not make him/her late for shift 1 or too early for shift 3?
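Rather than comparing a log entry against every shift, one approach is to match it to the single shift whose start time is nearest, and judge lateness against that shift alone (optionally rejecting matches beyond some cutoff, so a 2:30PM login can never attach to a 7:00AM shift). A hypothetical sketch, assuming a work_schedule table with a TIME column shift_start:

SELECT shift_id
FROM work_schedule                 -- hypothetical schedule table
WHERE emp_id = 'A'                 -- hypothetical employee key
  AND work_day = 'MON'             -- hypothetical day-of-week column
ORDER BY ABS(TIME_TO_SEC(TIMEDIFF(shift_start, '07:10:00'))) ASC
LIMIT 1;

Once the shift is resolved, the employee is late when the log time exceeds shift_start plus any grace period.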

Database Testing [on hold]

Posted: 04 Sep 2013 08:07 AM PDT

We have 5000+ DBs (on MySQL, on a Unix machine), each with a similar set of 30-40 tables. We are performing a migration from one machine to another (again MySQL on a Unix machine).

Could anyone suggest the best possible way to validate that all the data was migrated properly? Also, what other things should we consider while testing this activity?
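As a first pass, MySQL's CHECKSUM TABLE can be run per table on both machines and the outputs diffed, together with simple row counts; the table names below are placeholders. Checksums are only comparable when both sides use the same storage engine and row format, which is an assumption here.

CHECKSUM TABLE db1.customers, db1.orders;  -- run on source and target, then compare
SELECT COUNT(*) FROM db1.customers;        -- cheap per-table sanity check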

Thanks in advance!

-- C.F.

Configure Email to be sent on restoring a database

Posted: 04 Sep 2013 11:10 AM PDT

Is there a way to configure email in SQL Server such that an email is sent when a particular database is restored?
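Short of setting up event notifications, a SQL Server Agent job that polls msdb's restore history and calls Database Mail gets close. A sketch with placeholder names, scheduled to match the poll window:

IF EXISTS (SELECT 1
           FROM msdb.dbo.restorehistory
           WHERE destination_database_name = N'YourDb'           -- hypothetical database
             AND restore_date > DATEADD(MINUTE, -10, GETDATE())) -- matches a 10-minute poll
BEGIN
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = N'MailProfile',                          -- hypothetical mail profile
        @recipients   = N'dba@example.com',
        @subject      = N'Database YourDb was restored';
END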

Looking for a database design to model an availability problem (large data sets are expected) [on hold]

Posted: 04 Sep 2013 08:24 AM PDT

I am looking for a database design to store and query information about the availability of cars in a yet-to-be-designed car-sharing community, where users can rent cars provided by other users. There will be a (proximity) search which should show only the available cars.

I expect to have the following data: car data with general availability, set by the car owner, and rental contracts for a specific or recurring date/time, which reduce the availability.

Here some example queries that should be possible:

  1. get all cars available now
  2. get all cars available tomorrow from 8am to 2:30pm
  3. get all cars continuously available from 2013-10-01 until 2013-10-30
  4. get all cars available each Tuesday to Thursday from 10am to 6pm from 2013-11-01 until 2013-11-24

No matter if SQL or NoSQL is used for the model, any ideas are welcome. Important: The database-design should scale well, as large data sets are expected.
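For queries 1-3, the classic shape is a bookings table with an interval-overlap test: a car is available over a requested window when no booking overlaps it. A sketch with assumed tables car(id) and booking(car_id, starts_at, ends_at):

SELECT c.id
FROM car c
WHERE NOT EXISTS (
    SELECT 1
    FROM booking b
    WHERE b.car_id    = c.id
      AND b.starts_at < '2013-10-31 00:00:00'  -- requested end
      AND b.ends_at   > '2013-10-01 00:00:00'  -- requested start
);

Recurring availability (query 4) is harder; one common approach is to expand recurrences into concrete rows over a bounded horizon so the same overlap test applies.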

SQL Query too slow on SUM function

Posted: 04 Sep 2013 02:18 PM PDT

I have a table with around 32 million rows and a clustered unique index on (CountryID, RetailerID, ProductID, DateID, EventID, TypeID), and the query is:

SELECT  f.RetailerID
,       TypeID                                 AS TypeID
,       c.CalendarMonth                        AS CalendarValue
,       SUM(ISNULL([Volume], 0))               AS Volume
,       SUM([VolumeBox])                       AS VolumeBox
,       SUM([VolumeKG])                        AS VolumeKG
,       SUM([VolumeUnit])                      AS VolumeUnit
,       SUM(ISNULL([R_Turnover], 0.0))         AS R_Turnover
,       SUM(ISNULL([R_VAT], 0.0))              AS R_VAT
,       SUM(ISNULL([R_TurnoverExVAT], 0.0))    AS R_TurnoverExVAT
,       SUM([SupplierRealisation_Amt])         AS [SupplierRealisation_Amt]
,       SUM([SupplierDiscount1_Amt])           AS [SupplierDiscount1_Amt]
,       SUM([Supplier_1NetSales_Amt])          AS [Supplier_1NetSales_Amt]
,       SUM([SupplierDiscount2_Amt])           AS [SupplierDiscount2_Amt]
,       SUM([Supplier_2NetSales_Amt])          AS [Supplier_2NetSales_Amt]
,       SUM([SupplierDiscount3_Amt])           AS [SupplierDiscount3_Amt]
,       SUM([Supplier_3NetSales_Amt])          AS [Supplier_3NetSales_Amt]
,       SUM(ISNULL(S_CostofGoodsSold, 0))      AS [S_CostofGoodsSold]
,       SUM(ISNULL(S_Profit, 0))               AS S_Profit
,       SUM(0.0)                               AS AdditionalCostofGoodsSold
,       SUM(ISNULL([R_DistributionCost], 0.0)) AS R_DistributionCost
,       SUM(ISNULL([R_Profit], 0.0))           AS R_Profit
FROM [dbo].[EventScenarios] es
JOIN dbo.[Event] e ON es.[EventID] = e.ID
JOIN dbo.EventProduct ep ON es.EventID = ep.EventID
JOIN [dbo].Product p ON p.ID = ep.ProductID
JOIN dbo.EventPL f
    ON  e.CountryID = f.CountryID
    AND f.RetailerID = e.RetailerID
    AND f.EventID = e.ID
    AND ep.ProductID = f.ProductID
INNER JOIN Calendar c ON c.DateID = f.DateID
WHERE f.CountryID = 14
AND c.CalendarMonth BETWEEN 201301 AND 201312
GROUP BY f.RetailerID, c.CalendarMonth, TypeID

The query plan shows 88% of the time on a clustered index seek on the EventPL table, but it still takes around 15 seconds to complete. Is there any way I can optimise it to around 1/2 seconds?
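If the seek itself dominates and still takes 15 seconds, the engine is most likely just reading and aggregating a very large range, and that rarely drops to a second through indexing alone. The usual route is to pre-aggregate at the month grain the query needs; a hypothetical sketch of a summary table an ETL step would keep current (names and column choices are assumptions):

SELECT  f.CountryID, f.RetailerID, f.TypeID, c.CalendarMonth,
        SUM(ISNULL(f.[Volume], 0)) AS Volume   -- ...repeat for the remaining measures
INTO    dbo.EventPL_MonthlySummary             -- hypothetical summary table
FROM    dbo.EventPL f
JOIN    Calendar c ON c.DateID = f.DateID
GROUP BY f.CountryID, f.RetailerID, f.TypeID, c.CalendarMonth;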

Selecting from an excel spreadsheet into SQL Server table

Posted: 04 Sep 2013 11:18 AM PDT

This question has been asked before, but I have already tried giving full admin rights to the SQL Server user on the C:\temp\ folder (I am using Windows authentication with SQL Server Express).

So for the following code snippet:

Declare @strSQL as varchar(200)
declare @file as varchar(200)

SET @file = 'C:\temp\file.xlsx'
SET @strSQL = N'SELECT * INTO #mytemptable FROM OPENROWSET(''Microsoft.Jet.OLEDB.4.0'', ''Excel 8.0;Database=' + @file + ';HDR=YES'', ''SELECT * FROM [Sheet1$]'');'
SET @strSQL = @strSQL + N'SELECT * FROM ##mytemptable'
PRINT @strSQL
Exec (@strSQL)

EXPANDED SQL STATEMENT

SELECT *
INTO #mytemptable
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0', 'Excel 8.0;Database=C:\temp\file.xlsx;HDR=YES', 'SELECT * FROM [Sheet1$]');

SELECT * FROM ##mytemptable

I get this error:

Msg 7399, Level 16, State 1, Line 1
The OLE DB provider "Microsoft.Jet.OLEDB.4.0" for linked server "(null)" reported an error. The provider did not give any information about the error.
Msg 7303, Level 16, State 1, Line 1
Cannot initialize the data source object of OLE DB provider "Microsoft.Jet.OLEDB.4.0" for linked server "(null)".
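One likely cause independent of folder rights: the Jet 4.0 provider can only read the older .xls format, not .xlsx. Reading .xlsx takes the ACE provider (which must be installed, with ad hoc distributed queries enabled), along these lines:

SELECT *
INTO #mytemptable
FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                'Excel 12.0;Database=C:\temp\file.xlsx;HDR=YES',
                'SELECT * FROM [Sheet1$]');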

How do I track why my SQL Azure occupied space pulsates?

Posted: 04 Sep 2013 12:18 PM PDT

This is inspired by this StackOverflow question. I have this code:

SELECT CONVERT(INT, SUM(reserved_page_count) * 8 / 1024) FROM sys.dm_db_partition_stats  

for detecting database space consumption, and it returns exactly the same result as the SQL Azure management portal pie chart. The problem is that the number fluctuates greatly without apparent reason: sometimes it's around 35% and sometimes it's around 54%.

This is kind of worrying. I don't understand what is happening, and the major concern is that we suddenly consume all the space and our service comes to a halt.

I seriously tried to look into sys.dm_db_partition_stats and sys.partitions but they look like a pile of magic numbers to me.

What data should I use to find the root cause of the fluctuations?
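A per-object breakdown of the same DMV usually makes the fluctuation legible, since it shows which tables or indexes grow and shrink between samples:

SELECT o.name AS object_name,
       SUM(ps.reserved_page_count) * 8 / 1024 AS reserved_mb
FROM sys.dm_db_partition_stats ps
JOIN sys.objects o ON o.object_id = ps.object_id
GROUP BY o.name
ORDER BY reserved_mb DESC;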

Scaling of PostGIS vs MySQL cos/sin calculations

Posted: 04 Sep 2013 05:53 PM PDT

I need to sort database rows according to GPS coordinates (or any other geographical representation).

The way I've understood it, MySQL Spatial Indexes are too buggy at the time of writing (at least articles and my own tests indicate so).

So I can either switch to PostgreSQL to use PostGIS which seems to be working excellently, or I can do it mathematically in MySQL:

((acos(sin(latitude * pi() / 180) * sin($latitude * pi() / 180)
     + cos(latitude * pi() / 180) * cos($latitude * pi() / 180)
     * cos((longitude - $longitude) * pi() / 180))
  * 180 / pi()) * 60 * 1.1515 * 1.609344) km

I'm not asking for a PostgreSQL vs MySQL debate, but simply how the performance of the MySQL expression scales as rows grow compared to using PostGIS/PostgreSQL.

It's a drastic step to switch database system, so the upside better be significant.
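Whatever the engine, the raw expression scales linearly at best, because the trigonometry runs on every row. The standard mitigation in plain MySQL is an index-assisted bounding box that prunes candidates before the exact distance is computed; a sketch, assuming indexed latitude/longitude columns and precomputed box bounds:

SELECT id,
       (acos(sin(latitude * pi()/180) * sin($latitude * pi()/180)
           + cos(latitude * pi()/180) * cos($latitude * pi()/180)
           * cos((longitude - $longitude) * pi()/180))
        * 180 / pi() * 60 * 1.1515 * 1.609344) AS distance_km
FROM places                                    -- hypothetical table
WHERE latitude  BETWEEN $lat_min AND $lat_max  -- cheap, index-assisted prefilter
  AND longitude BETWEEN $lon_min AND $lon_max
HAVING distance_km <= $radius_km               -- MySQL allows HAVING on the alias
ORDER BY distance_km;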

MySQL Continue Handler Problems inside a Function

Posted: 04 Sep 2013 08:18 PM PDT

I am in the process of writing a simple MySQL function that looks up a value in a table and returns it. In case no value is found, it returns NULL. However, even with a continue handler defined, I still end up with the warning "No data - zero rows fetched, selected, or processed". My code is below; what am I doing wrong? I really want to get rid of this warning :)

DROP FUNCTION IF EXISTS `getCompanySetting`;
DELIMITER |

CREATE FUNCTION `getCompanySetting`(setting_company_id INT, setting_name VARCHAR(255))
    RETURNS TEXT
BEGIN
    DECLARE setting_value TEXT DEFAULT NULL;
    DECLARE CONTINUE HANDLER FOR SQLWARNING BEGIN END;
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET setting_value = NULL;

    SELECT
        value
    FROM company_settings
    WHERE
        company_id = `setting_company_id`
        AND
        name       = `setting_name`
    INTO setting_value;

    RETURN setting_value;
END|

DELIMITER ;

mysql> SELECT getCompanySetting(24, 'observers_active');
+-------------------------------------------+
| getCompanySetting(24, 'observers_active') |
+-------------------------------------------+
| NULL                                      |
+-------------------------------------------+
1 row in set, 1 warning (0.00 sec)

mysql> show warnings;
+---------+------+-----------------------------------------------------+
| Level   | Code | Message                                             |
+---------+------+-----------------------------------------------------+
| Warning | 1329 | No data - zero rows fetched, selected, or processed |
+---------+------+-----------------------------------------------------+
1 row in set (0.00 sec)

Update: MySQL Version 5.5.25
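A known way to sidestep warning 1329 in this pattern (it is recorded even though the handler catches the condition) is to make the SELECT always return a row by wrapping the column in an aggregate, so NOT FOUND never fires; a sketch of the changed statement:

SELECT MAX(value)
INTO setting_value
FROM company_settings
WHERE company_id = setting_company_id
  AND name = setting_name;

MAX() returns NULL when no rows match, which is the behaviour the function wants anyway.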

Data sharing between client and server two ways

Posted: 04 Sep 2013 09:18 AM PDT

One of my Silverlight applications is running. This application uses one central database (SQL Server 2008) and three client databases (SQL Server 2008) on servers in other locations.

Our requirement is that, with minor changes or no changes to the Silverlight application, when we make changes in the central database (insert, update and delete in multiple tables), these changes automatically occur in all client databases. And when any change is made in any client database, all client databases as well as the central database should be updated automatically.

For this I have read a bit about the Sync Framework and SQL Server 2008 Change Tracking, but I have no idea how to do this. Please help me: how can we achieve this?

Database table design question

Posted: 04 Sep 2013 03:04 PM PDT

Firstly, database novice here. I'm trying to create a system for processing Sales Orders into Purchase Orders and then splitting the Purchase Order into Shipping Containers. The system needs to be able to split a Sales Order Line Item into 2 or more Purchase Orders, and a Purchase Order Line Item into 2 or more Shipping Containers.

There will be some (a lot of) adjustments back and forth: once a Sales Order Line Item gets split across multiple Purchase Orders, it might get split differently later on, and the same goes when a Purchase Order Line Item is processed into Shipping Containers. I can't for the life of me figure out how best to handle this operation.

If it helps, here are the tables I have so far with some details excluded for brevity:

CREATE TABLE PurchaseOrder (
    Id INTEGER UNSIGNED UNIQUE NOT NULL AUTO_INCREMENT,
    PurchaseOrderNumber VARCHAR(15) UNIQUE,

    PRIMARY KEY(Id)
);

CREATE TABLE Container (
    Id INTEGER UNSIGNED UNIQUE NOT NULL AUTO_INCREMENT,
    ContainerName VARCHAR(20) UNIQUE,
    PurchaseOrderId INTEGER UNSIGNED,

    PRIMARY KEY(Id),
    FOREIGN KEY(PurchaseOrderId) REFERENCES PurchaseOrder(Id)
);

CREATE TABLE SalesOrder (
    Id INTEGER UNSIGNED UNIQUE NOT NULL AUTO_INCREMENT,
    ClientId INTEGER UNSIGNED,
    SalesOrderNumber VARCHAR(10),

    PRIMARY KEY(Id),
    FOREIGN KEY(ClientId) REFERENCES Client(Id)
);

CREATE TABLE SalesOrderLineItem (
    Id INTEGER UNSIGNED UNIQUE NOT NULL AUTO_INCREMENT,
    SalesOrderId INTEGER UNSIGNED,
    ProductId INTEGER UNSIGNED,
    Qty INTEGER,
    Price DECIMAL(5,2),
    Cost DECIMAL(5,2),

    PRIMARY KEY(Id),
    FOREIGN KEY(SalesOrderId) REFERENCES SalesOrder(Id),
    FOREIGN KEY(ProductId) REFERENCES Product(Id)
);

CREATE TABLE PurchaseOrderLineItem (
    Id INTEGER UNSIGNED UNIQUE NOT NULL AUTO_INCREMENT,
    PurchaseOrderId INTEGER UNSIGNED,
    SalesOrderId INTEGER UNSIGNED,
    ProductId INTEGER UNSIGNED,
    ClientId INTEGER UNSIGNED,
    MfgId INTEGER UNSIGNED,

    PRIMARY KEY(Id),
    FOREIGN KEY(PurchaseOrderId) REFERENCES PurchaseOrder(Id),
    FOREIGN KEY(SalesOrderId) REFERENCES SalesOrder(Id),
    FOREIGN KEY(ProductId) REFERENCES Product(Id),
    FOREIGN KEY(ClientId) REFERENCES Client(Id),
    FOREIGN KEY(MfgId) REFERENCES Mfg(Id)
);

I'm thinking of creating additional association tables, between Sales Order Line Items and Purchase Orders, and between Purchase Order Line Items and Containers, to keep track of this kind of back and forth splitting. Is that the right approach?

Help appreciated!
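If it helps, the association-table idea can be sketched like this (names and columns are assumptions): each row allocates a quantity of one Sales Order Line Item to one Purchase Order, so re-splitting later is just rewriting the allocation rows rather than restructuring the orders.

CREATE TABLE SalesOrderLineAllocation (
    Id INTEGER UNSIGNED UNIQUE NOT NULL AUTO_INCREMENT,
    SalesOrderLineItemId INTEGER UNSIGNED NOT NULL,
    PurchaseOrderId INTEGER UNSIGNED NOT NULL,
    Qty INTEGER NOT NULL,  -- the portion of the line item carried by this Purchase Order

    PRIMARY KEY(Id),
    FOREIGN KEY(SalesOrderLineItemId) REFERENCES SalesOrderLineItem(Id),
    FOREIGN KEY(PurchaseOrderId) REFERENCES PurchaseOrder(Id)
);

-- an analogous table would link PurchaseOrderLineItem to Container, again with a Qty column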
