Thursday, August 22, 2013

[how to] Install and Configure Oracle 11g using c# program

Install and Configure Oracle 11g using c# program

Posted: 22 Aug 2013 08:10 PM PDT

I need to create a Windows application in C# that installs the Oracle 11g client and configures ODBC automatically with a single click. My customers are not technically experienced, so manual installation and configuration is difficult for them.

Is this possible, and could you point me to some code references?

How to relate two rows in the same table

Posted: 22 Aug 2013 06:11 PM PDT

I have a table where the rows can be related to each other, and logically, the relationship goes both ways (basically, is directionless) between the two rows. (And if you're wondering, yes, this really should be one table. It is two things of the exact same logical entity/type.) I can think of a couple ways to represent this:

  1. Store the relationship and its reverse
  2. Store the relationship one way, constrain the database from storing it the other way, and have two indexes with opposite orders for the FKs (one index being the PK index)
  3. Store the relationship one way with two indexes and allow the second to be inserted anyway (sounds kind of yucky, but hey, completeness)

What are some major pros and cons of these ways, and of course, is there some way I haven't thought of?

Here's a SQLFiddle to play with: http://sqlfiddle.com/#!12/7ee1a/1/0. (Happens to be PostgreSQL since that's what I'm using, but I don't think this question is very specific to PostgreSQL.) It currently stores both the relationship and its reverse just as an example.
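For concreteness, a minimal sketch of option 2 in PostgreSQL (table and column names are hypothetical): a CHECK constraint keeps each pair in one canonical order, and a second index covers lookups from the other direction.

CREATE TABLE thing (
    id integer PRIMARY KEY
);

CREATE TABLE thing_link (
    thing_a integer NOT NULL REFERENCES thing (id),
    thing_b integer NOT NULL REFERENCES thing (id),
    -- the PK index also serves lookups that start from thing_a
    PRIMARY KEY (thing_a, thing_b),
    -- forbids storing the same relationship in reverse
    CHECK (thing_a < thing_b)
);

-- covers lookups that start from the other side of the relationship
CREATE INDEX thing_link_b_a ON thing_link (thing_b, thing_a);

Queries then have to look at both columns, e.g. WHERE thing_a = 7 OR thing_b = 7.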

How to structure IF condition in MySQL trigger?

Posted: 22 Aug 2013 07:23 PM PDT

I am trying to write a MySQL trigger. I have two tables like this:

Table A                      Table B

order_id    sku              order_id    order_#    sku_copy
568         AAA              568         2345       (empty)
567         BBB              567         6789       (empty)
566         CCC              566         1234       (empty)

When a customer makes a purchase, a new record is added to each table. I added the 'sku_copy' column to Table B myself, so it does not get populated when a new record is created.

When a new record is created, I want my trigger to copy the 'sku' field in Table A to the 'sku_copy' field in Table B. However, the problem I am having is how to structure the following condition in the trigger.

IF the 'order_id' in Table A matches an 'order_id' in Table B, THEN copy 'sku' from that Table A record into the 'sku_copy' field of the Table B record with the matching 'order_id'.

I am using the following SQL trigger but it gives this error when it's run:

"#1363 - There is no OLD row in on INSERT trigger"

Here is the trigger:

DELIMITER $$
CREATE TRIGGER trigger_name
    AFTER INSERT ON tableA
    FOR EACH ROW BEGIN
    INSERT INTO tableB
    SET sku_copy = OLD.sku,
        order_id = OLD.order_id,
        order = OLD.order;
END $$
DELIMITER ;

Can someone show me how to correct the error in this code, or suggest a better approach?

Thank you for any help you can give.

Here is an update:

I tried this trigger (using the live schema rather than the simplified examples above), but I get an error:

"#1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'WHERE virtuemart_order_id=new.virtuemart_order_id; END IF; END' at line 7"

Here is that trigger:

DELIMITER $$
CREATE TRIGGER `sku_after_update` AFTER UPDATE ON `uau3h_virtuemart_order_items`
    FOR EACH ROW
    BEGIN
        IF (old.order_item_sku_copy != new.order_item_sku)
        THEN
            UPDATE uau3h_virtuemart_orders
                SET order_item_sku_copy=new.order_item_sku,
                WHERE virtuemart_order_id=new.virtuemart_order_id;
        END IF;
    END$$
DELIMITER ;

Does anyone have any suggestions on how to make this trigger work?
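For what it's worth, a minimal sketch of a working version against the simplified tableA/tableB schema above (not the live VirtueMart tables): an INSERT trigger only exposes NEW values, which explains the #1363 about OLD, and the stray comma before WHERE in the update trigger is what produces the #1064.

DELIMITER $$
CREATE TRIGGER tableA_after_insert
    AFTER INSERT ON tableA
    FOR EACH ROW
BEGIN
    -- An AFTER INSERT trigger has NEW.* only; OLD.* exists in UPDATE/DELETE triggers.
    -- The Table B row already exists, so update it rather than inserting a duplicate.
    UPDATE tableB
       SET sku_copy = NEW.sku
     WHERE order_id = NEW.order_id;
END $$
DELIMITER ;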

Reverse connect by prior level value for arbitrarily-deep hierarchy

Posted: 22 Aug 2013 01:57 PM PDT

Background

Using a menu hierarchy to drive a login process for users. Users have the ability to set their preferred menu item. When they log in, if they have a preferred menu item set, the system directs them to that item. If no preferred menu item is set, they log into the default menu item for their "most important" role.

Code

The query uses connect by prior to get the list of menus:

SELECT
    LEVEL AS menu_level,
    jmi.name AS menu_name,
    jmi.id AS menu_id
FROM
    jhs_menu_items jmi, (
        SELECT
            jmi.id
        FROM
            jhs_users ju
        JOIN jhs_user_role_grants jurg ON
            ju.id = jurg.usr_id
        LEFT OUTER JOIN user_menu_preferences ump ON
            ju.id = ump.jhs_usr_id
        LEFT OUTER JOIN default_menu_preferences dmp ON
            jurg.rle_id = dmp.jhs_rle_id
        JOIN jhs_menu_items jmi ON
            -- Retrieve the user's preferred menu item, failing to the default
            -- if no preference is set.
            jmi.id = coalesce(
                ump.jhs_menu_items_id,
                dmp.jhs_menu_items_id
            )
        WHERE
            ju.username = 'USERNAME' AND
            ROWNUM = 1
        ORDER BY
            dmp.role_priority_sort
    ) menu_preference
-- Derive the menu hierarchy starting at the user's preference, going back to
-- the root menu item.
START WITH jmi.id = menu_preference.id
CONNECT BY PRIOR jmi.mim_id = jmi.id

Problem

A root menu item has NULL for its parent (mim_id). The user's menu preference is a menu item leaf node, which can be found at any level in the hierarchy (the maximum depth is 3, in this case).

When the data is returned, the values for the LEVEL pseudocolumn (alias MENU_LEVEL) are in reverse order:

╔════════════╦═══════════╦══════════════╗
║ MENU_LEVEL ║ MENU_NAME ║ MENU_ITEM_ID ║
╠════════════╬═══════════╬══════════════╣
║          1 ║ MenuTab3  ║ 100436       ║
║          2 ║ MenuTab2  ║ 101322       ║
║          3 ║ MenuTab1  ║ 101115       ║
╚════════════╩═══════════╩══════════════╝

This should actually return:

╔════════════╦═══════════╦══════════════╗
║ MENU_LEVEL ║ MENU_NAME ║ MENU_ITEM_ID ║
╠════════════╬═══════════╬══════════════╣
║          3 ║ MenuTab3  ║ 100436       ║
║          2 ║ MenuTab2  ║ 101322       ║
║          1 ║ MenuTab1  ║ 101115       ║
╚════════════╩═══════════╩══════════════╝

However, since the hierarchy is connected by starting from the user's preferred menu item, and worked back up to the root menu item, it makes sense that LEVEL is counting "backwards".

Having the level reversed means we can ask, "What is the 3rd-level menu item for the user named 'USERNAME'?" Expressed as a SQL WHERE clause:

WHERE menu_level = 3 AND username = 'USERNAME';  

Question

How would you reverse the value of LEVEL for an arbitrarily-deep hierarchy?

For example, something like:

SELECT
    LEVEL AS MENU_LEVEL_UNUSED,
    max(LEVEL) - LEVEL + 1 AS MENU_LEVEL
FROM ...

Obviously that won't work because max is an aggregate function.

Ideas

  • We could add a column to jhs_menu_items that stores the depth. This is a bit redundant, though, because the hierarchy itself contains that information.
  • We could wrap the jhs_menu_items table in a view that calculates the depth. This could get computationally expensive.
  • Is this a good candidate for WITH?
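One idea worth sketching (a self-contained toy example; the WITH clause here only fakes the jhs_menu_items data): the aggregate restriction can be sidestepped with the analytic form of MAX, which needs no GROUP BY and works for an arbitrary depth.

-- Hypothetical self-contained demo: reverse LEVEL for any depth.
WITH menu (id, mim_id, name) AS (
    SELECT 101115, NULL,   'MenuTab1' FROM dual UNION ALL
    SELECT 101322, 101115, 'MenuTab2' FROM dual UNION ALL
    SELECT 100436, 101322, 'MenuTab3' FROM dual
)
SELECT
    MAX(LEVEL) OVER () - LEVEL + 1 AS menu_level,
    name AS menu_name,
    id AS menu_id
FROM menu
START WITH id = 100436            -- the user's preferred (leaf) menu item
CONNECT BY PRIOR mim_id = id;

The same MAX(LEVEL) OVER () expression should drop into the original query unchanged, since analytic functions are evaluated after CONNECT BY has built the hierarchy.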

Migrating from MSSQL to MySQL using MySQL Workbench tool

Posted: 22 Aug 2013 12:31 PM PDT

I'm trying to migrate a few tables from MSSQL to MySQL using the MySQL Workbench migration wizard. The structure migration works fine, but when I get to the data migration section it throws an error for one table:

ERROR: dbo.Documents:SQLExecDirect(SELECT [DocumentID], [CategoryID], CAST([DocumentName] as NVARCHAR(255)) as [DocumentName], [Active], [NavigatorID], CAST([DocumentText] as NTEXT) as [DocumentText], [UseSubtitle], CAST([DocumentSubtitle] as NVARCHAR(255)) as [DocumentSubtitle], CAST([DocumentPlainText] as NTEXT) as [DocumentPlainText], [DocumentType], CAST([DocumentLink] as NVARCHAR(255)) as [DocumentLink], [Sitemap], CAST([SubtitleImage] as NVARCHAR(255)) as [SubtitleImage], CAST([MetaTags] as NVARCHAR(8000)) as [MetaTags], CAST([MetaDescription] as NVARCHAR(8000)) as [MetaDescription], [AccessLevel] FROM [ctool_test].[dbo].[Documents]): 42000:1131:[Microsoft][ODBC SQL Server Driver][SQL Server]The size (8000) given to the convert specification 'nvarchar' exceeds the maximum allowed for any data type (4000).

2131:[Microsoft][ODBC SQL Server Driver][SQL Server]The size (8000) given to the convert specification 'nvarchar' exceeds the maximum allowed for any data type (4000).

From what I can understand, SQL Server limits 'nvarchar' conversions to a maximum size of 4000, while MySQL can handle 65535.

Any clue how I can get this to work?
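One workaround that might be worth trying (a hedged sketch; Documents_ForMigration and the column list are hypothetical and abbreviated): expose the wide columns through a view on the SQL Server side that casts them to NTEXT, which has no 4000-character conversion ceiling, and migrate from that view instead of the base table.

CREATE VIEW dbo.Documents_ForMigration AS
SELECT [DocumentID],
       [CategoryID],
       CAST([DocumentName] AS NVARCHAR(255)) AS [DocumentName],
       CAST([MetaTags] AS NTEXT) AS [MetaTags],
       CAST([MetaDescription] AS NTEXT) AS [MetaDescription],
       [AccessLevel]
FROM [dbo].[Documents];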

Thanks

Oracle 10g and Active Directory not behaving as expected

Posted: 22 Aug 2013 11:59 AM PDT

I'm trying to connect to our Oracle 10g 64bit database running on a Windows 2003 server authenticating with an Active Directory user.

I found these weird results in our Development environment, which I cannot replicate in the Testing environment (same setup, different Active Directory).

I created a user as "OPS$DOMAIN\USER" and it works OK if I log in locally from the server; for example, running SQLPLUS / connects directly without asking for a username/password. SHOW USER returns "USER is "OPS$DOMAIN\USER"".

The weird thing happened when I tried to log in from my computer (Win 7 32-bit, Oracle 11 client), logged in as domain\user. SQLPLUS /@MYINSTANCE did not work; it wouldn't connect and asked me for a valid user/password.

After several attempts, a colleague suggested creating the user "OPS$USER" without indicating the domain. This does not work when I try to connect locally, but it works when I connect from my machine. So now SQLPLUS /@MYINSTANCE from my computer works, and SHOW USER returns "USER is "OPS$USER"".

Does that make any sense?

Finally, we tried the same on our Testing environment, where it all seems to work as expected (only works if I use "OPS$DOMAIN\USER").

Any ideas why this might be happening? My machine is part of the Development domain environment, before you ask. :D

I'm starting to believe this has nothing to do with Oracle config, but maybe some weird setup on our Dev AD.

Thanks in advance!

Allow developers to run database maintenance

Posted: 22 Aug 2013 11:49 AM PDT

We use the database maintenance solution by Ola Hallengren. We're trying to give our developers a way to run the maintenance stored procedures after they do an ETL. Our developers have locked-down permissions. We've installed Ola's stored procs: CommandExecute, CommandLog, DatabaseIntegrityCheck, and IndexOptimize.

Using "Procedure with Execute as login" as an example, I then set up the following:

-- Create the stored procedure in master. Otherwise,
-- the certificate will need to be backed up and restored
-- to the database where the stored procedure exists
use [master]
go

-- create certificate that the procedure will be signed with
create certificate maintenance_proc_cert
    encryption by password = 'H@rdP@w0rd'
    with subject = 'Enable maintenance through procedure',
    expiry_date = '01/01/2030';
go

-- create login that will be granted right to run maintenance
create login maintenance_login from certificate maintenance_proc_cert;
go

-- grant the login the SA role
exec master..sp_addsrvrolemember @loginame = N'maintenance_login', @rolename = N'sysadmin'
go

-- create procedures
if exists (select * from sys.objects where type = 'P' and name = 'SPECIAL_DatabaseIntegrityCheck')
drop procedure SPECIAL_DatabaseIntegrityCheck;
go
create procedure [dbo].[SPECIAL_DatabaseIntegrityCheck]
    @Databases nvarchar(max),
    @CheckCommands nvarchar(max) = 'CHECKDB',
    @PhysicalOnly nvarchar(max) = 'N',
    @NoIndex nvarchar(max) = 'N',
    @ExtendedLogicalChecks nvarchar(max) = 'N',
    @TabLock nvarchar(max) = 'N',
    @FileGroups nvarchar(max) = NULL,
    @Objects nvarchar(max) = NULL,
    @LockTimeout int = NULL,
    @LogToTable nvarchar(max) = 'N',
    @Execute nvarchar(max) = 'Y'
as
begin
    execute dbo.DatabaseIntegrityCheck
        @Databases = @Databases,
        @CheckCommands = @CheckCommands,
        @PhysicalOnly = @PhysicalOnly,
        @NoIndex = @NoIndex,
        @ExtendedLogicalChecks = @ExtendedLogicalChecks,
        @TabLock = @TabLock,
        @FileGroups = @FileGroups,
        @Objects = @Objects,
        @LockTimeout = @LockTimeout,
        @LogToTable = @LogToTable,
        @Execute = @Execute;
end
go

if exists (select * from sys.objects where type = 'P' and name = 'SPECIAL_IndexOptimize')
drop procedure SPECIAL_IndexOptimize;
go
create procedure [dbo].[SPECIAL_IndexOptimize]
    @Databases nvarchar(max),
    @FragmentationLow nvarchar(max) = NULL,
    @FragmentationMedium nvarchar(max) = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationHigh nvarchar(max) = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationLevel1 int = 5,
    @FragmentationLevel2 int = 30,
    @PageCountLevel int = 1000,
    @SortInTempdb nvarchar(max) = 'N',
    @MaxDOP int = NULL,
    @FillFactor int = NULL,
    @PadIndex nvarchar(max) = NULL,
    @LOBCompaction nvarchar(max) = 'Y',
    @UpdateStatistics nvarchar(max) = NULL,
    @OnlyModifiedStatistics nvarchar(max) = 'N',
    @StatisticsSample int = NULL,
    @StatisticsResample nvarchar(max) = 'N',
    @PartitionLevel nvarchar(max) = 'N',
    @MSShippedObjects nvarchar(max) = 'N',
    @Indexes nvarchar(max) = NULL,
    @TimeLimit int = NULL,
    @Delay int = NULL,
    @LockTimeout int = NULL,
    @LogToTable nvarchar(max) = 'N',
    @Execute nvarchar(max) = 'Y'
as
begin
    execute dbo.IndexOptimize
        @Databases = @Databases,
        @FragmentationLow = @FragmentationLow,
        @FragmentationMedium = @FragmentationMedium,
        @FragmentationHigh = @FragmentationHigh,
        @FragmentationLevel1 = @FragmentationLevel1,
        @FragmentationLevel2 = @FragmentationLevel2,
        @PageCountLevel = @PageCountLevel,
        @SortInTempdb = @SortInTempdb,
        @MaxDOP = @MaxDOP,
        @FillFactor = @FillFactor,
        @PadIndex = @PadIndex,
        @LOBCompaction = @LOBCompaction,
        @UpdateStatistics = @UpdateStatistics,
        @OnlyModifiedStatistics = @OnlyModifiedStatistics,
        @StatisticsSample = @StatisticsSample,
        @StatisticsResample = @StatisticsResample,
        @PartitionLevel = @PartitionLevel,
        @MSShippedObjects = @MSShippedObjects,
        @Indexes = @Indexes,
        @TimeLimit = @TimeLimit,
        @Delay = @Delay,
        @LockTimeout = @LockTimeout,
        @LogToTable = @LogToTable,
        @Execute = @Execute;
end
go

-- sign the procedures with the certificate
add signature to SPECIAL_DatabaseIntegrityCheck
by certificate maintenance_proc_cert
with password = 'H@rdP@w0rd';
go
add signature to SPECIAL_IndexOptimize
by certificate maintenance_proc_cert
with password = 'H@rdP@w0rd';
go

-- Grant execute to the procedure to NNEACC Admins
grant execute on SPECIAL_DatabaseIntegrityCheck to [developer1];
grant execute on SPECIAL_IndexOptimize to [developer1];
go

Then as developer1, I tried running:

EXECUTE dbo.SPECIAL_DatabaseIntegrityCheck
    @Databases = 'ALL_DATABASES',
    @CheckCommands = 'CHECKDB',
    @LogToTable = 'Y';

However, it fails with the error:

Msg 50000, Level 16, State 1, Procedure DatabaseIntegrityCheck, Line 156
The stored procedure CommandExecute is missing. Download http://ola.hallengren.com/scripts/CommandExecute.sql.

Msg 50000, Level 16, State 1, Procedure DatabaseIntegrityCheck, Line 170
The table CommandLog is missing. Download http://ola.hallengren.com/scripts/CommandLog.sql.

Date and time: 2013-08-22 14:44:30

I found that I had to sign each of the procs that the parent proc calls. I did this by running:

add signature to DatabaseIntegrityCheck
by certificate maintenance_proc_cert
with password = 'H@rdP@w0rd';
go
add signature to IndexOptimize
by certificate maintenance_proc_cert
with password = 'H@rdP@w0rd';
go
add signature to CommandExecute
by certificate maintenance_proc_cert
with password = 'H@rdP@w0rd';
go
add signature to CommandLog
by certificate maintenance_proc_cert
with password = 'H@rdP@w0rd';
go

Now developer1 can run SPECIAL_DatabaseIntegrityCheck successfully. I guess my question is, is there a much simpler way that I'm missing?

Two types of data, so two type of databases?

Posted: 22 Aug 2013 01:32 PM PDT

For a social network site, I need to propose a database. The application is written in Java and will initially be hosted on one or more VPSs.

Broadly classified, there are two types of data to be stored in the backend:

1. Dynamic lists, which are:
   - frequently appended to
   - frequently read
   - sometimes reduced
2. A fixed set of data keyed by a primary key (sometimes modified).

"For serving any page, I need to have access to both kind of data!"

Like every other social network site, we need to plan for easy scaling in the future; in addition, our team and resources are very limited. We would like to start with one or two medium-sized VPSs and add more servers as data and load grow.

Personally, I usually prefer something that is used by a large community, so of course MySQL is a big option, but it doesn't fit our entire needs. It could be used for the second kind of data (storing a fixed set of columns), but it is not ideal for storing dynamic lists (the first kind). So should I use a second database just for that type of data, with each database containing only the data best suited to it? (Some have suggested Cassandra for the second kind of data.) What is the way to go?

Oracle to MS SQL Server 2008 Code Conversion Problems [on hold]

Posted: 22 Aug 2013 10:41 AM PDT

CREATE OR REPLACE FUNCTION CHI_X2 (a1 in number, b1 in number, a2 in number, b2 in number)
    RETURN NUMBER IS
    DECLARE @tr1 INT;
    DECLARE @tr2 INT;
    DECLARE @tc1 INT;
    DECLARE @tc2 INT;
    DECLARE @ca1 INT;
    DECLARE @ca2 INT;
    DECLARE @cb1 INT;
    DECLARE @cb2 INT;
    DECLARE @xi INT;
    DECLARE @nt INT;

CREATE PROCEDURE ()
AS
BEGIN
    SET tr1 = a1+b1
    SET tr2 = a2+b2
    SET tc1 = a1+a2
    SET tc2 = b1+b2
    SET nt = tr1+tr2
    SET ca1 = (tc1/nt*tr1)
    SET ca2 = (tc1/nt*tr2)
    SET cb1 = (tc2/nt*tr1)
    SET cb2 = (tc2/nt*tr2)
    SET xi = ((power((a1-ca1),2)/ca1)+(power((a2-ca2),2)/ca2)+(power((b1-cb1),2)/cb1)+(power((b2-cb2),2)/cb2))
    return xi
END CHI_X2

CREATE PROCEDURE ()
AS
begin
    DECLARE @max_chi INT
    DECLARE @xi INT
    DECLARE @maxpos INT
    DECLARE @n INT
    DECLARE @SWV_CUR_OUT_sno VARCHAR(255)
    DECLARE @SWV_CUR_OUT_p VARCHAR(255)
    DECLARE @SWV_CUR_OUT_t VARCHAR(255)
    DECLARE @SWV_cursor_var1 CURSOR
    DECLARE @SWV_CUR_IN_sno VARCHAR(255)
    DECLARE @SWV_CUR_IN_p VARCHAR(255)
    DECLARE @SWV_CUR_IN_t VARCHAR(255)
    delete from CH_TABLE
    commit
    SET @SWV_cursor_var1 = CURSOR FOR select sessionnumber, sessioncount, timespent from CH_TABLE
        order by sessionnumber asc
    OPEN @SWV_cursor_var1
    FETCH NEXT FROM @SWV_cursor_var1 INTO
        @SWV_CUR_OUT_sessionnumber, @SWV_CUR_OUT_sessioncount, @SWV_CUR_OUT_timespent
    while @@FETCH_STATUS = 0
    begin
        SET @max_chi = -999
        SET @maxpos = NULL
        SET @SWV_cursor_var1 = CURSOR FOR select sessionnumber, sessioncount, timespent from CH_TABLE
            order by sessionnumber asc
        OPEN @SWV_cursor_var1
        FETCH NEXT FROM @SWV_cursor_var1 INTO
            @SWV_CUR_IN_sessionnumber, @SWV_CUR_IN_sessioncount, @SWV_CUR_IN_timespent
        while @@FETCH_STATUS = 0
        begin
            select @n = count(*) from(select x1 as x from CH_TABLE union all select x2 from CH_TABLE) AS TabAl
                where x = @SWV_CUR_OUT_sessionnumber or x = @SWV_CUR_IN_sessionnumber
            if n = 0
            begin
                SET xi = round(CHI_X2(cur_out.sessioncount,cur_out.timespent,cur_in.sessioncount,cur_in.timespent),2)
                if xi > max_chi
                begin
                    SET max_chi = xi
                    SET maxpos = cur_in.sessionnumber
                end
            end
            FETCH NEXT FROM @SWV_cursor_var1 INTO
                @SWV_CUR_IN_sessionnumber, @SWV_CUR_IN_sessioncount, @SWV_CUR_IN_timespent
        end
        if max_chi > -999
        begin
            INSERT INTO CH_TABLE(X1, X2, VALUE)
            VALUES(cur_out.sessionnumber, maxpos, max_chi)
            commit
        end
        CLOSE @SWV_cursor_var1
        FETCH NEXT FROM @SWV_cursor_var1 INTO
            @SWV_CUR_OUT_sessionnumber, @SWV_CUR_OUT_sessioncount, @SWV_CUR_OUT_timespent
    end
    CLOSE @SWV_cursor_var1
END

Hi everyone, I'm new here. I converted the above code from Oracle to MS SQL Server 2008, but it still has some errors, and since I'm new to SQL Server 2008 I don't know how to get rid of them. Could somebody please correct my code? It would be a great help, like having a teacher. I will be thankful! Thanks in advance.

Here are the errors:

Msg 156, Level 15, State 1, Line 1
Incorrect syntax near the keyword 'OR'.
Msg 102, Level 15, State 1, Line 13
Incorrect syntax near '('.
Msg 102, Level 15, State 1, Line 17
Incorrect syntax near '='.
Msg 178, Level 15, State 1, Line 27
A RETURN statement with a return value cannot be used in this context.
Msg 102, Level 15, State 1, Line 28
Incorrect syntax near 'CHI_X2'.
Msg 134, Level 15, State 1, Line 36
The variable name '@xi' has already been declared. Variable names must be unique within a query batch or stored procedure.
Msg 137, Level 15, State 2, Line 49
Must declare the scalar variable "@SWV_CUR_OUT_sessionnumber".
Msg 137, Level 15, State 2, Line 56
Must declare the scalar variable "@SWV_CUR_IN_sessionnumber".
Msg 137, Level 15, State 2, Line 60
Must declare the scalar variable "@SWV_CUR_OUT_sessionnumber".
Msg 102, Level 15, State 1, Line 63
Incorrect syntax near '='.
Msg 102, Level 15, State 1, Line 66
Incorrect syntax near '='.
Msg 137, Level 15, State 2, Line 71
Must declare the scalar variable "@SWV_CUR_IN_sessionnumber".
Msg 137, Level 15, State 2, Line 83
Must declare the scalar variable "@SWV_CUR_OUT_sessionnumber".
Msg 102, Level 15, State 1, Line 86
Incorrect syntax near 'END'.

Find Duplicate Customers

Posted: 22 Aug 2013 01:50 PM PDT

Okay... I have a table that has customers:

-- Individual Table
* ID (Internal Unique ID)
* IndividualID (External Unique Individual Identifier)
* Last Name
* First Name
* Birth Date
* SSN
* ...

The issue is that sometimes a person gets multiple Individual ID's. Say the person doesn't provide a SSN for one Encounter, the last name changes, typo in birthday, etc. So you end up with the same person in the individual table multiple times:

1, Frost, Jack,  1/1/2000, 000-00-0008
2, Frost, Jack,  1/1/2000, 000-00-0003
3, Doe,   Jane,  1/1/2000, 000-00-0005
4, Doe,   Janet, 1/1/2000, 000-00-0005
5, Frost, Janet, 1/1/2000, 000-00-0005

Those are just some examples. The basic idea is that I need to find individuals that are potential matches, so that the right person can merge the individuals into a single account.

The particular query I'm currently on is on SS2008-SP1, but I have other queries on SS2005 through SS2012. Is there any way I can improve this?

Initially I had a single select statement (instead of 2 temp tables, 5 inserts and a select statement), but the "This or This or This or..." took many minutes and this takes ~10 seconds. Population of Customers is ~144k (Select count(*) from Data)

Currently, I'm using a simple attempt to match four parts: Last Name, First Name, DOB, and SSN. If 3 or 4 of them match across different individuals, they need to be inspected more closely to determine whether they really are the same person.

IF object_id('tempdb..#DATA') IS NOT NULL
    DROP TABLE #DATA;
GO

CREATE TABLE #DATA (
      ID INT IDENTITY(1, 1) PRIMARY KEY   -- surrogate key used by the self-join below
    , EXTID VARCHAR(30) NOT NULL
    , LNAME VARCHAR(30) NULL
    , FNAME VARCHAR(30) NULL
    , SSN VARCHAR(11) NULL
    , DOB VARCHAR(8) NULL
    )
GO

INSERT INTO #DATA (EXTID, LNAME, FNAME, SSN, DOB)
SELECT
      EXTID = D1.EXTERNALID
    , LNAME = D1.LASTNAME
    , FNAME = D1.FIRSTNAME
    , SSN = CASE WHEN D1.SSN = '000-00-0000' THEN NULL ELSE D1.SSN END
    , DOB = convert(VARCHAR, D1.DOB, 112)
FROM Data D1
WHERE Type = 1 and STATUS = 1
GO

SELECT D1.*, [Splitter] = 'MATCH', D2.*
FROM #DATA D1, #DATA D2 WHERE D1.ID > D2.ID
    AND (   D1.LNAME = D2.LNAME
        AND D1.FNAME = D2.FNAME
        AND D1.SSN   = D2.SSN
        AND D1.DOB   = D2.DOB)
UNION
SELECT D1.*, 'LName', D2.*
FROM #DATA D1, #DATA D2 WHERE D1.ID > D2.ID
    AND (   D1.LNAME <> D2.LNAME
        AND D1.FNAME = D2.FNAME
        AND D1.SSN   = D2.SSN
        AND D1.DOB   = D2.DOB)
UNION
SELECT D1.*, 'FName', D2.*
FROM #DATA D1, #DATA D2 WHERE D1.ID > D2.ID
    AND (   D1.LNAME = D2.LNAME
        AND D1.FNAME <> D2.FNAME
        AND D1.SSN   = D2.SSN
        AND D1.DOB   = D2.DOB)
UNION
SELECT D1.*, 'SSN  ', D2.*
FROM #DATA D1, #DATA D2 WHERE D1.ID > D2.ID
    AND (   D1.LNAME = D2.LNAME
        AND D1.FNAME = D2.FNAME
        AND D1.SSN   <> D2.SSN
        AND D1.DOB   = D2.DOB)
UNION
SELECT D1.*, 'DOB  ', D2.*
FROM #DATA D1, #DATA D2 WHERE D1.ID > D2.ID
    AND (   D1.LNAME = D2.LNAME
        AND D1.FNAME = D2.FNAME
        AND D1.SSN   = D2.SSN
        AND D1.DOB   <> D2.DOB);

Edit to add Distinct Counts:

LName   FName   SSN     DOB     Count
36737   14539   115073  34284   144044

Edit: Cleaned up a bit to get rid of the second temp table. Poking around the estimated execution plan, the above query, broken into 5 parts, uses hash match inner joins and takes about 10 seconds. My initial query and other variations seem to use loop joins and are still chugging along at 10+ minutes.
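For comparison, here is a sketch (not benchmarked) that collapses the five passes into two while still giving the optimizer equality predicates to hash-join on: if at least 3 of the 4 parts match, then either DOB matches, or the other three parts all match. Note the NULL handling differs slightly from the original; a NULL SSN counts as a non-match here instead of excluding the pair.

-- Pass 1: DOB matches, plus at least two of the other three parts.
SELECT D1.*, [Splitter] = 'DOBKEY', D2.*
FROM #DATA D1
JOIN #DATA D2 ON D1.DOB = D2.DOB AND D1.ID > D2.ID
WHERE CASE WHEN D1.LNAME = D2.LNAME THEN 1 ELSE 0 END
    + CASE WHEN D1.FNAME = D2.FNAME THEN 1 ELSE 0 END
    + CASE WHEN D1.SSN   = D2.SSN   THEN 1 ELSE 0 END >= 2
UNION
-- Pass 2: DOB differs, so the other three parts must all match.
SELECT D1.*, 'DOB  ', D2.*
FROM #DATA D1
JOIN #DATA D2 ON  D1.LNAME = D2.LNAME
              AND D1.FNAME = D2.FNAME
              AND D1.SSN   = D2.SSN
              AND D1.ID    > D2.ID
WHERE D1.DOB <> D2.DOB;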

Redundant transpose. Case is not enough to solve my problem

Posted: 22 Aug 2013 10:22 AM PDT

I have a problem transposing rows to columns. I am halfway to the result, but I am getting redundant rows. My table:

EMP_ID  EMP_NAME   SAL_PAID
01      ABC        JAN
01      ABC        FEB
01      ABC        MAR
02      PQR        JAN
02      PQR        MAR
03      XYZ        FEB

Result Table:

EMP_ID  EMP_NAME   JAN    FEB    MAR    APR
01      ABC         Y      Y      Y      N
02      PQR         Y      N      Y      N
03      XYZ         N      Y      N      N

I have used case and then the result is as below:

EMP_ID  EMP_NAME   JAN    FEB    MAR    APR
01      ABC         Y      N      N      N
01      ABC         N      Y      N      N
01      ABC         N      N      Y      N
02      PQR         Y      N      N      N
02      PQR         N      N      Y      N
02      PQR         N      N      N      N
03      XYZ         N      Y      N      N
03      XYZ         N      N      N      N

I have tried this for quite a bit now. Thank you.
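In case it helps, a minimal sketch of the usual fix (the table name EMPLOYEE_SAL is hypothetical): the CASE expressions need to be collapsed with an aggregate and GROUP BY, so that each employee produces a single row.

SELECT EMP_ID,
       EMP_NAME,
       -- MAX picks 'Y' over 'N' whenever at least one row has that month
       MAX(CASE WHEN SAL_PAID = 'JAN' THEN 'Y' ELSE 'N' END) AS JAN,
       MAX(CASE WHEN SAL_PAID = 'FEB' THEN 'Y' ELSE 'N' END) AS FEB,
       MAX(CASE WHEN SAL_PAID = 'MAR' THEN 'Y' ELSE 'N' END) AS MAR,
       MAX(CASE WHEN SAL_PAID = 'APR' THEN 'Y' ELSE 'N' END) AS APR
FROM EMPLOYEE_SAL
GROUP BY EMP_ID, EMP_NAME
ORDER BY EMP_ID;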

Which of these two methods is standard when creating a 1 to many database relationship?

Posted: 22 Aug 2013 02:20 PM PDT

If I have a customer that can have many addresses, I can create an Address table with columns Street, Town etc. and CustomerId. Then I can insert multiple records to have multiple addresses per customer.

Alternatively I can create multiple addresses and give them all the same AddressId, then in my customer table I can have an AddressId (so you'd do SELECT * FROM Address WHERE Address.AddressId = Customer.AddressId).

Which of these is better, is there some reason why you'd use one over the other, or is one of them just silly?
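For concreteness, a minimal sketch of the first design (names hypothetical): the foreign key lives on the Address table. The second design is hard to protect with constraints, since a shared, non-unique AddressId spread across several rows cannot be the target of a foreign key.

CREATE TABLE Customer (
    CustomerId INT PRIMARY KEY
);

CREATE TABLE Address (
    AddressId  INT PRIMARY KEY,
    CustomerId INT NOT NULL REFERENCES Customer (CustomerId),  -- one customer, many addresses
    Street     VARCHAR(100),
    Town       VARCHAR(100)
);

-- speeds up "all addresses for this customer"
CREATE INDEX IX_Address_CustomerId ON Address (CustomerId);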

Can scheduled and continuous replication configurations exist side-by-side on the same master/slave servers?

Posted: 22 Aug 2013 09:10 AM PDT

Environment

We have a core SQL Server cluster. This cluster contains some databases that get replicated to a load-balanced SQL cluster of currently 3 servers. These databases are replicated every 12 hours, but will eventually be replicated every 4 hours.

Requirement

On this cluster a new database has been created, and we need it replicated ASAP to the load-balanced SQL cluster. A delay of seconds or minutes is allowed, and writes to this database are low now and will remain low (a few per hour).

Questions

Can two different replication plans coexist side-by-side on the same environment?

Is it possible to setup a second replication routine for this scenario (continuous transaction replication) besides the current replication schema for the existing databases?

Does this create a high risk for a large existing scheduled replication job?

Our DBA says that this replication scenario creates a high risk for the existing replication configuration (2x a day).

My brainwaves

I can't imagine that this minor write activity with continuous transactional replication could create issues for the large existing replication job. I can imagine the other way around: that our continuous replication will suffer twice a day due to the large replication job. We are perfectly fine with that, as replication is only required ASAP during regular operation.

Where can I find scenarios to work through T-SQL [on hold]

Posted: 22 Aug 2013 08:38 AM PDT

I've been working through the exam guides for the new 70-461 MCSE data exams and I'm at the point where I want to practice my T-SQL. Does anyone know of any good resources where scenarios are posed and you have to create a solution through practice? For example, when I learned C# I would work towards the goal of creating a certain type of system or project, like a web app or a console app that did a certain thing. I'm struggling to come up with test scenarios to improve my T-SQL experience because I don't have a frame of reference for what SQL professionals do in their day jobs.

How to make SSMS upper case keywords

Posted: 22 Aug 2013 12:50 PM PDT

I recently started using Management Studio 2012. When using MySQL Workbench, a handy feature was that I could stay all in lower case and any reserved word (like SELECT, INSERT) would convert to upper case automatically. How do I replicate this behavior in SSMS?

Centralize Oracle RMAN scripts in one server

Posted: 22 Aug 2013 01:14 PM PDT

I want to centralize all of our Oracle DB backup shell scripts on one server; they currently live on different servers, architectures, and DB versions. For this purpose I'm going to use the server where the Recovery Catalog is installed, a RHEL 6 64-bit machine with an 11.2.0.3 DB, and schedule the backup shell scripts via crontab.

This is easily done executing, from the Recovery Catalog server, something like RMAN TARGET SYS@RemoteDBNetIdentifier CATALOG catalogowner@rmanCatalog

The problem is that, according to the compatibility matrix, the RMAN client version must match the version of the target database it connects to: http://docs.oracle.com/cd/E11882_01/backup.112/e10643/compat003.htm#i634479

$ rman target sys@db_prod_1 catalog rman@rcat

Recovery Manager : Release 11.2.0.3.0 - Production on Thu Aug 22 13:27:52 2013

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

target database password:
PL/SQL package SYS.DBMS_BACKUP_RESTORE version 11.02.00.01 in TARGET database is not current
PL/SQL package SYS.DBMS_RCVMAN version 11.02.00.01 in TARGET database is not current
connected to target database: DB_PROD_1 (DBID=943768957)
recovery catalog database password:
connected to recovery catalog database

RMAN> exit

But I don't want to install every single Oracle DB version on the recovery catalog server. So I tried to copy just the rman executable from the Oracle installations on the other servers to the Recovery Catalog server, but it didn't work:

$ scp oracle@SRV_PROD_1:/usr/oracle/product/11.2.0/bin/rman /usr/oracle/product/rman_bin/11.2.0.1/
rman                                    100%   14MB 209.6KB/s   01:06
$ cd product/rman_bin/11.2.0.1/
$ ./rman target sys@db_prod_1 catalog rman@rcat

Recovery Manager : Release 11.2.0.1.0 - Production on Thu Aug 22 13:51:27 2013

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00554: initialization of internal recovery manager package failed
RMAN-03000: recovery manager compiler component initialization failed
RMAN-06035: wrong version of recover.bsq; expected 11.2.0.1 and found 11.2.0.3

It seems that I'd also need the recover.bsq file (which lives under $ORACLE_HOME/rdbms/admin), but I want to be sure that I won't miss more files or face new problems when we have to perform a disaster recovery or something similar in the future.

What can I do?

How can I install only the different versions of the RMAN client? Is there another method to centralize all the backup work? (I think EM Cloud Control can achieve this; I haven't read up on it, just guessing. In any case we don't want to rely on a GUI web-based application to manage our backup strategy.)

Regards

Inline edit SQL Server database rows from Visual Studio

Posted: 22 Aug 2013 05:13 PM PDT

I'm pretty sure Microsoft has pulled one of the most useful features for performing quick edits on a SQL Server database from within the Visual Studio IDE. It seems to only affect SQL 2012 instances: from the Server Explorer I can no longer right-click on a table, choose "Show Table Data", pop open the SQL pane, query the data, and then perform inline edits on the results (as if I were modifying a spreadsheet).

Show Table Data

This means I now need to go into SSMS to make these kinds of quick updates. Does anybody know of a plugin I can use with VS 2012 to bring back this functionality? It seems odd to me that Microsoft has two different development trajectories with SSDT and SSMS. Are they designed to serve different purposes? Can SSMS be integrated into the Visual Studio IDE? I'd rather have a unified development environment if possible.

Any thoughts on a workaround for this problem would be much appreciated.

EDIT

I know some purists would quiver at the thought of treating a database table like a spreadsheet, but semantically they are not a world apart, plus this is supported in SSMS. I'm more in favour of relying on GUI-based approaches where I can to speed up routine tasks; why some would balk at this, I have no idea.

Dealing with data stored as arrays in a MySQL DB

Posted: 22 Aug 2013 08:14 PM PDT

I know storing arrays in a DB field is wrong and I would never do it myself, but a 3rd-party plugin my company uses stores data in an array, and I was wondering if you could help me deal with it.

It basically seems to link 2 tables and add a view count. Here is an example of the data:

a:4:{i:4;i:196;i:26;i:27;i:5;i:155;i:34;i:4;}

So I think this means there are 4 entries in the array, each with 2 attributes. The first of each pair (4, 26, 5, 34) are "store codes"; the second (196, 27, 155, 4) are numbers of plays. Goodness knows why they are stored like this, as there is already another table that links the video with the stores, and they could have just added another column there for the view count.

Anyhow, what I want to do is order by view count based on store id within that array. Do you think this is possible, and does anyone have ideas on how to do it? If storing data like this follows some standard, do you know the name for it? I could probably take it from there.

Thanks!

Unable to connect to Oracle as sysdba after tables have been dropped

Posted: 22 Aug 2013 11:13 AM PDT

I have a script which lists all tables belonging to the user and executes DROP for all of them.

By mistake, I logged into Oracle with 'sys as sysdba' and ran the above script, which dropped all the SYS tables.

Now I cannot start up the database instance. The alert log gives the following error:

Sat Jul 20 15:28:21 2013
Errors in file orcl_ora_4276.trc:
ORA-00942: table or view does not exist
Error 942 happened during db open, shutting down database
USER: terminating instance due to error 942

I tried to flash back one dropped table, but it gives an error:

SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-01092: ORACLE instance terminated. Disconnection forced

SQL> FLASHBACK TABLE MAP_OBJECT TO BEFORE DROP;
ERROR:
ORA-03114: not connected to ORACLE

Please suggest whether there is any way to restore these tables, or if creating a new database is the only way.

Analyse MySQL General Query Log in Real-time?

Posted: 22 Aug 2013 06:13 PM PDT

We want to use the MySQL general query log for real-time monitoring and auditing.

Currently our approach is:

  • set general_log=on;
  • sleep 15m;
  • set general_log=off;
  • scp & rm xxx.log;
  • set general_log=on;...

But the main problem is that turning the general log on and off causes a spike of slow queries.

I also thought of another approach: turn on the general log, tail -f it and ship the output, and periodically truncate the logfile (with "> xxx.log" or "cat /dev/null > xxx.log").

I'm wondering whether it's practical.

If only mysql would provide some built-in general log message queue stuff...
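One alternative that may be worth testing (a sketch; I have not measured its overhead): MySQL can write the general log to a table instead of a file, which the monitoring side can poll and prune without toggling the log on and off.

-- Send the general log to the mysql.general_log table instead of a file.
SET GLOBAL log_output  = 'TABLE';
SET GLOBAL general_log = 'ON';

-- Poll recent entries from the monitoring side.
SELECT event_time, user_host, command_type, argument
FROM mysql.general_log
WHERE event_time > NOW() - INTERVAL 15 MINUTE;

-- Prune periodically; TRUNCATE is permitted on the log table.
TRUNCATE TABLE mysql.general_log;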

Mysql settings for query_cache_min_res_unit

Posted: 22 Aug 2013 12:13 PM PDT

What is the best setting for query_cache_min_res_unit for these results:

+-------------------------+-----------+
| Variable_name           | Value     |
+-------------------------+-----------+
| Qcache_free_blocks      | 35327     |
| Qcache_free_memory      | 295242976 |
| Qcache_hits             | 236913188 |
| Qcache_inserts          | 49557287  |
| Qcache_lowmem_prunes    | 0         |
| Qcache_not_cached       | 7128902   |
| Qcache_queries_in_cache | 195659    |
| Qcache_total_blocks     | 426870    |
+-------------------------+-----------+

Do I need to change any other settings?

My website creates very large results. This is the current setting:

query_cache_min_res_unit = 4096  

Info on the mysql dev website

If most of your queries have large results (check the Qcache_total_blocks and Qcache_queries_in_cache status variables), you can increase performance by increasing query_cache_min_res_unit. However, be careful to not make it too large (see the previous item).
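As a rough starting point (an estimate only; 512 MB is a hypothetical query_cache_size, substitute the real value): the average stored result size can be derived from the status values above, and query_cache_min_res_unit sized near it.

-- Approximate average result size per cached query:
--   (query_cache_size - Qcache_free_memory) / Qcache_queries_in_cache
-- With the numbers above and a hypothetical query_cache_size of 512 MB:
--   (536870912 - 295242976) / 195659 ~= 1235 bytes
SHOW VARIABLES LIKE 'query_cache_size';
SHOW STATUS LIKE 'Qcache%';

By that estimate the average cached result is close to 1 KB, so a 4096-byte minimum block already exceeds it, and raising query_cache_min_res_unit further would mostly waste memory.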

DB2 db2fm process

Posted: 22 Aug 2013 07:13 PM PDT

The server has been up for 365 days; however, I see some weird repeated processes.

Are these normal?

ps -fea | grep db2fm

db2inst1  643284  229516  29   May 25      - 212564:06 /home/db2inst1/sqllib/bin/db2fm -i db2inst1 -m /home/db2inst1/sqllib/lib/libdb2gcf.a -S
db2inst1  671770  229516  56   May 14      - 227447:02 /home/db2inst1/sqllib/bin/db2fm -i db2inst1 -m /home/db2inst1/sqllib/lib/libdb2gcf.a -S
db2inst1  757794 1237058   0   Apr 19  pts/7  0:00 /bin/sh /home/db2inst1/sqllib/bin/db2cc
db2inst1  774232  229516  30   Sep 25      - 94218:54 /home/db2inst1/sqllib/bin/db2fm -i db2inst1 -m /home/db2inst1/sqllib/lib/libdb2gcf.a -S
db2inst1  962750  229516  30   Jul 18      - 145256:01 /home/db2inst1/sqllib/bin/db2fm -i db2inst1 -m /home/db2inst1/sqllib/lib/libdb2gcf.a -S
db2inst1  999450  229516  29   Aug 17      - 117710:27 /home/db2inst1/sqllib/bin/db2fm -i db2inst1 -m /home/db2inst1/sqllib/lib/libdb2gcf.a -S
db2inst1 1179898  229516  58   Nov 02      - 75788:49 /home/db2inst1/sqllib/bin/db2fm -i db2inst1 -m /home/db2inst1/sqllib/lib/libdb2gcf.a -S

ps -fea | grep db2agent

db2inst1  409770  680100   0   Apr 19      -  0:00 db2agent (DATABASEA) 0
db2inst1  450750  778412   0   Apr 18      -  0:03 db2agent (idle) 0
db2inst1  618688  680100   0   Apr 19      -  0:00 db2agent (idle) 0
db2inst1  651440  680100   0   Nov 17      -  0:20 db2agent (DATABASEA) 0
db2inst1  655508  491676   0   Apr 19      -  0:04 db2agent (idle) 0
db2inst1  684038  680100   0   Mar 23      -  0:03 db2agent (DATABASEA) 0
db2inst1  790706  491676   0   Apr 19      -  0:00 db2agent (idle) 0
db2inst1  880672  680100   0   Apr 19      -  0:00 db2agent (DATABASEA) 0
db2inst1  913438  778412   0   Nov 16      -  0:20 db2agent (idle) 0
db2inst1  946182  491676   0   Apr 19      -  0:00 db2agent (DATABASEA) 0
db2inst1  991312  778412   0   Apr 17      -  0:16 db2agent (idle) 0
db2inst1 1077466  491676   0   Apr 19      -  0:00 db2agent (DATABASEA) 0
db2inst1 1134726  680100   0   Apr 19      -  0:00 db2agent (DATABASEA) 0
db2inst1 1142964  491676   0   Apr 19      -  0:00 db2agent (idle) 0
db2inst1 1233112  491676   0   Apr 19      -  0:00 db2agent (idle) 0
db2inst1 1261748  778412   0   Jun 15      -  0:18 db2agent (idle) 0
db2inst1 1384678  778412   0   Mar 23      -  0:27 db2agent (idle) 0
db2inst1 1404936  680100   0   Apr 19      -  0:00 db2agent (DATABASEA) 0
db2inst1 1421368  778412   0   Mar 22      -  0:04 db2agent (idle) 0
db2inst1 1445936  491676   0   Apr 19      -  0:00 db2agent (DATABASEA) 0
db2inst1 1482864  491676   0   Jun 16      -  0:31 db2agent (idle) 0
db2inst1 1503440  778412   0   Jun 15      -  0:56 db2agent (idle) 0
db2inst1 1519842  778412   0   Mar 23      -  0:00 db2agent (DATABASEA) 0
db2inst1 1531946  680100   0   Apr 19      -  0:00 db2agent (idle) 0
db2inst1 1572884  680100   0   Apr 19      -  0:00 db2agent (idle) 0

Other info

oslevel -g
Fileset                                 Actual Level        Maintenance Level
-----------------------------------------------------------------------------
bos.rte                                 5.3.0.40            5.3.0.0

db2fm -s -S
Gcf module 'fault monitor' is NOT operable
Gcf module '/home/db2inst1/sqllib/lib/libdb2gcf.a' state is AVAILABLE

uptime
02:14PM   up 365 days,  12:51,  6 users,  load average: 6.69, 6.89, 6.97

db2level
DB21085I  Instance "db2inst1" uses "64" bits and DB2 code release "SQL08020"
with level identifier "03010106".
Informational tokens are "DB2 v8.1.1.64", "s040812", "U498350", and FixPak "7"

How to snapshot or version a relational database when data changes?

Posted: 22 Aug 2013 04:13 PM PDT

My system receives data feeds. Each data feed will end up creating inserts and/or updates to most tables in the (relational) database.

I need to capture the snapshot of what the entire database looked like after each data feed is received. Basically I need a way to version the database each time a data feed is run through the system.

Note that by capturing a snapshot I don't mean literally taking a snapshot of the database, but rather writing history records or some such mechanism, so that I can query the database across "versions" to see what changed between them (among other use cases).

Do known data model designs exist that can capture a snapshot of a database version like this?
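One known pattern that may fit (a minimal sketch; feed_version, product_history and their columns are hypothetical): pair each table with a history table stamped with the feed run that produced the change, so any version can be reconstructed or diffed with a query.

-- Each feed run gets a row here; its id is the database "version".
CREATE TABLE feed_version (
    version_id  INT PRIMARY KEY,
    received_at TIMESTAMP NOT NULL
);

-- History twin of a hypothetical "product" table: one row per change.
CREATE TABLE product_history (
    product_id  INT NOT NULL,
    version_id  INT NOT NULL REFERENCES feed_version (version_id),
    name        VARCHAR(100),
    price       DECIMAL(10, 2),
    is_deleted  CHAR(1) NOT NULL DEFAULT 'N',  -- feeds can also remove rows
    PRIMARY KEY (product_id, version_id)
);

-- State of product 1 as of version 42: its latest history row at or before 42.
SELECT h.*
FROM product_history h
WHERE h.product_id = 1
  AND h.version_id = (SELECT MAX(version_id)
                      FROM product_history
                      WHERE product_id = 1 AND version_id <= 42);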

Generic SQL Job Scheduler for multiple RDBMS's?

Posted: 22 Aug 2013 08:13 AM PDT

I have been searching for an answer to this but can't seem to find anything. Our environment has MS SQL Server 2008, MySQL, and RedShift, with some complex dataflows between the databases. Right now the scheduling is done through independent systems, but I want one scheduler that controls the dataflows from beginning to end and can script flows from MS SQL to RedShift, etc. Is there a system that can accomplish this already? I'm not a DBA, so I am guessing someone has had this problem before...

Thanks in advance!

EDIT: So one of our dataflows might look like this - file posted on SFTP --> run normal ETL routines --> compile final complete file --> send to customer/push to S3 --> Run SQL commands on Redshift to load* --> Nightly batch processing on RedShift* --> Unload to S3* --> Load into MySQL*

*These are manually run using a tool that just connects via jdbc (can't remember the program)

My DB-related experience is very light, so I was about to write some python scripts and schedule them in CRON, but that is custom and hard to expand - surely someone has had this problem before. We would like to be able to see a status of the job in one place, create new dataflows/ETL's between all three systems (like an SSIS job).

Login failed for user Error: 18456, Severity: 14, State: 11

Posted: 22 Aug 2013 10:13 AM PDT

I have an AD group XYZ that I have added to SQL Server security with data_reader permissions.

The XYZ group has around 10 users in there who are successfully able to access the SQL Server database. I recently added a new user to this group (at AD level), but this person is not able to access SQL Server (through Mgmt Studio) and he's getting the error below

Login failed for user. Reason: Token-based server access validation failed with an infrastructure error. Check for previous errors.

Error: 18456, Severity: 14, State: 11.

I have already verified that the AD permissions are set up properly, the user has restarted his machine, he is not part of any group that has DENY access, the SQL Server XYZ group has been removed and re-added to the SQL Server instance in Management Studio, and the server has been restarted.

Any ideas on how to proceed further?

Thanks!

Rent weekly cost database design

Posted: 22 Aug 2013 03:13 PM PDT

I have a database containing a table BUILDING, where each row holds the details of one building. Another table, BUILDING_UNIT, contains rows with details about a single building unit, and refers with a foreign key to the owning BUILDING.ID. The BUILDING_UNIT table also refers to a table CATEGORY, which tells whether the BUILDING_UNIT is of category A, B, C, or D, again with a foreign key pointing to CATEGORY.ID.

Now, the final cost of renting a building unit depends on its building, its category, the number of days it is rented, and the specific period of the year. We only rent weekly, so I might as well use weeks only; however, I'd like the design to be as flexible as possible for the future.

I cannot settle on a table design that represents this situation.

Do I have to use a table with coefficients for each day of the year, then a table with coefficients for A, B, C, and D, and then a table with coefficients for each building, and somehow calculate a result?

Is there some standard and recognized implementation for problems of this type?

Thank you

EDIT: Note that the solution should abstract away the formula for calculating the cost, which might change in the future. However, I might be asked to make a specific week of the year for building unit X inside building Y cost $300 while the week after costs $600. Generally, building units inside the same building cost the same in the same week, but that might change in the future too, so I'd like to handle all of these specific cases from the start.
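A sketch of one possible shape (all names hypothetical; this is not a definitive design): a rate table that prices a date range at building/category level or at unit level, with the most specific row winning. That keeps the pricing rules in data rather than in a formula, and still allows one-off weekly prices.

CREATE TABLE rental_rate (
    id               INT PRIMARY KEY,
    building_id      INT NULL,            -- NULL = applies to any building
    category_id      INT NULL,            -- NULL = applies to any category
    building_unit_id INT NULL,            -- set only for a unit-specific override
    period_start     DATE NOT NULL,       -- e.g. the Monday of a week
    period_end       DATE NOT NULL,
    weekly_cost      DECIMAL(10, 2) NOT NULL
);

-- Most specific applicable rate wins: unit override first, then building+category.
SELECT weekly_cost
FROM rental_rate
WHERE period_start <= '2013-08-19' AND period_end >= '2013-08-25'
  AND (building_unit_id = 42
       OR (building_unit_id IS NULL AND building_id = 7 AND category_id = 2))
ORDER BY CASE WHEN building_unit_id IS NOT NULL THEN 0 ELSE 1 END
LIMIT 1;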

Proper procedure for migrating a MySQL database to another Debian machine?

Posted: 22 Aug 2013 02:13 PM PDT

I have one server running an older Debian version with MySQL 5.x and a newer Debian server, also running MySQL.

I've created a backup of all databases on the first server like so:

mysqldump -uuser -ppass --all-databases > dump.sql  

On the other server, I did a:

mysql -uuser -ppass < dump.sql  

At first, everything seemed great. I could browse my databases in phpMyAdmin, but as soon as I tried logging in again, it failed. Turns out, my root password had been overwritten with the one from the older database.

I wanted to reset it, but in order to do so I would have needed to start mysqld_safe, which I couldn't do because the password for the debian-sys-maint user had been overwritten in the database as well. Just when I thought all hell had broken loose, I somehow reset both the root and debian-sys-maint passwords to the original values of the new server, and I managed to revert to a clean state.

Since I obviously don't want to go down that road again, here's the question(s):

  • Was I right with my approach of using a complete --all-databases dump?
  • Was there something I needed to do before reading in that dump to prevent this disaster from happening? Or even before creating the dump?

If I'm going about this the wrong way:

  • What is the proper procedure for migrating all databases and their users to another server?

Note that I'm not that experienced with MySQL and server administration at all, so I might be missing something obvious. All the tutorials and how-tos I've found never mention anything like this and just talk about importing the complete dump.
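For what it's worth, a sketch of the approach I would test first (database and user names are placeholders): dump only the application databases, so that the mysql system database, which holds the user table, never overwrites the target server's credentials, then recreate the application users with GRANT statements.

# On the old server: dump the application databases only, not the mysql system DB.
mysqldump -uuser -ppass --databases app_db1 app_db2 > dump.sql

# On the new server: import, then recreate the application users by hand.
mysql -uuser -ppass < dump.sql
mysql -uuser -ppass -e "GRANT ALL ON app_db1.* TO 'appuser'@'localhost' IDENTIFIED BY 'secret';"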

How to add 'root' MySQL user back on MAMP?

Posted: 22 Aug 2013 01:13 PM PDT

In phpMyAdmin, I removed the 'root' user by mistake, while logged in as 'root'. How can I add the 'root' user back on MAMP?
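In case it helps, a sketch of the usual recovery path (the password is a placeholder, and MAMP's binaries may live under /Applications/MAMP/Library/bin): start MySQL with the grant tables disabled, then recreate root.

-- First start MAMP's MySQL with authentication disabled, e.g.:
--   mysqld_safe --skip-grant-tables &
-- Then, in a mysql client session:
FLUSH PRIVILEGES;   -- load the grant tables so GRANT works again
GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost'
    IDENTIFIED BY 'root' WITH GRANT OPTION;
FLUSH PRIVILEGES;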

Slow insert with MySQL full-text index

Posted: 22 Aug 2013 09:13 AM PDT

I use a full-text index in a MySQL table, and each insert into this table takes about 3 seconds. It seems that MySQL rebuilds (part of) the full-text index after each insert/update. Is this right?

How can I get better performance from the INSERT? Is there perhaps an option to set when MySQL rebuilds the full-text index?
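For bulk loads, a commonly used workaround (a sketch; the table and index names are hypothetical, and it only pays off when inserting many rows at once): drop the FULLTEXT index, do the inserts, then rebuild the index in one pass.

-- Hypothetical table "articles" with a FULLTEXT index named ft_body on "body".
ALTER TABLE articles DROP INDEX ft_body;

-- ... perform the bulk INSERTs here ...

-- Rebuild the index once at the end instead of after every row.
ALTER TABLE articles ADD FULLTEXT INDEX ft_body (body);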
