Wednesday, May 22, 2013

How can a database be normalized when an optional field causes denormalization?

Posted: 22 May 2013 08:34 PM PDT

Note: I attempted to provide images to illustrate my question, but lack the reputation to do so on this StackExchange site. If someone with edit privileges can embed it with the question, it would be much appreciated.

Suppose you have a warehouse full of widgets. Each widget in the warehouse is stored in a specific identifiable location within the warehouse. You might have a schema which looks like the Original Layout <1> in the image link.

A widget may also be (optionally) located on a cart in the warehouse, which could lead to a schema like Carts Added to Schema <2> in the image link.

However, a problem arises because a cart itself can only be located in one location within the warehouse, which turns the prior schema into something like Denormalized Schema <3> in the image link.

However, this schema is denormalized and can result in anomalies, such as when the data is:

widgetID  storageLocationID  cartID
========  =================  ======
1         foo                A
2         bar                A

Although both widgets are located on the same cart, the database shows them in different locations, which is not possible.

If a cart was required, it would be relatively simple to address this normalization issue by simply removing the storageLocationID from the Widgets table. However, because a cart is not required, that solution will not work.

How can this data be structured to eliminate the anomalies?
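
One textbook-style resolution, sketched below with names assumed from the description: store a location only on the cart, let a widget reference either a location or a cart (but never both), and enforce the either/or with a CHECK constraint. A widget on a cart then derives its location through the cart, so the two widgets above can no longer disagree.

-- A sketch, not a definitive design; names are assumptions.
CREATE TABLE StorageLocations (
    storageLocationID varchar(10) PRIMARY KEY
);

CREATE TABLE Carts (
    cartID varchar(10) PRIMARY KEY,
    -- a cart always sits in exactly one location
    storageLocationID varchar(10) NOT NULL
        REFERENCES StorageLocations (storageLocationID)
);

CREATE TABLE Widgets (
    widgetID int PRIMARY KEY,
    storageLocationID varchar(10) NULL
        REFERENCES StorageLocations (storageLocationID),
    cartID varchar(10) NULL
        REFERENCES Carts (cartID),
    -- exactly one placement: directly in a location, or on a cart
    CHECK (   (storageLocationID IS NOT NULL AND cartID IS NULL)
           OR (storageLocationID IS NULL     AND cartID IS NOT NULL))
);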

Does an index with multiple columns make a similar index redundant?

Posted: 22 May 2013 06:59 PM PDT

Let's say I have a table that looks like this:

CREATE TABLE Activity (
    ActivityID int primary key identity(1,1),
    ActivityName nvarchar(10),
    InactiveFlag bit
)

with an index that looks like this:

CREATE INDEX Activity_Index on Activity
(
    ActivityName
)

then for some reason, someone has created a second index:

CREATE INDEX Another_Activity_Index on Activity
(
    ActivityName, InactiveFlag
)

Is it definitely safe to delete the first index? Is it just taking up unnecessary disk space? Will the second index cover all cases of the first? The column ordering is definitely "ActivityName" first.
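
For equality and prefix seeks this is generally true, since ActivityName is the leading key column of the wider index; a sketch of the cleanup, assuming nothing references the narrower index by name (for example in index hints):

-- The wider index can serve the same seeks on ActivityName,
-- because ActivityName is its leading key column.
DROP INDEX Activity_Index ON Activity;

The wider index is marginally bigger, so scans touch slightly more pages, but that is usually cheaper than maintaining two overlapping indexes on every write.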

Percona xtrabackup Prepare Fails

Posted: 22 May 2013 06:31 PM PDT

I made a backup and according to Percona it completed successfully. Now I'm trying to prepare it and having the following issue:

./xtrabackup_55 --defaults-file="/etc/my.cnf" --defaults-group="mysqld" --prepare --target-dir=/backup/unprepared/2013_05_21_21_31_41_full
xtrabackup: cd to /backup/unprepared/2013_05_21_21_31_41_full
xtrabackup: This target seems to be not prepared yet.
xtrabackup: No valid checkpoint found.
xtrabackup: Error: xtrabackup_init_temp_log() failed.

I've been searching for answers for quite a while and not finding anything conclusive. Any advice on how to resolve this?


Edit: per comment below, here is requested info:

xtrabackup_55 version 2.0.3 for Percona Server 5.5.16 Linux (x86_64) (revision id: 470)

cat /etc/my.cnf | grep innodb
innodb_data_home_dir = /data/mysql_data/
innodb_data_file_path = ibdata1:2000M;ibdata2:10M:autoextend
innodb_log_group_home_dir = /var/mysql/data/
innodb_file_per_table
innodb_buffer_pool_size = 5G
innodb_additional_mem_pool_size = 40M
innodb_log_file_size = 1G
innodb_log_buffer_size = 16M
innodb_log_block_size = 512
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 50
innodb_thread_concurrency = 8
innodb_doublewrite_file = /var/mysql/innodb_doublewrite.buff
innodb_flush_method = ALL_O_DIRECT

Postgres 9.2 select multiple specific rows in one query

Posted: 22 May 2013 04:16 PM PDT

I have a table with three columns,

  • id (serial)
  • value (real)
  • timein (timestamp)

I want to select the values based on a range of days, e.g. from two days ago until now. The table may contain one or two rows per day; if it contains two rows, I want to select the second row plus all the rows following it in the period of time.

To clarify:

id | value | timein
1  | 20.5  | 2013-04-25 11:25:42
2  |  4.2  | 2013-04-26 09:10:42
3  |  3.1  | 2013-04-26 15:45:42
4  |  100  | 2013-04-27 14:52:42
5  | 15.5  | 2013-04-28 17:41:42
6  | 23.3  | 2013-04-29 07:32:42
7  | 21.4  | 2013-04-29 14:32:42

If I wanted to select the values from day 26 (only the second row) plus all the values until day 29, and so on, can it be done in one query?
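
A sketch of one way to do it in a single statement (the table name readings is assumed): number the rows within each day with a window function, then discard only the first row of the starting day.

-- PostgreSQL 9.2 sketch; "readings" is a placeholder table name.
SELECT id, value, timein
FROM (
    SELECT id, value, timein,
           row_number() OVER (PARTITION BY timein::date
                              ORDER BY timein) AS rn
    FROM readings
    WHERE timein >= date '2013-04-26'
      AND timein <  date '2013-04-30'
) t
-- keep everything except the first sample of the first day
WHERE timein::date > date '2013-04-26' OR rn > 1
ORDER BY timein;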

Managing version control of Erwin file changes with multiple environments

Posted: 22 May 2013 03:01 PM PDT

What is the easiest way to manage three environments (DEV, QA and PROD) within a single Erwin file or multiple Erwin files? The Erwin file is separated by subject area, so I'm not sure reverse engineering is a viable option. We perform migrations every day, but they don't always conform to what is in Erwin, so we have a difficult time showing our end-users what is in QA and what is in PROD.

JOIN to read first table and get possible relationship in the second table

Posted: 22 May 2013 02:30 PM PDT

Consider a main table as

CREATE TABLE groups
(
    group_id int(11) NOT NULL AUTO_INCREMENT,
    group_title varchar(255),
    PRIMARY KEY (group_id)
) ENGINE=InnoDB

and a second table for relationship as

CREATE TABLE group_members
(
    relation_id int(11) NOT NULL AUTO_INCREMENT,
    group_id int(11),
    user_id int(11),
    FOREIGN KEY (group_id) REFERENCES groups(group_id) ON DELETE CASCADE,
    FOREIGN KEY (user_id) REFERENCES users(user_id) ON DELETE CASCADE,
    PRIMARY KEY (relation_id)
) ENGINE=InnoDB

How can I get a full list of all groups with an additional column indicating whether a given user is a member of each group? That is, we will have a WHERE clause specifying the corresponding user_id, and the additional column will show 0 or 1 depending on whether this user_id has a connection to each group.
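
A common way to sketch this is to move the user filter into the join condition rather than a WHERE clause, so groups without a match still appear; the user_id 42 below is a placeholder:

-- The filter lives in the ON clause: groups the user doesn't belong
-- to still appear, with is_member = 0.
SELECT g.group_id,
       g.group_title,
       CASE WHEN m.user_id IS NULL THEN 0 ELSE 1 END AS is_member
FROM groups g
LEFT JOIN group_members m
       ON m.group_id = g.group_id
      AND m.user_id  = 42;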

How to get a row_number to have the behavior of dense_rank

Posted: 22 May 2013 02:06 PM PDT

I have a stored procedure that will be used to page through data. One of the requirements of the procedure is a parameter that will be used to sort two columns of data; the sorting should be used when applying row_number().

Some sample data:

CREATE TABLE grp
(
    [grp_id] uniqueidentifier primary key,
    [grp_nm] varchar(5) not null,
    [grp_owner] varchar(200) not null
);

INSERT INTO grp ([grp_id], [grp_nm], [grp_owner])
VALUES
    ('7F5F0F16-4EBE-E211-9C26-78E7D18E1E84', 'test1', 'me'),
    ('1F52A713-EFAC-E211-9C26-78E7D18E1E84', 'test2', 'me'),
    ('D123B48A-63AB-E211-9C26-78E7D18E1E84', 'test3', 'me'),
    ('48361F86-2BC2-E211-9C26-78E7D18E1E84', 'test4', 'me'),
    ('27429A57-93C1-E211-9C26-78E7D18E1E84', 'test5', 'me'),
    ('D5DF9F8E-EDC2-E211-9C26-78E7D18E1E84', 'test6', 'me'),
    ('9A07EA21-1AAD-E211-9C26-78E7D18E1E84', 'test7', 'me');

CREATE TABLE mbr
(
    [grp_id] uniqueidentifier,
    [mbr_id] int not null primary key,
    [acct_id] varchar(7) not null,
    [cst] varchar(4) null
);

INSERT INTO mbr ([grp_id], [mbr_id], [acct_id], [cst])
VALUES
    ('7F5F0F16-4EBE-E211-9C26-78E7D18E1E84', 10, '1', 'AA'),
    ('7F5F0F16-4EBE-E211-9C26-78E7D18E1E84', 11, '2', 'BB'),
    ('1F52A713-EFAC-E211-9C26-78E7D18E1E84', 12, '1234578', 'blah'),
    ('D123B48A-63AB-E211-9C26-78E7D18E1E84', 13, '78', 'test'),
    ('48361F86-2BC2-E211-9C26-78E7D18E1E84', 14, 'x', 'mbr1'),
    ('48361F86-2BC2-E211-9C26-78E7D18E1E84', 15, 'a', 'mbr2'),
    ('27429A57-93C1-E211-9C26-78E7D18E1E84', 16, 'b', 'mbr1'),
    ('27429A57-93C1-E211-9C26-78E7D18E1E84', 17, 'c', 'mbr2'),
    ('D5DF9F8E-EDC2-E211-9C26-78E7D18E1E84', 18, 'a', 'mbr1'),
    ('9A07EA21-1AAD-E211-9C26-78E7D18E1E84', 19, 'a', 'mbr1');

The procedure and the sorting are currently working as written.

But an issue has cropped up with the way we are applying row_number(). The row_number() should almost simulate the ranking that occurs with dense_rank, but it does not work as expected due to the parameterized sorting.

For example, if I run the following query:

declare @sort_desc bit = 0

select g.grp_id, g.grp_nm,
    m.mbr_id, m.acct_id, m.cst,
    row_number() over(order by case when @sort_desc = 0 then g.grp_nm end
                             , case when @sort_desc = 0 then m.acct_id end
                             , case when @sort_desc = 1 then g.grp_nm end desc
                             , case when @sort_desc = 1 then m.acct_id end desc) rn,
    dense_rank() over(order by case when @sort_desc = 0 then g.grp_nm end
                             , case when @sort_desc = 0 then m.acct_id end
                             , case when @sort_desc = 1 then g.grp_nm end desc
                             , case when @sort_desc = 1 then m.acct_id end desc) dr
from grp g
inner join mbr m
    on g.grp_id = m.grp_id;

See SQL Fiddle with Demo (a trimmed down stored proc is also present)

I get a result of:

|                               GRP_ID | GRP_NM | MBR_ID | ACCT_ID |  CST | RN | DR |
-------------------------------------------------------------------------------------
| 7F5F0F16-4EBE-E211-9C26-78E7D18E1E84 |  test1 |     10 |       1 |   AA |  1 |  1 |
| 7F5F0F16-4EBE-E211-9C26-78E7D18E1E84 |  test1 |     11 |       2 |   BB |  2 |  2 |
| 1F52A713-EFAC-E211-9C26-78E7D18E1E84 |  test2 |     12 | 1234578 | blah |  3 |  3 |
| D123B48A-63AB-E211-9C26-78E7D18E1E84 |  test3 |     13 |      78 | test |  4 |  4 |
| 48361F86-2BC2-E211-9C26-78E7D18E1E84 |  test4 |     15 |       a | mbr2 |  5 |  5 |
| 48361F86-2BC2-E211-9C26-78E7D18E1E84 |  test4 |     14 |       x | mbr1 |  6 |  6 |
| 27429A57-93C1-E211-9C26-78E7D18E1E84 |  test5 |     16 |       b | mbr1 |  7 |  7 |
| 27429A57-93C1-E211-9C26-78E7D18E1E84 |  test5 |     17 |       c | mbr2 |  8 |  8 |
| D5DF9F8E-EDC2-E211-9C26-78E7D18E1E84 |  test6 |     18 |       a | mbr1 |  9 |  9 |
| 9A07EA21-1AAD-E211-9C26-78E7D18E1E84 |  test7 |     19 |       a | mbr1 | 10 | 10 |

But the desired result is:

|                               GRP_ID | GRP_NM | MBR_ID | ACCT_ID |  CST | RN | expR|
-------------------------------------------------------------------------------------
| 7F5F0F16-4EBE-E211-9C26-78E7D18E1E84 |  test1 |     10 |       1 |   AA |  1 |  1  |
| 7F5F0F16-4EBE-E211-9C26-78E7D18E1E84 |  test1 |     11 |       2 |   BB |  2 |  1  |
| 1F52A713-EFAC-E211-9C26-78E7D18E1E84 |  test2 |     12 | 1234578 | blah |  3 |  2  |
| D123B48A-63AB-E211-9C26-78E7D18E1E84 |  test3 |     13 |      78 | test |  4 |  3  |
| 48361F86-2BC2-E211-9C26-78E7D18E1E84 |  test4 |     15 |       a | mbr2 |  5 |  4  |
| 48361F86-2BC2-E211-9C26-78E7D18E1E84 |  test4 |     14 |       x | mbr1 |  6 |  4  |
| 27429A57-93C1-E211-9C26-78E7D18E1E84 |  test5 |     16 |       b | mbr1 |  7 |  5  |
| 27429A57-93C1-E211-9C26-78E7D18E1E84 |  test5 |     17 |       c | mbr2 |  8 |  5  |
| D5DF9F8E-EDC2-E211-9C26-78E7D18E1E84 |  test6 |     18 |       a | mbr1 |  9 |  6  |
| 9A07EA21-1AAD-E211-9C26-78E7D18E1E84 |  test7 |     19 |       a | mbr1 | 10 |  7  |

As you can see, the expR column values increment based on the grp_id while the rows remain in the correct sort order. I am at a loss as to how to get this result, and any suggestions would be great.
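
One pattern that appears to produce the expR column is to leave the member-level column out of the dense_rank() entirely, so ties within a group collapse to one value while row_number() keeps the full two-column sort; a sketch against the sample data above:

declare @sort_desc bit = 0;

select g.grp_id, g.grp_nm, m.mbr_id, m.acct_id, m.cst,
    row_number() over(order by case when @sort_desc = 0 then g.grp_nm end
                             , case when @sort_desc = 0 then m.acct_id end
                             , case when @sort_desc = 1 then g.grp_nm end desc
                             , case when @sort_desc = 1 then m.acct_id end desc) rn,
    -- rank by the group column only: all rows of a group share a value
    dense_rank() over(order by case when @sort_desc = 0 then g.grp_nm end
                             , case when @sort_desc = 1 then g.grp_nm end desc) expR
from grp g
inner join mbr m
    on g.grp_id = m.grp_id
order by rn;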

Can I decrypt TDE transaction log files for use with 3rd party software?

Posted: 22 May 2013 06:22 PM PDT

Is there a way to decrypt TDE log files on disk for use with 3rd party software? I have the original key and certificates. I am trying to use the transaction files with a 3rd party tool in order to recover some lost data.

XML data source with XSD validation importing the same data despite different files

Posted: 22 May 2013 03:25 PM PDT

Please bear with me, I've never used XML as a data source, nor am I familiar with XML itself, so I'm not entirely familiar with the nomenclature.

I have a series of XML files with data to be imported to a database and the files are being validated with a single XSD file. I have the file names of the XML and XSD files statically set as variables and file connections set with the variables on the ConnectionString property.

Prior to importing the file, I run the XML files through an XML Task with the operation type as Validate, using the XSD file as the SecondOperand. I have the SourceType and SecondOperandType of the XML Task set to the file connections of the XML and XSD files, as stated above.

The first file I imported worked fine; the data matches up perfectly. However, upon moving on to the second file, it imported the same data from the first file, leaving me with duplicate rows in the database. I double-checked the file name variable and the contents of the file itself, and they are both correct, so I'm not sure where the duplication would have taken place.

Is there something else that needs to be done to distinguish the data from the second file from that of the first? Does the validation file or process need to be altered in some way?

Why do linked servers have a limitation of 10 branches in a CASE expression?

Posted: 22 May 2013 06:01 PM PDT

Why does this CASE expression:

SELECT CASE column
         WHEN 'a' THEN '1'
         WHEN 'b' THEN '2'
         ... c -> i
         WHEN 'j' THEN '10'
         WHEN 'k' THEN '11'
       END [col]
FROM LinkedServer.database.dbo.table

produce this result?

Error message: Msg 8180, Level 16, State 1, Line 1
Statement(s) could not be prepared.
Msg 125, Level 15, State 4, Line 1
Case expressions may only be nested to level 10.

Clearly there isn't a nested CASE expression here, though there are more than 10 "branches."

Another oddity. This inline table-valued function produces the same error:

ALTER FUNCTION [dbo].[fn_MyFunction]
(
    @var varchar(20)
)
RETURNS TABLE
AS
RETURN
(
    SELECT CASE column
             WHEN 'a' THEN '1'
             WHEN 'b' THEN '2'
             ... c -> i
             WHEN 'j' THEN '10'
             WHEN 'k' THEN '11'
           END [col]
    FROM LinkedServer.database.dbo.table
)

But a similar multi-statement TVF works fine:

ALTER FUNCTION [dbo].[fn_MyFunction]
(
    @var varchar(20)
)
RETURNS @result TABLE
(
    value varchar(max)
)
AS
BEGIN
    INSERT INTO @result
    SELECT CASE column
             WHEN 'a' THEN '1'
             WHEN 'b' THEN '2'
             ... c -> i
             WHEN 'j' THEN '10'
             WHEN 'k' THEN '11'
           END [col]
    FROM LinkedServer.database.dbo.table

    RETURN;
END

Generate List of Missing Relationships

Posted: 22 May 2013 06:33 PM PDT

I have two tables, CLUSER and CLUSERRX. CLUSER contains all the employees at my company and CLUSERRX contains a list of all the people in CLUSER with a specific workgroup and all of the users from CLUSER with a similar, but not identical workgroup. The columns for these tables are as follows:

CREATE TABLE [dbo].[CLUSER]
(
    [LOGONID]   [VARCHAR](15) NOT NULL,
    [WORKGROUP] [VARCHAR](30) NOT NULL,
    PRIMARY KEY CLUSTERED ( [LOGONID] ASC )
        WITH (...other settings..., FILLFACTOR = 99) ON [PRIMARY]
) ON [PRIMARY]

CREATE TABLE [dbo].[CLUSERRX]
(
    [LOGONID] [VARCHAR](15) NOT NULL,
    [RXUSER]  [VARCHAR](10) NOT NULL,
    CONSTRAINT [USERRX_LOGONID] PRIMARY KEY CLUSTERED
        ( [LOGONID] ASC, [RXUSER] ASC )
        WITH (...other settings..., FILLFACTOR = 99) ON [PRIMARY]
) ON [PRIMARY]

I am able to generate a list of every CLUSERRX.LOGONID, CLUSER.WORKGROUP for that LOGONID, CLUSERRX.RXUSER, and CLUSER.WORKGROUP for the RXUSER using the following query:

SELECT u.LOGONID,
       u.WORKGROUP as 'USER WORKGROUP',
       r.RXUSER,
       p.WORKGROUP as 'PROVIDER WORKGROUP'
FROM   CLUSERRX r
       LEFT JOIN CLUSER u
              ON u.LOGONID = r.LOGONID
       LEFT JOIN CLUSER p
              ON p.LOGONID = r.RXUSER

What I need to do is take the data in CLUSER and find what relationships are missing in CLUSERRX.

Example data:

CLUSER
|-----------|------------|
|  LOGONID  |  WORKGROUP |
|-----------|------------|
| JCURCIO   | 001 USER   |
| TUSER     | 001 SUP    |
| ATEST     | 001 SUP    |
| MPAGE     | 001 User   |
|-----------|------------|

CLUSERRX
|-----------|------------|
|  LOGONID  |  RXUSER    |
|-----------|------------|
| JCURCIO   | TUSER      |
| JCURCIO   | ATEST      |
|-----------|------------|

Since there is no CLUSERRX record pairing MPAGE with TUSER or ATEST, those relationships are missing, and I need to display that.

I feel like I need to use something like:

SELECT u.LOGONID,
       u.WORKGROUP AS 'USER WORKGROUP',
       r.RXUSER,
       p.WORKGROUP AS 'PROVIDER WORKGROUP'
FROM   CLUSERRX r
       LEFT JOIN CLUSER u
              ON u.LOGONID = r.LOGONID
       LEFT JOIN CLUSER p
              ON p.LOGONID = r.RXUSER
WHERE  ( u.LOGONID NOT IN (SELECT u.LOGONID,
                                  u.WORKGROUP AS 'USER WORKGROUP',
                                  r.RXUSER,
                                  p.WORKGROUP AS 'PROVIDER WORKGROUP'
                           FROM   CLUSERRX r
                                  LEFT JOIN CLUSER u
                                         ON u.LOGONID = r.LOGONID
                                  LEFT JOIN CLUSER p
                                         ON p.LOGONID = r.RXUSER)
         AND p.WORKGROUP NOT IN (SELECT u.LOGONID,
                                        u.WORKGROUP AS 'USER WORKGROUP',
                                        r.RXUSER,
                                        p.WORKGROUP AS 'PROVIDER WORKGROUP'
                                 FROM   CLUSERRX r
                                        LEFT JOIN CLUSER u
                                               ON u.LOGONID = r.LOGONID
                                        LEFT JOIN CLUSER p
                                               ON p.LOGONID = r.RXUSER) )

But I have no idea how to format it properly to get the information I need. Can anyone provide any insight into what I may be missing, or whether I am even headed in the right direction?
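
For what it's worth, one way to phrase "relationships that should exist but don't" is to cross join the candidate users against the supervisors and keep only the pairs with no CLUSERRX row; a sketch, where the two workgroup literals are assumptions about which groups must be related:

SELECT u.LOGONID,
       u.WORKGROUP AS [USER WORKGROUP],
       s.LOGONID   AS RXUSER,
       s.WORKGROUP AS [PROVIDER WORKGROUP]
FROM   CLUSER u
       CROSS JOIN CLUSER s
WHERE  u.WORKGROUP = '001 USER'   -- assumed "user" workgroup
  AND  s.WORKGROUP = '001 SUP'    -- assumed "supervisor" workgroup
  AND  NOT EXISTS (SELECT 1
                   FROM   CLUSERRX r
                   WHERE  r.LOGONID = u.LOGONID
                     AND  r.RXUSER  = s.LOGONID);

Against the sample data this returns the two missing MPAGE/TUSER and MPAGE/ATEST pairs.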

Troubleshooting SOS_SCHEDULER_YIELD wait

Posted: 22 May 2013 01:45 PM PDT

Running our corporate ERP (Dynamics AX 2012), I noticed our production environment seemed much slower than our development systems.

After performing the same activities in both the development and production environments while running a trace, I confirmed that SQL queries were executing very slowly on our production environment compared to development (10-50x slower on average).

At first I attributed this to load, and re-ran the same activities on the production environment during off hours and found the same results in the trace.

I cleared my wait stats in SQL Server then let the server run under its normal production load for a little while, and then ran this query:

WITH [Waits] AS
    (SELECT
        [wait_type],
        [wait_time_ms] / 1000.0 AS [WaitS],
        ([wait_time_ms] - [signal_wait_time_ms]) / 1000.0 AS [ResourceS],
        [signal_wait_time_ms] / 1000.0 AS [SignalS],
        [waiting_tasks_count] AS [WaitCount],
        100.0 * [wait_time_ms] / SUM ([wait_time_ms]) OVER() AS [Percentage],
        ROW_NUMBER() OVER(ORDER BY [wait_time_ms] DESC) AS [RowNum]
    FROM sys.dm_os_wait_stats
    WHERE [wait_type] NOT IN (
        N'CLR_SEMAPHORE',    N'LAZYWRITER_SLEEP',
        N'RESOURCE_QUEUE',   N'SQLTRACE_BUFFER_FLUSH',
        N'SLEEP_TASK',       N'SLEEP_SYSTEMTASK',
        N'WAITFOR',          N'HADR_FILESTREAM_IOMGR_IOCOMPLETION',
        N'CHECKPOINT_QUEUE', N'REQUEST_FOR_DEADLOCK_SEARCH',
        N'XE_TIMER_EVENT',   N'XE_DISPATCHER_JOIN',
        N'LOGMGR_QUEUE',     N'FT_IFTS_SCHEDULER_IDLE_WAIT',
        N'BROKER_TASK_STOP', N'CLR_MANUAL_EVENT',
        N'CLR_AUTO_EVENT',   N'DISPATCHER_QUEUE_SEMAPHORE',
        N'TRACEWRITE',       N'XE_DISPATCHER_WAIT',
        N'BROKER_TO_FLUSH',  N'BROKER_EVENTHANDLER',
        N'FT_IFTSHC_MUTEX',  N'SQLTRACE_INCREMENTAL_FLUSH_SLEEP',
        N'DIRTY_PAGE_POLL',  N'SP_SERVER_DIAGNOSTICS_SLEEP')
    )
SELECT
    [W1].[wait_type] AS [WaitType],
    CAST ([W1].[WaitS] AS DECIMAL(14, 2)) AS [Wait_S],
    CAST ([W1].[ResourceS] AS DECIMAL(14, 2)) AS [Resource_S],
    CAST ([W1].[SignalS] AS DECIMAL(14, 2)) AS [Signal_S],
    [W1].[WaitCount] AS [WaitCount],
    CAST ([W1].[Percentage] AS DECIMAL(4, 2)) AS [Percentage],
    CAST (([W1].[WaitS] / [W1].[WaitCount]) AS DECIMAL (14, 4)) AS [AvgWait_S],
    CAST (([W1].[ResourceS] / [W1].[WaitCount]) AS DECIMAL (14, 4)) AS [AvgRes_S],
    CAST (([W1].[SignalS] / [W1].[WaitCount]) AS DECIMAL (14, 4)) AS [AvgSig_S]
FROM [Waits] AS [W1]
INNER JOIN [Waits] AS [W2] ON [W2].[RowNum] <= [W1].[RowNum]
GROUP BY [W1].[RowNum], [W1].[wait_type], [W1].[WaitS],
    [W1].[ResourceS], [W1].[SignalS], [W1].[WaitCount], [W1].[Percentage]
HAVING SUM ([W2].[Percentage]) - [W1].[Percentage] < 95; -- percentage threshold

My results are as follows:

WaitType             Wait_S   Resource_S  Signal_S  WaitCount  Percentage  AvgWait_S  AvgRes_S  AvgSig_S
SOS_SCHEDULER_YIELD  4162.52        3.64   4158.88    4450085       77.33     0.0009    0.0000    0.0009
ASYNC_NETWORK_IO      457.98      331.59    126.39     351113        8.51     0.0013    0.0009    0.0004
PAGELATCH_EX          252.94        5.14    247.80     796348        4.70     0.0003    0.0000    0.0003
WRITELOG              166.01       48.01    118.00     302209        3.08     0.0005    0.0002    0.0004
LCK_M_U               145.47      145.45      0.02        123        2.70     1.1827    1.1825    0.0002

So the largest wait by far is seemingly SOS_SCHEDULER_YIELD, and I googled around and found it typically relates to the CPU not being able to keep up.

I then ran this query multiple times in succession.

SELECT *
FROM sys.dm_os_schedulers
WHERE scheduler_id < 255

I know I'm supposed to be looking for schedulers with non-zero runnable_tasks_count or pending_disk_io_count, but it's basically zero almost all the time.

I should also mention that Max Degree of Parallelism was set to 1, since the Dynamics AX workload is typically OLTP in nature; changing it to 8 did not make much of a difference in the above wait stats, which came out almost exactly the same, with the same performance problems.

I'm sort of at a loss as to where to go from here; I basically have a SQL Server that is seemingly CPU-strapped but not waiting on runnable tasks or IO.

I do know that the IO subsystem of this SQL Server isn't very good, because running SQLIO on the drive containing the actual databases can produce pretty low numbers (think 10 MB/sec for certain types of reads/writes). That said, it doesn't seem like SQL Server is waiting on that, given the amount of memory on the server caching most of the databases.

Here is some environment information to help:

Production environment:

  • SQL Server
  • HP ProLiant DL360p Gen8
  • Intel Xeon E5-2650 0 @ 2.00GHz x 2 with hyperthreading (32 logical cores)
  • 184GB memory
  • Windows Server 2012
  • 2 instances of SQL Server 2012 Standard (RTM, unpatched)
  • RAID 1 279GB 15k drives as the C: drive, containing the databases and operating system
  • Page file and TempDB on distinct, separate drives (solid state)

My DEV:

  • Hyper-V hosted SQL Server and Dynamics AX 2012 AOS server
  • Core i7 3.4ghz with hyperthreading (8 logical cores)
  • 8GB of memory
  • Windows Server 2008 R2
  • SSD for the entire VM.

I would welcome any input on other things to look for.
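
For a cheap cross-check of CPU pressure, the share of total wait time spent as signal wait (time spent runnable, waiting for a scheduler) can be computed directly; a sketch:

-- Rough CPU-pressure indicator: the fraction of total wait time that
-- is signal (runnable-queue) wait. High values (a common rule of thumb
-- is above roughly 20-25%) are often read as CPU pressure, though the
-- threshold is only a heuristic.
SELECT CAST(100.0 * SUM(signal_wait_time_ms)
            / SUM(wait_time_ms) AS DECIMAL(5, 2)) AS pct_signal_waits
FROM sys.dm_os_wait_stats
WHERE wait_time_ms > 0;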

How to do an in-place upgrade of SQL Server 2005 Express to SQL Server 2008 R2 Express

Posted: 22 May 2013 02:06 PM PDT

I would like to upgrade my SQL Server 2005 Express instance which is installed using this command:

SQLEXPR.EXE /qs /action=INSTALL /IACCEPTSQLSERVERLICENSETERMS /FEATURES=SQLEngine /INSTANCENAME=ABCD /SQLSVCACCOUNT="NT AUTHORITY\SYSTEM" /SECURITYMODE=SQL /SAPWD=abc#Wd1234 /TCPENABLED=1 /NPENABLED=1

but when I am trying to upgrade the server instance ABCD with this command

SQLEXPR_x64_ENU.exe /q /ACTION=upgrade /INSTANCENAME=ABCD /IACCEPTSQLSERVERLICENSETERMS=1  

the instance ABCD is not upgraded to SQL Server 2008 R2 Express.

When I check the Add/Remove Programs list in the OS, it shows both SQL Server 2005 and 2008 R2 installed (so I may be doing a side-by-side install rather than an in-place upgrade). When I click the OS Start button and go to the SQL Server 2008 R2 folder in All Programs, it has a Configuration Tools subfolder containing the SQL Server Installation Center. When I open the Installation Center, it shows various options to install SQL Server 2008 R2, to upgrade from 2005, etc. I am confused whether SQL Server 2008 R2 is even installed on my machine after running the second command above. Please help me figure out where I am going wrong.
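
One quick way to see which engine the ABCD instance is actually running after the attempted upgrade is to connect to it and query SERVERPROPERTY; for example:

-- ProductVersion 9.00.xxxx = SQL Server 2005,
-- ProductVersion 10.50.xxxx = SQL Server 2008 R2.
SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('Edition')        AS Edition,
       SERVERPROPERTY('InstanceName')   AS InstanceName;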

How do I turn off ALTER DATABASE script output in VS 2012?

Posted: 22 May 2013 03:02 PM PDT

I am using SQL Server 2005 in conjunction with Visual Studio 2012 and a SSDT database project.

When I publish my VS project and generate the script to run against my database, VS includes the following in the script. I really don't want the script to go changing database properties, particularly the PAGE_VERIFY setting. I notice that the script doesn't set the properties back the way it found them. How do I turn this off? I have spent time in the Tools->Options dialog and I just don't see the setting.

IF EXISTS (SELECT 1
           FROM   [master].[dbo].[sysdatabases]
           WHERE  [name] = N'$(DatabaseName)')
BEGIN
    ALTER DATABASE [$(DatabaseName)]
        SET ANSI_NULLS ON,
            ANSI_PADDING ON,
            ANSI_WARNINGS ON,
            ARITHABORT ON,
            CONCAT_NULL_YIELDS_NULL ON,
            QUOTED_IDENTIFIER ON,
            ANSI_NULL_DEFAULT ON,
            CURSOR_DEFAULT LOCAL
        WITH ROLLBACK IMMEDIATE;
END
GO

IF EXISTS (SELECT 1
           FROM   [master].[dbo].[sysdatabases]
           WHERE  [name] = N'$(DatabaseName)')
BEGIN
    ALTER DATABASE [$(DatabaseName)]
        SET PAGE_VERIFY NONE
        WITH ROLLBACK IMMEDIATE;
END
GO

How to achieve this sql server replication scenario?

Posted: 22 May 2013 07:51 PM PDT

I want to achieve the scenario illustrated below via SQL Server replication.

[Diagram: Table A data flows from both Client A and Client B into Server]

Client A and Client B are client databases containing same article Table A. This article needs to be published by Client A and Client B whose changes need to be merged at the server (direction of arrows represent data flow).

Server is the destination database where article Table A should contain the changes made by both Client A and Client B. The trio consisting of Server, Client A and Client B are running Sql Server 2008 R2.

This is what I've tried, in vain: I made Client A and Client B publishers and initially subscribed Server to Client A (went well). Next I tried subscribing Server to Client B (went haywire; I found this MSDN post where I figured out the reason behind the failure).

I still need to achieve the above diagrammed scenario. Is this even possible? If yes, any ideas how?

Thanks for your time.

Is it "legal" to CREATE and DROP #SomeTable more than once?

Posted: 22 May 2013 06:46 PM PDT

I've got my code sort of segregated as "coherent blocks" that I can insert into a longer "configuration script" over and over, and one of the patterns I'm using is this:

CREATE TABLE #WidgetSetting
(
    WidgetID bigint not null,
    Name nvarchar(100) not null,
    Value nvarchar(max) not null,
    CreateDate datetime not null
)

INSERT VALUES

MERGE TABLES

DROP TABLE #WidgetSetting

But now SSMS is complaining that the object already exists by the next time the CREATE TABLE fires. What gives?

I think it's obvious I'm going to have to declare the table once at the beginning of the script and truncate instead of drop, but it's frustrating, naturally, to not be able to just drop the table and use the same name again.
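
A guard like the following sketch makes each block re-runnable; note that two CREATEs of the same temp table within a single batch still fail at parse time, so repeated blocks need a batch separator (GO) between them, or the truncate approach you mention:

-- Drop-if-exists guard (pre-2016 syntax, since DROP TABLE IF EXISTS
-- arrived later): safe to paste repeatedly across batches.
IF OBJECT_ID('tempdb..#WidgetSetting') IS NOT NULL
    DROP TABLE #WidgetSetting;

CREATE TABLE #WidgetSetting
(
    WidgetID bigint not null,
    Name nvarchar(100) not null,
    Value nvarchar(max) not null,
    CreateDate datetime not null
);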

The Login is from an untrusted domain - domain setup with computer level login

Posted: 22 May 2013 02:54 PM PDT

I have a Sql Server 2008 R2 database I'm attempting to connect to using ADO.NET using integrated security in the connection string.

The database resides on a 64 bit Windows Server 2003 machine. The client is a 64 bit Windows 7 Enterprise machine.

The specific error I receive is : SQL Server 2008: The login is from an untrusted domain and cannot be used with Windows authentication

The computer is added as a windows account on the sql server as DOMAIN.edu\ComputerName$ and has been granted full permissions on the target database.

Both the server and client are part of the same active directory domain. The computers can communicate with each other over remote desktop. The client machine can see network shares on the server machine, so I don't think network connectivity is a problem.

I didn't have a problem with this setup when the Sql Server 2008 R2 database was residing on a 64 Windows XP Professional machine and the client was the same machine.

What else can I verify or modify to make my current setup work for machine level authentication?

Connection pools being reset with Error: 18056, Severity: 20, State: 46. & Perfmon Counters not showing

Posted: 22 May 2013 04:55 PM PDT

We are using SQL authentication and .NET 4.0 connection strings to connect to an Enterprise Edition 2012 SP1 SQL Server on a Windows 2008 R2 Enterprise server. We use about 50 servers split into 8 different groups serving different parts of a website.

Our website uses this SQL Server to log visit-tracking data, and over the last few days it has spat out the following messages about resetting connection pools:

The client was unable to reuse a session with SPID 1327, which had been reset for connection pooling. The failure ID is 46. This error may have been caused by an earlier operation failing. Check the error logs for failed operations immediately before this error message.

Errorlog reads

Error: 18056, Severity: 20, State: 46.

The client was unable to reuse a session with SPID 959, which had been reset for connection pooling. The failure ID is 46. This error may have been caused by an earlier operation failing. Check the error logs for failed operations immediately before this error message.

Login failed for user 'xxxx'. Reason: Failed to open the database 'xxxxxxxx' configured in the login object while revalidating the login on the connection. [CLIENT: 10.xx.xx.xxx]

After some digging I found this document from the CSS blog:

http://blogs.msdn.com/b/psssql/archive/2010/08/03/how-it-works-error-18056-the-client-was-unable-to-reuse-a-session-with-spid-which-had-been-reset-for-connection-pooling.aspx

and this one by the scholar Aaron Bertrand (I know the error number is different, but the failure ID is the same and a number of the messages are identical):

http://sqlblog.com/blogs/aaron_bertrand/archive/2011/01/14/sql-server-v-next-denali-additional-states-for-error-18456.aspx

Failure ID 46 suggests that the login did not have permissions. Our logins default to the master database, and the DB name is specified in the connection string.

I wanted to check the number of connection pools, so I looked at all the Perfmon counters for ".Net Data Provider for SqlServer". It only gave me the option of defaultdomain9675 for the instance, so I selected that, assuming it is a system-generated ID name for our datacentre network. Unfortunately, all of the counters read zero. On one of our other main servers the connection pools hover around 10, which is what I expected to see on a healthy server with that kind of load.

My question is threefold:

1- Can anyone suggest why the Windows 2008 R2 Server is not showing ".Net Data Provider for SqlServer"?

2- Has anyone experienced this? I believe that the login not having permissions is a red herring.

3- If different groups of web servers have the same connection string syntax but with slightly different whitespace, would this cause the server to use another connection pool ?

How to override SSIS 2008 package config file path?

Posted: 22 May 2013 06:09 PM PDT

I have an SSIS 2008 package with a single configuration named Config. I would like to use the dtexec /SET switch to change the path of the config file that it uses so our developers that build the package can have it point to their local path, but when we deploy it to our production servers we can specify a new path for dtexec to load the config from so it can contain prod settings.

According to MSDN we should be able to do this using the /SET command as it mentions here:

Scenario: Using the /Set option to change the location from which the utility loads design-time configurations.

Behavior in SQL Server Integration Services 2008: Succeeds. After the utility applies the run-time options, the utility reloads the design-time configurations from the new location.

I've tried:

dtexec /FILE "PackagePath" /SET "\Package.Configurations[Config].Properties[ConfigurationString]";"ConfigFilePath"

and it fails with:

The package path referenced an object that cannot be found: "\Package.Configurations[Config].Properties[ConfigurationString]". This occurs when an attempt is made to resolve a package path to an object that cannot be found.

I've also pasted a redacted version of my .dtsx file here in case anyone thinks I'm just mistyping something. And here is the contents of config that I'm trying to change it to.

I realize there are other ways of doing indirect configs using environment variables, etc... but that is not what I'm interested in so please do not offer up those suggestions. I'm specifically trying to use the /SET command since Microsoft has explicitly stated that we should be able to. Due to the way our environment is setup this method would be the best for us.

Why would SequelPro only import 23k rows out of 130k?

Posted: 22 May 2013 07:09 PM PDT

I use SequelPro for MySQL on Mac OS X, and I used the import function to upload a 130k-row .csv file to my database. Everything seemed to work fine, then I got the message: File Read Error: An error occurred when reading the file, as it could not be read using the encoding you selected (Auto-detect - Unicode (UTF-8)). Only 23,000 rows were imported.

When I hit "Ok," everything else seems to work relatively fine -- I'm just missing about 107,000 rows.

Any idea as to what it could be -- maybe if I use something other than auto-detect during the import? I thought it might have been some extra commas floating around in the actual .csv file, which there were, but I got rid of those and the same thing happened.

Thanks!

--

Edit: I also don't know if this is an option for everybody, but the problem stems from exporting from a Windows machine to a Mac and then opening the file in Excel for Mac; it seems to go away if I just use OpenOffice instead.

Why some of mysql system variables are readonly

Posted: 22 May 2013 04:09 PM PDT

There are more or less 277 MySQL server system variables. Some of these variables are dynamic and some are not.

I don't understand the reasoning behind read-only variables. Why are a few of the MySQL system variables read-only? What would happen if those variables were made dynamic?

Is there any deep reason for the read-only variables?

For now we can consider :

  • innodb_buffer_pool_size
  • innodb_checksums
  • datetime_format
  • log-bin

and there are many other variable we can find at http://dev.mysql.com/doc/refman/5.5/en/server-system-variables.html

Of course, I know we can't change a variable like version. But many other variables, like log-bin, are not dynamic. It does not make sense to me that general_log and log_slow_queries are dynamic while log-bin is not, and likewise for many other variables.
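
The split is easy to see from a client session; a sketch against MySQL 5.5, where the buffer pool size is one of the read-only examples listed above:

SET GLOBAL general_log = 'ON';   -- dynamic: takes effect immediately

SET GLOBAL innodb_buffer_pool_size = 8 * 1024 * 1024 * 1024;
-- ERROR 1238 (HY000): Variable 'innodb_buffer_pool_size' is a read only
-- variable. (The buffer pool only became resizable online in later
-- MySQL versions; in 5.5 it is allocated once at startup.)

Broadly, variables stay read-only when changing them would require re-doing work done once at startup (allocating the buffer pool, opening the binary log, sizing redo log files), whereas flags like general_log only toggle behavior and can flip at runtime.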

MySQL server crashed.

Posted: 22 May 2013 03:09 PM PDT

Help! I managed to crash MySQL last night. I am on a Mac using the native version that came with Mountain Lion. I was upgrading from 5.5 to 5.6. I have followed instructions in this forum to delete the installation, but trying to re-install 5.5 says that there is a newer version and won't install. Trying to install 5.6 fails. I found this error in the console:

4/21/13 10:16:56.000 PM kernel[0]: exec of /Volumes/mysql-5.6.11-osx10.7-x86/mysql-5.6.11-osx10.7-x86.pkg/Contents/Resources/preflight denied since it was quarantined by TextEdit and created without user consent, qtn-flags was 0x00000006

Help me please?? I am stuck and in a world of hurt and despair.

SQL Server replication conflicts after migration from 2000 to 2008

Posted: 22 May 2013 05:09 PM PDT

I got a suggestion over at Stack Overflow to post here... grateful for any and all help.

Please bear with me I think this might take a while to explain. For many years now my company has hosted a solution for a client involving a complex web application with smaller mobile solution consisting of IIS 6 for the web app, SQL Server 2000 on its own server and Visual Studio 2005 Pocket PC app replicating with SQL Server via Merge Replication. This whole time the mobile solution has been very solid and did not require many updates so we have replicated with sscesa20.dll the entire time.

We recently migrated this entire solution as follow:

  • Web Box - New Win Server 2008 R2 running IIS 7.5
  • SQL Server Box - New Win Server 2008 R2 running SQL Server 2008
  • Mobile app - small updates converted to Visual Studio 2008 and Windows for Mobile 5

The new web box received the 64 bit version of SQL Server Compact 3.5 tools and we now call sqlcesa35.dll from the mobile device to perform merge replication.

The basic idea of the entire process is that mobile field users get assigned safety inspections to perform on buildings. When a facility in the system needs an inspection an inspection record is created via the web app in the DB. A status flag is set such that the HOST_NAME() is utilized to make sure only records for a given inspector with this particular status will let them show up on their mobile device. The user can synch multiple times in the field sending their data up to the SQL Server/web app and receive more inspections down or other updates such as look up table data...typical merge replication here and has been working great for years. Once the field user changes the status of the inspection, it will travel from mobile device to SQL Server database and be removed from their iPaq. The inspection has additional work flow on the web app from here on out.

Now on to the problem. We migrated everything, publishing the exact same subset of tables with the same joins/filters. As far as I can tell, the settings on the publication are the same. However, when a user gets a new inspection down to the handheld for the very first time, enters data, then synchronizes back to the database, every row has a conflict. Since we have default conflict resolution, the publisher wins and the data collected in the field is lost. The inspection now looks blank just as it did when it first came down to the mobile device. If the user syncs again, with or without changes on the mobile (subscriber), all is well. Any future changes from the mobile device are intact.

It is as if the web/db data is newer then the hand held data. I am 100% sure it is not. I have looked at table triggers, web app logic, etc. We were very careful not to include any application changes to DB/web app/mobile app with respect to data manipulation during this migration.

Here is a summary of the order of operation:

New row created in the database >> Mobile user receives data >> mobile user updates data >> synchronizes - data is lost. Conflicts show up for all data lost.

From here on out any additional mobile changes are captured. Merge replication works in both directions flawlessly.

Thanks for taking the time to read; please help. I am stuck after 3 days.

How to restore a filegroup from its backup in SQL Server

Posted: 22 May 2013 06:43 PM PDT

I need guidance on how to restore a filegroup in a database from a filegroup backup taken on another server hosting the same database. Is it possible to restore the filegroup by running RESTORE DATABASE with a FILEGROUP clause?

I was trying to restore the filegroup from its backup like below:

RESTORE DATABASE <DBName> FILEGROUP = 'MFG12'
FROM DISK = 'E:\MFG12.bak'
WITH RECOVERY

This fails with: "The backup set holds a backup of a database other than the existing DBName database."

When I restore into the same DB the backup was taken from, the restore works perfectly:

Processed 840976 pages for database 'DB', file 'MFG12_File' on file 1.
RESTORE DATABASE ... FILE=<name> successfully processed 840976 pages in 59.375 seconds (110.654 MB/sec).

But when I run a SELECT query against a partition on that filegroup, it throws this error:

One of the partitions of index '' for table 'xyz' (partition ID 72057594656129024) resides on a filegroup ("MFG12") that cannot be accessed because it is offline, restoring, or defunct. This may limit the query result.

When I check the status of all the filegroups, I see that filegroup MFG12 has status = "Restoring". How do I set it back to online mode?

When I try to restore the log file as well, I get the error below:

The roll forward start point is now at log sequence number (LSN) 66787000000001800001. Additional roll forward past LSN 66787000000008900001 is required to complete the restore sequence.

Please help: how do I bring back the missing filegroup data, given that the filegroup backup was taken from the same database on another server (or from the same server's DB)?
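
For reference, a filegroup restore generally has to come from a backup of the same database (same database GUID, hence the "database other than the existing" error for the foreign backup), and the filegroup stays in the Restoring state until log backups roll it forward; the usual sequence looks roughly like this sketch, with placeholder names and paths:

-- Restore the filegroup from a backup of THIS database, then apply
-- all log backups taken since that filegroup backup.
RESTORE DATABASE MyDB
    FILEGROUP = 'MFG12'
    FROM DISK = 'E:\MFG12.bak'
    WITH NORECOVERY;

RESTORE LOG MyDB
    FROM DISK = 'E:\MyDB_log.trn'
    WITH RECOVERY;   -- last log restore brings the filegroup online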

Will a primary key be added as a clustered index?

Posted: 22 May 2013 04:27 PM PDT

I've inherited a database where no primary keys were defined on the tables. There are also no clustered indexes assigned to the tables.

If I perform an ALTER TABLE to assign a primary key, will this result in SQL Server also creating a clustered index? If so, should I expect slowness on the database due to IO from the data being repositioned on the hard disk?
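
Yes: a PRIMARY KEY constraint is backed by a clustered index by default when no clustered index exists on the table. One way to keep the choice explicit rather than relying on the default is to name the index type in the constraint; a sketch with placeholder names:

-- Explicitly clustered (what the default would do here); this rebuilds
-- the heap into a clustered index, which does move data pages.
ALTER TABLE dbo.SomeTable
    ADD CONSTRAINT PK_SomeTable PRIMARY KEY CLUSTERED (ID);

-- Alternative: keep the heap and avoid repositioning the data.
-- ALTER TABLE dbo.SomeTable
--     ADD CONSTRAINT PK_SomeTable PRIMARY KEY NONCLUSTERED (ID);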

Tools and methodologies to keep two DBs aligned

Posted: 22 May 2013 02:09 PM PDT

Two DBs have schemas that represent the same semantic objects. The first is the production DB (non-RDBMS, an in-house-implemented in-memory DB with a shitload of RAM). The other is Postgres.

Once in a while the production DB is changed (schema upgrade).

Question: what is the best practice to keep DBs of different types aligned semantically?

How to connect to a Database made by Oracle SQL Database?

Posted: 22 May 2013 04:03 PM PDT

I am fairly new at this, so if you could keep that in mind in your answers, it would be much appreciated. I installed Oracle SQL Database on my Windows PC. It came in two zip files. I installed it, and the online portion of it works fine. I can log in with username sys and password **. What I am trying to do is connect to this newly created database from another computer through SQL Developer. I have read that in order to do this, you need to change the hostname of the database from "localhost" to an IP address. How do I do that, and is there anything else I need to do to make this work?

I also found this LINK. Is this something I should do? I do not have a Domain though.

listener.ora

# listener.ora Network Configuration File: C:\app\hyderz\product\11.2.0\dbhome_2\network\admin\listener.ora
# Generated by Oracle configuration tools.

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = CLRExtProc)
      (ORACLE_HOME = C:\app\hyderz\product\11.2.0\dbhome_2)
      (PROGRAM = extproc)
      (ENVS = "EXTPROC_DLLS=ONLY:C:\app\hyderz\product\11.2.0\dbhome_2\bin\oraclr11.dll")
    )
  )

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1522))
      (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1522))
    )
  )

ADR_BASE_LISTENER = C:\app\hyderz

tnsnames.ora

# tnsnames.ora Network Configuration File: C:\app\hyderz\product\11.2.0\dbhome_2\network\admin\tnsnames.ora
# Generated by Oracle configuration tools.

LISTENER_ORCL =
  (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1522))

ORACLR_CONNECTION_DATA =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1522))
    )
    (CONNECT_DATA =
      (SID = CLRExtProc)
      (PRESENTATION = RO)
    )
  )

ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1522))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl)
    )
  )

MySQL: Lock wait timeout exceeded

Posted: 22 May 2013 08:09 PM PDT

I have a developer who has been trying to alter a large table (~60M rows). Via Liquibase/JDBC they're running:

ALTER TABLE foo DROP FOREIGN KEY fk_foo_1;  

Today while it was running I checked in on it periodically; everything looked normal: the query was running, in state "copying to tmp table", and I could see the temp table on disk getting larger and larger (still plenty of free space). I could see other processes blocked waiting for this table to be unlocked. Finally, after about 4.5 hours, they got the "lock wait timeout exceeded; try restarting transaction" error. This is actually the 2nd time they've tried, and it seems to fail right about when I would expect it to complete.

innodb_lock_wait_timeout is set to the default 50; I can't imagine how a query that ran for so long could be affected by it. No errors logged, no deadlocks or other weirdness seen in 'show engine innodb status'. Can anyone help me with other ideas? I'm fairly stumped on this one.
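
If it happens again while the ALTER is still running, the InnoDB transaction tables can show which session is blocking which at the moment the ALTER starts waiting; a sketch (these information_schema tables are available in 5.5 with InnoDB):

-- Map waiting transactions to the transactions blocking them.
SELECT r.trx_mysql_thread_id AS waiting_thread,
       r.trx_query           AS waiting_query,
       b.trx_mysql_thread_id AS blocking_thread,
       b.trx_query           AS blocking_query
FROM information_schema.INNODB_LOCK_WAITS w
JOIN information_schema.INNODB_TRX r ON r.trx_id = w.requesting_trx_id
JOIN information_schema.INNODB_TRX b ON b.trx_id = w.blocking_trx_id;

One plausible reading of the timing: the table copy itself takes no table lock that conflicts, but the final rename/swap at the end of the copy does, so a long-running transaction holding even a read lock on the table can make the ALTER hit the 50-second timeout right as it finishes.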

thanks

Do you know an easy way to generate one record for each hour of the past 12 hours?

Posted: 22 May 2013 05:03 PM PDT

I have a report that shows the count of events for the past 12 hours, grouped by the hour. Sounds easy enough, but what I am struggling with is how to include records that cover the gaps.

Here is an example table:

Event
(
    EventTime datetime,
    EventType int
)

Data looks like this:

'2012-03-08 08:00:04', 1
'2012-03-08 09:10:00', 2
'2012-03-08 09:11:04', 2
'2012-03-08 09:10:09', 1
'2012-03-08 10:00:17', 4
'2012-03-08 11:00:04', 1

I need to create a result set that has one record for every hour of the past 12 hours, regardless of whether there were any events during that hour.

Assuming the current time is '2012-03-08 11:00:00', the report would show (roughly):

Hour  EventCount
----  ----------
23    0
0     0
1     0
2     0
3     0
4     0
5     0
6     0
7     0
8     1
9     3
10    1

I came up with a solution that uses a table with one record for every hour of the day. I managed to get the results I was looking for using a UNION and some convoluted CASE logic in the WHERE clause, but I was hoping somebody had a more elegant solution.
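
For comparison, here is a sketch of one common alternative (written as T-SQL for SQL Server 2008+, since the DBMS isn't stated): derive the twelve hour buckets on the fly and LEFT JOIN the events onto them, so empty hours survive with a zero count.

-- Build the 12 most recent completed hour-start values, then
-- outer-join events into them; COUNT over a nullable column
-- yields 0 for hours with no events.
;WITH hours AS (
    SELECT DATEADD(hour, -v.i,
           DATEADD(hour, DATEDIFF(hour, 0, GETDATE()), 0)) AS HourStart
    FROM (VALUES (1),(2),(3),(4),(5),(6),
                 (7),(8),(9),(10),(11),(12)) AS v(i)
)
SELECT DATEPART(hour, h.HourStart) AS [Hour],
       COUNT(e.EventTime)          AS EventCount
FROM hours h
LEFT JOIN Event e
       ON e.EventTime >= h.HourStart
      AND e.EventTime <  DATEADD(hour, 1, h.HourStart)
GROUP BY h.HourStart
ORDER BY h.HourStart;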

3rd party dll in SQL Server CLR

Posted: 22 May 2013 04:41 PM PDT

I need to use a third-party DLL in C# trigger code running in SQL Server CLR.

But when I try to add a reference, Visual Studio only shows a handful of DLLs from SQL Server.

How can I add my third-party DLL to SQL Server?
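
In general the DLL has to be registered in the database with CREATE ASSEMBLY (along with any non-framework dependencies) before CLR code can reference it; a sketch, where the database name, path, and permission set are assumptions about your setup:

-- Third-party code usually needs more than the SAFE permission set;
-- EXTERNAL_ACCESS or UNSAFE require the database to be TRUSTWORTHY
-- (or the assembly to be signed and granted the permission).
ALTER DATABASE MyDb SET TRUSTWORTHY ON;

CREATE ASSEMBLY ThirdPartyLib
FROM 'C:\libs\ThirdParty.dll'          -- placeholder path
WITH PERMISSION_SET = EXTERNAL_ACCESS; -- or UNSAFE if the DLL needs it

Once the assembly is registered in the database, the SSDT/Visual Studio database project can reference it as a database reference rather than a plain file reference.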
