Tuesday, April 30, 2013

[SQL Server 2008 issues] "SET STATISTICS TIME" and "SET STATISTICS IO" questions


"SET STATISTICS TIME" and "SET STATISTICS IO" questions

Posted: 29 Apr 2013 04:56 PM PDT

1) As I understand from this article ( http://www.sqlservercentral.com/articles/Performance+Tuning/measuringperformance/1323/ ), "elapsed time" is the total time of a statement's execution. How, then, can it be less than CPU time, which it should include?

    DBCC execution completed. If DBCC printed error messages, contact your system administrator.
    SQL Server Execution Times: CPU time = 16 ms, elapsed time = 5 ms.

2) I receive the message below for a statement executed right after DBCC DROPCLEANBUFFERS. Why 193 and 203? Where does it read the remaining 10 pages from, when I've just cleaned the cache?

    Table 'InsertTest'. Scan count 1, logical reads 203, physical reads 0, read-ahead reads 193, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
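For reference, the counters quoted above come from turning both options on before running the statement being measured; a minimal sketch (the COUNT query is only a stand-in for the poster's actual statement against InsertTest):

```sql
SET STATISTICS TIME ON;
SET STATISTICS IO ON;

DBCC DROPCLEANBUFFERS;                 -- start from a cold buffer cache

SELECT COUNT(*) FROM dbo.InsertTest;   -- the statement under test

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
```

(One known way CPU time can exceed elapsed time: on a parallel plan, CPU time is summed across all worker threads, while elapsed time is wall-clock.)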

SQL Server Activity Monitor

Posted: 29 Apr 2013 05:25 PM PDT

Good day. We are running SQL Server 2008 R2 as a named instance on a virtual server. I am able to view Activity Monitor, except that % Processor Time is grayed out. I was granted sysadmin privileges in SQL Server as well as VIEW SERVER STATE permission. Any ideas? There is an office policy that only the server administrators may have admin privileges on the machine itself.

MDX Query

Posted: 29 Apr 2013 04:03 PM PDT

Hi,

While running the MDX query below to return previous-month data, I am getting NULL for the previous period:

    WITH
    MEMBER [Measures].[T1_pytd_sales] AS
        SUM(ParallelPeriod([Time].[Fiscal].[Year], 1, [Time].[Fiscal].CurrentMember), [Measures].[Spend])
    MEMBER [Measures].[T1_ytd_sales] AS
        SUM([Time].[Fiscal].CurrentMember, [Measures].[Spend])
    MEMBER [Measures].[DiffTier1] AS
        '[Measures].[T1_ytd_sales] - [Measures].[T1_pytd_sales]'
    SELECT
        Descendants({ClosingPeriod([Time].[Fiscal].[Year])}, [Time].[Fiscal].[Month])
        * {[Measures].[T1_ytd_sales], [Measures].[T1_pytd_sales], [Measures].[DiffTier1]}
        ON COLUMNS
    FROM [Cube]

Actually we have two time dimensions. It works fine for one dimension, but returns NULL for the other time dimension. For T1_pytd_sales I am getting NULL month-wise and even yearly. Can anyone suggest what the reason could be?

Thanks in advance.

Moving multiple similar access databases to SQL

Posted: 29 Apr 2013 08:35 AM PDT

I'm planning on moving around 4 or 5 MS Access databases to SQL Server and need some insight on the best approach.

First, the situation: I have a client database with pretty generic information about some people (jobs, referral info, etc.); I will refer to this as my master database. The other databases (let's call them program databases) all have similar information, but are not connected to the master.

I'm trying to figure out whether moving them all into one database is the answer, or whether each should keep its own database on SQL Server.

The master will hold everything client/job/referral related and will definitely need to be available for read/write by all the program databases. Each time a client enrolls in a program, they will have to exist in the master first, and then they become available to the programs.

These program databases are for grant-based programs and can easily come and go, so there is a good chance that at some point the tables or databases will no longer be used once a grant expires, or for some other reason. One other gotcha: I'm a creature of habit, and each of the program databases has a couple of tables with the same names.

That being said, I just need a little direction from anyone who has pros and cons on separating these out by database versus by table. If more specifics are needed, let me know; I didn't want to get too detailed and make my question convoluted.

Thanks,
Jeff
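One way to keep identically named tables apart inside a single database is a schema per program; a hypothetical sketch (schema and table names are illustrative, not from the post):

```sql
CREATE SCHEMA ProgramA AUTHORIZATION dbo;
GO
CREATE SCHEMA ProgramB AUTHORIZATION dbo;
GO
-- the "same name" tables no longer collide:
CREATE TABLE ProgramA.Enrollment (ClientID int NOT NULL, EnrolledOn date NOT NULL);
CREATE TABLE ProgramB.Enrollment (ClientID int NOT NULL, EnrolledOn date NOT NULL);

-- retiring an expired program is then self-contained:
-- DROP TABLE ProgramA.Enrollment;
-- DROP SCHEMA ProgramA;
```

This keeps cross-program joins against the shared master tables cheap (same database), while still giving each grant program a disposable namespace.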

SSIS 2008 - renaming files

Posted: 29 Apr 2013 03:28 AM PDT

Hi,

I have placed a File System Task inside a Foreach Loop container, and the operation is Rename File. I'm wondering if it's possible to find all files (regardless of file extension) that contain "circle" and then replace it with "square"? If so, how can I accomplish this?

Thanks!

SQL QUERY to find maximum and minimum column value of a primary key column in a year

Posted: 29 Apr 2013 09:19 AM PDT

Hi Experts,

    C_KEY  ENTER_DATE
    1      2008-12-31 00:00:00.000
    2      2008-12-31 00:00:00.000
    3      2008-12-31 00:00:00.000
    4      2009-12-31 00:00:00.000
    5      2009-12-31 00:00:00.000
    6      2009-12-31 00:00:00.000
    7      2010-12-31 00:00:00.000
    8      2010-12-31 00:00:00.000
    9      2010-12-31 00:00:00.000
    10     2010-12-31 00:00:00.000

Can someone help me find the min and max C_KEY for all years? For example:

    YEAR  MIN C_KEY  MAX C_KEY
    2008  1          3
    2009  4          6
    2010  7          10
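The shape of the requested output maps directly onto a GROUP BY on the year; a minimal sketch (the table name is invented, since the post does not give one):

```sql
SELECT YEAR(ENTER_DATE) AS [YEAR],
       MIN(C_KEY)       AS [MIN C_KEY],
       MAX(C_KEY)       AS [MAX C_KEY]
FROM   dbo.YourTable            -- hypothetical table name
GROUP  BY YEAR(ENTER_DATE)
ORDER  BY [YEAR];
```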

Join Issues - Customer Penetration project

Posted: 29 Apr 2013 01:48 PM PDT

Maybe I'm just trying to get confirmation that this is possible. Essentially I'm trying to run the main query for a month-or-more date range and join it to another dataset (basically the same data), but with rolled-up sales giving a total sales number by customer (no parameters). I know, you're thinking: why not eliminate the join and just start with the wider dataset? Well, yes, and I might still need to do that, but because of the way several queries have already been built in SSRS, and management is asking after the fact to add a Customer Penetration field to the report, I'm trying to add this join to what is already built.

So I will try to summarize how I think the select should be built (please be gentle, I'm still fairly new at SSIS). To get customer penetration, the formula is: customer sales on one or more product group(s) * 100 / total customer sales (same customers). So this is where I have to join these specific customers.

    DECLARE any variables...
    SELECT
        branch,
        customer,
        product group,
        Monthly Sales,
        Monthly Total Sales,
        Customer Penetration calc
    FROM (
        SELECT branch, customer, product group, etc....
        FROM salestrans
        INNER JOIN to date...
        INNER JOIN to customer...
        LEFT JOIN to product...
        LEFT OUTER JOIN to billbacks...
        FULL OUTER JOIN (  -- total sales for customer penetration
            SELECT branch, customer, total sales, etc.
        ) AS TotalCustSales
            ON  TotalCustSales.branch = salestrans.branch
            AND TotalCustSales.CustID = salestrans.CustID
            AND TotalCustSales.Branch = customer.CompanyID
            AND TotalCustSales.CustID = customer.CustomerID
        WHERE salestrans conditions, etc...
        GROUP BY ...
    ) AS CustProdGrpSales
    WHERE variable/parameter conditions...

Keep in mind that the main query and the customer penetration query run fine separately, but when joined I'm getting a divisor = 0 error, which tells me the total sales figure is not coming out correctly. Again, the total sales select is essentially the same as the main query, just stripped down to get customer sales totals.

I'm also getting an error at the end of the main select, outside the parentheses where I aliased it AS CustProdGrpSales. It says something like "No column name was specified for column 29 of 'CustVendProdGrpSales'". I haven't seen this one before, but it just means my syntax is off somewhere. This is the only line that has red squiggles, so I can't tell where the error lies. However, maybe someone has a better suggestion on how to build this? Thanks in advance!
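Both errors described above have common causes worth checking; a hedged sketch with invented column names: every expression in a derived table needs an explicit alias (otherwise "No column name was specified..."), and NULLIF guards the penetration division:

```sql
SELECT d.Branch,
       d.CustID,
       -- NULLIF turns a zero divisor into NULL instead of raising a divide-by-zero error
       d.GroupSales * 100.0 / NULLIF(d.TotalSales, 0) AS CustomerPenetration
FROM (
    SELECT s.Branch,
           s.CustID,
           SUM(s.GroupSales) AS GroupSales,   -- an unaliased expression here is what
           SUM(s.TotalSales) AS TotalSales    -- triggers the "column 29" error
    FROM dbo.salestrans AS s                  -- hypothetical columns on the post's table
    GROUP BY s.Branch, s.CustID
) AS d;
```

A zero (rather than wrong) TotalSales can also come from the FULL OUTER JOIN producing unmatched rows, so checking the join keys before dividing is worthwhile.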

Index maintenance increase log size

Posted: 28 Apr 2013 10:39 PM PDT

In SQL Server 2008 R2, we run an index maintenance job on Sundays. It rebuilds/reorganizes the indexes ONLINE on a database in a loop. The largest index is 16 GB, but my log grows up to 160 GB. There are around 200 indexes. I understand that an ONLINE rebuild causes the log to grow, but I also think that once the rebuild of one index is complete, the log space should become available for reuse, and that doesn't seem to be happening. Could the reason be that, since the rebuilds happen in a loop and the loop runs fast, there are essentially several rebuilds' worth of log active at once? Would putting a wait of, say, 10 seconds after each rebuild help?
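To see what is actually preventing log-space reuse between rebuilds, two standard checks can be run during the job (a sketch, not specific to the poster's setup):

```sql
-- shows, per database, why the log cannot currently be truncated
-- (e.g. LOG_BACKUP, ACTIVE_TRANSACTION, REPLICATION)
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases;

-- current log size and percentage in use for every database
DBCC SQLPERF(LOGSPACE);
```

If log_reuse_wait_desc says LOG_BACKUP in the FULL recovery model, more frequent log backups during the maintenance window would free space for reuse regardless of any wait between rebuilds.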

Reporting in records in the right order

Posted: 29 Apr 2013 12:21 AM PDT

hi guys

I have some data from a system upgrade where records didn't get the right sequence numbers:

[code="sql"]create table #test
(
    ref int,
    oldvalue nvarchar(10),
    newvalue nvarchar(10),
    seqno int
)

insert into #test (ref,oldvalue,newvalue,seqno) values (100,null,'A',1)
insert into #test (ref,oldvalue,newvalue,seqno) values (100,'B','C',2)
insert into #test (ref,oldvalue,newvalue,seqno) values (100,'A','B',3)
insert into #test (ref,oldvalue,newvalue,seqno) values (100,'B','X',4)
insert into #test (ref,oldvalue,newvalue,seqno) values (100,'C','B',5)[/code]

The correct order for these records is actually 1, 3, 2, 5, 4 (oldvalue = newvalue from the previous record). The challenge is that from seqno 3 you have two potential choices (2 or 4), but only the combination above collects all of the records.

I can't see how to do this without resorting to cursors or other RBAR solutions. Any help would be appreciated. The dataset is approximately 900K records spread over 250K ref#s.
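A set-based starting point is a recursive CTE that walks the oldvalue-to-newvalue chain. This is only a sketch: it assumes each value occurs at most once as an oldvalue per ref, which the sample data deliberately violates ('B' appears twice), so on that data it will branch and can hit the default MAXRECURSION limit; extra logic to track already-visited seqnos would be needed for a full solution.

```sql
;WITH chain AS
(
    -- anchor: the record with no predecessor
    SELECT ref, seqno, newvalue, 1 AS pos
    FROM #test
    WHERE oldvalue IS NULL

    UNION ALL

    -- step: the next record's oldvalue equals the current newvalue
    SELECT t.ref, t.seqno, t.newvalue, c.pos + 1
    FROM chain AS c
    JOIN #test AS t
      ON t.ref = c.ref
     AND t.oldvalue = c.newvalue
)
SELECT ref, seqno, pos
FROM chain
ORDER BY ref, pos;
```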

TDE and Page-Level Compression

Posted: 29 Apr 2013 06:03 AM PDT

I cannot find any information specifically on this, but I was curious whether anyone knows if removing page-level compression on an encrypted database will decrypt it? We had a database decrypt itself this weekend after a deployment, and the only thing that logically makes sense to me (since there was no specific decrypt command issued) is that removing page-level compression required the database to be decrypted. Any thoughts?

Database Engine vs. UI SQL Statements

Posted: 29 Apr 2013 04:43 AM PDT

I recently had a conversation with a friend of mine who is an Oracle developer. I was describing how we develop in MS SQL and that we create stored procedures to handle all of our data manipulation. He immediately made a face at me and said, "Why in the world would you do that?!?" His claim is that it is far better to pass SQL statements directly from the code, reducing the MS SQL Server workload.

I stopped for a minute to think: could this be true? Have I been developing the wrong way all these years? Unfortunately, I was not fast enough to respond, but after some thought I am not sure I would change a thing, since I do not want my UI developers to have to understand the structure of my database. In addition, I have customer sites hosting their own copy of our software: if I make a SQL change, I just send a script and all is good. My friend's way, I would have to send out a completely new UI to all of my customers and go through a full implementation process.

Can anyone shed some light on best practices for coding a UI that interacts with MS SQL, and whether or not one should use stored procedures for the SELECT/INSERT/UPDATE/DELETE statements? I just can't believe that it is good practice to code your SQL in the application and not use the database. After all, the database engine is going to have to run whatever statement is passed to it, and there is no precompiled execution plan for that SQL string.

Need to Decrypt an Encrypted column that is selected in a view

Posted: 29 Apr 2013 05:50 AM PDT

Hi,

I have a document imaging application that connects to my database through a view, and it can only use the view. One of the table columns selected in the view has now been encrypted for PCI and HIPAA compliance. I need to be able to decrypt this column in the view, but by design the view does not let me open the key. I've also read that it is not recommended to use OPENROWSET to execute a stored procedure inside a view. Does anyone have a suggestion for accomplishing this? Thanks in advance.

SQL Server out of space question

Posted: 29 Apr 2013 12:52 AM PDT

Hello Experts,

One of our clients' servers ran into this issue. The server has both the application and SQL Server installed on it. The hard drive is completely full, and we don't know what is causing it, but because it is full we can't log in to SQL Server: there is not enough space to write to the transaction log. We can't stop the MSSQLSERVER service because there's no space, and there is nothing I know of that we can clear to free enough space to deal with the transaction log. What else could I do, other than attaching another HDD and moving/adding a data/log file to the new volume?

Thanks in advance!

Huge performance impact after migration from SQL 2005 to SQL 2008 R2 - Urgent

Posted: 29 Apr 2013 03:13 AM PDT

We recently migrated our database from SQL 2005 to SQL 2008 R2. The application is performing fast, as expected. However, when any queries are called through linked servers, they just hang. A SELECT TOP 5 * across the linked server takes forever and, when killed, takes forever to roll back. This impacts the whole system big time. Suggestions?

Query re-write application

Posted: 29 Apr 2013 03:32 AM PDT

Just wondering if anyone knows of an application that can intercept a certain query on the fly and rewrite it? We have a troublesome query in our application that uses an INNER LOOP JOIN that the vendor has not been able to correct. It is dynamic and embedded deeply in their code. I just need to remove the LOOP keyword and the query flies. Thank you!
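SQL Server has no supported way to strip a hint embedded in the statement text itself, but a plan guide can attach OPTION hints to a query you cannot edit; a hedged sketch (all names are illustrative, the statement text must match what the application sends character for character, and whether added hints can override an inline LOOP hint would need testing on the actual query):

```sql
EXEC sp_create_plan_guide
    @name            = N'pg_vendor_query_fix',  -- hypothetical guide name
    @stmt            = N'SELECT ...',           -- exact query text from the app
    @type            = N'SQL',
    @module_or_batch = NULL,
    @params          = NULL,
    @hints           = N'OPTION (HASH JOIN)';   -- example hint only
```

sys.plan_guides lists the guides afterwards, and sp_control_plan_guide can disable or drop one if it misbehaves.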

SQL Client Access Licence requirements

Posted: 14 Feb 2012 04:55 AM PST

Hi

SQL 2008 R2 Std Ed.

We are having an internal debate on what does (and does not) require a SQL CAL for the end user. We only use SQL Server for reporting, with data being sourced from external (non-SQL) databases. The majority of our users only require static, non-updatable reports, and therefore, where we can, we would like to reduce our CAL licensing costs. If there are any licensing "experts" prepared to comment on what we believe below, it would be appreciated.

Scenarios:

  • MS Excel pivot table accessing an SSAS cube ... CAL required
  • MS Excel directly querying SQL via MS Query ... CAL required
  • MS Access ADP linked to SQL ... CAL required
  • User accessing a static Excel file created through T-SQL/SSIS ... CAL not required
  • User receiving an email generated via T-SQL/SSIS ... CAL not required
  • Any access to SSRS ... CAL required

Thanks.

asp_net user personalization copy

Posted: 29 Apr 2013 02:08 AM PDT

I have a web site running with asp_net personalization for some web parts on the page. I am wondering if it is possible to take the page settings from one user and copy them over in the aspnet_PersonalizationPerUser table for a new user, so that the page configuration would be the same. I can't seem to get it to work.

thanks
Jason
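For what it's worth, the copy itself is usually a single INSERT...SELECT; a sketch assuming the standard membership-schema columns on aspnet_PersonalizationPerUser (Id, PathId, UserId, PageSettings, LastUpdatedDate), which should be verified against the actual table:

```sql
DECLARE @SourceUser uniqueidentifier = '...';  -- look both up in aspnet_Users
DECLARE @TargetUser uniqueidentifier = '...';

-- copy every personalized page from the source user to the target user
INSERT INTO dbo.aspnet_PersonalizationPerUser
       (Id, PathId, UserId, PageSettings, LastUpdatedDate)
SELECT NEWID(), PathId, @TargetUser, PageSettings, GETUTCDATE()
FROM   dbo.aspnet_PersonalizationPerUser
WHERE  UserId = @SourceUser;
```

If the target user already has rows for some of the same PathId values, those would need to be deleted first to avoid duplicates.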

Msg 468, Level 16, State 9, Procedure "procedurename", Line 129 Cannot resolve the collation conflict between "Latin1_General_CI_AS" and "SQL_Latin1_General_CP1_CI_AS" in the equal to operation.

Posted: 12 Apr 2012 01:51 AM PDT

Dear All,

I have a stored procedure and I keep getting the following error message:

    Msg 468, Level 16, State 9, Procedure "procedurename", Line 129
    Cannot resolve the collation conflict between "Latin1_General_CI_AS" and "SQL_Latin1_General_CP1_CI_AS" in the equal to operation.

I looked at the collation for the database and it is SQL_Latin1_General_CP1_CI_AS, but I don't know what else I need to do to get it working, please?

Thank you in advance!
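The usual workaround is to force both sides of the offending comparison (line 129 in the procedure) to a common collation; a sketch with illustrative table and column names:

```sql
-- COLLATE DATABASE_DEFAULT coerces each side to the current database's collation,
-- so the equality no longer mixes Latin1_General_CI_AS with SQL_Latin1_General_CP1_CI_AS
SELECT a.SomeColumn
FROM   dbo.TableA AS a
JOIN   dbo.TableB AS b
  ON   a.SomeColumn COLLATE DATABASE_DEFAULT = b.SomeColumn COLLATE DATABASE_DEFAULT;
```

This typically happens when one of the tables (often a temp table, which inherits tempdb's collation) was created under a different collation than the database default.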

How to Determine New SQL Box Specs for Migration from 2005 to 2008

Posted: 29 Apr 2013 01:01 AM PDT

So I've been researching this, but can't find a clear how-to. I just started a new job, and one of my first projects is to migrate a SQL Server 2005 instance to 2008. I need to give specs to my infrastructure guy for the new environment. We have an application running on a Windows 2003 box (4 processors, 4 GB of RAM) with SQL 2005 SP4. We are running on VMware with NetApp storage. We need to get to Windows 2008 with SQL Server 2008 R2.

How do I determine the specs for the new box? For example, how many CPUs (and their speed), how much RAM, etc. Is there a tutorial you can point me to?

Order issues

Posted: 29 Apr 2013 12:13 AM PDT

Hello!

I am importing data from a source system running IBM iSeries, using OPENQUERY to query it. The source system is a bit peculiar, so in table tblModelsAccount I have the following fields (my example is simplified):

    AccountNo  ModelCode
    12345      A
    12345      B

So which one is the current ModelCode for AccountNo 12345? Well, the not-so-obvious business rule is that the first one is always the current one. That means A.

So I import all rows to a table in my SQL Server DB, like this:

    INSERT INTO MYDATABASE.dbo.ModelAccounts (AccountNo, ModelCode)
    SELECT * FROM OPENQUERY(SourceDSN, 'SELECT AccountNo, ModelCode FROM tblModelsAccount')

Then I do a select from my import table:

    SELECT * FROM MYDATABASE.dbo.ModelAccounts

Interestingly enough, the rows have shifted places:

    AccountNo  ModelCode
    12345      B
    12345      A

Obviously it is hard to apply the rule "the first one is the current one" when this happens. But what exactly decides the order of rows in the SQL Server table when doing an insert like this? I would have expected the rows to be ordered as in the source system.
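Tables have no inherent order, and a SELECT without an ORDER BY may return rows in any order. One common way to make "first row wins" workable is to record arrival order explicitly at load time; a sketch (ImportID is an invented column name):

```sql
-- an IDENTITY column records insert order so it can be reproduced later
CREATE TABLE dbo.ModelAccounts
(
    ImportID  int IDENTITY(1,1) NOT NULL,
    AccountNo int          NOT NULL,
    ModelCode nvarchar(10) NOT NULL
);

-- "first row per account" then becomes deterministic:
SELECT AccountNo, ModelCode
FROM (
    SELECT AccountNo, ModelCode,
           ROW_NUMBER() OVER (PARTITION BY AccountNo ORDER BY ImportID) AS rn
    FROM dbo.ModelAccounts
) AS x
WHERE rn = 1;
```

One caveat: even INSERT ... SELECT FROM OPENQUERY does not guarantee the remote order is preserved on the way in, so if the source offers any orderable key, carrying it across is the safer representation of "first".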

Truncate Log Script

Posted: 28 Apr 2013 10:12 PM PDT

Hi Buddies,

I got a script for shrinking transaction logs. Please suggest how to make it runnable.

***************************************************************
DECLARE c CURSOR FOR
    SELECT database_id, name, recovery_model_desc FROM sys.databases

DECLARE @dbname VARCHAR(1024);
DECLARE @rmod   VARCHAR(1024);
DECLARE @id     INT;
DECLARE @lfile  VARCHAR(1024);

OPEN c;
FETCH NEXT FROM c INTO @id, @dbname, @rmod;
WHILE @@FETCH_STATUS = 0
BEGIN
    IF @rmod = 'FULL'
    BEGIN
        SET @lfile = (SELECT name FROM sys.master_files WHERE database_id = @id AND type = 1)
        PRINT @lfile
        EXEC('ALTER DATABASE [' + @dbname + '] SET RECOVERY SIMPLE')
        EXEC('USE [' + @dbname + ']; DBCC SHRINKFILE([' + @lfile + '], 1)')
        EXEC('ALTER DATABASE [' + @dbname + '] SET RECOVERY FULL')
    END
    ELSE IF @rmod = 'SIMPLE'
    BEGIN
        SET @lfile = (SELECT name FROM sys.master_files WHERE database_id = @id AND type = 1)
        PRINT @lfile
        EXEC('USE [' + @dbname + ']; DBCC SHRINKFILE([' + @lfile + '], 1)')
    END
    FETCH NEXT FROM c INTO @id, @dbname, @rmod;
END;
CLOSE c
DEALLOCATE c
GO
***************************************************************

As I received it, every string literal was written with doubled quotes (''FULL'', ''alter database ['' + ...), as if copied out of a dynamic SQL string, and executing that version produced these errors:

    Msg 156, Level 15, State 1, Line 14
    Incorrect syntax near the keyword 'FULL'.
    Msg 156, Level 15, State 1, Line 18
    Incorrect syntax near the keyword 'alter'.
    Msg 102, Level 15, State 1, Line 18
    Incorrect syntax near ''.
    Msg 156, Level 15, State 1, Line 19
    Incorrect syntax near the keyword 'USE'.
    Msg 911, Level 16, State 1, Line 19
    Could not locate entry in sysdatabases for database '''+@dbname+'''. No entry found with that name. Make sure that the name is entered correctly.

Precedence Constraint editor

Posted: 28 Apr 2013 11:22 PM PDT

Hi, can anyone explain how precedence constraints work for Logical OR and Logical AND? And, for the constraint values, what do Success, Failure and Completion mean?

Login audit

Posted: 28 Apr 2013 08:27 PM PDT

Hi,

I've been asked to produce a report on who has accessed a particular database on a SQL Server we have. The system (a third-party product with no internal auditing) uses SQL authentication. Logging on the server is set to record failed and successful login attempts. I can read the log files using xp_readerrorlog, but that only gives the time and name of the user, not the database.

Has anyone got any ideas on how I can produce this report?

Thanks.

Mega Problems installing SQL Server 2008 R2 Developer

Posted: 28 Apr 2013 09:50 PM PDT

I normally install SQL Server 2008 R2 Developer without any problems, for example on an un-updated version of Windows XP. I have also installed it on Windows 7, which may have had the initial 500-600 MB or so of updates, although I am not sure whether these help with a new installation of SQL Server 2008 R2 Developer. I think that machine may have had Visual Studio 2010 or 2008 on it first, although I can't remember the sequence of installs.

A few days ago I had a problem of not being able to go into SSIS. This may have been something to do with SQL Server 2012 RC being uninstalled and problems with shared DLLs. After much irritation I did a clean install of Windows 7 SP1.

I then tried to install SQL Server 2008 R2 Developer on Windows 7 HP (without initial updates), but this came up with errors. I barely slept over the weekend, as the whole reinstallation process on my laptop has taken ages.

If I try to install Visual Studio 2008 on its own, a message appears:

    An earlier version of Microsoft Visual Studio 2008 has been detected on the system that must be updated to SP1 before installation can proceed. Please update all other versions of Visual Studio 2008 to SP1 by visiting Microsoft Update and then install Visual Studio 2008 Express SP1.

At the same time, Visual Studio 2008 SP1 (which I downloaded, as it does not install now) gives the following message:

    A compatible version of Visual Studio 2008 was not detected on the system. This update is designed for only the Microsoft Visual Studio 2008 (ENU) product family, and is not compatible with any Express editions.

Should this not install without problems on a Windows 7 HP SP1 that has not been updated? The initial Windows updates don't seem to have anything to do with this installation; they are just security updates for Visual C++ 2005-2008.

If this happens again I will just have to try to problem-solve, but I would rather do another install now on a clean install of Windows 7, as the whole process of trying to fix the existing and previous setups is way too complicated. I have seen solutions involving copying shared DLLs, going into the registry, and restore points (I tried the shared DLLs and restore points with no luck). I don't favour these solutions and would like to know whether there is something that can be installed first, as once SQL Server is installed it is very difficult to uninstall or correct.

Many thanks for any suggestions on how to prepare for a clean install of SQL Server 2008 R2 Developer on a clean install of Windows 7 HP SP1.

Monday, April 29, 2013

[how to] Defacto way of fixing constraint issues without downtime?


Defacto way of fixing constraint issues without downtime?

Posted: 29 Apr 2013 09:22 PM PDT

I come across two constraint issues all the time (UNIQUE and referential-integrity constraints) within InnoDB. My question is: when these issues arise reproducibly, what is typically the solution? I'm asking from the perspective of a web application attempting to do transactions against the database, so there isn't much control over what is being done unless I risk taking down the database. Conceptually, the solutions I have found are below, and I'm wondering if input can be provided.

UNIQUE constraint possible solutions:

  • Remove 'old' value so new one can be inserted.

Typical error: ERROR 1062 (23000): Duplicate entry '0' for key 1
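Rather than deleting the old row, MySQL can be told what to do on a collision; a sketch with illustrative table and column names:

```sql
-- keep the existing row but refresh its payload when the key already exists
INSERT INTO t (id, val) VALUES (0, 'new-value')
ON DUPLICATE KEY UPDATE val = VALUES(val);

-- or silently skip the conflicting insert instead of aborting
INSERT IGNORE INTO t (id, val) VALUES (0, 'new-value');
```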

FOREIGN KEY constraint failure possible solutions:

  • Insert expected value into parent.
  • Set foreign_key_checks to false so the transaction doesn't get aborted. I don't like this solution but it works.

Typical error: SQLSTATE[23000]: Integrity constraint violation: 1452 Cannot add or update a child row: a foreign key constraint fails...
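For the foreign-key case, inserting the parent first inside the same transaction avoids both the error and the foreign_key_checks workaround; a sketch (names are illustrative):

```sql
START TRANSACTION;
  -- ensure the parent exists before the child references it
  INSERT IGNORE INTO parent (id) VALUES (42);
  INSERT INTO child (parent_id, data) VALUES (42, 'x');
COMMIT;
```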

I've also thought that maybe there are inconsistencies between tables that cause generated queries to fail; this makes me wonder whether a null operation on each table could resolve the issue, e.g. ALTER TABLE <tbl_name> ENGINE=InnoDB;

There were also suggestions to dump the table and re-import it, but that just sounds like too much hassle for a non-guaranteed fix.

So, I ask what are typical solutions for the above errors?

strange io in mysql server

Posted: 29 Apr 2013 08:53 PM PDT

One of my MySQL servers has become slow recently, and I found that there is high IO on the server:

    $ sudo pt-ioprofile -cell sizes
    Mon Apr 29 22:39:32 CDT 2013
    Tracing process ID 16934
         total      pread       read     pwrite      fsync       open      lseek filename
    2147483647    1735082      64640     369248          0          0 2147483647 /var/lib/mysql/db1/estate.MYD
     889808476     524176      19420     128436          0          0  889136444 /var/lib/mysql/db2/estate.MYD
     343733731       7231        600          0          0          0  343725900 /var/lib/mysql/db2/cmsuser.MYD
      18334349       3325        180          0          0          0   18330844 /var/lib/mysql/db1/cmsuser.MYD
        104076          0          0     104076          0          0          0 /var/lib/mysql/db1/estate.MYI
         98304          0          0      98304          0          0          0 /var/lib/mysql/ibdata1
         34300          0          0      34300          0          0          0 /var/lib/mysql/db2/estate.MYI
          1024          0          0       1024          0          0          0 /var/lib/mysql/ib_logfile1
           512          0          0        512          0          0          0 /var/lib/mysql/ib_logfile0
             0          0          0          0          0          0          0 /etc/hosts.deny
             0          0          0          0          0          0          0 /etc/hosts.allow

However, when I run SHOW PROCESSLIST on the server, I find no active process!

What is possibly going on here?

Thanks

Deadlock : Behaviour of NOLOCK

Posted: 29 Apr 2013 08:27 PM PDT

We are running into deadlocks when we run concurrent instances of following query :

    BEGIN TRAN
        IF NOT EXISTS (SELECT TOP 1 * FROM Table1 WITH (NOLOCK) WHERE Col1 = Val1 AND Col2 = Val2)
        BEGIN
            INSERT INTO Table1 (ID, Col1, Col2)
            VALUES (-1, Val1, Val2)
        END
    COMMIT TRAN

The deadlock results in the INSERT being aborted. I am trying to understand how the NOLOCK hint behaves here. This link says that NOLOCK acquires a table-level shared lock and a database-level schema lock, but no page-level lock. So if there is no shared-to-exclusive lock conversion (i.e., from the SELECT to the INSERT), why would there be a deadlock? Is there lock escalation with NOLOCK if the SELECT runs over a large table? Am I better off using UPDLOCK instead of NOLOCK for the SELECT, so that there are no lock-conversion conflicts?
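For comparison, the usual pattern for this check-then-insert race is to hold an update-mode lock on the checked range from the start; a sketch using the post's table:

```sql
BEGIN TRAN
    -- UPDLOCK makes concurrent sessions checking the same key wait instead of
    -- both proceeding to INSERT; HOLDLOCK keeps the range locked until commit
    IF NOT EXISTS (SELECT 1 FROM Table1 WITH (UPDLOCK, HOLDLOCK)
                   WHERE Col1 = Val1 AND Col2 = Val2)
    BEGIN
        INSERT INTO Table1 (ID, Col1, Col2)
        VALUES (-1, Val1, Val2)
    END
COMMIT TRAN
```

This serializes the competing transactions on that key rather than letting two of them pass the NOT EXISTS test simultaneously.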

Monitor For Disabled Queue On RDS SQL Server

Posted: 29 Apr 2013 05:41 PM PDT

I would like to somehow get notified when a queue is disabled due to poison messages. (The underlying issue is occasional resource contention; until we solve it, monitoring would help us stay aware.) In my case, SQL Server's Service Broker is running under Amazon's RDS environment, which limits what is available.

Suggestions?
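If polling is acceptable under RDS's restrictions, the queue status is visible in a catalog view; a minimal sketch that a scheduled job or external monitor could run:

```sql
-- a queue disabled by poison-message handling shows is_receive_enabled = 0
SELECT name, is_receive_enabled, is_enqueue_enabled
FROM sys.service_queues;
```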

Why would I use the MySQL "system" command?

Posted: 29 Apr 2013 01:40 PM PDT

What can I do with the MySQL client's system command? Why does it exist?

The most productive use I've seen is to look around the file system (e.g., to remember the file name you want to SOURCE or LOAD FILE).

    mysql> system ls /tmp
    backup.sql

    mysql> source /tmp/backup.sql
    Query OK, 1 row affected (0.02 sec)

    mysql>

As near as I can tell, you can't pass a query result to the STDIN of a command, and you can't pass the STDOUT of a command into a MySQL query; it just doesn't seem widely useful.

Why does CREATE INDEX ... WITH ONLINE=ON block access to the table over a period of minutes?

Posted: 29 Apr 2013 12:20 PM PDT

I have an existing table:

CREATE TABLE dbo.ProofDetails
(
    ProofDetailsID int IDENTITY(1,1) NOT NULL
        CONSTRAINT PK_ProofDetails
        PRIMARY KEY CLUSTERED,
    ProofID int NULL,
    IDShownToUser int NULL,
    UserViewedDetails bit NOT NULL
        CONSTRAINT DF_ProofDetails_UserViewedDetails
        DEFAULT ((0))
);
GO

This table has 150,000,000 rows. The system is in operation 24x7x365, so there are no regularly occurring maintenance windows.

I want to add an index to the table, and with the Enterprise edition of SQL Server, I should be able to do that without blocking write access to the table. The command I used was:

CREATE INDEX IX_ProofDetails_ProofID_Etc
ON dbo.ProofDetails (ProofID, IDShownToUser)
INCLUDE (UserViewedDetails)
WITH (ONLINE=ON
    , ALLOW_ROW_LOCKS=ON
    , ALLOW_PAGE_LOCKS=ON
    , FILLFACTOR=100
    , MAXDOP=4
);

This ran for over a minute, then began blocking other sessions. I immediately cancelled the CREATE INDEX command, since I cannot block other sessions. During the first minute, nothing was blocking my CREATE INDEX command; sys.dm_exec_requests showed the process with a wait type of CXPACKET, of course. I don't think that is a bad thing in itself, since the operation was parallelized.

Am I misunderstanding the implementation of WITH (ONLINE=ON)? Or is there something else I need to be aware of?

The server is a fairly beefy machine, with 2 quad-core Xeon E5-2643 3.3Ghz processors, 192GB RAM, and SAN storage capable of 5,000+ iops. CPU is typically below 20%, RAM is 93% utilized, mostly by SQL Server. There is nothing else running on the box, just Windows Server 2012, and SQL Server 2012.

Oracle database created whats next? Create Schema or Create Tablespace?

Posted: 29 Apr 2013 12:01 PM PDT

I'm a programmer who has been thrown onto the DBA/sysadmin island all by myself, with a volleyball that I named Wilson. I'm trying to survive here.

I'm supposed to create a database for an application. The server where this will run previously had a database for a pretty similar app. I don't know much about Oracle, so I reused the ZFS filesystems and left them as they were created (honestly, I didn't know why they were created that way, but I'm pretty sure it was for a good reason).

    app/oradata_smart_ora1    858M  12.2G   858M  /oradata/SMART/ora1
    app/oradata_smart_ora2   7.18M  18.0G  7.18M  /oradata/SMART/ora2
    app/oradata_smart_ora3   7.18M  36.0G  7.18M  /oradata/SMART/ora3
    app/oradata_smart_ora4   60.6K   400G  60.6K  /oradata/SMART/ora4
    app/oradata_smart_redo1   400M  2.61G   400M  /oradata/SMART/redo1
    app/oradata_smart_redo2   200M  2.80G   200M  /oradata/SMART/redo2
    app/oradata_smart_redo3   200M  2.80G   200M  /oradata/SMART/redo3

Since I reused the filesystems, I created my database and placed the control files in the same places where the old database files were (/oradata/SMART/ora1, /oradata/SMART/ora2, /oradata/SMART/ora3). Thinking in MySQL terms, I created app/oradata_smart_ora4 (60.6K used, 400G available, mounted at /oradata/SMART/ora4) specifically to store the database there.

The database starts up and mounts with no problem. Now is where I'm stuck. I've read this, this, this, this, this and much more, but I still have doubts.

Note that this server will manage millions/billions of records throughout its lifetime.

  1. Now that my database is created, what's the next step? Create the schema or the tablespace?
  2. Tablespace questions: are the tablespace datafile(s) where the actual table data is stored? How many are needed? Default or temporary? How much space will I need for it? Autoextend?

I'm really sorry for all these questions but I really want to do this right while following the best practices for Oracle. If you need more information for your answer let me know.
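For orientation: in Oracle the usual order is tablespace first, then the schema (a user who owns the objects). A hypothetical sketch, with invented names and sizes to adapt, using the filesystem reserved above for data:

```sql
-- a data tablespace with one autoextending datafile
CREATE TABLESPACE app_data
    DATAFILE '/oradata/SMART/ora4/app_data01.dbf'
    SIZE 1G AUTOEXTEND ON NEXT 256M MAXSIZE 32G;

-- in Oracle, creating a user creates the schema of the same name
CREATE USER app_owner IDENTIFIED BY "change_me"
    DEFAULT TABLESPACE app_data
    QUOTA UNLIMITED ON app_data;

GRANT CREATE SESSION, CREATE TABLE TO app_owner;
```

Tables created by app_owner then land in app_data by default; more datafiles can be added later with ALTER TABLESPACE ... ADD DATAFILE as volume grows.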

DB2 Load Failure due to timestampformat

Posted: 29 Apr 2013 11:41 AM PDT

I'm trying to load a file into a DB2 database table using DB2 Load. I'm able to load the file using the following statement from the Unix prompt:

db2 +p -x load from BOM_Sales_20130326.txt of DEL MODIFIED BY COLDEL\| timestampformat=\"YYYY-MM-DD hh:mm:ss\" insert into GEO_SALES.SDM_STL_VFRSK_SALES NONRECOVERABLE

But, when I try to call DB2 Load from a script after parameterising many of the options, it does not seem to work.

For example, I have re-written the statement as follows:

db2 +p -x "load client from $FILE_DIR/$filenm of DEL MODIFIED BY COLDEL\$FILE_DELMTR timestampformat=\"YYYY-MM-DD HH:MM:SS\" insert into $TGT_SCHEMA_NM.$FILE_STG_TBL_TGT NONRECOVERABLE"

All the variables are assigned correct values prior to calling the statement, but it seems to fail due to a wrong timestampformat. The error is as follows:

SQL3191N The field in row "F2-1", column "1" which begins with "2013-03-26|STL|5678|D|3212345" does not match the user specified DATEFORMAT, TIMEFORMAT, or TIMESTAMPFORMAT. The row will be rejected.

In fact, my need is to give the value "timestampformat" also as a parameter, as follows: db2 +p -x "load client from $FILE_DIR/$filenm of DEL MODIFIED BY COLDEL\$FILE_DELMTR timestampformat=$DTTIME_FMT insert into $TGT_SCHEMA_NM.$FILE_STG_TBL_TGT NONRECOVERABLE"

Since it was not working, just for the testing purpose, the variable $DTTIME_FMT was replaced with \"YYYY-MM-DD HH:MM:SS\". As said earlier, that too does not seem to work.

I'd appreciate help fixing this, and I'd also like to know whether all of these load options can be parameterised the way I've tried.

Thanks, Tom
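The failure is most likely shell quoting rather than DB2 itself: after variable expansion, the format string has to reach db2 with its surrounding double quotes intact. A minimal sketch of the quoting (echo stands in for the actual db2 invocation; file and table names are hypothetical):

```shell
# Keep the quotes as part of the variable's value, then expand it
# inside the larger double-quoted command string.
DTTIME_FMT='"YYYY-MM-DD HH:MM:SS"'
FILE_DELMTR='|'

CMD="load client from file.txt of DEL MODIFIED BY COLDEL${FILE_DELMTR} timestampformat=${DTTIME_FMT} insert into SCHEMA1.TBL1 NONRECOVERABLE"
echo "$CMD"
```

If the quotes around the format are lost (or doubled) during expansion, DB2 sees a different format string than intended and rejects rows exactly as shown in the SQL3191N message.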

Upgrade SQL Server 2005 Enterprise to SQL Server 2008 R2 Standard

Posted: 29 Apr 2013 01:34 PM PDT

I understand that this is not a supported path, but does anyone have any insight about how to accomplish this? I can't seem to find any documents on the subject. I need my QA (2005) to match my Production (2008R2). Just joined this company and this is one of my first tasks...

Optimizing Query

Posted: 29 Apr 2013 10:58 AM PDT

Can anyone help me tune this query?

explain extended
SELECT DISTINCT a.msisdn
FROM `std_msc_opr_wise` a
LEFT OUTER JOIN
  (SELECT msisdn
   FROM as_treat_pre_usage_30days
   GROUP BY msisdn
   HAVING SUM(std_total_og_mou) > 0) b ON a.msisdn = b.msisdn
WHERE b.msisdn IS NULL
  AND a.call_dt BETWEEN '2013-03-27' AND '2013-04-28';

+----+-------------+---------------------------+-------+---------------+---------+---------+------+----------+----------+-----------------------------------+
| id | select_type | table                     | type  | possible_keys | key     | key_len | ref  | rows     | filtered | Extra                             |
+----+-------------+---------------------------+-------+---------------+---------+---------+------+----------+----------+-----------------------------------+
|  1 | PRIMARY     | a                         | ALL   | ix_cd         | NULL    | NULL    | NULL | 73985537 |    50.00 | Using where; Using temporary      |
|  1 | PRIMARY     | <derived2>                | ALL   | NULL          | NULL    | NULL    | NULL |  1121664 |   100.00 | Using where; Not exists; Distinct |
|  2 | DERIVED     | as_treat_pre_usage_30days | index | NULL          | PRIMARY | 14      | NULL |  3033621 |   100.00 |                                   |
+----+-------------+---------------------------+-------+---------------+---------+---------+------+----------+----------+-----------------------------------+
3 rows in set, 1 warning (1.70 sec)

________________________________________________________________________

explain extended
SELECT DISTINCT a.msisdn
FROM `std_msc_opr_wise` a FORCE INDEX(ix_cd)
LEFT OUTER JOIN
  (SELECT msisdn
   FROM as_treat_pre_usage_30days
   GROUP BY msisdn
   HAVING SUM(std_total_og_mou) > 0) b ON a.msisdn = b.msisdn
WHERE b.msisdn IS NULL
  AND a.call_dt BETWEEN '2013-03-27' AND '2013-04-28';
+----+-------------+---------------------------+-------+---------------+---------+---------+------+----------+----------+-----------------------------------+
| id | select_type | table                     | type  | possible_keys | key     | key_len | ref  | rows     | filtered | Extra                             |
+----+-------------+---------------------------+-------+---------------+---------+---------+------+----------+----------+-----------------------------------+
|  1 | PRIMARY     | a                         | range | ix_cd         | ix_cd   | 4       | NULL | 36992756 |   100.00 | Using where; Using temporary      |
|  1 | PRIMARY     | <derived2>                | ALL   | NULL          | NULL    | NULL    | NULL |  1121664 |   100.00 | Using where; Not exists; Distinct |
|  2 | DERIVED     | as_treat_pre_usage_30days | index | NULL          | PRIMARY | 14      | NULL |  3033621 |   100.00 |                                   |
+----+-------------+---------------------------+-------+---------------+---------+---------+------+----------+----------+-----------------------------------+

____________________________________________________
TABLE SCHEMA
____________________________________________________
CREATE TABLE `as_treat_pre_usage_30days` (
  `msisdn` varchar(12) NOT NULL COMMENT 'Subscriber number',
  `local_a2a_og_mou` bigint(20) DEFAULT NULL COMMENT 'Local A2A OG MOU of the subs',
  `local_a2o_og_mou` bigint(20) DEFAULT NULL COMMENT 'Local A2O OG MOU of the subs',
  `std_total_og_mou` bigint(20) DEFAULT NULL COMMENT 'STD OG MOU of the subs',
  `total_og_mou` bigint(20) DEFAULT NULL COMMENT 'Total OG MOU of the subs',
  PRIMARY KEY (`msisdn`),
  KEY `ix_std` (`std_total_og_mou`)
) ENGINE=InnoDB
____________________________________________________
CREATE TABLE `std_msc_opr_wise` (
  `msisdn` varchar(12) DEFAULT NULL COMMENT 'Mobile Number of the Subscriber',
  `call_dt` date DEFAULT NULL,
`destination_type` varchar(5) DEFAULT NULL COMMENT 'Destination Type - A2A or A2O',    `duration` int(6) DEFAULT NULL COMMENT 'Call Duration in Seconds',    KEY `ix_ms` (`msisdn`),    KEY `ix_cd` (`call_dt`)  ) ENGINE=InnoDB DEFAULT CHARSET=latin1  /*!50100 PARTITION BY RANGE (to_days(call_dt))  (PARTITION p20130120 VALUES LESS THAN (735253) ENGINE = InnoDB,   PARTITION p20130121 VALUES LESS THAN (735254) ENGINE = InnoDB,   PARTITION p20130122 VALUES LESS THAN (735255) ENGINE = InnoDB,   PARTITION p20130123 VALUES LESS THAN (735256) ENGINE = InnoDB,   PARTITION p20130124 VALUES LESS THAN (735257) ENGINE = InnoDB,   PARTITION p20130126 VALUES LESS THAN (735259) ENGINE = InnoDB,   PARTITION p20130127 VALUES LESS THAN (735260) ENGINE = InnoDB,   PARTITION p20130128 VALUES LESS THAN (735261) ENGINE = InnoDB,   PARTITION p20130129 VALUES LESS THAN (735262) ENGINE = InnoDB,   PARTITION p20130130 VALUES LESS THAN (735263) ENGINE = InnoDB,   PARTITION p20130131 VALUES LESS THAN (735264) ENGINE = InnoDB,   PARTITION p20130201 VALUES LESS THAN (735265) ENGINE = InnoDB,   PARTITION p20130202 VALUES LESS THAN (735266) ENGINE = InnoDB,   PARTITION p20130203 VALUES LESS THAN (735267) ENGINE = InnoDB,   PARTITION p20130204 VALUES LESS THAN (735268) ENGINE = InnoDB,   PARTITION p20130205 VALUES LESS THAN (735269) ENGINE = InnoDB,   PARTITION p20130206 VALUES LESS THAN (735270) ENGINE = InnoDB,   PARTITION p20130207 VALUES LESS THAN (735271) ENGINE = InnoDB,   PARTITION p20130208 VALUES LESS THAN (735272) ENGINE = InnoDB,   PARTITION p20130209 VALUES LESS THAN (735273) ENGINE = InnoDB,   PARTITION p20130210 VALUES LESS THAN (735274) ENGINE = InnoDB,   PARTITION p20130211 VALUES LESS THAN (735275) ENGINE = InnoDB,   PARTITION p20130212 VALUES LESS THAN (735276) ENGINE = InnoDB,   PARTITION p20130213 VALUES LESS THAN (735277) ENGINE = InnoDB,   PARTITION p20130214 VALUES LESS THAN (735278) ENGINE = InnoDB,   PARTITION p20130215 VALUES LESS THAN (735279) ENGINE = InnoDB,   PARTITION 
p20130216 VALUES LESS THAN (735280) ENGINE = InnoDB,   PARTITION p20130217 VALUES LESS THAN (735281) ENGINE = InnoDB,   PARTITION p20130218 VALUES LESS THAN (735282) ENGINE = InnoDB,   PARTITION p20130219 VALUES LESS THAN (735283) ENGINE = InnoDB,   PARTITION p20130220 VALUES LESS THAN (735284) ENGINE = InnoDB,   PARTITION p20130221 VALUES LESS THAN (735285) ENGINE = InnoDB,   PARTITION p20130222 VALUES LESS THAN (735286) ENGINE = InnoDB,   PARTITION p20130223 VALUES LESS THAN (735287) ENGINE = InnoDB,   PARTITION p20130224 VALUES LESS THAN (735288) ENGINE = InnoDB,   PARTITION p20130225 VALUES LESS THAN (735289) ENGINE = InnoDB,   PARTITION p20130226 VALUES LESS THAN (735290) ENGINE = InnoDB,   PARTITION p20130227 VALUES LESS THAN (735291) ENGINE = InnoDB,   PARTITION p20130228 VALUES LESS THAN (735292) ENGINE = InnoDB,   PARTITION p20130301 VALUES LESS THAN (735293) ENGINE = InnoDB,   PARTITION p20130302 VALUES LESS THAN (735294) ENGINE = InnoDB,   PARTITION p20130303 VALUES LESS THAN (735295) ENGINE = InnoDB,   PARTITION p20130304 VALUES LESS THAN (735296) ENGINE = InnoDB,   PARTITION p20130305 VALUES LESS THAN (735297) ENGINE = InnoDB,   PARTITION p20130306 VALUES LESS THAN (735298) ENGINE = InnoDB,   PARTITION p20130307 VALUES LESS THAN (735299) ENGINE = InnoDB,   PARTITION p20130308 VALUES LESS THAN (735300) ENGINE = InnoDB,   PARTITION p20130309 VALUES LESS THAN (735301) ENGINE = InnoDB,   PARTITION p20130310 VALUES LESS THAN (735302) ENGINE = InnoDB,   PARTITION p20130311 VALUES LESS THAN (735303) ENGINE = InnoDB,   PARTITION p20130312 VALUES LESS THAN (735304) ENGINE = InnoDB,   PARTITION p20130313 VALUES LESS THAN (735305) ENGINE = InnoDB,   PARTITION p20130314 VALUES LESS THAN (735306) ENGINE = InnoDB,   PARTITION p20130315 VALUES LESS THAN (735307) ENGINE = InnoDB,   PARTITION p20130316 VALUES LESS THAN (735308) ENGINE = InnoDB,   PARTITION p20130317 VALUES LESS THAN (735309) ENGINE = InnoDB,   PARTITION p20130318 VALUES LESS THAN (735310) ENGINE = 
InnoDB,   PARTITION p20130319 VALUES LESS THAN (735311) ENGINE = InnoDB,   PARTITION p20130320 VALUES LESS THAN (735312) ENGINE = InnoDB,   PARTITION p20130321 VALUES LESS THAN (735313) ENGINE = InnoDB,   PARTITION p20130322 VALUES LESS THAN (735314) ENGINE = InnoDB,   PARTITION p20130323 VALUES LESS THAN (735315) ENGINE = InnoDB,   PARTITION p20130324 VALUES LESS THAN (735316) ENGINE = InnoDB,   PARTITION p20130325 VALUES LESS THAN (735317) ENGINE = InnoDB,   PARTITION p20130326 VALUES LESS THAN (735318) ENGINE = InnoDB,   PARTITION p20130327 VALUES LESS THAN (735319) ENGINE = InnoDB,   PARTITION p20130328 VALUES LESS THAN (735320) ENGINE = InnoDB,   PARTITION p20130329 VALUES LESS THAN (735321) ENGINE = InnoDB,   PARTITION p20130330 VALUES LESS THAN (735322) ENGINE = InnoDB,   PARTITION p20130331 VALUES LESS THAN (735323) ENGINE = InnoDB,   PARTITION p20130401 VALUES LESS THAN (735324) ENGINE = InnoDB,   PARTITION p20130402 VALUES LESS THAN (735325) ENGINE = InnoDB,   PARTITION p20130403 VALUES LESS THAN (735326) ENGINE = InnoDB,   PARTITION p20130404 VALUES LESS THAN (735327) ENGINE = InnoDB,   PARTITION p20130405 VALUES LESS THAN (735328) ENGINE = InnoDB,   PARTITION p20130406 VALUES LESS THAN (735329) ENGINE = InnoDB,   PARTITION p20130407 VALUES LESS THAN (735330) ENGINE = InnoDB,   PARTITION p20130408 VALUES LESS THAN (735331) ENGINE = InnoDB,   PARTITION p20130409 VALUES LESS THAN (735332) ENGINE = InnoDB,   PARTITION p20130410 VALUES LESS THAN (735333) ENGINE = InnoDB,   PARTITION p20130411 VALUES LESS THAN (735334) ENGINE = InnoDB,   PARTITION p20130413 VALUES LESS THAN (735336) ENGINE = InnoDB,   PARTITION p20130414 VALUES LESS THAN (735337) ENGINE = InnoDB,   PARTITION p20130415 VALUES LESS THAN (735338) ENGINE = InnoDB,   PARTITION p20130416 VALUES LESS THAN (735339) ENGINE = InnoDB,   PARTITION p20130417 VALUES LESS THAN (735340) ENGINE = InnoDB,   PARTITION p20130418 VALUES LESS THAN (735341) ENGINE = InnoDB,   PARTITION p20130419 VALUES LESS THAN 
(735342) ENGINE = InnoDB,   PARTITION p20130420 VALUES LESS THAN (735343) ENGINE = InnoDB,   PARTITION p20130421 VALUES LESS THAN (735344) ENGINE = InnoDB,   PARTITION p20130422 VALUES LESS THAN (735345) ENGINE = InnoDB,   PARTITION p20130423 VALUES LESS THAN (735346) ENGINE = InnoDB,   PARTITION p20130424 VALUES LESS THAN (735347) ENGINE = InnoDB,   PARTITION p20130425 VALUES LESS THAN (735348) ENGINE = InnoDB,   PARTITION p20130426 VALUES LESS THAN (735349) ENGINE = InnoDB,   PARTITION p20130427 VALUES LESS THAN (735350) ENGINE = InnoDB,   PARTITION p20130428 VALUES LESS THAN (735351) ENGINE = InnoDB,   PARTITION p20130429 VALUES LESS THAN (735352) ENGINE = InnoDB,   PARTITION p20130430 VALUES LESS THAN (735353) ENGINE = InnoDB,   PARTITION p20130501 VALUES LESS THAN (735354) ENGINE = InnoDB) *  
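One rewrite worth trying (a sketch, not verified against this data): since msisdn is the primary key of as_treat_pre_usage_30days, the GROUP BY msisdn ... HAVING SUM(std_total_og_mou) > 0 collapses to a plain column predicate, and the anti-join can be written with NOT EXISTS so the subquery probes the primary key instead of materializing a derived table:

```sql
-- Sketch: anti-join via NOT EXISTS; the correlated subquery can seek
-- on the PRIMARY KEY (msisdn) rather than scanning a derived table.
SELECT DISTINCT a.msisdn
FROM `std_msc_opr_wise` a
WHERE a.call_dt BETWEEN '2013-03-27' AND '2013-04-28'
  AND NOT EXISTS (
        SELECT 1
        FROM as_treat_pre_usage_30days u
        WHERE u.msisdn = a.msisdn
          AND u.std_total_og_mou > 0
  );
```

Whether this wins still depends on the range over ix_cd versus the 74M-row scan, so compare EXPLAIN output for both forms.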

I can't install db-mysql in node.js

Posted: 29 Apr 2013 05:45 PM PDT

When I execute this command: npm install db-mysql, I get the following result. What's the solution to this problem?

C:\Users\Mouad>npm install db-mysql
npm http GET https://registry.npmjs.org/db-mysql
npm http GET https://registry.npmjs.org/db-mysql
npm http GET https://registry.npmjs.org/db-mysql
npm ERR! Error: SELF_SIGNED_CERT_IN_CHAIN
npm ERR!     at SecurePair.<anonymous> (tls.js:1283:32)
npm ERR!     at SecurePair.EventEmitter.emit (events.js:92:17)
npm ERR!     at SecurePair.maybeInitFinished (tls.js:896:10)
npm ERR!     at CleartextStream.read [as _read] (tls.js:430:15)
npm ERR!     at CleartextStream.Readable.read (_stream_readable.js:294:10)
npm ERR!     at EncryptedStream.write [as _write] (tls.js:344:25)
npm ERR!     at doWrite (_stream_writable.js:211:10)
npm ERR!     at writeOrBuffer (_stream_writable.js:201:5)
npm ERR!     at EncryptedStream.Writable.write (_stream_writable.js:172:11)
npm ERR!     at write (_stream_readable.js:547:24)
npm ERR! If you need help, you may report this log at:
npm ERR!     <http://github.com/isaacs/npm/issues>
npm ERR! or email it to:
npm ERR!     "npm-@googlegroups.com"

npm ERR! System Windows_NT 6.1.7601
npm ERR! command "C:\\Program Files\\nodejs\\\\node.exe" "C:\\Program Files\\nodejs\\node_modules\\npm\\bin\\npm-cli.js" "install" "db-mysql"
npm ERR! cwd C:\Users\Mouad
npm ERR! node -v v0.10.4
npm ERR! npm -v 1.2.18
npm ERR!
npm ERR! Additional logging details can be found in:
npm ERR!     C:\Users\Mouad\npm-debug.log
npm ERR! not ok code 0
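SELF_SIGNED_CERT_IN_CHAIN means npm rejected the registry's certificate chain, typically because a proxy intercepts TLS or the bundled CA set is stale. One commonly suggested workaround (it weakens certificate checking, so treat it as temporary) is to add the following to `~/.npmrc` (or set the same keys with `npm config set`):

```ini
; Temporary workaround only: use the HTTP registry, or relax strict SSL checking.
registry=http://registry.npmjs.org/
strict-ssl=false
```

Upgrading npm itself (which refreshes its CA bundle) is the cleaner long-term fix; note also that building db-mysql on Windows additionally requires a working native build toolchain.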

Table for optional parent/child relationship

Posted: 29 Apr 2013 12:09 PM PDT

Assuming we have the following tables: Item, Parent, and Child, where Parent is the parent of Child.

An Item can belong to either a Parent or a Child, but not both.

I have other tables that are similar to Item, and they too can belong to either a Parent or a Child.

Should I simply add two nullable FKs to them?

Can I enforce in the database that exactly one of Parent or Child must exist?
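Yes, most engines can enforce this declaratively with two nullable FKs plus a CHECK constraint. A generic sketch (table and column names are assumptions):

```sql
-- Sketch: exactly one of parent_id / child_id must be non-NULL.
CREATE TABLE Item (
    item_id   INT PRIMARY KEY,
    parent_id INT NULL REFERENCES Parent (parent_id),
    child_id  INT NULL REFERENCES Child (child_id),
    CHECK (
        (parent_id IS NOT NULL AND child_id IS NULL) OR
        (parent_id IS NULL AND child_id IS NOT NULL)
    )
);
```

The same CHECK can be repeated on each of the other Item-like tables.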

What is the correct model for related tables with millions of rows?

Posted: 29 Apr 2013 10:49 AM PDT

I need to create a question and answer tables that will have millions (maybe billions) of rows.

The current model is:

Question table: id_question (PK, auto increment), id_user, question_content, question_date

Answer table: id_answer (PK, auto increment), id_question, id_user, answer_content, answer_date

Is this a correct model (considering better query performance)? Should I add the id_question and id_user columns to the primary key?

Thanks
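A sketch of the answers table under that model (assuming MySQL/InnoDB; names follow the post). Keeping the surrogate PK alone and putting id_question and id_user into secondary indexes serves the common lookups without bloating the clustered key:

```sql
-- Sketch: surrogate PK plus secondary indexes for the usual access paths.
CREATE TABLE answer (
    id_answer      BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
    id_question    BIGINT UNSIGNED NOT NULL,
    id_user        BIGINT UNSIGNED NOT NULL,
    answer_content TEXT,
    answer_date    DATETIME NOT NULL,
    PRIMARY KEY (id_answer),
    KEY ix_question (id_question),   -- "all answers for a question"
    KEY ix_user     (id_user)        -- "all answers by a user"
) ENGINE = InnoDB;
```

Widening the primary key to (id_answer, id_question, id_user) would not speed up those lookups; separate indexes do.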

Is it ever a good idea to denormalize for integrity?

Posted: 29 Apr 2013 12:13 PM PDT

I'm using Postgres 9.2 to develop a quiz app, in which I present the user with a series of problems and record their answers.

These problems can take a variety of forms - they might be multiple choice (What's 2 + 2? A: 2. B: 3. C: 4), or they might require the user to calculate an answer of their own, in which case I need to constrain their input to be something like '440' or '1/2' or '.333'. Some of the problems might prompt the user to type in an essay. And, of course, I may need to add more types of problems later.

The tables I'm envisioning, in a simplified form, are something like this:

CREATE TABLE problems
(
  problem_id serial NOT NULL PRIMARY KEY,
  problem_type text NOT NULL, -- Refers to a lookup table
  answer_letter text, -- If not null, refers to the correct answer in the answers table below.
  response text -- If not null, represents a correct answer to be input, like '0.4'
);

CREATE TABLE answers
(
  problem_id integer, -- Foreign key
  answer_letter text,
  content text,

  CONSTRAINT answers_pkey PRIMARY KEY (problem_id, answer_letter)
);

CREATE TABLE questions
(
  user_id integer,
  created_at timestamptz,
  problem_id integer, -- Foreign key
  answer_letter text,
  response text,

  CONSTRAINT questions_pkey PRIMARY KEY (user_id, created_at)
);

So, the problems table would have a variety of constraints to ensure that:

  • When problem_type is 'multiple_choice', answer_letter must not be null and response must be null.
  • When problem_type is 'user_input', answer_letter must be null and response must not be null. Response must also consist of only a few characters.
  • When problem_type is 'essay', both answer_letter and response must be null, since I can't really have a correct answer for an essay question.

This is clean enough, and constrains the problems table just fine. I might use an enum instead of a lookup table for problem_type, since all of its possible values will already be baked into the schema anyway.

My difficulty is how to constrain the questions table. Its constraints will be very similar (I don't want an answer_letter supplied for a question that references an essay problem, and so on). I can think of a few options:

  1. Create a unique index on problems (problem_id, problem_type), add a problem_type field to questions and include it in the reference to problems, then use it in check constraints in a way similar to problems. This is the way I'm leaning right now, since it seems cleanest overall, but then I'm denormalizing to achieve the proper constraints, which feels wrong to me.
  2. Create three problem tables, one for each type, and constrain them separately. Do the same with three question tables. This feels like the pure relational way to me, which is typically what I'd like to go for, but it also feels way too complex. I don't want to have to deal with unioning three tables (or more, later on) to get a user's question history.
  3. Go with #2, but use Postgres' inheritance support to try to keep the logic simple. But since you can't point a foreign key at a hierarchy of tables, again it's not a very clean solution, it's a solution that would again need to be hacked around.
  4. Use triggers to ensure that questions data fits the corresponding problem. Maybe it's because I don't have much experience with triggers, but I'm concerned that cramming that sort of imperative logic into the DB will eventually become unmanageable.
  5. Forget the question constraints, handle it in app logic, and hope for the best. You can't constrain everything all the time. Of course, I don't really like this idea either.

I feel like there's a problem with my approach to modeling that's leading me to these difficulties, especially since I'm running into a couple of cases very similar to this elsewhere in my schema (this one was just the easiest to describe). Maybe it's just a domain that's difficult to model, but I feel like there must be a better way, and I'm just not normalizing correctly.

Help? Thanks!
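For reference, option 1 from the list above can be sketched like this in Postgres (constraint names are made up; the point is the composite foreign key carrying problem_type into questions so it can be CHECKed there):

```sql
-- A unique pair lets the composite FK below reference it.
ALTER TABLE problems
    ADD CONSTRAINT problems_id_type_key UNIQUE (problem_id, problem_type);

-- Duplicate the type into questions, tied back to problems by the FK.
ALTER TABLE questions ADD COLUMN problem_type text NOT NULL;
ALTER TABLE questions
    ADD CONSTRAINT questions_problem_fkey
    FOREIGN KEY (problem_id, problem_type)
    REFERENCES problems (problem_id, problem_type);

-- Now the same shape rules as on problems can be enforced locally.
ALTER TABLE questions ADD CONSTRAINT questions_shape_check CHECK (
       (problem_type = 'multiple_choice' AND answer_letter IS NOT NULL AND response IS NULL)
    OR (problem_type = 'user_input'      AND answer_letter IS NULL     AND response IS NOT NULL)
    OR (problem_type = 'essay'           AND answer_letter IS NULL     AND response IS NULL)
);
```

The duplicated column is a controlled denormalization: the FK guarantees it can never disagree with problems.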

Separating tables vs having one table

Posted: 29 Apr 2013 01:42 PM PDT

At the moment I have a table setup that looks somewhat like this:

create table tbl_locationcollections
(
   id int(11) PRIMARY KEY IDENTITY, --(Primary Key)
   name varchar(100) not null,
   cleanname varchar(100) not null,
   typeid int(11) not null --(foreign key)
)

create table tbl_locationcollectiontypes
(
   id int(11) PRIMARY KEY IDENTITY, --(Primary Key)
   type varchar(100)
)

A type would be something like country, state, city, etc.

I would then join to the various type tables like this:

create table tbl_states2locationcollection
(
   id int(11) PRIMARY KEY IDENTITY, --(Primary Key)
   stateid int(11) not null, --(foreign key)
   locationcollectionid int(11) not null --(foreign key)
)

with other tables like tbl_cities2locationcollection etc etc.

Is this style of separation better for speed and readability than having a table such as:

create table tbl_locations2collection
(
   id int(11) PRIMARY KEY IDENTITY, --(Primary Key)
   locationid int(11) not null, --(foreign key to the type of: state, country, city)
   typeid int(11) not null --(foreign key)
)

where all the different types are mixed in together.

The only downside I can see to not having a mixed table is having to create a specific table for each type that is added in the future.

Easier way to handle so many isnull() situation

Posted: 29 Apr 2013 09:13 PM PDT

Is there a good way to avoid writing the isnull() function so many times inside a stored procedure?
I have a stored procedure that uses the isnull() function almost 30 times. I think I am missing a concept, but until I find a better way to get my data, I'd love to clean my code of so many isnull() calls.
Can I, for a while, set an MSSQL 2008 R2 server to treat NULL values as float 0?
Most of my isnull()s just supply a zero value where there is no data, so I can do math operations.

EDIT: I am not lazy, I'm just trying to clean my code and avoid SELECT parts that look like this:

select
     vrsta.NAZIV
    ,isnull(sum(prod.prod_mpci),0) as prod_MPCI
    ,isnull(sum(isnull(mal_MPCI,0) + (vel_kol * isnull(mal_MPC,vel_VPC))),0) as lager_mpci
    ,isnull(sum(vel_NCI),0) + isnull(sum(mal_NCI),0) as lager_nci
    ,sum(case
            when isnull(vel_KOL,0)+isnull(mal_KOL,0) > isnull(prod.prod_kol,0) and isnull(dzn.dzn,'2010-01-01') < @dzu
                then ((isnull(vel_KOL,0)+isnull(mal_KOL,0))-isnull(prod.prod_kol,0)) * isnull(mal_MPC,vel_VPC)
            else 0 end
        ) as visak_MPC
    ,sum(case
            when isnull(vel_KOL,0)+isnull(mal_KOL,0) > isnull(prod.prod_kol,0) and isnull(dzn.dzn,'2010-01-01') < @dzu
                then ((isnull(vel_KOL,0)+isnull(mal_KOL,0))-isnull(prod.prod_kol,0)) * isnull(mal_NC,vel_NC)
            else 0 end
        ) as visak_NC
    ,sum(case
            when isnull(vel_KOL,0)+isnull(mal_KOL,0) <= isnull(prod.prod_kol,0)
                then ((isnull(vel_KOL,0)+isnull(mal_KOL,0))-isnull(prod.prod_kol,0)) * isnull(mal_MPC,vel_VPC)
            else 0 end
        ) as manjak_MPC
    ,sum(case
            when isnull(vel_KOL,0)+isnull(mal_KOL,0) <= isnull(prod.prod_kol,0)
                then ((isnull(vel_KOL,0)+isnull(mal_KOL,0))-isnull(prod.prod_kol,0)) * isnull(mal_NC,vel_NC)
            else 0 end
        ) as manjak_NC
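A common way to shrink this repetition (a sketch; the source table and some column names are hypothetical stand-ins for the ones above) is to normalize the NULLs once in a CROSS APPLY, then write the arithmetic against the cleaned aliases:

```sql
-- Sketch: replace NULLs once, then reuse the cleaned values by alias
-- instead of repeating ISNULL() in every expression.
SELECT  SUM(c.lager_kol - c.prod_kol) AS razlika_kol
FROM    dbo.SomeSource s                           -- hypothetical table
CROSS APPLY (SELECT ISNULL(s.vel_KOL, 0) + ISNULL(s.mal_KOL, 0) AS lager_kol,
                    ISNULL(s.prod_kol, 0)                       AS prod_kol,
                    ISNULL(s.mal_MPC, s.vel_VPC)                AS mpc) AS c;
```

There is no server-wide setting that turns NULL into 0, so pre-cleaning the values in one place (CROSS APPLY, a derived table, or a view) is the usual substitute.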

T-SQL Issues With Defining 'AS'

Posted: 29 Apr 2013 12:04 PM PDT

I am creating a fully dynamic application but have run into a bit of a hiccup. There are multiple 'undefined' fields that can be defined by the users. The only problem is redisplaying them. Currently they are showing as [UDF_0] to [UDF_7], i.e.:

SELECT [TABLE].[UDF_0],
       [TABLE].[UDF_1],
       [TABLE].[UDF_2],
       [TABLE].[UDF_3],
       [TABLE].[UDF_4],
       [TABLE].[UDF_5],
       [TABLE].[UDF_6],
       [TABLE].[UDF_7]
FROM [TABLE]

Would obviously display as:

UDF_0 || UDF_1 || etc...  

What I would like to be able to do is display them something to this effect, but I can't for the life of me figure it out.

EXECUTE PROCEDURE [dbo].[GetProjectInfo] @Project varchar(100)

AS
BEGIN

SELECT [TABLE].[UDF_0] AS (SELECT [TBL_PROJECT_DESC].[UDF_0]
                           FROM [TBL_PROJECT_DESC]
                           WHERE [TBL_PROJECT_DESC].[PROJECT_NAME]=@Project),
       --etc....
FROM [TABLE]

Desired display would be :

Field Name 0 || Field Name 1 || etc...  
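T-SQL column aliases can't be computed by a subquery, so the usual route is dynamic SQL: read the display names first, build the SELECT text, then execute it. A sketch using the post's table names (the variable names and the single-column shortcut are assumptions; QUOTENAME guards the alias text):

```sql
-- Sketch: look up the display name, build the SELECT as a string, run it.
DECLARE @sql nvarchar(max), @alias0 sysname;

SELECT @alias0 = [UDF_0]
FROM   [TBL_PROJECT_DESC]
WHERE  [PROJECT_NAME] = @Project;

SET @sql = N'SELECT [TABLE].[UDF_0] AS ' + QUOTENAME(@alias0) +
           N' FROM [TABLE]';        -- repeat/concatenate for UDF_1 .. UDF_7

EXEC sp_executesql @sql;
```

In practice you would loop (or pivot) over all eight UDF columns when building @sql rather than writing each by hand.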

SQL Server update query on linked server causing remote scan

Posted: 29 Apr 2013 08:04 PM PDT

I have a SQL Server 2012 instance set up as a linked server on a SQL Server 2008 server.

The following query executes in less than 1 second:

SELECT kg.IdGarment
FROM Products p
INNER JOIN ZAMStaffRentals r ON r.ProductID = p.UniqueID
INNER JOIN ZAMCustomer_Staff s ON r.StaffID = s.UniqueID
INNER JOIN ZAMRentalGarments g ON g.StaffRentalID = r.UniqueID
INNER JOIN [REMOTE_SERVER].[REMOTE_DB].dbo.t_ProcessIT_Garment kg ON g.BarcodeRFID = kg.IdGarment
INNER JOIN ZAMKannChanges log ON log.GarmentID = g.UniqueID
WHERE log.ActionType = 'I'

t_ProcessIT_Garment contains 37,000 rows, the query returns two records, the IdGarment column is the Primary Key. No problem here.

However, if I run this query to do a remote update, it takes 24 seconds, and 2 rows is affected:

UPDATE [REMOTE_SERVER].[REMOTE_DB].dbo.t_ProcessIT_Garment SET
   IdGarment = IdGarment
FROM Products p
INNER JOIN ZAMStaffRentals r ON r.ProductID = p.UniqueID
INNER JOIN ZAMCustomer_Staff s ON r.StaffID = s.UniqueID
INNER JOIN ZAMRentalGarments g ON g.StaffRentalID = r.UniqueID
INNER JOIN [REMOTE_SERVER].[REMOTE_DB].dbo.t_ProcessIT_Garment kg ON g.BarcodeRFID = kg.IdGarment
INNER JOIN ZAMKannChanges log ON log.GarmentID = g.UniqueID
WHERE log.ActionType = 'I';

I tested using IdGarment = IdGarment to keep things simple. The execution plan shows it uses Remote Query for the first query, but Remote Scan for the second query, which has 100% of the cost.

The table joins are identical in both queries, why is it using Remote Scan for the second query, and how do I fix this?
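One workaround sometimes suggested (a sketch only, untested against this schema) is to make the remote side an updatable pass-through rowset with OPENQUERY, so the provider handles the update row-by-row against the remote keys instead of pulling the whole table into a Remote Scan:

```sql
-- Sketch: OPENQUERY rowsets are updatable; join locally, update through it.
UPDATE kg
SET    kg.IdGarment = kg.IdGarment
FROM   OPENQUERY([REMOTE_SERVER],
         'SELECT IdGarment FROM REMOTE_DB.dbo.t_ProcessIT_Garment') AS kg
INNER JOIN ZAMRentalGarments g  ON g.BarcodeRFID  = kg.IdGarment
INNER JOIN ZAMKannChanges   log ON log.GarmentID  = g.UniqueID
WHERE  log.ActionType = 'I';   -- remaining joins as in the original query
```

Comparing the plans of both forms should show whether the update is actually pushed remotely.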

Mysql - How to optimize retrival time in a table

Posted: 29 Apr 2013 06:04 PM PDT

I have a query like this, against a single table with 200 million records. I am using B-tree indexes on the table:

mysql> select COUNT(DISTINCT id) from [tablename] where [columname] >=3;
+------------------------------+
| COUNT(DISTINCT id) |
+------------------------------+
| 8242063
+------------------------------+
1 row in set (3 min 23.53 sec)

I am not satisfied with this timing. How can I reduce the query time to under 30 seconds? Any suggestions would be very helpful!

Thank you!
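The standard first step (a sketch; `tablename` and `columnname` are the post's placeholders) is a composite index leading on the filtered column and including `id`, so the COUNT(DISTINCT id) can be answered from the index alone instead of touching base rows:

```sql
-- Hypothetical names; substitute the real table and column.
ALTER TABLE tablename ADD INDEX ix_col_id (columnname, id);

SELECT COUNT(DISTINCT id)
FROM tablename
WHERE columnname >= 3;
```

EXPLAIN should then show "Using index" (a covering-index scan over just the qualifying range), which is usually far faster than the original plan on a 200M-row table.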

MySQL specific database configuration file

Posted: 29 Apr 2013 02:04 PM PDT

In MySQL's configuration file I've globally disabled autocommit, like so:

[mysqld]
autocommit=0

However, I need to turn MySQL's autocommit on for one specific Ruby on Rails database. It could be done per user or per database (it doesn't matter). I'm thinking it would look something like this:

[mysqld]
autocommit=0
execute_sql="Custom SQL to set autocommit for a database"
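MySQL has no per-database autocommit option, but a session can override the global default when it connects. One possibility (a sketch; the `rails_app` account name is an assumption, and init_connect is not executed for accounts with the SUPER privilege):

```sql
-- Enable autocommit only for sessions of the Rails application user;
-- every other connection keeps the global autocommit=0.
SET GLOBAL init_connect =
  'SET SESSION autocommit = IF(CURRENT_USER() LIKE ''rails_app@%'', 1, 0)';
```

Alternatively, Rails itself can issue `SET autocommit = 1` per connection via its database.yml connection variables, keeping the server config untouched.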

SQL Server 2005 Replication

Posted: 29 Apr 2013 05:04 PM PDT

I am in the process of creating replication between 2 remote servers; server 1 is the Distributor and Publisher, and server 2 is the Subscriber.

server 1 windows 2003 server 192.168.10.1 connected by vpn SQL Server 2005 domain1.local


server 2 windows 2003 server 192.168.10.6 connected by vpn SQL Server 2005 domain2.local


When I set up replication, everything looked fine until I looked at the sync status, which said:

The Agent could not be started

An exception occurred while executing a transact-sql statement or batch

sqlserveragent error request to run job server1-username blah blah blah

From user sa refused because the job is already running from a request by user sa changed database context to technical error 22022.

I have cleared jobs in the server agent as well as restarted the service.

Could this be something to do with authentication between two non-trusted domains? I can browse and even control each SQL Server via SQL studio, just not set up replication.

Yes I can manage each SQL Server in SSMS and we are using merge with snapshot.

Mysqldump tables excluding some fields

Posted: 29 Apr 2013 01:04 PM PDT

Is there a way to mysqldump a table without some fields?

Let me explain:
I have a MySQL database called tests. In tests I have 3 tables: USER, TOTO and TATA. I just want to mysqldump some fields of table USER, so excluding some fields like mail, ip_login, etc.

How can I do this?
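mysqldump works at table granularity, so a common workaround (a sketch; the kept column names are hypothetical) is to copy just the wanted columns of USER into a scratch table, dump that, and drop it:

```sql
-- Sketch: materialize only the columns you want to export.
CREATE TABLE USER_export AS
SELECT id, name, created_at      -- hypothetical kept columns; omit mail, ip_login, ...
FROM USER;

-- then from the shell:  mysqldump tests USER_export > user_partial.sql
DROP TABLE USER_export;
```

A view over the wanted columns plus `SELECT ... INTO OUTFILE` is another option when a CSV-style export is acceptable.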

How to do something like UPDATE DELAYED in MySQL

Posted: 29 Apr 2013 03:42 PM PDT

I have an averages table that should keep track of an average value over time. I don't want to have a row for each value, just a single row that continuously updates the average. What I've come up with is this:

set @value = 4;
set @name = "myAverageValue";
UPDATE `timing` SET
  `max` = greatest(`max`, @value),
  `average` = `average` + ((@value - `average`) / (`count` + 1)),
  `count` = `count` + 1
WHERE `name` = @name

Many clients may be doing this at the same time, and I don't want there to be any locking issues. I don't care what order the updates are run in, since in the end it will all end up the same. I just want to have a query that sends the UPDATE to the database, and it will process it eventually, similar to an INSERT DELAYED. Does UPDATE LOW_PRIORITY do this, or does that cause the client to wait until it is available?
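As a side note, the running-average arithmetic in that UPDATE is the standard incremental-mean identity; a quick sketch in Python confirms it matches a mean recomputed from scratch:

```python
def update_average(average: float, count: int, value: float) -> tuple[float, int]:
    """Apply the same incremental-mean step as the UPDATE statement."""
    average = average + (value - average) / (count + 1)
    return average, count + 1

# Feed in a stream of values and compare with the exact mean.
values = [4, 10, 7, 1]
avg, n = 0.0, 0
for v in values:
    avg, n = update_average(avg, n, v)

print(avg, n)  # avg equals sum(values) / len(values)
```

Because the identity holds regardless of update order, concurrent clients interleaving these UPDATEs still converge on the correct average, which is why only the locking (not the math) is the concern here.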

How does SQL Server AlwaysOn help with scheduled jobs and SSIS packages?

Posted: 29 Apr 2013 02:12 PM PDT

I have 2 sql servers set up with a database within an AlwaysOn availability group.

I also have jobs set up which run against the database. During a failover how do I ensure the jobs will continue to run on the secondary server? Do I need to install the jobs and ssis packages on both machines and manually disable them on the secondary machine... then manually enable them in the case of a failover? Or is there built in functionality to handle this?

Replication master binlog rotation when network is unavailable

Posted: 29 Apr 2013 04:04 PM PDT

I recently experienced an issue where the binlog file on the master rotated because network connectivity between the slave and master was unavailable.

After the network issue was resolved, the slave was no longer able to follow the master from the binlog file and position it had been using.

I solved it by purging the master's binlogs up to the most recent one and pointing the slave at that binlog, using the position it had been following in the previous one.

I am wondering: is this behaviour normal?

Take individual MySQL database offline

Posted: 29 Apr 2013 07:04 PM PDT

MySQL does not have any SQL commands or internal mechanisms for

  • making an individual database unavailable / offline
  • moving an individual database

This being the case, how can you take an individual database offline?
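The closest commonly suggested approximation (a sketch; it blocks access per account rather than truly taking the database offline, and the database/user names are placeholders) is to revoke privileges on that schema:

```sql
-- Sketch: strip one account's access to one schema; repeat per account.
REVOKE ALL PRIVILEGES ON somedb.* FROM 'appuser'@'%';
FLUSH PRIVILEGES;
```

Granting the privileges back later "brings the database online" again from the application's point of view.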

Choosing shard key and friendly URL Ids for my MongoDB

Posted: 29 Apr 2013 09:04 PM PDT

I have decided to use MongoDB as my Database for a web application. However, I have some difficulties to get started and I hope that you can help me out with a few questions.

I am developing my application in ASP.NET with MongoDB as the back-end. I intend to start with a single server plus one replica, but I want to build it right so I won't have problems sharding the database in the future if I have to.

One of my biggest problems is choosing the right shard key and friendly URLs for my website.

I have a folders collection, with files as an embedded collection inside it. Each user can create any number of folders and add files to them. Each folder belongs to one user. I wanted to know what the best shard key is for this type of collection. Many queries will query by user, getting a folder and its items by querying the folders collection by its unique id. I will also use the id in the URL to get the folder and its files, e.g. mywebsite.com/folder/[the_id_of_the_folder]

I will also use paging in my application, so I need to query the data (also in a sharded environment) and get, for example, the last 10 records, or page 2 with 10 records, all ordered by the last time they were inserted/updated.

  • So my first question is: what is the best shard key to use on a single machine, considering that I will have to shard in the future?
  • Does the shard key have to be the primary unique id of the document in MongoDB?
  • How can I generate more user-friendly URLs? I'd prefer a numerical value instead of a GUID (is there an option to convert it?)

Help would be very much appreciated, as I am stuck and can't continue until I solve this.

SQL Server 2008 R2 Transactional Replication "Cannot insert explicit value for identity column..."

Posted: 29 Apr 2013 12:32 PM PDT

Right now I'm having an "identity crisis" with transactional replication in SQL Server 2008 R2. The database is running in compatibility 90. There's a table that has an identity column and won't replicate over. The error is "Cannot insert explicit value for identity column in table '' when IDENTITY_INSERT is set to OFF. (Source: MSSQLServer, Error number: 544)".

NOT FOR REPLICATION is set to true for that table. I can't find any article settings to specify this either.

Any ideas are appreciated.
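If it helps, the article-level setting that governs how replication scripts the identity column is identityrangemanagementoption. A sketch for inspecting and changing it is below; the publication and article names are placeholders, and changing the property invalidates the current snapshot, so treat this as something to test rather than a definitive fix:

```sql
-- Sketch: placeholder publication/article names. Run at the publisher,
-- in the published database.
EXEC sp_helparticle
     @publication = N'MyPublication',
     @article     = N'MyTable';

-- 'manual' scripts the identity column so the subscriber can receive
-- explicit values; 'auto' lets replication manage identity ranges.
EXEC sp_changearticle
     @publication = N'MyPublication',
     @article     = N'MyTable',
     @property    = N'identityrangemanagementoption',
     @value       = N'manual',
     @force_invalidate_snapshot = 1;
```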

Is there a repository for pgadmin plugins?

Posted: 29 Apr 2013 05:24 PM PDT

I've been using PostgreSQL with PGAdmin III for a while now and it's been bugging me that there is a plugins menu option that is empty. I've Googled some and found a plugin here and there but I wanted to know if there was a repository I was missing out on?

If you don't know of a repo but are aware of additional plugins please also let me know about them.

[SQL Server] inherited an app, Error started the next day



inherited an app, Error started the next day

Posted: 29 Apr 2013 10:23 AM PDT

Hi, first post. I've been using SQL for about 3 years; I just changed jobs and inherited a bunch of code. I have tracked down the problem section of code but haven't worked much with cursors. I could use a hand to solve this bug.

------ Error ------
Msg 536, Level 16, State 5, Line 34
Invalid length parameter passed to the SUBSTRING function.
The statement has been terminated.
Msg 536, Level 16, State 5, Line 34
Invalid length parameter passed to the SUBSTRING function.
The statement has been terminated.
.......... {INFINITE LOOP}

------ Code Snippet ------
PRINT 'CREATE RECORDS BASED ON MOVEITEMS INFORMATION STOCK'
DECLARE @MESSAGE VARCHAR(2000), @ID INT, @DATE DATETIME, @STOCKCODE varchar(50), @DESCRIPTION varchar(2000), @USERID INT

CREATE TABLE #TEMP (
    [ID] INT,
    [DATE] DATETIME,
    ITEMID INT,
    STOCKCODE VARCHAR(50),
    [DESCRIPTION] VARCHAR(100),
    USERID INT,
    QUANTITY FLOAT,
    DIRECTION VARCHAR(10),
    LOCATIONID INT
)

DECLARE MOVEITEMS_CUR CURSOR
FOR SELECT MOVEITEMSID, MOVEDATE, ITEMID, [DESCRIPTION], USERID
    FROM MYOB_IMPORT..MOVEITEMS

OPEN MOVEITEMS_CUR
FETCH NEXT FROM MOVEITEMS_CUR INTO @ID, @DATE, @STOCKCODE, @DESCRIPTION, @USERID

WHILE @@FETCH_STATUS = 0
BEGIN
    WHILE (LEN(@DESCRIPTION)) > 5
    BEGIN
        INSERT INTO #TEMP ([ID], [DATE], ITEMID, STOCKCODE, [DESCRIPTION], USERID, QUANTITY, DIRECTION, LOCATIONID)
        SELECT @ID AS [ID],
               @DATE AS [DATE],
               @STOCKCODE AS ITEMID,
               ITEMNUMBER AS STOCKCODE,
               REPLACE(REPLACE(LEFT(@DESCRIPTION, CHARINDEX(CHAR(13), @DESCRIPTION) - 1), CHAR(10), ''), 'MOVED ', ''),
               @USERID AS USERID,
               CASE WHEN CHARINDEX('INTO', REPLACE(REPLACE(LEFT(@DESCRIPTION, CHARINDEX(CHAR(13), @DESCRIPTION) - 1), CHAR(10), ''), 'MOVED ', '')) = 0
                    THEN CONVERT(FLOAT, REPLACE(LEFT(@DESCRIPTION, CHARINDEX(' ', @DESCRIPTION) - 1), CHAR(10), ''))
                    ELSE - CONVERT(FLOAT, REPLACE(LEFT(@DESCRIPTION, CHARINDEX(' ', @DESCRIPTION) - 1), CHAR(10), ''))
               END AS QUANTITY,
               DIRECTION = CASE WHEN CHARINDEX('INTO', REPLACE(REPLACE(LEFT(@DESCRIPTION, CHARINDEX(CHAR(13), @DESCRIPTION) - 1), CHAR(10), ''), 'MOVED ', '')) = 0
                                THEN 'TO' ELSE 'FROM' END,
               L.LOCATIONID
        FROM MYOB_IMPORT..ITEMS I
        LEFT JOIN MYOB_IMPORT..LOCATIONS L
          ON LEFT(REVERSE(LEFT(REVERSE(REPLACE(REPLACE(LEFT(@DESCRIPTION, CHARINDEX(CHAR(13), @DESCRIPTION)), CHAR(10), ''), 'MOVED ', '')),
                               CHARINDEX(' ', REVERSE(REPLACE(REPLACE(LEFT(@DESCRIPTION, CHARINDEX(CHAR(13), @DESCRIPTION)), CHAR(10), ''), 'MOVED ', ''))) - 1)),
                  LEN(LEFT(REVERSE(REPLACE(REPLACE(LEFT(@DESCRIPTION, CHARINDEX(CHAR(13), @DESCRIPTION)), CHAR(10), ''), 'MOVED ', '')),
                           CHARINDEX(' ', REVERSE(REPLACE(REPLACE(LEFT(@DESCRIPTION, CHARINDEX(CHAR(13), @DESCRIPTION)), CHAR(10), ''), 'MOVED ', ''))) - 1)) - 1)
             = L.LOCATIONIDENTIFICATION
        WHERE @STOCKCODE = ITEMID

        IF (LEN(@DESCRIPTION) - LEN(LEFT(@DESCRIPTION, CHARINDEX(CHAR(13), @DESCRIPTION)))) > 0
        BEGIN
            SELECT @DESCRIPTION = RIGHT(@DESCRIPTION, LEN(@DESCRIPTION) - LEN(LEFT(@DESCRIPTION, CHARINDEX(CHAR(13), @DESCRIPTION))))
        END
        ELSE
        BEGIN
            BREAK
        END
    END
    FETCH NEXT FROM MOVEITEMS_CUR INTO @ID, @DATE, @STOCKCODE, @DESCRIPTION, @USERID
END
CLOSE MOVEITEMS_CUR
DEALLOCATE MOVEITEMS_CUR
GO

Any assistance appreciated.
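The Msg 536 comes from the "- 1" after CHARINDEX: when the last segment of @DESCRIPTION has no trailing CHAR(13), CHARINDEX returns 0 and LEFT(@DESCRIPTION, -1) raises the error. Because the error terminates only that statement, @DESCRIPTION never shrinks and the inner WHILE spins forever. One possible guard (a sketch, not necessarily how the original author intended it to be handled) is to make sure a delimiter always exists before parsing:

```sql
-- Sketch: place at the top of the inner loop, before the INSERT ... SELECT.
-- If no CHAR(13) remains in the last segment, append one so every
-- LEFT(..., CHARINDEX(...) - 1) expression gets a positive length.
IF CHARINDEX(CHAR(13), @DESCRIPTION) = 0
    SET @DESCRIPTION = @DESCRIPTION + CHAR(13);
```

The same zero-return hazard applies to the CHARINDEX(' ', @DESCRIPTION) calls, so it is worth checking those inputs for rows with no space as well.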

case when then or subquery?

Posted: 29 Apr 2013 07:40 AM PDT

I'm stuck!! I'm trying to determine whether a customer attended our webinar based on their log-in date/time; that's the easy part. But there are many customers who logged in several times, and using a CASE statement is not working the way I need it to. Below is a snippet of my code:

CASE WHEN jh.join_date BETWEEN DATEADD(mi, -45, web.START_DATE_TIME) AND web.END_DATE_TIME
     THEN 'Attended' ELSE 'Did Not Attend' END

This works if a customer ONLY logged in during the specified times. However, if they logged in during the event AND prior to or after the event, they will be listed multiple times. Makes sense so far, except I need either/or. I've tried to re-write my query several times, but I'm not able to exclude the records that fall outside of the specified time. Thanks in advance for your help!
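One way around the multiple-login rows is to decide attendance per customer rather than per login row, by wrapping the same CASE in an aggregate: if ANY login fell inside the window, the customer attended. This is only a sketch; apart from join_date, START_DATE_TIME, and END_DATE_TIME (from the post), the table and column names are assumptions to be swapped for the real ones:

```sql
-- Sketch: one row per customer, 'Attended' wins if any login was in-window.
-- login_history, webinars, customer_id, and the join condition are assumed.
SELECT jh.customer_id,
       CASE WHEN MAX(CASE WHEN jh.join_date BETWEEN DATEADD(mi, -45, web.START_DATE_TIME)
                                                AND web.END_DATE_TIME
                          THEN 1 ELSE 0 END) = 1
            THEN 'Attended' ELSE 'Did Not Attend'
       END AS attendance
FROM   login_history AS jh
       JOIN webinars AS web
         ON web.webinar_id = jh.webinar_id      -- assumed join key
GROUP BY jh.customer_id;
```

The GROUP BY collapses all of a customer's logins into one row, so out-of-window logins can no longer produce extra "Did Not Attend" rows.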

[Articles] Factivism



Factivism

Posted: 28 Apr 2013 11:00 PM PDT

There are many people trying to make the world better for others. Bono is one of them, with the elimination of hunger as his goal. He calls himself a "factivist" trying to use data to inspire others to join him.

[MS SQL Server] 2005 to 2008 R2 Migration Specs



2005 to 2008 R2 Migration Specs

Posted: 29 Apr 2013 04:22 AM PDT

So I've been researching this, but can't find a clear how-to. I just started a new job and one of my first projects is to migrate a SQL Server 2005 instance to 2008 R2. I need to give specs for the new environment to my infrastructure guy. We have an application running on a Windows 2003 box (4 processors, 4 GB of RAM) with SQL 2005 SP4. We are running on VMware with NetApp storage. We need to get to Windows 2008 with SQL Server 2008 R2. How do I determine specs for the new box, for example how many CPUs (and their speed), how much RAM, etc.? Is there a tutorial you can point me to?
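As a starting point, it usually helps to baseline what the current box actually has and uses before spec'ing the new one. A sketch using a DMV that exists on SQL Server 2005 (note the memory column was renamed to physical_memory_kb in SQL Server 2012, so this is version-specific):

```sql
-- Sketch: CPU and memory as the current SQL Server 2005 instance sees them.
SELECT cpu_count,                                    -- logical CPUs visible
       hyperthread_ratio,
       physical_memory_in_bytes / 1048576 AS physical_memory_mb
FROM   sys.dm_os_sys_info;
```

Pair that with Perfmon counters (e.g. % Processor Time, Page Life Expectancy, disk latency) gathered over a typical business week; if the baseline shows no pressure, matching or modestly exceeding the old box is a common, defensible starting spec.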

Which service should be enabled/started for witness server configuration in DB mirroring

Posted: 28 Apr 2013 10:34 PM PDT

Which service should be enabled/started to configure a witness server in database mirroring?

How many connections are possible to a standby DB in log shipping

Posted: 28 Apr 2013 10:30 PM PDT

How many connections are possible to a standby database in log shipping?

[SQL 2012] Validate Disk Access Latency: No disks were found on which to perform cluster validation tests.



Validate Disk Access Latency: No disks were found on which to perform cluster validation tests.

Posted: 29 Apr 2013 02:43 AM PDT

Hi, I get the error below. Kindly check and let me know what should be done to initialize the iSCSI disk.

Validate Disk Access Latency
Description: Validate acceptable latency for disk read and write operations.
Start: 4/29/2013 11:38:53 AM.
No disks were found on which to perform cluster validation tests.
Stop: 4/29/2013 11:38:53 AM.

Validate Disk Arbitration
Description: Validate that a node that owns a disk retains ownership after disk arbitration.
Start: 4/29/2013 11:38:53 AM.
No disks were found on which to perform cluster validation tests.
Stop: 4/29/2013 11:38:53 AM.

Validate Disk Failover
Description: Validate that a disk can fail over successfully with data intact.
Start: 4/29/2013 11:38:53 AM.
No disks were found on which to perform cluster validation tests.
Stop: 4/29/2013 11:38:53 AM.

Validate File System
Description: Validate that the file system on disks in shared storage is supported by failover clusters.
Start: 4/29/2013 11:38:53 AM.
No disks were found on which to perform cluster validation tests.
Stop: 4/29/2013 11:38:53 AM.

Validate IP Configuration
Description: Validate that IP addresses are unique and subnets configured correctly.

AGSCHEL02-SRV03.agshealth.com
  Adapter Name: Local Area Connection* 11
    Adapter Description: Microsoft Failover Cluster Virtual Adapter
    Physical Address: 02-30-E6-30-4C-47
    Status: Not Operational
    DNS Servers:
    IP Address: fe80::8cdb:826d:a12a:d479%15 (Prefix Length 64)
    IP Address: 169.254.212.121 (Prefix Length 16)
  Adapter Name: Ethernet
    Adapter Description: Intel(R) PRO/1000 MT Network Connection
    Physical Address: 00-0C-29-7B-F0-1A
    Status: Operational
    DNS Servers: 10.10.12.15, 10.10.10.21
    IP Address: 10.10.12.18 (Prefix Length 24)
  Adapter Name: Loopback Pseudo-Interface 1
    Adapter Description: Software Loopback Interface 1
    Physical Address:
    Status: Operational
    DNS Servers:
    IP Address: ::1 (Prefix Length 128)
    IP Address: 127.0.0.1 (Prefix Length 8)
  Adapter Name: isatap.{46BEEE67-06E3-40D4-B5B6-2C61FEDCD197}
    Adapter Description: Microsoft ISATAP Adapter #2
    Physical Address: 00-00-00-00-00-00-00-E0
    Status: Not Operational
    DNS Servers:
    IP Address: fe80::5efe:10.10.12.18%14 (Prefix Length 128)

AGSCHTD05-SRV01.agshealth.com
  Adapter Name: Local Area Connection* 11
    Adapter Description: Microsoft Failover Cluster Virtual Adapter
    Physical Address: 02-CE-31-7E-6B-47
    Status: Not Operational
    DNS Servers:
    IP Address: fe80::b1cb:2621:8ba0:fa8c%16 (Prefix Length 64)
    IP Address: 169.254.250.140 (Prefix Length 16)
  Adapter Name: Ethernet
    Adapter Description: Intel(R) PRO/1000 MT Network Connection
    Physical Address: 00-50-56-B4-59-25
    Status: Operational
    DNS Servers: 10.10.10.21, 10.10.12.15
    IP Address: fe80::4802:b41d:bcca:4363%12 (Prefix Length 64)
    IP Address: 10.10.10.10 (Prefix Length 23)
  Adapter Name: Loopback Pseudo-Interface 1
    Adapter Description: Software Loopback Interface 1
    Physical Address:
    Status: Operational
    DNS Servers:
    IP Address: ::1 (Prefix Length 128)
    IP Address: 127.0.0.1 (Prefix Length 8)
  Adapter Name: isatap.{D82337FC-B8DF-4D4D-9D86-CB2BB252814C}
    Adapter Description: Microsoft ISATAP Adapter #2
    Physical Address: 00-00-00-00-00-00-00-E0
    Status: Not Operational
    DNS Servers:
    IP Address: fe80::5efe:10.10.10.10%14 (Prefix Length 128)
  Adapter Name: Teredo Tunneling Pseudo-Interface
    Adapter Description: Teredo Tunneling Pseudo-Interface
    Physical Address: 00-00-00-00-00-00-00-E0
    Status: Not Operational
    DNS Servers:
    IP Address: fe80::100:7f:fffe%15 (Prefix Length 64)

Verifying that a node does not have multiple adapters connected to the same subnet.
Verifying that each node has at least one adapter with a defined default gateway.
Verifying that there are no node adapters with the same MAC physical address.
Verifying that there are no duplicate IP addresses between any pair of nodes.
Checking that nodes are consistently configured with IPv4 and/or IPv6 addresses.
Verifying that all nodes' IPv4 networks are not configured using Automatic Private IP Addresses (APIPA).

Validate Memory Dump Settings
Description: Validate that none of the nodes currently requires a reboot (as part of a software update) and that each node is configured to capture a memory dump if it stops running.
Validating software configuration.

Validate Microsoft MPIO-based disks
Description: Validate that disks that use Microsoft Multipath I/O (MPIO) have been configured correctly.
Start: 4/29/2013 11:38:53 AM.
No disks were found on which to perform cluster validation tests.
Stop: 4/29/2013 11:38:53 AM.

Validate Multiple Arbitration
Description: Validate that in a multiple-node arbitration process, only one node obtains control.
Start: 4/29/2013 11:38:53 AM.
No disks were found on which to perform cluster validation tests.
Stop: 4/29/2013 11:38:53 AM.

Search This Blog