Saturday, July 27, 2013

[SQL Server 2008 issues] Convert Excel Formula to SQL code


Convert Excel Formula to SQL code

Posted: 26 Jul 2013 10:23 AM PDT

How do I convert this Excel formula to SQL code?

=IF(T2<R2,T2*O2,MIN(T2-R2,S2-R2)*O2)

Thanks, Helal
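A minimal translation sketch, assuming the Excel columns T, R, S and O map to table columns named t, r, s and o (all names hypothetical):

[code="sql"]
-- CASE mirrors Excel's IF; the inner CASE stands in for MIN(T2-R2, S2-R2)
SELECT CASE
           WHEN t < r THEN t * o
           ELSE CASE WHEN (t - r) < (s - r) THEN (t - r) ELSE (s - r) END * o
       END AS result
FROM dbo.SomeTable; -- hypothetical table
[/code]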

How to do mathematical calculations in SQL

Posted: 26 Jul 2013 04:41 PM PDT

Dear friends, I need your help with a calculation, rounded to the closest integer: (((Monthly Revenue field - Monthly Cost field) / Monthly Revenue field) * 100) -- how do I do this, please? Thanks, DJ
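One hedged way to write this in T-SQL, assuming columns named MonthlyRevenue and MonthlyCost (names hypothetical); NULLIF avoids a divide-by-zero and 100.0 forces decimal arithmetic:

[code="sql"]
SELECT ROUND(((MonthlyRevenue - MonthlyCost) * 100.0)
             / NULLIF(MonthlyRevenue, 0), 0) AS MarginPct
FROM dbo.Revenue; -- hypothetical table
[/code]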

Query taking non-consistent time to execute

Posted: 22 Jul 2013 08:19 PM PDT

I have a query which sometimes takes a long time to execute. When I checked, it was taking 15 seconds, but it should really run faster than that. When I checked the query again, it executed within 11 seconds. Then I tried removing the CAST used in the query, and it ran in just 8 seconds. But when I ran the original query (with the CAST) again, it took less time. I checked the query on several computers, and in some cases I get output in just 1 or 2 seconds. So the same query takes a different time to execute on different PCs, and even on the same PC the time required is not consistent. I am not able to find out why this happens. Below is the query I used to test:

[code="sql"]
SELECT [customer].[Customer name], [customer].[Sl_No], [customer].[Id]
FROM [company].dbo.[customer]
WHERE ( Charindex('9000413237', CAST([company].dbo.[customer].[Phone no] AS VARCHAR(MAX))) > 0 )
[/code]

Here the customer table has about 30 columns. 'Customer name' is of datatype varchar(255), 'Sl_No' is int, 'Id' is int and 'Phone no' is varchar(255).
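One hedged simplification worth testing: since [Phone no] is already varchar(255), the CAST to VARCHAR(MAX) can be dropped and CHARINDEX replaced by LIKE. Neither form is SARGable (both still scan), but removing the per-row conversion sometimes helps:

[code="sql"]
SELECT [Customer name], [Sl_No], [Id]
FROM [company].dbo.[customer]
WHERE [Phone no] LIKE '%9000413237%'
[/code]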

Cannot start in Single User Mode

Posted: 26 Jul 2013 08:58 AM PDT

For a disaster recovery test we are being required to restore our master database to an alternate server. Several servers have worked fine. I have used SQL Server Configuration Manager and changed the startup parameters to single-user mode. When I use SQLCMD to restore the database I am plagued by an error stating that, basically, another user is logged in. I have also started SQL Server from the command prompt (with the -m and -c options) to watch for logins (I changed SQL Server security to log successful logins) and I see nothing. Any suggestions? I have googled the heck out of this and am not coming up with anything else to try.
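One hedged suggestion: when starting in single-user mode, restrict the one available connection to the sqlcmd client so another service (SQL Server Agent, a monitoring tool) cannot grab it first. A sketch, with the backup path hypothetical:

[code="other"]
net stop SQLSERVERAGENT
sqlservr.exe -c -m"SQLCMD"
REM then, from a second window:
sqlcmd -S . -E -Q "RESTORE DATABASE master FROM DISK = N'C:\backup\master.bak' WITH REPLACE"
[/code]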

Index rebuilds affecting mirroring, which in turn creates blocking and timeouts

Posted: 26 Jul 2013 08:36 AM PDT

Hi all. Here is my situation. We have a 24x7 production system which is mirrored in high-safety mode, and every night we have a job to defragment all indexes (at night the system is slightly less accessed than during the day). The cumulative size of all principal databases is 2 TB+. Due to the large number of indexes being rebuilt, our mirroring cannot keep up with the rebuilds, which in turn causes mirroring delays. Since it is in high-safety mode, it holds locks on resources while waiting for each transaction to be acknowledged by the mirror side, creating a lot of blocking and timeouts. Does anybody experience the same problem? What would be the best solution? Does changing mirroring from high-safety to high-performance mode solve this problem, and what implications would that have? Our SQL Server version is 2008 R2, and we use mirroring mostly for disaster recovery; we have failover clustering for high availability. Thanks

Attach/Detach Database message in sql server errorlog

Posted: 26 Jul 2013 01:18 AM PDT

Is there a way to automatically log an entry in the SQL Server error log when a database is attached, detached, created, or dropped? Thanks
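A hedged sketch of one option: a server-level DDL trigger that writes to the error log via RAISERROR ... WITH LOG. Note that CREATE DATABASE ... FOR ATTACH fires the CREATE_DATABASE event, but a plain sp_detach_db has no dedicated DDL event, so detaches may need the default trace instead:

[code="sql"]
CREATE TRIGGER trg_log_db_ddl
ON ALL SERVER
FOR CREATE_DATABASE, DROP_DATABASE
AS
BEGIN
    DECLARE @cmd NVARCHAR(2000);
    -- pull the statement text out of the event payload
    SET @cmd = EVENTDATA().value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'NVARCHAR(2000)');
    RAISERROR (N'Database DDL: %s', 10, 1, @cmd) WITH LOG;
END;
[/code]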

Get the version and service pack details of all sql server instances

Posted: 26 Jul 2013 06:48 AM PDT

Hi, I'm trying to get the version and service pack details from all of our SQL Server instances. I have a table which contains the list of SQL Servers, and I just need the output in this format, for example:

Instancename  ServicePack  Version
DBATest       10.0.4000.0  SP2

This table has the list of servers:

select instanceName from dbo.MonitoredSQLServers

The script below gives the above output for the local instance:

select SERVERPROPERTY('servername') Instancename, SERVERPROPERTY('productversion') ServicePack, SERVERPROPERTY('productlevel') Version

Please let me know how to get the details for all the servers. I tried using a cursor, but I'm not much into development, so any help would be greatly appreciated. Thanks in advance.
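A hedged sketch of the cursor approach, assuming each name in dbo.MonitoredSQLServers is also configured as a linked server of the same name (note that the original script's aliases have version and level swapped; they are corrected here):

[code="sql"]
DECLARE @name SYSNAME, @sql NVARCHAR(MAX);
DECLARE c CURSOR FAST_FORWARD FOR
    SELECT instanceName FROM dbo.MonitoredSQLServers;
OPEN c;
FETCH NEXT FROM c INTO @name;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- pass-through query so SERVERPROPERTY runs on the remote instance
    SET @sql = N'SELECT * FROM OPENQUERY(' + QUOTENAME(@name) + N',
        ''SELECT SERVERPROPERTY(''''servername'''')     AS Instancename,
                 SERVERPROPERTY(''''productversion'''') AS Version,
                 SERVERPROPERTY(''''productlevel'''')   AS ServicePack'')';
    EXEC (@sql);
    FETCH NEXT FROM c INTO @name;
END;
CLOSE c;
DEALLOCATE c;
[/code]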

Information about Jobs

Posted: 25 Jul 2013 10:10 PM PDT

Is there any script which will provide the details below for the jobs? I have more than 100 jobs. :w00t:
1. Job name
2. Start time of the job
3. Max time taken by each job to run successfully
4. Server resources consumed by each job
Please advise
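A hedged sketch covering points 1-3 from msdb job history (point 4, resource consumption, is not stored in msdb and needs tracing or a monitoring tool):

[code="sql"]
SELECT  j.name AS JobName,
        MAX(msdb.dbo.agent_datetime(h.run_date, h.run_time)) AS LastStartTime,
        MAX(h.run_duration) AS MaxDuration_HHMMSS  -- duration is encoded as HHMMSS
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.sysjobhistory AS h ON h.job_id = j.job_id
WHERE h.step_id = 0      -- job outcome rows only
  AND h.run_status = 1   -- successful runs
GROUP BY j.name;
[/code]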

Table partition/rebuild issues

Posted: 26 Jul 2013 12:26 AM PDT

Hi all, I have the following scenario on my 6 TB database (it's a mobile billing database) on SQL Server 2008 R2 SP2 (with Cumulative Update #5):

- A huge amount of data is inserted every day, at all hours, into a single table
- The table is partitioned on a DATETIME column with a range of 15 minutes
- Daily maintenance merges the oldest partitions (it's only necessary to keep data from the last 60 days) and adds the future partitions
- During this daily maintenance, I also rebuild the index partitions with fragmentation greater than 30%

In order to complete this daily maintenance, I execute some steps to avoid locks and deadlocks:

- Kill all sessions that are inserting into and/or querying this particular table
- Disable the users which insert and query
- Execute the steps to add/remove partitions
- Rebuild index partitions
- Enable the users

I applied the Cumulative Update #5 patch, which says that the deadlocks on adding/removing partitions should be fixed (http://support.microsoft.com/kb/2713501/en-us). After that I removed the first and fourth steps, so processes could keep inserting and users could keep querying. Well, it didn't work: the deadlocks and the locking on the rebuild occurred again. I sort of worked around the deadlock issue by setting DEADLOCK_PRIORITY to HIGH, but when we rebuild the index partitions - even if I switch the fragmented partition to an auxiliary table, rebuild the index partition on this auxiliary table and then switch the partition back to the original table - it still blocks the users and the inserting and querying (and the rebuild). Is there anything I can do to fix this issue so that the users don't have to be disabled? Any thoughts? Thanks

Single Column Encryption on Sensitive Data - Unable to Use Joins

Posted: 26 Jul 2013 01:29 AM PDT

Hi all, my issue is regarding single column encryption on a primary key column. What happens if this primary key or unique key column contains sensitive data and is also used for joins between two tables? What are the options to encrypt this data so that the encrypted values remain identical for identical inputs and can still be used for joins?

For example, table A has two columns:

Member_Id(PK)  City
1234           London
6789           Paris
7777           NewYork

Table B has two columns:

Member_Id(FK)  Bank_account
1234           RBC
1234           ScotiaBank
7777           BOB

When I applied single column encryption to protect this sensitive data, it worked perfectly, but it gives me a different encrypted value for the same Member_Id (in both table A and B) every time, e.g.:

1234(pk) -> AES_128 -> HASDHAS829092)(jsadkljdhnasd
1234(fk) -> AES_128 -> NCBASJEUHHKLS893298()JSLHA

So I can no longer use these fields to join the tables!!!! Is there any option to solve this? I have already used "DataMasking", which works fine. Now I also want to try encryption. Please help me. Thanks.
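A hedged sketch of one common workaround: alongside the (non-deterministic) ciphertext, store a deterministic keyed hash of the plaintext key, computed at insert time, and join on that instead. The key name, salt and column names below are hypothetical; HASHBYTES with SHA1 is available on SQL Server 2008:

[code="sql"]
-- at insert time, before the plaintext is discarded:
INSERT INTO TableA (Member_Id_Enc, Member_Id_Hash, City)
VALUES (ENCRYPTBYKEY(KEY_GUID('MyAesKey'), '1234'),  -- differs on every insert
        HASHBYTES('SHA1', 'secret-salt' + '1234'),   -- stable, joinable
        'London');

-- joins compare the stable hashes, not the ciphertext:
SELECT a.City, b.Bank_account
FROM TableA AS a
JOIN TableB AS b ON b.Member_Id_Hash = a.Member_Id_Hash;
[/code]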

Function to return the fiscal month from a date. Fiscal year begins in October

Posted: 26 Jul 2013 03:59 AM PDT

I need a function that would return the fiscal month number from a date. Our fiscal year starts in October. Something like...

declare @Date datetime
if month(@Date) > 9 then month(@Date) - 9 else month(@Date) + 3
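A minimal sketch of that logic as a scalar function (October maps to fiscal month 1):

[code="sql"]
CREATE FUNCTION dbo.FiscalMonth (@Date DATETIME)
RETURNS INT
AS
BEGIN
    RETURN CASE WHEN MONTH(@Date) > 9
                THEN MONTH(@Date) - 9   -- Oct..Dec -> 1..3
                ELSE MONTH(@Date) + 3   -- Jan..Sep -> 4..12
           END;
END;
[/code]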

oledb destination on error continue for next row

Posted: 26 Jul 2013 01:42 AM PDT

Hi everybody, I need some advice about this: I have a Data Flow Task which contains an OLE DB Source connected to a SQL Server table, and an OLE DB Destination also connected to a SQL Server table. The OLE DB Source will extract around 3 million records. The OLE DB Destination is replicated 2 times. The first OLE DB Destination is configured with "Maximum insert commit size" set to 100 and "Rows per batch" set to 10, with its error output ("OLE DB Destination Input") configured to "Redirect row"; the error output is then connected to the next OLE DB Destination, whose only difference is "Maximum insert commit size" set to 1 and "Rows per batch" set to 1. For some reason some rows have errors. I need the next correct row to carry on being inserted, while capturing the error for the previous failing row. At the moment the OLE DB Destination breaks when the first row with an error arrives. Any suggestions? Thanks

SQL Server 2008 - how to upgrade to R2

Posted: 25 Jul 2013 09:25 PM PDT

Hi, I'm looking for some guidance on upgrading SQL Server 2008 version 10.1.2531.0 up to R2, which I understand will be 10.5.xxx. I have tried running the installer for R2 SP2 and R2 SP1, but the installer prevents the update, saying: [quote]The version of SQL Server instance MSSQLSERVER does not match the version expected by the SQL Server update. The installed SQL Server product version is 10.1.2531.0, and the expected SQL Server version is 10.50.1600.1.[/quote] What is the next step? Clearly R2 SP1 is too much of a jump, but I have really struggled to find any download in between. Thanks

Find and update the Schema changes of a table?

Posted: 26 Jul 2013 12:02 AM PDT

Hi friends, I have around 25 tables that I frequently use to pull data from the live server to my local machine. The schemas of the live database change frequently; I mean we add one or more columns every time. When I use my SSIS package to pull the data from the live server to my local machine, it gets messed up. I have to drop the database, recreate it with the new schemas, and update the package as well. So what I am looking for is (just curious):

1. How do we create a script to update the schema? I know the Red Gate tool does this comparison of schemas, but is there a way to create SQL statements to update the schema? (I can't buy the Red Gate software; the price is not affordable :-))
2. How can we avoid the SSIS package getting this error? You know, if we have added some fields to a table, then I have to update the package as well before using it.

Any suggestions would be really appreciated. Sorry in case my questions are meaningless...

Log Shipping - Point In Time Recovery

Posted: 25 Jul 2013 10:33 PM PDT

Hi, we have log shipping as one of our DR solutions. Basically, we take a transaction log backup every 15 minutes to our DR server. We keep a maximum of 3 days' worth of log backups, which we're looking to extend to 10 days so that we can roll the production database back further in time. What is the best method of restoring the production database to a specific point in time? Would I need to apply the full backup from the night before, then apply the transaction logs up until the required restore time?
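A hedged sketch of the usual point-in-time sequence (paths and the STOPAT value are placeholders): restore the last full backup taken before the target time WITH NORECOVERY, apply each log backup in order, and use STOPAT on the final one:

[code="sql"]
RESTORE DATABASE Prod FROM DISK = N'D:\backup\Prod_full.bak'
    WITH NORECOVERY, REPLACE;
RESTORE LOG Prod FROM DISK = N'D:\backup\Prod_log_01.trn' WITH NORECOVERY;
-- ...apply the intermediate log backups in sequence...
RESTORE LOG Prod FROM DISK = N'D:\backup\Prod_log_42.trn'
    WITH STOPAT = '2013-07-26T14:30:00', RECOVERY;
[/code]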

BAI, BAI2 bank files

Posted: 02 Nov 2011 10:47 PM PDT

Hi, does anyone have a script to load and convert these files into SQL Server tables, constructing regular, readable rows of data that could be exported to Excel or CSV? Thanks

Page level compression - Indices/Partitions/Heap

Posted: 23 Jul 2013 04:49 AM PDT

I have read a lot of articles on compression, but I couldn't find a definitive answer to these questions:

i) If a table has no clustered index but has non-clustered indexes, do I need to compress the table first and then the NC indexes?
ii) If a table has a clustered index and non-clustered indexes, would compressing just the clustered index also compress the NC indexes?
iii) When compressing the clustered index, can this be done with the ONLINE=ON option?
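For reference, a hedged sketch of the syntax involved; compressing the table (heap or clustered index) does not compress the nonclustered indexes, which are compressed one by one (object names hypothetical):

[code="sql"]
ALTER TABLE dbo.MyTable REBUILD
    WITH (DATA_COMPRESSION = PAGE);               -- heap or clustered index
ALTER INDEX IX_MyTable_Col ON dbo.MyTable REBUILD
    WITH (DATA_COMPRESSION = PAGE, ONLINE = ON);  -- ONLINE needs Enterprise Edition
[/code]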

Change replication order in Transaction Replication

Posted: 25 Jul 2013 08:25 PM PDT

Hello, I'm running into a small problem concerning transactional replication. The database we are trying to replicate has various views which use user-defined stored procedures and functions. When we set up transactional replication and select the tables, views, SPs and functions, the replication fails with an error saying certain functions are not available for the view it is creating on the subscriber. When I create a new subscription with only the tables, SPs and functions and replicate that, it works fine. When I then add the views to the publication, create a new snapshot and reinitialize the subscription, it all works as well. It seems that when you set up replication, the order in which it creates articles is Tables -> Views -> SPs -> Functions. Is there a way to change this order to Functions -> SPs -> Views -> Tables so we won't run into error messages that certain functions are not available to create the view on the subscriber? Thanks for the help!

Active Users

Posted: 25 Jul 2013 08:34 PM PDT

Hi, how do I find out the active users in SQL Server with a T-SQL statement?
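A hedged starting point using the sessions DMV:

[code="sql"]
SELECT session_id, login_name, host_name, program_name, status, login_time
FROM sys.dm_exec_sessions
WHERE is_user_process = 1;
[/code]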

memory utilisation

Posted: 25 Jul 2013 08:30 PM PDT

How do I check the memory utilisation of a single query?
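A hedged starting point: the memory-grant DMV shows requested/granted workspace memory for currently executing queries:

[code="sql"]
SELECT mg.session_id, mg.requested_memory_kb, mg.granted_memory_kb,
       mg.used_memory_kb, t.text
FROM sys.dm_exec_query_memory_grants AS mg
CROSS APPLY sys.dm_exec_sql_text(mg.sql_handle) AS t;
[/code]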

Friday, July 26, 2013

[how to] Refactoring SQL Server to workaround Anti-Semi Join SQL 2005 bug


Refactoring SQL Server to workaround Anti-Semi Join SQL 2005 bug

Posted: 26 Jul 2013 09:14 PM PDT

I built the following SQL Server query, but it is encountering the anti-semi join defect in SQL Server 2005, which results in inaccurate cardinality estimates (1 -- urgh!) and runs forever. Since it is a longstanding production SQL Server I can't easily suggest upgrading versions, and as such I cannot force the trace flag 4199 hint on this specific query. I'm having a hard time refactoring the WHERE ... AND NOT IN (SELECT). Would anyone care to help? I've made sure to try and use the best joins based on clustered key pairs.

SELECT TOP 5000
       d.doc2_id,
       d.direction_cd,
       a.address_type_cd,
       d.external_identification,
       s.hash_value,
       d.publishdate,
       d.sender_address_id AS [D2 Sender_Address_id],
       a.address_id AS [A Address_ID],
       d.message_size,
       d.subject,
       emi.employee_id
FROM assentor.emcsdbuser.doc2 d (NOLOCK)
INNER JOIN assentor.emcsdbuser.employee_msg_index emi (NOLOCK)
        ON d.processdate = emi.processdate
       AND d.doc2_id = emi.doc2_id
INNER LOOP JOIN assentor.emcsdbuser.doc2_address a (NOLOCK)
        ON emi.doc2_id = a.doc2_id
       AND emi.address_type_cd = a.address_type_cd
       AND emi.address_id = a.address_id
INNER JOIN sis.dbo.sis s (NOLOCK)
        ON d.external_identification = s.external_identification
WHERE d.publishdate > '2008-01-01'
  AND d.doc2_id NOT IN (
        SELECT doc2_id
        FROM assentor.emcsdbuser.doc2_address d2a (NOLOCK)
        WHERE d.doc2_id = d2a.doc2_id
          AND d2a.address_type_cd = 'FRM'
      )
OPTION (FAST 10)

Note that the Employee_MSG_Index table is 500m rows, doc2 is 1.5b rows, SIS is ~500m rows.

Any help would be appreciated!
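One hedged rewrite to try: since the NOT IN subquery is already correlated, it can be expressed directly as NOT EXISTS (or as a LEFT JOIN ... IS NULL anti-join), which older optimizers often handle better:

WHERE d.publishdate > '2008-01-01'
  AND NOT EXISTS (
        SELECT 1
        FROM assentor.emcsdbuser.doc2_address d2a (NOLOCK)
        WHERE d2a.doc2_id = d.doc2_id
          AND d2a.address_type_cd = 'FRM'
      )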

PostgreSQL: column is of type timestamp but expression is of type varchar ... How do I cast the expression, and why?

Posted: 26 Jul 2013 06:35 PM PDT

This is infuriating. I have a query, in psql, which gives me the exact row I want from one table and I just want to INSERT INTO other_table SELECT .... FROM backup_table WHERE ...

Here are the (barely sanitized) queries:

SELECT ts,id,serial,name,mgmt,svc FROM mylog WHERE name='foobar' AND ts > '2013-07-11 23:00:00';  

... which returns exactly one row.

I try to insert that into my other table with something like:

INSERT INTO mytable SELECT id,serial,name,svc,mgmt,... FROM mylog WHERE id=1234 AND ts > '2013-07-11 23:00:00';  

... (the 'id' is pulled from the query in lieu of the name, though it doesn't matter when I change that to match the exact query). It gives the following error message:

ERROR:  column "ts" is of type timestamp without time zone but expression is of type character varying
HINT:  You will need to rewrite or cast the expression.

(The actual INSERT lists over 60 column names because this particular pair of tables is a nightmare which I've inherited and cannot yet normalize. However the only detail which should be relevant to this question is that I'm excluding the ts column from the INSERT because it's set automatically in the target column. Changes to the target column, mytable in these examples, trigger an insertion into the source, *my_log* here). I'm trying to restore a row which was removed from the main table.

The only column on which I can distinguish among the rows in *my_log* to select the most recent is the timestamp which, as the error message indicates, is of type ts | timestamp without time zone | not null in *my_log* and ts | timestamp without time zone | not null default now() in mytable.

I've tried many different incantations of ::date and ::timestamp, date() and cast(ts as text) and so on ... both on the literal time/date string and on the 'ts' column in the WHERE clause. None of that seems to help.

So, what am I doing wrong, and why does syntax which works just fine for a SELECT query fail in the SELECT clause of an INSERT INTO?
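For comparison, a hedged sketch of the INSERT with an explicit target column list (abbreviated; names illustrative), so the SELECT list cannot slide positionally into the ts column, which then takes its default now():

INSERT INTO mytable (id, serial, name, svc, mgmt)
SELECT id, serial, name, svc, mgmt
FROM mylog
WHERE id = 1234 AND ts > '2013-07-11 23:00:00';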

Illegal mix of collations for operation 'UNION'

Posted: 26 Jul 2013 06:09 PM PDT

I have this database with the tables below. It was running fine until I ran this code:

 alter table <some_table> convert to character set utf8 collate utf8_unicode_ci;  

to change collation and the website is down and showed this error:

Illegal mix of collations for operation 'UNION'  

Now I don't know what to do: should I continue converting all the tables, or not? I have also converted the database itself to utf8. I should have backed up my database, but it's too late. Is there any rollback for this, or is there a fix? Thanks.

articles         InnoDB  utf8_unicode_ci    48.0 KiB   -
authentications  InnoDB  utf8_general_ci    32.0 KiB   -
comments         MyISAM  utf8_unicode_ci    3.3 KiB    -
favorites        InnoDB  latin1_swedish_ci  16.0 KiB   -
login_attempts   InnoDB  latin1_swedish_ci  16.0 KiB   -
members          InnoDB  utf8_unicode_ci    48.0 KiB   -
member_setting   InnoDB  latin1_swedish_ci  16.0 KiB   -
messages         MyISAM  latin1_swedish_ci  2.7 KiB    -
page_views       InnoDB  utf8_unicode_ci    16.0 KiB   -
reports          MyISAM  latin1_swedish_ci  1.0 KiB    -
users            InnoDB  utf8_general_ci    16.0 KiB   -
videos           InnoDB  utf8_unicode_ci    80.0 KiB   -
viewer_info      InnoDB  latin1_swedish_ci  64.0 KiB   -
13 tables        MyISAM  utf8_general_ci    359.0 KiB  0 B

EDIT: wherever there is utf8_unicode_ci, that is a table I have converted.
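A hedged sketch for finishing the conversion consistently: generate one CONVERT statement per table from information_schema and run the output (after taking a backup; the schema name is a placeholder):

SELECT CONCAT('ALTER TABLE `', table_name,
              '` CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;')
FROM information_schema.tables
WHERE table_schema = 'your_db_name';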

Adding an index to a system catalog in Postgres

Posted: 26 Jul 2013 06:03 PM PDT

I'm having a situation very similar to the one described here:

I've got a SaaS situation where I'm using 1000+ schemas in a single database (each schema contains the same tables, just different data per tenant). I used schemas so that the shared app servers could share a connection to the single database for all schemas. Things are working fine.

and, while the application itself appears to be working fine, some queries involving system catalogs are very slow. Also, psql's auto-completion is totally useless and \dt is very slow.

In particular, I need to calculate the on-disk size of each schema using something like this:

SELECT pg_total_relation_size(schemaname || '.' || tablename)
FROM pg_tables
WHERE schemaname = :schema_name

which is dog-slow. I'm thinking of adding an index on schemaname column of pg_tables (which contains about 1M rows) to see if it improves things (UPDATE: pg_tables turned out to be a view and the actual tables used in this view do have indexes, so my initial plan was a bit naive), but I'm a bit worried by the comments in the above thread:

There are a lot of gotchas here, notably that the session in which you create the index won't know it's there (so in this case, a reindex on pg_class would likely be advisable afterwards). I still think you'd be nuts to try it on a production database, but ...

I'm also worried by the fact that modification of system catalogs seemed to be completely disabled in Postgres 9.0 and 9.1 (I'm using 9.2)

So, the question is: what are the gotchas in adding an index to a system catalog in Postgres and will I be nuts if I (eventually) do that on a production system?

Replication error

Posted: 26 Jul 2013 05:19 PM PDT

We have a slave server that has stopped replication due to the following error:

Slave SQL: Query caused different errors on master and slave.  

What could be the cause of this error? And what would be a way to fix it?

Version of both master and slave is MySQL 5.5.30

130726 23:55:45 [Note] C:\Program Files\MySQL\MySQL Server 5.5\bin\mysqld: Shutdown complete

130726 23:58:39 [Note] Plugin 'FEDERATED' is disabled.
130726 23:58:39 [Warning] C:\Program Files\MySQL\MySQL Server 5.5\bin\mysqld: ignoring option '--innodb-file-per-table' due to invalid value 'ON'
130726 23:58:39 [Note] Plugin 'InnoDB' is disabled.
130726 23:58:39 [Note] Server hostname (bind-address): '0.0.0.0'; port: 3306
130726 23:58:39 [Note]   - '0.0.0.0' resolves to '0.0.0.0';
130726 23:58:39 [Note] Server socket created on IP: '0.0.0.0'.
130726 23:58:39 [Note] Slave SQL thread initialized, starting replication in log 'mysql-bin.000234' at position 1065421256, relay log '.\slave-relay-bin.000917' position: 1065421402
130726 23:58:39 [Note] Slave I/O thread: connected to master 'replication@191.5.3.4:3306', replication started in log 'mysql-bin.000235' at position 166680598
130726 23:58:39 [Note] Event Scheduler: Loaded 0 events
130726 23:58:39 [Note] C:\Program Files\MySQL\MySQL Server 5.5\bin\mysqld: ready for connections.
Version: '5.5.30-log'  socket: ''  port: 3306  MySQL Community Server (GPL)
130726 23:59:04 [ERROR] Slave SQL: Query caused different errors on master and slave. Error on master: message (format)='Incorrect key file for table '%-.200s'; try to repair it' error code=1034; Error on slave: actual message='no error', error code=0. Default database: 'shared'. Query: 'CREATE TEMPORARY TABLE tmp_grades (
    vehsysid INT(11),
    grade INT(1),
    dt TIMESTAMP,
    INDEX(vehsysid),
    INDEX(grade),
    INDEX(dt)
) SELECT vehsysid, Grade, MAX(dt) AS dt
  FROM shared.tbl_valuations
  GROUP BY vehsysid, grade', Error_code: 0
130726 23:59:04 [ERROR] Error running query, slave SQL thread aborted. Fix the problem, and restart the slave SQL thread with "SLAVE START". We stopped at log 'mysql-bin.000234' position 1065421256

What else I can't figure out is how a temporary table would lead to such an error (in master):
'Incorrect key file for table '%-.200s'; try to repair it' error code=1034

The last lines from master error log:

130725 23:15:57 [Warning] Warning: Enabling keys got errno 120 on shared.tmp_grades, retrying
130726 23:15:58 [Warning] Warning: Enabling keys got errno 137 on shared.tmp_grades, retrying

Additional info:
- both Master and Slave run on Windows (I don't know if that's relevant.)
- the disks at both have plenty of space.
- replication format is MIXED

Window functions cause the error "The Parallel Data Warehouse (PDW) features are not enabled."

Posted: 26 Jul 2013 09:03 PM PDT

I have the following virtual column,

, MIN(picture_id) OVER ( PARTITION BY [360_set]  ORDER BY picture_id ASC)  

However, when I execute that, I get the following.

Msg 11305, Level 15, State 10, Line 12
The Parallel Data Warehouse (PDW) features are not enabled.

This is where it gets interesting though, this works:

, MIN(picture_id) OVER ( PARTITION BY [360_set] )  

And, further, this works

, ROW_NUMBER() OVER ( PARTITION BY [360_set]  ORDER BY picture_id ASC)  

How come the desired statement doesn't work? Where is this documented?

The version information was requested; this is what I see in Help → About.

Microsoft SQL Server Management Studio      10.0.5512.0
Microsoft Analysis Services Client Tools    10.0.5500.0
Microsoft Data Access Components (MDAC)     6.1.7601.17514
Microsoft MSXML                             3.0 6.0
Microsoft Internet Explorer                 9.10.9200.16635
Microsoft .NET Framework                    2.0.50727.5472
Operating System                            6.1.7601

The result from SELECT @@VERSION is Microsoft SQL Server 2008 (SP3) - 10.0.5512.0 (X64) Aug 22 2012 19:25:47 Copyright (c) 1988-2008 Microsoft Corporation Standard Edition (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1) (VM)
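A hedged note: SQL Server 2008 does not accept ORDER BY in the OVER clause of aggregate functions (that syntax arrived in 2012), which is why only the ORDER BY form fails here; the PDW message is just a misleading way of reporting it. An equivalent per-row minimum can be written as a correlated subquery (table name hypothetical):

SELECT p.*,
       (SELECT MIN(p2.picture_id)
        FROM dbo.pictures AS p2
        WHERE p2.[360_set] = p.[360_set]
          AND p2.picture_id <= p.picture_id) AS min_picture_id
FROM dbo.pictures AS p;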

Understanding MS SQL Server Date Types

Posted: 26 Jul 2013 04:00 PM PDT

Consider the following:

declare @dt datetime, @dt2 datetime2, @d date
set @dt  = '2013-01-01'
set @dt2 = '2013-01-01'
set @d   = '2013-01-01'

select convert(varbinary, @dt) as dt,
       convert(varbinary, @dt2) as dt2,
       convert(varbinary, @d) as d

Output:

dt                    dt2                     d
------------------    --------------------    --------
0x0000A13900000000    0x07000000000094360B    0x94360B

Now, I already understand from the documentation that datetime has a smaller range, and starts from 1753-01-01, while datetime2 and date use 0001-01-01 as their start date.

What I don't understand though, is that datetime appears to be little-endian while datetime2 and date are big-endian. If that's the case, how can they even be properly sortable?

Consider if I want to know how many integer days are represented by a date type. You would think you could do this:

declare @d date
set @d = '0001-01-31'
select cast(convert(varbinary, @d) as int)

But due to the endianness, you get 1966080 days!

To get the correct result of 30 days, you have to reverse it:

select cast(convert(varbinary,reverse(convert(varbinary, @d))) as int)  

Or, of course you can do this:

select datediff(d,'0001-01-01', @d)  

But that means internally somewhere it is reversing the bytes anyway.

So why did they switch endianness?

I only care because I'm working on a custom UDT in SQLCLR, and the binary order of the bytes does seem to matter there, but these built-in types seem much more flexible. Does SQL Server have something internal where each type gets to provide its own sorting algorithm? And if so, is there a way I can tap into that for my custom UDT?

See also, a related (but different) question on StackOverflow.

Optimising Large SQL Server Tables

Posted: 26 Jul 2013 03:28 PM PDT

I have a large table with around 50 million rows and 15 columns. Whenever I read, I always need all the columns, so I can't split them. I have a clustered index on the table with 4 keys (all INT), and I always read data using those keys; the query plan shows 84% for the clustered index seek.

But the performance is still slow. My queries are fairly simple, like this:

select CountryId, RetailerID, FY,
       sum(col1), sum(col2), ..... sum(col15)
from mytable a
join product p on a.productid = p.id
join ......
join .....
join ......
join .....
where .......
group by CountryId, RetailerID, FY

I'm not using any IN operators, subqueries or inline functions here... which I know would obviously make it slow. I've looked at partitioning but I'm not sure about it; can I get some performance improvement by partitioning?

OR is there anything else I can do?

I'm using SQL Server 2012 Enterprise Edition

Estimated Time to Create a Database with Microsoft Access and Python

Posted: 26 Jul 2013 03:19 PM PDT

How long on average would it take to create a Python program for a customer database in Access? The process would involve parsing some 50-page requirement Word documents and miscellaneous Excel tables, and then inputting the gathered information into the Access database.

The database has several tables:

Requirements

Category

Subcategory

Film Specifications

MainDoc

Doctype

Customer

Facilities

Products

The category, subcategory, customer, facilities and product tables will be filled with other information not found in the documents.

The only information that would be inputted would be the categorization of requirements, the requirements itself, the responses, the document information(ID, etc.)

How long would this take for someone unfamiliar with Access or Python? I'm merely trying to get an estimate, as I'm planning to create this database soon.

This may be the wrong board to ask this on, so if it is, please let me know.

Running the DBT2 test results in 0.00 NOTPM

Posted: 26 Jul 2013 03:15 PM PDT

I'm trying to run the MySQL DBT2 Benchmark Tool. I have unzipped the archive and run the following commands:

./configure --with-mysql
make
make install
mkdir /tmp/dbt
datagen -w 3 -d /tmp/dbt --mysql
scripts/mysql/mysql_load_db.sh --path /tmp/dbt --mysql-path /usr/bin/mysql
scripts/run_mysql.sh --connections 20 --time 300 --warehouses 3

But got the following output:

...

Stage 3. Starting of the test. Duration of the test 300 sec
STARTING DRIVER COMMAND:
/root/dbt2-0.37.50.3/src/driver -d 127.0.0.1 -l 300 -wmin 1 -wmax 3 -spread 1 -w 3 -sleep 300 -tpw 10 -outdir /root/output/14/driver -p 30000 -warmup 0 -cooldown 20 -intermediate_timer_resolution 0
Shutdown clients. Send TERM signal to 5154.
Shutdown driver. Send TERM signal to 5190.

Stage 4. Processing of results...

                        Response Time (s)
 Transaction      %    Average :    90th %        Total        Rollbacks      %
------------  -----  ---------------------  -----------  ---------------  -----
    Delivery   0.00          N/A                      0                0  100.00
   New Order   0.00          N/A                      0                0  100.00
Order Status   0.00          N/A                      0                0  100.00
     Payment   0.00          N/A                      0                0  100.00
 Stock Level   0.00          N/A                      0                0  100.00

0.00 new-order transactions per minute (NOTPM)
0.0 minute duration
0 total unknown errors
0.00 rollback transactions
0 second(s) ramping up

scripts/run_mysql.sh: line 632:  5154 Killed    nohup $CLIENT_COMMAND > $OUTPUT_DIR/client.out 2>&1
scripts/run_mysql.sh: line 632:  5190 Killed    nohup $DRIVER_COMMAND > $OUTPUT_DIR/driver.out 2>&1
Test completed.

During the execution, SHOW PROCESSLIST did show that 20 clients were connected, all sleeping.

Any idea why no transactions are executed?

MySQL row does not persist, but primary key with auto-increment is incremented

Posted: 26 Jul 2013 03:35 PM PDT

I have a MySQL database that is acting oddly. I insert a new row and observe that for a brief period of time (less than 30 seconds), the row persists as expected in the table. After this time, though, the row vanishes (despite no other queries being executed).

This table has an integer ID as a primary key, with auto-increment set, and the primary key is auto-incremented as expected for new rows. This leads me to believe there is not some kind of transactional rollback (and my table is MyISAM, so that shouldn't be possible anyways) or anything else that is somehow reverting the database to a previous state.

What logs should I be checking to see what is going on here? The contents of my '/var/log/mysql/error.log' are below, but I don't see anything unusual.

120815 21:01:01 [Note] Plugin 'FEDERATED' is disabled.
120815 21:01:02  InnoDB: Initializing buffer pool, size = 8.0M
120815 21:01:02  InnoDB: Completed initialization of buffer pool
120815 21:01:03  InnoDB: Started; log sequence number 0 44233
120815 21:01:03 [Note] Event Scheduler: Loaded 0 events
120815 21:01:03 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.1.63-0ubuntu0.11.10.1'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  (Ubuntu)
120815 21:01:04 [ERROR] /usr/sbin/mysqld: Table './appname/users' is marked as crashed and should be repaired
120815 21:01:04 [Warning] Checking table:   './appname/users'
120815 21:10:34 [Note] /usr/sbin/mysqld: Normal shutdown

120815 21:10:34 [Note] Event Scheduler: Purging the queue. 0 events
120815 21:10:34  InnoDB: Starting shutdown...
120815 21:10:39  InnoDB: Shutdown completed; log sequence number 0 44233
120815 21:10:39 [Note] /usr/sbin/mysqld: Shutdown complete

I noted the 'crashed' mark on the appname/users table, but mysqlcheck suggests the table is OK.

Any thoughts?

AlwaysOn AG, DTC with failover

Posted: 26 Jul 2013 04:13 PM PDT

Problem: How can I run Distributed Transaction Coordinator (DTC) on all servers in an AlwaysOn Availability Group (AG)? I do NOT need to maintain transactions over failover/switchover events.

Setup: I have a Windows Failover Cluster (WSFC) with three Windows 2008 R2 servers where they are all running SQL 2012. Two servers are in one data center and are part of an AlwaysOn Failover Cluster (FCI), while the third server is in a second data center. The WSFC is a multi-subnet cluster. Here's a sketch of the setup: [diagram omitted]

I've been able to install and configure DTC to work between the two FCI nodes because they are on the same subnet and share storage. I have configured a couple of AGs and they have been working fine. This screenshot shows DTC installed on the FCI:

[screenshot omitted]

This screenshot shows that I can configure DTC on one of the FCI nodes (whichever is active): [screenshot omitted]

I'd like to migrate an application that uses DTC onto this cluster and use an AG. I've read that DTC is not supported with AGs (Reference). I have not been able to find a way to configure DTC on the third node in the second data center. When I try to configure DTC on the third node, it seems to be unavailable, as shown in this screenshot:

[screenshot omitted]

In Brent Ozar's Free Setup Checklist PDF for Availability Groups he lists:

Cluster Installation...

29. If an FCI is involved, configure DTC per your Planning section decisions.

In the comments on SQL Server 2012 AlwaysOn Availability Groups Rock Brent says that "... nothing changes when AGs are in play. Just keep in mind that databases in an Availability Group don't support transactional consistency when failed over together to another replica..."

This makes it seem that DTC can be used in Availability Groups as long as you understand that transactions will not be maintained in a AG switchover. I would not need it to maintain transactions from the FCI nodes. I would just need DTC available for the application to use in the case of a catastrophic disaster (where I lost my primary data center).

How do I configure DTC on my third node? Or, is it the case that I'm just out of luck when it comes to using AGs and an application that needs DTC?

Grant Execution Permissions on SQL Server 2005 Failing

Posted: 26 Jul 2013 03:36 PM PDT

This is an interesting question, similar to some asked before but with a notable difference. I am attempting to grant execute permission on a stored procedure to a user, using T-SQL on SQL Server 2005. This should be straightforward; I've done it dozens of times before:

GRANT EXECUTE ON [sto].[StoredProc] TO [user];  GO  

The user is a local SQL Server user; it is the owner of the sto schema and not dbo, sa, etc.

Immediately I receive an error:

Cannot grant, deny, or revoke permissions to sa, dbo, entity owner, information_schema, sys, or yourself.

I've checked the permissions, and there doesn't appear to be anything unusual about this user account, with the exception of the ownership of the schema. However, when I apply the permissions manually through SSMS and the securables tab of the user account, they are applied without a problem. I am executing this with a separate sysadmin-elevated account.

My question is why is this happening in TSQL and not in SSMS ?

Any ideas would be appreciated. Thanks !

How do I determine if a column is defined as a serial data type instead of an integer based off the catalog?

Posted: 26 Jul 2013 02:52 PM PDT

So I'm currently creating some SQL to read through the postgres (9.1) catalogs to build table definitions. However, I am encountering a problem with SERIAL/BIGSERIAL data types.

Example:

CREATE TABLE cruft.temp ( id BIGSERIAL PRIMARY KEY );
SELECT * FROM information_schema.columns WHERE table_schema='cruft' AND table_name='temp';
"db","cruft","temp","id",1,"nextval('cruft.temp_id_seq'::regclass)","NO","bigint",,,64,2,0,,,,,,,,,,,,,"db","pg_catalog","int8",,,,,"1","NO","NO",,,,,,,"NEVER",,"YES"

It gives me database name (db), schema name (cruft), table name (temp), column name (id), default value (nextval( ... )), and data type (bigint and int8 .. NOT bigserial) ... I realize that I could just check to see if the default value was a sequence - but I don't believe that would be 100% accurate since I could manually create a sequence and create a non serial column where the default value was that sequence.

Does anyone have a suggestion for how I might accomplish this? Anything other than checking the default value for a nextval(*_seq)?
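A hedged alternative to inspecting the default: a serial column's sequence is attached through an OWNED BY dependency, which pg_get_serial_sequence exposes; a hand-made DEFAULT nextval(...) without OWNED BY returns NULL here:

SELECT pg_get_serial_sequence('cruft.temp', 'id');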

Setting up replication alerts

Posted: 26 Jul 2013 03:27 PM PDT

I'm trying to configure alerts for replication. Currently, on my test setup, I run everything under the user 'sa'. When I try to set up an alert for agent failure I get the following error:

Login failed for user 'sa'.

This error is shown when I try to add a new operator.

Any suggestions on how to fix this problem? Thanks.

Getting "ORA-00942: table or view does not exist" while table does exist

Posted: 26 Jul 2013 02:50 PM PDT

I'm fairly new to Oracle database. I have installed Oracle Database 11g R2 on Oracle Linux 6. I have successfully created a new database with dbca and connected to the database using:

$ sqlplus "/ as sysdba"  

I successfully created a table and inserted some data and performed some selects:

SQL> CREATE TABLE Instructors (
         tid    NUMBER(7) PRIMARY KEY,
         fname  VARCHAR2(32),
         lname  VARCHAR2(32),
         tel    NUMBER(16),
         adrs   VARCHAR2(128) );

Table created.

SQL> INSERT INTO Instructors (tid, fname, lname, tel, adrs)
     VALUES (8431001, 'John', 'Smith', 654321, 'London');

1 row created.

SQL> SELECT count(*) FROM Instructors;

  COUNT(*)
----------
         1

Then I created a new user with the CONNECT privilege:

SQL> CREATE USER teacher1 IDENTIFIED BY pass1;

User created.

SQL> GRANT CONNECT TO teacher1;

Grant succeeded.

Then I created a new role with appropriate object privileges:

SQL> CREATE ROLE instructor;

Role created.

SQL> GRANT SELECT, UPDATE ON Instructors TO instructor;

Grant succeeded.

And granted the role to the user:

SQL> GRANT instructor TO teacher1;

Grant succeeded.

Next I exited sqlplus with exit; and connected as the new user to test it. I logged in successfully to the database with:

$ sqlplus teacher1

SQL*Plus: Release 11.2.0.1.0 Production on Thu Jul 25 03:20:50 2013
Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Enter password: *****

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL>

But when I try to select from the table it says:

SQL> SELECT * FROM Instructors;
SELECT * FROM Instructors
              *
ERROR at line 1:
ORA-00942: table or view does not exist

What am I missing here?!
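A hedged observation: the table was created while connected "/ as sysdba", so it lives in the SYS schema, and teacher1 must schema-qualify it (creating application tables in a dedicated schema rather than SYS would be the cleaner fix):

SQL> SELECT COUNT(*) FROM SYS.Instructors;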

Datatype range Clarification [on hold]

Posted: 26 Jul 2013 04:00 PM PDT

What would happen if we give a field a much larger data type range than it needs (e.g. assigning varchar(max) or varchar(500) to a field which only ever contains the values "Yes" or "No")?

How to handle "or" possibilities in MongoDB

Posted: 26 Jul 2013 07:01 PM PDT

I'm new to MongoDB and by no means an expert in DB design in general. This feels like a really basic question that's probably been answered a million times, but I'm having a surprisingly hard time finding an answer for it: is there a good way to easily handle either/or choices in the DB, or structure it in a way that makes it easy to deal with in code?

Let's say I'm building a cocktail database in MongoDB. I'm adding the entry for a Manhattan.

  • 2 oz. Bourbon or Rye Whiskey (this is the issue)
  • 1 oz. Sweet Vermouth
  • Dash Aromatic Bitters
  • Garnish Maraschino Cherry

So I might do a cocktails collection with an entry like:

{
    "_id" : "1234",
    "name" : "Manhattan",
    "ingredients" : [
        {
            "measure" : "2 oz.",
            "ingredient" : "Bourbon Whiskey"
        },
        {
            "measure" : "1 oz.",
            "ingredient" : "Sweet Vermouth"
        },
        {
            "measure" : "Dash",
            "ingredient" : "Aromatic Bitters"
        },
        {
            "measure" : "Garnish",
            "ingredient" : "Maraschino Cherry"
        }
    ]
}

Which is fine, but how do I get the rye in there? I don't think I'd want to do "ingredient" : "Bourbon or Rye Whiskey", would I? Is it better for the purpose of searching, later, to have them separated out? What's the best practice, here?

Also, an even more tricky drink would be something like a Martini, where I would actually use a different garnish for vodka than I would for gin, so I'd want to give the user the option to choose the base liquor they're working with and then give the proper garnish accordingly.

Any suggestions? Thanks!

Mongo repair database "ERROR: error: exception cloning object in local.system.replset "

Posted: 26 Jul 2013 07:04 PM PDT

I'm trying to repair a mongo database and am running into the following error early on:

Wed Jul 24 16:55:21 [initandlisten] ERROR: error: exception cloning object in local.system.replset
E11000 duplicate key error index: local.system.replset.$_id_  dup key: { : "replicaset-name" }
obj: { _id: "replicaset-name", version: 92, members: [
    { _id: 16, host: "host1:27032" },
    { _id: 17, host: "host2:27032" },
    { _id: 18, host: "host3:27032", priority: 0.0, hidden: true },
    { _id: 19, host: "host4:27032", priority: 6.0 },
    { _id: 20, host: "host5:27032", priority: 7.0 },
    { _id: 22, host: "host6:27032", priority: 4.0 },
    { _id: 23, host: "host7:27032", priority: 3.0 },
    { _id: 24, host: "host8:27032", votes: 0, priority: 0.0 } ] }

(replica set and host names have been sanitized for public posting)

To attempt the repair I shut down the mongod and started it up with

/usr/bin/mongod-2.2 -f /var/local/mongo/mongod.conf --nojournal --repair --repairpath /mnt/mongo_repair/  

My mongod.conf consists of just two lines:

dbpath = /var/local/mongo/data/replicaset-name  port = 17000  

The .conf normally did have a replSet declaration, but this was removed for the repair as I'm trying to run the repair on a secondary. (Per the docs repairs can only be done on primaries or standalones; so need to make it look like a standalone temporarily).

Any idea what the problem could be?

Check and remove iteratively without an index

Posted: 26 Jul 2013 07:08 PM PDT

Without sync priority or performance requirements on the task, I need to check all documents once per day and remove some of them.

The find filter is not static but can be defined within a db engine script function like function check_doc(){ }.

With .find() - no filter/condition in brackets - the function will check some non-indexed fields and decide whether the document must be removed or not.

Is .find().check_doc()/forEach the most efficient way to check each document iteratively, one by one, and remove justOne from there?

Furthermore, is there a variable available for checking the MongoDB status, whether it is locked or has a sync delay (because of other queries and tasks on the DB)? If so, I'd prefer to delay the check_doc run, which has the lowest priority. However, so far I could not find a way to decrease priority or a flag to check the status/delay.

Security for Oracle linked server from SQL Server

Posted: 26 Jul 2013 03:14 PM PDT

I'm trying to make the linked server from SQL Server 2005 to Oracle more secure by not having a user/pass in the "Be made using this security context:" fields, so that only a specific user can access the data from the linked server.

So I tried mapping SQL Security User to Oracle user/pass by inserting it in "Local server login to remote server login mappings:" area then "For a login not defined in the list above, connection will:" set to "Not be made".

When I click OK, I get:

Access to the remote server is denied because no login-mapping exists  

Is it not possible to map a SQL login to an Oracle login? Is there any way to get this to work?

App for rapid prototyping of relational data structures

Posted: 26 Jul 2013 05:43 PM PDT

What are some apps (especially web apps) that provide an Extremely Lightweight user interface for building, inserting test data into, and querying a relational data structure? The app should have some kind of "visual" interface (even if only a dropdown) for defining relationships between properties (columns in the RDBMS world).

The schema (if there is one), data, and any relationships should be exportable in a common format and convention (something based on JSON maybe). An API for interacting with the base programmatically would be nice (REST and JSON for example), but since I can't find anything that fits the above criteria, I'll settle for prototype-and-then-export functionality.

Statistical Analysis of Data that has to be done in an order?

Posted: 26 Jul 2013 08:43 PM PDT

Bear with me - this is the first time I've tried that in SQL Server; normally I have been doing it on the front end ;)

I am implementing some analysis on time-coded data series. This is not super complicated stuff, but some of it requires numbers we do not store in the database, which have to be calculated by aggregating the stored numbers in a specific algorithm IN ORDER.

To give an example:

  • I have a list of trades and I need to know the maximum loss we had in the account, so I need to aggregate the plus/minus and then take the most extreme negative and positive.

This can not be pre-calculated due to dynamic filtering - there are a number of filters that can be applied to the data.

So far - in the past - I pulled the data into the application; now, for the standard stuff, I plan to try to keep it in SQL Server.

My problem now is - I can see how that works (acceptable) in SQL Server:

[TradeBegin],
Pnl,
SUM(Pnl) OVER (ORDER BY [TradeBegin] ROWS UNBOUNDED PRECEDING) [Total]

But if I put that into a view... and then filter out rows, the sum is still calculated from the beginning. And I need a view because I want (need) to map that standard analysis data into an ORM (so dynamic SQL is out). Does anyone have an idea how to do that?

Clear schema from database without dropping it

Posted: 26 Jul 2013 07:43 PM PDT

I'm working on a school project where I have a SQL Server with a database for my team.

I already imported a local database created with Entity Framework.

Now the model has changed: table properties were added/deleted, and I want to update my full database.

However, the teachers didn't give us create rights, so dropping the whole database isn't really an option.

Now my question: is it possible to drop all the tables currently in the database and just import the newly created ones without problems? Or do I really need to drop the whole database?

SUPER privilege not defined for master user in Amazon MySQL RDS

Posted: 26 Jul 2013 03:43 PM PDT

I have created a medium instance on Amazon RDS in the Asia Pacific (Singapore) region. I have created my master user with a master password, and it is working/connecting fine with Workbench installed on my local PC. When I go to create a function on that instance, it shows me the following error:

ERROR 1418: This function has none of DETERMINISTIC, NO SQL, or READS SQL DATA in its declaration and binary logging is enabled (you might want to use the less safe log_bin_trust_function_creators variable)

On my instance, the variable (log_bin_trust_function_creators) shows OFF. Now when I go to change this variable using

SET GLOBAL log_bin_trust_function_creators = 1;  

it gives me another error

Error Code: 1227. Access denied; you need (at least one of) the SUPER privilege(s) for this operation

I don't know how to solve this error.

Can anybody help???

How to import table's data in MySql from Sql Server?

Posted: 26 Jul 2013 06:43 PM PDT

I am trying to export a table from SQL Server 2008 R2 to MySQL 5.5. For this I am using the SQL Server Import and Export Wizard of SQL Server 2008, but it is giving an error. [error screenshot omitted]

This error may be occurring because the table in SQL Server has a column with data type image, while in MySQL this column's data type is LONGBLOB.

Please provide your expert answers. If this is not possible through the SQL Server Import and Export Wizard, then please suggest any other method for the data transfer.

Is it possible to pipe the result of a mysqldump straight to rsync as the source argument?

Posted: 26 Jul 2013 04:43 PM PDT

Is it possible to pipe the result of a mysqldump straight to rsync as the source argument?

Conceptually, I was thinking something like:

mysqldump -u root -p database_to_backup > db_backup_file.sql | sudo rsync -avz db_backup_file.sql myuser@mysite.com:/var/www/db_backup_file.sql  

I've seen people pipe the result to mysql for their one liner backup solution, but I was curious if it was possible with rsync. You know--- cause rsync is magic :)

Thanks for your time!

Conditional compilation of SQL Server stored procedure

Posted: 26 Jul 2013 07:34 PM PDT

Short version: Is there some way to conditionally compile chunks of TSQL code in a SQL Server Data Project using SQL Server Data Tools for Visual Studio 2010?

I'm using SQL Server Data Tools in Visual Studio 2010 to work on an experimental SQL Server Express database. The eventual destination if things work well would be an enterprise SQL Server platform. I have both a 2008 instance on one box and a 2012 instance on another, because my company is in the process of migrating from 2008 to 2012 for the many enterprise databases as well.

In other programming languages I have used, preprocessor directives make it easy to conditionally compile parts of a code base. The most common uses for this are to have different code for different platforms in restricted sections or to excluded debugging output code from release builds.

Both of these could be very helpful in some stored procedures I'm working on. Is there anything like this available? I know I can use sqlcmd variables to swap out specific values during deployment, but I can't figure out how to use that to include or exclude subsequent chunks of code.

Example:

#IF $(DebugVersion) = 'True'      -- A bunch of useful PRINTs and what not  #ELSE      SET NOCOUNT ON  #ENDIF    #IF $(SSVersion) = '2012'      SET @pretty_date = FORMAT(@some_date, 'dddd, MMM dd, yyyy')  #ELSE      SET @pretty_date = CAST(@some_date AS nvarchar(12))  #ENDIF  
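A hedged sketch of the closest built-in analogue: SQLCMD variables substituted at deployment time combined with runtime IF blocks. This is not true conditional compilation (both branches must still parse on the target version), but it covers switches like the ones above; the variable names are the poster's own:

:setvar DebugVersion "True"

IF '$(DebugVersion)' = 'True'
BEGIN
    PRINT 'debug output enabled';
END
ELSE
BEGIN
    SET NOCOUNT ON;
END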

Minimizing Indexed Reads with Complex Criteria

Posted: 26 Jul 2013 02:43 PM PDT

I'm optimizing a Firebird 2.5 database of work tickets. They're stored in a table declared as such:

CREATE TABLE TICKETS (
  TICKET_ID id PRIMARY KEY,
  JOB_ID id,
  ACTION_ID id,
  STATUS str256 DEFAULT 'Pending'
);

I generally want to find the first ticket that hasn't been processed and is in Pending status.

My processing loop would be:

  1. Retrieve 1st Ticket where Pending
  2. Do work with Ticket.
  3. Update Ticket Status => Complete
  4. Repeat.

Nothing too fancy. If I watch the database while this loop runs, I see the number of indexed reads climb with each iteration. The performance doesn't seem to degrade terribly that I can tell, but the machine I'm testing on is pretty quick. However, I've received reports of performance degradation over time from some of my users.

I've got an index on Status, but it still seems like it scans down the Ticket_Id column each iteration. It seems like I'm overlooking something, but I'm not sure what. Is the climbing number of indexed reads for something like this expected, or is the index misbehaving in some way?

-- Edits for comments --

In Firebird you limit row retrieval like:

Select First 1
  Job_ID, Ticket_Id
From
  Tickets
Where
  Status = 'Pending'

So when I say "first", I'm just asking it for a limited record set where Status = 'Pending'.
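One hedged thing to test: a compound index that covers both the filter and the key, so the engine does not have to probe STATUS and then walk TICKET_ID separately each iteration:

CREATE INDEX IDX_TICKETS_STATUS_ID ON TICKETS (STATUS, TICKET_ID);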

Calculating percentage of a row over total sum

Posted: 26 Jul 2013 06:42 PM PDT

Apologies for the bad title, I wasn't sure what would be a good title for this.

This is (a simplified view of) the data I'm currently working with:

Agent    | Commission
---------|------------
Smith    |    100
Neo      |    200
Morpheus |    300

I need to calculate the percentage of the total commission each agent is responsible for.

So, for Agent Smith, the percentage would be calculated as (Agent Smith's commission / Sum(commission)) * 100

So, my expected data would be

Agent    | Commission | % Commission
---------|------------|--------------
Smith    |    100     |      17
Neo      |    200     |      33
Morpheus |    300     |      50

I have a function returning the commission for each agent. I have another function returning the percentage as (Commission/Sum(Commission))*100. The problem is that Sum(commission) gets calculated for each and every row, and given that this query would be run on a Data Warehouse, the data set would be rather large ( currently, it's just under 2000 records) and quite honestly, a bad approach (IMO).

Is there a way of having the Sum(Commission) not calculate for every row being fetched ?

I was thinking something along the lines of a two-part query: the first part would fetch the sum(commission) into a package variable/type, and the second part would refer to this pre-calculated value, but I'm not sure how I can accomplish this.

I am limited to using SQL, and I'm running on Oracle 10g R2.
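A hedged sketch using the analytic form of SUM, which Oracle 10g R2 supports; the window total is shared across the result set rather than re-aggregated per row (table name hypothetical):

SELECT agent,
       commission,
       ROUND(commission * 100 / SUM(commission) OVER (), 0) AS pct_commission
FROM   agent_commissions;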

[SQL Server] What is the set up to study 70-432?


What is the set up to study 70-432?

Posted: 26 Jul 2013 12:56 AM PDT

Hi all, based on http://www.microsoft.com/learning/en-us/exam.aspx?ID=70-432 and http://technet.microsoft.com/en-us/library/ms143506(v=sql.100).aspx#SSETools_x64, since the exam involves HA topics, what is the minimum setup required to get hands-on experience with mirroring, log shipping, replication and clustering? I believe that mirroring, log shipping and replication are at the database level, so for those we can use 2 VMs; but clustering is at the server level, so we need another 2 VMs for that. How should we set up the VMs? How much disk space is required? What virtualization technology should we use? The stated hard disk space requirements (32-bit and 64-bit) are a maximum of 240 MB, but what about the overall disk requirements, and how should we partition the virtual disks? Am I right in saying I need to purchase 4 OS licenses for the VMs? Please share your thoughts on the most affordable way to study for SQL Server certification. Thanks a lot!

Login creation failed

Posted: 25 Jul 2013 08:38 PM PDT

Hello masters, my client created a local non-domain Windows account, and now they want me to add that account in SQL Server and give it sysadmin rights. My SQL Server machine is in a domain. While creating the user, I am able to search for that account (as it is a local machine account), but I am not able to create a login for it. I got the error: "Error 15401: Windows NT user or group '%s' not found". Is it necessary for the new login to be in the domain? Or how can I create a login for a non-domain (local) account?

[MS SQL Server] SELECT StatMan


SELECT StatMan

Posted: 26 Jul 2013 05:12 AM PDT

Does anyone know what the query below means? I'm running into this type of query pretty frequently and I'm not sure how to find out the problem, if there is any.

SELECT StatMan([SC0]) FROM ( SELECT TOP 100 PERCENT [AT_field_3] AS [SC0] FROM [dbo].[tablename] WITH ( READUNCOMMITTED ) ORDER BY [SC0] ) AS _MS_UPDSTATS_TBL

Thanks.

Install BIDS only

Posted: 26 Jul 2013 02:37 AM PDT

Our full MSSQL install file is 64-bit, which prevents me from installing any components onto a 32-bit desktop. How can I get an install file for only BIDS 2008 R2? What do I need to download? Thanks very much for any tips.

[Articles] Could You Live Like a DBA?


Could You Live Like a DBA?

Posted: 25 Jul 2013 11:00 PM PDT

A light-hearted take on the life of a DBA.

[SQL 2012] Failed Logins - Resolved


Failed Logins - Resolved

Posted: 26 Jul 2013 02:30 AM PDT

This is more of an informational post, stored for historical purposes, because when I searched the internet I didn't see a reference to my issue. Intermittently I would get notifications about failed logins with the following message: [code="other"]Reason: An attempt to login using SQL authentication failed. Server is configured for Windows authentication only[/code] My server IS configured for mixed mode. It turns out that if you pass a null value for the username when attempting to connect using SQL authentication, you will get this confusing error message. :hehe:

How to insert comma-separated values as input?

Posted: 25 Jul 2013 10:35 PM PDT

I have a table dbo.student_mark_details which contains 4 columns: student_id, name, age, marks. In my usp_student_details procedure I have 2 input parameters, @columns and @values. My input format is: exec student_details @columns='name,age,marks', @values='a,22,85'. How do I insert these column values as a single row in my procedure? Can anyone help me?
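A hedged sketch using dynamic SQL. Note this pattern trusts the caller completely; real use needs validation/QUOTENAME, since it is wide open to SQL injection:

[code="sql"]
CREATE PROCEDURE dbo.usp_student_details
    @columns NVARCHAR(500),
    @values  NVARCHAR(500)
AS
BEGIN
    DECLARE @sql NVARCHAR(MAX);
    -- 'a,22,85' becomes 'a','22','85'
    SET @sql = N'INSERT INTO dbo.student_mark_details (' + @columns + N') '
             + N'VALUES (''' + REPLACE(@values, ',', ''',''') + N''')';
    EXEC (@sql);
END;
[/code]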

Access issues with the 'which had been reset for connection pooling.' error.

Posted: 25 Jul 2013 05:54 PM PDT

Single server. @@VERSION reports: Microsoft SQL Server 2012 (SP1) - 11.0.3368.0 (X64) May 22 2013 17:10:44 Copyright (c) Microsoft Corporation Enterprise Edition (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1). Mem: 72 GB. CPU: 16 cores (2 physical).

The errors, in descending order:

The client was unable to reuse a session with SPID 356, which had been reset for connection pooling. The failure ID is 46. This error may have been caused by an earlier operation failing. Check the error logs for failed operations immediately before this error message.

Login failed for user '<<User Name>>'. Reason: Failed to open the database '<<Database>>' configured in the login object while revalidating the login on the connection. [CLIENT: xx.xx.xx.xx]

The issue: the username and database connect to the server 24/7, thousands of times, with no problems, and then every once in a while these errors occur. About once a day the errors occur hundreds of times and the application fails because it can't connect. It takes a restart and things go back to normal. I know there was a kind of bug like this in 2008, but this is 2012. I've also done the THREAD checks and sp_who2, and things seem normal. Any ideas?

P.S. I know what failure ID 46 is, but the database exists and is available at the time. Thanks

Replacing important missing data by row

Posted: 25 Jul 2013 04:09 AM PDT

This is my first post in many years, so please forgive me if this is the wrong forum. I was a DBA in a previous life and am now working with a client on a data feed, and this issue has raised its ugly head. The issue is that we have a table with SSNs in a data field, but not all rows have an SSN (the UI has the field as optional), so some users enter it and some don't. Painful but true. What I want to do is this: where I have multiple rows whose lastName, firstName and date of birth (DOB) match another row, and one of those rows has a valid SSN, I want to enter that valid SSN into the blank SSN fields. In essence I would be entering the optional SSN in those rows where the user chose not to, but had for some reason entered it in the past or on a later entry.

Example:

ID  lastName  firstName  DOB       SSN          other columns...
1   Doe       John       1/1/1967  111-22-3333
2   Doe       John       1/1/1967
3   Doe       John       1/1/1967  111-22-3333

I am sure there is an elegant way to do this, and while I have started poking at it, I thought I would hit some forums and look for help. Understood this might need some programming, but if I can at least select the rows to start with, the client can appreciate the counts and we can fix them. Appreciate any help. Steve Z
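A hedged sketch of the backfill, assuming (lastName, firstName, DOB) reliably identifies a person and matching rows never disagree on SSN (table name hypothetical; worth running as a SELECT first to review the counts):

[code="sql"]
UPDATE t
SET t.SSN = s.SSN
FROM dbo.People AS t
JOIN dbo.People AS s
  ON  s.lastName  = t.lastName
  AND s.firstName = t.firstName
  AND s.DOB       = t.DOB
  AND s.SSN IS NOT NULL AND s.SSN <> ''
WHERE t.SSN IS NULL OR t.SSN = '';
[/code]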
