Friday, March 1, 2013

[how to] Separated databases, separeted datas

[how to] Separated databases, separeted datas


Separated databases, separeted datas

Posted: 01 Mar 2013 08:42 PM PST

We have a global project. We will have some servers all continents. We separate the datas because it is not necessary that they be in one place together. BUT! We need unique IDs (MySQL, InnoDB). I have two ideas.

  1. We use the auto_increment_increment and auto_increment_offset each server. The problem: we don't know how many servers will be!
  2. We create a custom random ID generator with VARCHAR(16-32) primary keys. The problem: How can we guarantee the unique values (maybe unique server prefix?) and it isn't slower?

Have anybody other idea? I made some fast benchmark, and I don't find differents the integer or varchar primary key.

SQL Server media belongs to which edition

Posted: 01 Mar 2013 06:59 PM PST

Could anyone please help me on below question?

Without installing SQL Server media 2008 and 2012, how can we know that the media belongs to which edition (like developer, standard or enterprise....etc)?

Thanks!

Hard Parse on Non-Identical Statements?

Posted: 01 Mar 2013 06:10 PM PST

Let's say I have this query :

SELECT X FROM Y  

If I run it for the first time then it would do a hard parse.

After that, I run this query :

select x from y  

With just a difference in alphabetical upper and lower case, does it mean they have to do the hard parse again?

Thank You.

Which triplestores have support for full-text search

Posted: 01 Mar 2013 03:33 PM PST

Which RDF triplestores support full-text search and what performance can be expected from them.

I am assuming that they will have to extend SPARQL in some way as the only way that I can think of for performing full text search currently is to perform a regular expression filter against all literals returned by the SELECT.

Find a string in MS SQL database

Posted: 01 Mar 2013 01:51 PM PST

I have an application which has logged an error like

Object <obj id> generated an exception.  

The application store all its information in MSSQL database which contains 100+ tables. For sure somewhere there must be a table which contains all the objects of this type and I'm sure obj id is the key column there.

How can I find this object ? Do I have to create 100+ SELECT queries to query each table or maybe there is a more convenient solution?

Database Design - different objects with shared tagging

Posted: 01 Mar 2013 03:56 PM PST

My background is more in web programming rather than database administration, so please correct me if I'm using the wrong terminology here. I'm trying to figure out the best way to design the database for an application I'll be coding.

The situation: I've got Reports in one table and Recommendations in another table. Each Report can have many Recommendations. I also have a separate table for Keywords (to implement tagging). However, I want to have just one set of keywords that gets applied to both Reports and Recommendations so that searching on keywords gives you Reports and Recommendations as results.

Here's the structure I started out with:

Reports  ----------  ReportID  ReportName      Recommendations  ----------  RecommendationID  RecommendationName  ReportID (foreign key)      Keywords  ----------  KeywordID  KeywordName      ObjectKeywords  ----------  KeywordID (foreign key)  ReportID (foreign key)  RecommendationID (foreign key)  

Instinctively, I feel like this isn't optimal and that I should have my taggable objects inherit from a common parent, and have that comment parent be tagged, which would give the following structure:

BaseObjects  ----------  ObjectID (primary key)  ObjectType      Reports  ----------  ObjectID_Report (foreign key)  ReportName      Recommendations  ----------  ObjectID_Recommendation (foreign key)  RecommendationName  ObjectID_Report (foreign key)      Keywords  ----------  KeywordID (primary key)  KeywordName      ObjectKeywords  ----------  ObjectID (foreign key)  KeywordID (foreign key)  

Should I go with this second structure? Am I missing any important concerns here? Also, if I do go with the second, what should I use as a non-generic name to replace "Object"?

Replication issue - CREATE SELECT alternative?

Posted: 01 Mar 2013 09:02 AM PST

I've an MySQL 5.1 slave for our BI team.

They need to make some CREATE SELECT with big select queries (several million lines).

As CREATE SELECT is a DDL, if the replication attempts to update some rows in same tables than the SELECT statement, replication is blocked until the freeing of the CREATE SELECT.

Do you now a good non-blocking alternative to thoses CREATE SELECT statements?

I thought to an SELECT INTO OUTPUT FILE then LOAD DATA INFILE but they will fill out our disks as BI guys like to do... :)

Max.

Difference between return next and return record

Posted: 01 Mar 2013 11:00 AM PST

What is the difference between RETURN NEXT and RETURN RECORD in PL/pgSQL? I find the manual quite confusing in explaining what RETURN RECORD actually does. What is a RECORD type?

sql xml performance

Posted: 01 Mar 2013 03:17 PM PST

I have a sql table with 21 columns. The majority are numerical and the rest are nvarchar's.

I need to add 4 columns that will contain XML data for each record. The XML data for each record can be anywhere from 200 to 2,000 lines.

The question is:

  • Does adding the XML typed columns to the same table change the speed of querying that table?

  • What are the performance benefits when I add the xml typed
    columns to another table and join both tables when doing queries?

  • Is it better to encode the XML data to shrink it and decode it in the application?

How to take a database dump for selective set of values in few tables?

Posted: 01 Mar 2013 10:52 AM PST

create table foo (id, val1, user_id,...)  create table bar (id, foo_id, val2, ...)  create table baz (id, bar_id, val3, ...)    select * from foo where user_id = 1;  select * from bar where id in (select id from foo where user_id = 1)  select * from baz where id in (for all the above foos)  

How do I write a dump command which does the above?

Postgres error ( XX000: invalid memory alloc request size 4294967290 )

Posted: 01 Mar 2013 01:22 PM PST

I'm getting the following error in my postgres 8.4 database:

XX000: invalid memory alloc request size 4294967290

Operating System : Windows 7 64 bit Memory : 8GB Database Version: Postgres 8.4 server

I cannot back up my database; I can't even view the data using SELECT. The server gets automatically stopped when I try to do this.

Also posted:

i added the screen shot of the error which occurred during executing the following

select * from < tablename >

enter image description here

ERROR: invalid memory alloc request size 4294967290

          • ERROR * * * * * *

ERROR: invalid memory alloc request size 4294967290

SQL state: XX000

Is it possible to use two different drivers for sql server on the same machine. Say 2000 and 2008

Posted: 01 Mar 2013 08:02 PM PST

I am trying to use two versions of sql server on the same machine. I have sql server 2000 driver installed on the system. Can I also install sql server 2008 driver on the same machine and use both side by side?

Different results rebuilding an index online and offline

Posted: 01 Mar 2013 04:10 PM PST

I have a non-clustered, non-unique index on a foreign key column of type bigint. When I rebuild the index online, the average fragmentation drops to 3%, with 2 fragments, and 30 pages.

When I run the same rebuild index offline, the average fragmentation is 25%, with 4 fragments and 28 pages.

This is the query, with names redacted.

ALTER INDEX [IX] ON [dbo].[Table]  REBUILD WITH  (      PAD_INDEX  = OFF,       STATISTICS_NORECOMPUTE  = OFF,       ALLOW_ROW_LOCKS  = ON,       ALLOW_PAGE_LOCKS  = ON,       ONLINE = ON,       SORT_IN_TEMPDB = ON  );  

What could be causing this difference? The same situation occurs on multiple tables.

How might a corrupt partition in TempDB result in DBCC CHECKDB reporting no issue?

Posted: 01 Mar 2013 10:01 AM PST

One of our SQL Servers reported the following error recently:

DATE/TIME:  2/25/2013 9:15:14 PM    DESCRIPTION:    No catalog entry found for partition ID 9079262474267394048       in database 2. The metadata is inconsistent. Run DBCC CHECKDB to check for       a metadata corruption.  

Less than 15 minutes later I connected to the server and ran:

SELECT name  FROM sys.databases  WHERE database_id = 2;  

Which returned 'tempdb'. I then ran:

DBCC CHECKDB ('tempdb') WITH NO_INFOMSGS, TABLERESULTS;  

Which returned no results, indicating no issues with the database affected.

How could corruption in the database result in the error message above yet DBCC CHECKDB not report the problem? I presume if a page checksum calculation fails, resulting in the page being marked as suspect that any object referencing that page would not be able to be dropped, but I must be wrong.

Once a page is marked 'suspect', how can it be marked not-suspect, or fixed, or reused, or whatever such that DBCC CHECKDB does not report any problem with the page in question?


Edit: 2013-02-27 13:24

Just for fun, I tried to recreate the corruption in TempDB assuming a #temp table was the culprit.

However, since I cannot set the SINGLE_USER option in TempDB, I cannot use DBCC WRITEPAGE to corrupt a page, and therefore I cannot force corruption in TempDB.

Instead of using DBCC WRITEPAGE one could set the database offline and use a hex editor to modify random bytes in the db file. Of course, that does not work either on TempDB since the database engine cannot run with TempDB offline.

If you stop the instance, TempDB is automatically recreated at next startup; hence that won't do the trick either.

If anyone can think of a way to recreate this corruption, I'd be willing to do further research.

In order to test the hypothesis that a corrupted page cannot be fixed by DROP TABLE I created a test database and used the following script to corrupt a page, then attempt to drop the affected table. Result here was the table could not be deleted; I had to RESTORE DATABASE Testdb PAGE = ''... in order to recover the affected page. I assume if I had made a change to some other part of the page in question, perhaps the page could have been corrected with DROP TABLE or perhaps TRUNCATE table.

/* ********************************************* */  /* ********************************************* */  /* DO NOT USE THIS CODE ON A PRODUCTION SYSTEM!! */  /* ********************************************* */  /* ********************************************* */  USE Master;  GO  ALTER DATABASE test SET RECOVERY FULL;  BACKUP DATABASE Test       TO DISK = 'Test_db.bak'      WITH FORMAT          , INIT          , NAME = 'Test Database backup'          , SKIP          , NOREWIND          , NOUNLOAD          , COMPRESSION          , STATS = 1;  BACKUP LOG Test      TO DISK = 'Test_log.bak'      WITH FORMAT          , INIT          , NAME = 'Test Log backup'          , SKIP          , NOREWIND          , NOUNLOAD          , COMPRESSION          , STATS = 1;  GO  ALTER DATABASE test SET SINGLE_USER;  GO  USE Test;  GO  IF EXISTS (SELECT name FROM sys.key_constraints WHERE name = 'PK_temp')       ALTER TABLE temp DROP CONSTRAINT PK_temp;  IF EXISTS (SELECT name FROM sys.default_constraints       WHERE name = 'DF_temp_testdata')       ALTER TABLE temp DROP CONSTRAINT DF_temp_testdata;  IF EXISTS (SELECT name FROM sys.tables WHERE name = 'temp')   DROP TABLE temp;  GO  CREATE TABLE temp  (      tempID INT NOT NULL CONSTRAINT PK_temp PRIMARY KEY CLUSTERED IDENTITY(1,1)      , testdata uniqueidentifier CONSTRAINT DF_temp_testdata DEFAULT (NEWID())  );  GO    /* insert 10 rows into #temp */  INSERT INTO temp default values;  GO 10     /* get some necessary parameters */  DECLARE @partitionID bigint;  DECLARE @dbid smallint;  DECLARE @tblid int;  DECLARE @indexid int;  DECLARE @pageid bigint;  DECLARE @offset INT;  DECLARE @fileid INT;    SELECT @dbid = db_id('Test')      , @tblid = t.object_id      , @partitionID = p.partition_id      , @indexid = i.index_id  FROM sys.tables t      INNER JOIN sys.partitions p ON t.object_id = p.object_id      INNER JOIN sys.indexes i on t.object_id = i.object_id  WHERE t.name = 'temp';    SELECT TOP(1) @fileid = file_id   FROM sys.database_files;    SELECT TOP(1) @pageid = allocated_page_page_id   FROM sys.dm_db_database_page_allocations(@dbid, @tblid, null, @partitionID, 'LIMITED')  WHERE allocation_unit_type = 1;    /* get a random offset into the 8KB page */  SET @offset = FLOOR(rand() * 8192);  SELECT @offset;    /* 0x75 below is the letter 't' */  DBCC WRITEPAGE (@dbid, @fileid, @pageid, @offset, 1, 0x74, 1);      SELECT * FROM temp;    Msg 824, Level 24, State 2, Line 36  SQL Server detected a logical consistency-based I/O error: incorrect checksum   (expected: 0x298b2ce9; actual: 0x2ecb2ce9). It occurred during a read of page    (1:1054) in database ID 7 at offset 0x0000000083c000 in file 'C:\SQLServer   \MSSQL11.MSSQLSERVER\MSSQL\DATA\Test.mdf'.  Additional messages in the SQL    Server error log or system event log may provide more detail. This is a   severe error condition that threatens database integrity and must be   corrected immediately. Complete a full database consistency check   (DBCC CHECKDB). This error can be caused by many factors; for more   information, see SQL Server Books Online.  

At this point you get disconnected from the database engine, so reconnect to continue.

USE Test;  DBCC CHECKDB WITH NO_INFOMSGS, TABLERESULTS;  

Corruption is reported here.

DROP TABLE temp;    Msg 824, Level 24, State 2, Line 36  SQL Server detected a logical consistency-based I/O error: incorrect checksum   (expected: 0x298b2ce9; actual: 0x2ecb2ce9). It occurred during a read of page    (1:1054) in database ID 7 at offset 0x0000000083c000 in file 'C:\SQLServer   \MSSQL11.MSSQLSERVER\MSSQL\DATA\Test.mdf'.  Additional messages in the SQL    Server error log or system event log may provide more detail. This is a   severe error condition that threatens database integrity and must be   corrected immediately. Complete a full database consistency check   (DBCC CHECKDB). This error can be caused by many factors; for more   information, see SQL Server Books Online.  

Corruption is reported here, DROP TABLE fails.

/* assuming ENTERPRISE or DEVELOPER edition of SQL Server,      I can use PAGE='' to restore a single page from backup */  USE Master;  RESTORE DATABASE Test PAGE = '1:1054' FROM DISK = 'Test_db.bak';   BACKUP LOG Test TO DISK = 'Test_log_1.bak';    RESTORE LOG Test FROM DISK = 'Test_log.bak';  RESTORE LOG Test FROM DISK = 'Test_log_1.bak';  

Edit #2, to add the @@VERSION info requested.

SELECT @@VERSION;  

Returns:

Microsoft SQL Server 2012 (SP1) - 11.0.3000.0 (X64)       Oct 19 2012 13:38:57       Copyright (c) Microsoft Corporation      Enterprise Evaluation Edition (64-bit) on Windows NT 6.2 <X64>           (Build 9200: )  

I know this is the Evaluation Edition, we have keys for the Enterprise Edition, and will be doing an Edition Upgrade soon.

Better query to select multiple columns into a single output column?

Posted: 01 Mar 2013 09:59 AM PST

I have the following query that sort of works, but I find it to be kludgy and inefficient. Is there a better way to write this query?

I know this is a bit of an odd query, else I would probably have found some advice when searching, but it is necessary for our business logic. We've been doing this in Hibernate queries and using software to output the file but we're needing to improve performance drastically and that means using raw sql. The output of this query is to be used to partially populate a helper table that we query to figure out which of our 40-odd product tables a particular item is contained in.

Could the following query be replicated using something like GROUP_CONCAT in reverse? Note that the second and third column must be repeated on each line. The fields all contain the same type of data.

SELECT 'Pn', 'Mfg', 'TableId', 'SourceColumn', 'OtherData'  SELECT PN1, Mfg, '26', 'PN1', OtherData FROM Table   UNION ALL  SELECT PN2, Mfg, '26', 'PN2', OtherData FROM Table  UNION ALL  SELECT PN3, Mfg, '26', 'PN3', OtherData FROM Table  UNION ALL  SELECT PN4, Mfg, '26', 'PN4', OtherData   INTO OUTFILE 'XXX.dat'  FIELDS TERMINATED BY ','  OPTIONALLY ENCLOSED BY '"'  ESCAPED BY '\\'  LINES TERMINATED BY '\n'  FROM Table d;  

Input looks like the following table:

      |  PN1  |  PN2  |  PN3    |  PN4    |  MFG  |  TableId  |  OtherData  |      -----------------------------------------------------------------------      |  asdf |  abde |  12354  |  asdfc  |  2    |  26       |  0.2456     |      |  nupo |  1354 |  null   |  null   |  2    |  26       |  1.53       |      |  ...  |  ...  |  ...    |  ...    |  ...  |  ...      |  ...        |      |  ...  |  ...  |  ...    |  ...    |  ...  |  ...      |  ...        |      |  ...  |  ...  |  ...    |  ...    |  ...  |  ...      |  ...        |  

I want the output .dat file to look like this:

      "Pn","Mfg","TableId","SourceColumn","OtherData"      "asdf",2,26,"PN1",0.2456      "abde",2,26,"PN2",0.2456      "12354",2,26,"PN3",0.2456      "asdfc",2,26,"PN4",0.2456      "nupo",2,26,"PN1",1.53      "1354",2,26,"PN2",1.53      ...      ...      ...  

If I take the INTO OUTFILE clause out, I get an output that looks like this:

      |  Pn      |  Mfg     |  TableId    |  SourceColumn  | OtherData  |      -----------------------------------------------------------------      |  "Pn"    |  {blob}  |  "TableId"  |  "PN"          |  {blob}    |      |  "asdf"  |  {blob}  |  "PN1"      |  "PN1"         |  {blob}    |      |  "abde"  |  {blob}  |  "PN2"      |  "PN2"         |  {blob}    |      |  "12354" |  {blob}  |  "PN3"      |  "PN3"         |  {blob}    |      |  "asdfc" |  {blob}  |  "PN4"      |  "PN4"         |  {blob}    |      |  "nupo"  |  {blob}  |  "PN1"      |  "PN1"         |  {blob}    |      |  "1354"  |  {blob}  |  "PN2"      |  "PN2"         |  {blob}    |      |  ...     |  ...     |  ...        |   ...          |   ...      |      |  ...     |  ...     |  ...        |   ...          |   ...      |      |  ...     |  ...     |  ...        |   ...          |   ...      |  

Unrecognised system process saturating CPU

Posted: 01 Mar 2013 10:14 AM PST

We have switched to our backup DC today as primary site Bandwidth has gone.

We are getting issues with the most powerful DB server; it is spawning the following process for several db's on the server. One database is simply a small scratch db that is used for maintenance tools, so not replicated or mirrored.

Does anyone know or recognize the following code?

() select table_id, item_guid, oplsn_fseqno, oplsn_bOffset, oplsn_slotid   from [<dbname>].[sys].[filetable_updates_2105058535] with (readpast)   order by table_id  

Why can't RDBM's cluster the way NoSQL does?

Posted: 01 Mar 2013 05:39 PM PST

One of the big plusses for nosql DBMS is that they can cluster more easily. Supposedly with NoSQL you can create hundreds of cheap machines that store different pieces of data and query it all at once.

My question is this, why can't relational DBMS do this like mysql or sql server? Is it that the vendors just haven't figured out a technical way to do this with their existing product, or is there some issue with the relational model that prevents this from being feasible? What is so great about the NoSQL way of storing and accessing data (key/value, documents, etc) that makes clustering easier, if this is true at all?

How to set Password for sqlplus /as sysdba

Posted: 01 Mar 2013 03:07 PM PST

I am new to Oracle. I would like to know how to set login password for sqlplus / as sysdba in oracle 11g.

I want to set the password for my next login, i dont want to login as sqlplus / as sysdba since anyone can log in using sqlplus / as sysdba and can easily access the data or change password for the existing user.I need to restrict the login.

Say in a company 5 peoples are working and they are accessing the server with Admin userid and password to login the system and if some-one changes some modifications in the server using sqlplus / as sysdba it will affect the rest of the peoples right so in-order to avoid we need to restrict the login by setting password.

Thanks

Downloading image data type contain to desktop SQL Server 2008

Posted: 01 Mar 2013 01:30 PM PST

I am very new to SQL Server 2008 and question may look very basic to gurus.

Following in the problem universe.

  1. I have a huge (600 GB size) SQL Server 2008 table that stores pdf files in a column of type Image
  2. Need is to download all the images from database table to my desktop (not on to the server).
  3. I can't run C#, ASP.NET etc in the environment and has to rely on SQL Server Management Studio for the job (I see a option called power shell but not sure how to use it)

Table looks like following

Table name : Attachments

Columns :

FileID (PK, int, not null)    Owner (int, null)     file_name (varchar(50), null)     file_type (varchar(50), null)     file_size (int, null)     file_date (datetime, null)    file_BIN (image, null)    

SQL Server replication subscriptions marked as inactive

Posted: 01 Mar 2013 11:30 AM PST

Is there any way to force SQL Server NOT to mark subscriptions as inactive, ever?

It happens sporadically when there are connection issues and I don't want to have to reinitialize the subscription every time.

Note, I'm not talking about the subscriptions being marked as expired...just as inactive.

Thank you.

SQL Server script to delete accounts no longer in Active Directory

Posted: 01 Mar 2013 04:07 PM PST

We have a SQL Server 2000 that will shortly be migrated to SQL Server 2005. It has years of Windows Authentication accounts created that no longer exist in Active Directory, which prevent the Copy Database Wizard from creating these accounts on the new server.

Is there a script or some automated way of deleting the accounts that no longer exist in our Active Directory?


EDIT: Just to be clear, the logins needing to be deleted are on SQL Server 2000, which does not support the DROP LOGIN command.

Separately, manually deleting the logins in SQL Server 2000 would (I think) be done with exec sp_droplogin 'loginname' but on mine, the login name cannot be found, whether I use 'domain\loginname' or 'loginname'

Just to add to the confusion, exec sp_revokelogin 'domain\loginname' does appear to work.

EDIT 2: Finally resolved the issue. Many of the logons that were problematic were programmatically added to the database, and while they worked in the sense that a user could connect, the username vs NT login name had a mismatch of domain-prefixed logons when SQL Server expected no domain, and vice versa.

To resolve this, I modified the sp_droplogin procedure to take out one of the checks that was erroring.

I'm accepting my own answer as it works in SQL Server 2000.

How to Convert Horizontal to Vertical Array?

Posted: 01 Mar 2013 09:30 AM PST

I need to create a query (suitable for Standard Edition) that has data from multiple columns (Columns 1-6 with corresponding Date Started and Date Completed data) displayed vertically, but also has the column name in the preceding column to identify it, along with other data (Record Number, Status).

Sample data:

+--------------+--------+------------+-------------+---------------+  | RecordNumber | Status | ColumnName | DateStarted | DateCompleted |  +--------------+--------+------------+-------------+---------------+  |            1 | Open   | Column 1   | 2012-01-01  | 2012-02-01    |  |            2 | Hold   | Column 2   | 2012-01-03  | 2012-03-01    |  |            1 | Open   | Column 3   | 2012-02-05  | 2012-04-06    |  |            3 | Closed | Column 4   | 2012-05-10  | 2012-07-25    |  |            2 | Hold   | Column 5   | 2012-03-09  | 2012-04-01    |  |            1 | Open   | Column 6   | 2012-10-10  | 2012-12-12    |  +--------------+--------+------------+-------------+---------------+  
DECLARE @Data AS TABLE  (      RecordNumber    integer NOT NULL,      [Status]        varchar(10) NOT NULL,      ColumnName      varchar(10) NOT NULL,      DateStarted     date NOT NULL,      DateCompleted   date NOT NULL  );    INSERT @Data  (      RecordNumber,       [Status],      ColumnName,      DateStarted,      DateCompleted  )  VALUES      (1, 'Open', 'Column 1', '20120101', '20120201'),      (2, 'Hold', 'Column 2', '20120103', '20120301'),      (1, 'Open', 'Column 3', '20120205', '20120406'),      (3, 'Closed', 'Column 4', '20120510', '20120725'),      (2, 'Hold', 'Column 5', '20120309', '20120401'),      (1, 'Open', 'Column 6', '20121010', '20121212');  

How to script out push subscription creation at the subscriber?

Posted: 01 Mar 2013 10:30 AM PST

I'm trying to set up a push subscription to a SQL Server publication from the subscriber.

I could set up the subscription at the publisher using the Replication Wizard in Management Studio. However, I would prefer to script the process relative to the subscriber so I can automate the deployment of a new SQL Server subscriber instance.

Initially, I'm happy to prompt for the name of the publisher before deployment. If I can get this working, I will look for a way to inject the correct value for my environment automatically.

What is a simple way to do this for a SQL Server instance that has to create multiple subscriptions at different publishers?

I'm open to using any supported SQL Server scripting solution: SMO, RMO, Sqlcmd, WMI, PSDrive, even pure T-SQL.

I've attempted to solve this problem in two ways. The first is a complete solution using T-SQL, but it involves some manual steps.

Using T-SQL

I have a manual solution in T-SQL. The solution is based on the output of the Management Studio Replication Script Generator output.

Using Management Studio, I run the following script to generate a T-SQL script that I can run at the publisher:

PRINT N'  EXECUTE MyDatabase.dbo.sp_addsubscription    @publication = N''MyPublication'',    @subscriber = ''' + CAST(SERVERPROPERTY('ServerName') AS SYSNAME) + ''',    @destination_db = ''SubscriberDatabase'',    @subscription_type = N''Push'',    @sync_type = N''automatic'',    @article = N''all'',    @update_mode = N''read only'',    @subscriber_type = 0;    EXECUTE MyDatabase.dbo.sp_addpushsubscription_agent    @publication = N''MyPublication'',    @subscriber = ''' + CAST(SERVERPROPERTY('ServerName') AS SYSNAME) + ''',    @subscriber_db = ''SubscriberDatabase'',    @job_login = null,    @job_password = null,    @subscriber_security_mode = 1,    @frequency_type = 64,    @frequency_interval = 1,    @frequency_relative_interval = 1,    @frequency_recurrence_factor = 0,    @frequency_subday = 4,    @frequency_subday_interval = 5,    @active_start_time_of_day = 0,    @active_end_time_of_day = 235959,    @active_start_date = 0,    @active_end_date = 0,    @dts_package_location = N''Distributor'';';  

On the MYSUBSCRIBER instance, the output would look like this:

EXECUTE MyDatabase.dbo.sp_addsubscription    @publication = N'MyPublication',    @subscriber = 'MYSUBSCRIBER',    @destination_db = 'SubscriberDatabase',    @subscription_type = N'Push',    @sync_type = N'automatic',    @article = N'all',    @update_mode = N'read only',    @subscriber_type = 0;    EXECUTE MyDatabase.dbo.sp_addpushsubscription_agent    @publication = N'MyPublication',    @subscriber = 'MYSUBSCRIBER',    @subscriber_db = 'SubscriberDatabase',    @job_login = null,    @job_password = null,    @subscriber_security_mode = 1,    @frequency_type = 64,    @frequency_interval = 1,    @frequency_relative_interval = 1,    @frequency_recurrence_factor = 0,    @frequency_subday = 4,    @frequency_subday_interval = 5,    @active_start_time_of_day = 0,    @active_end_time_of_day = 235959,    @active_start_date = 0,    @active_end_date = 0,    @dts_package_location = N'Distributor';  

I copy the output and execute the script at the publisher instance to set up the subscription.

I think I can't automate this in pure T-SQL without editing the script before running it, because T-SQL by design does not handle user input.

Using PowerShell and RMO

PowerShell has simple ways to process user input, so this seems like a good way to prototype the automation process.

MSDN has an eight-step guide to set up a push subscription using the .NET Replication Management Objects (RMO).

Here are the first two steps:

  1. Create a connection to the Publisher by using the ServerConnection class.
  2. Create an instance of the TransPublication class by using the Publisher connection from step 1. Specify Name, DatabaseName, and ConnectionContext.

I'm trying to translate these steps into a PowerShell script, but I can't get past step 2.

In the following code examples, I use fictional object names. I believe this does not affect the answerability of the question because the error message is identical when I use the real object names.

First attempt: setting the properties

My first attempt is to create the TransReplication object then set its properties. The code looks like this:

Add-Type -AssemblyName "Microsoft.SqlServer.Rmo, Version=11.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91";    $Publisher = New-Object Microsoft.SqlServer.Management.Common.ServerConnection MyServer    $Publication = New-Object Microsoft.SqlServer.Replication.TransPublication  $Publication.Name = 'MyPublication'  $Publication.DatabaseName = 'MyDatabase'  $Publication.ConnectionContext = $Publisher  

When I execute this script, I see the following error:

Exception setting "ConnectionContext": "Cannot convert the "server='(local)';Trusted_Connection=true;multipleactiveresultsets=false" value   of type "Microsoft.SqlServer.Management.Common.ServerConnection" to type "Microsoft.SqlServer.Management.Common.ServerConnection"."  At line:8 char:14  + $Publication. <<<< ConnectionContext = $Publisher      + CategoryInfo          : InvalidOperation: (:) [], RuntimeException      + FullyQualifiedErrorId : PropertyAssignmentException  

It looks like it's failing becuase it can't convert the type ServerConnection to the type ServerConnection. I don't understand how this could fail for the stated reason, because the value is already of the required type.

Second attempt: overloading the constructor

My second attempt is to specify the property values of the TransReplication object in the constructor. The code looks like this:

Add-Type -AssemblyName "Microsoft.SqlServer.Rmo, Version=11.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91";    $Publisher = New-Object Microsoft.SqlServer.Management.Common.ServerConnection MyServer    $Publication = New-Object Microsoft.SqlServer.Replication.TransPublication 'MyPublication', 'MyDatabase', $Publisher  

When I execute this script, I see the following error:

New-Object : Cannot find an overload for "TransPublication" and the argument count: "3".  At line:5 char:26  + $Publication = New-Object <<<<  -TypeName Microsoft.SqlServer.Replication.TransPublication 'MyPublication', 'MyDatabase', $Publisher      + CategoryInfo          : InvalidOperation: (:) [New-Object], MethodException      + FullyQualifiedErrorId : ConstructorInvokedThrowException,Microsoft.PowerShell.Commands.NewObjectCommand  

It looks like the New-Object cmdlet can't find the three-argument constructor documented by MSDN:

public TransPublication(    string name,    string databaseName,    ServerConnection connectionContext  )  

Parameters

As far as I can tell, I'm overloading the constructor correctly.

Am I doing something wrong? Is there something unusual about my environment? Am I better off using another solution?

How can I create a MySQL database based on a SQLite schema?

Posted: 01 Mar 2013 02:52 PM PST

I have MySQL on my server and different SQLite databases on a client. I'm trying to find a way to create the same databases on the server.

Since a SQLite schema is actually a CREATE query, one way would be to write a script that extracts every critical part from the create query and converts them into a corresponding MySQL query, finally concatenate them all.

This seems quite a complex and hard-to-cover-all-situations method. I am wondering if there is any alternatives?

Can I monitor the progress of importing a large .sql file in sqlite3 using zenity --progress?

Posted: 01 Mar 2013 12:30 PM PST

I'm trying to monitor the progress of a sqlite3 command importing a large .sql file into a database using zenity --progress.

I've tried the following which will import the file, however progress is not shown:

sqlite3 DATABASE < import_file.sql | zenity --progress --percentage=0 --auto-close  

I know I need to provide zenity a numeric source for the progress, but don't really know how to obtain the number.

Can anyone help me?

Is there any benefit to defragmenting SQL indexes in a SAN environment?

Posted: 01 Mar 2013 09:21 AM PST

Our SQL server lives on a SAN. It contains dozens of OLTP databases, some with several tables containing over 1m records.

We have been running Ola Hallengren's index maintenance scripts weekly, and it runs for several hours each time. Based on the fragmentation threshold, the script will either reorganize or reindex an index. We have observed that during reindexing, the log files get huge which leads to an excessive consumption of bandwidth during the log shipping.

Then comes an article from Brent Ozar in which he says to stop worrying about SQL indexes:

Your hard drives are shared with other servers that are also making drive requests at the same time, so the drives will always be jumping all over the place to get data. Defragging your indexes is just meaningless busy work.

Googling this question leads to varying opinions, most supported with arguments that seem too brief or weak. Our tentative plan is to adjust the fragmentation threshold in our maintenance script so that it reorganizes much more often than it reindexes.

What is the final verdict? Is it worthwhile to defrag SQL indexes on a SAN considering the burdens associated with running weekly maintenance jobs?

Transactional Replication snapshot locking Publisher db from beginning

Posted: 01 Mar 2013 10:42 AM PST

I have recently upgraded my remote distributor to 2008r2. I am getting complete system locks when attempting to generate a new snapshot in the day when we have added an article to the publication (we could do this in the past with very little issues).

I have checked sp_Helppublication and we are using synchronization method 3 (concurrent), so no locks should be held until the very end.

I have been trying to following this link as I am only adding an additional table with 26 rows but don't want to reinitialize and have the whole snapshot delivered.

http://deepakrangarajan.blogspot.co.uk/2009/01/sql-2005-transaction-replication-adding.html

I attempted to run this code

EXEC sp_changepublication  @publication = 'statics',  @property = N'immediate_sync',  @value = 'false'  GO   

but this is also locking up the system completely. I guess I will have to run in the middle of the night but thought that you should be able to run this as just a config change to the publication.

Also when I try the last step in the link to add it to the subscription

EXEC sp_addsubscription    @publication = 'statics',    @article = 'dbo.SupplierCorporatePayAccountLink_tbl',    @subscriber = 'PEARLJAM',    @destination_db = 'MLT-Replicated',    @reserved='Internal'  

I get the following error message

Msg 14058, Level 16, State 1, Procedure sp_MSrepl_addsubscription_article, Line 331 Cannot create the subscription because the subscription already exists in the subscription database. Only one subscription to the same publication is allowed in each subscription database. Drop the subscription and add it again if necessary. If the problem persists, replication metadata might be incorrect; see Books Online for troubleshooting information.

After the snapshot, how can I just have the new article added to the subscriber?

Should I index a bit field in SQL Server?

Posted: 01 Mar 2013 12:54 PM PST

I remember reading at one point that indexing a field with low cardinality (a low number of distinct values) is not really worth doing. I admit I don't know enough about how indexes work to understand why that is.

So what if I have a table with 100 million rows in it, and I am selecting records where a bit field is 1? And let's say that at any point in time, there are only a handful of records where the bit field is 1 (as opposed to 0). Is it worth indexing that bit field or not? Why?

Of course I can just test it and check the execution plan, and I will do that, but I'm also curious about the theory behind it. When does cardinality matter and when does it not?

[SQL Server] Using an amemded @From and @To date to get a new result

[SQL Server] Using an amemded @From and @To date to get a new result


Using an amemded @From and @To date to get a new result

Posted: 28 Feb 2013 08:40 PM PST

This is my criteria for the SQL below:From Date 07 Jan 2013 To Date 18 Mar 20131 - First Sunday After 07 Jan 2013 is 13 Jan 2013 - Done (FirstSunday) 2 - First Sunday Before 18 Mar 2013 is 17 Mar 2013 - Done (LastSunday) 3 - How many days between 13 Jan 2013 and 17 Mar 2013 = 63, Divide 63 by 7 = 9, +1 = 10So what I need my datediff to do is use the results from the 1st and 2nd rows above (First and Last Sundays date) so I get the correct result for section 3. Can you help?Declare @From DatetimeDeclare @To DatetimeSet @From = '07 Jan 2013'Set @To = '18 Mar 2013'Select dateadd(dd, CASE WHEN datepart(weekday, @From) = 1 THEN 0 ELSE 8 - datepart(weekday, @From) END,@From) as FirstSunday, dateadd(dd, CASE WHEN datepart(weekday, @To) = 1 THEN 0 ELSE 1 - datepart(weekday, @To) END,@To) as LastSunday,datediff(d,@From, @To)

[MS SQL Server] setup SQL alert notify DBA about windows event log error

[MS SQL Server] setup SQL alert notify DBA about windows event log error


setup SQL alert notify DBA about windows event log error

Posted: 01 Mar 2013 02:56 AM PST

Is there a simpler way to setup SQL alert in SSMS to notify DBA when an error log is written to windows event log.For example if the event type is an Error, or I know the event ID, can I set up the alert in SSMS?Thanks

Moving LUNS

Posted: 01 Mar 2013 01:03 AM PST

We are planning to move LUNS from one server to other server. Do we need to detach the databases before moving the LUNS ? We have this tested without detaching the databases and it worked. But somewhere i am not comfortable to do this process on production server. Which is the best way to do it? We will be taking the disk snapshots before moving the drives after taking the sql server offline.

CDC and LSNs

Posted: 28 Feb 2013 10:24 PM PST

Hi peeps,bit of a departure from my norm I am looking at using CDC from source systems to populate a data warehouse. Currently I am looking at one SQL Server 2008 Database with circa 20 tables. We may add other instances of SQL Server at a later date.Are the Log Sequence numbers totally unique so if I want to move my changed data with LSN's between 0x00000001 and 0x00000100 then all i have to do is access all 20 Table value functions and each time I pass the same two LSN'sor are the LSN's specific to the capture instance ? I am guessing they are unique for the database where CDC is enabled?Many thanks.EPS are there any pitfalls wityh this apporach to loading a data warehouse? Planning to use net changes in the settings to assist getting the data in quickly.

Reorganize index online option

Posted: 28 Feb 2013 10:25 AM PST

Does Reorganizing index with online option causes high disk I/O...I noticed users are getting timeouts after running reorganizing index command for 2 hrs??

[Articles] Why SQL Server?

[Articles] Why SQL Server?


Why SQL Server?

Posted: 28 Feb 2013 11:00 PM PST

This Friday Steve Jones asks you how you got started working with SQL Server in your career? Let us know how you got started and why you stuck with it.

[SQL 2012] SQL SERVER 2012 MAINTANANCE TASK- BEST PRACTICE

[SQL 2012] SQL SERVER 2012 MAINTANANCE TASK- BEST PRACTICE


SQL SERVER 2012 MAINTANANCE TASK- BEST PRACTICE

Posted: 28 Feb 2013 07:48 PM PST

Hi All,Can u anyone suggest me for best practice maintenance task in sql server 2012.I am planning to do the following in my Always on server- Production database.1. Full Backup -- Every day2. Diff Backup -- Every 4 Hourly3. Log Backup-- Every One hour.4.Check database integrity -- Every day5. Reorganize and update statistics -- Weekly basis6. Rebuild index -- Every two Weeks7. sp_delete_backuphistory & sp_purge_jobhistory & Output File Cleanup every Week.Thanks for you help in advance.

Mirroring failed due to invalid connection string

Posted: 28 Feb 2013 05:07 PM PST

Hello, I'm receiving the following error:ALTER DATABASE command failed due to an invalid server connection string.when I issue the command:ALTER DATABASE DBName SET PARTNER = 'TCP://DCDRSQLv01.datacoremarketing.com\SQLMICH01:5022';GOSituation - I'm attempting to mirror from a default instance to a named instance. Each instance is running SQL 2012 SP1. Each server is in a different domain and I have configured logins and certificates to make it work properly. I've seen several posts indicating try the following:- check that both servers can be pinged, name resolves and telnet succeeds to the configured port 5022.- confirm that mirroring is started and logins are granted permission.- restore database and transaction log with NORECOVERY option.- Confirm the database and instance are correct.- Attempt connection without the instance name.- Start the Browser service on all nodes.None of these appear to be providing a different result except for the later which complains "network address cannot be reached" (note telnet to 5022 does work).Any assistance is most appreciated. I'm out of ideas.

[T-SQL] building a dataset !

[T-SQL] building a dataset !


building a dataset !

Posted: 28 Feb 2013 07:14 PM PST

Hi,I have 6 tables MustHave_...... , and a table project.i wanna display in ssrs a table which contain for each project, the percentage for each must have ! should i use a join or a union for my 7 tables ?thank you

expression in derived column issue

Posted: 28 Feb 2013 09:44 PM PST

I want to cretaed new column in derived column with default string valuve likeColumn name Expression TableName FactFDDCellQHourlyPegEricsson1I have tried in many ways converting into sting and text but it showing an error, Plz any one help me to get out of this.

Select full month number (with 0)

Posted: 28 Feb 2013 05:38 PM PST

Hi,I use[code="sql"]DATEPART(month, myDate)[/code]or[code="sql"]MONTH(myDate)[/code]the resut is: 2 (if myDate is 2013.02.03). I would like return: 02Do you have any solution for this?Thans!

Compare list of tables in two Databases from SSMS

Posted: 28 Feb 2013 04:15 PM PST

Hi SQL Gurus,Need to do some cleansing work this morning and I was wondering If I can get help here in the forum.Is there a way we can compare the list of 1) Tables2) Views3) SPsin two different Databases on same server. I would use a 3rd party tool in case there is no way at all from SSMS.The key here is that I don't want to compare the Data in tables, just checking table names would do.So what I want isSelect <List of tables> in DB1EXCEPTSELECT <List of tables> in DB2Same for view and SP.Any Ideas, suggestions ? Thanks a lot in advance.

Guidance on table valued types.

Posted: 28 Feb 2013 06:02 AM PST

We have 1 table valued parameter (tvp) coming to GetXXX stored procedures. It contains only 1 column of type uniqueidentifier. Currently, it does not have any index on it. For performance improvement, I was thinking of adding a primary key as clustered index on it. My initial testing showed if there less than 100 records in that tvp then it actually performs worse than the older one which doesn't have any index on it. I was hoping that it would perform same if not better for smaller datasets. Does anybody know if I'm missing anything here.Thanks in advance.Regards,Mayur

Need to create a Scalar Valued Function

Posted: 28 Feb 2013 03:05 AM PST

Hi ,I need to write a Scalar-valued function ( I guess) to get the highest status for each member on a given day if there are more than one status entered for that particular member on that particular day.Below is the very simple scenario:-- 1. Create Agent Status HistoryGOCREATE TABLE [dbo].[AgentStatusHistory]( [AgentID] [int] NOT NULL, [QualifyDate] [datetime] NULL, [Status] [varchar] (10)) ON [PRIMARY]GO-- 2. INSERT some data into AgentStatusHistory tableGOINSERT INTO dbo.AgentStatusHistory (AgentID, QualifyDate, Status)SELECT 101, '2010-02-01 00:00:00', 'LOW'UNION ALLSELECT 101, '2011-05-01 00:00:00', 'MEDIUM'UNION ALLSELECT 101, '2011-05-01 00:00:00', 'HIGH'UNION ALLSELECT 102, '2012-22-25 00:00:00', 'LOW'UNION ALLSELECT 102, '2012-22-25 00:00:00', 'MEDIUM'UNION ALLSELECT 102, '2012-22-25 00:00:00', 'HIGH'GOFrom the above example, if I execute the below SQL query..SELECT * FROM dbo.AgentStatusHistoryORDER BY AgentID, QualifyDate;here is what I need to see in the result set…• AgentID, QualifyDate, Status• 101, '2010-05-01 00:00:00', 'HIGH'• 101, '201-02-01 00:00:00', 'LOW'• 102, '2012-22-25 00:00:00', 'HIGH'

Compare the rowcount

Posted: 28 Feb 2013 01:06 AM PST

hello World,I have table with date and rowcount like below. Time stamp Row Count2013-02-28 10:53:50.283 12112013-02-28 11:53:50.283 12122013-02-28 12:53:50.283 1213 2013-02-28 13:53:50.283 12132013-02-28 14:53:50.283 1213I am running the below command to retrive the information.select getdate(), 'Row Count' = count(*) from dba_backuplocationNow, i have to create a table with two columns Timestamp & Row count (say the table name is Rowcount_monitor) and insert the rowcount values every hour using a sql server job. ( If we put my requiremnt is stored procedure and get that executed through job).Requiremnt :I have to monitor the table and compare the last three hours of rowconut and if the rowcount is same then i have send an database mail saying rowcount did not change from last three hours. We can use the above table as example where the last three rowcount did not change. Also we have have to delete the records if they are two days old.Can someone help on this immediately.

[SQL Server 2008 issues] Searching for Solution

[SQL Server 2008 issues] Searching for Solution


Searching for Solution

Posted: 28 Feb 2013 04:26 PM PST

Hello All, I have one query (it may sound stupid though). We have one SQL Server 2008 (Microsoft SQL Server 2008 (SP2) - 10.0.4266.0 (X64)) and on my machine I've installed the client tools. We have some process where we create soem bulky temp tables after doing some heavy processing on sybase and thereafter through a java code the data in those temp tables in Sybase are transferred to SQL batchwise. I think this is not safe and efficient strategy and somewhat time consuming and also feel that it should be handled using SSIS packages (?). Can anyone suggest on that ?Secondly, if creating SSIS package is good option then will I have to uninstall the SQL client on my machine and install the developer edition (with business studio) ? Is there license requirement (we have license for SQL Server Enterprise Edition) ?

Unable to connec the site using sql crentials.

Posted: 28 Feb 2013 05:12 PM PST

we have a new application on to a server, which uses SQL 2008 ENT for itsUser is unable to connect from the app server using login credentials[Fri Mar 01 06:52:08 2013] [error] [client xx.2xx.xx.xxx] PHP Warning: mssql_connect(): Unable to connect to server: testsql in /na/testvol/eocs/int_consumer/database/live/mssql.php on line 8But we are able to connect the databse through SSMS uing the same login and password.Please help if any one faced this issue.Many thanks,

High fragmentation of indexes

Posted: 28 Feb 2013 01:57 PM PST

Hi all,I have a table Order with primary key ID column is uniqueidentifier data type on server B. Nightly, a synchronization job runs to synchronize orders from server A to server B. After that, I check the fragmentation of clustered index I see that the value is always so high. Does uniqueidentifier data type cause the high frag because SQL insert data into Order table with random mode?I also set FILLFACTOR = 80%Thanks,

SELECT INTO vs INSERT INTO

Posted: 28 Feb 2013 01:18 PM PST

Hi,I am trying to create a temporary table. It will contain approx 25 million rows to start with and will increase in future.I tried Option1: SELECT col1 into #tempT from tableand Option2: create table #tempT (col1 int)insert into #tempTselect col1 from tableWhile doing both I ran the following to get locking information. SELECT resource_type, (CASE WHEN resource_type = 'OBJECT' THEN object_name(resource_associated_entity_id) WHEN resource_type IN ('DATABASE', 'FILE', 'METADATA') THEN 'N/A' WHEN resource_type IN ('KEY', 'PAGE', 'RID') THEN ( SELECT object_name(object_id) FROM sys.partitions WHERE hobt_id=resource_associated_entity_id ) ELSE 'Undefined' END) AS requested_object_name, request_mode AS lock_type, request_status, request_owner_id AS transaction_id FROM sys.dm_tran_locks where request_session_id =52Question:1. I have read that SELECT INTO creates exclusive lock on objects in the system catalog. In fact, creation of any object would do that, howwever when SELECT INTO runs for longer then the catlog locking will be longer and at that point in time if you try to query system catalog then you might run into blocking issues.When i ran the query to get the locking information I see only schema locks on the table from which i am trying to read the data and possibly the new temp table that was being created. This happened for both SELECT INTO and INSERT INTO.Can someone help me uunderstand -1. How INSERT INTO is better as compared to SELECT ITNO when there are huge datasets expected. 2. Would this SELECT INTO also hamper the rebuild index and other db activities if the catlog is exclusively locked.

Can I call a UDF in another database?

Posted: 28 Feb 2013 12:19 PM PST

I'm trying to write a check constraint that checks that the current column value exists in a library table in a library database.[code="sql"]USE LibGOCREATE FUNCTION [dbo].[fnIsValidResource] (@gResource UNIQUEIDENTIFIER)RETURNS BITASBEGIN RETURN 1ENDGOUSE MaintDBGOCREATE TABLE [dbo].[Als]( [gPK] UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL DEFAULT NEWSEQUENTIALID(), [grObjType] UNIQUEIDENTIFIER NOT NULL CHECK (Lib.dbo.IsValidResource(grObjType)=1))GO[/code]When I try to execute the CREATE TABLE command, I get the error:Msg 4121, Level 16, State 1, Line 4Cannot find either column "Lib" or the user-defined function or aggregate "Lib.dbo.IsValidResource", or the name is ambiguous.Can someone please explain why this doesn't work and how to make it work?Thanks,Greg

Can`t build an OLAP cube from Project Server 2010

Posted: 28 Feb 2013 03:37 AM PST

I try to build for first time the OLAP Cube and show me this erroro CBSOlapProcessingFailure (17004) - Failed to process the Analysis Services database DatabaseName on the n01wdpjs01 server. Error: Internal error: The operation terminated unsuccessfully. Server: The operation has been cancelled. OLE DB error: OLE DB or ODBC error: Login failed for user 'NT AUTHORITY\NETWORK SERVICE'.; 28000; Cannot open database "ProjectServer_Reporting" requested by the login. The login failed.; 42000. Errors in the high-level relational engine. A connection could not be made to the data source with the DataSourceID of 'Project Reporting data source', Name of 'Project Reporting data source'. Errors in the OLAP storage engine: An error occurred while the dimension, with the ID of 'Project List', Name of 'Project List' was being processed. Errors in the OLAP storage engine: An error occurred while the 'Start Date' attribute of the 'Project List' dimension from the 'DatabaseName' database was being processed. . Details: id='17004' name='CBSOlapProcessingFailure' uid='2a5d679a-5a15-4457-afcc-b96198f51199' QueueMessageBody='Setting UID=00007829-4392-48b3-b533-5a5a4797e3c9 ASServerName=n01wdpjs01 ASDBName=DatabaseName ASExtraNetAddress= RangeChoice=0 PastNum=1 PastUnit=0 NextNum=1 NextUnit=0 FromDate=12/26/2012 00:00:00 ToDate=12/26/2012 00:00:00 HighPriority=True' Error='Failed to process the Analysis Services database DatabaseName on the n01wdpjs01 server. Error: Internal error: The operation terminated unsuccessfully. Server: The operation has been cancelled. OLE DB error: OLE DB or ODBC error: Login failed for user 'NT AUTHORITY\NETWORK SERVICE'.; 28000; Cannot open database "ProjectServer_Reporting" requested by the login. The login failed.; 42000. Errors in the high-level relational engine. A connection could not be made to the data source with the DataSourceID of 'Project Reporting data source', Name of 'Project Reporting data source'. Errors in the OLAP storage engine: An error occurred while the dimension, with the ID of 'Project List', Name of 'Project List' was being processed. Errors in the OLAP storage engine: An error occurred while the 'Start Date' attribute of the 'Project List' dimension from the 'DatabaseName' database was being processed. '. Some Idea how correct this?Regards

Generate reports

Posted: 28 Feb 2013 10:18 AM PST

I have a question related to SSRS. What i do is after all ETL process i am generating a report off of a table that has numerous columns. I have a column in the table that store integer values. My requirement is that the report should be sent out to all the recipients only if a row in that column is greater than 0 Else don't send out any email at all. I have been googling for a while and i can't seem to find a solution. Any advise/direction would be much appreciated.Thanks.

having doubt in output of dbcc shrinkdatabse command

Posted: 28 Feb 2013 04:23 AM PST

Hi on executing dbcc shrinkdatabase we get following as result a>DbId, b>FileId, c>CurrentSize, d>MinimumSize e>EstimatedPagesexcept for first id didn`t get rest allfirst database can have many file and when execute shrinkdatabase then does sql server tries to shrink all files aor only onesecond why option c,d,e are expressed in number of 8 kb pages , are this outputs mere number or they are sizes in gb/md/kb and in order to get actual size we have to do some maths to decode it .Please clear my doubts

Help getting all the weeks between two random dates for each row

Posted: 27 Feb 2013 11:58 PM PST

HI AllI need some pointers/help in getting this doneI have some data sample below each row has two week numbers yyyyww 201301 or 201352. I need to build a row of the same data but with each week in it Starting dataItemNumber | StoreNumber | WeekStart | Week Finish123456 | 01 | 201250 | 201310123456 | 02 | 201301 | 201305654321 | 01 | 201308 | 201311REQUIRED FINAL FORMATItemNumber | StoreNumber | Week 123456 | 01 | 201250 123456 | 01 | 201251 123456 | 01 | 201252 123456 | 01 | 201301 123456 | 01 | 201302 123456 | 01 | 201303 123456 | 01 | 201304 123456 | 01 | 201305 123456 | 01 | 201306123456 | 01 | 201307123456 | 01 | 201308123456 | 01 | 201309123456 | 01 | 201310123456 | 02 | 201301123456 | 02 | 201302123456 | 02 | 201303123456 | 02 | 201304123456 | 02 | 201305etc....I tried doing something with tally tables but couldn't get anywhere i'm currently trying CTE but i cant see the light at the end of the tunnel.Any help appreciated !!!

Export Contacts from Outlook to SQL Server 2008

Posted: 28 Feb 2013 06:28 AM PST

Hi All, I would like to export all the mail id's which are available in my outlook inbox and store those id's into my SQL Server 2008 table.Is there any chance do like this?If Yes can you please share your ideas

Estimated Execution Plan Curiosity

Posted: 28 Feb 2013 01:31 AM PST

I have a nightly job the runs about 50 insert & update statements and takes a few hours. I ran the whole query's Estimated Execution Plan, and query #35 said it had a 75% cost relative to the batch. It is a simple update, with 2 clustered index scans. I tested the statement originally and it took 33 seconds, added a couple of indexes on the join columns and it took 2 seconds.But when I run the estimated execution plan again it says that query is now 88% relative to the batch.Seems odd (to someone not well versed in execution plans)

Using a pecking order for addresses

Posted: 28 Feb 2013 01:11 AM PST

Hello All,I have encountered a new problem that I haven't dealt with before. I have a table of individuals who have more than 1 address type. I would like to write a query that returns 1 address for each individual. This result set should include an address based on a pecking order. For this example, the pecking order should be Main > Work > SeasonalHere's a sample table:[code="plain"]IndName Account AddressType AddressLine1 AddressLine2 City StJohn 12345 Main 123 Main St. Boston MAJohn 12345 Work 45 Spring St. Boston MAJohn 12345 Seasonal 23 Ivaloo St. Portland MEJim 23456 Seasonal 14 Church St. Houghton METhe output desired is:IndName Account AddressType AddressLine1 AddressLine2 City StJohn 12345 Main 123 Main St. Boston MAJim 23456 Seasonal 14 Church St. Houghton ME[/code]Any thoughts on the most efficient way to approach this? Thanks in advance!-Ken G

Add Radio Button

Posted: 27 Feb 2013 08:55 PM PST

How to add Radio button in SSRS more than 3 radio button with submit option.Thanks

get the first and last day of any Year/Month

Posted: 25 Feb 2013 07:11 AM PST

Hi,i have a view with two columns, lets say SpecYear and SpecMonth, both are integer. How can I build two new columns with the first and last day of this year/month:pinch:ex.2013 02 => 2013-02-01 2013-02-28etc.thanx

Restore log stop at option

Posted: 28 Feb 2013 01:10 AM PST

Why do we use "STOPAT" option while restoring logs to database?Thanks

Error login packet

Posted: 28 Feb 2013 01:15 AM PST

Hi,We are using sql server 2008 with SP3.We are getting below alerts frequently on sql server.can any one help me why we are getting ths error."The login packet used to open the connection is structurally invalid; the connection has been closed. Please contact the vendor of the client library"THanks,

Compare the rowcount

Posted: 28 Feb 2013 01:09 AM PST

hello World,I have table with date and rowcount like below.Time stamp Row Count2013-02-28 10:53:50.283 12112013-02-28 11:53:50.283 12122013-02-28 12:53:50.283 12132013-02-28 13:53:50.283 12132013-02-28 14:53:50.283 1213I am running the below command to retrive the information.select getdate(), 'Row Count' = count(*) from dba_backuplocationNow, i have to create a table with two columns Timestamp & Row count (say the table name is Rowcount_monitor) and insert the rowcount values every hour usinga sql server job. ( If we put my requiremnt is stored procedure and get that executed through job).Requiremnt :I have to monitor the table and compare the last three hours of rowconut and if the rowcount is same then i have send andatabase mail saying rowcount did not change from last three hours. We can use the above table as example where the last three rowcount did not change. Also we have have to delete the records if they are two days old.Can someone help on this immediately.

Accessing a cluster SQL Server instance without the instance name

Posted: 27 Feb 2013 11:44 PM PST

Our application is currently connecting to a SQL default clustered instance using a 'CNAME' or alias. I am in the process of building a new cluster and the new SQL instance has to be a named instance instead of a default instance. I would like to connect to this new named instance by just switching the CNAME to the new virtual server name. Essentially I would like to connect to the cluster named instance without using the instance name. See example below. Current – "SQLServer1" – no instance name since it is a default instanceNew – "SQLServer2\InstanceName"Alias/CNAME – "DBServer" currently pointing to "SQLServer1" and ideally would have it just changed to "SQLServer2". I found and tested this solution. http://trycatchfinally.net/2009/05/accessing-a-clustered-sql-server-instance-without-the-instance-name/ My initial tests didn't uncover any issues. Does anyone see any issues having a very important OLTP application connect to the database in this manner?

login_name shows up as blank in sys.dm_exec_sessions but not in sp_who2

Posted: 29 Aug 2011 06:03 AM PDT

When I probe [b]sys.dm_exec_sessions [/b](joining with other DMVs to get active-session info) I get the login_name column to be blank for a certain session.Yet, for the same spid, the Login shows up properly (not blank) when I execute "sp_who2 active".It also shows up properly in [b]sys.sysprocesses[/b].What could be the reason for the faulty output of [b]sys.dm_exec_sessions [/b]?

Query in SSRS

Posted: 27 Feb 2013 07:51 PM PST

hi All ,i have one requirement where the SSRS report is displaying only certain data of a column based on a filter on a date range ,whereas i should get all the data even though the date range is selected or not for that particular column ,how can we achieve this is SSRS report

Search This Blog