Friday, March 1, 2013

[how to] Separated databases, separated data


Separated databases, separated data

Posted: 01 Mar 2013 08:42 PM PST

We have a global project. We will have servers on several continents. We keep the data separated because it is not necessary for it all to be in one place. BUT! We need unique IDs (MySQL, InnoDB). I have two ideas.

  1. We use the auto_increment_increment and auto_increment_offset settings on each server. The problem: we don't know how many servers there will be!
  2. We create a custom random ID generator with VARCHAR(16-32) primary keys. The problem: how can we guarantee unique values (maybe a unique server prefix?), and won't it be slower?

Does anybody have another idea? I ran some quick benchmarks and did not find a difference between integer and varchar primary keys.
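For the first idea, a minimal configuration sketch (an assumption-laden illustration, not a recommendation): the increment has to be sized for the maximum number of servers you ever expect, here 10.

-- Run on server #3 of up to 10 servers; normally also put in my.cnf so the
-- values survive a restart and apply to new sessions.
SET GLOBAL auto_increment_increment = 10;  -- step size leaves room for 10 servers
SET GLOBAL auto_increment_offset    = 3;   -- this server generates 3, 13, 23, ...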

SQL Server media belongs to which edition

Posted: 01 Mar 2013 06:59 PM PST

Could anyone please help me with the question below?

Without installing from the SQL Server 2008 or 2012 media, how can we tell which edition the media belongs to (Developer, Standard, Enterprise, etc.)?

Thanks!

Hard Parse on Non-Identical Statements?

Posted: 01 Mar 2013 06:10 PM PST

Let's say I have this query:

SELECT X FROM Y  

If I run it for the first time, it will do a hard parse.

After that, I run this query:

select x from y  

With the only difference being upper versus lower case, does Oracle have to do a hard parse again?

Thank You.
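One way to observe this, assuming SELECT access on V$SQL: after running both variants, each casing shows up as its own cursor, because the shared-pool lookup is done on the exact text of the statement.

SELECT sql_id, parse_calls, executions, sql_text
FROM   v$sql
WHERE  UPPER(sql_text) LIKE 'SELECT X FROM Y%';
-- Two rows (one per case variant) would indicate two separate parses.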

Which triplestores have support for full-text search

Posted: 01 Mar 2013 03:33 PM PST

Which RDF triplestores support full-text search, and what performance can be expected from them?

I am assuming that they will have to extend SPARQL in some way as the only way that I can think of for performing full text search currently is to perform a regular expression filter against all literals returned by the SELECT.

Find a string in MS SQL database

Posted: 01 Mar 2013 01:51 PM PST

I have an application which has logged an error like

Object <obj id> generated an exception.  

The application stores all its information in an MS SQL database which contains 100+ tables. Somewhere there must be a table which contains all the objects of this type, and I'm sure obj id is the key column there.

How can I find this object? Do I have to write 100+ SELECT queries to check each table, or is there a more convenient solution?
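One possible shortcut, sketched below on the assumption that the id is stored in a character column and that 'MyObjId' stands in for the real value: generate the per-table SELECT statements from the catalog instead of writing them by hand, then run the generated output.

SELECT 'SELECT ''' + TABLE_SCHEMA + '.' + TABLE_NAME + ''' AS table_name, * FROM '
       + QUOTENAME(TABLE_SCHEMA) + '.' + QUOTENAME(TABLE_NAME)
       + ' WHERE ' + QUOTENAME(COLUMN_NAME) + ' = ''MyObjId'';'
FROM INFORMATION_SCHEMA.COLUMNS
WHERE DATA_TYPE IN ('char', 'nchar', 'varchar', 'nvarchar');
-- Copy the generated statements into a new query window and execute them.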

Database Design - different objects with shared tagging

Posted: 01 Mar 2013 03:56 PM PST

My background is more in web programming rather than database administration, so please correct me if I'm using the wrong terminology here. I'm trying to figure out the best way to design the database for an application I'll be coding.

The situation: I've got Reports in one table and Recommendations in another table. Each Report can have many Recommendations. I also have a separate table for Keywords (to implement tagging). However, I want to have just one set of keywords that gets applied to both Reports and Recommendations so that searching on keywords gives you Reports and Recommendations as results.

Here's the structure I started out with:

Reports
----------
ReportID
ReportName

Recommendations
----------
RecommendationID
RecommendationName
ReportID (foreign key)

Keywords
----------
KeywordID
KeywordName

ObjectKeywords
----------
KeywordID (foreign key)
ReportID (foreign key)
RecommendationID (foreign key)

Instinctively, I feel like this isn't optimal and that I should have my taggable objects inherit from a common parent, and have that common parent be tagged, which would give the following structure:

BaseObjects
----------
ObjectID (primary key)
ObjectType

Reports
----------
ObjectID_Report (foreign key)
ReportName

Recommendations
----------
ObjectID_Recommendation (foreign key)
RecommendationName
ObjectID_Report (foreign key)

Keywords
----------
KeywordID (primary key)
KeywordName

ObjectKeywords
----------
ObjectID (foreign key)
KeywordID (foreign key)

Should I go with this second structure? Am I missing any important concerns here? Also, if I do go with the second, what should I use as a non-generic name to replace "Object"?
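For concreteness, here is a minimal DDL sketch of that second structure (SQL Server-flavoured; the names, types and the use of "TaggableItems" in place of "Object" are only illustrative assumptions):

CREATE TABLE TaggableItems (
    ItemID     int IDENTITY(1,1) PRIMARY KEY,
    ItemType   varchar(20) NOT NULL           -- 'Report' or 'Recommendation'
);

CREATE TABLE Reports (
    ItemID     int PRIMARY KEY REFERENCES TaggableItems (ItemID),
    ReportName varchar(200) NOT NULL
);

CREATE TABLE Recommendations (
    ItemID             int PRIMARY KEY REFERENCES TaggableItems (ItemID),
    RecommendationName varchar(200) NOT NULL,
    ReportItemID       int NOT NULL REFERENCES Reports (ItemID)
);

CREATE TABLE Keywords (
    KeywordID   int IDENTITY(1,1) PRIMARY KEY,
    KeywordName varchar(100) NOT NULL
);

CREATE TABLE ItemKeywords (
    ItemID    int NOT NULL REFERENCES TaggableItems (ItemID),
    KeywordID int NOT NULL REFERENCES Keywords (KeywordID),
    PRIMARY KEY (ItemID, KeywordID)
);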

Replication issue - CREATE SELECT alternative?

Posted: 01 Mar 2013 09:02 AM PST

I have a MySQL 5.1 slave for our BI team.

They need to run some CREATE ... SELECT statements with big SELECT parts (several million rows).

Since CREATE ... SELECT is DDL, if replication tries to update rows in the same tables that the SELECT reads, replication is blocked until the CREATE ... SELECT finishes.

Do you know a good non-blocking alternative to those CREATE ... SELECT statements?

I thought of SELECT ... INTO OUTFILE followed by LOAD DATA INFILE, but that would fill up our disks, as BI guys like to do... :)
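For reference, a rough sketch of that OUTFILE / LOAD DATA route (file path, table names and the WHERE clause are placeholders; it does nothing about the disk-space concern):

SELECT *
FROM source_table
WHERE report_date >= '2013-01-01'
INTO OUTFILE '/tmp/bi_extract.csv';      -- written on the slave's filesystem

CREATE TABLE bi_copy LIKE source_table;  -- plain DDL, no long-running SELECT inside

LOAD DATA INFILE '/tmp/bi_extract.csv' INTO TABLE bi_copy;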

Max.

Difference between return next and return record

Posted: 01 Mar 2013 11:00 AM PST

What is the difference between RETURN NEXT and RETURN RECORD in PL/pgSQL? I find the manual quite confusing in explaining what RETURN RECORD actually does. What is a RECORD type?
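As a point of comparison, RETURN NEXT is used in set-returning functions to append one row at a time to the result set. A minimal sketch, assuming a table t(id integer) exists:

CREATE OR REPLACE FUNCTION list_ids() RETURNS SETOF integer AS $$
DECLARE
    r integer;
BEGIN
    FOR r IN SELECT id FROM t LOOP
        RETURN NEXT r;   -- queue this value; execution continues in the loop
    END LOOP;
    RETURN;              -- the final RETURN ends the function
END;
$$ LANGUAGE plpgsql;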

sql xml performance

Posted: 01 Mar 2013 03:17 PM PST

I have a SQL table with 21 columns. The majority are numeric and the rest are nvarchar.

I need to add 4 columns that will contain XML data for each record. The XML data for each record can be anywhere from 200 to 2,000 lines.

The question is:

  • Does adding the XML typed columns to the same table change the speed of querying that table?

  • What are the performance benefits when I add the xml typed
    columns to another table and join both tables when doing queries?

  • Is it better to encode the XML data to shrink it and decode it in the application?
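Regarding the second bullet, a sketch of the separate-table layout being asked about (table and column names are assumptions): the XML payloads live in a 1:1 side table and are joined only when they are actually needed.

CREATE TABLE dbo.MainTableXml
(
    MainTableID int NOT NULL PRIMARY KEY
        REFERENCES dbo.MainTable (MainTableID),
    Payload1 xml NULL,
    Payload2 xml NULL,
    Payload3 xml NULL,
    Payload4 xml NULL
);

-- Queries that do not touch the XML never read those pages; when needed:
SELECT m.*, x.Payload1
FROM dbo.MainTable AS m
LEFT JOIN dbo.MainTableXml AS x ON x.MainTableID = m.MainTableID;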

How to take a database dump for selective set of values in few tables?

Posted: 01 Mar 2013 10:52 AM PST

create table foo (id, val1, user_id, ...)
create table bar (id, foo_id, val2, ...)
create table baz (id, bar_id, val3, ...)

select * from foo where user_id = 1;
select * from bar where id in (select id from foo where user_id = 1)
select * from baz where id in (for all the above foos)

How do I write a dump command which does the above?
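If this is MySQL (an assumption), one SQL-only approximation is to export just the matching rows of each table and reload them elsewhere; the file paths are placeholders and the subqueries assume the foreign-key columns shown in the CREATE statements above:

SELECT * FROM foo WHERE user_id = 1
  INTO OUTFILE '/tmp/foo.csv';

SELECT * FROM bar WHERE foo_id IN (SELECT id FROM foo WHERE user_id = 1)
  INTO OUTFILE '/tmp/bar.csv';

SELECT * FROM baz WHERE bar_id IN (
    SELECT b.id
    FROM bar AS b
    JOIN foo AS f ON b.foo_id = f.id
    WHERE f.user_id = 1)
  INTO OUTFILE '/tmp/baz.csv';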

Postgres error ( XX000: invalid memory alloc request size 4294967290 )

Posted: 01 Mar 2013 01:22 PM PST

I'm getting the following error in my postgres 8.4 database:

XX000: invalid memory alloc request size 4294967290

Operating system: Windows 7, 64-bit
Memory: 8 GB
Database version: PostgreSQL 8.4 server

I cannot back up my database; I can't even view the data using SELECT. The server gets automatically stopped when I try to do this.

Also posted:

I added a screenshot of the error, which occurred while executing the following:

select * from < tablename >


ERROR: invalid memory alloc request size 4294967290

SQL state: XX000

Is it possible to use two different drivers for sql server on the same machine. Say 2000 and 2008

Posted: 01 Mar 2013 08:02 PM PST

I am trying to use two versions of SQL Server on the same machine. I have the SQL Server 2000 driver installed on the system. Can I also install the SQL Server 2008 driver on the same machine and use both side by side?

Different results rebuilding an index online and offline

Posted: 01 Mar 2013 04:10 PM PST

I have a non-clustered, non-unique index on a foreign key column of type bigint. When I rebuild the index online, the average fragmentation drops to 3%, with 2 fragments, and 30 pages.

When I run the same rebuild index offline, the average fragmentation is 25%, with 4 fragments and 28 pages.

This is the query, with names redacted.

ALTER INDEX [IX] ON [dbo].[Table]
REBUILD WITH
(
    PAD_INDEX = OFF,
    STATISTICS_NORECOMPUTE = OFF,
    ALLOW_ROW_LOCKS = ON,
    ALLOW_PAGE_LOCKS = ON,
    ONLINE = ON,
    SORT_IN_TEMPDB = ON
);

What could be causing this difference? The same situation occurs on multiple tables.
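For reference, the figures quoted above are typically read like this ('dbo.Table' stands in for the redacted table name):

SELECT i.name, ps.avg_fragmentation_in_percent, ps.fragment_count, ps.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.Table'), NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i
    ON i.object_id = ps.object_id AND i.index_id = ps.index_id;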

How might a corrupt partition in TempDB result in DBCC CHECKDB reporting no issue?

Posted: 01 Mar 2013 10:01 AM PST

One of our SQL Servers reported the following error recently:

DATE/TIME:   2/25/2013 9:15:14 PM

DESCRIPTION: No catalog entry found for partition ID 9079262474267394048
             in database 2. The metadata is inconsistent. Run DBCC CHECKDB
             to check for a metadata corruption.

Less than 15 minutes later I connected to the server and ran:

SELECT name
FROM sys.databases
WHERE database_id = 2;

Which returned 'tempdb'. I then ran:

DBCC CHECKDB ('tempdb') WITH NO_INFOMSGS, TABLERESULTS;  

Which returned no results, indicating no issues with the database affected.

How could corruption in the database result in the error message above, yet DBCC CHECKDB not report the problem? I presumed that if a page checksum calculation fails and the page is marked as suspect, any object referencing that page could not be dropped, but I must be wrong.

Once a page is marked 'suspect', how can it be marked not-suspect, or fixed, or reused, or whatever such that DBCC CHECKDB does not report any problem with the page in question?


Edit: 2013-02-27 13:24

Just for fun, I tried to recreate the corruption in TempDB assuming a #temp table was the culprit.

However, since I cannot set the SINGLE_USER option in TempDB, I cannot use DBCC WRITEPAGE to corrupt a page, and therefore I cannot force corruption in TempDB.

Instead of using DBCC WRITEPAGE one could set the database offline and use a hex editor to modify random bytes in the db file. Of course, that does not work either on TempDB since the database engine cannot run with TempDB offline.

If you stop the instance, TempDB is automatically recreated at next startup; hence that won't do the trick either.

If anyone can think of a way to recreate this corruption, I'd be willing to do further research.

In order to test the hypothesis that a corrupted page cannot be fixed by DROP TABLE, I created a test database and used the following script to corrupt a page, then attempted to drop the affected table. The result was that the table could not be dropped; I had to RESTORE DATABASE Testdb PAGE = ''... in order to recover the affected page. I assume that if I had changed some other part of the page in question, it might have been possible to fix it with DROP TABLE or perhaps TRUNCATE TABLE.

/* ********************************************* */
/* ********************************************* */
/*  DO NOT USE THIS CODE ON A PRODUCTION SYSTEM!! */
/* ********************************************* */
/* ********************************************* */
USE Master;
GO
ALTER DATABASE test SET RECOVERY FULL;
BACKUP DATABASE Test
    TO DISK = 'Test_db.bak'
    WITH FORMAT
        , INIT
        , NAME = 'Test Database backup'
        , SKIP
        , NOREWIND
        , NOUNLOAD
        , COMPRESSION
        , STATS = 1;
BACKUP LOG Test
    TO DISK = 'Test_log.bak'
    WITH FORMAT
        , INIT
        , NAME = 'Test Log backup'
        , SKIP
        , NOREWIND
        , NOUNLOAD
        , COMPRESSION
        , STATS = 1;
GO
ALTER DATABASE test SET SINGLE_USER;
GO
USE Test;
GO
IF EXISTS (SELECT name FROM sys.key_constraints WHERE name = 'PK_temp')
    ALTER TABLE temp DROP CONSTRAINT PK_temp;
IF EXISTS (SELECT name FROM sys.default_constraints WHERE name = 'DF_temp_testdata')
    ALTER TABLE temp DROP CONSTRAINT DF_temp_testdata;
IF EXISTS (SELECT name FROM sys.tables WHERE name = 'temp')
    DROP TABLE temp;
GO
CREATE TABLE temp
(
    tempID INT NOT NULL CONSTRAINT PK_temp PRIMARY KEY CLUSTERED IDENTITY(1,1)
    , testdata uniqueidentifier CONSTRAINT DF_temp_testdata DEFAULT (NEWID())
);
GO

/* insert 10 rows into #temp */
INSERT INTO temp default values;
GO 10

/* get some necessary parameters */
DECLARE @partitionID bigint;
DECLARE @dbid smallint;
DECLARE @tblid int;
DECLARE @indexid int;
DECLARE @pageid bigint;
DECLARE @offset INT;
DECLARE @fileid INT;

SELECT @dbid = db_id('Test')
    , @tblid = t.object_id
    , @partitionID = p.partition_id
    , @indexid = i.index_id
FROM sys.tables t
    INNER JOIN sys.partitions p ON t.object_id = p.object_id
    INNER JOIN sys.indexes i on t.object_id = i.object_id
WHERE t.name = 'temp';

SELECT TOP(1) @fileid = file_id
FROM sys.database_files;

SELECT TOP(1) @pageid = allocated_page_page_id
FROM sys.dm_db_database_page_allocations(@dbid, @tblid, null, @partitionID, 'LIMITED')
WHERE allocation_unit_type = 1;

/* get a random offset into the 8KB page */
SET @offset = FLOOR(rand() * 8192);
SELECT @offset;

/* 0x74 below is the letter 't' */
DBCC WRITEPAGE (@dbid, @fileid, @pageid, @offset, 1, 0x74, 1);

SELECT * FROM temp;

Msg 824, Level 24, State 2, Line 36
SQL Server detected a logical consistency-based I/O error: incorrect checksum
(expected: 0x298b2ce9; actual: 0x2ecb2ce9). It occurred during a read of page
(1:1054) in database ID 7 at offset 0x0000000083c000 in file
'C:\SQLServer\MSSQL11.MSSQLSERVER\MSSQL\DATA\Test.mdf'. Additional messages in
the SQL Server error log or system event log may provide more detail. This is
a severe error condition that threatens database integrity and must be
corrected immediately. Complete a full database consistency check
(DBCC CHECKDB). This error can be caused by many factors; for more
information, see SQL Server Books Online.

At this point you get disconnected from the database engine, so reconnect to continue.

USE Test;
DBCC CHECKDB WITH NO_INFOMSGS, TABLERESULTS;

Corruption is reported here.

DROP TABLE temp;

Msg 824, Level 24, State 2, Line 36
SQL Server detected a logical consistency-based I/O error: incorrect checksum
(expected: 0x298b2ce9; actual: 0x2ecb2ce9). It occurred during a read of page
(1:1054) in database ID 7 at offset 0x0000000083c000 in file
'C:\SQLServer\MSSQL11.MSSQLSERVER\MSSQL\DATA\Test.mdf'. Additional messages in
the SQL Server error log or system event log may provide more detail. This is
a severe error condition that threatens database integrity and must be
corrected immediately. Complete a full database consistency check
(DBCC CHECKDB). This error can be caused by many factors; for more
information, see SQL Server Books Online.

Corruption is reported here, DROP TABLE fails.

/* assuming ENTERPRISE or DEVELOPER edition of SQL Server,
   I can use PAGE='' to restore a single page from backup */
USE Master;
RESTORE DATABASE Test PAGE = '1:1054' FROM DISK = 'Test_db.bak';
BACKUP LOG Test TO DISK = 'Test_log_1.bak';

RESTORE LOG Test FROM DISK = 'Test_log.bak';
RESTORE LOG Test FROM DISK = 'Test_log_1.bak';

Edit #2, to add the @@VERSION info requested.

SELECT @@VERSION;  

Returns:

Microsoft SQL Server 2012 (SP1) - 11.0.3000.0 (X64)
    Oct 19 2012 13:38:57
    Copyright (c) Microsoft Corporation
    Enterprise Evaluation Edition (64-bit) on Windows NT 6.2 <X64> (Build 9200: )

I know this is the Evaluation Edition, we have keys for the Enterprise Edition, and will be doing an Edition Upgrade soon.

Better query to select multiple columns into a single output column?

Posted: 01 Mar 2013 09:59 AM PST

I have the following query that sort of works, but I find it to be kludgy and inefficient. Is there a better way to write this query?

I know this is a bit of an odd query, or else I would probably have found some advice when searching, but it is necessary for our business logic. We've been doing this with Hibernate queries and software that writes out the file, but we need to improve performance drastically, and that means using raw SQL. The output of this query is used to partially populate a helper table that we query to figure out which of our 40-odd product tables a particular item is contained in.

Could the following query be replicated using something like GROUP_CONCAT in reverse? Note that the second and third columns must be repeated on each line. The fields all contain the same type of data.

SELECT 'Pn', 'Mfg', 'TableId', 'SourceColumn', 'OtherData'
SELECT PN1, Mfg, '26', 'PN1', OtherData FROM Table
UNION ALL
SELECT PN2, Mfg, '26', 'PN2', OtherData FROM Table
UNION ALL
SELECT PN3, Mfg, '26', 'PN3', OtherData FROM Table
UNION ALL
SELECT PN4, Mfg, '26', 'PN4', OtherData
INTO OUTFILE 'XXX.dat'
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
ESCAPED BY '\\'
LINES TERMINATED BY '\n'
FROM Table d;

Input looks like the following table:

|  PN1  |  PN2  |  PN3    |  PN4    |  MFG  |  TableId  |  OtherData  |
-----------------------------------------------------------------------
|  asdf |  abde |  12354  |  asdfc  |  2    |  26       |  0.2456     |
|  nupo |  1354 |  null   |  null   |  2    |  26       |  1.53       |
|  ...  |  ...  |  ...    |  ...    |  ...  |  ...      |  ...        |

I want the output .dat file to look like this:

      "Pn","Mfg","TableId","SourceColumn","OtherData"      "asdf",2,26,"PN1",0.2456      "abde",2,26,"PN2",0.2456      "12354",2,26,"PN3",0.2456      "asdfc",2,26,"PN4",0.2456      "nupo",2,26,"PN1",1.53      "1354",2,26,"PN2",1.53      ...      ...      ...  

If I take the INTO OUTFILE clause out, I get an output that looks like this:

|  Pn      |  Mfg     |  TableId    |  SourceColumn  | OtherData  |
-----------------------------------------------------------------
|  "Pn"    |  {blob}  |  "TableId"  |  "PN"          |  {blob}    |
|  "asdf"  |  {blob}  |  "PN1"      |  "PN1"         |  {blob}    |
|  "abde"  |  {blob}  |  "PN2"      |  "PN2"         |  {blob}    |
|  "12354" |  {blob}  |  "PN3"      |  "PN3"         |  {blob}    |
|  "asdfc" |  {blob}  |  "PN4"      |  "PN4"         |  {blob}    |
|  "nupo"  |  {blob}  |  "PN1"      |  "PN1"         |  {blob}    |
|  "1354"  |  {blob}  |  "PN2"      |  "PN2"         |  {blob}    |
|  ...     |  ...     |  ...        |   ...          |   ...      |

Unrecognised system process saturating CPU

Posted: 01 Mar 2013 10:14 AM PST

We have switched to our backup DC today, as the primary site's bandwidth has gone.

We are getting issues with the most powerful DB server; it is spawning the following process for several db's on the server. One database is simply a small scratch db that is used for maintenance tools, so not replicated or mirrored.

Does anyone know or recognize the following code?

() select table_id, item_guid, oplsn_fseqno, oplsn_bOffset, oplsn_slotid
from [<dbname>].[sys].[filetable_updates_2105058535] with (readpast)
order by table_id

Why can't RDBM's cluster the way NoSQL does?

Posted: 01 Mar 2013 05:39 PM PST

One of the big pluses of NoSQL DBMSs is that they can cluster more easily. Supposedly with NoSQL you can create hundreds of cheap machines that store different pieces of data and query it all at once.

My question is this: why can't relational DBMSs like MySQL or SQL Server do this? Is it that the vendors just haven't figured out a technical way to do this with their existing products, or is there some issue with the relational model that prevents it from being feasible? What is so great about the NoSQL way of storing and accessing data (key/value, documents, etc.) that makes clustering easier, if this is true at all?

How to set Password for sqlplus /as sysdba

Posted: 01 Mar 2013 03:07 PM PST

I am new to Oracle. I would like to know how to set a login password for sqlplus / as sysdba in Oracle 11g.

I want to set a password for my next login. I don't want to log in with sqlplus / as sysdba, since anyone can log in that way and easily access the data or change the password of an existing user. I need to restrict that login.

Say 5 people are working in a company and they access the server with an admin user ID and password. If someone makes modifications on the server using sqlplus / as sysdba, it will affect everyone else, so to avoid this we need to restrict the login by setting a password.

Thanks

Downloading image data type contain to desktop SQL Server 2008

Posted: 01 Mar 2013 01:30 PM PST

I am very new to SQL Server 2008 and this question may look very basic to gurus.

Following is the problem universe:

  1. I have a huge (600 GB) SQL Server 2008 table that stores PDF files in a column of type image.
  2. The need is to download all the images from the database table to my desktop (not onto the server).
  3. I can't run C#, ASP.NET, etc. in the environment and have to rely on SQL Server Management Studio for the job (I see an option called PowerShell but am not sure how to use it).

The table looks like the following:

Table name: Attachments

Columns:

FileID (PK, int, not null)
Owner (int, null)
file_name (varchar(50), null)
file_type (varchar(50), null)
file_size (int, null)
file_date (datetime, null)
file_BIN (image, null)

SQL Server replication subscriptions marked as inactive

Posted: 01 Mar 2013 11:30 AM PST

Is there any way to force SQL Server NOT to mark subscriptions as inactive, ever?

It happens sporadically when there are connection issues and I don't want to have to reinitialize the subscription every time.

Note, I'm not talking about the subscriptions being marked as expired...just as inactive.

Thank you.

SQL Server script to delete accounts no longer in Active Directory

Posted: 01 Mar 2013 04:07 PM PST

We have a SQL Server 2000 that will shortly be migrated to SQL Server 2005. It has years of Windows Authentication accounts created that no longer exist in Active Directory, which prevent the Copy Database Wizard from creating these accounts on the new server.

Is there a script or some automated way of deleting the accounts that no longer exist in our Active Directory?


EDIT: Just to be clear, the logins needing to be deleted are on SQL Server 2000, which does not support the DROP LOGIN command.

Separately, manually deleting the logins in SQL Server 2000 would (I think) be done with exec sp_droplogin 'loginname', but on mine the login name cannot be found, whether I use 'domain\loginname' or 'loginname'.

Just to add to the confusion, exec sp_revokelogin 'domain\loginname' does appear to work.
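For finding candidates in the first place, the sp_validatelogins system procedure (available on SQL Server 2000) lists Windows logins and groups that no longer map to Active Directory accounts, and its output can drive the cleanup. A minimal sketch, to be reviewed before deleting anything:

EXEC sp_validatelogins;                 -- lists orphaned Windows logins/groups

-- then, for each orphaned login reported above (placeholder name):
EXEC sp_revokelogin 'DOMAIN\loginname';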

EDIT 2: Finally resolved the issue. Many of the logons that were problematic had been added to the database programmatically, and while they worked in the sense that a user could connect, there was a mismatch between the user name and the NT login name: domain-prefixed logons where SQL Server expected no domain, and vice versa.

To resolve this, I modified the sp_droplogin procedure to take out one of the checks that was erroring.

I'm accepting my own answer as it works in SQL Server 2000.

How to Convert Horizontal to Vertical Array?

Posted: 01 Mar 2013 09:30 AM PST

I need to create a query (suitable for Standard Edition) that has data from multiple columns (Columns 1-6 with corresponding Date Started and Date Completed data) displayed vertically, but also has the column name in the preceding column to identify it, along with other data (Record Number, Status).

Sample data:

+--------------+--------+------------+-------------+---------------+
| RecordNumber | Status | ColumnName | DateStarted | DateCompleted |
+--------------+--------+------------+-------------+---------------+
|            1 | Open   | Column 1   | 2012-01-01  | 2012-02-01    |
|            2 | Hold   | Column 2   | 2012-01-03  | 2012-03-01    |
|            1 | Open   | Column 3   | 2012-02-05  | 2012-04-06    |
|            3 | Closed | Column 4   | 2012-05-10  | 2012-07-25    |
|            2 | Hold   | Column 5   | 2012-03-09  | 2012-04-01    |
|            1 | Open   | Column 6   | 2012-10-10  | 2012-12-12    |
+--------------+--------+------------+-------------+---------------+
DECLARE @Data AS TABLE
(
    RecordNumber    integer NOT NULL,
    [Status]        varchar(10) NOT NULL,
    ColumnName      varchar(10) NOT NULL,
    DateStarted     date NOT NULL,
    DateCompleted   date NOT NULL
);

INSERT @Data
(
    RecordNumber,
    [Status],
    ColumnName,
    DateStarted,
    DateCompleted
)
VALUES
    (1, 'Open', 'Column 1', '20120101', '20120201'),
    (2, 'Hold', 'Column 2', '20120103', '20120301'),
    (1, 'Open', 'Column 3', '20120205', '20120406'),
    (3, 'Closed', 'Column 4', '20120510', '20120725'),
    (2, 'Hold', 'Column 5', '20120309', '20120401'),
    (1, 'Open', 'Column 6', '20121010', '20121212');

How to script out push subscription creation at the subscriber?

Posted: 01 Mar 2013 10:30 AM PST

I'm trying to set up a push subscription to a SQL Server publication from the subscriber.

I could set up the subscription at the publisher using the Replication Wizard in Management Studio. However, I would prefer to script the process relative to the subscriber so I can automate the deployment of a new SQL Server subscriber instance.

Initially, I'm happy to prompt for the name of the publisher before deployment. If I can get this working, I will look for a way to inject the correct value for my environment automatically.

What is a simple way to do this for a SQL Server instance that has to create multiple subscriptions at different publishers?

I'm open to using any supported SQL Server scripting solution: SMO, RMO, Sqlcmd, WMI, PSDrive, even pure T-SQL.

I've attempted to solve this problem in two ways. The first is a complete solution using T-SQL, but it involves some manual steps.

Using T-SQL

I have a manual solution in T-SQL. The solution is based on the output of the Management Studio Replication Script Generator.

Using Management Studio, I run the following script to generate a T-SQL script that I can run at the publisher:

PRINT N'
EXECUTE MyDatabase.dbo.sp_addsubscription
  @publication = N''MyPublication'',
  @subscriber = ''' + CAST(SERVERPROPERTY('ServerName') AS SYSNAME) + ''',
  @destination_db = ''SubscriberDatabase'',
  @subscription_type = N''Push'',
  @sync_type = N''automatic'',
  @article = N''all'',
  @update_mode = N''read only'',
  @subscriber_type = 0;

EXECUTE MyDatabase.dbo.sp_addpushsubscription_agent
  @publication = N''MyPublication'',
  @subscriber = ''' + CAST(SERVERPROPERTY('ServerName') AS SYSNAME) + ''',
  @subscriber_db = ''SubscriberDatabase'',
  @job_login = null,
  @job_password = null,
  @subscriber_security_mode = 1,
  @frequency_type = 64,
  @frequency_interval = 1,
  @frequency_relative_interval = 1,
  @frequency_recurrence_factor = 0,
  @frequency_subday = 4,
  @frequency_subday_interval = 5,
  @active_start_time_of_day = 0,
  @active_end_time_of_day = 235959,
  @active_start_date = 0,
  @active_end_date = 0,
  @dts_package_location = N''Distributor'';';

On the MYSUBSCRIBER instance, the output would look like this:

EXECUTE MyDatabase.dbo.sp_addsubscription
  @publication = N'MyPublication',
  @subscriber = 'MYSUBSCRIBER',
  @destination_db = 'SubscriberDatabase',
  @subscription_type = N'Push',
  @sync_type = N'automatic',
  @article = N'all',
  @update_mode = N'read only',
  @subscriber_type = 0;

EXECUTE MyDatabase.dbo.sp_addpushsubscription_agent
  @publication = N'MyPublication',
  @subscriber = 'MYSUBSCRIBER',
  @subscriber_db = 'SubscriberDatabase',
  @job_login = null,
  @job_password = null,
  @subscriber_security_mode = 1,
  @frequency_type = 64,
  @frequency_interval = 1,
  @frequency_relative_interval = 1,
  @frequency_recurrence_factor = 0,
  @frequency_subday = 4,
  @frequency_subday_interval = 5,
  @active_start_time_of_day = 0,
  @active_end_time_of_day = 235959,
  @active_start_date = 0,
  @active_end_date = 0,
  @dts_package_location = N'Distributor';

I copy the output and execute the script at the publisher instance to set up the subscription.

I think I can't automate this in pure T-SQL without editing the script before running it, because T-SQL by design does not handle user input.

Using PowerShell and RMO

PowerShell has simple ways to process user input, so this seems like a good way to prototype the automation process.

MSDN has an eight-step guide to set up a push subscription using the .NET Replication Management Objects (RMO).

Here are the first two steps:

  1. Create a connection to the Publisher by using the ServerConnection class.
  2. Create an instance of the TransPublication class by using the Publisher connection from step 1. Specify Name, DatabaseName, and ConnectionContext.

I'm trying to translate these steps into a PowerShell script, but I can't get past step 2.

In the following code examples, I use fictional object names. I believe this does not affect the answerability of the question because the error message is identical when I use the real object names.

First attempt: setting the properties

My first attempt is to create the TransReplication object then set its properties. The code looks like this:

Add-Type -AssemblyName "Microsoft.SqlServer.Rmo, Version=11.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91";

$Publisher = New-Object Microsoft.SqlServer.Management.Common.ServerConnection MyServer

$Publication = New-Object Microsoft.SqlServer.Replication.TransPublication
$Publication.Name = 'MyPublication'
$Publication.DatabaseName = 'MyDatabase'
$Publication.ConnectionContext = $Publisher

When I execute this script, I see the following error:

Exception setting "ConnectionContext": "Cannot convert the "server='(local)';Trusted_Connection=true;multipleactiveresultsets=false" value
of type "Microsoft.SqlServer.Management.Common.ServerConnection" to type "Microsoft.SqlServer.Management.Common.ServerConnection"."
At line:8 char:14
+ $Publication. <<<< ConnectionContext = $Publisher
    + CategoryInfo          : InvalidOperation: (:) [], RuntimeException
    + FullyQualifiedErrorId : PropertyAssignmentException

It looks like it's failing because it can't convert the type ServerConnection to the type ServerConnection. I don't understand how this could fail for the stated reason, because the value is already of the required type.

Second attempt: overloading the constructor

My second attempt is to specify the property values of the TransReplication object in the constructor. The code looks like this:

Add-Type -AssemblyName "Microsoft.SqlServer.Rmo, Version=11.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91";

$Publisher = New-Object Microsoft.SqlServer.Management.Common.ServerConnection MyServer

$Publication = New-Object Microsoft.SqlServer.Replication.TransPublication 'MyPublication', 'MyDatabase', $Publisher

When I execute this script, I see the following error:

New-Object : Cannot find an overload for "TransPublication" and the argument count: "3".
At line:5 char:26
+ $Publication = New-Object <<<<  -TypeName Microsoft.SqlServer.Replication.TransPublication 'MyPublication', 'MyDatabase', $Publisher
    + CategoryInfo          : InvalidOperation: (:) [New-Object], MethodException
    + FullyQualifiedErrorId : ConstructorInvokedThrowException,Microsoft.PowerShell.Commands.NewObjectCommand

It looks like the New-Object cmdlet can't find the three-argument constructor documented by MSDN:

public TransPublication(
    string name,
    string databaseName,
    ServerConnection connectionContext
)


As far as I can tell, I'm overloading the constructor correctly.

Am I doing something wrong? Is there something unusual about my environment? Am I better off using another solution?

How can I create a MySQL database based on a SQLite schema?

Posted: 01 Mar 2013 02:52 PM PST

I have MySQL on my server and different SQLite databases on a client. I'm trying to find a way to create the same databases on the server.

Since a SQLite schema is actually a CREATE query, one way would be to write a script that extracts every critical part from the CREATE query, converts it into a corresponding MySQL query, and finally concatenates them all.

This seems like a complex method that is hard to make cover all situations. I am wondering if there are any alternatives?

Can I monitor the progress of importing a large .sql file in sqlite3 using zenity --progress?

Posted: 01 Mar 2013 12:30 PM PST

I'm trying to monitor the progress of a sqlite3 command importing a large .sql file into a database using zenity --progress.

I've tried the following which will import the file, however progress is not shown:

sqlite3 DATABASE < import_file.sql | zenity --progress --percentage=0 --auto-close  

I know I need to provide zenity a numeric source for the progress, but don't really know how to obtain the number.

Can anyone help me?

Is there any benefit to defragmenting SQL indexes in a SAN environment?

Posted: 01 Mar 2013 09:21 AM PST

Our SQL server lives on a SAN. It contains dozens of OLTP databases, some with several tables containing over 1m records.

We have been running Ola Hallengren's index maintenance scripts weekly, and they run for several hours each time. Based on the fragmentation threshold, the script will either reorganize or rebuild an index. We have observed that during rebuilds the log files get huge, which leads to excessive bandwidth consumption during log shipping.

Then comes an article from Brent Ozar in which he says to stop worrying about SQL indexes:

Your hard drives are shared with other servers that are also making drive requests at the same time, so the drives will always be jumping all over the place to get data. Defragging your indexes is just meaningless busy work.

Googling this question leads to varying opinions, most supported with arguments that seem too brief or weak. Our tentative plan is to adjust the fragmentation threshold in our maintenance script so that it reorganizes much more often than it reindexes.

What is the final verdict? Is it worthwhile to defrag SQL indexes on a SAN considering the burdens associated with running weekly maintenance jobs?

Transactional Replication snapshot locking Publisher db from beginning

Posted: 01 Mar 2013 10:42 AM PST

I recently upgraded my remote distributor to 2008 R2. I am getting complete system locks when attempting to generate a new snapshot during the day after we have added an article to the publication (we could do this in the past with very few issues).

I have checked sp_Helppublication and we are using synchronization method 3 (concurrent), so no locks should be held until the very end.

I have been trying to follow this link, as I am only adding an additional table with 26 rows and don't want to reinitialize and have the whole snapshot delivered.

http://deepakrangarajan.blogspot.co.uk/2009/01/sql-2005-transaction-replication-adding.html

I attempted to run this code

EXEC sp_changepublication
    @publication = 'statics',
    @property = N'immediate_sync',
    @value = 'false'
GO

but this is also locking up the system completely. I guess I will have to run it in the middle of the night, but I thought you should be able to run this as just a config change to the publication.

Also when I try the last step in the link to add it to the subscription

EXEC sp_addsubscription
    @publication = 'statics',
    @article = 'dbo.SupplierCorporatePayAccountLink_tbl',
    @subscriber = 'PEARLJAM',
    @destination_db = 'MLT-Replicated',
    @reserved = 'Internal'

I get the following error message

Msg 14058, Level 16, State 1, Procedure sp_MSrepl_addsubscription_article, Line 331 Cannot create the subscription because the subscription already exists in the subscription database. Only one subscription to the same publication is allowed in each subscription database. Drop the subscription and add it again if necessary. If the problem persists, replication metadata might be incorrect; see Books Online for troubleshooting information.

After the snapshot, how can I just have the new article added to the subscriber?

Should I index a bit field in SQL Server?

Posted: 01 Mar 2013 12:54 PM PST

I remember reading at one point that indexing a field with low cardinality (a low number of distinct values) is not really worth doing. I admit I don't know enough about how indexes work to understand why that is.

So what if I have a table with 100 million rows in it, and I am selecting records where a bit field is 1? And let's say that at any point in time, there are only a handful of records where the bit field is 1 (as opposed to 0). Is it worth indexing that bit field or not? Why?

Of course I can just test it and check the execution plan, and I will do that, but I'm also curious about the theory behind it. When does cardinality matter and when does it not?
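One thing worth including in that test, on SQL Server 2008 and later, is a filtered index: it stores only the handful of rows where the bit is 1, which sidesteps the low-cardinality concern for exactly this pattern. A minimal sketch with illustrative names:

CREATE NONCLUSTERED INDEX IX_BigTable_FlagColumn_1
ON dbo.BigTable (FlagColumn)
WHERE FlagColumn = 1;   -- only rows with the bit set are kept in the index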

