Friday, October 4, 2013

[how to] imp paused mid import

imp paused mid import

Posted: 04 Oct 2013 03:49 PM PDT

OK, I'm new to Oracle administration but I'm trying to import a previous dump (no pun intended) which was split into two files.

The import is going from an Oracle 9i export into 11gR2 on Windows.

I used the following commands:

imp userid=olduser/oldpassword indexes=n log=import.log file=(Export_DB_776full_20130829_020000_01,Export_DB_776full_20130829_020000_02) fromuser=olduser touser=newadminuser commit=y feedback=100

Now this has progressed fairly well and has imported data from what seems to be the first file, peppering the console with a dot every 100 rows imported. Then, when it reaches the table where the export was split (into the second dmp file), it just pauses. I thought it was simply collecting its thoughts and left it overnight, but with no progress.

Trying to connect to Oracle from SQL*Plus on the command line, even as SYS or SYSDBA, is not possible, and returns:

"ORA-00257: archiver error. Connect internal only, until freed."  

Googling has yielded no direct answers to this, but what I did find at least gave me some access to the database to perform some maintenance. Some of the other posts I saw mentioned that the log files are filling up, but there is well in excess of 150 GB free on both drives (the tablespace is split across two drives).

A few things:

  • What is going on? The error message seems a bit cryptic to me.
  • How can I connect to the database?
  • How can I fix this?
  • How do I not fall into this trap again?
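For context: ORA-00257 usually means the archive-log destination (often the flash recovery area) has hit its configured quota, so the archiver stalls and ordinary logins are refused; free disk space does not help if the quota itself is exhausted, which would explain the 150 GB free. A hedged sketch of the usual checks, using standard Oracle views (the 50G figure is only an example):

```sql
-- If a local "/ as sysdba" connection can still be made:
SELECT name, space_limit, space_used
FROM   v$recovery_file_dest;          -- is the recovery-area quota exhausted?

SELECT dest_name, status, error
FROM   v$archive_dest;                -- is any archive destination in error?

-- Either raise the quota (example size), or use RMAN to back up
-- and delete archived logs to free space within it:
ALTER SYSTEM SET db_recovery_file_dest_size = 50G;
```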

Thanks

Foreign key for rows added later

Posted: 04 Oct 2013 02:07 PM PDT

I'm using a queue to crawl pages and track links between them.

I have a table called page with a primary key called page_id.

I have a table called links with two columns: link_from, link_to

The queue takes a page, saves it, collects all links from it and saves those. The link_to value in some rows points to a non-existent row (temporarily, until the target pops out of the queue).

From a long-term point of view, I'd like to connect them with foreign keys (for multiple reasons, e.g. delete cascading, automatic relationship discovery from an application which uses the database).

How do you do such a thing? I can't insert the row with a foreign key reference until the row exists.
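One common pattern, sketched here under assumptions since the question doesn't name the engine (the `crawled` flag and `ON CONFLICT` clause are PostgreSQL-flavoured illustrations), is to insert a placeholder parent row as soon as a link to an unknown page is discovered, so the foreign key is always satisfied; the crawler later fills in the placeholder instead of inserting:

```sql
CREATE TABLE page (
    page_id BIGINT PRIMARY KEY,
    crawled BOOLEAN NOT NULL DEFAULT FALSE   -- assumption: marks stub rows
);

CREATE TABLE links (
    link_from BIGINT NOT NULL REFERENCES page (page_id),
    link_to   BIGINT NOT NULL REFERENCES page (page_id)
);

-- When a link to a not-yet-crawled page is found, create a stub first:
INSERT INTO page (page_id) VALUES (2)
ON CONFLICT (page_id) DO NOTHING;            -- PostgreSQL syntax
INSERT INTO links VALUES (1, 2);

-- When page 2 finally pops out of the queue:
UPDATE page SET crawled = TRUE WHERE page_id = 2;
```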

Comparison or benchmark tests of column-oriented databases [on hold]

Posted: 04 Oct 2013 01:59 PM PDT

I'm looking for some fresh comparison or benchmark tests results of column-oriented databases.

Unfortunately, I found very outdated benchmark tests such as:

Thanks!

Backup of msdb stuck in rollback

Posted: 04 Oct 2013 02:16 PM PDT

A native full backup of msdb on a SQL Server 2005 Enterprise instance (9.0.4053) deployed on a two-node Win2K3 cluster (Microsoft Windows NT 5.2 (3790)) is currently stuck in rollback.

The DB is in Simple recovery mode but the Tlog is full. Attempts to add secondary Tlogs are failing due to a 3023 error (regarding the action needing to be serialized).

Options I'm considering are moving non-DB files off the volume to allow for autogrow of the Tlog, and an instance restart. I've tried changing the state of the DB from multi-user to single-user, but the SPID stuck in rollback is blocking. SPID 126 is the session in question below.

Are there other options I can consider?

session_id  status     blocking_session_id  wait_type     wait_time   wait_resource                      transaction_id
1098        suspended  0                    TRACEWRITE    687                                            25828712297
126         suspended  0                    BACKUPTHREAD  1284206734                                     0
146         suspended  0                    WAITFOR       500                                            0
229         suspended  0                    TRACEWRITE    250                                            42236221330
249         suspended  0                    TRACEWRITE    531                                            41659048209
299         suspended  441                  LCK_M_U       2149156     DATABASE: 15 [BULKOP_BACKUP_LOG]   0
323         suspended  0                    WAITFOR       2656                                           0
441         suspended  0                    BACKUPTHREAD  980511734                                      0

Does RMAN in Oracle only copy the archive logs? What happens to the active redo log in case of failure?

Posted: 04 Oct 2013 02:04 PM PDT

If RMAN copies the archived redo logs when they are full, what happens to the current redo log in the event of a failure (hardware failure, etc.)? Isn't there always an active redo log that is not yet an archived log?

MySQL or MongoDB? [on hold]

Posted: 04 Oct 2013 06:20 PM PDT

I am planning a medium-sized system and have arrived at a crossroads: which is better, MySQL or MongoDB?

If necessary I will use Rails as a complement.

Thanks in advance!

A complex but simple select statement [migrated]

Posted: 04 Oct 2013 12:21 PM PDT

Hi guys, I am not much of a coder; I'm just trying to help out our demographics team. What I need is a SELECT statement that will scan through a table and, based on the value in one column, put each row into a new table.

So say I have a table with the following columns: Name, Address, State, ZIP. I want a statement that will go through, get the value of State and say: OK, it's AZ, put it into a table called AZ; the next one is CA, put it into CA; and the next one may be AZ again, so add it to the AZ table it already created.

I also can't tell it what the values for the states are, as they are random (it's not really states that we use, I'm just using that as an example).
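For what it's worth, the usual advice is to keep the rows in one table with an index on State; but if separate tables are really required, it cannot be done with a single plain SELECT, since the statement must be built dynamically per distinct value. A hedged T-SQL sketch (the `dbo.People` table and column sizes are assumptions based on the example):

```sql
DECLARE @state varchar(50), @sql nvarchar(max);

DECLARE c CURSOR FOR SELECT DISTINCT State FROM dbo.People;
OPEN c;
FETCH NEXT FROM c INTO @state;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- create the per-value table on first sight, append afterwards
    IF OBJECT_ID(QUOTENAME(@state)) IS NULL
        SET @sql = N'SELECT * INTO ' + QUOTENAME(@state) +
                   N' FROM dbo.People WHERE State = @s';
    ELSE
        SET @sql = N'INSERT INTO ' + QUOTENAME(@state) +
                   N' SELECT * FROM dbo.People WHERE State = @s';
    EXEC sp_executesql @sql, N'@s varchar(50)', @s = @state;
    FETCH NEXT FROM c INTO @state;
END;
CLOSE c; DEALLOCATE c;
```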

How to fetch file path from a folder through PL/SQL

Posted: 04 Oct 2013 09:53 AM PDT

We upload files to our library on a weekly basis. I wanted to fetch the latest uploaded file path/URL from a folder through PL/SQL, but I'm not sure if it's possible.

If it is possible, please suggest what I should do.

Are these indexes redundant?

Posted: 04 Oct 2013 11:19 AM PDT

I have two indexes on a table that will contain roughly 10-20 million records at a given moment.

Index 1

CREATE NONCLUSTERED INDEX [IX_NC_AccountNumber_UTC] ON [dbo].[LogWindow]
(
    [AccountNumber] ASC,
    [UTC] ASC
)
INCLUDE (
    [Site],
    [Application],
    [Version],
    [Machine],
    [Context],
    [Severity],
    [Message],
    [Details])
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
      DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON,
      ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO

Index 2

CREATE NONCLUSTERED INDEX [IX_NC_AccountNumber_UTC_Site_Context_Machine] ON [dbo].[LogWindow]
(
    [AccountNumber] ASC,
    [UTC] ASC,
    [Site] ASC,
    [Context] ASC,
    [Machine] ASC
)
INCLUDE (
    [Application],
    [Version],
    [Severity],
    [Message],
    [Details],
    [Type])
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
      DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON,
      ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO

The reason why I am currently using two indexes is because I am issuing two types of queries. The first query only filters records based upon AccountNumber and UTC. The second query filters records based upon those same columns, plus the additional Site, Context and Machine columns.

The second index is very similar to the first; it just covers fewer columns, because some of them have become part of the key columns.

Does it make sense for me to have the first index when the second one is so similar with additional key columns? Will SQL Server determine there is a benefit to using the first index over the second?

Extract data for all groups in a single SELECT statement

Posted: 04 Oct 2013 07:55 AM PDT

MySQL 5.

Table: schedule
Columns:
  id         int(10) UN PK AI
  startdate  date
  groupid    smallint(5) UN

The following statement extracts the Schedule ID for a given group (the schedule with the latest date not greater than the given date):

SELECT id FROM schedule
WHERE groupid = @group AND @date >= startdate
ORDER BY startdate DESC
LIMIT 1

Question: Can I extract Schedule IDs for all groups, without enumerating (calling the above statement repeatedly) with all existing group IDs?
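For reference, one MySQL 5 idiom for this greatest-per-group problem is a "no newer row" self-join, which returns the latest qualifying schedule for every group in one statement (a sketch against the schema above; ties on startdate within a group would return both rows):

```sql
SELECT s.groupid, s.id
FROM schedule s
LEFT JOIN schedule newer
       ON newer.groupid   =  s.groupid
      AND newer.startdate <= @date
      AND newer.startdate >  s.startdate
WHERE s.startdate <= @date
  AND newer.id IS NULL;
```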

Incremental MySQL restore using binlogs

Posted: 04 Oct 2013 02:10 PM PDT

I've tried to restore a MySQL database using incremental binlogs and the mysqlbinlog tool, but I received the following message:

ERROR 1666 (HY000) at line 30: Cannot execute statement: impossible to write to binary log since statement is in row format and BINLOG_FORMAT = STATEMENT.

The BINLOG_FORMAT parameter in my database is set to ROW, but is there a way to restore the binlog with this parameter set to ROW?

Is there any way to know all the tables(and/or views, triggers, functions) present in the Oracle 9i database?

Posted: 04 Oct 2013 07:33 AM PDT

I want to know this because I forgot the name of the table I created.

Using Oracle 9i

Thanking you for your time.
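The standard data-dictionary views answer this directly (all of these exist in 9i):

```sql
SELECT table_name   FROM user_tables;     -- tables you own
SELECT view_name    FROM user_views;
SELECT trigger_name FROM user_triggers;

-- or everything you can see, by type:
SELECT owner, object_name, object_type
FROM   all_objects
WHERE  object_type IN ('TABLE', 'VIEW', 'TRIGGER', 'FUNCTION');
```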

Why does @@dbts increase after backup/restore?

Posted: 04 Oct 2013 04:51 PM PDT

I have a SQL Server 2005 database, containing some tables which have a Timestamp (or RowVersion) column. I need to know the value of the current timestamp, so I use the following query: SELECT CAST(@@dbts AS BIGINT);

This will return, for example, 10505.

Immediately afterwards, without updating, inserting, ... anything, I do a BACKUP DATABASE and a RESTORE DATABASE and I run the SELECT query again. Only this time, the result is 14000, while none of the timestamps in the tables have increased.

Why/how does this happen?

Slow SQL Server Database (approx 2TB) - high CPU due to index fragmentation?

Posted: 04 Oct 2013 11:49 AM PDT

I've posted this over at ServerFault but didn't have much luck getting a response.

I have a fairly large SQL Server database (~2TB). Most of the data is in one table (~6 billion rows).

Two weeks ago I dropped two non-clustered indexes on the large table and migrated the data onto a single 6TB RAID SSD array. I then recreated the two indexes, which took quite some time (I assume because I currently have the data (for table and indexes) and the log on the same array, and it seems that with the RAID I can't have fast sequential AND random r/w at the same time).

Anyway, after recreating the indexes it ran very well for about a week. During that week I have been slowly running a clean-up on the large table which just removes old, unneeded rows. So far I've removed about 300 million of the 6 billion, and at a guess I still have a lot more to go.

Now, after about a week of running like this, it is running very slowly and I'm not sure what would be best to do.

Current situation:

  • Dual Xeon
  • 192GB RAM
  • Windows Server 2012 with SQL Server 2012
  • CPU is hitting 100% (16 cores) - was only using about 50% prior to the slowdown
  • IO doesn't seem to be working too hard (no queue)

Large Table currently has (I don't have any fragmentation info prior to now):

  • 1x Clustered index: 48% fragmentation
  • 1x Non-clustered index: 36% fragmentation
  • 1x Non-clustered index: 10% fragmentation
  • I used to have two more indexes on this table but dropped them a while ago

What do you think would best fix my problem?

  • Rebuild the non-clustered indexes on the same array (I assume this should fix the problem, but it will take a long time, as it did before; I will probably hit the same problem again soon since I'm still cleaning the table)
  • Rebuild the non-clustered indexes on a new RAID array (should fix as above but may be faster)
  • Move the non-clustered indexes to a new RAID array (quickest option)
  • Recreate the two old indexes on a new RAID array (not sure if this relieves CPU or IO pressure)

Do fragmented indexes cause higher CPU usage?

Is there anything else I could be missing?
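For what it's worth, the fragmentation numbers can be re-checked at any time with a standard DMV rather than estimated (the table name below is a placeholder):

```sql
SELECT i.name,
       ps.index_type_desc,
       ps.avg_fragmentation_in_percent,
       ps.page_count
FROM   sys.dm_db_index_physical_stats(
           DB_ID(), OBJECT_ID('dbo.BigTable'), NULL, NULL, 'LIMITED') AS ps
JOIN   sys.indexes AS i
       ON i.object_id = ps.object_id AND i.index_id = ps.index_id;
```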

TIA

Rebuilding index

Posted: 04 Oct 2013 08:23 AM PDT

I have a transaction database, and one of the nightly jobs is to rebuild indexes, followed by a log backup and then a shrink of the log.

Is this safe to do?

Thanks Antony

How can I execute a PL/SQL procedure for Oracle via JDBC?

Posted: 04 Oct 2013 07:39 AM PDT

I have a Java webapp with an Oracle DB, and I'm trying to execute this statement at the end of a data migration script:

EXEC DBMS_STATS.GATHER_TABLE_STATS (ownname => 'MY_SCHEMA', tabname => 'MY_TABLE', estimate_percent => dbms_stats.auto_sample_size);  

It works in SQL*Plus, but not via JDBC:

"Error: ORA-00900: invalid SQL statement"  

What am I doing wrong?
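For context: EXEC is a SQL*Plus command rather than SQL, which is why the server rejects it over JDBC with ORA-00900. The usual alternative is to send the call wrapped in an anonymous PL/SQL block instead:

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
      ownname          => 'MY_SCHEMA',
      tabname          => 'MY_TABLE',
      estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
END;
```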

What is a "leaf expression"?

Posted: 04 Oct 2013 08:25 AM PDT

What is a "leaf expression"?

Partition-clause expressions are limited in the following ways:

  • Your partition-clause must calculate a single non-null value for each row. You can reference multiple columns, but each row must return a single value.
  • You can specify leaf expressions, functions, and operators in the partition clause expression.
  • All leaf expressions in the partition clause must be either constants or columns of the table.

Source https://my.vertica.com/docs/6.1.x/HTML/index.htm#18125.htm

Selecting from an excel spreadsheet into SQL Server table

Posted: 04 Oct 2013 11:25 AM PDT

This question has been asked before, but in my case I have already tried giving full admin rights to the SQL Server user on the C:\temp\ folder location (I am using Windows authentication into SQL Server Express).

So for the following code snippet:

Declare @strSQL as varchar(200)
declare @file as varchar(200)

SET @file = 'C:\temp\file.xlsx'
SET @strSQL = N'SELECT * INTO ##mytemptable FROM OPENROWSET(''Microsoft.Jet.OLEDB.4.0'', ''Excel 8.0;Database=' + @file + ';HDR=YES'', ''SELECT * FROM [Sheet1$]'');'
SET @strSQL = @strSQL + N'SELECT * FROM ##mytemptable'
PRINT @strSQL
Exec (@strSQL)

EXPANDED SQL STATEMENT

SELECT *
INTO ##mytemptable
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0', 'Excel 8.0;Database=C:\temp\file.xlsx;HDR=YES', 'SELECT * FROM [Sheet1$]');

SELECT * FROM ##mytemptable

I get this error:

Msg 7399, Level 16, State 1, Line 1
The OLE DB provider "Microsoft.Jet.OLEDB.4.0" for linked server "(null)" reported an error. The provider did not give any information about the error.
Msg 7303, Level 16, State 1, Line 1
Cannot initialize the data source object of OLE DB provider "Microsoft.Jet.OLEDB.4.0" for linked server "(null)".
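Not a certain diagnosis, but one common prerequisite worth checking: ad hoc OPENROWSET queries must be explicitly enabled, and on 64-bit SQL Server the 32-bit Jet provider is not available at all (the ACE provider, 'Microsoft.ACE.OLEDB.12.0', is the usual substitute there):

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Ad Hoc Distributed Queries', 1;
RECONFIGURE;
```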

How do I track why my SQL Azure occupied space pulsates?

Posted: 04 Oct 2013 12:25 PM PDT

This is inspired by this StackOverflow question. I have this code:

SELECT CONVERT(INT, SUM(reserved_page_count) * 8 / 1024) FROM sys.dm_db_partition_stats  

for detecting database space consumption and it returns exactly the same result as SQL Azure management portal pie chart. The problem is the number fluctuates greatly without apparent reasons - sometimes it's around 35% and sometimes it's around 54%.

This is kinda worrying - I don't get what's happening, and the major concern is that we suddenly consume all the space and our service comes to a halt.

I seriously tried to look into sys.dm_db_partition_stats and sys.partitions but they look like a pile of magic numbers to me.

What data should I use to find the root cause of the fluctuations?
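One way to make those numbers less magic is to break the same counter down per object, snapshot it when the total is high and again when it is low, and diff the two (a sketch using the same DMV the query above already relies on):

```sql
SELECT o.name,
       SUM(ps.reserved_page_count) * 8 / 1024 AS reserved_mb
FROM   sys.dm_db_partition_stats AS ps
JOIN   sys.objects AS o
       ON o.object_id = ps.object_id
GROUP BY o.name
ORDER BY reserved_mb DESC;
```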

Scaling of PostGIS vs MySQL cos/sin calculations

Posted: 04 Oct 2013 06:25 PM PDT

I need to sort database rows according to GPS coordinates (or any other geographical representation).

The way I've understood it, MySQL Spatial Indexes are too buggy at the time of writing (at least articles and my own tests indicate so).

So I can either switch to PostgreSQL to use PostGIS which seems to be working excellently, or I can do it mathematically in MySQL:

((acos(sin(latitude * pi() / 180) * sin($latitude * pi() / 180)
      + cos(latitude * pi() / 180) * cos($latitude * pi() / 180)
      * cos((longitude - $longitude) * pi() / 180))
  * 180 / pi()) * 60 * 1.1515 * 1.609344) km

I'm not asking for a PostgreSQL vs MySQL debate, but simply how the performance of the MySQL expression scales as rows grow compared to using PostGIS/PostgreSQL.

It's a drastic step to switch database system, so the upside better be significant.

Data sharing between client and server two ways

Posted: 04 Oct 2013 09:25 AM PDT

One of my Silverlight applications is running. This application uses one central database (SQL Server 2008) and three client databases (SQL Server 2008) on servers in other locations.

Our requirement is that, with minor or no changes to the Silverlight application, when we make changes in the central database (insert, update and delete in multiple tables) these changes automatically occur in all client databases. And when any change is made in any client database, all clients as well as the central database should be updated automatically.

For this I have read a bit about the Sync Framework and SQL Server 2008 Change Tracking, but I have no idea how to do this. Please help me: how can we achieve this?

Restore of logfiles in SQL Server

Posted: 04 Oct 2013 03:59 PM PDT

I'm trying to write a script which will read a folder containing a full backup and X number of log files. My script fails when it starts the loop to read the log files. The error is "cannot convert varchar to nvarchar". I have tried defining @Pathandlog as an nvarchar, and I still get the same error. Any ideas?

--allow advanced options to be changed.
EXEC sp_configure 'show advanced options', 1
GO
--Update the currently configured value for advanced options.
RECONFIGURE
GO
--Enable XP_CMDSHELL
EXEC sp_configure 'xp_cmdshell', 1
GO
--Update the currently configured value for this feature.
RECONFIGURE
GO

--delete temp table if exists
if object_id('dbo.myFileList') is not null
begin
  drop table dbo.myFileList
end

--Create the table to store file list
CREATE TABLE myFileList (FileNumber INT IDENTITY, FileName VARCHAR(256))

--Insert file list from directory to SQL Server
DECLARE @clientname varchar(256) = 'clientname'
DECLARE @Path varchar(256) = 'C:\Import\' + @clientname   /* ' */
DECLARE @dbname varchar(256) = 'ApplicationV1A' + @clientname
DECLARE @Command varchar(1024) = 'dir ' + @Path + ' /A-D /B'
INSERT INTO myFileList
EXEC MASTER.dbo.xp_cmdshell @Command

--Check the list
SELECT * FROM myFileList

--get first file and run restore of database - leaving db in recovery
DECLARE @fullbackname VARCHAR(MAX)
SELECT TOP (1) @fullbackname = FileName
FROM dbo.myFileList
ORDER BY Filenumber

PRINT @fullbackname

DECLARE @fullbacknamedb VARCHAR(MAX) = 'c:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\' + @dbname + '.DB'
DECLARE @dbnamelog VARCHAR(MAX) = 'c:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\' + @dbname + '.log'
PRINT @dbname
DECLARE @tmp VARCHAR(MAX) = @Path + '\' + @fullbackname

RESTORE DATABASE @dbname
  FROM DISK = @tmp
  WITH FILE = 1, MOVE N'ApplicationV1A' TO @fullbacknamedb,
       MOVE N'ApplicationV1A_log' TO @dbnamelog,
       NORECOVERY, NOUNLOAD, REPLACE, STATS = 10

/* temporarily disabled */

--restore db commands to go here

--select last file for later.
DECLARE @taillog int
SELECT TOP (1) @taillog = Filenumber
FROM dbo.myFileList
WHERE FileName IS NOT null
ORDER BY Filenumber DESC

DECLARE @fullbackup int
SELECT TOP (1) @fullbackup = Filenumber
FROM dbo.myFileList
ORDER BY Filenumber

DECLARE @taillogname VARCHAR(MAX)
SELECT TOP (1) @taillogname = FileName
FROM dbo.myFileList
WHERE FileName IS NOT null
ORDER BY Filenumber DESC

PRINT @taillogname

--get logfiles and run restore of database leaving db in recovery.  This must loop through all log files
DECLARE @FileName VARCHAR (MAX)
SELECT FileName
FROM dbo.myFileList
WHERE (Filenumber > @fullbackup AND Filenumber < @taillog)
ORDER BY Filenumber

DECLARE @logloop INT
SET @logloop = 1
WHILE (@logloop <= @taillog)
BEGIN
  PRINT @FileName
  DECLARE @Pathandlog VARCHAR(MAX) = @Path + @FileName
  DECLARE @Manu NVARCHAR(MAX) = @dbname
  RESTORE LOG @Manu
  --RESTORE LOG @dbname
    FROM DISK = @Pathandlog
    WITH FILE = 1, NORECOVERY, NOUNLOAD, STATS = 10

  SET @logloop = @logloop + 1
  IF @logloop = @taillog
  BREAK;
END

RESTORE LOG [@dbname]
  FROM DISK = @Path + @FileName
  WITH NORECOVERY
/**/

--select each one, run restore then move to next log

--get last logfile and run restore of database and take db out of recovery
DECLARE @tailogname VARCHAR(MAX)
SELECT TOP (1) FileName
FROM dbo.myFileList
WHERE FileName IS NOT null
ORDER BY Filenumber DESC

--command to restore and take db out of recovery here
/*
RESTORE LOG [@dbname]
  FROM DISK = @Path + @tailogname
  WITH RECOVERY
*/

--Clean up - may need to run this on script launch as well in case this script failed
DROP TABLE myFileList
GO

--disable xp_cmdshell once job complete
EXECUTE sp_configure 'xp_cmdshell', 0
RECONFIGURE

Database table design question

Posted: 04 Oct 2013 03:25 PM PDT

Firstly, database novice here. I'm trying to create a system for processing Sales Orders into Purchase Orders and then splitting each Purchase Order into Shipping Containers. The system needs to be able to split a Sales Order Line Item across 2 or more Purchase Orders, and a Purchase Order Line Item across 2 or more Shipping Containers.

There will be some (a lot of) adjustments back and forth: once a Sales Order Line Item gets split across multiple Purchase Orders, it might get split differently later on, and the same goes for when a Purchase Order Line Item is processed into Shipping Containers. I can't for the life of me figure out how best to handle this operation.

If it helps, here are the tables I have so far with some details excluded for brevity:

CREATE TABLE PurchaseOrder (
    Id INTEGER UNSIGNED UNIQUE NOT NULL AUTO_INCREMENT,
    PurchaseOrderNumber VARCHAR(15) UNIQUE,

    PRIMARY KEY(Id)
);

CREATE TABLE Container (
    Id INTEGER UNSIGNED UNIQUE NOT NULL AUTO_INCREMENT,
    ContainerName VARCHAR(20) UNIQUE,
    PurchaseOrderId INTEGER UNSIGNED,

    PRIMARY KEY(Id),
    FOREIGN KEY(PurchaseOrderId) REFERENCES PurchaseOrder(Id)
);

CREATE TABLE SalesOrder (
    Id INTEGER UNSIGNED UNIQUE NOT NULL AUTO_INCREMENT,
    ClientId INTEGER UNSIGNED,
    SalesOrderNumber VARCHAR(10),

    PRIMARY KEY(Id),
    FOREIGN KEY(ClientId) REFERENCES Client(Id)
);

CREATE TABLE SalesOrderLineItem (
    Id INTEGER UNSIGNED UNIQUE NOT NULL AUTO_INCREMENT,
    SalesOrderId INTEGER UNSIGNED,
    ProductId INTEGER UNSIGNED,
    Qty INTEGER,
    Price DECIMAL(5,2),
    Cost DECIMAL(5,2),

    PRIMARY KEY(Id),
    FOREIGN KEY(SalesOrderId) REFERENCES SalesOrder(Id),
    FOREIGN KEY(ProductId) REFERENCES Product(Id)
);

CREATE TABLE PurchaseOrderLineItem (
    Id INTEGER UNSIGNED UNIQUE NOT NULL AUTO_INCREMENT,
    PurchaseOrderId INTEGER UNSIGNED,
    SalesOrderId INTEGER UNSIGNED,
    ProductId INTEGER UNSIGNED,
    ClientId INTEGER UNSIGNED,
    MfgId INTEGER UNSIGNED,

    PRIMARY KEY(Id),
    FOREIGN KEY(PurchaseOrderId) REFERENCES PurchaseOrder(Id),
    FOREIGN KEY(SalesOrderId) REFERENCES SalesOrder(Id),
    FOREIGN KEY(ProductId) REFERENCES Product(Id),
    FOREIGN KEY(ClientId) REFERENCES Client(Id),
    FOREIGN KEY(MfgId) REFERENCES Mfg(Id)
);

I'm thinking of creating additional association tables between Sales Order Line Items and Purchase Orders, and between Purchase Order Line Items and Containers, to keep track of this kind of back-and-forth splitting?
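That association-table idea can be sketched like this (the table names and the Qty columns are assumptions; Qty records how much of a line item each split receives, so re-splitting later is just replacing the allocation rows rather than restructuring the orders):

```sql
CREATE TABLE SalesOrderLineAllocation (
    Id INTEGER UNSIGNED UNIQUE NOT NULL AUTO_INCREMENT,
    SalesOrderLineItemId INTEGER UNSIGNED NOT NULL,
    PurchaseOrderId INTEGER UNSIGNED NOT NULL,
    Qty INTEGER NOT NULL,

    PRIMARY KEY(Id),
    FOREIGN KEY(SalesOrderLineItemId) REFERENCES SalesOrderLineItem(Id),
    FOREIGN KEY(PurchaseOrderId) REFERENCES PurchaseOrder(Id)
);

CREATE TABLE PurchaseOrderLineAllocation (
    Id INTEGER UNSIGNED UNIQUE NOT NULL AUTO_INCREMENT,
    PurchaseOrderLineItemId INTEGER UNSIGNED NOT NULL,
    ContainerId INTEGER UNSIGNED NOT NULL,
    Qty INTEGER NOT NULL,

    PRIMARY KEY(Id),
    FOREIGN KEY(PurchaseOrderLineItemId) REFERENCES PurchaseOrderLineItem(Id),
    FOREIGN KEY(ContainerId) REFERENCES Container(Id)
);
```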

Help appreciated!

Update Sql Server from Mysql

Posted: 04 Oct 2013 10:12 AM PDT

We have a website that stores form data in a web-hosted MySQL database. Whenever form data is entered into the MySQL tables, I need to update all the necessary tables in our local SQL Server database.

I have seen several articles that describe how to update MySQL using SQL Server triggers but nothing that goes the other way.

If I create a linked server and linked tables in SQL Server can I create a trigger on a MySQL table from the SQL Server management studio?

What is a good, repeatable way to calculate MAXDOP on SQL Server?

Posted: 04 Oct 2013 09:02 AM PDT

When setting up a new SQL Server 2012, I use the following code to determine a good starting point for the MAXDOP setting:

/*
   This will recommend a MAXDOP setting appropriate for your machine's NUMA memory
   configuration.  You will need to evaluate this setting in a non-production
   environment before moving it to production.

   MAXDOP can be configured using:
   EXEC sp_configure 'max degree of parallelism',X;
   RECONFIGURE

   If this instance is hosting a Sharepoint database, you MUST specify MAXDOP=1
   (URL wrapped for readability)
   http://blogs.msdn.com/b/rcormier/archive/2012/10/25/
   you-shall-configure-your-maxdop-when-using-sharepoint-2013.aspx

   Biztalk (all versions, including 2010):
   MAXDOP = 1 is only required on the BizTalk Message Box
   database server(s), and must not be changed; all other servers hosting other
   BizTalk Server databases may return this value to 0 if set.
   http://support.microsoft.com/kb/899000
*/

DECLARE @CoreCount int;
DECLARE @NumaNodes int;

SET @CoreCount = (SELECT i.cpu_count from sys.dm_os_sys_info i);
SET @NumaNodes = (
    SELECT MAX(c.memory_node_id) + 1
    FROM sys.dm_os_memory_clerks c
    WHERE memory_node_id < 64
    );

IF @CoreCount > 4 /* If less than 5 cores, don't bother. */
BEGIN
    DECLARE @MaxDOP int;

    /* 3/4 of Total Cores in Machine */
    SET @MaxDOP = @CoreCount * 0.75;

    /* if @MaxDOP is greater than the per NUMA node
       Core Count, set @MaxDOP = per NUMA node core count
    */
    IF @MaxDOP > (@CoreCount / @NumaNodes)
        SET @MaxDOP = (@CoreCount / @NumaNodes) * 0.75;

    /* Reduce @MaxDOP to an even number */
    SET @MaxDOP = @MaxDOP - (@MaxDOP % 2);

    /* Cap MAXDOP at 8, according to Microsoft */
    IF @MaxDOP > 8 SET @MaxDOP = 8;

    PRINT 'Suggested MAXDOP = ' + CAST(@MaxDOP as varchar(max));
END
ELSE
BEGIN
    PRINT 'Suggested MAXDOP = 0 since you have less than 4 cores total.';
    PRINT 'This is the default setting, you likely do not need to do';
    PRINT 'anything.';
END

I realize this is a bit subjective, and can vary based on many things; however I'm attempting to create a tight catch-all piece of code to use as a starting point for a new server.

Does anyone have any input on this code?

Why do I get "The initial snapshot for publication is not yet available."?

Posted: 04 Oct 2013 12:27 PM PDT

We are using Transactional Replications with updatable subscriptions. 1 publisher, 1 subscriber.

SQL Replication started encountering a constraint error on 3/6 at 10AM because people at two different offices entered data that violated a unique constraint. Totally expected.

However, at 12:08PM on 3/6, SQL Replication stopped reporting that error and instead said:

"The initial snapshot for publication 'TheDB' is not yet available."

Why would it stop retrying replication and instead start throwing this error? Now we need to regenerate the snapshot.

  • No DB backups or restores were performed at this time
  • The snapshot was successfully applied at the subscriber months ago when replication was initially set up.
  • There is a message in the log from 12:07 on 3/6 saying

[000] Request to run job COTOPSVR\TheDB-TheDB-COTCPSVR\DB-3 (from User CTOB\Administrator) refused because the job is already running from a request by User sa

  • No one manually initiated any job or did anything at all at 12:06PM.

Any ideas? I know how to fix this...I want to prevent this from happening again.

Manually set log file size after shrink SQL Server 2008 R2

Posted: 04 Oct 2013 08:30 AM PDT

I am becoming a somewhat involuntary DBA at work at the moment and really need some help on something.

We have a 40GB database in Full Recovery Mode, no log backup configured, and a huge log file of 84GB. My plan so far to salvage this situation is to run a full log backup of the database, shrink the log file, and set up a maintenance plan to run a log backup every night alongside the database backup to help keep it under control.

My problem is that I do not want the log file to shrink down to nothing and then spend Monday morning constantly growing. I have a rough estimate of what the file size should be (about 20% of the database) and would like to set this from the get-go to ensure as much contiguous space as possible. Is this just a case of changing "Initial Size" under database Properties -> Files? I would guess as well that the database would need to be offline for this to occur?
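For what it's worth, both steps can be done online; the database does not need to be offline. A hedged sketch (the database name, logical log-file name and sizes are assumptions to adapt):

```sql
USE MyDatabase;
DBCC SHRINKFILE (MyDatabase_log, 1024);   -- shrink to ~1 GB first

-- then pre-grow the log to the estimated working size in one step,
-- so the space is allocated up front rather than by repeated autogrowth:
ALTER DATABASE MyDatabase
    MODIFY FILE (NAME = MyDatabase_log, SIZE = 8192MB);
```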

Thanks in advance

SQL Server Decimal(9, 0) vs INT

Posted: 04 Oct 2013 09:44 AM PDT

One of our customers uses the DECIMAL(18,0) datatype for some columns in his SQL Server 2008 R2 database. Because the values grow quite slowly, he recently proposed changing the datatype to DECIMAL(5,0) to regain some storage.

According to the MSDN library, the storage space of the DECIMAL(5,0) datatype is, just like the DECIMAL(9,0) datatype, 5 bytes. INT is 1 byte smaller, but can store everything in the range of -2^31 to 2^31 instead of the -99,999 to 99,999 which DECIMAL(5,0) can store. Even the largest DECIMAL which fits into 5 bytes (DECIMAL(9,0)) can store only integers in the range -999,999,999 to 999,999,999 (which is less than half of the range INT offers in 4 bytes).

I can think of two "benefits" of using DECIMAL over INT:

  • The ability to add scale afterwards, without using more storage space
  • The ability to scale the precision up to 38 digits, without altering data type

but these aren't real benefits in my opinion:

  • Adding scale to integers only makes sense in very few cases (in most cases where scale does make a difference, it could also be added beforehand)
  • SQL Server sees every precision / scale combination as a different data type, so the datatype isn't left alone when increasing the precision or scale.

This makes me wonder: what is the added benefit of a DECIMAL(5,0) datatype for integers?

[Articles] The Ratio

The Ratio

Posted: 03 Oct 2013 11:00 PM PDT

How many developers does it take to overwhelm a DBA? It's an interesting question, and this week Steve Jones asks how many you actually support.

[MS SQL Server] AMD processor issue

AMD processor issue

Posted: 03 Oct 2013 07:58 AM PDT

Hi, I read somewhere (I don't recall if it was here or on some other forum or blog) that AMD Opteron processors should have hyper-threading disabled in order for SQL Server 2008 R2 to use their "full power", since hyper-threading could "mistake" the engine... Is it true, or has the problem been fixed? The R2 version is RTM and the CPU is an Opteron 2435.

Thanks,
Pedro

[SQL 2012] Installing SQL Server 2012 Enterprise on Windows 7

Installing SQL Server 2012 Enterprise on Windows 7

Posted: 03 Oct 2013 09:43 PM PDT

Hi all,

I'm totally new to SQL Server. Forgive me if my question looks like a stupid one, but I've gone through the Microsoft website searching for the answer and just can't locate it. Can I install SQL Server 2012 Enterprise Edition on a Windows 7 machine? I just can't seem to find the system requirements for the Enterprise Edition!

Thanks for your help!

2012 availability group split

Posted: 03 Oct 2013 08:19 AM PDT

Hi, I have two databases that I want to make into an AlwaysOn Availability Group. I would like to have a synchronous connection between servers with a listener. For one of the two databases, I would also like an async connection to a third server to be used for reporting purposes.

Is it possible to do this? I'm not sure if an availability group can be split in that way.

SSRS report with dynamic column headers

Posted: 22 Sep 2013 05:24 AM PDT

Hi, I have a report which runs over the last 12 months of data. Since this is always the last 12 months, the column headers change every month. How can we implement this with dynamic column headers in the dataset?

Thanks.

Convert existing SQL Cluster to SAN bootable

Posted: 03 Oct 2013 04:53 AM PDT

I would like to be pushed in the right direction on this. We have a 4-node cluster (a-a-a-a) holding 2008 R2 and 2012 instances. We are being tasked with converting this to SAN bootable. Will this be remotely possible in the current setup? If not, what are our other options? I would appreciate any input on this. Thanks in advance.

Installer for SQLDOM.msi?

Posted: 03 Oct 2013 03:56 AM PDT

I just got my Dev 2012 edition, and when I checked the prereqs, it said I needed to download and install SQLDOM.msi. So I downloaded it, but when I try to install it, it tells me Windows Installer is not the right program to install it. This confuses me. If Windows Installer isn't the needed program, what is? (I'm on Windows 7 Pro 64-bit, btw.) Any ideas?

[T-SQL] Format Cast INT as Text


Format Cast INT as Text

Posted: 03 Oct 2013 09:15 AM PDT

I'm using CAST to convert integers to VARCHAR or CHAR, but when the value is less than 10 I want it to start with a 0, as in "07", while 12 should still be "12". What is the best way to do that?
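One common approach (a sketch, not the only way, assuming values in the range 0-99) is to pad with a leading zero and keep the rightmost two characters:

```sql
-- Zero-pad a non-negative integer to at least two characters.
DECLARE @n INT = 7;
SELECT RIGHT('0' + CAST(@n AS VARCHAR(10)), 2);  -- '07'
-- A two-digit value is unaffected: RIGHT('012', 2) = '12'.
```

For wider values, widen the pad string and the RIGHT length accordingly.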

Get previous value

Posted: 03 Oct 2013 09:43 PM PDT

Hi, I need to get a specific previous value from a history table. Any help will be welcome and is appreciated beforehand, thanks.

The goal is to get the records with zonestate_id 'A' together with their immediately previous zonestate_id 'ARM' record.

Sample data (system_no / event_date / seqno / event_id / eventrpt_id / zone_id / alarminc_no / zonestate_id):

[code]
100035370 2013-10-03 16:44:35.640 273769808 7441   O E441 NULL      ARM
100035370 2013-10-03 16:44:35.663 273769809 C      O O/C  NULL      ARM
100035370 2013-10-03 16:44:53.840 273769868 7130   R 11   143069012 A
100035370 2013-10-03 16:44:58.193 273769880 7130   R 11   143069012 A
100035370 2013-10-03 16:44:58.570 273769883 C      O O/C  143069012 ARM
100035370 2013-10-03 16:45:05.137 273769907 7441   O E441 143069012 ARM
100035370 2013-10-03 16:45:05.183 273769908 C      O O/C  143069012 ARM
100035370 2013-10-03 16:45:10.880 273769931 C      O O/C  143069012 ARM
100035370 2013-10-03 16:45:12.920 273769935 7130   R 11   143069012 A
100035370 2013-10-03 16:45:12.970 273769936 7130   R 11   143069012 A
100035370 2013-10-03 16:45:19.430 273769944 7130   R 11   143069012 A
100035370 2013-10-03 16:45:30.990 273769973 7130   R 11   143069012 A
100035370 2013-10-03 16:45:31.320 273769978 7130   R 11   143069012 A
100035370 2013-10-03 16:45:33.843 273769989 7130   R 11   143069012 A
100035370 2013-10-03 16:45:50.100 273770058 7130   R 11   143069012 A
100035370 2013-10-03 16:46:09.147 273770111 7130   R 11   143069012 A
100035370 2013-10-03 16:46:09.643 273770118 7130   R 11   143069012 A
300010596 2013-10-03 16:16:16.240 273763678 CLMAIL O O/C  NULL      ARM
300010596 2013-10-03 16:16:18.363 273763681 7131   R 11   143067032 A
300010596 2013-10-03 16:16:20.700 273763700 7131   R 11   143067032 A
300010596 2013-10-03 16:16:20.760 273763701 7131   R 11   143067032 A
300010596 2013-10-03 16:16:25.560 273763721 7131   R 11   143067032 A
[/code]

So for system_no 100035370 we should get:

[code]
100035370 2013-10-03 16:44:35.663 273769809 C    O O/C NULL      ARM
100035370 2013-10-03 16:44:53.840 273769868 7130 R 11  143069012 A
100035370 2013-10-03 16:45:10.880 273769931 C    O O/C 143069012 ARM
100035370 2013-10-03 16:45:12.920 273769935 7130 R 11  143069012 A
[/code]

For system_no 300010596 we should get:

[code]
300010596 2013-10-03 16:16:16.240 273763678 CLMAIL O O/C NULL      ARM
300010596 2013-10-03 16:16:18.363 273763681 7131   R 11  143067032 A
[/code]

Sample table and values. (The original script repeated the VALUES keyword for each row, which is not valid T-SQL, and inserts explicit values into the IDENTITY column, so it is rewritten below as a single multi-row INSERT wrapped in SET IDENTITY_INSERT.)

[code="sql"]
CREATE TABLE [event_history](
    [system_no]    [int] NOT NULL,
    [event_date]   [datetime] NOT NULL,
    [seqno]        [numeric](9, 0) IDENTITY(1,1) NOT NULL,
    [event_id]     [char](6) NULL,
    [eventrpt_id]  [char](2) NULL,
    [zone_id]      [char](6) NULL,
    [alarminc_no]  [decimal](18, 0) NULL,
    [zonestate_id] [char](4) NULL
);

SET IDENTITY_INSERT [event_history] ON;
INSERT INTO [event_history]
    ([system_no], [event_date], [seqno], [event_id], [eventrpt_id], [zone_id], [alarminc_no], [zonestate_id])
VALUES
(100035370, '2013-10-03 16:44:35.640', 273769808, '7441', 'O', 'E441', NULL, 'ARM'),
(100035370, '2013-10-03 16:44:35.663', 273769809, 'C', 'O', 'O/C', NULL, 'ARM'),
(100035370, '2013-10-03 16:44:53.840', 273769868, '7130', 'R', '11', 143069012, 'A'),
(100035370, '2013-10-03 16:44:58.193', 273769880, '7130', 'R', '11', 143069012, 'A'),
(100035370, '2013-10-03 16:44:58.570', 273769883, 'C', 'O', 'O/C', 143069012, 'ARM'),
(100035370, '2013-10-03 16:45:05.137', 273769907, '7441', 'O', 'E441', 143069012, 'ARM'),
(100035370, '2013-10-03 16:45:05.183', 273769908, 'C', 'O', 'O/C', 143069012, 'ARM'),
(100035370, '2013-10-03 16:45:10.880', 273769931, 'C', 'O', 'O/C', 143069012, 'ARM'),
(100035370, '2013-10-03 16:45:12.920', 273769935, '7130', 'R', '11', 143069012, 'A'),
(100035370, '2013-10-03 16:45:12.970', 273769936, '7130', 'R', '11', 143069012, 'A'),
(100035370, '2013-10-03 16:45:19.430', 273769944, '7130', 'R', '11', 143069012, 'A'),
(100035370, '2013-10-03 16:45:30.990', 273769973, '7130', 'R', '11', 143069012, 'A'),
(100035370, '2013-10-03 16:45:31.320', 273769978, '7130', 'R', '11', 143069012, 'A'),
(100035370, '2013-10-03 16:45:33.843', 273769989, '7130', 'R', '11', 143069012, 'A'),
(100035370, '2013-10-03 16:45:50.100', 273770058, '7130', 'R', '11', 143069012, 'A'),
(100035370, '2013-10-03 16:46:09.147', 273770111, '7130', 'R', '11', 143069012, 'A'),
(100035370, '2013-10-03 16:46:09.643', 273770118, '7130', 'R', '11', 143069012, 'A'),
(300010596, '2013-10-03 16:16:16.240', 273763678, 'CLMAIL', 'O', 'O/C', NULL, 'ARM'),
(300010596, '2013-10-03 16:16:18.363', 273763681, '7131', 'R', '11', 143067032, 'A'),
(300010596, '2013-10-03 16:16:20.700', 273763700, '7131', 'R', '11', 143067032, 'A'),
(300010596, '2013-10-03 16:16:20.760', 273763701, '7131', 'R', '11', 143067032, 'A'),
(300010596, '2013-10-03 16:16:25.560', 273763721, '7131', 'R', '11', 143067032, 'A');
SET IDENTITY_INSERT [event_history] OFF;
[/code]
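If you are on SQL Server 2012 or later, one way to pair each 'A' row with its immediately preceding 'ARM' row is LAG() (a sketch against the sample event_history table above; on 2008 and earlier you would need a ROW_NUMBER self-join instead):

```sql
WITH ordered AS (
    SELECT seqno, zonestate_id,
           LAG(zonestate_id) OVER (PARTITION BY system_no ORDER BY seqno) AS prev_state,
           LAG(seqno)        OVER (PARTITION BY system_no ORDER BY seqno) AS prev_seqno
    FROM event_history
)
SELECT eh.*
FROM event_history AS eh
WHERE eh.seqno IN (SELECT seqno      FROM ordered WHERE zonestate_id = 'A' AND prev_state = 'ARM'
                   UNION
                   SELECT prev_seqno FROM ordered WHERE zonestate_id = 'A' AND prev_state = 'ARM')
ORDER BY eh.system_no, eh.seqno;
```

Against the sample data this returns the four rows expected for 100035370 and the two rows expected for 300010596, since only seqno 273769868, 273769935, and 273763681 are 'A' rows whose immediate predecessor is 'ARM'.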

Help building query, please

Posted: 03 Oct 2013 02:04 PM PDT

I have a table that contains times of server backups. There are days where nothing happens, so I join this table to a calendar table to create placeholders for those dates. This results in additional rows populated with a date that was missing in the server backup table, but with NULL values for the remaining columns. Since I have more than one server in the table, I need a date placeholder for each server that hasn't had a backup on any given day. I don't know how to do this. I've created DDL below. Please read on.

[code="sql"]
-- server backup table
CREATE TABLE [dbo].[CannedBackupJobs](
    [jobid] [int] NULL,
    [SizeTB] [float] NULL,
    [StartTime] [datetime] NULL,
    [ServerName] [varchar](20) NULL
);

-- server backup data
-- As you can see, neither Peaches nor Pears was backed up on 2013-08-05 through 2013-08-13.
INSERT INTO [dbo].[CannedBackupJobs]
VALUES
(83, 365.226943141887, '2013-08-04 03:20:30.777', 'Peaches'),
(83, 408.830221699759, '2013-08-14 18:26:53.220', 'Peaches'),
(83, 391.654500133873, '2013-08-15 15:44:34.977', 'Peaches'),
(83, 397.063717616127, '2013-08-20 02:10:57.747', 'Peaches'),
(83, 353.803773579467, '2013-08-24 05:56:26.090', 'Peaches'),
(100, 533.226943141887, '2013-08-07 03:20:30.777', 'Pears'),
(100, 788.830221699759, '2013-08-09 18:26:53.220', 'Pears'),
(100, 351.654500133873, '2013-09-07 15:44:34.977', 'Pears'),
(100, 347.063717616127, '2013-09-09 02:10:57.747', 'Pears'),
(100, 663.803773579467, '2013-10-09 05:56:26.090', 'Pears');

-- calendar lookup table
CREATE TABLE _Dates (
    d DATE,
    PRIMARY KEY (d)
);
DECLARE @dIncr DATE = '2000-01-01';
DECLARE @dEnd DATE = '2100-01-01';
WHILE (@dIncr < @dEnd)
BEGIN
    INSERT INTO _Dates (d) VALUES (@dIncr);
    SELECT @dIncr = DATEADD(DAY, 1, @dIncr);
END
[/code]

Here's the join. It only produces one placeholder for missing dates.

[code="sql"]
SELECT d, StartTime, ServerName, SizeTB
FROM dbo._dates d
INNER JOIN (
    SELECT DATEADD(DAY, DATEDIFF(DAY, 0, MIN(starttime)), 0) AS start_date,
           DATEADD(DAY, DATEDIFF(DAY, 0, MAX(starttime)) + 1, 0) AS end_date
    FROM dbo.cannedbackupjobs
) AS date_range
    ON d.d >= date_range.start_date AND d.d < date_range.end_date
LEFT OUTER JOIN cannedbackupjobs dbj
    ON dbj.starttime >= d.d AND dbj.starttime < DATEADD(DAY, 1, d.d)
[/code]

Desired output looks like this. Notice that on 2013-08-09 there is only one placeholder row, for Peaches, and none for Pears, because Pears had a backup; but elsewhere there are two placeholder rows for the same date, for days when neither was backed up:

[code="sql"]
SELECT '2013-08-04', '2013-08-04 03:20:30.777', 'Peaches', 365.226943141887 UNION ALL
SELECT '2013-08-05', NULL, 'Peaches', NULL UNION ALL
SELECT '2013-08-05', NULL, 'Pears', NULL UNION ALL
SELECT '2013-08-06', NULL, 'Peaches', NULL UNION ALL
SELECT '2013-08-06', NULL, 'Pears', NULL UNION ALL
SELECT '2013-08-07', NULL, 'Peaches', NULL UNION ALL
SELECT '2013-08-07', NULL, 'Pears', NULL UNION ALL
SELECT '2013-08-08', NULL, 'Peaches', NULL UNION ALL
SELECT '2013-08-08', NULL, 'Pears', NULL UNION ALL
SELECT '2013-08-09', NULL, 'Peaches', NULL UNION ALL
SELECT '2013-08-09', '2013-08-09 18:26:53.220', 'Pears', 788.830221699759 UNION ALL
SELECT '2013-08-10', NULL, 'Peaches', NULL UNION ALL
SELECT '2013-08-10', NULL, 'Pears', NULL UNION ALL
SELECT '2013-08-11', NULL, 'Peaches', NULL UNION ALL
SELECT '2013-08-11', NULL, 'Pears', NULL UNION ALL
SELECT '2013-08-12', NULL, 'Peaches', NULL UNION ALL
SELECT '2013-08-12', NULL, 'Pears', NULL UNION ALL
SELECT '2013-08-13', NULL, 'Peaches', NULL UNION ALL
SELECT '2013-08-13', NULL, 'Pears', NULL UNION ALL
SELECT '2013-08-14', NULL, 'Peaches', NULL UNION ALL
SELECT '2013-08-14', NULL, 'Pears', NULL
[/code]

Should I do this with a bunch of separate SELECT statements (with WHERE ServerName = 'Peaches' and WHERE ServerName = 'Pears') run against the _Dates table, with a UNION ALL to recombine them? Or is there a better way?
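A common pattern for this (a sketch, assuming the tables above) is to cross join the calendar with the distinct server names first, so that every date/server pair exists before the left join to the backups:

```sql
SELECT d.d, dbj.StartTime, s.ServerName, dbj.SizeTB
FROM dbo._Dates AS d
CROSS JOIN (SELECT DISTINCT ServerName FROM dbo.CannedBackupJobs) AS s
LEFT JOIN dbo.CannedBackupJobs AS dbj
       ON dbj.ServerName = s.ServerName
      AND dbj.StartTime >= d.d
      AND dbj.StartTime <  DATEADD(DAY, 1, d.d)
WHERE d.d >= (SELECT CAST(MIN(StartTime) AS DATE) FROM dbo.CannedBackupJobs)
  AND d.d <= (SELECT CAST(MAX(StartTime) AS DATE) FROM dbo.CannedBackupJobs)
ORDER BY d.d, s.ServerName;
```

Because the server name is fixed by the cross join before the outer join runs, a day with no backup for a given server still produces one NULL-padded row per server, which matches the desired output.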

Displaying column names

Posted: 03 Oct 2013 11:04 PM PDT

Hi, I have a requirement where I need to show a column name in an SSRS report in the following format: List of <Hazards.hazardCode> separated by ", " <MaterialsItems.otherHazards>. Here, Hazards and MaterialsItems are table names, and hazardCode and otherHazards are column names. How do I display this? Please help me with this.

openrowset question

Posted: 03 Oct 2013 04:46 AM PDT

I would like to know why I can't run OPENROWSET the way that system stored procedures do. For example, sys.xp_logininfo uses it, but when I try to copy that part of the code from the SP for use in my own code, it gives me the error:

Msg 156, Level 15, State 17, Line 16
Incorrect syntax near the keyword 'OPENROWSET'.

I just don't understand how it can work in the system SP when I can't run the same part of the code myself... please help.
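For ordinary user code, OPENROWSET takes an OLE DB provider and a query, and ad hoc use requires the 'Ad Hoc Distributed Queries' server option; a minimal sketch (the server name is a placeholder, and the internal OPENROWSET(TABLE ...) form used by system procedures is not available outside them):

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Ad Hoc Distributed Queries', 1;
RECONFIGURE;

SELECT *
FROM OPENROWSET('SQLNCLI',
                'Server=SOMESERVER;Trusted_Connection=yes;',
                'SELECT name FROM master.sys.databases');
```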

Building a Comma Separated on a new line

Posted: 03 Oct 2013 01:53 PM PDT

Hi All, I need some help! Firstly, to tweak my results to work as I expect; and secondly, to suggest an alternate method if my logic is not best practice. I'm returning a list of PDFs (very minimal results, < 20) into a temporary table and then running the following query to build them into a string (e.g. 1.pdf, 2.pdf, 3.pdf ... 20.pdf):

[code="sql"]
DECLARE @PDFFileName VARCHAR(8000) = (
    SELECT STUFF((
        SELECT ', ' + PDF_FileName + '.pdf'
        FROM @PDFResult
        ORDER BY 1 ASC
        FOR XML PATH('')), 1, 1, '')
)
[/code]

My aim is to put each file name on its own line instead of one massive line (1.pdf,2.pdf...20.pdf). Any ideas?
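One sketch (assuming the @PDFResult table variable above): use CR/LF as the separator instead of ', ', and add the TYPE directive with .value() so FOR XML does not entitize the control characters; STUFF then strips the leading two-character CR/LF:

```sql
DECLARE @PDFFileName VARCHAR(8000) = (
    SELECT STUFF((
        SELECT CHAR(13) + CHAR(10) + PDF_FileName + '.pdf'
        FROM @PDFResult
        ORDER BY 1 ASC
        FOR XML PATH(''), TYPE).value('.', 'VARCHAR(8000)'),
    1, 2, '')  -- strip the leading CR/LF (2 characters)
);
```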

Inserting auto increment values through Identity

Posted: 03 Oct 2013 05:25 PM PDT

Hi, I created a table:

[code="sql"]
CREATE TABLE IdentityProducts(
    productid int IDENTITY(1,1) NOT NULL,
    productname nvarchar(40) NOT NULL,
    categoryid int NOT NULL,
    unitprice money NOT NULL
)
[/code]

I want to insert values into it with the below statement:

[code="sql"]
INSERT INTO IdentityProducts(productname, categoryid, unitprice)
VALUES ('shad', 1, 100)
[/code]

When I insert, productid should be filled in automatically since it is defined with the IDENTITY property, but that's not happening, even though I have run SET IDENTITY_INSERT IdentityProducts ON. I am getting the following error:

Msg 545, Level 16, State 1, Line 1
Explicit value must be specified for identity column in table 'IdentityProducts' either when IDENTITY_INSERT is set to ON or when a replication user is inserting into a NOT FOR REPLICATION identity column. :crying:
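The error follows directly from IDENTITY_INSERT being ON: while it is ON, SQL Server expects you to supply the identity value yourself. A sketch of both working forms (the explicit productid value 42 is just an example):

```sql
-- Usual case: let the IDENTITY property generate productid.
SET IDENTITY_INSERT IdentityProducts OFF;
INSERT INTO IdentityProducts (productname, categoryid, unitprice)
VALUES ('shad', 1, 100);

-- Alternative: with IDENTITY_INSERT ON, supply productid explicitly.
SET IDENTITY_INSERT IdentityProducts ON;
INSERT INTO IdentityProducts (productid, productname, categoryid, unitprice)
VALUES (42, 'shad', 1, 100);
SET IDENTITY_INSERT IdentityProducts OFF;
```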

Parameter Concatenation

Posted: 03 Oct 2013 07:43 AM PDT

Hi everyone,

[code="sql"]
DECLARE @CN VARCHAR(100)
SET @CN = 'abc pvt Ltd'
SELECT * FROM [@CN'+'$Production]
[/code]

My desired output is the result of SELECT * FROM [abc pvt Ltd$Production]. I do NOT want to build the SELECT query as a string. Could anyone help me with this?
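Object names cannot be parameterized in T-SQL, so some form of dynamic SQL is unavoidable here; a sketch that at least keeps the name safe with QUOTENAME:

```sql
DECLARE @CN sysname = N'abc pvt Ltd';
DECLARE @sql NVARCHAR(MAX) =
    N'SELECT * FROM ' + QUOTENAME(@CN + N'$Production') + N';';
-- Executes: SELECT * FROM [abc pvt Ltd$Production]
EXEC sys.sp_executesql @sql;
```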

Advice on Agent job step

Posted: 03 Oct 2013 03:44 AM PDT

Hi everyone. I have a requirement to move an .xlsx file from a local location to a SharePoint site. Looking around, I saw advice that suggested mapping the SharePoint site as a mapped drive, which I have done. The line below works fine when executed from the cmd prompt; it moves the file to the SharePoint location:

XCopy "C:\SQL\PowerShell\Failedjobs.xlsx" J:\ /y

However, it fails when I try to run it as a SQL Agent job with job type Operating System (CmdExec). The error I get is: Invalid drive specification 0 File(s) copied. Process Exit Code 4. The step failed. So I have incorrectly named the destination, but I don't know how to correct it. Do I need the full SharePoint URL? I have tried many combinations with no luck. Would anyone be able to advise what I have done wrong in the script for it to run as a SQL Agent job? Thanks for any advice.
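A likely cause (an assumption, not confirmed from the post): mapped drive letters are per-user, so the SQL Agent service account does not see J:\. Using the UNC path directly in the CmdExec step usually works; the server, site, and library names below are placeholders:

```shell
xcopy "C:\SQL\PowerShell\Failedjobs.xlsx" "\\sharepointserver\DavWWWRoot\sites\reports\Shared Documents\" /y
```

The Agent service account also needs write permission on that SharePoint library for the copy to succeed.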

Best method to append columns based on similar ID

Posted: 03 Oct 2013 02:52 AM PDT

Hi, sorry about the long thread, but here we go. I have been trying to figure this out, and while my searches turn up similar questions, none are exactly this one. I have a raw data table extracted from a legacy application; the raw data comes as follows:

[code]
car_id: 1
car_name: vw
customer_comment: oil change
dealer_comment: complete
response_line: 1

car_id: 1
car_name: vw
customer_comment: brakes change
dealer_comment: NULL
response_line: 2
[/code]

Note that the response_line field breaks up a "text area" by line number (yes, the legacy application can't just extract the text area as a whole). I'm trying to append the comments into one field so I can update a staging table and clean things up. The perfect output would be:

[code]
car_id: 1
car_name: vw
customer_comment: oil change brakes change
dealer_comment: complete
[/code]

Here is my code, which works (in two different ways), except that it excludes any record whose response_line count doesn't exactly match (there can be a max of 3). For example, if one record has 3 response_lines (3 lines filled in the text area) this works, but a record with only 2 response_lines is not included:

[code="sql"]
;WITH temp (num1, t1) AS (
    SELECT car_id, dealer_comment
    FROM cars
    WHERE Response_Line = '1' AND cars.query = 'dealer_comment'
),
temp2 (num2, t2) AS (
    SELECT car_id, dealer_comment
    FROM cars
    WHERE Response_Line = '2' AND cars.query = 'dealer_comment'
),
temp3 (num3, t3) AS (
    SELECT car_id, dealer_comment
    FROM cars
    WHERE Response_Line = '3' AND cars.query = 'dealer_comment'
)
SELECT t1 + ' ' + t2 + ' ' + t3
FROM temp
INNER JOIN temp2 ON temp.num1 = temp2.num2
-- shows all records with only 3 response lines; I was hoping to show all records
[/code]

or

[code="sql"]
UPDATE staging.cars
SET dealer_comments =
    (SELECT dealer_comment FROM cars
     WHERE Response_Line = '1' AND cars.query = 'dealer_comment'
       AND staging.cars.car_id = cars.car_id)
  + ' ' +
    (SELECT Response_Text FROM cars
     WHERE Response_Line = '2' AND cars.query = 'dealer_comment'
       AND staging.cars.car_id = cars.car_id)
  + ' ' +
    (SELECT Response_Text FROM cars
     WHERE Response_Line = '3' AND cars.query = 'dealer_comment'
       AND staging.cars.car_id = cars.car_id)
-- updates records with only 3 response lines
[/code]

My plan would be to update based on either method, the CTEs or the UPDATE. I understand I probably need to incorporate a CASE WHEN ELSE, but I can't seem to figure that piece out. I hope this all makes sense; any advice would be appreciated.
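One sketch (assuming the cars table described above): instead of joining per line number, concatenate all lines per car with FOR XML PATH, skipping NULLs, so a car with 1, 2, or 3 response lines is handled uniformly:

```sql
SELECT c.car_id,
       STUFF((SELECT ' ' + c2.dealer_comment
              FROM cars AS c2
              WHERE c2.car_id = c.car_id
                AND c2.query = 'dealer_comment'
                AND c2.dealer_comment IS NOT NULL
              ORDER BY c2.Response_Line
              FOR XML PATH('')), 1, 1, '') AS dealer_comments
FROM cars AS c
WHERE c.query = 'dealer_comment'
GROUP BY c.car_id;
```

The same correlated subquery can be dropped into the SET clause of the staging UPDATE, which removes the need for a CASE per line count.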

[SQL Server 2008 issues] Page Life Expectancy dropping in value


Page Life Expectancy dropping in value

Posted: 03 Oct 2013 02:50 AM PDT

I have an issue whereby, every now and again throughout the day, the PLE value plummets from something like 14 hours to 0.1 hours. It is at this point that I get complaints of the app running slowly. CPU looks fine, and the buffer cache hit ratio is within normal operational levels. Is this indicative of some poorly written code somewhere? What can I do to investigate this further and help increase performance?
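To catch the drops as they happen, one sketch is to sample PLE periodically from the performance-counter DMV (and correlate the timestamps with whatever queries were running at the time):

```sql
SELECT [object_name], counter_name,
       cntr_value AS page_life_expectancy_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
  AND [object_name] LIKE '%Buffer Manager%';
```

A sudden crash in this value usually means something flushed the buffer pool, e.g. a large scan; capturing the top I/O queries around the same moment narrows down the culprit.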

Replication from sql server to oracle

Posted: 03 Oct 2013 06:36 PM PDT

Hi, I am not sure how to start this. I was given a task to set up replication from SQL Server 2008 R2 to Oracle 11g. I have tried Google, but I couldn't clearly understand where to start. What permissions are required on Oracle, and what needs to be done on the Oracle end to support replication? And the same from the SQL Server side? Can anyone give step-by-step details? Thanks in advance.

split a comma delimited string into columns

Posted: 03 Oct 2013 04:21 PM PDT

[code="sql"]
DECLARE @P NVARCHAR(MAX)
SELECT @P = 'MONDAY,SUN,FUN,D'
DECLARE @STRSQL NVARCHAR(MAX)
SET @STRSQL = 'SELECT ''' + REPLACE(@P, ',', ''',''') + ''''
EXEC (@STRSQL)
[/code]

The code above (with the variable names made consistent) splits the comma-delimited string into four columns, but how do I give the columns names within this code? I was trying this by creating a new dynamic table with four columns, but I am not sure whether that really works.
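For a string with at most four parts (and no dots in the values), one sketch avoids dynamic SQL entirely by abusing PARSENAME, which gives each column a name directly:

```sql
DECLARE @P NVARCHAR(400) = N'MONDAY,SUN,FUN,D';
DECLARE @q NVARCHAR(400) = REPLACE(@P, ',', '.');

-- PARSENAME numbers parts right to left, so part 4 is the first value.
SELECT PARSENAME(@q, 4) AS col1,   -- 'MONDAY'
       PARSENAME(@q, 3) AS col2,   -- 'SUN'
       PARSENAME(@q, 2) AS col3,   -- 'FUN'
       PARSENAME(@q, 1) AS col4;   -- 'D'
```

With the dynamic-SQL approach, the equivalent is to build ` AS colN` aliases into the generated SELECT list.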

ssis question

Posted: 03 Oct 2013 06:27 AM PDT

I have a proc that is called and returns a filename, which loads into cache in SSIS. How do I then make a connection to that file? I don't see anywhere in the connection manager a way to pull the filename from the result set.

Using missing index DMVs to look for (potentially) helpful indexes...

Posted: 03 Oct 2013 02:02 AM PDT

So I came across a query (I don't recall where) which uses four missing-index DMVs to give an idea of where the SQL engine thinks an index might help. Now, I'm not blindly following its suggestions (so far I've used its recommendations to add ONE index, replacing an existing one), but I'd like to get an idea of what others think.

The columns I'm paying the most attention to when looking for possible improvements are, from sys.dm_db_missing_index_group_stats: user_seeks, user_scans, and avg_total_user_cost.

So far, my feeling is: if the seeks and scans are low, don't add the index regardless of the user cost. Does anyone have suggested guidelines for those numbers? I'm using 100 or higher for seeks and scans combined, and 250 or higher for user cost. Obviously, if I'm going to implement one of these index suggestions, I'll need to weigh the impact of maintaining the index as well.

Last, the code I'm using (if anyone recognizes it, let me know whose it is so I can credit them):

[code="sql"]
SELECT db_name(mid.database_id) AS [DBName],
       statement AS [database.scheme.table],
       column_id,
       column_name,
       column_usage,
       mid.included_columns,
       migs.user_seeks,
       migs.user_scans,
       migs.last_user_seek,
       migs.avg_total_user_cost,
       migs.avg_user_impact,
       (migs.user_seeks + migs.user_scans) * migs.avg_total_user_cost * (migs.avg_user_impact * 0.01) AS [Index Advantage]
FROM sys.dm_db_missing_index_details AS mid
CROSS APPLY sys.dm_db_missing_index_columns (mid.index_handle)
INNER JOIN sys.dm_db_missing_index_groups AS mig
    ON mig.index_handle = mid.index_handle
INNER JOIN sys.dm_db_missing_index_group_stats AS migs
    ON mig.index_group_handle = migs.group_handle
WHERE db_name(mid.database_id) NOT IN ('msdb', 'master', 'model', 'tempdb')
  AND migs.avg_total_user_cost >= 250
  AND migs.user_seeks + migs.user_scans >= 100
ORDER BY mid.database_id, migs.avg_total_user_cost
[/code]

Unable to start SQL Server agent after server restart( Sql Server 2012)

Posted: 02 Oct 2013 09:34 PM PDT

Hi, I am not able to start SQL Server Agent after a server restart (SQL Server 2012). It is giving me the following error: "The service did not respond to the start or control request in a timely fashion. [0x8007041d]" Can anyone help, please? Gita

Windows/SSIS - File Transfer Protocol question

Posted: 03 Oct 2013 02:17 AM PDT

Can we download a file from an FTP server without knowing the file extension? The directory would have multiple file formats. I don't want to use get *.txt or the like. I know the file name only, not the extension. Is it doable?

Cannot open New SSIS Project in SQL Server 2008 R2

Posted: 02 Nov 2012 12:03 PM PDT

I installed SQL Server 2008 R2 and Visual Studio 2008. However, after several installations and uninstallations, I could no longer use SSIS to create new projects. I cleaned out the registry with a registry cleaner from Wise, and deleted the SQL Server and Visual Studio folders as well. When I create a new SSIS project, I immediately get an error as follows:

"Microsoft Visual Studio"
"An error prevented the view from loading."
"Additional information:"
"Specified Module cannot be Found. (Exception from HRESULT: 0x8007007E)"
"(System.Windows.Forms)"

How do I resolve this problem? What this means is that I cannot create any new SSIS projects. Cecil

SQL Transaction Log

Posted: 03 Oct 2013 12:37 AM PDT

Hi everyone, my transaction log file gets bigger and bigger every day. Can I truncate the file? If so, does it affect anything, such as performance? Best regards
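If the database is in the FULL recovery model, the log only stops growing once it is backed up regularly; a sketch of the usual pattern (the database name and backup path are placeholders), with a one-off shrink only if the file has already ballooned:

```sql
-- Back up the log so the inactive portion can be reused:
BACKUP LOG MyDatabase TO DISK = N'X:\Backups\MyDatabase_log.trn';

-- One-off shrink after the log backup (avoid doing this routinely,
-- since regrowing the log is expensive):
DBCC SHRINKFILE (MyDatabase_log, 1024);  -- target size in MB
```

Scheduling the log backup on a short interval keeps the file at a steady size without truncation tricks.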

SQL Transactional Replication

Posted: 03 Oct 2013 12:35 AM PDT

Hi, can someone guide me on how to back up transactional replication? The problem is that when my live DB has a problem and I need to restore, it requires me to reconfigure transactional replication and initialize the schema again. Thanks and best regards

SQL 2008 SSIS package problem with data export

Posted: 03 Oct 2013 12:22 AM PDT

I have an SSIS package I developed using the Import/Export Wizard as a base. It truncates certain tables in our Dev environment then populates them with Production data. While the package works fine on my desktop (and does not fail when run via a job on the Production server), running it via the job doesn't always fully populate the tables down in Dev.For instance, this is what I got from the log file:[quote]DataFlow: 2013-10-02 09:31:05.10 Source: Load Destinations 0-4 Component "Destination 4 - Staging_X1" (1258) will receive 3744 rows on input "Destination Input" (1271)End DataFlow[/quote]Staging_X1 in Production has 16802593 rows worth of data. Dev ends up with less than a third of that but more than the message above states (so I'm thinking the log entry is just telling me the rows per batch).Has anyone seen this behavior before? These are OLE DB Source and Destination connections, nothing between them. I'm using Fast Load with Keep identity, Table lock, and Check constraints. I don't have a Rows per batch set, and the Maximum insert commit size is set to 2147483647. AccessMode is OpenRowset Using FastLoad.

Memory utilization

Posted: 02 Oct 2013 09:06 PM PDT

Hi, I was asked by one of my clients to schedule daily monitoring reports providing CPU and memory utilization of SQL Server. Please help me with how to proceed, especially the memory part. Thanks and regards, Anoop Pandey
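For the memory side, one starting point (a sketch; this DMV assumes SQL Server 2008 or later) is the process-memory DMV, which a scheduled job can sample into a logging table for the daily report:

```sql
SELECT physical_memory_in_use_kb / 1024 AS sql_memory_in_use_mb,
       memory_utilization_percentage
FROM sys.dm_os_process_memory;
```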

Thursday, October 3, 2013

[SQL Server] Replace


Replace

Posted: 22 Mar 2010 10:34 PM PDT

Hi, consider this table:

[code]
Man_name   Model
Motorola   MotoQ
Motorola   RAZR2
Nokia      N73
Nokia      6100
Samsung    C118
[/code]

I have used DISTINCT to retrieve the values. Now, if the same Man_name appears a second time, I need to replace that value with a blank. Is it possible to do this?

Select top, how to get the rest?

Posted: 23 Mar 2010 07:14 AM PDT

Hi, I want to use paging so I don't fetch too many records at once. But if I use SELECT TOP (100), for example, how do I get the rest of the records, split into 100 per SELECT query? I have seen an example like this: [quote]SELECT TOP (100) OrderDate, ShipName FROM orders WHERE orderID NOT IN (SELECT TOP OFFSET OrderID FROM orders)[/quote] But I have read, and found by trying, that OFFSET doesn't work in my version of SQL Server. Is there anything else I can do? Thanks, Fia
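Before OFFSET/FETCH (added in SQL Server 2012), the usual paging pattern was ROW_NUMBER(); a sketch against the orders table, where @PageNumber is 1-based:

```sql
DECLARE @PageNumber INT = 2, @PageSize INT = 100;

;WITH numbered AS (
    SELECT OrderDate, ShipName,
           ROW_NUMBER() OVER (ORDER BY OrderID) AS rn
    FROM orders
)
SELECT OrderDate, ShipName
FROM numbered
WHERE rn >  (@PageNumber - 1) * @PageSize   -- skip earlier pages
  AND rn <= @PageNumber * @PageSize;        -- take one page
```

The ORDER BY inside ROW_NUMBER must be deterministic (a key column) or pages can overlap between queries.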

why used WITH XMLNAMESPACES in tsql ?

Posted: 02 Oct 2013 06:38 PM PDT

What is the use of WITH XMLNAMESPACES in T-SQL? Does it provide some functionality? What would break if I did not use WITH XMLNAMESPACES?
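A minimal sketch of what it does (the namespace URI below is made up for illustration): it declares the namespace prefixes that FOR XML output and XQuery in the same statement can use; without the clause, the `ord:` prefixes would not be bound to any namespace and the statement would fail:

```sql
WITH XMLNAMESPACES ('http://example.com/orders' AS ord)
SELECT 1 AS [ord:OrderID], 'Widget' AS [ord:Product]
FOR XML PATH('ord:Order');
-- Produces an <ord:Order> element declaring xmlns:ord="http://example.com/orders"
```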
