Friday, October 4, 2013

[how to] imp paused mid import

imp paused mid import

Posted: 04 Oct 2013 03:49 PM PDT

OK, I'm new to Oracle administration, but I'm trying to import a previous dump (no pun intended) that was split into two files.

The import is from an Oracle 9i export into 11gR2 on Windows.

I used the following commands:

imp userid=olduser/oldpassword indexes=n log=import.log file=(Export_DB_776full_20130829_020000_01, Export_DB_776full_20130829_020000_02) fromuser=olduser touser=newadminuser commit=y feedback=100

Now this has progressed fairly well and has imported data from what seems to be the first file, peppering the console with a dot every 100 rows imported. Then, when it reaches the table where the export was split (into the second dmp file), it just pauses. I thought it was simply collecting its thoughts and left it overnight, but there was no progress.

Trying to connect to Oracle from SQL*Plus on the command line, even as SYS or as SYSDBA, is not possible; it returns:

"ORA-00257: archiver error. Connect internal only, until freed."  

Googling has yielded no direct answers, but what I did find at least gave me some access to the database to perform a few alteration functions. Some of the other posts I saw mentioned the log files filling up, but there is well in excess of 150 GB free on both drives (the tablespace is split across two drives).

A few things:

  • What is going on? The error message seems a bit cryptic to me.
  • How can I connect to the database?
  • How can I fix this?
  • How do I avoid falling into this trap again?
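For what it's worth, ORA-00257 usually means the archiver can no longer write archived redo logs, and if the archive destination is the flash recovery area, the limit that matters is the db_recovery_file_dest_size quota, not free disk space - which would explain the 150 GB free. A minimal diagnostic sketch (assuming an FRA setup; run it locally as "sqlplus / as sysdba", which still connects when ORA-00257 blocks normal logins):

    -- how much of the recovery-area quota is used, and by what
    SELECT * FROM v$recovery_file_dest;
    SELECT * FROM v$flash_recovery_area_usage;

    -- one way out: raise the quota (or delete old archivelogs with RMAN)
    ALTER SYSTEM SET db_recovery_file_dest_size = 50G SCOPE=BOTH;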

Thanks

Foreign key for rows added later

Posted: 04 Oct 2013 02:07 PM PDT

I'm using a queue to crawl pages and track links between them.

I have a table called page with a primary key called page_id.

I have a table called links with two columns: link_from, link_to

The queue takes a page, saves it, collects all links from it and saves those. The link_to value in some rows points to a non-existent row (temporarily, until the target pops out of the queue).

From a long-term point of view, I'd like to connect them with foreign keys (for multiple reasons, e.g. delete cascading, automatic relationship discovery from an application which uses the database).

How do you do such a thing? I can't insert a row with a foreign key reference until the referenced row exists.
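One option, sketched below under the assumption that the engine supports deferrable constraints (PostgreSQL and Oracle do; MySQL's InnoDB does not): make the foreign key DEFERRABLE so it is only checked at COMMIT. This only helps when the link row and the page row land in the same transaction; otherwise the usual workaround is to insert a stub page row first and fill it in when it pops out of the queue.

    ALTER TABLE links
        ADD CONSTRAINT fk_links_to
        FOREIGN KEY (link_to) REFERENCES page (page_id)
        DEFERRABLE INITIALLY DEFERRED;

    BEGIN;
    INSERT INTO links (link_from, link_to) VALUES (1, 42);  -- page 42 not crawled yet
    INSERT INTO page (page_id) VALUES (42);                 -- stub row, same transaction
    COMMIT;                                                 -- the FK is checked only here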

Comparison or benchmark tests of column-oriented databases [on hold]

Posted: 04 Oct 2013 01:59 PM PDT

I'm looking for some fresh comparison or benchmark test results for column-oriented databases.

Unfortunately, I have found only very outdated benchmark tests, such as:

Thanks!

Backup of msdb stuck in rollback

Posted: 04 Oct 2013 02:16 PM PDT

A native full backup of msdb on a SQL Server 2005 Enterprise instance (9.0.4053) deployed on a two-node Win2K3 cluster (Microsoft Windows NT 5.2 (3790)) is currently stuck in rollback.

The DB is in Simple recovery mode, but the Tlog is full. Attempts to add secondary Tlog files are failing with a 3023 error (the action needs to be serialized).

Options I'm considering include moving non-database files off the volume to allow the Tlog to autogrow, plus an instance restart. I've tried changing the state of the DB from multi-user to single-user, but the SPID stuck in rollback blocks that. SPID 126 is the session in question below.

Are there other options I can consider?

session_id  status     blocking_session_id  wait_type     wait_time   wait_resource                     transaction_id
1098        suspended  0                    TRACEWRITE    687                                           25828712297
126         suspended  0                    BACKUPTHREAD  1284206734                                    0
146         suspended  0                    WAITFOR       500                                           0
229         suspended  0                    TRACEWRITE    250                                           42236221330
249         suspended  0                    TRACEWRITE    531                                           41659048209
299         suspended  441                  LCK_M_U       2149156     DATABASE: 15 [BULKOP_BACKUP_LOG]  0
323         suspended  0                    WAITFOR       2656                                          0
441         suspended  0                    BACKUPTHREAD  980511734                                     0
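For reference, output of this shape can be reproduced with a query along these lines (a sketch against sys.dm_exec_requests, which carries exactly these columns):

    SELECT session_id, status, blocking_session_id,
           wait_type, wait_time, wait_resource, transaction_id
    FROM sys.dm_exec_requests
    WHERE status = 'suspended';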

Does RMAN in Oracle only copy the archive logs? What happens to the active redo log in case of failure?

Posted: 04 Oct 2013 02:04 PM PDT

If RMAN copies the archived redo logs once they are full, what happens to the current redo log in the event of a failure (hardware failure, etc.)? Isn't there always an active redo log that has not yet been archived?

MySQL or MongoDB? [on hold]

Posted: 04 Oct 2013 06:20 PM PDT

I am planning a medium-sized system and have arrived at a crossroads: which is better, MySQL or MongoDB?

If necessary, I will use Rails as a complement!

Thanks in advance!

A complex but simple select statement [migrated]

Posted: 04 Oct 2013 12:21 PM PDT

Hi guys, I am not much of a coder, just trying to help out our demographics team. What I need is a SELECT statement that will scan through a table and, based on the value in one column, put each row into a new table.

So say I have a table with the following columns: Name, Address, State, ZIP. I want a statement that will go through, read the value of State and say: OK, it's AZ, put the row into a table called AZ; the next one is CA, put it into CA; the next one may be AZ again, so add it to the AZ table it already created.

I also can't tell it what the state values are, as they are random (it's not really states that we use; I'm just using that as an example).
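A hedged sketch of one way to do this in SQL Server with dynamic SQL (the table name SourceTable and the column sizes are placeholders - adjust to your schema). It creates the per-value table on first sight and appends on later sightings:

    DECLARE @state varchar(50), @sql nvarchar(max);
    DECLARE c CURSOR FOR SELECT DISTINCT State FROM SourceTable;
    OPEN c;
    FETCH NEXT FROM c INTO @state;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        IF OBJECT_ID(QUOTENAME(@state)) IS NULL
            SET @sql = N'SELECT * INTO ' + QUOTENAME(@state)
                     + N' FROM SourceTable WHERE State = @s;';  -- first sighting: create
        ELSE
            SET @sql = N'INSERT INTO ' + QUOTENAME(@state)
                     + N' SELECT * FROM SourceTable WHERE State = @s;';  -- later: append
        EXEC sp_executesql @sql, N'@s varchar(50)', @s = @state;
        FETCH NEXT FROM c INTO @state;
    END
    CLOSE c;
    DEALLOCATE c;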

How to fetch file path from a folder through PL/SQL

Posted: 04 Oct 2013 09:53 AM PDT

We upload files to our library on a weekly basis. I wanted to fetch the latest uploaded file's path/URL from a folder through PL/SQL, but I'm not sure if it's possible.

If it is possible, please suggest what I should do.

Are these indexes redundant?

Posted: 04 Oct 2013 11:19 AM PDT

I have two indexes on a table that will contain roughly 10-20 million records at a given moment.

Index 1

CREATE NONCLUSTERED INDEX [IX_NC_AccountNumber_UTC] ON [dbo].[LogWindow]
(
    [AccountNumber] ASC,
    [UTC] ASC
)
INCLUDE ([Site],
    [Application],
    [Version],
    [Machine],
    [Context],
    [Severity],
    [Message],
    [Details])
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
      DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
ON [PRIMARY]
GO

Index 2

CREATE NONCLUSTERED INDEX [IX_NC_AccountNumber_UTC_Site_Context_Machine] ON [dbo].[LogWindow]
(
    [AccountNumber] ASC,
    [UTC] ASC,
    [Site] ASC,
    [Context] ASC,
    [Machine] ASC
)
INCLUDE ([Application],
    [Version],
    [Severity],
    [Message],
    [Details],
    [Type])
WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
      DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
ON [PRIMARY]
GO

The reason why I am currently using two indexes is because I am issuing two types of queries. The first query only filters records based upon AccountNumber and UTC. The second query filters records based upon those same columns, plus the additional Site, Context and Machine columns.

The second index is very similar to the first; it just has fewer INCLUDE columns, because some of them are now part of the key.

Does it make sense for me to have the first index when the second one is so similar with additional key columns? Will SQL Server determine there is a benefit to using the first index over the second?
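One way to ground that decision: after a representative workload has run, check whether the optimizer ever prefers the narrower index. A sketch using the index-usage DMV (dbo.LogWindow taken from the definitions above):

    SELECT i.name,
           s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
    FROM sys.indexes AS i
    LEFT JOIN sys.dm_db_index_usage_stats AS s
           ON s.object_id = i.object_id
          AND s.index_id = i.index_id
          AND s.database_id = DB_ID()
    WHERE i.object_id = OBJECT_ID('dbo.LogWindow');

If IX_NC_AccountNumber_UTC shows no seeks or scans while the wider index is in use, dropping it would save its considerable write and storage overhead.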

Extract data for all groups in a single SELECT statement

Posted: 04 Oct 2013 07:55 AM PDT

MySQL 5.

Table: schedule
Columns:
    id         int(10) UN PK AI
    startdate  date
    groupid    smallint(5) UN

The following statement extracts the Schedule ID for a given group (the schedule with the latest date not greater than the given date):

SELECT id FROM schedule
WHERE groupid = @group AND @date >= startdate
ORDER BY startdate DESC
LIMIT 1

Question: Can I extract Schedule IDs for all groups, without enumerating (calling the above statement repeatedly) with all existing group IDs?
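A sketch of the classic groupwise-maximum pattern, assuming (groupid, startdate) pairs are unique - one pass over the table, no per-group calls:

    SELECT s.groupid, s.id
    FROM schedule AS s
    JOIN (SELECT groupid, MAX(startdate) AS maxdate
          FROM schedule
          WHERE startdate <= @date
          GROUP BY groupid) AS latest
      ON latest.groupid = s.groupid
     AND latest.maxdate = s.startdate;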

Incremental MySQL restore using binlogs

Posted: 04 Oct 2013 02:10 PM PDT

I've tried to restore a MySQL database using incremental binlogs and the mysqlbinlog tool, but I received the following message:

ERROR 1666 (HY000) at line 30: Cannot execute statement: impossible to write to binary log since statement is in row format and BINLOG_FORMAT = STATEMENT.

The BINLOG_FORMAT parameter in my database is set to ROW, but is there a way to replay the binlog with this parameter set to ROW?
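Error 1666 is raised by the session that replays the events, so one common workaround (a sketch; paths and file names are placeholders) is to decode the binlog to a file and replay it in a single mysql session whose own settings allow row events - either by matching the format or by not re-logging the replayed statements (both require SUPER):

    -- first, outside mysql:  mysqlbinlog binlog.000042 > /tmp/binlog.sql
    SET SESSION binlog_format = 'ROW';   -- match the row events being applied
    -- or: SET SESSION sql_log_bin = 0;  -- don't re-write them to the binary log
    SOURCE /tmp/binlog.sql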

Is there any way to list all the tables (and/or views, triggers, functions) present in an Oracle 9i database?

Posted: 04 Oct 2013 07:33 AM PDT

I want to know this because I forgot the name of the table I created.

Using Oracle 9i
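For objects owned by the schema you are logged in as, the data dictionary has this directly (this works on 9i):

    SELECT object_name, object_type
    FROM   user_objects
    WHERE  object_type IN ('TABLE', 'VIEW', 'TRIGGER', 'FUNCTION')
    ORDER  BY object_type, object_name;

Use all_objects instead of user_objects to include objects visible to you in other schemas.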

Thank you for your time.

Why does @@dbts increase after backup/restore?

Posted: 04 Oct 2013 04:51 PM PDT

I have a SQL Server 2005 database, containing some tables which have a Timestamp (or RowVersion) column. I need to know the value of the current timestamp, so I use the following query: SELECT CAST(@@dbts AS BIGINT);

This will return, for example, 10505.

Immediately afterwards, without updating, inserting, ... anything, I do a BACKUP DATABASE and a RESTORE DATABASE and I run the SELECT query again. Only this time, the result is 14000, while none of the timestamps in the tables have increased.

Why/how does this happen?

Slow SQL Server Database (approx 2TB) - high CPU due to index fragmentation?

Posted: 04 Oct 2013 11:49 AM PDT

I've posted this over at ServerFault but didn't have much luck getting a response.

I have a fairly large SQL Server database (~2TB). Most of the data is in one table (~6 billion rows).

Two weeks ago I dropped two non-clustered indexes on the large table and migrated the data onto a single 6TB RAID SSD array. I then recreated the two indexes, which took quite some time (I assume because I currently have the data (for table and indexes) and the log on the same array, and it seems that with the RAID I can't have fast sequential AND random r/w at the same time).

Anyway, after recreating the indexes it ran very well for about a week. During that week I have been slowly running a cleanup on the large table, which just removes old, unneeded rows. So far I've removed about 300 million of the 6 billion rows, and at a guess I still have a lot more to go.

Now, after about a week of running like this, it is running very slowly and I'm not sure what would be best to do.

Current situation:

  • Dual Xeon
  • 192GB RAM
  • Windows Server 2012 with SQL Server 2012
  • CPU is hitting 100% (16 cores) - was only using about 50% prior to the slowdown
  • IO doesn't seem to be working too hard (no queue)

Large Table currently has (I don't have any fragmentation info prior to now):

  • 1x Clustered index: 48% fragmentation
  • 1x Non-clustered index: 36% fragmentation
  • 1x Non-clustered index: 10% fragmentation
  • I used to have two more indexes on this table but dropped them a while ago

What do you think would best fix my problem?

  • Rebuild the non-clustered indexes on the same array (I assume this would fix the problem, but it will take a long time as it did before, and I'll probably hit the same problem again soon since I'm still cleaning the table)
  • Rebuild the non-clustered indexes on a new RAID array (should fix as above but may be faster)
  • Move the non-clustered indexes to a new RAID array (quickest option)
  • Recreate the two old indexes on a new RAID array (not sure if this relieves CPU or IO pressure)

Do fragmented indexes cause higher CPU usage?

Is there anything else I could be missing?
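As a side note, fragmentation numbers like those above can be tracked over time with the physical-stats DMV; a sketch (dbo.BigTable stands in for the real table name):

    SELECT i.name,
           ps.index_type_desc,
           ps.avg_fragmentation_in_percent,
           ps.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.BigTable'),
                                        NULL, NULL, 'LIMITED') AS ps
    JOIN sys.indexes AS i
      ON i.object_id = ps.object_id
     AND i.index_id = ps.index_id;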

TIA

Rebuilding index

Posted: 04 Oct 2013 08:23 AM PDT

I have a transactional database, and one of the nightly jobs rebuilds indexes, followed by a log backup and then a shrink of the log.

Is it safe to do so?

Thanks, Antony

How can I execute a PL/SQL procedure for Oracle via JDBC?

Posted: 04 Oct 2013 07:39 AM PDT

I have a Java webapp with an Oracle DB, and I'm trying to execute this statement at the end of a data migration script:

EXEC DBMS_STATS.GATHER_TABLE_STATS (ownname => 'MY_SCHEMA', tabname => 'MY_TABLE', estimate_percent => dbms_stats.auto_sample_size);  

It works in SQL*Plus, but not via JDBC:

"Error: ORA-00900: invalid SQL statement"  

What am I doing wrong?
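EXEC is SQL*Plus shorthand, not SQL, which is why the JDBC driver rejects it with ORA-00900. The equivalent anonymous PL/SQL block, which JDBC can run (e.g. through a CallableStatement), would be:

    BEGIN
        DBMS_STATS.GATHER_TABLE_STATS(
            ownname          => 'MY_SCHEMA',
            tabname          => 'MY_TABLE',
            estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
    END;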

What is a "leaf expression"?

Posted: 04 Oct 2013 08:25 AM PDT

What is a "leaf expression"?

Partition-clause expressions are limited in the following ways:

  • Your partition-clause must calculate a single non-null value for each row. You can reference multiple columns, but each row must return a single value.
  • You can specify leaf expressions, functions, and operators in the partition clause expression.
  • All leaf expressions in the partition clause must be either constants or columns of the table.

Source: https://my.vertica.com/docs/6.1.x/HTML/index.htm#18125.htm
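Reading the quoted rules, a "leaf expression" appears to be a terminal node of the expression tree - a constant or a column reference, as opposed to the functions and operators applied to them. A hypothetical Vertica example to make that concrete:

    CREATE TABLE orders (
        order_id   INT,
        order_date DATE NOT NULL
    )
    PARTITION BY EXTRACT(year FROM order_date) * 100 + EXTRACT(month FROM order_date);

Here the leaf expressions are the column order_date and the constant 100 (both allowed by the third rule), while EXTRACT, * and + are the functions and operators allowed by the second.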

Selecting from an excel spreadsheet into SQL Server table

Posted: 04 Oct 2013 11:25 AM PDT

This question has been asked before, but I have tried giving full admin rights on the C:\temp\ folder to the SQL Server user (I am using Windows authentication into SQL Server Express).

So for the following code snippet:

DECLARE @strSQL as varchar(200)
DECLARE @file as varchar(200)

SET @file = 'C:\temp\file.xlsx'
SET @strSQL = N'SELECT * INTO #mytemptable FROM OPENROWSET(''Microsoft.Jet.OLEDB.4.0'', ''Excel 8.0;Database=' + @file + ';HDR=YES'', ''SELECT * FROM [Sheet1$]'');'
SET @strSQL = @strSQL + N'SELECT * FROM ##mytemptable'
PRINT @strSQL
EXEC (@strSQL)

EXPANDED SQL STATEMENT

SELECT *
INTO #mytemptable
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0', 'Excel 8.0;Database=C:\temp\file.xlsx;HDR=YES', 'SELECT * FROM [Sheet1$]');

SELECT * FROM ##mytemptable

I get this error:

Msg 7399, Level 16, State 1, Line 1
The OLE DB provider "Microsoft.Jet.OLEDB.4.0" for linked server "(null)" reported an error. The provider did not give any information about the error.
Msg 7303, Level 16, State 1, Line 1
Cannot initialize the data source object of OLE DB provider "Microsoft.Jet.OLEDB.4.0" for linked server "(null)".
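Two observations worth checking, hedged since neither is visible from the error alone: the Jet 4.0 provider cannot read the .xlsx format (it predates it, and it was never released for 64-bit), so on a 64-bit instance or with an .xlsx file the usual route is the ACE provider, assuming the Access Database Engine redistributable is installed and ad hoc distributed queries are enabled:

    SELECT *
    INTO #mytemptable
    FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                    'Excel 12.0;Database=C:\temp\file.xlsx;HDR=YES',
                    'SELECT * FROM [Sheet1$]');

Separately, the script creates #mytemptable but then selects from ##mytemptable; the two names need to match once the provider error is resolved.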

How do I track why my SQL Azure occupied space pulsates?

Posted: 04 Oct 2013 12:25 PM PDT

This is inspired by this StackOverflow question. I have this code:

SELECT CONVERT(INT, SUM(reserved_page_count) * 8 / 1024) FROM sys.dm_db_partition_stats  

for detecting database space consumption, and it returns exactly the same result as the SQL Azure management portal pie chart. The problem is that the number fluctuates greatly without apparent reason - sometimes it's around 35% and sometimes around 54%.

This is kind of worrying - I don't understand what is happening, and my major concern is that we suddenly consume all the space and our service comes to a halt.

I have seriously tried to look into sys.dm_db_partition_stats and sys.partitions, but they look like a pile of magic numbers to me.

What data should I use to find the root cause of the fluctuations?
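A reasonable first step is to break the same counter down per object and snapshot it on a schedule, so the fluctuation can be pinned to a table (or to internal objects appearing and disappearing):

    SELECT o.name,
           SUM(ps.reserved_page_count) * 8 / 1024 AS reserved_mb
    FROM sys.dm_db_partition_stats AS ps
    JOIN sys.objects AS o ON o.object_id = ps.object_id
    GROUP BY o.name
    ORDER BY reserved_mb DESC;

Comparing two snapshots taken at a 35% moment and a 54% moment should show which objects account for the difference.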

Scaling of PostGIS vs MySQL cos/sin calculations

Posted: 04 Oct 2013 06:25 PM PDT

I need to sort database rows according to GPS coordinates (or any other geographical representation).

The way I understand it, MySQL's spatial indexes are too buggy at the time of writing (at least articles and my own tests indicate so).

So I can either switch to PostgreSQL to use PostGIS which seems to be working excellently, or I can do it mathematically in MySQL:

((acos(sin(latitude * pi() / 180) * sin($latitude * pi() / 180)
     + cos(latitude * pi() / 180) * cos($latitude * pi() / 180)
     * cos((longitude - $longitude) * pi() / 180))
  * 180 / pi()) * 60 * 1.1515 * 1.609344) km

I'm not asking for a PostgreSQL vs MySQL debate, but simply how the performance of the MySQL expression scales as the row count grows, compared to using PostGIS/PostgreSQL.

It's a drastic step to switch database systems, so the upside had better be significant.
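One scaling note that may help frame the comparison: the expression above can't use an index, so MySQL evaluates the trigonometry for every row (linear in table size), whereas PostGIS can answer proximity queries from an R-tree-style GiST index. A common MySQL mitigation (a sketch; the places table and the @delta radius bound are placeholders) is to prefilter with an indexable bounding box so the trig only runs on candidates:

    SELECT id
    FROM places
    WHERE latitude  BETWEEN @lat - @delta AND @lat + @delta
      AND longitude BETWEEN @lon - @delta AND @lon + @delta
    ORDER BY acos(sin(radians(latitude)) * sin(radians(@lat))
           + cos(radians(latitude)) * cos(radians(@lat))
           * cos(radians(longitude - @lon)))
    LIMIT 50;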

Data sharing between client and server two ways

Posted: 04 Oct 2013 09:25 AM PDT

One of my Silverlight applications is running. This application uses one central database (SQL Server 2008) and three client databases (SQL Server 2008) on servers in other locations.

Our requirement is that, with minor or no changes to the Silverlight application, when we make changes in the central database (insert, update and delete in multiple tables), these changes automatically occur in all client databases. And when any change is made in any client database, all clients as well as the central database should be updated automatically.

For this I have read a little about the Sync Framework and SQL Server 2008 Change Tracking, but I have no idea how to do it. Please help me - how can we achieve this?

Restore of logfiles in SQL Server

Posted: 04 Oct 2013 03:59 PM PDT

I'm trying to write a script which will read a folder containing a full backup and X number of log files. The script fails when it starts the loop to read the log files, with a "cannot convert varchar to nvarchar" error. I have tried defining @Pathandlog as an nvarchar, and I still get the same error. Any ideas?

--allow advanced options to be changed.
EXEC sp_configure 'show advanced options', 1
GO
--Update the currently configured value for advanced options.
RECONFIGURE
GO
--Enable XP_CMDSHELL
EXEC sp_configure 'xp_cmdshell', 1
GO
--Update the currently configured value for this feature.
RECONFIGURE
GO

--delete temp table if exists
IF OBJECT_ID('dbo.myFileList') IS NOT NULL
BEGIN
    DROP TABLE dbo.myFileList
END

--Create the table to store file list
CREATE TABLE myFileList (FileNumber INT IDENTITY, FileName VARCHAR(256))

--Insert file list from directory to SQL Server
DECLARE @clientname varchar(256) = 'clientname'
DECLARE @Path varchar(256) = 'C:\Import\' + @clientname
DECLARE @dbname varchar(256) = 'ApplicationV1A' + @clientname
DECLARE @Command varchar(1024) = 'dir ' + @Path + ' /A-D /B'
INSERT INTO myFileList
EXEC MASTER.dbo.xp_cmdshell @Command

--Check the list
SELECT * FROM myFileList

--get first file and run restore of database - leaving db in recovery
DECLARE @fullbackname VARCHAR(MAX)
SELECT TOP (1) @fullbackname = FileName
FROM dbo.myFileList
ORDER BY Filenumber

PRINT @fullbackname

DECLARE @fullbacknamedb VARCHAR(MAX) = 'c:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\' + @dbname + '.DB'
DECLARE @dbnamelog VARCHAR(MAX) = 'c:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\' + @dbname + '.log'
PRINT @dbname
DECLARE @tmp VARCHAR(MAX) = @Path + '\' + @fullbackname

RESTORE DATABASE @dbname
FROM DISK = @tmp
WITH FILE = 1,
     MOVE N'ApplicationV1A' TO @fullbacknamedb,
     MOVE N'ApplicationV1A_log' TO @dbnamelog,
     NORECOVERY, NOUNLOAD, REPLACE, STATS = 10

/* temporarily disabled */
--restore db commands to go here

--select last file for later.
DECLARE @taillog int
SELECT TOP (1) @taillog = Filenumber
FROM dbo.myFileList
WHERE FileName IS NOT NULL
ORDER BY Filenumber DESC

DECLARE @fullbackup int
SELECT TOP (1) @fullbackup = Filenumber
FROM dbo.myFileList
ORDER BY Filenumber

DECLARE @taillogname VARCHAR(MAX)
SELECT TOP (1) @taillogname = FileName
FROM dbo.myFileList
WHERE FileName IS NOT NULL
ORDER BY Filenumber DESC

PRINT @taillogname

--get logfiles and run restore of database leaving db in recovery.
--This must loop through all log files
DECLARE @FileName VARCHAR(MAX)
SELECT FileName
FROM dbo.myFileList
WHERE (Filenumber > @fullbackup AND Filenumber < @taillog)
ORDER BY Filenumber

DECLARE @logloop INT
SET @logloop = 1
WHILE (@logloop <= @taillog)
BEGIN
    PRINT @FileName
    DECLARE @Pathandlog VARCHAR(MAX) = @Path + @FileName
    DECLARE @Manu NVARCHAR(MAX) = @dbname

    RESTORE LOG @Manu
    --RESTORE LOG @dbname
    FROM DISK = @Pathandlog
    WITH FILE = 1, NORECOVERY, NOUNLOAD, STATS = 10

    SET @logloop = @logloop + 1
    IF @logloop = @taillog
        BREAK;
END

RESTORE LOG [@dbname]
FROM DISK = @Path + @FileName
WITH NORECOVERY
/**/

--select each one, run restore then move to next log

--get last logfile and run restore of database and take db out of recovery
DECLARE @tailogname VARCHAR(MAX)
SELECT TOP (1) FileName
FROM dbo.myFileList
WHERE FileName IS NOT NULL
ORDER BY Filenumber DESC

--command to restore and take db out of recovery here
/*
RESTORE LOG [@dbname]
FROM DISK = @Path + @tailogname
WITH RECOVERY
*/

--Clean up - may need to run this on script launch as well in case this script failed
DROP TABLE myFileList
GO

--disable xp_cmdshell once job complete
EXECUTE sp_configure 'xp_cmdshell', 0
RECONFIGURE
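A few hedged observations on the failing loop: RESTORE ... FROM DISK = only accepts a literal or a single variable, not an expression like @Path + @FileName; RESTORE LOG [@dbname] restores a database literally named "@dbname" (the brackets make it an identifier, not a variable); and @FileName is never re-assigned inside the loop. A sketch of a corrected loop body (the Filenumber arithmetic assumes the log files follow the full backup in the list):

    SELECT @FileName = FileName
    FROM dbo.myFileList
    WHERE Filenumber = @fullbackup + @logloop;     -- fetch this iteration's file

    SET @Pathandlog = @Path + '\' + @FileName;     -- concatenate before the RESTORE

    RESTORE LOG @dbname                            -- variable, so no brackets
    FROM DISK = @Pathandlog
    WITH FILE = 1, NORECOVERY, NOUNLOAD, STATS = 10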

Database table design question

Posted: 04 Oct 2013 03:25 PM PDT

Firstly, database novice here. I'm trying to create a system for processing Sales Orders into Purchase Orders and then splitting each Purchase Order into Shipping Containers. The system needs to be able to split a Sales Order Line Item across 2 or more Purchase Orders, and a Purchase Order Line Item across 2 or more Shipping Containers.

There will be some (a lot of) adjustments back and forth: once a Sales Order Line Item gets split across multiple Purchase Orders, it might get split differently later on, and the same applies when a Purchase Order Line Item is processed into Shipping Containers. I can't for the life of me figure out how best to handle this.

If it helps, here are the tables I have so far with some details excluded for brevity:

CREATE TABLE PurchaseOrder (
    Id INTEGER UNSIGNED UNIQUE NOT NULL AUTO_INCREMENT,
    PurchaseOrderNumber VARCHAR(15) UNIQUE,

    PRIMARY KEY(Id)
);

CREATE TABLE Container (
    Id INTEGER UNSIGNED UNIQUE NOT NULL AUTO_INCREMENT,
    ContainerName VARCHAR(20) UNIQUE,
    PurchaseOrderId INTEGER UNSIGNED,

    PRIMARY KEY(Id),
    FOREIGN KEY(PurchaseOrderId) REFERENCES PurchaseOrder(Id)
);

CREATE TABLE SalesOrder (
    Id INTEGER UNSIGNED UNIQUE NOT NULL AUTO_INCREMENT,
    ClientId INTEGER UNSIGNED,
    SalesOrderNumber VARCHAR(10),

    PRIMARY KEY(Id),
    FOREIGN KEY(ClientId) REFERENCES Client(Id)
);

CREATE TABLE SalesOrderLineItem (
    Id INTEGER UNSIGNED UNIQUE NOT NULL AUTO_INCREMENT,
    SalesOrderId INTEGER UNSIGNED,
    ProductId INTEGER UNSIGNED,
    Qty INTEGER,
    Price DECIMAL(5,2),
    Cost DECIMAL(5,2),

    PRIMARY KEY(Id),
    FOREIGN KEY(SalesOrderId) REFERENCES SalesOrder(Id),
    FOREIGN KEY(ProductId) REFERENCES Product(Id)
);

CREATE TABLE PurchaseOrderLineItem (
    Id INTEGER UNSIGNED UNIQUE NOT NULL AUTO_INCREMENT,
    PurchaseOrderId INTEGER UNSIGNED,
    SalesOrderId INTEGER UNSIGNED,
    ProductId INTEGER UNSIGNED,
    ClientId INTEGER UNSIGNED,
    MfgId INTEGER UNSIGNED,

    PRIMARY KEY(Id),
    FOREIGN KEY(PurchaseOrderId) REFERENCES PurchaseOrder(Id),
    FOREIGN KEY(SalesOrderId) REFERENCES SalesOrder(Id),
    FOREIGN KEY(ProductId) REFERENCES Product(Id),
    FOREIGN KEY(ClientId) REFERENCES Client(Id),
    FOREIGN KEY(MfgId) REFERENCES Mfg(Id)
);

I'm thinking of creating additional association tables, between Sales Order Line Items and Purchase Orders, and between Purchase Order Line Items and Containers, to keep track of this kind of back-and-forth splitting?
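That direction seems sound: an association table carrying the allocated quantity lets one line item be split, and later re-split, without touching the orders themselves. A sketch of one such table in the same style (the name and the Qty-based design are assumptions, not a prescription):

    CREATE TABLE SalesOrderLineAllocation (
        Id INTEGER UNSIGNED UNIQUE NOT NULL AUTO_INCREMENT,
        SalesOrderLineItemId INTEGER UNSIGNED NOT NULL,
        PurchaseOrderLineItemId INTEGER UNSIGNED NOT NULL,
        Qty INTEGER NOT NULL,

        PRIMARY KEY(Id),
        FOREIGN KEY(SalesOrderLineItemId) REFERENCES SalesOrderLineItem(Id),
        FOREIGN KEY(PurchaseOrderLineItemId) REFERENCES PurchaseOrderLineItem(Id)
    );

Re-splitting then becomes deleting and re-inserting allocation rows, with the invariant that the allocations for a line item sum to its Qty; a parallel table would link PurchaseOrderLineItem to Container.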

Help appreciated!

Update Sql Server from Mysql

Posted: 04 Oct 2013 10:12 AM PDT

We have a website that stores form data in a web-hosted MySQL database. Whenever form data is entered into the MySQL tables, I need to update all the necessary tables in our local SQL Server database.

I have seen several articles that describe how to update MySQL using SQL Server triggers but nothing that goes the other way.

If I create a linked server and linked tables in SQL Server can I create a trigger on a MySQL table from the SQL Server management studio?

What is a good, repeatable way to calculate MAXDOP on SQL Server?

Posted: 04 Oct 2013 09:02 AM PDT

When setting up a new SQL Server 2012, I use the following code to determine a good starting point for the MAXDOP setting:

/*
   This will recommend a MAXDOP setting appropriate for your machine's NUMA memory
   configuration. You will need to evaluate this setting in a non-production
   environment before moving it to production.

   MAXDOP can be configured using:
   EXEC sp_configure 'max degree of parallelism', X;
   RECONFIGURE

   If this instance is hosting a Sharepoint database, you MUST specify MAXDOP = 1
   (URL wrapped for readability)
   http://blogs.msdn.com/b/rcormier/archive/2012/10/25/
   you-shall-configure-your-maxdop-when-using-sharepoint-2013.aspx

   Biztalk (all versions, including 2010):
   MAXDOP = 1 is only required on the BizTalk Message Box
   database server(s), and must not be changed; all other servers hosting other
   BizTalk Server databases may return this value to 0 if set.
   http://support.microsoft.com/kb/899000
*/

DECLARE @CoreCount int;
DECLARE @NumaNodes int;

SET @CoreCount = (SELECT i.cpu_count FROM sys.dm_os_sys_info i);
SET @NumaNodes = (
    SELECT MAX(c.memory_node_id) + 1
    FROM sys.dm_os_memory_clerks c
    WHERE memory_node_id < 64
    );

IF @CoreCount > 4 /* If less than 5 cores, don't bother. */
BEGIN
    DECLARE @MaxDOP int;

    /* 3/4 of Total Cores in Machine */
    SET @MaxDOP = @CoreCount * 0.75;

    /* if @MaxDOP is greater than the per NUMA node
       Core Count, set @MaxDOP = per NUMA node core count */
    IF @MaxDOP > (@CoreCount / @NumaNodes)
        SET @MaxDOP = (@CoreCount / @NumaNodes) * 0.75;

    /* Reduce @MaxDOP to an even number */
    SET @MaxDOP = @MaxDOP - (@MaxDOP % 2);

    /* Cap MAXDOP at 8, according to Microsoft */
    IF @MaxDOP > 8 SET @MaxDOP = 8;

    PRINT 'Suggested MAXDOP = ' + CAST(@MaxDOP as varchar(max));
END
ELSE
BEGIN
    PRINT 'Suggested MAXDOP = 0 since you have fewer than 5 cores total.';
    PRINT 'This is the default setting, you likely do not need to do';
    PRINT 'anything.';
END

I realize this is a bit subjective and can vary based on many things; however, I'm attempting to create a tight, catch-all piece of code to use as a starting point for a new server.

Does anyone have any input on this code?

Why do I get "The initial snapshot for publication is not yet available."?

Posted: 04 Oct 2013 12:27 PM PDT

We are using transactional replication with updatable subscriptions: 1 publisher, 1 subscriber.

SQL Replication started encountering a constraint error on 3/6 at 10AM because people at two different offices entered data that violated a unique constraint. Totally expected.

However, at 12:08PM on 3/6, SQL Replication stopped reporting that error and instead said:

"The initial snapshot for publication 'TheDB' is not yet available."

Why would it stop retrying replication and instead start throwing this error? Now we need to regenerate the snapshot.

  • No DB backups or restores were performed at this time
  • The snapshot was successfully applied at the subscriber months ago when replication was initially set up.
  • There is a message in the log from 12:07 on 3/6 saying

[000] Request to run job COTOPSVR\TheDB-TheDB-COTCPSVR\DB-3 (from User CTOB\Administrator) refused because the job is already running from a request by User sa

  • No one manually initiated any job or did anything at all at 12:06PM.

Any ideas? I know how to fix this - I want to prevent it from happening again.

Manually set log file size after shrink SQL Server 2008 R2

Posted: 04 Oct 2013 08:30 AM PDT

I am becoming a somewhat involuntary DBA at work at the moment and really need some help with something.

We have a 40GB database in Full Recovery Mode with no log backups configured and a huge log file of 84GB. My plan so far to salvage this situation is to run a full log backup of the database, shrink the log file, and set up a maintenance plan to run a log backup every night alongside the database backup to help keep it under control.

My problem is that I do not want the log file to shrink down to nothing and then spend the first Monday morning constantly growing. I have a rough estimate of what the file size should be (about 20% of the database size) and would like to set this from the get-go to get space that is as contiguous as possible. Is this just a case of changing "Initial Size" under Database Properties -> Files? I would also guess that the database would need to be offline for this to occur?
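For what it's worth, both steps can be done online with T-SQL; a sketch, where MyDB and the logical file name MyDB_log are placeholders (the real logical name comes from sys.database_files):

    SELECT name FROM sys.database_files WHERE type_desc = 'LOG';

    DBCC SHRINKFILE (MyDB_log, 1024);                 -- shrink first (target size in MB)
    ALTER DATABASE MyDB
        MODIFY FILE (NAME = MyDB_log, SIZE = 8GB);    -- then regrow in one contiguous step

Neither command requires taking the database offline.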

Thanks in advance

SQL Server Decimal(9, 0) vs INT

Posted: 04 Oct 2013 09:44 AM PDT

One of our customers uses the datatype DECIMAL(18,0) for some columns in his SQL Server 2008 R2 database. Because the values in those columns grow quite slowly, he recently proposed changing the datatype to DECIMAL(5,0) to regain some storage.

According to the MSDN library, the storage space of the DECIMAL(5,0) datatype is, just like that of the DECIMAL(9,0) datatype, 5 bytes. INT is 1 byte smaller, but can store everything in the range -2^31 to 2^31-1, instead of the -99,999 to 99,999 that DECIMAL(5,0) can store. Even the largest DECIMAL that fits into 5 bytes, DECIMAL(9,0), can store only integers in the range -999,999,999 to 999,999,999 (which is less than half of the range INT offers in 4 bytes).
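Those storage sizes are easy to verify, since DATALENGTH reports the storage bytes of a value:

    SELECT DATALENGTH(CAST(12345 AS decimal(5,0))) AS dec5_bytes,  -- 5
           DATALENGTH(CAST(12345 AS decimal(9,0))) AS dec9_bytes,  -- 5
           DATALENGTH(CAST(12345 AS int))          AS int_bytes;   -- 4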

I can think of two "benefits" of using DECIMAL over INT:

  • The ability to add scale afterwards, without using more storage space
  • The ability to scale the precision up to 38 digits, without altering data type

but these aren't real benefits in my opinion:

  • Adding scale to integers only makes sense in very few cases (in most cases where scale does make a difference, it could also be added beforehand)
  • SQL Server sees every precision / scale combination as a different data type, so the datatype isn't left alone when increasing the precision or scale.

This makes me wonder: what is the added benefit of a DECIMAL(5,0) datatype for integers?
