Wednesday, March 13, 2013

[how to] Completely uninstall the Oracle server on Linux


Completely uninstall the Oracle server on Linux

Posted: 13 Mar 2013 08:29 PM PDT

I have searched Google many times for how to uninstall the Oracle server on a Linux OS, but I can't find a correct solution. I need to reinstall the server, because a lot of errors occur when I try to start it up. Does anyone know how to completely remove all data belonging to the Oracle server?
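For reference, a manual cleanup usually means removing the software tree, the central inventory pointers, and the OS artifacts. The sketch below is a dry-run outline only: the paths are the typical OFA defaults and are assumptions, so confirm ORACLE_BASE/ORACLE_HOME (e.g. from /etc/oratab) before removing the `echo` guards.

```shell
# Dry-run sketch of a manual Oracle removal on Linux. Paths are the usual
# OFA defaults and are assumptions -- verify them on your system first,
# then remove the leading "echo" on each line to actually execute it.
ORACLE_BASE=/u01/app/oracle                       # assumed default location
echo "rm -rf $ORACLE_BASE"                        # software, datafiles, diag
echo "rm -rf /etc/oratab /etc/oraInst.loc"        # central inventory pointers
echo "rm -f /usr/local/bin/dbhome /usr/local/bin/oraenv /usr/local/bin/coraenv"
echo "userdel -r oracle"                          # oracle OS user (optional)
```

Shut down all databases and listeners first; anything still running will hold files open.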

Postgres Automated Install

Posted: 13 Mar 2013 05:55 PM PDT

Is it possible to automatically create a database after a PostgreSQL one-click install (from EnterpriseDB)?

How to become an Oracle developer

Posted: 13 Mar 2013 04:09 PM PDT

I want to become an Oracle developer, and I have some basic questions for those who are, or have been, Oracle developers:

  • I got a 2:2 in my ICT degree. Do I need a master's in order to become an Oracle developer, or is it not necessary? Or am I better off working through the Oracle and PL/SQL "for dummies" books and mentioning in my CV that I have done so?

  • I will obviously need Oracle at home in future, but it is expensive. Do businesses give you a software licence for home use, or are you expected to purchase Oracle for home yourself?

  • I obviously need experience to become an Oracle developer. I did a little Oracle at undergraduate level in my final year, where I simply followed a portfolio doing tasks such as queries, inserting data, triggers, and a little XML. Is it very different in the workplace? Are there companies willing to take on learning developers on a voluntary basis?

Thanks

Database research ideas [closed]

Posted: 13 Mar 2013 04:00 PM PDT

I want to enrol in a Master by Research degree at university, and I am really interested in database programming, especially MySQL or Oracle. I have just finished my final undergraduate year, and my strongest subject was Databases, where I got an A. The assignment involved following a portfolio in which I normalized data and then used Oracle to insert data, update data, and write triggers, queries, etc.

Now, I normally like to make things, but in a Master by Research I need to research a topic, create a product, and write a thesis based on my research.

My question is: for someone like me who has only a little experience with Oracle and used MySQL during my undergraduate years, can anybody recommend research topics that would suit me and that I could handle? I am expected to write a research proposal, but I do not know what counts as suitable database research. Should I, for example, combine a database with a web application, or concentrate solely on the database?

Thanks

How to properly configure PostgreSQL RAM usage?

Posted: 13 Mar 2013 03:08 PM PDT

Using PostgreSQL, we have several hundred concurrent processes doing regular database access. We have our data spread out across several database servers, each of which runs a connection pool with PgBouncer. The problem is that RAM usage (seen via top - not sure if this is the best way to measure it) tends to climb to or near 100% on all servers. I am pretty sure this is bad.

I have tried out several configurations of pgbouncer / postgres, and eventually (after a few minutes of my system running) the RAM usage goes up to this point.

My questions are:

  1. Should I set up my connection pooler on a different server from the database? It seems that so many open connections on the same server could be causing this.
  2. In general, what are good guidelines for RAM usage on Postgres? I really don't know how to tell if the server is behaving well / as expected or badly.

Thanks very much! Let me know if more information is needed and I'll try to post ASAP.
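As a general, hedged note on the question above: on Linux, high apparent memory use in top often reflects the filesystem cache plus shared buffers being counted against every backend, rather than a leak, so "near 100%" is not necessarily bad by itself. A common starting point for a dedicated server is sketched below; the values are illustrative only, sized for the 16 GB class of machine, not prescriptive:

```
# postgresql.conf fragment (illustrative values for a dedicated ~16 GB server)
shared_buffers = 4GB          # often ~25% of RAM as a starting point
effective_cache_size = 12GB   # planner hint: RAM likely available as OS cache
work_mem = 16MB               # per sort/hash, per plan node -- keep modest
                              # when hundreds of connections are active
maintenance_work_mem = 512MB  # for VACUUM, CREATE INDEX, etc.
```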

How to properly configure pgBouncer transaction level pooling?

Posted: 13 Mar 2013 03:04 PM PDT

What is a good way to determine the number of available transaction connections when using pgBouncer to pool database connections? And how should I determine max_connections and max_locks_per_transaction in postgresql.conf?

Our application has over 200 concurrent processes, each of which regularly reads and writes to the database. Using session-level, I would expect the number of pooled connections to be around the number of concurrent processes working on the DB at a given time, but I don't really know how to translate that to transaction-level pooling.
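A hedged sketch of the pgBouncer side of that reasoning: in transaction pooling, a server connection is held only for the duration of each transaction, so the pool of real PostgreSQL connections can be far smaller than the 200 client processes. The values below are illustrative, not recommendations:

```
; pgbouncer.ini fragment (illustrative values, transaction pooling)
[pgbouncer]
pool_mode = transaction
max_client_conn = 400      ; client sockets pgBouncer will accept
default_pool_size = 20     ; real PostgreSQL connections per database/user pair
```

max_connections in postgresql.conf then only needs to cover the summed pool sizes plus a little slack for superuser/maintenance sessions.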

Thanks very much! Let me know if more information is necessary and I will gladly oblige.

Consistent Database Export of Oracle Database

Posted: 13 Mar 2013 04:14 PM PDT

I'm quite confused and I need some clarification!

I do a full export of the database with Export Data Pump (for replication/duplication).

What I usually do is:

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP RESTRICT

then export, so I can have a consistent export (DMP file)!

Now, can I instead export using only the parameter CONSISTENT=Y and/or FLASHBACK_SCN=SOME_NUMBER? What are the differences (if any), and when should each be used?

I hope someone can explain it to me in simple language!

Can anyone recommend a best practice?
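One hedged point of clarification: CONSISTENT=Y belongs to the legacy exp utility, while Data Pump (expdp) takes FLASHBACK_SCN or FLASHBACK_TIME to produce the same single-point-in-time consistent dump without restarting in restricted mode. A parameter-file sketch, with illustrative names and an assumed DATA_PUMP_DIR directory object:

```
# expdp parameter file sketch -- directory, file names, and timestamp are
# illustrative assumptions, not taken from the poster's environment
FULL=Y
DIRECTORY=DATA_PUMP_DIR
DUMPFILE=full_consistent.dmp
LOGFILE=full_consistent.log
# read-consistent snapshot as of the given moment:
FLASHBACK_TIME="TO_TIMESTAMP('2013-03-13 20:00:00','YYYY-MM-DD HH24:MI:SS')"
```

FLASHBACK_SCN does the same thing keyed to an explicit SCN instead of a wall-clock time, which is useful when you need the dump to line up exactly with another operation (e.g. a standby or a backup).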

How to speed up SQL Server RIGHT JOIN on 100MM+ Records - use TVF?

Posted: 13 Mar 2013 04:19 PM PDT

  1. I use a table insert to UNION a CTE of converting transactions to visits (converting or not) and then summarize the visit data (no details, just pages visited & txns completed) to get funnel clickthrough & overall conversion; after this, RIGHT JOIN to the VisitData table

    • The transaction table has 500k relevant rows or less for every run of this process, but
    • The VisitData table has 3MM or more rows per day, so if I run this weekly it ends up in a 25MM-row batch.
  2. To get overall visit info for converting and non-converting visits into one summary table, as mentioned above, I have to:

    1. RIGHT JOIN the reasonably-sized Transactions CTE_T, 500k of ~2MM Txn table rows, to
    2. the Ridicularge Visits table V, ~90MM rows per partition
      • Since the Visits table is so huge, it is partitioned by date, and although it has indices for the keys I join on, SQL Server still does a table scan to attach the rest of the augmenting data (e.g. visitor location etc.) from the visit table V to the transactions info from CTE_T.

This takes a very long time, between 3-7 hours depending on how wide the timespan is (the Visit table is partitioned by month @ 90MM rows per partition), but never less than 2 hours JUST TO SCAN THE TABLE (!) before any query/join logic is applied or any other operation performed.

  • My question is this: is it possible to write a table-valued function that will return (@UserID,VisitData1,VisitData2,etc), allowing SQL Server to spool (I'm told) this in the background while allowing other things to occur in the meantime?
  • If the answer is YES, have you experienced significant (or even just noticeable) gains in performance using this method?
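As a hedged illustration of the construct being asked about (every name and column below is hypothetical, not from the poster's schema): an inline table-valued function simply wraps a parameterized SELECT. SQL Server expands it into the calling plan like a view, so by itself it does not add background spooling, but it can make the partition-eliminating predicate explicit:

```sql
-- Hypothetical inline TVF sketch; names/columns are illustrative only.
CREATE FUNCTION dbo.fn_VisitSlice (@FromDate date, @ToDate date)
RETURNS TABLE
AS
RETURN
    SELECT UserID, VisitData1, VisitData2
    FROM   dbo.VisitData
    WHERE  VisitDate >= @FromDate     -- aligns with the date partitioning,
    AND    VisitDate <  @ToDate;      -- so only the relevant partitions scan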

Join Calendar Table

Posted: 13 Mar 2013 02:43 PM PDT

I have a Datatable and a Calendartable. I join these tables to get sequential dates even if there is no data for a given day.

My problem now is that there are several ids, and I need a count for each one per day. The problem is that I get a NULL value for the id field if there is no data for that date. Also, my query below only works if there is one id in the table; if I add a second id, the empty rows are not shown, because they exist for the other id.

My sample tables are:

    --Datatable
    CREATE TABLE [dbo].[datatable](
        [the_id] int,
        [the_date] date,
        [the_count] int NULL
    ) ON [PRIMARY]

    INSERT INTO datatable (the_id, the_date, the_count) VALUES
    (1, dateadd(d, -5, getdate()), 37),
    (1, dateadd(d, -5, getdate()), 30),
    (1, dateadd(d, -5, getdate()), 70),
    (1, dateadd(d, -4, getdate()), 8),
    (1, dateadd(d, -4, getdate()), 9),
    (1, dateadd(d, -2, getdate()), 19),
    (1, dateadd(d, -2, getdate()), 3),
    (1, dateadd(d, -1, getdate()), 20)

    INSERT INTO datatable (the_id, the_date, the_count) VALUES
    (2, dateadd(d, -5, getdate()), 27),
    (2, dateadd(d, -5, getdate()), 17),
    (2, dateadd(d, -5, getdate()), 37),
    (2, dateadd(d, -3, getdate()), 8),
    (2, dateadd(d, -3, getdate()), 89),
    (2, dateadd(d, -2, getdate()), 19),
    (2, dateadd(d, -2, getdate()), 9),
    (2, dateadd(d, -1, getdate()), 20),
    (2, dateadd(d, -1, getdate()), 2)

    --Calendartable
    CREATE TABLE [dbo].[calendartable](
        [the_day] date
    )

    DECLARE @StartDate date
    DECLARE @EndDate date
    SET @StartDate = DATEADD(d, -10, GETDATE())
    SET @EndDate = DATEADD(d, 10, GETDATE())

    WHILE @StartDate <= @EndDate
    BEGIN
        INSERT INTO [calendartable] (the_day)
        SELECT @StartDate
        SET @StartDate = DATEADD(dd, 1, @StartDate)
    END

My query:

    --Query
    DECLARE @mindate date
    DECLARE @maxdate date
    SELECT @mindate = MIN(CONVERT(date, the_date)),
           @maxdate = MAX(CONVERT(date, the_date))
    FROM datatable

    SELECT dt.the_id, isnull(dt.the_date, ct.the_day),
           (SELECT SUM(the_count) WHERE the_id = dt.the_id and the_date = dt.the_date)
           as the_sum_count
    FROM calendartable AS ct
    LEFT JOIN datatable AS dt
        ON dt.the_date = ct.the_day
        AND ct.the_day BETWEEN @mindate AND @maxdate
    WHERE ct.the_day BETWEEN @mindate AND @maxdate
    GROUP BY dt.the_id, dt.the_date, ct.the_day
    ORDER BY dt.the_id, dt.the_date ASC

This query shows the row with no data on that day, but I also need the id to which that date belongs. And the query shows no empty rows at all if there is a second id in the datatable.

Please help!

EDIT:

To be more explicit which output I want:

    ID  the_date    the_count
    1   2013-03-08  137
    1   2013-03-09  17
    1   2013-03-10  null  <--- this is missing in the above query
    1   2013-03-11  22
    1   2013-03-12  20
    2   2013-03-08  81
    2   2013-03-09  null  <--- this is also missing
    2   2013-03-10  97
    2   2013-03-11  28
    2   2013-03-12  22

I hope my question is clearer now!
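One common way to fix this (a hedged sketch, untested against the poster's data): build every (id, day) pair first with a CROSS JOIN, then LEFT JOIN the data onto those pairs, so the missing days survive for every id. It reuses the @mindate/@maxdate variables from the query above:

```sql
-- Sketch: generate all (id, day) combinations, then attach the data.
SELECT ids.the_id,
       ct.the_day,
       SUM(dt.the_count) AS the_sum_count   -- NULL where no rows exist
FROM calendartable AS ct
CROSS JOIN (SELECT DISTINCT the_id FROM datatable) AS ids
LEFT JOIN datatable AS dt
       ON  dt.the_id = ids.the_id
       AND CONVERT(date, dt.the_date) = ct.the_day
WHERE ct.the_day BETWEEN @mindate AND @maxdate
GROUP BY ids.the_id, ct.the_day
ORDER BY ids.the_id, ct.the_day;
```

The key difference from the original query is that the id comes from the CROSS JOIN side rather than from the nullable data side, so it is never NULL.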

1286 - Unknown storage engine 'InnoDB'

Posted: 13 Mar 2013 01:38 PM PDT

I am trying to use Roundcube, and it recently broke. I don't know whether this is due to a recent MySQL update, but in phpMyAdmin I get the following error if I try to view a table:

1286 - Unknown storage engine 'InnoDB'  

and

mysql> SHOW ENGINES;
+--------------------+---------+----------------------------------------------------------------+--------------+------+------------+
| Engine             | Support | Comment                                                        | Transactions | XA   | Savepoints |
+--------------------+---------+----------------------------------------------------------------+--------------+------+------------+
| MRG_MYISAM         | YES     | Collection of identical MyISAM tables                          | NO           | NO   | NO         |
| CSV                | YES     | CSV storage engine                                             | NO           | NO   | NO         |
| MyISAM             | DEFAULT | MyISAM storage engine                                          | NO           | NO   | NO         |
| BLACKHOLE          | YES     | /dev/null storage engine (anything you write to it disappears) | NO           | NO   | NO         |
| FEDERATED          | NO      | Federated MySQL storage engine                                 | NULL         | NULL | NULL       |
| PERFORMANCE_SCHEMA | YES     | Performance Schema                                             | NO           | NO   | NO         |
| ARCHIVE            | YES     | Archive storage engine                                         | NO           | NO   | NO         |
| MEMORY             | YES     | Hash based, stored in memory, useful for temporary tables      | NO           | NO   | NO         |
+--------------------+---------+----------------------------------------------------------------+--------------+------+------------+
8 rows in set (0.00 sec)

and

[mysqld]
default-storage-engine=MyISAM
local-infile=0
symbolic-links=0
skip-networking
max_connections = 500
max_user_connections = 20
key_buffer = 512M
myisam_sort_buffer_size = 64M
join_buffer_size = 64M
read_buffer_size = 12M
sort_buffer_size = 12M
read_rnd_buffer_size = 12M
table_cache = 2048
thread_cache_size = 16K
wait_timeout = 30
connect_timeout = 15
tmp_table_size = 64M
max_heap_table_size = 64M
max_allowed_packet = 64M
max_connect_errors = 10
query_cache_limit = 1M
query_cache_size = 64M
query_cache_type = 1
low_priority_updates=1
concurrent_insert=ALWAYS
log-error=/var/log/mysql/error.log
tmpdir=/home/mysqltmp
myisam_repair_threads=4

[mysqld_safe]
open_files_limit = 8192
log-error=/var/log/mysql/error.log

[mysqldump]
quick
max_allowed_packet = 512M

[myisamchk]
key_buffer = 64M
sort_buffer = 64M
read_buffer = 16M
write_buffer = 16M

Any ideas how to fix this? It used to work just fine.
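A hedged observation on the config above: there is no skip-innodb line, yet InnoDB is absent from SHOW ENGINES. On MySQL 5.1/5.5 that usually means the engine failed to initialize at startup (a common cause is an innodb_log_file_size setting that no longer matches the existing ib_logfile* files); the MySQL error log at /var/log/mysql/error.log should say exactly why. A minimal sketch of the re-enabling side, values illustrative:

```
[mysqld]
# 1. make sure no skip-innodb / skip_innodb line exists anywhere in my.cnf
# 2. if the error log complains about log file size, either restore the old
#    innodb_log_file_size value or (after a clean shutdown) move the old
#    ib_logfile0 / ib_logfile1 aside so InnoDB recreates them
innodb_buffer_pool_size = 256M   # illustrative
```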

Why is SQL running the same query longer from another connection?

Posted: 13 Mar 2013 03:29 PM PDT

Here is the issue overview: why does my stored procedure run faster when executed locally than remotely?

Don't jump to any conclusions just yet; let me explain what I mean...

Here is the setup:
A Windows 2008 R2 (correction: 2003) application server executes a stored procedure that performs some action (what it is doing is not really important at this point in time). This stored procedure is executed on the SQL Server over a TCP/IP connection to the database server. The DB server is physically located right next to the application server, and they are connected to each other via 1 GB NICs through a 1 GB switch. The DB server is running SQL Server 2005 SP2 Enterprise Edition and has 16 GB of memory and several vLUNs striped across 48 15k drives in an HP EVA FC-connected SAN. From all indicators thus far, no I/O, memory, or CPU constraints or limits are being hit. Trace flag 1118 is on, and tempdb is split across 8 files on their own vLUN. Data and TLogs also have their own vLUNs.

So, here is what I am seeing:
Using SQLCMD on the database server, with SQL Profiler running from the same DB server, I can execute the stored procedure and I see that execution starts immediately and completes with a duration of about 2,100 ms and an I/O of about 1,200.

Using SQLCMD on the application server, with SQL Profiler running from the DB server, I can execute the same exact stored procedure, with the SAME exact parameters, and I see that execution starts immediately and completes with a duration of about 110,000 ms and an I/O of about 1,200.

The query results in 1 row, with 4 columns [INT, INT, VARCHAR(50), VARCHAR(100)]

ASIDE: I know the query is a train wreck. This is a regulated system and I cannot change it on a live production server, so please don't make any suggestions about doing so. The next version has been rewritten to be better.

From everything we can see, there is no reason we should be seeing differences like this, but what is happening is that the .NET application that calls this query from the application server is timing out waiting for the response.

We have checked locking and blocking, wait states, query plans, I/O contention, CPU contention, memory contention, and network saturation/utilization; performed index rebuilds on all indexes; updated all stats; and checked a handful of other items, but haven't come up with anything that points to why this is happening.

Please ask more questions if you have any, and make recommendations as you come up with them; depending on the impact (remember, this is a production environment), we will try them and respond back.

-Thanks! -Chris

Can't add field that begins with an underscore in Crystal Reports

Posted: 13 Mar 2013 12:28 PM PDT

There's a field in a FoxPro table that I'm trying to pull into a Crystal Report (v10). It begins with an _, and it doesn't appear in Field Explorer->Fields->Database Fields->->_fieldName

Are fields that start with _ invisible to Crystal Reports?

Troubleshooting Database Mail issues and Service Account permissions in SQL Server

Posted: 13 Mar 2013 02:08 PM PDT

I am having difficulties troubleshooting a Database Mail issue. When I try to stop and start the service using sysmail_start_sp and sysmail_stop_sp, I see a critical error in the Administrative Logs on the server.

The relevant exception appears to be:

Message: Access to the path 'C:\Windows\Microsoft.NET\Framework64\v2.0.50727\Config\machine.config' is denied.

I am not finding good documentation on which Windows account is used for Database Mail. I thought it would be the SQL Agent service account. I have tried changing the permissions for this user, to no avail.

Any guidance is appreciated.

Handling concurrent access to a key table without deadlocks in SQL Server

Posted: 13 Mar 2013 12:57 PM PDT

I have a table that is used by a legacy application as a substitute for IDENTITY fields in various other tables.

Each row in the table stores the last used ID LastID for the field named in IDName.

Occasionally the stored proc hits a deadlock. I believe I've built an appropriate error handler; however, I'm interested to see whether this methodology works as I think it does, or whether I'm barking up the wrong tree here.

I'm fairly certain there should be a way to access this table without any deadlocks at all.

The database itself is configured with READ_COMMITTED_SNAPSHOT = 1.

First, here is the table:

CREATE TABLE [dbo].[tblIDs](
    [IDListID] [int] IDENTITY(1,1) NOT NULL CONSTRAINT PK_tblIDs PRIMARY KEY CLUSTERED,
    [IDName] [nvarchar](255) NULL,
    [LastID] [int] NULL
);

And the nonclustered index on the IDName field:

CREATE NONCLUSTERED INDEX [IX_tblIDs_IDName] ON [dbo].[tblIDs]
(
    [IDName] ASC
)
WITH (
    PAD_INDEX = OFF
    , STATISTICS_NORECOMPUTE = OFF
    , SORT_IN_TEMPDB = OFF
    , DROP_EXISTING = OFF
    , ONLINE = OFF
    , ALLOW_ROW_LOCKS = ON
    , ALLOW_PAGE_LOCKS = ON
    , FILLFACTOR = 80
);

GO

Some sample data:

INSERT INTO tblIDs (IDName, LastID) VALUES ('SomeTestID', 1);
INSERT INTO tblIDs (IDName, LastID) VALUES ('SomeOtherTestID', 1);
GO

The stored procedure used to update the values stored in the table, and return the next ID:

CREATE PROCEDURE [dbo].[GetNextID](
    @IDName nvarchar(255)
)
AS
BEGIN
    /*
        Description:    Increments and returns the LastID value from tblIDs
                        for a given IDName
        Author:         Max Vernon
        Date:           2012-07-19
    */

    DECLARE @Retry int;
    DECLARE @EN int, @ES int, @ET int;
    SET @Retry = 5;
    DECLARE @NewID int;
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    SET NOCOUNT ON;
    WHILE @Retry > 0
    BEGIN
        BEGIN TRY
            BEGIN TRANSACTION;
            SET @NewID = COALESCE((SELECT LastID FROM tblIDs WHERE IDName = @IDName),0)+1;
            IF (SELECT COUNT(IDName) FROM tblIDs WHERE IDName = @IDName) = 0
                INSERT INTO tblIDs (IDName, LastID) VALUES (@IDName, @NewID)
            ELSE
                UPDATE tblIDs SET LastID = @NewID WHERE IDName = @IDName;
            COMMIT TRANSACTION;
            SET @Retry = -2; /* no need to retry since the operation completed */
        END TRY
        BEGIN CATCH
            IF (ERROR_NUMBER() = 1205) /* DEADLOCK */
                SET @Retry = @Retry - 1;
            ELSE
            BEGIN
                SET @Retry = -1;
                SET @EN = ERROR_NUMBER();
                SET @ES = ERROR_SEVERITY();
                SET @ET = ERROR_STATE()
                RAISERROR (@EN,@ES,@ET);
            END
            ROLLBACK TRANSACTION;
        END CATCH
    END
    IF @Retry = 0 /* must have deadlock'd 5 times. */
    BEGIN
        SET @EN = 1205;
        SET @ES = 13;
        SET @ET = 1
        RAISERROR (@EN,@ES,@ET);
    END
    ELSE
        SELECT @NewID AS NewID;
END
GO

Sample executions of the stored proc:

EXEC GetNextID 'SomeTestID';

NewID
2

EXEC GetNextID 'SomeTestID';

NewID
3

EXEC GetNextID 'SomeOtherTestID';

NewID
2

EDIT:

I've added a new index, since the existing index IX_tblIDs_IDName is not being used by the SP; I assume the query processor was using the clustered index because it needs the value stored in LastID. In any case, this new index IS used by the actual execution plan:

CREATE NONCLUSTERED INDEX IX_tblIDs_IDName_LastID ON dbo.tblIDs
(
    IDName ASC
)
INCLUDE
(
    LastID
)
WITH (FILLFACTOR = 100, ONLINE=ON, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON);

EDIT #2:

I've taken the advice that @AaronBertrand gave and modified it slightly. The general idea here is to refine the statement to eliminate unnecessary locking, and overall to make the SP more efficient.

The code below replaces the code above from BEGIN TRANSACTION to END TRANSACTION:

BEGIN TRANSACTION;
SET @NewID = COALESCE((SELECT LastID FROM dbo.tblIDs WHERE IDName = @IDName), 0) + 1;

IF @NewID = 1
    INSERT INTO tblIDs (IDName, LastID) VALUES (@IDName, @NewID);
ELSE
    UPDATE dbo.tblIDs SET LastID = @NewID WHERE IDName = @IDName;

COMMIT TRANSACTION;

Since our code never adds a record to this table with 0 in LastID, we can assume that if @NewID is 1 the intention is to append a new ID to the list; otherwise, we are updating an existing row in the list.
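A hedged alternative sketch for the common case: collapsing the read and the write into a single atomic statement removes the read-then-write window in which two sessions can interleave, which is where these deadlocks usually originate. This is not the poster's code, just one standard pattern:

```sql
-- Single-statement increment: the UPDATE both bumps LastID and captures
-- the new value, under one set of locks.
DECLARE @NewID int;

UPDATE dbo.tblIDs WITH (UPDLOCK, HOLDLOCK, ROWLOCK)
SET    @NewID = LastID = LastID + 1     -- compound assignment: new value
WHERE  IDName = @IDName;                -- lands in both the row and @NewID

IF @@ROWCOUNT = 0   -- name never seen before: seed it at 1
BEGIN
    SET @NewID = 1;
    INSERT INTO dbo.tblIDs (IDName, LastID) VALUES (@IDName, @NewID);
END

SELECT @NewID AS NewID;
```

For brand-new names, two concurrent sessions could still both take the INSERT branch; a unique constraint on IDName (with a retry on violation) or a MERGE ... WITH (HOLDLOCK) closes that last gap.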

SQL Server cluster questions please

Posted: 13 Mar 2013 02:13 PM PDT

We have a SQL Server 2008 R2 active/active/active cluster (nodes a, b, c), and three instances run in this environment.

We ran a failover test on one of the instances.

  1. Manual failover from node c to b. After 3 minutes, the application cannot connect to SQL Server.
  2. Back to node c we are OK and application is happy
  3. Manual failover from node c to a. After 3 minutes, the application cannot connect to SQL Server.
  4. Back to node c we are OK and application is happy

Please give me some help. Thanks

Oracle null value in quotes - yet another newbie question

Posted: 13 Mar 2013 01:34 PM PDT

I have a where clause like

"and uc.completion_status in ('[P....]') "   

Here ('[P....]') is the user input, taken from a dropdown list in the user interface. The dropdown list consists of three options: Y, N, or NotStarted.

"Not started" is defined with nvl(uc.completion_status, 'NotStarted'); therefore NotStarted is actually NULL, and a NULL in quotes does not match anything. How can this problem be handled?
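One hedged sketch of the usual fix: apply the same NVL() on the column side of the filter, so the NotStarted choice compares against the substituted value rather than against NULL (the :status bind below is a hypothetical stand-in for the dropdown selection):

```sql
-- NULL never equals anything, so substitute first, then compare.
AND NVL(uc.completion_status, 'NotStarted') IN (:status)
```

With this form, :status = 'NotStarted' matches the NULL rows, while 'Y' and 'N' behave exactly as before.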

thanks in advance.

Can I move up rows in the memory itself?

Posted: 13 Mar 2013 08:55 PM PDT

Let's say I have the following table:

id         col2     col3   1          1        1   2          2        2   3          3        3     4          4        4  

And I'm trying to move the rows up by one id as a set, which will result in:

 id         col2     col3   1          2        2   2          3        3   3          4        4     4          null     null  

And I don't mean in the SELECT, but actually transferring the rows up the table.

Is there a simple way to do that?

The idea is to enter a number of rows (or a range of ids), which will create some kind of "padding" for the rows and move them backwards in storage.

Similar to the option of maintaining a gapless id when you remove rows, I'm trying to achieve the same thing for the data itself. So in the above example, I'm trying to "delete" one row.

Is this possible with some kind of built-in option, or do I have to write a query that implements the logic?
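For the value-shifting part (not the physical-storage part, which relational engines don't expose), a hedged sketch in SQL Server syntax; the table name is hypothetical, and physical row placement on disk is not affected, only the stored values:

```sql
-- Copy each row's values from the row with the next id, "shifting" the
-- data up one slot; the highest id is left with NULLs because the LEFT
-- JOIN finds no id+1 partner for it.
UPDATE t
SET    t.col2 = nxt.col2,
       t.col3 = nxt.col3
FROM   dbo.thetable AS t
LEFT JOIN dbo.thetable AS nxt
       ON nxt.id = t.id + 1;
```

Generalizing to "delete N rows" means joining on `nxt.id = t.id + @N` instead of `+ 1`.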

Alternative tools to export Oracle database to SQL Server?

Posted: 13 Mar 2013 02:41 PM PDT

I've got an Oracle database that I need to export (schema and data) to SQL Server.

I am trying the Microsoft SQL Server Migration Assistant for Oracle, but it is horribly slow, grossly inefficient and very un-user-friendly, e.g. I was having problems connecting to the SQL Server DB during data migration - but it still spent ~5 minutes preparing all the data before attempting a connection to SQL Server, then when it failed, the 5 minutes of preparatory work were wasted.

Right now, I'm just trying to connect to another Oracle DB using this tool, I left it overnight and came back this morning, and it's still stuck on 19% of "Loading objects..." And this is on a machine with a good 18GB RAM, of which maybe 8.5 GB currently in use. Task Manager shows me that SSMAforOracle.exe is using 0 CPU, 0 PF Delta, and no change whatsoever in memory usage. In other words: frozen stiff. Absolutely intolerable.

Are there any other tools out there that can migrate an Oracle DB to SQL Server a little more efficiently?

unique constraint violated (ORA-00001) in concurrent insert

Posted: 13 Mar 2013 08:46 PM PDT

I have a procedure that is called from concurrent transactions:

-- Some actions here

INSERT INTO table1
    (table1_id, table1_val1, table1_val2, table1_val3)
VALUES
    (gettablenewid('TABLE1'), val1, val2, val3);

INSERT INTO table1
    (table1_id, table1_val1, table1_val2, table1_val3)
VALUES
    (gettablenewid('TABLE1'), val1, val2, val3);

INSERT INTO table1
    (table1_id, table1_val1, table1_val2, table1_val3)
VALUES
    (gettablenewid('TABLE1'), val1, val2, val3);

-- some other actions

Function gettablenewid code (id_table stores PKs for each table):

create or replace
function GetTableNewId(tablename in varchar2)
  return number is
  PRAGMA AUTONOMOUS_TRANSACTION;
  Result number;
  cursor c1 is SELECT ig.id_value+1 id_new
                 FROM id_table ig
                WHERE ig.table_identifier = tablename
                FOR UPDATE of ig.id_value;
begin
  for c1_rec in c1 loop
    UPDATE id_table ig
       SET ig.id_value = c1_rec.id_new
     WHERE current of c1;
    Result := c1_rec.id_new;
  end loop;
  commit;
  return(Result);
end GetTableNewId;

Occasionally an insert statement fails with ORA-00001 on table1_id, and I can't understand why this happens.
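As a hedged alternative to hand-rolled id tables (sequence name and starting value below are illustrative, not from the poster's schema): an Oracle sequence is atomic across sessions, never blocks, and cannot hand out the same value twice, which sidesteps this whole class of problem:

```sql
-- Seed the sequence above the current maximum id before switching over.
CREATE SEQUENCE table1_seq START WITH 1000 INCREMENT BY 1 CACHE 20;

INSERT INTO table1 (table1_id, table1_val1, table1_val2, table1_val3)
VALUES (table1_seq.NEXTVAL, val1, val2, val3);
```

The trade-off is that sequences can leave gaps (on rollback or instance restart); if gapless ids are a hard requirement, the id-table approach has to serialize, which is exactly what the FOR UPDATE cursor is attempting.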

SQL Server deadlock too long [closed]

Posted: 13 Mar 2013 01:34 PM PDT

I have a really weird problem with a deadlock between a business process and a monitoring job. The two were involved in a deadlock that lasted around 5.5 hours, and I don't know why SQL Server took so long to identify it. Here is the fragment of the errorlog file:

Process ID 103 was killed by hostname Server1, host process ID 9908.
2013-03-11 07:38:11.33 spid16s     deadlock-list
2013-03-11 07:38:11.33 spid16s      deadlock victim=process3c1dc18
2013-03-11 07:38:11.33 spid16s       process-list
2013-03-11 07:38:11.33 spid16s        process id=process3c1dc18 taskpriority=0 logused=0 waitresource=OBJECT: 10:2073058421:2  waittime=19049468 ownerId=484658244 transactionname=WstrObjnameI4I4 lasttranstarted=2013-03-11T02:20:37.643 XDES=0x2a2ac81c8 lockMode=Sch-S schedulerid=3 kpid=11148 status=suspended spid=62 sbid=0 ecid=0 priority=0 transcount=1 lastbatchstarted=2013-03-11T02:20:02.900 lastbatchcompleted=2013-03-11T02:20:02.897 clientapp=SQLAgent - TSQL JobStep (Job 0x05BCA8DF1ECB76448CAB7B7FBBDCF12C : Step 1) hostname=CHQSQL10 hostpid=3180 loginname=Domain\SQLUser isolationlevel=read committed (2) xactid=484655352 currentdb=10 lockTimeout=4294967295 clientoption1=673185824 clientoption2=128056
2013-03-11 07:38:11.33 spid16s         executionStack
2013-03-11 07:38:11.33 spid16s          frame procname=adhoc line=4 stmtstart=78 sqlhandle=0x020000002e6f9032eba1a214586a5ff0b388fbf56e7c1535
2013-03-11 07:38:11.33 spid16s     SELECT 'Mar 11 2013  2:20:02:900AM',
2013-03-11 07:38:11.33 spid16s         DB_NAME() DBName,
2013-03-11 07:38:11.33 spid16s         OBJECT_NAME(p.OBJECT_ID) BlockedObjectName,
2013-03-11 07:38:11.33 spid16s         tl.resource_type,
2013-03-11 07:38:11.33 spid16s         tl.request_mode,
2013-03-11 07:38:11.33 spid16s         cast(wt.wait_duration_ms / 60000.00 as decimal(10,2)) as [LockDuration],
2013-03-11 07:38:11.33 spid16s         tl.request_session_id,
2013-03-11 07:38:11.33 spid16s         se1.host_name as [request_Host],
2013-03-11 07:38:11.33 spid16s         se1.program_name as [request_Program],
2013-03-11 07:38:11.33 spid16s         se1.login_name as [request_Login],
2013-03-11 07:38:11.33 spid16s         h1.TEXT AS RequestingText,
2013-03-11 07:38:11.33 spid16s         wt.blocking_session_id,
2013-03-11 07:38:11.33 spid16s         se2.host_name as [blocking_Host],
2013-03-11 07:38:11.33 spid16s         se2.program_name as [blocking_Program],
2013-03-11 07:38:11.33 spid16s         se2.login_name as [blocking_Login],
2013-03-11 07:38:11.33 spid16s         h2.TEXT AS BlockingTest
2013-03-11 07:38:11.33 spid16s         FROM sys.dm_tran_locks AS tl
2013-03-11 07:38:11.33 spid16s         INNER JOIN sys.databases db ON db.database_id = tl.resource_database_id
2013-03-11 07:38:11.33 spid16s         INNER JOIN sys.dm_os_waiting_tasks AS wt ON tl.lock_owner_address = wt.resource_address
2013-03-11 07:38:11.33 spid16s         INNER JOIN sys.partitions AS p ON p.hobt_id = tl.resource_associated_entity_id
2013-03-11 07:38:11.33 spid16s         INNER JOIN sys.dm_exec_connections ec1 ON ec1.session_id = tl.request_session_id
2013-03-11 07:38:11.33 spid16s         INNER JOIN sys.dm_exec_connections ec2 ON ec2.se
2013-03-11 07:38:11.33 spid16s          frame procname=mssqlsystemresource.sys.sp_executesql line=1 sqlhandle=0x0400ff7fbe80662601000000000000000000000000000000
2013-03-11 07:38:11.33 spid16s     sp_executesql
2013-03-11 07:38:11.33 spid16s          frame procname=DBATools.dbo.sp_foreachdb line=72 stmtstart=5630 stmtend=5694 sqlhandle=0x03001e0020c96c7eba4db70070a100000100000000000000
2013-03-11 07:38:11.33 spid16s     EXEC sp_executesql @sql;
2013-03-11 07:38:11.33 spid16s          frame procname=DBATools.dbo.usp_CheckBlockingSessions line=62 stmtstart=5850 stmtend=6840 sqlhandle=0x03001e00ae80847cbc31b80070a100000100000000000000
2013-03-11 07:38:11.33 spid16s     INSERT INTO dbo.BlockingSessions
2013-03-11 07:38:11.33 spid16s         (GatherTime
2013-03-11 07:38:11.33 spid16s         ,DBName
2013-03-11 07:38:11.33 spid16s         ,BlockedObjectName
2013-03-11 07:38:11.33 spid16s         ,resource_type
2013-03-11 07:38:11.33 spid16s         ,request_mode
2013-03-11 07:38:11.33 spid16s         ,LockDuration
2013-03-11 07:38:11.33 spid16s         ,request_session_id
2013-03-11 07:38:11.33 spid16s         ,request_Host
2013-03-11 07:38:11.33 spid16s         ,request_Program
2013-03-11 07:38:11.33 spid16s         ,request_Login
2013-03-11 07:38:11.33 spid16s         ,RequestingText
2013-03-11 07:38:11.33 spid16s         ,blocking_session_id
2013-03-11 07:38:11.33 spid16s         ,blocking_Host
2013-03-11 07:38:11.33 spid16s         ,blocking_Program
2013-03-11 07:38:11.33 spid16s         ,blocking_Login
2013-03-11 07:38:11.33 spid16s         ,BlockingText)
2013-03-11 07:38:11.33 spid16s       EXEC dbo.sp_foreachdb  @command=@v_SQLCommand, @user_only=1, @compatibility_level=90
2013-03-11 07:38:11.33 spid16s       --EXEC sp_MSForEachDB @v_SQLCommand
2013-03-11 07:38:11.33 spid16s          frame procname=adhoc line=1 sqlhandle=0x01001e006da0d429b0e25a9e040000000000000000000000
2013-03-11 07:38:11.33 spid16s     EXEC dbo.usp_CheckBlockingSessions
2013-03-11 07:38:11.33 spid16s         inputbuf
2013-03-11 07:38:11.33 spid16s     EXEC dbo.usp_CheckBlockingSessions
2013-03-11 07:38:11.33 spid16s        process id=process3c769b8 taskpriority=0 logused=1088 waitresource=KEY: 10:327680 (00001df3833b) waittime=4656 ownerId=484617865 transactionname=TRUNCATE TABLE lasttranstarted=2013-03-11T02:05:07.180 XDES=0x4c89b55b0 lockMode=X schedulerid=12 kpid=13200 status=suspended spid=64 sbid=0 ecid=0 priority=0 transcount=1 lastbatchstarted=2013-03-11T02:05:07.180 lastbatchcompleted=2013-03-11T02:05:07.163 clientapp=Microsoft SQL Server hostname=Server1 hostpid=9248 loginname=Domain\SQLUser isolationlevel=read committed (2) xactid=484617865 currentdb=10 lockTimeout=4294967295 clientoption1=671088672 clientoption2=128056
2013-03-11 07:38:11.33 spid16s         executionStack
2013-03-11 07:38:11.33 spid16s          frame procname=adhoc line=1 stmtend=58 sqlhandle=0x01000a0037917905903bd7c8010000000000000000000000
2013-03-11 07:38:11.33 spid16s     truncate table dbo.Table1
2013-03-11 07:38:11.33 spid16s         inputbuf
2013-03-11 07:38:11.33 spid16s     truncate table dbo.Table1  truncate table dbo.Table2  truncate table dbo.Table3  truncate table dbo.Table4  truncate table d
2013-03-11 07:38:11.33 spid16s       resource-list
2013-03-11 07:38:11.33 spid16s        objectlock lockPartition=2 objid=2073058421 subresource=FULL dbid=10 objectname=DBStore.dbo.Table1 id=lock4a2309200 mode=Sch-M associatedObjectId=2073058421
2013-03-11 07:38:11.33 spid16s         owner-list
2013-03-11 07:38:11.33 spid16s          owner id=process3c769b8 mode=Sch-M
2013-03-11 07:38:11.33 spid16s         waiter-list
2013-03-11 07:38:11.33 spid16s          waiter id=process3c1dc18 mode=Sch-S requestType=wait
2013-03-11 07:38:11.33 spid16s        keylock hobtid=327680 dbid=10 objectname=UKStoreReporting .sys.sysrowsets indexname=clust id=lock4ad68f980 mode=U associatedObjectId=327680
2013-03-11 07:38:11.33 spid16s         owner-list
2013-03-11 07:38:11.33 spid16s          owner id=process3c1dc18 mode=S
2013-03-11 07:38:11.33 spid16s         waiter-list
2013-03-11 07:38:11.33 spid16s          waiter id=process3c769b8 mode=X requestType=convert

In the first place, I don't understand why the monitoring process acquires a lock on a user table when the query never mentions it. A second question: if system tables are involved in a deadlock, does SQL Server ignore it?

Here is full first query:

    SELECT
        DB_NAME() AS DBName,
        OBJECT_NAME(p.OBJECT_ID) AS BlockedObjectName,
        tl.resource_type,
        tl.request_mode,
        CAST(wt.wait_duration_ms / 60000.00 AS decimal(10,2)) AS [LockDuration],
        tl.request_session_id,
        se1.host_name AS [request_Host],
        se1.program_name AS [request_Program],
        se1.login_name AS [request_Login],
        h1.TEXT AS RequestingText,
        wt.blocking_session_id,
        se2.host_name AS [blocking_Host],
        se2.program_name AS [blocking_Program],
        se2.login_name AS [blocking_Login],
        h2.TEXT AS BlockingText  -- was aliased "BlockingTest"; the INSERT column list expects BlockingText
    FROM sys.dm_tran_locks AS tl
    INNER JOIN sys.databases db ON db.database_id = tl.resource_database_id
    INNER JOIN sys.dm_os_waiting_tasks AS wt ON tl.lock_owner_address = wt.resource_address
    INNER JOIN sys.partitions AS p ON p.hobt_id = tl.resource_associated_entity_id
    INNER JOIN sys.dm_exec_connections ec1 ON ec1.session_id = tl.request_session_id
    INNER JOIN sys.dm_exec_connections ec2 ON ec2.session_id = wt.blocking_session_id
    INNER JOIN sys.dm_exec_sessions se1 ON ec1.session_id = se1.session_id
    INNER JOIN sys.dm_exec_sessions se2 ON ec2.session_id = se2.session_id
    CROSS APPLY sys.dm_exec_sql_text(ec1.most_recent_sql_handle) AS h1
    CROSS APPLY sys.dm_exec_sql_text(ec2.most_recent_sql_handle) AS h2
    WHERE db.database_id = DB_ID()
      AND CAST(wt.wait_duration_ms / 60000.00 AS decimal(10,2)) >= 5

The TRUNCATE is one of a series of truncates all run in a single transaction, because it is a task in an SSIS package.

Finally, how can I avoid this problem? Thanks.
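One common way to keep a monitoring job from participating in deadlocks like this is to stop it from waiting on locks at all. The sketch below is one possible approach, not a verified fix for this exact setup: `usp_CheckBlockingSessions` is the procedure from the deadlock graph above, and the 1-second timeout is an arbitrary choice. READ UNCOMMITTED may not remove every lock the DMV joins take, so the `SET LOCK_TIMEOUT` plus the skip-on-timeout logic is the more reliable part of this sketch.

    -- Give up after 1 second instead of queueing behind a TRUNCATE's Sch-M request
    SET LOCK_TIMEOUT 1000;
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

    BEGIN TRY
        EXEC dbo.usp_CheckBlockingSessions;
    END TRY
    BEGIN CATCH
        IF ERROR_NUMBER() = 1222  -- "Lock request time out period exceeded": skip this cycle
            PRINT 'Monitoring skipped this cycle: lock timeout';
        ELSE
        BEGIN
            -- Re-raise anything else (RAISERROR, since THROW needs SQL Server 2012+)
            DECLARE @msg nvarchar(2048) = ERROR_MESSAGE();
            RAISERROR(@msg, 16, 1);
        END
    END CATCH;

The idea is that a monitoring query that refuses to wait can never be the second participant in a lock cycle; at worst it skips one collection interval.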

Named Pipe Provider Error code 40

Posted: 13 Mar 2013 02:50 PM PDT

I have literally tried everything, from enabling named pipes, to adding exceptions for the ports in the firewall, to every option in the Surface Area Configuration. I can connect to the SQL instance (using both TCP and named pipes) with SQL Server Management Studio, but sqlcmd throws an error:

Login timeout expired  

Help!
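When SSMS connects but sqlcmd does not, one useful step is forcing the protocol and address explicitly, which rules out name-resolution and protocol-ordering differences between the two clients. A sketch (ServerName and port 1433 below are placeholders for your environment):

    rem Force TCP with an explicit port:
    sqlcmd -S tcp:ServerName,1433 -E -Q "SELECT @@SERVERNAME"

    rem Force named pipes (default-instance pipe path shown):
    sqlcmd -S np:\\ServerName\pipe\sql\query -E -Q "SELECT @@SERVERNAME"

If one prefix works and the other times out, that narrows the problem to a single protocol rather than to the instance itself.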

MySQL auto increment problem with deleting rows / archive table

Posted: 13 Mar 2013 05:39 PM PDT

A hosted server is running "maintenance" each weekend. I am not privy to the details.

In a database on this server there is a MyISAM table. This table never holds more than 1000 rows, and usually far fewer. It is MyISAM so that the auto increment does not reset (and with so few rows the engine choice really doesn't matter otherwise). Rows are regularly deleted from this table and moved to an archive table (about 1M rows).

The problem is lately the auto increment has "rolled back" slightly after each maintenance.

Is there any easy way to verify the auto increment of the insert table by reading the max id from both the insert and the archive table?

I'd rather not verify before each insert unless that is the only solution.

Here are the basic table layouts:

    CREATE TABLE x (
        xid int(10) unsigned NOT NULL AUTO_INCREMENT,
        //snip
        PRIMARY KEY (xid)
    ) ENGINE=MyISAM AUTO_INCREMENT=124 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

    CREATE TABLE xhistory (
        xid int(10) unsigned NOT NULL DEFAULT '0',
        //snip
        PRIMARY KEY (xid)
    ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

A far-from-perfect workaround (this was somewhat urgent; I had to manually update over 100 rows):

select xid from xhistory where x=?  

Check whether the just-inserted row in x already exists in history. If it does:

select greatest(max(x.xid),max(xhistory.xid)) as newval from x,xhistory  

Find a new id.

INSERT INTO x SELECT * FROM x AS iv WHERE iv.xid=? ON DUPLICATE KEY UPDATE xid=?  

And update our row with this id.
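The per-insert checks above could perhaps be replaced by a single repair run after each weekend maintenance. The sketch below bumps the insert table's AUTO_INCREMENT above the highest id seen in either table; it assumes xid is the only column involved, and relies on MySQL silently raising AUTO_INCREMENT to MAX(xid)+1 if you try to set it too low, which makes the statement safe to run even when nothing has drifted. A prepared statement is needed because ALTER TABLE does not accept a user variable directly.

    SELECT GREATEST(
             (SELECT COALESCE(MAX(xid), 0) FROM x),
             (SELECT COALESCE(MAX(xid), 0) FROM xhistory)
           ) + 1
    INTO @next_id;

    SET @ddl = CONCAT('ALTER TABLE x AUTO_INCREMENT = ', @next_id);
    PREPARE stmt FROM @ddl;
    EXECUTE stmt;
    DEALLOCATE PREPARE stmt;

Scheduled right after the maintenance window, this would remove the need to verify before each insert.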

Cannot generate reports from SQL Management Data Warehouse

Posted: 13 Mar 2013 02:53 PM PDT

I'm running SQL Server 2008 R2 and have installed the MDW on one server and have a Data Collector collecting and uploading the server activity, query results, and Disk activity data to the MDW. When I select any of the reports from the MDW with Data Collection > Reports > Management Data Warehouse I receive the error:

Exception has been thrown by the target of an invocation - > Object reference not set to an instance of an object.

This occurs for all 3 reports, even after I've waited some time and data has been uploaded from the data collector. I do not have SSRS running, but I've read that it isn't necessary.

Any suggestions?

How to prevent SQL Server service account from registering / deregistering SPN?

Posted: 13 Mar 2013 12:48 PM PDT

The service account is a domain account. It is not a domain admin (nor is it a member of any group that is). It has neither the "Write servicePrincipalName" nor the "Write public information" permission (nor is it a member of a group with those permissions). Yet it is still able to register and deregister its SPN at startup and shutdown.

What permission am I missing?
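To track down where the permission is coming from, it may help to list what the account has actually registered and then dump the ACL on the account object itself (the account name and DN below are placeholders; an inherited or default entry such as a validated write on SELF is the kind of thing to look for):

    rem List the SPNs currently registered for the service account:
    setspn -L DOMAIN\sqlsvc

    rem Dump the ACL on the account object and look for SPN-related entries:
    dsacls "CN=sqlsvc,OU=Service Accounts,DC=example,DC=com" | findstr /i serviceprincipalname

Comparing that output against the permissions you believe you removed should show which grant is still in effect.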

Bulk insert into SQL Server from VMWare guest using distributed switch

Posted: 13 Mar 2013 02:42 PM PDT

This is most likely not a SQL Server issue, but the setup seems to affect only BULK INSERTs into SQL Server.

We have recently moved VM Hardware and all the guests that were moved had their virtual switches changed from standard to distributed.

I then started receiving

A fatal error occurred while reading the input stream from the network. The session will be terminated (input error: 64, output error: 0)

on two SQL Servers during BULK INSERT operations. One of the SQL Servers was a VM with the new configuration and the other was a physical server. Both BULK INSERT operations originated from a VM with the new configuration. The BULK INSERTs did not fail every time; when they did, it was random.

When we changed the virtual switch back from a distributed switch to a standard switch, the issue went away.

I am looking for an explanation of why it doesn't work with a distributed switch rather than a resolution. My guess would be that the BULK INSERT operation is serial and that, with a distributed switch, the packets are routed through different hosts, some of which may be busier than others, and arrive at the destination server beyond some latency threshold. (Note: there is nothing in the Windows event log at the times of the errors on either the source or destination server.)

Empty LONGTEXT or NULL?

Posted: 13 Mar 2013 06:39 PM PDT

In my MySQL DB I have one field called html_contents that contains all the HTML to be shown in a webpage. Obviously the HTML could be huge, and certainly bigger than 64KB, so I decided to use LONGTEXT rather than VARCHAR.

When the field is not set or is left empty, would you say it is better (i.e., more efficient for the DB) to set it to NULL or to an empty string?

I read this: When to use NULL and when to use an empty string? but it talks about empty strings in general (probably small strings and not LONGTEXT).

I was wondering whether LONGTEXT is a different story, i.e. whether using NULL instead of leaving empty LONGTEXT values around saves significant space or execution time.
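For what it's worth, the per-row cost of either choice is tiny: a NULL is recorded as a flag in the row's null bitmap, while an empty string still stores a zero-length value, so neither occupies anything like the space of real content. A quick way to see how the two cases are distributed in your own data (a sketch; `pages` is a placeholder for your actual table name):

    SELECT
        SUM(html_contents IS NULL)  AS null_rows,
        SUM(html_contents = '')     AS empty_rows,
        AVG(CHAR_LENGTH(COALESCE(html_contents, ''))) AS avg_chars
    FROM pages;

In practice the decision usually comes down to semantics ("no value" vs. "known-empty value") rather than storage.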
