Monday, April 1, 2013

IO Frozen in error SQL error log

Posted: 01 Apr 2013 07:38 PM PDT

The error log shows "I/O is frozen on database DBName", followed by a thaw message, yet my scheduled database backup task, which normally runs, hasn't been running since the network team started doing backups (not database backups, according to them) on the database server using ArcServ 16.5. They run their backup utility around 11:00 PM or later, and my maintenance backup runs at 2:00 AM. This is a SQL Server 2000 SP4 environment. I would like to gather as much evidence as possible to show them that their utility is causing my scheduled backups not to run.
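A hedged starting point for gathering that evidence: the msdb backup history records every backup taken against a database, including third-party VDI/snapshot backups, whose device name usually looks like a GUID rather than a file path. A sketch, assuming the history hasn't been purged (DBName is a placeholder):

SELECT bs.database_name,
       bs.backup_start_date,
       bs.type,                      -- D = full, I = differential, L = log
       bmf.physical_device_name      -- third-party VDI backups often show a GUID here
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf
    ON bs.media_set_id = bmf.media_set_id
WHERE bs.database_name = 'DBName'
ORDER BY bs.backup_start_date DESC;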

PostgreSQL output question

Posted: 01 Apr 2013 05:48 PM PDT

I am getting output one page at a time and have to press the spacebar after each page. I want to get rid of that. How can I, and which option do I have to enable?

Also, how can I automatically execute all the commands saved in a file when I log in to psql via the terminal?

Thanks in advance.
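A hedged sketch of the usual approach (the file and database names are placeholders):

\pset pager off                 -- inside psql: disable the pager for this session

psql -d mydb -f commands.sql    # from the shell: run every command saved in a file

# Alternatively, put the commands in ~/.psqlrc, which psql executes
# automatically at startup.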

Foreign Keys with Composite Key

Posted: 01 Apr 2013 05:26 PM PDT

Imagine I have a table with the following dependency:

(Name, BoughtFrom, TimeBought) --> Cost

where the three attributes (Name, BoughtFrom, TimeBought) form a composite primary key for the table. Now then, I want to link (form a relationship) from another table to this one. How do I do that? If I had an ID column I'd know what to do, but I've never come across a scenario like this.

Would I have to add all three columns (Name, BoughtFrom, TimeBought) to the other table? Or is there another way?
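For reference, a hedged sketch of what the composite relationship looks like in SQL; the table names (Purchases, Shipments) are illustrative, not from the question. The child table does need all three columns:

CREATE TABLE Purchases (
    Name        varchar(50)  NOT NULL,
    BoughtFrom  varchar(50)  NOT NULL,
    TimeBought  datetime     NOT NULL,
    Cost        decimal(10,2),
    CONSTRAINT PK_Purchases PRIMARY KEY (Name, BoughtFrom, TimeBought)
);

CREATE TABLE Shipments (
    ShipmentID  int IDENTITY(1,1) PRIMARY KEY,
    Name        varchar(50)  NOT NULL,
    BoughtFrom  varchar(50)  NOT NULL,
    TimeBought  datetime     NOT NULL,
    -- a composite FK must reference every column of the composite PK
    CONSTRAINT FK_Shipments_Purchases
        FOREIGN KEY (Name, BoughtFrom, TimeBought)
        REFERENCES Purchases (Name, BoughtFrom, TimeBought)
);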

Find out number of active locks on a table

Posted: 01 Apr 2013 05:13 PM PDT

I was wondering if there is a way to find out the number of active locks held on a table in SQL Server?
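A hedged sketch against sys.dm_tran_locks (SQL Server 2005+); dbo.MyTable is a placeholder. Object-level locks carry the object ID directly, while page/row/key locks carry a hobt ID that must be mapped through sys.partitions:

SELECT COUNT(*) AS ActiveLocks
FROM sys.dm_tran_locks AS l
LEFT JOIN sys.partitions AS p
    ON l.resource_associated_entity_id = p.hobt_id
WHERE l.resource_database_id = DB_ID()
  AND (l.resource_associated_entity_id = OBJECT_ID('dbo.MyTable')  -- OBJECT locks
       OR p.object_id = OBJECT_ID('dbo.MyTable'));                 -- PAGE/KEY/RID locks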

SSIS Expressions Date Format get just YY for year

Posted: 01 Apr 2013 02:28 PM PDT

I found many articles on date-formatting expressions in SSIS, but I couldn't find one that specified how to get just the two-digit year, such as 13 for 2013 or 12 for 2012. I tried

DATAPART("yy",GETDATE())  

but it gives me the same result as

DATAPART("yyyy",GETDATE())  

I was thinking of a dirty/hack fix which would be something like

DATAPART("yyyy",GETDATE()) - 2000   

but it just doesn't feel right... what if someone runs my package 1,000 years from now? Any suggestions?

I am trying to add a timestamp to a file name, so today's file should be Auth040113.txt. I am creating the file name in a variable using expressions.
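A hedged sketch of the usual workaround: SSIS has no two-digit-year token, so cast the four-digit year to a string and take the rightmost two characters. Building the whole name for April 1, 2013 (Auth040113.txt), with zero-padded month and day:

"Auth"
+ RIGHT("0" + (DT_WSTR, 2) DATEPART("mm", GETDATE()), 2)
+ RIGHT("0" + (DT_WSTR, 2) DATEPART("dd", GETDATE()), 2)
+ RIGHT((DT_WSTR, 4) DATEPART("yyyy", GETDATE()), 2)
+ ".txt"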

How can I determine what is using a particular resource (table, view, or function, etc) on my SQL Server?

Posted: 01 Apr 2013 02:34 PM PDT

I've been looking for tables that could use better indexing. I've created a stored procedure that provides a list of tables that have no indexes other than a primary key:

CREATE PROCEDURE [dbo].[GetTablesWithPoorIndexing]
(
    @Database sysname
)
AS
BEGIN
    DECLARE @TableList TABLE (
        TableName sysname
    );
    DECLARE @IndexList TABLE (
        ObjectName sysname,
        IndexName sysname NULL,
        user_seeks int,
        user_scans int,
        user_lookups int,
        user_updates int,
        last_user_seek datetime,
        last_user_scan datetime,
        last_user_lookup datetime,
        last_user_update datetime
    );
    INSERT INTO @TableList
    EXEC GetTablesWithoutIndexes @Database;
    INSERT INTO @IndexList
    EXEC dbo.GetIndexStats @Database;

    SELECT TL.TableName
        , IL.user_lookups + IL.user_scans + IL.user_seeks AS ReadActivity
        , IL.user_updates AS WriteActivity
    FROM @TableList TL
        INNER JOIN @IndexList IL ON TL.TableName = IL.ObjectName
    ORDER BY 2 DESC, 3 DESC;
END

CREATE PROCEDURE [dbo].[GetTablesWithoutIndexes]
(
    @Database sysname = ''
)
AS
BEGIN
    SET NOCOUNT ON;
    IF @Database = '' SET @Database = DB_NAME(DB_ID());
    SET @Database = QUOTENAME(@Database);
    DECLARE @cmd nvarchar(max);
    SET @cmd = '';
    SET @cmd = '
SELECT o.name
FROM ' + @Database + '.sys.indexes i
    INNER JOIN ' + @Database + '.sys.dm_db_index_usage_stats ius
        ON i.index_id = ius.index_id AND i.object_id = ius.object_id
    INNER JOIN ' + @Database + '.sys.objects o
        ON i.object_id = o.object_id
GROUP BY o.name
HAVING COUNT(ius.index_id) <= 1;
';
    PRINT @cmd;
    EXEC sp_executesql @cmd;
END

CREATE PROCEDURE [dbo].[GetIndexStats]
(
    @Database sysname = ''
    , @Object sysname = ''
    , @IndexName sysname = ''
)
AS
BEGIN
    SET NOCOUNT ON;
    IF @Database = '' SET @Database = DB_NAME(DB_ID());
    DECLARE @cmd nvarchar(max);
    DECLARE @where nvarchar(max);
    DECLARE @connector nvarchar(max);
    SET @cmd = '
SELECT o.name AS ObjectName, i.name AS IndexName, ius.user_seeks,
    ius.user_scans, ius.user_lookups, ius.user_updates, ius.last_user_seek,
    ius.last_user_scan, ius.last_user_lookup, ius.last_user_update
FROM ' + @Database + '.sys.objects o
    INNER JOIN ' + @Database + '.sys.indexes i
        ON o.object_id = i.object_id
    LEFT JOIN ' + @Database + '.sys.dm_db_index_usage_stats ius
        ON i.object_id = ius.object_id AND i.index_id = ius.index_id
';
    SET @where = '';
    SET @connector = 'WHERE ';
    IF @Object <> ''
    BEGIN
        SET @where = @where + @connector + 'o.name = ''' + @Object + '''';
        SET @connector = ' AND ';
    END
    IF @IndexName <> ''
    BEGIN
        SET @where = @where + @connector + 'i.name = ''' + @IndexName + '''';
        SET @connector = ' AND ';
    END
    IF @where <> '' SET @cmd = @cmd + @where;
    SET @cmd = @cmd + CHAR(13) + CHAR(10) + 'ORDER BY i.index_id;';
    EXEC sp_executesql @cmd;
    PRINT @cmd;
END

The results of running these stored procs have raised some questions about several objects on the SQL Server that are being called repeatedly.

I can no longer remember what system is using them, and although the purpose is easily identified by looking at the source code for a given object, I'd like to identify what process is using them.

Since I'm using SQL Server 2012, I thought I could use Extended Events to find the culprit. The following statement sets up an Extended Events session. This session definition does not save its results anywhere, so you must right-click the session and "Watch Live Data" to see events as they happen (or very nearly so).

DROP EVENT SESSION FindCaller ON SERVER;

CREATE EVENT SESSION [FindCaller] ON SERVER
ADD EVENT sqlserver.sql_batch_starting
(
    ACTION
    (
        sqlserver.client_app_name
        , sqlserver.client_hostname
        , sqlserver.database_name
        , sqlserver.nt_username
        , sqlserver.session_id
        , sqlserver.sql_text
    )
    WHERE ([sqlserver].[like_i_sql_unicode_string]([sqlserver].[sql_text], N'ObjectInQuestion'))
)
WITH
(
    MAX_MEMORY = 4096 KB
    , EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS
    , MAX_DISPATCH_LATENCY = 30 SECONDS
    , MAX_EVENT_SIZE = 0 KB
    , MEMORY_PARTITION_MODE = NONE
    , TRACK_CAUSALITY = OFF
    , STARTUP_STATE = OFF
);

In the above code, ObjectInQuestion should be changed to the text you are looking for: this could be the name of a table, view, stored proc, etc. that appears in the SQL text executed by the server.

I ran this Extended Events session, but it returns no events, even when I manually execute SQL text containing ObjectInQuestion in a new session.

Since Extended Events are the "best new thing" in SQL Server for diagnostics, I must be missing something in my definition.
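Two things worth checking here, offered as hedged guesses rather than a confirmed diagnosis: the session is created with STARTUP_STATE = OFF and is never explicitly started, and like_i_sql_unicode_string performs a LIKE-style comparison, so the pattern likely needs wildcards to match anywhere in the batch text:

-- 1. Creating the session does not start it; start it explicitly:
ALTER EVENT SESSION [FindCaller] ON SERVER STATE = START;

-- 2. In the session definition, wrap the search text in wildcards:
--    N'%ObjectInQuestion%' instead of N'ObjectInQuestion'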

Bitmask Flags with Lookup Tables Clarification

Posted: 01 Apr 2013 06:21 PM PDT

I've received a dataset from an outside source that contains several bitmask fields as varchars. They range in length from 3 to 21 bits. I need to be able to run SELECT queries against these fields using AND or OR logic.

Using a calculated field where I convert the bits into an integer value, I can easily find rows that match an AND query with a simple WHERE rowvalue = requestvalue, but the OR logic requires a bitwise & to find matching records.

Given that I would need to work with several of these columns and select from hundreds of millions of records, I feel that there would be a huge performance hit when doing bitwise & operations to filter my SELECT results.

I came across this answer while searching, and it looked like it might fit my needs, but I need some clarification on how it is implemented.

Is this as simple as creating a lookup table that has all possible search conditions?

Example for 3 bits using (a & b) (Edit: Wrong bitwise op)

001,001
001,011
001,101
001,111
010,010
010,011
010,110
011,011
011,111
etc

The author mentions that it's counter-intuitive at first, but I can't help feeling that I'm interpreting the solution incorrectly, as it would give me a single lookup table with likely billions of rows.

Any clarifications on the answer I linked above or other suggestions that would preserve the existing database are appreciated.

Edit: A more concrete example using small data.

Four flags: HasHouse, HasCar, HasCat, HasDog. 0000 means has none; 1111 means has all.

Any number of flags, from all to none, can be flipped, and the results must be filtered to rows where the selection matches all flags (using an exact value comparison) or at least one (using bitwise &).

Adding a single calculated column for each bitmask is OK, but adding a column for each bit for more than 100 bits, coupled with working out how to insert/update the data, is why I'm trying to find alternative solutions.
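For concreteness, a hedged sketch of the two filters on the four-flag example (HasHouse = 8, HasCar = 4, HasCat = 2, HasDog = 1; the table and column names are illustrative):

DECLARE @mask int = 5;  -- caller selected HasCar + HasDog = 4 + 1

-- "matches ALL selected flags" (containment; plain equality would
-- instead test for exactly these flags and no others)
SELECT * FROM dbo.People WHERE FlagValue & @mask = @mask;

-- "matches AT LEAST ONE selected flag"
SELECT * FROM dbo.People WHERE FlagValue & @mask <> 0;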

Slow Memcached: Average 10ms memcached `get`

Posted: 01 Apr 2013 01:08 PM PDT

We're using New Relic to measure our Python/Django application's performance. New Relic reports that across our system "Memcached" takes an average of 12 ms to respond to commands.

Drilling down into the top dozen or so web views (by number of requests), I can see that some Memcached gets take up to 30 ms; I can't find a single Memcached get that returns in less than 10 ms.

More details on the system architecture:

  • Currently we have four application servers each of which has a memcached member. All four memcached members participate in a memcache cluster.
  • We're running on a cloud hosting provider and all traffic is running across the "internal" network (via "internal" IPs)
  • When I ping from one application server to another the responses are in ~0.5ms

Isn't 10ms a slow response time for Memcached?

As far as I understand it, if you think "Memcached is too slow", then "you're doing it wrong". So am I doing it wrong?
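A hedged way to take New Relic's instrumentation out of the picture is to time raw gets from one of the app servers with python-memcached; the IP is a placeholder for one cluster member:

import time
import memcache  # python-memcached

mc = memcache.Client(["10.0.0.1:11211"])  # one "internal" cluster member
mc.set("latency_probe", "x")

start = time.time()
for _ in range(1000):
    mc.get("latency_probe")
elapsed = time.time() - start

# seconds per get, converted to milliseconds
print("avg get: %.3f ms" % (elapsed / 1000 * 1000.0))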

SSIS SQL Result to tab delimited txt file

Posted: 01 Apr 2013 12:53 PM PDT

I have created a package that runs a SQL query and then puts the results into a flat tab-delimited txt file.

When I opened the flat file in Excel everything was in the right columns, but in columns where I had a combination of alphabetic and numeric characters I was getting extra trailing blank spaces in certain records (as shown below). What I did was add a derived column step where I do RTRIM(memberid), which successfully removed the trailing spaces, but now that column is still aligned to the left while all the other records are aligned to the right. Has anyone ever come across this scenario, and is there anything I can do so that the columns are all aligned to the right?

123123
123333
MC1234
 32212
 12546
 35265
C423621

Ubuntu / MySQL / Percona XtraBackup gives error "sh: 1: xtrabackup_55: not found"

Posted: 01 Apr 2013 08:36 PM PDT

Hello, I installed Ubuntu 12.04.2 LTS on a fresh box.

This installed MySQL 5.5.29.

I installed percona-xtrabackup-2.0.6-521.tar.gz and moved the executables to /usr/bin/.

But innobackupex gives an error: "sh: 1: xtrabackup_55: not found".


david@server1:/usr/bin$ sudo innobackupex --user=root --password=123 /home/david/dbb/
[sudo] password for david:

InnoDB Backup Utility v1.5.1-xtrabackup; Copyright 2003, 2009 Innobase Oy
and Percona Ireland Ltd 2009-2012.  All Rights Reserved.

This software is published under
the GNU GENERAL PUBLIC LICENSE Version 2, June 1991.

130401 12:17:04  innobackupex: Starting mysql with options:  --password=xxxxxxxx --user='root' --unbuffered --
130401 12:17:04  innobackupex: Connected to database with mysql child process (pid=1488)
130401 12:17:10  innobackupex: Connection to database server closed
IMPORTANT: Please check that the backup run completes successfully.
           At the end of a successful backup run innobackupex
           prints "completed OK!".

innobackupex: Using mysql  Ver 14.14 Distrib 5.5.29, for debian-linux-gnu (x86_64) using readline 6.2
innobackupex: Using mysql server version Copyright (c) 2000, 2012, Oracle and/or its affiliates. All rights reserved.

sh: 1: xtrabackup_55: not found
innobackupex: fatal error: no 'mysqld' group in MySQL options

The files are present and executable:

david@server1:/usr/bin$ ls -l inno*
-rwxr-xr-x 1 root  root  108823 Mar 19 14:24 innobackupex
lrwxrwxrwx 1 david david     12 Mar 19 14:24 innobackupex-1.5.1 -> innobackupex
-rwxr-xr-x 1 root  root   10096 Jan 16 17:24 innochecksum
-rwxr-xr-x 1 root  root  400192 Jan 16 17:24 innotop
david@server1:/usr/bin$ ls -l xtra*
-rwxr-xr-x 1 root root  9799993 Mar 19 14:21 xtrabackup
-rwxr-xr-x 1 root root  8361704 Mar 19 14:24 xtrabackup_51
-rwxr-xr-x 1 root root 12699861 Mar 19 14:15 xtrabackup_55

Path is correct:

david@server1:/usr/bin$ echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games

I get the same results if I run:

innobackupex
./innobackupex
sudo innobackupex
sudo ./innobackupex

Any ideas?
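Two hedged things to check, offered as guesses rather than a confirmed fix: whether sudo's restricted PATH can actually see the binary, and whether my.cnf has a [mysqld] group at all, since the last error complains about exactly that:

sudo sh -c 'which xtrabackup_55'    # sudo may use its own secure_path

# innobackupex reads datadir from the [mysqld] group; /etc/mysql/my.cnf
# should contain at least something like:
#   [mysqld]
#   datadir = /var/lib/mysql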

Setting DATEFIRST to 6

Posted: 01 Apr 2013 12:53 PM PDT

I would like to set DATEFIRST to 6 (i.e. Saturday). I have read that to change it permanently I could set the default language of the login to one of the existing languages from sys.syslanguages, altering the login like:

USE [master]
GO
ALTER LOGIN [some_user] WITH DEFAULT_LANGUAGE = [Français]
GO

But:

SELECT * FROM sys.syslanguages;  

I get many languages, but none of them has DATEFIRST set to 6 (they are all set to 1 or 7).

Question:

Can I add another language based on an existing one, and set the datefirst field to 6?

Can I modify an existing syslanguage?

All I want is to set it to 6 permanently, because I need it to get the right week number.

Thank you for your help!
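For completeness, a hedged session-level fallback (it must run on every connection, which is exactly what the login-default trick tries to avoid):

SET DATEFIRST 6;    -- Saturday becomes day 1 for this session
SELECT @@DATEFIRST; -- returns 6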

SQL Server 2012 catalog.executions to sysjobhistory - any way to join them?

Posted: 01 Apr 2013 12:38 PM PDT

I have exhausted my resources and can't find a foolproof way to join the SSISDB catalog tables to the jobs that run them. I'm trying to write some custom sprocs to monitor my execution times and rows written from the catalog tables, and it would be greatly beneficial to tie them to the calling job.
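For what it's worth, a hedged sketch of the usual workaround: there is no shared key, so SSIS-type job steps are matched to catalog.executions by package name (and, if needed, overlapping run times). Treat every join condition here as an approximation, not a contract:

SELECT j.name AS job_name,
       e.execution_id,
       e.package_name,
       e.start_time,
       e.end_time
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.sysjobsteps AS s
    ON s.job_id = j.job_id
   AND s.subsystem = 'SSIS'
JOIN SSISDB.catalog.executions AS e
    ON s.command LIKE '%' + e.package_name + '%';
-- optionally narrow further by comparing e.start_time/e.end_time
-- against msdb.dbo.sysjobhistory run times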

UPDATE / SELECT incredibly slow in MySQL InnoDB

Posted: 01 Apr 2013 01:13 PM PDT

I have a table with 2.5 billion records (2.5 Cr.) and I am trying to get a count of records based on an indexed column; it takes 115 seconds to execute.

Query: SELECT COUNT(1) FROM table1 WHERE column = code

Table1 Info

  • 2 billion (2 Cr.)
  • a query on code returns between 1 and 10k rows,
  • Column is indexed.

Server Details

  • Intel Xeon 3.10ghz
  • 8 GB RAM
  • MySQL server 5.1
  • MySQL Connector 5.0.6

my.cnf details

  • Default Engine =InnoDB
  • innodb_buffer_pool_size=4G

Update queries also take too much time. Please suggest how I should tune MySQL.
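Hedged first steps before touching my.cnf: confirm the index is actually chosen and how selective MySQL thinks it is (`column` and 'code' are the question's placeholders):

EXPLAIN SELECT COUNT(1) FROM table1 WHERE `column` = 'code';
SHOW INDEX FROM table1;

If the index on `column` is much larger than the 4 GB buffer pool, the count is likely I/O-bound regardless of other settings.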

How to suppress an informational raiserror message

Posted: 01 Apr 2013 11:26 AM PDT

I have a setup script that calls sp_configure a few times to set up some system variables. It works fine, but I get a few instances of the following message:

Configuration option '%ls' changed from %ld to %ld. Run the RECONFIGURE statement to install.

with the variables filled in accordingly. Internally I can see sp_configure is calling

raiserror(15457,-1,-1, @configname, @prevvalue, @configvalue) with log  

Is there anything I can do to prevent these informational messages (which are useless to the script; I call RECONFIGURE from inside the script) from displaying in the Messages window in SSMS when a user runs the script?

BCNF when no keys are available

Posted: 01 Apr 2013 11:08 AM PDT

I have a problem understanding a particular case of BCNF. Suppose we have a relation R(B, C, D) with the functional dependencies {C -> D}. Is my relation in BCNF, and if yes, why? Is C considered a superkey?
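A hedged way to check this mechanically is to compute attribute closures:

C+  = {C, D}        -- C does not determine B, so C is not a superkey
BC+ = {B, C, D}     -- BC is the candidate key (B appears on no right-hand side)
-- C -> D is a non-trivial FD whose determinant is not a superkey,
-- so R is not in BCNF.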

Using Oracle hierarchical queries for transposing field value into rows

Posted: 01 Apr 2013 10:55 AM PDT

One column in my table stores a list of values separated by commas (I know it's a poor design, but refactoring is not an option at the moment). I seem to have come up with a working solution, but I'm afraid it's not very efficient:

select distinct test_id, regexp_substr(str_data, '[^,]+', 1, level)
from
(
  select 1 as test_id, '1,2,3' as str_data from dual
  union all
  select 2 as test_id, '4,5,6' as str_data from dual
) test_data
connect by regexp_substr(str_data, '[^,]+', 1, level) is not null

I don't like that I have to use DISTINCT. Is there a better approach?

Thank you.
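A hedged alternative that drops the DISTINCT: constrain CONNECT BY to the same test_id and force a fresh branch per row with PRIOR SYS_GUID(), a common idiom for this kind of split:

select test_id, regexp_substr(str_data, '[^,]+', 1, level) as val
from
(
  select 1 as test_id, '1,2,3' as str_data from dual
  union all
  select 2 as test_id, '4,5,6' as str_data from dual
) test_data
connect by regexp_substr(str_data, '[^,]+', 1, level) is not null
       and prior test_id = test_id
       and prior sys_guid() is not null;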

Can I use replication/mirroring to have a "Current" database and a "Full History" copy?

Posted: 01 Apr 2013 10:45 AM PDT

I have a database that is used for current operations. The amount of data is starting to get a bit too much for the applications using it.

Most of the data is older and is very rarely needed.

I am wondering if there is a way to configure SQL Server to keep a mirror of the original database, but allow me to occasionally purge out old data (without it being purged from the copy).

NOTE: My database is currently SQL Server 2008 R2, but we will be upgrading to SQL Server 2012 soon.

MySQL slave stuck in "Reading event from the relay log"?

Posted: 01 Apr 2013 09:42 AM PDT

  • 5.5.28-log MySQL Community Server (GPL) by Remi
  • binlog-format=MIXED

My problem is similar to this question.

*************************** 2. row ***************************
     Id: 973415
   User: system user
   Host:
     db: NULL
Command: Connect
   Time: 29062
  State: Reading event from the relay log
   Info: NULL

It looks like a bug, except that no one has mentioned that version 5.5.28 is affected.

Here is the additional information:

mysql> show slave status\G

              Master_Log_File: mysql-bin.006413
          Read_Master_Log_Pos: 40371120
               Relay_Log_File: mysqld-relay-bin.011003
                Relay_Log_Pos: 226592090
        Relay_Master_Log_File: mysql-bin.006392
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 226591944
              Relay_Log_Space: 5708184440
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 29531

mysql> show engine innodb status;

---TRANSACTION 1709EE56, ACTIVE 29524 sec fetching rows
mysql tables in use 8, locked 8
94647 lock struct(s), heap size 14121400, 45755176 row lock(s), undo log entries 7547
MySQL thread id 973415, OS thread handle 0x40d90940, query id 1773214543 Reading event from the relay log

mysqlbinlog --start-position=226591944 mysql-bin.006392

#130401 15:18:23 server id 248  end_log_pos 226592078   Table_map: `reportingdb`.`v3_cam_ip` mapped to number 4973102
#130401 15:18:23 server id 248  end_log_pos 226593102   Delete_rows: table id 4973102
#130401 15:18:23 server id 248  end_log_pos 226594131   Delete_rows: table id 4973102
#130401 15:18:23 server id 248  end_log_pos 226595169   Delete_rows: table id 4973102

mysql> show keys from v3_cam_ip\G

*************************** 1. row ***************************
        Table: v3_cam_ip
   Non_unique: 1
     Key_name: ix_campaignid
 Seq_in_index: 1
  Column_name: campaignid
    Collation: A
  Cardinality: 20
     Sub_part: NULL
       Packed: NULL
         Null: YES
   Index_type: BTREE
      Comment:
Index_comment:
*************************** 2. row ***************************
        Table: v3_cam_ip
   Non_unique: 1
     Key_name: ix_dt
 Seq_in_index: 1
  Column_name: dt
    Collation: A
  Cardinality: 20
     Sub_part: NULL
       Packed: NULL
         Null:
   Index_type: BTREE
      Comment:
Index_comment:
2 rows in set (0.95 sec)

What I have done on one of two Slaves:

  • stop slave; hangs, so I have to press Ctrl-C
  • then I tried to restart MySQL (using the init script), but that failed
  • after that I did a kill -9 (silly?) and started it again
  • now the slave is started, but it doesn't get updates from the master

What should I do on the remaining slave?

Real Time data collection - Help select Database

Posted: 01 Apr 2013 09:01 AM PDT

My Scenario:

I want to collect data from various sensors. [For now I have 600 sensors sending data, but this may have to scale up to 3000 more sensors, all sending real-time data.] I have to run network-based queries on this data. The data does not have to stay in the database for long; as I query, I have to make calculations based on the current state of the sensors. I do not need historic data. There will be multiple queries made against the database.

My requirements are:

  • A real-time database that is quick.
  • Scalability is an issue, but it can be overlooked since I know exactly how many sensors will send data.
  • A large number of queries on the DB.
  • This is an academic project, so my preference is open source.

I am not a database person, and the sheer number of options is overwhelming. Any help and suggestions are really appreciated.

How do I demonstrate Transaction Log activity?

Posted: 01 Apr 2013 09:31 AM PDT

Say I have the following setup:

use tempdb
go

set nocount on
go

create table MyTest
(
    Column1 varchar(100),
    Column2 text
)
go

insert mytest (Column1, Column2)
select REPLICATE('a', 100), REPLICATE('a', 100)
from sys.syscolumns a, sys.syscolumns b

I'd like to convert each of the columns to varchar(max) like this:

-- processes every page:
alter table mytest
alter column Column1 varchar(max)

-- processes only metadata:
alter table mytest
alter column Column2 varchar(max)

How can I demonstrate that the first command processes the whole table while the second touches only metadata? I was thinking of using SET STATISTICS IO, which reports thousands of logical reads for the first command and nothing for the other. I was also thinking of using DBCC LOG or fn_dblog, but I wasn't sure how to interpret the results or tie them to the queries I issued.
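A hedged sketch with fn_dblog (undocumented, so test-instance only): checkpoint first so tempdb's visible log resets, run one ALTER, then measure; repeat for the second ALTER and compare the two totals.

CHECKPOINT;  -- tempdb's log clears at checkpoint, resetting the baseline

alter table mytest alter column Column1 varchar(max);

SELECT COUNT(*)                 AS LogRecords,
       SUM([Log Record Length]) AS LogBytes
FROM fn_dblog(NULL, NULL);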

sql server partitioned view execution plan questions

Posted: 01 Apr 2013 04:21 PM PDT

I have created a partitioned view over 3 tables (tb_sales2010, tb_sales2011, tb_sales2012), using check constraints (respectively, for each table):

datepart(year,[Date])=(2010)
datepart(year,[Date])=(2011)
datepart(year,[Date])=(2012)

Schema for the 2010 table (the same for the other tables):

CREATE TABLE [dbo].[tb_Sales2010](
    [Date] [smalldatetime] NOT NULL,
    [ID] [int] NOT NULL
    CONSTRAINT [PK_tb_Sales2010] PRIMARY KEY NONCLUSTERED
(
    [ID] ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

GO

ALTER TABLE [dbo].[tb_Sales2010] WITH CHECK ADD CONSTRAINT [CK_Date_2010] CHECK ((datepart(year,[Date])=(2010)))
GO

My query is

SELECT TOP 1 *
FROM partitionedTb_sales
WHERE DATEPART(YY, DATE) = 2011

My question is: the execution plan indicates that all 3 tables are scanned, instead of using the check constraints to determine that only one table has the rows it needs. Why is that? I feel the execution plan should show only 1 table being scanned.

(Screenshot: execution plan showing all three tables being scanned.)

Results with statistics turned on:

Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Table 'tb_Sales2012'. Scan count 1, logical reads 3, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Table 'tb_Sales2011'. Scan count 1, logical reads 3, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Table 'tb_Sales2010'. Scan count 1, logical reads 3, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

View definition:

CREATE VIEW PartitionedTb_Sales
AS
SELECT * FROM tb_sales2010
UNION ALL
SELECT * FROM tb_sales2011
UNION ALL
SELECT * FROM tb_sales2012
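For comparison, a hedged sketch: table elimination is much more reliable when both the CHECK constraint and the query predicate are plain ranges on the raw column rather than DATEPART expressions. The existing CK constraints would have to be replaced; 2011 shown:

ALTER TABLE dbo.tb_Sales2011 WITH CHECK
    ADD CONSTRAINT CK_Date_2011_Range
    CHECK ([Date] >= '20110101' AND [Date] < '20120101');

SELECT TOP 1 *
FROM PartitionedTb_Sales
WHERE [Date] >= '20110101' AND [Date] < '20120101';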

BIT columns all "1" after a phpMyAdmin export/import

Posted: 01 Apr 2013 02:51 PM PDT

I have to import data from a MySQL database using phpMyAdmin because that's the only interface my hosting provider supports.

I exported my database from my local machine using phpMyAdmin, then imported the script file on my host. All of the data in columns of type BIT changed to '1'. The hosting database version is 5.5.29.

Is it a phpMyAdmin problem, or a MySQL version problem? How can I fix it?
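One hedged workaround rather than a root-cause fix: have the export write BIT values as integers so they survive the text round trip; the table and column names here are illustrative:

SELECT id, CAST(bit_col AS UNSIGNED) AS bit_col
FROM mytable;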

Moving SQL Server to different domain

Posted: 01 Apr 2013 09:48 AM PDT

What are the best practices/important steps when the Windows Server machine hosting SQL Server 2008 R2 is moved to another domain? What steps definitely (or might) need to be done for SQL Server before and after joining the new domain?

Unable to start Oracle (11g1) due to bad memory configuration (ORA-00837)

Posted: 01 Apr 2013 06:00 PM PDT

I decided to reduce the memory allocation of a local developer instance and was told that the following command would do the job:

alter system set memory_max_target = 512M scope = spfile;  

Which I guess it did, since I can no longer start the Oracle instance in question. This is the error I get when trying to connect as a non-DBA user:

ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist

When connecting as SYSDBA I get the following:

Connected to an inactive instance.
SQL> startup nomount;
ORA-00837: Specified value of MEMORY_TARGET greater than MEMORY_MAX_TARGET
SQL> alter system set memory_max_target = 2G scope = spfile;
ORA-01034: ORACLE not available

("Connected to an inactive instance." is my translation of a localized error message, might not be exactly correct) So, I'm in a catch 22 situation, I can't change the memory parameters without starting up, and I can't start up.

Is there a way to change these parameters offline?
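A hedged sketch of the usual offline fix, run from the same SYSDBA session that sees the idle instance (the pfile path is arbitrary):

CREATE PFILE='/tmp/init_fix.ora' FROM SPFILE;
-- edit /tmp/init_fix.ora so that memory_max_target >= memory_target
STARTUP PFILE='/tmp/init_fix.ora';
CREATE SPFILE FROM PFILE='/tmp/init_fix.ora';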

Handling concurrent access to a key table without deadlocks in SQL Server

Posted: 01 Apr 2013 02:20 PM PDT

I have a table that is used by a legacy application as a substitute for IDENTITY fields in various other tables.

Each row in the table stores the last-used ID (LastID) for the field named in IDName.

Occasionally the stored proc hits a deadlock. I believe I've built an appropriate error handler; however, I'm interested to see whether this methodology works as I think it does, or if I'm barking up the wrong tree here.

I'm fairly certain there should be a way to access this table without any deadlocks at all.

The database itself is configured with READ_COMMITTED_SNAPSHOT = 1.

First, here is the table:

CREATE TABLE [dbo].[tblIDs](
    [IDListID] [int] NOT NULL CONSTRAINT PK_tblIDs PRIMARY KEY CLUSTERED IDENTITY(1,1),
    [IDName] [nvarchar](255) NULL,
    [LastID] [int] NULL,
);

And the nonclustered index on the IDName field:

CREATE NONCLUSTERED INDEX [IX_tblIDs_IDName] ON [dbo].[tblIDs]
(
    [IDName] ASC
)
WITH (
    PAD_INDEX = OFF
    , STATISTICS_NORECOMPUTE = OFF
    , SORT_IN_TEMPDB = OFF
    , DROP_EXISTING = OFF
    , ONLINE = OFF
    , ALLOW_ROW_LOCKS = ON
    , ALLOW_PAGE_LOCKS = ON
    , FILLFACTOR = 80
);

GO

Some sample data:

INSERT INTO tblIDs (IDName, LastID) VALUES ('SomeTestID', 1);
INSERT INTO tblIDs (IDName, LastID) VALUES ('SomeOtherTestID', 1);
GO

The stored procedure used to update the values stored in the table, and return the next ID:

CREATE PROCEDURE [dbo].[GetNextID](
    @IDName nvarchar(255)
)
AS
BEGIN
    /*
        Description:    Increments and returns the LastID value from tblIDs
                        for a given IDName
        Author:         Max Vernon
        Date:           2012-07-19
    */

    DECLARE @Retry int;
    DECLARE @EN int, @ES int, @ET int;
    SET @Retry = 5;
    DECLARE @NewID int;
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    SET NOCOUNT ON;
    WHILE @Retry > 0
    BEGIN
        BEGIN TRY
            BEGIN TRANSACTION;
            SET @NewID = COALESCE((SELECT LastID FROM tblIDs WHERE IDName = @IDName),0)+1;
            IF (SELECT COUNT(IDName) FROM tblIDs WHERE IDName = @IDName) = 0
                INSERT INTO tblIDs (IDName, LastID) VALUES (@IDName, @NewID)
            ELSE
                UPDATE tblIDs SET LastID = @NewID WHERE IDName = @IDName;
            COMMIT TRANSACTION;
            SET @Retry = -2; /* no need to retry since the operation completed */
        END TRY
        BEGIN CATCH
            IF (ERROR_NUMBER() = 1205) /* DEADLOCK */
                SET @Retry = @Retry - 1;
            ELSE
            BEGIN
                SET @Retry = -1;
                SET @EN = ERROR_NUMBER();
                SET @ES = ERROR_SEVERITY();
                SET @ET = ERROR_STATE();
                RAISERROR (@EN,@ES,@ET);
            END
            ROLLBACK TRANSACTION;
        END CATCH
    END
    IF @Retry = 0 /* must have deadlock'd 5 times. */
    BEGIN
        SET @EN = 1205;
        SET @ES = 13;
        SET @ET = 1;
        RAISERROR (@EN,@ES,@ET);
    END
    ELSE
        SELECT @NewID AS NewID;
END
GO

Sample executions of the stored proc:

EXEC GetNextID 'SomeTestID';

NewID
2

EXEC GetNextID 'SomeTestID';

NewID
3

EXEC GetNextID 'SomeOtherTestID';

NewID
2

EDIT:

I've added a new index, since the existing index IX_tblIDs_IDName is not being used by the SP; I assume the query processor is using the clustered index, since it needs the value stored in LastID. In any case, this new index IS used by the actual execution plan:

CREATE NONCLUSTERED INDEX IX_tblIDs_IDName_LastID ON dbo.tblIDs
(
    IDName ASC
)
INCLUDE
(
    LastID
)
WITH (FILLFACTOR = 100, ONLINE = ON, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON);

EDIT #2:

I've taken the advice that @AaronBertrand gave and modified it slightly. The general idea is to refine the statement to eliminate unnecessary locking and make the SP more efficient overall.

The code below replaces the code above from BEGIN TRANSACTION to END TRANSACTION:

BEGIN TRANSACTION;
SET @NewID = COALESCE((SELECT LastID FROM dbo.tblIDs WHERE IDName = @IDName), 0) + 1;

IF @NewID = 1
    INSERT INTO tblIDs (IDName, LastID) VALUES (@IDName, @NewID);
ELSE
    UPDATE dbo.tblIDs SET LastID = @NewID WHERE IDName = @IDName;

COMMIT TRANSACTION;

Since our code never adds a record to this table with 0 in LastID, we can assume that if @NewID is 1 the intention is to append a new ID to the list; otherwise we are updating an existing row in the list.
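For what it's worth, a hedged alternative for the update path: a single atomic UPDATE with an OUTPUT clause removes the separate read that widens the deadlock window (the insert-if-missing branch would still be needed for brand-new names):

UPDATE dbo.tblIDs
SET LastID = LastID + 1
OUTPUT inserted.LastID AS NewID
WHERE IDName = @IDName;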

Slow query performance

Posted: 01 Apr 2013 06:58 PM PDT

My database contains 20,000 rows and the following query is slow. Please help me speed it up:

SELECT r.report_id,
       r.user_id,
       u.user_name,
       u.user_mail,
       d.department_name,
       r.report_comment,
       r.report_target_date,
       r.report_create_date,
       r.report_revised_date,
       r.report_root_id
FROM   report r
       JOIN user u
         ON u.user_id = r.user_id
       JOIN department d
         ON u.department_id = d.department_id
WHERE  ( u.user_name LIKE Concat('%', 'hieu', '%') )
       AND ( d.department_name LIKE Concat('%', 'BOM', '%') )
       AND r.report_id IN (SELECT r.report_id
                           FROM   report r
                           WHERE  r.user_id = 1
                           UNION ALL
                           SELECT r.report_id
                           FROM   report r
                                  JOIN user u
                                    ON u.user_id = r.user_id
                                  JOIN department d
                                    ON u.department_id = d.department_id
                                  JOIN authority a
                                    ON r.user_id = a.user_src_id
                           WHERE  a.user_dest_id = 1)
       AND ( r.report_id IN (SELECT r.report_id
                             FROM   report r
                             WHERE  r.report_comment LIKE
                                    Concat('%', 'this', '%'))
              OR r.report_id IN (SELECT rc.report_id
                                 FROM   report_content rc
                                 WHERE  rc.visited_company LIKE
                                        Concat('%', 'this', '%')
                                         OR rc.report_description LIKE
                                            Concat('%', 'this', '%')) )
ORDER  BY r.report_target_date DESC
LIMIT  0, 30

The CREATE TABLE statements:

CREATE TABLE IF NOT EXISTS `report` (
  `report_id` int(11) NOT NULL AUTO_INCREMENT,
  `user_id` int(11) NOT NULL,
  `report_comment` text CHARACTER SET utf8 COLLATE utf8_unicode_ci NOT NULL,
  `report_target_date` datetime NOT NULL,
  `report_create_date` datetime NOT NULL,
  `report_revised_date` datetime DEFAULT NULL,
  `report_revised_id` int(11) DEFAULT NULL,
  `report_root_id` int(11) DEFAULT NULL,
  `enable` int(2) NOT NULL DEFAULT '1',
  PRIMARY KEY (`report_id`),
  UNIQUE KEY `unique_report` (`report_id`),
  KEY `support_report` (`report_id`,`report_target_date`,`report_create_date`,`report_revised_date`,`report_revised_id`,`report_root_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=18497 ;

CREATE TABLE IF NOT EXISTS `report_content` (
  `report_content_id` int(11) NOT NULL AUTO_INCREMENT,
  `report_id` int(11) NOT NULL,
  `timestamp` text COLLATE utf8_unicode_ci NOT NULL,
  `visited_company` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
  `report_description` text COLLATE utf8_unicode_ci NOT NULL,
  PRIMARY KEY (`report_content_id`),
  UNIQUE KEY `unique_report` (`report_content_id`,`report_id`),
  UNIQUE KEY `unique_report_content` (`report_content_id`,`report_id`),
  KEY `support_report_content` (`report_content_id`,`report_id`,`visited_company`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci AUTO_INCREMENT=278 ;

My EXPLAIN:

id | select_type | table          | type   | possible_keys       | key    | key_len | ref    | rows  | extra
1  | PRIMARY     | r              | ALL    | (NULL)              | (NULL) | (NULL)  | (NULL) | 20246 | Using where; Using filesort
1  | PRIMARY     | p              | eq_ref | PRIMARY,unique_user | NULL   | 4       | NULL   | 1     | Using where
5  | SUBQUERY    | report_content | ALL    | NULL                | NULL   | 4       | NULL   | 160   | Using where
6  | UNION       | report_content | ALL    | NULL                | NULL   | 4       | NULL   | 160   | Using where

Now I have changed the query to the following:

SELECT r.report_id,
       r.user_id,
       u.user_name,
       u.user_mail,
       d.department_name,
       r.report_comment,
       r.report_target_date,
       r.report_create_date,
       r.report_revised_date,
       r.report_root_id
FROM report AS r FORCE INDEX (unique_report),
     `user` AS u FORCE INDEX (unique_user),
     `department` AS d FORCE INDEX (unique_department)
WHERE (u.user_name LIKE CONCAT('%', 'hieu', '%'))
  AND (d.department_name LIKE CONCAT('%', 'bom', '%'))
  AND r.report_id IN (
      SELECT r.report_id
      FROM report r
      WHERE r.user_id = 1
      UNION ALL
      SELECT r.report_id
      FROM report r FORCE INDEX (unique_report)
      JOIN USER u FORCE INDEX (unique_user) ON u.user_id = r.user_id
      JOIN department d FORCE INDEX (unique_department) ON u.department_id = d.department_id
      JOIN authority a FORCE INDEX (unique_authority) ON r.user_id = a.user_src_id
      WHERE a.user_dest_id = 1)
  AND (r.report_id IN (
      SELECT r.report_id
      FROM report r
      WHERE r.report_comment LIKE CONCAT('%', 'when', '%'))
    OR EXISTS (
      SELECT rc.report_id
      FROM report_content rc
      WHERE rc.visited_company LIKE CONCAT('%', 'when', '%')
         OR MATCH (`timestamp`, report_description)
            AGAINST ('+when+' WITH QUERY EXPANSION)
            AND r.report_id = rc.report_id))
ORDER BY r.report_target_date DESC
LIMIT 0, 30

It is faster than before: 1.97 s.

Can somebody help me speed it up further?
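One hedged observation: the leading-wildcard LIKE '%...%' predicates can never use B-tree indexes, so full-text search is the usual escape hatch. A sketch, assuming MySQL 5.6+ for InnoDB full-text (older versions would need a MyISAM side table):

ALTER TABLE report ADD FULLTEXT INDEX ft_report_comment (report_comment);

SELECT report_id
FROM report
WHERE MATCH (report_comment) AGAINST ('+this' IN BOOLEAN MODE);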

How to modify an update in Oracle so it performs faster?

Posted: 01 Apr 2013 09:58 AM PDT

I have this query:

UPDATE
(
    SELECT h.valid_through_dt, h.LAST_UPDATE_TMSTMP
    FROM ETL_FEE_SCH_TMP d, FEE_SCHEDULE_HISTORICAL h
    WHERE h.FUND_ID = d.FUND_ID
    AND h.FEETYPE_NAME = d.FEETYPE_NAME
    AND h.BREAKPOINT_TYPE = d.BREAKPOINT_TYPE
    AND h.BREAKPOINT_QTY = d.BREAKPOINT_QTY
    AND h.LOW_BREAKPOINT_AMT = d.LOW_BREAKPOINT_AMT
    AND h.VALID_THROUGH = TO_DATE('31-DEC-9999', 'dd-mon-yyyy')
    AND h.universe = 'DC'
    AND h.universe = d.universe
    AND EXISTS
    (
        SELECT 1
        FROM FEE_SCHEDULE s
        WHERE s.FUND_ID = h.FUND_ID
        AND s.FEETYPE_NAME = h.FEETYPE_NAME
        AND s.BREAKPOINT_TYPE = h.BREAKPOINT_TYPE
        AND s.BREAKPOINT_QTY = h.BREAKPOINT_QTY
        AND s.LOW_BREAKPOINT_AMT = h.LOW_BREAKPOINT_AMT
        AND s.universe = 'DC'
    )
) updateTable
SET updateTable.VALID_THROUGH = (SYSDATE - 1),
    updateTable.LAST_UPDATE_TMSTMP = SYSTIMESTAMP;

The trouble I am having is that this query takes a long time to run. I don't know whether it is possible to run it in parallel, or whether it would be easier to update through a cursor in a pipelined function.

What would you suggest?

Here is all the information that I believe is relevant.

This is the execution plan of the internal select:

Execution Plan
----------------------------------------------------------
Plan hash value: 57376096
---------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name                    | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                         |     1 |   306 |  8427   (1)| 00:01:42 |
|   1 |  NESTED LOOPS                |                         |       |       |            |          |
|   2 |   NESTED LOOPS               |                         |     1 |   306 |  8427   (1)| 00:01:42 |
|   3 |    MERGE JOIN CARTESIAN      |                         |     1 |   192 |  8426   (1)| 00:01:42 |
|*  4 |     INDEX RANGE SCAN         | SYS_C000666             |     1 |    96 |     2   (0)| 00:00:01 |
|   5 |     BUFFER SORT              |                         |  3045K|   278M|  8425   (1)| 00:01:42 |
|   6 |      SORT UNIQUE             |                         |  3045K|   278M|  8425   (1)| 00:01:42 |
|*  7 |       TABLE ACCESS FULL      | FEE_SCHEDULE            |  3045K|   278M|  8425   (1)| 00:01:42 |
|*  8 |    INDEX RANGE SCAN          | FEE_SCHDL_IDX1          |     1 |       |     1   (0)| 00:00:01 |
|*  9 |   TABLE ACCESS BY INDEX ROWID| FEE_SCHEDULE_HISTORICAL |     1 |   114 |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - access("D"."UNIVERSE"='DC')
   7 - filter("S"."UNIVERSE"='DC')
   8 - access("H"."UNIVERSE"='DC' AND "S"."FUND_ID"="H"."FUND_ID" AND
              "S"."FEETYPE_NAME"="H"."FEETYPE_NAME" AND
              "S"."BREAKPOINT_TYPE"="H"."BREAKPOINT_TYPE" AND
              "S"."BREAKPOINT_QTY"="H"."BREAKPOINT_QTY" AND
              "S"."LOW_BREAKPOINT_AMT"="H"."LOW_BREAKPOINT_AMT")
       filter("H"."FUND_ID"="D"."FUND_ID" AND
              "H"."FEETYPE_NAME"="D"."FEETYPE_NAME" AND
              "H"."BREAKPOINT_TYPE"="D"."BREAKPOINT_UNIT_TY

Table data:

UNIVERSE|FUND_ID   |FEETYPE_NAME |BREAKPOINT_TYPE|BREAKPOINT_QTY|LOW_BREAKPOINT_AMT|HIGH_BREAKPOINT_AMT|FEE_PCT|FEE_SCHDL_SEQ_ID|GROUP_ID|LAST_UPDATE_TMSTMP  |VALID_FROM|VALID_THROUGH|INSERT_TMSTMP        |JOB_ID|
DC      |DC9ZTPLPHO|DeferLoad    |Percentage     |4             |10000             |300000             |3.14   |780250          |null    |1/4/2012  3:59:54 PM|6/23/2012 |12/31/9999   |1/5/2011   3:59:54 PM|666   |
DC      |DCE86Y8XFU|RedemptionFee|Percentage     |9             |  100             |100500             |7.67   |780251          |null    |6/4/2012  4:49:54 PM|11/12/2011|12/31/9999   |8/17/2011  2:00:54 PM|666   |
DC      |DCAYL0KONA|FrontLoad    |Percentage     |2             |50000             |601500             |5.00   |780252          |null    |4/25/2012 4:49:54 PM|8/2/2012  |12/31/9999   |12/19/2012 9:59:00 PM|666   |
DC      |DC9ZTPLPHO|DeferLoad    |Percentage     |7             |80000             |900000             |2.24   |780252          |null    |4/25/2012 4:49:54 PM|8/2/2012  |12/31/9999   |12/19/2012 9:59:00 PM|666   |

This is the script of the historical table:

CREATE TABLE FEE_SCHEDULE_HISTORICAL
(
  UNIVERSE                        VARCHAR2(2 BYTE) NOT NULL,
  FUND_ID                         VARCHAR2(10 BYTE) NOT NULL,
  FEETYPE_NAME                    VARCHAR2(75 BYTE),
  BREAKPOINT_TYPE                 VARCHAR2(50 BYTE),
  BREAKPOINT_QTY                  VARCHAR2(10 BYTE),
  LOW_BREAKPOINT_AMT              NUMBER(19,6),
  HIGH_BREAKPOINT_AMT             NUMBER(19,6),
  FEE_PCT                         NUMBER(19,6),
  FEE_SCHDL_SEQ_ID                NUMBER        NOT NULL,
  GROUP_ID                        NUMBER,
  LAST_UPDATE_TMSTMP              DATE          NOT NULL,
  VALID_FROM                      DATE          NOT NULL,
  VALID_THROUGH                   DATE          NOT NULL,
  INSERT_TMSTMP                   DATE          NOT NULL,
  JOB_ID                          NUMBER        NOT NULL
);

CREATE UNIQUE INDEX FEE_SCHDL_PK ON FEE_SCHEDULE_HISTORICAL(FEE_SCHDL_SEQ_ID);

CREATE UNIQUE INDEX FEE_SCHDL_HST_IDX ON FEE_SCHEDULE_HISTORICAL
(
    UNIVERSE,
    FUND_ID,
    FEETYPE_NAME,
    BREAKPOINT_TYPE,
    BREAKPOINT_QTY,
    LOW_BREAKPOINT_AMT,
    VALID_FROM,
    JOB_ID
);

CREATE INDEX FEE_SCHEDULE_HST_IDX2 ON FEE_SCHEDULE_HISTORICAL(LAST_UPDATE_TMSTMP);

CREATE INDEX FEE_SCHEDULE_HST_IDX3 ON FEE_SCHEDULE_HISTORICAL(VALID_THROUGH);

ALTER TABLE FEE_SCHEDULE_HISTORICAL ADD (
    CONSTRAINT FEE_SCHDL_PK
    PRIMARY KEY
    (FEE_SCHDL_SEQ_ID)
);

This is the other table:

CREATE TABLE FEE_SCHEDULE
(
  UNIVERSE                        VARCHAR2(2 BYTE) NOT NULL,
  FUND_ID                         VARCHAR2(10 BYTE) NOT NULL,
  FEETYPE_NAME                    VARCHAR2(75 BYTE),
  BREAKPOINT_TYPE                 VARCHAR2(50 BYTE),
  BREAKPOINT_QTY                  VARCHAR2(10 BYTE),
  LOW_BREAKPOINT_AMT              NUMBER(19,6),
  HIGH_BREAKPOINT_AMT             NUMBER(19,6),
  FEE_PCT                         NUMBER(19,6),
  JOB_RUN_ID                      NUMBER        NOT NULL,
  FILE_DATE                       DATE          NOT NULL,
  CYCLE_DATE                      DATE          NOT NULL
);

The temporary table is the result of FEE_SCHEDULE_HISTORICAL minus FEE_SCHEDULE.
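Since the question asks about parallelism, a hedged sketch: rewrite the updatable join view as a plain UPDATE ... WHERE EXISTS and enable parallel DML (Enterprise Edition; degree 4 is an arbitrary illustration). Whether this is actually faster depends on the resulting plan, so treat it as a starting point:

ALTER SESSION ENABLE PARALLEL DML;

UPDATE /*+ PARALLEL(h 4) */ FEE_SCHEDULE_HISTORICAL h
SET    h.VALID_THROUGH      = SYSDATE - 1,
       h.LAST_UPDATE_TMSTMP = SYSTIMESTAMP
WHERE  h.VALID_THROUGH = TO_DATE('31-DEC-9999', 'dd-mon-yyyy')
AND    h.universe = 'DC'
AND    EXISTS (SELECT 1 FROM ETL_FEE_SCH_TMP d
               WHERE  d.FUND_ID = h.FUND_ID
               AND    d.FEETYPE_NAME = h.FEETYPE_NAME
               AND    d.BREAKPOINT_TYPE = h.BREAKPOINT_TYPE
               AND    d.BREAKPOINT_QTY = h.BREAKPOINT_QTY
               AND    d.LOW_BREAKPOINT_AMT = h.LOW_BREAKPOINT_AMT
               AND    d.universe = h.universe)
AND    EXISTS (SELECT 1 FROM FEE_SCHEDULE s
               WHERE  s.FUND_ID = h.FUND_ID
               AND    s.FEETYPE_NAME = h.FEETYPE_NAME
               AND    s.BREAKPOINT_TYPE = h.BREAKPOINT_TYPE
               AND    s.BREAKPOINT_QTY = h.BREAKPOINT_QTY
               AND    s.LOW_BREAKPOINT_AMT = h.LOW_BREAKPOINT_AMT
               AND    s.universe = 'DC');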
