Monday, April 1, 2013

[how to] IO Frozen in error SQL error log


IO Frozen in error SQL error log

Posted: 01 Apr 2013 07:38 PM PDT

The error log shows IO is frozen on database DBName, followed by a thaw message, yet my scheduled database backup task, which normally runs, hasn't been running since the network team started doing backups (not database backups, according to them) on the database server using ARCserve 16.5. They run their backup utility around 11:00 PM or later, and my maintenance backup runs at 2:00 AM. This is a SQL Server 2000 SP4 environment. I would like to gather as much evidence as possible to show them that their utility is causing my scheduled backups not to run.
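For illustration, a minimal sketch of one way to gather that evidence: msdb records every backup taken against the instance, including ones initiated by third-party tools through the virtual device/VSS interface, so the timestamps can be compared against the ARCserve window and the 2:00 AM maintenance plan.

SELECT  bs.database_name,
        bs.backup_start_date,
        bs.backup_finish_date,
        bs.type,                               -- D = full, I = differential, L = log
        bmf.physical_device_name
FROM    msdb.dbo.backupset bs
        JOIN msdb.dbo.backupmediafamily bmf
          ON bs.media_set_id = bmf.media_set_id
WHERE   bs.backup_start_date >= DATEADD(day, -7, GETDATE())
ORDER BY bs.backup_start_date DESC;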

Postgresql output doubt

Posted: 01 Apr 2013 05:48 PM PDT

When I run a query in psql I get paged output and have to press the spacebar after every page. How can I get rid of that, and which option do I have to change?

Also, how can I automatically execute all the commands saved in a file when I log in to psql from a terminal?

Thanks in advance.
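For illustration, a minimal sketch of the usual approaches (the file name commands.sql is hypothetical):

\pset pager off
\i commands.sql

The first line turns the pager off for the current session; the second runs every command in a file from inside psql. From the shell, psql -d mydb -f commands.sql runs a file non-interactively, and settings such as \pset pager off can be placed in ~/.psqlrc so they are applied at every login.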

Foreign Keys with Composite Key

Posted: 01 Apr 2013 05:26 PM PDT

Imagine I have a table with the following dependency:

(Name, BoughtFrom, TimeBought) --> Cost

Where the three attributes (Name, BoughtFrom, TimeBought) form a composite primary key for the table. Now, I want to create a relationship from another table to this one. How do I do that? If I had a single ID column I'd know what to do, but I've never come across a scenario like this.

Would I have to add all three columns (Name, BoughtFrom, TimeBought) to the other table? Or is there another way?
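For illustration, a minimal sketch (table and column names other than the three key columns are hypothetical) of a foreign key that references a composite primary key; the referencing table has to carry all three columns:

CREATE TABLE Purchase (
    Name        varchar(100) NOT NULL,
    BoughtFrom  varchar(100) NOT NULL,
    TimeBought  datetime     NOT NULL,
    Cost        decimal(10,2),
    PRIMARY KEY (Name, BoughtFrom, TimeBought)
);

CREATE TABLE PurchaseNote (
    NoteID      int IDENTITY PRIMARY KEY,
    Name        varchar(100) NOT NULL,
    BoughtFrom  varchar(100) NOT NULL,
    TimeBought  datetime     NOT NULL,
    NoteText    varchar(4000),
    FOREIGN KEY (Name, BoughtFrom, TimeBought)
        REFERENCES Purchase (Name, BoughtFrom, TimeBought)
);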

Find out number of active locks on a table

Posted: 01 Apr 2013 05:13 PM PDT

I was wondering if there is a way to find out the number of active locks held on a table in SQL Server?
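For illustration, a minimal sketch for SQL Server 2005 and later (the table name is hypothetical). Object-level locks expose the object_id directly, while row, key, and page locks expose a hobt_id that maps back through sys.partitions:

SELECT COUNT(*) AS ActiveLocks
FROM   sys.dm_tran_locks tl
WHERE  tl.resource_database_id = DB_ID()
  AND  tl.request_status = 'GRANT'
  AND  (   (tl.resource_type = 'OBJECT'
            AND tl.resource_associated_entity_id = OBJECT_ID('dbo.MyTable'))
        OR (tl.resource_type IN ('PAGE', 'KEY', 'RID')
            AND tl.resource_associated_entity_id IN
                (SELECT hobt_id FROM sys.partitions
                 WHERE object_id = OBJECT_ID('dbo.MyTable'))));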

SSIS Expressions Date Format get just YY for year

Posted: 01 Apr 2013 02:28 PM PDT

I found many articles on date formatting expressions in SSIS but I couldn't find one that specified how to get just the Year such as 13 for 2013 or 12 for 2012... I tried

DATAPART("yy",GETDATE())  

but it gives me the same result as

DATAPART("yyyy",GETDATE())  

I was thinking of a dirty/hack fix which would be something like

DATAPART("yyyy",GETDATE()) - 2000   

but it just doesn't feel right.... what if someone runs my package 1000 years from now? lol. Any suggestions?

I am trying to add a timestamp to a file name, so today's file should be Auth040113.txt. I am building the file name in a variable using expressions.
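For illustration, a sketch of one common workaround (not tested against this package): cast the four-digit DATEPART result to a string and keep the last two characters, then assemble the MMDDYY name the same way:

RIGHT((DT_WSTR, 4) DATEPART("yyyy", GETDATE()), 2)

"Auth"
+ RIGHT("0" + (DT_WSTR, 2) DATEPART("mm", GETDATE()), 2)
+ RIGHT("0" + (DT_WSTR, 2) DATEPART("dd", GETDATE()), 2)
+ RIGHT((DT_WSTR, 4) DATEPART("yyyy", GETDATE()), 2)
+ ".txt"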

How can I determine what is using a particular resource (table, view, or function, etc) on my SQL Server?

Posted: 01 Apr 2013 02:34 PM PDT

I've been looking for tables that could use better indexing. I've created a stored procedure that provides a list of tables that have no indexes other than a primary key:

CREATE PROCEDURE [dbo].[GetTablesWithPoorIndexing]  (      @Database sysname  )  AS  BEGIN      DECLARE @TableList TABLE (          TableName sysname      );      DECLARE @IndexList TABLE (          ObjectName sysname,           IndexName sysname NULL,           user_seeks int,          user_scans int,          user_lookups int,          user_updates int,          last_user_seek datetime,          last_user_scan datetime,          last_user_lookup datetime,          last_user_update datetime      );      INSERT INTO @TableList       EXEC GetTablesWithoutIndexes @Database;      INSERT INTO @IndexList       EXEC dbo.GetIndexStats @Database;        SELECT TL.TableName          , IL.user_lookups + IL.user_scans + IL.user_seeks AS ReadActivity          , IL.user_updates AS WriteActivity      FROM @TableList TL           INNER JOIN @IndexList IL ON TL.TableName = IL.ObjectName      ORDER BY 2 DESC, 3 DESC;  END    CREATE PROCEDURE [dbo].[GetTablesWithoutIndexes]  (      @Database sysname = ''  )  AS  BEGIN      SET NOCOUNT ON;      IF @Database = '' SET @Database = DB_NAME(DB_ID());      SET @Database = QUOTENAME(@Database);      DECLARE @cmd nvarchar(max);      SET @cmd = '';      SET @cmd = '  SELECT o.name  FROM ' + @Database + '.sys.indexes i      INNER JOIN ' + @Database + '.sys.dm_db_index_usage_stats ius           ON i.index_id = ius.index_id AND i.object_id = ius.object_id      INNER JOIN ' + @Database + '.sys.objects o           on i.object_id = o.object_id  GROUP BY o.name  HAVING COUNT(ius.index_id)<=1;  ';      PRINT @cmd;      EXEC sp_executesql @cmd;  END    CREATE PROCEDURE [dbo].[GetIndexStats]  (      @Database sysname = ''      , @Object sysname = ''      , @IndexName sysname = ''  )  AS  BEGIN      SET NOCOUNT ON;      IF @Database = '' SET @Database = DB_NAME(DB_ID());      DECLARE @cmd nvarchar(max);      DECLARE @where nvarchar(max);      DECLARE @connector nvarchar(max);      SET @cmd = '  SELECT o.name AS ObjectName, i.name AS IndexName, ius.user_seeks,       ius.user_scans, ius.user_lookups, ius.user_updates, ius.last_user_seek,      ius.last_user_scan, ius.last_user_lookup, ius.last_user_update  FROM ' + @Database + '.sys.objects o       INNER JOIN ' + @Database + '.sys.indexes i               on o.object_id = i.object_id      LEFT JOIN ' + @Database + '.sys.dm_db_index_usage_stats ius               ON i.object_id = ius.object_id AND i.index_id = ius.index_id  ';      SET @where = '';      SET @connector = 'WHERE '      IF @Object <> ''      BEGIN          SET @where = @where + @connector + 'o.name = ''' + @Object + '''';          SET @connector = ' AND ';      END      IF @IndexName <> ''      BEGIN          SET @where = @where + @connector + 'i.name = ''' + @IndexName + '''';          SET @connector = ' AND ';      END      IF @where <> '' SET @cmd = @cmd + @where;      SET @cmd = @cmd + CHAR(13) + CHAR(10) + 'ORDER BY i.index_id;'      EXEC sp_executesql @cmd;      PRINT @cmd;  END  

The results of running these stored procs have raised some questions about several objects on this SQL Server that are being called repeatedly.

I can no longer remember what system is using them, and although the purpose is easily identified by looking at the source code for the given object, I'd like to identify what process is using them.

Since I'm using SQL Server 2012, I thought I could use Extended Events to find the culprit. The following statement sets up an Extended Event session. The session definition does not save its results anywhere, so you must right-click the session and choose "Watch Live Data" to see events as they happen (or very nearly so).

DROP EVENT SESSION FindCaller ON SERVER;

CREATE EVENT SESSION [FindCaller] ON SERVER
ADD EVENT sqlserver.sql_batch_starting
(
    ACTION
    (
        sqlserver.client_app_name
        , sqlserver.client_hostname
        , sqlserver.database_name
        , sqlserver.nt_username
        , sqlserver.session_id
        , sqlserver.sql_text
    )
    WHERE ([sqlserver].[like_i_sql_unicode_string]([sqlserver].[sql_text], N'ObjectInQuestion'))
)
WITH
(
    MAX_MEMORY = 4096 KB
    , EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS
    , MAX_DISPATCH_LATENCY = 30 SECONDS
    , MAX_EVENT_SIZE = 0 KB
    , MEMORY_PARTITION_MODE = NONE
    , TRACK_CAUSALITY = OFF
    , STARTUP_STATE = OFF
);

In the above code, ObjectInQuestion should be changed to the text you are looking for - this could be the name of a table, view, Stored Proc, etc that appears in the SQL Text executed by the server.

I ran this extended event session, but it returns no events, even if I manually execute SQL text with the ObjectInQuestion in a new session.

Since Extended Events are the "best new thing" in SQL Server for diagnostics, I must be missing something in my definition.
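One thing worth checking (a sketch, not a confirmed diagnosis): with STARTUP_STATE=OFF the session is only defined, not started, and like_i_sql_unicode_string behaves like LIKE, so the search text needs wildcards:

ALTER EVENT SESSION [FindCaller] ON SERVER STATE = START;

-- and in the session's WHERE clause:
WHERE ([sqlserver].[like_i_sql_unicode_string]([sqlserver].[sql_text], N'%ObjectInQuestion%'))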

Bitmask Flags with Lookup Tables Clarification

Posted: 01 Apr 2013 06:21 PM PDT

I've received a dataset from an outside source which contains several bitmask fields stored as varchars. They range in length from 3 to 21 bits. I need to be able to run SELECT queries against these fields using AND or OR logic.

Using a calculated field, where I convert the bits into an integer value, I can easily find rows that match an AND query with a simple WHERE rowvalue = requestvalue, but the OR logic would require a bitwise & to find matching records.

Given that I would need to work with several of these columns and select from hundreds of millions of records, I feel that there would be a huge performance hit when doing bitwise & operations to filter my SELECT results.

I came across this answer from searching and it looked like it may fit my needs, but I need some clarification on how it is implemented.

Is this as simple as creating a lookup table that has all possible search conditions?

Example for 3 bits using (a & b) (Edit: Wrong bitwise op)

001,001  001,011  001,101  001,111  010,010  010,011  010,110  011,011  011,111  etc  

The author mentions that it's counter-intuitive initially, but I can't help but feel I'm interpreting the solution incorrectly, as this would give me a single lookup table with likely billions of rows.

Any clarifications on the answer I linked above or other suggestions that would preserve the existing database are appreciated.

Edit: A more concrete example using small data.

Four flags, HasHouse,HasCar,HasCat,HasDog, 0000 is has none, 1111 is has all.

Any number of flags, from all to none, can be flipped, and results must be filtered where selection matches all (Using exact value comparison) or at least 1 (Using bitwise &).

Adding a single calculated column for each bitmask is OK, but adding a column for each bit across more than 100 bits, along with the question of how to insert/update that data, is why I'm trying to find alternative solutions.
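For illustration, a minimal sketch (hypothetical table and column names) of the two filters against an integer flags column, with HasHouse = 8, HasCar = 4, HasCat = 2, HasDog = 1:

DECLARE @search int = 8 | 2;          -- looking for HasHouse and HasCat

-- rows having every selected flag set (possibly others as well)
SELECT * FROM dbo.People WHERE Flags & @search = @search;

-- rows having at least one of the selected flags set
SELECT * FROM dbo.People WHERE Flags & @search <> 0;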

Slow Memcached: Average 10ms memcached `get`

Posted: 01 Apr 2013 01:08 PM PDT

We're using Newrelic to measure our Python/Django application performance. Newrelic is reporting that across our system "Memcached" is taking an average of 12ms to respond to commands.

Drilling down into the top dozen or so web views (by number of requests), I can see that some Memcached gets take up to 30 ms; I can't find a single Memcached get that returns in less than 10 ms.

More details on the system architecture:

  • Currently we have four application servers each of which has a memcached member. All four memcached members participate in a memcache cluster.
  • We're running on a cloud hosting provider and all traffic is running across the "internal" network (via "internal" IPs)
  • When I ping from one application server to another the responses are in ~0.5ms

Isn't 10ms a slow response time for Memcached?

As far as I understand if you think "Memcache is too slow" then "you're doing it wrong". So am I doing it wrong?

SSIS SQL Result to tab delimited txt file

Posted: 01 Apr 2013 12:53 PM PDT

I have created a package that Runs a Sql Query and then puts the results into a flat tab delimited txt file.

When I opened the flat file in Excel everything was in the right columns, but in columns where I had a combination of alphabetic and numeric characters I was getting extra trailing blank spaces in certain records (as shown below). So I added a Derived Column step doing RTRIM(memberid), which successfully removed the trailing spaces, but that column is now aligned to the left while all the other records are aligned to the right. Has anyone come across this scenario, and is there anything I can do so that the columns are all aligned to the right?

    123123      123333  MC1234       32212       12546       35265  C423621  

Ubuntu / MySQL / Percona XtraBackup gives error "sh: 1: xtrabackup_55: not found"

Posted: 01 Apr 2013 08:36 PM PDT

Hello I installed Ubuntu 12.04.2 LTS on a fresh box.

This installed MySQL 5.5.29.

I installed percona-xtrabackup-2.0.6-521.tar.gz, and moved the executables to /usr/bin/

But innobackupex is giving an error: "sh: 1: xtrabackup_55: not found"


david@server1:/usr/bin$ sudo innobackupex --user=root --password=123 /home/david/dbb/  [sudo] password for david:     InnoDB Backup Utility v1.5.1-xtrabackup; Copyright 2003, 2009 Innobase Oy  and Percona Ireland Ltd 2009-2012.  All Rights Reserved.    This software is published under  the GNU GENERAL PUBLIC LICENSE Version 2, June 1991.    130401 12:17:04  innobackupex: Starting mysql with options:  --password=xxxxxxxx --user='root' --unbuffered --  130401 12:17:04  innobackupex: Connected to database with mysql child process (pid=1488)  130401 12:17:10  innobackupex: Connection to database server closed  IMPORTANT: Please check that the backup run completes successfully.             At the end of a successful backup run innobackupex             prints "completed OK!".    innobackupex: Using mysql  Ver 14.14 Distrib 5.5.29, for debian-linux-gnu (x86_64) using readline 6.2  innobackupex: Using mysql server version Copyright (c) 2000, 2012, Oracle and/or its affiliates. All rights reserved.    sh: 1: xtrabackup_55: not found  innobackupex: fatal error: no 'mysqld' group in MySQL options  

The files are present and executable:

david@server1:/usr/bin$ ls -l inno*  -rwxr-xr-x 1 root  root  108823 Mar 19 14:24 innobackupex  lrwxrwxrwx 1 david david     12 Mar 19 14:24 innobackupex-1.5.1 -> innobackupex  -rwxr-xr-x 1 root  root   10096 Jan 16 17:24 innochecksum  -rwxr-xr-x 1 root  root  400192 Jan 16 17:24 innotop  david@server1:/usr/bin$   david@server1:/usr/bin$ ls -l xtra*  -rwxr-xr-x 1 root root  9799993 Mar 19 14:21 xtrabackup  -rwxr-xr-x 1 root root  8361704 Mar 19 14:24 xtrabackup_51  -rwxr-xr-x 1 root root 12699861 Mar 19 14:15 xtrabackup_55  david@server1:/usr/bin$   

Path is correct:

david@server1:/usr/bin$ echo $PATH  /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games  

I get the same results if I run

innobackupex   ./innobackupex   sudo innobackupex   sudo ./innobackupex   

Any ideas?
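As an aside on the final line of that output (a guess rather than a confirmed fix): innobackupex reads its server settings from the [mysqld] group of the MySQL option file, so a minimal section along these lines is usually expected in /etc/mysql/my.cnf:

[mysqld]
datadir = /var/lib/mysql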

Setting DATEFIRST to 6

Posted: 01 Apr 2013 12:53 PM PDT

I would like to set DATEFIRST to 6 (i.e. Saturday). I have read that to change it permanently I could set the default language of the login to one of the existing languages from sys.syslanguages and alter the login like:

USE [master]
GO
ALTER LOGIN [some_user] WITH DEFAULT_LANGUAGE = [Français]
GO

But:

SELECT * FROM sys.syslanguages;  

I get many languages, but none of them has DATEFIRST set to 6 (they are all set to 1 or 7).

Question:

Can I add another language based on an existing one and set its datefirst field to 6?

Can I modify an existing syslanguage?

All I want is to set it to 6 permanently, because I need it to get the right week number.

Thank you for your help!
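For completeness, a minimal sketch of the per-session form (it does not answer the "permanent" part of the question, since it has to be issued on every connection):

SET DATEFIRST 6;
SELECT @@DATEFIRST;   -- returns 6 for the current session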

SQL Server 2012 catalog.executions to sysjobhistory - any way to join them?

Posted: 01 Apr 2013 12:38 PM PDT

I have exhausted my resources and can't find a foolproof way to join the ssisdb.catalog tables to the jobs that run them. Trying to write some custom sprocs to monitor my execution times and rows written from the catalog tables, and it would be greatly beneficial to be able to tie them together with the calling job.

UPDATE / SELECT incredibly slow in MySQL InnoDB

Posted: 01 Apr 2013 01:13 PM PDT

I have one table with 2.5 billion records (2.5 Cr.) and I am trying to get a count of records based on an indexed column. It takes 115 seconds to execute.

Query : Select count(1) from table1 where column=code

Table1 Info

  • 2 billion (2 Cr.)
  • code gets resultset from DB with 1 to 10k rows,
  • Column is indexed.

Server Details

  • Intel Xeon 3.10ghz
  • 8 GB RAM
  • MySQL server 5.1
  • MySQL Connector 5.0.6

my.cnf details

  • Default Engine =InnoDB
  • innodb_buffer_pool_size=4G

UPDATE queries also take too much time. Please suggest how I can tune MySQL.
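As a first diagnostic step, a minimal sketch (table and column names as described in the question; the literal value is hypothetical) to confirm whether the index is actually being used and how many rows the optimizer expects to examine:

EXPLAIN SELECT COUNT(1) FROM table1 WHERE `column` = 'code123';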

How to suppress an informational raiserror message

Posted: 01 Apr 2013 11:26 AM PDT

I have a setup script that calls sp_configure a few times to set some system options. It works fine, but I get several instances of the following message

Configuration option '%ls' changed from %ld to %ld. Run the RECONFIGURE statement to install.

with the variables filled in accordingly. Internally I can see sp_configure is calling

raiserror(15457,-1,-1, @configname, @prevvalue, @configvalue) with log  

Is there anything I can do to prevent those informational messages (which are useless to the script; I call RECONFIGURE from inside the script) from displaying in the Messages window in SSMS when a user runs the script?

BCNF when no keys are available

Posted: 01 Apr 2013 11:08 AM PDT

I have a problem understanding a particular case of BCNF. Suppose we have a relation R(B, C, D) with the functional dependency {C → D}. Is my relation in BCNF, and if so, why? Is C considered a superkey?

Using Oracle hierarchical queries for transposing field value into rows

Posted: 01 Apr 2013 10:55 AM PDT

One column in my table stores a list of values separated by commas (I know it's a poor design, but refactoring is not an option at the moment). I seem to have come up with a working solution, but I'm afraid it's not very efficient:

select distinct test_id, regexp_substr(str_data, '[^,]+', 1, level)
from
(
  select 1 as test_id, '1,2,3' as str_data from dual
  union all
  select 2 as test_id, '4,5,6' as str_data from dual
) test_data
connect by regexp_substr(str_data, '[^,]+', 1, level) is not null

I don't like that I have to use distinct. Is there a better approach?

Thank you.

Can I use replication/mirroring to have a "Current" database and a "Full History" copy?

Posted: 01 Apr 2013 10:45 AM PDT

I have a database that is used for current operations. The amount of data is starting to get a bit too much for the applications using it.

Most of the data is older and is very rarely needed.

I am wondering if there is a way to configure SQL Server to keep a mirror of the original database, but allow me to occasionally purge out old data (and not have it purge from the copy).

NOTE: My database is currently SQL Server 2008 R2, but we will be upgrading to SQL Server 2012 soon.

MySQL slave stuck in "Reading event from the relay log"?

Posted: 01 Apr 2013 09:42 AM PDT

  • 5.5.28-log MySQL Community Server (GPL) by Remi
  • binlog-format=MIXED

My problem is similar to this question.

*************************** 2. row ***************************       Id: 973415     User: system user     Host:        db: NULL  Command: Connect     Time: 29062    State: Reading event from the relay log     Info: NULL  

It looks like a bug, except that no one mentioned version 5.5.28 being affected.

Here is the additional information:

mysql> show slave status\G

          Master_Log_File: mysql-bin.006413        Read_Master_Log_Pos: 40371120             Relay_Log_File: mysqld-relay-bin.011003              Relay_Log_Pos: 226592090      Relay_Master_Log_File: mysql-bin.006392           Slave_IO_Running: Yes          Slave_SQL_Running: Yes            Replicate_Do_DB:         Replicate_Ignore_DB:          Replicate_Do_Table:      Replicate_Ignore_Table:     Replicate_Wild_Do_Table:   Replicate_Wild_Ignore_Table:                  Last_Errno: 0                 Last_Error:                Skip_Counter: 0        Exec_Master_Log_Pos: 226591944            Relay_Log_Space: 5708184440            Until_Condition: None             Until_Log_File:               Until_Log_Pos: 0         Master_SSL_Allowed: No         Master_SSL_CA_File:          Master_SSL_CA_Path:             Master_SSL_Cert:           Master_SSL_Cipher:              Master_SSL_Key:       Seconds_Behind_Master: 29531  

mysql> show engine innodb status;

---TRANSACTION 1709EE56, ACTIVE 29524 sec fetching rows  mysql tables in use 8, locked 8  94647 lock struct(s), heap size 14121400, 45755176 row lock(s), undo log entries 7547  MySQL thread id 973415, OS thread handle 0x40d90940, query id 1773214543 Reading event from the relay log  

mysqlbinlog --start-position=226591944 mysql-bin.006392

#130401 15:18:23 server id 248  end_log_pos 226592078   Table_map: `reportingdb`.`v3_cam_ip` mapped to number 4973102  #130401 15:18:23 server id 248  end_log_pos 226593102   Delete_rows: table id 4973102  #130401 15:18:23 server id 248  end_log_pos 226594131   Delete_rows: table id 4973102  #130401 15:18:23 server id 248  end_log_pos 226595169   Delete_rows: table id 4973102  

mysql> show keys from v3_cam_ip\G

*************************** 1. row ***************************          Table: v3_cam_ip     Non_unique: 1       Key_name: ix_campaignid   Seq_in_index: 1    Column_name: campaignid      Collation: A    Cardinality: 20       Sub_part: NULL         Packed: NULL           Null: YES     Index_type: BTREE        Comment:   Index_comment:   *************************** 2. row ***************************          Table: v3_cam_ip     Non_unique: 1       Key_name: ix_dt   Seq_in_index: 1    Column_name: dt      Collation: A    Cardinality: 20       Sub_part: NULL         Packed: NULL           Null:      Index_type: BTREE        Comment:   Index_comment:   2 rows in set (0.95 sec)  

What I have done on one of two Slaves:

  • stop slave; hangs, so I have to press Ctrl-C
  • then I tried to restart MySQL (using the init script) but it failed
  • after that I did a kill -9 (silly?) and started it again
  • now the slave is started but it doesn't get updates from the master

What should I do on the remaining slave?

Real Time data collection - Help select Database

Posted: 01 Apr 2013 09:01 AM PDT

My Scenario:

I want to collect data from various sensors (for now I have 600 sensors sending data, but this may have to scale up by 3000 more sensors, all sending real-time data). I have to run network-based queries on this data. The data does not have to stay in the database for long; as I query, I make calculations based on the current state of the sensors. I do not need historic data. There would be multiple queries made against the database.

My requirements are:

  • A real-time database that is quick.
  • Scalability is an issue, but it can be overlooked since I know exactly how many sensors will send data.
  • It must handle a large number of queries.
  • This is an academic project, so my preference is for open source.

I am not a database person, and the sheer number of options is overwhelming. Any help and suggestions are really appreciated.

How do I demonstrate Transaction Log activity?

Posted: 01 Apr 2013 09:31 AM PDT

Say I have the following setup:

use tempdb
go

set nocount on
go

create table MyTest
(
    Column1 varchar(100),
    Column2 text
)
go

insert mytest (Column1, Column2)
select REPLICATE('a', 100), REPLICATE('a', 100)
from sys.syscolumns a, sys.syscolumns b

I'd like to convert each of the columns to varchar(max) like this:

-- processes every page:
alter table mytest alter column Column1 varchar(max)

-- processes only metadata:
alter table mytest alter column Column2 varchar(max)

How can I demonstrate that the first command processes the whole table while the second only touches metadata? I was thinking of using SET STATISTICS IO, which reports thousands of logical reads for the first command and nothing for the other. I was also thinking of using DBCC LOG or fn_dblog, but I wasn't sure how to interpret the results or tie them to the queries I issued.
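For illustration, a rough sketch using the undocumented sys.fn_dblog: clear the log with a CHECKPOINT (tempdb uses SIMPLE recovery), run each ALTER, and compare how many log records each one generates.

CHECKPOINT;   -- tempdb is in SIMPLE recovery, so this clears most of the log
alter table mytest alter column Column1 varchar(max);
SELECT COUNT(*) AS LogRecordsAfterColumn1 FROM sys.fn_dblog(NULL, NULL);

CHECKPOINT;
alter table mytest alter column Column2 varchar(max);
SELECT COUNT(*) AS LogRecordsAfterColumn2 FROM sys.fn_dblog(NULL, NULL);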

sql server partitioned view execution plan questions

Posted: 01 Apr 2013 04:21 PM PDT

I have created a partitioned view over 3 tables, tb_sales2010, tb_sales2011, and tb_sales2012, using a check constraint on each table (respectively):

datepart(year,[Date])=(2010)
datepart(year,[Date])=(2011)
datepart(year,[Date])=(2012)

Schema for the 2010 table (the other tables are the same):

CREATE TABLE [dbo].[tb_Sales2010](
    [Date] [smalldatetime] NOT NULL,
    [ID] [int] NOT NULL
    CONSTRAINT [PK_tb_Sales2010] PRIMARY KEY NONCLUSTERED
    (
        [ID] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO

ALTER TABLE [dbo].[tb_Sales2010] WITH CHECK ADD CONSTRAINT [CK_Date_2010] CHECK ((datepart(year,[Date])=(2010)))
GO

My query is

SELECT TOP 1 *
FROM partitionedTb_sales
WHERE DATEPART(YY, DATE) = 2011

My question is: the execution plan indicates that all 3 tables are scanned, instead of using the check constraints to determine that only one table can contain the data. Why is that? I feel the execution plan should show only one table being scanned.

(execution plan screenshot showing all three tables being scanned)

Results with statistics turned on:

Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Table 'tb_Sales2012'. Scan count 1, logical reads 3, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Table 'tb_Sales2011'. Scan count 1, logical reads 3, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Table 'tb_Sales2010'. Scan count 1, logical reads 3, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

View definition:

CREATE VIEW PartitionedTb_Sales AS
SELECT * FROM tb_sales2010
UNION ALL
SELECT * FROM tb_sales2011
UNION ALL
SELECT * FROM tb_sales2012
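For comparison, a sketch (not verified against this exact schema) of the form that usually allows the optimizer to eliminate the other member tables: express both the CHECK constraints and the predicate as plain ranges on the column rather than on DATEPART.

ALTER TABLE dbo.tb_Sales2010 WITH CHECK
    ADD CONSTRAINT CK_Date_2010_Range
    CHECK ([Date] >= '20100101' AND [Date] < '20110101');

-- and query on the column itself:
SELECT TOP 1 *
FROM PartitionedTb_Sales
WHERE [Date] >= '20110101' AND [Date] < '20120101';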

BIT columns all "1" after a phpMyAdmin export/import

Posted: 01 Apr 2013 02:51 PM PDT

I have to import data from a MySQL database using phpMyAdmin because that's the only interface my hosting provider supports.

I exported my database from my local machine using phpMyAdmin and then imported the script file on my host. All of the data in the BIT-type columns was changed to '1'. The hosting database version is 5.5.29.

Is it a phpMyAdmin problem or a MySQL version problem? How can I fix it?

Moving SQL Server to different domain

Posted: 01 Apr 2013 09:48 AM PDT

What are best practices/important steps when the Windows Server machine hosting SQL Server 2008 R2 is moved to another domain? What steps definitely/might need to be done for SQL Server before and after the new domain is joined?

Unable to start Oracle (11g1) due to bad memory configuration (ORA-00837)

Posted: 01 Apr 2013 06:00 PM PDT

I decided to reduce the memory allocation of a local developer instance, and was told that the following command would do the job:

alter system set memory_max_target = 512M scope = spfile;  

Which I guess it did, since I can no longer start the Oracle instance in question. This is the error I get when trying to connect as a non-DBA user:

ORA-01034: ORACLE not available  ORA-27101: shared memory realm does not exist  

When connecting as SYSDBA I get the following;

Connected to an inactive instance.  SQL>startup nomount;  ORA-00837: Specified value of MEMORY_TARGET greater than MEMORY_MAX_TARGET  SQL>alter system set memory_max_target = 2G scope = spfile;  ORA-01034: ORACLE not available  

("Connected to an inactive instance." is my translation of a localized error message, might not be exactly correct) So, I'm in a catch 22 situation, I can't change the memory parameters without starting up, and I can't start up.

Is there a way to change these parameters offline?
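For illustration, a rough sketch of the usual offline route (it assumes a SYSDBA connection and that the spfile is in its default location; the /tmp path is just an example): dump the spfile to a text pfile, edit the memory parameters by hand, start up from the pfile, then rebuild the spfile.

CREATE PFILE='/tmp/initmydb.ora' FROM SPFILE;
-- edit the memory_target / memory_max_target lines in /tmp/initmydb.ora, then:
STARTUP PFILE='/tmp/initmydb.ora';
CREATE SPFILE FROM PFILE='/tmp/initmydb.ora';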

Handling concurrent access to a key table without deadlocks in SQL Server

Posted: 01 Apr 2013 02:20 PM PDT

I have a table that is used by a legacy application as a substitute for IDENTITY fields in various other tables.

Each row in the table stores the last used ID LastID for the field named in IDName.

Occasionally the stored proc gets a deadlock - I believe I've built an appropriate error handler; however I'm interested to see if this methodology works as I think it does, or if I'm barking up the wrong tree here.

I'm fairly certain there should be a way to access this table without any deadlocks at all.

The database itself is configured with READ_COMMITTED_SNAPSHOT = 1.

First, here is the table:

CREATE TABLE [dbo].[tblIDs](
    [IDListID] [int] NOT NULL CONSTRAINT PK_tblIDs PRIMARY KEY CLUSTERED IDENTITY(1,1),
    [IDName] [nvarchar](255) NULL,
    [LastID] [int] NULL
);

And the nonclustered index on the IDName field:

CREATE NONCLUSTERED INDEX [IX_tblIDs_IDName] ON [dbo].[tblIDs]
(
    [IDName] ASC
)
WITH (
    PAD_INDEX = OFF
    , STATISTICS_NORECOMPUTE = OFF
    , SORT_IN_TEMPDB = OFF
    , DROP_EXISTING = OFF
    , ONLINE = OFF
    , ALLOW_ROW_LOCKS = ON
    , ALLOW_PAGE_LOCKS = ON
    , FILLFACTOR = 80
);

GO

Some sample data:

INSERT INTO tblIDs (IDName, LastID) VALUES ('SomeTestID', 1);
INSERT INTO tblIDs (IDName, LastID) VALUES ('SomeOtherTestID', 1);
GO

The stored procedure used to update the values stored in the table, and return the next ID:

CREATE PROCEDURE [dbo].[GetNextID](      @IDName nvarchar(255)  )  AS  BEGIN      /*          Description:    Increments and returns the LastID value from tblIDs          for a given IDName          Author:         Max Vernon          Date:           2012-07-19      */        DECLARE @Retry int;      DECLARE @EN int, @ES int, @ET int;      SET @Retry = 5;      DECLARE @NewID int;      SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;      SET NOCOUNT ON;      WHILE @Retry > 0      BEGIN          BEGIN TRY              BEGIN TRANSACTION;              SET @NewID = COALESCE((SELECT LastID FROM tblIDs WHERE IDName = @IDName),0)+1;              IF (SELECT COUNT(IDName) FROM tblIDs WHERE IDName = @IDName) = 0                   INSERT INTO tblIDs (IDName, LastID) VALUES (@IDName, @NewID)              ELSE                  UPDATE tblIDs SET LastID = @NewID WHERE IDName = @IDName;              COMMIT TRANSACTION;              SET @Retry = -2; /* no need to retry since the operation completed */          END TRY          BEGIN CATCH              IF (ERROR_NUMBER() = 1205) /* DEADLOCK */                  SET @Retry = @Retry - 1;              ELSE                  BEGIN                  SET @Retry = -1;                  SET @EN = ERROR_NUMBER();                  SET @ES = ERROR_SEVERITY();                  SET @ET = ERROR_STATE()                  RAISERROR (@EN,@ES,@ET);                  END              ROLLBACK TRANSACTION;          END CATCH      END      IF @Retry = 0 /* must have deadlock'd 5 times. */      BEGIN          SET @EN = 1205;          SET @ES = 13;          SET @ET = 1          RAISERROR (@EN,@ES,@ET);      END      ELSE          SELECT @NewID AS NewID;  END  GO  

Sample executions of the stored proc:

EXEC GetNextID 'SomeTestID';

NewID
2

EXEC GetNextID 'SomeTestID';

NewID
3

EXEC GetNextID 'SomeOtherTestID';

NewID
2

EDIT:

I've added a new index, since the existing index IX_tblIDs_IDName was not being used by the SP; I assume the query processor was using the clustered index since it needs the value stored in LastID. In any case, this new index IS used by the actual execution plan:

CREATE NONCLUSTERED INDEX IX_tblIDs_IDName_LastID ON dbo.tblIDs
(
    IDName ASC
)
INCLUDE
(
    LastID
)
WITH (FILLFACTOR = 100, ONLINE = ON, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON);

EDIT #2:

I've taken the advice that @AaronBertrand gave and modified it slightly. The general idea here is to refine the statement to eliminate unnecessary locking, and overall to make the SP more efficient.

The code below replaces the code above from BEGIN TRANSACTION to END TRANSACTION:

BEGIN TRANSACTION;
SET @NewID = COALESCE((SELECT LastID FROM dbo.tblIDs WHERE IDName = @IDName), 0) + 1;

IF @NewID = 1
    INSERT INTO tblIDs (IDName, LastID) VALUES (@IDName, @NewID);
ELSE
    UPDATE dbo.tblIDs SET LastID = @NewID WHERE IDName = @IDName;

COMMIT TRANSACTION;

Since our code never adds a record to this table with 0 in LastID, we can assume that if @NewID is 1 the intention is to append a new ID to the list; otherwise we are updating an existing row in the list.
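One variation worth considering (a sketch under the same schema, not a drop-in replacement for the whole proc): collapsing the read and the write into a single UPDATE removes the window between the SELECT and the UPDATE where the deadlock becomes possible.

DECLARE @NewID int;

UPDATE dbo.tblIDs
SET    @NewID = LastID = LastID + 1
WHERE  IDName = @IDName;

IF @@ROWCOUNT = 0
BEGIN
    -- first use of this IDName; a unique constraint on IDName would make this race-safe
    INSERT INTO dbo.tblIDs (IDName, LastID) VALUES (@IDName, 1);
    SET @NewID = 1;
END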

Slow query performance

Posted: 01 Apr 2013 06:58 PM PDT

My database contains 20,000 rows and the following query is slow. Please help me to speed this up:

SELECT r.report_id,         r.user_id,         u.user_name,         u.user_mail,         d.department_name,         r.report_comment,         r.report_target_date,         r.report_create_date,         r.report_revised_date,         r.report_root_id  FROM   report r         JOIN user u           ON u.user_id = r.user_id         JOIN department d           ON u.department_id = d.department_id  WHERE  ( u.user_name LIKE Concat('%', 'hieu', '%') )         AND ( d.department_name LIKE Concat('%', 'BOM', '%') )         AND r.report_id IN (SELECT r.report_id                             FROM   report r                             WHERE  r.user_id = 1                             UNION ALL                             SELECT r.report_id                             FROM   report r                                    JOIN user u                                      ON u.user_id = r.user_id                                    JOIN department d                                      ON u.department_id = d.department_id                                    JOIN authority a                                      ON r.user_id = a.user_src_id                             WHERE  a.user_dest_id = 1)         AND ( r.report_id IN (SELECT r.report_id                               FROM   report r                               WHERE  r.report_comment LIKE                                      Concat('%', 'this', '%'))                OR r.report_id IN (SELECT rc.report_id                                   FROM   report_content rc                                   WHERE  rc.visited_company LIKE                                          Concat('%', 'this', '%')                                           OR rc.report_description LIKE                                              Concat('%', 'this', '%')) )  ORDER  BY r.report_target_date DESC  LIMIT  0, 30   

CREATE TABLE :

CREATE TABLE IF NOT EXISTS `report` (    `report_id` int(11) NOT NULL AUTO_INCREMENT,    `user_id` int(11) NOT NULL,    `report_comment` text CHARACTER SET utf8 COLLATE utf8_unicode_ci NOT NULL,    `report_target_date` datetime NOT NULL,    `report_create_date` datetime NOT NULL,    `report_revised_date` datetime DEFAULT NULL,    `report_revised_id` int(11) DEFAULT NULL,    `report_root_id` int(11) DEFAULT NULL,    `enable` int(2) NOT NULL DEFAULT '1',    PRIMARY KEY (`report_id`),    UNIQUE KEY `unique_report` (`report_id`),    KEY `support_report` (`report_id`,`report_target_date`,`report_create_date`,`report_revised_date`,`report_revised_id`,`report_root_id`)  ) ENGINE=InnoDB  DEFAULT CHARSET=latin1 AUTO_INCREMENT=18497 ;      CREATE TABLE IF NOT EXISTS `report_content` (    `report_content_id` int(11) NOT NULL AUTO_INCREMENT,    `report_id` int(11) NOT NULL,    `timestamp` text COLLATE utf8_unicode_ci NOT NULL,    `visited_company` varchar(255) COLLATE utf8_unicode_ci NOT NULL,    `report_description` text COLLATE utf8_unicode_ci NOT NULL,    PRIMARY KEY (`report_content_id`),    UNIQUE KEY `unique_report` (`report_content_id`,`report_id`),    UNIQUE KEY `unique_report_content` (`report_content_id`,`report_id`),    KEY `support_report_content` (`report_content_id`,`report_id`,`visited_company`)  ) ENGINE=InnoDB  DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci AUTO_INCREMENT=278 ;  

My EXPLAIN :

id  | select |   table   |    type   |    possible_keys    |    key   |key_len|    ref  | rows  |  extra  1   |PRIMARY |     r     |    ALL    |         (NULL)      |  (NULL)  | (NULL)| (NULL)  | 20246 | using where/filesort  1   |PRIMARY |     p     |  eq_ref   | PRIMARY,unique_user |  NULL    | 4   |   NULL    |1      | using where  5   |SUBQUERY| report_content|  all  |          NULL       |    NULL  | 4   |   NULL    |160    |  Using where  6   |UNION   | report_content|  all  |          NULL       |    NULL  | 4   |   NULL    |160    |  Using where  

Now I have changed the query to the following:

    SELECT r.report_id,           r.user_id,           u.user_name,           u.user_mail,           d.department_name,           r.report_comment,            r.report_target_date,           r.report_create_date,           r.report_revised_date,           r.report_root_id  FROM report AS r FORCE INDEX (unique_report) , `user`                   AS u FORCE INDEX (unique_user) , `department`                   AS d FORCE INDEX (unique_department)  WHERE (u.user_name LIKE CONCAT('%', 'hieu', '%'))           AND (d.department_name LIKE CONCAT('%', 'bom', '%'))           AND r.report_id IN (  SELECT r.report_id      FROM report r      WHERE r.user_id =1 UNION ALL  SELECT r.report_id      FROM report r FORCE INDEX (unique_report)      JOIN USER u FORCE INDEX (unique_user) ON u.user_id = r.user_id      JOIN department d FORCE INDEX (unique_department) ON u.department_id = d.department_id      JOIN authority a FORCE INDEX (unique_authority) ON r.user_id = a.user_src_id      WHERE a.user_dest_id =1) AND (r.report_id IN (  SELECT  r.report_id      FROM report r      WHERE r.report_comment LIKE CONCAT('%', 'when', '%')) OR EXISTS (  SELECT rc.report_id      FROM report_content rc      WHERE rc.visited_company LIKE CONCAT('%', 'when', '%')       OR MATCH (`timestamp`,report_description)       AGAINST('+when+' WITH QUERY EXPANSION) AND r.report_id = rc.report_id))  ORDER BY r.report_target_date DESC  LIMIT 0, 30  

It is faster than before: 1.97 s.

Can somebody help me speed it up further?

How to modify an update in Oracle so it performs faster?

Posted: 01 Apr 2013 09:58 AM PDT

I have this query:

UPDATE   (      SELECT   h.valid_through_dt, h.LAST_UPDATE_TMSTMP      FROM   ETL_FEE_SCH_TMP d, FEE_SCHEDULE_HISTORICAL h      WHERE       h.FUND_ID = d.FUND_ID      AND h.FEETYPE_NAME = d.FEETYPE_NAME      AND h.BREAKPOINT_TYPE = d.BREAKPOINT_TYPE      AND h.BREAKPOINT_QTY = d.BREAKPOINT_QTY      AND h.LOW_BREAKPOINT_AMT = d.LOW_BREAKPOINT_AMT      AND h.VALID_THROUGH = TO_DATE ('31-DEC-9999', 'dd-mon-yyyy')      AND h.universe = 'DC'      AND h.universe = d.universe      AND EXISTS      (          SELECT 1          FROM FEE_SCHEDULE s          WHERE s.FUND_ID = h.FUND_ID          AND s.FEETYPE_NAME = h.FEETYPE_NAME          AND s.BREAKPOINT_TYPE = h.BREAKPOINT_TYPE          AND s.BREAKPOINT_QTY = h.BREAKPOINT_QTY          AND s.LOW_BREAKPOINT_AMT = h.LOW_BREAKPOINT_AMT          AND s.universe = 'DC'      )  ) updateTable  SET     updateTable.VALID_THROUGH = (SYSDATE - 1),  updateTable.LAST_UPDATE_TMSTMP = SYSTIMESTAMP;  

The trouble is that this query takes a long time to run. I don't know whether it is possible to run it in parallel, or whether it would be easier to update via a cursor in a pipelined function.

What would you suggest?

This is all the information that I believe is relevant.

This is the execution plan of the internal select:

Execution Plan  ----------------------------------------------------------  Plan hash value: 57376096  ---------------------------------------------------------------------------------------------------------  | Id  | Operation                    | Name                     | Rows  | Bytes| Cost (%CPU)| Time     |  ---------------------------------------------------------------------------------------------------------  |   0 | SELECT STATEMENT             |                          |     1 |   306 |  8427   (1)| 00:01:42 |  |   1 |  NESTED LOOPS                |                          |       |       |            |          |  |   2 |   NESTED LOOPS               |                          |     1 |    306|  8427   (1)| 00:01:42 |  |   3 |    MERGE JOIN CARTESIAN      |                          |     1 |    192|  8426   (1)| 00:01:42 |  |*  4 |     INDEX RANGE SCAN         | SYS_C000666              |     1 |     96|     2   (0)| 00:00:01 |  |   5 |     BUFFER SORT              |                          |  3045K|   278M|  8425   (1)| 00:01:42 |  |   6 |      SORT UNIQUE             |                          |  3045K|   278M|  8425   (1)| 00:01:42 |  |*  7 |       TABLE ACCESS FULL      | FEE_SCHEDULE             |  3045K|   278M|  8425   (1)| 00:01:42 |  |*  8 |    INDEX RANGE SCAN          | FEE_SCHDL_IDX1           |     1 |       |     1   (0)| 00:00:01 |  |*  9 |   TABLE ACCESS BY INDEX ROWID| FEE_SCHEDULE_HISTORICAL  |     1 |   114 |     1   (0)| 00:00:01 |  ---------------------------------------------------------------------------------------------------------  Predicate Information (identified by operation id):  ---------------------------------------------------     4 - access("D"."UNIVERSE"='DC')     7 - filter("S"."UNIVERSE"='DC')     8 - access("H"."UNIVERSE"='DC' AND "S"."FUND_ID"="H"."FUND_ID" AND                "S"."FEETYPE_NAME"="H"."FEETYPE_NAME" AND                "S"."BREAKPOINT_TYPE"="H"."BREAKPOINT_TYPE" AND                "S"."BREAKPOINT_QTY"="H"."BREAKPOINT_QTY" AND                "S"."LOW_BREAKPOINT_AMT"="H"."LOW_BREAKPOINT_AMT")         filter("H"."FUND_ID"="D"."FUND_ID" AND                "H"."FEETYPE_NAME"="D"."FEETYPE_NAME" AND                "H"."BREAKPOINT_TYPE"="D"."BREAKPOINT_UNIT_TY  

Table data:

---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------  UNIVERSE|FUND_ID   |FEETYPE_NAME |BREAKPOINT_TYPE|BREAKPOINT_QTY|LOW_BREAKPOINT_AMT|HIGH_BREAKPOINT_AMT|FEE_PCT|FEE_SCHDL_SEQ_ID|GROUP_ID|LAST_UPDATE_TMSTMP  |VALID_FROM|VALID_THROUGH|INSERT_TMSTMP        |JOB_ID|  ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------  DC      |DC9ZTPLPHO|DeferLoad    |Percentage     |4             |10000             |300000             |3.14   |780250          |null    |1/4/2012  3:59:54 PM|6/23/2012 |12/31/9999   |1/5/2011   3:59:54 PM|666   |  DC      |DCE86Y8XFU|RedemptionFee|Percentage     |9             |  100             |100500             |7.67   |780251          |null    |6/4/2012  4:49:54 PM|11/12/2011|12/31/9999   |8/17/2011  2:00:54 PM|666   |  DC      |DCAYL0KONA|FrontLoad    |Percentage     |2             |50000             |601500             |5.00   |780252          |null    |4/25/2012 4:49:54 PM|8/2/2012  |12/31/9999   |12/19/2012 9:59:00 PM|666   |  DC      |DC9ZTPLPHO|DeferLoad    |Percentage     |7             |80000             |900000             |2.24   |780252          |null    |4/25/2012 4:49:54 PM|8/2/2012  |12/31/9999   |12/19/2012 9:59:00 PM|666   |  ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------  

This is the script of the historical table:

CREATE TABLE FEE_SCHEDULE_HISTORICAL  (    UNIVERSE                        VARCHAR2(2 BYTE) NOT NULL,    FUND_ID                         VARCHAR2(10 BYTE) NOT NULL,    FEETYPE_NAME                    VARCHAR2(75 BYTE),    BREAKPOINT_TYPE                 VARCHAR2(50 BYTE),    BREAKPOINT_QTY                  VARCHAR2(10 BYTE),    LOW_BREAKPOINT_AMT              NUMBER(19,6),    HIGH_BREAKPOINT_AMT             NUMBER(19,6),    FEE_PCT                         NUMBER(19,6),    FEE_SCHDL_SEQ_ID                NUMBER        NOT NULL,    GROUP_ID                        NUMBER,    LAST_UPDATE_TMSTMP              DATE          NOT NULL,    VALID_FROM                      DATE          NOT NULL,    VALID_THROUGH                   DATE          NOT NULL,    INSERT_TMSTMP                   DATE          NOT NULL,    JOB_ID                          NUMBER        NOT NULL  );    CREATE UNIQUE INDEX FEE_SCHDL_PK ON FEE_SCHEDULE_HISTORICAL(FEE_SCHDL_SEQ_ID);    CREATE UNIQUE INDEX FEE_SCHDL_HST_IDX ON FEE_SCHEDULE_HISTORICAL (      UNIVERSE,      FUND_ID,      FEETYPE_NAME,      BREAKPOINT_TYPE,      BREAKPOINT_QTY,       LOW_BREAKPOINT_AMT,      VALID_FROM,      JOB_ID  )    CREATE INDEX FEE_SCHEDULE_HST_IDX2 ON FEE_SCHEDULE_HISTORICAL(LAST_UPDATE_TMSTMP)    CREATE INDEX FEE_SCHEDULE_HST_IDX3 ON FEE_SCHEDULE_HISTORICAL(VALID_THROUGH)    ALTER TABLE FEE_SCHEDULE_HISTORICAL ADD (      CONSTRAINT FEE_SCHDL_PK      PRIMARY KEY      (FEE_SCHDL_SEQ_ID)  );  

This is the other table:

CREATE TABLE FEE_SCHEDULE  (    UNIVERSE                        VARCHAR2(2 BYTE) NOT NULL,    FUND_ID                         VARCHAR2(10 BYTE) NOT NULL,    FEETYPE_NAME                    VARCHAR2(75 BYTE),    BREAKPOINT_TYPE                 VARCHAR2(50 BYTE),    BREAKPOINT_QTY                  VARCHAR2(10 BYTE),    LOW_BREAKPOINT_AMT              NUMBER(19,6),    HIGH_BREAKPOINT_AMT             NUMBER(19,6),    FEE_PCT                         NUMBER(19,6),    JOB_RUN_ID                      NUMBER        NOT NULL,    FILE_DATE                       DATE          NOT NULL,    CYCLE_DATE                      DATE          NOT NULL  )  

The temporary table is the result of FEE_SCHEDULE_HISTORICAL minus FEE_SCHEDULE
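On the parallel question, a sketch only (it assumes Oracle 11gR2 or later for the statement-level PARALLEL hint, and that the session is allowed to enable parallel DML); the degree of 4 is arbitrary:

ALTER SESSION ENABLE PARALLEL DML;

UPDATE /*+ PARALLEL(4) */
(
    SELECT h.valid_through_dt, h.LAST_UPDATE_TMSTMP
    FROM   ETL_FEE_SCH_TMP d, FEE_SCHEDULE_HISTORICAL h
    WHERE  h.FUND_ID = d.FUND_ID
    AND    h.FEETYPE_NAME = d.FEETYPE_NAME
    AND    h.BREAKPOINT_TYPE = d.BREAKPOINT_TYPE
    AND    h.BREAKPOINT_QTY = d.BREAKPOINT_QTY
    AND    h.LOW_BREAKPOINT_AMT = d.LOW_BREAKPOINT_AMT
    AND    h.VALID_THROUGH = TO_DATE('31-DEC-9999', 'dd-mon-yyyy')
    AND    h.universe = 'DC'
    AND    h.universe = d.universe
    AND    EXISTS
    (
        SELECT 1
        FROM   FEE_SCHEDULE s
        WHERE  s.FUND_ID = h.FUND_ID
        AND    s.FEETYPE_NAME = h.FEETYPE_NAME
        AND    s.BREAKPOINT_TYPE = h.BREAKPOINT_TYPE
        AND    s.BREAKPOINT_QTY = h.BREAKPOINT_QTY
        AND    s.LOW_BREAKPOINT_AMT = h.LOW_BREAKPOINT_AMT
        AND    s.universe = 'DC'
    )
) updateTable
SET updateTable.VALID_THROUGH      = (SYSDATE - 1),
    updateTable.LAST_UPDATE_TMSTMP = SYSTIMESTAMP;

COMMIT;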

[Articles] Google Glass Your Database


Google Glass Your Database

Posted: 31 Mar 2013 11:00 PM PDT

A new idea from a small startup may revolutionize the way that you search for data.

[MS SQL Server] Not able to shrink transaction log file


Not able to shrink transaction log file

Posted: 31 Mar 2013 09:55 PM PDT

Why am I not able to shrink the transaction log file of my database even though there's 91% free space in it? I'm using the following script for testing purposes:

/* FULL Recovery and Log File Growth */
USE [master]
GO
-- Create Database SimpleTran
IF EXISTS (SELECT name FROM sys.databases WHERE name = N'SimpleTran')
BEGIN
ALTER DATABASE [SimpleTran] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE [SimpleTran]
END
GO
CREATE DATABASE [SimpleTran]
GO
-- Set Database backup model to FULL
ALTER DATABASE [SimpleTran] SET RECOVERY FULL
GO
BACKUP DATABASE [SimpleTran] TO DISK = N'D:\Backup\SimpleTran\SimpleTran.bak' WITH NOFORMAT, NOINIT, NAME = N'SimpleTran-Full Database Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10
GO
-- Check Database Log File Size
SELECT DB_NAME(database_id) AS DatabaseName, Name AS Logical_Name, Physical_Name, (size*8)/1024 SizeMB
FROM sys.master_files
WHERE DB_NAME(database_id) = 'SimpleTran'
GO
-- Create Table in Database with Transaction
USE SimpleTran
GO
IF EXISTS (SELECT * FROM sys.objects WHERE OBJECT_ID = OBJECT_ID(N'[dbo].[RealTempTable]') AND TYPE IN (N'U'))
DROP TABLE [dbo].[RealTempTable]
GO
CREATE TABLE RealTempTable (ID INT)
INSERT INTO RealTempTable (ID)
SELECT TOP 50000 ROW_NUMBER() OVER (ORDER BY a.name) RowID
FROM sys.all_objects a
CROSS JOIN sys.all_objects b
GO
-- Check the size of the Database
SELECT DB_NAME(database_id) AS DatabaseName, Name AS Logical_Name, Physical_Name, (size*8)/1024 SizeMB
FROM sys.master_files
WHERE DB_NAME(database_id) = 'SimpleTran'
GO
-- Take Full Backup
BACKUP DATABASE [SimpleTran] TO DISK = N'D:\Backup\SimpleTran\SimpleTran.bak' WITH NOFORMAT, NOINIT, NAME = N'SimpleTran-Transaction Log Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10
GO
-- Run following transaction multiple times and check the size of T-Log
INSERT INTO RealTempTable (ID)
SELECT TOP 50000 ROW_NUMBER() OVER (ORDER BY a.name) RowID
FROM sys.all_objects a
CROSS JOIN sys.all_objects b
GO
-- Check the size of the Database
SELECT DB_NAME(database_id) AS DatabaseName, Name AS Logical_Name, Physical_Name, (size*8)/1024 SizeMB
FROM sys.master_files
WHERE DB_NAME(database_id) = 'SimpleTran'
GO
/*
Now run following code multiple times.
You will notice that it will not increase the size of .ldf file but will for sure increase
the size of the log backup.
*/
-- Second Time
-- START
BACKUP LOG [SimpleTran] TO DISK = N'D:\Backup\SimpleTran\SimpleTran.trn' WITH NOFORMAT, NOINIT, NAME = N'SimpleTran-Transaction Log Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10
GO
-- Run following transaction and check the size of T-Log
INSERT INTO RealTempTable (ID)
SELECT TOP 50000 ROW_NUMBER() OVER (ORDER BY a.name) RowID
FROM sys.all_objects a
CROSS JOIN sys.all_objects b
GO
-- Check the size of the Database
SELECT DB_NAME(database_id) AS DatabaseName, Name AS Logical_Name, Physical_Name, (size*8)/1024 SizeMB
FROM sys.master_files
WHERE DB_NAME(database_id) = 'SimpleTran'
GO
-- END
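For illustration, a quick check worth running before shrinking (a sketch; file id 2 is the usual value for the log file but should be verified against sys.database_files): see what is preventing log reuse, back up the log, then shrink.

SELECT name, log_reuse_wait_desc FROM sys.databases WHERE name = 'SimpleTran';

BACKUP LOG [SimpleTran] TO DISK = N'D:\Backup\SimpleTran\SimpleTran.trn';
USE [SimpleTran];
DBCC SHRINKFILE (2, 1);   -- shrink the log file (file_id 2) to 1 MB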

Howto. ..Load 2 columns from excel file into table with corresponding columns ?

Posted: 01 Apr 2013 01:00 AM PDT

Hello. Since I'm not a programmer, hopefully there's a simple way... I need to load one column from an Excel file into a database table while matching on the corresponding columns:

1) I have 3 columns in the Excel file: server, account, and account description.
2) The database table has these same columns, but the account description is blank.
3) I need to populate the account description from the corresponding row in the Excel file.

I hope I'm clear. Many thanks.
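For illustration, a minimal sketch (all table and column names are hypothetical) of the usual pattern: import the spreadsheet into a staging table first (for example with the SSMS Import Wizard), then update the target by joining on the matching columns.

UPDATE t
SET    t.account_description = s.account_description
FROM   dbo.AccountTable AS t
       JOIN dbo.ExcelStaging AS s
         ON s.server_name = t.server_name
        AND s.account     = t.account;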

Can I strictly limit access to database ?

Posted: 31 Mar 2013 06:07 AM PDT

Hi, I'm new to SQL Server administration, but I need to make some major modifications to my SQL Server database. There are several applications from different places that access its tables. I've already asked their owners not to use the apps during the process, but I need to be sure nothing happens by mistake. So I want to know whether there is an option to block all outside access to the database during my work, except my own. Since I also develop a user interface in Visual Studio, I will need access through VS too (Visual Studio is installed on the SQL Server machine, so that will be local access). Thanks in advance for the help.
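For illustration, a minimal sketch of one common approach (the database name is hypothetical): put the database into RESTRICTED_USER mode, which allows only members of db_owner, dbcreator, and sysadmin to connect, and switch it back when the work is done.

ALTER DATABASE [MyDatabase] SET RESTRICTED_USER WITH ROLLBACK IMMEDIATE;
-- ... perform the modifications ...
ALTER DATABASE [MyDatabase] SET MULTI_USER;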

[SQL 2012] Deadlocks in SQL 2012


Deadlocks in SQL 2012

Posted: 31 Mar 2013 02:32 PM PDT

Hello guys, good dayI need your experts advice, I'm having a weird situation with a deadlock recurring issue, where there's a deadlock between two processes only that are actually blocking and waiting for the same resource, an index.I created a trace in profiler to track them out and this is the XML output of it:<deadlock-list> <deadlock victim="process44c8d6928"> <process-list> <process id="process44c8d6928" taskpriority="0" logused="0" waitresource="KEY: 5:72057595823259648 [/highlight](fa4d64ad8d28)" waittime="2972" ownerId="265573708" transactionname="__GET" lasttranstarted="2013-03-30T09:23:42.640" XDES="0x44c8fe3a8" lockMode="U" schedulerid="6" kpid="7080" status="suspended" spid="93" sbid="0" ecid="0" priority="0" trancount="1" lastbatchstarted="2013-03-30T09:23:42.640" lastbatchcompleted="2013-03-30T09:23:42.640" lastattention="1900-01-01T00:00:00.640" clientapp=".Net SqlClient Data Provider" hostname="OBCYRK1VPRD02" hostpid="4192" loginname="ACS_AETNA\obcprod" isolationlevel="read committed (2)" xactid="265573708" currentdb="5" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128056"> <executionStack> <frame procname="OBC.dbo.mcp_GETITEM_" line="34" stmtstart="1376" stmtend="2128" sqlhandle="0x030005007c2b7c1d9776fe0087a1000001000000000000000000000000000000000000000000000000000000">SELECT TOP 1 @ITE_NAME1 = I.ITE_NAME, @ITE_ID1 = I.ITE_ID, @ITE_DTPROCESS1=I.ITE_DTPROCESS, @APP_NAME1=I.APP_NAME, @FLG_NAME1=I.FLG_NAME FROM mcpITEMS I join mcpITEMTASKS T WITH (UPDLOCK) on T.ITE_ID = I.ITE_ID AND T.TAS_ID = @TAS_ID1 AND T.ITS_STATUS = 0 WHERE I.APP_NAME = @APP_NAME1 AND I.FLG_NAME = @FLG_NAME1 ORDER BY I.ITE_APRIORITY </frame> <frame procname="OBC.dbo.mcp_GETET_" line="85" stmtstart="5452" stmtend="5702" sqlhandle="0x0300050060bc4c213945c6007ba1000001000000000000000000000000000000000000000000000000000000">EXEC @RES=mcp_GETITEM_ @TAS_ID, @ITE_ID OUT, @APP_NAME OUT, @FLG_NAME OUT,@ITE_NAME OUT, @ITE_DTPROCESS OUT,@ERROR OUT </frame> <frame procname="OBC.dbo.mcp_GET" line="28" stmtstart="1374" stmtend="1780" sqlhandle="0x0300050028deed286176a6007aa1000001000000000000000000000000000000000000000000000000000000">EXEC @RES=mcp_GETET_ @ITT_NAME, @TAS_ID, @USE_LOGIN, @ITS_MACHINE, @ITS_INSTANCE, @ITE_NAME OUT,@APP_NAME OUT, @FLG_NAME OUT, @ITE_DTPROCESS OUT, @ERROR OUT </frame> </executionStack> <inputbuf>Proc [Database Id = 5 Object Id = 686677544] </inputbuf> </process> <process id="process44c8e3498" taskpriority="0" logused="0" waitresource="KEY: 5:72057595823259648 (7b2aa6cb912d)" waittime="2972" ownerId="265573667" transactionname="__GET" lasttranstarted="2013-03-30T09:23:42.480" XDES="0x4e4542eb8" lockMode="U" schedulerid="5" kpid="4000" status="suspended" spid="128" sbid="0" ecid="0" priority="0" trancount="1" lastbatchstarted="2013-03-30T09:23:42.480" lastbatchcompleted="2013-03-30T09:23:09.720" lastattention="1900-01-01T00:00:00.720" clientapp=".Net SqlClient Data Provider" hostname="OBCYRK1VPRD03" hostpid="352" loginname="ACS_AETNA\obcprod" isolationlevel="read committed (2)" xactid="265573667" currentdb="5" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128056"> <executionStack> <frame procname="OBC.dbo.mcp_GETITEM_" line="43" stmtstart="2268" stmtend="2896" sqlhandle="0x030005007c2b7c1d9776fe0087a1000001000000000000000000000000000000000000000000000000000000">SELECT TOP 1 @ITE_NAME1 = I.ITE_NAME, @ITE_ID1 = I.ITE_ID, @ITE_DTPROCESS1=I.ITE_DTPROCESS, @APP_NAME1=I.APP_NAME, @FLG_NAME1=I.FLG_NAME FROM mcpITEMS I join mcpITEMTASKS T WITH (UPDLOCK) ON 
T.ITE_ID = I.ITE_ID AND T.TAS_ID = @TAS_ID1 AND T.ITS_STATUS = 0 ORDER BY I.ITE_APRIORITY </frame> <frame procname="OBC.dbo.mcp_GETET_" line="85" stmtstart="5452" stmtend="5702" sqlhandle="0x0300050060bc4c213945c6007ba1000001000000000000000000000000000000000000000000000000000000">EXEC @RES=mcp_GETITEM_ @TAS_ID, @ITE_ID OUT, @APP_NAME OUT, @FLG_NAME OUT,@ITE_NAME OUT, @ITE_DTPROCESS OUT,@ERROR OUT </frame> <frame procname="OBC.dbo.mcp_GET" line="28" stmtstart="1374" stmtend="1780" sqlhandle="0x0300050028deed286176a6007aa1000001000000000000000000000000000000000000000000000000000000">EXEC @RES=mcp_GETET_ @ITT_NAME, @TAS_ID, @USE_LOGIN, @ITS_MACHINE, @ITS_INSTANCE, @ITE_NAME OUT,@APP_NAME OUT, @FLG_NAME OUT, @ITE_DTPROCESS OUT, @ERROR OUT </frame> </executionStack> <inputbuf>Proc [Database Id = 5 Object Id = 686677544] </inputbuf> </process> </process-list> <resource-list> <keylock hobtid="72057595823259648" dbid="5" objectname="OBC.dbo.mcpITEMTASKS" indexname="2" id="lock49f996980" mode="U" associatedObjectId="72057595823259648"> <owner-list> <owner id="process44c8e3498" mode="U"/> </owner-list> <waiter-list> <waiter id="process44c8d6928" mode="U" requestType="wait"/> </waiter-list> </keylock> <keylock hobtid="72057595823259648" dbid="5" objectname="OBC.dbo.mcpITEMTASKS" indexname="2" id="lock4e2f30480" mode="U" associatedObjectId="72057595823259648"> <owner-list> <owner id="process44c8d6928" mode="U"/> </owner-list> <waiter-list> <waiter id="process44c8e3498" mode="U" requestType="wait"/> </waiter-list> </keylock> </resource-list> </deadlock></deadlock-list>Both processes execute the same stored procedure, which executes (depending on the parameters) similar queries that read data from the table mcpItemTasks, as you may see the queries are very similar but one has where clause for parameters.Where I'm kind of lost is that both have the lock mode on "U" because of the updlock hint, but also own it and thus they create a deadlock when they request it... but not sure why they request if they already own it...The indexes are set to allow page and row locks (which is default right?) and the table to lock escalation as "TABLE" (but from sys.tables you can read "lock_escalation" = 0 and "lock_escalation_desc"= TABLE, which I believe is default too)All the help you can give me to help understand this deadlock will be very appreciated.

[T-SQL] Creating code objects within a single script


Creating code objects within a single script

Posted: 01 Apr 2013 01:11 AM PDT

I need to deliver a large number of code objects (views and functions, not stored procedures) to our customers via a website. Due to dependencies between these objects, we want the script to create them in a specific order. We also want to include error handling and success/failure reporting (as far as possible). Many of these objects' CREATE statements are lengthy and contain many literals and concatenations.

Given these "specifications", as well as the limitations on creating code objects and on error handling within T-SQL, I believe my choices are limited to 1) using dynamic SQL to create the objects, thereby allowing better error handling and "reporting", or 2) plain coding, thereby allowing much easier coding, readability, and maintenance.

If anyone has another method that might work given the "specs", or a clever work-around, please let me know.

Thanks, Jeff
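For illustration, a minimal sketch of option 1 (the view name and body are hypothetical): each CREATE is wrapped in dynamic SQL inside TRY/CATCH so the script can report success or failure per object and keep going.

BEGIN TRY
    EXEC sys.sp_executesql N'CREATE VIEW dbo.vw_Example AS SELECT 1 AS One;';
    PRINT 'dbo.vw_Example created successfully.';
END TRY
BEGIN CATCH
    PRINT 'dbo.vw_Example failed: ' + ERROR_MESSAGE();
END CATCH;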

Looking for a query to return rows in tableA that are inclusive of ID in tableB

Posted: 31 Mar 2013 11:52 PM PDT

I am looking for a query to get all records from tableA that exists in tableb inclusive.tableB contains the combination of ID that must all exists in table A in order to be returned. Query should return 1, 'Yes1' 2, 'Yes1'99, 'Yes1'2, 'Yes2'99, 'Yes2'30, 'Yes3'Would like to avoid dynamic queries if possible. CREATE TABLE [dbo].[tableA]( [A_ID] [int] NOT NULL, [A_text] varchar(6) NOT NULL) --these records should be returned by queryInsert tableA SELECT 1, 'Yes1'Insert tableA SELECT 2, 'Yes1'Insert tableA SELECT 99, 'Yes1'Insert tableA SELECT 2, 'Yes2'Insert tableA SELECT 99, 'Yes2'Insert tableA SELECT 30, 'Yes3'--query should not return these records because not all records are in Table BInsert tableA SELECT 1, 'No1'Insert tableA SELECT 99, 'No1'Insert tableASELECT 99, 'No2'--these have no records in tableB and should not be returnedInsert tableA SELECT 100, 'No3'Insert tableA SELECT 83, 'No4'Insert tableASELECT 19, 'No5'CREATE TABLE [dbo].[tableB]( [B_ID] [int] NOT NULL, [A_ID] [int] NOT NULL CONSTRAINT [PK_tableb] PRIMARY KEY CLUSTERED ( [B_ID] ASC, [A_ID] ASC ) ) insert tableBSelect 1, 1insert tableBSelect 1, 2insert tableBSelect 1, 99insert tableBSelect 2, 99insert tableBSelect 2, 2insert tableBSelect 3, 30insert tableBSelect 4, 30insert tableBSelect 4, 35insert tableBSelect 5, 31insert tableBSelect 5, 32insert tableBSelect 5, 33insert tableBSelect 5, 34insert tableBSelect 5, 35insert tableBSelect 6, 300insert tableBSelect 7, 300insert tableBSelect 7, 100
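For illustration, a sketch of the usual relational-division shape against the tables above (not exhaustively tested): for each B_ID group, keep the A_text values whose set of A_ID values covers every A_ID listed under that B_ID, then return all tableA rows carrying those values.

SELECT a.A_ID, a.A_text
FROM   dbo.tableA AS a
WHERE  a.A_text IN
(
    SELECT a2.A_text
    FROM   dbo.tableB AS b
           JOIN dbo.tableA AS a2
             ON a2.A_ID = b.A_ID
    GROUP BY b.B_ID, a2.A_text
    HAVING COUNT(DISTINCT a2.A_ID) =
           (SELECT COUNT(*) FROM dbo.tableB AS b2 WHERE b2.B_ID = b.B_ID)
);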

Need to unpivot a table

Posted: 31 Mar 2013 06:43 PM PDT

Hello, I have a table like this:

CREATE TABLE [temp](
    [Country] [varchar](200) NULL,
    [Country_Group] [varchar](200) NULL,
    [Year] [varchar](200) NULL,
    [R1_TV] [decimal](38, 2) NULL,
    [R2_TV] [decimal](38, 2) NULL,
    [R3_TV] [decimal](38, 2) NULL,
    [R1_Google] [decimal](38, 2) NULL,
    [R2_Google] [decimal](38, 2) NULL,
    [R3_Google] [decimal](38, 2) NULL,
    [R1_Yahoo] [decimal](38, 2) NULL,
    [R2_Yahoo] [decimal](38, 2) NULL) ON [PRIMARY]

and the data is the following:

INSERT [temp] ([Country], [Country_Group], [Year], [R1_TV], [R2_TV], [R3_TV], [R1_Google], [R2_Google], [R3_Google], [R1_Yahoo], [R2_Yahoo]) VALUES (N'Greece', N'Test111', N'2011', CAST(3304.24 AS Decimal(38, 2)), CAST(5.56 AS Decimal(38, 2)), CAST(30378.95 AS Decimal(38, 2)), CAST(568.23 AS Decimal(38, 2)), CAST(4.25 AS Decimal(38, 2)), CAST(56458.00 AS Decimal(38, 2)), CAST(2456.00 AS Decimal(38, 2)), CAST(6.20 AS Decimal(38, 2)))
INSERT [temp] ([Country], [Country_Group], [Year], [R1_TV], [R2_TV], [R3_TV], [R1_Google], [R2_Google], [R3_Google], [R1_Yahoo], [R2_Yahoo]) VALUES (N'Ukraine', N'Test111', N'2011', CAST(792.12 AS Decimal(38, 2)), CAST(6.11 AS Decimal(38, 2)), CAST(32391.28 AS Decimal(38, 2)), CAST(456.14 AS Decimal(38, 2)), CAST(6.69 AS Decimal(38, 2)), CAST(78562.00 AS Decimal(38, 2)), CAST(3652.00 AS Decimal(38, 2)), CAST(736.00 AS Decimal(38, 2)))
INSERT [temp] ([Country], [Country_Group], [Year], [R1_TV], [R2_TV], [R3_TV], [R1_Google], [R2_Google], [R3_Google], [R1_Yahoo], [R2_Yahoo]) VALUES (N'ZZZZZZZZZZ', N'Test111', N'2011', CAST(2007.91 AS Decimal(38, 2)), CAST(5.66 AS Decimal(38, 2)), CAST(62770.22 AS Decimal(38, 2)), CAST(785.25 AS Decimal(38, 2)), CAST(3.21 AS Decimal(38, 2)), CAST(35412.00 AS Decimal(38, 2)), CAST(5214.00 AS Decimal(38, 2)), CAST(3.12 AS Decimal(38, 2)))

Now I want to unpivot it like this:

CREATE TABLE [tt](
    [Country_Group] [varchar](200) NULL,
    [Country] [varchar](200) NULL,
    [Year] [varchar](200) NULL,
    [Media] [nvarchar](4000) NULL,
    [R1] [decimal](38, 2) NULL,
    [R2] [decimal](38, 2) NULL,
    [R3] [decimal](38, 2) NULL) ON [PRIMARY]

INSERT [tt] ([Country_Group], [Country], [Year], [Media], [R1], [R2], [R3]) VALUES (N'Test111', N'Ukraine', N'2011', N'TV', CAST(792.12 AS Decimal(38, 2)), CAST(6.11 AS Decimal(38, 2)), CAST(32391.28 AS Decimal(38, 2)))
INSERT [tt] ([Country_Group], [Country], [Year], [Media], [R1], [R2], [R3]) VALUES (N'Test111', N'Ukraine', N'2011', N'Google', CAST(456.14 AS Decimal(38, 2)), CAST(6.69 AS Decimal(38, 2)), CAST(78562.00 AS Decimal(38, 2)))
INSERT [tt] ([Country_Group], [Country], [Year], [Media], [R1], [R2], [R3]) VALUES (N'Test111', N'Ukraine', N'2011', N'Yahoo', CAST(3652.00 AS Decimal(38, 2)), CAST(736.00 AS Decimal(38, 2)), CAST(0.00 AS Decimal(38, 2)))
INSERT [tt] ([Country_Group], [Country], [Year], [Media], [R1], [R2], [R3]) VALUES (N'Test111', N'Greece', N'2011', N'TV', CAST(792.12 AS Decimal(38, 2)), CAST(6.11 AS Decimal(38, 2)), CAST(32391.28 AS Decimal(38, 2)))
INSERT [tt] ([Country_Group], [Country], [Year], [Media], [R1], [R2], [R3]) VALUES (N'Test111', N'Greece', N'2011', N'Google', CAST(456.14 AS Decimal(38, 2)), CAST(6.69 AS Decimal(38, 2)), CAST(78562.00 AS Decimal(38, 2)))
INSERT [tt] ([Country_Group], [Country], [Year], [Media], [R1], [R2], [R3]) VALUES (N'Test111', N'Greece', N'2011', N'Yahoo', CAST(3652.00 AS Decimal(38, 2)), CAST(736.00 AS Decimal(38, 2)), CAST(0.00 AS Decimal(38, 2)))
INSERT [tt] ([Country_Group], [Country], [Year], [Media], [R1], [R2], [R3]) VALUES (N'Test111', N'ZZZZZZZZZZ', N'2011', N'TV', CAST(792.12 AS Decimal(38, 2)), CAST(6.11 AS Decimal(38, 2)), CAST(32391.28 AS Decimal(38, 2)))
INSERT [tt] ([Country_Group], [Country], [Year], [Media], [R1], [R2], [R3]) VALUES (N'Test111', N'ZZZZZZZZZZ', N'2011', N'Google', CAST(456.14 AS Decimal(38, 2)), CAST(6.69 AS Decimal(38, 2)), CAST(78562.00 AS Decimal(38, 2)))
INSERT [tt] ([Country_Group], [Country], [Year], [Media], [R1], [R2], [R3]) VALUES (N'Test111', N'ZZZZZZZZZZ', N'2011', N'Yahoo', CAST(3652.00 AS Decimal(38, 2)), CAST(736.00 AS Decimal(38, 2)), CAST(0.00 AS Decimal(38, 2)))

Please help
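For reference, a minimal sketch of one way to get from [temp] to the [tt] shape on SQL Server 2008, using CROSS APPLY (VALUES ...) rather than UNPIVOT so the three measures per media channel stay together (the missing R3_Yahoo column is filled with 0 to match the target layout; each country keeps its own figures):

INSERT [tt] ([Country_Group], [Country], [Year], [Media], [R1], [R2], [R3])
SELECT t.[Country_Group], t.[Country], t.[Year], x.Media, x.R1, x.R2, x.R3
FROM [temp] AS t
CROSS APPLY (VALUES
    (N'TV',     t.R1_TV,     t.R2_TV,     t.R3_TV),
    (N'Google', t.R1_Google, t.R2_Google, t.R3_Google),
    (N'Yahoo',  t.R1_Yahoo,  t.R2_Yahoo,  CAST(0.00 AS decimal(38, 2)))
) AS x (Media, R1, R2, R3);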

[SQL Server 2008 issues] sql server readonly permission

[SQL Server 2008 issues] sql server readonly permission


sql server readonly permission

Posted: 31 Mar 2013 07:16 PM PDT

Which permission under SQL Server's Database Properties -> Permissions page in SSMS should I select to make the database read-only for a user?
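For context, a common alternative to picking individual permissions on that page is to use the fixed database roles; a sketch (ReportUser is a hypothetical database user name):

-- grant read access to all tables and views
EXEC sp_addrolemember N'db_datareader', N'ReportUser';
-- explicitly block INSERT/UPDATE/DELETE
EXEC sp_addrolemember N'db_denydatawriter', N'ReportUser';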

Transaction log in Subscriber database for Transactional Replication

Posted: 04 Mar 2013 08:31 AM PST

I have created replication between two SQL Server 2008 R2 databases and it has been functioning well since the initial setup 3 months ago. The Publisher database is the back-end for an OLTP application with 50 users who are working constantly, making changes and adding new information. The Subscriber database is used for reporting.

Today I noticed that the Subscriber database was set to the Full recovery model with no transaction log backups. I panicked for a minute and took a quick look at the transaction log drive on the Subscriber server - it has 50% free space, 24 GB. I was amazed that I had been avoiding what would normally cause a drive-full issue.

My question is this. Since replication is constantly making changes to the Subscriber database (updating various tables as they change in the Publisher database), why is the Subscriber log file not filling up and giving me grief, given that I don't run backups on the Subscriber database - and haven't done for the entire three months this has been running? Gail Shaw mentioned in another forum question, while I was researching this, that 'Replication does not in any way use the Subscriber's log file'. I'm not sure that I understand the seeming contradiction between the updating going on in the Subscriber database, Full recovery mode, no transaction log backups taking place, and the transaction log not filling up.

Can anyone enlighten me and/or point me to articles that might help me understand this more thoroughly?

Thank you. Willem
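A quick way to see whether anything is actually holding the log - a sketch, with a hypothetical subscriber database name - is to check the log-reuse wait reason and whether a full backup has ever been taken (the log only starts accumulating for LOG_BACKUP once the full-backup chain has been started):

SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'SubscriberDB';          -- hypothetical name

SELECT TOP (1) database_name, type, backup_finish_date
FROM msdb.dbo.backupset
WHERE database_name = N'SubscriberDB'  -- hypothetical name
ORDER BY backup_finish_date DESC;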

Publishing and accessing SQL Server 2008 Reports

Posted: 31 Mar 2013 12:20 PM PDT

I'm new to SQL Server databases and I need help. I'm trying to publish SQL reports for users to access from the web. We have a SQL Server 2008 database with data, and I have installed a separate SQL Server 2008 with Reporting Services on my Windows XP machine. I have created a data source from the remote SQL Server 2008 instance installed on a Windows Server, and published the reports.

I can access the report manager through the URL (//localhost:8080/ReportServer) in the browser on my XP machine. However, when I type the same URL from another computer, I get a "page cannot be displayed" message. What is going on? Does the Report Manager URL need to be configured on the SQL Server that has the database instance? Please help. Thanks.

To make myself clear, what I'm trying to accomplish is to create and publish reports for users to access from their workstations. Any help will be appreciated.

split fixed width row into multiple rows in SSIS

Posted: 31 Mar 2013 11:13 AM PDT

I have a fixed-width flat file that needs to be loaded into multiple Oracle tables (one row needs to be split into multiple rows). The numbers on top of each column are their widths, and my desired output should look as shown below.

Flat file data (fixed width):
3 6 3 11 3 10 3 10 3
ID NAME AGE CTY1 ST1 CTY2 ST2 CTY3 ST3
200JOHN 46 LOSANGELES CA HOUSTON TX CHARLOTTE NC
201TIMBER54 PHOENIX AZ CHICAGO IL
202DAVID 32 ATLANTA GA PORTLAND AZ

The number of city/state occurrences may vary - it can grow up to 20-30.

DESIRED OUTPUT:

TABLE1
ID NAME AGE
200JOHN 46
201TIMBER54
202DAVID 32

TABLE2
ID SEQ CTY ST
200 1 LOSANGELES CA
200 2 HOUSTON TX
200 3 CHARLOTTE NC
201 1 PHOENIX AZ
201 2 CHICAGO IL
202 1 ATLANTA GA
202 2 PORTLAND AZ

Can someone help me out? Thanks
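One possible approach, shown only to illustrate the shape of the transformation: if the parsed file can first be landed in a staging table (Stage_Person is a hypothetical name with one column per fixed-width field), the repeating city/state pairs can be unpivoted into rows and the empty ones filtered out. The same idea can be expressed with an SSIS Unpivot transformation or in Oracle SQL; the T-SQL sketch below assumes SQL Server staging.

SELECT s.ID,
       x.SEQ,
       x.CTY,
       x.ST
FROM dbo.Stage_Person AS s
CROSS APPLY (VALUES
    (1, s.CTY1, s.ST1),
    (2, s.CTY2, s.ST2),
    (3, s.CTY3, s.ST3)          -- extend this list for 20-30 occurrences
) AS x (SEQ, CTY, ST)
WHERE NULLIF(LTRIM(RTRIM(x.CTY)), '') IS NOT NULL;   -- drop the empty occurrences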

TRIM and Date Format question

Posted: 31 Mar 2013 05:25 AM PDT

I have tried using the TRIM function in the query below to remove a leading zero in the Department Code (vEPayrollBase.[PrbPayDept] AS [Department Code]). Does anyone know what the correct syntax is? Also, how would I modify Date of Birth to format as mm/dd/yyyy instead of the default? Thanks.

SELECT vEPerson.[EpFirstName] AS [First Name],
       vEPerson.[EpMiddleName] AS [Middle Name],
       vEPerson.[EpLastName] AS [Last Name],
       vEPerson.[EpEmail] AS [E-mail address],
       vEJob.[EjTitle] AS [Job Title],
       vEbase.[EbClock] AS [Employee ID],
       vEPayrollBase.[PrbPayDiv] AS [Location Code],
       vEPayrollBase.[PrbPayDept] AS [Department Code],
       (CASE WHEN [EpCountry] = 'USA' THEN 'US' ELSE [EpCountry] END) AS [Country],
       (CASE WHEN [EeStatus] = 'Active' THEN 'A' ELSE [EeStatus] END) AS [Status Indicator],
       vEPerson.[EpDateBorn] AS [Date Of Birth],
       vEPerson.[EpSex] AS [Gender],
       vEPerson.[EpStreet1] AS [Home address],
       vEPerson.[EpStreet2] AS [Home address 2],
       vEPerson.[EpCity] AS [City],
       vEPerson.[EpState] AS [State],
       vEPerson.[EpZip] AS [Zip],
       vEPerson.[EpHomePhone] AS [Home Phone],
       vEJob.[EjWorkPhone] AS [Work Phone],
       vEPerson.[EpCellPhone] AS [Mobile Phone ],
       vEPayrollBase.[PrbDateOriginalHire] AS [Original Hire Date]
FROM vEPerson
     INNER JOIN vEbase ON vEbase.[EbFlxID] = vEPerson.[EpFlxIDEb]
     INNER JOIN vEJob ON vEbase.[EbFlxID] = vEJob.[EjFlxIDEb]
     INNER JOIN vEPayrollBase ON vEbase.[EbFlxID] = vEPayrollBase.[PrbFlxIDEb]
     INNER JOIN vEEmploy ON vEbase.[EbFlxID] = vEEmploy.[EeFlxIDEb]
WHERE (((((vEPerson.[EpDateBeg] <= '2020-06-20 23:59:59' AND (vEPerson.[EpDateEnd] >= '2020-06-20 00:00:00' OR vEPerson.[EpDateEnd] IS NULL)))
    AND (((vEJob.[EjDateBeg] <= '2020-06-20 23:59:59' AND (vEJob.[EjDateEnd] >= '2020-06-20 00:00:00' OR vEJob.[EjDateEnd] IS NULL)) AND vEJob.[EjPrimary] = 'P')))
    AND (vEbase.[EbArchive] <> 'Y' AND vEbase.[EbFlagEmp] = 'Y'))
    AND (vEPayrollBase.[PrbDateBeg] <= '2020-06-20 23:59:59' AND (vEPayrollBase.[PrbDateEnd] >= '2020-06-20 00:00:00' OR vEPayrollBase.[PrbDateEnd] IS NULL)))
    AND (vEEmploy.[EeDateBeg] <= '2020-06-20 23:59:59' AND (vEEmploy.[EeDateEnd] >= '2020-06-20 00:00:00' OR vEEmploy.[EeDateEnd] IS NULL))
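For reference, a sketch of two expressions that may fit here - SQL Server 2008 has no TRIM function (only LTRIM/RTRIM, which strip spaces, not zeros), and this assumes [PrbPayDept] holds a numeric code stored as text and [EpDateBorn] is a date/datetime column:

-- strip leading zeros by round-tripping through int (assumes the department code is purely numeric)
CAST(CAST(vEPayrollBase.[PrbPayDept] AS int) AS varchar(10)) AS [Department Code],

-- mm/dd/yyyy via CONVERT style 101
CONVERT(varchar(10), vEPerson.[EpDateBorn], 101) AS [Date Of Birth],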

Newbie (Know-nothing-bie:) Needs Urgent Assistance with SQLExpress

Posted: 31 Mar 2013 02:28 AM PDT

I have very little experience with SQL other than using it. I am a hardware tech, but I do have some knowledge of when something isn't working right and it's not due to hardware.

One of my oldest clients had a problem with a dead hard drive. Unfortunately, replacing the drive and reinstalling the OS was the least of their problems. They have a business that has gone through several downsizings, and currently there are only 2 people who need to access a piece of software they use to schedule appointments. This is a very OLD piece of software they have used for years. The company who originally sold the program is also gone; none of the tech support or sales phone numbers connect. They must have sold the "name" of the software, as I can find the place that now sells it, but they offer no support for it in its original form.

The last time I was involved with it was about 4 years ago, when they had to replace their Windows XP system, which was used as a "peer to peer" server for the data files. I remember talking to a tech at the company on the phone who guided me through getting the MDF/LDFs off of XP and onto Windows 7. I also remember it was not easy, but once done it worked as before.

The current problem was due to a hard drive failure on the 4-year-old Windows 7 system. They have many years of backup files, but it is only the "data" in a ".bak" format. They still have an installation program to create the client end. But I remember four years ago that creating the SQL instance was a "do it yourself" thing. I still have the original password used at the time, but it is of little use. Every attempt I have made to link the database using the supplied "DB connect utility" fails.

I am not sure if the problem is something I am doing or if I need to be using a different version of SQLEXPRESS, or maybe some small thing I have missed. I have tried various versions of SQL 2008 (and even 2005). I thought about trying 2012, but after spending so much time getting nowhere, I think I need some advice about what I am doing wrong.

If anyone here can point me the right way I would appreciate it. Better yet, I wouldn't mind paying someone to provide remote support for the SQL installation. Without this program they are back to pen and paper, and the quotes for a replacement program are not in their budget.
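For what it's worth, since the data survives as .bak files, a typical first step (a sketch only - the backup file, database name, logical file names, and paths below are all hypothetical and would need to come from the actual backup) is to restore the most recent backup into the new SQL Express instance and then point the application's "DB connect utility" at the restored database:

-- discover the logical file names inside the backup
RESTORE FILELISTONLY FROM DISK = N'C:\Backups\Scheduler.bak';

-- restore, relocating the data and log files to the new drive
RESTORE DATABASE Scheduler
FROM DISK = N'C:\Backups\Scheduler.bak'
WITH MOVE N'Scheduler_Data' TO N'C:\SQLData\Scheduler.mdf',
     MOVE N'Scheduler_Log'  TO N'C:\SQLData\Scheduler_log.ldf',
     RECOVERY;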
