Monday, March 25, 2013

[how to] Speed efficient query for membership first joined, latest category from membership table (min, max)

Speed efficient query for membership first joined, latest category from membership table (min, max)

Posted: 25 Mar 2013 06:57 PM PDT

I have the following table representing membership information:

    CREATE TABLE IF NOT EXISTS `membership` (
      `id` int(11) NOT NULL AUTO_INCREMENT,
      `organisation_id` int(11) NOT NULL,
      `membership_subcategory_id` int(11) NOT NULL,
      `start` datetime DEFAULT NULL,
      `end` datetime DEFAULT NULL,
      `amount` decimal(9,2) DEFAULT NULL,
      `amount_paid` decimal(9,2) DEFAULT NULL,
      `notes` mediumtext,
      `order_id` int(11) DEFAULT NULL,
      `payment_type` varchar(20) NOT NULL,
      `active` tinyint(4) NOT NULL DEFAULT '1',
      `cancelled` tinyint(4) NOT NULL DEFAULT '0',
      `cancelled_date` datetime DEFAULT NULL,
      `cancellation_reason` mediumtext,
      `certificate_sent` date DEFAULT NULL,
      `welcome_email_sent` date DEFAULT NULL,
      PRIMARY KEY (`id`),
      UNIQUE KEY `id` (`id`),
      UNIQUE KEY `order_id_2` (`order_id`,`start`,`end`,`organisation_id`),
      KEY `membership_subcategory_id_idx` (`membership_subcategory_id`),
      KEY `organisation_id_idx` (`organisation_id`),
      KEY `order_id` (`order_id`)
    )
  • organisation_id identifies a member
  • the membership year runs from 1 Jul to 30 Jun; start records when the membership in each year started - this may be anywhere in the first year, but after that it's always 1 Jul unless a year is skipped
  • membership_subcategory_id is an industry category the membership applies to for each year.

I need an efficient query to get the date joined and latest membership category.

I've tried this query, but I get "Invalid use of group function" as an error

SELECT m.organisation_id, m2.membership_subcategory_id, MIN( m.start )
FROM membership m
INNER JOIN membership m2 ON m.organisation_id = m2.organisation_id
WHERE MAX( m.start ) = m2.start
GROUP BY m.organisation_id, m2.membership_subcategory_id
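Note: the error arises because an aggregate such as MAX() cannot appear in a WHERE clause. A common rewrite - a sketch only, untested against this schema - computes each organisation's first and latest start in a derived table and joins back to pick up the category of the latest row:

    SELECT agg.organisation_id,
           m2.membership_subcategory_id,
           agg.first_joined
    FROM (
        SELECT organisation_id,
               MIN(start) AS first_joined,
               MAX(start) AS latest_start
        FROM membership
        GROUP BY organisation_id
    ) agg
    INNER JOIN membership m2
        ON  m2.organisation_id = agg.organisation_id
        AND m2.start = agg.latest_start

(If two rows share the latest start, both categories will be returned.)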

For sample mysql runtime, is this performant or should I have concerns?

Posted: 25 Mar 2013 06:33 PM PDT

For the sample MySQL runtime shown, is this performant, or should I have concerns? If so, what are the key concerns? Please note that persistent connections are set to TRUE for my application, which I believe impacts "Aborted". Please advise on what steps I can take to rectify the issue.
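Note: the counters behind "Aborted" can be inspected directly - a sketch using standard MySQL status and system variables:

    SHOW GLOBAL STATUS LIKE 'Aborted%';        -- Aborted_clients, Aborted_connects
    SHOW GLOBAL VARIABLES LIKE 'wait_timeout'; -- idle connections past this are aborted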

MySQL my.cnf won't take any effect

Posted: 25 Mar 2013 08:32 PM PDT

I'm experiencing issues using MySQL on Linux server.

I set some timeout and character-set options in my.cnf, but they don't take effect.
My goal is to set the default character set to utf8 everywhere and to set wait_timeout and/or interactive_timeout to 30 seconds.

My environment

Linux CentOS 5.x
MySQL Server 5.6
MySQL client

I installed both MySQL Server and client as root using RPM. I start MySQL server with

mysqld -u root

I stop MySQL server with

service mysql stop  

The reason I use mysqld -u root is that service mysql start often fails like this:

[root@kserver145-208 ~]# service mysql stop
Shutting down MySQL....                                    [  OK  ]
[root@kserver145-208 ~]# service mysql start
Starting MySQL...The server quit without updating PID file (/var/lib/mysql/kserver145-208.pid). [실패]

Here is what mysql says from the command line.

select @@session.wait_timeout, @@global.wait_timeout;
+------------------------+-----------------------+
| @@session.wait_timeout | @@global.wait_timeout |
+------------------------+-----------------------+
|                  28800 |                 28800 |
+------------------------+-----------------------+
1 row in set (0.00 sec)

select @@session.character_set_database, @@global.character_set_database;
+----------------------------------+---------------------------------+
| @@session.character_set_database | @@global.character_set_database |
+----------------------------------+---------------------------------+
| latin1                           | latin1                          |
+----------------------------------+---------------------------------+
1 row in set (0.00 sec)

But I previously set my.cnf as below. Sorry, it's a bit long. I could have cut just the parts that count, but I don't know what might be wrong with the way I set options in this file, so I'll show everything to you here.

# For advice on how to change settings please see
# http://dev.mysql.com/doc/refman/5.6/en/server-configuration-defaults.html

[mysqld]

# Remove leading # and set to the amount of RAM for the most important data
# cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
# innodb_buffer_pool_size = 128M

# Remove leading # to turn on a very important data integrity option: logging
# changes to the binary log between backups.
# log_bin

# These are commonly set, remove the # and set as required.
# basedir = .....
# datadir = .....
# port = .....
# server_id = .....
# socket = .....

# Remove leading # to set options mainly useful for reporting servers.
# The server defaults are faster for transactions and fast SELECTs.
# Adjust sizes as needed, experiment to find the optimal values.
# join_buffer_size = 128M
# sort_buffer_size = 2M
# read_rnd_buffer_size = 2M

sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
wait_timeout=30
interactive_timeout=30
character-set-server=utf8
collation-server=utf8_general_ci
character-set-database=utf8

#init_connect=SET collation_connection =utf8_general_ci
#init_connect=SET NAMES utf8
#init_connect=SET character_set_database = utf8

[client]
character-set-database=utf8
character-set-server=utf8
wait_timeout=30
interactive_timeout=30

[mysqldump]
#default-character-set=utf8

[mysql]
wait_timeout=30
interactive_timeout=30
character-set-database=utf8
character-set-server=utf8

I've been banging my head against this wall for over 72 hours. What have I done wrong?

PS: Here is what my kserver145-208.err says. I hope it helps in understanding what's wrong.

130326 09:51:47 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
2013-03-26 09:51:47 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2013-03-26 09:51:47 13587 [Note] Plugin 'FEDERATED' is disabled.
/usr/sbin/mysqld: Can't find file: './mysql/plugin.frm' (errno: 13 - Permission denied)
2013-03-26 09:51:47 13587 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
2013-03-26 09:51:47 13587 [Note] InnoDB: The InnoDB memory heap is disabled
2013-03-26 09:51:47 13587 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2013-03-26 09:51:47 13587 [Note] InnoDB: Compressed tables use zlib 1.2.3
2013-03-26 09:51:47 13587 [Note] InnoDB: CPU does not support crc32 instructions
2013-03-26 09:51:47 13587 [Note] InnoDB: Using Linux native AIO
2013-03-26 09:51:47 13587 [Note] InnoDB: Initializing buffer pool, size = 128.0M
2013-03-26 09:51:47 13587 [Note] InnoDB: Completed initialization of buffer pool
2013-03-26 09:51:47 13587 [Note] InnoDB: Highest supported file format is Barracuda.
2013-03-26 09:51:47 2ac0d44d9590  InnoDB: Operating system error number 13 in a file operation.
InnoDB: The error means mysqld does not have the access rights to
InnoDB: the directory.
2013-03-26 09:51:47 13587 [ERROR] InnoDB: Could not find a valid tablespace file for 'yoursmart/S_MEM_POINT'. See http://dev.mysql.com/doc/refman/5.6/en/innodb-troubleshooting-datadict.html for how to resolve the issue.
2013-03-26 09:51:47 13587 [ERROR] InnoDB: Tablespace open failed for '"yoursmart"."S_MEM_POINT"', ignored.
2013-03-26 09:51:47 13587 [Note] InnoDB: 128 rollback segment(s) are active.
2013-03-26 09:51:47 13587 [Note] InnoDB: Waiting for purge to start
2013-03-26 09:51:47 13587 [Note] InnoDB: 1.2.10 started; log sequence number 55877336
2013-03-26 09:51:47 13587 [ERROR] /usr/sbin/mysqld: unknown variable 'character-set-database=utf8'
2013-03-26 09:51:47 13587 [ERROR] Aborting

2013-03-26 09:51:47 13587 [Note] Binlog end
2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'partition'
2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'BLACKHOLE'
2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'PERFORMANCE_SCHEMA'
2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'ARCHIVE'
2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_SYS_DATAFILES'
2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_SYS_TABLESPACES'
2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN_COLS'
2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN'
2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_SYS_FIELDS'
2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_SYS_COLUMNS'
2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_SYS_INDEXES'
2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_SYS_TABLESTATS'
2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_SYS_TABLES'
2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_FT_INDEX_TABLE'
2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_FT_INDEX_CACHE'
2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_FT_CONFIG'
2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_FT_BEING_DELETED'
2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_FT_DELETED'
2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_FT_INSERTED'
2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_FT_DEFAULT_STOPWORD'
2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_METRICS'
2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_BUFFER_POOL_STATS'
2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE_LRU'
2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE'
2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX_RESET'
2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX'
2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_CMPMEM_RESET'
2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_CMPMEM'
2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_CMP_RESET'
2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_CMP'
2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_LOCK_WAITS'
2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_LOCKS'
2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'INNODB_TRX'
2013-03-26 09:51:47 13587 [Note] Shutting down plugin 'InnoDB'
2013-03-26 09:51:47 13587 [Note] InnoDB: FTS optimize thread exiting.
2013-03-26 09:51:47 13587 [Note] InnoDB: Starting shutdown...
2013-03-26 09:51:49 13587 [Note] InnoDB: Shutdown completed; log sequence number 55877346
2013-03-26 09:51:49 13587 [Note] Shutting down plugin 'MRG_MYISAM'
2013-03-26 09:51:49 13587 [Note] Shutting down plugin 'MEMORY'
2013-03-26 09:51:49 13587 [Note] Shutting down plugin 'CSV'
2013-03-26 09:51:49 13587 [Note] Shutting down plugin 'MyISAM'
2013-03-26 09:51:49 13587 [Note] Shutting down plugin 'sha256_password'
2013-03-26 09:51:49 13587 [Note] Shutting down plugin 'mysql_old_password'
2013-03-26 09:51:49 13587 [Note] Shutting down plugin 'mysql_native_password'
2013-03-26 09:51:49 13587 [Note] Shutting down plugin 'binlog'
2013-03-26 09:51:49 13587 [Note] /usr/sbin/mysqld: Shutdown complete

130326 09:51:49 mysqld_safe mysqld from pid file /var/lib/mysql/kserver145-208.pid ended

EDIT: I changed my.cnf to replace character-set-database with character_set_database, along with the other variables in my.cnf, as Phill suggested. However, kserver***.err still says

2013-03-26 10:14:32 14300 [ERROR] /usr/sbin/mysqld: unknown variable 'character_set_database=utf8'  

Also

mysql> select @@session.wait_timeout, @@global.wait_timeout;
+------------------------+-----------------------+
| @@session.wait_timeout | @@global.wait_timeout |
+------------------------+-----------------------+
|                  28800 |                 28800 |
+------------------------+-----------------------+
1 row in set (0.00 sec)

mysql> select @@session.character_set_database, @@global.character_set_database;
+----------------------------------+---------------------------------+
| @@session.character_set_database | @@global.character_set_database |
+----------------------------------+---------------------------------+
| latin1                           | latin1                          |
+----------------------------------+---------------------------------+
1 row in set (0.01 sec)

EDIT2

I have modified my.cnf as rolaldo suggested. kserver***.err does not show those errors anymore, so it's more promising.
But when I issue this command

mysql> show create database yoursmart;
+-----------+--------------------------------------------------------------------+
| Database  | Create Database                                                    |
+-----------+--------------------------------------------------------------------+
| yoursmart | CREATE DATABASE `yoursmart` /*!40100 DEFAULT CHARACTER SET utf8 */ |
+-----------+--------------------------------------------------------------------+
1 row in set (0.00 sec)

mysql> select @@session.character_set_server, @@global.character_set_server;
+--------------------------------+-------------------------------+
| @@session.character_set_server | @@global.character_set_server |
+--------------------------------+-------------------------------+
| latin1                         | latin1                        |
+--------------------------------+-------------------------------+
1 row in set (0.00 sec)

It still says character_set_server is latin1. I don't understand... Maybe this is normal?
The database contains Korean characters and I'm not able to do LIKE searches with Korean words.
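Note: a minimal fragment that avoids the unknown-variable abort - a sketch based on the error log above: character-set-database is not a recognised mysqld startup option (character_set_database is a derived server variable), and the client sections take client-side options such as default-character-set rather than server timeouts:

    [mysqld]
    character-set-server = utf8
    collation-server     = utf8_general_ci
    wait_timeout         = 30
    interactive_timeout  = 30

    [client]
    default-character-set = utf8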

Concatenation of setof type or setof record

Posted: 25 Mar 2013 05:47 PM PDT

I use Postgresql 9.1 with Ubuntu 12.04.

In a plpgsql function I try to concatenate setof type returned from another function.

The type pair_id_value in question is created with:

    create type pair_id_value as (id bigint, value integer);

The function that returns the elementary setof pair_id_value (the results that will be concatenated later) is this one:

create or replace function compute_pair_id_value(id bigint, value integer)
    returns setof pair_id_value
as $$
    listResults = []
    for x in range(0,value+1):
        listResults.append({ "id": id, "value": x})
    return listResults
$$
language plpython3u;

This straightforward plpython code works as expected; for example, the query select * from compute_pair_id_value(1712437,2); nicely returns:

      id       | value
---------------+-------
       1712437 |     0
       1712437 |     1
       1712437 |     2
(3 rows)

This python function is fairly simple for now - it's just for this example and, above all, for my proof of concept. It will be more complex in the near future.


The problem arises when I try to concatenate all the result tables from multiples id.

create or replace function compute_all_pair_id_value(id_obj bigint)
    returns setof pair_id_value as $$
declare
    pair pair_id_value;
begin
    for pair in (select compute_pair_id_value(t.id, t.obj_value) from my_obj as t where t.id = id_obj)
    loop
        return next pair;
    end loop;
    return;
end; $$ language plpgsql;

I receive the error: invalid input syntax for integer "(1712437,0)" as if it is no longer seen as a pair_id_value with two columns but as a tuple (1712437,0).

So I changed the output type of the function from setof pair_id_value to setof record... and if I execute this similar concatenation function:

create or replace function compute_all_pair_id_value(id_obj bigint)
    returns setof record as $$
declare
    pair record;
begin
    for pair in (select compute_pair_id_value(t.id, t.obj_value) from my_obj as t where t.id = id_obj)
    loop
        return next pair;
    end loop;
    return;
end; $$ language plpgsql;

I get the error: a column definition list is required for functions returning "record"

Trying to follow the answer to this SO question, I have tried defining the column definition list in the select this way: select compute_pair_id_value(t.id, t.obj_value) as f(id bigint, value integer). The complete code is here:

create or replace function compute_all_pair_id_value(id_obj bigint)
    returns setof record as $$
declare
    pair record;
begin
    for pair in (select compute_pair_id_value(t.id, t.obj_value) as f(id bigint, value integer) from my_obj as t where t.id = id_obj)
    loop
        return next pair;
    end loop;
    return;
end; $$ language plpgsql;

But when launching the sql script, psql refuses to create the function: syntax error at or near "(" select compute_pair_id_value(t.id, t.obj_value) as f(id bigint, value integer) ... pointing the finger at the f(

Any idea how to do it properly?

Should I consider creating a temporary table to do the job?
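Note: one workaround - a sketch, assuming PostgreSQL 9.1, where LATERAL is not yet available. In the FOR loop the function result arrives as a single composite column rather than two columns, which is what triggers the invalid input syntax error; expanding the composite with (...).* and returning the rows directly sidesteps the problem:

    create or replace function compute_all_pair_id_value(id_obj bigint)
        returns setof pair_id_value as $$
    begin
        return query
            select (compute_pair_id_value(t.id, t.obj_value)).*
            from my_obj as t
            where t.id = id_obj;
    end;
    $$ language plpgsql;

(Caveat: on older PostgreSQL versions the (...).* expansion may evaluate the function once per output column.)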

Why Am I Getting Deadlock - Please Read Trace

Posted: 25 Mar 2013 07:03 PM PDT

Can anyone help me? I'm getting a deadlock on this trace. I'm using SymmetricDS and I can't understand why I'm getting a deadlock on a certain table.

I'm using SQL Server 2008 R2 and read_committed_snapshot is turned on. The table has one clustered index, on node_id.

I don't know why I'm getting deadlock, can anyone explain the trace file?

waiter id=process1f53674c8 mode=U requestType=wait
waiter-list
owner id=process2200f5948 mode=U
owner-list
keylock hobtid=72057594175946752 dbid=19 objectname=tester-prd-tester-main.dbo.sym_node indexname=PK__sym_node__5F19EF1676C4B313 id=lock1a3f14e80 mode=U associatedObjectId=72057594175946752
waiter id=process2200f5948 mode=U requestType=wait
waiter-list
owner id=process1f53674c8 mode=X
owner-list
keylock hobtid=72057594175946752 dbid=19 objectname=tester-prd-tester-main.dbo.sym_node indexname=PK__sym_node__5F19EF1676C4B313 id=lock250b1cb00 mode=X associatedObjectId=72057594175946752
resource-list
(@P0 nvarchar(4000)@P1 nvarchar(4000)@P2 nvarchar(4000)@P3 int@P4 nvarchar(4000)@P5 nvarchar(4000)@P6 nvarchar(4000)@P7 nvarchar(4000)@P8 nvarchar(4000)@P9 datetime@P10 nvarchar(4000)@P11 int@P12 int@P13 nvarchar(4000)@P14 nvarchar(4000))update dbo.sym_node set "node_id" = @P0 "node_group_id" = @P1 "external_id" = @P2 "sync_enabled" = @P3 "sync_url" = @P4 "schema_version" = @P5 "symmetric_version" = @P6 "database_type" = @P7 "database_version" = @P8 "heartbeat_time" = @P9 "timezone_offset" = @P10 "batch_to_send_count" = @P11 "batch_in_error_count" = @P12 "created_at_node_id" = @P13 where "node_id" = @P14
inputbuf
update dbo.sym_node set "node_id" = @P0 "node_group_id" = @P1 "external_id" = @P2 "sync_enabled" = @P3 "sync_url" = @P4 "schema_version" = @P5 "symmetric_version" = @P6 "database_type" = @P7 "database_version" = @P8 "heartbeat_time" = @P9 "timezone_offset" = @P10 "batch_to_send_count" = @P11 "batch_in_error_count" = @P12 "created_at_node_id" = @P13 where "node_id" = @P14
frame procname=adhoc line=1 stmtstart=504 sqlhandle=0x020000007430040fc90acec8b230d13e24f8efc99977f395
executionStack
process id=process1f53674c8 taskpriority=0 logused=1488 waitresource=KEY: 19:72057594175946752 (fb992e63ce7c) waittime=4927 ownerId=5484183322 transactionname=implicit_transaction lasttranstarted=2012-11-28T19:47:10.653 XDES=0xb08a83b0 lockMode=U schedulerid=11 kpid=7984 status=suspended spid=3683 sbid=0 ecid=0 priority=0 trancount=2 lastbatchstarted=2012-11-28T19:47:11.647 lastbatchcompleted=2012-11-28T19:47:11.647 clientapp=jTDS hostname=clienttesterAPP hostpid=123 loginname=tester-PRD-tester-MAIN isolationlevel=read committed (2) xactid=5484183322 currentdb=19 lockTimeout=4294967295 clientoption1=671088672 clientoption2=128058
(@P0 nvarchar(4000)@P1 nvarchar(4000)@P2 nvarchar(4000)@P3 int@P4 nvarchar(4000)@P5 nvarchar(4000)@P6 nvarchar(4000)@P7 nvarchar(4000)@P8 nvarchar(4000)@P9 datetime@P10 nvarchar(4000)@P11 int@P12 int@P13 nvarchar(4000)@P14 nvarchar(4000))update dbo.sym_node set "node_id" = @P0 "node_group_id" = @P1 "external_id" = @P2 "sync_enabled" = @P3 "sync_url" = @P4 "schema_version" = @P5 "symmetric_version" = @P6 "database_type" = @P7 "database_version" = @P8 "heartbeat_time" = @P9 "timezone_offset" = @P10 "batch_to_send_count" = @P11 "batch_in_error_count" = @P12 "created_at_node_id" = @P13 where "node_id" = @P14
inputbuf
update dbo.sym_node set "node_id" = @P0 "node_group_id" = @P1 "external_id" = @P2 "sync_enabled" = @P3 "sync_url" = @P4 "schema_version" = @P5 "symmetric_version" = @P6 "database_type" = @P7 "database_version" = @P8 "heartbeat_time" = @P9 "timezone_offset" = @P10 "batch_to_send_count" = @P11 "batch_in_error_count" = @P12 "created_at_node_id" = @P13 where "node_id" = @P14
frame procname=adhoc line=1 stmtstart=504 sqlhandle=0x020000007430040fc90acec8b230d13e24f8efc99977f395
executionStack
process id=process2200f5948 taskpriority=0 logused=488 waitresource=KEY: 19:72057594175946752 (303c5a46f094) waittime=4972 ownerId=5484183543 transactionname=implicit_transaction lasttranstarted=2012-11-28T19:47:11.230 XDES=0x283e0b950 lockMode=U schedulerid=22 kpid=6320 status=suspended spid=3581 sbid=0 ecid=0 priority=0 trancount=2 lastbatchstarted=2012-11-28T19:47:11.543 lastbatchcompleted=2012-11-28T19:47:11.543 clientapp=jTDS hostname=clienttesterAPP hostpid=123 loginname=tester-PRD-tester-MAIN isolationlevel=read committed (2) xactid=5484183543 currentdb=19 lockTimeout=4294967295 clientoption1=671088672 clientoption2=128058
process-list
deadlock victim=process2200f5948
deadlock-list
ResType:LockOwner Stype:'OR'Xdes:0x0000000283E0B950 Mode: U SPID:3581 BatchID:0 ECID:0 TaskProxy:(0x00000000EE40C538) Value:0xdc87ce80 Cost:(0/488)
Victim Resource Owner:
Log Viewer could not read information for this log entry. Cause: Data is Null. This method or property cannot be called on Null values. Content:
ResType:LockOwner Stype:'OR'Xdes:0x00000000B08A83B0 Mode: U SPID:3683 BatchID:0 ECID:0 TaskProxy:(0x00000002E1E62538) Value:0x5ae0b9c0 Cost:(0/1488)
Requested by:
Input Buf: Language Event: (@P0 nvarchar(4000)@P1 nvarchar(4000)@P2 nvarchar(4000)@P3 int@P4 nvarchar(4000)@P5 nvarchar(4000)@P6 nvarchar(4000)@P7 nvarchar(4000)@P8 nvarchar(4000)@P9 datetime@P10 nvarchar(4000)@P11 int@P12 int@P13 nvarchar(4000)@P14 nvarchar(4000))upd
SPID: 3581 ECID: 0 Statement Type: UPDATE Line #: 1
Owner:0x000000026872AC80 Mode: U Flg:0x40 Ref:0 Life:00000001 SPID:3581 ECID:0 XactLockInfo: 0x0000000283E0B990
Grant List 2:
KEY: 19:72057594175946752 (fb992e63ce7c) CleanCnt:2 Mode:U Flags: 0x1
Node:2
Log Viewer could not read information for this log entry. Cause: Data is Null. This method or property cannot be called on Null values. Content:
ResType:LockOwner Stype:'OR'Xdes:0x0000000283E0B950 Mode: U SPID:3581 BatchID:0 ECID:0 TaskProxy:(0x00000000EE40C538) Value:0xdc87ce80 Cost:(0/488)
Requested by:
Input Buf: Language Event: (@P0 nvarchar(4000)@P1 nvarchar(4000)@P2 nvarchar(4000)@P3 int@P4 nvarchar(4000)@P5 nvarchar(4000)@P6 nvarchar(4000)@P7 nvarchar(4000)@P8 nvarchar(4000)@P9 datetime@P10 nvarchar(4000)@P11 int@P12 int@P13 nvarchar(4000)@P14 nvarchar(4000))upd
SPID: 3683 ECID: 0 Statement Type: UPDATE Line #: 1
Owner:0x000000025A793580 Mode: X Flg:0x40 Ref:0 Life:02000000 SPID:3683 ECID:0 XactLockInfo: 0x00000000B08A83F0
Grant List 1:
KEY: 19:72057594175946752 (303c5a46f094) CleanCnt:2 Mode:X Flags: 0x1
Node:1
Log Viewer could not read information for this log entry. Cause: Data is Null. This method or property cannot be called on Null values. Content:
Wait-for graph
Deadlock encountered .... Printing deadlock information
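Note: for future captures, SQL Server can write deadlock graphs to its error log in a more structured form than the Log Viewer output above - a standard technique, shown here as a sketch:

    DBCC TRACEON (1222, -1);  -- log detailed deadlock graphs server-wide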

Oracle 11g: performance improvements of inserts

Posted: 25 Mar 2013 06:33 PM PDT

I have a table of 500 million rows (and growing).

I did the following to improve performance of inserts:

On database side:

  • dropped all indexes and constraints
  • disabled logging

On application side:

  • switched from JPA-managed entities to native insert queries, and added the APPEND Oracle hint to the query
  • tried to commit in batches of 1k/2k/3k rows
  • tried to write in parallel (multiple threads, thread count equal to the server's core count) to one table

This gave me about 300 rows per second.

Additionally tried:

  • writing in parallel in batches to multiple tables (then grouping the results back together using UNION)

This gave me about 1k rows per second, but only on empty tables. When I filled the tables with dummy data (200 million rows each), insert speed dropped to 250-300 rows per second.

Could anyone suggest what else I can do to speed up inserts? Basically, I want to understand first what is (or what could be) the bottleneck.

UPD: The table is partitioned by insert date and has about 60 columns - most columns are VARCHAR2(2000 BYTE).
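Note: one thing worth checking - a sketch, since the exact statements aren't shown and the table names below are hypothetical: the APPEND hint is honoured only for INSERT ... SELECT. For batched INSERT ... VALUES, Oracle 11gR2 introduced the separate APPEND_VALUES hint for direct-path loading:

    -- Direct-path insert: APPEND applies to INSERT ... SELECT
    INSERT /*+ APPEND */ INTO target_table
    SELECT * FROM staging_table;

    -- For batched INSERT ... VALUES (11.2+), use APPEND_VALUES instead
    INSERT /*+ APPEND_VALUES */ INTO target_table (col1, col2)
    VALUES (:1, :2);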

Landed as BI, but databases are a big WTF, what to do?

Posted: 25 Mar 2013 01:00 PM PDT

Maybe a duplicate, but I believe my case is a bit different. One of the answers I got points to this post on SQL Server Central, which comes in handy too but is not quite the same scenario: 9 Things to Do When You Inherit a Database

A couple of weeks ago I started a new job. I'm supposed to be working as a BI analyst and SQL developer, but right from the first assignments I noticed that, in general, everything was taking a long time to execute. I asked the person who is guiding me through the first days - my supervisor, you could say - and he told me that they know the databases are a mess. I asked if I could take a look and see what could be done, and got a yes as an answer.

So I began to dig in, using several really handy scripts (sp_BlitzIndex, for example).

What I've found is a big mess, as they told me. As an example, the BlitzIndex procedure returns almost 2000 rows, with lots of duplicate indexes, NC indexes including all the columns of a table, lots of heap tables, really wide indexes and more. As for backups, none has been done for several weeks; I asked about it and the IT guys just copy the databases each night to a different server. A couple of databases are over 100 GB and several others are close to that size too. Statistics are updated every day for every table. There are reports that take more than an hour to finish on not-so-big tables (just a couple of million rows). And so on.

As a test I spent a couple of days tuning a couple of big tables and the different procedures and queries that use them. I prepared a baseline using the profiler, then made a few changes and ran the test queries again. As expected, a report that was taking about 8 minutes now runs in around a minute, and a couple of other queries also now take less than half the time. All these changes were made on a test server; we still have a reporting server and a production server.

Taking into consideration that I'm supposed to be a BI and SQL developer with limited rights, new at the office and not a DBA: what other actions do you recommend in order to approach this scenario? There is an appointed DBA, but he seems to be just an SQL developer doing some DBA tasks. There was a DBA, but he left around half a year ago, they told me. Should I forget about these issues? Or, as someone who is heavily using the database, should I point out the problems and propose solutions? Has anyone been in the same scenario?

Postgres continuous backups - should WAL numbers be sequential after pg_start_backup call?

Posted: 25 Mar 2013 05:32 PM PDT

I'm using PostgreSQL WAL archiving and pg_start_backup as outlined in the Continuous Archiving section of the Postgres manual to backup a large database. Twice a month we do a full backup and every other night we just copy the WALs over to our backup space so if there's ever an issue we just do a restore of the last full backup + the WALs that have been created since then.

I've noticed that the WALs are sequential except where pg_start_backup is called (or technically it may be when calling pg_stop_backup, I'm not sure which), at which point it skips a few. Is this normal behaviour? I know calling pg_stop_backup moves to the next WAL segment, but I'd expect that to be the next number incrementally - or am I missing something?

For example on the 14th of March we had the following WALs after a 1st of March full backup:

    000000010000008500000090
    000000010000008500000091
    000000010000008500000092
    000000010000008500000093

After the backup it then went to:

    000000010000008500000096
    000000010000008500000096.00000020.backup

After this, normal WAL creation resumed:

    000000010000008500000097
    000000010000008500000098

and so on.

So should there be a 94 or 95 in this case?

I'm in the process of doing a test restore but as that involves copying 60GB (compressed) of data between two servers in different continents, I'd prefer to know sooner rather than later if I'm going to have an issue!

Can't select database for transaction log backup in maintenance plan

Posted: 25 Mar 2013 07:17 PM PDT

I am running into a problem while trying to save a transaction log database backup maintenance task:

  1. Add the 'Back Up Database Task' to the maintenance plan.
  2. Edit the task.
  3. Select backup type: 'Transaction Log'.
  4. Select any database running with Full Recovery Model.
  5. Close the task.
  6. Re-open the task. Notice that the database I selected is not checked anymore.

I can select all databases, all user databases, specific databases... none of them ever saves. Every time I re-open the task, that field has switched back to the < Select one or more > option.

I've connected as user sa and I've connected with Windows Authentication. Same result. Also, I've double-checked that the databases I'm trying to backup are in Full recovery model.

I'm running SQL Server 2005 SP2 on Windows 2003 R2 SP2.


EDIT: This happens even if the database had a full backup run against it.

SQL Server 2012, restore database

Posted: 25 Mar 2013 09:21 PM PDT

I am using MS SQL Management Studio 2012 to restore a database from a .bak file. (That file is from a backup of the database on the server.)

In my new database, the triggers are missing. Why did I lose the triggers?

MS SQL Server not importing data after expanding Partition Schema and Function

Posted: 25 Mar 2013 07:42 PM PDT

Context: I use SSIS to import data into MS SQL Server. A regular import process failed, saying that the particular filegroup I was trying to import data into was already full and couldn't allocate more space. So I created new filegroups and associated new files with them; however, the import still tried to import into the same "already full" files instead of the new ones that I created. At this point I deleted the filegroups and files without emptying them, which left me unable to create new files with the same names as the deleted files. Initially this was the problem, but I was soon able to fix it with the help of MSSQLSupport, and then the rest of the problem I describe below came up.

Problem:

  • Cannot import data into a table (4B rows, 2,300 files, 30+ filegroups) in a database with 50+ tables and 10+ TB of data, using an SSIS import package (this package can import data into other databases without any modifications)
  • The database is partitioned into files and filegroups using a partition scheme and a partition function
  • The partition is based on an integer field in the data that corresponds to a week number
  • The initial partition function range and scheme were defined up to the last week of Dec 2012
  • After creating an additional partition function and scheme, plus new files and filegroups, the data still will not import

I would provide additional information if I knew where to start. Any pointers or places to look will be much appreciated. I have also contacted a few support services for SQL Server and have not gotten any answers.
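Note: one thing to verify - a sketch with hypothetical object names, since the real ones aren't shown: extending the range past the original last boundary requires a SPLIT RANGE on the function, after telling the scheme which filegroup the new partition should use:

    -- hypothetical names: pf_WeekNum (function), ps_WeekNum (scheme), FG_2013_W01 (new filegroup)
    ALTER PARTITION SCHEME ps_WeekNum NEXT USED FG_2013_W01;
    ALTER PARTITION FUNCTION pf_WeekNum() SPLIT RANGE (201301);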

How can I count the total number of documents indexed by my Oracle Text index?

Posted: 25 Mar 2013 03:47 PM PDT

I know I can use ctx_query.count_hits to count the number of documents matching a query, and similarly I can use

count(*) where CONTAINS(...)  

But these are only options if I actually have a query: those functions throw an exception if I pass an empty string as the query (thanks, Oracle...). So:

Is there a way to count the total number of documents indexed by my Oracle Text index?

Foreign Key - One Child Table to Multiple Parent Tables

Posted: 25 Mar 2013 10:27 AM PDT

Maybe I'm thinking of this problem in the wrong way, but we maintain an AccessControl table that is a child of several parent tables (e.g. CompanyItems, SystemFiles, etc.). I'm trying to make sure we don't create orphaned AccessControl recs, but I'm not sure how to properly set up this relationship. Any tips?

AccessControl
  ItemId
  AccessId (for user access)
  ItemType
  OwnerCd (1 for owner, 0 for read-only access)

CompanyItems
  CompanyId
  ItemId
  ItemType
  ItemName

SystemFiles
  FileId
  FileName
  FileType

Sample Data:

AccessControl: ItemId=1,  AccessId=1 (UserId), ItemType=0 (for a CompanyItems rec), OwnerCd=1 (owner)
AccessControl: ItemId=1,  AccessId=2 (different UserId), ItemType=0, OwnerCd=0 (read-only access)
AccessControl: ItemId=10, AccessId=1, ItemType=1 (for SystemFiles entry), OwnerCd=0
CompanyItems:  CompanyId={whatever company}, ItemId=0, ItemType=0, ItemName='Test Item'
SystemFiles:   FileId=10, FileName='Test File', FileType='pdf', etc.

If my CompanyItems is deleted, the two corresponding AccessControl recs should also be deleted. If my SystemFiles rec is deleted, then its one corresponding AccessControl rec should be deleted.
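Note: one common pattern - a sketch only, in generic SQL with hypothetical constraint details; it assumes CompanyItems.ItemId and SystemFiles.FileId are unique keys. Give AccessControl one nullable FK column per parent, enforce that exactly one is set, and let ON DELETE CASCADE clean up:

    CREATE TABLE AccessControl (
        AccessControlId int NOT NULL PRIMARY KEY,
        AccessId        int NOT NULL,   -- user access
        OwnerCd         int NOT NULL,   -- 1 owner, 0 read-only
        CompanyItemId   int NULL,
        SystemFileId    int NULL,
        FOREIGN KEY (CompanyItemId) REFERENCES CompanyItems (ItemId) ON DELETE CASCADE,
        FOREIGN KEY (SystemFileId)  REFERENCES SystemFiles (FileId)  ON DELETE CASCADE,
        CHECK ((CompanyItemId IS NOT NULL AND SystemFileId IS NULL)
            OR (CompanyItemId IS NULL     AND SystemFileId IS NOT NULL))
    );

Deleting a CompanyItems row then removes its AccessControl rows automatically, and the ItemType discriminator column becomes redundant.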

InnoDB best index practises

Posted: 25 Mar 2013 08:58 AM PDT

I have an InnoDB table with around ~3.7m rows in it. Here are my current indexes:

+-------------+------------+---------------+--------------+---------------+-----------+-------------+----------+--------+------+------------+---------+
| Table       | Non_unique | Key_name      | Seq_in_index | Column_name   | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+-------------+------------+---------------+--------------+---------------+-----------+-------------+----------+--------+------+------------+---------+
| hotel_avail |          0 | PRIMARY       |            1 | id            | A         |     3720035 |     NULL | NULL   |      | BTREE      |         |
| hotel_avail |          0 | PRIMARY       |            2 | nights        | A         |     3720035 |     NULL | NULL   |      | BTREE      |         |
| hotel_avail |          1 | hotelname     |            1 | hotelname     | A         |        5914 |     NULL | NULL   |      | BTREE      |         |
| hotel_avail |          1 | destair       |            1 | destair       | A         |         347 |     NULL | NULL   |      | BTREE      |         |
| hotel_avail |          1 | boardbasis    |            1 | boardbasis    | A         |        2436 |     NULL | NULL   | YES  | BTREE      |         |
| hotel_avail |          1 | iso           |            1 | iso           | A         |         347 |     NULL | NULL   | YES  | BTREE      |         |
| hotel_avail |          1 | cheapestprice |            1 | cheapestprice | A         |      372003 |     NULL | NULL   |      | BTREE      |         |
| hotel_avail |          1 | dates         |            1 | checkindate   | A         |        6262 |     NULL | NULL   | YES  | BTREE      |         |
| hotel_avail |          1 | dates         |            2 | checkoutdate  | A         |        6262 |     NULL | NULL   | YES  | BTREE      |         |
| hotel_avail |          1 | engineid      |            1 | engineid      | A         |         347 |     NULL | NULL   |      | BTREE      |         |
| hotel_avail |          1 | itemcode      |            1 | itemcode      | A         |       28182 |     NULL | NULL   |      | BTREE      |         |
+-------------+------------+---------------+--------------+---------------+-----------+-------------+----------+--------+------+------------+---------+

Does it look like overkill? Can I merge some of these indexes? I've seen that in InnoDB you can create a PRIMARY KEY spanning several fields.

I'm always selecting on several of these fields, filtering the results using WHERE, and JOINing on a few of them as well.
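Note: whether indexes can be merged depends entirely on the actual query shapes, but as an illustration - a sketch with a hypothetical column order - single-column indexes on columns that are always filtered together can often be replaced by one composite index whose leftmost columns match the WHERE clause:

    -- hypothetical: queries filtering on destair plus a date range, sorted by price
    ALTER TABLE hotel_avail
        ADD INDEX destair_dates_price (destair, checkindate, checkoutdate, cheapestprice);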

Foreign Key constraint on fixed value field - Ever appropriate?

Posted: 25 Mar 2013 08:55 AM PDT

I have a short (15 rows) look-up table that lists the valid values for several columns in the database.
I don't think it can be considered a One-True-Lookup-Table; it is about a single definite concept, but it may conceptually be partitioned into a few subgroups (3).
I now happen to need to add a column that must accept only the values from one of these subgroups.

The proper thing to do now would probably be to make one table for each subgroup and turn the original table into a simple list of IDs, from which the subgroup tables take their primary IDs.

It is, though, very unlikely that I will ever need to refer to the subgroups again; I will instead frequently use things for which all the values in the original table are valid.

So the partitioning option would make things more complex for the vast part of the application just to support one case.

The only other option I know of is to add a Type column to the original lookup table and a fixed-value column to the single table that needs to refer to the subgroup, and use a two-column foreign key (to ID + Type) in this single case.

This is very ugly, and I'm not sure whether it is theoretically correct to use columns that are not the primary key as foreign key referents, but given the context, is it acceptable, or even the best way?
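Note: for what it's worth, the SQL standard only requires the referenced column list to be covered by a unique constraint, not to be the primary key. A sketch of the two-column variant, with hypothetical table and column names:

    ALTER TABLE lookup
        ADD CONSTRAINT uq_lookup_id_type UNIQUE (id, type);

    CREATE TABLE single_case (
        id          int NOT NULL PRIMARY KEY,
        lookup_id   int NOT NULL,
        lookup_type char(1) NOT NULL DEFAULT 'B' CHECK (lookup_type = 'B'),  -- fixed subgroup
        FOREIGN KEY (lookup_id, lookup_type) REFERENCES lookup (id, type)
    );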

Error on creating database in SSMS

Posted: 25 Mar 2013 08:19 AM PDT

I am using SQL Server Management Studio 2008. I am connected to localhost and I want to create a database. When I click on New Database, type the name, and click OK, I get the message:

Index was outside the bounds of the array (Microsoft.SQLServer.Smo).

Replicating databases for local and cloud instances

Posted: 25 Mar 2013 07:50 AM PDT

Forgive me if this is an inappropriate place for this architectural question but I'm not sure where it best fits in the StackExchange family.

I have been tasked with developing an online/live auction system that operates at multiple locations simultaneously as well as online. These auctions are high paced, selling thousands of items in 3-4 hours with roughly 3 "auction blocks" at each location. The point is, there are several people connecting to this at once. Normally, I'd begin designing this "in the cloud" but I've been told it MUST operate even if internet access is lost (and this internet access is of low quality to begin with).

Initially I thought it might be best to simply write two pieces of software, one running locally and one running in the cloud, with an API bridge between them. The unfortunate part of this is that the database would have to live locally and be served from the location, and I don't particularly care for that, especially considering the bandwidth limitations. Additionally, the scope requires that each location be able to access other locations' auction items.

Lately I've thought a replication approach may work, where the database is replicated across all locations and the local "client" operates against the cloud until latency hits a predefined threshold or the internet completely dies, and then switches to the local copy of the database (it's OK if the online auction gets shut off in a bad-connection scenario - it would be the first casualty).

Does anyone have any best practices or experience in dealing with large amounts of data, changing rapidly with tons of connections, that must be available both on a local network and online (in the cloud)? What works best in these scenarios?

EDIT

Would MySQL or MongoDB, using replication, replicate fast enough to be used in this scenario? Perhaps with the primary database in the cloud and several failovers (one at each location).

How to write this self join based on three columns

Posted: 25 Mar 2013 06:24 PM PDT

Hello there, I have the following table:

------------------------------------------
| id | language | parentid | no_daughter |
------------------------------------------
| 1  |     1    |    0     |      2      |
| 1  |     1    |    0     |      2      |
| 2  |     1    |    1     |      1      |
| 2  |     2    |    1     |      1      |
| 3  |     1    |    1     |      0      |
| 3  |     2    |    1     |      0      |
| 4  |     1    |    2     |      0      |
| 4  |     2    |    2     |      0      |
| 5  |     1    |    2     |      0      |
| 5  |     2    |    2     |      1      |
| 5  |     1    |    4     |      1      |
| 5  |     2    |    4     |      1      |
------------------------------------------

Scenario

Every record has more than one row in the table, with different language ids. parentid tells who the parent of a record is. The no_daughter column tells, for each record, how many children the record has. In an ideal scenario, if no_daughter has value 2 for id = 1, then 1 should be the parentid of 2 records in the same table. If a record occurs more than once because of language, it is counted as one record.

My Problem

I need to find the records where the no_daughter value is not correct. If no_daughter is 2, there must be two records whose parentid has that id. In the above case, the record with id = 1 is valid, but the record with id = 2 is not, because no_daughter = 1 while the actual number of daughters of this record is 2. The same is the case with id = 4.

Can any body tell me how can I find these faulty records?

So far help received

SELECT DISTINCT id
FROM tbl_info t
INNER JOIN
    (SELECT parentid,
            COUNT(DISTINCT id) AS childs
     FROM tbl_info
     GROUP BY parentid) AS parentchildrelation
    ON t.id = parentchildrelation.parentid
    AND t.no_daughters != parentchildrelation.childs

This query returns those ids that have been used as a parentid somewhere in the table but have wrong no_daughter values. But it does not return ids that have a value in the no_daughter column yet have not been used as a parentid anywhere in the table. For example, id = 5 has no_daughter = 1 but is not used as a parentid, so it is also a faulty record. The above query does not capture such records.

Any help will be much appreciated.
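Note: a sketch of one way to also catch the second case - a LEFT JOIN lets ids that never appear as a parentid compare their no_daughter against 0. Column naming follows the table definition above (the draft query used no_daughters, so adjust to whichever spelling is real):

    SELECT DISTINCT t.id
    FROM tbl_info t
    LEFT JOIN
        (SELECT parentid,
                COUNT(DISTINCT id) AS childs
         FROM tbl_info
         GROUP BY parentid) AS pcr
        ON t.id = pcr.parentid
    WHERE t.no_daughter <> COALESCE(pcr.childs, 0);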

Optimization of a select statement

Posted: 25 Mar 2013 06:30 PM PDT

I'm using MySQL and have a table user_data like this:

user_id         int(10) unsigned
reg_date        int(10) unsigned
carrier         char(1)

reg_date is the unix timestamp of the registration date, and carrier is the type of carrier, whose possible values can ONLY be 'D', 'A' or 'V'.

I need to write a SQL statement to select, for each day from 2013/01/01 to 2013/01/31, the number of users registered per carrier. So the desirable result could be:

2013/01/01   D   10
2013/01/01   A   31
2013/01/01   V   24
2013/01/02   D    9
2013/01/02   A   23
2013/01/02   V   14
....
2013/01/31   D   11
2013/01/31   A   34
2013/01/31   V   22

Can anyone help me with this question? I'm required to give the BEST answer, which means I can add index if necessary, but I need to keep the query efficient.

This is what I have right now:

select FLOOR((FROM_UNIXTIME(reg_date)) / 1000000) as reg_day, carrier, count(user_id) as user_count
from user_data
where reg_date >= UNIX_TIMESTAMP('2013-01-01 00:00:00')
  and reg_date < UNIX_TIMESTAMP('2013-02-01 00:00:00')
group by reg_day, carrier;

Thanks!

The question has been moved to here and please find all the updates in the link.
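Note: a possible refinement - a sketch, untested: DATE(FROM_UNIXTIME(...)) expresses the per-day grouping more directly than the FLOOR arithmetic, and a composite index lets the range scan deliver carrier without touching the rows:

    ALTER TABLE user_data
        ADD INDEX idx_reg_date_carrier (reg_date, carrier);

    SELECT DATE(FROM_UNIXTIME(reg_date)) AS reg_day,
           carrier,
           COUNT(*) AS user_count
    FROM user_data
    WHERE reg_date >= UNIX_TIMESTAMP('2013-01-01 00:00:00')
      AND reg_date <  UNIX_TIMESTAMP('2013-02-01 00:00:00')
    GROUP BY reg_day, carrier;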

why would command text not execute at all?

Posted: 25 Mar 2013 03:43 PM PDT

In my report I have:

SELECT
    column1,
    column2,
    'poop'
from mytable

I am using SQL Server Profiler to see exactly what statement is being sent.

I have set only two filters:

  1. databaseName
  2. [a second filter, shown in a screenshot]

Yet after running the report, no statement gets intercepted.

I suspect that, because I am a beginner at SSRS, I am missing something crucial here.

For what reason would the command text not be executed at all?

I did follow this question to make sure that I am using SQL Profiler correctly, and indeed I am: http://stackoverflow.com/questions/9107383/sql-server-profiler-capture-calls-to-your-databases-stored-procs-during-ssrs

Another important bit of information: although the chart shows no data:

[screenshot of the empty chart]

I actually do get data when I run the command text from SSMS!

SQL Server split mdf into multiple files

Posted: 25 Mar 2013 01:53 PM PDT

I have a database called example.mdf, with a total size of 1 GB, which suffers from performance issues. I checked the allocated hardware and it is higher than required, and I double-checked the design and everything looks normal. But when I look at the .mdf files in their physical location (C:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\DATA\example.mdf) I find that the database is split into multiple files: example_1.mdf, example_2.mdf, example_3.mdf, up to example_7.mdf.

I have another database file on the same SQL Server that has the same issue.

Why does this happen? Does it affect performance? How can I prevent or stop SQL Server from splitting my .mdf files? Moreover, I need to combine the already-split files back together.
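Note: SQL Server does not split data files on its own - extra files have to be added explicitly, so it's worth checking who or what created them. If the goal is to merge them back, a sketch (logical file names are assumptions; the real ones can be read from sys.database_files):

    USE example;
    -- move all data out of one secondary file, then drop it
    DBCC SHRINKFILE (example_7, EMPTYFILE);
    ALTER DATABASE example REMOVE FILE example_7;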

Rent weekly cost database design

Posted: 25 Mar 2013 11:01 AM PDT

I have a database which contains a table BUILDING with, in each row, details about a building. Another table, BUILDING_UNIT, contains rows with details about a single building unit, which refers with a foreign key to the containing BUILDING.ID. The BUILDING_UNIT table also refers to a table CATEGORY, which tells whether the BUILDING_UNIT is of category A, B, C or D, again with a foreign key pointing to CATEGORY.ID.

Now the final cost of renting a building unit depends on its building, its category, the number of days it is rented, and the specific period of the year. We only rent them weekly, so I might as well use weeks only; however, I'd like it to be as flexible as possible in the future.

I cannot convince myself of a table design which can represent this situation.

Do I have to use a table with coefficients for each day of the year, then a table with coefficients for A, B, C, D, then a table with coefficients for each building, and then somehow calculate a result?

Is there some standard and recognized implementation for problems of this type?

Thank you

EDIT: Notice the solution should abstract from the formula for calculating the cost, which might change in the future. However, I might be asked to make a specific week of the year for building unit X inside building Y cost $300, while the week after costs $600. Generally, building units inside the same building cost the same in the same week, but that might change in the future, so I'd like to handle all the specific cases already.
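Note: one sketch of a flexible shape, with hypothetical names: a single rate table holding a price per date range at any of the three scopes, where the most specific matching row wins. Pricing logic then lives in a query or function that can change without altering the schema:

    CREATE TABLE RATE (
        ID               integer PRIMARY KEY,
        CATEGORY_ID      integer NULL REFERENCES CATEGORY (ID),
        BUILDING_ID      integer NULL REFERENCES BUILDING (ID),
        BUILDING_UNIT_ID integer NULL REFERENCES BUILDING_UNIT (ID),
        VALID_FROM       date NOT NULL,
        VALID_TO         date NOT NULL,
        WEEKLY_PRICE     decimal(9,2) NOT NULL
    );

A unit-level row for building unit X covering one week would override the building-level default, which in turn overrides the category-level default.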

Proper procedure for migrating a MySQL database to another Debian machine?

Posted: 25 Mar 2013 08:51 AM PDT

I have one server running an older Debian version with MySQL 5.x and a newer Debian server, also running MySQL.

I've created a backup of all databases on the first server like so:

mysqldump -uuser -ppass --all-databases > dump.sql  

On the other server, I did a:

mysql -uuser -ppass < dump.sql  

At first, everything seemed great. I could browse my databases in phpMyAdmin, but as soon as I tried logging in again, it failed. It turns out my root password had been overwritten with the one from the older server.

I wanted to reset it, but in order to do so I would have needed to start mysqld_safe, which I couldn't, because the password for the debian-sys-maint user had been overwritten in the database as well. Just when I thought all hell had broken loose, I somehow reset both the root and debian-sys-maint passwords to the original values of the new server, and managed to revert to a clean state.

Since I obviously don't want to go down that road again, here's the question(s):

  • Was I right with my approach of using a complete --all-databases dump?
  • Was there something I needed to do before reading in that dump to prevent this disaster from happening? Or even before creating the dump?

If I'm going about this the wrong way:

  • What is the proper procedure for migrating all databases and their users to another server?

Note that I'm not very experienced with MySQL or server administration, so I might be missing something obvious. All the tutorials and how-tos I've found never mention anything like this and just talk about importing the complete dump.
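Note: a common way to avoid clobbering the target server's accounts - a sketch, with placeholder database names - is to dump only the application schemas and recreate users separately, rather than carrying the mysql system database across versions:

    # dump only the application schemas; the mysql system database
    # (user accounts, debian-sys-maint, root password) stays untouched
    mysqldump -uuser -ppass --databases app_db1 app_db2 > dump.sql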

How to add 'root' MySQL user back on MAMP?

Posted: 25 Mar 2013 11:51 AM PDT

In phpMyAdmin, I removed the 'root' user by mistake, while I was also logged in as 'root'. How can I add the 'root' user back on MAMP?
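Note: a sketch of the usual recovery path, assuming MAMP's bundled MySQL 5.x: stop MySQL, start mysqld manually with --skip-grant-tables, then recreate root. (MAMP's conventional default root password is 'root' - an assumption worth verifying.)

    -- with mysqld running under --skip-grant-tables:
    FLUSH PRIVILEGES;   -- reload the grant tables so GRANT statements work again
    GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost'
        IDENTIFIED BY 'root' WITH GRANT OPTION;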

Database Link from Oracle 10g on Windows over to MySQL database

Posted: 25 Mar 2013 10:57 AM PDT

Didn't see any answers on this subject so hoping for a miracle! :-)

I've got an Oracle 10g database running on Windows Server 2003, and I'd like to create a database link to an external MySQL database which is running in the Amazon Web Services RDS cloud.

We may have the option of upgrading the Oracle version to 11g if that would help?

Has anyone done anything like this before?

Thanks for your help! Josh
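Note: the usual route - described here only as a sketch, since the gateway setup itself involves listener and tnsnames configuration not shown - is Oracle Heterogeneous Services with the ODBC gateway (hsodbc on 10g, dg4odbc on 11g) pointed at a MySQL ODBC DSN, after which an ordinary database link works. Names below are hypothetical:

    -- once the ODBC gateway and a tnsnames alias (here MYSQL_RDS) are configured:
    CREATE DATABASE LINK mysql_rds
        CONNECT TO "mysql_user" IDENTIFIED BY "mysql_pass"
        USING 'MYSQL_RDS';

    SELECT * FROM "some_table"@mysql_rds;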

Slow insert with MySQL full-text index

Posted: 25 Mar 2013 07:51 AM PDT

I use a full-text index in a MySQL table, and each insert into this table takes about 3 seconds. It seems that MySQL rebuilds (part of) the full-text index after each insert/update. Is this right?

How can I get better performance from the INSERT? Is there perhaps an option that controls when MySQL rebuilds the full-text index?
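Note: for bulk loads, a common workaround - sketched here with hypothetical table/index names - is to drop the full-text index, insert, and rebuild it once at the end, which is usually far cheaper than maintaining it row by row:

    ALTER TABLE articles DROP INDEX ft_body;
    -- ... perform the bulk INSERTs ...
    ALTER TABLE articles ADD FULLTEXT INDEX ft_body (body);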

SQL Server equivalent to functionality of Oracle RAC?

Posted: 25 Mar 2013 10:38 AM PDT

I did some Googling and couldn't find an answer to this question more recent than a few years ago, so I thought I'd ask. Oracle's RAC feature offers load-balancing for both read and write transactions, as well as scale-out and high-availability without downtime (at least, as I understand it - we're about to deploy our first databases that use RAC, so we'll see how it goes).

Is there any SQL Server feature set (or third-party component you could install on top) that delivers equivalent functionality? We've always used Windows clustering, where a failover event causes about 20-30 seconds of SQL downtime - always tolerable, but not ideal. Now, with AlwaysOn in SQL 2012, SQL Server shrinks that to about 15 seconds and adds the concept of read-only secondary databases, but it still requires that write transactions be choked through a single connection point (much improved, since many transactions only read, but still not really load balancing), and in the case of a node failure or the need to patch, there's still downtime.

I suppose it's just more curiosity - I feel like this is the only area that SQL Server falls behind Oracle (at least among the features I've personally seen used). I wanted to see if there are any options out there to close that gap and possibly improve our own SQL Server deployment while we wait for Microsoft's equivalent feature to be added - maybe in SQL 2014/2015?

How to use Windows Authentication with SQL Server 2008 Express on a workgroup network?

Posted: 25 Mar 2013 08:29 PM PDT

I have two computers running SQL Server 2008 Express: c01 and c02. I set up both for remote connections using Windows authentication. It worked fine for c02, but not for c01.

This is the error message I'm getting:

Cannot connect to ACAMP001\SQLEXPRESS.


ADDITIONAL INFORMATION:

Login failed. The login is from an untrusted domain and cannot be used with Windows authentication. (Microsoft SQL Server, Error: 18452)

For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=18452&LinkId=20476

I don't know if I'm missing something, here is what I did:

  1. Enabled TCP/IP protocol for client from Sql Server Configuration Manager.
  2. Modified Windows firewall exceptions for respective ports.
  3. Started the Sql Browser service as a local service
  4. Added Windows user to this group: "SQLServerMSSQLUser$c01$SQLEXPRESS"
  5. From Management Studio, I added "SQLServerMSSQLUser$c01$SQLEXPRESS" to SQLEXPRESS instance's logins under security folder, and I granted sysadmin permissions to it.
  6. Restarted c01\SQLEXPRESS
  7. Restarted Sql Browser service.
  8. There is no domain here. It's only a workgroup.

Any help is appreciated. Thank you.
