Thursday, March 28, 2013

[how to] Extracting Multiple Values from Single Field


Extracting Multiple Values from Single Field

Posted: 28 Mar 2013 05:39 PM PDT

I'm an SQL beginner and I'm having trouble searching correctly for the answer to this question, although I'm sure it's been asked before. I have a column in an SQL table that looks like this:

custom_fields (this is the column name)
age_range=35-49
gender=Male
how_heard=Other (All three of these values are in the same field)

Is there any way to reach inside this field with an SQL statement to get the values individually? I'd like to make three lists of the individual values to export to CSV files as vertical columns.

Basically, the result should be a column like (for each value):

age_range
35-49
45-55
25-35
... etc.

I've been trying:

SELECT custom_fields.age_range FROM arp3_cam WHERE 1

But this is obviously incorrect. I'd really appreciate any hints on how to filter for values inside the same field -- feel free to redirect me to an answer if this has already been asked somewhere else. Thanks!

EDIT: based on other posts I've been reading, this setup violates first normal form; however, I still need to extract the data, if possible. I definitely wish the values in "custom_fields" had been added as individual columns.
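For what it's worth, one hedged sketch of how the extraction might look in MySQL, assuming the three key=value pairs are separated by newlines inside custom_fields (the table name arp3_cam comes from the question; the newline delimiter is an assumption):

```sql
-- SUBSTRING_INDEX(str, delim, -1) keeps everything after the last occurrence
-- of the delimiter; the outer call then cuts at the first newline that
-- follows the value.
SELECT
    SUBSTRING_INDEX(
        SUBSTRING_INDEX(custom_fields, 'age_range=', -1),
        '\n', 1
    ) AS age_range
FROM arp3_cam
WHERE custom_fields LIKE '%age_range=%';
```

Repeating the same pattern with 'gender=' and 'how_heard=' would give the other two lists; each SELECT can then be exported to CSV with INTO OUTFILE or a client-side export.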

MySQL: parallel restores at same time from backup files?

Posted: 28 Mar 2013 08:54 PM PDT

Slightly related to this question, but for MySQL: would it be possible/wise to execute two parallel restores from a single backup?

Is it possible/wise to do a parallel restore of two independent databases from two separate MySQL dump files?

Restoring one of them seems to consistently take up only about 10% of CPU cycles and takes a long time to finish, so I'm thinking that a second restore shouldn't cause any load problems. But might it cause any other MySQL/DB issues?
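A minimal sketch of what such a parallel restore could look like from a shell, assuming two separate dump files for two independent databases (file names, database names, and the password handling are invented for illustration):

```shell
# Launch both restores in the background, then wait for both to finish.
# Each mysql client is single-threaded, so two independent restores can
# use two cores - though they still share the InnoDB redo log and disk.
mysql -u root -p"$MYSQL_ROOT_PW" db_one < db_one_dump.sql &
mysql -u root -p"$MYSQL_ROOT_PW" db_two < db_two_dump.sql &
wait
```

Since a single restore is typically bottlenecked on one CPU core and on fsync, two restores into independent databases usually parallelize reasonably well.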

Relating ExecutionInstanceGUID to the SSISDB

Posted: 28 Mar 2013 04:06 PM PDT

The 2012 release of SQL Server Integration Services, SSIS, has delivered an SSISDB catalog which tracks the operations of packages (among other things). The default package execution for solutions using the Project Deployment model will have logging to the SSISDB turned on.

When a package executes, the System::ExecutionInstanceGUID is populated with a value that, if one were using explicit logging (to sys.sysdtslog90/sys.sysssislog) would record all the events for a specific package execution.

What I'd like to know is: how do I tie an ExecutionInstanceGUID to anything in the SSISDB catalog? Alternatively, is an SSIS package executing in the SSISDB privy to the value of its catalog.executions.execution_id?

Ultimately, I am trying to use the existing, custom audit table and link it back to the detailed history in the SSISDB catalog but can't seem to find the link.
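One avenue that may close the gap (an assumption on my part; worth verifying against your build): in the 2012 release a package also has access to the System::ServerExecutionID variable, which carries the catalog's execution_id at run time. Capturing that value into the custom audit table would allow a join such as this sketch (the audit table and its column are hypothetical):

```sql
-- Hypothetical join between a custom audit table, which stored the value of
-- System::ServerExecutionID during the run, and the SSISDB catalog.
SELECT a.audit_id,
       e.execution_id,
       e.package_name,
       e.start_time,
       e.status
FROM dbo.MyAuditTable AS a               -- assumed custom audit table
JOIN SSISDB.catalog.executions AS e
  ON e.execution_id = a.server_execution_id;
```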

limitations of a key-value store on top of a MySQL table [closed]

Posted: 28 Mar 2013 02:20 PM PDT

Consider an implementation of a key-value store on top of one MySQL table with only two fields, "key" and "value". Would this be a "good" key-value store, or would building it on top of MySQL come with limitations?

MySQL - ERROR 1045 (28000): Access denied for user

Posted: 28 Mar 2013 06:37 PM PDT

I just installed a fresh copy of Ubuntu 10.04.2 LTS on a new machine. I logged into MySQL as root:

david@server1:~$ mysql -u root -p123  

I created a new user called repl. I left the host blank, so the new user may have access from any location.

mysql> CREATE USER 'repl' IDENTIFIED BY '123';
Query OK, 0 rows affected (0.00 sec)

I checked the user table to verify the new user repl was properly created.

mysql> select host, user, password from user;
+-----------+------------------+-------------------------------------------+
| host      | user             | password                                  |
+-----------+------------------+-------------------------------------------+
| localhost | root             | *23AE809DDACAF96AF0FD78ED04B6A265E05AA257 |
| server1   | root             | *23AE809DDACAF96AF0FD78ED04B6A265E05AA257 |
| 127.0.0.1 | root             | *23AE809DDACAF96AF0FD78ED04B6A265E05AA257 |
| ::1       | root             | *23AE809DDACAF96AF0FD78ED04B6A265E05AA257 |
| localhost |                  |                                           |
| server1   |                  |                                           |
| localhost | debian-sys-maint | *27F00A6BAAE5070BCEF92DF91805028725C30188 |
| %         | repl             | *23AE809DDACAF96AF0FD78ED04B6A265E05AA257 |
+-----------+------------------+-------------------------------------------+
8 rows in set (0.00 sec)

I then exit and try to log in as user repl, but access is denied.

david@server1:~$ mysql -u repl -p123
ERROR 1045 (28000): Access denied for user 'repl'@'localhost' (using password: YES)
david@server1:~$ mysql -urepl -p123
ERROR 1045 (28000): Access denied for user 'repl'@'localhost' (using password: YES)
david@server1:~$

Why is access denied?
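A likely culprit, judging from the user table above (hedged, since it depends on the exact rows present): the anonymous accounts ''@'localhost' and ''@'server1' sort ahead of 'repl'@'%' in MySQL's host-matching rules, so a login from localhost matches the anonymous user, whose empty password then rejects '123'. Two common fixes, sketched:

```sql
-- Either remove the anonymous users that shadow 'repl'@'%' ...
DROP USER ''@'localhost';
DROP USER ''@'server1';

-- ... or create an explicit localhost account for repl.
CREATE USER 'repl'@'localhost' IDENTIFIED BY '123';

FLUSH PRIVILEGES;
```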

h2 Java server synchronization and caching

Posted: 28 Mar 2013 10:26 AM PDT

I'm happily coding a client-server application in Java using H2 as database. The server runs the database as an embedded local database while clients connect to it via TCP.

Now more and more I see myself in need of syncing between clients and server which requires complex messaging and threads keeping track of the last change to the database. Caching is also a good thing I might need because right now each client requests the same table every time an update has been made.

Now my question is: isn't there something for syncing and caching with Java and H2 that is a quick and easy library, not an entire database-layer stack like Hibernate or Mongrel? I feel like I'm reinventing the wheel for something that is not extremely complex.

Which database could handle storage of billions/trillions of records?

Posted: 28 Mar 2013 03:36 PM PDT

We are looking at developing a tool to capture and analyze netflow data, of which we gather tremendous amounts. Each day we capture about ~1.4 billion flow records, which would look like this in JSON format:

{
    "tcp_flags": "0",
    "src_as": "54321",
    "nexthop": "1.2.3.4",
    "unix_secs": "1352234521",
    "src_mask": "23",
    "tos": "0",
    "prot": "6",
    "input": "105",
    "doctets": "186",
    "engine_type": "0",
    "exaddr": "2.3.4.5",
    "engine_id": "2",
    "srcaddr": "9.8.7.6",
    "dst_as": "12345",
    "unix_nsecs": "752265174",
    "sysuptime": "2943529544",
    "dst_mask": "24",
    "dstport": "80",
    "last": "2943523241",
    "srcport": "52672",
    "dpkts": "4",
    "output": "111",
    "dstaddr": "6.5.4.3",
    "first": "2943517993"
}

We would like to be able to do fast searches (under 10 seconds) on the data set, most likely over narrow slices of time (10-30 minute intervals). We also want to index the majority of the data points so we can search each of them quickly, and we would like an up-to-date view of the data when searches are executed. It would be great to stay in the open-source world, but we are not opposed to looking at proprietary solutions for this project.

The idea is to keep approximately one month of data, which would be ~43.2 billion records. A rough estimate that each record would contain about 480 bytes of data, would equate to ~18.7 terabytes of data in a month, and maybe three times that with indexes. Eventually we would like to grow the capacity of this system to store trillions of records.

We have (very basically) evaluated Couchbase, Cassandra, and MongoDB so far as possible candidates for this project; however, each poses its own challenges. With Couchbase the indexing is done at intervals and not during insertion of the data, so the views are not up to date; Cassandra's secondary indexes are not very efficient at returning results, as they typically require scanning the entire cluster; and MongoDB looks promising but appears to be far more difficult to scale, as it is master/slave/sharded. Some other candidates we plan to evaluate are Elasticsearch, MySQL (not sure this is even applicable), and a few column-oriented relational databases. Any suggestions or real-world experience would be appreciated.

For a InnoDB only DB, which of these elements can be removed?

Posted: 28 Mar 2013 10:46 AM PDT

So, I'm trying to set up a Drupal 7 my.conf file that combines best practices from various performance blogs. I'm realizing, though, that some of them are older than others, and many don't assume InnoDB. So of this list of configs, which are irrelevant if you're building for InnoDB?

[client]
port = 3306
socket = /var/run/mysqld/mysqld.sock

[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
nice = 0
open-files-limit = 4096

[mysqld]
port = 3306
user = mysql
default_storage_engine
default-storage-engine = InnoDB
socket = /var/run/mysqld/mysqld.sock
pid_file = /var/run/mysqld/mysqld.pid
basedir = /usr
tmpdir = /tmp
lc-messages-dir = /usr/share/mysql
local-infile = 0
automatic_sp_privileges = 0
safe-user-create = 1
secure-auth = 1
secure-file-priv = /tmp
symbolic-links = 0
key_buffer_size = 32M
myisam-recover = BACKUP,FORCE
concurrent_insert = 2
max_allowed_packet = 16M
max_connect_errors = 1000000
datadir = /var/lib/mysql
tmp_table_size = 64M
max_heap_table_size = 64M
query_cache_type = 1
query_cache_size = 0
query_cache_limit = 8M
query_cache_min_res_unit = 1K
default-storage-engine = InnoDB
thread_stack = 256K
thread_cache_size = 128
max_connections = 128
open_files_limit = 65535
skip-locking
skip-bdb
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
binlog_cache_size = 256K
sync_binlog = 256
expire_logs_days = 14
max_binlog_size = 1G
binlog_do_db = include_database_name
binlog_ignore_db = include_database_name
max_user_connections = 150
key_buffer = 16M
key_cache_block_size = 4K
bulk_insert_buffer_size = 8M
myisam_sort_buffer_size = 64M
join_buffer_size = 8M
read_buffer_size = 2M
sort_buffer_size = 3M
read_rnd_buffer_size = 64M
table_cache = 4096
table_definition_cache = 4096
table_open_cache = 16384
optimizer_search_depth = 4
collation-server = utf8_general_ci
interactive_timeout = 400
wait_timeout = 300
connect_timeout = 10
thread_concurrency = 8
back_log = 2048
open-files = 10000
query_prealloc_size = 65536
query_alloc_block_size = 131072

Missing data after attaching .mdf file

Posted: 28 Mar 2013 12:42 PM PDT

Recently, one of my customers had a server crash. Unfortunately, the third party who was responsible for the backups 'forgot' to schedule a backup of the DB (I know, I should have noticed that).

After a professional recovery, I have the .mdf and .ldf files back. I was able to attach them back to SQL Server 2005 Express. At first glance, everything looked ok.

However, the most recent changes (say, from the month before the crash) are missing. Has anyone seen that before? Does anyone know a solution? Is it possible that the data is still in the .ldf? I think the recovery model was Simple.

As a side note, I'm sure that the data was there since there is a printed version. I'm also sure that it is the most recent .mdf / .ldf that is recovered and sent.

I have already run DBCC CHECKDB several times with different parameters (every time on a fresh .mdf/.ldf combination, of course). I know there is a consistency error somewhere, but it doesn't block me from querying the last 20,000 rows, so I expect to find the last records there (or am I wrong?). Fixing the consistency error is lower priority since it occurs in the older data region. Worst case, we can remove that part.

So, please help me with the missing data.

Thanks in advance!

How did this MV refresh automatically, when it is defined to refresh only "ON DEMAND"?

Posted: 28 Mar 2013 04:03 PM PDT

This seems a bit odd, but I can't put my finger on the issue. As this happened on the production database, I am confident that no DB user initiated the refresh from their side. Is there some known bug or other factor that could have caused the refresh?

I defined an MV (materialized view) that is refreshed COMPLETE ON DEMAND using the below code:

DBMS_MVIEW.REFRESH(ON_DEMAND_MV, 'C');  

The Source of ON_DEMAND_MV is as follows:

CREATE MATERIALIZED VIEW ON_DEMAND_MV  REFRESH COMPLETE ON DEMAND  AS SELECT ... FROM ...;  

Recently I noticed that the MV had refreshed automatically in between our on-demand refreshes that are triggered by Informatica sessions (an ETL tool). I confirmed this by looking at the last_refresh_date column in the all_mviews view for the corresponding MV.

Is there anything I could do to further debug this issue..? Thank you.

What is the relationship between keys, indices, primary keys and clustered keys in RDBMS?

Posted: 28 Mar 2013 06:13 PM PDT

I am pretty confused about these terms in RDBMSs: keys and indices. Are they the same concept? For example:

CREATE TABLE FOO_TABLE (
    THE_KEY INTEGER PRIMARY KEY,
    FOO INTEGER NOT NULL,
    BAR INTEGER NOT NULL,
    QUZ INTEGER UNIQUE,
    CONSTRAINT UNQ_FOO UNIQUE (FOO, BAR)
);

In this table, which is a key? Which can be called an index? Are there non-primary keys in this table? And how do they improve the efficiency of searching? Thanks.
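To make the distinction concrete for the table above (a sketch; the statement is MySQL syntax, and exact output varies by platform): a key is a logical uniqueness rule, while an index is the physical structure most engines silently build to enforce it, which you can list:

```sql
-- On other platforms, query the catalog views (e.g. pg_indexes,
-- sys.indexes) instead of SHOW INDEX.
SHOW INDEX FROM FOO_TABLE;
-- Expect one index backing the primary key (THE_KEY), one backing the
-- UNIQUE constraint on QUZ, and one backing the composite UNIQUE (FOO, BAR).
```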

Advice on SQL Server Data Replication

Posted: 28 Mar 2013 10:49 AM PDT

We have a huge enterprise system with geographically separated SQL Servers.

We have some requirements related with data replication between these servers.

We have considered the Sync Framework option, but there is a requirement for multiple parallel data replication, so Sync Framework does not seem to be an option.

Also, we do not have a central database; the requirements state that the user shall configure the topology and the related conflict-resolution steps. There are three conflict-resolution options: "source wins", "destination wins" and "timestamp based".

Also note that the database structure is a bit complicated and we have some deep hierarchies.

Lastly, the only way of communication between these servers is using port 80; using other ports is not an option.

I need to decide how to implement the replication structure, but I do not seem to have any options other than creating a replication structure manually and implementing all features.

Do you know any tools, or have any ideas how to handle replication easily without any central SQL server and with user configurable topologies?

How does ORDER BY really work? [duplicate]

Posted: 28 Mar 2013 11:03 AM PDT

This question already has an answer here:

I have one question that is confusing me.

By logical query processing, the SELECT clause comes before the ORDER BY clause, and in theory every step in a query creates a virtual table. So how is it even possible to order the result by a column that is not listed in the SELECT clause, when the SELECT clause generates a virtual table containing only the selected columns?

If your query has a GROUP BY clause, the behavior becomes understandable, but in the absence of a GROUP BY clause it seems a little tricky. Does anybody have an explanation?
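A small illustration of the rule (table and column names are invented): ORDER BY is logically allowed to look back at the underlying row, not just the SELECT list, except once DISTINCT or GROUP BY has collapsed rows:

```sql
-- Valid: the engine keeps the underlying row available for sorting,
-- even though hire_date is not in the SELECT list.
SELECT name
FROM employees
ORDER BY hire_date;

-- Invalid in standard SQL: after DISTINCT there is no longer a single
-- hire_date per output row to sort by, so most platforms raise an error.
SELECT DISTINCT name
FROM employees
ORDER BY hire_date;
```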

Drupal 7 Uses InnoDB, so do I need other configs?

Posted: 28 Mar 2013 10:47 AM PDT

If I'm optimizing a database that will be delivering multiple Drupal 7 websites, all using InnoDB, do I need to worry about [myisamchk] & [isamchk] sections, or can I just worry about the stuff within [mysqld]?

I'm assuming that if I delete these sections, the defaults will just kick in if they are needed.

I'm trying to document, outline my.conf best practices here http://groups.drupal.org/node/289613

MySQL FEDERATED access to a PostgreSQL table

Posted: 28 Mar 2013 04:35 PM PDT

For example, two databases:

  1. DBMS: PostgreSQL
  2. DBMS: MySQL

Each contains one database with one table in it. Can I work with the table in PostgreSQL through MySQL?

Is it possible to link a PostgreSQL table into a MySQL database as a FEDERATED table, so that insert/update/select/delete or join operations can be run against the linked table?

I can link MySQL to MySQL with the code below and perform all operations.

CREATE TABLE `f_t21` (
    `id` int(11) DEFAULT NULL,
    `name` varchar(255) DEFAULT NULL
) ENGINE=FEDERATED DEFAULT CHARSET=latin1
  CONNECTION='mysql://user:pass@192.168.56.102:3306/db/table';

Can I link a PostgreSQL table (of any PostgreSQL version) to some MySQL version in the same way, for use as part of a federated DB?

Inheritance and its alternatives in PostgreSQL

Posted: 28 Mar 2013 05:20 PM PDT

I am building an archaeological database in PostgreSQL 9.2. The description of an archaeological context is subdivided between different types of context. I'm not sure how to implement this:

1) to violate third normal form

It is the dirtiest and probably the worst solution. I could check potential duplicates with triggers, but I suspect it would be the hardest way, since there are quite a lot of type-specific columns.

2) inheritance

This is better and might be good enough, but my "finds" table should have a foreign key to the "context" table regardless of its type. According to the manual, this would cause problems. With all the data in tables "layer", "cut", "structure", etc., would a foreign key to the "context" table really point to the data stored in the inherited tables? If there's any caveat, is there some (relatively) easy workaround?

3) some sort of 1:1 relation

The "finds" table would be connected to "context" table, and tables for various types of context would have a foreign key pointing to the "context" table. But how to assure that exactly one row in all the dependent tables corresponds to each row in "context"?

4) ???

There might be some other solution which I didn't know about.

Which of these is easiest to implement (including avoiding duplicates)? And which is best in terms of performance? The database will never be really big, so slightly suboptimal performance is not a problem, but big differences in performance might matter more than ease of implementation.

I'd be glad for answers telling me which option I should use, why, and especially: how to overcome its drawbacks (or that I missed something and there's no real drawback).
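For option 3, one well-known pattern can be sketched as follows (all table names are invented for illustration): carry a type discriminator in the supertype and reference it with a composite foreign key, so each context row can only be extended by the matching subtype table:

```sql
CREATE TABLE context (
    id    serial PRIMARY KEY,
    ctype text NOT NULL CHECK (ctype IN ('layer', 'cut', 'structure')),
    UNIQUE (id, ctype)                  -- target for the composite FKs below
);

CREATE TABLE layer (
    id    integer PRIMARY KEY,
    ctype text NOT NULL DEFAULT 'layer' CHECK (ctype = 'layer'),
    -- layer-specific columns ...
    FOREIGN KEY (id, ctype) REFERENCES context (id, ctype)
);

-- "finds" can reference context directly, regardless of subtype.
CREATE TABLE finds (
    id         serial PRIMARY KEY,
    context_id integer NOT NULL REFERENCES context (id)
);
```

This guarantees at most one subtype row per context; guaranteeing exactly one typically requires a deferred constraint trigger, or is simply left to the application.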

Maintenance in MYSQL when innodb_file_per_table disabled

Posted: 28 Mar 2013 03:57 PM PDT

I have read your post and I completely understand that OPTIMIZE TABLE, when performed in an environment where innodb_file_per_table is disabled, does not shrink the global ibdata1 tablespace.

But what if I need to perform index maintenance on InnoDB tables with the ANALYZE TABLE command? Will that also grow the shared tablespace?

What other alternatives are there to increase performance or do maintenance on the InnoDB engine when using a single tablespace, without letting ibdata1 grow out of control?

Regards. Matthew

slow queries - set weight to token type in tsquery - postgresql

Posted: 28 Mar 2013 07:57 PM PDT

Postgresql version 9.2.3!

I'm working on a database for mapping chemical names. My main table contains approx. 91 million records and is indexed with GIN. I want to query it with multiple names (I'm testing with 100 entries), which I first put in a query table, creating a tsquery column from the names and indexing it with GiST.

Main table structure:

CREATE TABLE IF NOT EXISTS pubchem_compounds_synonyms_all (
    cid     int NOT NULL references pubchem_compounds_index(cid)
            ON UPDATE CASCADE ON DELETE CASCADE,
    name    VARCHAR(2000) NOT NULL,
    synonym VARCHAR(2000) NOT NULL,
    PRIMARY KEY (cid, name, synonym)
);

CREATE TEMPORARY TABLE synonyms_temp AS SELECT * FROM pubchem_compounds_synonyms_all;

COPY synonyms_temp FROM '...' delimiters E'\t';
INSERT INTO pubchem_compounds_synonyms_all SELECT DISTINCT * FROM synonyms_temp;

I was trying different approaches, thus for testing gin index I created a clone:

CREATE TABLE synonyms_all_gin_tsvcolumn AS  SELECT * FROM pubchem_compounds_synonyms_all;  

then:

ALTER TABLE synonyms_all_gin_tsvcolumn ADD COLUMN tsv_syns TSVECTOR;
UPDATE synonyms_all_gin_tsvcolumn SET tsv_syns = to_tsvector(synonym);
CREATE INDEX gin_tsv_colum_index ON synonyms_all_gin_tsvcolumn USING GIN (tsv_syns);

The query table is:

CREATE TABLE IF NOT EXISTS cmap_names_query (
    name VARCHAR(2000) PRIMARY KEY NOT NULL
);

As with the main table, I fill it with COPY FROM via a temp table and then add the tsquery column:

.....
ALTER TABLE cmap_names_query ADD COLUMN lexemes TSQUERY;
UPDATE cmap_names_query SET lexemes = plainto_tsquery(name);
CREATE INDEX cmap_gist_tsquery_index ON cmap_names_query USING gist (lexemes tsquery_ops);

The query is basically a join between both tables:

SELECT DISTINCT ON (c.lexemes) c.name, s.cid, s.synonym,
       ts_rank(s.tsv_syns, c.lexemes)
FROM synonyms_all_gin_tsvcolumn s
JOIN cmap_names_query c ON c.lexemes @@ s.tsv_syns;

lexemes is the gist indexed tsquery column on my query table, whereas tsv_syns is the gin indexed tsvector column in the main names table, the one with 91 million records.

The query is intended to match names, exact matches where possible, and it works very well for such a large table. Normal names, containing only letters, can be retrieved in microseconds. The problem is when a name contains numbers: to_tsvector and to_tsquery create one token for each number, which all together makes the query for this sort of entry rather slow. Instead of a few milliseconds, they take approx. 1-2 seconds each. I would like to reduce the query time for these to a few milliseconds like the other entries, but I don't know how. I have tested with and without ts_rank and found that ranking adds at most half a second to the total query, if it makes a difference at all, so that's not my problem.

Some samples queries are:

query:

Limit  (cost=32.13..36.14 rows=1 width=92) (actual time=40.001..40.002 rows=1 loops=1)
  Buffers: shared hit=201
  ->  Bitmap Heap Scan on synonyms_all_gin_tsvcolumn  (cost=32.13..100.24 rows=17 width=92) (actual time=39.998..39.998 rows=1 loops=1)
        Recheck Cond: (tsv_syns @@ plainto_tsquery('10-methoxyharmalan'::text))
        Buffers: shared hit=201
        ->  Bitmap Index Scan on gin_tsv_colum_index  (cost=0.00..32.13 rows=17 width=0) (actual time=39.984..39.984 rows=1 loops=1)
              Index Cond: (tsv_syns @@ plainto_tsquery('10-methoxyharmalan'::text))
              Buffers: shared hit=200
Total runtime: 40.037 ms

result: (cid |name|synonym|tsv vector)

5474706 | 6-Methoxyharmalan | 10-Methoxyharmalan | '10':1 'methoxyharmalan':2  

query:

Limit  (cost=32.23..36.23 rows=1 width=92) (actual time=2215.127..2215.128 rows=1 loops=1)
  Buffers: shared hit=10086
  ->  Bitmap Heap Scan on synonyms_all_gin_tsvcolumn  (cost=32.23..148.34 rows=29 width=92) (actual time=2215.125..2215.125 rows=1 loops=1)
        Recheck Cond: (tsv_syns @@ plainto_tsquery('1,4-chrysenequinone'::text))
        Buffers: shared hit=10086
        ->  Bitmap Index Scan on gin_tsv_colum_index  (cost=0.00..32.22 rows=29 width=0) (actual time=2215.108..2215.108 rows=1 loops=1)
              Index Cond: (tsv_syns @@ plainto_tsquery('1,4-chrysenequinone'::text))
              Buffers: shared hit=10085
Total runtime: 2215.182 ms

result:

 180933 | 1,4-Chrysenedione | 1,4-Chrysenequinone | '1':1 '4':2 'chrysenequinon':3  

query:

Limit  (cost=32.00..36.02 rows=1 width=92) (actual time=2852.934..2852.936 rows=1 loops=1)
  Buffers: shared hit=7292
  ->  Bitmap Heap Scan on synonyms_all_gin_tsvcolumn  (cost=32.00..36.02 rows=1 width=92) (actual time=2852.930..2852.930 rows=1 loops=1)
        Recheck Cond: (tsv_syns @@ plainto_tsquery('2-deoxy-D-glucose'::text))
        Buffers: shared hit=7292
        ->  Bitmap Index Scan on gin_tsv_colum_index  (cost=0.00..32.00 rows=1 width=0) (actual time=2852.897..2852.897 rows=121 loops=1)
              Index Cond: (tsv_syns @@ plainto_tsquery('2-deoxy-D-glucose'::text))
              Buffers: shared hit=7291
Total runtime: 2852.992 ms

result:

 40 | 2-deoxy-D-glucose | 2-deoxy-D-glucose | '2':1 'd':4 'deoxi':3 'deoxy-d-glucos':2 'glucos':5  

I wonder what the best way to make these last queries faster would be. I have tried a pre-processing script that removes all the numbers; it speeds the search up to about 3 seconds in total, but I then miss the exact/closest match I was looking for in some cases, so that's no use. Other approaches that came to mind were:

Sol 1: assign priority weight to word tokens in contrast to numeric ones   

I think this could be a good solution for me, but as far as I have seen it cannot be done. Tsvectors/tsqueries can be labelled, but not token types. Or is there a way to label tokens differently within the same tsvector?

Sol 2: create a new configuration and delete numeric mappings with ALTER FULLTEXTMAPPING   or add numbers and characters like - ( ) to the stopword list  

As with the parser approach, it might lead me to wrong matches, although since it keeps the positional information it might perform well. I'm not sure how I should do this, though.
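For Sol 2, the statement is actually spelled ALTER TEXT SEARCH CONFIGURATION ... DROP MAPPING (to the best of my knowledge for the 9.x releases; worth confirming in the docs), applied to a copy of the default configuration. A sketch, with an invented configuration name:

```sql
-- A configuration that simply ignores numeric token types.
CREATE TEXT SEARCH CONFIGURATION chem_names ( COPY = english );

ALTER TEXT SEARCH CONFIGURATION chem_names
    DROP MAPPING FOR int, uint, float, sfloat, numword, numhword, hword_numpart;

-- to_tsvector('chem_names', synonym) would then skip the bare numbers,
-- which is what makes entries like '1,4-chrysenequinone' expensive.
```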

My postgres.conf parameters:

shared_buffers = 24GB
checkpoint_segments = 512
effective_cache_size = 72GB
work_mem = 1GB

System specs: 4 x Intel X7550 (@2.00GHz) (32 cores total), 256GB memory, CentOS 5.6 (64-bit), 18TB local disk (SATA-RAID)
kernel.shmmax = 64GB
kernel.shmall = 4GB

I have tried lower amounts of shared_buffers and effective_cache_size (16GB and 32GB respectively) with no difference in performance from the current settings, so I'm planning to change them back to those values.

I tried a GiST index on the querytree lexemes; it didn't make much difference.

I'm a little bit lost and I would appreciate any ideas or possible solutions to speed up my queries.

Thanks :)

PS: Any recommendations for NoSQL DBs that could improve performance?

how to import the MySQL system "mysql" database

Posted: 28 Mar 2013 04:57 PM PDT

This question might already be answered, but it's almost impossible to google.

When I perform full backup of mysql server using

mysqldump --all-databases  

and then reinstall it from scratch, I want to import the FULL dump of all databases, including "mysql".

I successfully did that by executing

mysql -u -p < dumpfile  

but now, even though all the user records are in mysql.user, these accounts are not working.

I can't even change their password, I always get

ERROR 1133 (42000): Can't find any matching row in the user table

Restarting mysqld is of no help

answer:

It was fixed by executing

MariaDB [mysql]> flush privileges;
Query OK, 0 rows affected (0.26 sec)

now all users work as before

SUPER privilege not defined for master user in Amazon MySQL RDS

Posted: 28 Mar 2013 12:57 PM PDT

I have created one medium instance on Amazon RDS in the Asia Pacific (Singapore) region. I created my master user with a master password, and it is working/connecting fine with Workbench installed on my local PC. When I go to create a function on that instance, it shows me the following error:

ERROR 1418: This function has none of DETERMINISTIC, NO SQL, or READS SQL DATA in its declaration and binary logging is enabled (you might want to use the less safe log_bin_trust_function_creators variable)

On my instance, the variable log_bin_trust_function_creators shows OFF. Now when I try to change the variable using

SET GLOBAL log_bin_trust_function_creators = 1;  

it gives me another error

Error Code: 1227. Access denied; you need (at least one of) the SUPER privilege(s) for this operation

I don't know how to solve this error.

Can anybody help???
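One hedged note: on RDS the master user does not get SUPER, so SET GLOBAL is blocked; the usual route is to change log_bin_trust_function_creators in a custom DB parameter group attached to the instance, either in the console or via the AWS CLI. A sketch (the parameter group name is invented, and the CLI syntax has changed over the years, so verify against current docs):

```shell
# Assumes a custom parameter group "my-params" is already attached
# to the RDS instance; dynamic parameters apply without a reboot.
aws rds modify-db-parameter-group \
    --db-parameter-group-name my-params \
    --parameters "ParameterName=log_bin_trust_function_creators,ParameterValue=1,ApplyMethod=immediate"
```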

MySQL optimization

Posted: 28 Mar 2013 10:57 AM PDT

I'm trying to optimize a MySQL Server to be able to serve as many connections as it can.

The server is in AmazonAWS RDS and has currently the following resources:

--7.5 GB memory, 4 ECUs (2 virtual cores with 2 ECUs each), 64-bit platform, High I/O Capacity

I have run some stress tests to check how many connections it can serve, but no matter what changes I make to the configuration, MySQL cannot serve more than 800. The rest of the connections are dropped. If someone can help I would be grateful.

These are the variables in the configuration:

connect_timeout=10
default_storage_engine=InnoDB
innodb_adaptive_flushing=ON
innodb_adaptive_hash_index=ON
innodb_additional_mem_pool_size=2097152
innodb_autoextend_increment=8
innodb_autoinc_lock_mode=1
innodb_buffer_pool_instances=1
innodb_buffer_pool_size=5882511360
innodb_change_buffering=all
innodb_checksums=ON
innodb_commit_concurrency=0
innodb_concurrency_tickets=500
innodb_data_file_path=ibdata1:10M:autoextend
innodb_data_home_dir=/rdsdbdata/db/innodb
innodb_doublewrite=ON
innodb_fast_shutdown=1
innodb_file_format=Antelope
innodb_file_format_check=ON
innodb_file_format_max=Antelope
innodb_file_per_table=ON
innodb_flush_log_at_trx_commit=1
innodb_flush_method=O_DIRECT
innodb_force_load_corrupted=OFF
innodb_force_recovery=0
innodb_io_capacity=200
innodb_large_prefix=OFF
innodb_locks_unsafe_for_binlog=OFF
innodb_lock_wait_timeout=50
innodb_log_buffer_size=8388608
innodb_log_files_in_group=2
innodb_log_file_size=134217728
innodb_log_group_home_dir=/rdsdbdata/log/innodb
innodb_max_dirty_pages_pct=75
innodb_max_purge_lag=0
innodb_mirrored_log_groups=1
innodb_old_blocks_pct=37
innodb_old_blocks_time=0
innodb_open_files=300
innodb_purge_batch_size=20
innodb_purge_threads=0
innodb_random_read_ahead=OFF
innodb_read_ahead_threshold=56
innodb_read_io_threads=4
innodb_replication_delay=0
innodb_rollback_on_timeout=OFF
innodb_rollback_segments=128
innodb_spin_wait_delay=6
innodb_stats_method=nulls_equal
innodb_stats_on_metadata=ON
innodb_stats_sample_pages=8
innodb_strict_mode=OFF
innodb_support_xa=ON
innodb_sync_spin_loops=30
innodb_table_locks=ON
innodb_thread_concurrency=0
innodb_thread_sleep_delay=10000
innodb_use_native_aio=ON
innodb_use_sys_malloc=ON
innodb_version=1.1.8
innodb_write_io_threads=4
lock_wait_timeout=31536000
lower_case_table_names=1
low_priority_updates=OFF
max_allowed_packet=16777216
max_binlog_cache_size=18446744073709547520
max_binlog_size=134217728
max_binlog_stmt_cache_size=18446744073709547520
max_connections=2000
max_connect_errors=10
max_delayed_threads=20
max_error_count=64
max_heap_table_size=16777216
max_insert_delayed_threads=20
max_join_size=18446744073709551615
max_length_for_sort_data=1024
max_long_data_size=16777216
max_prepared_stmt_count=16382
max_relay_log_size=0
max_seeks_for_key=18446744073709551615
max_sort_length=1024
max_sp_recursion_depth=0
max_tmp_tables=32
max_user_connections=0
max_write_lock_count=18446744073709551615
metadata_locks_cache_size=1024
min_examined_row_limit=0
multi_range_count=256
open_files_limit=65535
range_alloc_block_size=4096
read_buffer_size=262144
read_only=OFF
read_rnd_buffer_size=524288
skip_external_locking=ON
skip_name_resolve=OFF
skip_networking=OFF
skip_show_database=OFF
sort_buffer_size=2097152
storage_engine=InnoDB
stored_program_cache=256
sync_binlog=0
sync_frm=ON
sync_master_info=0
sync_relay_log=0
sync_relay_log_info=0
table_definition_cache=400
table_open_cache=2048
thread_cache_size=10240
thread_concurrency=10
thread_handling=one-thread-per-connection
thread_stack=262144
tmp_table_size=16777216
transaction_alloc_block_size=8192
transaction_prealloc_size=4096
wait_timeout=28800
warning_count=0

How to import table's data in MySql from Sql Server?

Posted: 28 Mar 2013 11:57 AM PDT

I am trying to export a table from SQL Server 2008 R2 to MySQL 5.5. For this I am using the SQL Server Import and Export Wizard of SQL Server 2008, but it is giving an error.

This error may be occurring because the table in SQL Server has a column with data type image, and in the MySQL table this column's data type is LONGBLOB.

Please provide your expert answers. If it's not possible through the SQL Server Import and Export Wizard, then please suggest any other method for data transfer.

Is it possible to pipe the result of a mysqldump straight to rsync as the source argument?

Posted: 28 Mar 2013 02:57 PM PDT

Is it possible to pipe the result of a mysqldump straight to rsync as the source argument?

Conceptually, I was thinking something like:

mysqldump -u root -p database_to_backup > db_backup_file.sql | sudo rsync -avz db_backup_file.sql myuser@mysite.com:/var/www/db_backup_file.sql  

I've seen people pipe the result to mysql for their one liner backup solution, but I was curious if it was possible with rsync. You know--- cause rsync is magic :)

Thanks for your time!
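rsync needs a file (or an rsync daemon module) as its source, so it can't read the dump from stdin; also, in the one-liner above the `>` redirection would write the dump to the local file and leave nothing for the pipe. The usual streaming alternative pipes the dump over ssh instead. A sketch reusing the question's placeholder host:

```shell
# Stream the dump straight to the remote host, with no intermediate local file:
mysqldump -u root -p database_to_backup | gzip | \
  ssh myuser@mysite.com 'cat > /var/www/db_backup_file.sql.gz'

# Or keep the local file and let rsync move it in a second step, which
# preserves rsync's delta-transfer benefit for repeated backups:
mysqldump -u root -p database_to_backup > db_backup_file.sql
rsync -avz db_backup_file.sql myuser@mysite.com:/var/www/db_backup_file.sql
```

Note that streaming through ssh loses rsync's delta transfer, since there is no previous copy to diff against on the wire.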

Need help with long running query

Posted: 28 Mar 2013 01:57 PM PDT

UPDATE: Adding a clustered index to each work table covering all fields reduced the query run times to sub-second. I consider this question closed.

Thanks for taking the time to read this post. It's a bit late, so hopefully this is at least borderline coherent.

SQL Server 2008 on Windows Server 2008.

We've got a 3rd party budgeting application that's part of our new ERP system. Users are running a forecasting process. From what I can tell, there are a set of work tables that are populated. Values from these tables are then inserted into a results table.

The process has been getting slower and slower over the past few weeks. A 2-hour run has now become a 24-hour run that doesn't end. I've got sp_whoisactive (an augmented sp_who2 of sorts) running, and I see CPU and reads going through the roof: 10 billion logical reads during one of today's runs, for a single query. The entire DB is cached in memory, with only a few physical reads. It's been running for 20 minutes now and is at 500 million reads.

I dug into the INSERT/SELECT statement and started running subsections of it, removing joins and implementing the index recommendations from the showplan. That got everything humming, up to a certain point in the process. The work tables were all heaps. Now it's stuck again on a similar INSERT/SELECT, and I can't capture the parameters it's currently running with, since my trace only captures Batch Completed events (at least I don't know a way to get them otherwise). Wait_Info from sp_whoisactive shows nothing.

So tomorrow morning we are going to run it again and I'll capture those parameters. Hopefully that will shed some light.

Below is my 'test harness' for the query. I grab a batch ID from one of the work tables and then use it to build parameters for the query. The app servers run Java using the Microsoft JDBC driver, which wraps everything in sp_prepare/sp_execute calls, complicating things a bit.

The application vendor has supplied us with a script to insert the same dummy batchID a few thousand times into these temp tables, generate stats and then set them to NORECOMPUTE. However, this wasn't helping and I don't really understand how it would.

We're using the Hallengren maintenance scripts for index and stats maintenance. No other systems are having performance problems running on this server at this time. I ran a stock Index and Stats maintenance plan just to be sure I hadn't misconfigured the Hallengren scripts.

I'm looking for some additional guidance on how to see what this query is actually working on while it is churning away.
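For watching a statement mid-flight, the live DMVs can be polled alongside sp_whoisactive; given VIEW SERVER STATE permission, something like the following (the server name and session id 53 are hypothetical) shows the current wait, accumulated work, and the text of the running batch:

```shell
# Poll the live request for the suspect session (replace 53 with the real spid).
sqlcmd -S myserver -E -Q "
SELECT r.session_id, r.status, r.wait_type, r.wait_time,
       r.cpu_time, r.logical_reads, t.text
FROM   sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE  r.session_id = 53;"
```

sys.dm_exec_query_plan(r.plan_handle) can likewise pull the plan of the in-flight request, which avoids having to catch the parameters in a trace.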

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED  -- All data is uncommitted at this point.

DECLARE @batchid VARCHAR(8000)

SELECT  @batchid = BF_BATC_STUS_EXEC_ID
FROM    dbo.BF_ALOC_WORK_ACTY

SELECT  @batchid

DECLARE @P0 VARCHAR(8000)
DECLARE @P1 DECIMAL(38, 6)
DECLARE @P2 VARCHAR(8000)
DECLARE @P3 VARCHAR(8000)
DECLARE @P4 VARCHAR(8000)
DECLARE @P5 VARCHAR(8000)
DECLARE @P6 VARCHAR(8000)
DECLARE @P7 VARCHAR(8000)
DECLARE @P8 VARCHAR(8000)
DECLARE @P9 VARCHAR(8000)
DECLARE @P10 VARCHAR(8000)
DECLARE @P11 VARCHAR(8000)
DECLARE @P12 VARCHAR(8000)
DECLARE @P13 VARCHAR(8000)
DECLARE @P14 VARCHAR(8000)
DECLARE @P15 VARCHAR(8000)
DECLARE @P16 VARCHAR(8000)
DECLARE @P17 VARCHAR(8000)
DECLARE @P18 VARCHAR(8000)
DECLARE @P19 VARCHAR(8000)

SET @P0 = 'S'
SET @P1 = -0.084125
SET @P2 = @batchid
SET @P3 = @batchid
SET @P4 = @batchid
SET @P5 = @batchid
SET @P6 = @batchid
SET @P7 = @batchid
SET @P8 = @batchid
SET @P9 = @batchid
SET @P10 = @batchid
SET @P11 = @batchid
SET @P12 = @batchid
SET @P13 = @batchid
SET @P14 = @batchid
SET @P15 = @batchid
SET @P16 = @batchid
SET @P17 = @batchid
SET @P18 = 'FINAL_BUD_TOT'
SET @P19 = 'FINAL_BUD_TOT'

INSERT INTO BF_ALOC_RSLT
        ( ACTY_CD, BDOB_CD, FUND_CD, ORGN_CD, PROG_CD, PROJ_CD,
          USER_DM1_CD, USER_DM2_CD, USER_DM3_CD, USER_DM4_CD,
          USER_DM5_CD, USER_DM6_CD, USER_DM7_CD, USER_DM8_CD,
          TYP, DATA, ALOC_LINE_GUID, BF_BATC_STUS_EXEC_ID )
SELECT  BF_ALOC_WORK_ACTY.RSLT,
        BF_ALOC_WORK_BDOB.RSLT,
        BF_ALOC_WORK_FUND.RSLT,
        BF_ALOC_WORK_ORGN.RSLT,
        BF_ALOC_WORK_PROG.RSLT,
        BF_ALOC_WORK_PROJ.RSLT,
        BF_ALOC_WORK_USER_DM1.RSLT,
        BF_ALOC_WORK_USER_DM2.RSLT,
        BF_ALOC_WORK_USER_DM3.RSLT,
        BF_ALOC_WORK_USER_DM4.RSLT,
        BF_ALOC_WORK_USER_DM5.RSLT,
        BF_ALOC_WORK_USER_DM6.RSLT,
        BF_ALOC_WORK_USER_DM7.RSLT,
        BF_ALOC_WORK_USER_DM8.RSLT,
        @P0,
        ROUND(SUM(BF_DATA.DATA) * @P1, 2),
        @P2,
        @P3
FROM    BF_ALOC_WORK_ACTY,
        BF_ALOC_WORK_BDOB,
        BF_ALOC_WORK_FUND,
        BF_ALOC_WORK_ORGN,
        BF_ALOC_WORK_PROG,
        BF_ALOC_WORK_PROJ,
        BF_ALOC_WORK_USER_DM1,
        BF_ALOC_WORK_USER_DM2,
        BF_ALOC_WORK_USER_DM3,
        BF_ALOC_WORK_USER_DM4,
        BF_ALOC_WORK_USER_DM5,
        BF_ALOC_WORK_USER_DM6,
        BF_ALOC_WORK_USER_DM7,
        BF_ALOC_WORK_USER_DM8,
        BF_DATA
WHERE   ( BF_DATA.BF_ACTY_CD = BF_ALOC_WORK_ACTY.SRC
          AND BF_ALOC_WORK_ACTY.BF_BATC_STUS_EXEC_ID = @P4 )
  AND   ( BF_DATA.BF_BDOB_CD = BF_ALOC_WORK_BDOB.SRC
          AND BF_ALOC_WORK_BDOB.BF_BATC_STUS_EXEC_ID = @P5 )
  AND   ( BF_DATA.BF_FUND_CD = BF_ALOC_WORK_FUND.SRC
          AND BF_ALOC_WORK_FUND.BF_BATC_STUS_EXEC_ID = @P6 )
  AND   ( BF_DATA.BF_ORGN_CD = BF_ALOC_WORK_ORGN.SRC
          AND BF_ALOC_WORK_ORGN.BF_BATC_STUS_EXEC_ID = @P7 )
  AND   ( BF_DATA.BF_PROG_CD = BF_ALOC_WORK_PROG.SRC
          AND BF_ALOC_WORK_PROG.BF_BATC_STUS_EXEC_ID = @P8 )
  AND   ( BF_DATA.BF_PROJ_CD = BF_ALOC_WORK_PROJ.SRC
          AND BF_ALOC_WORK_PROJ.BF_BATC_STUS_EXEC_ID = @P9 )
  AND   ( BF_DATA.BF_USER_DM1_CD = BF_ALOC_WORK_USER_DM1.SRC
          AND BF_ALOC_WORK_USER_DM1.BF_BATC_STUS_EXEC_ID = @P10 )
  AND   ( BF_DATA.BF_USER_DM2_CD = BF_ALOC_WORK_USER_DM2.SRC
          AND BF_ALOC_WORK_USER_DM2.BF_BATC_STUS_EXEC_ID = @P11 )
  AND   ( BF_DATA.BF_USER_DM3_CD = BF_ALOC_WORK_USER_DM3.SRC
          AND BF_ALOC_WORK_USER_DM3.BF_BATC_STUS_EXEC_ID = @P12 )
  AND   ( BF_DATA.BF_USER_DM4_CD = BF_ALOC_WORK_USER_DM4.SRC
          AND BF_ALOC_WORK_USER_DM4.BF_BATC_STUS_EXEC_ID = @P13 )
  AND   ( BF_DATA.BF_USER_DM5_CD = BF_ALOC_WORK_USER_DM5.SRC
          AND BF_ALOC_WORK_USER_DM5.BF_BATC_STUS_EXEC_ID = @P14 )
  AND   ( BF_DATA.BF_USER_DM6_CD = BF_ALOC_WORK_USER_DM6.SRC
          AND BF_ALOC_WORK_USER_DM6.BF_BATC_STUS_EXEC_ID = @P15 )
  AND   ( BF_DATA.BF_USER_DM7_CD = BF_ALOC_WORK_USER_DM7.SRC
          AND BF_ALOC_WORK_USER_DM7.BF_BATC_STUS_EXEC_ID = @P16 )
  AND   ( BF_DATA.BF_USER_DM8_CD = BF_ALOC_WORK_USER_DM8.SRC
          AND BF_ALOC_WORK_USER_DM8.BF_BATC_STUS_EXEC_ID = @P17 )
  AND   ( BF_DATA.BF_TM_PERD_CD = @P18
          OR BF_DATA.BF_TM_PERD_CD IN
             ( SELECT BF_TM_PERD_RLUP.BF_TM_PERD_CHLD_CD
               FROM   BF_TM_PERD_RLUP
               WHERE  BF_TM_PERD_RLUP.BF_TM_PERD_PARN_CD = @P19 ) )
GROUP BY BF_ALOC_WORK_ACTY.RSLT,
         BF_ALOC_WORK_BDOB.RSLT,
         BF_ALOC_WORK_FUND.RSLT,
         BF_ALOC_WORK_ORGN.RSLT,
         BF_ALOC_WORK_PROG.RSLT,
         BF_ALOC_WORK_PROJ.RSLT,
         BF_ALOC_WORK_USER_DM1.RSLT,
         BF_ALOC_WORK_USER_DM2.RSLT,
         BF_ALOC_WORK_USER_DM3.RSLT,
         BF_ALOC_WORK_USER_DM4.RSLT,
         BF_ALOC_WORK_USER_DM5.RSLT,
         BF_ALOC_WORK_USER_DM6.RSLT,
         BF_ALOC_WORK_USER_DM7.RSLT,
         BF_ALOC_WORK_USER_DM8.RSLT

Edits:


SHOWPLAN Text

  |--Table Insert(OBJECT:([PB].[dbo].[BF_ALOC_RSLT]), OBJECT:([PB].[dbo].[BF_ALOC_RSLT].[XIE1_ALOC_RSLT]), SET:([PB].[dbo].[BF_ALOC_RSLT].[ACTY_CD] = [PB].[dbo].[BF_ALOC_WORK_ACTY].[RSLT],[PB].[dbo].[BF_ALOC_RSLT].[BDOB_CD] = [PB].[dbo].[BF_ALOC_WORK_BDOB].[RSLT],[PB].[dbo].[BF_ALOC_RSLT].[FUND_CD] = [PB].[dbo].[BF_ALOC_WORK_FUND].[RSLT],[PB].[dbo].[BF_ALOC_RSLT].[ORGN_CD] = [PB].[dbo].[BF_ALOC_WORK_ORGN].[RSLT],[PB].[dbo].[BF_ALOC_RSLT].[PROG_CD] = [PB].[dbo].[BF_ALOC_WORK_PROG].[RSLT],[PB].[dbo].[BF_ALOC_RSLT].[PROJ_CD] = [PB].[dbo].[BF_ALOC_WORK_PROJ].[RSLT],[PB].[dbo].[BF_ALOC_RSLT].[USER_DM1_CD] = [PB].[dbo].[BF_ALOC_WORK_USER_DM1].[RSLT],[PB].[dbo].[BF_ALOC_RSLT].[USER_DM2_CD] = [PB].[dbo].[BF_ALOC_WORK_USER_DM2].[RSLT],[PB].[dbo].[BF_ALOC_RSLT].[USER_DM3_CD] = [PB].[dbo].[BF_ALOC_WORK_USER_DM3].[RSLT],[PB].[dbo].[BF_ALOC_RSLT].[USER_DM4_CD] = [PB].[dbo].[BF_ALOC_WORK_USER_DM4].[RSLT],[PB].[dbo].[BF_ALOC_RSLT].[USER_DM5_CD] = [PB].[dbo].[BF_ALOC_WORK_USER_DM5].[RSLT],[PB].[dbo].[BF_ALOC_RSLT].[USER_DM6_CD] = [PB].[dbo].[BF_ALOC_WORK_USER_DM6].[RSLT],[PB].[dbo].[BF_ALOC_RSLT].[USER_DM7_CD] = [PB].[dbo].[BF_ALOC_WORK_USER_DM7].[RSLT],[PB].[dbo].[BF_ALOC_RSLT].[USER_DM8_CD] = [PB].[dbo].[BF_ALOC_WORK_USER_DM8].[RSLT],[PB].[dbo].[BF_ALOC_RSLT].[TYP] = RaiseIfNullInsert([Expr1067]),[PB].[dbo].[BF_ALOC_RSLT].[DATA] = RaiseIfNullInsert([Expr1068]),[PB].[dbo].[BF_ALOC_RSLT].[ALOC_LINE_GUID] = RaiseIfNullInsert([Expr1069]),[PB].[dbo].[BF_ALOC_RSLT].[BF_BATC_STUS_EXEC_ID] = RaiseIfNullInsert([Expr1070]),[PB].[dbo].[BF_ALOC_RSLT].[DIV_CD] = NULL,[PB].[dbo].[BF_ALOC_RSLT].[PATN_CD] = NULL))     |--Compute Scalar(DEFINE:([Expr1067]=CONVERT_IMPLICIT(char(1),[@P0],0), [Expr1068]=CONVERT_IMPLICIT(numeric(27,6),round([Expr1066]*[@P1],(2)),0), [Expr1069]=CONVERT_IMPLICIT(varchar(32),[@P2],0), [Expr1070]=CONVERT_IMPLICIT(varchar(32),[@P3],0)))          |--Top(ROWCOUNT est 0)               |--Stream Aggregate(GROUP BY:([PB].[dbo].[BF_ALOC_WORK_ACTY].[RSLT], 
[PB].[dbo].[BF_ALOC_WORK_BDOB].[RSLT], [PB].[dbo].[BF_ALOC_WORK_FUND].[RSLT], [PB].[dbo].[BF_ALOC_WORK_ORGN].[RSLT], [PB].[dbo].[BF_ALOC_WORK_PROG].[RSLT], [PB].[dbo].[BF_ALOC_WORK_PROJ].[RSLT], [PB].[dbo].[BF_ALOC_WORK_USER_DM1].[RSLT], [PB].[dbo].[BF_ALOC_WORK_USER_DM2].[RSLT], [PB].[dbo].[BF_ALOC_WORK_USER_DM3].[RSLT], [PB].[dbo].[BF_ALOC_WORK_USER_DM4].[RSLT], [PB].[dbo].[BF_ALOC_WORK_USER_DM5].[RSLT], [PB].[dbo].[BF_ALOC_WORK_USER_DM6].[RSLT], [PB].[dbo].[BF_ALOC_WORK_USER_DM7].[RSLT], [PB].[dbo].[BF_ALOC_WORK_USER_DM8].[RSLT]) DEFINE:([Expr1066]=SUM([PB].[dbo].[BF_DATA].[DATA])))                    |--Sort(ORDER BY:([PB].[dbo].[BF_ALOC_WORK_ACTY].[RSLT] ASC, [PB].[dbo].[BF_ALOC_WORK_BDOB].[RSLT] ASC, [PB].[dbo].[BF_ALOC_WORK_FUND].[RSLT] ASC, [PB].[dbo].[BF_ALOC_WORK_ORGN].[RSLT] ASC, [PB].[dbo].[BF_ALOC_WORK_PROG].[RSLT] ASC, [PB].[dbo].[BF_ALOC_WORK_PROJ].[RSLT] ASC, [PB].[dbo].[BF_ALOC_WORK_USER_DM1].[RSLT] ASC, [PB].[dbo].[BF_ALOC_WORK_USER_DM2].[RSLT] ASC, [PB].[dbo].[BF_ALOC_WORK_USER_DM3].[RSLT] ASC, [PB].[dbo].[BF_ALOC_WORK_USER_DM4].[RSLT] ASC, [PB].[dbo].[BF_ALOC_WORK_USER_DM5].[RSLT] ASC, [PB].[dbo].[BF_ALOC_WORK_USER_DM6].[RSLT] ASC, [PB].[dbo].[BF_ALOC_WORK_USER_DM7].[RSLT] ASC, [PB].[dbo].[BF_ALOC_WORK_USER_DM8].[RSLT] ASC))                         |--Hash Match(Inner Join, HASH:([PB].[dbo].[BF_DATA].[BF_ACTY_CD])=([PB].[dbo].[BF_ALOC_WORK_ACTY].[SRC]), RESIDUAL:([PB].[dbo].[BF_DATA].[BF_ACTY_CD]=[PB].[dbo].[BF_ALOC_WORK_ACTY].[SRC]))                              |--Hash Match(Inner Join, HASH:([PB].[dbo].[BF_DATA].[BF_USER_DM2_CD])=([PB].[dbo].[BF_ALOC_WORK_USER_DM2].[SRC]), RESIDUAL:([PB].[dbo].[BF_DATA].[BF_USER_DM2_CD]=[PB].[dbo].[BF_ALOC_WORK_USER_DM2].[SRC]))                              |    |--Hash Match(Inner Join, HASH:([PB].[dbo].[BF_ALOC_WORK_BDOB].[SRC])=([PB].[dbo].[BF_DATA].[BF_BDOB_CD]), RESIDUAL:([PB].[dbo].[BF_DATA].[BF_BDOB_CD]=[PB].[dbo].[BF_ALOC_WORK_BDOB].[SRC]))                              |    |    |--Nested Loops(Inner 
Join, OUTER REFERENCES:([Bmk1008]))                              |    |    |    |--Index Seek(OBJECT:([PB].[dbo].[BF_ALOC_WORK_BDOB].[IX_COM_BF_ALOC_WORK_BDOB]), SEEK:([PB].[dbo].[BF_ALOC_WORK_BDOB].[BF_BATC_STUS_EXEC_ID]=[@P5]) ORDERED FORWARD)                              |    |    |    |--RID Lookup(OBJECT:([PB].[dbo].[BF_ALOC_WORK_BDOB]), SEEK:([Bmk1008]=[Bmk1008]) LOOKUP ORDERED FORWARD)                              |    |    |--Hash Match(Inner Join, HASH:([PB].[dbo].[BF_ALOC_WORK_USER_DM3].[SRC])=([PB].[dbo].[BF_DATA].[BF_USER_DM3_CD]), RESIDUAL:([PB].[dbo].[BF_DATA].[BF_USER_DM3_CD]=[PB].[dbo].[BF_ALOC_WORK_USER_DM3].[SRC]))                              |    |         |--Nested Loops(Inner Join, OUTER REFERENCES:([Bmk1036]))                              |    |         |    |--Index Seek(OBJECT:([PB].[dbo].[BF_ALOC_WORK_USER_DM3].[IX_COM_BF_ALOC_WORK_USER_DM3_EXEC_ID]), SEEK:([PB].[dbo].[BF_ALOC_WORK_USER_DM3].[BF_BATC_STUS_EXEC_ID]=[@P12]) ORDERED FORWARD)                              |    |         |    |--RID Lookup(OBJECT:([PB].[dbo].[BF_ALOC_WORK_USER_DM3]), SEEK:([Bmk1036]=[Bmk1036]) LOOKUP ORDERED FORWARD)                              |    |         |--Hash Match(Inner Join, HASH:([PB].[dbo].[BF_ALOC_WORK_USER_DM1].[SRC])=([PB].[dbo].[BF_DATA].[BF_USER_DM1_CD]), RESIDUAL:([PB].[dbo].[BF_DATA].[BF_USER_DM1_CD]=[PB].[dbo].[BF_ALOC_WORK_USER_DM1].[SRC]))                              |    |              |--Nested Loops(Inner Join, OUTER REFERENCES:([Bmk1028]))                              |    |              |    |--Index Seek(OBJECT:([PB].[dbo].[BF_ALOC_WORK_USER_DM1].[IX_COM_BF_ALOC_WORK_USER_DM1_EXEC_ID]), SEEK:([PB].[dbo].[BF_ALOC_WORK_USER_DM1].[BF_BATC_STUS_EXEC_ID]=[@P10]) ORDERED FORWARD)                              |    |              |    |--RID Lookup(OBJECT:([PB].[dbo].[BF_ALOC_WORK_USER_DM1]), SEEK:([Bmk1028]=[Bmk1028]) LOOKUP ORDERED FORWARD)                              |    |              |--Hash Match(Inner Join, 
HASH:([PB].[dbo].[BF_ALOC_WORK_USER_DM8].[SRC])=([PB].[dbo].[BF_DATA].[BF_USER_DM8_CD]), RESIDUAL:([PB].[dbo].[BF_DATA].[BF_USER_DM8_CD]=[PB].[dbo].[BF_ALOC_WORK_USER_DM8].[SRC]))                              |    |                   |--Table Scan(OBJECT:([PB].[dbo].[BF_ALOC_WORK_USER_DM8]), WHERE:([PB].[dbo].[BF_ALOC_WORK_USER_DM8].[BF_BATC_STUS_EXEC_ID]=[@P17]))                              |    |                   |--Hash Match(Inner Join, HASH:([PB].[dbo].[BF_ALOC_WORK_FUND].[SRC])=([PB].[dbo].[BF_DATA].[BF_FUND_CD]), RESIDUAL:([PB].[dbo].[BF_DATA].[BF_FUND_CD]=[PB].[dbo].[BF_ALOC_WORK_FUND].[SRC]))                              |    |                        |--Table Scan(OBJECT:([PB].[dbo].[BF_ALOC_WORK_FUND]), WHERE:([PB].[dbo].[BF_ALOC_WORK_FUND].[BF_BATC_STUS_EXEC_ID]=[@P6]))                              |    |                        |--Hash Match(Inner Join, HASH:([PB].[dbo].[BF_ALOC_WORK_USER_DM6].[SRC])=([PB].[dbo].[BF_DATA].[BF_USER_DM6_CD]), RESIDUAL:([PB].[dbo].[BF_DATA].[BF_USER_DM6_CD]=[PB].[dbo].[BF_ALOC_WORK_USER_DM6].[SRC]))                              |    |                             |--Table Scan(OBJECT:([PB].[dbo].[BF_ALOC_WORK_USER_DM6]), WHERE:([PB].[dbo].[BF_ALOC_WORK_USER_DM6].[BF_BATC_STUS_EXEC_ID]=[@P15]))                              |    |                             |--Hash Match(Inner Join, HASH:([PB].[dbo].[BF_ALOC_WORK_USER_DM5].[SRC])=([PB].[dbo].[BF_DATA].[BF_USER_DM5_CD]), RESIDUAL:([PB].[dbo].[BF_DATA].[BF_USER_DM5_CD]=[PB].[dbo].[BF_ALOC_WORK_USER_DM5].[SRC]))                              |    |                                  |--Table Scan(OBJECT:([PB].[dbo].[BF_ALOC_WORK_USER_DM5]), WHERE:([PB].[dbo].[BF_ALOC_WORK_USER_DM5].[BF_BATC_STUS_EXEC_ID]=[@P14]))                              |    |                                  |--Hash Match(Inner Join, HASH:([PB].[dbo].[BF_ALOC_WORK_PROJ].[SRC])=([PB].[dbo].[BF_DATA].[BF_PROJ_CD]), RESIDUAL:([PB].[dbo].[BF_DATA].[BF_PROJ_CD]=[PB].[dbo].[BF_ALOC_WORK_PROJ].[SRC]))              
                |    |                                       |--Table Scan(OBJECT:([PB].[dbo].[BF_ALOC_WORK_PROJ]), WHERE:([PB].[dbo].[BF_ALOC_WORK_PROJ].[BF_BATC_STUS_EXEC_ID]=[@P9]))                              |    |                                       |--Hash Match(Inner Join, HASH:([PB].[dbo].[BF_ALOC_WORK_USER_DM4].[SRC])=([PB].[dbo].[BF_DATA].[BF_USER_DM4_CD]), RESIDUAL:([PB].[dbo].[BF_DATA].[BF_USER_DM4_CD]=[PB].[dbo].[BF_ALOC_WORK_USER_DM4].[SRC]))                              |    |                                            |--Table Scan(OBJECT:([PB].[dbo].[BF_ALOC_WORK_USER_DM4]), WHERE:([PB].[dbo].[BF_ALOC_WORK_USER_DM4].[BF_BATC_STUS_EXEC_ID]=[@P13]))                              |    |                                            |--Hash Match(Inner Join, HASH:([PB].[dbo].[BF_ALOC_WORK_USER_DM7].[SRC])=([PB].[dbo].[BF_DATA].[BF_USER_DM7_CD]), RESIDUAL:([PB].[dbo].[BF_DATA].[BF_USER_DM7_CD]=[PB].[dbo].[BF_ALOC_WORK_USER_DM7].[SRC]))                              |    |                                                 |--Table Scan(OBJECT:([PB].[dbo].[BF_ALOC_WORK_USER_DM7]), WHERE:([PB].[dbo].[BF_ALOC_WORK_USER_DM7].[BF_BATC_STUS_EXEC_ID]=[@P16]))                              |    |                                                 |--Hash Match(Inner Join, HASH:([PB].[dbo].[BF_ALOC_WORK_PROG].[SRC])=([PB].[dbo].[BF_DATA].[BF_PROG_CD]), RESIDUAL:([PB].[dbo].[BF_DATA].[BF_PROG_CD]=[PB].[dbo].[BF_ALOC_WORK_PROG].[SRC]))                              |    |                                                      |--Table Scan(OBJECT:([PB].[dbo].[BF_ALOC_WORK_PROG]), WHERE:([PB].[dbo].[BF_ALOC_WORK_PROG].[BF_BATC_STUS_EXEC_ID]=[@P8]))                              |    |                                                      |--Nested Loops(Inner Join, OUTER REFERENCES:([PB].[dbo].[BF_ALOC_WORK_ORGN].[SRC]))                              |    |                                                           |--Table Scan(OBJECT:([PB].[dbo].[BF_ALOC_WORK_ORGN]), 
WHERE:([PB].[dbo].[BF_ALOC_WORK_ORGN].[BF_BATC_STUS_EXEC_ID]=[@P7]))                              |    |                                                           |--Sort(DISTINCT ORDER BY:([PB].[dbo].[BF_DATA].[BF_ACTY_CD] ASC, [PB].[dbo].[BF_DATA].[BF_FUND_CD] ASC, [PB].[dbo].[BF_DATA].[BF_PROG_CD] ASC, [PB].[dbo].[BF_DATA].[BF_PROJ_CD] ASC, [PB].[dbo].[BF_DATA].[BF_USER_DM1_CD] ASC, [PB].[dbo].[BF_DATA].[BF_USER_DM2_CD] ASC, [PB].[dbo].[BF_DATA].[BF_USER_DM3_CD] ASC, [PB].[dbo].[BF_DATA].[BF_USER_DM4_CD] ASC, [PB].[dbo].[BF_DATA].[BF_USER_DM5_CD] ASC, [PB].[dbo].[BF_DATA].[BF_USER_DM6_CD] ASC, [PB].[dbo].[BF_DATA].[BF_USER_DM7_CD] ASC, [PB].[dbo].[BF_DATA].[BF_USER_DM8_CD] ASC, [PB].[dbo].[BF_DATA].[BF_BDOB_CD] ASC, [PB].[dbo].[BF_DATA].[BF_TM_PERD_CD] ASC))                              |    |                                                                |--Concatenation                              |    |                                                                     |--Nested Loops(Inner Join, OUTER REFERENCES:([PB].[dbo].[BF_DATA].[BF_ORGN_CD], [PB].[dbo].[BF_DATA].[BF_ACTY_CD], [PB].[dbo].[BF_DATA].[BF_PROG_CD], [PB].[dbo].[BF_DATA].[BF_PROJ_CD], [PB].[dbo].[BF_DATA].[BF_BDOB_CD], [PB].[dbo].[BF_DATA].[BF_USER_DM1_CD], [PB].[dbo].[BF_DATA].[BF_USER_DM2_CD], [PB].[dbo].[BF_DATA].[BF_USER_DM3_CD], [PB].[dbo].[BF_DATA].[BF_USER_DM4_CD], [PB].[dbo].[BF_DATA].[BF_USER_DM5_CD], [PB].[dbo].[BF_DATA].[BF_USER_DM6_CD], [PB].[dbo].[BF_DATA].[BF_USER_DM7_CD], [PB].[dbo].[BF_DATA].[BF_USER_DM8_CD], [PB].[dbo].[BF_DATA].[BF_FUND_CD], [PB].[dbo].[BF_DATA].[BF_TM_PERD_CD]))                              |    |                                                                     |    |--Index Seek(OBJECT:([PB].[dbo].[BF_DATA].[XIF18_DATA]), SEEK:([PB].[dbo].[BF_DATA].[BF_TM_PERD_CD]=[@P18]),  WHERE:([PB].[dbo].[BF_DATA].[BF_ORGN_CD]=[PB].[dbo].[BF_ALOC_WORK_ORGN].[SRC]) ORDERED FORWARD)                              |    |                                                  
                   |    |--Clustered Index Seek(OBJECT:([PB].[dbo].[BF_DATA].[XPK_DATA]), SEEK:([PB].[dbo].[BF_DATA].[BF_ACTY_CD]=[PB].[dbo].[BF_DATA].[BF_ACTY_CD] AND [PB].[dbo].[BF_DATA].[BF_FUND_CD]=[PB].[dbo].[BF_DATA].[BF_FUND_CD] AND [PB].[dbo].[BF_DATA].[BF_ORGN_CD]=[PB].[dbo].[BF_DATA].[BF_ORGN_CD] AND [PB].[dbo].[BF_DATA].[BF_PROG_CD]=[PB].[dbo].[BF_DATA].[BF_PROG_CD] AND [PB].[dbo].[BF_DATA].[BF_PROJ_CD]=[PB].[dbo].[BF_DATA].[BF_PROJ_CD] AND [PB].[dbo].[BF_DATA].[BF_USER_DM1_CD]=[PB].[dbo].[BF_DATA].[BF_USER_DM1_CD] AND [PB].[dbo].[BF_DATA].[BF_USER_DM2_CD]=[PB].[dbo].[BF_DATA].[BF_USER_DM2_CD] AND [PB].[dbo].[BF_DATA].[BF_USER_DM3_CD]=[PB].[dbo].[BF_DATA].[BF_USER_DM3_CD] AND [PB].[dbo].[BF_DATA].[BF_USER_DM4_CD]=[PB].[dbo].[BF_DATA].[BF_USER_DM4_CD] AND [PB].[dbo].[BF_DATA].[BF_USER_DM5_CD]=[PB].[dbo].[BF_DATA].[BF_USER_DM5_CD] AND [PB].[dbo].[BF_DATA].[BF_USER_DM6_CD]=[PB].[dbo].[BF_DATA].[BF_USER_DM6_CD] AND [PB].[dbo].[BF_DATA].[BF_USER_DM7_CD]=[PB].[dbo].[BF_DATA].[BF_USER_DM7_CD] AND [PB].[dbo].[BF_DATA].[BF_USER_DM8_CD]=[PB].[dbo].[BF_DATA].[BF_USER_DM8_CD] AND [PB].[dbo].[BF_DATA].[BF_BDOB_CD]=[PB].[dbo].[BF_DATA].[BF_BDOB_CD] AND [PB].[dbo].[BF_DATA].[BF_TM_PERD_CD]=[PB].[dbo].[BF_DATA].[BF_TM_PERD_CD]) LOOKUP ORDERED FORWARD)                              |    |                                                                     |--Nested Loops(Inner Join, OUTER REFERENCES:([PB].[dbo].[BF_DATA].[BF_ORGN_CD], [PB].[dbo].[BF_DATA].[BF_ACTY_CD], [PB].[dbo].[BF_DATA].[BF_PROG_CD], [PB].[dbo].[BF_DATA].[BF_PROJ_CD], [PB].[dbo].[BF_DATA].[BF_BDOB_CD], [PB].[dbo].[BF_DATA].[BF_USER_DM1_CD], [PB].[dbo].[BF_DATA].[BF_USER_DM2_CD], [PB].[dbo].[BF_DATA].[BF_USER_DM3_CD], [PB].[dbo].[BF_DATA].[BF_USER_DM4_CD], [PB].[dbo].[BF_DATA].[BF_USER_DM5_CD], [PB].[dbo].[BF_DATA].[BF_USER_DM6_CD], [PB].[dbo].[BF_DATA].[BF_USER_DM7_CD], [PB].[dbo].[BF_DATA].[BF_USER_DM8_CD], [PB].[dbo].[BF_DATA].[BF_FUND_CD], [PB].[dbo].[BF_DATA].[BF_TM_PERD_CD], [Expr1077]) WITH 
UNORDERED PREFETCH)                              |    |                                                                          |--Nested Loops(Inner Join, OUTER REFERENCES:([PB].[dbo].[BF_TM_PERD_RLUP].[BF_TM_PERD_CHLD_CD]))                              |    |                                                                          |    |--Index Seek(OBJECT:([PB].[dbo].[BF_TM_PERD_RLUP].[XIF1_TM_PERD_RLUP]), SEEK:([PB].[dbo].[BF_TM_PERD_RLUP].[BF_TM_PERD_PARN_CD]=[@P19]) ORDERED FORWARD)                              |    |                                                                          |    |--Index Seek(OBJECT:([PB].[dbo].[BF_DATA].[XIF18_DATA]), SEEK:([PB].[dbo].[BF_DATA].[BF_TM_PERD_CD]=[PB].[dbo].[BF_TM_PERD_RLUP].[BF_TM_PERD_CHLD_CD]),  WHERE:([PB].[dbo].[BF_DATA].[BF_ORGN_CD]=[PB].[dbo].[BF_ALOC_WORK_ORGN].[SRC]) ORDERED FORWARD)                              |    |                                                                          |--Clustered Index Seek(OBJECT:([PB].[dbo].[BF_DATA].[XPK_DATA]), SEEK:([PB].[dbo].[BF_DATA].[BF_ACTY_CD]=[PB].[dbo].[BF_DATA].[BF_ACTY_CD] AND [PB].[dbo].[BF_DATA].[BF_FUND_CD]=[PB].[dbo].[BF_DATA].[BF_FUND_CD] AND [PB].[dbo].[BF_DATA].[BF_ORGN_CD]=[PB].[dbo].[BF_DATA].[BF_ORGN_CD] AND [PB].[dbo].[BF_DATA].[BF_PROG_CD]=[PB].[dbo].[BF_DATA].[BF_PROG_CD] AND [PB].[dbo].[BF_DATA].[BF_PROJ_CD]=[PB].[dbo].[BF_DATA].[BF_PROJ_CD] AND [PB].[dbo].[BF_DATA].[BF_USER_DM1_CD]=[PB].[dbo].[BF_DATA].[BF_USER_DM1_CD] AND [PB].[dbo].[BF_DATA].[BF_USER_DM2_CD]=[PB].[dbo].[BF_DATA].[BF_USER_DM2_CD] AND [PB].[dbo].[BF_DATA].[BF_USER_DM3_CD]=[PB].[dbo].[BF_DATA].[BF_USER_DM3_CD] AND [PB].[dbo].[BF_DATA].[BF_USER_DM4_CD]=[PB].[dbo].[BF_DATA].[BF_USER_DM4_CD] AND [PB].[dbo].[BF_DATA].[BF_USER_DM5_CD]=[PB].[dbo].[BF_DATA].[BF_USER_DM5_CD] AND [PB].[dbo].[BF_DATA].[BF_USER_DM6_CD]=[PB].[dbo].[BF_DATA].[BF_USER_DM6_CD] AND [PB].[dbo].[BF_DATA].[BF_USER_DM7_CD]=[PB].[dbo].[BF_DATA].[BF_USER_DM7_CD] AND 
[PB].[dbo].[BF_DATA].[BF_USER_DM8_CD]=[PB].[dbo].[BF_DATA].[BF_USER_DM8_CD] AND [PB].[dbo].[BF_DATA].[BF_BDOB_CD]=[PB].[dbo].[BF_DATA].[BF_BDOB_CD] AND [PB].[dbo].[BF_DATA].[BF_TM_PERD_CD]=[PB].[dbo].[BF_DATA].[BF_TM_PERD_CD]) LOOKUP ORDERED FORWARD)                              |    |--Nested Loops(Inner Join, OUTER REFERENCES:([Bmk1032], [Expr1078]) WITH UNORDERED PREFETCH)                              |         |--Index Seek(OBJECT:([PB].[dbo].[BF_ALOC_WORK_USER_DM2].[IX_COM_BF_ALOC_WORK_USER_DM2_EXEC_ID]), SEEK:([PB].[dbo].[BF_ALOC_WORK_USER_DM2].[BF_BATC_STUS_EXEC_ID]=[@P11]) ORDERED FORWARD)                              |         |--RID Lookup(OBJECT:([PB].[dbo].[BF_ALOC_WORK_USER_DM2]), SEEK:([Bmk1032]=[Bmk1032]) LOOKUP ORDERED FORWARD)                              |--Table Scan(OBJECT:([PB].[dbo].[BF_ALOC_WORK_ACTY]), WHERE:([PB].[dbo].[BF_ALOC_WORK_ACTY].[BF_BATC_STUS_EXEC_ID]=[@P4]))  

One other note, the query plan shows early termination due to plan timeout.

MySQL Workbench sync keeps requesting the same changes

Posted: 28 Mar 2013 05:57 PM PDT

I am using MySQL Workbench, and when I try to "synchronize" it with my remote database, it keeps detecting some changes to make.

Specifically, the most recurrent ones are:

  • Dropping foreign keys and re-creating the same ones again
  • Changing the collation

I was compliant and executed all the queries given to me (and added the semi-colon that they forgot). MySQL didn't complain and executed them.

However, it didn't help: I can run it 20 times in a row and it will still request the same useless changes.
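One way to see which side keeps disagreeing is to query what the server actually stored, since a mismatch between the model's default collation and the server/schema defaults will make the sync re-detect the same "change" forever. A sketch with placeholder host and schema names:

```shell
# Collation the server actually stored for each table in the schema:
mysql -h myhost -u root -p -e \
  "SELECT TABLE_NAME, TABLE_COLLATION
   FROM information_schema.TABLES
   WHERE TABLE_SCHEMA = 'mydb';"

# Schema-level defaults that Workbench's model is compared against:
mysql -h myhost -u root -p -e \
  "SELECT DEFAULT_CHARACTER_SET_NAME, DEFAULT_COLLATION_NAME
   FROM information_schema.SCHEMATA
   WHERE SCHEMA_NAME = 'mydb';"
```

If the stored collations already match what Workbench keeps proposing, the loop is likely in the model's own defaults rather than on the server.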

mysql performance / cache configuration enigma

Posted: 28 Mar 2013 09:57 AM PDT

I have two MySQL 5.1 instances (say A and B) hosting the same database schema. When I run the same query (from MySQL Workbench) on both instances, I get very different response times for subsequent executions, and I don't understand why.

On instance A, first query execution takes 0.688s and second query execution takes 0.683s
On instance B, first query execution takes 0.688s and second query execution takes 0.027s

It looks like there's a cache configuration difference between the two instances but I can't find it.

Comparing the SHOW VARIABLES results on both instances gives only a few distinct values (and I don't see how any of them could impact query execution time):

general_log_file :          /path/to/file2.log              VS  /path/to/file1.log
hostname :                  mysql2                          VS  mysql1
pid_file :                  /var/lib/mysql/mysql2.pid       VS  /var/lib/mysql/mysql1.pid
slave_max_allowed_packet :  1073741824                      VS  (empty)
slow_query_log_file :       /var/lib/mysql/mysql2-slow.log  VS  /var/lib/mysql/mysql1-slow.log
system_time_zone :          CET                             VS  CEST
timestamp :                 1352219171                      VS  1352219229
version :                   5.1.66-0ubuntu0.10.04.1         VS  5.1.62-0ubuntu0.10.04.1

Just to mention, instance A is our test environment and instance B is our production environment.

Edit : (recommended by @Rick James)

The following variables are strictly identical on both environments

SHOW VARIABLES LIKE '%buffer%'

bulk_insert_buffer_size   8388608
innodb_buffer_pool_size   8388608
innodb_log_buffer_size    1048576
join_buffer_size          131072
key_buffer_size           16777216
myisam_sort_buffer_size   8388608
net_buffer_length         16384
preload_buffer_size       32768
read_buffer_size          131072
read_rnd_buffer_size      262144
sort_buffer_size          2097144
sql_buffer_result         OFF

SHOW VARIABLES LIKE 'innodb%'

innodb_adaptive_hash_index               ON
innodb_additional_mem_pool_size          1048576
innodb_autoextend_increment              8
innodb_autoinc_lock_mode                 1
innodb_buffer_pool_size                  8388608
innodb_checksums                         ON
innodb_commit_concurrency                0
innodb_concurrency_tickets               500
innodb_data_file_path                    ibdata1:10M:autoextend
innodb_data_home_dir
innodb_doublewrite                       ON
innodb_fast_shutdown                     1
innodb_file_io_threads                   4
innodb_file_per_table                    OFF
innodb_flush_log_at_trx_commit           1
innodb_flush_method
innodb_force_recovery                    0
innodb_lock_wait_timeout                 50
innodb_locks_unsafe_for_binlog           OFF
innodb_log_buffer_size                   1048576
innodb_log_file_size                     5242880
innodb_log_files_in_group                2
innodb_log_group_home_dir                ./
innodb_max_dirty_pages_pct               90
innodb_max_purge_lag                     0
innodb_mirrored_log_groups               1
innodb_open_files                        300
innodb_rollback_on_timeout               OFF
innodb_stats_method                      nulls_equal
innodb_stats_on_metadata                 ON
innodb_support_xa                        ON
innodb_sync_spin_loops                   20
innodb_table_locks                       ON
innodb_thread_concurrency                8
innodb_thread_sleep_delay                10000
innodb_use_legacy_cardinality_algorithm  ON
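A second run dropping from ~0.7s to 0.027s on only one instance is the classic signature of the MySQL query cache, and the query_cache_* variables are not part of the comparison above. It may be worth checking them (and the hit counters) explicitly; the hostnames below are the ones from the variables dump:

```shell
# Query-cache configuration on both instances:
mysql -h mysql1 -u root -p -e "SHOW VARIABLES LIKE 'query_cache%';"
mysql -h mysql2 -u root -p -e "SHOW VARIABLES LIKE 'query_cache%';"

# If the fast instance's Qcache_hits counter climbs after re-running the
# query, the 0.027s execution came from the cache, not from the engine:
mysql -h mysql2 -u root -p -e "SHOW STATUS LIKE 'Qcache%';"
```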

The actual SELECT :

select
    post0_.id as id83_0_,
    thread1_.id as id81_1_,
    post0_.createDate as createDate83_0_,
    post0_.editCount as editCount83_0_,
    post0_.editDate as editDate83_0_,
    post0_.editor_id as editor7_83_0_,
    post0_.postType as postType83_0_,
    post0_.poster_id as poster8_83_0_,
    post0_.repliedTo_id as repliedTo9_83_0_,
    post0_.text as text83_0_,
    post0_.thread_id as thread10_83_0_,
    thread1_.created as created81_1_,
    thread1_.debate_id as debate16_81_1_,
    thread1_.description as descript4_81_1_,
    thread1_.lastPostDate as lastPost5_81_1_,
    thread1_.poster_id as poster17_81_1_,
    thread1_.status as status81_1_,
    thread1_.threadType as threadType81_1_,
    thread1_.title as title81_1_,
    thread1_.updated as updated81_1_,
    thread1_.viewCount as viewCount81_1_,
    thread1_.imageUrl as imageUrl81_1_,
    thread1_.mediaTypeValue as mediaTy11_81_1_,
    thread1_.mediaUrl as mediaUrl81_1_,
    thread1_.modoWeight as modoWeight81_1_,
    thread1_.shortName as shortName81_1_,
    thread1_.why as why81_1_,
    thread1_.theme_id as theme18_81_1_,
    thread1_.answer_debaterId as answer19_81_1_,
    thread1_.answer_pollId as answer20_81_1_,
    (SELECT COUNT(*)
     FROM   Post p
     WHERE  p.thread_id = thread1_.id) as formula9_1_
from
    Post post0_
inner join
    Thread thread1_ on post0_.thread_id = thread1_.id
where
    post0_.postType in ('P', 'F')
    and thread1_.debate_id = 69
    and thread1_.threadType <> 'B'
    and length(post0_.text) > 0
order by
    createDate desc
limit 5

The EXPLAIN SELECT (same on both environments):

id | select_type        | table    | type | possible_keys              | key                | key_len | ref                      | rows | Extra
 1 | PRIMARY            | thread1_ | ref  | PRIMARY,FK9545FA2AEBB74893 | FK9545FA2AEBB74893 | 5       | const                    | 3690 | Using where; Using temporary; Using filesort
 1 | PRIMARY            | post0_   | ref  | FK260CC0D74BC5F3           | FK260CC0D74BC5F3   | 5       | debate-atest.thread1_.id | 2    | Using where
 2 | DEPENDENT SUBQUERY | p        | ref  | FK260CC0D74BC5F3           | FK260CC0D74BC5F3   | 5       | debate-atest.thread1_.id | 2    | Using where; Using index

The CREATE TABLE STATEMENT (exact same on both except the constraint names) :

delimiter $$

CREATE TABLE `Post` (
  `postType` varchar(1) NOT NULL,
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `createDate` datetime DEFAULT NULL,
  `editCount` int(11) NOT NULL,
  `editDate` datetime DEFAULT NULL,
  `text` longtext,
  `editor_id` int(11) DEFAULT NULL,
  `poster_id` int(11) DEFAULT NULL,
  `repliedTo_id` int(11) DEFAULT NULL,
  `thread_id` int(11) DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `createDateIx` (`createDate`),
  KEY `FK260CC0D74BC5F3` (`thread_id`),
  KEY `FK260CC07C1A4F95` (`poster_id`),
  KEY `FK260CC0960B3775` (`editor_id`),
  KEY `FK260CC0D7C95B5F` (`repliedTo_id`),
  CONSTRAINT `FK260CC0D7C95B5F` FOREIGN KEY (`repliedTo_id`) REFERENCES `Post` (`id`),
  CONSTRAINT `FK260CC07C1A4F95` FOREIGN KEY (`poster_id`) REFERENCES `Debater` (`id`),
  CONSTRAINT `FK260CC0960B3775` FOREIGN KEY (`editor_id`) REFERENCES `Debater` (`id`),
  CONSTRAINT `FK260CC0D74BC5F3` FOREIGN KEY (`thread_id`) REFERENCES `Thread` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=148523 DEFAULT CHARSET=utf8$$

and

delimiter $$

CREATE TABLE `Thread` (
  `threadType` varchar(1) NOT NULL,
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `created` datetime DEFAULT NULL,
  `description` varchar(700) DEFAULT NULL,
  `lastPostDate` datetime DEFAULT NULL,
  `status` int(11) NOT NULL,
  `title` varchar(100) DEFAULT NULL,
  `updated` datetime DEFAULT NULL,
  `viewCount` int(11) NOT NULL,
  `imageUrl` varchar(255) DEFAULT NULL,
  `mediaTypeValue` int(11) DEFAULT NULL,
  `mediaUrl` varchar(1000) DEFAULT NULL,
  `modoWeight` int(11) DEFAULT NULL,
  `shortName` varchar(15) DEFAULT NULL,
  `why` longtext,
  `debate_id` int(11) DEFAULT NULL,
  `poster_id` int(11) DEFAULT NULL,
  `theme_id` int(11) DEFAULT NULL,
  `answer_debaterId` int(11) DEFAULT NULL,
  `answer_pollId` int(11) DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `answerPollIx` (`answer_pollId`),
  KEY `FK9545FA2AAF24B581` (`theme_id`),
  KEY `FK9545FA2A7C1A4F95` (`poster_id`),
  KEY `FK9545FA2AEBB74893` (`debate_id`),
  KEY `FK9545FA2A82957DB8` (`answer_debaterId`,`answer_pollId`),
  CONSTRAINT `FK9545FA2A82957DB8` FOREIGN KEY (`answer_debaterId`, `answer_pollId`) REFERENCES `Answer` (`debaterId`, `pollId`),
  CONSTRAINT `FK9545FA2A7C1A4F95` FOREIGN KEY (`poster_id`) REFERENCES `Debater` (`id`),
  CONSTRAINT `FK9545FA2AAF24B581` FOREIGN KEY (`theme_id`) REFERENCES `Thread` (`id`),
  CONSTRAINT `FK9545FA2AEBB74893` FOREIGN KEY (`debate_id`) REFERENCES `Debate` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=49829 DEFAULT CHARSET=utf8$$

Copy Postgres databases with indexes between instances

Posted: 28 Mar 2013 12:54 PM PDT

How do you copy a single Postgres database between installations without using pg_dump and without regenerating indexes?

We have an application we deploy which uses a Postgres 8.4.7 database on Linux. We have a rather large static data set (4 GB) which is indexed.

Using pg_dump we have to index the data once it is restored to the second Postgres instance. Indexing this data can take up to 80 minutes, so we would rather deploy the database with the index already generated.

From the Postgres documentation, it appears that databases are contained in sub-directories under postgresql/8.4/main/base, named by their OID.

My first attempt was to create an empty database in Postgres, shut down the instance, then copy the data from the existing instance into the directory of the OID for the empty database. This failed, as my indexes were not working. Performing a vacuum gave me the following error:

WARNING:  PD_ALL_VISIBLE flag was incorrectly set in relation "pg_type" page 6
WARNING:  PD_ALL_VISIBLE flag was incorrectly set in relation "pg_depend" page 39
ERROR:  xlog flush request 0/8B199518 is not satisfied --- flushed only to 0/8AF81680
CONTEXT:  writing block 39 of relation base/17004/2608

Getting max values from MySQL tables

Posted: 28 Mar 2013 06:57 PM PDT

I have three tables:

  1. competitions (id, name, date)
  2. athletes (id,name)
  3. results (place, id_athlete, id_competition, ranking_points)

where:

results.id_athlete = athletes.id
results.id_competition = competitions.id

I need a query to select the latest ranking_points of each athlete based on competitions.date.
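One way to sketch this (assuming one result per athlete per competition, and the table and column names given above) is a correlated subquery that picks each athlete's most recent competition date:

```sql
SELECT a.id, a.name, r.ranking_points
FROM athletes a
JOIN results r      ON r.id_athlete = a.id
JOIN competitions c ON c.id = r.id_competition
WHERE c.date = (SELECT MAX(c2.date)
                FROM results r2
                JOIN competitions c2 ON c2.id = r2.id_competition
                WHERE r2.id_athlete = a.id);
-- Note: if an athlete has two results on the same latest date, both rows come back.
```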

Minimizing Indexed Reads with Complex Criteria

Posted: 28 Mar 2013 09:59 AM PDT

I'm optimizing a Firebird 2.5 database of work tickets. They're stored in a table declared as such:

CREATE TABLE TICKETS (    TICKET_ID id PRIMARY KEY,    JOB_ID id,    ACTION_ID id,    STATUS str256 DEFAULT 'Pending'  );  

I generally want to find the first ticket that hasn't been processed and is in Pending status.

My processing loop would be:

  1. Retrieve 1st Ticket where Pending
  2. Do work with Ticket.
  3. Update Ticket Status => Complete
  4. Repeat.

Nothing too fancy. If I'm watching the database while this loop runs, I see the number of indexed reads climb with each iteration. The performance doesn't seem to degrade terribly, as far as I can tell, but the machine I'm testing on is pretty quick. However, I've received reports of performance degradation over time from some of my users.

I've got an index on Status, but it still seems like it scans down the Ticket_Id column each iteration. It seems like I'm overlooking something, but I'm not sure what. Is the climbing number of indexed reads for something like this expected, or is the index misbehaving in some way?

-- Edits for comments --

In Firebird you limit row retrieval like:

Select First 1    Job_ID, Ticket_Id  From    Tickets  Where    Status = 'Pending'  

So when I say "first", I'm just asking it for a limited record set where Status = 'Pending'.
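If the optimizer is walking many already-completed rows that once matched 'Pending', a composite index may let the FIRST 1 be satisfied from a narrow index range. A minimal sketch (the index name is made up, and whether Firebird 2.5 actually chooses it should be verified with the query plan):

```sql
CREATE INDEX IDX_TICKETS_STATUS_ID ON TICKETS (STATUS, TICKET_ID);

SELECT FIRST 1
    JOB_ID, TICKET_ID
FROM TICKETS
WHERE STATUS = 'Pending'
ORDER BY STATUS, TICKET_ID;  -- ordering on the index prefix encourages an index walk
```

Frequent updates of STATUS also leave back versions behind in Firebird, so regular sweeps/garbage collection can matter as much as the index itself.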

Tools to migrate from SQLite to PostgreSQL

Posted: 28 Mar 2013 04:16 PM PDT

I need to continuously migrate from SQLite to PostgreSQL. By continuously, I mean that I will reimport the SQLite database into PostgreSQL every day. It would be nice if I could make changes to the tables in SQLite as I please without having to manually make any changes to the PostgreSQL database. I will run the migration from Python, but it could be any external tool, which can be controlled from command line. The tool needs to be available for Linux, and it would be nice if it also ran on Windows.

[MS SQL Server] Migrating Maintenance plans

[MS SQL Server] Migrating Maintenance plans


Migrating Maintenance plans

Posted: 27 Mar 2013 07:41 AM PDT

Hi, is it possible to migrate maintenance plans from one server to another server? SueTons.

transaction log size reduction

Posted: 27 Mar 2013 11:17 AM PDT

Does a backup of the transaction log reduce the log file size, or is it DBCC SHRINKFILE that really makes the size small? I know it is not good practice to shrink the log, but we have a special case where we need to do that to save space. For example, we have some archived databases set to read-only and simple recovery mode; we don't need those big log files there, so I was able to shrink the log size to a smaller one. But I remember reading somewhere that only backing up the transaction log reduces the size, not shrinking, or maybe I didn't remember it correctly? Thanks
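For context: a log backup only marks log space inside the file as reusable; it never shrinks the physical file. Only DBCC SHRINKFILE changes the file size on disk. For read-only archive databases in simple recovery, a one-off shrink like the sketch below is typical (the database and logical file names are placeholders):

```sql
USE ArchiveDb;
GO
-- Find the logical name and current size of the log file first:
SELECT name, size / 128 AS size_mb
FROM sys.database_files
WHERE type_desc = 'LOG';

-- Then shrink it to a small fixed target (in MB):
DBCC SHRINKFILE (ArchiveDb_log, 10);
```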

Loading Users into instance security

Posted: 19 Mar 2013 12:57 AM PDT

We are moving from a server with SQL server 2005 to one with SQL server 2008. Is there any way to script a bulk load of users into the instance security? I have mounted the databases, from backup, but need to get the users set-up in the instance. With over 1,000 users this would take a while going through the SSMS GUI interface.
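A common route is Microsoft's sp_help_revlogin script (KB 918992), which generates CREATE LOGIN statements that preserve the original SIDs and password hashes; run its output on the new server. After restoring the databases you can also re-map any orphaned users. A hedged sketch (user/login names are placeholders):

```sql
-- List database users whose SIDs no longer match a login on the new server:
EXEC sp_change_users_login 'Report';

-- Re-map one orphaned database user to a login of the same name:
EXEC sp_change_users_login 'Update_One', 'AppUser', 'AppUser';
```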

Restore verifyonly on the last backup

Posted: 27 Mar 2013 11:17 PM PDT

Hi, can somebody help me: I want a script that uses the 'RESTORE VERIFYONLY' command on the last full backup of my database (get the path of the last full backup and use the command RESTORE VERIFYONLY FROM DISK = <backup>).
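A hedged sketch along those lines, reading the most recent full backup path from msdb ('MyDatabase' is a placeholder; backups striped across multiple files would need all their devices listed):

```sql
DECLARE @file nvarchar(260);

SELECT TOP (1) @file = mf.physical_device_name
FROM msdb.dbo.backupset bs
JOIN msdb.dbo.backupmediafamily mf ON mf.media_set_id = bs.media_set_id
WHERE bs.database_name = N'MyDatabase'
  AND bs.type = 'D'                       -- 'D' = full database backup
ORDER BY bs.backup_finish_date DESC;

RESTORE VERIFYONLY FROM DISK = @file;
```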

Monitoring error

Posted: 11 Mar 2013 07:47 AM PDT

I'm using Red-Gate Monitor tool to help me in managing our SQL Server. I see very frequently these two messages: Monitoring error (SQL Server data collection) and Monitoring Error (host machine data collection). I know there could be almost an infinite number of reasons for these error but I was wondering how would it be best to try to find out the reasons why I'm getting notified on these. I looked at our existing backup jobs and schedule jobs as well as the logs to see if I can find a reason why this is happening. Any other ideas or a direction that I need to go to get more information on this it would be very appreciated.

sql2008 port number changed

Posted: 27 Mar 2013 08:46 PM PDT

Not able to connect after changing the port number.

Find SPID of Linked Server Call

Posted: 12 Mar 2013 07:14 AM PDT

Hi SSC, I have a stored procedure on server A that makes a linked server call to server B. On server A I see the wait type is OLEDB, so I want to look at the SPID on server B. Is there a way to identify the remote SPID other than connecting to the server, doing an sp_who2, and looking for the client who made the connection and the username of the linked server? Thanks!

SQL Server access violation

Posted: 27 Mar 2013 09:07 PM PDT

Hi all, my SQL Server crashed and I see the following error in my error log. I browsed the net and learned I should analyze the .mdmp files, but I am not sure how to do that. Please help me find the issue:

2013-03-26 09:16:26.58 Server *
2013-03-26 09:16:26.58 Server * BEGIN STACK DUMP:
2013-03-26 09:16:26.58 Server * 03/26/13 09:16:26 spid 13784
2013-03-26 09:16:26.58 Server *
2013-03-26 09:16:26.58 Server * ex_handle_except encountered exception C0000005 - Server terminating
2013-03-26 09:16:26.58 Server Waiting for your input.

Thank you

Availability and (simulation) using backup ( SQL SERVER)

Posted: 27 Mar 2013 07:30 PM PDT

Hi everybody, I want to know whether the last backup is physically accessible, and what it would take to use it. Is that possible with SQL Server?

Crippling high RAM usage, Googled SQL buffer-related usage doesn't show the issue

Posted: 27 Mar 2013 10:37 AM PDT

I have a SQL Server 2008 R2 instance that is hitting the RAM ceiling and crippling the server.

- Server has 12 GB RAM
- Task Manager shows more than 9.5 GB of RAM used by sqlservr.exe, and I am getting time-out errors on C# queries and when connecting to the server via SSMS
- Management won't allow increased RAM on this server
- Restarting the SQL Server service fixes the issue, but the RAM slowly climbs back up
- Online SQL queries show my databases using approximately 2.5 GB of RAM for buffer pages, all added up
- This issue has happened approximately once a week for the last few weeks

Could anyone please direct me on how to better monitor / see where this memory usage is going? I'd appreciate some assistance on this.

Buffer cache hit ration ( for week )

Posted: 27 Mar 2013 08:39 PM PDT

Hi everybody, I need your help please: I want to create a graph showing the progression of the "Buffer cache hit ratio" over the last 7 days (a week). Until now I use this script (current moment only):

SELECT CAST(
    ( SELECT CAST(cntr_value AS BIGINT)
      FROM sys.dm_os_performance_counters
      WHERE counter_name = 'Buffer cache hit ratio' ) * 100.00
    /
    ( SELECT CAST(cntr_value AS BIGINT)
      FROM sys.dm_os_performance_counters
      WHERE counter_name = 'Buffer cache hit ratio base' )
    AS NUMERIC(6,3) )

Server doesn't accept connections from the web!

Posted: 27 Mar 2013 02:09 PM PDT

I am making a VB.Net app that accesses my SQL Server, but when I try to run the app from a remote computer, I get an error that it can't find the server. It works fine if I access it from the local network using the local IP, but from outside, using the public IP, it doesn't work! I checked the Allow Remote Connections option, but that didn't work either. I also tried putting my server as the DMZ host on my router; didn't work. Additionally, I tried disabling Windows Firewall; no results. Environment: Windows Server 2012, SQL Server 2012. Maxwell.

How would you recover from shutdown due to lack of drive space

Posted: 27 Mar 2013 08:58 AM PDT

We have a database server hosting SharePoint sites. I'm not the SharePoint admin, so I don't know about its admin processes. Our SharePoint admin tried to delete a site, which almost shut down the SQL Server because tempdb and the SharePoint database log ballooned up. We're working on figuring out a better way to do this. So... obviously we try to avoid running out of space with various monitoring tools, but how do you recover if a rogue process fills up the drive and the SQL Server shuts down? We're going to do our best to avoid this scenario, but we almost got to that point today in less than an hour... Thanks for any pointers.

Reset Index Usage Counters

Posted: 27 Mar 2013 06:44 AM PDT

Hello, is it possible to reset the index usage counters (user_seeks, user_scans, user_lookups, etc.) on a production server without restarting the SQL Server service? Thanks

[Articles] Halo 4 and Hadoop

[Articles] Halo 4 and Hadoop


Halo 4 and Hadoop

Posted: 27 Mar 2013 11:00 PM PDT

A case study shows how the combination of Azure and Hadoop helped the Halo 4 team grow their successful franchise.

[SQL 2012] Need to stop SQL Server Agent Job Automatically :

[SQL 2012] Need to stop SQL Server Agent Job Automatically :


Need to stop SQL Server Agent Job Automatically :

Posted: 28 Mar 2013 12:54 AM PDT

Hi, the re-index job runs for more than 10 hours, which affects production, so we decided to stop the job automatically once it has run for 10 hours. Can I know how to stop the job automatically after a certain amount of time?
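One option is a second Agent job, scheduled (say) hourly, that acts as a watchdog. A hedged sketch (the job name is a placeholder, and sysjobactivity can hold rows from older sessions, so this should be validated in your environment):

```sql
DECLARE @start datetime;

-- Most recent still-running execution of the target job:
SELECT @start = MAX(ja.start_execution_date)
FROM msdb.dbo.sysjobactivity ja
JOIN msdb.dbo.sysjobs j ON j.job_id = ja.job_id
WHERE j.name = N'ReIndex Job'
  AND ja.start_execution_date IS NOT NULL
  AND ja.stop_execution_date IS NULL;

IF DATEDIFF(HOUR, @start, GETDATE()) >= 10
    EXEC msdb.dbo.sp_stop_job @job_name = N'ReIndex Job';
```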

SQL Server 2012 Insane Licensing Price!

Posted: 28 Jan 2013 12:31 AM PST

Hi, we are about to launch some new projects designed to run on SQL Server 2012 databases. However, after we asked our software vendor for the SQL Server licensing price, we are about to give up on SQL Server and are considering adopting another database solution. Let's go to the numbers.

Case 1: One system accessed from one application server. This application server opens several connections to the database. About 200 users. Server: Dell blade with 2 x Xeon 6-core, 12 cores total. SQL Server Standard, Windows 2008 R2.

Price offered by our vendor: USD 2,818.00 for each 2 cores per year. Total USD 16,912.00 per year x 3 years. Total SQL Server licensing cost for 3 years: USD 50,736.00! Contract = Select + EA.

We understand that is quite expensive for just one server with SQL Server Standard. Even using Oracle, the total price is around 1/3 of that SQL Server total price. I would really appreciate it if somebody could help me check this license cost. If it is true, I am going to study other database solutions, because SQL Server will be seriously compromised in small and medium companies. Thanks in advance, Eduardo Pin

[T-SQL] Combine multiple combinations to one

[T-SQL] Combine multiple combinations to one


Combine multiple combinations to one

Posted: 28 Mar 2013 01:25 AM PDT

Hi, the following test condition:

IF object_id('tempdb..#testEnvironment') IS NOT NULL
BEGIN
    DROP TABLE #testEnvironment;
END;

CREATE TABLE #testEnvironment ([Sample_ID] INT, [Rep_ID] INT, [Result] VARCHAR(20))

INSERT INTO #testEnvironment
SELECT 1, 1, 'O152' UNION ALL SELECT 1, 2, 'O2' UNION ALL
SELECT 1, 3, 'O157' UNION ALL SELECT 1, 4, 'O154' UNION ALL
SELECT 2, 1, 'O5' UNION ALL SELECT 2, 2, 'Negative' UNION ALL
SELECT 2, 3, 'O1' UNION ALL SELECT 3, 1, 'O157' UNION ALL
SELECT 3, 2, 'O1' UNION ALL SELECT 3, 3, 'O1' UNION ALL
SELECT 4, 1, 'O157' UNION ALL SELECT 4, 2, 'Negative' UNION ALL
SELECT 4, 3, 'O2' UNION ALL SELECT 4, 4, 'O152' UNION ALL
SELECT 5, 1, 'Negative' UNION ALL SELECT 5, 2, 'Negative' UNION ALL
SELECT 5, 3, 'Negative' UNION ALL
SELECT 6, 1, 'O154' UNION ALL SELECT 6, 2, 'O157' UNION ALL SELECT 6, 3, 'O152' UNION ALL
SELECT 7, 1, 'Negative' UNION ALL SELECT 7, 2, 'O152' UNION ALL SELECT 7, 3, 'O157';

WITH SampleData AS (
    SELECT [Sample_ID], ISNULL([Interim],[Profile]) AS [Profile]
    FROM (
        SELECT [Sample_ID],
               CAST(CASE WHEN MAX([Result]) = MIN([Result]) AND MAX([Result]) = 'Negative'
                         THEN 'Negative' ELSE NULL END AS NVARCHAR(MAX))
        FROM #testEnvironment
        GROUP BY [Sample_ID]
    ) a([Sample_ID],[Interim])
    CROSS APPLY (
        SELECT ISNULL(STUFF((SELECT ';'+[Result]
                             FROM #testEnvironment
                             WHERE a.Sample_ID = Sample_ID
                               AND CHARINDEX('O15',[Result]) > 0
                             FOR XML PATH(''), TYPE).value('.','NVARCHAR(MAX)')
                            ,1,1,'')
                     ,'Non-pathogen')
    ) b([Profile])
)
SELECT [Profile], COUNT([Profile]) AS [Count]
FROM SampleData
GROUP BY [Profile]

Returns:

Profile              Count
-------------------- -------
Negative             1
Non-pathogen         1
O152;O157            1
O152;O157;O154       1
O154;O157;O152       1
O157                 1
O157;O152            1

As you see, different orderings of the same combination are counted separately. How can I avoid that and have:

Profile              Count
-------------------- -------
Negative             1
Non-pathogen         1
O152;O157            2
O152;O157;O154       2
O157                 1

Thanks in advance for any suggestion.

Just a random observation with 'WITH'

Posted: 27 Mar 2013 10:46 PM PDT

Following is my observation with 'WITH'. I executed the query below:

SELECT * FROM Sales.SalesOrderDetail (NOLOCK)

which ran successfully. Now I execute this query:

UPDATE Sales.SalesOrderDetail_BACKUP (ROWLOCK)
SET UNITPRICE = 1
WHERE PRODUCTID = 707

(I know the query does not make sense, but it's just a trial.) It gives me the error: Incorrect syntax near '('. If I change it to

UPDATE Sales.SalesOrderDetail_BACKUP WITH (ROWLOCK)
SET UNITPRICE = 1
WHERE PRODUCTID = 707

it works. Just out of curiosity, I would like to know why WITH is not mandatory with (NOLOCK).

How do i line up the weeks in T-SQL ?

Posted: 27 Mar 2013 06:52 AM PDT

How do I line up the weeks ('Weeks-2013' and 'Weeks-2012') in T-SQL? Below is sample data to test with:

SELECT DISTINCT 'International' Campus, 28 'Weeks-2013', 0 'Weeks-2012', 2 'Student-2013', 0 'Student-2012', 0 StudentPrevYrTotal
UNION
SELECT DISTINCT 'International' Campus, 32 'Weeks-2013', 0 'Weeks-2012', 1 'Student-2013', 0 'Student-2012', 0 StudentPrevYrTotal
UNION
SELECT DISTINCT 'International' Campus, 0 'Weeks-2013', 32 'Weeks-2012', 0 'Student-2013', 1 'Student-2012', 1 StudentPrevYrTotal
UNION
SELECT DISTINCT 'International' Campus, 29 'Weeks-2013', 0 'Weeks-2012', 6 'Student-2013', 0 'Student-2012', 0 StudentPrevYrTotal
UNION
SELECT DISTINCT 'International' Campus, 27 'Weeks-2013', 0 'Weeks-2012', 5 'Student-2013', 0 'Student-2012', 0 StudentPrevYrTotal
UNION
SELECT DISTINCT 'International' Campus, 24 'Weeks-2013', 0 'Weeks-2012', 4 'Student-2013', 0 'Student-2012', 0 StudentPrevYrTotal
UNION
SELECT DISTINCT 'International' Campus, 0 'Weeks-2013', 24 'Weeks-2012', 0 'Student-2013', 3 'Student-2012', 4 StudentPrevYrTotal
ORDER BY Campus, 'Weeks-2013', 'Weeks-2012'

The idea is to achieve the following:

Campus        / 'Weeks-2013' / 'Weeks-2012' / 'Student-2013' / 'Student-2012'
International / 24           / 24           / 4              / 3
International / 27           / 0            / 5              / 0
International / 28           / 0            / 2              / 0
International / 29           / 0            / 6              / 0
International / 32           / 32           / 1              / 1

24, 27, 28, 29, 32 are the week numbers.

Update Help..

Posted: 27 Mar 2013 03:47 PM PDT

Hi guys, I need a favor. Here is an example: I am using TABLEA.ID in my stored procedures, but I am not sure in how many SPs TABLEA.ID appears. What I want is to change TABLEA.ID to TABLEB.ID in all SPs in my database. Please let me know if my question is not clear. Any help would be greatly appreciated. Thank you.

can't view all db_users when select * from sys.databaseprincipals

Posted: 27 Mar 2013 07:02 AM PDT

Per MSDN, there is a permissions restriction:

Permissions: Any user can see their own user name, the system users, and the fixed database roles. To see other users requires ALTER ANY USER, or a permission on the user. To see user-defined roles requires ALTER ANY ROLE, or membership in the role.

Since I don't know who the database users are, I need a query that I can run to see ALL database users. Anyone know?

how to show results in pivot in sql please?

Posted: 27 Mar 2013 07:44 AM PDT

Hello all, how do I show results in pivot format? Please assist me in getting the desired output shown below.

create table #MedMediaTemp1 (PatientJoinDate Date, VisitType varchar(25), Priority varchar(12), Roller int, Total int)

insert into #MedMediaTemp1
select '04/11/2013','Emergency Visit','Priority 1',1,34 union
select '04/11/2013','Emergency Visit','Priority 2',1,21 union
select '04/11/2013','Emergency Visit','Priority 3',1,67 union
select '04/11/2013','Emergency Visit','Priority 2',2,9 union
select '04/11/2013','Emergency Visit','Priority 3',2,21 union
select '04/11/2013','Emergency Visit','Priority 1',2,18 union
select '04/11/2013','Appointment Visit','Priority 2',1,7 union
select '04/11/2013','Appointment Visit','Priority 1',1,9 union
select '04/11/2013','Appointment Visit','Priority 3',1,217 union
select '04/11/2013','Appointment Visit','Priority 3',2,16 union
select '04/11/2013','Appointment Visit','Priority 2',2,1 union
select '04/11/2013','Appointment Visit','Priority 1',2,22 union
select '03/14/2013','Emergency Visit','Priority 3',1,64 union
select '03/14/2013','Emergency Visit','Priority 2',1,27 union
select '03/14/2013','Emergency Visit','Priority 1',1,33 union
select '03/14/2013','Emergency Visit','Priority 3',2,68 union
select '03/14/2013','Emergency Visit','Priority 2',2,68 union
select '03/14/2013','Emergency Visit','Priority 1',2,43 union
select '03/14/2013','Appointment Visit','Priority 2',1,10 union
select '03/14/2013','Appointment Visit','Priority 3',1,17 union
select '03/14/2013','Appointment Visit','Priority 1',1,11 union
select '03/14/2013','Appointment Visit','Priority 1',2,14 union
select '03/14/2013','Appointment Visit','Priority 3',2,56 union
select '03/14/2013','Appointment Visit','Priority 2',2,30 union
select '02/11/2013','Emergency Visit','Priority 2',1,56 union
select '02/11/2013','Emergency Visit','Priority 3',1,69 union
select '02/11/2013','Emergency Visit','Priority 1',1,352 union
select '02/11/2013','Appointment Visit','Priority 1',1,10 union
select '02/11/2013','Appointment Visit','Priority 2',1,54 union
select '02/11/2013','Appointment Visit','Priority 3',1,175 union
select '01/12/2013','Emergency Visit','Priority 2',1,20 union
select '01/12/2013','Emergency Visit','Priority 3',1,389 union
select '01/12/2013','Emergency Visit','Priority 1',1,642 union
select '01/12/2013','Appointment Visit','Priority 1',1,76 union
select '01/12/2013','Appointment Visit','Priority 2',1,48 union
select '01/12/2013','Appointment Visit','Priority 3',1,98 union
select '12/24/2012','Emergency Visit','Priority 2',1,5 union
select '12/24/2012','Emergency Visit','Priority 3',1,74 union
select '12/24/2012','Emergency Visit','Priority 1',1,36 union
select '12/24/2012','Appointment Visit','Priority 1',1,10 union
select '12/24/2012','Appointment Visit','Priority 2',1,10 union
select '12/24/2012','Appointment Visit','Priority 3',1,201 union
select '11/11/2012','Emergency Visit','Priority 1',1,26 union
select '11/11/2012','Emergency Visit','Priority 3',1,58 union
select '11/11/2012','Emergency Visit','Priority 2',1,3 union
select '11/11/2012','Appointment Visit','Priority 3',1,129 union
select '11/11/2012','Appointment Visit','Priority 2',1,15 union
select '11/11/2012','Appointment Visit','Priority 1',1,9 union
select '10/12/2012','Emergency Visit','Priority 1',1,20 union
select '10/12/2012','Emergency Visit','Priority 3',1,383 union
select '10/12/2012','Emergency Visit','Priority 2',1,38 union
select '10/12/2012','Appointment Visit','Priority 3',1,104 union
select '10/12/2012','Appointment Visit','Priority 1',1,7 union
select '10/12/2012','Appointment Visit','Priority 2',1,10 union
select '09/13/2012','Emergency Visit','Priority 2',1,35 union
select '09/13/2012','Emergency Visit','Priority 3',1,64 union
select '09/13/2012','Emergency Visit','Priority 1',1,32 union
select '09/13/2012','Appointment Visit','Priority 1',1,86 union
select '09/13/2012','Appointment Visit','Priority 2',1,116 union
select '09/13/2012','Appointment Visit','Priority 3',1,156 union
select '08/11/2012','Emergency Visit','Priority 2',1,60 union
select '08/11/2012','Emergency Visit','Priority 1',1,3 union
select '08/11/2012','Emergency Visit','Priority 3',1,69 union
select '08/11/2012','Appointment Visit','Priority 2',1,61 union
select '08/11/2012','Appointment Visit','Priority 1',1,5 union
select '08/11/2012','Appointment Visit','Priority 3',1,100 union
select '07/20/2012','Emergency Visit','Priority 2',1,298 union
select '07/20/2012','Emergency Visit','Priority 3',1,175 union
select '07/20/2012','Emergency Visit','Priority 1',1,52 union
select '07/20/2012','Appointment Visit','Priority 3',1,64 union
select '07/20/2012','Appointment Visit','Priority 1',1,27 union
select '07/20/2012','Appointment Visit','Priority 2',1,49 union
select '06/10/2012','Emergency Visit','Priority 2',1,147 union
select '06/10/2012','Emergency Visit','Priority 3',1,618 union
select '06/10/2012','Emergency Visit','Priority 1',1,20 union
select '06/10/2012','Appointment Visit','Priority 2',1,12 union
select '06/10/2012','Appointment Visit','Priority 1',1,9 union
select '06/10/2012','Appointment Visit','Priority 3',1,88

SELECT * FROM #MedMediaTemp1

From this table, how do I produce the results below through SQL? Please assist me.

                                April 2013  April 2013  April 2013  March 2013  March 2013  March 2013
                                Roller 1    Roller 2    Total       Roller 1    Roller 2    Total
Appointment Visit - ALL         233         39          272         38          100         138
Appointment Visit - Priority 1  9           22          31          11          14          25
Appointment Visit - Priority 2  7           1           8           10          30          40
Appointment Visit - Priority 3  217         16          233         17          56          73
Emergency Visit - ALL           122         48          170         124         179         303
Emergency Visit - Priority 1    34          18          52          33          43          76
Emergency Visit - Priority 2    21          9           30          27          68          95
Emergency Visit - Priority 3    67          21          88          64          68          132

Thanks in advance, dhani

How to concatenate group of rows

Posted: 27 Feb 2013 10:28 AM PST

Hi, I have a data structure as follows: [Name], [Task], and the data is:

John, task1
John, task2
Joe, task1
Joe, task3
Joe, task4
Jane, task1
Jane, task4
...

I wonder how to use SELECT to produce the following result:

John, task1;task2
Joe, task1;task3;task4
Jane, task1;task4
...

Thanks for any help.
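On SQL Server 2005+ the usual trick is FOR XML PATH with STUFF. A sketch assuming the data lives in a table named dbo.Tasks with columns [Name] and [Task] (the table name is an assumption):

```sql
SELECT t.[Name],
       STUFF((SELECT ';' + t2.[Task]
              FROM dbo.Tasks t2
              WHERE t2.[Name] = t.[Name]
              ORDER BY t2.[Task]
              FOR XML PATH('')), 1, 1, '') AS Tasks   -- STUFF strips the leading ';'
FROM dbo.Tasks t
GROUP BY t.[Name];
```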

How to create a calendar table for 65 weeks (not the default of 53 weeks) ?

Posted: 27 Mar 2013 07:27 AM PDT

I have a CTE script to create a calendar table for 53 weeks; it starts at 2011-01-01 (week 1, Saturday) and ends at 2011-12-31 (week 53, Saturday). The average week starts on a Sunday (2011-01-02) and ends on a Saturday (2011-01-08). I use 'SELECT datepart(ww, [Date])' to generate the weeks; the problem is that datepart week only generates 53 weeks (since there are only 53 weeks in a year). The customer wants weeks from 01 January 2011 to 31 March 2012 (65 weeks). The calendar table I use spans 15 years (2011-01-01 to 2011-12-31, 2012-01-01 to 2012-12-31, etc.). Below is the CTE script:

declare @StartDate datetime = '2011-01-01',
        @EndDate   datetime = '2012-01-01'; -- for testing purposes I only selected one year

with Calendar as (
    SELECT cast(@StartDate as datetime) [Date]
    union all
    SELECT [Date] + 1 FROM Calendar WHERE [Date] + 1 < @EndDate
)
SELECT [Date],
       datepart(dy, [Date]) [day of year],
       datepart(dd, [Date]) [day of month],
       datepart(dw, [Date]) [day of week],
       datename(dw, [Date]) [dayname],
       datepart(ww, [Date]) [week],
       datepart(mm, [Date]) [month],
       datepart(yy, [Date]) [year]
--into dbo.CalendarTable -- uncomment to populate CalendarTable
FROM Calendar
OPTION (MAXRECURSION 0)
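Since datepart(ww, ...) resets every January, one workaround is to number weeks continuously from the anchor date with DATEDIFF, which counts Saturday-to-Sunday boundary crossings. A sketch against the same CTE (with @StartDate = '2011-01-01', a Saturday, 2011-01-01 lands in week 1, 2011-01-02 starts week 2, and the numbering keeps climbing past week 53 into 2012):

```sql
declare @StartDate datetime = '2011-01-01',
        @EndDate   datetime = '2012-04-01';   -- covers the 65 weeks requested

with Calendar as (
    SELECT cast(@StartDate as datetime) [Date]
    union all
    SELECT [Date] + 1 FROM Calendar WHERE [Date] + 1 < @EndDate
)
SELECT [Date],
       datediff(ww, @StartDate, [Date]) + 1 [week]  -- continuous, never resets at year end
FROM Calendar
OPTION (MAXRECURSION 0)
```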

SELECT FROM Multiple tables with names in a table

Posted: 27 Mar 2013 05:34 AM PDT

I am trying to do a query that I am not sure can be written without using a cursor and building dynamically. But if someone can help me figure out how to do it without having to use cursors that would be awesome.So I have a table called AppSystem.ApplianceTypes. It has a varchar column named ApplianceTypeTableName that contains the name of other tables in the form of schema.tablename. There are 71 rows of tablenames in the AppSystem.ApplianceTypes table. Each of the 71 tables have a column inside them called MFG. What I want to do is do a query that lists the MFG values in all 71 tables in the same query (like a UNION). By the way, the 71 rows in AppSystem.ApplianceTypes will grow in the future as we add new tables.If someone can figure this out, I will be praising them as a SQL King/Queen. :) If you don't think it can be done without cursors just let me know and I will figure that part out using the cursor.Thanks,David
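Building the UNION dynamically from the name table avoids an explicit cursor. A hedged sketch (it assumes every ApplianceTypeTableName value is a clean schema.table identifier; quote with QUOTENAME per part if that isn't guaranteed):

```sql
DECLARE @sql nvarchar(max);

-- Concatenate one "SELECT MFG FROM <table>" per row, joined by UNION ALL:
SELECT @sql = STUFF((
    SELECT ' UNION ALL SELECT MFG, ''' + ApplianceTypeTableName
           + ''' AS SourceTable FROM ' + ApplianceTypeTableName
    FROM AppSystem.ApplianceTypes
    FOR XML PATH('')), 1, 11, '');   -- strip the leading ' UNION ALL '

EXEC (@sql);
```

Because the statement is built from the table at run time, it automatically picks up new rows as tables are added.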

[SQL Server 2008 issues] There is insufficient memory available in the buffer pool.

[SQL Server 2008 issues] There is insufficient memory available in the buffer pool.


There is insufficient memory available in the buffer pool.

Posted: 27 Mar 2013 03:09 PM PDT

Please help me. I am trying to import data from CSV to a table using BULK INSERT, but it gives me an "insufficient memory available in the buffer pool" error. What do I have to do to resolve this issue? Regards, Arjun.

Find a word in a String

Posted: 27 Mar 2013 01:54 PM PDT

How do I find a word in a string? The word stands by itself. In this example, CD is the word; it should only return row 1, row 3, and row 4, as cd appears separately in those rows.

Table A, Values (varchar):
abbc cd ef
abcdef
adb ef cd
ad cd eg
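A common pattern is to pad both the column and the search term with the delimiter, so matches at the start and end of the string also hit (this sketch assumes space-delimited words and approximates the table/column names above):

```sql
SELECT [Values]
FROM TableA
WHERE ' ' + [Values] + ' ' LIKE '% cd %';
```

With the sample data this returns only the rows where cd stands alone; 'abcdef' is excluded because there is no space on either side of cd.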

The delivery extension for this subscription could not be loaded.

Posted: 27 Mar 2013 12:39 AM PDT

Hi all, I have a single SSRS report which is set up with a daily (weekday) subscription. Randomly I get the following error when the subscription fires: "The delivery extension for this subscription could not be loaded." This doesn't happen all the time; it is random. Sometimes the report sends out via email fine, then sometimes not. Does anyone have any ideas why this would be happening? Unfortunately Google isn't being much help on this one. I am running SQL Server 2008 R2 SP1. Cheers, Chris.

Find out The column list which are referenced by paricular table in stored procedure using DMV

Posted: 27 Mar 2013 05:10 PM PDT

How do I find the list of columns of a particular table that are referenced in a stored procedure, using DMVs?

how i can save image in sql server 2008

Posted: 27 Mar 2013 06:20 PM PDT

Hi everyone, I am using VB 6.0 and I want to save my images in SQL Server 2008. How can I save and retrieve an image from SQL Server?

Existing While Loop

Posted: 27 Mar 2013 05:07 AM PDT

I'm looking at some code from a project that is not performing very well, and I've noticed a while loop on a couple of the data-gathering procedures that does a number of retries. But there's no obvious BREAK, only a RETURN 0, as per below:

WHILE @Retry < @Loopcount
BEGIN
    BEGIN TRY
        --DO STUFF
        RETURN 0
    END TRY
    BEGIN CATCH
        SET @Retry = @Retry + 1
    END CATCH
END

Surely there needs to be a BREAK rather than a RETURN, or does the RETURN act like a BREAK in this instance?
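For what it's worth, RETURN and BREAK behave differently here: RETURN exits the whole procedure (or batch) immediately, while BREAK would only leave the loop and continue with the statements after it. A small illustration (the procedure and return codes are hypothetical):

```sql
CREATE PROCEDURE dbo.RetryDemo AS
BEGIN
    DECLARE @Retry int = 0, @Loopcount int = 3;
    WHILE @Retry < @Loopcount
    BEGIN
        BEGIN TRY
            -- do stuff
            RETURN 0;      -- success: control leaves the procedure entirely
        END TRY
        BEGIN CATCH
            SET @Retry = @Retry + 1;
        END CATCH
    END
    RETURN 1;              -- reached only if every retry failed
END
```

So the retry loop works as intended on success, but a BREAK would be needed if any cleanup code after the loop had to run.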

how to group using cases

Posted: 27 Mar 2013 05:36 AM PDT

Hi, I need to group people who together weigh less than 200, ordered by age (from the oldest to the youngest), and the result needs to work with bigger tables. How can I build a case? This is the data. My major problem is that I don't know how to build a case that, once we have the first group of people, will jump to the next person.

create table #Person (
    PersonId int identity(1,1),
    PersonName nvarchar(100),
    PersonHeight int,
    PersonWeight int,
    PersonBorn datetime
)

insert into #Person (PersonName, PersonHeight, PersonWeight, PersonBorn)
values ('Loskov', 180, 80, '1988-01-19'),
       ('Sychev', 190, 88, '1966-05-15'),
       ('Evseev', 174, 91, '1974-12-29'),
       ('Maminov', 166, 70, '1981-09-05'),
       ('Orlov', 176, 89, '1972-12-29')

Thanks!

Migration

Posted: 11 Oct 2012 10:17 AM PDT

Any articles/guidance/experiences on migrating application data from iSeries to a SQL Server environment using SSIS as well as WhereScape? Thanks.

MS SQL + MS Visual Studio VS Wherescape RED

Posted: 24 Dec 2010 07:53 PM PST

Have you ever heard of or used WhereScape RED for data warehouse development? The fact is it is so popular in New Zealand that it may have already hijacked the whole NZ BI industry. However, after being forced to use this product for MS SQL DW and database projects for months, I found it can nowhere compete with MS SQL + MS Visual Studio itself. As an ETL tool, SSIS is far more flexible and scalable. As to version control, SourceSafe or Team Foundation is far more sophisticated. In terms of documentation, BI Documenter is certainly much more advanced. What I also found is that WhereScape is not able to keep up with the new SQL Server functionality. The auto-generated SQL script is good from a standardization point of view but largely diminishes the developer's SQL skills. Indeed, I am totally lost as to why we need it in a Microsoft SQL environment. Could anybody give me their thoughts to clear my doubts?

Slow Cascade Stored Procedure & Hang

Posted: 14 Mar 2013 06:11 AM PDT

Table image: http://tinypic.com/r/1075g6v/6

So I have this query which searches id by id and calculates the cost accordingly, but it is very slow and I would like to understand how I could convert it into a set-based operation. Depending on our condition we calculate our modeled costs differently. When a user updates a driver we can run an update on the entire driver column based on where it has changed. But when it comes to calculating the modeled cost, we do it row by row, as the fixed cost differs, and then divide by months. I have pasted the code below. Is there still a way to do this with a set-based operation?

First we update the drivers in the same table depending on what value has changed, using an update, and then update the modeled cost row by row (which is really slow).

Code:

SELECT @rowCounter = 1, @totalrows = @@ROWCOUNT

WHILE @rowCounter <= @totalrows
BEGIN
    SELECT @currentId = tempId FROM @temp WHERE row = @rowCounter

    SELECT @newModeledCost =
        CASE
            WHEN NOT EXISTS (SELECT 1 FROM dbo.DIMSTD_SCENARIO
                             WHERE SCENARIO0_Name = SCENARIO
                               AND SCENARIO2_Name = 'Model')
            THEN ISNULL(DriverValue1,0)*ISNULL(DriverValue2,0)*ISNULL(UnitA,0)*ISNULL(UnitB,0)+ISNULL(FixedCost,0) -- normal allocation for all scenarios
            ELSE (ISNULL(UnitA,0) * (ISNULL(DriverValue1,0)/ISNULL(NULLIF(DriverValue2,0),1)) * ISNULL(UnitB,0))+ISNULL(FixedCost,0)
            --(ISNULL(unita,0) * (ISNULL(DriverValue1,0)/ISNULL(DriverValue2,0))*ISNULL(UnitB,0))+ISNULL(FixedCost,0) -- allocation for model scenarios
        END,
        @oldModeledCost = ISNULL(ModeledCost,0),
        @newOct = (ISNULL(@newModeledCost,0) * (ISNULL(Oct,0) / ISNULL(NULLIF(@oldModeledCost,0),1))),
        @newNov = (ISNULL(@newModeledCost,0) * (ISNULL(Nov,0) / ISNULL(NULLIF(@oldModeledCost,0),1))),
        @newDec = (ISNULL(@newModeledCost,0) * (ISNULL(Dec,0) / ISNULL(NULLIF(@oldModeledCost,0),1))),
        @newJan = (ISNULL(@newModeledCost,0) * (ISNULL(Jan,0) / ISNULL(NULLIF(@oldModeledCost,0),1))),
        @newFeb = (ISNULL(@newModeledCost,0) * (ISNULL(Feb,0) / ISNULL(NULLIF(@oldModeledCost,0),1))),
        @newMar = (ISNULL(@newModeledCost,0) * (ISNULL(Mar,0) / ISNULL(NULLIF(@oldModeledCost,0),1))),
        @newApr = (ISNULL(@newModeledCost,0) * (ISNULL(Apr,0) / ISNULL(NULLIF(@oldModeledCost,0),1))),
        @newMay = (ISNULL(@newModeledCost,0) * (ISNULL(May,0) / ISNULL(NULLIF(@oldModeledCost,0),1))),
        @newJun = (ISNULL(@newModeledCost,0) * (ISNULL(Jun,0) / ISNULL(NULLIF(@oldModeledCost,0),1))),
        @newJul = (ISNULL(@newModeledCost,0) * (ISNULL(Jul,0) / ISNULL(NULLIF(@oldModeledCost,0),1))),
        @newAug = (ISNULL(@newModeledCost,0) * (ISNULL(Aug,0) / ISNULL(NULLIF(@oldModeledCost,0),1))),
        @newSep = (ISNULL(@newModeledCost,0) * (ISNULL(Sep,0) / ISNULL(NULLIF(@oldModeledCost,0),1)))
    FROM dbo.TBF_BUDGETExpenses
    WHERE BudgetId = @currentId
    --and not exists (select 1 from dbo.DIMSTD_SCENARIO where SCENARIO0_Name = SCENARIO and SCENARIO2_Name = 'Model')

    UPDATE dbo.TBF_BUDGETExpenses
    SET ModeledCost = @newModeledCost,
        Oct = @newOct, Nov = @newNov, Dec = @newDec, Jan = @newJan,
        Feb = @newFeb, Mar = @newMar, Apr = @newApr, May = @newMay,
        Jun = @newJun, Jul = @newJul, Aug = @newAug, Sep = @newSep,
        Username = 'Cascade',
        lastmodified = getdate()
    WHERE BudgetId = @currentId AND @oldModeledCost <> 0

    PRINT 'Record Update ' + CAST(@currentId AS VARCHAR(15))

    SET @rowCounter = @rowCounter + 1
END
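Since every month's new value is just the new modeled cost scaled by that month's share of the old cost, the whole loop can usually be collapsed into one set-based UPDATE. A sketch, under the assumptions that BudgetId is unique in TBF_BUDGETExpenses and that @temp holds the ids to recalculate:

```sql
UPDATE e
SET ModeledCost  = x.NewCost,
    Oct = x.NewCost * ISNULL(e.Oct,0) / x.OldCost,
    Nov = x.NewCost * ISNULL(e.Nov,0) / x.OldCost,
    -- ... repeat the same expression for Dec through Sep ...
    Username     = 'Cascade',
    lastmodified = GETDATE()
FROM dbo.TBF_BUDGETExpenses AS e
JOIN @temp AS t ON t.tempId = e.BudgetId
CROSS APPLY
(
    SELECT
        NewCost = CASE WHEN NOT EXISTS (SELECT 1 FROM dbo.DIMSTD_SCENARIO s
                                        WHERE s.SCENARIO0_Name = e.SCENARIO
                                          AND s.SCENARIO2_Name = 'Model')
                       THEN ISNULL(e.DriverValue1,0)*ISNULL(e.DriverValue2,0)
                            *ISNULL(e.UnitA,0)*ISNULL(e.UnitB,0)+ISNULL(e.FixedCost,0)
                       ELSE ISNULL(e.UnitA,0)
                            *(ISNULL(e.DriverValue1,0)/ISNULL(NULLIF(e.DriverValue2,0),1))
                            *ISNULL(e.UnitB,0)+ISNULL(e.FixedCost,0)
                  END,
        OldCost = ISNULL(NULLIF(e.ModeledCost,0),1)   -- same guard as the loop
) AS x
WHERE ISNULL(e.ModeledCost,0) <> 0;
```

The CROSS APPLY computes the per-row cost once so the twelve month columns can all reference it, mirroring what the loop's scalar variables did.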

Error in application when started by SQL server agent

Posted: 27 Mar 2013 08:06 AM PDT

I'm running SQL Server 2008 R2. I have a C# app that accesses a database and then constructs and sends an email based on the info. Outlook is used to send the email. I created an SSIS job to run this on a schedule. The app would get an error sending the email when I started it from the 'job' tab in SQL Server Agent, even if I signed on as the SSIS job owner. So I created a PROXY/credentials (the PROXY uses an id that has access to the Outlook profile I'm using to create the email) and set the SSIS job to 'run as' the PROXY. This worked, and when I start the SSIS job (from SQL Server Agent) while signed in as the owner of the job, the emails get sent. However, if I try to start it while signed on as a different user (who does have access to the Outlook profile), the app errors. I also receive the same error if the job is started by the scheduler. The error is "Error in Outlook logon: The server is not available." First time asking a question here, so hopefully I'm making sense.

Execute SQL in all databases

Posted: 27 Mar 2013 01:50 AM PDT

Hi, I would like to execute multiple statements in all databases. I created the following code:

DECLARE @C1_NAME VARCHAR(1000), @SQL VARCHAR(1000)

DECLARE C1 CURSOR FOR
SELECT Name FROM sys.databases WHERE SUBSTRING(name,1,3) = 'MyD'
ORDER BY NAME

OPEN C1
FETCH NEXT FROM C1 INTO @C1_NAME
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @SQL = 'USE ' + @C1_NAME;
    PRINT @SQL
    EXECUTE( @SQL );
    SET @SQL = 'SELECT * FROM MyTable'; -- or update MyTable set field1 = 1;
    PRINT @SQL
    EXECUTE( @SQL );
    FETCH NEXT FROM C1 INTO @C1_NAME
END
CLOSE C1
DEALLOCATE C1
GO

The problem is that it runs only on the database where the instruction was executed... Is there another way to go through all the databases and execute SQL commands on each one? Thanks!! Jose Anchieta Carvalho Junior
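The likely culprit: a USE issued through its own EXECUTE() only lasts for that child batch, so the second EXECUTE() still runs in the original database. Putting USE and the target statement into the same dynamic batch fixes it. A sketch (MyTable/field1 as in the post):

```sql
DECLARE @C1_NAME sysname, @SQL nvarchar(max);

DECLARE C1 CURSOR FOR
    SELECT name FROM sys.databases WHERE name LIKE 'MyD%';

OPEN C1;
FETCH NEXT FROM C1 INTO @C1_NAME;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- USE and the statement live in ONE dynamic batch, so the database
    -- context is still in effect when the statement runs.
    SET @SQL = N'USE ' + QUOTENAME(@C1_NAME)
             + N'; UPDATE dbo.MyTable SET field1 = 1;';
    EXECUTE (@SQL);
    FETCH NEXT FROM C1 INTO @C1_NAME;
END
CLOSE C1;
DEALLOCATE C1;
```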

how to calculate average time in sql?

Posted: 27 Mar 2013 05:09 AM PDT

Hi, I have a big problem here. I have a list of persons in my database. Each person has several entry dates and exit dates. I would like to know the average time those persons stay: if the first person stays 2 days, the second 3 days, and so on, I would like to calculate the average of 2+3+... The problem is some persons have an entry date without an exit date, so I need to eliminate those persons; I will only consider persons that have an exit following an entry. Thanks for your help - it's very urgent and important.
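A minimal sketch, assuming a table like Stays(PersonId, EntryDate, ExitDate) where an open stay has ExitDate = NULL (all names hypothetical):

```sql
SELECT AVG(DATEDIFF(day, EntryDate, ExitDate) * 1.0) AS AvgDaysStayed
FROM dbo.Stays
WHERE ExitDate IS NOT NULL;   -- drop persons with no exit recorded
```

The `* 1.0` forces a decimal average; AVG over the raw integer DATEDIFF would truncate to a whole number of days.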

CC# in String

Posted: 27 Mar 2013 06:35 AM PDT

Hi Guys, I have a table with a column that may have a credit card number in the string. What I need to do is find rows that might have credit card numbers... Does anyone have a function (NOT using CLR) to find a group of numbers in a string that might look like a CC#? The string also has a date in it. I was working on something like this, but I am sure others have already worked on a function to do so...

[quote]WITH Tokens AS
(
    SELECT 'you 123456789123-9999' AS Memo UNION ALL
    SELECT '11/29/2012 me 99123456789-99' UNION ALL
    SELECT 'this is my 9912345678999-88--99' UNION ALL
    SELECT 'What the 11/29/2012' UNION ALL
    SELECT '11/29/2012 98897788778'
)
SELECT Memo,
    SUBSTRING(Memo, NULLIF(PATINDEX('%[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]%', Memo), 0), 20) AS CCN,
    PATINDEX('%[0-9][0-9 -][0-9 -][0-9 -][0-9 -][0-9 -][0-9 -][0-9 -][0-9 -][0-9 -][0-9 -][0-9 -][0-9 -]%', Memo),
    SUBSTRING(Memo, PATINDEX('%[0-9][0-9 -][0-9 -][0-9 -][0-9 -][0-9 -][0-9 -][0-9 -][0-9 -][0-9 -][0-9 -][0-9 -][0-9 -]%', Memo), 20) AS CCN2
FROM Tokens
WHERE PATINDEX('%[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]%', REPLACE(REPLACE(Memo, '-', ''), ' ', '')) > 0[/quote]

Thanks in advance!
Laura

Can I use a case in a where?

Posted: 27 Mar 2013 05:04 AM PDT

Tried this below with no luck. I wanted to use today's date if the parameter @enddate was greater than today:

WHERE (KindOfDay = 'Weekday') AND (Date BETWEEN @startdate AND CASE When @enddate > getdate() THEN Getdate() ELSE (@enddate + 1))

Thanks in advance... again :)
Joe
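Yes - a CASE expression is allowed in a WHERE clause; the attempt above is only missing the closing END of the CASE. A corrected sketch (column and parameter names as in the post):

```sql
WHERE KindOfDay = 'Weekday'
  AND [Date] BETWEEN @startdate
                 AND CASE WHEN @enddate > GETDATE()
                          THEN GETDATE()
                          ELSE @enddate + 1
                     END   -- the END was missing in the original attempt
```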

SQL registration in Management STudio

Posted: 25 Mar 2013 11:34 PM PDT

From time to time I need to test a user's login with SQL, so I go to a registered server's properties and test from there. Somehow I saved a user's id and password. But now I cannot resave or overwrite it with the sa userid and password. I even tried deleting the registration and re-registering, but the user id keeps coming back.

Arithmetic overflow error converting varchar to data type numeric.

Posted: 27 Mar 2013 02:18 AM PDT

Here is my query:

select a.row, sum(CONVERT(decimal(11,0), a.value)) as total
from
(
    SELECT adc.row, adc.col, adc.value
    FROM AnesthesiaDrugCells adc
    right join AnesthesiaDrugs ad on adc.row = ad.row
    where adc.ProcID = 11080 and ad.ProcID = 11080
      AND adc.row >= 2 and adc.row <= 11
      and adc.value != '' and ad.transmissiontype != 'I'
      and adc.value != '-' and (ISNUMERIC(adc.value) = 1)
) as a
where (ISNUMERIC(a.value) = 1)
group by a.row
order by a.row

resulting in the error 'Arithmetic overflow error converting varchar to data type numeric.'

If I just execute the inner query, I get:

row col value
7 1 50
5 1 1
10 5 4
8 1 150
8 2 50
6 1 100
6 4 50
4 1 35

Now I need to sum all those values for each row. My result should be like:

row value
4 35
5 1
6 200

and so on. I'm not sure where I am going wrong. Please, someone help. Thank you.
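One likely cause: ISNUMERIC() returns 1 for values such as '1e9', '$', '.' or ',' that can still overflow CONVERT(decimal(11,0), ...). On SQL Server 2008 (where TRY_CONVERT does not yet exist) a stricter digits-only filter is a common workaround - a sketch reusing the same derived table as the post:

```sql
SELECT a.row, SUM(CONVERT(decimal(11,0), a.value)) AS total
FROM ( /* the same inner SELECT as above */ ) AS a
WHERE a.value NOT LIKE '%[^0-9]%'   -- keep only strings made purely of digits
  AND a.value <> ''
GROUP BY a.row
ORDER BY a.row;
```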

Updating Part of Filename

Posted: 27 Mar 2013 03:08 AM PDT

Hello Everyone, what would be the best way to go about converting part of this filename from upper case to lower case?

FROM: [b]APR[/b]0111OFR_AAAAAAAA_01
TO: Apr0111OFR_AAAAAAAA_01
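One way, assuming the first three characters are always the month abbreviation: STUFF can lower-case just characters 2-3 in place. A sketch:

```sql
DECLARE @fn varchar(50) = 'APR0111OFR_AAAAAAAA_01';

-- Replace characters 2-2 with their lower-cased form: APR -> Apr
SELECT STUFF(@fn, 2, 2, LOWER(SUBSTRING(@fn, 2, 2)));
```

For a whole column, the same expression works in an UPDATE: `SET Filename = STUFF(Filename, 2, 2, LOWER(SUBSTRING(Filename, 2, 2)))`.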

running total possible??

Posted: 27 Mar 2013 01:45 AM PDT

Hi, I have a view which selects name and date, say, to make it short... Each name can contain more than one record. I want to keep a running total of names by month, for example:

name1 1/1/12
name1 1/2/12
name3 1/2/12
name3 2/1/12
name1 2/1/13

The end result I want to show: January (or 1) = 2 (2 names), February = 2. I'm also writing this out to SSRS, so maybe a formula? Thanks, Joe
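Reading "2 names" as the count of distinct names per month, COUNT(DISTINCT ...) with GROUP BY does this in SQL before the data reaches SSRS. A sketch with hypothetical view and column names:

```sql
SELECT YEAR(EventDate)  AS Yr,
       MONTH(EventDate) AS MonthNo,
       COUNT(DISTINCT Name) AS NameCount
FROM dbo.MyView
GROUP BY YEAR(EventDate), MONTH(EventDate)
ORDER BY Yr, MonthNo;
```

Grouping by year as well keeps January 2012 and January 2013 as separate buckets; drop the year if they should be combined.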

Space between parameters in reporting services

Posted: 27 Mar 2013 03:39 AM PDT

Hi, I am stuck with an issue which relates to managing spaces between parameters. I have a multi-value parameter with values like 'KWR TFR' and 'KFT IUT'. I know there is data available for these values in the database, but because of the space in the parameter values it is not detected and the report draws a blank. If there is a value like 'KIETYR', then the data for it is displayed on the report. I have managed to use the split function for the multi-value parameter but don't know what to do with this space in the values. Does anyone have any ideas? Thanks, Paul

Count number of days that a doctor works

Posted: 27 Mar 2013 12:09 AM PDT

I need to count the number of days that a doctor works in a month. My data has the doc's name and appointment date. The number of visits per month is:

select MonthName, DocName, count(*)
from Appointments
group by MonthName, DocName

What I need to do is count each appointment day as one for each doc, then sum the count to get days worked. I just don't know how to do that efficiently.
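COUNT(DISTINCT ...) on the appointment date collapses multiple visits on the same day into one, which is exactly "days worked". A sketch (AppointmentDate is an assumed column name; wrap it in CONVERT(date, ...) if it is a datetime with a time part):

```sql
SELECT MonthName,
       DocName,
       COUNT(DISTINCT AppointmentDate) AS DaysWorked
FROM Appointments
GROUP BY MonthName, DocName;
```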

Extract between &lt;div&gt; tags

Posted: 27 Mar 2013 02:37 AM PDT

I need to extract data from a SharePoint list, where the data is written within div tags. I got this so far:

[code="sql"]DECLARE @c varchar(100)
SET @c = '<div>Sunshine</div>'

SELECT SUBSTRING( @c, CHARINDEX('>', @c) + 1, LEN(@c) - CHARINDEX('>', @c) - CHARINDEX('<', REVERSE(@c)))[/code]

But when I adapt this to my table it says:

[quote]Argument data type ntext is invalid for argument 1 of len function.[/quote]

How can I work around this problem? Thanks in advance!
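LEN (like most string functions) does not accept ntext directly; casting the column to nvarchar(max) first is the usual workaround. A sketch with hypothetical table/column names:

```sql
SELECT SUBSTRING(c, CHARINDEX('>', c) + 1,
                 LEN(c) - CHARINDEX('>', c) - CHARINDEX('<', REVERSE(c)))
FROM (SELECT CAST(MyNtextColumn AS nvarchar(max)) AS c
      FROM dbo.SharepointList) AS s;
```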

Clustering an Existing SQL 2008 R2 Server

Posted: 27 Mar 2013 03:10 AM PDT

Hi distinguished SQL peeps, I have a need to cluster two physical SQL servers to achieve some form of HA. The tricky part is that these servers are already live and there isn't any alternative tin to put them onto (the HA was an afterthought, as always). I am aware that you cannot make an existing SQL instance into a cluster, but I was wondering about anyone's experience with this, so all comments are welcome. My idea is to create two new instances on each of the servers to facilitate a side-by-side upgrade on the same servers. This would mean that if it all goes belly up I can revert to the default instance. One new instance would be the new active node for that server and the second would be the inactive node, so they would effectively fail over onto a single server in a critical failure scenario, running at a potentially reduced capacity (not an issue, as the servers are highly spec'd). Both servers have 1000+ DBs residing on them. Has anyone ever tried this? Do you know it's not possible? If so, let me know your experiences. Much obliged, E

Options for deleting millions of rows from a heap?

Posted: 27 Mar 2013 01:22 AM PDT

One of the Devs I work with "inherited" a DB that started life back in the '80s with ISAM and such. It's only a couple of tables, but one has ~500M rows, and weekly he needs to delete several million rows from it. This table has NO PK / clustered index, nor are any of the columns (or group of columns) a good choice for one (every column has duplicates). Also, thankfully, there are no FKs on any of the tables. Right now, the Dev is modifying his delete routine to read in a couple thousand items from one table, delete them, then go and grab the next couple thousand, wrapping the delete in a transaction to help keep the log from ballooning out of control... Myself and another Dev have been pushing him to add an identity column as a PK / CI to the table, so that he could populate his delete table with the identity value (hopefully speeding up the process). Of course, with a couple hundred million rows, it would take a while to populate that ID column... Thanks all, Jason
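A common pattern for this kind of purge is deleting in capped batches with DELETE TOP (N), which keeps each transaction (and log reservation) small without needing a key at all. A sketch, assuming the weekly criteria can be expressed as a WHERE clause (table and column names hypothetical):

```sql
DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (5000)
    FROM dbo.BigHeap
    WHERE PurgeDate < DATEADD(week, -1, GETDATE());  -- hypothetical criteria
    SET @rows = @@ROWCOUNT;   -- loop ends when a batch deletes nothing
END
```

One heap-specific caveat worth knowing: pages emptied by DELETE on a heap may not be deallocated unless the delete acquires a table lock (e.g. WITH (TABLOCK)), so the table can stay physically large after the purge.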

Any potential issues with putting a Stored Procedure in Master?

Posted: 27 Mar 2013 01:12 AM PDT

I'm working in a rather locked-down environment, and would rather avoid adding a DB if I can. I'm thinking of using the CheckDB procedure created by [url=http://spaghettidba.com/2011/11/28/email-alert-dbcc-checkdb/]Gianluca Sartori[/url] to run a weekly DBCC CHECKDB against all DBs on the servers. Other than the normal "don't use the system DBs for user stuff", are there any potential issues? Although I am going to check and see if I can create a DB for these sorts of tools, and if so, use that instead. Thanks, Jason

Enabling Trace Flag T1118

Posted: 25 Mar 2013 04:17 PM PDT

Hi, we are using SQL Server 2008 R2 Express. We heard enabling T1118 is good for tempdb contention. Somewhere it said it's not much help from SQL Server 2005 onwards. We are thinking about enabling it. Are there any disadvantages to enabling T1118? Thank you

Table design and query performance

Posted: 26 Mar 2013 11:36 PM PDT

Hi All, I'm working on a database where a table contains an ID that is prefixed with a 2-letter identifier type, i.e.:

AB123456
AB345234
AB423423
XY456789
XY234344

Ideally I would like to remove the prefix completely and create a new column for it. However, I don't have the capacity to do this now, as a lot of changes would be required. My question is: when querying the data, say I wanted to get only the XY records, would it be quicker if I created a new column storing these types (AB, XY) and then filtered where column = 'XY', while retaining the prefix on the id? Or create a bit column where XY is set to 1 and filter on that? Or, since the data is already set up with the prefix, is it quicker just to filter where the left 2 characters are XY? Many thanks
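One low-impact option that avoids touching existing data or code: a persisted computed column over the prefix, which can then be indexed and filtered on directly. A sketch with assumed table/column names:

```sql
ALTER TABLE dbo.MyTable
    ADD IdPrefix AS (LEFT(Id, 2)) PERSISTED;   -- computed from the existing Id

CREATE INDEX IX_MyTable_IdPrefix ON dbo.MyTable (IdPrefix);

SELECT * FROM dbo.MyTable WHERE IdPrefix = 'XY';
```

Worth noting before adding anything: if Id itself is indexed, `WHERE Id LIKE 'XY%'` can already use an index seek on that index, so it is worth measuring that first.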

Inserting rows from one table to another another with PK column value incrementing

Posted: 25 Mar 2013 09:15 PM PDT

Hi, we are using SQL Server 2008 R2 Express. We have two tables, table1(id int, name nvarchar(50)) and table2(id int, name nvarchar(50)). For both tables id is the primary key. I want to move rows from table2 to table1 with the PK value incrementing, i.e. I want max(id)+1 of table1 for all rows being copied. I tried this query:

[code="sql"]declare @root int
select @root = max(id + 1) from Bgd_common.dbo.table1
insert into Bgd_common.dbo.table1 (id, Name)
select @root, name from Bgd_common.dbo.table2[/code]

But it's giving the error "Violation of PRIMARY KEY constraint 'PK_Table1'. Cannot insert duplicate key in object 'dbo.Table1'." How can I do this?
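The snippet assigns the SAME @root value to every copied row, so the second row already collides with the first. ROW_NUMBER() can hand each row its own offset from the current maximum - a sketch using the names from the post:

```sql
DECLARE @root int;
SELECT @root = ISNULL(MAX(id), 0) FROM Bgd_common.dbo.table1;

INSERT INTO Bgd_common.dbo.table1 (id, Name)
SELECT @root + ROW_NUMBER() OVER (ORDER BY id),  -- max+1, max+2, ...
       name
FROM Bgd_common.dbo.table2;
```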

Wednesday, March 27, 2013

[SQL Server] New San/Compellent testing

[SQL Server] New San/Compellent testing


New San/Compellent testing

Posted: 27 Mar 2013 12:38 AM PDT

I need to create a query that will execute for 20 minutes or more so that connectivity and redundancy can be tested on the SAN. Any suggestions will be greatly appreciated. SQL 2008 R2 Developer. Thanks in advance.
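One lightweight option is a timed loop that keeps the connection active for a fixed period. A sketch that runs for 20 minutes, issuing a cheap query each second:

```sql
DECLARE @stop datetime = DATEADD(minute, 20, GETDATE());
WHILE GETDATE() < @stop
BEGIN
    -- any cheap query keeps the session busy; swap in real reads/writes
    -- against your databases if you also want I/O across the SAN paths
    SELECT COUNT(*) FROM sys.objects;
    WAITFOR DELAY '00:00:01';
END
```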
