Saturday, June 29, 2013

[how to] Get the Time of Last update of a column

Get the Time of Last update of a column

Posted: 29 Jun 2013 08:38 PM PDT

Good evening. This command gives the date of the last update for a table:

SELECT UPDATE_TIME
FROM   information_schema.tables
WHERE  TABLE_SCHEMA = 'MyDB'
  AND  TABLE_NAME = 'MyTable'

But I want to find the time of the last update of a particular column of a table. I can't use triggers; I want to get the time of the last update of a specific column from a system table.

I hope I explained my problem well. Thanks

MySQL design for big data

Posted: 29 Jun 2013 12:19 PM PDT

This is my first post on this forum, although I have been following it for a while. Here's my question, and I really need help with this one.

I have a database which contains trip services. Every trip service has 30 fields of data, plus GPS position logs of vehicle movement, Asterisk call logs, receipts, etc. By my calculation, over a 1-year period the database size would be at least 2.3 TB. Needless to say, querying needs to go in many-to-many fashion, all data needs to be available within seconds, and I need to have it at my disposal for at least 3 years. In the future there will be around 2000 queries per second generating about 4000 records per second across approximately 6000 fields, and also approximately 1000-2000 UPDATE and 4000 SELECT operations.

My question is: what is the best practice to keep my database responsive and effective, and what hardware is most suitable for that kind of demand? I have read the normalization article and I understand the goal. The hardware I am planning to get is as follows (HP ProLiant running CentOS):

CPU : 2 x Intel Xeon Processor E5-2620
RAM : 48 GB
HDD : 8 x HP 300 GB 6G SAS 15K rpm SFF (2.5-inch)
Network : 4 x Gigabit

Thanks in advance,

best regards

What is the best way to create database table? [on hold]

Posted: 29 Jun 2013 08:04 AM PDT

I am developing a web application similar to CarDekho or CarWale. Like those websites, my application also contains full details of cars, such as brands and models. Car details include: basic overview, pictures, features, specifications, dimensions, colors, etc.

My question is: what is the best practice for creating the database tables? What I have concluded so far is that I will create separate tables for brands and models and will use a foreign key to join them. But for car details, should I put all the detailed information in one table, or should I keep it in different tables and join them with the model's primary key?
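One way to sketch this (only a sketch; every table and column name here is hypothetical, and the right split ultimately depends on how the data is queried) is to keep one-to-one details in a single wide table and move one-to-many details such as pictures into their own tables, all joined through the model's primary key:

```sql
CREATE TABLE brand (
  brand_id INT PRIMARY KEY,
  name     VARCHAR(100) NOT NULL
);

CREATE TABLE model (
  model_id INT PRIMARY KEY,
  brand_id INT NOT NULL REFERENCES brand (brand_id),
  name     VARCHAR(100) NOT NULL
);

-- One-to-one details: one row per model, so a single wide table is fine.
CREATE TABLE model_specification (
  model_id  INT PRIMARY KEY REFERENCES model (model_id),
  overview  TEXT,
  length_mm INT,
  width_mm  INT,
  height_mm INT
);

-- One-to-many details: a model has many pictures (and colors, etc.), so
-- these must be separate tables in any case.
CREATE TABLE model_picture (
  picture_id INT PRIMARY KEY,
  model_id   INT NOT NULL REFERENCES model (model_id),
  url        VARCHAR(255) NOT NULL
);
```
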

How to select rows which have max and min of count?

Posted: 29 Jun 2013 01:18 PM PDT

I have the following table in SQL Server 2012:

CREATE TABLE [MyTable] (
    [Id] UNIQUEIDENTIFIER NOT NULL PRIMARY KEY DEFAULT NEWID(),
    [MyGroup] INT NOT NULL
);

I want to output a table of the form

        | MyGroup | Count
    Max |         |
    Min |         |

For example, if I have 30 rows in MyTable where MyGroup = 1, 20 rows where MyGroup = 2, and 10 rows where MyGroup = 3, then

        | MyGroup | Count
    Max | 1       | 30
    Min | 3       | 10

What sort of query would output this information?
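Assuming the goal is "which group has the most rows and which has the fewest", one sketch is to aggregate once in a CTE and then pick out the extremes (ties would produce more than one row per label):

```sql
WITH counts AS (
    SELECT MyGroup, COUNT(*) AS cnt
    FROM MyTable
    GROUP BY MyGroup
)
SELECT 'Max' AS Label, MyGroup, cnt AS [Count]
FROM counts
WHERE cnt = (SELECT MAX(cnt) FROM counts)
UNION ALL
SELECT 'Min', MyGroup, cnt
FROM counts
WHERE cnt = (SELECT MIN(cnt) FROM counts);
```

With the example data this returns one 'Max' row (MyGroup 1, count 30) and one 'Min' row (MyGroup 3, count 10).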

Difference between partitioning and fragmentation

Posted: 29 Jun 2013 05:47 AM PDT

I'm trying to learn something about distributed databases and I stumbled upon the concepts of fragmentation and partitioning. Now I'm asking myself: what's the difference? It seems to me as if both do the same thing and produce similar results (dividing a relation horizontally, vertically, or both)...

MariaDB header error

Posted: 29 Jun 2013 03:55 PM PDT

I had a MySQL server on my Ubuntu dev box (I used a "how to install a LAMP stack on Ubuntu" type of tutorial) and I decided to install MariaDB.

So I uninstalled MySQL and installed MariaDB.

~$ mysql --version
mysql  Ver 15.1 Distrib 10.0.3-MariaDB, for debian-linux-gnu (i686) using readline 5.1
~$ mysqld --version
mysqld  Ver 10.0.3-MariaDB-1~precise-log for debian-linux-gnu on i686 (mariadb.org binary distribution)

The installation worked fine and all my databases worked fine, except I get this error when I run my code:

Warning: mysqli::mysqli() [mysqli.mysqli]: Headers and client library minor version mismatch. Headers:50529 Library:100003 in /var/www/test.php on line 5

this is my php code:

$mysqli = new mysqli("localhost", "root", "", "test");

if (mysqli_connect_errno()) {
    printf("Connect failed: %s\n", mysqli_connect_error());
    exit();
}

$query = "SELECT * FROM test";

if ($result = $mysqli->query($query)) {
    while ($row = $result->fetch_assoc()) {
        printf("%s (%s)\n", $row["id"], $row["name"]);
    }
    $result->free();
}

$mysqli->close();

and the mysql info:

Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 40
Server version: 10.0.3-MariaDB-1~precise-log mariadb.org binary distribution

Copyright (c) 2000, 2013, Oracle, Monty Program Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> select version();
+------------------------------+
| version()                    |
+------------------------------+
| 10.0.3-MariaDB-1~precise-log |
+------------------------------+
1 row in set (0.00 sec)

MariaDB [test]> show tables;
+----------------+
| Tables_in_test |
+----------------+
| test           |
+----------------+
1 row in set (0.01 sec)

MariaDB [test]> show create table test\G
*************************** 1. row ***************************
       Table: test
Create Table: CREATE TABLE `test` (
  `id` int(11) DEFAULT NULL,
  `name` varchar(255) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

MariaDB [test]> select * from test;
+------+------+
| id   | name |
+------+------+
|    1 | dev  |
|    2 | qa   |
+------+------+
2 rows in set (0.01 sec)

I tried a mysql_upgrade:

mysql_upgrade --force
Phase 1/3: Fixing table and database names
Phase 2/3: Checking and upgrading tables
Processing databases
information_schema
mysql
mysql.column_stats                                 OK
mysql.columns_priv                                 OK
mysql.db                                           OK
mysql.event                                        OK
mysql.func                                         OK
mysql.gtid_slave_pos                               OK
mysql.help_category                                OK
mysql.help_keyword                                 OK
mysql.help_relation                                OK
mysql.help_topic                                   OK
mysql.host                                         OK
mysql.index_stats                                  OK
mysql.innodb_index_stats                           OK
mysql.innodb_table_stats                           OK
mysql.ndb_binlog_index                             OK
mysql.plugin                                       OK
mysql.proc                                         OK
mysql.procs_priv                                   OK
mysql.proxies_priv                                 OK
mysql.servers                                      OK
mysql.slave_master_info                            OK
mysql.slave_relay_log_info                         OK
mysql.slave_worker_info                            OK
mysql.table_stats                                  OK
mysql.tables_priv                                  OK
mysql.time_zone                                    OK
mysql.time_zone_leap_second                        OK
mysql.time_zone_name                               OK
mysql.time_zone_transition                         OK
mysql.time_zone_transition_type                    OK
mysql.user                                         OK
performance_schema
test
test.test                                          OK
Phase 3/3: Running 'mysql_fix_privilege_tables'...
OK

As far as I know, everything there is OK. What's the problem?

Also, is MariaDB ready for production?

How to manage constants within an Oracle Database

Posted: 29 Jun 2013 09:17 AM PDT

This question touches on a few different parts of Oracle that I'm not particularly familiar with so bear with me:

I am trying to redesign some of the procedures and functions within the database I manage. One of the most annoying things is the use of integers to express the status of rows.

For example, rows that need to be processed get the number 3, rows that have been processed get the number 0, and rows that are in the middle of processing get the number 1. This is a simplified example.

What I was hoping to do was encode those integers as constants, so that when the procedures are written the words would be self-documenting...

I've tried to use packages and functions to manage these constants. Everything comes out a bit messy. The cleanest I've found is:

CREATE OR REPLACE FUNCTION PROCESSED RETURN NUMBER AS
BEGIN
  RETURN 0;
END;
/

This allows you to write the SQL below, which looks relatively neat:

SELECT rows
FROM table
WHERE status = PROCESSED;

The problem I've found is that these columns are indexed to return quickly, but using the functions means the indexes aren't used.

With that background, my question is: how should constants be managed effectively in Oracle? What solution has the best trade-off between visual simplicity, logical organisation, and database performance?
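For comparison, a pattern often weighed against constant functions (a sketch; `row_status` and `my_table` are made-up names) is a package of constants. Plain SQL outside PL/SQL cannot reference package constants directly, but inside PL/SQL they are passed as bind values, so an ordinary index on the status column remains usable:

```sql
CREATE OR REPLACE PACKAGE row_status AS
  processed  CONSTANT NUMBER := 0;
  processing CONSTANT NUMBER := 1;
  pending    CONSTANT NUMBER := 3;
END row_status;
/

BEGIN
  -- The constant arrives as a bind value, so the index on status can be used.
  UPDATE my_table
     SET processed_date = SYSDATE
   WHERE status = row_status.pending;
END;
/
```
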

database structure - complicated requirements

Posted: 29 Jun 2013 04:45 AM PDT

I have a project to build a website, but it's complicated, and I'm having trouble figuring out the best way to build the database to handle these particular requirements.

The site is for local builders and farmers (and anyone else who uses heavy equipment) to rent their machinery amongst themselves. Users should be able to sign up and list an item of equipment, which is then searchable and bookable by other users of the site.

So a builder might sign-up and upload a listing for his concrete mixer. Then another user can search for concrete mixers to hire between 2 dates and place a booking for the mixer through the site.

So far so good.

The problem is that the builder should be able to set a default per-day rate, but they should also be able to say that throughout the month of July, or on the last two weekends in August, the mixer's daily rate is different. So basically every day could end up having a different rate, and I'm having trouble figuring out the most efficient way to structure the database, and how to calculate the total cost of renting for several days if there's potentially a different rate every day.

At the moment I'm imagining having to loop through a 365-element array, but that can't be right. I'm a bit new to this, so I'm probably just confused.
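One way to avoid the 365-element array (a sketch under assumed names): store a default rate on the equipment row plus date-ranged overrides, and let the database sum the per-day price over the booking period with the help of a calendar table holding one row per date:

```sql
CREATE TABLE equipment (
  equipment_id     INT PRIMARY KEY,
  default_day_rate DECIMAL(10,2) NOT NULL
);

CREATE TABLE rate_override (
  equipment_id INT NOT NULL REFERENCES equipment (equipment_id),
  start_date   DATE NOT NULL,
  end_date     DATE NOT NULL,
  day_rate     DECIMAL(10,2) NOT NULL,
  PRIMARY KEY (equipment_id, start_date)
);

-- Total cost of renting equipment 1 from 2013-07-01 to 2013-07-05,
-- assuming a helper table calendar(day) with one row per date:
SELECT SUM(COALESCE(o.day_rate, e.default_day_rate)) AS total_cost
FROM calendar c
JOIN equipment e ON e.equipment_id = 1
LEFT JOIN rate_override o
       ON o.equipment_id = e.equipment_id
      AND c.day BETWEEN o.start_date AND o.end_date
WHERE c.day BETWEEN '2013-07-01' AND '2013-07-05';
```

This assumes overrides for the same equipment never overlap; with overlapping ranges you would need a priority rule.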

When creating remote BLOB store is "RBSFilestreamFile" always the name of the file to be added to the FILEGROUP?

Posted: 29 Jun 2013 04:45 PM PDT

When creating a remote BLOB store in SQL Server (2008 R2) is "RBSFilestreamFile" always the name of the file when adding it to the FILEGROUP like in this query (this is the name I've seen used in every example I've found online, but I need to know for sure)?

ADD FILE (name = RBSFilestreamFile, filename = 'c:\Blobstore')
    TO FILEGROUP RBSFilestreamProvider

I'm asking because I'm working on an application for restoring SharePoint content databases and need to know if I can hardcode this string into the application.

I know you can create file groups and files with any name you want, but specifically for setting up RBS with SharePoint using SQL 2008's built in FILESTREAM provider are these names an expected convention? i.e. Will it work if I name my FILEGROUP and/or FILE something else?
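For context, the fragment above normally sits inside ALTER DATABASE statements along these lines (RBSFilestreamFile and RBSFilestreamProvider are just the names the SharePoint RBS examples conventionally use, not names SQL Server itself requires; WSS_Content stands in for your content database):

```sql
ALTER DATABASE WSS_Content
ADD FILEGROUP RBSFilestreamProvider CONTAINS FILESTREAM;

ALTER DATABASE WSS_Content
ADD FILE (NAME = RBSFilestreamFile, FILENAME = 'c:\Blobstore')
TO FILEGROUP RBSFilestreamProvider;
```
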

What is proper database design for a drupal table with an order field?

Posted: 29 Jun 2013 01:45 AM PDT

I am adding a custom widget to an instance of Drupal 6.x & MySQL 5.5 and came across a problem with updating rows.

I have a table of recipe ingredients where multiple ingredients are tied to a single recipe by node id (nid) & version id (vid).

The primary key is vid, nid & order.

The vid & nid are related to the recipe nid & vid fields.

The table schema is:

+-----------------+------------------+
| Field           | Type             |
+-----------------+------------------+
| vid             | int(10) unsigned |
| nid             | int(10) unsigned |
| order           | int(10) unsigned |
| name            | varchar(255)     |
| unit_of_measure | varchar(32)      |
| quantity        | int(10) unsigned |
+-----------------+------------------+

The problem comes when attempting to reorder the ingredients.

For instance:

+-----+-----+-------+---------+-----------------+----------+
| vid | nid | order |  name   | unit_of_measure | quantity |
+-----+-----+-------+---------+-----------------+----------+
|  5  |  1  |   1   | Chicken |       Lb        |    1     |
|  5  |  1  |   2   |  Rice   |       Cup       |    2     |
|  5  |  1  |   3   |  Thyme  |      Tbsp       |    3     |
+-----+-----+-------+---------+-----------------+----------+

I want to move Thyme to the top of the list, but I can't change the order for Thyme to 1 since that primary key already exists (Chicken). I can't move Chicken down to order 2 because Rice is already there, etc...

My position is that we should add a unique auto-incrementing int field as the sole primary key. This would let us reorder the rows without incident, with the possibility that two rows might end up with the same nid, vid & order.

My coworker's position is that adding a unique auto-increment int field is bad design, because there should never be two different rows with the same vid, nid & order. But following this belief, there are two ways to implement a reorder of the rows:

  1. Update each row's order with some large number (ie- 1001, 1002, 1003) so that the original order is no longer conflicting, then update each row with the correct order values (1, 2, 3).
  2. Delete each row that has the same nid & vid, then insert all the rows again in the correct order.

From the database's perspective, what is the correct approach?
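For reference, the surrogate-key variant being argued about could look like this (a sketch with shortened names; `order` needs backticks because it is a reserved word):

```sql
CREATE TABLE ingredient (
  id      INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  vid     INT UNSIGNED NOT NULL,
  nid     INT UNSIGNED NOT NULL,
  `order` INT UNSIGNED NOT NULL,
  name    VARCHAR(255),
  KEY (vid, nid, `order`)       -- non-unique, so reorders cannot collide
);

-- The whole reorder then collapses to one statement:
UPDATE ingredient
   SET `order` = CASE name WHEN 'Thyme'   THEN 1
                           WHEN 'Chicken' THEN 2
                           WHEN 'Rice'    THEN 3
                 END
 WHERE vid = 5 AND nid = 1;
```

Note that making the (vid, nid, `order`) key UNIQUE would restore the no-duplicates guarantee at rest, but MySQL checks uniqueness row by row during a multi-row UPDATE, so the single-statement swap could still fail.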

what is the best way to update 2 tables via data entry form in access 2007 database?

Posted: 29 Jun 2013 02:45 AM PDT

I have a database with more than 500,000 records. I use Table_a to accession the files by ID number, name, date, address, telephone, etc. Then I use Table_b to archive files and get the position where I should keep each file, just by entering the ID. Is there any way to update the database so it's all one action? I mean, after accessioning the file, could I have a check box or a drop-down list to move the record directly to archiving at the same time?

SQL Server Replication: "ALTER TABLE ALTER COLUMN" is not propagated to subscribers

Posted: 29 Jun 2013 07:45 AM PDT

We are running SQL Server 2008 R2 SP1 as publisher & distributor, and SQL Server 2005 SP3 as subscriber. The replication of schema changes is activated, and the replication has been running for years, including frequent schema changes (new column, new constraints, etc).

The following instruction was sent on the publisher:

use myDatabase
alter table Tbl_TypeLignePaye
      alter column natureTypeLignePaye nvarchar(12)
go

where the field natureTypeLignePaye was originally nvarchar(3) null.

The query ran without errors on the main database. The result is the following:

  1. The field natureTypeLignePaye still appears as nvarchar(3) in the object browser
  2. The column properties screen indicates an nvarchar type with a length of 12
  3. The change was not propagated to the subscribers

Any idea on what is going on with this database?

On the publisher, the object browser window and the column properties window give inconsistent data about the field type and length.

Is it possible to have extra tables in a Slave with MySQL Replication

Posted: 29 Jun 2013 01:45 PM PDT

As my title mentions, I have a Master and a Slave database.

The master is for operational data and my slave is mainly for reporting stuff.

The issue is that I need to create extra tables on the reporting side that can't be on the master, but with the way my replication is currently set up (the simplest one mentioned in the official docs), this breaks the replication.

How could I add tables on the slave without the master caring about it? Is it even possible?
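One commonly used approach (worth testing carefully: the *-ignore rules have well-known caveats around cross-database statements and default-database matching) is to filter those tables out on the slave side in my.cnf; the schema and table names here are hypothetical:

```ini
[mysqld]
# Slave-side filters: replicated statements touching these tables are skipped,
# so tables created locally on the slave under these names never conflict.
replicate-ignore-table      = reporting.report_cache
replicate-wild-ignore-table = reporting.rpt_%
```

Alternatively, keeping the slave-only tables in a schema that does not exist on the master also works, as long as no replicated statement ever references it.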

What fillfactor for caching table?

Posted: 29 Jun 2013 06:14 AM PDT

I have a heavily updated/accessed table where I store serialized Java objects. They stay in the table for 2-3 hours (and are also updated during that period) and are then removed. The size of the table is around 300 MB. I have noticed it is VACUUMed very, very often, and I wonder if changing the fillfactor would help?
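If the fillfactor does turn out to help, it is set per table (and separately per index); a sketch, with both the table name and the value as placeholders to benchmark:

```sql
-- Leave ~30% of each heap page free so updates can often stay on the same
-- page (HOT updates). Only newly written pages honour this; a VACUUM FULL
-- or table rewrite is needed to repack existing ones.
ALTER TABLE java_object_cache SET (fillfactor = 70);
```
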

Replication issue - CREATE SELECT alternative?

Posted: 29 Jun 2013 03:45 PM PDT

I have a MySQL 5.1 slave for our BI team.

They need to run some CREATE ... SELECT statements with big SELECT queries (several million rows).

As CREATE ... SELECT is DDL, if replication attempts to update some rows in the same tables as the SELECT statement, replication is blocked until the CREATE ... SELECT finishes.

Do you know a good non-blocking alternative to those CREATE ... SELECT statements?

I thought of a SELECT ... INTO OUTFILE then LOAD DATA INFILE, but they will fill up our disks, as BI guys like to do... :)

Max.
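One frequently suggested workaround (a sketch with hypothetical names; whether it fully removes the blocking should be verified on 5.1 with your binlog format) is to split the DDL from the data load, so the long-running part is ordinary DML:

```sql
-- Fast DDL: create the empty table first. (CREATE TABLE ... LIKE only fits
-- a straight copy; for a join result, spell out the column list instead.)
CREATE TABLE bi_copy LIKE source_table;

-- Then fill it with plain INSERT ... SELECT, optionally in chunks so
-- replication can make progress between batches:
INSERT INTO bi_copy
SELECT * FROM source_table WHERE some_filter = 1;
```
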

Which text index should I create for the Xoops engine to achieve better search results?

Posted: 29 Jun 2013 09:45 AM PDT

In one of our projects we use the Xoops engine to manage content. In the MySQL slow query log, most queries look like the following:

SELECT p.uid, f.forum_id, p.topic_id, p.poster_name, p.post_time, f.forum_name, p.post_id, p.subject
FROM xps33_bb_posts p, xps33_bb_posts_text pt, xps33_bb_forums f
WHERE p.post_id = pt.post_id
  AND p.approved = 1
  AND p.forum_id = f.forum_id
  AND f.forum_id IN (1,4,61,7,9,17,20,45,35,44,38,39,43,53,54,55,56,57,58,60,14,29,40,26,18,41,33,24,32,59,25)
  AND ((p.subject LIKE '%rivi%' OR pt.post_text LIKE '%orvi%')
   AND (p.subject LIKE '%care%' OR pt.post_text LIKE '%gor%'))
ORDER BY p.post_time DESC
LIMIT 0, 5;

I can't change them, as that would involve changing the engine, which is not an option at the moment. But I can help the engine search faster. As I understand it, since the table uses the MyISAM engine, I can create text indexes, which should make searching faster. Am I right?

So, in general, which indexes should I create to keep the following queries from running for a long time?

+----+-------------+-------+--------+---------------------+---------+---------+--------------------+--------+-----------------------------+
| id | select_type | table | type   | possible_keys       | key     | key_len | ref                | rows   | Extra                       |
+----+-------------+-------+--------+---------------------+---------+---------+--------------------+--------+-----------------------------+
|  1 | SIMPLE      | p     | ALL    | PRIMARY,forumid_uid | NULL    | NULL    | NULL               | 144090 | Using where; Using filesort |
|  1 | SIMPLE      | f     | eq_ref | PRIMARY             | PRIMARY | 4       | diginew.p.forum_id |      1 | Using where                 |
|  1 | SIMPLE      | pt    | eq_ref | PRIMARY             | PRIMARY | 4       | diginew.p.post_id  |      1 | Using where                 |
+----+-------------+-------+--------+---------------------+---------+---------+--------------------+--------+-----------------------------+
3 rows in set (0.00 sec)
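If the searches are genuinely word-based, MyISAM full-text indexes on the searched columns are the usual starting point; a sketch:

```sql
ALTER TABLE xps33_bb_posts      ADD FULLTEXT idx_subject   (subject);
ALTER TABLE xps33_bb_posts_text ADD FULLTEXT idx_post_text (post_text);
```

The caveat: FULLTEXT indexes are only used by MATCH ... AGAINST, and no index of any kind can serve a leading-wildcard LIKE '%word%' predicate, so without rewriting the generated queries these indexes will not change the plan above.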

How to drop a DB2 instance when the instance owner was removed

Posted: 29 Jun 2013 10:45 AM PDT

This is a real sticky situation. I was handed a machine (running AIX 7.1), and my first task was to re-install the DB2 server on it. But someone before me had conveniently removed the instance owner account, and probably recreated it. Now, the problem is this:

1) When I try to uninstall DB2, it says the instance is active and has to be dropped first.

2) When I try to drop this instance, DB2 says there is no such instance.

I am quite new to DB2 administration and not sure how to proceed here. Any help is appreciated.

Thanks

Postgresql could not find pg_xlog

Posted: 29 Jun 2013 08:20 PM PDT

I'm trying to put the WAL on a different disk. I stopped the server and did the following:

(I'm using Fabric)

# Move pg_xlog to wals/path.dev/
dest_dir = "/home/ec2-user/%s/wals/%s" % (config['main_dir'], dev.replace('/','.'))
src_dir = "/var/lib/pgsql/%s/data/pg_xlog" % version

# Move all files
sudo("mkdir %s" % (dest_dir), warn_only = True)

# Mount .fab-pg/wals/path.to.dev/
mount(dev, dest_dir, fs)

# Point it at new origin
sudo("sudo mv %s* %s" % (src_dir, dest_dir))
dest_dir = "%s/pg_xlog" % dest_dir  # pg_xlog was moved into the directory
sudo("ln -s %s %s" % (dest_dir, src_dir))

# Grant postgres permissions
sudo("chown postgres %s" % dest_dir)
sudo("chown postgres %s" % src_dir)

PG Log

LOG:  received smart shutdown request
LOG:  autovacuum launcher shutting down
LOG:  shutting down
LOG:  database system is shut down
LOG:  database system was shut down at 2012-12-13 03:00:46 UTC
FATAL:  required WAL directory "pg_xlog" does not exist
LOG:  startup process (PID 30971) exited with exit code 1
LOG:  aborting startup due to startup process failure

Ls

[ec2-user@ip-10-205-21-74 fab-postgres]$ sudo ls -al /var/lib/pgsql/9.2/data
total 96
drwx------ 14 postgres postgres  4096 Dec 13 03:02 .
drwx------  4 postgres postgres  4096 Dec 12 04:03 ..
drwx------  5 postgres postgres  4096 Dec 12 03:57 base
drwx------  2 postgres postgres  4096 Dec 13 03:00 global
drwx------  2 postgres postgres  4096 Dec 12 03:57 pg_clog
-rw-------  1 postgres postgres  4476 Dec 12 03:57 pg_hba.conf
-rw-------  1 postgres postgres  1636 Dec 12 03:57 pg_ident.conf
drwx------  2 postgres postgres  4096 Dec 13 03:02 pg_log
drwx------  4 postgres postgres  4096 Dec 12 03:57 pg_multixact
drwx------  2 postgres postgres  4096 Dec 13 03:02 pg_notify
drwx------  2 postgres postgres  4096 Dec 12 03:57 pg_serial
drwx------  2 postgres postgres  4096 Dec 12 03:57 pg_snapshots
drwx------  2 postgres postgres  4096 Dec 13 03:00 pg_stat_tmp
drwx------  2 postgres postgres  4096 Dec 12 03:57 pg_subtrans
drwx------  2 postgres postgres  4096 Dec 12 03:57 pg_tblspc
drwx------  2 postgres postgres  4096 Dec 12 04:03 pg_twophase
-rw-------  1 postgres postgres     4 Dec 12 03:57 PG_VERSION
lrwxrwxrwx  1 root     root        30 Dec 13 03:01 pg_xlog -> .fab-pg/wals/.dev.xvdf/pg_xlog
-rw-------  1 postgres postgres 19587 Dec 12 03:57 postgresql.conf
-rw-------  1 postgres postgres    59 Dec 13 03:02 postmaster.opts

ls

[ec2-user@ip-10-205-21-74 fab-postgres]$ sudo ls -al ~/.fab-pg/wals/.dev.xvdf/
total 12
drwxr-xr-x 3 root     root     4096 Dec 13 03:01 .
drwxrwxr-x 3 ec2-user ec2-user 4096 Dec 13 03:01 ..
drwx------ 3 postgres postgres 4096 Dec 12 04:03 pg_xlog

After fixing pg_xlog to point to the correct place:

[ec2-user@ip-10-205-21-74 fab-postgres]$ sudo ls -al /var/lib/pgsql/9.2/data
total 96
drwx------ 14 postgres postgres  4096 Dec 13 14:42 .
drwx------  4 postgres postgres  4096 Dec 13 14:38 ..
drwx------  5 postgres postgres  4096 Dec 13 14:38 base
drwx------  2 postgres postgres  4096 Dec 13 14:38 global
drwx------  2 postgres postgres  4096 Dec 13 14:38 pg_clog
-rw-------  1 postgres postgres  4476 Dec 13 14:38 pg_hba.conf
-rw-------  1 postgres postgres  1636 Dec 13 14:38 pg_ident.conf
drwx------  2 postgres postgres  4096 Dec 13 14:42 pg_log
drwx------  4 postgres postgres  4096 Dec 13 14:38 pg_multixact
drwx------  2 postgres postgres  4096 Dec 13 14:42 pg_notify
drwx------  2 postgres postgres  4096 Dec 13 14:38 pg_serial
drwx------  2 postgres postgres  4096 Dec 13 14:38 pg_snapshots
drwx------  2 postgres postgres  4096 Dec 13 14:38 pg_stat_tmp
drwx------  2 postgres postgres  4096 Dec 13 14:38 pg_subtrans
drwx------  2 postgres postgres  4096 Dec 13 14:38 pg_tblspc
drwx------  2 postgres postgres  4096 Dec 13 14:38 pg_twophase
-rw-------  1 postgres postgres     4 Dec 13 14:38 PG_VERSION
lrwxrwxrwx  1 root     root        45 Dec 13 14:40 pg_xlog -> /home/ec2-user/.fab-pg/wals/.dev.xvdf/pg_xlog
-rw-------  1 postgres postgres 19587 Dec 13 14:38 postgresql.conf
-rw-------  1 postgres postgres    59 Dec 13 14:42 postmaster.opts

Mongo replicated shard member not able to recover, stuck in STARTUP2 mode

Posted: 29 Jun 2013 03:45 AM PDT

I have the following setup for a sharded replica set in an Amazon VPC:

mongo1: 8G RAM Duo core (Primary)

mongo2: 8G RAM Duo core (Secondary)

mongo3: 4G RAM (Arbiter)

Mongo1 is the primary member in the replica set, with a 2-shard setup:

mongod --port 27000 --dbpath /mongo/config --configsvr
mongod --port 27001 --dbpath /mongo/shard1 --shardsvr --replSet rssh1
mongod --port 27002 --dbpath /mongo/shard2 --shardsvr --replSet rssh2

Mongo2 is the secondary member in the replica set and mirrors mongo1 exactly:

mongod --port 27000 --dbpath /mongo/config --configsvr
mongod --port 27001 --dbpath /mongo/shard1 --shardsvr --replSet rssh1   # Faulty process
mongod --port 27002 --dbpath /mongo/shard2 --shardsvr --replSet rssh2

Then, for some reason, the 27001 process on mongo2 crashed last week due to running out of memory (cause unknown). When I discovered the issue (the application still worked, getting data from the primary) and restarted the 27001 process, it was too late to catch up with shard1 on mongo1. So I followed 10gen's recommendation:

  • emptied directory /mongo/shard1
  • restart the 27001 process using command

mongod --port 27001 --dbpath /mongo/shard1 --shardsvr --replSet rssh1

However, it's been 24+ hours now and the node is still in STARTUP2 mode. I have about 200 GB of data in shard1, and it appears that about 160 GB of it got over to /mongo/shard1 on mongo2. Following is the replica set status command output (run on mongo2):

rssh1:STARTUP2> rs.status()
{
     "set" : "rssh1",
     "date" : ISODate("2012-10-29T19:28:49Z"),
     "myState" : 5,
     "syncingTo" : "mongo1:27001",
     "members" : [
          {
               "_id" : 1,
               "name" : "mongo1:27001",
               "health" : 1,
               "state" : 1,
               "stateStr" : "PRIMARY",
               "uptime" : 99508,
               "optime" : Timestamp(1351538896000, 3),
               "optimeDate" : ISODate("2012-10-29T19:28:16Z"),
               "lastHeartbeat" : ISODate("2012-10-29T19:28:48Z"),
               "pingMs" : 0
          },
          {
               "_id" : 2,
               "name" : "mongo2:27001",
               "health" : 1,
               "state" : 5,
               "stateStr" : "STARTUP2",
               "uptime" : 99598,
               "optime" : Timestamp(1351442134000, 1),
               "optimeDate" : ISODate("2012-10-28T16:35:34Z"),
               "self" : true
          },
          {
               "_id" : 3,
               "name" : "mongoa:27901",
               "health" : 1,
               "state" : 7,
               "stateStr" : "ARBITER",
               "uptime" : 99508,
               "lastHeartbeat" : ISODate("2012-10-29T19:28:48Z"),
               "pingMs" : 0
          }
     ],
     "ok" : 1
}

rssh1:STARTUP2>

It would appear most of the data from the primary was replicated, but not all. The logs show some errors, but I don't know if they're related:

Mon Oct 29 19:39:59 [TTLMonitor] assertion 13436 not master or secondary; cannot currently read from this replSet member ns:config.system.indexes query:{ expireAfterSeconds: { $exists: true } }
Mon Oct 29 19:39:59 [TTLMonitor] problem detected during query over config.system.indexes : { $err: "not master or secondary; cannot currently read from this replSet member", code: 13436 }
Mon Oct 29 19:39:59 [TTLMonitor] ERROR: error processing ttl for db: config 10065 invalid parameter: expected an object ()
Mon Oct 29 19:39:59 [TTLMonitor] assertion 13436 not master or secondary; cannot currently read from this replSet member ns:gf2.system.indexes query:{ expireAfterSeconds: { $exists: true } }
Mon Oct 29 19:39:59 [TTLMonitor] problem detected during query over gf2.system.indexes : { $err: "not master or secondary; cannot currently read from this replSet member", code: 13436 }
Mon Oct 29 19:39:59 [TTLMonitor] ERROR: error processing ttl for db: gf2 10065 invalid parameter: expected an object ()
Mon Oct 29 19:39:59 [TTLMonitor] assertion 13436 not master or secondary; cannot currently read from this replSet member ns:kombu_default.system.indexes query:{ expireAfterSeconds: { $exists: true } }
Mon Oct 29 19:39:59 [TTLMonitor] problem detected during query over kombu_default.system.indexes : { $err: "not master or secondary; cannot currently read from this replSet member", code: 13436 }
Mon Oct 29 19:39:59 [TTLMonitor] ERROR: error processing ttl for db: kombu_default 10065 invalid parameter: expected an object ()

Everything on the primary appeared to be fine. No errors in the log.

I tried the steps twice, once with the mongo config server running and once with it down; both gave the same result.

This is a production setup and I really need to get the replica set back up and working; any help is much appreciated.

Ensure correct username when using pg_restore

Posted: 29 Jun 2013 08:45 AM PDT

I have just installed Postgres 9.1.6 on a local Ubuntu server. Now I'm trying to restore a database dump from a database on Heroku. The local database is set up like this:

sudo -u postgres psql -c "create user app_user with password 'pass';"
sudo -u postgres psql -c "create database app_production owner app_user;"

Now, when I try to restore the dump, I use the following command:

pg_restore --verbose --schema=public --no-acl --no-owner --jobs=8 --exit-on-error --username=app_user --dbname=app_production /tmp/app_production.dump  

Now, in psql, using \l to see ownerships, I get the following:

                                     List of databases
      Name        |   Owner   | Encoding |   Collate   |    Ctype    |   Access privileges
------------------+-----------+----------+-------------+-------------+-----------------------
 app_production   | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 postgres         | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 template0        | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
                  |           |          |             |             | postgres=CTc/postgres
 template1        | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
                  |           |          |             |             | postgres=CTc/postgres

As you can see, the ownership of the app_production database has now changed from app_user to postgres. I would have expected the owner of the app_production database to still be app_user, so what am I doing wrong?

BTW, the --schema=public flag was added because I was getting a weird error:

"Could not execute query: ERROR: must be owner of extension plpgsql"

Another thing: the owner in the dump is the user the database had on Heroku, which would be something like 'jebf473b73bv73v749b7'.
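Since --no-owner deliberately drops ownership commands from the restore, one way to put ownership right afterwards is to reassign it explicitly as a superuser (a sketch; the database-level command matches the names in the question, while some_table is a hypothetical example):

```sql
ALTER DATABASE app_production OWNER TO app_user;

-- Any individual objects that ended up owned by the wrong role can be
-- handed over one by one:
ALTER TABLE some_table OWNER TO app_user;
```
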

Can I monitor the progress of importing a large .sql file in sqlite3 using zenity --progress?

Posted: 29 Jun 2013 05:45 PM PDT

I'm trying to monitor the progress of a sqlite3 command importing a large .sql file into a database using zenity --progress.

I've tried the following, which imports the file; however, progress is not shown:

sqlite3 DATABASE < import_file.sql | zenity --progress --percentage=0 --auto-close  

I know I need to provide zenity with a numeric source for the progress, but I don't really know how to obtain the number.

Can anyone help me?

What's the difference between a temp table and table variable in SQL Server?

Posted: 29 Jun 2013 01:48 PM PDT

This seems to be an area with quite a few myths and conflicting views.

So what is the difference between a table variable and a local temporary table in SQL Server?
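For readers new to the distinction, the two objects are at least declared differently (a minimal sketch; table and column names are illustrative):

```sql
-- Local temporary table: created in tempdb, visible for the whole session,
-- can have statistics and extra indexes added later, and survives until
-- dropped or the session ends.
CREATE TABLE #Orders (Id int PRIMARY KEY, Amount money);

-- Table variable: also backed by tempdb, but scoped to the batch/procedure,
-- not affected by a transaction ROLLBACK, and (before SQL Server 2014)
-- carrying no column statistics for the optimizer.
DECLARE @Orders TABLE (Id int PRIMARY KEY, Amount money);
```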

[SQL Server] Monitor INSERTs, UPDATEs, or DELETEs?


Monitor INSERTs, UPDATEs, or DELETEs?

Posted: 29 Jun 2013 04:45 AM PDT

Hi All, I have a particularly important table, and I need to know when anything about it changes. Would someone mind providing guidance on how to monitor this (preferably using SQL)? I'm hoping for something that will either send me an email, write to another table, or something along those lines. Any help is appreciated. Thanks!
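One low-tech option (a sketch with hypothetical table and column names) is a DML trigger that records every change in an audit table:

```sql
-- Hypothetical audit table; extend the columns to capture the changed values too.
CREATE TABLE dbo.ImportantTable_Audit (
    AuditId    int IDENTITY(1,1) PRIMARY KEY,
    ChangeType varchar(10) NOT NULL,
    ChangedAt  datetime    NOT NULL DEFAULT GETDATE(),
    ChangedBy  sysname     NOT NULL DEFAULT SUSER_SNAME()
);
GO
CREATE TRIGGER dbo.trg_ImportantTable_Audit
ON dbo.ImportantTable
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- Rows in both inserted and deleted => UPDATE; only inserted => INSERT; only deleted => DELETE.
    INSERT INTO dbo.ImportantTable_Audit (ChangeType)
    SELECT CASE
             WHEN EXISTS (SELECT 1 FROM inserted) AND EXISTS (SELECT 1 FROM deleted) THEN 'UPDATE'
             WHEN EXISTS (SELECT 1 FROM inserted) THEN 'INSERT'
             ELSE 'DELETE'
           END;
END;
```

From there, a SQL Agent job or Database Mail (`msdb.dbo.sp_send_dbmail`) can poll the audit table and send notifications; SQL Server also offers Change Data Capture and SQL Audit for heavier-duty tracking.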

sqlcmd

Posted: 28 Jun 2013 05:08 PM PDT

Hi All, I am a newbie to SQL Server. I recently came across the "sqlcmd" utility while surfing, so here is my doubt. I connect to a server using some credentials, like this: right-click on SSMS ---> Run as different user ---> give some credentials (domain/password). How can I do the same using the command prompt? And when do we actually use the "sqlcmd" feature? Please help me on this; it would be helpful. (Sorry, the same post got posted 3 times; I don't know how to delete the extras. :()
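For what it's worth, the command-prompt equivalents look roughly like this (a sketch; the server and account names are placeholders):

```cmd
:: Connect with your current Windows account (what SSMS does by default)
sqlcmd -S MYSERVER\MYINSTANCE -E

:: Connect with a SQL Server login and password
sqlcmd -S MYSERVER\MYINSTANCE -U some_login -P some_password

:: Run sqlcmd as a different Windows (domain) user, mirroring "Run as different user"
runas /user:DOMAIN\OtherUser "sqlcmd -S MYSERVER\MYINSTANCE -E"
```

sqlcmd is typically used for scripting and automation: running .sql files in batch jobs, scheduled maintenance, or connecting when no GUI is available.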

[Articles] SQL in the City 2013


SQL in the City 2013

Posted: 27 Jun 2013 11:00 PM PDT

Steve Jones looks back at the 2013 tour kickoff for SQL in the City in London.

[MS SQL Server] SQL Server Storage in a Virtual Machine (2008 SQL on Windows 2012 on Hyper-V 2012 host)


SQL Server Storage in a Virtual Machine (2008 SQL on Windows 2012 on Hyper-V 2012 host)

Posted: 29 Jun 2013 02:43 AM PDT

For a small business where a SQL instance is running 8 databases supporting only 20 users and a low number of transactions per second: in a virtual environment, do the traditional recommendations for SQL Server, i.e. separating log files from data files from tempdb onto their own spindles, still apply? If not, and everything can reside on the same OBR10 (one big RAID 10), are there performance pros/cons to using separate VHDs for each? We are new to RAID and virtual environments and have read more documentation/articles than I can count. Some advice from those who "do" will be greatly appreciated.

backup- fastest

Posted: 28 Jun 2013 09:03 AM PDT

Hi, if I have a 100 GB database, what are the best options for taking a backup as fast as possible, both to the same server and to a network location? Thanks.
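As a starting point, the single biggest speed win is usually backup compression (Enterprise Edition on SQL Server 2008; Standard from 2008 R2), optionally combined with striping across multiple files. A sketch, with placeholder names and paths:

```sql
-- Compressed backup to local disk: less data written means less I/O, at some CPU cost.
-- Striping across files on different physical drives can speed things up further.
BACKUP DATABASE MyBigDb
TO DISK = N'D:\Backups\MyBigDb_1.bak',
   DISK = N'E:\Backups\MyBigDb_2.bak'
WITH COMPRESSION, STATS = 10;

-- Backup straight to a UNC path (the SQL Server service account needs write access).
BACKUP DATABASE MyBigDb
TO DISK = N'\\backupserver\sqlbackups\MyBigDb.bak'
WITH COMPRESSION, STATS = 10;
```

Backing up locally and then copying the file to the network share is often faster and more reliable than backing up directly over the wire.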

PowerShell snap-ins

Posted: 28 Jun 2013 05:39 AM PDT

Hi, I'm reading through a book on PowerShell and tried the examples about finding the available cmdlets for a snap-in. What seems odd is that even though the book shows output for the commands, the only one that returns results for me is the Core snap-in listing:[code]Get-Command -commandtype cmdlet | Where-Object {$_.PSSnapin -match "core"}[/code]The others, for example, return no results in my PowerShell session:[code]Get-Command -commandtype cmdlet | Where-Object {$_.PSSnapin -match "host"}[/code]Does anyone know why this is? Do I have to load the other snap-ins for them to show up when I issue those commands (PowerShell.Management, Security, etc.)? Thanks for any help! - webrunner

Does this sound like a good place to enable "Optimize for ad-hoc workloads?"

Posted: 28 Jun 2013 05:35 AM PDT

I recently ran the "Server Dashboard" report on a new server here. Imagine my reaction to seeing ad-hoc queries sucking up ~75% of the cumulative CPU usage and 80-90% of the logical IO! So I started looking into "Optimize for ad hoc workloads." I read the articles by Kimberly Tripp, the TechNet material, and others, and it sounds like it may not be a bad idea to enable. The server in question only has 4 GB of RAM, and I suspect I could put in a request to get it doubled, but it's like pulling teeth from an angry hyena... Running a couple of queries against dm_exec_cached_plans: out of 12975 cached plans, 10148 are ad hoc, and 4910 of those have use counts of 1... Now, I know enabling this won't immediately have an effect on the plan cache, and it will have something of a detrimental effect on the ad-hoc queries (no cached plan, takes longer to run). I'd enable it in a test environment first, but the test server won't get hit as hard as production... So anyway, does it sound like it might provide some benefit to enable this option? Thanks, Jason
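For reference, the setting itself is a one-liner (instance-wide, no restart required), so trying it in the test environment first is cheap:

```sql
-- 'optimize for ad hoc workloads' is an advanced option, so expose those first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Single-use ad hoc plans are then cached as small stubs until their second execution.
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;
```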

[T-SQL] Query Help


Query Help

Posted: 28 Jun 2013 09:11 PM PDT

Hi, I need help getting this type of output. I tried with the MIN and MAX functions but didn't get the required output:

[code="sql"]
Create table T1(
  Id int identity primary key,
  VoucherNo varchar(10),
  TransNo varchar(10))

Insert into T1 (VoucherNo, TransNo) values
  ('V100','Trns1'),('V101','Trns1'),('V102','Trns1'),
  ('V103','Trns1'),('V104','Trns1'),('V106','Trns1')
[/code]

Resultant output:

TransNo  FirstVoucherNo  LastVoucherNo  Quantity
trns1    V100            V104           5
trns1    V106            V106           1
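This is a classic "gaps and islands" problem: consecutive voucher numbers per TransNo form one island. A common sketch (assuming VoucherNo is always 'V' followed by digits) subtracts a row number from the numeric part, so members of a consecutive run share a group key:

```sql
;WITH Numbered AS (
    SELECT TransNo, VoucherNo,
           -- Consecutive numbers minus a running row number yield a constant per run.
           CAST(SUBSTRING(VoucherNo, 2, 9) AS int)
           - ROW_NUMBER() OVER (PARTITION BY TransNo ORDER BY VoucherNo) AS Grp
    FROM T1
)
SELECT TransNo,
       MIN(VoucherNo) AS FirstVoucherNo,
       MAX(VoucherNo) AS LastVoucherNo,
       COUNT(*)       AS Quantity
FROM Numbered
GROUP BY TransNo, Grp
ORDER BY FirstVoucherNo;
```

With the sample data, V100-V104 form one island (quantity 5) and V106 its own (quantity 1), matching the requested output.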

Group by - Retaining all serial numbers

Posted: 28 Jun 2013 11:12 PM PDT

Hi, I am trying to group a dataset by the variable Sale_Type while retaining the serial numbers. Here is the data:

Table1:

Sno  Sale_Type  Amount
1    Drug       10
2    Fruit      20
3    Groceries  30
4    Drug       10
5    Fruit      20
6    Groceries  30
7    Drug       10
8    Fruit      20
9    Groceries  30
10   Drug       10

Here is the result I am trying to achieve:

Sale_Type  Amount  Sno_Retained
Drug       40      1, 4, 7, 10
Fruit      60      2, 5, 8
Groceries  90      3, 6, 9

Here is my code so far:

[code="sql"]select Sale_Type, sum(Amount) from Table1 group by Sale_Type[/code]

Any help would be truly appreciated. Thanks & regards, Akber Khan.
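The aggregate part is the easy half; on SQL Server 2008 the comma-separated Sno list is usually built with the FOR XML PATH trick. A sketch against the posted Table1:

```sql
SELECT o.Sale_Type,
       SUM(o.Amount) AS Amount,
       -- Correlated subquery concatenates each group's Sno values;
       -- STUFF strips the leading ', '.
       STUFF((SELECT ', ' + CAST(i.Sno AS varchar(10))
              FROM Table1 AS i
              WHERE i.Sale_Type = o.Sale_Type
              ORDER BY i.Sno
              FOR XML PATH('')), 1, 2, '') AS Sno_Retained
FROM Table1 AS o
GROUP BY o.Sale_Type;
```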

Insert Multiple rows using Transacrtions

Posted: 28 Jun 2013 02:41 PM PDT

Hi.. I have a table and need to insert multiple rows of data into it. First I have to delete all the data in the table and then insert the new rows, using a SQL transaction so that if any error occurs the existing data is not lost. Thanks and regards, Sankar, Cochin
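A minimal "all or nothing" delete-then-insert pattern for SQL Server 2008 (table and column names are placeholders):

```sql
BEGIN TRY
    BEGIN TRANSACTION;

    DELETE FROM dbo.MyTable;                  -- old rows go away...
    INSERT INTO dbo.MyTable (Col1, Col2)      -- ...only if every insert succeeds
    VALUES (1, 'a'), (2, 'b'), (3, 'c');

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;  -- on error, nothing is lost
    -- Re-raise so the caller sees the failure (SQL Server 2008 has no THROW).
    DECLARE @msg nvarchar(2048) = ERROR_MESSAGE();
    RAISERROR(@msg, 16, 1);
END CATCH;
```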

disable Job

Posted: 28 Jun 2013 03:47 AM PDT

How can I disable a job using a T-SQL query? Please help...
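The supported way is msdb's `sp_update_job` (the job name here is a placeholder):

```sql
EXEC msdb.dbo.sp_update_job
     @job_name = N'My Nightly Job',
     @enabled  = 0;   -- pass 1 to re-enable the job
```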

sql query questio

Posted: 28 Jun 2013 10:56 AM PDT

I have 2 tables.

Case:

CaseID  Name
12341   XYZ
23451   ZZZ
90892   XXQ

CaseCode:

CaseID  CaseCode  TypeFlag
12341   001       P
12341   003       S
90892   111       S
90892   222       S
90982   999       P

Here in TypeFlag, 'P' stands for primary code and 'S' for secondary code. I want an output like this:

CaseID  Name  PrimaryCode  SecondaryCode
12341   XYZ   001          003
23451   ZZZ   NULL         NULL
90982   XXQ   999          111,222

I tried joining the two tables using CASE WHEN, but it generates multiple rows for a case which has more than one code. Also, I need comma-separated codes when there are multiple rows of the same TypeFlag for a CaseID, as shown in the last record. Please help. Thanks, kk
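One sketch matching the requested shape combines conditional aggregation for the single primary code with FOR XML PATH for the comma-separated secondary codes (table and column names taken from the post):

```sql
SELECT c.CaseID,
       c.Name,
       -- At most one 'P' row per case collapses into one column.
       MAX(CASE WHEN cc.TypeFlag = 'P' THEN cc.CaseCode END) AS PrimaryCode,
       -- All 'S' rows concatenate into a comma-separated list.
       STUFF((SELECT ',' + s.CaseCode
              FROM CaseCode AS s
              WHERE s.CaseID = c.CaseID AND s.TypeFlag = 'S'
              ORDER BY s.CaseCode
              FOR XML PATH('')), 1, 1, '') AS SecondaryCode
FROM [Case] AS c
LEFT JOIN CaseCode AS cc ON cc.CaseID = c.CaseID
GROUP BY c.CaseID, c.Name;
```

The LEFT JOIN keeps cases with no codes at all (like 23451), which come back with NULL in both code columns.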

Dynamic Vertical Rows to Horizontal

Posted: 28 Jun 2013 09:38 AM PDT

I read through the following topic already, and it has moved me MUCH closer to the solution I'm seeking than I was getting otherwise, but I can't seem to find the answer to my question. Starting point: [url=http://www.sqlservercentral.com/Forums/Topic1274906-392-1.aspx]Vertical to Horizontal[/url]

My situation is somewhat similar in that I have a unique identifier that can have, at a max, 60 rows tied to it; there are often quite a few less, but knowing the upper boundary is useful, I think. I'm trying to figure out an easier way than taking the following block of code and copy-pasting it 60 times in order to account for the maximum number of rows I'm trying to "pivot" out:

[code="sql"]
SELECT MK, HK, CK,
  MAX(CASE WHEN N=1 THEN MD ELSE NULL END) AS MD1,
  MAX(CASE WHEN N=1 THEN BN ELSE NULL END) AS BN1,
  MAX(CASE WHEN N=2 THEN MD ELSE NULL END) AS MD2,
  MAX(CASE WHEN N=2 THEN BN ELSE NULL END) AS BN2,
  MAX(CASE WHEN N=3 THEN MD ELSE NULL END) AS MD3,
  MAX(CASE WHEN N=3 THEN BN ELSE NULL END) AS BN3
FROM TMP_URE
CROSS APPLY (SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3) X(N)
WHERE RN=N
GROUP BY MK, HK, CK
ORDER BY MK, HK, CK
[/code]

And perhaps copy-pasting 60 times really is the best way, but it would seem that I should be able to establish a variable with a range of 1-60 and then loop the above statement for every value between 1 and 60; I'm just not sure how to do so. Am I barking up the wrong tree?
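Rather than pasting the CASE pair 60 times, the usual approach is to generate the statement with dynamic SQL. A sketch using the same column names as above, with the bound of 60 assumed:

```sql
DECLARE @n int = 1, @cols nvarchar(max) = N'', @sql nvarchar(max);

-- Build the 120 MAX(CASE ...) expressions (MD1/BN1 .. MD60/BN60) in a loop.
WHILE @n <= 60
BEGIN
    SET @cols = @cols
        + N', MAX(CASE WHEN N=' + CAST(@n AS nvarchar(2)) + N' THEN MD END) AS MD' + CAST(@n AS nvarchar(2))
        + N', MAX(CASE WHEN N=' + CAST(@n AS nvarchar(2)) + N' THEN BN END) AS BN' + CAST(@n AS nvarchar(2));
    SET @n += 1;
END;

SET @sql = N'SELECT MK, HK, CK' + @cols + N'
FROM TMP_URE
CROSS APPLY (SELECT TOP 60 ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM sys.all_columns) X(N)
WHERE RN = N
GROUP BY MK, HK, CK
ORDER BY MK, HK, CK;';

EXEC sys.sp_executesql @sql;
```

The CROSS APPLY here uses a generated tally of 1-60 instead of 60 chained UNION ALLs, so neither part of the query needs hand-expansion.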

Need some help in solving the below logic of averages

Posted: 28 Jun 2013 04:27 AM PDT

I have a requirement to work with the below 2 tables and get the last table as my output. Any help would be great.

Table 1:

[code="sql"]
Type  Code  Current  BPO     Forecast
NULL  NULL  NULL     NULL    NULL
      NULL  NULL     NULL    NULL
NULL        NULL     NULL    NULL
            NULL     NULL    NULL
A     NULL  NULL     NULL    NULL
A           NULL     NULL    NULL
A     A001  55       32      56.322
A     A002  NULL     NULL    NULL
A     A003  66.22    44.21   81.13
A     A004  NULL     NULL    NULL
A     B001  NULL     NULL    NULL
A     B002  NULL     NULL    NULL
R     A001  28.423   53.245  82.444
R     A003  100      100     100
R     A004  NULL     NULL    NULL
R     A023  NULL     NULL    NULL
R     C001  89.3444  52.432  100
S     A001  22.22    18.322  65
S     C001  67       34      62
S           NULL     NULL    NULL
Z     A003  12       22.56   38.43
Z     B001  56       43.233  81.69
Z     A023  NULL     NULL    NULL
[/code]

Table 2:

[code="sql"]
Index  Type  Code
Bact   A     A001
Bact   R     A001
Bact   Z     B001
Bact   S     C001
Dis1   A     A001
Dis1   A     A002
Dis1   A     A003
Dis1   R     C001
Dis1   Z     A023
MEX    A     A004
MEX    A     B001
MEX    R     A001
MEX    S     A001
MEX    A     B002
TED    R     A003
TED    R     A004
TED    S     A001
TED    Z     A003
ZES    R     A023
ZES    Z     A003
ZES    Z     A023
ZES    Z     B001
[/code]

I need the below columns as my output:

[code="sql"]
Index  Current  BPO  Forecast
Bact   (Avg of A A001, A B001, A C001, R A001, R B001, R C001, S A001, S B001, S C001, Z A001, Z B001, Z C001)
Dis1
MEX
TED
ZES
[/code]

Here Current, BPO and Forecast should be the average over the (Type, Code) combinations listed for a particular Index. We are also eliminating all the rows which do not match Table 2. I have also kept the NULL values in the Table 1 combinations, as these were the actual values in my table. Also, please note that Table 1 is not a real table but the output of some other query formed using some logic.

Below are my scripts.

Table 1 create and insert scripts:

[code="sql"]
CREATE TABLE [dbo].[Table1](
 [Type] [char](1) NULL,
 [Code] [varchar](5) NULL,
 [Current] [decimal](18, 8) NULL,
 [BPO] [decimal](18, 8) NULL,
 [Forecast] [decimal](18, 8) NULL
) ON [PRIMARY]

INSERT INTO [Table1] ([Type],[Code],[Current],[BPO],[Forecast]) VALUES (NULL,NULL,NULL,NULL,NULL)
INSERT INTO [Table1] ([Type],[Code],[Current],[BPO],[Forecast]) VALUES ('',NULL,NULL,NULL,NULL)
INSERT INTO [Table1] ([Type],[Code],[Current],[BPO],[Forecast]) VALUES (NULL,'',NULL,NULL,NULL)
INSERT INTO [Table1] ([Type],[Code],[Current],[BPO],[Forecast]) VALUES ('','',NULL,NULL,NULL)
INSERT INTO [Table1] ([Type],[Code],[Current],[BPO],[Forecast]) VALUES ('A',NULL,NULL,NULL,NULL)
INSERT INTO [Table1] ([Type],[Code],[Current],[BPO],[Forecast]) VALUES ('A','',NULL,NULL,NULL)
INSERT INTO [Table1] ([Type],[Code],[Current],[BPO],[Forecast]) VALUES ('A','A001',55.00000000,32.00000000,56.32200000)
INSERT INTO [Table1] ([Type],[Code],[Current],[BPO],[Forecast]) VALUES ('A','A002',NULL,NULL,NULL)
INSERT INTO [Table1] ([Type],[Code],[Current],[BPO],[Forecast]) VALUES ('A','A003',66.22000000,44.21000000,81.13000000)
INSERT INTO [Table1] ([Type],[Code],[Current],[BPO],[Forecast]) VALUES ('A','A004',NULL,NULL,NULL)
INSERT INTO [Table1] ([Type],[Code],[Current],[BPO],[Forecast]) VALUES ('A','B001',NULL,NULL,NULL)
INSERT INTO [Table1] ([Type],[Code],[Current],[BPO],[Forecast]) VALUES ('A','B002',NULL,NULL,NULL)
INSERT INTO [Table1] ([Type],[Code],[Current],[BPO],[Forecast]) VALUES ('R','A001',28.42300000,53.24500000,82.44400000)
INSERT INTO [Table1] ([Type],[Code],[Current],[BPO],[Forecast]) VALUES ('R','A003',100.00000000,100.00000000,100.00000000)
INSERT INTO [Table1] ([Type],[Code],[Current],[BPO],[Forecast]) VALUES ('R','A004',NULL,NULL,NULL)
INSERT INTO [Table1] ([Type],[Code],[Current],[BPO],[Forecast]) VALUES ('R','A023',NULL,NULL,NULL)
INSERT INTO [Table1] ([Type],[Code],[Current],[BPO],[Forecast]) VALUES ('R','C001',89.34440000,52.43200000,100.00000000)
INSERT INTO [Table1] ([Type],[Code],[Current],[BPO],[Forecast]) VALUES ('S','A001',22.22000000,18.32200000,65.00000000)
INSERT INTO [Table1] ([Type],[Code],[Current],[BPO],[Forecast]) VALUES ('S','C001',67.00000000,34.00000000,62.00000000)
INSERT INTO [Table1] ([Type],[Code],[Current],[BPO],[Forecast]) VALUES ('S','',NULL,NULL,NULL)
INSERT INTO [Table1] ([Type],[Code],[Current],[BPO],[Forecast]) VALUES ('Z','A003',12.00000000,22.56000000,38.43000000)
INSERT INTO [Table1] ([Type],[Code],[Current],[BPO],[Forecast]) VALUES ('Z','B001',56.00000000,43.23300000,81.69000000)
INSERT INTO [Table1] ([Type],[Code],[Current],[BPO],[Forecast]) VALUES ('Z','A023',NULL,NULL,NULL)
[/code]

Table 2 create and insert scripts:

[code="sql"]
INSERT INTO [Table2] ([Index],[Type],[Code]) VALUES ('Bact','A','A001')
INSERT INTO [Table2] ([Index],[Type],[Code]) VALUES ('Bact','R','A001')
INSERT INTO [Table2] ([Index],[Type],[Code]) VALUES ('Bact','Z','B001')
INSERT INTO [Table2] ([Index],[Type],[Code]) VALUES ('Bact','S','C001')
INSERT INTO [Table2] ([Index],[Type],[Code]) VALUES ('Dis 1','A','A001')
INSERT INTO [Table2] ([Index],[Type],[Code]) VALUES ('Dis 1','A','A002')
INSERT INTO [Table2] ([Index],[Type],[Code]) VALUES ('Dis 1','A','A003')
INSERT INTO [Table2] ([Index],[Type],[Code]) VALUES ('Dis 1','R','C001')
INSERT INTO [Table2] ([Index],[Type],[Code]) VALUES ('Dis 1','Z','A023')
INSERT INTO [Table2] ([Index],[Type],[Code]) VALUES ('MEX','A','A004')
INSERT INTO [Table2] ([Index],[Type],[Code]) VALUES ('MEX','A','B001')
INSERT INTO [Table2] ([Index],[Type],[Code]) VALUES ('MEX','R','A001')
INSERT INTO [Table2] ([Index],[Type],[Code]) VALUES ('MEX','S','A001')
INSERT INTO [Table2] ([Index],[Type],[Code]) VALUES ('MEX','A','B002')
INSERT INTO [Table2] ([Index],[Type],[Code]) VALUES ('TED','R','A003')
INSERT INTO [Table2] ([Index],[Type],[Code]) VALUES ('TED','R','A004')
INSERT INTO [Table2] ([Index],[Type],[Code]) VALUES ('TED','S','A001')
INSERT INTO [Table2] ([Index],[Type],[Code]) VALUES ('TED','Z','A003')
INSERT INTO [Table2] ([Index],[Type],[Code]) VALUES ('ZES','R','A023')
INSERT INTO [Table2] ([Index],[Type],[Code]) VALUES ('ZES','Z','A003')
INSERT INTO [Table2] ([Index],[Type],[Code]) VALUES ('ZES','Z','A023')
INSERT INTO [Table2] ([Index],[Type],[Code]) VALUES ('ZES','Z','B001')
[/code]
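For what it's worth, one sketch of the join-and-average shape is below. Note an assumption about the intended semantics: AVG ignores NULLs, so a combination whose values are all NULL averages to NULL; if NULL rows should instead pull the average down, something like AVG(COALESCE(t1.[Current], 0)) would be needed.

```sql
SELECT t2.[Index],
       AVG(t1.[Current]) AS [Current],
       AVG(t1.BPO)       AS BPO,
       AVG(t1.Forecast)  AS Forecast
FROM Table2 AS t2
LEFT JOIN Table1 AS t1          -- rows of Table1 not listed in Table2 drop out
       ON t1.[Type] = t2.[Type]
      AND t1.Code   = t2.Code
GROUP BY t2.[Index];
```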

How to read all content of sp or fn?

Posted: 28 Jun 2013 02:33 AM PDT

Currently I am using the following code to read the full text of a stored procedure or function. It works in most cases; however, it returns more than one row if the object's text is too large. Is there a better way to get this? Thank you.

[code="sql"]
SELECT text
FROM syscomments
WHERE id = (SELECT id FROM sysobjects WHERE name = 'spname')
ORDER BY colid
[/code]
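On SQL Server 2005 and later, the row-splitting problem goes away if you use the catalog views instead of syscomments, which chops long definitions into 4000-character chunks:

```sql
-- Whole definition as a single nvarchar(max) value
SELECT OBJECT_DEFINITION(OBJECT_ID(N'dbo.spname'));

-- Equivalent, via the catalog view
SELECT definition
FROM sys.sql_modules
WHERE object_id = OBJECT_ID(N'dbo.spname');
```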

query syntax problem

Posted: 28 Jun 2013 01:19 AM PDT

I have a query that is working successfully; I just want to add a third table to it, and I have tried several things without getting the results I need. Here is the starting query, followed by the query against the third table I want to combine with it. Any ideas? Thanks!!

SELECT k.AttributeKeyID, a.AttributeKeyID, k.AttributeKeyDsc, a.WorkItemID, a.KeyValueTxt
FROM WorkItemAttributeKey AS k
INNER JOIN WorkItemAttribute AS a ON k.AttributeKeyID = a.AttributeKeyID
WHERE (a.WorkItemID = 107893)

SELECT c.WorkItemID, c.CommentTxt
FROM WorkItemComment AS c
WHERE (c.WorkItemID = 107893)
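If the goal is simply to fold the comments into the first result set, a LEFT JOIN is the usual shape. A sketch (note it repeats the attribute rows once per comment if a work item has several comments):

```sql
SELECT k.AttributeKeyID, k.AttributeKeyDsc,
       a.WorkItemID, a.KeyValueTxt,
       c.CommentTxt
FROM WorkItemAttributeKey AS k
INNER JOIN WorkItemAttribute AS a
        ON k.AttributeKeyID = a.AttributeKeyID
LEFT JOIN WorkItemComment AS c          -- LEFT JOIN keeps rows with no comment
       ON c.WorkItemID = a.WorkItemID
WHERE a.WorkItemID = 107893;
```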

[SQL Server 2008 issues] sqlcmd


sqlcmd

Posted: 28 Jun 2013 06:42 PM PDT

Hi All, I am a newbie to SQL Server. I recently came across the "sqlcmd" utility while surfing, so here is my doubt. I connect to a server using some credentials, like this: right-click on SSMS ---> Run as different user ---> give some credentials (domain/password). How can I do the same using the command prompt? And when do we actually use the "sqlcmd" feature? Please help me on this; it would be helpful.

Replication Issue: Multiple Publishers, One Subscriber, Same Schema..Please Help!

Posted: 28 Jun 2013 01:18 AM PDT

Hello. I am relatively new to replication. The scenario I am dealing with is as follows:

1. I have multiple databases, say 10, all with exactly the same structure (the same tables, stored procedures, etc.). Thus in each database there is one table with the same schema, say Friends.

2. The Friends table has the following columns:

Column Name | Type
ID          | int (PK, not null)
FirstName   | varchar(100), not null
MiddleName  | varchar(100), not null
LastName    | varchar(100), not null
IDProblem   | int, not null

3. I want to replicate this Friends table from all 10 databases to one central subscriber.

4. I am using a transactional publication to achieve this. While creating the publication for each database, I change the article properties and set the destination object name according to the source database, and I also change the names of the procedures it generates to prevent them from being overwritten. Thus for the 10 databases I replicate the Friends table into 10 different tables at the subscriber, and I have 10 sets of those msins/msupdate/msdelete procedures respectively. The whole reason for doing this is to know which database the data actually came from.

5. I set an AFTER INSERT trigger on each of these tables at the subscriber, and there I insert all the columns into a FinalFreinds table along with the database name it came from, hardcoded according to the table name.

6. The IDProblem column of type int mentioned in the schema above is replicated fine in each individual table, but the AFTER trigger messes up the column: sometimes I get the correct value, and most other times it inserts 0 even though the value in the replicated table was non-zero.

7. That is, a value of 10 arrives in the IDProblem column, the trigger fires and inserts everything fine, including the 10. The very next replicated record also has the value 10; the trigger inserts proper values for all the other columns, but for this column it inserts 0. This happens randomly: sometimes the correct value is inserted, and most of the time 0 is.

So now I have to solve this mess. Firstly, can't I replicate everything into one table at the subscriber, with one additional DatabaseName column telling me which database each row came from? That would solve my whole problem, as this is the only reason I am putting triggers on the individual tables. If not, then how can I fix the trigger? It is not that the trigger isn't working: it works for the rest of the columns every time, and for the IDProblem column some of the time, so I cannot figure out what is going wrong. Please help me, as I need to close this issue ASAP. :crying:

Error with stuff function; its cutting of my commands.

Posted: 28 Jun 2013 08:55 AM PDT

Hi, I am having an issue with the STUFF function. I want to concatenate the strings, but the last string is being cut off, and I know there should be more commands than what is in the output screen. Here is my code. I'm basically checking a database table to run updates, deletes and inserts on other tables. I'm using SQL Server 2008. The table that contains the instructions to run the commands is CY1Errors. The other tables that are to be affected are EDUCATION_CY1, FOURTEEN_ALL_CY1, HH_LISTING_CY1 and HH_ALL_CY1.

[code="sql"]
Declare @SQL varchar(max)

--First is the updates
select @SQL = stuff (( select 'Update dbo.' + case when ERRORMESSAGE LIKE 'ED%' and CorrectedValue <> 'NULL' then 'EDUCATION_CY1' when ERRORMESSAGE LIKE 'O14%' and CorrectedValue <> 'NULL' then 'FOURTEEN_ALL_CY1' when ERRORMESSAGE LIKE 'HL%' and CorrectedValue <> 'NULL' then 'HH_LISTING_CY1' when ERRORMESSAGE LIKE 'HL%'and CorrectedValue <> 'NULL' then 'HH_ALL_CY1' end + ' set ' + ColumnsToFix + ' = ''' + CorrectedValue + ''' where zBarcode = ' + cast(zBarcode as varchar(20)) + ' and PERSON_NUMBER = ' + cast(Person_No as varchar(20)) + ';'+ CHAR(13) as MyVal from CY1Errors where Command = 'UPDATE' for XML PATH(''), type).value('.','varchar(max)'), 1, 0, '')
--select @SQL

--First is the updates
select @SQL = @SQL + stuff (( select 'Update dbo.' + case when ERRORMESSAGE LIKE 'ED%' then 'EDUCATION_CY1' when ERRORMESSAGE LIKE 'O14%' then 'FOURTEEN_ALL_CY1' when ERRORMESSAGE LIKE 'HL%' then 'HH_LISTING_CY1' when ERRORMESSAGE LIKE 'HL%' then 'HH_ALL_CY1' end + ' set ' + ColumnsToFix + ' = NULL' + + ' where zBarcode = ' + cast(zBarcode as varchar(20)) + ' and PERSON_NUMBER = ' + cast(Person_No as varchar(20)) + ';'+ CHAR(13) as MyVal from CY1Errors where Command = 'UPDATE' and CorrectedValue = 'NULL' for XML PATH(''), type).value('.','varchar(max)'), 1, 0, '')
--select @SQL

--Now we append the deletes
select @SQL = @SQL + stuff (( select 'DELETE dbo.' 
+ case when ERRORMESSAGE LIKE 'ED%' then 'EDUCATION_CY1' when ERRORMESSAGE LIKE 'O14%' then 'FOURTEEN_ALL_CY1' when ERRORMESSAGE LIKE 'HL%' then 'HH_LISTING_CY1' when ERRORMESSAGE LIKE 'HL%' then 'HH_ALL_CY1' end + ' where zBarcode = ' + cast(zBarcode as varchar(20)) + ' and PERSON_NUMBER = ' + cast(Person_No as varchar(20)) + ';'+ CHAR(13) as MyVal from CY1Errors where Command = 'DELETE' for XML PATH(''), type).value('.','varchar(max)'), 1, 0, '')--Now we append the insertsselect @SQL = @SQL + stuff (( select 'INSERT INTO dbo.' + case when ERRORMESSAGE LIKE 'ED%' then 'EDUCATION_CY1' + +' VALUES('+ coalesce(CAST(ED_THREE AS varchar(10)),'99999') + ',' + coalesce(CAST(ED_Four AS varchar(10)),'99999') + ',' + coalesce(CAST(ED_Five AS varchar(10)),'99999') + ',' + coalesce(CAST(ED_Six AS varchar(10)),'99999')+ ',' + coalesce(CAST(Person_No AS varchar(7)),'99999') + ',' + coalesce(CAST(zBarcode AS varchar(15)),'99999')+ ')' /*when ERRORMESSAGE LIKE 'O14%' then 'FOURTEEN_ALL_CY1' + ' VALUES('+ coalesce(CAST(BatchNo AS varchar(7)),'99999') + ' , ' + coalesce(CAST(zBarcode AS varchar(15)),'99999') + ' , ' + coalesce(CAST(HHBarcode AS varchar(7)),'99999') + ' , ''' + coalesce(CAST(Person_No AS varchar(7)),'99999') + ''' , ' + coalesce(CAST(District AS varchar(7)),'99999') + ' , '+ coalesce(CAST(URBAN_RURAL AS varchar(7)),'99999') + ' , ' + coalesce(CAST(Cluster AS varchar(7)),'99999') + ' , ' + coalesce(CAST(HHNUM AS varchar(7)),'99999') + ' , ' + coalesce(CAST(Final_Result_Code AS varchar(7)),'99999') + ' , ' + coalesce(CAST(Q1_5 AS varchar(7)),'99999') + ' , ' + coalesce(CAST(Q1_5_KNS AS varchar(7)),'99999') + ' , ' + coalesce(CAST(Q1_7 AS varchar(7)),'99999') + ' , '''+ coalesce(CAST(Q1_8 AS varchar(7)),'99999') + ''' , '+ coalesce(CAST(Q1_9 AS varchar(7)),'99999') + ' , ' + coalesce(CAST(Q1_9_2 AS varchar(7)),'99999') + ' , ' + coalesce(CAST(Q1_10a AS varchar(7)),'99999') + ' , ' + coalesce(CAST(Q1_10b AS varchar(7)),'99999') + ' , ' + coalesce(CAST(Q1_10c AS 
varchar(7)),'99999') + ' , ' + coalesce(CAST(Q1_10_Total AS varchar(7)),'99999') + ' , ' + coalesce(CAST(Q1_11 AS varchar(7)),'99999') + ' , ' + coalesce(CAST(Q1_12 AS varchar(7)),'99999') + ' , ' + coalesce(CAST(Q1_13 AS varchar(7)),'99999') + ' , ' + coalesce(CAST(Q1_14 AS varchar(7)),'99999') + ' , ' + coalesce(CAST(Q1_15 AS varchar(7)),'99999') + ' , ' + coalesce(CAST(Q1_15a AS varchar(7)),'99999') + ' , ' + coalesce(CAST(Q1_16 AS varchar(7)),'99999') + ' , ''' + coalesce(CAST(Q1_17 AS varchar(7)),'99999') + ''' , ' + coalesce(CAST(Q1_18 AS varchar(7)),'99999') + ' , '+ coalesce(CAST(Q1_19 AS varchar(7)),'99999') + ' , ' + coalesce(CAST(Q1_20 AS varchar(7)),'99999') + ' , ' + coalesce(CAST(Q1_21 AS varchar(7)),'99999') + ' , ''' + coalesce(CAST(Q1_22 AS varchar(7)),'99999')+ ''' , ''' + coalesce(CAST(Q1_23 AS varchar(7)),'99999') + ''' , ' + coalesce(CAST(Q1_24 AS varchar(7)),'99999') + ' , ' + coalesce(CAST(Q1_25 AS varchar(7)),'99999') + ' , ' + coalesce(CAST(Q1_26 AS varchar(7)),'99999') + ' , ' + coalesce(CAST(Q1_27 AS varchar(7)),'99999') + ' , ' + coalesce(CAST(Q1_28_MAIN AS varchar(7)),'99999') + ' , ' + coalesce(CAST(Q1_28_OTHER AS varchar(7)),'99999') + ' , ' + coalesce(CAST(Q1_28_PREVIOUS AS varchar(7)),'99999') + ' , ' + coalesce(CAST(Q1_29_SEASONAL AS varchar(7)),'99999') + ' , ' + coalesce(CAST(Q1_29_YEARROUND AS varchar(7)),'99999') + ' , '''+ coalesce(CAST(Q1_30_MAIN AS varchar(7)),'99999') +''' , ''' + coalesce(CAST(Q1_30_OTHER AS varchar(7)),'99999') + ''' , ''' + coalesce(CAST(Q1_30_PREVIOUS AS varchar(7)),'99999')+ ''' , ''' + coalesce(CAST(Q1_31_MAIN AS varchar(7)),'99999') + ''' , ''' + coalesce(CAST(Q1_31_OTHER AS varchar(7)),'99999') + ''' , '''+ coalesce(CAST(Q1_31_Previous_AJ AS varchar(7)),'99999')+ ''' , ' + coalesce(CAST(Q1_32_MAIN AS varchar(7)),'99999') + ' , ' + coalesce(CAST(Q1_32_OTHER AS varchar(7)),'99999') + ' , ' + coalesce(CAST(Q1_32_TOTAL AS varchar(7)),'99999') + ' , ' + coalesce(CAST(Q1_33_MAIN AS varchar(7)),'99999') + 
' , ' + coalesce(CAST(Q1_33_OTHER AS varchar(7)),'99999') + ' , ' + coalesce(CAST(Q1_33_TOTAL AS varchar(7)),'99999') + ' , ' + coalesce(CAST(Q1_35 AS varchar(7)),'99999') + ' , ' + coalesce(CAST(Q1_37 AS varchar(7)),'99999') + ' , ' + coalesce(CAST(Q1_38 AS varchar(7)),'99999') + ' , ''' + coalesce(CAST(Q1_39 AS varchar(7)),'99999') + ''' , ''' + coalesce(CAST(Q1_40 AS varchar(7)),'99999') + ''' , ' + coalesce(CAST(Q1_41 AS varchar(7)),'99999') + ' , ' + coalesce(CAST(Q1_41_DKNS AS varchar(7)),'99999') + ' , ' + coalesce(CAST(Q1_41a AS varchar(7)),'99999') + ' , ' + coalesce(CAST(Q1_41b AS varchar(7)),'99999') + ' , ' + coalesce(CAST(Q1_41b_DKNS AS varchar(7)),'99999') + ' , ' + coalesce(CAST(Q1_42 AS varchar(7)),'99999') + ' , ' + coalesce(CAST(Q1_42_DKNS AS varchar(7)),'99999') + ' , ' + coalesce(CAST(Q1_43 AS varchar(7)),'99999') + ' , ' + coalesce(CAST(Q1_44 AS varchar(7)),'99999') + ')' */ when ERRORMESSAGE LIKE 'HL%' then 'HH_LISTING_CY1'+ ' VALUES('''+ coalesce(CAST(Person_No AS varchar(7)),'99999') + ''' , ' + coalesce(CAST(HL_Three AS varchar (10)),'99999') + ' , ' + coalesce(CAST(HL_Four AS varchar (10)),'99999') + ' , ' + coalesce(CAST(HL_Five AS varchar(10)),'99999') + ' , '+ coalesce(CAST(HL_Six AS varchar(10)),'99999')+ ' , ' + coalesce(CAST(HL_Seven AS varchar (10)),'99999') + ' , ' + coalesce(CAST(HL_SevenB AS varchar (10)),'99999') + ' , ' + coalesce(CAST(zBarcode AS varchar(15)),'99999') + ' )' when ERRORMESSAGE LIKE 'HL%' then 'HH_ALL_CY1'+ ' VALUES('+ coalesce(CAST(BatchNo AS varchar(8)),'99999') + ' , ' + coalesce(CAST(Cluster AS varchar (10)),'99999') + ' , ' + coalesce(CAST(HHNUM AS varchar(10)),'99999')+ ' , ' + coalesce(CAST(zBarcode AS varchar(15)),'99999') + ' , ' + coalesce(CAST(District AS varchar(10)),'99999')+ ' , ' + coalesce(CAST(URBAN_RURAL AS varchar(10)),'99999') + ' , ' + coalesce(CAST(Final_Result_Code AS varchar(10)),'99999') + ' , ' + coalesce(CAST(Person_No AS varchar(7)),'99999') + ' )' end + ';' + CHAR(13) as MyVal from 
CY1Errors where Command = 'INSERT' for XML PATH(''), type).value('.','varchar(max)'), 1, 0, '') ;print @SQL--select @SQLexec (@SQL)[/code]In the output screen the following is shown[code="plain"]Update dbo.EDUCATION_CY1 set ED_THREE = NULL where zBarcode = 133593 and PERSON_NUMBER = 5;Update dbo.EDUCATION_CY1 set ED_THREE = NULL where zBarcode = 135443 and PERSON_NUMBER = 2;Update dbo.EDUCATION_CY1 set ED_THREE = NULL where zBarcode = 135153 and PERSON_NUMBER = 4;Update dbo.EDUCATION_CY1 set ED_FOUR = NULL where zBarcode = 135153 and PERSON_NUMBER = 4;Update dbo.EDUCATION_CY1 set ED_FIVE = NULL where zBarcode = 135153 and PERSON_NUMBER = 4;Update dbo.EDUCATION_CY1 set ED_THREE = NULL where zBarcode = 135092 and PERSON_NUMBER = 7;Update dbo.EDUCATION_CY1 set ED_FOUR = NULL where zBarcode = 135092 and PERSON_NUMBER = 7;Update dbo.EDUCATION_CY1 set ED_FIVE = NULL where zBarcode = 135092 and PERSON_NUMBER = 7;Update dbo.EDUCATION_CY1 set ED_THREE = NULL where zBarcode = 135092 and PERSON_NUMBER = 8;Update dbo.EDUCATION_CY1 set ED_FOUR = NULL where zBarcode = 135092 and PERSON_NUMBER = 8;Update dbo.EDUCATION_CY1 set ED_FIVE = NULL where zBarcode = 135092 and PERSON_NUMBER = 8;Update dbo.EDUCATION_CY1 set ED_THREE = NULL where zBarcode = 134996 and PERSON_NUMBER = 3;Update dbo.EDUCATION_CY1 set ED_FOUR = NULL where zBarcode = 134996 and PERSON_NUMBER = 3;Update dbo.EDUCATION_CY1 set ED_FIVE = NULL where zBarcode = 134996 and PERSON_NUMBER = 3;Update dbo.EDUCATION_CY1 set ED_THREE = NULL where zBarcode = 134095 and PERSON_NUMBER = 3;Update dbo.EDUCATION_CY1 set ED_FOUR = NULL where zBarcode = 134095 and PERSON_NUMBER = 3;Update dbo.EDUCATION_CY1 set ED_FIVE = NULL where zBarcode = 134095 and PERSON_NUMBER = 3;Update dbo.EDUCATION_CY1 set ED_THREE = NULL where zBarcode = 134095 and PERSON_NUMBER = 4;Update dbo.EDUCATION_CY1 set ED_FOUR = NULL where zBarcode = 134095 and PERSON_NUMBER = 4;Update dbo.EDUCATION_CY1 set ED_FIVE = NULL where zBarcode = 134095 and PERSON_NUMBER 
= 4;
Update dbo.EDUCATION_CY1 set ED_THREE = NULL where zBarcode = 134095 and PERSON_NUMBER = 5;
Update dbo.EDUCATION_CY1 set ED_FOUR = NULL where zBarcode = 134095 and PERSON_NUMBER = 5;
Update dbo.EDUCATION_CY1 set ED_FIVE = NULL where zBarcode = 134095 and PERSON_NUMBER = 5;
Update dbo.EDUCATION_CY1 set ED_THREE = NULL where zBarcode = 134095 and PERSON_NUMBER = 6;
Update dbo.EDUCATION_CY1 set ED_FOUR = NULL where zBarcode = 134095 and PERSON_NUMBER = 6;
Update dbo.EDUCATION_CY1 set ED_FIVE = NULL where zBarcode = 134095 and PERSON_NUMBER = 6;
Update dbo.EDUCATION_CY1 set ED_THREE = NULL where zBarcode = 134095 and PERSON_NUMBER = 7;
Update dbo.EDUCATION_CY1 set ED_FOUR = NULL where zBarcode = 134095 and PERSON_NUMBER = 7;
Update dbo.EDUCATION_CY1 set ED_FIVE = NULL where zBarcode = 134095 and PERSON_NUMBER = 7;
Update dbo.EDUCATION_CY1 set ED_THREE = NULL where zBarcode = 134095 and PERSON_NUMBER = 8;
Update dbo.EDUCATION_CY1 set ED_FOUR = NULL where zBarcode = 134095 and PERSON_NUMBER = 8;
Update dbo.EDUCATION_CY1 set ED_FIVE = NULL where zBarcode = 134095 and PERSON_NUMBER = 8;
Update dbo.EDUCATION_CY1 set ED_THREE = NULL where zBarcode = 133777 and PERSON_NUMBER = 3;
Update dbo.EDUCATION_CY1 set ED_FOUR = NULL where zBarcode = 133777 and PERSON_NUMBER = 3;
Update dbo.EDUCATION_CY1 set ED_FIVE = NULL where zBarcode = 133777 and PERSON_NUMBER = 3;
Update dbo.EDUCATION_CY1 set ED_THREE = NULL where zBarcode = 133821 and PERSON_NUMBER = 3;
Update dbo.EDUCATION_CY1 set ED_THREE = NULL where zBarcode = 135153 and PERSON_NUMBER = 4;
Update dbo.EDUCATION_CY1 set ED_FOUR = NULL where zBarcode = 135153 and PERSON_NUMBER = 4;
Update dbo.EDUCATION_CY1 set ED_FIVE = NULL where zBarcode = 135153 and PERSON_NUMBER = 4;
Update dbo.EDUCATION_CY1 set ED_THREE = NULL where zBarcode = 135092 and PERSON_NUMBER = 7;
Update dbo.EDUCATION_CY1 set ED_FOUR = NULL where zBarcode = 135092 and PERSON_NUMBER = 7;
Update dbo.EDUCATION_CY1 set ED_FIVE = NULL where zBarcode = 135092 and PERSON_NUMBER = 7;
Update dbo.EDUCATION_CY1 set ED_THREE = NULL where zBarcode = 135092 and PERSON_NUMBER = 8;
Update dbo.EDUCATION_CY1 set ED_FOUR = NULL where zBarcode = 135092 and PERSON_NUMBER = 8;
Update dbo.EDUCATION_CY1 set ED_FIVE = NULL where zBarcode = 135092 and PERSON_NUMBER = 8;
Update dbo.EDUCATION_CY1 set ED_THREE = NULL where zBarcode = 134996 and PERSON_NUMBER = 3;
Update dbo.EDUCATION_CY1 set ED_FOUR = NULL where zBarcode = 134996 and PERSON_NUMBER = 3;
Update dbo.EDUCATION_CY1 set ED_FIVE = NULL where zBarcode = 134996 and PERSON_NUMBER = 3;
Update dbo.EDUCATION_CY1 set ED_THREE = NULL where zBarcode = 134095 and PERSON_NUMBER = 3;
Update dbo.EDUCATION_CY1 set ED_THREE = NULL where zBarcode = 134095 and PERSON_NUMBER = 4;
Update dbo.EDUCATION_CY1 set ED_THREE = NULL where zBarcode = 134095 and PERSON_NUMBER = 5;
Update dbo.EDUCATION_CY1 set ED_THREE = NULL where zBarcode = 134095 and PERSON_NUMBER = 6;
Update dbo.EDUCATION_CY1 set ED_THREE = NULL where zBarcode = 134095 and PERSON_NUMBER = 7;
Update dbo.EDUCATION_CY1 set ED_THREE = NULL where zBarcode = 134095 and PERSON_NUMBER = 8;
Update dbo.EDUCATION_CY1 set ED_THREE = NULL where zBarcode = 133777 and PERSON_NUMBER = 3;
Update dbo.EDUCATION_CY1 set ED_THREE = NULL where zBarcode = 133821 and PERSON_NUMBER = 3;
Update dbo.HH_LISTING_CY1 set HL_FIVE = NULL where zBarcode = 133289 and PERSON_NUMBER = 1;
Update dbo.HH_LISTING_CY1 set HL_FIVE = NULL where zBarcode = 133289 and PERSON_NUMBER = 2;
Update dbo.HH_LISTING_CY1 set HL_FIVE = NULL where zBarcode = 133289 and PERSON_NUMBER = 3;
Update dbo.HH_LISTING_CY1 set HL_FIVE = NULL where zBarcode = 133289 and PERSON_NUMBER = 4;
Update dbo.HH_LISTING_CY1 set HL_FIVE = NULL where zBarcode = 133289 and PERSON_NUMBER = 5;
Update dbo.HH_LISTING_CY1 set HL_FIVE = NULL where zBarcode = 133289 and PERSON_NUMBER = 6;
Update dbo.HH_LISTING_CY1 set HL_FIVE = NULL where zBarcode = 133418 and PERSON_NUMBER = 1;
Update dbo.HH_LISTING_CY1 set HL_FIVE = NULL where zBarcode = 133418 and PERSON_NUMBER = 2;
Update dbo.HH_LISTING_CY1 set HL_FIVE = NULL where zBarcode = 133418 and PERSON_NUMBER = 3;
Update dbo.HH_LISTING_CY1 set HL_FIVE = NULL where zBarcode = 133418 and PERSON_NUMBER = 4;
Update dbo.HH_LISTING_CY1 set HL_FIVE = NULL where zBarcode = 133418 and PERSON_NUMBER = 5;
Update dbo.HH_LISTING_CY1 set HL_FIVE = NULL where zBarcode = 133340 and PERSON_NUMBER = 1;
Update dbo.HH_LISTING_CY1 set HL_FIVE = NULL where zBarcode = 133340 and PERSON_NUMBER = 2;
Update dbo.HH_LISTING_CY1 set HL_FIVE = NULL where zBarcode = 133340 and PERSON_NUMBER = 3;
Update dbo.HH_LISTING_CY1 set HL_FIVE = NULL where zBarcode = 133258 and PERSON_NUMBER = 5;
Update dbo.HH_LISTING_CY1 set HL_SEVEN = NULL where zBarcode = 133289 and PERSON_NUMBER = 4;
Update dbo.HH_LISTING_CY1 set HL_SEVEN = NULL where zBarcode = 133289 and PERSON_NUMBER = 3;
Update dbo.HH_LISTING_CY1 set HL_THREE = NULL where zBarcode = 133920 and PERSON_NUMBER = 1;
Update dbo.HH_LISTING_CY1 set HL_THREE = NULL where zBarcode = 133920 and PERSON_NUMBER = 2;
Update dbo.HH_LISTING_CY1 set HL_THREE = NULL where zBarcode = 133920 and PERSON_NUMBER = 3;
Update dbo.HH_LISTING_CY1 set HL_SEVEN = NULL where zBarcode = 133289 and PERSON_NUMBER = 4;
Update dbo.HH_LISTING_CY1 set HL_SEVEN = NULL where zBarcode = 133289 and PERSON_NUMBER = 3;
Update dbo.HH_LISTING_CY1 set HL_SEVEN = NULL where zBarcode = 133708 and PERSON_NUMBER = 2;
Update dbo.HH_LISTING_CY1 set HL_THREE = NULL where zBarcode = 133708 and PERSON_NUMBER = 2;
Update dbo.HH_LISTING_CY1 set HL_SEVEN = NULL where zBarcode = 134040 and PERSON_NUMBER = 2;
Update dbo.HH_LISTING_CY1 set HL_THREE = NULL where zBarcode = 134040 and PERSON_NUMBER = 2;
Update dbo.HH_LISTING_CY1 set HL_SEVEN = NULL where zBarcode = 133715 and PERSON_NUMBER = 2;
Update dbo.HH_LISTING_CY1 set HL_THREE = NULL where zBarcode = 133715 and PERSON_NUMBER = 2;
Update dbo.HH_LISTING_CY1 set HL_SEVEN = NULL where zBarcode = 134828 and PERSON_NUMBER = 2;
Update dbo.HH_LISTING_CY1 set HL_SEVEN = NULL where zBarcode = 134828 and PERSON_NUMBER = 1;
Update dbo.FOURTE[/code]

However, when I check, there are rows missing, so I know the statements are somehow being cut off. Any assistance will be appreciated. I am really stuck.

Massive slowdown in SELECT statement in Cursor from SQL 2000 to 2008 -- need help!

Posted: 28 Jun 2013 04:46 AM PDT

I have a SELECT statement in a cursor in a stored procedure that ran extremely fast in SQL Server 2000. I just migrated our database to SQL Server 2008 R2 (SP2) for a customer, and the same SELECT now takes fifty times longer to execute. I have logging that measures this performance, which is how I found out. The database on the new 2008 R2 server is set to 2008 compatibility; the slow performance occurs in both 2000 and 2008 compatibility modes.

Some notes: I need to keep the SELECT statement inside the cursor. It calculates the number of cross sales for an employee in a given day, which is then used to figure out what incentive earnings they get for the cross sales. The SVAccountsHistory table has three million rows; SVCrossSales has 16,000; SVSales has 74,000; SVAccounts has 90,000. The joins between tables use the primary keys of these tables. I have a number of other SELECT statements that run just fine for the incentive calculations I do elsewhere in the stored procedure, so it's just this SELECT in particular. My guess is that SQL 2008 doesn't like the DATEDIFF comparison or the COUNT(DISTINCT ...) calculation. It's extremely maddening, I have to say. I'm at my wit's end, and desperately need to figure out what SQL 2008 is choking on in comparison to SQL 2000.

Thank you for your help! Here's the SELECT statement:

[code="sql"]SELECT @tmpCalcValue1 = COUNT(DISTINCT SVCrossSalesHistory.SalesNum) * @EmpRuleValue1
FROM SVCrossSalesHistory
INNER JOIN SVSales ON (
    SVCrossSalesHistory.INum = SVSales.INum AND
    SVCrossSalesHistory.SalesNum = SVSales.SalesNum AND
    SVSales.SaleDate = @tmpDate AND  -- Go back to the day of consideration
    SVSales.OrgNum = @OrgNum AND     -- Selected emp
    SVSales.DeleteFlg = 'N')
INNER JOIN SVGoalProdXref ON (
    SVSales.INum = SVGoalProdXref.INum AND
    SVSales.ProdNum = SVGoalProdXref.ProdNum AND
    SVGoalProdXref.GoalNum = @GoalNum AND  -- Go against all the prods for the selected goal
    SVGoalProdXref.DeleteFlg = 'N')
INNER JOIN SVAccounts ON (
    SVSales.INum = SVAccounts.INum AND
    SVSales.FullAcctNum = SVAccounts.FullAcctNum AND
    -- If the account was closed, determine if the minimum # active days was met during the life of the account
    (DATEDIFF(DD, OpenDate, CASE WHEN CloseDate = '1/1/1900' THEN NULL ELSE CloseDate END) > @EmpRuleValue2 OR  -- @EmpRuleValue2 = 'x # days active'
     DATEDIFF(DD, OpenDate, CASE WHEN CloseDate = '1/1/1900' THEN NULL ELSE CloseDate END) IS NULL))
INNER JOIN SVAccountsHistory ON (
    SVAccounts.INum = SVAccountsHistory.INum AND
    SVAccounts.FullAcctNum = SVAccountsHistory.FullAcctNum AND
    SVAccountsHistory.HistoryDate = @StartTime AND  -- Today
    SVAccountsHistory.Balance > ISNULL(@EmpRuleValue5, 0))  -- '$' value in 'balance > $'
WHERE SVCrossSalesHistory.INum = @INum AND
    SVCrossSalesHistory.CSFlg = 'Y' AND     -- Must be a cross sale
    SVCrossSalesHistory.IsNewCustFlg = 'Y'  -- Consider new customers only[/code]

Group every X rows up to Y

Posted: 27 Jun 2013 09:24 PM PDT

Dear SQL Experts,

I need to housekeep a large table basically composed of an ID and a creation date (not timestamp) column. However, I don't have direct write permissions on the database, so I need to use a batch job program to do it. This program accepts as a parameter the maximum retention days (i.e., it will keep all data from today minus X days and delete everything older).

[code="plain"]StoreID; Date
123; 01/18/2013
124; 01/18/2013
125; 01/18/2013
126; 01/19/2013
127; 01/19/2013
128; 01/19/2013
...[/code]

The application server has a memory limitation, so I need to lower the retention-days parameter gradually. Tests showed that the batch job can load and delete 250,000 rows at a time. Therefore, I need to group the rows into groups of up to 250,000 and know the finishing (max) date of each group. Could you please help me?

Thanks in advance.
Best Regards,
Darthurfle
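One hedged way to do this (an illustrative sketch, not from the original post): number the rows oldest-first with ROW_NUMBER, bucket them in groups of 250,000, and read off each bucket's maximum date. `BigTable` is a placeholder for the real table name.

```sql
-- Sketch, SQL Server 2005+. StoreID and Date are the columns shown in the post.
;WITH Numbered AS
(
    SELECT [Date],
           ROW_NUMBER() OVER (ORDER BY [Date], StoreID) AS rn
    FROM BigTable
)
SELECT (rn - 1) / 250000 AS BatchNo,      -- 0-based group of up to 250,000 rows
       MAX([Date])       AS BatchEndDate, -- last date covered by this group
       COUNT(*)          AS RowsInBatch
FROM Numbered
GROUP BY (rn - 1) / 250000
ORDER BY BatchNo;
```

One caveat: because many rows share the same date, a single date can straddle two batches, so it is safer to set each retention-days parameter from the previous group's end date rather than the current one, keeping every run at or under 250,000 rows.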

Named Instance SQL2008R2

Posted: 28 Jun 2013 07:37 AM PDT

I am trying to install SQL Server 2008 R2 Enterprise edition on Server2. Can I use an existing instance name (suppose INST01 already exists on Server1)? When I try to use the same instance name during installation I get an 'access denied' error, even though the user has local admin access.

Using RAISERROR with Dynamic SQL

Posted: 28 Jun 2013 04:16 AM PDT

Can anyone provide me a short example of how to implement this? I am not able to do it and need to. Any and all help will be greatly appreciated! Back to Google!
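In the absence of more detail, here is a minimal hedged sketch of one common pattern: run the dynamic SQL inside TRY/CATCH and re-raise the failure with RAISERROR. The statement text is just an illustration.

```sql
DECLARE @sql NVARCHAR(MAX) = N'SELECT 1/0;';  -- deliberately fails
DECLARE @msg NVARCHAR(2048);

BEGIN TRY
    EXEC sp_executesql @sql;
END TRY
BEGIN CATCH
    -- build the message first: RAISERROR does not allow inline concatenation
    SET @msg = N'Dynamic SQL failed: ' + ERROR_MESSAGE();
    RAISERROR (@msg, 16, 1);  -- severity 16 = user-correctable error
END CATCH
```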

Same two select statments show different sort order on a result

Posted: 28 Jun 2013 06:21 AM PDT

Hello there,

I have the same query running on two different database servers with identical views; the views have the same data. When I execute the query on one server it returns the first column (A_Id) in ascending order; on the other server the same query against the same view returns unsorted results.

[code="sql"]SELECT COUNT(DISTINCT Acc_nt.A_Id) AS AliasOne, Acc_nt.A_ID AS AliasTwo
FROM Acc_nt
WHERE Acc_nt.StateNum = 1 AND caseyear = 2013
GROUP BY Acc_nt.AccID[/code]

Acc_nt is a view. What do you think the problem is?
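For what it's worth, without an ORDER BY clause SQL Server makes no guarantee about row order; any ordering observed is a side effect of the execution plan and can legitimately differ between servers. A hedged fix is simply to state the ordering (the query below reproduces the one posted, with an explicit sort added):

```sql
SELECT COUNT(DISTINCT Acc_nt.A_Id) AS AliasOne,
       Acc_nt.A_ID                 AS AliasTwo
FROM   Acc_nt
WHERE  Acc_nt.StateNum = 1
  AND  caseyear = 2013
GROUP BY Acc_nt.AccID
ORDER BY AliasTwo;  -- explicit ordering; never rely on plan-dependent order
```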

Restore a msdb database to a different server - sql server 2008 R2

Posted: 28 Jun 2013 05:33 AM PDT

Hi, we are currently setting up a new test environment which will be an exact copy of our production server. Is it possible to take a backup of the msdb database from the source server and restore it to the new destination server? I am thinking of this as a way to import all the jobs from the old server to the new one. To my knowledge this can be done since I will have the same version of SQL Server. Can somebody help me understand it better? :-)

Some input needed on developing a process...

Posted: 28 Jun 2013 02:53 AM PDT

We have a lot of applications running in the background that constantly send requests to tables in SQL Server 2008, and we also have about 100 users accessing a web application that uses the same database as its backend. I would like some input on developing a process-queue system based on CPU usage for the application requests. For example, an application A that runs an intensive query would first check the CPU usage and, if the usage is high, would not run. Any input would be greatly appreciated.
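One hedged alternative to checking CPU in application code: SQL Server 2008 (Enterprise) ships Resource Governor, which can cap CPU for the background applications at the server instead. This is only a sketch; the pool, group, and login names are invented for illustration.

```sql
-- Sketch only: names are illustrative. Run in master.
CREATE RESOURCE POOL BackgroundPool
    WITH (MAX_CPU_PERCENT = 30);   -- cap applies when CPU is under contention
CREATE WORKLOAD GROUP BackgroundGroup
    USING BackgroundPool;
GO
CREATE FUNCTION dbo.fnClassifier() RETURNS SYSNAME
WITH SCHEMABINDING
AS
BEGIN
    -- route the background apps' login (hypothetical name) to the capped group
    IF SUSER_SNAME() = N'background_app_login'
        RETURN N'BackgroundGroup';
    RETURN N'default';
END
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```

The design trade-off versus an application-side queue: the governor throttles rather than refuses work, so intensive queries still complete, just without starving the web users.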

SQL 2008 - Migrating job schedules

Posted: 27 Jun 2013 10:29 PM PDT

How do I move job schedules from SQL Server 2000/2005 to SQL Server 2008? I have already moved the jobs themselves (sysjobs).

SQLServer Replication The process could not connect to Distributor .

Posted: 26 Jun 2013 06:31 PM PDT

Good day. We configured pull replication, and I checked the number of rows stored in the replicated tables: the numbers match. But when I look at the properties of the subscriptions, the above-mentioned message is displayed. We use SQL Server security. I was even able to sign on to the distribution database using the same account. The account has all the required permissions, and there are no entries in SQL Server's log showing failed logins. Any ideas?

Replicate pending transactions manually

Posted: 25 Jun 2013 09:27 PM PDT

Hi all, I have configured transactional replication between two servers in different locations. It was working fine until now, but because of a connectivity issue the replication has stopped and new transactions are not being applied to the subscriber. Is there any way I can manually replicate the pending transactions to the subscriber? I can access the server by Remote Desktop / TeamViewer. Is there any option to export the pending transactions into a file and upload it to the subscriber? Please help... Thanks

Partitioning Existing table with Non clustered index on Date column

Posted: 25 Jun 2013 05:40 PM PDT

Hi All,

I have a table with 10 lakh (1,000,000) records. I partitioned that table on the CreatedDate column with a nonclustered index (I am not removing the clustered index on the ID column; it is part of the primary key). It is inserting data into the relevant partition only. But when I verify whether the table is partitioned using the steps below — in Object Explorer --> Database --> TestDB --> Tables --> select the partitioned table, right-click, Properties --> Storage — I see:

[b]File Group = Primary
Table Partitioned = False[/b]

If I create the partition with a clustered index, it shows correctly: [b]Table Partitioned = True[/b]. But I am creating it with a nonclustered index. Can anyone explain: is the table partitioned or not? And how can I know which partition data is coming from (without using $PARTITION)?

Below is the example partition script which follows the steps I used for the original table.

[code="sql"]CREATE TABLE tblPartition
(
    ID int PRIMARY KEY IDENTITY(1,1),
    Name varchar(30),
    CreatedDate datetime
)

INSERT INTO tblPartition (Name, CreatedDate)
SELECT 'Name1', '2013-05-26 13:53:47.650' UNION ALL
SELECT 'Name2', '2013-05-26 13:53:47.650' UNION ALL
SELECT 'Name1', '2013-06-26 13:53:47.650' UNION ALL
SELECT 'Name2', '2013-06-26 13:53:47.650' UNION ALL
SELECT 'Name1', '2013-07-26 13:53:47.650' UNION ALL
SELECT 'Name2', '2013-07-26 13:53:47.650'
GO

CREATE PARTITION FUNCTION [PartitionFunction](datetime)
AS RANGE RIGHT FOR VALUES (N'2013-05-31 23:59:59', N'2013-06-30 23:59:59', N'2013-07-31 23:59:59')

CREATE PARTITION SCHEME [PartitionScheme]
AS PARTITION [PartitionFunction] TO ([FGNdf10], [FGNdf11], [FGNdf12], [PRIMARY])

CREATE NONCLUSTERED INDEX [IX_PartitionScheme_CreatedDate]
ON [dbo].[tblPartition] ([CreatedDate])
WITH (SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF)
ON [PartitionScheme]([CreatedDate])
GO

SELECT * FROM tblPartition
WHERE $PARTITION.PartitionFunction(CreatedDate) = 1
GO[/code]

Thanks,
PRR
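As an aside (my understanding, worth verifying): the SSMS "Table Partitioned" property reflects the base data — the heap or clustered index — so a table whose clustered index sits on PRIMARY shows False even when a nonclustered index on it is partitioned. A hedged sketch for seeing per-partition row counts of each index via the catalog views, without $PARTITION:

```sql
-- Row counts per partition for every index on the table (names from the post)
SELECT i.name AS index_name,
       p.partition_number,
       p.rows
FROM   sys.partitions AS p
JOIN   sys.indexes    AS i
       ON  i.object_id = p.object_id
       AND i.index_id  = p.index_id
WHERE  p.object_id = OBJECT_ID('dbo.tblPartition')
ORDER  BY i.index_id, p.partition_number;
```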

Index Rebuild Failed - Need More Information

Posted: 24 Jun 2013 12:24 AM PDT

I have an index rebuild job that has been failing. It fails on the same table/index every time. Looking at the history of the job, I see entries like "Rebuild - [PK_CCX] [SQLSTATE 01000] (Message 0)", and then the very last entry is "Rebuild - [PK_CCW_TOX] [SQLSTAT... The step failed." I need to find more information on this failure. I have checked the SQL Agent error logs and there is not a whole lot there, maybe 25 entries total. I then tried the SQL Server error logs, searched for the index, and nothing came back. I have queried the table to see if there is some data out of place in the two columns that make up the PK, and everything seems correct. To add to this, the table is not very big (which almost makes me wonder why it needs to be rebuilt, but I can worry about that once I get this corrected). Can anyone point me in the right direction on where to look for more information on why this index rebuild is failing? Any and all advice will be greatly appreciated. Thank you.

DMV sys.dm_db_missing_index_group_stats returns no rows

Posted: 05 Oct 2011 10:31 AM PDT

Hi All,

I am using a script to review missing indexes on various SQL Servers. Most of the time it works just fine. Sometimes, however, certain DMVs that are part of the script just don't return any rows. Usually the main issue is sys.dm_db_missing_index_group_stats.

1. I know that restarting the service clears DMVs - that's not it :-)
2. I know I need the VIEW SERVER STATE permission - that's not it either :-)
3. I know I need actual missing indexes: SELECT * FROM sys.dm_db_missing_index_details returns 573 rows, yet SELECT TOP 1 * FROM sys.dm_db_missing_index_group_stats returns 0 rows.

This renders all the various missing-index scripts useless, because the JOIN doesn't work and so the script doesn't return any results. I keep running into servers with this issue, and so far only restarting the service has resolved it. Data collection starts from scratch, and within just a few minutes I get data in sys.dm_db_missing_index_group_stats. Check back a few days, weeks or sometimes months later, and the DMV again returns no results - same user, same DB, etc. I have Googled the heck out of this issue and can't find a solution. It is frustrating, since missing indexes are often a major issue of the applications I am troubleshooting on many different SQL Servers. Any help would be greatly appreciated :-)

Can use clr function in select but not in update

Posted: 27 Jun 2013 08:33 PM PDT

I have a complicated CLR function that does a formula calculation. The variable values of these formulas come from different databases, and that is the reason for the CLR function. Now, with the following T-SQL:

[code="sql"]SELECT fncFormula(1, 2013, 6, 28) AS Value[/code]

I get a result within 1 second. When I want to update a table with that result, or just dump the result into a temp table, it takes about 2 minutes and then returns a value of 0. It seems that SQL Server does not trust the result of the CLR function, but what is the use if I can't use the value? The CLR permission level is set to EXTERNAL_ACCESS and the database TRUSTWORTHY property is on. Is there something else I am missing here?

Inserting Results from SPROC into table

Posted: 28 Jun 2013 12:55 AM PDT

I was finally able to develop this stored procedure; it runs fine and returns the results, but I would like to add an INSERT INTO clause that will load the values into a table I could use for additional querying and joins. Thoughts?

[code="sql"]DECLARE @Start_Date DATETIME = NULL
DECLARE @Part_Type_MP VARCHAR(1000) = ''

SET @Part_Type_MP = ',' + @Part_Type_MP + ','
SET @Start_Date = ISNULL(@Start_Date, DATEADD(yy, 50, GETDATE()))
SET @Start_Date = DATEADD(ms, -3, DATEADD(day, 1, DATEADD(dd, DATEDIFF(dd, 0, @Start_Date), 0)))

DECLARE @sqlQuery NVARCHAR(MAX),
        @finalQuery NVARCHAR(MAX),
        @q CHAR(1) = ''''

SET @sqlQuery = N'SELECT ' +
    'PLK.Part_no, ' +
    'PLK.line_item_key, ' +
    'PLK.unit_price ' +
    'FROM ' +
    '(SELECT p.part_no, pli.unit_price, MAX(PLI.Line_Item_Key) AS [Line_Item_Key] ' +
    'FROM Purchasing_v_Line_Item_e AS PLI ' +
    'JOIN Part_v_Part_e AS P ' +
    'ON p.plexus_customer_no = pli.plexus_customer_no ' +
    'AND p.part_key = pli.part_key ' +
    'WHERE pli.add_date <= ' + @q + @q + CONVERT(VARCHAR(30), @Start_Date, 120) + @q + @q +
    ' AND (' + @q + @q + @Part_Type_MP + @q + @q + ' = '''',,'''' OR (CHARINDEX('''','''' + CAST(P.Part_Type as VARCHAR(50)) + '''','''', ' + @q + @q + @Part_Type_MP + @q + @q + ') >0))' +
    'GROUP by p.part_no, pli.unit_price ' +
    ') AS PLK'

-- SET @finalQuery = N'INSERT INTO dbo.tblActualPrice ' + CHAR(10)
SET @finalQuery = N'SELECT ' +
    'part_no AS Part, ' +
    'unit_price AS ActualCost ' +
    'FROM OPENQUERY (PLEXREPORTSERVER, ''' + @SQLQuery + ''')'

EXEC (@finalQuery)[/code]
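If the target table already exists, the commented-out line in the script hints at the usual route; a hedged sketch is to capture the dynamic query's result set with INSERT ... EXEC (the column list is an assumption about dbo.tblActualPrice's definition):

```sql
-- Capture the dynamic query's result set into a table (names from the post)
INSERT INTO dbo.tblActualPrice (Part, ActualCost)
EXEC (@finalQuery);

-- Afterwards the rows are available for further querying and joins:
SELECT Part, ActualCost FROM dbo.tblActualPrice;
```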

TSQL Challenge

Posted: 27 Jun 2013 08:39 PM PDT

[b]Taken from TSQL Challenge[/b]

The challenge idea is taken from a problem discussed in the MSDN T-SQL forum. The challenge is to find the islands (gaps) in sequential dates. You need to write a query to identify continuous intervals from the start date and end date. For example,

01/01/2012 - 01/17/2012
01/18/2012 - 02/20/2012

The above two intervals should be considered as 01/01/2012 - 02/20/2012.

[b]Rules[/b]

The output should be ordered by PatientID, AdmissionDate. For any patient there will be no overlapping date intervals.

[b]Sample data[/b]

There is a table which maintains the patient admission and discharge information. Each admission comes in as one new record, but when there is a continuous interval, you should show it as a single row in the output.

[code="plain"]PatientID AdmissionDate DischargeDate Cost
--------- ------------- ------------- -------
709       2011-07-27    2011-07-31    450.00
709       2011-08-01    2011-08-23    2070.00
709       2011-08-31    2011-08-31    90.00
709       2011-09-01    2011-09-14    1260.00
709       2011-12-01    2011-12-31    2790.00
1624      2011-06-07    2011-06-28    1980.00
1624      2011-06-29    2011-07-31    2970.00
1624      2011-08-01    2011-08-02    180.00[/code]

[b]Expected Results[/b]

[code="plain"]PatientID AdmissionDate DischargeDate Cost
--------- ------------- ------------- -------
709       2011-07-27    2011-08-23    2520.00
709       2011-08-31    2011-09-14    1350.00
709       2011-12-01    2011-12-31    2790.00
1624      2011-06-07    2011-08-02    5130.00[/code]

[b]Sample Script[/b]

[code="sql"]IF OBJECT_ID('TC79','U') IS NOT NULL
BEGIN
    DROP TABLE TC79
END
GO

CREATE TABLE TC79
(
    PatientID INT,
    AdmissionDate DATETIME,
    DischargeDate DATETIME,
    Cost MONEY
)
GO

INSERT INTO TC79 (PatientID, AdmissionDate, DischargeDate, Cost)
SELECT 709, '2011-07-27', '2011-07-31', 450.00 UNION ALL
SELECT 709, '2011-08-01', '2011-08-23', 2070.00 UNION ALL
SELECT 709, '2011-08-31', '2011-08-31', 90.00 UNION ALL
SELECT 709, '2011-09-01', '2011-09-14', 1260.00 UNION ALL
SELECT 709, '2011-12-01', '2011-12-31', 2790.00 UNION ALL
SELECT 1624, '2011-06-07', '2011-06-28', 1980.00 UNION ALL
SELECT 1624, '2011-06-29', '2011-07-31', 2970.00 UNION ALL
SELECT 1624, '2011-08-01', '2011-08-02', 180.00

SELECT * FROM TC79
GO[/code]
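Not an official solution, just a sketch of the classic gaps-and-islands shape, assuming SQL Server 2012+ for LAG and the windowed running SUM: flag each row that starts a new island (its admission is more than one day after the previous discharge), turn the running flag total into an island number, then aggregate per island.

```sql
;WITH Flagged AS
(
    SELECT *,
           -- 1 when this admission does NOT continue the previous stay
           CASE WHEN DATEADD(DAY, 1,
                     LAG(DischargeDate) OVER (PARTITION BY PatientID
                                              ORDER BY AdmissionDate)) >= AdmissionDate
                THEN 0 ELSE 1 END AS IsStart
    FROM TC79
),
Grouped AS
(
    SELECT *,
           SUM(IsStart) OVER (PARTITION BY PatientID
                              ORDER BY AdmissionDate
                              ROWS UNBOUNDED PRECEDING) AS Island
    FROM Flagged
)
SELECT PatientID,
       MIN(AdmissionDate) AS AdmissionDate,
       MAX(DischargeDate) AS DischargeDate,
       SUM(Cost)          AS Cost
FROM Grouped
GROUP BY PatientID, Island
ORDER BY PatientID, AdmissionDate;
```

The first row per patient gets IsStart = 1 automatically, because LAG returns NULL and the comparison is unknown.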

Need SQL Function ?

Posted: 27 Jun 2013 07:33 PM PDT

Dear all, hope things are going well at your end. I need a SQL function covering the following scenarios. Input parameter: ShiftDate.

Scenario 1: 1st day of the month through the input date (output: 01-06-2013, 02-06-2013, ... 28-06-2013).
Scenario 2: 1st day of the week through the last day of the week (week start date and end date: 23-06-2013, 29-06-2013).
Scenario 3: 1st day of the year through the input date (01-01-2013, 02-01-2013, ... 01-02-2013, ... 28-06-2013).

Please help with this.
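A hedged sketch for Scenario 1 only (assuming SQL Server 2012+ for DATEFROMPARTS; the function name is invented): an inline table-valued function that returns every date from the 1st of the month through @ShiftDate. Scenarios 2 and 3 follow the same shape with a different anchor date.

```sql
CREATE FUNCTION dbo.fnMonthToDate (@ShiftDate DATE)
RETURNS TABLE
AS
RETURN
WITH Dates AS
(
    -- anchor: first day of @ShiftDate's month
    SELECT DATEFROMPARTS(YEAR(@ShiftDate), MONTH(@ShiftDate), 1) AS D
    UNION ALL
    SELECT DATEADD(DAY, 1, D) FROM Dates WHERE D < @ShiftDate
)
SELECT D AS ShiftDate FROM Dates;
```

Usage: `SELECT * FROM dbo.fnMonthToDate('2013-06-28');`. The recursion never exceeds 31 levels, so it stays under the default MAXRECURSION limit of 100.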

Updating huge table

Posted: 27 Jun 2013 10:24 PM PDT

Hi All, I have two tables, VoiceData and VoiceData_History; below are the row counts of the tables.

VoiceData -- 230 million (23 crore)
VoiceData_History -- 8 million (80 lakh)

Now I want to update the Archive_id column in VoiceData based on the VoiceData_History table. Below is the update statement:

[code="sql"]UPDATE VD
SET    Archive_id = VDH.Archive_id
FROM   VoiceData VD
JOIN   VoiceData_History VDH ON VD.ID = VDH.ID[/code]

I have a [b]nonclustered index on the Archive_id[/b] column and a clustered index on the ID column in both tables.

Note: the matched data between the two tables is 80 lakh (8 million) rows.

[b]What is the best way to update those records? If I run the above statement it locks the data table, and this is a production DB.[/b]
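A common hedged approach for a production table this size is to update in small batches so each transaction's locks stay short; sketch below, under the assumption that rows still to be processed have a NULL or stale Archive_id.

```sql
DECLARE @Rows INT = 1;
WHILE @Rows > 0
BEGIN
    UPDATE TOP (10000) VD               -- small batch = short-lived locks
    SET    Archive_id = VDH.Archive_id
    FROM   VoiceData VD
    JOIN   VoiceData_History VDH
           ON VD.ID = VDH.ID
    WHERE  VD.Archive_id IS NULL        -- assumption: marks unprocessed rows
       OR  VD.Archive_id <> VDH.Archive_id;
    SET @Rows = @@ROWCOUNT;
END
```

The WHERE clause is what lets the loop terminate (and resume safely after an interruption); a short WAITFOR DELAY between batches can further reduce pressure on concurrent workload.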

Get min and max dates based on end flag

Posted: 26 Jun 2013 04:52 AM PDT

Hi, I have a result set which produces an EndFlag for an activity. The result set looks like:

CustNo, ActivityStartDate, ActivityEndDate, ActivityEnd

ActivityEnd is derived: if the gap between the current ActivityEndDate and the next row's ActivityStartDate is > 7 hours, then ActivityEnd = 1. Now I need to generate a final result set with CustNo, MIN(ActivityStartDate), MAX(ActivityEndDate) for each ActivityEnd = 1. That is, for each row with ActivityEnd = 1, I need MIN(ActivityStartDate) over the rows whose ActivityStartDate <= that row's ActivityEndDate. I am struggling to get this done. Can anyone give an example?

Sample result set and expected output below:

[code="plain"]CustNo ActivityStartDate ActivityEndDate  DiffHours ActivityEnd
42     12/21/2006 11:35  12/21/2006 13:40  1        0
42     12/21/2006 14:10  12/21/2006 16:30  1        0
42     12/21/2006 17:00  12/21/2006 18:15  0        0
42     12/21/2006 18:45  12/21/2006 20:00  33       1
42     12/23/2006 07:00  12/23/2006 10:00  0        0
42     12/23/2006 10:30  12/23/2006 13:35  15       1
42     12/24/2006 07:00  12/24/2006 10:00  0        0
42     12/24/2006 10:30  12/24/2006 13:35  1        0
42     12/24/2006 14:30  12/24/2006 19:30  5        0
42     12/25/2006 00:00  12/26/2006 00:00  0        1[/code]

Need output as:

[code="plain"]CustNo ActivityStartDate ActivityEndDate
42     12/21/2006 11:35  12/21/2006 20:00
42     12/23/2006 07:00  12/23/2006 13:35
42     12/24/2006 07:00  12/26/2006 00:00[/code]
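A sketch under the assumption of SQL Server 2012+ windowing (the intermediate result set is named #Activity here for illustration): the running total of ActivityEnd over strictly preceding rows assigns a group number — every row up to and including the next flag shares a group — then MIN/MAX per group gives the output above.

```sql
;WITH Grouped AS
(
    SELECT CustNo, ActivityStartDate, ActivityEndDate,
           -- count of completed groups among STRICTLY earlier rows
           ISNULL(SUM(ActivityEnd) OVER (PARTITION BY CustNo
                                         ORDER BY ActivityStartDate
                                         ROWS BETWEEN UNBOUNDED PRECEDING
                                                  AND 1 PRECEDING), 0) AS Grp
    FROM #Activity
)
SELECT CustNo,
       MIN(ActivityStartDate) AS ActivityStartDate,
       MAX(ActivityEndDate)   AS ActivityEndDate
FROM Grouped
GROUP BY CustNo, Grp
ORDER BY CustNo, ActivityStartDate;
```

Worked against the sample: the first four rows all see a preceding flag total of 0, so they form one group ending 12/21 20:00; the next flag bumps the total to 1, starting the 12/23 group, and so on.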
