Tuesday, March 12, 2013

[how to] List DB2 backups

List DB2 backups

Posted: 12 Mar 2013 08:22 PM PDT

Is there a way to list the DB2 backups? All I can find is db2 list history backup all for, but I think you then have to check through all of those entries to see whether the backup images have since been deleted. This seems like a simple question, but I'm coming up blank.
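
A sketch of one alternative, assuming DB2 9.5 or later where the SYSIBMADM administrative views are available; it reads the same recovery history file that LIST HISTORY does:

SELECT start_time, location, sqlcode
FROM sysibmadm.db_history
WHERE operation = 'B'          -- 'B' = backup entries
ORDER BY start_time DESC;

Like LIST HISTORY, this only reports what the history file records; it cannot tell you whether a backup image still exists on disk.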

How do I remove duplicate records in a join table in psql?

Posted: 12 Mar 2013 08:13 PM PDT

I have a table that has a schema like this:

create_table "questions_tags", :id => false, :force => true do |t|          t.integer "question_id"          t.integer "tag_id"        end          add_index "questions_tags", ["question_id"], :name => "index_questions_tags_on_question_id"        add_index "questions_tags", ["tag_id"], :name => "index_questions_tags_on_tag_id"  

I would like to remove records that are duplicates, i.e. they have both the same tag_id and question_id as another record.

What does the SQL look like for that?
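
A minimal sketch of one common PostgreSQL idiom, using the system column ctid to tell otherwise-identical rows apart (the table has no id column, per the schema above):

DELETE FROM questions_tags a
USING questions_tags b
WHERE a.ctid < b.ctid
  AND a.question_id = b.question_id
  AND a.tag_id = b.tag_id;

Adding a unique index on (question_id, tag_id) afterwards would prevent new duplicates.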

Strategies for organising SQL Server with large amount of RAM

Posted: 12 Mar 2013 05:24 PM PDT

We now have a new server for our DB, and amongst other things we have 128GB of RAM available (previously I had 16GB). I know SQL Server is very good at managing its resources, but I was wondering if there are any special settings or strategies that I should employ, in either the server/DB settings or the processing code (stored procs/indexes etc.), to ensure that SQL Server takes best advantage of the available RAM.

The DB is about 70GB and it's a non-transactional DB (it's a data warehouse). So basically a large WRITE followed by massive READs is the normal flow of things.
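
One setting almost always worth reviewing on a large-RAM box is max server memory, so the OS keeps some headroom. A sketch; the 110 GB figure is an assumption, not a recommendation for this specific server:

EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
-- leave roughly 18 GB for the OS and anything else on the host
EXEC sys.sp_configure 'max server memory (MB)', 112640;
RECONFIGURE;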

Leaf nodes for averages

Posted: 12 Mar 2013 03:53 PM PDT

I have the following MDX tuple calculation for my KPI in Dashboard Designer:

AVG([Person].[Person].children,[Measures].[Hours])  

This works perfectly when, for instance, I view it by Team name.

However, when I view it by the [Person] it's returning no values. Does AVG not work when you're looking directly at the leaf nodes or something? Or is there something else I'm doing wrong?

Specify Server for DBMS_Scheduler Job in Policy Managed RAC

Posted: 12 Mar 2013 02:33 PM PDT

A unit test requires a dbms_scheduler job to run on the same RAC node as the unit test is being run from. I know that with an Admin managed database this could be done by creating a service that limited the available instances and then using that service in a job class the job uses. My question is, how can this be done in 11.2 with policy management?
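
For reference, a sketch of the admin-managed pattern described above; the service and job class names are hypothetical:

BEGIN
  DBMS_SCHEDULER.create_job_class(
    job_class_name => 'node1_class',   -- hypothetical
    service        => 'node1_svc'      -- a service limited to the desired instance
  );
  -- jobs created with job_class => 'NODE1_CLASS' then run where that service runs
END;
/

The question is how to get an equivalent single-server service under policy management.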

Pools can be created that have only a single server in them and databases can be assigned to multiple pools, but as I understand it, a server can only be assigned to a single pool. Therefore, a service can't be created that uses a single server and still have other services that use a pool defined with multiple servers including that one.

I also know that services can be created as either SINGLETON or UNIFORM, but since SINGLETON doesn't provide for allowed servers or even preferred servers, I'm not sure how this would help.

Surely I am missing something that makes this all possible.

After streaming replication has failed, how to get it back again?

Posted: 12 Mar 2013 01:08 PM PDT

I have a similar problem to this: Replication has failed; how to get going on again?

Essentially my slave failed, and now complains "requested WAL segment 0000000100000135000000E4 has already been removed"

In my case I have done a full base backup again, as per the instructions at http://wiki.postgresql.org/wiki/Binary_Replication_Tutorial#Binary_Replication_in_6_Steps. So I shut the master down, did a full rsync, started up the slave, then started the master. And I still get the same error.
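
One knob that commonly prevents this error from recurring is WAL retention on the master; a sketch, where the value is an assumption to be sized against your write rate and the time the slave may lag:

# postgresql.conf on the master
wal_keep_segments = 1024   # keep more WAL around so a lagging slave can still catch up

Alternatively, WAL archiving plus a restore_command on the slave removes the dependency on segments still being in pg_xlog.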

What is a good, repeatable way to calculate MAXDOP on SQL Server?

Posted: 12 Mar 2013 05:48 PM PDT

When setting up a new SQL Server 2012, I use the following code to determine a good starting point for the MAXDOP setting:

/* If this instance is hosting a Sharepoint database, you MUST specify
   MAXDOP=1 according to http://blogs.msdn.com/b/rcormier/archive/2012/10/25/you-shall-configure-your-maxdop-when-using-sharepoint-2013.aspx */

DECLARE @CoreCount int;
DECLARE @NumaNodes int;

SET @CoreCount = (SELECT i.cpu_count FROM sys.dm_os_sys_info i);
SET @NumaNodes = (SELECT MAX(c.memory_node_id) + 1
                  FROM sys.dm_os_memory_clerks c
                  WHERE memory_node_id < 64);

IF @CoreCount > 4 /* If less than 5 cores, don't bother... */
BEGIN
    DECLARE @MaxDOP int;
    SET @MaxDOP = @CoreCount * 0.75;
    IF @MaxDOP > (@CoreCount / @NumaNodes)
        SET @MaxDOP = (@CoreCount / @NumaNodes);
    PRINT 'Suggested MAXDOP = ' + CAST(@MaxDOP as varchar(max));
END

I realize this is a bit subjective and can vary based on many things; however, I'm attempting to create a tight, catch-all piece of code to use as a starting point for a new server.

Does anyone have any input on this code?

UNION is slow but both queries are fast in separate

Posted: 12 Mar 2013 02:57 PM PDT

I don't know what else to do about this one. I have one table that has a start and a stop column, and I want to return its results joined both by start and by stop, with a clear distinction between the two. Both queries run fast separately:

SELECT
    UNIX_TIMESTAMP(CONVERT_TZ(start_dev, '+00:00', GetCarrierTimezone(a0.carrier_id))) AS alertStart,
    NULL AS alertStop,
    c0.name AS carrier_name,
    carrier_image,
    l0.Latitude,
    l0.Longitude
FROM
    carriers AS c0
        INNER JOIN start_stop AS a0 ON a0.carrier_id = c0.id
            INNER JOIN pcoarg AS l0 ON a0.startLogId = l0.id
WHERE
        FIND_IN_SET(a0.carrier_id, '89467,1,64578,222625,45013') > 0
    AND
        start_dev > '2013-03-11 11:46:48'
    AND
        start_dev = (SELECT MIN(start_dev) FROM start_stop AS a1 WHERE a0.carrier_id = a1.carrier_id AND DATE(a1.start_dev) = DATE(a0.start_dev))
    AND IsNotificationInSchedule(22, start_dev) > 0

So this one takes 0.063 seconds. But if I combine the two in a UNION (it doesn't matter whether it's UNION ALL or DISTINCT), it takes about 0.400 seconds.

SELECT * FROM
(
    (
        SELECT
            UNIX_TIMESTAMP(CONVERT_TZ(start_dev, '+00:00', GetCarrierTimezone(a0.carrier_id))) AS alertStart,
            NULL AS alertStop,
            c0.name AS carrier_name,
            carrier_image,
            l0.Latitude,
            l0.Longitude
        FROM
            carriers AS c0
                INNER JOIN start_stop AS a0 ON a0.carrier_id = c0.id
                    INNER JOIN pcoarg AS l0 ON a0.startLogId = l0.id
        WHERE
                FIND_IN_SET(a0.carrier_id, '89467,1,64578,222625,45013') > 0
            AND
                start_dev > '2013-03-11 11:46:48'
            AND
                start_dev = (SELECT MIN(start_dev) FROM start_stop AS a1 WHERE a0.carrier_id = a1.carrier_id AND DATE(a1.start_dev) = DATE(a0.start_dev))
            AND IsNotificationInSchedule(22, start_dev) > 0
    ) UNION ALL (
        SELECT
            NULL AS alertStart,
            UNIX_TIMESTAMP(CONVERT_TZ(stop_dev, '+00:00', GetCarrierTimezone(a0.carrier_id))) AS alertStop,
            c0.name AS carrier_name,
            carrier_image,
            l0.Latitude,
            l0.Longitude
        FROM
            start_stop AS a0
                INNER JOIN carriers AS c0 ON a0.carrier_id = c0.id
                    INNER JOIN pcoarg AS l0 ON a0.stopLogId = l0.id
        WHERE
                FIND_IN_SET(a0.carrier_id, '89467,1,64578,222625,45013') > 0
            AND
                stop_dev > '2013-03-11 11:46:48'
            AND
                stop_dev = (SELECT MAX(stop_dev) FROM start_stop AS a1 WHERE a0.carrier_id = a1.carrier_id AND DATE(a1.stop_dev) = DATE(a0.stop_dev))
            AND IsNotificationInSchedule(22, start_dev) > 0
    )
) AS startStops
ORDER BY IF(alertStart IS NULL, alertStop, alertStart)

Here is EXPLAIN on single query:

1  PRIMARY             c0  ALL  PRIMARY                                                                                                             17  Using where
1  PRIMARY             a0  ref  PRIMARY,startstop_carriers_stopdev_idx,georefidx,startstop_carriers_startdev_idx  startstop_carriers_stopdev_idx   4  test_backoffice.c0.id          72  Using where
1  PRIMARY             l0  ref  id ASC  id ASC  4  test_backoffice.a0.startLogId   1  Using where
2  DEPENDENT SUBQUERY  a1  ref  PRIMARY,startstop_carriers_stopdev_idx,georefidx,startstop_carriers_startdev_idx  startstop_carriers_stopdev_idx   4  test_backoffice.a0.carrier_id  72  Using where; Using index

And here is the EXPLAIN for the UNION:

1  PRIMARY                 system                                                                                                                    0  const row not found
2  DERIVED             c0  ALL  PRIMARY                                                                                                             17  Using where
2  DERIVED             a0  ref  PRIMARY,startstop_carriers_stopdev_idx,georefidx,startstop_carriers_startdev_idx  startstop_carriers_stopdev_idx   4  test_backoffice.c0.id          72  Using where
2  DERIVED             l0  ref  id ASC  id ASC  4  test_backoffice.a0.startLogId   1  Using where
3  DEPENDENT SUBQUERY  a1  ref  PRIMARY,startstop_carriers_stopdev_idx,georefidx,startstop_carriers_startdev_idx  startstop_carriers_stopdev_idx   4  test_backoffice.a0.carrier_id  72  Using where; Using index
4  UNION               c0  ALL  PRIMARY                                                                                                             17  Using where
4  UNION               a0  ref  PRIMARY,startstop_carriers_stopdev_idx,georefidx,startstop_carriers_startdev_idx  startstop_carriers_stopdev_idx   4  test_backoffice.c0.id          72  Using where
4  UNION               l0  ref  id ASC  id ASC  4  test_backoffice.a0.stopLogId    1  Using where
5  DEPENDENT SUBQUERY  a1  ref  PRIMARY,startstop_carriers_stopdev_idx,georefidx,startstop_carriers_startdev_idx  startstop_carriers_stopdev_idx   4  test_backoffice.a0.carrier_id  72  Using where; Using index
   UNION RESULT            ALL

Help on this one would be greatly appreciated. :)

EDIT:

I'm getting inconsistent results. If I remove the CONVERT_TZ, for example, and fetch the timezone outside the union, I get very fast results; but if I alias the result column, it drops back to the same underperforming query:

SELECT
    *,
    GetCarrierTimezone(carrier_id) timezone
FROM
(

this takes 0.374s

SELECT
    *,
    GetCarrierTimezone(carrier_id)
FROM
(

while this takes 0.078 seconds (mostly the lag from the DB to my machine).

Are there any good open source tools for DB objects end user manipulation?

Posted: 12 Mar 2013 12:15 PM PDT

I've recently been tasked with providing some object-adjustment features for our end users: simple things like changing the value of two or three fields in specific, well-known business tables, without the need to call the IT department, and with some logging and auditing for our most paranoid managers.

Is there any software that already does this? (gather the table schema and data from another DB and provide a user with really simple adjusting capabilities)

Feature requests keep piling up: support for SQL Server and Oracle databases, auditing both automatically and by user choice, running processes, and so on.

Any good tools that can provide this way of meta/highlevel simple database interaction?

Using pgAdmin SQL Editor to execute a master file containing multiple sql files

Posted: 12 Mar 2013 08:59 PM PDT

I'm using pgAdminIII SQL Editor to develop a really long script. I'd like to break the script into smaller, more manageable scripts and include each sql file in a master file, then just execute the master file.

example: master.sql

contents (I don't know the syntax to use):

file1.sql
file2.sql
file3.sql

I've found several tutorials about using psql -f in the command-line and \i to include these files, but I'd rather use a GUI to execute my scripts while I develop and test locally.

Is this possible? Any references/documentation would be very helpful.

EDIT: For clarity, I'm not asking about better products to use other than pgAdmin (unless the product can do what I'm asking above), nor am I asking how to do this in psql - I already have documentation for that and I don't want to use the command line. Preference is for executing the master.sql script file in a sql editor.

Sum Up values in a query based on other information

Posted: 12 Mar 2013 03:05 PM PDT

I am trying to grab the sum of two columns when another column's value is the same. I currently have a record set that looks like this:

(screenshot of the record set omitted)

I get these results by running this statement:

select distinct a.eventnum, a.num_cust, a.out_minutes, d.xpers
FROM mv_outage_duration a
INNER JOIN mv_aeven d
ON d.Num_1 = a.eventnum
and (d.DEV_NAME = 'T007F12127')
and d.rev_num = (select max(rev_num)
                 from mv_aeven d
                 where a.eventnum = d.Num_1)
group by a.eventnum, a.num_cust, a.out_minutes, d.xpers

How do I get the sum of Num_cust and Out_Minutes for the records whose eventnum is the same?

I'd like to return one and only one row for each event number, and if there is more than one step, I'd like to add up the Num_cust and Out_minutes for each step.

I've tried

select distinct a.eventnum, a.num_cust, a.out_minutes, d.xpers, sum(a.Num_cust)
FROM mv_outage_duration a
INNER JOIN mv_aeven d
ON d.Num_1 = a.eventnum
and (d.DEV_NAME = 'T007F12127')
and d.rev_num = (select max(rev_num)
                 from mv_aeven d
                 where a.eventnum = d.Num_1)
group by a.eventnum, a.num_cust, a.out_minutes, d.xpers

and it just returns the same results with a new column, sum(a.num_cust).

(screenshot of the result set with the extra sum column omitted)

I also tried

select distinct a.eventnum, a.num_cust, a.out_minutes, d.xpers,
       sum(select Num_cust from mv_outage_duration a where a.eventnum = d.num_1)
FROM mv_outage_duration a
INNER JOIN mv_aeven d
ON d.Num_1 = a.eventnum
and (d.DEV_NAME = 'T007F12127')
and d.rev_num = (select max(rev_num)
                 from mv_aeven d
                 where a.eventnum = d.Num_1)
group by a.eventnum, a.num_cust, a.out_minutes, d.xpers

...but that just wouldn't run at all.

Here are some statements to set everything up:

Create table mv_outage_duration (eventnum, num_cust, out_minutes, restore_dts, off_dts, cause, feeder, dev_name)
create table mv_aeven (Num_1, rev_num, xpers, weather_code, completion_remarks)

Insert into "mv_outage_duration" (EVENTNUM,NUM_CUST,OUT_MINUTES,RESTORE_DTS,OFF_DTS,CAUSE,FEEDER,DEV_NAME) values ('T00000000133',79,11,'20130307085914CS','20130307084811CS','10','17FL012011','T007F12127');
Insert into "mv_outage_duration" (EVENTNUM,NUM_CUST,OUT_MINUTES,RESTORE_DTS,OFF_DTS,CAUSE,FEEDER,DEV_NAME) values ('T00000000133',61,13,'20130307090200CS','20130307084811CS','10','17FL012011','T007F12127');
Insert into "mv_outage_duration" (EVENTNUM,NUM_CUST,OUT_MINUTES,RESTORE_DTS,OFF_DTS,CAUSE,FEEDER,DEV_NAME) values ('T00000000014',61,4,'20130304140400CS','20130304135945CS','09','17FL012011','T007F12127');
Insert into "mv_outage_duration" (EVENTNUM,NUM_CUST,OUT_MINUTES,RESTORE_DTS,OFF_DTS,CAUSE,FEEDER,DEV_NAME) values ('T00000000173',79,1,'20130307161532CS','20130307161424CS','01','17FL012011','T007F12127');
Insert into "mv_outage_duration" (EVENTNUM,NUM_CUST,OUT_MINUTES,RESTORE_DTS,OFF_DTS,CAUSE,FEEDER,DEV_NAME) values ('T00000000173',61,3,'20130307161800CS','20130307161424CS','01','17FL012011','T007F12127');

Insert into "mv_aeven" (NUM_1,REV_NUM,XPERS,WEATHER_CODE,COMPLETION_REMARKS) values ('T00000000014',10,796072,'LIGHTNING IN AREA','COMPLETETION REMARKS FROM TRUCK ON TOFS. ');
Insert into "mv_aeven" (NUM_1,REV_NUM,XPERS,WEATHER_CODE,COMPLETION_REMARKS) values ('T00000000014',11,796072,'NORMAL FOR SEASON','COMPLETETION REMARKS FROM TRUCK ON TOFS.');
Insert into "mv_aeven" (NUM_1,REV_NUM,XPERS,WEATHER_CODE,COMPLETION_REMARKS) values ('T00000000173',7,79607,'LIGNTNING IN AREA','wetr');
Insert into "mv_aeven" (NUM_1,REV_NUM,XPERS,WEATHER_CODE,COMPLETION_REMARKS) values ('T00000000173',6,79607,'LIGNTNING IN AREA','wetr');
Insert into "mv_aeven" (NUM_1,REV_NUM,XPERS,WEATHER_CODE,COMPLETION_REMARKS) values ('T00000000133',7,796072,'THUNDERSTORM','Testing Step Restore for Kasey');
Insert into "mv_aeven" (NUM_1,REV_NUM,XPERS,WEATHER_CODE,COMPLETION_REMARKS) values ('T00000000133',6,796072,'THUNDERSTORM','Testing Step Restore for Kasey');
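
For what it's worth, a sketch of the grouping the question seems to ask for: group only on eventnum and aggregate everything else (this assumes the rev_num subquery pins mv_aeven to one row per event, as in the original query):

select a.eventnum,
       sum(a.num_cust)    as num_cust,
       sum(a.out_minutes) as out_minutes,
       max(d.xpers)       as xpers
from mv_outage_duration a
inner join mv_aeven d
  on d.Num_1 = a.eventnum
 and d.DEV_NAME = 'T007F12127'
 and d.rev_num = (select max(rev_num) from mv_aeven x where x.Num_1 = a.eventnum)
group by a.eventnum;

The original GROUP BY listed num_cust and out_minutes themselves, which keeps each step as its own group.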

Install PostgreSQL 9.2 on Windows using WIN1252 encoding.

Posted: 12 Mar 2013 01:26 PM PDT

I had installed PostgreSQL 9.2 earlier, and it always installed with the encoding being WIN1252 (the default database was WIN1252). Some time ago I reinstalled it with the encoding being UTF8 (I don't exactly remember what I did). I am now trying to re-install PostgreSQL, this time with the encoding set back to WIN1252. I am installing PostgreSQL 9.2.2 from the installer executable and using an options file. I am setting the locale to "English, United States" and the installer language to "en". Are these the wrong values to use? Is there some internal variable I must have set to UTF8 that PostgreSQL is reading? I don't see any reference to UTF8 anywhere when I install PostgreSQL. After the install, it shows my database is UTF8 and the client_encoding variable is set to WIN1252.
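
Regardless of what the installer decides for template1, a database with the desired encoding can be created afterwards; a sketch (the Windows-style locale names are assumptions):

CREATE DATABASE mydb
  TEMPLATE template0
  ENCODING 'WIN1252'
  LC_COLLATE 'English_United States.1252'
  LC_CTYPE   'English_United States.1252';

TEMPLATE template0 is required when the new encoding differs from the cluster default.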

Can the same database be log-shipping secondary and primary at the same time?

Posted: 12 Mar 2013 11:15 AM PDT

Here is my scenario:

Database DB1 on Server1 is log shipping primary in data center.

Database DB1 on Server2 is log shipping secondary; Server2 is in remote location. Logs are shipped from data center to remote location via shared virtual Jungle Disk drive accessible both from data center and remote location via internet.

In case I fail over to Server2 I would like to have log backups as well.

So my thinking is: after configuring DB1 on Server2 as a log shipping secondary, I would then also configure it as a log shipping primary (even though these log backups won't get shipped anywhere from Server2). While database DB1 on Server2 is in "secondary" mode, the log backup job will presumably stay disabled.

Is this a valid use for log shipping?

Performing SELECT on EACH ROW in CTE or Nested QUERY?

Posted: 12 Mar 2013 11:45 AM PDT

This is a problem in PostgreSQL

I have a table which stores the tree of users;

+------+---------+
|  id  | parent  |
|------+---------|
|  1   |   0     |
|  2   |   1     |
|  3   |   1     |
|  4   |   2     |
|  5   |   2     |
|  6   |   4     |
|  7   |   6     |
|  8   |   6     |
+------+---------+

I can query a complete tree from any node by using the connectby function, and I can separately query the size of a tree in terms of the total nodes in it, for example:

tree for #1 has size 7
tree for #5 has size 0
tree for #6 has size 2, and so on

Now I want to do something like select all possible trees from this table (which is again carried out by connectby), count the size of each, and create another dataset with records of ID and size of the underlying tree, like this:

+------------------+-------------+
|  tree_root_node  |  tree_size  |
|------------------+-------------|
|      1           |     7       |
|      2           |     3       |
|      3           |     0       |
|      4           |     3       |
|      5           |     0       |
|      6           |     2       |
|      7           |     0       |
|      8           |     0       |
+------------------+-------------+

The problem is, I am unable to run the same SELECT statement for every row in the original table in order to fetch each tree and calculate its size; and even if I could, I don't know how to create a separate dataset from the fetched and calculated data.

I am not sure whether this is a simple use of some functions available in Postgres, or whether I'd have to write a function for it; I simply don't know what this kind of query is called, and googling for hours plus searching for another hour here at dba.stackexchange returned nothing.

Can someone please point me in the right direction?
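
A sketch of one way to do this without calling connectby once per row, using a recursive CTE; the table name tree is an assumption standing in for the table shown above:

WITH RECURSIVE subtree(root, node) AS (
    SELECT id, id FROM tree
  UNION ALL
    SELECT s.root, t.id
    FROM subtree s
    JOIN tree t ON t.parent = s.node
)
SELECT root         AS tree_root_node,
       COUNT(*) - 1 AS tree_size      -- subtract 1 to exclude the root itself
FROM subtree
GROUP BY root
ORDER BY root;

For the sample data this yields 7 for node 1, 0 for node 5, and 2 for node 6, matching the sizes above.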

SQL Server 2008 can't repair consistency

Posted: 12 Mar 2013 05:52 PM PDT

I have a problem with a SQL Server 2008 database.

Launching

DBCC CHECKDB  

I get this error:

SQL Server detected a logical consistency-based I/O error: incorrect checksum (expected: 0xd2e00940; actual: 0x925ef494). It occurred during a read of page (1:15215) in database ID 22 at offset 0x000000076de000 in file 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\storico_ita_tlx.mdf'. Additional messages in the SQL Server error log or system event log may provide more detail. This is a severe error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.

I found the table causing the problem:

DBCC CHECKTABLE  

Msg 824, Level 24, State 2, Line 8
SQL Server detected a logical consistency-based I/O error: incorrect checksum (expected: 0xd2e00940; actual: 0x925ef494). It occurred during a read of page (1:15215) in database ID 22 at offset 0x000000076de000 in file 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\storico_ita_tlx.mdf'. Additional messages in the SQL Server error log or system event log may provide more detail. This is a severe error condition that threatens database integrity and must be corrected immediately.

Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.

So I tried the repair operation:

DBCC CHECKTABLE (table_name, REPAIR_ALLOW_DATA_LOSS)  

but I get the same error:

Msg 824, Level 24, State 2, Line 8
SQL Server detected a logical consistency-based I/O error: incorrect checksum (expected: 0xd2e00940; actual: 0x925ef494). It occurred during a read of page (1:15215) in database ID 22 at offset 0x000000076de000 in file 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\storico_ita_tlx.mdf'. Additional messages in the SQL Server error log or system event log may provide more detail. This is a severe error condition that threatens database integrity and must be corrected immediately.

Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.

I've also tried setting the DB in SINGLE_USER mode, but with no result.

I am not able to delete or truncate the table, as I always get the same error.

The table does not have any constraints. It has one PK and one index, but I can't drop either of them.
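
Since REPAIR_ALLOW_DATA_LOSS itself dies on the broken page, one alternative (sketched here; the backup path is hypothetical) is a page-level restore from the last good full backup, followed by log backups to bring the page current:

RESTORE DATABASE storico_ita_tlx
  PAGE = '1:15215'
  FROM DISK = 'X:\backups\storico_ita_tlx_full.bak'   -- hypothetical path
  WITH NORECOVERY;
-- then restore subsequent log backups and a final tail-log backup, WITH RECOVERY

This assumes a backup from before the corruption exists and the database uses the FULL recovery model.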

How can I store a pdf in PostgreSQL

Posted: 12 Mar 2013 04:45 PM PDT

I have to store .pdf files in a table.

I have a table, state, with columns:

id_state,
name,
pdffile (bytea)

I want to store the pdf files in the pdffile column.

How can I do this?
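
A sketch of a server-side approach, assuming PostgreSQL 9.1+ and superuser rights; note that pg_read_binary_file only reads files under the server's data directory, so the file must be copied there first (the path and values are hypothetical):

INSERT INTO state (id_state, name, pdffile)
VALUES (1, 'example', pg_read_binary_file('example.pdf'));

From application code, the usual route is instead a parameterized INSERT that passes the file contents as a bytea parameter.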

How do I prevent deadlocks in my application?

Posted: 12 Mar 2013 11:53 AM PDT

I am developing an LMS application in a PHP framework (CodeIgniter 2.1.0). I am using a MySQL database, and all the tables use the InnoDB engine. I also created indexes on each table. Now I am doing load testing, locally, using JMeter 2.9 with 200 concurrent users. During the load testing, on a specific page action, I got a "Deadlock found" error. I changed my original query to a new one, but the same error still occurs.

I have written a save_interactions function which takes four parameters (interaction array, module_id, course_id, user_id) and is called many times by an AJAX script. The following script inserts the record if the specific interaction_id is not present in the table; otherwise an update query is fired.

public function save_interactions($interaction_array,$modid,$cid,$uid)
{
    foreach($interaction_array as $key=>$interact_value)
    {
        $select_query = $this->db->query("SELECT COUNT(*) AS total FROM `scorm_interactions` WHERE `mod_id`='".$modid."' AND `course_id`='".$cid."' AND `user_id`='".$uid."' AND `interaction_id`='".$interact_value[0]."'");
        $fetchRow = $select_query->row_array();

        if($fetchRow['total']==1)
        {
            $update_data = array(
                "interaction_type"=>$interact_value[1],
                "time"=>$interact_value[2],
                "weighting"=>$interact_value[3],
                "correct_response"=>$interact_value[4],
                "learner_response"=>$interact_value[5],
                "result"=>$interact_value[6],
                "latency"=>$interact_value[7],
                "objectives"=>$interact_value[8],
                "description"=>$interact_value[9]
            );
            $this->db->where('mod_id', $modid);
            $this->db->where('course_id', $cid);
            $this->db->where('user_id', $uid);
            $this->db->where('interaction_id', $interact_value[0]);
            $this->db->update('scorm_interactions', $update_data);
        }else
        {
            $insert_data = array(
                "user_id"=>$uid,
                "course_id"=>$cid,
                "mod_id"=>$modid,
                "interaction_id"=>$interact_value[0],
                "interaction_type"=>$interact_value[1],
                "time"=>$interact_value[2],
                "weighting"=>$interact_value[3],
                "correct_response"=>$interact_value[4],
                "learner_response"=>$interact_value[5],
                "result"=>$interact_value[6],
                "latency"=>$interact_value[7],
                "objectives"=>$interact_value[8],
                "description"=>$interact_value[9]
            );
            $this->db->insert('scorm_interactions', $insert_data);
        }
    }
}

I got this type of error:

Deadlock found when trying to get lock; try restarting transaction

UPDATE `scorm_interactions` SET
    `interaction_type` = 'choice',
    `time` = '10:45:31',
    `weighting` = '1',
    `correct_response` = 'Knees*',
    `learner_response` = 'Knees*',
    `result` = 'correct',
    `latency` = '0000:00:02.11',
    `objectives` = 'Question2_1',
    `description` = ''
WHERE
    `mod_id` = '4' AND
    `course_id` = '5' AND
    `user_id` = '185' AND
    `interaction_id` = 'Question2_1';

Filename: application/models/user/scorm1_2_model.php Line Number: 234

Can anyone please suggest how to avoid the deadlock?
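
One common way to shrink the deadlock window is to collapse the SELECT-then-INSERT/UPDATE pair into a single statement. A sketch; it assumes a UNIQUE key over (user_id, course_id, mod_id, interaction_id), which would have to be added:

INSERT INTO `scorm_interactions`
    (`user_id`, `course_id`, `mod_id`, `interaction_id`,
     `interaction_type`, `time`, `weighting`, `correct_response`,
     `learner_response`, `result`, `latency`, `objectives`, `description`)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
ON DUPLICATE KEY UPDATE
    `interaction_type` = VALUES(`interaction_type`),
    `time`             = VALUES(`time`),
    `weighting`        = VALUES(`weighting`),
    `correct_response` = VALUES(`correct_response`),
    `learner_response` = VALUES(`learner_response`),
    `result`           = VALUES(`result`),
    `latency`          = VALUES(`latency`),
    `objectives`       = VALUES(`objectives`),
    `description`      = VALUES(`description`);

One statement takes its locks in one go, so two sessions can no longer interleave the read and the write.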

Breaking Semisynchronous Replication in MySQL 5.5

Posted: 12 Mar 2013 12:25 PM PDT

I've set up Semisynchronous Replication between two MySQL 5.5 servers running on Windows 7.

My application runs against and updates the master server's database, and the same changes are replicated to the slave database server.

But sometimes, for unknown reasons, replication breaks.

On running the command:

SHOW STATUS LIKE 'Rpl_semi_sync%';  

It gives this status:

'Rpl_semi_sync_master_no_times', '0'
'Rpl_semi_sync_master_no_tx', '0'
'Rpl_semi_sync_master_status', 'ON'     <<-------------
'Rpl_semi_sync_master_timefunc_failures', '0'
'Rpl_semi_sync_master_tx_avg_wait_time', '338846'
'Rpl_semi_sync_master_tx_wait_time', '29479685'
'Rpl_semi_sync_master_tx_waits', '87'
'Rpl_semi_sync_master_wait_pos_backtraverse', '0'
'Rpl_semi_sync_master_wait_sessions', '0'
'Rpl_semi_sync_master_yes_tx', '3106'

Ideally, in semisynchronous replication, when the sync breaks the status should go to OFF, since the master is no longer able to receive any acknowledgement from the slave. Please help us in this regard.
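
Two settings worth checking, sketched below; semisync only falls back to asynchronous mode after the master waits out its ACK timeout, and it can switch back to ON once the slave catches up:

SHOW VARIABLES LIKE 'rpl_semi_sync_master_timeout';        -- ms the master waits for an ACK
SHOW VARIABLES LIKE 'rpl_semi_sync_master_wait_no_slave';  -- whether it keeps waiting with no slaves

If Rpl_semi_sync_master_no_tx stays at 0, the master has never actually committed without an ACK, which would explain the status remaining ON.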

Script to get duration

Posted: 12 Mar 2013 02:53 PM PDT

I am new to PostgreSQL. I am trying to write a query which gives me the duration. The fields are in the format yyyymmddhhmmss. In between, we get empty fields for Start_Time or End_Time; I want to skip those rows and get the output for the rest.

Start_Time       End_Time
20130312080535   20130312080550
20130312080018   20130312080028
20130312080030   20130312080049
20130311154049   20130311154138
20130311225510
20130311152500   20130311152538
20130311225510
20130311152539   20130311152614
20130311152740   20130311152806
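
A sketch, assuming the columns are text and the table is called log (both assumptions):

SELECT start_time,
       end_time,
       to_timestamp(end_time,   'YYYYMMDDHH24MISS')
     - to_timestamp(start_time, 'YYYYMMDDHH24MISS') AS duration
FROM   log
WHERE  coalesce(start_time, '') <> ''
  AND  coalesce(end_time,   '') <> '';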

Is there an execution difference between a JOIN condition and a WHERE condition?

Posted: 12 Mar 2013 02:40 PM PDT

Is there a performance difference between these two example queries?

Query 1:

select count(*)
from   table1 a
join   table2 b
on     b.key_col = a.key_col
where  b.tag = 'Y'

Query 2:

select count(*)
from   table1 a
join   table2 b
on     b.key_col = a.key_col
   and b.tag = 'Y'

Notice the only difference is the placement of the supplemental condition; the first uses a WHERE clause and the second adds the condition to the ON clause.

When I run these queries on my Teradata system, the explain plans are identical and the JOIN step shows the additional condition in each case. However, on this SO question regarding MySQL, one of the answers suggested that the second style is preferred because WHERE processing occurs after the joins are made.

Is there a general rule to follow when coding queries like this? I'm guessing it must be platform dependent since it obviously makes no difference on my database, but perhaps that is just a feature of Teradata. And if it is platform dependent, I'd like very much to get a few documentation references; I really don't know what to look for.

DB2 Server Table Space Locked

Posted: 12 Mar 2013 02:11 PM PDT

At work we keep receiving the following DataException seemingly at random when one of our processes tries to write/access a table for one of our clients:

com.ibm.db.DataException: A database manager error occurred. :
[IBM][CLI Driver][DB2/NT64] SQL0290N  Table space access is not allowed.  SQLSTATE=55039

Has anyone encountered this? I'm not the person who primarily does administrative tasks on our databases, but even they seem to be having difficulty finding the root of this problem. Any suggestions? This error comes up for only one of our clients at a time, and it generally seems to rotate. We have Rackspace service, but they won't be of much help unless we can provide screenshots, etc. at the exact moment this occurs.

Apologies if this post may be too vague, please let me know what information to supply to clarify things more. I'm one of the developers in my office, but I don't primarily handle the administrative tasks on our databases.

Edit: We spoke with IBM, and this could possibly be caused by some sort of virus scan being run by IBM/Rackspace as part of maintenance? They said this rather dubiously, though, so I doubt it is the culprit, because the tables remain locked for variable amounts of time.

Relational database for address model

Posted: 12 Mar 2013 12:07 PM PDT

I want to design an "Address" model for all types of entities like users, businesses, etc.

I have two types of main models: one is User and the other is Business. Each one has different address types like below.

User

1. Contact Address
2. Billing Address

Business

1. Contact Address
2. Something

So I created an address model with an addresstype column like this

Address

id
addresstype
user
addressline1
addressline2

Relationships:

  • User – One to many –> Business
  • User – One to many –> Address (User Column)

With the above relations, the addresstype and user columns relate Address to User, but a Business's address is not related to Address at all.

How can I design this one in an efficient way?
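
One common design is to keep Address free of owner columns and link each owner through its own junction table; a sketch, where the table names app_user and business are assumptions:

CREATE TABLE address (
    id           int PRIMARY KEY,
    addressline1 varchar(255),
    addressline2 varchar(255)
);

CREATE TABLE user_address (
    user_id     int NOT NULL REFERENCES app_user(id),
    address_id  int NOT NULL REFERENCES address(id),
    addresstype varchar(20) NOT NULL,   -- 'contact', 'billing', ...
    PRIMARY KEY (user_id, addresstype)
);

CREATE TABLE business_address (
    business_id int NOT NULL REFERENCES business(id),
    address_id  int NOT NULL REFERENCES address(id),
    addresstype varchar(20) NOT NULL,
    PRIMARY KEY (business_id, addresstype)
);

This avoids a polymorphic user column that can only point at one kind of owner.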

unable to login oracle as sysdba

Posted: 12 Mar 2013 06:38 PM PDT

I just got 11gR2 running and was able to connect as sysdba. I shut down and started up the database to mount a new pfile. Now I cannot log in as sysdba. My parameter for the password file is:

 *.remote_login_passwordfile='EXCLUSIVE'  

I am using sqlplus within the server. This is not a remote connection.

[oracle@oel56 ~]$ sqlplus /nolog

SQL*Plus: Release 11.2.0.1.0 Production on Tue Feb 5 22:50:46 2013

Copyright (c) 1982, 2009, Oracle.  All rights reserved.

SQL> conn / as sysdba
ERROR:
ORA-01031: insufficient privileges

Here's some more information:

[oracle@oel56 ~]$ grep -E "ine SS_DBA|ine SS_OPER" $ORACLE_HOME/rdbms/lib/config.c
#define SS_DBA_GRP "oracle"
#define SS_OPER_GRP "oracle"
[oracle@oel56 ~]$ id oracle
uid=500(oracle) gid=500(oracle) groups=500(oracle),54321(oinstall),54322(dba),54323(oper) context=user_u:system_r:unconfined_t

Is there a combination of columns in sys.dm_exec_sessions that is unique per the server?

Posted: 12 Mar 2013 04:42 PM PDT

In SQL Server, each session has its own spid. Spids are unique at any given moment, but spids, like process and thread identifiers in the OS, are recycled.

However sys.dm_exec_sessions has other columns with session metadata. Is there a combination of columns that is guaranteed to be unique for a server instance?

"connectivity libraries needed are not found" error in IBM Data Studio

Posted: 12 Mar 2013 02:37 PM PDT

UPDATE

I am getting the following error when I try to create a new database in IBM Data Studio v3.1.1.0.

The connectivity libraries that are needed for local or remote non-JDBC operations were not found. To provide these libraries, you can install IBM data server client or a local DB2 server instance.  

I have already started the instance using

db2start  

command.

After searching exhaustively, I am not able to find any help on the internet regarding this error.

How to insert into junction table using triggers

Posted: 12 Mar 2013 01:39 PM PDT

Sorry in advance if this is "basic SQL." I wanted to know how to update my junction tables automatically. For example, these are my tables.

Artist and Song are base tables and SongArtist is the junction table. Everything in SongArtist is PK and FK.

CREATE TABLE IF NOT EXISTS `Artist` (
  `artistID` INT NOT NULL AUTO_INCREMENT ,
  `artistName` VARCHAR(150) NOT NULL ,
  PRIMARY KEY (`artistID`) )
ENGINE = InnoDB

CREATE TABLE IF NOT EXISTS `Song` (
  `songName` VARCHAR(150) NOT NULL ,
  `songID` INT NOT NULL AUTO_INCREMENT ,
  PRIMARY KEY (`songID`) )
ENGINE = InnoDB

CREATE TABLE IF NOT EXISTS `SongArtist` (
  `songID` INT NOT NULL ,
  `artistID` INT NOT NULL ,
  PRIMARY KEY (`songID`, `artistID`) ,
  INDEX `fk_Artist_Artist_idx` (`artistID` ASC) ,
  INDEX `fk_Song_Song_idx` (`songID` ASC) ,
  CONSTRAINT `fk_Song_Song`
    FOREIGN KEY (`songID` )
    REFERENCES `Song` (`songID` )
    ON DELETE CASCADE
    ON UPDATE CASCADE,
  CONSTRAINT `fk_Artist_Artist`
    FOREIGN KEY (`artistID` )
    REFERENCES `Artist` (`artistID` )
    ON DELETE CASCADE
    ON UPDATE CASCADE)
ENGINE = InnoDB

I created some triggers like this, but they don't seem to work: I can't INSERT a new row when I only know one field of the junction table, because both columns are part of the PK.

CREATE TRIGGER after_song_insert AFTER INSERT ON Song
FOR EACH ROW
BEGIN
    INSERT INTO SongArtist (songID) values (songID);
END;

CREATE TRIGGER after_song_update AFTER UPDATE ON Song
FOR EACH ROW
BEGIN
    INSERT INTO SongArtist (songID) values (songID);
END;

CREATE TRIGGER after_song_delete AFTER DELETE ON Song
FOR EACH ROW
BEGIN
    DELETE FROM SongArtist (songID) values (songID);
END;
$$
DELIMITER ;

What should I do?
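
For what it's worth, a sketch of what the delete trigger could look like: inside a trigger, the affected row's values must be referenced through OLD/NEW, and a junction row cannot be meaningfully inserted from the Song side alone, since the artistID is unknown at that point:

DELIMITER $$
CREATE TRIGGER after_song_delete AFTER DELETE ON Song
FOR EACH ROW
BEGIN
    DELETE FROM SongArtist WHERE songID = OLD.songID;
END$$
DELIMITER ;

(The ON DELETE CASCADE foreign key already does this, so that trigger is redundant as defined; the insert case is normally handled in application code, once both IDs are known.)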

Data dictionary best practices in SQL Server 2008 r2

Posted: 12 Mar 2013 07:38 PM PDT

We are interested in sharing the metadata and data dictionary among the team. I know that we can use Extended Properties for this purpose, but in my experience they get out of date easily, because team members tend to forget to update them or skip this step.

I'm wondering if there is a more convenient way to create the data dictionary which can be maintained with the least amount of effort and time.

Thank you.

InnoDB - High disk write I/O on ibdata1 file and ib_logfile0

Posted: 12 Mar 2013 12:12 PM PDT

Server Specification: VPS with following info

model name  : Intel(R) Xeon(R) CPU           E5649  @ 2.53GHz
MemTotal:      2058776 kB
MemFree:        244436 kB

We are running IP.Board from Invision Power Services. We use innodb_file_per_table and have reloaded the database to reduce the ibdata1 size. However, we still have a problem with high CPU and I/O usage lately, despite the reduced ibdata1 size.

From my inspection, I believe it is caused by high I/O usage on ibdata1. Below is the data I obtained using pt-ioprofile -cell sizes (from Percona Toolkit); basically, it's the total I/O collected over a period of 30 seconds.

# pt-ioprofile -cell sizes
Fri Jul 20 10:22:23 ICT 2012
Tracing process ID 8581
     total      pread       read     pwrite      fsync       open      close   getdents      lseek      fcntl filename
   6995968          0          0    6995968          0          0          0          0          0          0 /db/mysql/ibdata1
   1019904          0          0    1019904          0          0          0          0          0          0 /db/mysql/ib_logfile0
    204800     204800          0          0          0          0          0          0          0          0 /db/mysql/admin_phpbb3forum/phpbb_posts.ibd
     49152      49152          0          0          0          0          0          0          0          0 /db/mysql/admin_ips/ips_reputation_cache.ibd
     32768      32768          0          0          0          0          0          0          0          0 /db/mysql/admin_ips/ips_reputation_totals.ibd
     29808          0          0          0          0          0          0      29808          0          0 /db/mysql/admin_ips/
... (other trivial I/O records truncated)

Running iotop, I see DISK WRITE going up and down between roughly 2M/s and 200K/s.

My question is: why do we have high write I/O on ibdata1 and ib_logfileX while we have only about 5-10 small updates per second, into our sessions tables, which are MEMORY tables anyway (only about 300K in size)? It puzzles me because there is also no equivalent write I/O on any other table file, which indicates that the write I/O is not caused by UPDATE/INSERT/DELETE.

Note that I'm only a programmer who just by chance has the duty to maintain this, so please feel free to ask for more info. I've done a lot of things to this server, but please don't assume that I have done everything I should have.

Additional info:

# ls -l /db/mysql/ib*
-rw-rw---- 1 mysql mysql  18874368 Jul 21 01:26 /db/mysql/ibdata1
-rw-rw---- 1 mysql mysql 134217728 Jul 21 01:26 /db/mysql/ib_logfile0
-rw-rw---- 1 mysql mysql 134217728 Jul 21 01:26 /db/mysql/ib_logfile1

and

mysql> SHOW VARIABLES LIKE 'innodb%';
+-------------------------------------------+------------------------+
| Variable_name                             | Value                  |
+-------------------------------------------+------------------------+
| innodb_adaptive_flushing                  | ON                     |
| innodb_adaptive_flushing_method           | estimate               |
| innodb_adaptive_hash_index                | ON                     |
| innodb_adaptive_hash_index_partitions     | 1                      |
| innodb_additional_mem_pool_size           | 20971520               |
| innodb_autoextend_increment               | 8                      |
| innodb_autoinc_lock_mode                  | 1                      |
| innodb_blocking_buffer_pool_restore       | OFF                    |
| innodb_buffer_pool_instances              | 1                      |
| innodb_buffer_pool_restore_at_startup     | 0                      |
| innodb_buffer_pool_shm_checksum           | ON                     |
| innodb_buffer_pool_shm_key                | 0                      |
| innodb_buffer_pool_size                   | 402653184              |
| innodb_change_buffering                   | all                    |
| innodb_checkpoint_age_target              | 0                      |
| innodb_checksums                          | ON                     |
| innodb_commit_concurrency                 | 0                      |
| innodb_concurrency_tickets                | 500                    |
| innodb_corrupt_table_action               | assert                 |
| innodb_data_file_path                     | ibdata1:10M:autoextend |
| innodb_data_home_dir                      |                        |
| innodb_dict_size_limit                    | 0                      |
| innodb_doublewrite                        | ON                     |
| innodb_doublewrite_file                   |                        |
| innodb_fake_changes                       | OFF                    |
| innodb_fast_checksum                      | OFF                    |
| innodb_fast_shutdown                      | 1                      |
| innodb_file_format                        | Barracuda              |
| innodb_file_format_check                  | ON                     |
| innodb_file_format_max                    | Barracuda              |
| innodb_file_per_table                     | ON                     |
| innodb_flush_log_at_trx_commit            | 2                      |
| innodb_flush_method                       | O_DIRECT               |
| innodb_flush_neighbor_pages               | 0                      |
| innodb_force_load_corrupted               | OFF                    |
| innodb_force_recovery                     | 0                      |
| innodb_ibuf_accel_rate                    | 100                    |
| innodb_ibuf_active_contract               | 1                      |
| innodb_ibuf_max_size                      | 201310208              |
| innodb_import_table_from_xtrabackup       | 0                      |
| innodb_io_capacity                        | 4000                   |
| innodb_kill_idle_transaction              | 0                      |
| innodb_large_prefix                       | OFF                    |
| innodb_lazy_drop_table                    | 0                      |
| innodb_lock_wait_timeout                  | 50                     |
| innodb_locks_unsafe_for_binlog            | OFF                    |
| innodb_log_block_size                     | 4096                   |
| innodb_log_buffer_size                    | 4194304                |
| innodb_log_file_size                      | 134217728              |
| innodb_log_files_in_group                 | 2                      |
| innodb_log_group_home_dir                 | ./                     |
| innodb_max_dirty_pages_pct                | 75                     |
| innodb_max_purge_lag                      | 0                      |
| innodb_mirrored_log_groups                | 1                      |
| innodb_old_blocks_pct                     | 37                     |
| innodb_old_blocks_time                    | 0                      |
| innodb_open_files                         | 300                    |
| innodb_page_size                          | 16384                  |
| innodb_purge_batch_size                   | 20                     |
| innodb_purge_threads                      | 1                      |
| innodb_random_read_ahead                  | OFF                    |
| innodb_read_ahead                         | linear                 |
| innodb_read_ahead_threshold               | 56                     |
| innodb_read_io_threads                    | 24                     |
| innodb_recovery_stats                     | OFF                    |
| innodb_recovery_update_relay_log          | OFF                    |
| innodb_replication_delay                  | 0                      |
| innodb_rollback_on_timeout                | OFF                    |
| innodb_rollback_segments                  | 128                    |
| innodb_show_locks_held                    | 10                     |
| innodb_show_verbose_locks                 | 0                      |
| innodb_spin_wait_delay                    | 6                      |
| innodb_stats_auto_update                  | 0                      |
| innodb_stats_method                       | nulls_equal            |
| innodb_stats_on_metadata                  | OFF                    |
| innodb_stats_sample_pages                 | 8                      |
| innodb_stats_update_need_lock             | 1                      |
| innodb_strict_mode                        | OFF                    |
| innodb_support_xa                         | ON                     |
| innodb_sync_spin_loops                    | 30                     |
| innodb_table_locks                        | ON                     |
| innodb_thread_concurrency                 | 0                      |
| innodb_thread_concurrency_timer_based     | OFF                    |
| innodb_thread_sleep_delay                 | 10000                  |
| innodb_use_global_flush_log_at_trx_commit | ON                     |
| innodb_use_native_aio                     | ON                     |
| innodb_use_sys_malloc                     | ON                     |
| innodb_use_sys_stats_table                | OFF                    |
| innodb_version                            | 1.1.8-rel27.1          |
| innodb_write_io_threads                   | 24                     |
+-------------------------------------------+------------------------+
90 rows in set (0.00 sec)

From @RolandoMySQLDBA : Please run this

SET @TimeInterval = 300;
SELECT variable_value INTO @num1 FROM information_schema.global_status
WHERE variable_name = 'Innodb_os_log_written';
SELECT SLEEP(@TimeInterval);
SELECT variable_value INTO @num2 FROM information_schema.global_status
WHERE variable_name = 'Innodb_os_log_written';
SET @ByteWrittenToLog = @num2 - @num1;
SET @KB_WL = @ByteWrittenToLog / POWER(1024,1) * 3600 / @TimeInterval;
SET @MB_WL = @ByteWrittenToLog / POWER(1024,2) * 3600 / @TimeInterval;
SET @GB_WL = @ByteWrittenToLog / POWER(1024,3) * 3600 / @TimeInterval;
SELECT @KB_WL,@MB_WL,@GB_WL;

and show the output. This will tell you how many bytes per hour are written to ib_logfile0/ib_logfile1, based on the next 5 minutes.

Result of the above SQL query (at 8 am local time, when the number of members online is about 25% of the daytime figure):

mysql> SELECT @KB_WL,@MB_WL,@GB_WL;
+--------+----------+-------------------+
| @KB_WL | @MB_WL   | @GB_WL            |
+--------+----------+-------------------+
|  95328 | 93.09375 | 0.090911865234375 |
+--------+----------+-------------------+
1 row in set (0.00 sec)

Performance implications of MySQL VARCHAR sizes

Posted: 12 Mar 2013 01:33 PM PDT

Is there a performance difference in MySQL between varchar sizes? For example, varchar(25) and varchar(64000). If not, is there a reason not to declare all varchars with the max size just to ensure you don't run out of room?

Comfortable sqlplus interface?

Posted: 12 Mar 2013 11:26 AM PDT

I find sqlplus's interface rather outdated. It's quite nice to have some commands and keywords at my disposal, but, for example, there is no up-arrow key for the previous history entry.

What is a good replacement / extension for sqlplus? Could be a GUI or better (so it stays useful via SSH) a command line utility.

SQL*Plus is the main command line tool to operate with the Oracle Database.
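
A common workaround on Linux, assuming the rlwrap readline wrapper is installed; it adds command history and line editing to any CLI, so it stays useful over SSH:

rlwrap sqlplus user@db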

[Articles] Self Describing Databases

Self Describing Databases

Posted: 11 Mar 2013 11:00 PM PDT

Today Steve Jones gives his vision of the database of the future. He hopes that databases will contain more information not only about their objects, but also about their needs, like backups and maintenance.

[SQL 2012] upgrading PowerPivot for Sharepoint from 2008R2 to 2012

upgrading PowerPivot for Sharepoint from 2008R2 to 2012

Posted: 12 Mar 2013 04:31 AM PDT

I am trying to upgrade my PowerPivot for SharePoint 2008R2 to 2012 and am struggling with many problems... I tried to follow Upgrade PowerPivot for SharePoint (http://msdn.microsoft.com/en-us/library/ee210646.aspx), which refers to Upgrade PowerPivot for SharePoint (PowerPivot Configuration Tool) (http://msdn.microsoft.com/en-us/library/hh230964.aspx). In this latter article, I am supposed to use the new "PowerPivot Configuration Tool", which I can't see in my start-up menus... Any idea how to install this tool? If I try to install "PowerPivot for SharePoint", it is rejected because there is already an instance called "POWERPIVOT" (normal, since I installed it with 2008R2). I am a bit confused... :doze:

Event Notifications

Posted: 12 Mar 2013 01:38 AM PDT

Hello,

I am trying to set up event notifications on an instance of SQL Server, in order to send me an email when a DDL statement is executed in that instance. I have created the queue, service, route and then the event notification, but am unsure what to do now.

Does anyone have any documentation on what I need to do next in order to get this working?

Thank you, Andrew
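
The usual next step is something that drains the queue; a sketch of reading one notification and pulling out the DDL text (the queue name is hypothetical):

DECLARE @msg xml;

RECEIVE TOP (1) @msg = CAST(message_body AS xml)
FROM dbo.DDLEventQueue;   -- hypothetical queue name

SELECT @msg.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)');

From there, msdb.dbo.sp_send_dbmail can email the extracted text, either from an activation procedure on the queue or from a polling job.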

NUMA and PLE on SQL Server 2012

Posted: 04 Feb 2013 11:26 PM PST

I've read both Paul's and Jonathan's blog posts regarding this issue (http://www.sqlskills.com/blogs/paul/page-life-expectancy-isnt-what-you-think/) and started looking at the PLE counters for each individual NUMA node. I can't seem to wrap my head around why there is such a widespread discrepancy between the NUMA nodes. We are running SQL Server 2012 Enterprise Core. Any insight would be greatly appreciated.

Thanks, Tommy

https://skydrive.live.com/redir?resid=EB98D18648791013!7197&authkey=!AN6DKRDzgyJlarU
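
For anyone comparing, a sketch of reading PLE per NUMA node from the DMVs:

SELECT [object_name], instance_name AS numa_node, cntr_value AS ple_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
  AND [object_name] LIKE '%Buffer Node%';

Per the linked post, the overall Buffer Manager PLE is derived from the per-node values rather than being a simple average, so one starved node can drag it down sharply.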

Multiple instances referencing same master.mdf

Posted: 11 Mar 2013 03:37 AM PDT

Okay, this one has me stumped (or I have missed something about instance installations!)

I have added some more instances on our server at work: MIRROR, STAGE and DEVELOP (these are in addition to the first installed instance, SQL_EXPRESS). Each one has a different data folder. The installations worked fine; however, they now all show the same list of databases as SQL_EXPRESS.

I have checked MSDN, and have checked the startup parameters (-d for c:\path\master.mdf) of the services for the other instances, and they are correct. I know it's reading them, because when I gave a false path it raised an error. I also checked the registry settings for each instance.

The only thing I can think of is that during installation I specified the same Instance Root Directory. I would test this theory, but I can't do a reboot. I would have thought it would have told me if I had misconfigured something? Surely each instance has the option listed because each instance can run from a separate master.mdf and therefore a separate data folder?

Thanks in advance :-)

Feature Inquiry

Posted: 11 Mar 2013 10:14 AM PDT

Hello,

Does 2012 offer any feature which allows for the collection of instance data across all my production installs? In other words, what is a good technique to centralize (for documentation):

- All SSIS packages (documentation only)
- All jobs and schedules
- All logins and database mappings
- Instance configuration parameters

I can write SQL queries, run them on each server, and centralize the results manually, or I can deploy procedures and maintain those across all instances. But I'd like to do this with as little footprint as possible, preferably none.

What I'd like is a feature where one of my instances is a repository and collects this information from the servers I choose. Does this make sense? How are people doing this, and is there something in 2012 to help?

[T-SQL] Complex SQL QUERY with DateDIFF

Complex SQL QUERY with DateDIFF

Posted: 25 Feb 2013 12:16 AM PST

Hi all,

I need to script a query to discard access-log records that do not exceed a predetermined margin (e.g. 2 seconds) over the previous record. That is, each record was made by a user at a specific date/time; I don't want to get those whose date/time does not exceed the margin compared with the date/time of the previous record. I think an example is clearer.

Example (LogIndex, UserID, Date/Time):

1. 01551, 20.02.2013 17:41:45.000
2. 01551, 20.02.2013 17:41:45.900 *
3. 01551, 20.02.2013 17:41:46.150 *
4. 01551, 20.02.2013 20:41:47.000
5. 01552, 20.02.2013 17:42:45.000
6. 01552, 20.02.2013 17:42:46.000 *
7. 01552, 20.02.2013 19:45:45.000

*: Records to discard, because their date/time does not exceed the margin of 2 seconds over the previous record. In the first case, two records should be discarded because both fail to exceed this margin.

Here's the code that creates the temporary table and adds the records above for testing:

CREATE TABLE #temp (LogIndex int, UserID nvarchar(10), LogTime datetime)
insert into #temp select 1, '01551', '20/02/2013 17:41:45.000'
insert into #temp select 2, '01551', '20/02/2013 17:41:45.900'
insert into #temp select 3, '01551', '20/02/2013 17:41:46.150'
insert into #temp select 4, '01551', '20/02/2013 20:41:47.000'
insert into #temp select 5, '01552', '20/02/2013 17:42:45.000'
insert into #temp select 6, '01552', '20/02/2013 17:42:46.000'
insert into #temp select 7, '01552', '20/02/2013 19:45:45.000'
select * from #temp
DROP TABLE #temp

Thanks in advance!
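
A sketch of one approach on SQL Server 2012+, using LAG to compare each row to the previous row for the same user; it matches the example above:

SELECT LogIndex, UserID, LogTime
FROM (
    SELECT LogIndex, UserID, LogTime,
           LAG(LogTime) OVER (PARTITION BY UserID ORDER BY LogTime, LogIndex) AS PrevTime
    FROM #temp
) x
WHERE PrevTime IS NULL
   OR DATEDIFF(millisecond, PrevTime, LogTime) >= 2000;

On 2008 and earlier, a self-join on ROW_NUMBER would replace LAG.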

How to create a query which kill CPU?

Posted: 12 Mar 2013 12:28 AM PDT

Hi

For some tests I need to create a query, or set of queries, which overloads my test machine, which is quite powerful: two 6-core CPUs and 50GB RAM. I have tried creating multiple joins, aggregates, and UDFs, but nothing makes the CPUs suffer. Even linked-server usage, which influenced the CPU to a greater degree than the previous combinations, hasn't loaded the CPUs past 50%. Do you think it's possible to overload the CPUs on such a powerful machine? Do you know some tips to create a really heavy query which can kill the machine? :-D
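
A sketch of a deliberately wasteful query that usually saturates CPU; sys.objects is just a convenient row source, and adding or removing CROSS JOINs tunes the pain (run several sessions in parallel for full effect):

SELECT MAX(CHECKSUM(a.[object_id], b.[object_id], c.[object_id]))
FROM sys.objects a
CROSS JOIN sys.objects b
CROSS JOIN sys.objects c
OPTION (MAXDOP 0);   -- let it parallelize across all cores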

Why do some of these procedure calls have syntax errors?

Posted: 11 Mar 2013 10:49 PM PDT

I am using 2012 (but I think I have had similar issues with 2005 and 2008).

Why is it that the last two procedure calls, in the commented section, have syntax errors?

begin try drop procedure dbo.usp_WMELog end try begin catch end catch
go
create Procedure dbo.usp_WMELog @event varchar(MAX)
as
insert into tblWMELog (event) Values (@event)

/*
declare @t int
declare @s varchar(10)
set @t = 3
exec usp_WMELog @t
set @s = cast (@t as varchar(10))
exec usp_WMELog @s
exec usp_WMELog cast (@t as varchar(10))
exec usp_WMELog convert(varchar(1), @t)
select * from tblWMELog
*/

Thanks.

Single Update Query - Required

Posted: 11 Mar 2013 10:29 PM PDT

Table 1:

Sno  SID   Sname
1    Null  A
2    Null  B
3    Null  C
4    Null  D
5    Null  E

Table 2:

ano  aID  aName
1    55   AA
2    32   BB
3    53   CC
4    10   DD
5    10   EE

Requirement: I need to update Table 1's SID column with the values from Table 2's aID column, in a single update query.

The output data should be:

Sno  SID  Sname
1    55   A
2    32   B
3    53   C
4    10   D
5    10   E
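
A sketch in T-SQL, assuming the rows pair up on Sno = ano:

UPDATE t1
SET    t1.SID = t2.aID
FROM   Table1 AS t1
JOIN   Table2 AS t2
  ON   t2.ano = t1.Sno;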

DateDiff function help

Posted: 11 Mar 2013 08:20 AM PDT

Hi Friends,

I'm new to T-SQL, and in my code I'm using:

datediff(day, [WorkForce_JobPosting].[Job Posting End Date_JP], [WorkForce_WorkOrder].[Work Order End Date_WO]) > 0

The above code, if I use single quotes around the 0, still gives me the same result, i.e.:

datediff(day, [WorkForce_JobPosting].[Job Posting End Date_JP], [WorkForce_WorkOrder].[Work Order End Date_WO]) > '0'

Hence I wanted to know: though the output of DATEDIFF is an integer, why does comparing it against the integer within quotes not fail?

Thanks, Dhananjay
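
What happens here is ordinary implicit conversion: the string literal is converted to int before the comparison, because int outranks varchar in SQL Server's data type precedence. A one-line sketch:

SELECT CASE WHEN 5 > '0' THEN 'the literal ''0'' was implicitly converted to int' END;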

query returns empty row

Posted: 11 Mar 2013 08:31 PM PDT

hi all, please find the following code:

DECLARE @intid as varchar(100) = '''BSEC0002''' + ',' + '''BSEC0001''';
PRINT @intid;
DECLARE @qry as varchar(500) = 'SELECT * FROM mstinstrument where instrumentid IN (' + @intid + ')';
print @qry;
exec (@qry);
SELECT * FROM mstinstrument where instrumentid IN (@intid);

Here exec (@qry) returns two rows, but the final SELECT statement, which looks like exactly the same query as in @qry, returns an empty result. Any clues?

Data Insertion into Access DB using T-SQL

Posted: 11 Mar 2013 07:53 AM PDT

Hi guys, I am trying to insert data from SQL Server into an Access database using T-SQL. I used OPENROWSET to SELECT the data from Access into SQL Server, and it worked fine.

Query:

SELECT * FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0','"ServerName"';'admin';'',TableName);

My table has columns with changing month names. Is there any way I could do some sort of SELECT INTO an Access DB using T-SQL?

Note: this is an automated report; I cannot use the Import/Export Wizard, and there is no SSIS installed on the machine.
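OPENROWSET can be the target of an INSERT as well as a source, so the reverse direction may work along these lines. A sketch: the .accdb path, table and column names are placeholders, and the Access table must already exist, since OPENROWSET cannot create it.

[code="sql"]
-- INSERT through the same ACE provider; path and names are hypothetical.
INSERT INTO OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                       'C:\Reports\MyReport.accdb';'admin';'',
                       AccessTable)
SELECT Col1, Col2
FROM   dbo.SourceTable;
[/code]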

Creating Stored Procedure

Posted: 11 Mar 2013 03:48 PM PDT

Hi, I have the table below:

ID  URL
1   https://google.com
2   https://facebook.com
3   https://yahoo.com
4   https://gmail.com

I am trying to create a procedure where I take the input for the URL column. I am defining 5 parameters for the procedure: url1, url2, url3, url4, proc_msg. I need to do the checks below:

- If a value is null, the existing value should not change (i.e. the table is not updated). I was using the NVL2 function for this.
- If a value is provided, check whether it starts with 'https://'. If it is valid, update the table; else display the message 'invalid url'.
- The user may not input all values at one time; he may update only 1 URL, or maybe more, at a particular time.
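The NVL2 mention suggests Oracle, but in T-SQL terms the same null-preserving, prefix-validated update could look like this. A sketch: the procedure and table names are hypothetical, only two of the four url parameters are shown, and the ID-to-parameter mapping is an assumption.

[code="sql"]
-- A sketch: COALESCE keeps the old value when the parameter is NULL.
CREATE PROCEDURE dbo.usp_UpdateUrls   -- hypothetical name
    @url1     varchar(200) = NULL,
    @url2     varchar(200) = NULL,
    @proc_msg varchar(100) OUTPUT
AS
BEGIN
    IF (@url1 IS NOT NULL AND @url1 NOT LIKE 'https://%')
    OR (@url2 IS NOT NULL AND @url2 NOT LIKE 'https://%')
    BEGIN
        SET @proc_msg = 'invalid url';
        RETURN;
    END;

    UPDATE t
    SET    URL = CASE t.ID WHEN 1 THEN COALESCE(@url1, t.URL)
                           WHEN 2 THEN COALESCE(@url2, t.URL)
                 END
    FROM   dbo.UrlTable AS t             -- hypothetical table
    WHERE  t.ID IN (1, 2);

    SET @proc_msg = 'ok';
END;
[/code]

url3 and url4 would follow the same pattern.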

[SQL Server 2008 issues] Encryption doubts

[SQL Server 2008 issues] Encryption doubts


Encryption doubts

Posted: 11 Mar 2013 07:21 PM PDT

I want to encrypt my database. Which method is simple and the best? If I use a symmetric key, do I encrypt it by using the database master key (via a certificate) or by a password? Which is safer here?
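For reference, the common certificate-protected key hierarchy for cell-level encryption looks like this; a sketch with placeholder names (TDE, which encrypts the whole database transparently, is the other option on Enterprise edition):

[code="sql"]
-- A sketch of the usual hierarchy; names and password are placeholders.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Str0ngP@ssw0rd!';
CREATE CERTIFICATE MyCert WITH SUBJECT = 'Data encryption certificate';
CREATE SYMMETRIC KEY MyKey
    WITH ALGORITHM = AES_256
    ENCRYPTION BY CERTIFICATE MyCert;

OPEN SYMMETRIC KEY MyKey DECRYPTION BY CERTIFICATE MyCert;
SELECT EncryptByKey(Key_GUID('MyKey'), 'secret value');  -- varbinary ciphertext
CLOSE SYMMETRIC KEY MyKey;
[/code]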

Query optimization

Posted: 07 Mar 2013 04:09 PM PST

Hello friends, I have a table that contains millions of records. When I fire a query to get records by id, the query reads millions of rows, so it takes 2-3 seconds. How can I optimize the query? My SQL query is:

SELECT *
FROM Table1
WHERE intId = @intId
  AND bintTimeStamp >= @bintStartTimeStamp
  AND bintTimeStamp < @bintEndTimeStamp
ORDER BY bintTimeStamp
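A composite index with the equality column first and the range/order column second would typically turn this into a seek and remove the sort; a sketch (with SELECT *, key lookups remain, so INCLUDE the frequently needed columns, or select fewer, to avoid them):

[code="sql"]
-- Equality column first, range column second.
CREATE NONCLUSTERED INDEX IX_Table1_intId_bintTimeStamp
    ON dbo.Table1 (intId, bintTimeStamp);
[/code]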

Larger Date Range taking MUCH less time

Posted: 10 Mar 2013 09:32 PM PDT

Hi, I'm executing the T-SQL below on a table with about 700,000 records. I am clearing the cache so it does not affect the queries. When the date range is between '2013-01-01' and '2013-01-31' (or even wider), it takes less than one second to return the data. But when the date range is between '2013-01-09' and '2013-01-31' it takes ages (4 min). Can anyone give me an explanation of why this is happening? Thanks, JP

[quote]
DBCC DROPCLEANBUFFERS
go
DBCC FREEPROCCACHE
go
Select STLStkCode From STranLine
Where STranLine.STLTranDate Between '2013-01-09' and '2013-01-31'
[/quote]

inserting data from one table to another and want to create primary key at same time

Posted: 11 Mar 2013 04:36 PM PDT

I am creating a table by using an INSERT INTO from another table, then I am deleting any duplicate rows from that table, then I try to create a primary key on column 1 (SN_Original), but I get an error saying:

Msg 8111, Level 16, State 1, Line 27
Cannot define PRIMARY KEY constraint on nullable column in table 'lookuptable'

Is there a way round this? Here is my code:

Drop table lookuptable

select dbo.bigtable.Software_Name_Original as SN_Original,
       dbo.bigtable.Software_Name_Raw as SN_New
into lookuptable
from dbo.BigTable
order by dbo.bigtable.Software_Name_Raw

--Delete duplicate rows from lookup table based on SN_NEW (software_name_new)
DELETE f
FROM (select row_number() over (partition by SN_NEW order by SN_NEW) as rn
      from lookuptable) AS f
WHERE rn > 1

ALTER TABLE lookuptable ADD CONSTRAINT SN_OriginalPK
PRIMARY KEY CLUSTERED (SN_Original);
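SELECT ... INTO copies the nullability of the source column, which is why the PRIMARY KEY fails; the usual fix is to make the column NOT NULL first. A sketch (the varchar length is an assumption and must match the existing column definition):

[code="sql"]
-- Fails if any NULLs remain in SN_Original, so clean those out first.
ALTER TABLE lookuptable
    ALTER COLUMN SN_Original varchar(255) NOT NULL;

ALTER TABLE lookuptable
    ADD CONSTRAINT SN_OriginalPK PRIMARY KEY CLUSTERED (SN_Original);
[/code]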

Inserting currency symbols in money datatypes

Posted: 07 Mar 2013 03:21 PM PST

Hi, I heard we can insert currency symbols like $ into money and smallmoney datatypes. While trying it, I realized we can insert $ into a money datatype, but it will not display the symbol when viewing the table content. Am I missing anything, or is that just how it works? If so, what is the need of the money datatype, given that we could use decimal or nvarchar instead? Thank you
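That is indeed how it works: money stores only the numeric value; currency symbols are accepted in input literals but never stored, and formatting is a presentation concern. A quick sketch:

[code="sql"]
-- The $ is parsed on input but not stored; style 1 adds thousands separators.
DECLARE @m money = $1234.56;
SELECT @m;                                  -- 1234.56
SELECT '$' + CONVERT(varchar(20), @m, 1);   -- $1,234.56
[/code]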

maximum rows without duplication

Posted: 11 Mar 2013 08:26 AM PDT

Consider this table:

declare @a table (a int, b int, c char(1))
insert into @a
select 1,17,'a' union all
select 1,15,'b' union all
select 2,15,'c' union all
select 3,14,'d' union all
select 4,13,'e' union all
select 3,13,'f' union all
select 5,12,'g' union all
select 6,12,'h' union all
select 6,11,'i' union all
select 7,10,'j' union all
select 8,9,'k' union all
select 8,10,'l'

My goal is to get the maximum number of rows in my result without any duplicate values within columns a or b across all rows. In this case I should get 8 rows. I tried the two queries below, but I only get 6 or 7 rows respectively, because an eliminated row can cause a qualifying row to be eliminated when they share a value in a particular column. Thanks!!

SELECT c
FROM (SELECT ROW_NUMBER() OVER (PARTITION BY a ORDER BY c asc) Rank_a,
             ROW_NUMBER() OVER (PARTITION BY b ORDER BY c asc) Rank_b,
             c
      FROM @a) t
WHERE rank_a = 1
  AND rank_b = 1

SELECT c
FROM (SELECT ROW_NUMBER() OVER (PARTITION BY b ORDER BY c asc) Rank_b, c
      FROM (SELECT ROW_NUMBER() OVER (PARTITION BY a ORDER BY c asc) Rank_a, b, c
            FROM @a) t
      WHERE rank_a = 1) u
WHERE rank_b = 1

my backup maintenance plan is failing

Posted: 10 Mar 2013 08:48 PM PDT

Hi friends, my backup plan is failing. From the error logs I got this information; can anyone throw some light on it?

3/6/2013

Executing the query "EXECUTE master.dbo.xp_create_subdir N'z:\\Core'..." failed with the following error: "xp_create_subdir() returned error 3, 'The system cannot find the path specified.'". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.

Executing the query "BACKUP DATABASE [Core] TO DISK = N'z:\\Core\\..." failed with the following error: "Cannot open backup device 'z:\\Core\\Core_backup_2013_03_06_021517_6310479.bak'. Operating system error 3 (The system cannot find the path specified.). BACKUP DATABASE is terminating abnormally.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.

Executing the query "BACKUP DATABASE [ReportServer$SRVR1] TO DISK = N'..." failed with the following error: "Cannot open backup device 'z:\\ReportServer$SRVR1\\ReportServer$SRVR1_backup_2013_03_06_021517_7770625.bak'. Operating system error 3 (The system cannot find the path specified.). BACKUP DATABASE is terminating abnormally.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.

Executing the query "BACKUP DATABASE [SharePoint_Config_Dev] TO DISK =..." failed with the following error: "Cannot open backup device 'z:\\SharePoint_Config_Dev\\SharePoint_Config_Dev_backup_2013_03_06_021517_7990647.bak'. Operating system error 3 (The system cannot find the path specified.). BACKUP DATABASE is terminating abnormally.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.

Find month totals between date columns

Posted: 11 Mar 2013 03:29 AM PDT

I have a sample view with some dates. How would you find the number of items open per month? Say between OpenDate and CloseDate, I want to find how many were open for January, February, etc. Here is a sample table with the data:

[code="sql"]
CREATE TABLE [dbo].[TestDate](
    [ItemTitle] [nvarchar](50) NULL,
    [ItemAttachAssignDate] [date] NULL,
    [ItemDetachConcludeDate] [date] NULL,
    [Status] [nvarchar](50) NULL,
    [FullName] [nvarchar](100) NULL,
    [OpenDate] [date] NULL,
    [CloseDate] [date] NULL
) ON [PRIMARY]
GO
[/code]

[code="sql"]
INSERT INTO [TestDate]([ItemAttachAssignDate], [ItemDetachConcludeDate], [Status], [FullName], [OpenDate], [CloseDate]) VALUES('2013-02-18 00:00:00', '2013-02-19 00:00:00', 'Done', 'Jeff Hunter ', '2013-02-18 00:00:00', '2013-02-19 00:00:00');
INSERT INTO [TestDate]([ItemAttachAssignDate], [ItemDetachConcludeDate], [Status], [FullName], [OpenDate], [CloseDate]) VALUES('2012-10-15 00:00:00', '2013-02-05 00:00:00', 'Done', 'Tommy Johnson', '2013-01-22 00:00:00', '2013-01-28 00:00:00');
INSERT INTO [TestDate]([ItemAttachAssignDate], [ItemDetachConcludeDate], [Status], [FullName], [OpenDate], [CloseDate]) VALUES('2012-10-15 00:00:00', '2013-02-05 00:00:00', 'Done', 'Jeff Haynes', '2012-10-17 00:00:00', '2013-02-01 00:00:00');
INSERT INTO [TestDate]([ItemAttachAssignDate], [ItemDetachConcludeDate], [Status], [FullName], [OpenDate], [CloseDate]) VALUES('2012-10-15 00:00:00', '2013-02-05 00:00:00', 'Done', 'Nancy Belkin', '2012-10-28 00:00:00', '2012-12-14 00:00:00');
INSERT INTO [TestDate]([ItemAttachAssignDate], [ItemDetachConcludeDate], [Status], [FullName], [OpenDate], [CloseDate]) VALUES('2012-10-15 00:00:00', '2013-02-05 00:00:00', 'Done', 'Rudolph Porche', '2013-01-16 00:00:00', '2013-02-02 00:00:00');
INSERT INTO [TestDate]([ItemAttachAssignDate], [ItemDetachConcludeDate], [Status], [FullName], [OpenDate], [CloseDate]) VALUES('2012-10-20 00:00:00', '2013-02-07 00:00:00', 'Done', 'Pat Franks', '2013-01-20 00:00:00', '2013-01-25 00:00:00');
INSERT INTO [TestDate]([ItemAttachAssignDate], [ItemDetachConcludeDate], [Status], [FullName], [OpenDate], [CloseDate]) VALUES('2012-10-20 00:00:00', '2013-02-07 00:00:00', 'Done', 'Leslie Jordan', '2012-11-25 00:00:00', '2012-12-04 00:00:00');
INSERT INTO [TestDate]([ItemAttachAssignDate], [ItemDetachConcludeDate], [Status], [FullName], [OpenDate], [CloseDate]) VALUES('2012-10-20 00:00:00', '2013-02-07 00:00:00', 'Done', 'Fred Haney', '2012-10-20 00:00:00', '2013-02-04 00:00:00');
INSERT INTO [TestDate]([ItemAttachAssignDate], [ItemDetachConcludeDate], [Status], [FullName], [OpenDate], [CloseDate]) VALUES('2012-10-20 00:00:00', '2013-02-07 00:00:00', 'Done', 'Henry Hanks', '2012-10-31 00:00:00', '2012-11-15 00:00:00');
[/code]
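One common shape for the answer, as a sketch: it assumes an item counts as open in a month when its [OpenDate, CloseDate] interval overlaps that month, and it generates the month list with a small recursive CTE (the date range shown matches the sample data).

[code="sql"]
;WITH Months AS
(
    SELECT CAST('2012-10-01' AS date) AS MonthStart
    UNION ALL
    SELECT DATEADD(month, 1, MonthStart)
    FROM Months
    WHERE MonthStart < '2013-02-01'
)
SELECT m.MonthStart,
       COUNT(t.OpenDate) AS OpenItems
FROM   Months AS m
LEFT JOIN dbo.TestDate AS t
       ON t.OpenDate  <  DATEADD(month, 1, m.MonthStart)  -- opened before month end
      AND t.CloseDate >= m.MonthStart                     -- still open at month start
GROUP BY m.MonthStart
ORDER BY m.MonthStart;
[/code]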

Is this is the Best Practice to select E: for the SQL Server root directory?

Posted: 11 Mar 2013 05:05 AM PDT

Hi experts, is it best practice to select E: (or any other drive that isn't C:) for the SQL Server root directory? Please let me know the pros/cons and the practices to be followed. Thanks a lot in advance.

How to view stats used in query?

Posted: 11 Mar 2013 09:23 AM PDT

Hello all, I could use some help trying to better understand how stats are used in this situation.

I have a query that runs relatively poorly (14 seconds); however, the execution plan is actually quite good. It uses an index and performs a seek; no problems there. The overall plan is decent, with some areas for improvement. Anyway, a developer of mine added an index and reduced the run time quite significantly (it dropped to 3 seconds). Yet, after reviewing index utilization, I found that the new index was not being touched (both DMVs and the execution plan tell the same story); instead the query was still using the original index. So I then created a statistic which matched his index, and I was able to achieve the same performance gain, and yet my execution plan is 100% the same.

With the above said, I'm curious to understand a couple of things:
1) How, if at all, can you view which stats are touched during query execution?
2) Why would the optimizer not use the statistic already on the index it used within the execution plan? (Yes, I did update stats.)
3) What really determines how "auto create stats" takes effect?
Thanks
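On question 1: one technique sometimes cited is a pair of undocumented trace flags that print which statistics the optimizer loads while compiling a plan. Undocumented, so strictly a test-server exercise; a sketch (the SELECT is just a stand-in for the query under investigation):

[code="sql"]
-- Undocumented flags: 9292 lists statistics considered "interesting",
-- 9204 lists those actually loaded. QUERYTRACEON requires sysadmin.
DBCC TRACEON (3604);   -- route DBCC output to the client

SELECT COUNT(*)
FROM sys.objects
WHERE object_id > 100
OPTION (RECOMPILE, QUERYTRACEON 9292, QUERYTRACEON 9204);
[/code]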

SQL query 4 weeks 3 business days

Posted: 11 Mar 2013 02:19 AM PDT

Hello, I need to write a query that returns a result set from within the past 4 weeks and 3 business days. For example, the result set for today would return everything from Feb 6th forward (the date of this post was March 11). I have the 4-week part, but I am not sure how to include the 3 business days. Any help would be appreciated.

select dateadd(week,-4,getdate());

-Dave
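Stepping back three further business days means skipping weekends (holidays would need a calendar table); a sketch that walks back day by day. With today = March 11, 2013, it lands on Feb 6, matching the example.

[code="sql"]
-- SET DATEFIRST makes weekday numbers deterministic (1 = Monday .. 7 = Sunday).
SET DATEFIRST 1;
DECLARE @d datetime = DATEADD(week, -4, GETDATE());
DECLARE @busDays int = 3;

WHILE @busDays > 0
BEGIN
    SET @d = DATEADD(day, -1, @d);
    IF DATEPART(weekday, @d) <= 5   -- Monday..Friday
        SET @busDays = @busDays - 1;
END;

SELECT @d AS CutoffDate;   -- then: WHERE SomeDateColumn >= @d
[/code]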

Converting datetime format

Posted: 11 Mar 2013 08:18 AM PDT

I have the following table:

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[Tbl1](
    [Old_Date] [datetime] NULL,
    [New_Date] [datetime] NULL
) ON [PRIMARY]

INSERT INTO TBL1(Old_Date) values('2012-12-31')
INSERT INTO TBL1(Old_Date) values('2013-01-01')
INSERT INTO TBL1(Old_Date) values('2013-01-02')
INSERT INTO TBL1(Old_Date) values('2013-01-03')

Old_Date gets its values from an SSIS package in the date format 'YYYY-MM-DD'. Using T-SQL, I am trying to convert Old_Date into the New_Date column with the format 'MM-DD-YYYY', but for some reason it is not working out. I tried the following statement:

SELECT OLD_DATE, CONVERT(DATETIME, OLD_DATE, 110) AS NEW_DATE FROM TBL1

But in SQL Server I am seeing the same YYYY-MM-DD format in New_Date instead of MM-DD-YYYY. Can anyone help me out here? Thank you!!
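The underlying point: a datetime value carries no display format at all, so converting datetime to datetime with a style is a no-op; the style argument only applies when converting to or from strings. A sketch:

[code="sql"]
-- Convert to varchar for MM-DD-YYYY display; style 110 = mm-dd-yyyy.
SELECT Old_Date,
       CONVERT(varchar(10), Old_Date, 110) AS New_Date_Display  -- e.g. 12-31-2012
FROM   dbo.Tbl1;
[/code]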

To not RBAR

Posted: 11 Mar 2013 03:31 AM PDT

I have a table of taxonomic entries. It is for paleontology, where the entire taxonomic structure is often not known. Someone may find a clearly new type of brachiopod (phylum) and name it, in honor of his favorite musician, Jamesus (genus) hetfieldii (species), but be unable to determine any of the intermediate taxonomic levels (class, order, family); often the various auxiliary levels (sub-order, super-family) don't even exist. I needed a method to allow any taxonomic level to 'bind upwards' to any level above it. The table structure I created looks like this:

[code="sql"]
CREATE TABLE [dbo].[Taxonomy](
    [TaxonAutoID] [int] IDENTITY(1,1) NOT NULL,
    [TaxonName] [varchar](100) NOT NULL,
    [TaxonLevelID] [int] NOT NULL,
    [NextHigherTaxonAutoID] [int] NULL,
    [AltTexts] [varchar](4000) NULL,
    CONSTRAINT [PK_Taxonomy] PRIMARY KEY CLUSTERED …
[/code]

where each entry has the identity column of the next higher taxonomic unit (whatever level that may be) as one of its properties. One of the things I need to do with this table is retrieve the entire string of entries, starting at some arbitrary point and proceeding upwards until I reach the top level (kingdom).

Using RBAR, it's simple: a loop, using the NextHigherTaxonAutoID field of each entry as the search key for the next iteration, until I hit the top level. The absolute maximum possible number of levels is 21, and no entry will have anywhere near that, so it's probably not a huge performance issue, but some queries will ask for many records at once, and this has to be assembled for each record.

Is there a way to do this assembly using a set-based query? It seems to me that there ought to be a way, using FOR XML, CROSS APPLY or some such, but my skills in the more esoteric regions of T-SQL are still somewhat lacking.
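A recursive CTE is the usual set-based tool for walking a parent chain like this; a sketch, seeded with one hypothetical starting ID (it can just as easily be seeded with a whole set of rows to assemble many chains at once):

[code="sql"]
DECLARE @StartID int = 42;   -- hypothetical starting entry

;WITH Chain AS
(
    SELECT TaxonAutoID, TaxonName, TaxonLevelID, NextHigherTaxonAutoID,
           0 AS Depth
    FROM   dbo.Taxonomy
    WHERE  TaxonAutoID = @StartID

    UNION ALL

    SELECT t.TaxonAutoID, t.TaxonName, t.TaxonLevelID, t.NextHigherTaxonAutoID,
           c.Depth + 1
    FROM   dbo.Taxonomy AS t
    INNER JOIN Chain AS c
            ON t.TaxonAutoID = c.NextHigherTaxonAutoID
)
SELECT TaxonAutoID, TaxonName, TaxonLevelID, Depth
FROM   Chain
OPTION (MAXRECURSION 25);   -- comfortably above the stated 21-level maximum
[/code]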

Trace Flag 4044

Posted: 11 Mar 2013 05:36 AM PDT

4044 - SA account can be unlocked by rebooting the server with this trace flag. If the sa (or sso_role) password is lost, add this to your RUN_server file. This will generate a new password when the server is started.

[url]http://www.sqlservercentral.com/articles/trace+flags/70131/[/url]
[url]http://social.technet.microsoft.com/wiki/contents/articles/13105.trace-flags-in-sql-server.aspx[/url]

I've seen this trace flag listed on a couple of web sites but cannot get it to work. I've tried adding it as a startup parameter in Configuration Manager. SQL starts, but there's no output, and I can't find the RUN_server file anywhere. All the searching I've done points to Sybase (it is also a Sybase trace flag). I was just interested to test it out; it could be handy when users install a local instance and forget their password. Has anyone else implemented this successfully?

Thanks, Jeff

OUTPUT variables in Profiler

Posted: 11 Mar 2013 01:41 AM PDT

The following is what SQL Profiler shows for a stored procedure call:

declare @p1 int
set @p1=20611159
exec dbo.Set_Phone_6 @p1 output,NULL,NULL,NULL,NULL
select @p1
go

In the sproc I have this logic: if @p1 is null, do an insert, set the value of @p1 to the newly created identity, and exit. If @p1 is not null, it's an update; do the update and exit. My developer is actually issuing the call to the sproc passing @p1 as NULL, and the result of his call is an INSERT. So why is Profiler showing "set @p1=20611159"? Thanks!

Getting the files stored in a database table?

Posted: 11 Mar 2013 07:26 AM PDT

Hi all, I am struggling to find a solution / guidelines / third-party or internal database tools that would allow me to extract files stored in a database table. These files have multiple formats: PDF, Word, Excel, images etc. Please advise me; I have limited or no knowledge of Visual Studio. Thanks, Yasar

T-SQL Code Review: Looping Record Deletion w. Date Validation

Posted: 11 Mar 2013 03:41 AM PDT

Hi there, I put this script together based on some code I found in a few other threads on this forum after doing a search. This script is for a development environment, but after it is thoroughly tested and vetted it may be deployed to a prod environment. I am simply pruning old records from some staging tables. Any feedback or words of wisdom would be wonderful. Thanks!

[code="sql"]
DECLARE @DeleteDate DATETIME, @RowsToDelete BIGINT

-- Calculate the Min. Date plus 1 which will be the first range of dates to delete
SET @DeleteDate = ( SELECT DATEADD(DAY, 1, MIN([cslpnle].[DateCreated])) AS [DeleteDate]
                    FROM [dbo].[RawVehicle] (NOLOCK) AS cslpnle )

-- Delete rows while data older than 5 days
PRINT @DeleteDate
WHILE DATEDIFF(DAY, @DeleteDate, GETUTCDATE()) > 1
BEGIN
    SET @RowsToDelete = 1

    --Purge RawVehicles
    WHILE @RowsToDelete > 0
    BEGIN
        DELETE TOP ( 200 ) [dbo].[RawVehicleOption]
        FROM [dbo].[RawVehicleOption]
        INNER JOIN [dbo].[RawVehicle] AS rv
                ON [dbo].[RawVehicleOption].[RawVehicleId] = [rv].[Id]
        WHERE [rv].[DateCreated] <= @DeleteDate

        DELETE TOP ( 200 ) [dbo].[RawInventoryPhoto]
        FROM [dbo].[RawInventoryPhoto] AS rip
        INNER JOIN [dbo].[RawInventoryVehicle] AS riv
                ON rip.[RawInventoryVehicleId] = [riv].[Id]
        INNER JOIN [dbo].[RawVehicle] AS rv
                ON [riv].[RawVehicleId] = [rv].[Id]
        WHERE [rv].[DateCreated] <= @DeleteDate

        DELETE TOP ( 200 ) [dbo].[RawDealerLotToInventoryVehicle]
        FROM [dbo].[RawDealerLotToInventoryVehicle] AS rdltiv
        INNER JOIN [dbo].[RawInventoryVehicle] AS riv
                ON [rdltiv].[RawInventoryVehicleId] = [riv].[Id]
        INNER JOIN [dbo].[RawVehicle] AS rv
                ON [riv].[RawVehicleId] = [rv].[Id]
        WHERE [rv].[DateCreated] <= @DeleteDate

        DELETE TOP ( 200 ) [dbo].[RawInventoryVehicle]
        FROM [dbo].[RawInventoryVehicle] AS riv
        INNER JOIN [dbo].[RawVehicle] AS rv
                ON [riv].[RawVehicleId] = [rv].[Id]
        WHERE [rv].[DateCreated] <= @DeleteDate

        DELETE TOP ( 200 ) FROM [dbo].[RawVehicle]
        WHERE [DateCreated] <= @DeleteDate

        SET @RowsToDelete = @@ROWCOUNT
    END

    SET @DeleteDate = DATEADD(DAY, 1, @DeleteDate)
END
[/code]

Permissions for DTEXEC

Posted: 11 Mar 2013 07:07 AM PDT

Any suggestions are welcome. I am calling DTEXEC like this, in the context of database A:

'"E:\Program Files (x86)\Microsoft SQL Server\100\DTS\Binn\dtexec.exe Mypackage"'

The package does execute. The source query within the package calls a procedure in database A, and the connection manager within the package is within the context of database A. The package executes EXEC MySchema.MyProcA. Within procedure MySchema.MyProcA a separate procedure is called; this procedure is in database B. So within procedure MySchema.MyProcA I make my cross-database execution call:

EXEC databaseB.schema.MyProcB

The error message:

An OLE DB record is available. Source: "Microsoft SQL Server Native Client 10.0" Hresult: 0x80040E14 Description: "schema.MyProcB: NT user not authorized".

I assume the account I need to give permissions to is the SQL Server service account. Is this correct?

Index confusion

Posted: 11 Mar 2013 05:29 AM PDT

Hi everyone. I have a table with a few million rows in it. There is a process, as part of a stored procedure, that checks if data exists. It checks by selecting TOP 1 from the table prior to moving on. The developer told me that it gets slower and slower as more data is added to the table, so we looked at ways to speed things up. They are running a query like this:

SELECT TOP 1 * FROM TABLEA WHERE COLUMN4 = 'xxxx' and COLUMN6 = 'xxxx' and COLUMN9 = 'xxxx'

There were no good indexes that would help us, so I created a nonclustered index that includes COLUMN4, COLUMN6 and COLUMN9. After creating this index, the execution plan shows that the query is using the new index. My question is: why does the above query have the same execution plan as the query below?

SELECT TOP 1 COLUMN4 FROM TABLEA WHERE COLUMN4 = 'xxxx' and COLUMN6 = 'xxxx' and COLUMN9 = 'xxxx'

The second query can get all of its output from the index and should not need to visit the table at all, right? The top query is requesting all columns and should need to get the unindexed columns from the actual table, right? Why would the two selects have the same execution plan? Thanks for helping me understand.

Efficient Method for a stored Procedure

Posted: 11 Mar 2013 05:05 AM PDT

Hello, I am a SQL Server newbie. I am writing a stored procedure which has one input parameter. I have written it as shown below, using two temporary tables, and I wanted to know if there is a better, more efficient way of writing it.

-- I have simplified the stored procedure
-- StoredProcedureX @Id int (Id sent from another stored procedure)

CREATE TABLE #TempTable1 (Name varchar(100), Name1 varchar(100), Name2 varchar(100), Marks int)

INSERT INTO #TempTable1
SELECT Name, Name1, Name2, Marks FROM TableA WHERE Id = @Id

CREATE TABLE #TempTable2 (Address varchar(100), City varchar(100))

INSERT INTO #TempTable2
SELECT Address, City
FROM TableA
WHERE Name  = (SELECT Name  FROM #TempTable1)
  AND Name1 = (SELECT Name1 FROM #TempTable1)
  AND Name2 = (SELECT Name2 FROM #TempTable1)
  AND MARKS > 100

--- Some other processing on #TempTable2
-- drop tables #TempTable1 & #TempTable2
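For what it's worth, the staging step can often be eliminated with a self-join; a sketch, assuming the first temp table exists only to correlate TableA with itself on the three name columns:

[code="sql"]
-- A sketch: correlate TableA with itself instead of staging through temp tables.
SELECT b.Address, b.City
INTO   #TempTable2
FROM   TableA AS a
INNER JOIN TableA AS b
        ON b.Name = a.Name AND b.Name1 = a.Name1 AND b.Name2 = a.Name2
WHERE  a.Id = @Id
  AND  b.Marks > 100;
[/code]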

Cross Apply Executing Too Many Times

Posted: 11 Mar 2013 06:02 AM PDT

I have a rather complex set of queries that are mostly embedded in table-valued functions and call one another in chains. I've had a number of very peculiar performance issues that seem to come and go, but that don't seem to be directly related to the queries themselves (heavily indexed tables). Currently I'm examining a particular CROSS APPLY that seems to be behaving strangely. I'm simplifying a bit, but the two following queries return the "same" result, just in different ways; the main difference is that the first one has an additional column hardcoded instead of relying on the cross apply to fill it in.

[quote]
select 'LA13G12ST-05', 10000, pidcm.met, pidcm.quantityFound
from table_valued_function('LA13G12ST-05', [b]10000[/b], '00000630', 1, 'S0000013122', 'LA2013') pidcm
union all
select 'LA13G12ST-05', 20000, pidcm.met, pidcm.quantityFound
from table_valued_function('LA13G12ST-05', [b]20000[/b], '00000630', 1, 'S0000013122', 'LA2013') pidcm
[/quote]

[quote]
select 'LA13G12ST-05', pidc.[Line No_], pidcm.met, pidcm.quantityFound
from (select distinct pidc.[Line No_]
      from table1 pid
      join table2 pidc on pid.[Pidc Code] = pidc.Code
      where pid.Code = 'LA2013' and pid.[Line No_] = 10000) pidc
cross apply table_valued_function('LA13G12ST-05', [b]pidc.[Line No_][/b], '00000630', 1, 'S0000013122', 'LA2013') pidcm
[/quote]

Yes, the subquery does return exactly two rows, 10000 and 20000. So when I run both queries, I get the expected (and identical) results from both:

[quote]
LA13G12ST-05 10000 0 3
LA13G12ST-05 20000 0 0
LA13G12ST-05 10000 0 3
LA13G12ST-05 20000 0 0
[/quote]

However, the statistics are very different:

[quote]
SQL Server parse and compile time: CPU time = 0 ms, elapsed time = 3 ms.
(2 row(s) affected)
Table '#71DCD509'. [b]Scan count 1, logical reads 1[/b], physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table '#70E8B0D0'. Scan count 1, logical reads 1, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times: CPU time = 813 ms, [b]elapsed time = 809 ms[/b].

(2 row(s) affected)
Table '#71DCD509'. [b]Scan count 73, logical reads 78[/b], physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Table2'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Table1'. Scan count 0, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times: CPU time = 28859 ms, [b]elapsed time = 28898 ms[/b].
[/quote]

Curious about why the latter required 73 times as many scans, I ran the SQL Profiler and found that the table-valued function runs 72 times in the cross apply example. It may be purely a coincidence, but there happen to be 72 rows in the subquery if you remove the where clause from it. So it looks to me as if SQL Server is unrolling my subquery, moving its "where" to the outer query, executing the cross apply, and then filtering. Unfortunately, this ends up being much more expensive. So, is there some way to convince the query optimizer to do otherwise?

I had expected that adding the "distinct" to the subquery (not technically needed) would help the optimizer know it should do the inner part first, but it didn't. Being from an Oracle background, the first thing that comes to mind is to put the inner query in a cursor and loop through its results, calling the function for each row, so I can control the number of function calls. Such behavior seems to be frowned upon in this forum, and disparagingly referred to as RBAR. So, I'm open to alternatives.

I'm sure someone will ask whether it is possible to avoid table-valued functions and roll all the logic into a single SQL statement. It would be very difficult (perhaps impossible) because of two things:
1) Analytic functions aren't fully available in SQL Server 2008 R2 (and I can't easily upgrade to 2012); specifically, I need to limit something by a running total, and the "rows between" clause isn't available yet on the analytic sum function.
2) In several cases, completely different queries need to run depending upon some other setting or function result (a case statement runs different queries). It might be possible to join all the possible results and discard the ones that don't apply, but I'm not sure that would be good for performance anyway.

Edit: I found a workaround, sort of; I can select the subquery into a temporary table and then cross apply against that temporary table instead of the subquery. That seems to solve the particular problem, but it also forces this function to be a multi-statement table-valued function, whereas before it could have been in-line. Since it is calling a multi-statement TVF (one with a gigantic case), I suppose the distinction is largely irrelevant. I don't suppose there is some optimizer hint that says "this part is expensive"? I tried FORCE ORDER, but it didn't help (maybe it applies only to joins, not applies).

modifying a SP

Posted: 07 Mar 2013 08:03 AM PST

We already have an SP which selects data from the db. I need to modify the SP so that it selects one more column, called Supervisor. The code to get the supervisor data is:

SELECT * INTO #StaffNames
FROM (select p.personid,
             p.LastName + N', ' + p.FirstName AS Supervisor,
             s.StaffID
      from Staff as s
      inner join people as p on s.PersonID = p.personid) A

select peoplelinkid, staffassigned, Supervisor
from clientinfo as ci
LEFT JOIN #StaffNames as s ON ci.StaffAssigned = s.StaffID
where ci.staffassigned <> 0

drop table #StaffNames

This code works perfectly and I am able to get the desired result. Now the problem is that I am not able to figure out how to put the above code into the SP which selects the data. Thanks in advance.
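Structurally, the whole batch can simply live inside the procedure body; a sketch, with the procedure name as a placeholder and the existing SP's other columns omitted:

[code="sql"]
-- A sketch: the temp-table logic sits inside the procedure body as-is.
ALTER PROCEDURE dbo.MyExistingProc   -- hypothetical name
AS
BEGIN
    SET NOCOUNT ON;

    SELECT p.personid,
           p.LastName + N', ' + p.FirstName AS Supervisor,
           s.StaffID
    INTO   #StaffNames
    FROM   Staff AS s
    INNER JOIN people AS p ON s.PersonID = p.personid;

    SELECT ci.peoplelinkid, ci.staffassigned, s.Supervisor
    FROM   clientinfo AS ci
    LEFT JOIN #StaffNames AS s ON ci.StaffAssigned = s.StaffID
    WHERE  ci.staffassigned <> 0;
END;
[/code]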

After Error 823 Database stayed on "Restoration" and Can't Restore Can't Drop it

Posted: 11 Mar 2013 03:23 AM PDT

Hi all, we had a connection problem for some hours yesterday. All databases re-attached automatically except one (DBNAME), which stayed in "restoring" mode. So I restored it from backup as DBNAME_RESTORE; I really don't need the stuck database any more. What I did:

DROP DATABASE DBNAME
---Result: successful---

sp_renamedb 'DBNAME_RESTORE','DBNAME'
---Result: error, "The database DBNAME already exists. Specify a unique database name."---

When I have a look, I see that the database has not been dropped. I try to restore it:

RESTORE DATABASE DBNAME WITH RECOVERY

Msg 823, Level 24, State 2, Line 1
The operating system returned error 21 (The device is not ready.) to SQL Server during a read at offset 0x00000000012000 in file 'C:\DBNAME.mdf'. You may find more details in the additional messages in the SQL Server error log and the system event log. This is a severe system-level error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.

I can't execute DBCC CHECKDB (DBNAME) because it is in a restoring state... Do you have an idea how to drop this database? Thanks in advance.

Isolation Level

Posted: 11 Mar 2013 01:59 AM PDT

Hello there, I have a db in standby mode that is restored from a primary one every 15 minutes. Users are logging on to the standby db to select information using the NOLOCK hint, but the db still takes locks, preventing the restore from running afterwards. What do you think I should do? Should I change the isolation level to snapshot? Will that help? Thank you in advance. Riri

Table variable subquery as column

Posted: 07 Mar 2013 04:15 AM PST

Hi all, I'm trying to use a value from a table variable's column that matches a passed-in variable as a subquery for one of the main query's columns. I'm trying the below, but the values for @Region and @Queue are showing up as blank in the SSRS report fields which pull from them:

ALTER PROCEDURE [dbo].[udp_WorkQueuesReport]
    -- Add the parameters for the stored procedure here
    @FromDt as date,
    @ThruDt as date,
    @Region as varchar(max),
    @Queue as varchar(max)
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    Declare @RegionTbl as table(RegionName varchar(50))
    Declare @QueueTbl as table(QueueName varchar(50))

    Insert @RegionTbl select Value from hpexprpt.dbo.split(@Region,',')
    Insert @QueueTbl select Value from hpexprpt.dbo.split(@Queue,',')

    select users.last_name + ',' + users.first_name as [User ID]
         , (Select RegionName from @RegionTbl where RegionName = @Region) as Region
         , (Select QueueName from @QueueTbl where QueueName = @Queue) as Queue
         ...
    from hpexpprod.dbo.work_items
    join ...
    where ...

Any pointers would be greatly appreciated... Thanks in advance!

How to update multiple tables in sql server 2008 ?

Posted: 07 Mar 2013 10:58 PM PST

Hi all, I want to update multiple columns from multiple tables in a single UPDATE query... I just want to do something like the query below:

UPDATE Table1, Table2
SET Table1.Column1 = 'one',
    Table2.Column2 = 'two'
FROM Table1 T1, Table2 T2
WHERE T1.id = T2.id
  AND T1.id = 'id1'

Does SQL Server 2008 provide any mechanism to do so? If it does provide such a nice approach, please share some links with me! Thanks!
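For the record, a single UPDATE statement in SQL Server can modify only one table, so the usual pattern is one UPDATE per table wrapped in a transaction to keep the change atomic; a sketch:

[code="sql"]
-- A sketch: one UPDATE per table, kept atomic with a transaction.
BEGIN TRANSACTION;

UPDATE t1
SET    t1.Column1 = 'one'
FROM   Table1 AS t1
INNER JOIN Table2 AS t2 ON t1.id = t2.id
WHERE  t1.id = 'id1';

UPDATE t2
SET    t2.Column2 = 'two'
FROM   Table2 AS t2
INNER JOIN Table1 AS t1 ON t1.id = t2.id
WHERE  t1.id = 'id1';

COMMIT TRANSACTION;
[/code]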

Not consuming memory

Posted: 08 Mar 2013 12:37 AM PST

My company is rolling out a new product that was supposed to go to beta on March 1. That has been deferred, so the pressure is to get everything done yesterday. It is a complex multi-tiered application running web services, Citrix published apps, and multiple databases and instances on a virtualized cluster; sort of a kitchen sink of input sources. I had ZERO input on the database design and system architecture, so I'm having to learn the system in the middle of the problem. Which probably sounds familiar to most people here, no?

The load test was focused on the web services, so I was not allowed to capture any SQL statistics; I was only able to watch the defaults available through the Activity Monitor. The strangest thing during the test, from the database end, is that memory utilization peaked at 1.5 GB on an instance that had 28 GB assigned to it. Today we tested the instance with a few memory-hogging scripts just to show that it was configured properly and, as expected, the memory was easily consumed.

The load test had some interesting things happen. As the web requests loaded up the front end, the CPU climbed linearly, a nice direct correlation to the number of requests hitting the web servers. But as soon as the CPU hit 25% it leveled off, even as we doubled, tripled and quadrupled the number of web hits. More interesting is that there were two SQL instances in the test, and when the CPU leveled off, the waits displayed in the Activity Monitor started climbing up into the thousands. Even more curious is that the waits on each instance were inversely correlated: when one would peak, the other would be at a minimum, in a very regular saw-toothed pattern.

So I have to recommend a "solution" without having any data, since I wasn't allowed to pull any during the test. FWIW, disk I/O was fine, and today's memory test showed that the memory allocation of the instance is fine. My first recommendation is going to be to put all the databases on the same instance (there were two databases, one on each instance, that talked to each other a great deal), just to see how it affects the waits and the cross-talk between those two databases. Then look at tempdb issues, and insist that I be allowed to pull some performance counters DURING the test.

I found the oscillation of peak waits very interesting. Has anyone ever seen this type of behavior? I'm not expecting any magic answers here; more just some possibilities so I can drill down into the lower levels.

Logging all executed SQL Server Queries

Posted: 10 Mar 2013 10:25 PM PDT

Hello friends, I am using SQL 2008 R2. Many users use this instance of SQL Server, and all of them connect with the same sa user and password. I want to log each query run by each user, with the following details: user name (maybe [b]computer name or IP address[/b]), query text, and date and time of the query. Is it possible? If yes, then how?
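Since everyone shares the sa login, the client machine has to come from the session itself (for example, a logon trigger recording HOST_NAME() to a table). For the statements themselves, one option is SQL Server's built-in auditing; a sketch, with the caveats that the file path and database name are placeholders, database-level audit specifications require Enterprise edition on 2008 R2, and capturing truly *every* statement this way gets heavy:

[code="sql"]
-- A sketch using SQL Server Audit; path and names are placeholders.
USE master;
CREATE SERVER AUDIT QueryAudit
    TO FILE (FILEPATH = 'D:\Audit\');
ALTER SERVER AUDIT QueryAudit WITH (STATE = ON);

USE MyDatabase;   -- hypothetical database
CREATE DATABASE AUDIT SPECIFICATION QueryAuditSpec
    FOR SERVER AUDIT QueryAudit
    ADD (SELECT, INSERT, UPDATE, DELETE ON DATABASE::MyDatabase BY public)
    WITH (STATE = ON);

-- Read the captured statements back:
SELECT event_time, session_server_principal_name, statement
FROM sys.fn_get_audit_file('D:\Audit\*', DEFAULT, DEFAULT);
[/code]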

If I compress a primary key, am I compressing the whole table --> P-key with DATA_COMPRESSION

Posted: 05 Mar 2013 06:56 AM PST

Can someone explain how data compression works at the index level? If I compress a primary key, am I compressing the whole table (since the leaf pages of the clustered index are the data pages)?
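The answer hinges on which index backs the key: compressing the clustered index (the usual home of the primary key) does compress the table, because its leaf level *is* the table, while compressing a nonclustered index compresses only that index structure. A sketch of both, with placeholder names:

[code="sql"]
-- A sketch; object and index names are placeholders.
-- The clustered index is the table itself, so this compresses the whole table:
ALTER INDEX PK_MyTable ON dbo.MyTable
    REBUILD WITH (DATA_COMPRESSION = PAGE);

-- A nonclustered index is a separate structure; only it gets compressed:
ALTER INDEX IX_MyTable_SomeColumn ON dbo.MyTable
    REBUILD WITH (DATA_COMPRESSION = PAGE);
[/code]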

SQL.log file growing rapidly C:\Users\sqlaccount\AppData\Local\Temp

Posted: 10 Mar 2013 10:26 PM PDT

Hi guys, on the development server I noticed a file called SQL in [b]C:\Users\sqlaccount\AppData\Local\Temp[/b] that has grown to 10 GB. The only way to get rid of it is to stop SQL Agent and delete the file, but when I restart the agent the file is created again. It takes over a month to grow to 10 GB. I can confirm the SQL Agent error log is pointing to a different drive.

Message in the SQL log file:

SQLAGENT 520-344 EXIT SQLGetDiagFieldW with return code 100 (SQL_NO_DATA_FOUND) SQLSMALLINT 3 SQLHANDLE 0x00000000002D6740 SQLSMALLINT 1 SQLSMALLINT -1154 SQLPOINTER 0x000000001CFC871C SQLSMALLINT -6 SQLSMALLINT * 0x0000000000000000

Any ideas what this is?

Monday, March 11, 2013

[SQL Server] Linked Server Anonymous Logon Error

[SQL Server] Linked Server Anonymous Logon Error


Linked Server Anonymous Logon Error

Posted: 04 Mar 2013 04:54 AM PST

Hi, so we are changing the SQL service account. The account we are changing to is a domain user; the previous one was a domain admin. The account is inside a group, and the group has been added to the local administrators group on each server using a group policy. The group has been given logon rights to the servers, and the sysadmin server role.

When I changed the SQL service account to the new account, it seemed to go fine. A few days later, developers complained about linked servers being disconnected. So I logged in through SSMS to a server that had a linked server pointing at the server on which I changed the account. When I test the connection to that linked server, I get "Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'. (Microsoft SQL Server, Error: 18456)."

Are there specific permissions that a domain account needs in order to be used for linked servers? We temporarily resolved the issue by changing the logon under the linked server properties to the sa account, but the SQL service still runs as the newly changed account. I have seen some documentation, but it's pretty hard to follow; I'm pretty new to the more complex areas of SQL.

Hash value generation times

Posted: 11 Mar 2013 02:16 AM PDT

I have a table with 56 million rows. I performed the following operation:

ALTER TABLE MyBigTable ADD NewHashColumn AS HASHBYTES('SHA2_512', Column1+Column2+Column3)

The operation took a fraction of a second to complete. How is this possible?! I mean, can SQL Server really generate so many hash keys in such a short space of time?! I'm guessing that the table was already in memory and all this operation took place in memory? Will the next checkpoint write the actual hash codes to the disk?
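The likely explanation, worth stating since it answers the checkpoint question too: without PERSISTED, adding a computed column is a metadata-only change. Nothing is calculated or written when the column is added; the hash is computed at query time for each row read, so there is nothing for a checkpoint to flush either. A sketch of the contrast:

[code="sql"]
-- Non-persisted: metadata only, instant regardless of row count.
ALTER TABLE MyBigTable
    ADD NewHashColumn AS HASHBYTES('SHA2_512', Column1 + Column2 + Column3);

-- PERSISTED: the values are materialized into every row, so this would
-- touch all 56 million rows and take real time and log space.
ALTER TABLE MyBigTable
    ADD NewHashColumn2 AS HASHBYTES('SHA2_512', Column1 + Column2 + Column3) PERSISTED;
[/code]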
