Thursday, October 3, 2013

[how to] Doing a point in time restore with CDC enabled; Possible?


Doing a point in time restore with CDC enabled; Possible?

Posted: 03 Oct 2013 08:42 PM PDT

I discovered this week, the hard way, that the database restore options NORECOVERY and KEEP_CDC are mutually exclusive. So this begs the question: how do you do a database restore, keeping CDC intact, with both full and log backups?

Researching on MSDN and elsewhere, I cannot find any documentation on restoring a database with KEEP_CDC using any option other than a single full DB restore with RECOVERY specified.

I was able to find one attempt that did the full and subsequent log restores without the KEEP_CDC option, waiting until the final log; only then was the database brought online with the RECOVERY and KEEP_CDC options. The result was a corrupt CDC schema, as demonstrated here.

If the intent is to KEEP_CDC on restore, are you truly limited to a full backup only, or is there a mechanism similar to the attempt above to keep it intact during multi-file restores on a server other than the original?
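
For reference, the only documented restore form I could find pairs KEEP_CDC with RECOVERY in a single step; a minimal sketch with placeholder names:

RESTORE DATABASE MyDb
FROM DISK = N'D:\Backups\MyDb_full.bak'
WITH RECOVERY, KEEP_CDC;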

Mysql: update query with subquery

Posted: 03 Oct 2013 08:25 PM PDT

Hello, I need to perform the following query.

Query

update Taxonomy
set sourceId = (
    select id
    from TaxonomyMapping a
    where a.oldId = (
        select cm.en_ID
        from TaxonomyMapping ta
        join CategoryMapping cm on ta.oldId = cm.language_ID
        where ta.oldId = oldId
        limit 1
    )
)
where id > -1;

The tables are as follows:

Taxonomy(id, oldId, sourceId, name, language)

TaxonomyMapping(id, oldId, sourceId, name, language) Exact copy of Taxonomy

CategoryMapping(en_ID, language_ID)

What I'm trying to accomplish: the original Taxonomy table holds categories in a certain language, and their translations are recorded in CategoryMapping. I now need to put the English translation into sourceId. Can somebody help me with this? At the moment the query fills one and the same id for every row; I suspect that the oldId in the correlation (ta.oldId = oldId) is not the oldId of the table being updated. Is it possible to do a query like this, or should I look for another solution?
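
If the broken correlation is indeed the problem, a hedged sketch of a join-based rewrite (assuming CategoryMapping.en_ID holds the oldId of the English row) would be:

update Taxonomy t
join CategoryMapping cm on cm.language_ID = t.oldId
join TaxonomyMapping a on a.oldId = cm.en_ID
set t.sourceId = a.id;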

Recover oracle 10g database from old data files

Posted: 03 Oct 2013 09:16 PM PDT

I have an Oracle 10g database.

Unfortunately, our machine crashed and we lost all the backups and Oracle archive logs, but we have old image copies of our data files and one of the control files (made by copy & paste).

How can I restore our database from these copies?

I tried to create a new database and swap in the old files, but I couldn't (I got a message that we need to open the database with RESETLOGS, but we can't open it without first doing incomplete recovery).

Please help me.

MySQL innodb engine table and database backup and recovery

Posted: 03 Oct 2013 01:54 PM PDT

I am using storage snapshots to back up a MySQL database with MyISAM tables. Can you guide me on how to back up and restore an InnoDB database and its tablespaces using storage snapshots?
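
A minimal sketch of the usual quiesce-then-snapshot approach for InnoDB (it will also crash-recover from a snapshot taken without the lock, but the lock gives a clean, binlog-consistent point):

FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;   -- optional: record the binlog position
-- take the storage-level snapshot here, then release the lock:
UNLOCK TABLES;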

Move SQL Server 2008 database from machine A to B with no downtime

Posted: 03 Oct 2013 05:45 PM PDT

I have a bit of a problem. I have a SQL Server database on box A with approximately 60 GB of data. I need to move it to a new machine, but I cannot have any downtime at all. New data arrives every second, so we can't just do it at some random time in the middle of the night.

Is there any way to achieve this? I am pretty much able to do whatever I want so any suggestion will be useful.
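
One common near-zero-downtime approach is restore-and-roll-forward (log shipping and database mirroring both build on it); a minimal sketch with placeholder names and paths:

-- on the new machine: seed from a full backup, leaving the DB restoring
RESTORE DATABASE MyDb FROM DISK = N'\\share\MyDb_full.bak' WITH NORECOVERY;
-- keep shipping and restoring log backups while the old box stays online
RESTORE LOG MyDb FROM DISK = N'\\share\MyDb_log1.trn' WITH NORECOVERY;
-- at cutover: take a final tail-log backup on the old box, restore it, recover
RESTORE LOG MyDb FROM DISK = N'\\share\MyDb_tail.trn' WITH RECOVERY;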

The remote copy of database “<DBNAME>” is not related to the local copy of the database

Posted: 03 Oct 2013 11:41 AM PDT

I'm trying to join a secondary DB to my SQL Server 2012 Availability Group following this article:

Join a Secondary Database to an Availability Group (SQL Server)

When I attempt this, the local SQL Server Windows event log gives error 1407:

The remote copy of database "<DBNAME>" is not related to the local copy of the database  

I have followed all the steps and created a recent backup and restore with NORECOVERY.

The error log of the destination SQL server gives these errors:

AlwaysOn: The local replica of availability group '<AGNAME>' is going offline because the corresponding resource in the Windows Server Failover Clustering (WSFC) cluster is no longer online. This is an informational message only. No user action is required.

The state of the local availability replica in availability group '<AGNAME>' has changed from 'SECONDARY_NORMAL' to 'RESOLVING_NORMAL'. The replica state changed because of either a startup, a failover, a communication issue, or a cluster error. For more information, see the availability group dashboard, SQL Server error log, Windows Server Failover Cluster management console or Windows Server Failover Cluster log.

The state of the local availability replica in availability group '<AGNAME>' has changed from 'RESOLVING_NORMAL' to 'NOT_AVAILABLE'. The replica state changed because of either a startup, a failover, a communication issue, or a cluster error. For more information, see the availability group dashboard, SQL Server error log, Windows Server Failover Cluster management console or Windows Server Failover Cluster log.

The cluster and all its nodes are online. The AG dashboard does not give any additional information to help diagnose the problem.

Opening all firewalls did not solve the issue.

The Windows Event log gives this line:

The database 'DelphiOBUDB' is marked RESTORING and is in a state that does not allow recovery to be run.  

However, the DB is left in Restoring mode when you restore it and specify the NORECOVERY flag as per the tutorial guidelines.

How can I join the DB to my availability group?
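
For reference, the join step from the linked article boils down to one statement run on the secondary (names are placeholders):

ALTER DATABASE [DBNAME] SET HADR AVAILABILITY GROUP = [AGNAME];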

Is there any way to use different InnoDB settings for different databases on the same server?

Posted: 03 Oct 2013 12:28 PM PDT

From what I know this is not possible, but I'm surprised it's not a highly requested feature. I'd love to be able to have one DB using InnoDB in ACID-compliant mode and another DB on the same server, used for non-critical data, that uses InnoDB with more relaxed settings for disk writes.

Is there a way to achieve this setup?

Query to return fields of distinct values per key

Posted: 03 Oct 2013 08:35 PM PDT

In the process of building a data warehouse, we are creating some aggregation tables for a reporting engine. We would like to build a single aggregation table from the fact table that serves as a list of all possible values of certain fields present in the fact table.

I would like to create a table that has the following output structure:

|Org   |Gender| Age|State|
|------|------|----|-----|
|     1|     M|   1|   FL|
|     1|     F|   2|   VA|
|     1|  NULL|   3|   CA|
|     1|  NULL|NULL|   NJ|
|     2|     M|   2|   VA|
|     2|  NULL|   3| NULL|
|     2|  NULL|   5| NULL|

So that, for each Org, the non-NULL DISTINCT list in each column represents the list of distinct values in the fact table for that column. I can't use a DISTINCT operator, because that looks for distinct combinations of fields; here each field must be individually distinct, and the number of rows for each Org is at most the largest number of distinct values for that Org.

The database I'm using is SQL Server 2012, if it matters.
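
One way to produce that shape, sketched against a hypothetical fact table FactTable(Org, Gender, Age, State): rank the distinct non-NULL values of each column per Org, then full-join the rankings so shorter lists pad out with NULLs.

WITH G AS (
    SELECT Org, Gender, ROW_NUMBER() OVER (PARTITION BY Org ORDER BY Gender) AS rn
    FROM (SELECT DISTINCT Org, Gender FROM FactTable WHERE Gender IS NOT NULL) d
), A AS (
    SELECT Org, Age, ROW_NUMBER() OVER (PARTITION BY Org ORDER BY Age) AS rn
    FROM (SELECT DISTINCT Org, Age FROM FactTable WHERE Age IS NOT NULL) d
), S AS (
    SELECT Org, State, ROW_NUMBER() OVER (PARTITION BY Org ORDER BY State) AS rn
    FROM (SELECT DISTINCT Org, State FROM FactTable WHERE State IS NOT NULL) d
)
SELECT COALESCE(G.Org, A.Org, S.Org) AS Org, G.Gender, A.Age, S.State
FROM G
FULL JOIN A ON A.Org = G.Org AND A.rn = G.rn
FULL JOIN S ON S.Org = COALESCE(G.Org, A.Org) AND S.rn = COALESCE(G.rn, A.rn);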

Selecting the highest value in a column?

Posted: 03 Oct 2013 12:13 PM PDT

I am dealing with a table that has 3 columns, and I need a way to select, for every PrntP_PstnGrpID, the matching row that has the highest ChldLevel.

A PrntP_PstnGrpID can have multiple entries, but each entry will have an ever-increasing ChldLevel, starting at 1. How can I write a select statement to pull the row with the maximum ChldLevel for every PrntP_PstnGrpID?

I.e. for PrntP_PstnGrpID = 10, which has 3 entries with ChldLevel 1, 2, 3, I want the row with ChldLevel = 3; but for PrntP_PstnGrpID = 5, which has 5 entries with ChldLevel 1, 2, 3, 4, 5, I want the row with ChldLevel = 5.

CREATE TABLE [dbo].[P_PositionGroupFlat] (
    [ID]              INT            NOT NULL,
    [PrntP_PstnGrpID] INT            NOT NULL,
    [ChldLevel]       INT            NOT NULL,
    [Sort]            VARCHAR (8000) NOT NULL
);
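
A common pattern for this kind of "top row per group" requirement, as a sketch:

SELECT ID, PrntP_PstnGrpID, ChldLevel, Sort
FROM (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY PrntP_PstnGrpID
                              ORDER BY ChldLevel DESC) AS rn
    FROM dbo.P_PositionGroupFlat
) ranked
WHERE rn = 1;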

Oracle's UTL_FILE when connected locally or through listener has different behaviour

Posted: 03 Oct 2013 03:35 PM PDT

We have a procedure which creates a CSV file with UTL_FILE (env is Oracle 11.2.0.1 and RHEL 6.4). Until now we were storing that CSV file in the home directory of the oracle OS user (the Oracle owner), /home/oracle/csv/. It works fine, but now we're required to store the CSV file in a different OS user's home, e.g. that of the reports user (/home/reports/csv/).

So we added the oracle user to the reports group as a secondary group, and then changed the permissions on the reports home directory to make it accessible to that user and everyone in the group.

# id oracle
uid=500(oracle) gid=500(oracle) groups=500(oracle),502(reports)
# id reports
uid=502(reports) gid=502(reports) groups=502(reports)
# chmod 770 /home/reports
# ls -la /home/reports/
total 52
drwxrwx---  8 reports reports 4096 Oct  3 12:58 .
drwxr-xr-x. 5 root    root    4096 Oct  2 11:05 ..
drwxrwxrwx  2 reports reports 4096 Oct  3 12:59 csv

With this, logged into the system as oracle, I can write, read and execute files in reports's home:

# su - oracle
oracle ~$ touch /home/reports/csv/test.txt
oracle ~$ ls -la /home/reports/csv/test.txt
total 8
-rw-rw-r-- 1 oracle oracle 0 Oct  3 17:51 test.txt

And now, as far as I know (and Oracle's documentation says), this should work, but it doesn't... not at all. If I connect locally with sqlplus, it works. But if I connect from a remote machine or through the listener, it doesn't!

I'll show you. Local connection with sqlplus:

oracle ~$ export ORACLE_SID=MYDB
oracle ~$ sqlplus -S informes
Enter password:

select a.directory_name,
       a.directory_path,
       b.grantee,
       b.privilege
from all_directories a,
     all_tab_privs b
where a.directory_name = b.table_name
  and DIRECTORY_NAME='CSVFOLDER';

DIRECTORY_NAME                 DIRECTORY_PATH       GRANTEE   PRIVILEGE
------------------------------ -------------------- --------- ---------
CSVFOLDER                      /home/reports/csv    INFORMES  READ
CSVFOLDER                      /home/reports/csv    INFORMES  WRITE

show user
USER is "INFORMES"

declare
  output_fich utl_file.file_type;
begin
  output_fich := utl_file.fopen('CSVFOLDER','testfile.csv','W');
  utl_file.put_line (output_fich, 'test line');
  utl_file.fclose(output_fich);
end;
/

PL/SQL procedure successfully completed.

host ls -la /home/reports/csv/testfile.csv
-rw-rw-r-- 1 oracle oracle 10 Oct  3 18:20 /home/informesestados/tmp/testfile.csv

host rm /home/reports/csv/testfile.csv

Now, let's try it again connecting through the listener. First, let's see where my tnsnames is pointing:

oracle ~$ cat $ORACLE_HOME/network/admin/tnsnames.ora | grep MYDB
MYDB = (DESCRIPTION = (ADDRESS_LIST = (ADDRESS = (PROTOCOL = TCP)(HOST = FR-BD1-tmp)(PORT = 1522))) (CONNECT_DATA = (SERVICE_NAME = MYDB)))

oracle ~$ ping FR-BD1-tmp
PING fr-bd1-tmp (192.168.78.3) 56(84) bytes of data.
64 bytes from fr-bd1-tmp (192.168.78.3): icmp_seq=1 ttl=64 time=0.047 ms
64 bytes from fr-bd1-tmp (192.168.78.3): icmp_seq=2 ttl=64 time=0.025 ms
^C
--- fr-bd1-tmp ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1386ms
rtt min/avg/max/mdev = 0.025/0.036/0.047/0.011 ms

oracle ~$ /sbin/ifconfig | grep "inet addr"
          inet addr:192.168.78.3  Bcast:192.168.78.255  Mask:255.255.255.0

oracle ~$ lsnrctl services LISTENER_MYBD

LSNRCTL for Linux: Version 11.2.0.1.0 - Production on 03-OCT-2013 18:33:04

Copyright (c) 1991, 2009, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=fr-bd1-tmp)(PORT=1522)))
Services Summary...
Service "mydb" has 1 instance(s).
  Instance "mydb", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:45 refused:0
         LOCAL SERVER
Service "mydb_node1" has 1 instance(s).
  Instance "mydb", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:3 refused:0 state:ready
         LOCAL SERVER
The command completed successfully

oracle ~$ sqlplus -S informes@mydb

Enter password:

select a.directory_name,
       a.directory_path,
       b.grantee,
       b.privilege
from all_directories a,
     all_tab_privs b
where a.directory_name = b.table_name
  and DIRECTORY_NAME='CSVFOLDER';

DIRECTORY_NAME                 DIRECTORY_PATH       GRANTEE   PRIVILEGE
------------------------------ -------------------- --------- ---------
CSVFOLDER                      /home/reports/csv    INFORMES  READ
CSVFOLDER                      /home/reports/csv    INFORMES  WRITE

show user
USER is "INFORMES"

declare
  output_fich utl_file.file_type;
begin
  output_fich := utl_file.fopen('INFORMES','testfile.csv','W');
  utl_file.put_line (output_fich, 'test line');
  utl_file.fclose(output_fich);
end;
/
declare
*
ERROR at line 1:
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 536
ORA-29283: invalid file operation
ORA-06512: at line 4

Now, if I change the permissions on reports's home to be accessible to everyone, the UTL_FILE procedure works through a listener connection!

# chmod 777 /home/reports
# ls -la /home/reports/
total 52
drwxrwxrwx  8 reports reports 4096 Oct  3 12:58 .
drwxr-xr-x. 5 root    root    4096 Oct  2 11:05 ..
drwxrwxrwx  2 reports reports 4096 Oct  3 12:59 csv

# su - oracle
oracle ~$ sqlplus -S informes@mydb
Enter password:
declare
  output_fich utl_file.file_type;
begin
  output_fich := utl_file.fopen('CSVFOLDER','testfile.csv','W');
  utl_file.put_line (output_fich, 'test line');
  utl_file.fclose(output_fich);
end;
/

PL/SQL procedure successfully completed.

host ls -la /home/reports/csv/testfile.csv
-rw-rw-r-- 1 oracle oracle 10 Oct  3 18:59 /home/informesestados/tmp/testfile.csv

???!!

I don't understand this behaviour. Oracle's documentation says:

(http://docs.oracle.com/cd/B28359_01/appdev.111/b28419/u_file.htm)

On UNIX systems, the owner of a file created by the FOPEN function is the owner of the shadow process running the instance

The owner of the shadow process is the same in both methods (i.e. the oracle OS user), so... why this?

Does anybody have a clue? Am I missing something?

Regards!
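
One hedged thing to check, since supplementary groups are inherited at process start: if the listener was started before oracle was added to the reports group, the dedicated server processes it spawns will still carry the old group set (a local sqlplus session, by contrast, is forked from your fresh shell). Restarting the listener is a cheap test:

oracle ~$ lsnrctl stop LISTENER_MYBD
oracle ~$ lsnrctl start LISTENER_MYBD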

E/R diagram for sports league database

Posted: 03 Oct 2013 04:43 PM PDT

I am attempting to create a data model for a sports league.

There are two entities in my domain. The first entity is a Manager and the second is a Team.

A Manager has

  • personnel id
  • first name
  • last name
  • birth date
  • birthplace

A Team has one Manager

Given these design constraints, how can we connect these two entities?
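
A minimal relational sketch of the 1:N link (column types are assumptions):

CREATE TABLE Manager (
    personnel_id INT PRIMARY KEY,
    first_name   VARCHAR(50) NOT NULL,
    last_name    VARCHAR(50) NOT NULL,
    birth_date   DATE,
    birthplace   VARCHAR(100)
);

CREATE TABLE Team (
    team_id    INT PRIMARY KEY,
    manager_id INT NOT NULL REFERENCES Manager (personnel_id)  -- "a Team has one Manager"
);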

bulk insert not working in stored proc when using exec

Posted: 03 Oct 2013 01:50 PM PDT

We have a SQL Server 2005 database with a stored procedure that needs to do a bulk insert.

The query in the stored procedure is defined as such:

declare @query nvarchar(max)

set @query = N'bulk insert uploadtmp
from ''\\xxxxxxxxxxxxxxx.local\webapps\root\DOE_SPECIAL_ED\uplds\2014\201455E002.txt''
with (datafiletype=''char'', fieldterminator=''\t'', rowterminator=''\n'')'

exec sp_executesql @query

When the stored procedure is called from the web app, nothing gets inserted; but if the stored procedure is changed to just run the bulk insert as a hard-coded SQL statement and is called from the web app:

bulk insert uploadtmp
from '\\xxxxxxxxxxxxxxx.local\webapps\root\DOE_SPECIAL_ED\uplds\2014\201455E002.txt'
with (datafiletype='char', fieldterminator='\t', rowterminator='\n')

the data gets loaded. The developer eventually needs the dynamic version, because the path and file will change and be passed from the web app as a parameter. What's even more confusing: when you execute the stored procedure from a query window on the database server, the bulk insert works either way; it only has a problem when called from the web app.
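
Whatever the root cause turns out to be, the dynamic version the developer needs can at least be parameterized safely; a sketch with a hypothetical wrapper name (worth testing alongside the caller's bulk-insert permissions and, for UNC paths with Windows auth, Kerberos delegation):

CREATE PROCEDURE dbo.usp_LoadUpload   -- hypothetical name
    @path NVARCHAR(260)
AS
BEGIN
    DECLARE @query NVARCHAR(MAX)
    -- double up quotes in the path so it can be embedded in the literal
    SET @query = N'bulk insert uploadtmp from ''' + REPLACE(@path, N'''', N'''''')
               + N''' with (datafiletype=''char'', fieldterminator=''\t'', rowterminator=''\n'')'
    EXEC sp_executesql @query
END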

Composite Primary Key efficiency as a Foreign Key

Posted: 03 Oct 2013 07:32 AM PDT

I have a table with a Composite Primary key (consisting of 4 columns) which is used to ensure no duplicates are entered into the table. I am now in need of a new table which will need to reference the keys in this table as foreign keys.

My question is which approach is more efficient for lookup speeds:

1) Do I create the new table including all 4 columns and reference them all in a foreign key?

or

2) Do I create a new identity column in the primary-key table and use that as the foreign key in the new table?

This database is expected to hold a very large amount of data, so I have built it up until now with a view to minimising the amount of data held in each table. With this in mind, option 2 would be the better approach, since I would save two int columns and a datetime column for every row, but I want to avoid increasing the lookup time unnecessarily.
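
A sketch of option 2 with hypothetical names; the unique constraint keeps the duplicate protection the composite key provided, while the child table carries only one narrow key:

CREATE TABLE Parent (
    ParentID INT IDENTITY(1,1) PRIMARY KEY,
    KeyCol1  INT NOT NULL,
    KeyCol2  INT NOT NULL,
    KeyCol3  INT NOT NULL,
    KeyCol4  DATETIME NOT NULL,
    CONSTRAINT UQ_Parent_NaturalKey UNIQUE (KeyCol1, KeyCol2, KeyCol3, KeyCol4)
);

CREATE TABLE Child (
    ChildID  INT IDENTITY(1,1) PRIMARY KEY,
    ParentID INT NOT NULL REFERENCES Parent (ParentID)
);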

How to remove diacritics in computed persisted columns? COLLATE is non-deterministic and cannot be used

Posted: 03 Oct 2013 09:59 AM PDT

I have a function that includes:

SELECT @pString = CAST(@pString AS VARCHAR(255)) COLLATE SQL_Latin1_General_Cp1251_CS_AS  

This is useful, for example, to remove accents in French:

UPPER(CAST('Éléctricité' AS VARCHAR(255)) COLLATE SQL_Latin1_General_Cp1251_CS_AS)  

gives ELECTRICITE.

But using COLLATE makes the function non-deterministic and therefore I cannot use it as a computed persisted value in a column.

Q1. Is there another (quick and easy) way to remove accents like this, with a deterministic function?

Q2. (Bonus question) The reason I use this computed persisted column is 1) to search and 2) to detect duplicates. For example, the user may enter the customer's last name as 'Gagne', 'Gagné', 'GAGNE' or 'GAGNÉ', and the app will find it using the persisted computed column (and the update/insert will balk at the duplicate). Is there a better way to do this?

EDIT: Using SQL Server 2012 and SQL-Azure.
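
For Q2, a hedged alternative worth considering: make the comparisons themselves accent- and case-insensitive at the column level, so searches and duplicate checks match 'Gagne' and 'Gagné' without any computed column (sketch, hypothetical table):

ALTER TABLE dbo.Customer
    ALTER COLUMN LastName NVARCHAR(100)
    COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL;  -- CI = case-, AI = accent-insensitive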

Initializing Transactional Replication From two Different Database Backup

Posted: 03 Oct 2013 08:53 AM PDT

I have two large databases (200 GB each) whose data I want to replicate to a central database (a third DB). I followed this link and it was very useful for a single database.

But my problem is that I have to restore two different backups into a single database. Is that possible at all?

Performance: order of tables in joined statement

Posted: 03 Oct 2013 08:02 AM PDT

I have the following SQL statement, running on a SQLite database on a Windows mobile device.

SELECT
    table1.uniqueidentifier1, table1.int1, table1.varchar1,
    table1.decimal1, table1.decimal2
FROM table1
INNER JOIN table2 ON table1.PK = table2.FK
WHERE table2.uniqueidentifier2 IN (uniqueidentifier1, uniqueidentifier2, ....)
ORDER BY table1.varchar1

As there are several hundred thousand records in each table and the device isn't really new this takes some time.

Would the performance be better, if I switched the tables, somehow like this:

SELECT
    table1.uniqueidentifier1, table1.int1, table1.varchar1,
    table1.decimal1, table1.decimal2
FROM table2
INNER JOIN table1 ON table1.PK = table2.FK
WHERE table2.uniqueidentifier2 IN (uniqueidentifier1, uniqueidentifier2, ....)
ORDER BY table1.varchar1

Please note: in the first statement I select from table1 and join table2; in the second, it is switched.

Why, or why not, would it be faster?
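
A hedged note: SQLite's planner is generally free to reorder inner joins itself, so the written order may not change the chosen plan at all; EXPLAIN QUERY PLAN shows what it actually picked:

EXPLAIN QUERY PLAN
SELECT table1.uniqueidentifier1
FROM table1
INNER JOIN table2 ON table1.PK = table2.FK
WHERE table2.uniqueidentifier2 IN ('...');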

How to design database for polymorphic relationships

Posted: 03 Oct 2013 10:13 AM PDT

I have the following scenario:

  • There are some categories
  • Each category has attributes
  • An attribute can be free text or a collection of options
  • There is a product, assigned to a category, which needs to store the values/options of that category's attributes

How can a model be made without any repetition, and which correctly addresses the polymorphic nature of attributes?
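
One hedged sketch of the usual attribute/option layout (all names invented; types kept generic):

CREATE TABLE category (
    id   INT PRIMARY KEY,
    name VARCHAR(100) NOT NULL
);

CREATE TABLE attribute (
    id           INT PRIMARY KEY,
    category_id  INT NOT NULL REFERENCES category (id),
    name         VARCHAR(100) NOT NULL,
    is_free_text SMALLINT NOT NULL  -- 1 = free text, 0 = option list
);

CREATE TABLE attribute_option (
    id           INT PRIMARY KEY,
    attribute_id INT NOT NULL REFERENCES attribute (id),
    value        VARCHAR(255) NOT NULL
);

CREATE TABLE product (
    id          INT PRIMARY KEY,
    category_id INT NOT NULL REFERENCES category (id)
);

CREATE TABLE product_attribute_value (
    product_id   INT NOT NULL REFERENCES product (id),
    attribute_id INT NOT NULL REFERENCES attribute (id),
    option_id    INT NULL REFERENCES attribute_option (id),  -- used for option attributes
    free_text    VARCHAR(255) NULL,                          -- used for free-text attributes
    PRIMARY KEY (product_id, attribute_id)
);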

MYSQL LOAD DATA INFILE taking a long time

Posted: 03 Oct 2013 03:13 PM PDT

I have a MySQL DB running on a Raspberry Pi. Under normal circumstances MySQL actually runs slightly quicker than it did on my much more powerful desktop. However, I am trying to insert 60 million records into the database using LOAD DATA INFILE. I tried it all in one go (a 1.2 GB file) and it was still trying to load the data 1.5 days later. So I tried loading in chunks of 100,000, which was fine for the first 3 million records but soon started to grind to a halt. I then removed the indexes from the table, and it seems to run a bit quicker, but I noticed that for every 100,000 rows I insert, the time increases by about 20 seconds.

What is strange is that when I did a database restore from my original desktop machine's database (an identical DB with 60 million rows in the main table), the restore only took about 1 hour.

What is causing the slowdown for LOAD DATA INFILE?

I should point out that I am using InnoDB

EDIT:

I reduced the chunks to 1,000 records and left it running, which did appear to speed things up: after about 1 hour it had inserted 24 million records, though each insert of 1,000 rows was by then taking about 30 seconds. I then decided to stop it and restarted the Raspberry Pi. When I ran the import again, lo and behold, the initial inserts were back to less than one second.

So my question is: do I need to clear a cache or something? MySQL appears to be getting bogged down, rather than LOAD DATA INFILE itself being slow. It is almost as if it is filling up memory and not releasing it, or something more technical to do with MySQL.
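
For what it's worth, these are the settings most often tested for InnoDB bulk loads (values are illustrative; the buffer pool and log sizes live in my.cnf and need a restart on 5.x):

-- in my.cnf: innodb_buffer_pool_size = 256M, innodb_log_file_size = 128M
SET SESSION unique_checks = 0;
SET SESSION foreign_key_checks = 0;
LOAD DATA INFILE '/path/to/chunk.csv' INTO TABLE big_table;  -- placeholders
SET SESSION unique_checks = 1;
SET SESSION foreign_key_checks = 1;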

Unclear oracle problem with getting response of query

Posted: 03 Oct 2013 08:21 AM PDT

The problem is very unclear.

I have an application server that constructs queries and sends them to Oracle to get a response.

The Oracle database is installed on a high-performance server:
Linux 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux

The application server was tested from different locations inside the intranet, including the high-performance server where the database is located.

Several versions of the application server (from old to newest) were also tested.

The problem:

When the application server sends a query to the database, it has to wait about 300 seconds to get a response and construct the logged output.

But when I connect to the database with PL/SQL Developer and run the same query in that IDE, I get a response in less than one second.

Does this sound familiar to anyone?

We've just about cracked our heads trying to determine where the problem could be.

If you need more information, I'll update this post immediately.
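
A hedged first diagnostic: while the 300-second wait is happening, check what the session from the application server is actually waiting on (the username is a placeholder):

SELECT sid, status, event, wait_class, seconds_in_wait, sql_id
FROM v$session
WHERE username = 'APP_USER'
  AND status = 'ACTIVE';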

Restrict range of dynamic ports available to SQL Server

Posted: 03 Oct 2013 10:38 AM PDT

Is there a way to limit the range of dynamic ports available to SQL Server from the database side, or the database server side? Our SOP is to use static ports across the network firewall, and a vendor is having trouble locking down their ports. Theoretically, if we allowed a range of 1000 ports inside the dynamic range (49152–65535) across the firewall, how would I limit SQL Server to only take a dynamic port inside that range?
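
I'm not aware of any SQL Server setting that constrains which dynamic port it is handed; the assignment comes from the OS pool, so the usual lever is narrowing Windows' own dynamic range (machine-wide, so it affects every service on the box):

netsh int ipv4 set dynamicport tcp start=49152 num=1000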

Probable circular dependency, but query still runs

Posted: 03 Oct 2013 08:27 PM PDT

I built this ER diagram in MySQL Workbench and it will run and generate the database. My concern is that the relationship between the employees, departments, positions, and employment_statuses tables is circular.

Because I know people will ask, this is something I was working on in a class. I'm not asking for help with homework, this is something we were working on in class and I took a different route than the professor. I'm asking for clarification about database modelling theory.

(The ER diagram image from the original post is not reproduced here.)

Calculating the median value in a MySQL table with CodeIgniter

Posted: 03 Oct 2013 05:25 PM PDT

This is my model file in CodeIgniter. I need to calculate the median of this variable: budget_breakdown.breakdown_grants. How can I do this calculation?

function reports_details1() {
    $this->db->select('budget_breakdown.breakdown_grants');
    //survey_respondent_info.state,survey_respondent_info.survey_id,budgets.budget_id,
    $this->db->from('survey_respondent_info');
    $this->db->join('budgets',
                    'budgets.survey_id=survey_respondent_info.survey_id', 'left');
    $this->db->join('budget_breakdown',
                    'budgets.budget_id=budget_breakdown.budget_id', 'left');
    $this->db->where('budgets.budget_option_id', 2);
    $query1 = $this->db->get();
    $result = $query1->result();
    return $result;
}
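
MySQL has no built-in MEDIAN(), so a common workaround ranks the rows with a user variable and averages the middle one or two; a sketch against the bare column (the joins and the budget_option_id filter are omitted for brevity):

SELECT AVG(dd.breakdown_grants) AS median
FROM (
    SELECT d.breakdown_grants,
           @rownum := @rownum + 1 AS row_number,
           @total_rows := @rownum AS total_rows
    FROM budget_breakdown d, (SELECT @rownum := 0) r
    WHERE d.breakdown_grants IS NOT NULL
    ORDER BY d.breakdown_grants
) dd
WHERE dd.row_number IN (FLOOR((@total_rows + 1) / 2), FLOOR((@total_rows + 2) / 2));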

Access denied for user 'root'@'%'

Posted: 03 Oct 2013 08:56 PM PDT

I used to access the root user in MySQL just fine. But recently, I am no longer able to.

I am able to log in fine:

 mysql -u root -p  

Here is the mysql status after login :

mysql> status
--------------
mysql  Ver 14.14 Distrib 5.5.28, for debian-linux-gnu (i686) using readline 6.2

Connection id:          37
Current database:
Current user:           root@localhost
SSL:                    Not in use
Current pager:          stdout
Using outfile:          ''
Using delimiter:        ;
Server version:         5.5.28-0ubuntu0.12.04.3 (Ubuntu)
Protocol version:       10
Connection:             Localhost via UNIX socket
Server characterset:    latin1
Db     characterset:    latin1
Client characterset:    utf8
Conn.  characterset:    utf8
UNIX socket:            /var/run/mysqld/mysqld.sock
Uptime:                 4 min 16 sec

Threads: 1  Questions: 112  Slow queries: 0  Opens: 191  Flush tables: 1  Open tables: 6  Queries per second avg: 0.437
--------------

But when I want to do any action, such as :

mysql> CREATE DATABASE moyennegenerale;
ERROR 1044 (42000): Access denied for user 'root'@'%' to database 'moyennegenerale'

I understand % is used to signify any host, but my status clearly states localhost. Does somebody have an idea of what might be going on?
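
Two hedged checks: see which account the server actually matched you to, and what that account may do:

SELECT USER(), CURRENT_USER();   -- USER() is what you sent; CURRENT_USER() is what matched
SHOW GRANTS FOR CURRENT_USER();
SELECT user, host FROM mysql.user WHERE user = 'root';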

Parallel inserts cause deadlock on DB2

Posted: 03 Oct 2013 07:38 AM PDT

The scenario is the following: the application attempts to insert into the same table from two parallel threads, making 4000 insertions in each thread (a separate transaction per thread). The DB always fails with the following exception:

com.ibm.db2.jcc.a.pn: The current transaction has been rolled back because of a deadlock or timeout. Reason code "2".. SQLCODE=-911, SQLSTATE=40001, DRIVER=3.52.95

The full log is ([#1] indicates the 1st thread/transaction, [#2] is correspondingly the 2nd):

SQL: create table line (id integer generated by default as identity, line_number integer not null, constraint line_pk primary key (id))
[#1] SQL: insert into line (line_number) values (1)
[#1] SQL: insert into line (line_number) values (2)
[#2] SQL: insert into line (line_number) values (1)
[#2] SQL: insert into line (line_number) values (2)
[#1] SQL: insert into line (line_number) values (3)
[#2] SQL: insert into line (line_number) values (3)
[#1] SQL: insert into line (line_number) values (4)
[#2] SQL: insert into line (line_number) values (4)
...
[#2] SQL: insert into line (line_number) values (1608)
[#1] SQL: insert into line (line_number) values (1608)
[#2] SQL: insert into line (line_number) values (1609)
[#2] SQL: insert into line (line_number) values (1610)
[#2] SQL: insert into line (line_number) values (1611)
...
[#2] SQL: insert into line (line_number) values (1654)
[#2] SQL: insert into line (line_number) values (1655)
[#1] [org.epo.lifesciences.slice.db.DBTest] Thread #1 has failed
org.springframework.dao.DeadlockLoserDataAccessException: StatementCallback; SQL [insert into line (line_number) values (1608)]; The current transaction has been rolled back because of a deadlock or timeout.  Reason code "2".. SQLCODE=-911, SQLSTATE=40001, DRIVER=3.52.95; nested exception is com.ibm.db2.jcc.a.pn: The current transaction has been rolled back because of a deadlock or timeout.  Reason code "2".. SQLCODE=-911, SQLSTATE=40001, DRIVER=3.52.95
        at org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTranslator.doTranslate(SQLErrorCodeSQLExceptionTransl
        at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLException
        at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:407)
        at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:519)
        ...
[#2] SQL: insert into line (line_number) values (1656)
[#2] SQL: insert into line (line_number) values (1657)
...
[#2] SQL: insert into line (line_number) values (3999)
[#2] SQL: insert into line (line_number) values (4000)
[#2] Thread #2 completed

It looks like the lock space is exhausted much earlier than the transaction log is. The solutions I see (and which seem to work):

  • Increase lock space, in particular tweak these parameters:
    • locklist – maximum storage for lock list configuration parameter
    • maxlocks – maximum percent of lock list before escalation configuration parameter
  • Commit more frequently.

Neither is strictly acceptable, because:

  • One needs to know in advance what the size of locklist should be, and it has to be increased as the number of insertions grows. Thus DB2 cannot automatically adapt to an increasing workload efficiently. Also, the DB instance needs to be restarted when this parameter is changed.
  • Frequent commits also mean that there should be a way to roll back all previous commits if something goes wrong at a certain point in time. That complicates the application, as it needs to implement "revert" logic (savepoints won't help). Moreover, intermediate commits will be visible to data requesters, so the application also needs to track such "uncompleted" tasks and hide the data from clients.

The test application works OK as-is on MySQL, HSQL and MSSQL (with no further tricks). So I believe there should be a way to make it work on DB2 without code refactoring, staying at an SQL-92-compliant level. Is that possible with enterprise-level DB2? Perhaps I'm missing something trivial; any feedback is welcome.
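
On the "cannot adapt automatically" objection: since DB2 9, the lock list can be handed to the self-tuning memory manager, which resizes it online with no instance restart; a hedged sketch, with the database name as a placeholder:

db2 UPDATE DB CFG FOR mydb USING LOCKLIST AUTOMATIC MAXLOCKS AUTOMATIC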

Pgpool, Postgresql and Apache tuning (1000 concurrent spatial queries) [on hold]

Posted: 03 Oct 2013 09:25 AM PDT

I'm trying to configure a load-balancing system. I have a Python script, invoked through mod_wsgi on Apache, that generates a query and executes it on pgpool: request -> wsgi python -> pgpool -> postgresql. Pgpool is configured as a load balancer using 4 servers with 24 GB RAM and 350 GB SSDs. Our DB is about 150 GB and a query takes about 2 seconds. These are the configurations:

Pgpool

  • num_init_children 500
  • max_pool 2
  • child_life_time 300 seconds

Apache (mpm_prefork)

  • StartServers 100
  • MinSpareServers 20
  • MaxSpareServers 45
  • ServerLimit 2000
  • MaxClients 100
  • MaxRequestsPerChild 1000

PostgreSQL

  • max_connections = 1000
  • shared_buffers = 6GB
  • work_mem = 4GB

It doesn't seem to be working. When I try to submit more than 150 concurrent queries, although the pgpool log file doesn't contain any errors, I get this error from the Python script:

OperationalError: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request.

Any ideas?

Can I move rows between partitions by updating the partition key?

Posted: 03 Oct 2013 09:56 AM PDT

I would think that this would be a fairly simple question, but I've actually had a difficult time finding an answer.

The question: Can you move rows of data within a partitioned table from one partition to another by simply updating the partition column so that it crosses the partition boundary?

For example, if I have a table that has a partition key:

CREATE TABLE SampleTable
(
    SampleID INT PRIMARY KEY,
    SampleResults VARCHAR(100) NOT NULL
)

With the partition function that maps to the primary key:

CREATE PARTITION FUNCTION MyPartitionFunc (INT) AS
RANGE LEFT FOR VALUES (10000, 20000);

Can I move a row from the first partition to the third partition by changing the SampleID from 1 to (say) 500,000?

Note: I'm tagging this as both SQL Server 2005 and 2008, since they both support partitioning. Do they handle it differently?
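
For what it's worth, the effect of such an update is easy to observe: $PARTITION maps a key value to its partition number, so you can check a row's placement before and after the change:

SELECT SampleID,
       $PARTITION.MyPartitionFunc(SampleID) AS partition_number
FROM SampleTable
WHERE SampleID IN (1, 500000);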

Tool to generate large datasets of test data

Posted: 03 Oct 2013 07:28 AM PDT

Many times when trying to come up with an efficient database design the best course of action is to build two sample databases, fill them with data, and run some queries against them to see which one performs better.

Is there a tool that will generate (ideally straight into the database) large (~10,000 records) sets of test data relatively quickly? I'm looking for something that at least works with MySQL.
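
In a pinch, a cross join of a small digits table makes a quick home-grown generator (MySQL flavor; the target table and columns are made up):

CREATE TABLE digits (d INT);
INSERT INTO digits VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9);

-- 10 x 10 x 10 x 10 = 10,000 rows of sample data
INSERT INTO test_table (id, val)
SELECT a.d*1000 + b.d*100 + c.d*10 + e.d, RAND()
FROM digits a, digits b, digits c, digits e;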

[Articles] The Challenges of Being Safe


The Challenges of Being Safe

Posted: 02 Oct 2013 11:00 PM PDT

Securing your data is a challenge. Steve Jones has a few comments on just how hard it can be to obfuscate your production data as you move it to development environments.

[MS SQL Server] sqlserver connection problems


sqlserver connection problems

Posted: 03 Oct 2013 04:18 AM PDT

We had a problem yesterday with all jobs skipping their schedule on a server, for a small time frame, at regular intervals. There was no information in the Windows logs or event logs. We had to restart the server to fix it. I was not able to connect to the SQL Server through SSMS: [i]A severe error occurred on the current command. The results, if any, should be discarded. (Microsoft SQL Server, Error: 0)[/i] I am trying to find out what exactly the issue was.

Transactional Replication. snapshot Agent not picking up Creation Script

Posted: 03 Oct 2013 02:08 AM PDT

I'm working on some custom transactional replication as part of a partial database/application upgrade. Table definitions have changed in the publisher, but not in the subscribers. I need to run a custom creation script to modify the subscriber, and the distribution agent is not picking it up. Here's the article creation script:

[code="sql"]
EXEC sp_addarticle
    @publication = N'WB_to_opls_character',
    @article = N'Employee',
    @source_owner = N'dbo',
    @source_object = N'Employee',
    @type = N'indexed view logbased manualboth',
    @description = N'',
    @creation_script = 'F:\Data\Cleanup tbl_Employee.sql',
    @pre_creation_cmd = N'delete',
    @schema_option = 0x00,
    @identityrangemanagementoption = N'manual',
    @destination_table = N'tbl_EMPLOYEE',
    @destination_owner = N'dbo',
    @status = 0,
    @vertical_partition = N'false',
    @ins_cmd = N'CALL [sp_MSins_dboEmployeeCustom]',
    @del_cmd = N'CALL [sp_MSdel_dboEmployeeCustom]',
    @upd_cmd = N'SCALL [sp_MSupd_dboEmployeeCustom]',
    @sync_object = N'vEmployee',
    @sync_object_owner = N'sync'
[/code]

And yes, I'm jumping through some hoops here to make replication work. I was given the option of replication (which was already being used) or linked servers, synonyms, and other hoops, and chose replication. SSIS may have been a better choice, but they want near real-time, and once I learned I could jump through some hoops with replication to make the source look like the destination, I decided to go with it.

Thanks.

P.S. I had it working, forgot to save the script, and now for some reason the creation script is not getting picked up by the snapshot agent.

Alter Partition Function with existing data problem

Posted: 03 Oct 2013 01:59 AM PDT

I have inherited a datamart on SQL 2008 R2 that has an existing partition function in place for our fact data, broken up by months. It was set up to go out as far as the end of 2012, but was never extended past that. This wasn't discovered until about a month ago, and by then the last filegroup (12/2012) held the data for 1/2013 through the current day. My previous attempts to resolve this using the split range option resulted in some pretty large tran logs, which ultimately ended up with me running out of disk space. I'd be okay with leaving the large partition in place that contains the data for 12/2012-10/2013 if I knew I'd be able to roll 11/2013, 12/2013, etc. into their correct partitions going forward. My partition scheme/function look like this:

CREATE PARTITION SCHEME [ds_partition_fact_scheme_month] AS PARTITION [ds_partition_fact_fn] TO ([fact_2003_1_partition_filegroup], ….., [fact_2012_12_partition_filegroup])

CREATE PARTITION FUNCTION [ds_partition_fact_fn](int) AS RANGE LEFT FOR VALUES (1127, 1155, ……, 4718)

(where each range value represents the first day of a month in our time dimension table and 4718 is 12/1/2012)

I've added all the files/filegroups going out 12 months into the future and tried running the following commands:

ALTER PARTITION SCHEME ds_partition_fact_scheme_month NEXT USED [fact_2013_11_partition_filegroup]

and

ALTER PARTITION FUNCTION ds_partition_fact_fn() SPLIT RANGE (5054)

(where 5054 represents 11/1/2013 in our time dimension table)

When I run the alter partition function command, my tran log grows until I run out of disk space once again (over 200 GB of tran log), despite the fact that I haven't accumulated any data for Nov 2013 yet. I wouldn't have thought there was any data to be moved. Maybe I'm misunderstanding what's going on. Based on what I'm trying to accomplish, am I doing this right? I seem to be missing something.

Thanks
Dave
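
A hedged explanation for the log growth: the current last partition is not empty (it holds 1/2013 through 10/2013), and with RANGE LEFT, SPLIT RANGE (5054) moves every existing row at or below the new boundary into the newly created partition on the new filegroup. Moving ten months of fact data is fully logged, even though no 11/2013 data exists yet. A split is metadata-only only when the range being split is empty, which is why the usual practice is to keep an empty partition at the end and split ahead of the data, e.g.:

[code="sql"]
-- split future, still-empty months before any of their data arrives
-- (boundary value 5084 for 12/1/2013 is illustrative)
ALTER PARTITION SCHEME ds_partition_fact_scheme_month NEXT USED [fact_2013_12_partition_filegroup]
ALTER PARTITION FUNCTION ds_partition_fact_fn() SPLIT RANGE (5084)
[/code]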

Memory threshold for SQL server 2008

Posted: 02 Oct 2013 09:50 PM PDT

Hello, I am building a new virtual server: Windows 2008 R2 Datacenter with SQL Server 2008 R2 Standard Edition (64-bit) installed. The "Maximum server memory" setting is 4096 MB and the server has 6 GB of RAM. Why is SQL Server crossing the memory threshold and utilizing most of the available 6 GB of RAM? With no client connections yet and the server idle, I currently have only 400 MB available. Why is it eating up most of the memory? Thanks, Bij
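
One hedged check: on 2008 R2, "max server memory" caps only the buffer pool, not everything the process allocates (CLR, thread stacks, multi-page allocations), and the rest of the 6 GB may simply be held by other processes or the OS cache. This DMV shows what the engine itself holds:

[code="sql"]
SELECT physical_memory_in_use_kb,
       locked_page_allocations_kb,
       memory_utilization_percentage
FROM sys.dm_os_process_memory;
[/code]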

[SQL 2012] Huge Increase in Logical Reads for 17 Hour Period


Huge Increase in Logical Reads for 17 Hour Period

Posted: 03 Oct 2013 01:46 AM PDT

We have recently implemented Dynamics CRM and have been working to performance-tune all aspects of the product. Yesterday we started getting calls related to slow performance. I installed a performance analyzer toolset named DynamicsPerf (http://archive.msdn.microsoft.com/DynamicsPerf). This tool captures data from the DMVs every 2 hours and stores it in a database.

While analyzing this issue I noticed that logical reads in this timeframe were through the roof. A query which usually averages 1k-2k reads was averaging 1.4 million reads. This was happening with a number of different entities.

Looking at the data DynamicsPerf captured (very small subset attached), it looks like the same statements and execution plans were used before, during, and after this timeframe.

Any insight into what could cause this type of behavior would be much appreciated.

Thanks,
Sam
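
For anyone wanting to reproduce the check without DynamicsPerf, the underlying numbers come straight from the plan-statistics DMVs; something like:

[code="sql"]
SELECT TOP (10)
       qs.total_logical_reads,
       qs.execution_count,
       qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
       st.text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY qs.total_logical_reads DESC;
[/code]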

Pass data from SSRS report result set to Webform

Posted: 02 Oct 2013 07:42 AM PDT

This is my first post here, so good day to all. Let me give you some background on the item I need help with.

Where I am as we speak (as simplified an explanation as I can give): Webform1 has a textbox1 and button1 control on it. Upon clicking button1, the textbox1 data is captured and the value is passed to Webform2. Upon loading Webform2, the parameter is passed to the ReportViewer control, where the report displays the queried result. All of the above is already in place.

What I want to know: is it possible to have the SSRS report contain links within the results that can be passed back to a webform, so they can be used to populate textbox controls on a third webpage? Sort of like a drillthrough, but not.

I.e. a webform with a ReportViewer control shows an SSRS report result which returns an ID column. Each piece of data in the ID column has a link that passes that value as a parameter to Webform3. Webform3 takes the parameter and passes it to a SQL stored proc, which returns the data to load the textbox controls on Webform3. Can this be done? Clear as mud? Any and all help is much appreciated!!!

SQL Server version

Posted: 03 Oct 2013 02:32 AM PDT

When I run SELECT @@VERSION in SSMS on the instance, I get the following info:

Microsoft SQL Server 2012 (SP1) - 11.0.3000.0 (X64) Oct 19 2012 13:38:57 Copyright (c) Microsoft Corporation Standard Edition (64-bit) on Windows NT 6.2 <X64> (Build 9200: ) (Hypervisor)

I have another instance, and when I run the same command I see the following:

Microsoft SQL Server 2012 (SP1) - 11.0.3128.0 (X64) Dec 28 2012 20:23:12 Copyright (c) Microsoft Corporation Enterprise Evaluation Edition (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1)

Does this mean the first one is not an eval version but a licensed version? Is there any other way to confirm whether an instance is a licensed version or not?

Thanks.
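
A hedged alternative to eyeballing @@VERSION: SERVERPROPERTY reports the edition directly (evaluation installs show it in 'Edition'):

[code="sql"]
SELECT SERVERPROPERTY('Edition')        AS Edition,
       SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('ProductLevel')   AS ProductLevel;
[/code]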

Book Reviews

Posted: 03 Oct 2013 12:27 AM PDT

Wouldn't it be a great idea to have forum members give their reviews of the book featured in the latest newsletter? I honestly do not know what to make of so many "Amazon"-type reviews, some of which are hyper-critical of the author to an unacceptable degree, while others sound like "professional" reviewers who "review" anything by skimming pages and rehashing the table of contents. Much like that old adage that says don't judge a book by its cover, I'm tempted to say don't judge a book by its reviews. To be fair, in between these there are a few helpful reviews. I'm sure that members of this forum, who are mainly practitioners in the field, will have many useful, helpful comments on the books featured, and newbies like me can profit greatly from their reviews. Anyone else for the idea?

SQL 2012 Import wizard VERY slow - what to do?

Posted: 02 Oct 2013 08:16 AM PDT

I was using the SQL 2012 Import Wizard to double the size of my 80k-record table. It has been stuck on the same '1160 rows transferred' message for three hours, which is weird, as it normally only takes a couple of minutes to copy the whole table. In Task Manager it is still in 'running' status. Should I just let it run, or can I stop it using a command without the DB going into 'suspect', 'emergency' or some other weird state? Will 'take offline' then back 'online', or 'detach' then 'attach', safely work? Thanks

openquery and DBCC

Posted: 02 Oct 2013 09:01 PM PDT

Hello. This query

select * from openquery([ServerName],'exec(''SET FMTONLY OFF; dbcc page(20,1,192963 ,0) with tableresults'')')

works fine in SQL Server 2008 R2, but in SQL Server 2012 I get this message:

Msg 11527, Level 16, State 1, Procedure sp_describe_first_result_set, Line 1
The metadata could not be determined because statement 'dbcc page(20,1,192963 ,0) with tableresults' does not support metadata discovery.

Why? Is there another way to determine the object with this page number? Thanks.

Find value of cell from header name and row name (like going down and across)

Posted: 02 Oct 2013 08:38 PM PDT

What I want is the value of the cell where x meets y, going down and across: what is the value in that cell? Thank you for your time.

Support of JET OLEDB

Posted: 02 Oct 2013 06:51 AM PDT

All, we are planning to upgrade from SQL Server 2005 to SQL Server 2012.

1. We have many SSIS packages. What is the best approach to migrate them?
2. We have many SSIS packages using the JET OLEDB provider. How do we make these work in SQL 2012?

Thanks in advance. Regards,

Launch SSMS from command prompt

Posted: 02 Oct 2013 04:15 AM PDT

Hi, how do we launch 2012 SSMS from the command prompt? I was using ssms.exe to launch 2008 SSMS, and I also googled and found that ssms.exe should work for 2012 SSMS too. But on my system both 2008 and 2012 are installed, and when I use ssms.exe the 2008 SSMS opens. I even tried the instance name with the -S switch, and it still opens in 2008 SSMS.
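
When several versions are on the PATH, calling the 2012 binary by its full path usually resolves it; the default install location (adjust drive and edition as needed) is:

"C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Binn\ManagementStudio\Ssms.exe" -S servername\instancename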

Missing OraOLEDB provider

Posted: 29 Jul 2013 06:05 AM PDT

I need to be able to set up linked servers on our SQL Server 2012 server. When I try to set up a new linked server, I do not see the Oracle provider. I installed the 64-bit Oracle client... no go. I then installed the 32-bit Oracle client in addition to the 64-bit one... no go. I tried a regedit hack I found on Google... no go. I rebooted the server after each of the above attempts. Has anyone found a solution to this issue? Thank you in advance. Charlie

BIDS vs SSDT

Posted: 02 Oct 2013 05:45 AM PDT

Hi, simple question. We already develop with BIDS for SQL Server 2008. We recently upgraded our developers' PCs to Visual Studio 2012, but our server is still on SQL 2008. Would it be possible to use SSDT for development instead of the SQL 2008 tooling, with the server staying on 2008? Not sure if that's clear? I hope so :) Steve

[T-SQL] Inline Valued Function query...


Inline Valued Function query...

Posted: 03 Oct 2013 12:05 AM PDT

Afternoon all,

Looking for a bit of advice on something I am struggling to explain. I have 3 functions, 2 of which are examples provided by Adam Machanic in a useful post he did on scalar functions vs inline functions. The functions use AdventureWorks and are as follows:

[b]FUNCTION 1 - Inline Function 1:[/b]

CREATE FUNCTION [dbo].[GetMaxProductQty_Inline_Test1]
(
    @ProductId INT
)
RETURNS TABLE
AS RETURN
(
    SELECT MAX(sod.OrderQty) AS maxqty
    FROM Sales.SalesOrderDetail sod
    WHERE sod.ProductId = @ProductId
)

[b]FUNCTION 2 - Inline Function 2:[/b]

CREATE FUNCTION GetMaxProductQty_Inline_Test2
(
    @ProductId INT
)
RETURNS @test TABLE (MaxQty INT)
AS
BEGIN
    DECLARE @MaxQty INT
    SELECT @MaxQty = (SELECT MAX(sod.OrderQty) AS maxqty
                      FROM Sales.SalesOrderDetail sod
                      WHERE sod.ProductId = @ProductId)
    INSERT INTO @test (MaxQty) VALUES (@MaxQty)
    RETURN
END

[b]FUNCTION 3 - Scalar Function:[/b]

CREATE FUNCTION GetMaxProductQty_Scalar_Test1
(
    @ProductId INT
)
RETURNS INT
AS
BEGIN
    DECLARE @maxQty INT
    SELECT @maxQty = MAX(sod.OrderQty)
    FROM Sales.SalesOrderDetail sod
    WHERE sod.ProductId = @ProductId
    RETURN (@maxQty)
END

I used this to give people an easy-to-understand example of the implications of scalar functions vs inline functions when both are set up to return a similar result set. [b]Inline Function 1[/b] vs [b]Scalar Function 1[/b] highlighted what Adam described in his blog post: the inline function was 288 times quicker, did 1,200 (ish) reads vs 365,000 (ish) reads for the scalar function, consumed vastly less CPU, and in the process allowed an optimised execution plan to be generated detailing exactly what was happening, rather than the black-box approach of the scalar function.

However, I set about re-writing the inline function to try and utilise parameters to return a table, which spawned [b]Inline Function 2[/b]. This function, when used in exactly the same way as Inline Function 1, allowed SQL to generate a plan, but in reality it still did a similar number of reads to the scalar function; only this time it was visible what was going on under the covers, and what I deem to be the culprit was a nested loop, cycling through the results in a row-by-row, cursor-like way that was just as bad for performance, only more visible through the execution plan.

What I am struggling to understand is why [b]Inline Function 1[/b] is so superior to [b]Inline Function 2[/b] when they both run inline with the query and both allow SQL to generate a useful execution plan of what is going on behind the scenes. The obvious answer is the nested loop, but I am looking for more detail than this if possible. The SQL used to execute both queries is as follows:

-- Inline Function 1 --
SELECT ProductId,
       (SELECT MaxQty FROM dbo.GetMaxProductQty_Inline_Test1(ProductId)) MaxQty
FROM Production.Product
ORDER BY ProductId

-- Inline Function 2 --
SELECT ProductId,
       (SELECT MaxQty FROM dbo.GetMaxProductQty_Inline_Test2(ProductId)) MaxQty
FROM Production.Product
ORDER BY ProductID

Any advice on this would be massively appreciated. Cheers
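
One hedged pointer for digging further: despite its name, _Test2 is not an inline TVF at all; the RETURNS @test TABLE ... BEGIN form makes it a multi-statement TVF, which the optimizer executes as an opaque unit once per outer row. The catalog makes the distinction visible:

[code="sql"]
-- IF = inline table-valued function, TF = multi-statement table-valued function
SELECT name, type, type_desc
FROM sys.objects
WHERE name LIKE 'GetMaxProductQty%';
[/code]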

Blocking SPID with Text Data?

Posted: 02 Oct 2013 11:17 PM PDT

Hi,

SELECT wt.session_id, ot.task_state, wt.wait_type, wt.wait_duration_ms,
       wt.blocking_session_id, wt.resource_description,
       es.[host_name], es.[program_name]
FROM sys.dm_os_waiting_tasks wt
INNER JOIN sys.dm_os_tasks ot ON ot.task_address = wt.waiting_task_address
INNER JOIN sys.dm_exec_sessions es ON es.session_id = wt.session_id
WHERE es.is_user_process = 1

This script provides only the blocking SPID. Please show me how to get the text data (the query that is being blocked) along with the SPID.

Thanks,
ananda
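
A hedged extension: joining to sys.dm_exec_requests yields a sql_handle that sys.dm_exec_sql_text can turn into the statement text of the blocked session:

[code="sql"]
SELECT wt.session_id,
       wt.blocking_session_id,
       t.text AS blocked_sql
FROM sys.dm_os_waiting_tasks wt
INNER JOIN sys.dm_exec_requests r ON r.session_id = wt.session_id
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE wt.blocking_session_id IS NOT NULL;
[/code]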

Change Excel formula to Case Statement

Posted: 02 Oct 2013 05:34 PM PDT

I have this formula I use within Excel. I'm moving most of the work into SQL and can't think how I can replicate it...

IF(AND(A9=A8,L8="MoreThan1"),"MoreThan1",IF(A9=A8,IF(E9=E8,"OneStop","MoreThan1"),"OneStop"))

Any ideas? o_O
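
Reading the formula: a row is "MoreThan1" once two different E values have been seen within a run of equal A values (the L8 reference just propagates the flag down the run). A running MIN/MAX per A group captures that without the self-reference; a sketch assuming SQL Server 2012+ and a table t with columns A, E and an ordering column seq (all names hypothetical):

[code="sql"]
SELECT A, E,
       CASE WHEN MIN(E) OVER (PARTITION BY A ORDER BY seq
                              ROWS UNBOUNDED PRECEDING)
          <> MAX(E) OVER (PARTITION BY A ORDER BY seq
                              ROWS UNBOUNDED PRECEDING)
            THEN 'MoreThan1' ELSE 'OneStop'
       END AS StopFlag
FROM t;
[/code]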

Most efficient way to extract data for a historical data table

Posted: 02 Oct 2013 07:23 PM PDT

I have a third-normal-form table which maintains history over time using a "date row start" and a "date row end". I need to extract data incrementally from this table, starting at 1900-01-01 and looping through each day. The easiest way is to provide the date and use it in a WHERE clause like this:

[code="sql"]WHERE '1900-01-01' BETWEEN DateRowStart AND DateRowEnd[/code]

then increment the date by a day and run the query again. However, this would obviously pick up the same records time and time again. That isn't necessarily an issue, as I will be merging these "incremental" records using a column, and it will only be a one-off data load operation. But from a performance standpoint I would like to think of other ways to access that data. I cannot change the structure of the source table and there are no other dates I can use.

Thanks
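
One hedged alternative that avoids re-reading unchanged rows: for a given day, pull only the rows whose validity interval opens or closes on that day, since rows that merely span it were already picked up on an earlier pass (column names as above):

[code="sql"]
SELECT *
FROM SourceHistory           -- placeholder table name
WHERE DateRowStart = @day
   OR DateRowEnd   = @day;
[/code]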

Concatenating several row columns into one

Posted: 02 Oct 2013 07:53 AM PDT

I've been tasked with creating a table that contains a customer# and all doctors within a certain proximity of that customer in ONE row. I'll explain: one customer may have more than one doctor available within proximity, and one doctor may have more than one office within proximity of that one customer, so the data in the table looks like this:

Cust# DocId DocName       DocProfile   DocCity    DocState
111   aaa   Dr.John Doe   Our prac...  Sunrise    FL
111   aaa   Dr.John Doe   Our prac...  Deerfield  FL
111   bbb   Dr.Jane Doe   It's with... Tamarac    FL
222   ccc   Dr.Bill Doe   We've....    Encinitas  CA
222   ccc   Dr.Bill Doe   We've....    Encinitas  CA  (2 offices in the same city)
222   ddd   Dr.Ray Doe    I'm glad...  Sherman    CA
222   ddd   Dr.Ray Doe    I'm glad...  Agoura     CA
222   eee   Dr.Ruth Doe   We're loc..  Studio     CA
222   fff   Dr.Brian Doe  We strive... Palm Sp    CA

My table right now follows the structure and data above. My next step is to create a table with a single row per cust#, with the doctor(s) columns concatenated into one column, i.e.:

CustomerNo  Doctors in Area
111         John Doe Our prac… Sunrise FL; John Doe Our prac… Deerfield FL; Jane Doe It's with… Tamarac FL

This is where I'm stuck. What's the optimal way to go about this? ROW_NUMBER OVER? A WHILE statement? An IF-THEN statement? Not really sure. Any light shed is appreciated.

Regards,
John
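
The usual set-based tool for this on SQL Server is string concatenation via FOR XML PATH; a sketch against a hypothetical source table DoctorsNearCustomer:

[code="sql"]
SELECT c.CustNo,
       STUFF((SELECT '; ' + d.DocName + ' ' + d.DocProfile + ' '
                     + d.DocCity + ' ' + d.DocState
              FROM DoctorsNearCustomer d
              WHERE d.CustNo = c.CustNo
              FOR XML PATH(''), TYPE).value('.', 'VARCHAR(MAX)'),
             1, 2, '') AS [Doctors in Area]
FROM (SELECT DISTINCT CustNo FROM DoctorsNearCustomer) c;
[/code]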

Question Regarding Store Procedure

Posted: 02 Oct 2013 05:10 AM PDT

Hi guys, I have a quick question. I am using an SP, and in the SP I use a few INSERT and UPDATE statements. I want to know what I should do so that if one INSERT or UPDATE statement fails, the SP gives me an error but nothing is applied at all (no inserts and no updates). Any advice would be greatly appreciated... urgent, please. Thank you,
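
A minimal all-or-nothing skeleton (THROW needs SQL Server 2012; use RAISERROR on earlier versions):

[code="sql"]
BEGIN TRY
    BEGIN TRANSACTION;
    -- INSERT ...
    -- UPDATE ...
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
    THROW;  -- re-raise the original error to the caller
END CATCH
[/code]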

Ignore Error in T-SQL

Posted: 24 Sep 2013 01:12 AM PDT

Hi, how do we ignore an error in T-SQL? For example, the following code throws an error once @lCounter reaches 15 and bails out. The requirement is that it should go on until 1000, even if there are errors in between.

[code="sql"]
DECLARE @lCounter SMALLINT
DECLARE @lError SMALLINT
SET @lCounter = 0
WHILE (@lCounter <= 1000)
BEGIN
    SELECT CONVERT(VARCHAR, GETDATE(), @lCounter)
    SET @lCounter = @lCounter + 1
END
[/code]

Thanks in advance
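
A hedged sketch: wrapping the body in TRY/CATCH swallows the runtime conversion error for each invalid style value and lets the loop continue:

[code="sql"]
DECLARE @lCounter SMALLINT
SET @lCounter = 0
WHILE (@lCounter <= 1000)
BEGIN
    BEGIN TRY
        SELECT CONVERT(VARCHAR, GETDATE(), @lCounter)
    END TRY
    BEGIN CATCH
        PRINT 'Skipping invalid style ' + CAST(@lCounter AS VARCHAR(10))
    END CATCH
    SET @lCounter = @lCounter + 1
END
[/code]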

[SQL Server 2008 issues] Checkpoint vs Log Backup


Checkpoint vs Log Backup

Posted: 02 Oct 2013 07:07 PM PDT

Checkpoints and log backups both have the ability to "truncate the log". Checkpoints are primarily handled by SQL Server internally, whereas log backups are not. A checkpoint flushes dirty pages to disk. Any other differences?
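
Related, and handy when reasoning about which of the two actually frees log space in a given database: sys.databases reports what the log is currently waiting on:

[code="sql"]
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases;
[/code]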

Select add new row opening balance

Posted: 02 Oct 2013 06:44 PM PDT

Dear Expert,

I have a query problem. There are 4 tables (with left joins), filtered by year, period and an account code. This is the query:

select a.fyear, a.aperiod, c.trx_amt, e.acct_cd, e.active_status,
ISNULL((select openingBalance = SUM(a.trx_amt)
        from gl_pendjnls a
        left join gl_jlhdr b on a.idxjlhdr = b.idx
        left join v_acctperiod c on b.idxperiod = c.idx
        right join v_gl_chart d on a.idxcoa = d.idx
        where c.fyear <= 2013 and c.aperiod < 10
          and d.acct_cd = e.acct_cd
        group by d.acct_cd), 0) as openingBalance,
ISNULL((select closingBalance = SUM(a.trx_amt) + c.trx_amt
        from gl_pendjnls a
        left join gl_jlhdr b on a.idxjlhdr = b.idx
        left join v_acctperiod f on b.idxperiod = f.idx
        left join v_gl_chart d on a.idxcoa = d.idx
        where f.fyear <= 2013 and f.aperiod < 10
          and d.acct_cd = e.acct_cd
        group by d.acct_cd), 0) as closingBalance
from v_acctperiod a
left join gl_jlhdr b on a.idx = b.idxperiod
left join gl_pendjnls c on b.idx = c.idxjlhdr
left join v_gl_chart e on c.idxcoa = e.idx
where e.active_status = 'Y'
  and a.fyear = 2013 and a.aperiod = 10
  and e.acct_cd = '111-01-201'

The result, and the result that I want, are shown here: http://i44.tinypic.com/oju9t.jpg

If I change it to a.fyear = 2013 and a.aperiod = 11 and remove the e.acct_cd = '111-01-201' filter, there is no data. What I want is for it to still return data: a new row with the acct_cd for every period that has no data, where that new row consists only of the acct_cd and an openingBalance equal to the closingBalance of the last period. Can anybody help? I really appreciate your attention.

Thanks,
Musa

SSIS Transfer SQL Server Objects Task - Permissions Issue

Posted: 02 Oct 2013 02:36 AM PDT

I am attempting to create an SSIS package that will copy data from selected tables on one server to another. The problem I am running into is that when I try to get the list of tables, I receive the following error: "Property DefaultSchema is not available for Database '[XXXX]'. This property may not exist for this object, or may not be retrievable due to insufficient access rights." I don't have write permissions on either server. However, I can run a SQL job calling the SSIS package which does have the required permissions. I do not receive an error if I choose "All Tables", but I do not want to copy all of the tables. Any ideas on how to get around this? Elevated permissions are not an option, and I really don't want to manually create Data Flow sources and destinations for each table. Thanks for any help in advance!

select decimals after calculations, weird display

Posted: 02 Oct 2013 12:05 PM PDT

Hi, I can't work out why I get these strange results. I need to keep the result of a division in a column, let's say 1/35; it displays as 0 no matter what kind of convert/cast I use. However, if I write 35.0, as for a1, then it works. Internally all these results are columns in intermediate temp tables, derived as results of SUM() or COUNT(); no special formatting is done. c1 and c2 are always int, but I need a very precise result in the "a" column. Later I will use "a" in SSRS.

[code="other"]
select SUM(1) c1, SUM(35) c2 into #t

select c1, c2,
       CAST(1/35.0 as decimal(8,6)) a1,
       CAST(1/35 as decimal(8,6)) a2,
       CAST(c1/c2 as decimal(8,6)) a3
from #t

c1  c2  a1        a2        a3
1   35  0.028571  0.000000  0.000000
[/code]

Tx
Mario
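
The missing step: with two int operands, / performs integer division, so c1/c2 is truncated to 0 before the CAST ever runs. Casting an operand instead of the result fixes it:

[code="sql"]
select c1, c2,
       CAST(c1 AS decimal(18,6)) / c2 AS a3   -- 0.028571...
from #t
[/code]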

which one is better for High Avaliability

Posted: 02 Oct 2013 03:28 PM PDT

Hi everyone. Log shipping, replication, mirroring: which one is better for high availability, and which do companies prefer these days? Please reply...

Flat File not moving all data

Posted: 02 Oct 2013 06:10 AM PDT

Hi, I'm using a flat file (.csv) source into SQL 2008. The file has only 40 rows, but only 39 rows get transferred; 1 row is missing. I also see a warning, as below:

[Flat File Source [1]] Warning: There is a partial row at the end of the file.

Does anyone have any clue about it?

Shaun

Sending Email Using Database

Posted: 02 Oct 2013 02:57 PM PDT

Dear, I have a requirement: after processing some transactions, I need to send an email with the necessary information to the respective email addresses. I am using SQL Server 2008 R2. How can I do it? Any guideline/resource would help.

Regards,
Akbar
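The built-in mechanism for this is Database Mail. A minimal sketch, assuming Database Mail is enabled and a mail profile exists (the profile name and address below are placeholders):

[code="sql"]
-- One-time setup: enable Database Mail (requires sysadmin).
EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sp_configure 'Database Mail XPs', 1;     RECONFIGURE;

-- After the transaction is processed, send the notification.
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'MyMailProfile',            -- placeholder profile
    @recipients   = 'recipient@example.com',    -- placeholder address
    @subject      = 'Transaction processed',
    @body         = 'Transaction completed successfully.';
[/code]

A profile and account still have to be created first, either through the Database Mail Configuration Wizard in Management Studio or with msdb.dbo.sysmail_add_profile_sp / sysmail_add_account_sp.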

Sum prescription amounts for an individual by given a start date and number of days supplied

Posted: 02 Oct 2013 09:32 AM PDT

I am trying to develop a query to determine the amount of a drug that an individual has had for every day during a quarter. On some days there are no drugs prescribed; on others there may be overlap, and I need a total amount (meaning, the strength of each drug summed for that day). The number of drugs, strengths, day supply, etc. can vary. Here's some data:

[code="sql"]
create table #MemberInfo (
    ProgramName varchar(255),
    DateFilled  datetime,
    DaySupply   integer,
    MemberID    varchar(255),
    Strength    integer,
    Tradename   varchar(255)
);

insert into #MemberInfo values
  ('InsureCo', '20130612', 30, 'MEM001', 10, 'Sedative')
, ('InsureCo', '20130429', 30, 'MEM001', 20, 'Sedative')
, ('InsureCo', '20130401', 30, 'MEM001', 20, 'Sedative')
, ('InsureCo', '20130529', 30, 'MEM001', 30, 'Sedative')
[/code]

I really have no idea what the best approach might be to add up the amount of drugs taken on a given day during a quarter. I'd like to avoid using cursors if I can. I was thinking about creating a temp table with all the days in a quarter and then somehow joining those dates to every day a drug is taken (i.e., DateFilled plus every subsequent day up to DaySupply). Once I have the dates and amounts for every drug in a quarter, I could group by day and get a sum of Strength for each day. I also need to be able to get the average amount taken over the quarter.

Additional requirements:
1. I have a start date and a number of days. I'd like to create a row for each member for every day they have a prescription (and do the same for all of their prescriptions), then sum the strength of all the drugs for each day. If it helps, all of the drugs will be of the same class, and strength is expressed in equivalent doses, meaning I can sum them.
2. For reporting, I need to be able to count consecutive days on which the amount is greater than some cutoff (let's say 100). That's why I'm trying to get the amount per day.

[code="other"]
Desired output

MemberID  Date        SumStrength
MEM001    2013-04-29  40
MEM001    2013-04-30  40
MEM001    2013-05-01  20
ETC FOR EVERY DAY FOR THIS MEMBER
MEM002    2013-04-01  60
MEM002    2013-04-02  40
ETC FOR EVERY DAY FOR THIS MEMBER
[/code]
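The approach sketched in the question works well without cursors if the date list is a tally (numbers) table. A minimal sketch against the sample table above (the TOP (400) limit and sys.all_objects as a row source are just conveniences; any numbers table will do):

[code="sql"]
;WITH Tally AS (
    -- 0-based day offsets; 400 comfortably covers any DaySupply here
    SELECT TOP (400) n = ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1
    FROM sys.all_objects
)
SELECT m.MemberID,
       DATEADD(DAY, t.n, m.DateFilled) AS [Date],
       SUM(m.Strength)                 AS SumStrength
FROM #MemberInfo m
JOIN Tally t ON t.n < m.DaySupply      -- one row per covered day per prescription
GROUP BY m.MemberID, DATEADD(DAY, t.n, m.DateFilled)
ORDER BY m.MemberID, [Date];
[/code]

Quarterly averages and runs of consecutive days above a cutoff can then be computed over this per-day result set.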

t-sql is bigger

Posted: 02 Oct 2013 12:05 AM PDT

Guys, I currently have an 80 GB database. I shrank the log file to 1 MB, then started differential and log backups, but the log file grew unexpectedly, even though it is not being used by any application; it is only a test environment. How can a log file grow this fast if my database is not connected to any application? Every 30 minutes I get 170 MB of growth in the log file. That is strange...
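When a log keeps growing with no obvious activity, a first diagnostic step (a sketch; substitute the real database name) is to check what is preventing log truncation:

[code="sql"]
-- log_reuse_wait_desc explains why log space is not being reclaimed
-- (e.g. LOG_BACKUP, ACTIVE_TRANSACTION, REPLICATION).
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'MyDatabase';   -- placeholder name
[/code]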

Archive Database in SQL 2008r2

Posted: 02 Oct 2013 06:44 AM PDT

Hello All,

How do I archive an entire database in SQL Server 2008 R2?

Thanks,
Santosh
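"Archive" can mean different things, but if the goal is simply a restorable offline copy of the whole database, a full backup is the usual starting point. A minimal sketch with placeholder names:

[code="sql"]
-- Full, compressed backup that can be restored on another instance later.
BACKUP DATABASE MyDatabase
TO DISK = N'D:\Archive\MyDatabase_full.bak'
WITH COMPRESSION, INIT;
[/code]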

Performance When dropping multiple tables

Posted: 01 Oct 2013 11:31 PM PDT

Hi,

The software we use often creates a lot of temporary tables that need to be deleted from time to time. Depending on usage, this can range from hundreds per month to thousands per week. These temporary tables are all created with names that follow certain conventions, so they're quite easy to identify in the database.

Normally, when I've been dropping these tables, I've done it within a cursor: the cursor identifies the table names in the database, and the loop then drops each table one at a time. I'm conscious of the performance overhead of using a cursor, so I've been looking for an alternative. I've tried a single DROP TABLE command that is passed a comma-separated list of the table names:

[code="sql"]
declare @DropObjectList varchar(max)

SELECT @DropObjectList = COALESCE(@DropObjectList + ',', '') + TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
where TABLE_NAME like 'TEMP[_]TABLE[_]%'

exec ('drop table ' + @DropObjectList)
[/code]

I've done some basic performance testing on test data, and the results are inconclusive. The single DROP TABLE command appears to me to have a higher overhead than looping through the cursor, so performance isn't noticeably different either way. Does anyone have any ideas about the fastest way to drop multiple tables whose names match a naming convention, and whether a cursor should make a performance difference?

Thank you,
Steve

Insert bulk failed due to a schema change of the target table.

Posted: 10 Jul 2013 07:21 AM PDT

Hello Expert,

The first time I used SqlBulkCopy its performance was impressive; however, my application threw a SQL exception after processing about 500,000 records. The error stated "Insert bulk failed due to a schema change of the target table.", which I don't understand: the data is being inserted into the same table, so why did it process 500,000 records without issue but not the rest?

Here is the chunk of VB.NET code that does the bulk copy. Notice that MyDestTable has a couple more fields that I didn't want mapped; all fields in the MySourceDataTable table are specified in the mapping.

Dim sqlBulk As New SqlBulkCopy(_connectionString)
sqlBulk.DestinationTableName = MyDestTable
sqlBulk.ColumnMappings.Add("DataOwnerID", "DataOwnerID")
sqlBulk.ColumnMappings.Add("ObjectID", "ObjectID")
sqlBulk.ColumnMappings.Add("ObjectTypeID", "ObjectTypeID")
sqlBulk.ColumnMappings.Add("PropertyName", "PropertyName")
sqlBulk.ColumnMappings.Add("Pvalue", "Pvalue")
sqlBulk.ColumnMappings.Add("UpdateIDRefreshed", "UpdateIDRefreshed")
sqlBulk.ColumnMappings.Add("UpdateIDModified", "UpdateIDModified")
sqlBulk.ColumnMappings.Add("Attribute", "Attribute")
sqlBulk.ColumnMappings.Add("POrder", "POrder")
sqlBulk.WriteToServer(MySourceDataTable)
sqlBulk.Close()

Has anyone run into this type of error while using SqlBulkCopy? I did a lot of googling but didn't find anything pointing in the right direction.

Thanks in advance.

SQL Server 2008 R2 Hyper-v 2012 live migration

Posted: 02 Oct 2013 01:26 AM PDT

Hi all,

I would like to know if anybody has tried a live migration of a SQL Server 2008 R2 VM on Hyper-V 2012, and whether you had any issues. I have found quite a lot of conflicting comments on the MS forums, and it seems to depend on which features of SQL Server are in use as to whether it will be compatible or not.

Thanks in advance.

sqlcmd using ctrl-A

Posted: 02 Oct 2013 12:33 AM PDT

I need to be able to export the data from a table to a text file using sqlcmd, using Ctrl-A as the column separator. Is this doable, and if so, how? I am assuming this is a value passed to the -s option. I need to run the sqlcmd utility from a batch file.
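Yes, -s sets the column separator, and it accepts a single character, including a control character. The awkward part is getting a literal Ctrl-A (ASCII 0x01) into the batch file: one way is to type the literal character in an editor that supports it; another is to generate it at runtime. A sketch (server, database, and table names are placeholders, and the PowerShell capture trick is untested on all cmd versions):

[code="plain"]
rem Capture the SOH character (Ctrl-A, ASCII 0x01) into a variable via PowerShell,
rem then pass it to sqlcmd as the column separator.
for /f %%a in ('powershell -NoProfile -Command "[char]1"') do set "SEP=%%a"
sqlcmd -S MyServer -d MyDb -E -W -h -1 ^
    -Q "SET NOCOUNT ON; SELECT * FROM dbo.MyTable" ^
    -s "%SEP%" -o output.txt
[/code]

Here -E uses a trusted connection, -W trims trailing spaces, -h -1 suppresses headers, and -o writes the output file.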

Change value for allow_anonymous & immediate_sync for Replication

Posted: 01 Oct 2013 08:32 PM PDT

Hi, is there any good reason why both of these values should be set to true? I have had to change these values many times.
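For reference, a sketch of turning both properties off on an existing publication with sp_changepublication (run at the publisher, in the publication database; the publication name is a placeholder). allow_anonymous has to be disabled first, since immediate_sync cannot be turned off while anonymous subscriptions are still allowed:

[code="sql"]
EXEC sp_changepublication
    @publication = N'MyPublication',
    @property    = N'allow_anonymous',
    @value       = N'false';

EXEC sp_changepublication
    @publication = N'MyPublication',
    @property    = N'immediate_sync',
    @value       = N'false';
[/code]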

Count Of Level in parent table

Posted: 01 Oct 2013 08:34 PM PDT

Hello all. I have a parent table:

[code="other"]
subjectid  subjectname  parentid  organizationid
4058       x            -1        576
4059       x1           4058      576
4060       x2           4058      576
4061       x3           4058      576
4062       x11          4059      576
4063       x111         4062      576
[/code]

Now I want the count of levels in my table: a query which returns 3 in this example.
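A recursive CTE is the usual tool for this. A minimal sketch, assuming the table is named dbo.subjects (hypothetical) and counting the root as level 0, which makes the sample data return 3 (x -> x1 -> x11 -> x111):

[code="sql"]
;WITH Levels AS (
    -- anchor: root rows
    SELECT subjectid, 0 AS lvl
    FROM dbo.subjects
    WHERE parentid = -1
    UNION ALL
    -- recurse: each child sits one level below its parent
    SELECT c.subjectid, p.lvl + 1
    FROM dbo.subjects c
    JOIN Levels p ON c.parentid = p.subjectid
)
SELECT MAX(lvl) AS LevelCount
FROM Levels;
[/code]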

Wednesday, October 2, 2013

[SQL Server] exception handling: how would you do it?

[SQL Server] exception handling: how would you do it?


exception handling: how would you do it?

Posted: 02 Oct 2013 08:44 AM PDT

Hi all,

I have a 3-part UNION ALL query. Sometimes one of the tables is not available, meaning I can't connect to it. Currently, if one part of the query fails, the whole thing fails. How do I trap all possible exceptions, so that even if one or two parts fail, the rest still returns a result? Here is the test data:

[code="sql"]
create table A (col_1 varchar(1), col_2 int);
insert into A values ('a', 100);

create table B (col_1 varchar(1), col_2 int);
insert into B values ('b', 200);

create table C (col_1 varchar(1), col_2 int);
insert into C values ('c', 300);

SELECT
    a.col_1 AS td,
    a.col_2 AS [td align=RIGHT]
FROM
(
    select col_1, col_2 from A
    union all
    select col_1, col_2 from B
    union all
    select col_1, col_2 from C
) a
FOR XML RAW('tr'), elements
[/code]

How would you do it?

Thanks,
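One possible pattern (a sketch, not the only answer): query each table separately inside TRY/CATCH, collect whatever succeeds into a temp table, and run the FOR XML over that. The dynamic EXEC matters here: a missing or unreachable table raises an error that TRY/CATCH in the same scope cannot trap, but it can trap one raised in the child scope that EXEC creates.

[code="sql"]
create table #results (col_1 varchar(1), col_2 int);

begin try
    exec('insert into #results select col_1, col_2 from A');
end try
begin catch
    print error_message();   -- or log it; the query continues either way
end catch;

begin try
    exec('insert into #results select col_1, col_2 from B');
end try
begin catch
    print error_message();
end catch;

begin try
    exec('insert into #results select col_1, col_2 from C');
end try
begin catch
    print error_message();
end catch;

SELECT col_1 AS td, col_2 AS [td align=RIGHT]
FROM #results
FOR XML RAW('tr'), elements;
[/code]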

Combining fields of matching records

Posted: 02 Oct 2013 04:16 AM PDT

I have a table that contains several fields, and some of the records are identical except for one of those fields. I'm trying to write a query that returns a single row containing each shared field plus a combined field made of the concatenated values that were unique to each row. For example:

[code="other"]
        FIELD1  FIELD2  FIELD3  FIELD4
ROW1    ABC     DEF     GHI     JKL
ROW2    ABC     DEF     GHI     MNO
[/code]

And I want the output to look like:

[code="other"]
OUTPUT  ABC     DEF     GHI     JKL,MNO
[/code]

I can do this in .NET code, but I was hoping to do it in SQL. Is it possible? I've found this example, but haven't been able to make it work correctly and don't really understand all of it:

[code="sql"]
SELECT p.ASSIGNNUM, p.DESC, p.STARTDATE,
       LEFT(el.EmpList, LEN(el.EmpList) - 1)
FROM TableParent p
CROSS APPLY (SELECT EMPLOYEENUM + ',' AS [text()]
             FROM TableChild
             WHERE ASSIGNNUM = p.ASSIGNNUM
             FOR XML PATH('')) el (EmpList)
[/code]
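Adapted to the example table (dbo.MyTable is a hypothetical name), the same FOR XML PATH trick looks like this: group on the shared fields and, for each group, concatenate the varying FIELD4 values; STUFF removes the leading comma:

[code="sql"]
SELECT t.FIELD1, t.FIELD2, t.FIELD3,
       STUFF((SELECT ',' + t2.FIELD4
              FROM dbo.MyTable t2
              WHERE t2.FIELD1 = t.FIELD1
                AND t2.FIELD2 = t.FIELD2
                AND t2.FIELD3 = t.FIELD3
              FOR XML PATH('')), 1, 1, '') AS Field4List
FROM dbo.MyTable t
GROUP BY t.FIELD1, t.FIELD2, t.FIELD3;
[/code]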

Auto Increment Alphabet

Posted: 27 May 2010 05:20 AM PDT

Hi All,

I need to generate a query that will produce my form-series prefixes like this: AA, AB, AC, ..., AZ, BA, BB, BC, ..., BZ, CA, CB, CC, ..., CZ, ..., ZA, ZB, ..., ZZ, AAA, ...

Basically, I need to increment my form prefix to the next letter of the alphabet and, once Z has been reached, to increment the preceding letter, i.e. A to B. I've developed the logic for a 2-character series, but I'm struggling to make it dynamic so that it rolls over to AAA and, once it reaches ZZZ, to AAAA. Here's my attempt:

[code="plain"]
DECLARE @FP VARCHAR(3)
DECLARE @FP1 VARCHAR(1)
DECLARE @FP2 VARCHAR(1)

SET @FP = 'AAA'
SET @FP1 = SUBSTRING(@FP, LEN(@FP) - 1, 1)
SET @FP2 = SUBSTRING(@FP, LEN(@FP), 1)

SELECT @FP, @FP1, ASCII(@FP1), @FP2, ASCII(@FP2)

IF (ASCII(@FP2) = 90)
BEGIN
    SET @FP2 = 'A'
    SET @FP1 = CHAR(ASCII(@FP1) + 1)
    IF (ASCII(@FP1) > 90)
    BEGIN
        SET @FP1 = 'A'
        SET @FP = 'A' + @FP1 + @FP2
    END
    ELSE
    BEGIN
        SET @FP = @FP1 + @FP2
    END
END
ELSE IF (ASCII(@FP2) < 90)
BEGIN
    SET @FP2 = CHAR(ASCII(@FP2) + 1)
    SET @FP = @FP1 + @FP2
END

SELECT @FP, @FP1, ASCII(@FP1), @FP2, ASCII(@FP2)
[/code]

I hope someone can help.

Ankit Mathur
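One way to make this fully dynamic (a sketch; dbo.NextPrefix is a hypothetical name): treat the prefix as a bijective base-26 number and increment it right to left, so AZ becomes BA, ZZ becomes AAA, ZZZ becomes AAAA, and so on for any length:

[code="sql"]
CREATE FUNCTION dbo.NextPrefix (@p varchar(10))
RETURNS varchar(11)
AS
BEGIN
    DECLARE @i int
    SET @i = LEN(@p)

    -- Walk right to left: every trailing Z wraps to A and carries left.
    WHILE @i > 0 AND SUBSTRING(@p, @i, 1) = 'Z'
    BEGIN
        SET @p = STUFF(@p, @i, 1, 'A')
        SET @i = @i - 1
    END

    IF @i = 0
        RETURN 'A' + @p   -- the whole string was Zs: grow by one character

    -- Otherwise bump the character where the carry stopped.
    RETURN STUFF(@p, @i, 1, CHAR(ASCII(SUBSTRING(@p, @i, 1)) + 1))
END
[/code]

For example, SELECT dbo.NextPrefix('AZ') returns 'BA', and SELECT dbo.NextPrefix('ZZ') returns 'AAA'.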
