Saturday, March 2, 2013

[how to] Only one NULL for one part of primary key



Only one NULL for one part of primary key

Posted: 02 Mar 2013 05:51 PM PST

So, I have two date columns: "start date" and "finish date".
I want to make sure that there can be only one NULL value in the "finish date" column (a NULL representing the current state) for one primary key (composite in my case).
Is there a way to do that?
I'm using SQL Server 2008.

EDIT:
I'm sorry for making no sense.
There should be only one NULL for one part of the primary key.
Here are the columns :


PeriodID | PersonID | StartDate | FinishDate
1        | 1        | 1/1/10    | 10/1/10
2        | 1        | 10/1/10   | NULL
3        | 1        | somedate  | MUST NOT BE NULL

And yes,"PersonID" is a foreign key.

MySQL design problem

Posted: 02 Mar 2013 08:44 PM PST

So I have nearly no experience in DB Design, so forgive me if I'm asking something super nooby and ridiculous :)

Basically my issue is this: I have, say, an item I need to create, and this item will be constructed from 3-4 types of objects, with multiple object instances. Basically as per below:
Object 1: only 1 instance per item
Object 2: extends Object 1 and shares a foreign key, may have multiple instances, one instance mandatory
Object 3: extends Object 2 and shares a foreign key, may have multiple instances, one instance mandatory . There are 3 types of Object 3 which are dependent on a field from Object 2 (haven't yet figured out how these need to link without code).
Object 4: extends Object 3 and shares a foreign key, may have multiple instances, not mandatory, so may be omitted.

My question is this: is there a way to dynamically generate Object 3 through MySQL? For example, if field #3 shows Object 3.1, use that table; if it shows 3.2, use that one instead, similar to a constructor.

Also, is there a way for me to map related objects? For example, since everything is dependent on Object 1, I'd like to be able to call Object 1 and have it pull all related instances of Objects 2, 3 and 4 for that specific object. I was thinking of creating a separate table to hold the primary keys of related objects, but I'm thinking there is probably a better way of doing that.

Also, I've read that I shouldn't use stored procedures and should instead do everything in code. Is that correct? I would think that for my model it would be more convenient and cleaner to just call on Object 1 and have MySQL do all the joins internally and return a single completed object.

Thank you very much for your help!!
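
A minimal sketch of the usual relational shape for this, with one table per object type and each child carrying a foreign key to its parent; all names are assumptions for illustration:

CREATE TABLE object1 (
    id   INT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(50) NOT NULL
) ENGINE=InnoDB;

CREATE TABLE object2 (
    id         INT AUTO_INCREMENT PRIMARY KEY,
    object1_id INT NOT NULL,
    obj3_type  TINYINT NOT NULL,   -- drives which "Object 3" variant applies
    FOREIGN KEY (object1_id) REFERENCES object1 (id)
) ENGINE=InnoDB;

CREATE TABLE object3 (
    id         INT AUTO_INCREMENT PRIMARY KEY,
    object2_id INT NOT NULL,
    FOREIGN KEY (object2_id) REFERENCES object2 (id)
) ENGINE=InnoDB;

-- Pulling everything related to one Object 1 instance is then a join,
-- with no separate mapping table needed:
SELECT *
FROM object1 o1
JOIN object2 o2 ON o2.object1_id = o1.id
JOIN object3 o3 ON o3.object2_id = o2.id
WHERE o1.id = 1;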

Possible INDEX on a VARCHAR field in MySql

Posted: 02 Mar 2013 06:57 PM PST

I am working in a MySql database, with a table like this:

+--------------+
|  table_name  |
+--------------+
|    myField   |
+--------------+

...and I need to make a lot of queries like this (with 5-10 strings in the list):

SELECT myField FROM table_name
WHERE myField IN ('something', 'other stuff', 'some other a bit longer'...)

There will be around 24,000,000 unique rows.

1) Should I use a FULLTEXT or an INDEX key for my VARCHAR(150)?
2) If I increase the length from 150 to 220 or 250 characters, would it make a big difference? (Is there any way to calculate it?)
3) As I said, the values are going to be unique, so myField should be a PRIMARY KEY. Isn't it unusual to add a PRIMARY KEY to a field which is already a VARCHAR INDEX/FULLTEXT?
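
For exact-match lookups like the IN (...) query above, a plain BTREE index (or the primary key itself) is enough; FULLTEXT is only needed for word searches inside the text. A minimal sketch, assuming InnoDB and the table shown above:

CREATE TABLE table_name (
    myField VARCHAR(150) NOT NULL,
    PRIMARY KEY (myField)      -- a VARCHAR primary key is perfectly legal
) ENGINE=InnoDB;

-- Equality lookups use the index directly:
SELECT myField
FROM table_name
WHERE myField IN ('something', 'other stuff', 'some other a bit longer');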

How to improve INSERT speed in large table that has unique constraint

Posted: 02 Mar 2013 01:36 PM PST

I have a simple MyISAM table:

explain entities;
+------------+-------------+------+-----+-------------------+----------------+
| Field      | Type        | Null | Key | Default           | Extra          |
+------------+-------------+------+-----+-------------------+----------------+
| id         | int(11)     | NO   | PRI | NULL              | auto_increment |
| name       | varchar(50) | YES  | UNI | NULL              |                |
| created_at | timestamp   | NO   |     | CURRENT_TIMESTAMP |                |
+------------+-------------+------+-----+-------------------+----------------+

that is expected to have 10s of millions of rows (or even more) eventually. I noticed that as the number of rows in the table increases, insert statements start to take longer. This is expected, as the documentation states that "the size of the table slows down the insertion of indexes by log N, assuming B-tree indexes". I read in another question that LOAD INDEX INTO CACHE can be used to improve insert speed, but I found this to have a negligible effect. Are there any other pointers on how to improve insertion speed, other than the ones listed here? For example, would partitioning the table help with insertion speed? What guidelines can I use for determining if my insertion speeds are already at a near optimum?

System info: MySQL 5.1.67-0ubuntu0.10.04.1, running on a Linode VM with 512 MB RAM and a quad-core processor.
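
Two things that often help here, sketched with illustrative values only (tune them to the 512 MB machine): batching many rows per INSERT, and giving MyISAM a larger key cache for maintaining the UNIQUE index on name.

-- Multi-row inserts amortise parsing and index maintenance:
INSERT INTO entities (name) VALUES
    ('alpha'), ('beta'), ('gamma'), ('delta');

-- A bigger key cache reduces index I/O; the size must still fit in RAM:
SET GLOBAL key_buffer_size = 64 * 1024 * 1024;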

Conditional Import

Posted: 02 Mar 2013 12:15 PM PST

I have 300+ flat files. I am importing/loading them using

LOAD DATA LOCAL INFILE 'D:\MxxT\tot2098.TXT' INTO TABLE Twenty.F98
FIELDS TERMINATED BY '' LINES TERMINATED BY '\r\n';

on my MYSQL Server database.

Each of the 20 files makes 28 tables. I have a flag variable, MachineType, in all flat files. After importing the 20 files I have to do:

SET SQL_SAFE_UPDATES=0;

DELETE FROM Twenty.F98 WHERE MachineType <>1;

SELECT DISTINCT MachineType FROM Twenty.F98;

  1. Is there any way I can place a condition when loading a data file?

  2. In MS Access I have used Schema.ini, which loads the files in a second or less. Is there such a mechanism available for MySQL?

  3. I have to create a dashboard after loading all the files. I am thinking of using Excel as the front end. Is there anything better than that?

Thanks !!
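
On question 1: LOAD DATA has no WHERE clause, so a common workaround is to load into a staging table and copy across only the rows you want. A sketch, reusing the names from the question (backslashes doubled for the string literal):

CREATE TABLE Twenty.F98_staging LIKE Twenty.F98;

LOAD DATA LOCAL INFILE 'D:\\MxxT\\tot2098.TXT' INTO TABLE Twenty.F98_staging
FIELDS TERMINATED BY '' LINES TERMINATED BY '\r\n';

INSERT INTO Twenty.F98
SELECT * FROM Twenty.F98_staging
WHERE MachineType = 1;          -- the condition applied as part of the load

DROP TABLE Twenty.F98_staging;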

How to view the current settings of Autovacuum in Postgres?

Posted: 02 Mar 2013 12:56 PM PST

I've found a million examples of how to set Autovacuum's configuration settings, but can't seem to find how to list the current configuration.

Postgres 9.1 is the version I'm most interested in.
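
A minimal sketch of how the current values can be read on 9.1, via pg_settings and (for per-table overrides) pg_class:

SELECT name, setting
FROM pg_settings
WHERE name LIKE '%autovacuum%';

-- Per-table overrides, if any, appear in reloptions:
SELECT relname, reloptions
FROM pg_class
WHERE reloptions IS NOT NULL;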

Why is optimistic locking faster than pessimistic locking?

Posted: 02 Mar 2013 03:54 PM PST

Both forms of locking cause a process to wait for a correct copy of the record if it's currently in use by another process. With pessimistic locking, the lock mechanism comes from the DB itself (a native lock object), whereas with optimistic locking, the lock mechanism is some form of row versioning, like a timestamp, used to check whether a record is "stale" or not.

But both cause a second process to hang. So I ask: why is optimistic locking generally considered faster than, or superior to, pessimistic locking? And are there use cases where pessimistic is preferred over optimistic? Thanks in advance!
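
To make the difference concrete, here is a minimal sketch of the optimistic pattern (table and column names assumed): nothing is locked while the user edits, and the UPDATE only succeeds if the version is unchanged.

UPDATE account
SET    balance = 120,
       version = version + 1
WHERE  id = 42
  AND  version = 7;   -- the version read earlier
-- If zero rows are affected, another session won the race; the application
-- retries or reports a conflict. No session ever blocked waiting on a lock.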

How does optimistic locking actually enforce re-read/update?

Posted: 02 Mar 2013 01:40 PM PST

My understanding of optimistic locking is that it uses a timestamp on each record in a table to determine the "version" of the record, so that when the record is accessed by multiple processes at the same time, each has a reference to the record's version.

Then, when an update is performed, the timestamp is updated. Before an update is committed, it reads the timestamp on the record a 2nd time. If the timestamp (version) that it has is no longer the timestamp on the record (because it's been updated since the first read), then the process must re-read the entire record and apply the update on the new version of it.

So, if anything I have stated is not correct, please begin by making clarifications for me. But, assuming I'm more or less correct here...

How does this actually manifest itself in a RDBMS? Is this 2nd read/verification enforced in the application logic (the SQL itself) or is it a tuning parameter/configuration that the DBA makes?

I guess I'm wondering where the logic comes from to read the timestamp and perform a 2nd update if the timestamp is stale. So I ask: does the application developer enforce optimistic locking, or is it enforced by the DBA? Either way, how? Thanks in advance!
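
In most systems it is the application (or ORM) that enforces it, and the "second read" is folded into the conditional UPDATE itself; the application then checks the affected-row count. A T-SQL-flavoured sketch with assumed names and parameters:

UPDATE dbo.Orders
SET    Status     = 'Shipped',
       RowVersion = RowVersion + 1
WHERE  OrderID    = @OrderID
  AND  RowVersion = @RowVersionReadEarlier;   -- value from the first read

IF @@ROWCOUNT = 0
BEGIN
    -- The row changed since the first read: re-read it and reapply
    -- (or abandon) the change in application logic.
    RAISERROR('Row was modified by another session; please retry.', 16, 1);
END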

Implementation of stack in MYSQL

Posted: 02 Mar 2013 07:23 PM PST

I had an abstract idea of implementing a stack in MySQL. I did a lot of Googling but couldn't reach any logical conclusion.

The stack would be a collection of tables, with the stack top pointing at a particular table. We may have various such stacks in our DB. The stack length may increase if, for example, the field in the current table is full; likewise, the stack length may decrease in the opposite manner.

Could you please give me an implementation idea? Is it really possible in the first place? I hope my question is clear.

Thanks in advance!
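
A single table usually models a stack (or several named stacks) more naturally than a collection of tables: push is an INSERT, pop reads and deletes the newest row. A minimal sketch with assumed names:

CREATE TABLE stack_items (
    id      INT AUTO_INCREMENT PRIMARY KEY,
    stack   VARCHAR(30)  NOT NULL,    -- allows several named stacks
    payload VARCHAR(255) NOT NULL
) ENGINE=InnoDB;

-- Push:
INSERT INTO stack_items (stack, payload) VALUES ('jobs', 'value-1');

-- Pop (in one transaction so the read and the delete stay consistent):
START TRANSACTION;
SELECT id, payload INTO @popped_id, @popped_value
FROM stack_items
WHERE stack = 'jobs'
ORDER BY id DESC
LIMIT 1
FOR UPDATE;
DELETE FROM stack_items WHERE id = @popped_id;
COMMIT;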

Selecting with varbinary(max) criteria (in the where clause)

Posted: 02 Mar 2013 09:58 AM PST

Basic info

  • Database: SQL Server Express 2008 R2
  • Client: SQL Server Management Studio

Background (skip if not interested):

A project I'm maintaining uses an ORM, which apparently stored my enum values (which inherit from Byte) as binary-serialized .NET objects in a varbinary(max) field. I only found out this was happening after a new requirement emerged dictating that my code run under medium trust. Since the .NET binary formatter needs full trust to be called, it started crashing on the enums.

To clean the mess up I need to create migration scripts that will convert these (varbinary(max)) values back to integer values. There are only a handful of different values so it shouldn't be a big problem (I thought).

The problem:

I am able to get string representations of the blobs when selecting:

SELECT BinValue FROM MyTable where Type = 'Object';  

It returns a string '0x...(an array of hexadecimal values)'.

But when I try to select on the column using copy-and-paste making sure I have the exact binary equivalent:

SELECT ItemId FROM MyTable WHERE Value=convert(varbinary(max), '0x...')  

it does not return any records.

So is it possible to solve this using a client (such as Management Studio)?

If so, what would be the correct syntax?
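
One likely culprit is that CONVERT treats the quoted '0x...' text as character data rather than as hex. SQL Server 2008 added conversion styles for this; a sketch with a made-up hex literal:

-- Style 1 parses the '0x...' string as hexadecimal:
SELECT ItemId
FROM   MyTable
WHERE  Value = CONVERT(varbinary(max), '0x0001020304', 1);

-- Or paste the literal directly, without quotes:
SELECT ItemId
FROM   MyTable
WHERE  Value = 0x0001020304;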

UML class diagramm for database design?

Posted: 02 Mar 2013 08:18 AM PST

I used to make database designs using a French method called Merise, which mainly uses two diagrams to create the database (MCD and MLD). Then I heard that UML is also used for databases, not only for coding. I thought it had a special diagram for database design, but I noticed that some people use the class diagram for this, which I find a poor fit because:

  • it uses many concepts that don't exist in an RDBMS (inheritance, methods, aggregation, composition, access modifiers)
  • UML is a modeling language and not a methodology, so it doesn't specify rules for how to go from the diagram to the relational schema.

The question is: is UML a good choice for database design?

When to use identity increment?

Posted: 02 Mar 2013 06:57 AM PST

Many times I face these three choices to work with IDs:

  • Using an auto-increment identity column
  • Creating a new encoding to use (e.g. for students: the first 4 positions are the year the student joins the university, the 5th position is 0 for females and 1 for males, and the last 4 positions are a sequential number)
  • Not adding any extra column and using only unique information that already exists (e.g. the name, provided you are sure it is unique and no two entries can have the same name)

The question is: which choice do I make, and according to what criteria?

Unable to start Oracle Database Server : Get an error as I try to start it

Posted: 02 Mar 2013 05:13 AM PST

I just installed Oracle Database Express Edition 11g Release 2 for Windows. It created a shortcut icon on the desktop:

[screenshot]

but when I click this icon I see this dialog box:

[screenshot]

What is it? How do I start my Oracle Database Server?

check constraint that has dynamic list

Posted: 02 Mar 2013 04:19 AM PST

I need to create a check constraint. The problem I face is that the list of values accepted by the constraint will change in the future (e.g. "red, green and blue" now, "red, green, blue and white" next month). How can I do this?
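
The usual alternative to a hard-coded CHECK constraint is a lookup table referenced by a foreign key, so the accepted list changes by inserting or deleting rows. A minimal sketch with assumed names (syntax may vary slightly between DBMSs):

CREATE TABLE allowed_color (
    color VARCHAR(20) PRIMARY KEY
);
INSERT INTO allowed_color (color) VALUES ('red'), ('green'), ('blue');

CREATE TABLE item (
    id    INT PRIMARY KEY,
    color VARCHAR(20) NOT NULL,
    FOREIGN KEY (color) REFERENCES allowed_color (color)
);

-- Next month, just extend the list:
INSERT INTO allowed_color (color) VALUES ('white');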

Scheduling a backup in SQL Server 2005

Posted: 02 Mar 2013 07:18 AM PST

I am trying to schedule a backup in SQL Server 2005 with the management option. I am unable to find the database name while scheduling a backup with the full option; the database name is not there in the selection area. What can be the reason?

Please help me resolve this problem.

Can I run a Report Server and a Mirror server from a working Log Shipping secondary instance?

Posted: 02 Mar 2013 07:35 AM PST

Our setup:

  • City A - Primary server for logshipping
  • City B - Secondary server for logshipping

Now there is a lot of data being updated from A - B.

And the server in City B is underutilized, and it is a very powerful machine.

Can I install a Reporting server and a mirroring server on the machine which hosts the secondary log shipping server, and have the Reporting and mirroring servers get their data from the secondary log shipping server?

We cannot have the data come across 3 times (once each for log shipping, reporting and mirroring), and the secondary server is updated regularly enough for our requirements.

But the secondary logshipping server is obviously always in Standby/Readonly mode - is this all possible?

Which is the fastest way of moving a database from a local to a live server in SQL Server?

Posted: 02 Mar 2013 10:39 AM PST

I am using SQL Server 2008 and I have a big database, almost 500 MB of data. Now I want to upload or deploy this DB to the live server. Currently I generate a script, upload the script to the live server, and deploy the database from the command line.

But this takes almost 2 hours and is very cumbersome. There is also another problem: I am not able to use the import/export database option because it loses constraints such as primary keys, and it also gives errors while inserting records.

Can you tell me how I can do all this with a proper data transfer?
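
For a database of this size, a plain backup/restore is usually much faster than scripting the data and keeps all constraints intact. A sketch; paths and logical file names are assumptions:

BACKUP DATABASE MyDb
TO DISK = 'C:\Backups\MyDb.bak'
WITH INIT;

-- Copy the .bak file to the live server, then:
RESTORE DATABASE MyDb
FROM DISK = 'D:\Backups\MyDb.bak'
WITH MOVE 'MyDb'     TO 'D:\Data\MyDb.mdf',
     MOVE 'MyDb_log' TO 'D:\Data\MyDb_log.ldf',
     REPLACE;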

Backup failure in SQL Server 2005

Posted: 02 Mar 2013 01:39 AM PST

I'm unable to perform a backup in SQL Server 2005 installed on a local Windows 7 machine. I was initially able to do so, but now, every time I try, I get the following message:

Backup failed for Server 'EBENEZERAKAGLO\SQLEXPRESS'. (Microsoft.SqlServer.Smo)

System.Data.SqlClient.SqlError: Cannot open backup device 'C:\Program Files (x86)\Microsoft SQL Server\MSSQL.3\MSSQL\Backup\Naass.bak'. Operating system error 5(Access is denied.). (Microsoft.SqlServer.Smo)

What are the advantages of a database design with single table for MtM relationships?

Posted: 02 Mar 2013 07:07 AM PST

The database that backs our software product has a table design like the following:

Positions
ID    Name    Parts
1     One     12345
2     Two     12346
3     Three   12347

Collections
ID      TableID  Collection
12345   1;1      1;2
12346   1;1      3;4
12347   1;2;2    5;1;2

Parts
ID     Name
1      TestOne
2      TestTwo
3      TestThree
4      TestFour
5      TestFive

SubParts
1      SubPartOne
2      SubPartOne

From the above example, each Position has a collection of Parts, but these are mapped generically (without foreign key constraints) into the Collections table. The Collections table keeps track of all relationships between all objects, not just between the example tables shown above, and will be used any time a collection is used.

That means that if I want to get the Position with ID 1 and all of its parts, I have to do three queries:

SELECT * FROM Positions WHERE ID = 1
SELECT * FROM Collections WHERE ID = 12345
-- Split the string by the semicolons
SELECT * FROM Parts WHERE ID = 1 OR ID = 2

The advantages of the current approach are:

  • A collection of something can actually be selected from more than one table. For example, if SubPart inherits from Part, and a position contains a list of Parts, this will be handled OK.

The disadvantages of the current approach are:

  • Speed? We need to do many queries to load data.

  • The Collections table is the largest or second largest table in our database.

  • No support from ORM frameworks? We are looking at switching our persistency layer to use EF or NHibernate etc. but I don't believe our current design will be supported by any of these frameworks.

My questions are:

  • Are there advantages or disadvantages to this approach that I haven't listed?

  • Is this design familiar to people and if so, does it have a name?

  • If the design is familiar, is it supported by out of the box ORM frameworks?
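
For comparison, a minimal sketch of the conventional design that most ORMs expect: one junction table per relationship, with real foreign keys. Names are taken from the example above; the junction table itself is an assumption:

CREATE TABLE PositionParts (
    PositionID INT NOT NULL REFERENCES Positions (ID),
    PartID     INT NOT NULL REFERENCES Parts (ID),
    PRIMARY KEY (PositionID, PartID)
);

-- One query replaces the three-step lookup and the string splitting:
SELECT p.*, pt.*
FROM   Positions p
JOIN   PositionParts pp ON pp.PositionID = p.ID
JOIN   Parts pt         ON pt.ID = pp.PartID
WHERE  p.ID = 1;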

Global locking for multi-master Oracle GoldenGate replication

Posted: 02 Mar 2013 09:43 AM PST

This is a very complex scenario, but I figured a state-of-the-art challenge might interest some of the many high-end users of dba.se.

Problem

I'm working on an intercontinental data replication solution for a document production system, somewhat similar to a wiki, utilizing Oracle GoldenGate. The primary goals are to increase application performance and availability around the globe.

The solution must allow for simultaneous read/write access to the same pool of data from multiple locations, which means that we need some clever way of preventing or resolving conflicting updates without user interaction.

Focusing on collision prevention, we must allow an object (a document, an illustration, a set of metadata etc) to be locked globally, thus preventing multiple users from simultaneously editing the same object from different locations - ultimately causing a conflict.

Similarly, an object must remain locked until every user's connected database has received the updated data for that object, lest a user start editing an old object without the latest updates.

Background

The application is somewhat latency sensitive, making access to a central data center slow from remote locations. Like many content-focused systems, the read/write ratio is on the order of 4 to 1, making it a good candidate for a distributed architecture. If well managed, the latter will also work towards ensuring availability during site or network outages.

I have used a somewhat unconventional multi-loop bi-directional replication topology. This keeps the complexity at a manageable level {2(n-1) ways}, adds resilience for site outages and allows for fairly simple addition or removal of sites. The slight drawback is that it may take up to 30 seconds for a transaction to be replicated between the most remote sites via the central master database.

A more conventional design with direct replication between all sites would cut that time in half, but would also significantly increase the complexity of the configuration {n(n-1) ways}.

With five locations that would mean a 20-way replication as opposed to the 8-way replication in my design.

This illustration shows my current test environment across data centers in Europe, Asia and North America. The production environment is expected to have additional locations.

Illustration of replication topology

All the databases are Oracle 11.2.0.3 with Oracle GoldenGate 11.2.1.

My thoughts so far

I've been thinking along the lines of doing locking by inserting a row into a "locking" table over a database link to the central database, while letting the unlock (update or delete of the previously mentioned row) be replicated along with the updated data.

On behalf of the user we must then check the availability of a lock in both the central and local database before acquiring the lock and opening the object for editing. When editing is completed, we must release the lock in the local database which will then replicate the changes and the release of the lock to all other locations via the central database.
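
A minimal sketch of that locking idea, with all names assumed: a primary key on the central locking table makes the "is it free?" check and the lock acquisition a single atomic statement over the database link, failing with ORA-00001 if another site already holds the lock.

CREATE TABLE global_lock (
    object_id NUMBER       NOT NULL,
    locked_by VARCHAR2(30) NOT NULL,
    locked_at DATE         DEFAULT SYSDATE NOT NULL,
    CONSTRAINT global_lock_pk PRIMARY KEY (object_id)
);

-- Acquire the lock remotely against the central database:
INSERT INTO global_lock@central_db (object_id, locked_by)
VALUES (:object_id, :site_user);
COMMIT;

-- Release locally; the delete is replicated together with the edited data:
DELETE FROM global_lock WHERE object_id = :object_id;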

However, queries over a high latency database link can sometimes be very slow (tests show anywhere from 1.5 seconds to 7 seconds for a single insert), and I'm not sure if we can guarantee that the update or delete statement that removes a lock is the last statement to be replicated.

Calling a remote PL/SQL procedure to do the checking and locking will at least limit the operation to a single remote query, but seven seconds is still a very long time. Something like two seconds would be more acceptable. I'm hoping the database links can be optimized somehow.

There may also be additional issues, like trying to delete or update a row in the local locking table before that row has been successfully replicated from the central database.

On the bright side, with this kind of solution it should be relatively simple to let the application enter a read-only state if communication with the central database is disrupted, or to redirect clients if a data center becomes unavailable.

Has anyone done anything similar? What might be the best way to approach this?

Like I said initially, this is a rather complex solution, feel free to ask about anything left unclear or left out.

Slow query performance

Posted: 02 Mar 2013 05:31 PM PST

My database contains 20,000 rows and the following query is slow. Please help me to speed this up:

SELECT r.report_id,         r.user_id,         u.user_name,         u.user_mail,         d.department_name,         r.report_comment,         r.report_target_date,         r.report_create_date,         r.report_revised_date,         r.report_root_id  FROM   report r         JOIN user u           ON u.user_id = r.user_id         JOIN department d           ON u.department_id = d.department_id  WHERE  ( u.user_name LIKE Concat('%', 'hieu', '%') )         AND ( d.department_name LIKE Concat('%', 'BOM', '%') )         AND r.report_id IN (SELECT r.report_id                             FROM   report r                             WHERE  r.user_id = 1                             UNION ALL                             SELECT r.report_id                             FROM   report r                                    JOIN user u                                      ON u.user_id = r.user_id                                    JOIN department d                                      ON u.department_id = d.department_id                                    JOIN authority a                                      ON r.user_id = a.user_src_id                             WHERE  a.user_dest_id = 1)         AND ( r.report_id IN (SELECT r.report_id                               FROM   report r                               WHERE  r.report_comment LIKE                                      Concat('%', 'this', '%'))                OR r.report_id IN (SELECT rc.report_id                                   FROM   report_content rc                                   WHERE  rc.visited_company LIKE                                          Concat('%', 'this', '%')                                           OR rc.report_description LIKE                                              Concat('%', 'this', '%')) )  ORDER  BY r.report_target_date DESC  LIMIT  0, 30   

CREATE TABLE :

CREATE TABLE IF NOT EXISTS `report` (    `report_id` int(11) NOT NULL AUTO_INCREMENT,    `user_id` int(11) NOT NULL,    `report_comment` text CHARACTER SET utf8 COLLATE utf8_unicode_ci NOT NULL,    `report_target_date` datetime NOT NULL,    `report_create_date` datetime NOT NULL,    `report_revised_date` datetime DEFAULT NULL,    `report_revised_id` int(11) DEFAULT NULL,    `report_root_id` int(11) DEFAULT NULL,    `enable` int(2) NOT NULL DEFAULT '1',    PRIMARY KEY (`report_id`),    UNIQUE KEY `unique_report` (`report_id`),    KEY `support_report` (`report_id`,`report_target_date`,`report_create_date`,`report_revised_date`,`report_revised_id`,`report_root_id`)  ) ENGINE=InnoDB  DEFAULT CHARSET=latin1 AUTO_INCREMENT=18497 ;      CREATE TABLE IF NOT EXISTS `report_content` (    `report_content_id` int(11) NOT NULL AUTO_INCREMENT,    `report_id` int(11) NOT NULL,    `timestamp` text COLLATE utf8_unicode_ci NOT NULL,    `visited_company` varchar(255) COLLATE utf8_unicode_ci NOT NULL,    `report_description` text COLLATE utf8_unicode_ci NOT NULL,    PRIMARY KEY (`report_content_id`),    UNIQUE KEY `unique_report` (`report_content_id`,`report_id`),    UNIQUE KEY `unique_report_content` (`report_content_id`,`report_id`),    KEY `support_report_content` (`report_content_id`,`report_id`,`visited_company`)  ) ENGINE=InnoDB  DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci AUTO_INCREMENT=278 ;  

My EXPLAIN :

id | select   | table          | type   | possible_keys       | key    | key_len | ref    | rows  | extra
1  | PRIMARY  | r              | ALL    | (NULL)              | (NULL) | (NULL)  | (NULL) | 20246 | using where/filesort
1  | PRIMARY  | p              | eq_ref | PRIMARY,unique_user | NULL   | 4       | NULL   | 1     | using where
5  | SUBQUERY | report_content | all    | NULL                | NULL   | 4       | NULL   | 160   | Using where
6  | UNION    | report_content | all    | NULL                | NULL   | 4       | NULL   | 160   | Using where

Now I have changed the query to the following:

    SELECT r.report_id,           r.user_id,           u.user_name,           u.user_mail,           d.department_name,           r.report_comment,            r.report_target_date,           r.report_create_date,           r.report_revised_date,           r.report_root_id  FROM report AS r FORCE INDEX (unique_report) , `user`                   AS u FORCE INDEX (unique_user) , `department`                   AS d FORCE INDEX (unique_department)  WHERE (u.user_name LIKE CONCAT('%', 'hieu', '%'))           AND (d.department_name LIKE CONCAT('%', 'bom', '%'))           AND r.report_id IN (  SELECT r.report_id      FROM report r      WHERE r.user_id =1 UNION ALL  SELECT r.report_id      FROM report r FORCE INDEX (unique_report)      JOIN USER u FORCE INDEX (unique_user) ON u.user_id = r.user_id      JOIN department d FORCE INDEX (unique_department) ON u.department_id = d.department_id      JOIN authority a FORCE INDEX (unique_authority) ON r.user_id = a.user_src_id      WHERE a.user_dest_id =1) AND (r.report_id IN (  SELECT  r.report_id      FROM report r      WHERE r.report_comment LIKE CONCAT('%', 'when', '%')) OR EXISTS (  SELECT rc.report_id      FROM report_content rc      WHERE rc.visited_company LIKE CONCAT('%', 'when', '%')       OR MATCH (`timestamp`,report_description)       AGAINST('+when+' WITH QUERY EXPANSION) AND r.report_id = rc.report_id))  ORDER BY r.report_target_date DESC  LIMIT 0, 30  

And it is faster than before: 1.97 s.

Can somebody help me speed it up further?
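
A few index ideas that might help, sketched under the assumption that the leading-wildcard LIKEs are the main cost: a BTREE index cannot serve LIKE '%...%', so FULLTEXT indexes on the searched columns are the usual workaround (the tables are InnoDB, and InnoDB FULLTEXT requires MySQL 5.6+; on older versions the table would have to be MyISAM).

ALTER TABLE report ADD FULLTEXT INDEX ft_report_comment (report_comment);
ALTER TABLE report_content
    ADD FULLTEXT INDEX ft_rc_search (visited_company, report_description);

-- An index matching the outer filter and sort helps the ORDER BY ... LIMIT:
ALTER TABLE report ADD INDEX idx_user_target (user_id, report_target_date);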

"Waiting for initial communication packet" error 0 in MySQL

Posted: 02 Mar 2013 02:01 AM PST

I am using MySQL server 5.1.51 on a Linux machine.

The application was working fine for the last 32 days, but since yesterday a number of clients are not able to connect to the database from our application. They are getting the following error:

-2147467259 [MySQL] [ODBC 5.1 Driver] Lost Connection to MySQL server          at 'waiting for initial communication packet',system error: 0  

What can I do to fix this?

Firebird monitoring tables

Posted: 02 Mar 2013 11:30 AM PST

I am working with a Firebird 2.1.1 database and I started reading about the Monitoring tables.

I am trying to see, somehow in real time, what queries each client performs in the database.

I had a look at the MON$STATEMENTS table but it does not provide much info. I would have expected more content inside the table.

Screenshot from Flamerobin when table is displayed

Am I looking in the wrong place or did I not understand the purpose / use of the Firebird monitoring tables?
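
MON$STATEMENTS generally needs to be joined to MON$ATTACHMENTS to see which client runs what, and the monitoring snapshot is frozen per transaction, so it has to be re-queried in a fresh transaction to refresh. A sketch:

SELECT a.MON$USER,
       a.MON$REMOTE_ADDRESS,
       s.MON$STATE,
       s.MON$SQL_TEXT
FROM   MON$STATEMENTS s
JOIN   MON$ATTACHMENTS a ON a.MON$ATTACHMENT_ID = s.MON$ATTACHMENT_ID;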

How to modify an update in Oracle so it performs faster?

Posted: 02 Mar 2013 08:11 AM PST

I have this query:

UPDATE
(
    SELECT h.valid_through_dt, h.LAST_UPDATE_TMSTMP
    FROM ETL_FEE_SCH_TMP d, FEE_SCHEDULE_HISTORICAL h
    WHERE h.FUND_ID = d.FUND_ID
    AND h.FEETYPE_NAME = d.FEETYPE_NAME
    AND h.BREAKPOINT_TYPE = d.BREAKPOINT_TYPE
    AND h.BREAKPOINT_QTY = d.BREAKPOINT_QTY
    AND h.LOW_BREAKPOINT_AMT = d.LOW_BREAKPOINT_AMT
    AND h.VALID_THROUGH = TO_DATE ('31-DEC-9999', 'dd-mon-yyyy')
    AND h.universe = 'DC'
    AND h.universe = d.universe
    AND EXISTS
    (
        SELECT 1
        FROM FEE_SCHEDULE s
        WHERE s.FUND_ID = h.FUND_ID
        AND s.FEETYPE_NAME = h.FEETYPE_NAME
        AND s.BREAKPOINT_TYPE = h.BREAKPOINT_TYPE
        AND s.BREAKPOINT_QTY = h.BREAKPOINT_QTY
        AND s.LOW_BREAKPOINT_AMT = h.LOW_BREAKPOINT_AMT
        AND s.universe = 'DC'
    )
) updateTable
SET updateTable.VALID_THROUGH = (SYSDATE - 1),
    updateTable.LAST_UPDATE_TMSTMP = SYSTIMESTAMP;

The trouble I am having is that this query takes a long time to run. I don't know whether it is possible to run it in parallel, or whether it would be easier to update a cursor in a pipelined function.

What would you suggest?

This is all the information that I believe is relevant.

This is the execution plan of the internal select:

Execution Plan  ----------------------------------------------------------  Plan hash value: 57376096  ---------------------------------------------------------------------------------------------------------  | Id  | Operation                    | Name                     | Rows  | Bytes| Cost (%CPU)| Time     |  ---------------------------------------------------------------------------------------------------------  |   0 | SELECT STATEMENT             |                          |     1 |   306 |  8427   (1)| 00:01:42 |  |   1 |  NESTED LOOPS                |                          |       |       |            |          |  |   2 |   NESTED LOOPS               |                          |     1 |    306|  8427   (1)| 00:01:42 |  |   3 |    MERGE JOIN CARTESIAN      |                          |     1 |    192|  8426   (1)| 00:01:42 |  |*  4 |     INDEX RANGE SCAN         | SYS_C000666              |     1 |     96|     2   (0)| 00:00:01 |  |   5 |     BUFFER SORT              |                          |  3045K|   278M|  8425   (1)| 00:01:42 |  |   6 |      SORT UNIQUE             |                          |  3045K|   278M|  8425   (1)| 00:01:42 |  |*  7 |       TABLE ACCESS FULL      | FEE_SCHEDULE             |  3045K|   278M|  8425   (1)| 00:01:42 |  |*  8 |    INDEX RANGE SCAN          | FEE_SCHDL_IDX1           |     1 |       |     1   (0)| 00:00:01 |  |*  9 |   TABLE ACCESS BY INDEX ROWID| FEE_SCHEDULE_HISTORICAL  |     1 |   114 |     1   (0)| 00:00:01 |  ---------------------------------------------------------------------------------------------------------  Predicate Information (identified by operation id):  ---------------------------------------------------     4 - access("D"."UNIVERSE"='DC')     7 - filter("S"."UNIVERSE"='DC')     8 - access("H"."UNIVERSE"='DC' AND "S"."FUND_ID"="H"."FUND_ID" AND                "S"."FEETYPE_NAME"="H"."FEETYPE_NAME" AND                "S"."BREAKPOINT_TYPE"="H"."BREAKPOINT_TYPE" AND                "S"."BREAKPOINT_QTY"="H"."BREAKPOINT_QTY" AND                "S"."LOW_BREAKPOINT_AMT"="H"."LOW_BREAKPOINT_AMT")         filter("H"."FUND_ID"="D"."FUND_ID" AND                "H"."FEETYPE_NAME"="D"."FEETYPE_NAME" AND                "H"."BREAKPOINT_TYPE"="D"."BREAKPOINT_UNIT_TY  

Table data:

---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------  UNIVERSE|FUND_ID   |FEETYPE_NAME |BREAKPOINT_TYPE|BREAKPOINT_QTY|LOW_BREAKPOINT_AMT|HIGH_BREAKPOINT_AMT|FEE_PCT|FEE_SCHDL_SEQ_ID|GROUP_ID|LAST_UPDATE_TMSTMP  |VALID_FROM|VALID_THROUGH|INSERT_TMSTMP        |JOB_ID|  ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------  DC      |DC9ZTPLPHO|DeferLoad    |Percentage     |4             |10000             |300000             |3.14   |780250          |null    |1/4/2012  3:59:54 PM|6/23/2012 |12/31/9999   |1/5/2011   3:59:54 PM|666   |  DC      |DCE86Y8XFU|RedemptionFee|Percentage     |9             |  100             |100500             |7.67   |780251          |null    |6/4/2012  4:49:54 PM|11/12/2011|12/31/9999   |8/17/2011  2:00:54 PM|666   |  DC      |DCAYL0KONA|FrontLoad    |Percentage     |2             |50000             |601500             |5.00   |780252          |null    |4/25/2012 4:49:54 PM|8/2/2012  |12/31/9999   |12/19/2012 9:59:00 PM|666   |  DC      |DC9ZTPLPHO|DeferLoad    |Percentage     |7             |80000             |900000             |2.24   |780252          |null    |4/25/2012 4:49:54 PM|8/2/2012  |12/31/9999   |12/19/2012 9:59:00 PM|666   |  ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------  

This is the script of the historical table:

CREATE TABLE FEE_SCHEDULE_HISTORICAL  (    UNIVERSE                        VARCHAR2(2 BYTE) NOT NULL,    FUND_ID                         VARCHAR2(10 BYTE) NOT NULL,    FEETYPE_NAME                    VARCHAR2(75 BYTE),    BREAKPOINT_TYPE                 VARCHAR2(50 BYTE),    BREAKPOINT_QTY                  VARCHAR2(10 BYTE),    LOW_BREAKPOINT_AMT              NUMBER(19,6),    HIGH_BREAKPOINT_AMT             NUMBER(19,6),    FEE_PCT                         NUMBER(19,6),    FEE_SCHDL_SEQ_ID                NUMBER        NOT NULL,    GROUP_ID                        NUMBER,    LAST_UPDATE_TMSTMP              DATE          NOT NULL,    VALID_FROM                      DATE          NOT NULL,    VALID_THROUGH                   DATE          NOT NULL,    INSERT_TMSTMP                   DATE          NOT NULL,    JOB_ID                          NUMBER        NOT NULL  );    CREATE UNIQUE INDEX FEE_SCHDL_PK ON FEE_SCHEDULE_HISTORICAL(FEE_SCHDL_SEQ_ID);    CREATE UNIQUE INDEX FEE_SCHDL_HST_IDX ON FEE_SCHEDULE_HISTORICAL (      UNIVERSE,      FUND_ID,      FEETYPE_NAME,      BREAKPOINT_TYPE,      BREAKPOINT_QTY,       LOW_BREAKPOINT_AMT,      VALID_FROM,      JOB_ID  )    CREATE INDEX FEE_SCHEDULE_HST_IDX2 ON FEE_SCHEDULE_HISTORICAL(LAST_UPDATE_TMSTMP)    CREATE INDEX FEE_SCHEDULE_HST_IDX3 ON FEE_SCHEDULE_HISTORICAL(VALID_THROUGH)    ALTER TABLE FEE_SCHEDULE_HISTORICAL ADD (      CONSTRAINT FEE_SCHDL_PK      PRIMARY KEY      (FEE_SCHDL_SEQ_ID)  );  

This is the other table:

CREATE TABLE FEE_SCHEDULE  (    UNIVERSE                        VARCHAR2(2 BYTE) NOT NULL,    FUND_ID                         VARCHAR2(10 BYTE) NOT NULL,    FEETYPE_NAME                    VARCHAR2(75 BYTE),    BREAKPOINT_TYPE                 VARCHAR2(50 BYTE),    BREAKPOINT_QTY                  VARCHAR2(10 BYTE),    LOW_BREAKPOINT_AMT              NUMBER(19,6),    HIGH_BREAKPOINT_AMT             NUMBER(19,6),    FEE_PCT                         NUMBER(19,6),    JOB_RUN_ID                      NUMBER        NOT NULL,    FILE_DATE                       DATE          NOT NULL,    CYCLE_DATE                      DATE          NOT NULL  )  

The temporary table is the result of FEE_SCHEDULE_HISTORICAL minus FEE_SCHEDULE

Get the rank of a user in a score table

Posted: 02 Mar 2013 09:00 AM PST

I have a very simple MySQL table where I save highscores. It looks like that:

Id     Name     Score  

So far so good. The question is: how do I get a user's rank? For example, I have a user's Name or Id and want to get his rank, where all rows are ordered descending by Score.

An Example

Id  Name    Score
1   Ida     100
2   Boo     58
3   Lala    88
4   Bash    102
5   Assem   99

In this very case, Assem's rank would be 3, because he got the 3rd highest score.

The query should return one row, which contains (only) the required Rank.
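
One common way is to count how many scores are strictly higher and add one. A minimal sketch; the table name highscores is an assumption:

SELECT COUNT(*) + 1 AS Rank
FROM   highscores
WHERE  Score > (SELECT Score FROM highscores WHERE Name = 'Assem');

This returns a single row containing only the rank (ties share the same rank).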

[SQL Server] Insert to Parent/Child tables



Insert to Parent/Child tables

Posted: 02 Mar 2013 01:34 AM PST

I have a Parent table (identity PK, with 600,000+ rows) and a child table (with maybe 20,000 rows). The parent and child share the same PK (a 1-to-{0 or 1} relationship). This was done (in hindsight I shouldn't have...) because the child table population is sparse.

I have some new additions to the table in a "flattened" version, that is, the child contents are part of the parent row. I want to insert the new additions into the Parent/Child tables, but I won't know the PK of the parent until the parent row is inserted. How does one accomplish this?
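
One way to sketch this (names assumed): insert the parent first, capture the generated identity, then insert the child with that value.

[code="sql"]
DECLARE @NewParentID int;

INSERT INTO dbo.Parent (ColA, ColB)
VALUES (@ColA, @ColB);

SET @NewParentID = SCOPE_IDENTITY();   -- identity generated by the insert above

INSERT INTO dbo.Child (ParentID, ChildColA)
VALUES (@NewParentID, @ChildColA);
[/code]

For a set-based load from the flattened rows, the OUTPUT clause (or MERGE with OUTPUT) can capture all the new identity values in one pass to feed the child insert.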

[MS SQL Server] Backup issue



Backup issue

Posted: 02 Mar 2013 01:46 AM PST

Hi All, I have an 80B database in SQL Server 2000 and I am migrating to SQL Server 2008 side by side. When I take a backup through the GUI or a query it does not complete successfully; the session gets disconnected automatically. When I checked the ERROR LOG I found the message below.

[b]3041: BACKUP failed to complete the command backup database DBNAME to Disk = 'D:\Dbname\part1.bak', Disk = 'D:\Dbname\part2.bak', Disk = 'D:\Dbname\part3.bak'. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.[/b]

Somewhere I saw this error described as a space issue, but my D: drive has 600 GB free. Let me know what the issue is.

Thanks in advance,
Satish

Database in Recovery mode while restoring it

Posted: 01 Mar 2013 09:34 AM PST

Hi All, I need your help to understand and resolve this issue. I initiated the backup script for a database which was online. I took it to single-user mode and am restoring it. It's been 130 minutes now and the database is showing as being in recovery mode, whereas it should be in restoring mode. The Messages tab in SSMS says: Processed 5531167 pages for database 'Mydatabase Name', file 'Mydatabase Log file' on file 1. What can be done in these cases? Please reply at the earliest, since this is our production environment.

SQL Max server memory settings

Posted: 01 Mar 2013 06:14 AM PST

How can I get the max server memory setting of a SQL Server instance if it's not running/disabled? SueTons.
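
Once the instance is running, the value is read with sp_configure; it lives inside the master database, so with the instance stopped you would have to start it (for example in single-user mode) to read it. A sketch:

[code="sql"]
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)';
[/code]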

CPU considerations for multiple instances

Posted: 01 Mar 2013 04:42 AM PST

I've been researching this and haven't found much info, so I'm not sure what most people do. Consider a server with 48 cores running 4 SQL instances. The server has 192 GB RAM and I have max memory set per instance, but there isn't a max CPU, so to speak. I have MAXDOP set to 8, but that is just per query. If multiple queries come in, one instance could spike all the cores and starve the other instances. I'm considering enabling only certain NUMA nodes for each instance. Is that the route to go?

Here's another wrench in the mix: the instances will be in a Veritas cluster, so they can/will fail over to another physical server. If I set specific NUMA nodes for an instance, will it use the same NUMA nodes on the failover server?

Any suggestions or past experiences will be appreciated!
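
On the NUMA idea: processor affinity can be limited per instance to specific NUMA nodes (SQL Server 2008 R2 and later); the node numbers below are examples only, and node numbering may differ on the failover hardware, so the setting would need to be validated there.

[code="sql"]
ALTER SERVER CONFIGURATION SET PROCESS AFFINITY NUMANODE = 0 TO 1;
[/code]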

[SQL 2012] SSAS 2012 Actions not appearing in Excel 2010



SSAS 2012 Actions not appearing in Excel 2010

Posted: 27 Nov 2012 08:48 PM PST

Hi,I have an SSAS cube that i want to add some actions to. I've had problems adding a reporting action to the cube so decided just to add a URL action instead. Start simple and build on the concepts...So i add a new action, give it a name, set the Target Type to Cells, Taget object to All Cells. I've put no condition on the action since i want it to appear all the time.The action content type is set to URL the action expression is set to "http://www.google.co.uk" I've also set a caption of "Google" under the additional properties and said that the caption is MDX (I'm aware that it isn't but i do intend to expand on this...).See the attached screenshot.I then build and deploy my cube, call up excel (2010) and then create a pivot table off the back of the cube but when i right-click the cells in the pivot table and go to "additional actions" it tells me that there are none specified.Does anyone know where i'm going wrong? I'm obviously missing something rather fundamental here.Thanks,Simon

Alias works with Linked Server but not with Management Studio

Posted: 01 Mar 2013 11:13 PM PST

I have a remote machine running SQL Server 2012 on Windows 2008 and am trying to connect to it from a machine running SQL Server Express 2012 on Windows 8.

I have created an alias and am able to attach the remote machine as a linked server and browse the catalogue. When I try to connect to the alias through Management Studio, however, I get "the network path was not found", error 53. It is definitely not a problem on the remote server, as I can connect from a different PC running SQL Server Express 2008 R2 on Windows 2003.

Any suggestions on what I should be looking at?

SSMS connection to SSIS with packages on remote MSDB server - is that even remotely possible?

Posted: 01 Mar 2013 09:03 AM PST

I've been working in a lab to see what is possible as far as separation of services in SQL Server 2012.I have this setup:Server 1: DB EngineServer 2: SSIS & SSASMy goal is to have the SSIS packages in a database on the main DB server (Server 1) and have the users access Server 2's SSIS via SSMS from their workstation. I'm finding that this is perhaps not possible. [url=http://www.sqlservercentral.com/Forums/Topic1373364-1550-1.aspx#bm1373371]As this forum post states:[/url]Which quotes [url=http://msdn.microsoft.com/en-us/library/aa337083(v=sql.110).aspx]this article from MS[/url]:[quote]Delegation Is Not SupportedSQL Server Integration Services does not support the delegation of credentials, sometimes referred to as a double hop. In this scenario, you are working on a client computer, Integration Services is installed on a second computer, and SQL Server is installed on a third computer. Although SQL Server Management Studio successfully passes your credentials from the client computer to the second computer on which Integration Services is running, Integration Services cannot delegate your credentials from the second computer to the third computer on which SQL Server is running.[/quote]So it looks like i finally hit a brick wall. I'm wondering if there is any way to configure such a scenario (mostly for knowledge) or if that is even practical. I'd be curious to hear from experienced DBA's who can perhaps point me in the right direction. Thank you in advance.

Installing SS 2012 Enterprise Edition (Requirements Question)

Posted: 01 Mar 2013 06:44 AM PST

From looking at this link it appears that if i want to install Enterprise Edition on my computer, I need to install Windows Server. http://msdn.microsoft.com/en-us/library/ms143506.aspxCurrently I have Windows 7 Home Premium. I'm sure I don't need Enterprise but I was thinking why not? So i guess I'll do developer.Anyone ever use Windows Server? Is there any reason why I'd want to install that on my laptop? chances are I won't play around with it much anyway. But i guess it would not hurt to get some experience.I get all this software for free from my university.Thanks!

Writing Powershell for BI SSIS 2012 project package deployment

Posted: 01 Mar 2013 02:36 AM PST

As a Powershell newbie, I want to write a PowerShell script that will execute on the production server and create all the necessary objects from dev. for the SSIS 2012 project/packages needed. I need a Powershell script that can 1. Create the SSISDB Catlalog 2. Deploy the SSIS 2012 project/packages to the catalog.3. Create the sql agent job that will execute on a daily schedule one of the parent packages in the catalog.4. Execute 4 of the SSIS packages in the SSISDB catalog that will create the BI staging databases, tables and other stuff. Does anyone have a Powershell script that does a SSIS 2012 deployment. I have found nothing on the web to show me the syntax. The ????? below is an example of where no samples exist for the jobstep command.$jobStep.Subsystem = [Microsoft.SqlServer.Management.Smo.Agent.AgentSubSystem]::SSIS$jobStep.Command = ?????????????Thanks for any help or guidance to a resource

[T-SQL] Adding a Table Lock on Purpose



Adding a Table Lock on Purpose

Posted: 01 Mar 2013 06:14 AM PST

Hello everyone. I have a rather odd situation: I want to add a lock to a table on purpose. I need to update and then select from that table before anything else can happen. This is in the middle of a very large stored procedure, and I cannot lock that table for the entire transaction of the whole stored procedure; I can lock the table only for the very minimum amount of time.

@NumberOfListToInsert is set by @@ROWCOUNT after an insert into a table. I then re-key that table starting with the number from the Keys table.

[code="sql"]
UPDATE dbo.Keys
SET MaxCount = (MaxCount + @NumberOfListToInsert) + 5
WHERE TableName = 'Diagnosis';

SET @MaxKeyNumber = (SELECT MaxCount AS MaxKeyCount FROM dbo.Keys WHERE TableName = 'Diagnosis')
[/code]

I need both of these to start and complete before any other query can access the "Keys" table. Can I simply wrap this in a BEGIN TRAN and COMMIT TRAN? Or should I use TABLOCKX? I am not sure how to use TABLOCKX, so if you can give an example that would be very helpful.

Thanks in advance for your help, suggestions and code samples.
Andrew SQL DBA
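
A minimal sketch of the TABLOCKX approach asked about: both statements run inside one transaction, and the exclusive table lock taken by the UPDATE keeps other sessions out of dbo.Keys until the commit (an UPDLOCK/HOLDLOCK row lock would be less drastic, but TABLOCKX is shown since it was requested).

[code="sql"]
BEGIN TRAN;

UPDATE dbo.Keys WITH (TABLOCKX)
SET    MaxCount = (MaxCount + @NumberOfListToInsert) + 5
WHERE  TableName = 'Diagnosis';

SELECT @MaxKeyNumber = MaxCount
FROM   dbo.Keys
WHERE  TableName = 'Diagnosis';

COMMIT TRAN;   -- the table lock is released here
[/code]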

How to make a Pivote table

Posted: 01 Mar 2013 12:55 PM PST

Hi,I have a table containing medical test results (as integer with 3 values: 0,1,2) like this :[code="plain"][PATIENT_ID],[Name],...,[T1],[T2],[T3]...... (the T? are test names (in abbreviations) dynamically added to table) ------------------------------------------------01, John,.....,1,2,0,.....02, Jake,.....,0,2,0,.....03, Joe,.....,2,2,2,.....04, Jane,.....,1,2,1,...............[/code]Another table includes information about each test :[code="plain"].....,[TEST_DESC],[ABBR],......-----------------------------------.....,Test name 1,T1,...............,Test name 2,T2,...............,Test name 3,T3,....................................[/code]Each time one test introduces to system, one row is added to second table and one column to first table with a default value.Now, I need to make a group report for tests. To do so, first I need to change it to this format :[code="plain"][PATIENT_ID],[Name],[TEST_DESC],[RESULT]-----------------------------------------------------01,John,Test name 1,101,John,Test name 2,201,John,Test name 3,0..........................02, Jake,Test name 1,002, Jake,Test name 2,202, Jake,Test name 3,0...........................[/code]Any suggestion will be appreciated.
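
A static UNPIVOT over the three example columns sketches the reshaping; with test columns added dynamically, the IN (...) column list would have to be built with dynamic SQL. The table names Patients and Tests are assumptions:

[code="sql"]
SELECT u.PATIENT_ID,
       u.Name,
       t.TEST_DESC,
       u.RESULT
FROM   Patients
       UNPIVOT (RESULT FOR ABBR IN ([T1], [T2], [T3])) AS u
       JOIN Tests t ON t.ABBR = u.ABBR;
[/code]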

Deletes taking long time

Posted: 01 Mar 2013 04:06 AM PST

I have a situation where I ran a query that generated about 32,000 individual deletes that I am sending to my client to run on their database:

Delete x from table where id = 1234
Delete x from table where id = 1235
Delete x from table where id = 1237

On this table, it takes about 3 hours to run. It has the requisite clustered PK and 2 foreign keys. I have another table of the same type, with 2 foreign keys, and it takes 40 seconds to delete 90,000 records from it. Why would the table where I am deleting a third as many records take so much longer? I thought about disabling the FKs, but I am not sure what happens when I re-enable them. Does it recreate the FK pointers?

Thanks,
Tom
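
On the FK question: a disabled foreign key is re-enabled with WITH CHECK CHECK, which revalidates the existing rows so the constraint is trusted again (names below are assumptions). Note that a common cause of slow deletes is a referencing child table whose FK column has no index, forcing a scan for every deleted row.

[code="sql"]
ALTER TABLE dbo.ChildTable NOCHECK CONSTRAINT FK_ChildTable_Parent;

-- ... run the deletes ...

ALTER TABLE dbo.ChildTable WITH CHECK CHECK CONSTRAINT FK_ChildTable_Parent;
[/code]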

Custom order in SELECT

Posted: 01 Mar 2013 06:26 AM PST

Hi, how could we change the order in a SELECT command based on a custom semicolon-delimited phrase? We have:

[Letters]
----------
a
b
c
d
e

And based on the phrase 'd;c;b;a;e' we want to have:

[Letters]
----------
d
c
b
a
e

Thanks in advance for the help.
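
A sketch of one way to do it: order by each value's position inside the delimited phrase (table and column names assumed from the post):

[code="sql"]
DECLARE @order varchar(100);
SET @order = 'd;c;b;a;e';

SELECT Letters
FROM   dbo.MyTable
ORDER  BY CHARINDEX(';' + Letters + ';', ';' + @order + ';');
[/code]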

USPS Addresses Load

Posted: 01 Mar 2013 04:14 AM PST

Hi guys, has anybody loaded the USPS city/state addresses from a text file into a table? My manager gave me a CD and asked me to load a flat file with no consistent delimiters and no headers into a table. Has anybody loaded USPS addresses in the past? If so, can you guide me through doing this? Any help is much appreciated.

Exclude similar columns in a JOIN command

Posted: 01 Mar 2013 03:49 AM PST

Hi,I have three tables and the structures are :Table1 : [PATIENT_ID],[Name],[Age],[City],[TEST1],[TEST2]...... (the AB? are test names dynamically added to table)Table2 : [PATIENT_ID],[Name],[Age],[DR1],[DR2],...... (the DR? are drug names dynamically added to table)Table3 : [Hospital],[City],[DATE],[ROOM],[PATIENT_ID] I need to 'SELECT JOIN' them to create this table :[Hospital],[City],[DATE],[ROOM],[PATIENT_ID],[Name],[Age],[TEST1],[TEST2]......, [DR1],[DR2],......When I simply join them, I will have repeated columns such as [Name], [City]. Here, I simplified the structure but in reality we have many more such informative columns. In the other hand, part of tables are dynamically changing. So, it would be really difficult to define column names individually.I appreciate for any guide to tell me if there is a way to exclude those similar columns and they show up only once.Thanks in advance

Converting SQL DBMail XML to XLS

Posted: 01 Mar 2013 05:47 AM PST

Hi, Does anyone know of a good way to convert the XML table I'm generating to XLS prior to send. I'm doing something like this...http://www.mssqltips.com/sqlservertip/2347/send-email-in-a-tabular-format-using-sql-server-database-mail/Regards, Greg.

XML issue converting table to XML

Posted: 01 Mar 2013 02:46 AM PST

Hi,Thanks for your help in advance. I am trying to get the following data [code="sql"]CREATE TABLE [dbo].[TABLE_XML1]( [ID] [int] IDENTITY(1,1) NOT NULL, [TOP_LEVEL_ID] [int] NULL, [TOP_LEVEL_ITEM] [nvarchar](50) NULL, [SECOND_LEVEL_ID] [int] NULL, [SECOND_LEVEL_ITEM] [nvarchar](50) NULL, [THIRD_LEVEL_ID] [int] NULL, [THIRD_LEVEL_ITEM] [nvarchar](50) NULL, [FOURTH_LEVEL_ID] [int] NULL, [FOURTH_LEVEL_ITEM] [nvarchar](50) NULL, CONSTRAINT [PK_TABLE_XML1] PRIMARY KEY CLUSTERED ( [ID] ASC)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]) ON [PRIMARY]GOSET IDENTITY_INSERT [dbo].[TABLE_XML1] ON GOINSERT [dbo].[TABLE_XML1] ([ID], [TOP_LEVEL_ID], [TOP_LEVEL_ITEM], [SECOND_LEVEL_ID], [SECOND_LEVEL_ITEM], [THIRD_LEVEL_ID], [THIRD_LEVEL_ITEM], [FOURTH_LEVEL_ID], [FOURTH_LEVEL_ITEM]) VALUES (1, 294, N'TOP LEVEL TYPE A', 1244, N'SECOND LEVEL ITEM A', 1245, N'THIRD LEVEL ITEM A', 2222, N'FOURTH LEVEL ITEM A')GOINSERT [dbo].[TABLE_XML1] ([ID], [TOP_LEVEL_ID], [TOP_LEVEL_ITEM], [SECOND_LEVEL_ID], [SECOND_LEVEL_ITEM], [THIRD_LEVEL_ID], [THIRD_LEVEL_ITEM], [FOURTH_LEVEL_ID], [FOURTH_LEVEL_ITEM]) VALUES (2, 294, N'TOP LEVEL TYPE A', 1252, N'SECOND LEVEL ITEM B', 1245, N'THIRD LEVEL ITEM A', 2223, N'FOURTH LEVEL ITEM B')GOINSERT [dbo].[TABLE_XML1] ([ID], [TOP_LEVEL_ID], [TOP_LEVEL_ITEM], [SECOND_LEVEL_ID], [SECOND_LEVEL_ITEM], [THIRD_LEVEL_ID], [THIRD_LEVEL_ITEM], [FOURTH_LEVEL_ID], [FOURTH_LEVEL_ITEM]) VALUES (3, 294, N'TOP LEVEL TYPE A', 1254, N'SECOND LEVEL ITEM C', 1255, N'THIRD LEVEL ITEM B', 2231, N'FOURTH LEVEL ITEM C')GOINSERT [dbo].[TABLE_XML1] ([ID], [TOP_LEVEL_ID], [TOP_LEVEL_ITEM], [SECOND_LEVEL_ID], [SECOND_LEVEL_ITEM], [THIRD_LEVEL_ID], [THIRD_LEVEL_ITEM], [FOURTH_LEVEL_ID], [FOURTH_LEVEL_ITEM]) VALUES (4, 294, N'TOP LEVEL TYPE A', 1298, N'SECOND LEVEL ITEM D', 1255, N'THIRD LEVEL ITEM B', 2313, N'FOURTH LEVEL ITEM D')GOINSERT [dbo].[TABLE_XML1] ([ID], [TOP_LEVEL_ID], [TOP_LEVEL_ITEM], [SECOND_LEVEL_ID], [SECOND_LEVEL_ITEM], [THIRD_LEVEL_ID], [THIRD_LEVEL_ITEM], [FOURTH_LEVEL_ID], [FOURTH_LEVEL_ITEM]) VALUES (5, 294, N'TOP LEVEL TYPE A', 1302, N'SECOND LEVEL ITEM E', 1303, N'THIRD LEVEL ITEM C', 2318, N'FOURTH LEVEL ITEM E')GOINSERT [dbo].[TABLE_XML1] ([ID], [TOP_LEVEL_ID], [TOP_LEVEL_ITEM], [SECOND_LEVEL_ID], [SECOND_LEVEL_ITEM], [THIRD_LEVEL_ID], [THIRD_LEVEL_ITEM], [FOURTH_LEVEL_ID], [FOURTH_LEVEL_ITEM]) VALUES (6, 294, N'TOP LEVEL TYPE A', 1305, N'SECOND LEVEL ITEM F', 1303, N'THIRD LEVEL ITEM C', 2192, N'FOURTH LEVEL ITEM F')GOINSERT [dbo].[TABLE_XML1] ([ID], [TOP_LEVEL_ID], [TOP_LEVEL_ITEM], [SECOND_LEVEL_ID], [SECOND_LEVEL_ITEM], [THIRD_LEVEL_ID], [THIRD_LEVEL_ITEM], [FOURTH_LEVEL_ID], [FOURTH_LEVEL_ITEM]) VALUES (7, 656, N'TOP LEVEL TYPE B', 657, N'SECOND LEVEL ITEM G', 27, N'THIRD LEVEL ITEM A', NULL, NULL)GOINSERT [dbo].[TABLE_XML1] ([ID], [TOP_LEVEL_ID], [TOP_LEVEL_ITEM], [SECOND_LEVEL_ID], [SECOND_LEVEL_ITEM], [THIRD_LEVEL_ID], [THIRD_LEVEL_ITEM], [FOURTH_LEVEL_ID], [FOURTH_LEVEL_ITEM]) VALUES (8, 656, N'TOP LEVEL TYPE B', 657, N'SECOND LEVEL ITEM G', 638, N'THIRD LEVEL ITEM B', NULL, NULL)GOINSERT [dbo].[TABLE_XML1] ([ID], [TOP_LEVEL_ID], [TOP_LEVEL_ITEM], [SECOND_LEVEL_ID], [SECOND_LEVEL_ITEM], [THIRD_LEVEL_ID], [THIRD_LEVEL_ITEM], [FOURTH_LEVEL_ID], [FOURTH_LEVEL_ITEM]) VALUES (9, 656, N'TOP LEVEL TYPE B', 657, N'SECOND LEVEL ITEM G', 939, N'THIRD LEVEL ITEM C', NULL, NULL)GOINSERT [dbo].[TABLE_XML1] ([ID], [TOP_LEVEL_ID], [TOP_LEVEL_ITEM], [SECOND_LEVEL_ID], [SECOND_LEVEL_ITEM], [THIRD_LEVEL_ID], 
[THIRD_LEVEL_ITEM], [FOURTH_LEVEL_ID], [FOURTH_LEVEL_ITEM]) VALUES (10, 656, N'TOP LEVEL TYPE B', 657, N'SECOND LEVEL ITEM G', 939, N'THIRD LEVEL ITEM C', NULL, NULL)GOINSERT [dbo].[TABLE_XML1] ([ID], [TOP_LEVEL_ID], [TOP_LEVEL_ITEM], [SECOND_LEVEL_ID], [SECOND_LEVEL_ITEM], [THIRD_LEVEL_ID], [THIRD_LEVEL_ITEM], [FOURTH_LEVEL_ID], [FOURTH_LEVEL_ITEM]) VALUES (11, 656, N'TOP LEVEL TYPE B', 657, N'SECOND LEVEL ITEM G', 939, N'THIRD LEVEL ITEM C', NULL, NULL)GOINSERT [dbo].[TABLE_XML1] ([ID], [TOP_LEVEL_ID], [TOP_LEVEL_ITEM], [SECOND_LEVEL_ID], [SECOND_LEVEL_ITEM], [THIRD_LEVEL_ID], [THIRD_LEVEL_ITEM], [FOURTH_LEVEL_ID], [FOURTH_LEVEL_ITEM]) VALUES (12, 656, N'TOP LEVEL TYPE B', 1165, N'SECOND LEVEL ITEM H', 940, N'THIRD LEVEL ITEM D', NULL, NULL)GOINSERT [dbo].[TABLE_XML1] ([ID], [TOP_LEVEL_ID], [TOP_LEVEL_ITEM], [SECOND_LEVEL_ID], [SECOND_LEVEL_ITEM], [THIRD_LEVEL_ID], [THIRD_LEVEL_ITEM], [FOURTH_LEVEL_ID], [FOURTH_LEVEL_ITEM]) VALUES (13, 656, N'TOP LEVEL TYPE B', 1165, N'SECOND LEVEL ITEM H', 940, N'THIRD LEVEL ITEM D', NULL, NULL)GOINSERT [dbo].[TABLE_XML1] ([ID], [TOP_LEVEL_ID], [TOP_LEVEL_ITEM], [SECOND_LEVEL_ID], [SECOND_LEVEL_ITEM], [THIRD_LEVEL_ID], [THIRD_LEVEL_ITEM], [FOURTH_LEVEL_ID], [FOURTH_LEVEL_ITEM]) VALUES (14, 1234, N'TOP LEVEL TYPE C', 2206, N'SECOND LEVEL ITEM I', NULL, NULL, NULL, NULL)GOINSERT [dbo].[TABLE_XML1] ([ID], [TOP_LEVEL_ID], [TOP_LEVEL_ITEM], [SECOND_LEVEL_ID], [SECOND_LEVEL_ITEM], [THIRD_LEVEL_ID], [THIRD_LEVEL_ITEM], [FOURTH_LEVEL_ID], [FOURTH_LEVEL_ITEM]) VALUES (15, 1234, N'TOP LEVEL TYPE C', 2207, N'SECOND LEVEL ITEM J', NULL, NULL, NULL, NULL)GOINSERT [dbo].[TABLE_XML1] ([ID], [TOP_LEVEL_ID], [TOP_LEVEL_ITEM], [SECOND_LEVEL_ID], [SECOND_LEVEL_ITEM], [THIRD_LEVEL_ID], [THIRD_LEVEL_ITEM], [FOURTH_LEVEL_ID], [FOURTH_LEVEL_ITEM]) VALUES (16, 1234, N'TOP LEVEL TYPE C', 2208, N'SECOND LEVEL ITEM K', NULL, NULL, NULL, NULL)GOSET IDENTITY_INSERT [dbo].[TABLE_XML1] OFFGO[/code]Into XMLSuch that it is structured like[code="xml"]<TOP_LEVEL_ITEM= "TOP LEVEL TYPE A" TOP_LEVEL_ID="294"> <SECOND_LEVEL_ITEM= "SECOND LEVEL ITEM A" SECOND_LEVEL_ID="1244"> <THIRD_LEVEL_ITEM="THIRD LEVEL ITEM A" THIRD_LEVEL_ID="1245"> <FOURTH_LEVEL_ITEM="FOURTH LEVEL ITEM A" FOURTH_LEVEL_ID="2222"> <FOURTH_LEVEL_ITEM="FOURTH LEVEL ITEM b" FOURTH_LEVEL_ID="2223"> </THIRD_LEVEL_ITEM> <THIRD_LEVEL_ITEM="THIRD LEVEL ITEM B" THIRD_LEVEL_ID="1255"> <FOURTH_LEVEL_ITEM="FOURTH LEVEL ITEM C" FOURTH_LEVEL_ID="2231"> <FOURTH_LEVEL_ITEM="FOURTH LEVEL ITEM D" FOURTH_LEVEL_ID="2313"> </THIRD_LEVEL_ITEM> </SECOND_LEVEL_ITEM></TOP_LEVEL_ITEM>[/code]Any pointers as to how I can achieve this are gratefully appreciated.Cheers,Oliver

[SQL Server 2008 issues] Deletion of database



Deletion of database

Posted: 01 Mar 2013 03:34 PM PST

Hi, is there any way we can find the details of a deleted database? I mean, who deleted it and when it was deleted.

mirroring

Posted: 01 Mar 2013 05:44 PM PST

How can I find the synchronisation status of the previous month's data when the database is configured for mirroring?

a tricky query

Posted: 01 Mar 2013 03:43 AM PST

I came across a certain query which was as follows: find employees having a salary greater than the average salary of their department. The solution was:

[CODE]select * from student e where salary > (select avg(salary) from student f where f.dep=e.dep);[/CODE]

But when I did it as:

[CODE]select * from student where salary > (select avg(salary) from student e where dep=e.dep);[/CODE]

it showed me a different result. Can someone please explain the difference?

MS SQL to Postgres ETL / Replication / Synch

Posted: 08 Feb 2013 01:59 PM PST

I've recently been tasked with getting my SQL Server 2008 R2 data into a Postgres 9.2.2 server on Linux. My source DB is roughly 100 GB in size, probably only 15% of which changes daily. I need to propagate those changes to the Postgres server, but I'm having trouble with the speed of the various methods I've tried.

I've tried using the 32-bit Postgres ODBC connection in SSIS; this was very slow, on the order of 100 rows/sec inserting into Postgres. I could not get the 64-bit connection to work at all. I've tried CloverETL, an open source ETL platform; performance was again pretty slow, on the order of 1000 rows/sec inserting into Postgres. I've also tried the Kettle/Spoon ETL platform, which is getting me around 5000 rows/sec into Postgres, which is still very slow for my dataset.

I think I'm going to have to resort to a BCP out ---> COPY FROM solution. Does anyone have any guidance or insight into how best to propagate data changes into Postgres?
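
A sketch of the bcp-out / COPY-in route mentioned above; file paths, the delimiter and object names are assumptions:

[code="sql"]
-- On the SQL Server side (command line, not T-SQL):
--   bcp MyDb.dbo.MyTable out C:\export\mytable.dat -c -t"|" -S myserver -T

-- On the PostgreSQL side, after copying the file across:
COPY mytable FROM '/data/mytable.dat' WITH (FORMAT text, DELIMITER '|');
[/code]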

Denormalising report from normalised nested source data

Posted: 28 Feb 2013 11:29 PM PST

Hi

I have an application that stores normalised and nested data. Each top-level instance is identified by an instance Id. Each line of data has a Lineage and NestedIndexId such that its child has the same instance Id with the Lineage = Parent Lineage + '_' + Parent NestedIndexId. The data item is indicated by the columns [DataName] and [DataValue]. This structure allows flexible data storage for any number of records and level of nesting. Using an object model on the application side to represent and manage this is easy enough, but I am battling with a way to run a flat report to get the data out in a usable format. Hopefully the code below describes what I am trying to explain, together with where I am with pivoting the data:

[code="sql"]
CREATE TABLE #Data (InstanceId INT, Lineage VARCHAR(12), IndexId INT, DataName VARCHAR(20), DataValue VARCHAR(100))

INSERT INTO #Data ( InstanceId, Lineage, IndexId, DataName, DataValue )
VALUES ( 1, '0'    , 0, 'Parent'    , '1 Parent 0')
      ,( 1, '0_0'  , 0, 'Child'     , '1 Child 0 0')
      ,( 1, '0_0_0', 0, 'Grandchild', '1 Grandchild 0 0')
      ,( 1, '0_0_0', 1, 'Grandchild', '1 Grandchild 0 1')
      ,( 1, '0_0'  , 1, 'Child'     , '1 Child 0 0')
      ,( 1, '0_0_1', 0, 'Grandchild', '1 Grandchild 1 0')
      ,( 1, '0_0_1', 1, 'Grandchild', '1 Grandchild 1 1')
      ,( 2, '0'    , 0, 'Parent'    , '2 Parent 0')
      ,( 2, '0_0'  , 0, 'Child'     , '2 Child 0 0')
      ,( 2, '0_0_0', 0, 'Grandchild', '2 Grandchild 0 0')
      ,( 2, '0_0_0', 1, 'Grandchild', '2 Grandchild 0 1')
      ,( 2, '0_0'  , 1, 'Child'     , '2 Child 0 0')
      ,( 2, '0_0_1', 0, 'Grandchild', '2 Grandchild 1 0')
      ,( 2, '0_0_1', 1, 'Grandchild', '2 Grandchild 1 1')

SELECT *
FROM ( SELECT InstanceId, Lineage, IndexId, DataValue, DataName FROM #Data ) SourceData
PIVOT ( MAX(DataValue) FOR DataName IN ([Parent],[Child],[GrandChild]) ) AS PVT
ORDER BY InstanceId, Lineage + '_' + CAST(IndexId AS VARCHAR(12))

DROP TABLE #Data
[/code]

I need the output of the query to be formatted in the following fashion:

[code="other"]
Parent      Child        GrandChild
1 Parent 0  1 Child 0 0  1 Grandchild 0 0
1 Parent 0  1 Child 0 0  1 Grandchild 0 1
1 Parent 0  1 Child 0 0  1 Grandchild 1 0
1 Parent 0  1 Child 0 0  1 Grandchild 1 1
2 Parent 0  2 Child 0 0  2 Grandchild 0 0
2 Parent 0  2 Child 0 0  2 Grandchild 0 1
2 Parent 0  2 Child 0 0  2 Grandchild 1 0
2 Parent 0  2 Child 0 0  2 Grandchild 1 1
[/code]

but the closest I am able to get from a query like the one above is:

[code="other"]
InstanceId  Lineage  IndexId  Parent      Child        GrandChild
1           0        0        1 Parent 0
1           0_0      0                    1 Child 0 0
1           0_0_0    0                                 1 Grandchild 0 0
1           0_0_0    1                                 1 Grandchild 0 1
1           0_0      1                    1 Child 0 0
1           0_0_1    0                                 1 Grandchild 1 0
1           0_0_1    1                                 1 Grandchild 1 1
2           0        0        2 Parent 0
2           0_0      0                    2 Child 0 0
2           0_0_0    0                                 2 Grandchild 0 0
2           0_0_0    1                                 2 Grandchild 0 1
2           0_0      1                    2 Child 0 0
2           0_0_1    0                                 2 Grandchild 1 0
2           0_0_1    1                                 2 Grandchild 1 1
[/code]

Any ideas appreciated.
Gavin
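
For the fixed parent/child/grandchild depth shown in the sample, a self-join on the lineage rule (child Lineage = parent Lineage + '_' + parent IndexId) gets the flat shape without PIVOT. This is a sketch against the #Data sample above; a variable nesting depth would need a recursive CTE or dynamic SQL instead.

[code="sql"]
SELECT p.DataValue AS Parent,
       c.DataValue AS Child,
       g.DataValue AS GrandChild
FROM #Data p
INNER JOIN #Data c
        ON c.InstanceId = p.InstanceId
       AND c.Lineage    = p.Lineage + '_' + CAST(p.IndexId AS VARCHAR(12))
INNER JOIN #Data g
        ON g.InstanceId = c.InstanceId
       AND g.Lineage    = c.Lineage + '_' + CAST(c.IndexId AS VARCHAR(12))
WHERE p.DataName = 'Parent'
ORDER BY p.InstanceId, c.IndexId, g.IndexId;
[/code]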

DateTime Rows to Columns

Posted: 01 Mar 2013 03:06 AM PST

Hi

I have something like a table of users and another table with dates and working time for the users. I want to display the working hours for the users, but I need to see the dates as columns. I created a little script to show what I want, but I'm having a problem with the duplicate users.

[code="sql"]
DROP TABLE #Users
DROP TABLE #Working

CREATE TABLE #Users (Num INT, NAME NVARCHAR(5))
CREATE TABLE #Working (WorkDay SMALLDATETIME, Num INT, TimeWorked INT)

INSERT INTO #Users
SELECT 1,'User1' UNION ALL
SELECT 2,'User2' UNION ALL
SELECT 3,'User3'

INSERT INTO #Working
SELECT '01/01/2013',1,2   UNION ALL
SELECT '02/01/2013',1,0   UNION ALL
SELECT '03/01/2013',1,1   UNION ALL
SELECT '04/01/2013',1,4   UNION ALL
SELECT '05/01/2013',1,0   UNION ALL
SELECT '01/01/2013',2,0   UNION ALL
SELECT '02/01/2013',2,0   UNION ALL
SELECT '03/01/2013',2,20  UNION ALL
SELECT '04/01/2013',2,20  UNION ALL
SELECT '05/01/2013',2,20  UNION ALL
SELECT '01/01/2013',3,30  UNION ALL
SELECT '02/01/2013',3,30  UNION ALL
SELECT '03/01/2013',3,0   UNION ALL
SELECT '04/01/2013',3,0   UNION ALL
SELECT '05/01/2013',3,0

SELECT * FROM #Users
SELECT * FROM #Working

SELECT DISTINCT U.Num, U.NAME,
       [1] = CASE WHEN MONTH(W.WorkDay) = 1 THEN W.TimeWorked ELSE 0 END,
       [2] = CASE WHEN MONTH(W.WorkDay) = 2 THEN W.TimeWorked ELSE 0 END,
       [3] = CASE WHEN MONTH(W.WorkDay) = 3 THEN W.TimeWorked ELSE 0 END,
       [4] = CASE WHEN MONTH(W.WorkDay) = 4 THEN W.TimeWorked ELSE 0 END,
       [5] = CASE WHEN MONTH(W.WorkDay) = 5 THEN W.TimeWorked ELSE 0 END
FROM #Users U
INNER JOIN #Working W ON U.Num = W.Num
[/code]

In the end the query will be built on the fly with the help of a tally table; I just pass 2 parameters to the sp, the year and the month...

Thanks
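
The duplicate rows come from using DISTINCT instead of aggregating; grouping per user and aggregating each month's value collapses them. A sketch against the sample tables above (SUM adds multiple rows per user and month together; use MAX if there is at most one row per month):

[code="sql"]
SELECT U.Num, U.NAME,
       [1] = SUM(CASE WHEN MONTH(W.WorkDay) = 1 THEN W.TimeWorked ELSE 0 END),
       [2] = SUM(CASE WHEN MONTH(W.WorkDay) = 2 THEN W.TimeWorked ELSE 0 END),
       [3] = SUM(CASE WHEN MONTH(W.WorkDay) = 3 THEN W.TimeWorked ELSE 0 END),
       [4] = SUM(CASE WHEN MONTH(W.WorkDay) = 4 THEN W.TimeWorked ELSE 0 END),
       [5] = SUM(CASE WHEN MONTH(W.WorkDay) = 5 THEN W.TimeWorked ELSE 0 END)
FROM #Users U
INNER JOIN #Working W ON U.Num = W.Num
GROUP BY U.Num, U.NAME;
[/code]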

how does sql read tempdb multiple .ldf files

Posted: 01 Mar 2013 04:49 AM PST

Does anybody know how SQL Server uses multiple tempdb files when there is a restricted file growth limit on the data files? Let's say I have tempdb1.mdf, tempdb2.ldf and tempdb3.ldf as the tempdb files. When, say, tempdb2.ldf gets full, does it move on to the next file (tempdb3.ldf) without throwing a "tempdb full" error message? Or does SQL Server only throw the "tempdb full" error message when ALL the data files are full? I ask because a client has this set up with multiple tempdb data files with restricted file growth (not sure why, since I know this is not good practice). Appreciate the help.
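
For what it's worth, SQL Server stripes tempdb allocations across all of its data files with proportional fill, so a single full data file does not by itself raise an error; the "tempdb full" errors appear when no data file in the filegroup can grow any further (a full log file is a separate error). The current file layout and growth caps can be checked with a simple sketch like this:

[code="sql"]
SELECT name, type_desc, physical_name,
       size_mb = size * 8 / 1024,
       growth,                  -- in 8 KB pages, or percent if is_percent_growth = 1
       is_percent_growth,
       max_size                 -- -1 = unrestricted growth
FROM sys.master_files
WHERE database_id = DB_ID('tempdb');
[/code]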

SQL Agent skipping job execution

Posted: 01 Mar 2013 01:04 AM PST

I am experiencing an issue with a couple of SQL Agent jobs on SQL Server 2008 R2. The job will execute and you can see it in the history, but the run time is typically less than 10 seconds and the code itself did not execute. Actually, some of the code is executing, but not all of it. Strange as hell. And it is infrequent: some days it works, some days it doesn't. Mr. Google shows me that other people have experienced this issue going back to SQL 2005; however, nobody seems to have a solution.

These are things it is not:
- It is not a security issue. The job is running as system administrator.
- It is not due to other jobs running at the same time.
- It is not due to SQL Agent or even SQL Server needing to be restarted or the server being rebooted (all of that has been done).
- It is not specific to an edition of SQL. I'm seeing the problem in Enterprise and Standard.

Here is one of the jobs that is showing this behavior:

[code="sql"]
DECLARE @Threads INT = 5
DECLARE @ReorgThreshold INT = 30
DECLARE @Rows BIGINT = 10000
DECLARE @ByPassGetStats BIT = 0
DECLARE @ByPassDefrag BIT = 0

DECLARE @DatabaseID INT;
DECLARE @DatabaseName VARCHAR(255);
DECLARE @TableName VARCHAR(255);
DECLARE @SchemaName VARCHAR(255);
DECLARE @ThreadCounter INT;
DECLARE @SQL NVARCHAR(4000);

CREATE TABLE #IndexFrag (
    DatabaseName VARCHAR(255) NULL,
    ObjectID INT,
    IndexID INT,
    PartitionNumber INT,
    FragmentationPerc FLOAT,
    Pages INT,
    Records BIGINT,
    IndexName VARCHAR(255),
    SchemaName VARCHAR(255),
    TableName VARCHAR(255),
    AllowPageLocks INT);

CREATE TABLE #TableList (
    DatabaseName VARCHAR(255) NULL,
    SchemaName VARCHAR(255) NULL,
    TableName VARCHAR(255) NULL,
    Records BIGINT );

IF @ByPassGetStats = 0
BEGIN
    --Get the index fragmentation
    DECLARE ReorgIndexCursor01 CURSOR FOR
        SELECT [name], database_id
        FROM master.sys.databases
        WHERE [name] NOT IN ('master', 'model', 'tempdb')
        ORDER BY [name];

    OPEN ReorgIndexCursor01;
    FETCH NEXT FROM ReorgIndexCursor01 INTO @DatabaseName, @DatabaseID;

    WHILE @@FETCH_STATUS = 0
    BEGIN
        INSERT INTO #IndexFrag (DatabaseName, ObjectID, IndexID, PartitionNumber, FragmentationPerc, Pages, Records)
        SELECT @DatabaseName, ps.OBJECT_ID, ps.index_id, ps.partition_number,
               ps.avg_fragmentation_in_percent, ps.page_count, ps.record_count
        FROM master.sys.dm_db_index_physical_stats (@DatabaseID, NULL, NULL, NULL, N'SAMPLED') ps
        WHERE ps.index_id > 0
        OPTION (MaxDop 1);

        --Update the table with the schema, table, and index names
        SELECT @SQL = 'USE [' + @DatabaseName + ']
            UPDATE #IndexFrag
            SET IndexName = i.name, SchemaName = s.name, TableName = o.name, AllowPageLocks = i.allow_page_locks
            FROM #IndexFrag ti
                INNER JOIN sys.objects o ON ti.ObjectID = o.object_id
                INNER JOIN sys.schemas s ON o.schema_id = s.schema_id
                INNER JOIN sys.indexes i ON o.object_id = i.object_id
            WHERE ti.DatabaseName = ' + CHAR(39) + @DatabaseName + CHAR(39) + ' AND i.index_id = ti.IndexID ';
        EXEC (@SQL);

        FETCH NEXT FROM ReorgIndexCursor01 INTO @DatabaseName, @DatabaseID;
    END

    CLOSE ReorgIndexCursor01;
    DEALLOCATE ReorgIndexCursor01;

    --Update the PNGCORP_IndexList table
    UPDATE msdb.dbo.PNGCORP_IndexList
    SET FragmentationPerc = f.FragmentationPerc,
        Pages = f.Pages,
        Records = f.Records,
        LastChecked = GETDATE()
    FROM #IndexFrag f
        INNER JOIN msdb.dbo.PNGCORP_IndexList il
            ON il.DatabaseName = f.DatabaseName
           AND il.ObjectID = f.ObjectID
           AND il.IndexID = f.IndexID
           AND il.PartitionNumber = f.PartitionNumber;

    --Insert new indexes into the PNGCORP_IndexList
    INSERT INTO msdb.dbo.PNGCORP_IndexList (DatabaseName, ObjectID, IndexID, PartitionNumber, FragmentationPerc, Pages, Records, IndexName, SchemaName, TableName, AllowPageLocks, LastChecked)
    SELECT DatabaseName, ObjectID, IndexID, PartitionNumber, FragmentationPerc, Pages, Records, IndexName, SchemaName, TableName, AllowPageLocks, GETDATE()
    FROM #IndexFrag f
    WHERE ( SELECT COUNT(*)
            FROM msdb.dbo.PNGCORP_IndexList il
            WHERE il.DatabaseName = f.DatabaseName
              AND il.ObjectID = f.ObjectID
              AND il.IndexID = f.IndexID
              AND il.PartitionNumber = f.PartitionNumber) = 0;
END

--Get the tables we need to reindex
INSERT INTO #TableList (DatabaseName, SchemaName, TableName, Records)
SELECT DatabaseName, SchemaName, TableName, MAX(Records)
FROM msdb.dbo.PNGCORP_IndexList
WHERE FragmentationPerc >= @ReorgThreshold
  AND Records >= @Rows
GROUP BY DatabaseName, SchemaName, TableName
ORDER BY MAX(Records) DESC

--Cycle through the problem indexes and insert them into the PNGCORP_IndexReorg# tables.
SET @ThreadCounter = 1;
IF @Threads > 5 SET @Threads = 5;

TRUNCATE TABLE msdb.dbo.PNGCORP_IndexReorg1;
TRUNCATE TABLE msdb.dbo.PNGCORP_IndexReorg2;
TRUNCATE TABLE msdb.dbo.PNGCORP_IndexReorg3;
TRUNCATE TABLE msdb.dbo.PNGCORP_IndexReorg4;
TRUNCATE TABLE msdb.dbo.PNGCORP_IndexReorg5;

DECLARE ReorgIndexCursor02 CURSOR FOR
    SELECT DatabaseName, SchemaName, TableName
    FROM #TableList
    ORDER BY Records DESC;

OPEN ReorgIndexCursor02;
FETCH NEXT FROM ReorgIndexCursor02 INTO @DatabaseName, @SchemaName, @TableName;

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @SQL = ' INSERT INTO msdb.dbo.PNGCORP_IndexReorg' + CAST(@ThreadCounter AS VARCHAR(1)) + ' (DatabaseName, SchemaName, TableName, IndexName, AllowPageLocks)
        SELECT DISTINCT i.DatabaseName, i.SchemaName, i.TableName, i.IndexName, i.AllowPageLocks
        FROM msdb.dbo.PNGCORP_IndexList i
            INNER JOIN #TableList t
                ON t.DatabaseName = i.DatabaseName
               AND t.SchemaName = i.SchemaName
               AND t.TableName = i.TableName
        WHERE i.DatabaseName = ''' + @DatabaseName + '''
          AND i.SchemaName = ''' + @SchemaName + '''
          AND i.TableName = ''' + @TableName + '''
          AND i.FragmentationPerc >= ' + CAST(@ReorgThreshold AS VARCHAR(25)) + '
          AND i.Records >= ' + CAST(@Rows AS VARCHAR(25)) + '; ';
    EXEC (@SQL);

    SET @ThreadCounter = @ThreadCounter + 1;
    IF @ThreadCounter > @Threads SET @ThreadCounter = 1;

    FETCH NEXT FROM ReorgIndexCursor02 INTO @DatabaseName, @SchemaName, @TableName;
END

CLOSE ReorgIndexCursor02;
DEALLOCATE ReorgIndexCursor02;

DROP TABLE #TableList;
DROP TABLE #IndexFrag;

--Start the index jobs
IF @ByPassDefrag = 0
BEGIN
    EXEC msdb.dbo.sp_start_job @Job_Name = 'Database Maintenance.IndexReorg1';
    IF @Threads >= 2 EXEC msdb.dbo.sp_start_job @Job_Name = 'Database Maintenance.IndexReorg2';
    IF @Threads >= 3 EXEC msdb.dbo.sp_start_job @Job_Name = 'Database Maintenance.IndexReorg3';
    IF @Threads >= 4 EXEC msdb.dbo.sp_start_job @Job_Name = 'Database Maintenance.IndexReorg4';
    IF @Threads = 5 EXEC msdb.dbo.sp_start_job @Job_Name = 'Database Maintenance.IndexReorg5';
END
[/code]

What we are doing here is finding all of the indexes with fragmentation > 30% and putting them into 5 different tables. Then we kick off five other jobs to perform the index maintenance. On some days, this job executes perfectly. It normally takes about an hour to run (there are some large databases on this server). On other days, it takes less than 10 seconds to execute and the code that updates the index list isn't executed. But what is really strange is that the last couple of lines of code are ALWAYS executed and the five "IndexReorg" jobs are kicked off.

I have the same issue with a db check job. Some days it executes, some days it doesn't. SQL Agent always reports that the job executed successfully, but you can tell by the runtime that nothing was done.

[code="sql"]
SET NOCOUNT ON

DECLARE @SQL NVARCHAR(MAX)
DECLARE @DatabaseName VARCHAR(255)

DECLARE CheckDatabaseIntegrityCursor CURSOR FOR
    SELECT [name]
    FROM master.sys.databases
    WHERE [name] NOT IN ('model', 'tempdb')
    ORDER BY [name]

OPEN CheckDatabaseIntegrityCursor
FETCH NEXT FROM CheckDatabaseIntegrityCursor INTO @DatabaseName

WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @DatabaseName + ' ---------------------------------------------------------------'

    SELECT @SQL = '
        USE [' + @DatabaseName + ']
        DBCC CHECKDB(N' + CHAR(39) + @DatabaseName + CHAR(39) + ')
    '
    PRINT @SQL
    EXEC master.dbo.sp_executesql @SQL

    FETCH NEXT FROM CheckDatabaseIntegrityCursor INTO @DatabaseName
END

CLOSE CheckDatabaseIntegrityCursor
DEALLOCATE CheckDatabaseIntegrityCursor
[/code]

I've seen a few posts on various forums where folks have reported such issues, but nobody has any solution. Anyone else seen anything like this?
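
To separate the suspiciously short runs from the normal ones across the affected jobs, the Agent history tables can be queried directly. A sketch; the job name filter is a placeholder:

[code="sql"]
SELECT j.name, h.run_date, h.run_time,
       h.run_duration,            -- HHMMSS stored as an integer
       h.run_status, h.message
FROM msdb.dbo.sysjobhistory h
INNER JOIN msdb.dbo.sysjobs j ON j.job_id = h.job_id
WHERE h.step_id = 0               -- job outcome rows only
  AND j.name LIKE N'Database Maintenance%'   -- placeholder filter
ORDER BY h.run_date DESC, h.run_time DESC;
[/code]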

Kill alter Index rebuild Jobs

Posted: 01 Mar 2013 01:23 AM PST

Hi all

What is your opinion about killing ALTER INDEX rebuild jobs? Can this be done safely, never, or does it depend? I'm asking because recently we had to kill an ALTER INDEX rebuild job because it had a huge performance impact and the system was performing poorly. After killing the job the system was almost unusable and under heavy load from the rollback. So, should I never kill an ALTER INDEX job again?

Regards, Christian
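
When a kill has already been issued, the rollback's progress can at least be watched rather than guessed at. A sketch, with 72 standing in for whatever session_id is rolling back:

[code="sql"]
-- Progress of the rollback for the killed session (session_id 72 is illustrative)
KILL 72 WITH STATUSONLY;

-- Alternatively, percent_complete is populated for ROLLBACK in sys.dm_exec_requests
SELECT session_id, command, status, percent_complete, estimated_completion_time
FROM sys.dm_exec_requests
WHERE session_id = 72;
[/code]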

How to change the named instance name to default instance name on SQL Server 2008 R2?

Posted: 01 Mar 2013 01:15 AM PST

I recently deployed SQL Server 2008 R2 on my Windows Server 2003 box as a named instance; a 2005 default instance is already running on this box. Now I am going to uninstall the 2005 server, so I would like to change the 2008 R2 named instance to the default instance. Is that possible? Please give me your suggestions.

Thanks,
Giri

MsxEncryptChannelOptions and Multi-server administration

Posted: 01 Mar 2013 01:30 AM PST

Setting up the multi-server administration failed for me because the MsxEncryptChannelOptions registry value on the server I am attempting to make the target is set to '2'. According to Microsoft and everyone else on the interwebz, changing it to '0' resolves this issue. My question is: what are the effects of changing the value to '0'? Seeing as this is a production server, making changes to the registry without knowing what else it will affect kinda makes me nervous. What else uses this registry entry, or is it exclusively used by the multi-server admin functions?

How to change the data retention period from 5 yr to 20yrs in SQL 2005

Posted: 01 Mar 2013 01:34 AM PST

Hi guys, I am working on a scenario to figure out how to change the data retention period from 5 years to 20 years in SQL 2005. When this server was set up, the retention period was set by someone to 5 years, but this year they want to change it to 20 years, so how do I do that? Any ideas? Thanks for the great help.

Need help with a select, insert

Posted: 22 Feb 2013 09:00 AM PST

GOAL: insert into tblcompanyassignments from tbluserassignments. Only attempt the insert for userids that exist in both tbluserassignments and tblexistingusers. For existing users, further filter the insert attempt down to only companies that exist. Final requirement: don't attempt to insert if there is already an existing user/company combination.

tbluserassignments (username, companyname)
record 1: userA, companyA
record 2: userA, companyB
record 3: userB, companyB
record 4: userB, companyC
record 5: userC, companyA
record 6: userC, companyB

tblexistingusers (userid)
record 1: userA
record 2: userB

tblexistingcompanies (companyid)
record 1: companyA
record 2: companyB

tblcompanyassignments (userid, companyid)
record 1: userA, companyA
record 2: userB, companyB
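
A sketch of the insert described above. The join columns assume that tbluserassignments.username/companyname hold the same values as tblexistingusers.userid and tblexistingcompanies.companyid, which is how the sample records read; adjust the names to the real schema.

[code="sql"]
INSERT INTO tblcompanyassignments (userid, companyid)
SELECT DISTINCT ua.username, ua.companyname
FROM tbluserassignments ua
INNER JOIN tblexistingusers     eu ON eu.userid    = ua.username      -- user must exist
INNER JOIN tblexistingcompanies ec ON ec.companyid = ua.companyname   -- company must exist
WHERE NOT EXISTS (SELECT 1                                            -- skip existing pairs
                  FROM tblcompanyassignments ca
                  WHERE ca.userid    = ua.username
                    AND ca.companyid = ua.companyname);
[/code]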

Problem with sp_refreshsqlmodule

Posted: 01 Mar 2013 03:27 AM PST

Hi,

We are using sp_refreshsqlmodule to find out if there are any integrity issues with the database. The process is that we generate scripts that drop and recreate stored procedures, functions or views every time we change any of those objects. I noticed today that when I execute the sp_refreshsqlmodule procedure after running my scripts, SQL Server appears to revert the changes I made. For example, if I dropped a stored procedure and created it again using a drop-and-create script, SQL Server reverts the change I made with my new script for that procedure. Has anyone else faced a similar issue? I am interested in knowing the underlying cause of why SQL Server is behaving in this manner.

Thanks!

DB Mail - Controling What apears in the FROM setcion of an email

Posted: 27 Feb 2013 12:52 AM PST

We recently upgraded from SQL 2005 to SQL 2008 R2. I used DB Mail on the SQL 2005 box and am doing the same on the 2008 R2 box. I have created a profile and account in DB Mail on the 2008 R2 server that is identical to the profile and account on the 2005 server, with the exception that anywhere I had 'SS2005' in the account or profile on my 2005 box I now have 2008R2 on the 2008 R2 box. I am using the exact same (unchanged) email address on each, and the same mail server IP and port. I have taken screenshots, checked and double-checked, and I can find nothing different between the two, and yet an email from each comes into Outlook with very different FROM values. Using the exact same call to msdb.dbo.sp_send_dbmail on each box (with the difference being the subject line), on my 2005 box the FROM is shown as SQL04\2005.DBA, while the same call to this SP on the 2008 R2 server produces an email with my name (format is FirstName LastName) in the FROM field. Does anyone know if something changed with DB Mail between 2005 and 2008 R2 that could cause this? It's important because this tag line in the FROM field lets me easily determine which server an email is coming from, as we have a number of SQL Servers that use the sp_send_dbmail SP to send info throughout the day and night. Thoughts? If a screenshot is necessary I can provide one, but my IT guy tells me I have to use fake values for the various fields for security reasons, and that's why I haven't included it in the post. If it's necessary I can do it though.

Query Help

Posted: 01 Mar 2013 01:24 AM PST

Hello everyone

I need some help to develop my logic.

[code="sql"]
create table #x2 (CourseID varchar(10), ScheduleTerms varchar(2))

insert into #x2 values ('000541','Q1')
insert into #x2 values ('000541','Q2')
insert into #x2 values ('000541','Q3')
insert into #x2 values ('000541','Q4')
insert into #x2 values ('001951','Q1')
insert into #x2 values ('001951','Q2')
insert into #x2 values ('001951','Q3')
insert into #x2 values ('001951','Q4')
[/code]

It gives me output as below:

[code="plain"]
CourseID  ScheduleTerms
000541    Q1
000541    Q2
000541    Q3
000541    Q4
001951    Q1
001951    Q2
001951    Q3
001951    Q4
[/code]

The desired output is:

[code="plain"]
CourseID  Q1  Q2  Q3  Q4
000541    1   1   1   1
001951    1   1   1   1
[/code]

Please help me display this in the bit format of the desired output.

Thanks
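
One way to get that shape is a conditional aggregate per term; a sketch against the #x2 sample above (a dynamic, parameter-driven version would build the same CASE expressions from a tally or terms table):

[code="sql"]
SELECT CourseID,
       Q1 = MAX(CASE WHEN ScheduleTerms = 'Q1' THEN 1 ELSE 0 END),
       Q2 = MAX(CASE WHEN ScheduleTerms = 'Q2' THEN 1 ELSE 0 END),
       Q3 = MAX(CASE WHEN ScheduleTerms = 'Q3' THEN 1 ELSE 0 END),
       Q4 = MAX(CASE WHEN ScheduleTerms = 'Q4' THEN 1 ELSE 0 END)
FROM #x2
GROUP BY CourseID
ORDER BY CourseID;
[/code]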

Testing scenario, physical reads.

Posted: 25 Feb 2013 08:16 PM PST

Hello, in a testing scenario I want to have access to the physical reads for every individual query. With STATISTICS IO on, I get the physical reads in text format. Is it possible to get these numbers in a table format, or to query for these numbers?

Thanks,
Ben Brugman
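
The plan-cache DMVs expose physical reads in table form, with the caveat that the numbers are cumulative per cached plan (and disappear when the plan leaves the cache) rather than per individual execution. A sketch:

[code="sql"]
SELECT TOP (50)
       qs.execution_count,
       qs.total_physical_reads,
       qs.last_physical_reads,
       qs.total_logical_reads,
       st.[text] AS query_text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY qs.total_physical_reads DESC;
[/code]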

how to store DB roles before restoring db

Posted: 28 Feb 2013 10:14 PM PST

Hi guys, is there any way we can script out or store the database roles and permissions before we restore that DB? If anyone can help me out with this, thanks, and happy Friday.
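
A rough sketch that generates re-runnable statements from the catalog views before the restore; it covers role memberships and plain GRANT/DENY statements but not every edge case (WITH GRANT OPTION and column-level permissions would need extra handling):

[code="sql"]
-- Role memberships (sp_addrolemember syntax for SQL 2008)
SELECT 'EXEC sp_addrolemember N''' + r.name + ''', N''' + m.name + ''';'
FROM sys.database_role_members drm
INNER JOIN sys.database_principals r ON r.principal_id = drm.role_principal_id
INNER JOIN sys.database_principals m ON m.principal_id = drm.member_principal_id;

-- Database- and object-level permissions
SELECT p.state_desc + N' ' + p.permission_name
       + CASE WHEN p.class = 1
              THEN N' ON ' + QUOTENAME(OBJECT_SCHEMA_NAME(p.major_id)) + N'.' + QUOTENAME(OBJECT_NAME(p.major_id))
              ELSE N'' END
       + N' TO ' + QUOTENAME(pr.name) + N';'
FROM sys.database_permissions p
INNER JOIN sys.database_principals pr ON pr.principal_id = p.grantee_principal_id
WHERE p.state IN ('G', 'D');          -- plain GRANT / DENY only
[/code]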

Script Out Database Mail Settings?

Posted: 08 Sep 2010 06:30 AM PDT

Has anyone bothered to create a script to reverse engineer / script out your EXISTING Database Mail settings? I set up a profile to use Gmail, and it seems logical for me to export the settings to a script, then run the script on my laptop, other servers, etc. There's no built-in option, so I figured I'd ping the forum before I do it myself. By the way, my google-fu [i]is strong[/i]; there are example scripts where you fill in the blanks and examples of how to set up dbmail, but I did not find anything that scripts out existing settings.

[img]http://www.stormrage.com/SQLStuff/mail_gmail.gif[/img]
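
The existing settings live in msdb and can at least be read from there as the raw material for a build script (the SMTP password is not recoverable this way); the recreate side would call msdb.dbo.sysmail_add_profile_sp, sysmail_add_account_sp and sysmail_add_profileaccount_sp with these values. A sketch of the read side:

[code="sql"]
SELECT p.name AS profile_name,
       a.name AS account_name,
       a.email_address, a.display_name, a.replyto_address,
       s.servername, s.port, s.enable_ssl, s.username,
       pa.sequence_number
FROM msdb.dbo.sysmail_profile p
INNER JOIN msdb.dbo.sysmail_profileaccount pa ON pa.profile_id = p.profile_id
INNER JOIN msdb.dbo.sysmail_account a ON a.account_id = pa.account_id
INNER JOIN msdb.dbo.sysmail_server s ON s.account_id = a.account_id
ORDER BY p.name, pa.sequence_number;
[/code]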

Adding new xml tag with a value takeen from another column

Posted: 28 Feb 2013 11:39 PM PST

I have a table with an xml column named xmlvalue. The xml values in that column look like this:

[code="xml"]
<z>
  <a><b>1</b><c>2</c></a>
  <a><b>4</b><c>5</c> ...so many tags similar to the above... </a>
</z>
[/code]

I want to add a new node, <new>value taken from another column</new>, inside the <z><a> for every row in that column, with the <new> tag having a value taken from another column "abc" in the same table for that row (column abc is not an xml column). How can I do this using an XML query or a SQL query?

Thank you
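
XML DML can do this in place, with sql:column pulling the value from the other column of the same row. A sketch; the table name is a placeholder, and the path (/z/a)[1] targets the first <a> per row, so inserting into every <a> would need a loop or a rebuild of the XML:

[code="sql"]
UPDATE dbo.SomeTable            -- table name assumed
SET xmlvalue.modify('
        insert <new>{ sql:column("abc") }</new>
        as first
        into (/z/a)[1]')
WHERE xmlvalue IS NOT NULL;
[/code]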

SSAS caching resulting in inaccurate results being returned

Posted: 29 Jan 2013 12:38 AM PST

I have a question about some weird behavior we're seeing in SSAS 2008 (SP3), and I was wondering if anyone out there would be able to help me.

Short Story:
SSAS seems to be doing some level of caching on its own. However, this caching is resulting in inaccurate results being returned in some queries for our business users. That is leading them to question the data and lose confidence in the queries that they are running. I'm wondering if there are settings we can change with SSAS to keep it from caching results like this, so the business will see accurate data when running their queries.

Long Story:
We have a measure group set up with about 47 million rows in it that we are trying to query through Report Builder. We pulled in one amount from that measure group. We then pulled in an attribute from a coverage dimension. Everything is good here, and the results are based on the 47 million rows in our measure group.

Next we pull in an attribute from a coverage-specific table (fire in this case). That fire table only contains 5 million rows. That table also is not a direct foreign key to the measure/fact table; it is a referenced dimension through the coverage table that I mentioned earlier. Anyway, pulling in that attribute from the fire table limits the number of rows coming back to only the 5 million rows in our measure that have an associated row in the fire table. Again, everything is working as expected.

Then we add another attribute from another dimension. This pulls in just fine.

Then we realize that we really didn't want that fire attribute added, so we removed the fire attribute from our query. However, our results didn't expand back to look at all 47 million rows. Instead, it is still limiting the result set to only the 5 million rows that have a relationship with the fire table, even though fire is no longer included in our query.

I tried this multiple times with the same results. I then opened a new Report Builder session and tried to create the report from scratch. I did not pull in anything from fire this time, but the results were still limited to just the 5 million rows from fire.

I copied the MDX from the query designer window and ran that in Management Studio. Same results (limited to only fire rows). I verified that there is no reference to the fire dimension anywhere in the MDX. Then I took that MDX and ran it on a different computer, and the results were still limited to just the 5 million rows from fire. And finally I asked a co-worker to do the query, and still the same results.

I did a little internet searching, and I found a way to clear the SSAS cache. I ran that on the SSAS database, and then we got the correct results.

We are not using proactive caching on any of our dimensions or measures.

So, I'm wondering if there is a setting (or settings) I can change with SSAS so it will not cache results like it appears to be doing? And if not, do you have a recommendation on how often we should be clearing out the cache to try to prevent the business users from seeing situations like I described again? It is crucial that the business have confidence in the data and the results of their queries run against the cube!!!!

Thanks a lot for any help anyone may be able to provide!!!

GRANT VIEW DEFINITION

Posted: 28 Feb 2013 06:50 PM PST

Hi all

I am running the following on one of my DBs (I am using SSMS 2012 against SQL 2008 R2):

GRANT VIEW DEFINITION TO [XXX\XXX]

When I check the securables for that user it doesn't list anything as having been granted, and if I run sp_helprotect there is nothing in there either. The command executes successfully. Any ideas?

Thanks
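
The grant itself can be confirmed straight from the catalog, independent of what the SSMS securables page or sp_helprotect chooses to display. A minimal check:

[code="sql"]
SELECT pr.name AS grantee,
       p.class_desc,                 -- DATABASE for a database-wide VIEW DEFINITION
       p.permission_name,
       p.state_desc
FROM sys.database_permissions p
INNER JOIN sys.database_principals pr ON pr.principal_id = p.grantee_principal_id
WHERE p.permission_name = 'VIEW DEFINITION';
[/code]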

Can't get Sp_create_plan_guide to execute successfully

Posted: 28 Feb 2013 07:04 PM PST

Hi,

I am not a DBA. We have a stored procedure used by our app that is using inefficient query plans. The sproc needs rebuilding and has been rebuilt for an upcoming hot-fix, but we need an interim work-around (which cannot include altering the sproc). Re-indexing and updating stats works sporadically but not reliably.

As a last resort I am attempting to use a Plan Guide to force the sproc to use a specific execution plan extracted from when the sproc was functioning well. If I alter the sproc and simply add the OPTION (USE PLAN N'<XML>') to the problematic statement, the sproc is altered successfully, the sproc runs efficiently and the application runs well again.

However, if I use Sp_create_plan_guide, I keep getting the below error even with the exact same plan xml that works by altering the sproc:

[font="Courier New"]Error detected at line 3: Incorrect syntax near the keyword 'ASC'[/font]

I think it relates to this part of the plan (there are a few similar occurrences but this is the first):

[code="xml"]
<DefinedValue>
  <ColumnReference Column="Expr1185" />
  <ScalarOperator ScalarString="CASE WHEN [@SortDirection]=N''ASC'' THEN [DataBaseName].[dbo].[Delegate].[DelegateIdentity] ELSE NULL END">
    <IF>
      <Condition>
        <ScalarOperator>
          <Compare CompareOp="EQ">
            <ScalarOperator>
              <Identifier>
                <ColumnReference Column="@SortDirection" />
              </Identifier>
            </ScalarOperator>
            <ScalarOperator>
              <Const ConstValue="N''ASC''" />
            </ScalarOperator>
          </Compare>
        </ScalarOperator>
      </Condition>
      <Then>
        <ScalarOperator>
          <Identifier>
            <ColumnReference Database="[DataBaseName]" Schema="[dbo]" Table="[Delegate]" Column="DelegateIdentity" />
          </Identifier>
        </ScalarOperator>
      </Then>
      <Else>
        <ScalarOperator>
          <Const ConstValue="NULL" />
        </ScalarOperator>
      </Else>
    </IF>
  </ScalarOperator>
</DefinedValue>
[/code]

which relates to this part of the sproc:

[code="sql"]
ORDER BY CTE.LocationFullName ASC
       , CASE WHEN @SortDirection =  N''ASC'' THEN CTE.DelegateIdentity END ASC
       , CASE WHEN @SortDirection <> N''ASC'' THEN CTE.DelegateIdentity END DESC
[/code]

However, if I go through and remove all the '' (2 single quotes) then the error goes away, but I get another error relating to another literal value in the script. If I change all occurrences of ''ASC'' to just ASC then Sp_create_plan_guide runs successfully and the plan guide is successfully mapped to the statement (as confirmed with Profiler's PlanGuideSuccessful event). But I am reluctant to use this in production because I have 'hacked' the xml plan, which MS advise not to do.

I am pretty sure the string I am passing to @hints is correct; that is, if I use PRINT with the same value it comes out as I would expect. However, I did exchange all single quotes for 2 x single quotes in the plan xml that I retrieved from Profiler originally.

Any advice or similar experiences? Let me know if you need the entire syntax I am using, it is quite large.

Thanks in advance.
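
For reference, the shape of the call is sketched below with placeholder names. One likely culprit for the quoting trouble: when the plan is embedded directly in the procedure as OPTION (USE PLAN N'...'), single quotes inside the plan XML are doubled once, but when the same hint is passed as the @hints string of sp_create_plan_guide the whole OPTION clause is itself a string literal, so those quotes have to be doubled again relative to the inline form.

[code="sql"]
EXEC sp_create_plan_guide
    @name            = N'PG_Delegate_Sort',          -- illustrative name
    @stmt            = N'SELECT ... ORDER BY ...',   -- exact statement text from the sproc
    @type            = N'OBJECT',
    @module_or_batch = N'dbo.MyProblemProc',         -- placeholder procedure name
    @params          = NULL,
    @hints           = N'OPTION (USE PLAN N''<ShowPlanXML ...>'')';
    -- every single quote inside the plan XML must be doubled once more
    -- relative to the version that worked inline in the procedure
[/code]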
