Wednesday, April 24, 2013

[how to] Creating a PostgreSQL SERIAL column using pgAdmin3


Creating a PostgreSQL SERIAL column using pgAdmin3

Posted: 24 Apr 2013 07:05 PM PDT

When I use pgAdmin3 to create a column of type serial, why does it instead create nextval('prices_id_seq'::regclass)?

Should we keep the type definition that pgAdmin produces? Even creating the table manually with SQL that defines the id column as type serial causes the pgAdmin SQL pane to display it as nextval('prices_id_seq'::regclass).

pgAdmin SQL Pane

id integer NOT NULL DEFAULT nextval('prices_id_seq'::regclass)  
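For context, this is expected behavior: serial is not a real type but shorthand that PostgreSQL expands at table-creation time, and pgAdmin just shows you the expansion. A minimal sketch of the equivalence, using the table and sequence names from the question:

-- What "id serial" expands to under the hood
CREATE SEQUENCE prices_id_seq;
CREATE TABLE prices (
    id integer NOT NULL DEFAULT nextval('prices_id_seq'::regclass)
);
ALTER SEQUENCE prices_id_seq OWNED BY prices.id;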

How do you correct a corrupted index in Oracle?

Posted: 24 Apr 2013 06:28 PM PDT

I have this ORA error in our logs:

Caused by: java.sql.BatchUpdateException: ORA-01578: ORACLE data block corrupted (file # 8, block # 22921) ORA-01110: data file 8: '/data/app/oracle/oradata/MYSRVR/datafile/o1_mf_mysrvr_88m82mdj.dbf'

I tried running this in sqlplus:

select segment_name,segment_type,tablespace_name, owner from sys.dba_extents where file_id=8 and 22921 between block_id and block_id + blocks -1;

output is:

SEGMENT_NAME        SEGMENT_TYPE  TABLESPACE_NAME  OWNER
------------------  ------------  ---------------  ------
PK_ZXCATSET_CATID   INDEX         MYSRVR_IDX_TB    MYSRVR

So the corrupted segment is an index, and that segment name is the actual index name (I think :/). I tried this to rebuild the index:

SQL> alter index PK_ZXCATSET_CATID rebuild tablespace MYSRVR_IDX_TB;

  • ERROR at line 1: ORA-01418: specified index does not exist

Yet if I run "select index_name from dba_indexes", that index name shows up. What am I doing wrong? This is an Oracle 10.2 server.

Thanks in advance!

EDIT: If I qualify it as MYSRVR.PK_ZXCATSET_CATID the alter completes... but I still don't think my problem is fixed!
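The behavior in the EDIT suggests the index is owned by the MYSRVR schema rather than by the connected user, so the rebuild has to use the owner-qualified name from dba_extents:

-- Rebuild using the owner-qualified index name reported by dba_extents
ALTER INDEX MYSRVR.PK_ZXCATSET_CATID REBUILD TABLESPACE MYSRVR_IDX_TB;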

In SQL Server 2008R2 can I force a View to use objects within the user's default schema instead of the schema in which the View exists?

Posted: 24 Apr 2013 05:30 PM PDT

A bit of background. I have a base application and most clients use it as standard. However some clients have small code and database customisations. Each of these clients has their own branch and maintenance can be tricky.

I want to consolidate all these into a single database structure (not a single database - we aren't doing multi-tenancy) to enable upgrades to be applied in a much more uniform fashion. I'm still at the proof of concept stage, but the route I was going down would be to have the standard objects stay in the schema they currently exist in (mostly dbo) and have the custom objects reside in a schema for each client.

For example, I could have dbo.users and client1.users which has some additional columns. If I set the default schema for the client to be "client1" then the following query

SELECT * FROM users  

will return data from the client1 schema or the dbo schema depending on which login is connected. This is absolutely perfect for what I'm trying to achieve.
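A minimal sketch of that setup, with hypothetical login and column names, just to make the mechanism concrete:

-- Standard table plus a customised per-client copy
CREATE SCHEMA client1;
GO
CREATE TABLE dbo.users     (userID int, username varchar(50));
CREATE TABLE client1.users (userID int, username varchar(50), extraCol int);
GO
-- Unqualified "users" now resolves according to each login's default schema
ALTER USER client1_user WITH DEFAULT_SCHEMA = client1;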

The problem I'm running into is with Views. I have many views which are in the dbo schema and refer to the Users table. No matter which user I connect to the database as, these views always select from dbo.users.

So I'm guessing the question I have is: Can I prefix the tables in the view with some variable like "DEFAULT"? e.g.

SELECT u.username, u.email, a.level  FROM DEFAULT.users u INNER JOIN DEFAULT.accessLevels a ON u.accessID = a.accessID  

If this isn't possible and I'm totally barking up the wrong tree, do you have any suggestions as to how I can achieve what I'm setting out to do?

Many thanks.

One to at most one relation or multiple null columns?

Posted: 24 Apr 2013 04:57 PM PDT

I have a table with many attributes that can be grouped into logical groups, which makes the idea of putting these attributes in separate tables attractive. The logical groups don't represent entities themselves. The entity is read far more often than it is written, so a single large table looks appropriate; it also avoids joins in queries. The part I don't like about this approach is the large number of nullable fields that results. Which solution is better?
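For concreteness, here is a sketch of the two designs being weighed (all names hypothetical):

-- Option A: one wide table, nullable columns per logical group
CREATE TABLE entity (
    id        int PRIMARY KEY,
    core_attr varchar(50) NOT NULL,
    grp1_a    varchar(50) NULL,   -- logical group 1
    grp2_a    varchar(50) NULL    -- logical group 2
);

-- Option B: a one-to-at-most-one side table per group, no nullable clutter
CREATE TABLE entity_grp1 (
    id     int PRIMARY KEY REFERENCES entity(id),
    grp1_a varchar(50) NOT NULL
);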

Table partitioning problem

Posted: 24 Apr 2013 04:55 PM PDT

I need your help with a partitioning problem. We are using a log_event table which logs the many events sent by our players (subscription, connection, play, share, etc.). We insert 5,000 rows/s into this table.

This table is quite big (more than 1 billion rows) and its structure is:

CREATE TABLE IF NOT EXISTS `log_event` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `id_event` smallint(6) NOT NULL,
  `id_inscri` bigint(20) NOT NULL,
  `date` int(11) NOT NULL,
  `data` bigint(20) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `id_event_2` (`id_event`,`data`),
  KEY `id_inscri` (`id_inscri`),
  KEY `date` (`date`),
  KEY `id_event_4` (`id_event`,`date`,`data`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;

In order to increase performance, we would like to partition this table, but :

  • we don't have a real PRIMARY KEY => we are using an AUTO_INCREMENT column which is useless in our SELECT queries. We never use this column, but it's the only way we have found to have a PK.

  • we would like to partition on the columns 'id_event, date, data', but we don't know how to do that because we get the error "#1503 - A PRIMARY KEY must include all columns in the table's partitioning function"

Do you have any ideas on how to increase performance without cutting the possibility of quick SELECT ?
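One hedged way around the #1503 error is to widen the primary key so it contains the partitioning column, then partition on that column; MySQL requires every unique key to include all columns of the partitioning function. A sketch, assuming range partitioning on the unix-timestamp date column:

-- Widen the PK so it covers the partitioning column
ALTER TABLE log_event
  DROP PRIMARY KEY,
  ADD PRIMARY KEY (id, `date`);

-- Then range-partition on the integer date column
ALTER TABLE log_event
  PARTITION BY RANGE (`date`) (
    PARTITION p201301 VALUES LESS THAN (UNIX_TIMESTAMP('2013-02-01')),
    PARTITION pmax    VALUES LESS THAN MAXVALUE
  );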

Thanks !

François

After moving database (backup, restore), I have to re-add user

Posted: 24 Apr 2013 04:10 PM PDT

I occasionally move a database (SQL Express 2012) from a development machine to a server, or vice versa, using database backup and restore in SQL Server Management Studio.

Whenever I do this, the applications on the target machine cannot access the database until I delete the user they use, "george", from the database's users (Database > Security > Users in SQL Server Management Studio) and re-add it as owner under Security > Logins > george > Properties > User Mapping.

Is there a better way to do this? It seems a little convoluted.
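What is described is the classic "orphaned user" symptom: after a restore on another server, the database user and the server login no longer share the same SID. The usual one-line fix, run in the restored database, avoids the delete/re-add dance:

-- Re-map the database user to the server login of the same name
ALTER USER george WITH LOGIN = george;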

MySQL stored routine performance while using PREPARE

Posted: 24 Apr 2013 07:11 PM PDT

Instead of maintaining stored routines for each database in my current environment, I have decided to create a separate database just for stored routine storage. I mainly use the routines for reporting, and all databases have the same schema structure.

Stored routine body:

PROCEDURE `sp_name`(IN db_name  VARCHAR(50)
                   ,IN stardate DATE
                   ,IN enddate  DATE)
......
SET @sql_query = CONCAT(
'SELECT
    .....
    .....
FROM ', db_name, '.view
WHERE
    .... stardate, .... enddate .....
GROUP BY
    ..... ,.....');

PREPARE sql_query FROM @sql_query;
EXECUTE sql_query;
DEALLOCATE PREPARE sql_query;

Now I am facing a performance problem. Executing the same query directly in each database takes less than 1 second; via the prepared statement it takes 5-6 seconds. Could someone please advise whether there is room for improvement?

Shrinking log files with SMO

Posted: 24 Apr 2013 02:09 PM PDT

I know I can use the following t-SQL to shrink a log file:

BACKUP LOG db1 TO DISK = '\\server\share\db1_log1.trn';
DBCC ShrinkFile([db1_log], 0);

How do I do the same with SMO? I tried:

$server = new-Object Microsoft.SqlServer.Management.Smo.Server()

$dbBackup = new-object Microsoft.SqlServer.Management.Smo.Backup
$dbBackup.Action = "Log"
$dbBackup.Database = "db1"
$dbBackup.Devices.AddDevice("\\server\share\db1_log1.trn", "File")
$dbBackup.SqlBackup($server)

$db = $server.Databases.Item("db1")
$db.TruncateLog()

But here, I found that the TruncateLog() method does not work with anything higher than SQL 2005. That link recommended using:

$db.RecoveryModel = RecoveryModel.Simple
$db.Alter()
$db.LogFiles[0].Shrink(0, ShrinkMethod.TruncateOnly)

However, switching to Simple Recovery Mode is going to break my log chain and I don't want to do that. Shrinking the log file doesn't break my log chain so I'm looking for an equivalent SMO method to do that. Does such a method exist?

How best to maintain SQL log file sizes

Posted: 24 Apr 2013 04:07 PM PDT

I'm somewhat of a new DBA and I'm managing a SQL Server 2012 instance that has a fair amount of activity. I'm running in Full Recovery mode because we need point in time recovery.

Right now, I'm taking a full backup of the databases and logs every day at 5am. Some of the log files have ballooned to 300 GB, and even after taking a backup they don't reduce in size. I can get them to reduce in size by running something similar to:

BACKUP LOG db1 TO DISK = '\\server\share\db1_log1.trn';
DBCC ShrinkFile([db1_log], 0);

BACKUP LOG db1 TO DISK = '\\server\share\db1_log2.trn';
DBCC ShrinkFile([db1_log], 0);

BACKUP LOG db1 TO DISK = '\\server\share\db1_log3.trn';
DBCC ShrinkFile([db1_log], 0);

When I check the LSNs of the backup files I see something like:

RESTORE HEADERONLY FROM DISK = N'\\server\share\db1_log1.trn'
FirstLSN:  15781000014686200001
SecondLSN: 15802000000665000001

RESTORE HEADERONLY FROM DISK = N'\\server\share\db1_log2.trn'
FirstLSN:  15802000000665000001
SecondLSN: 15805000000004100001

RESTORE HEADERONLY FROM DISK = N'\\server\share\db1_log3.trn'
FirstLSN:  15805000000004100001
SecondLSN: 15808000000004200001

I don't believe I'm breaking my log chain by shrinking the log files. Reading up on this, I do believe I'm hurting my performance because those shrunk log files have to re-grow themselves.

Questions:

  1. Why doesn't the log file shrink after my backups? Is it because there are uncommitted transactions?
  2. At first I was thinking I should shrink the log files after every 5:00 AM backup. After reading up on how that's bad for performance I now believe that I need to take regular log backups every couple of hours during the day. Is that correct?
  3. My normal full backup of the database/logs happens every day at 5:00 AM and sometimes takes 3 hours. If I schedule the log backups to happen every hour, what will happen when the log backup collides with the 5:00 AM backup?
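On question 1, a quick way to see what is pinning the log (an active transaction, a pending log backup, replication, etc.) is the log_reuse_wait_desc column:

-- Shows why the transaction log cannot be truncated/reused
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'db1';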

DB2 db2fm proccess

Posted: 24 Apr 2013 03:15 PM PDT

The server has been up for 365 days; however, I see some weird repeated processes.

Are these normal?

ps -fea | grep db2fm

db2inst1  643284  229516  29   May 25      - 212564:06 /home/db2inst1/sqllib/bin/db2fm -i db2inst1 -m /home/db2inst1/sqllib/lib/libdb2gcf.a -S
db2inst1  671770  229516  56   May 14      - 227447:02 /home/db2inst1/sqllib/bin/db2fm -i db2inst1 -m /home/db2inst1/sqllib/lib/libdb2gcf.a -S
db2inst1  757794 1237058   0   Apr 19  pts/7  0:00 /bin/sh /home/db2inst1/sqllib/bin/db2cc
db2inst1  774232  229516  30   Sep 25      - 94218:54 /home/db2inst1/sqllib/bin/db2fm -i db2inst1 -m /home/db2inst1/sqllib/lib/libdb2gcf.a -S
db2inst1  962750  229516  30   Jul 18      - 145256:01 /home/db2inst1/sqllib/bin/db2fm -i db2inst1 -m /home/db2inst1/sqllib/lib/libdb2gcf.a -S
db2inst1  999450  229516  29   Aug 17      - 117710:27 /home/db2inst1/sqllib/bin/db2fm -i db2inst1 -m /home/db2inst1/sqllib/lib/libdb2gcf.a -S
db2inst1 1179898  229516  58   Nov 02      - 75788:49 /home/db2inst1/sqllib/bin/db2fm -i db2inst1 -m /home/db2inst1/sqllib/lib/libdb2gcf.a -S

ps -fea | grep db2agent

db2inst1  409770  680100   0   Apr 19      -  0:00 db2agent (DATABASEA) 0
db2inst1  450750  778412   0   Apr 18      -  0:03 db2agent (idle) 0
db2inst1  618688  680100   0   Apr 19      -  0:00 db2agent (idle) 0
db2inst1  651440  680100   0   Nov 17      -  0:20 db2agent (DATABASEA) 0
db2inst1  655508  491676   0   Apr 19      -  0:04 db2agent (idle) 0
db2inst1  684038  680100   0   Mar 23      -  0:03 db2agent (DATABASEA) 0
db2inst1  790706  491676   0   Apr 19      -  0:00 db2agent (idle) 0
db2inst1  880672  680100   0   Apr 19      -  0:00 db2agent (DATABASEA) 0
db2inst1  913438  778412   0   Nov 16      -  0:20 db2agent (idle) 0
db2inst1  946182  491676   0   Apr 19      -  0:00 db2agent (DATABASEA) 0
db2inst1  991312  778412   0   Apr 17      -  0:16 db2agent (idle) 0
db2inst1 1077466  491676   0   Apr 19      -  0:00 db2agent (DATABASEA) 0
db2inst1 1134726  680100   0   Apr 19      -  0:00 db2agent (DATABASEA) 0
db2inst1 1142964  491676   0   Apr 19      -  0:00 db2agent (idle) 0
db2inst1 1233112  491676   0   Apr 19      -  0:00 db2agent (idle) 0
db2inst1 1261748  778412   0   Jun 15      -  0:18 db2agent (idle) 0
db2inst1 1384678  778412   0   Mar 23      -  0:27 db2agent (idle) 0
db2inst1 1404936  680100   0   Apr 19      -  0:00 db2agent (DATABASEA) 0
db2inst1 1421368  778412   0   Mar 22      -  0:04 db2agent (idle) 0
db2inst1 1445936  491676   0   Apr 19      -  0:00 db2agent (DATABASEA) 0
db2inst1 1482864  491676   0   Jun 16      -  0:31 db2agent (idle) 0
db2inst1 1503440  778412   0   Jun 15      -  0:56 db2agent (idle) 0
db2inst1 1519842  778412   0   Mar 23      -  0:00 db2agent (DATABASEA) 0
db2inst1 1531946  680100   0   Apr 19      -  0:00 db2agent (idle) 0
db2inst1 1572884  680100   0   Apr 19      -  0:00 db2agent (idle) 0

Other info

oslevel -g
Fileset                                 Actual Level        Maintenance Level
-----------------------------------------------------------------------------
bos.rte                                 5.3.0.40            5.3.0.0

db2fm -s -S
Gcf module 'fault monitor' is NOT operable
Gcf module '/home/db2inst1/sqllib/lib/libdb2gcf.a' state is AVAILABLE

uptime
02:14PM   up 365 days,  12:51,  6 users,  load average: 6.69, 6.89, 6.97

db2level
DB21085I  Instance "db2inst1" uses "64" bits and DB2 code release "SQL08020"
with level identifier "03010106".
Informational tokens are "DB2 v8.1.1.64", "s040812", "U498350", and FixPak "7"

Msg 21, Level 21, State 1, Line 1 Warning: Fatal error 9001

Posted: 24 Apr 2013 02:37 PM PDT

We recently moved to a SQL Server that's on a VPS and every now and then we get a 9001 error when trying to do an update.

Rebooting the server fixes the problem.

I changed the maximum server memory (in MB) to 2000 because it's a 4GB VPS.

But I just don't know what's causing the error.

How to clean up duplicate objects in SYS/SYSTEM

Posted: 24 Apr 2013 11:37 AM PDT

I have a sql query like below to check duplicate objects in SYS/SYSTEM.

select OBJECT_NAME, OBJECT_TYPE
from DBA_OBJECTS
where OBJECT_NAME||OBJECT_TYPE in (select OBJECT_NAME||OBJECT_TYPE
                                   from DBA_OBJECTS
                                   where OWNER='SYS')
  and OWNER='SYSTEM'
  and OBJECT_NAME not in ('AQ$_SCHEDULES_PRIMARY', 'AQ$_SCHEDULES', 'DBMS_REPCAT_AUTH');

This gives me a long list of duplicated objects. My question/problem is: how do I get rid of these objects? What is the best way to approach this?

MySQL - SQL statement to test minimum length of the word 'ft_min_word_len'

Posted: 24 Apr 2013 11:53 AM PDT

I'm trying to understand FULLTEXT indexes.

Is there some SQL query to test 'ft_min_word_len' variable?

I've tried to match four-character words only in the example below, but it doesn't work. Why is the third row returned if there is no match for '(database|data)'?

mysql> SHOW VARIABLES LIKE 'ft_min_word_len';
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| ft_min_word_len | 4     |
+-----------------+-------+
1 row in set (0.00 sec)

mysql> SHOW CREATE TABLE articles\G
*************************** 1. row ***************************
       Table: articles
Create Table: CREATE TABLE `articles` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `title` varchar(200) DEFAULT NULL,
  `body` text,
  PRIMARY KEY (`id`),
  FULLTEXT KEY `title` (`title`,`body`)
) ENGINE=MyISAM AUTO_INCREMENT=7 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

mysql> SELECT * FROM articles WHERE MATCH (title,body) AGAINST ('database' WITH QUERY EXPANSION);
+----+-------------------+------------------------------------------+
| id | title             | body                                     |
+----+-------------------+------------------------------------------+
|  1 | MySQL Tutorial    | DBMS stands for DataBase ...             |
|  5 | MySQL vs. YourSQL | In the following database comparison ... |
|  3 | Optimizing MySQL  | In this tutorial we will show ...        |
+----+-------------------+------------------------------------------+
3 rows in set (0.00 sec)

mysql> SELECT * FROM articles WHERE MATCH (title,body) AGAINST ('data' WITH QUERY EXPANSION);
Empty set (0.00 sec)
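One thing worth checking: WITH QUERY EXPANSION runs a second search pass using words taken from the best-matching rows of the first pass, which can pull in rows (like row 3 here) that contain none of the original search terms. Dropping it isolates the ft_min_word_len behavior:

-- Plain natural-language search, no second expansion pass
SELECT * FROM articles
WHERE MATCH (title, body) AGAINST ('database');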

how can i modify this trigger with loops and concat

Posted: 24 Apr 2013 12:37 PM PDT

I have this trigger below, which inserts new values into an emplog table. I want my trigger to insert into the description column a description of the event that took place; for example, if the last name was changed, it should record that OLD.lastname was changed to NEW.lastname. The same goes for updates to firstname, gender, dob, marital or SSN. I'm a newbie; how can I achieve this, say, using the CONCAT function and loops?

CREATE TRIGGER emplog_update AFTER UPDATE ON emp
FOR EACH ROW
INSERT INTO emplog
VALUES (
    NEW.id
    ,NEW.lastname
    ,NEW.firstname
    ,NEW.gender
    ,NEW.dob
    ,NEW.marital
    ,NEW.SSN
    ,'U'
    ,NULL
    ,USER()
    ,('this is where the description will go')
    );
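No loop is needed: comparing OLD and NEW column by column with IF() inside CONCAT() builds the description in one expression. A sketch for two of the columns, to be used in place of the placeholder string above:

-- Produces e.g. "lastname: Smith -> Jones; gender: M -> F; "
CONCAT(
    IF(OLD.lastname <> NEW.lastname,
       CONCAT('lastname: ', OLD.lastname, ' -> ', NEW.lastname, '; '), ''),
    IF(OLD.gender <> NEW.gender,
       CONCAT('gender: ', OLD.gender, ' -> ', NEW.gender, '; '), '')
)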

"Rewinding" a Postgresql database

Posted: 24 Apr 2013 02:02 PM PDT

I have heard that PostgreSQL uses an append-only format for its databases, and if this is true I should be able to 'rewind' the database to a previous point in time by removing the commits that came after it.

Anyone know how I can do this?

Thanks

Is there better way to 'touch' a row than performing a dummy update?

Posted: 24 Apr 2013 02:06 PM PDT

I use the xmin system column to implement a form of optimistic locking, and sometimes need to 'touch' the row to bump xmin without actually updating the row. I currently just do a 'dummy' update:

create table t(id integer);
insert into t(id) values(1);
insert into t(id) values(2);

select xmin::text from t where id=1
/*
|  XMIN |
---------
| 87159 |
*/

update t set id=id where id=1

select xmin::text from t where id=1
/*
|  XMIN |
---------
| 87196 |
*/

(SQL Fiddle)

I'm curious whether there is another way of bumping the xid without doing an update, analogous to the unix touch command?

SQL - Grouping results by custom 24 hour period [closed]

Posted: 24 Apr 2013 12:04 PM PDT

I need to create an Oracle 11g SQL report showing daily productivity: how many units were shipped during a 24 hour period. Each period starts at 6am and finishes at 5:59am the next day.

How could I group the results to reflect this 24-hour period? I've tried grouping by day, but a day is 00:00 - 23:59, so the results are inaccurate.

The results will cover the past 2 months.
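A common trick for custom day boundaries is to shift each timestamp back by the offset before truncating, so every 06:00-05:59 window lands on a single calendar day. A sketch with assumed table and column names:

-- Shift 6 hours back, then truncate to the day
SELECT TRUNC(ship_date - 6/24) AS period_start,
       COUNT(*)                AS units_shipped
FROM   shipments
WHERE  ship_date >= ADD_MONTHS(SYSDATE, -2)
GROUP  BY TRUNC(ship_date - 6/24)
ORDER  BY period_start;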

Many thanks.

Is it better to use multiple one-to-many relationships, or polymorphic?

Posted: 24 Apr 2013 12:48 PM PDT

I've got the following tables:

  • Clicks table
  • Websites table
  • Ads table

Both Ads and Websites have many Clicks (i.e. both models have a one-to-many relation with Clicks). In such cases, is it better to use multiple one-to-many relationships, or would a Clickable polymorphic setup be better?

Live backup without lock

Posted: 24 Apr 2013 02:37 PM PDT

In our company we have a lot of MySQL databases. Some of them are critical, and we want to back them all up. These databases can't be locked, because the application may change them at any moment. I want to write a script that dumps each database and stores the dump on a backup disk. What is the best way to achieve this without locking databases, while still getting a consistent dump of all databases from a remote server?
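For InnoDB tables, mysqldump's --single-transaction option avoids table locks by dumping inside a repeatable-read snapshot; under the hood it is essentially doing this:

-- What --single-transaction does: a consistent snapshot, no table locks
START TRANSACTION WITH CONSISTENT SNAPSHOT;
-- ... read all tables here; concurrent writers are not blocked ...
COMMIT;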

Thanks

GROUP BY with OR/AND clause?

Posted: 24 Apr 2013 01:29 PM PDT

Lets say my database looks like this:

Table: messages

+----+------+----+----------+----------+
| id | from | to | double_1 | double_2 |
+----+------+----+----------+----------+
| 1  | 1    | 2  | 0        | 0        |
| 2  | 1    | 0  | 1        | 2        |
| 3  | 2    | 0  | 2        | 1        |
| 4  | 2    | 1  | 0        | 0        |
| 5  | 2    | 3  | 0        | 0        |
| 6  | 2    | 0  | 1        | 3        |
+----+------+----+----------+----------+

Then in my selection from the table, I want to GROUP BY the following:

  • group rows where double_1 and double_2 are the same, while to = 0
  • group rows where from is the same, while to = $id and double_1 = 0 and double_2 = 0
  • group rows where to is the same, while from = $id and double_1 = 0 and double_2 = 0

Anyone have any ideas how to do that?

EDIT: It seems I caused some misunderstanding, so I will explain better. When I say group rows where double_1 and double_2 are the same, I mean grouping the rows where the (double_1, double_2) pair is identical. Say I have these two rows:

+----+------+----+----------+----------+
| id | from | to | double_1 | double_2 |
+----+------+----+----------+----------+
| 1  | 1    | 0  | 1        | 1        |
| 2  | 1    | 0  | 2        | 2        |
+----+------+----+----------+----------+

They should not be grouped, as ID 1's (double_1, double_2) is not the same as ID 2's (double_1, double_2). This, however, would group:

+----+------+----+----------+----------+
| id | from | to | double_1 | double_2 |
+----+------+----+----------+----------+
| 1  | 1    | 0  | 1        | 2        |
| 2  | 1    | 0  | 1        | 2        |
+----+------+----+----------+----------+

I'm very sorry for the misunderstanding; I hope that someone here is still willing to help me out.
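Since the three rules group on different keys, one hedged approach is a separate SELECT per rule combined with UNION ALL (the aggregated columns below are assumptions; adjust to whatever you need per group):

SELECT MIN(id) AS id, COUNT(*) AS cnt
FROM   messages
WHERE  `to` = 0
GROUP  BY double_1, double_2

UNION ALL

SELECT MIN(id), COUNT(*)
FROM   messages
WHERE  `to` = @id AND double_1 = 0 AND double_2 = 0
GROUP  BY `from`

UNION ALL

SELECT MIN(id), COUNT(*)
FROM   messages
WHERE  `from` = @id AND double_1 = 0 AND double_2 = 0
GROUP  BY `to`;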

T-SQL Issues With Defining 'AS'

Posted: 24 Apr 2013 04:15 PM PDT

I am creating a fully dynamic application but have run into a bit of a hiccup. There are multiple 'undefined' fields that can be defined by the users. The only problem is redisplaying them. Currently they show as [UDF_0] to [UDF_7], i.e.:

SELECT [TABLE].[UDF_0],
       [TABLE].[UDF_1],
       [TABLE].[UDF_2],
       [TABLE].[UDF_3],
       [TABLE].[UDF_4],
       [TABLE].[UDF_5],
       [TABLE].[UDF_6],
       [TABLE].[UDF_7]
FROM [TABLE]

Would obviously display as:

UDF_0 || UDF_1 || etc...  

What I would like to be able to do is display them something to this effect, but I can't for the life of me figure it out.

EXECUTE PROCEDURE [dbo].[GetProjectInfo] @Project varchar(100)

AS
BEGIN

SELECT [TABLE].[UDF_0] AS (SELECT [TBL_PROJECT_DESC].[UDF_0]
                           FROM [TBL_PROJECT_DESC]
                           WHERE [TBL_PROJECT_DESC].[PROJECT_NAME]=@Project),
       --etc....
FROM [TABLE]

Desired display would be :

Field Name 0 || Field Name 1 || etc...  
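Column aliases can't be computed by a subquery in T-SQL, which is why the attempt above doesn't parse; the usual workaround is to look up the display names first and build the statement dynamically. A sketch using the names from the question:

DECLARE @sql nvarchar(max), @alias0 sysname;

-- Fetch the user-defined display name for UDF_0
SELECT @alias0 = [UDF_0]
FROM   [TBL_PROJECT_DESC]
WHERE  [PROJECT_NAME] = @Project;

-- Build and run the SELECT with the alias spliced in safely
SET @sql = N'SELECT [UDF_0] AS ' + QUOTENAME(@alias0) + N' FROM [TABLE];';
EXEC sp_executesql @sql;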

Updating a table with more than 850 million rows of data

Posted: 24 Apr 2013 08:44 PM PDT

I have been tasked with writing an update query to update a table with more than 850 million rows of data. Here are the table structures:

Source Tables :

CREATE TABLE [dbo].[SourceTable1](
    [ProdClassID] [varchar](10) NOT NULL,
    [PriceListDate] [varchar](8) NOT NULL,
    [PriceListVersion] [smallint] NOT NULL,
    [MarketID] [varchar](10) NOT NULL,
    [ModelID] [varchar](20) NOT NULL,
    [VariantId] [varchar](20) NOT NULL,
    [VariantType] [tinyint] NULL,
    [Visibility] [tinyint] NULL,
 CONSTRAINT [PK_SourceTable1] PRIMARY KEY CLUSTERED
 (
    [VariantId] ASC,
    [ModelID] ASC,
    [MarketID] ASC,
    [ProdClassID] ASC,
    [PriceListDate] ASC,
    [PriceListVersion] ASC
 ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
         ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90)
)

CREATE TABLE [dbo].[SourceTable2](
    [Id] [uniqueidentifier] NOT NULL,
    [ProdClassID] [varchar](10) NULL,
    [PriceListDate] [varchar](8) NULL,
    [PriceListVersion] [smallint] NULL,
    [MarketID] [varchar](10) NULL,
    [ModelID] [varchar](20) NULL,
 CONSTRAINT [PK_SourceTable2] PRIMARY KEY CLUSTERED
 (
    [Id] ASC
 ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
         ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 91) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]

SourceTable1 contains 52 million rows of data and SourceTable2 contains 400,000 rows of data.

Here is the TargetTable structure

CREATE TABLE [dbo].[TargetTable](
    [ChassisSpecificationId] [uniqueidentifier] NOT NULL,
    [VariantId] [varchar](20) NOT NULL,
    [VariantType] [tinyint] NULL,
    [Visibility] [tinyint] NULL,
 CONSTRAINT [PK_TargetTable] PRIMARY KEY CLUSTERED
 (
    [ChassisSpecificationId] ASC,
    [VariantId] ASC
 ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
         ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 71) ON [PRIMARY]
) ON [PRIMARY]

The relationship between these tables are as follows:

  • SourceTable1.VariantID is related to TargetTable.VariantID
  • SourceTable2.ID is related to TargetTable.ChassisSpecificationId

The update requirement is as follows:

  1. Get the values for VariantType and Visibility from SourceTable1 for each VariantID, having the maximum value in the PriceListVersion column.
  2. Get the value of the ID column from SourceTable2 where the values of ModelID, ProdClassID, PriceListDate and MarketID match with that of SourceTable1.
  3. Now update the TargetTable with the values for VariantType and Visibility where the ChassisspecificationID matches SourceTable2.ID and VariantID matches SourceTable1.VariantID

The challenge is to do this update on live production, with minimum locking. Here is the query I have put together.

-- Check if Temp table already exists and drop if it does
IF EXISTS(
        SELECT NULL
        FROM tempdb.sys.tables
        WHERE name LIKE '#CSpec%'
      )
BEGIN
    DROP TABLE #CSpec;
END;

-- Create Temp table to assign sequence numbers
CREATE Table #CSpec
(
    RowID int,
    ID uniqueidentifier,
    PriceListDate VarChar(8),
    ProdClassID VarChar(10),
    ModelID VarChar(20),
    MarketID Varchar(10)
);

-- Populate temp table
INSERT INTO #CSpec
SELECT ROW_NUMBER() OVER (ORDER BY MarketID) RowID,
       CS.id,
       CS.pricelistdate,
       CS.prodclassid,
       CS.modelid,
       CS.marketid
FROM   dbo.SourceTable2 CS
WHERE  CS.MarketID IS NOT NULL;

-- Declare variables to hold values used for updates
DECLARE @min            int,
        @max            int,
        @ID             uniqueidentifier,
        @PriceListDate  varchar(8),
        @ProdClassID    varchar(10),
        @ModelID        varchar(20),
        @MarketID       varchar(10);

-- Set minimum and maximum values for looping
SET @min = 1;
SET @max = (SELECT MAX(RowID) From #CSpec);

-- Populate other variables in a loop
WHILE @min <= @max
BEGIN
    SELECT
        @ID = ID,
        @PriceListDate = PriceListDate,
        @ProdClassID = ProdClassID,
        @ModelID = ModelID,
        @MarketID = MarketID
    FROM #CSpec
    WHERE RowID = @min;

    -- Use CTE to get relevant values from SourceTable1
    ;WITH Variant_CTE AS
    (
    SELECT  V.variantid,
            V.varianttype,
            V.visibility,
            MAX(V.PriceListVersion) LatestPriceVersion
    FROM    SourceTable1 V
    WHERE       V.ModelID = @ModelID
            AND V.ProdClassID = @ProdClassID
            AND V.PriceListDate = @PriceListDate
            AND V.MarketID = @MarketID
    GROUP BY
            V.variantid,
            V.varianttype,
            V.visibility
    )
    -- Update the TargetTable with the values obtained in the CTE
    UPDATE      SV
        SET     SV.VariantType = VC.VariantType,
                SV.Visibility = VC.Visibility
    FROM        spec_variant SV
    INNER JOIN  TargetTable VC
    ON          SV.VariantId = VC.VariantId
    WHERE       SV.ChassisSpecificationId = @ID
                AND SV.VariantType IS NULL
                AND SV.Visibility IS NULL;

    -- Increment the value of loop variable
    SET @min = @min+1;
END
-- Clean up
DROP TABLE #CSpec

It takes about 30 seconds when I limit it to 10 iterations by hardcoding the value of the @max variable. However, when I increase the limit to 50 iterations, it takes almost 4 minutes to complete. I am concerned that 400,000 iterations will take multiple days on production. That might still be acceptable, provided the TargetTable does not get locked, preventing users from accessing it.
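For comparison, a hedged set-based alternative that does the whole update in small batches, so each statement holds locks only briefly, instead of looping over #CSpec one row at a time. It assumes the join columns described in the requirements above:

WHILE 1 = 1
BEGIN
    ;WITH Latest AS (
        SELECT v.VariantId, v.ModelID, v.ProdClassID, v.PriceListDate, v.MarketID,
               v.VariantType, v.Visibility,
               ROW_NUMBER() OVER (PARTITION BY v.VariantId, v.ModelID, v.ProdClassID,
                                               v.PriceListDate, v.MarketID
                                  ORDER BY v.PriceListVersion DESC) AS rn
        FROM dbo.SourceTable1 v
    )
    UPDATE TOP (10000) t
    SET    t.VariantType = l.VariantType,
           t.Visibility  = l.Visibility
    FROM   dbo.TargetTable t
    JOIN   dbo.SourceTable2 s2 ON s2.Id = t.ChassisSpecificationId
    JOIN   Latest l ON l.VariantId     = t.VariantId
                   AND l.ModelID       = s2.ModelID
                   AND l.ProdClassID   = s2.ProdClassID
                   AND l.PriceListDate = s2.PriceListDate
                   AND l.MarketID      = s2.MarketID
                   AND l.rn = 1
    WHERE  t.VariantType IS NULL AND t.Visibility IS NULL;

    IF @@ROWCOUNT = 0 BREAK;   -- nothing left to update
END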

All inputs are welcome.

Thanks, Raj

How to allow a user to create databases only with a certain prefix?

Posted: 24 Apr 2013 06:08 PM PDT

We're having a multi-user Linux server. I'd like to give each user the ability to create and manage databases at will, but to keep things sane, force them that each database name they create must start with their user name as prefix. For example, user joe would be only allowed to create databases like joe_app1 and joe_app2 but not app1 or jack_app1. Is this possible in PostgreSQL? Or is there another way how to separate name-spaces for user-created databases?

What could cause the wrong ID to be inserted?

Posted: 24 Apr 2013 12:12 PM PDT

I have a SQL Server 2008 server (build 10.0.5500). Earlier this week I ran this on a table that already had data in it:

delete from dbo.table
go

dbcc checkident('dbo.table', reseed, 0)

When the user went to create a new record later on, somehow an ID of 0 was inserted into the ID column, instead of the 1 SQL Server usually puts in if identity(1,1) is configured for the ID.

This caused some weird issues, but clearing the data and running the reseed resulted in a 1 being inserted, as expected. I can't duplicate the issue.

For reference, here's the general format for our save sp's:

alter procedure dbo._TableSave      @pk_id int,      @field varchar(50)  as      if (@pk_id is null)      begin          set nocount on;            insert into dbo.Table          (              Field          )          values          (              @field          );          select scope_identity();      end      else      begin          update dbo.Table          set Field=@field          where PK_ID=@pk_id            select @pk_id      end  

Does anyone know what could cause SQL Server to insert a 0 in the ID when it should have been a 1?
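One documented CHECKIDENT wrinkle may be relevant here: after RESEED, the next insert normally gets the reseed value plus the increment, but if the table has never contained rows, or was emptied with TRUNCATE TABLE rather than DELETE, the next insert uses the reseed value itself:

-- After DELETE + DBCC CHECKIDENT(..., RESEED, 0): next identity = 1
-- After TRUNCATE TABLE (or on a never-populated table) + RESEED 0: next identity = 0
TRUNCATE TABLE dbo.[table];
DBCC CHECKIDENT('dbo.table', RESEED, 0);
INSERT INTO dbo.[table] (Field) VALUES ('x');   -- this row gets ID 0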

Transactions, references and how to enforce double entry bookkeeping? (PG)

Posted: 24 Apr 2013 06:19 PM PDT

Double entry bookkeeping is

a set of rules for recording financial information in a financial accounting system in which every transaction or event changes at least two different nominal ledger accounts.

An account can be "debited" or "credited", and the sum of all credits must be equal to the sum of all debits.

How would you implement this in a Postgres database? Specifying the following DDL:

CREATE TABLE accounts(
    account_id serial NOT NULL PRIMARY KEY,
    account_name varchar(64) NOT NULL
);

CREATE TABLE transactions(
    transaction_id serial NOT NULL PRIMARY KEY,
    transaction_date date NOT NULL
);

CREATE TABLE transactions_details(
    id serial8 NOT NULL PRIMARY KEY,
    transaction_id integer NOT NULL
        REFERENCES transactions (transaction_id)
        ON UPDATE CASCADE
        ON DELETE CASCADE
        DEFERRABLE INITIALLY DEFERRED,
    account_id integer NOT NULL
        REFERENCES accounts (account_id)
        ON UPDATE CASCADE
        ON DELETE RESTRICT
        NOT DEFERRABLE INITIALLY IMMEDIATE,
    amount decimal(19,6) NOT NULL,
    flag varchar(1) NOT NULL CHECK (flag IN ('C','D'))
);

Note: the transaction_details table does not specify an explicit debit/credit account, because the system should be able to debit/credit more than one account in a single transaction.

This DDL creates the following requirement: After a database transaction commits on the transactions_details table, it must debit and credit the same amount for each transaction_id, e.g:

INSERT INTO accounts VALUES (100, 'Accounts receivable');
INSERT INTO accounts VALUES (200, 'Revenue');

INSERT INTO transactions VALUES (1, CURRENT_DATE);

-- The following must succeed
BEGIN;
    INSERT INTO transactions_details VALUES (DEFAULT, 1, 100, '1000'::decimal, 'D');
    INSERT INTO transactions_details VALUES (DEFAULT, 1, 200, '1000'::decimal, 'C');
COMMIT;

-- But this must raise some error
BEGIN;
    INSERT INTO transactions_details VALUES (DEFAULT, 1, 100, '1000'::decimal, 'D');
    INSERT INTO transactions_details VALUES (DEFAULT, 1, 200, '500'::decimal, 'C');
COMMIT;

Is it possible to implement this in a PostgreSQL database? Without specifying additional tables to store trigger states.
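Yes; the usual tool for commit-time validation in PostgreSQL is a deferred constraint trigger, which needs no extra state tables. A minimal sketch (not tuned for production; it re-checks the affected transaction once per detail row):

CREATE OR REPLACE FUNCTION check_balance() RETURNS trigger AS $$
DECLARE
    t_id integer;
BEGIN
    IF TG_OP = 'DELETE' THEN
        t_id := OLD.transaction_id;
    ELSE
        t_id := NEW.transaction_id;
    END IF;

    IF (SELECT COALESCE(SUM(CASE WHEN flag = 'D' THEN amount ELSE -amount END), 0)
        FROM transactions_details
        WHERE transaction_id = t_id) <> 0
    THEN
        RAISE EXCEPTION 'transaction % does not balance', t_id;
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

-- Deferred: fires at COMMIT, after all detail rows of the transaction are in
CREATE CONSTRAINT TRIGGER enforce_double_entry
AFTER INSERT OR UPDATE OR DELETE ON transactions_details
DEFERRABLE INITIALLY DEFERRED
FOR EACH ROW EXECUTE PROCEDURE check_balance();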

Optimization of a select statement

Posted: 24 Apr 2013 07:04 PM PDT

I'm using MySQL and have a table user_data like this:

user_id    int(10) unsigned
reg_date   int(10) unsigned
carrier    char(1)

The reg_date is the unix timestamp of the registration date, and carrier is the carrier type, whose possible values can ONLY be 'D', 'A' or 'V'.

I need to write a sql statement to select the registered user number of different carriers on each day from 2013/01/01 to 2013/01/31. So the desirable result could be:

2013/01/01   D   10
2013/01/01   A   31
2013/01/01   V   24
2013/01/02   D    9
2013/01/02   A   23
2013/01/02   V   14
....
2013/01/31   D   11
2013/01/31   A   34
2013/01/31   V   22

Can anyone help me with this question? I'm required to give the BEST answer, which means I can add index if necessary, but I need to keep the query efficient.

This is what I have right now:

select FLOOR((FROM_UNIXTIME(reg_date)) / 1000000) as reg_day,
       carrier,
       count(user_id) as user_count
from user_data
where reg_date >= UNIX_TIMESTAMP('2013-01-01 00:00:00')
  and reg_date < UNIX_TIMESTAMP('2013-02-01 00:00:00')
group by reg_day, carrier;

Thanks!

The question has been moved to here and please find all the updates in the link.
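For what it's worth, a hedged refinement: grouping on DATE(FROM_UNIXTIME(...)) gives a clean day value, and a composite index lets the whole query be answered from the index alone:

ALTER TABLE user_data ADD INDEX idx_reg (reg_date, carrier, user_id);

SELECT DATE(FROM_UNIXTIME(reg_date)) AS reg_day,
       carrier,
       COUNT(*) AS user_count
FROM   user_data
WHERE  reg_date >= UNIX_TIMESTAMP('2013-01-01 00:00:00')
  AND  reg_date <  UNIX_TIMESTAMP('2013-02-01 00:00:00')
GROUP  BY reg_day, carrier;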

Rent weekly cost database design

Posted: 24 Apr 2013 01:03 PM PDT

I have a database containing a table BUILDING, with details about a building in each row. Another table, BUILDING_UNIT, contains rows with details about a single building unit, each referring via a foreign key to the BUILDING.ID it belongs to. The BUILDING_UNIT table also refers to a table CATEGORY, which tells whether the BUILDING_UNIT is of category A, B, C or D, again with a foreign key pointing to CATEGORY.ID.

Now the final cost of renting a building unit depends on its building, its category, the number of days it is rented, and the specific period of the year. We only rent them weekly, so I might as well use weeks only; however, I'd like it to be as flexible as possible in the future.

I cannot convince myself on a table which can represent this situation.

Do I have to use a table with coefficients for each day of the year, then a table with coefficients for A, B, C and D, then a table with coefficients for each building, and somehow calculate a result from all of them?

Is there some standard and recognized implementation for problems of this type?

Thank you

EDIT: Note that the solution should abstract away the formula for calculating the cost, which might change in the future. I might, for example, be asked to make a specific week of the year for building unit X inside building Y cost $300 and the week after $600. Generally, building units inside the same building cost the same in the same week, but that might change in the future too, so I'd like to handle all these specific cases already.
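There is no single recognized standard here, but one common pattern (a sketch; all names assumed) is a single rate table where each row can target a unit, a building, or a category for a date range, and the price lookup takes the most specific matching row. That lets a per-unit per-week override coexist with broad defaults:

CREATE TABLE RENTAL_RATE (
    ID               INT PRIMARY KEY,
    BUILDING_UNIT_ID INT NULL,            -- most specific override
    BUILDING_ID      INT NULL,
    CATEGORY_ID      INT NULL,
    START_DATE       DATE NOT NULL,
    END_DATE         DATE NOT NULL,
    WEEKLY_COST      DECIMAL(10,2) NOT NULL
);
-- Lookup: prefer a unit-level row, then building, then category,
-- for the week falling between START_DATE and END_DATE.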

How to add 'root' MySQL user back on MAMP?

Posted: 24 Apr 2013 12:03 PM PDT

On PhpMyAdmin, I removed 'root' user by mistake. I was also logged in as 'root'. How can I add the user 'root' back, on MAMP?
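The usual recovery path (an assumption about your exact MAMP/MySQL version) is to restart MAMP's MySQL with grant checks disabled (mysqld --skip-grant-tables), reload the grant tables, and recreate root; MAMP's default root password is 'root':

-- With the server started via --skip-grant-tables:
FLUSH PRIVILEGES;   -- reloads the grant tables so GRANT works again
GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost'
    IDENTIFIED BY 'root' WITH GRANT OPTION;
FLUSH PRIVILEGES;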

Referencing database programmatically via T-SQL

Posted: 24 Apr 2013 12:09 PM PDT

I am writing a stored procedure that takes a database name as an argument and returns a table of that database's indexes and their fragmentation level. This stored procedure will live in our DBA database (the DB that contains tables the DBAs use for monitoring and optimizing things). The systems in question are all SQL Server 2008 R2 if that makes a difference.

I have the basic query worked out, but am stuck on trying to provide the indexes' actual names. To the best of my knowledge, that information is contained in each individual's sys.indexes view. My specific problem is trying to reference that view programmatically from another database's stored procedure.

To illustrate, this is the portion of the query at issue:

FROM sys.dm_db_index_physical_stats(@db_id, NULL, NULL, NULL, NULL) p
INNER JOIN sys.indexes b
    ON p.[object_id] = b.[object_id]
    AND p.index_id = b.index_id
    AND b.index_id != 0

The query works fine when executed from the database identified by @db_id, because it is using the proper sys.indexes view. If I try to call this from the DBA database, however, it all comes up null, as the sys.indexes view is for the wrong database.

In more general terms, I need to be able to do something like this:

DECLARE @db_name NVARCHAR(255) = 'my_database';
SELECT * FROM @db_name + '.sys.indexes';

or

USE @db_name;  

I have tried switching databases or referencing other databases using combinations of string concatenation and OBJECT_NAME/OBJECT_ID/DB_ID functions and nothing seems to work. I'd appreciate any ideas the community might have, but suspect I will have to retool this stored procedure to reside in each individual database.

Thanks in advance for any suggestions.
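One workable approach (a sketch) is to splice the database name into a three-part reference with dynamic SQL, so the procedure can stay in the DBA database:

DECLARE @db_name sysname = N'my_database',
        @sql     nvarchar(max);

SET @sql = N'
SELECT b.name AS index_name, p.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(@db), NULL, NULL, NULL, NULL) p
JOIN ' + QUOTENAME(@db_name) + N'.sys.indexes b
  ON p.[object_id] = b.[object_id]
 AND p.index_id    = b.index_id
WHERE b.index_id <> 0;';

EXEC sp_executesql @sql, N'@db sysname', @db = @db_name;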

[Articles] Natural Data


Natural Data

Posted: 23 Apr 2013 11:00 PM PDT

Can you own data itself or patent it? There's a Supreme Court case dealing with this right now. Steve Jones notes that the outcome could affect our jobs as we deal with more and more data and laws to control data are enacted.

[MS SQL Server] 2005 Ent to 2008 R2 Std


2005 Ent to 2008 R2 Std

Posted: 24 Apr 2013 04:47 AM PDT

Hi, we are looking to upgrade SQL Server from 2005 Enterprise Edition to 2008 R2 Standard Edition. Assuming there are no Enterprise features in place, can a straightforward in-place upgrade be performed? I've seen on other sites that the procedure for downgrading editions requires uninstalling/reinstalling SQL Server, but there were no version changes involved there. Thank you.

Table size swell after altering a column

Posted: 24 Apr 2013 05:07 AM PDT

Hi, I recently ran some code that altered some columns in a 38-million-row table from varchar to nvarchar. Before the change, the 38 million rows consumed 7 GB of space; after altering the columns to nvarchar, the table takes up 13 GB. What concerns me is that I ran the same process in a test environment, and the 38 million rows there consume only 8 GB. I have run DBCC CHECKTABLE and it returns no errors. Is there a way I can check the table's page usage to see what might be going on? (I have run SHRINKFILE, but this only reclaimed the free space.) I'm concerned the table has doubled in size and might contain duplicated/wasted data, causing slowness in the application. I'm using SQL Server 2008 R2 and running the ALTER TABLE xxx ALTER COLUMN xxx command. Any thoughts would be great. Frant101
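Two quick checks worth trying: nvarchar stores two bytes per character, so some growth is expected, and an in-place ALTER COLUMN can leave the space of the old column version unreclaimed inside the pages until the table is rebuilt. A sketch (table name hypothetical):

-- Inspect reserved vs. used page counts
EXEC sp_spaceused 'dbo.YourTable';

-- Rebuild compacts the pages and reclaims the space left behind by the ALTER
ALTER TABLE dbo.YourTable REBUILD;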

What do DBAs need to know about networking technologies?

Posted: 23 Apr 2013 07:44 AM PDT

Hi, could anyone point me to a reference or two that describes the kinds of networking knowledge (technical, not social or business networking :-p) that are important for DBAs to master? I realize this is probably a huge topic, but I'm just looking for an entry point to form a list of things to study and practice. Thanks in advance for any help. - webrunner

Adding Report Services to existing passive cluster node

Posted: 24 Apr 2013 03:25 AM PDT

I have a clustered SQL Server 2008 R2 setup. Node01 runs an instance of Report Services pointing at a production Report Services database on the cluster; Node02 isn't running a Report Services instance. The clustered server contains two primary application databases, Production and Staging. The already installed RS instance on Node01 services the Production database; I need to set up Reporting Services on Node02 so that it will service the Staging database.

I began rolling through the setup on Node02 this morning, but it failed validation, saying that this instance was already clustered. I noted that early in the setup program it wanted me to choose the instance to install RS onto, but the pull-down only listed the cluster name, not the particular node name(s).

What steps will I have to follow to install an RS instance on Node02 that ties to the Staging database? I found a couple of posts in the forum here describing similar but not quite the same circumstances. I'm new to working with clustered servers, so I'm a little confused about what things I can and can't do while the cluster is running. Can I simply "pause" the Node02 machine and then run the SETUP.exe program? Do I have to bring the Node02 resource offline first?

As always, any help any of you can toss my way will be greatly appreciated. Thanks! Larry

Incorrect syntax near '-'

Posted: 24 Apr 2013 12:42 AM PDT

I am receiving the following error on step 1 of a job (LoadAllDatabaseInfoAccrossServer) that I am trying to run:

Duration 00:00:00
Sql Severity 15
Sql Message ID 102
Operator Emailed / Net sent / Paged: (none)
Retries Attempted 0

Message: Executed as user: Domain\AccountA. Incorrect syntax near '-'. [SQLSTATE 42000] (Error 102). The step failed.

The SQL Server services are set up with SQL domain accounts, and I have a secure link created between ServerA and ServerB. When I run each SP manually they run without any error, but when run from a SQL job I receive the above error. Is there something I'm missing in the linked server? Here is the code I used:

EXEC master.dbo.sp_addlinkedsrvlogin
--@rmtsrvname = N'ServerB',
--@locallogin = NULL ,
--@useself = N'False',
--@rmtuser = N'UserB',
--@rmtpassword = N'password'
--Go

The server I'm running the job on is SQL 2008 R2, and ServerB is SQL Server 2005.

Process causing high Reads - cube?

Posted: 24 Apr 2013 02:01 AM PDT

How can I find out which process, if any, is causing high reads against a cube? Our SAN guys are telling us that there is significant spike causing lag on the SAN. Through our queries, we do not see anything in the database engine that correlates with this. Our assumption is that it is SSAS, but I don't know how to determine this. Help!?

Upgrade from SQL Server 2005 to SQL 2008 R2

Posted: 24 Apr 2013 12:56 AM PDT

This might sound like an obviously simple question, but I just wanted to check before I take the plunge... I have a production server that is running SQL 2005, and I would like to upgrade to 2008. Is it a simple case of BACKUP / BACKUP, then DVD in and run the upgrade wizard, then check SPs and triggers? Simples, right?

Long running CDC Capture

Posted: 23 Apr 2013 10:57 AM PDT

Hi all, I have a major issue in my SQL instance: my CDC capture is 48 behind my ETL on the base tables. CDC retention is 8 days, and we have 8 million updates and 2 million inserts every 4 hours on the database. How can I reduce the latency? Please help.

Catching culprits of high tempdb growth

Posted: 25 Mar 2013 11:48 PM PDT

Hello, we are facing issues with tempdb on our SQL Server 2008 clustered instance. The SQL version is Microsoft SQL Server 2008 (SP1) - 10.0.2714.0 (X64) May 14 2009 16:08:52 Copyright (c) 1988-2008 Microsoft Corporation Enterprise Edition (64-bit) on Windows NT 5.2 <X64> (Build 3790: Service Pack 2). Once or twice a month, tempdb grows to unexpectedly high values and log file growth fills up the drive. The problem is that it happens at random times and there is no trend. We want to find out what is causing tempdb to fill. Is there a script I can deploy in my environment to find the culprits that are growing my tempdb? I tried searching through the default trace files, but those are not very helpful. This has been a headache for the last 3 months or so, and every time we have to restart SQL Server during production hours to solve the problem. Any help would be appreciated. Thanks in advance!!
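As a starting point, the tempdb space-usage DMVs can identify the offending sessions while the growth is happening:

-- Which sessions hold the most tempdb pages right now
SELECT session_id,
       user_objects_alloc_page_count,
       internal_objects_alloc_page_count
FROM   sys.dm_db_session_space_usage
ORDER  BY user_objects_alloc_page_count
         + internal_objects_alloc_page_count DESC;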

[SQL 2012] Replication between 2005 and 2012 conundrum


Replication between 2005 and 2012 conundrum

Posted: 24 Apr 2013 03:20 AM PDT

In our existing setup we have a local intranet SQL Server 2005 install which holds all of our intranet data (customer details, communications, invoices etc). We also have a second SQL Server 2005 install at our DC which is used by our publicly accessible portal site, which allows our customers to log in, check their details etc. We've got merge replication set up between the two, with the local server acting as distributor and publisher, and the publication (and as such the live database) consisting of just a subset of the data stored locally, so only the data required in the portal is replicated, not the more sensitive data.

The issue is that I'm in the process of migrating the databases in the DC to a new SQL 2012 server, and as such will need to get replication working for the intranet/portal database between the intranet 2005 server and the internet 2012 one. Eventually I want to get the intranet server upgraded as well, but for now that's not an option.

The question is, what's the best way to go about this? While the local server can't manage the live one, I can obviously do the reverse; however, a subscriber in merge replication can't be newer than the publisher, so I can't simply set up the publication on the local server using SSMS 2012 on the live server and then make the live server a subscriber. I could set up the distributor/publisher on the live 2012 server and then make the local 2005 server a subscriber, but the local 2005 server is the data source, so I'm not sure that would work. Also, I don't want the live 2012 server to have full access to the local database, since it's potentially more at risk than the local server. I could make the live 2012 server the distributor and keep the local 2005 server as the publisher (according to the docs), but again I couldn't make the 2012 server a subscriber to it.

If anyone can confirm whether the above are/aren't possible, or if there's a better way, I'd appreciate it. It might just save what little hair I have left!

Page life expectancy diving to sub-10 on a 128 GB server

Posted: 27 Feb 2013 09:36 AM PST

Last weekend we migrated our primary ERP system from a Windows 2003 / SQL 2008 R2 server to a Windows 2008 R2 / SQL 2012 server. The new servers are much beefier than the old ones, including an extra 32 GB of RAM. The ERP database is ~500 GB, of which a good amount is audit data. On the old server I generally had page life expectancy values around 1000 or more during normal daily activity.

On the new server I have been capturing perfmon stats for the past week, and I see the PLE jumping all over the place. It climbs into the several hundreds but then suddenly dives to ridiculously low values like 12 or even 7. The server usage profile has not changed; it's still the same users and applications doing the same things they were doing last week. The instance is set to min 70 GB / max 110 GB allocated to SQL, and the service account has the lock pages in memory permission.

Can anyone think of some setting I might have missed during migration, or some new option to be set in SQL 2012 that I've overlooked, that could explain this?
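For correlating those dips from inside SQL Server (alongside perfmon), the same counter is exposed through a DMV:

-- Current page life expectancy (per buffer node and overall)
SELECT [object_name], counter_name, cntr_value AS ple_seconds
FROM   sys.dm_os_performance_counters
WHERE  counter_name = 'Page life expectancy';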

Looking for recommended vendor on SQL 2012 training

Posted: 24 Apr 2013 12:54 AM PDT

Hi there! I am interested in a SQL 2012 administrator training course. Of course I prefer classroom, but I guess online would be more affordable. Does anyone have experience with this type of training? If so, could you share your preferred vendor? I live in Houston, TX. I just came across this site, which seems to be reasonably priced, but I'm not sure if it is worth it or not: http://www.hdstrainingstore.com/ProductDetails.asp?ProductCode=MS%2D70%2D462 Thanks in advance.

End Points in SQL 2012

Posted: 23 Apr 2013 10:43 PM PDT

Can anyone explain to me what endpoints are used for? When would we use endpoints? What types of endpoints are there, and how do we create them? Please clarify these topics for me. Thanks in advance :)
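In short, an endpoint is the object through which SQL Server accepts a particular kind of connection: TDS over TCP, database mirroring / AlwaysOn, Service Broker, and (in older versions) SOAP. For example, a mirroring endpoint is created like this:

-- A database mirroring endpoint listening on its conventional port
CREATE ENDPOINT [Mirroring]
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022)
    FOR DATABASE_MIRRORING (ROLE = PARTNER);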

[T-SQL] Add column with previous days results


Add column with previous days results

Posted: 24 Apr 2013 12:02 AM PDT

Hi, please help me out with the query below. We are calculating the Exposure column (highlighted in the query) based on the current date, and my requirement now is to add a new column next to it with yesterday's values (current date - 1). Please help me out. SQL Server 2008 R2.

-----------------------------
SELECT
    --bi.cobdate,
    fact.batchid,
    fact.mastergroup,
    fact.counterparty,
    fact.counterpartyname,
    fact.parentcounterpartyname AS ParentCounterparty --If No Parent, then show the original CP as Parent
    , fact.portfoliolevelcd AS AggregationScheme,
    fact.portfolionodevalue AggregationNode,
    ptycc.anzccr AS CCR,
    ptycc.securityindicator AS SI,
    fact.limittimeband,
    fact.limitstartdt AS [Start_Date],
    fact.limitenddt AS [End_Date],
    COALESCE(ext.currency, fact.limitcurrency) AS LocalCurrency,
    fact.limitcurrency AS LimitCCY,
    COALESCE(fun.currency, fun1.limitcurrency) LocalExchnagerate,
    fun1.limitcurrency AS LimitExchnagerate,
    (fun1.curvepointvalue / fun.curvepointvalue) * CONVERT(FLOAT, fact.limitamount) AS Limit
    --,fun.CurvePointValue LocalCurrency
    , fun1.curvepointvalue Limitcurrency,
    pfe.riskvalue AS Exposure
    --,pfe.RiskValue2 as "ExposureT-1"
    , ((fun1.curvepointvalue / fun.curvepointvalue) * CONVERT(FLOAT, fact.limitamount) - pfe.riskvalue) AS Availability,
    (CASE
        WHEN ((fun1.curvepointvalue / fun.curvepointvalue) * CONVERT(FLOAT, fact.limitamount)) IS NULL
             OR pfe.riskvalue IS NULL THEN 0
        WHEN ((fun1.curvepointvalue / fun.curvepointvalue) * CONVERT(FLOAT, fact.limitamount)) = 0
             AND pfe.riskvalue > 0 THEN 1
        WHEN ((fun1.curvepointvalue / fun.curvepointvalue) * CONVERT(FLOAT, fact.limitamount)) = 0
             AND pfe.riskvalue = 0 THEN 0
        ELSE Cast((Isnull(pfe.riskvalue, 0) /
                   ((fun1.curvepointvalue / fun.curvepointvalue) * CONVERT(FLOAT, fact.limitamount))) AS FLOAT)
     END) AS Utilisation,
    ptycc.creditcontrolpoint AS ControlPoint,
    fact.[CollateralAgreementCd] CollateralApplied,
    fact.[NettingAgreementCd] NettingApplied,
    fact.israzor
FROM dw.[factlimitutilizationbyportfolio] FACT (nolock)
INNER JOIN (SELECT br.batchrunid BatchID, bi.businessdate CobDate, bi.batchinstanceid,
                   br.startdatetime AS ReportingDate
            FROM logging.batchrun br (nolock)
            INNER JOIN logging.batchinstance bi (nolock)
                ON br.batchinstanceid = bi.batchinstanceid) BI
    ON fact.batchid = Bi.batchid
INNER JOIN dw.partycreditcontrol ptycc (nolock)
    ON fact.counterpartyid = ptycc.partyid
    AND (ptycc.effstartdate <= bi.cobdate AND ptycc.effenddate > bi.cobdate)
INNER JOIN dw.portfolio port (nolock)
    ON fact.portfolioid = port.portfolioid
    AND port.providersystemcd = 'Razor'
    AND port.portfoliolevelcd = 'Customer Asset Group'
    AND port.effstartdate <= bi.cobdate
    AND port.effenddate > bi.cobdate
LEFT JOIN dw.portfoliobridge bport (nolock)
    ON (bport.tgtportfolioid = fact.portfolioid
        AND bport.tgtprovidercd = 'Razor'
        AND bport.effstartdate <= bi.cobdate
        AND bport.effenddate > bi.cobdate)
LEFT JOIN (SELECT t.portfolioid, t.limittimeband, Max(t.maxexposure) AS RiskValue, t.batchid
           FROM dw.factlimitutilizationbyportfolio t (nolock)
           INNER JOIN dw.riskmeasuredefinition rmd (nolock)
               ON t.riskmeasuredefinitionid = rmd.riskmeasureid
               AND rmd.riskmeasurename = 'PFE_LC'
               AND t.isfact = 1
           GROUP BY t.portfolioid, t.limittimeband, t.batchid) pfe
    ON bport.srcportfolioid = pfe.portfolioid
    AND bport.srcprovidercd = 'CRE'
    AND fact.limittimeband = pfe.limittimeband
    AND fact.batchid = PFE.batchid
LEFT JOIN (SELECT DISTINCT portfolioid, currency, runid
           FROM extract.razorportfoliotraderelation) ext
    ON fact.batchid = ext.runid
    AND PFE.portfolioid = ext.portfolioid
LEFT JOIN (SELECT cpt.curvepointvalue, cdt.observationdt, cid.curveidentifier currency
           FROM dw.curveidentifier cid (nolock)
           INNER JOIN dw.curvedata cdt (nolock) ON cid.curveid = cdt.curveid
           INNER JOIN dw.curvepoint cpt (nolock) ON cdt.curvedataid = cpt.curvedataid
           WHERE cid.curvetype = 'Exchange'
             AND cid.curvedomain = 'QuIC'
             AND cid.islatest = 1
             AND cdt.islatest = 1
             AND cpt.islatest = 1
             AND cdt.ccycd = 'USD') fun
    ON fun.observationdt = bi.cobdate
    AND fun.currency = ext.currency
LEFT JOIN (SELECT cpt.curvepointvalue, cdt.observationdt, cid.curveidentifier LimitCurrency
           FROM dw.curveidentifier cid (nolock)
           INNER JOIN dw.curvedata cdt (nolock) ON cid.curveid = cdt.curveid
           INNER JOIN dw.curvepoint cpt (nolock) ON cdt.curvedataid = cpt.curvedataid
           WHERE cid.curvetype = 'Exchange'
             AND cid.curvedomain = 'QuIC'
             AND cid.islatest = 1
             AND cdt.islatest = 1
             AND cpt.islatest = 1
             AND cdt.ccycd = 'USD') fun1
    ON fun1.observationdt = bi.cobdate
    AND fun1.limitcurrency = fact.limitcurrency
WHERE isfact = 0
-----------------------------

Thanks,
Nagarjun.

Set-Based Solution to this Problem?

Posted: 23 Apr 2013 02:36 AM PDT

Hello there! I've got an interesting situation I'm handling at present. I've developed a means of completing the task at hand, but it requires a WHILE loop, and I was wondering if I could remove the loop. Performance is good; for the use cases the procedure will run in, it completed in a few seconds, which is acceptable. However, should the use case scale up, I'd like to keep it from bogging down, if I can.

Basically, I've got a lot of databases (100+), all with identical tables and table structures, and a PHP front-end where users will select projects from a checklist and pass the project names to SQL Server, where a corresponding database name is picked up and a short UPDATE is run. So, it ends up like this:

[code]
CREATE TABLE #Test(ProjectName varchar(75), DBName varchar(75))

INSERT INTO #Test(ProjectName, DBName)
VALUES('Project1','DB1'),('Project2','DB2'),('Project3','DB3')
[/code]

The ProjectName is used to JOIN the #Test table to another table that contains the project names and their associated database names, retrieving the DBName for each project; I'm just supplying dummy DBNames here. From there, the update goes like so:

[code]
DECLARE @DBName varchar(75)
DECLARE @sqlstatement varchar(1000)
DECLARE @Counter int

SELECT @Counter = (SELECT COUNT(DBName) FROM #Test)

WHILE @Counter > 0
BEGIN
    SELECT @DBName = (SELECT TOP 1 DBName FROM #Test ORDER BY DBName ASC)
    SELECT @sqlstatement = 'UPDATE [' +@DBName+ '].dbo.tablename SET ...'
    EXEC(@sqlstatement)
    DELETE FROM #Test WHERE DBName = @DBName
    SELECT @Counter = @Counter - 1
END
[/code]

As stated, this works out nicely and does what it needs to do quickly enough; however, is there any way to abolish the WHILE loop? I'm inclined to think there isn't, since this has to go across a subset of databases and doing so almost always requires a loop, but I could very well be wrong. Thanks in advance for any help provided here!
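One hedged way to drop the explicit loop is to concatenate every per-database UPDATE into a single batch and execute it once (row-by-row variable concatenation is a common T-SQL idiom, though its ordering is technically undocumented):

[code]
DECLARE @sqlstatement nvarchar(max) = N'';

-- Build one batch containing an UPDATE per database
SELECT @sqlstatement += N'UPDATE ' + QUOTENAME(DBName)
                      + N'.dbo.tablename SET ...;' + CHAR(10)
FROM #Test;

EXEC (@sqlstatement);
[/code]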

How To Sum...

Posted: 23 Apr 2013 10:08 PM PDT

I have the following table structure:

[code="sql"]
Create Table Adding (
    ID int identity(1,1),
    Result int,
    ActualResult int)

insert into Adding (Result) values (10),(10),(10),(10),(-10),(-10),(-10),(-10)

select * from Adding
[/code]

But my required result should be as below:

ID  Result  RequiredResult
1   10      0
2   10      20
3   10      30
4   10      40
5   -10     30
6   -10     20
7   -10     10
8   -10     0

Example: (Result + ActualResult) = ActualResult, then (ActualResult + Result) = ActualResult ..... 10 + 0 = 10, then 10 + 10 = 20, then 20 + 10 = 30, and so it continues.

Thanks & Regards, Saravanan.D
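Assuming a plain running total is what's wanted (the worked arithmetic reads 10, 20, 30..., though the sample table's first row shows 0), a correlated subquery does it on 2008; on 2012+ SUM() OVER (ORDER BY ...) would be preferable:

[code="sql"]
SELECT a.ID,
       a.Result,
       (SELECT SUM(b.Result) FROM Adding b WHERE b.ID <= a.ID) AS RequiredResult
FROM Adding a
ORDER BY a.ID
[/code]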

Two SQL Staatements different results

Posted: 23 Apr 2013 08:25 PM PDT

I've got a feeling this is something to do with an implied cast that I don't understand.

declare @DataReady INT
SELECT @DataReady = 1

if not exists (select start_lsn from cdc.lsn_time_mapping where start_lsn <= 0X0002696800000AE90002)
    select @DataReady = 0
if not exists (select start_lsn from cdc.lsn_time_mapping where start_lsn >= 0X0002696E000002EE0001)
    select @DataReady = 2

select @DataReady as PkgLSNsValidated

declare @start_lsn binary(10), @end_lsn binary(10)
declare @start_lsn_str nvarchar(42), @end_lsn_str nvarchar(42)
declare @DataReady1 int

set @start_lsn_str = 0X0002696800000AE90002
set @end_lsn_str = 0X0002696E000002EE0001
set @start_lsn = sys.fn_cdc_hexstrtobin(@start_lsn_str)
set @end_lsn = sys.fn_cdc_hexstrtobin(@end_lsn_str)

select @DataReady1 = 1
if not exists (select start_lsn from cdc.lsn_time_mapping where start_lsn <= @start_lsn_str)
    select @DataReady1 = 0
if not exists (select start_lsn from cdc.lsn_time_mapping where start_lsn >= @end_lsn_str)
    select @DataReady1 = 2

select @DataReady1 as PkgLSNsValidated

The first set of SQL returns 0, implying that the value is not valid. The second set returns 1, implying that the value IS valid. The second set has to accept the LSNs as string values (using SSIS). What's going on? Which is right, and why?

Many thanks
M
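A likely culprit (worth verifying): the second script compares start_lsn against the nvarchar string variables, not the binary(10) variables it carefully built with fn_cdc_hexstrtobin. Comparing a binary column with an nvarchar value forces an implicit conversion that compares raw bytes against the literal characters of the string, not the hex value they spell, so the result is meaningless:

-- Compare against the converted binary variables instead of the strings
if not exists (select start_lsn from cdc.lsn_time_mapping
               where start_lsn <= @start_lsn)   -- was @start_lsn_str
    select @DataReady1 = 0
if not exists (select start_lsn from cdc.lsn_time_mapping
               where start_lsn >= @end_lsn)     -- was @end_lsn_str
    select @DataReady1 = 2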

Pratical Advantage of Heap

Posted: 23 Apr 2013 07:39 PM PDT

Hi all experts, is there any practical advantage of using a heap for storage instead of a clustered index on a table?

incrementing

Posted: 23 Apr 2013 01:46 PM PDT

I have this table with this info:

ID int
Firstorder int
Secondorder int
Thirdorder int
sequenceNumber int

How would I write a script so that the result would be as follows?

Before the script:

ID      firstorder  secondorder  thirdorder  sequencenumber
1       null        null         null        1
2       null        null         null        2
100000  null        null         null        100000

The logic for the script: the initial value for firstorder, secondorder and thirdorder is 1.
For sequencenumber 1..10, firstorder would be 1; from 11..20 it would be 2, etc.
For sequencenumber 1..100, secondorder would be 1; from 101 to 200 it would be 2, etc.
For sequencenumber 1..10000, thirdorder would be 1; from 10001 to 20000 it would be 2, etc.

After executing the script:

ID  firstorder  secondorder  thirdorder  sequencenumber
1   1           1            1           1
10  1           1            1           10
11  2           1            1           11

Thanks for all the help.
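Integer arithmetic on sequencenumber gives each bucket directly, so a single set-based UPDATE should do (table name hypothetical):

-- (n - 1) / width + 1 buckets n into groups of 10, 100 and 10000
UPDATE t
SET  firstorder  = (sequencenumber - 1) / 10    + 1,
     secondorder = (sequencenumber - 1) / 100   + 1,
     thirdorder  = (sequencenumber - 1) / 10000 + 1
FROM dbo.MyTable t;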

SQL Server equivalent for MySQL's Substring_index

Posted: 23 Apr 2013 06:44 AM PDT

Thanks for looking into my question. Does anyone know if there is an equivalent function available for MySQL's SUBSTRING_INDEX? MySQL's SUBSTRING_INDEX() returns the substring from the given string before a specified number of occurrences of a delimiter:

SUBSTRING_INDEX(str, delim, count)

SELECT SUBSTRING_INDEX('www.mytestpage.info','.',2)

Returns -> 'www.mytestpage'

Thanks! Siva.
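I'm not aware of a built-in T-SQL equivalent, but for a fixed count, nested CHARINDEX works; a sketch for count = 2:

DECLARE @str varchar(100) = 'www.mytestpage.info',
        @delim char(1)    = '.';

-- Find the 2nd delimiter, then take everything before it
SELECT LEFT(@str,
            CHARINDEX(@delim, @str, CHARINDEX(@delim, @str) + 1) - 1);
-- -> 'www.mytestpage'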

Trigger to delete old data before Insert

Posted: 23 Apr 2013 02:42 AM PDT

Hi guys, I am trying to create a trigger on SQL 2000: a trigger which deletes old data (more than 30 days old) and then does the inserts. Any scripts for this would be a great help. Thanks
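A minimal sketch, assuming a datetime column to age rows by (table and column names hypothetical). In SQL 2000 an AFTER trigger fires once the insert has already happened, so the purge can simply run inside it:

CREATE TRIGGER trg_PurgeOldRows ON dbo.MyTable
FOR INSERT
AS
    -- Remove rows older than 30 days on every insert
    DELETE FROM dbo.MyTable
    WHERE CreatedDate < DATEADD(day, -30, GETDATE())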

While Loop/ CTE Statement

Posted: 23 Apr 2013 02:22 AM PDT

Hi,

I have a table, PR_LINK_INV_HST, and I need to find all the records in PR_LINK_INV_HST where the "Client (CSN)" = PR_LINK_INV_HST.CLIENT_ID or PR_LINK_INV_HST.LINKED_CLIENT_ID, return all LINK_CLIENT_IDs and CLIENT_IDs for the specified "Client (CSN)", and then continue looping through the PR_LINK_INV_HST table to also find the clients that are linked to the linked clients of the specified "Client (CSN)".

So this is my table: PR_LINK_INV_HST(CLIENT_ID, LINK_CLIENT_ID). Can anyone help me get this right, because the query I have gives me an infinite loop?

My query:

Create TABLE #PR_LINK_INV_HST (CLIENT_ID varchar(15), NAME varchar(15), LINK_CLIENT_ID varchar(30))

Insert into #PR_LINK_INV_HST (CLIENT_ID,NAME,LINK_CLIENT_ID) values('1-23','John','linked to Diana (1-14)')
Insert into #PR_LINK_INV_HST (CLIENT_ID,NAME,LINK_CLIENT_ID) values('1-23','John','linked to Mary (1-33)')
Insert into #PR_LINK_INV_HST (CLIENT_ID,NAME,LINK_CLIENT_ID) values('1-14','Diana','not linked')
Insert into #PR_LINK_INV_HST (CLIENT_ID,NAME,LINK_CLIENT_ID) values('1-33','Mary','linked to Smith (1-16)')
Insert into #PR_LINK_INV_HST (CLIENT_ID,NAME,LINK_CLIENT_ID) values('1-33','Mary','linked to Pope (1-17)')
Insert into #PR_LINK_INV_HST (CLIENT_ID,NAME,LINK_CLIENT_ID) values('1-16','Smith','not linked')
Insert into #PR_LINK_INV_HST (CLIENT_ID,NAME,LINK_CLIENT_ID) values('1-17','Pope','linked to Thabo (1-19)')
Insert into #PR_LINK_INV_HST (CLIENT_ID,NAME,LINK_CLIENT_ID) values('1-19','Thabo','not linked')

DECLARE @CLIENT_ID VARCHAR(15)
SET @CLIENT_ID = '1-23'

;WITH pr_linked(CLIENT_ID, NAME, LINK_CLIENT_ID, [LEVEL])
AS
(
    SELECT CLIENT_ID, Name, LINK_CLIENT_ID, 0
    FROM #PR_LINK_INV_HST
    WHERE CLIENT_ID = @CLIENT_ID
    UNION ALL
    SELECT HST.CLIENT_ID, HST.Name, HST.LINK_CLIENT_ID, LNK.[LEVEL] + 1
    FROM #PR_LINK_INV_HST HST
    JOIN pr_linked LNK ON HST.CLIENT_ID = LNK.CLIENT_ID
)
SELECT * INTO #RESULTS
FROM pr_linked

select * from #RESULTS
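The infinite loop comes from the recursive member joining on HST.CLIENT_ID = LNK.CLIENT_ID, which re-selects the same client's rows forever; the recursion needs to step to the *linked* client instead. A hedged sketch that parses the id out of the 'linked to Name (x-yy)' text, to use as the recursive member of the CTE:

-- Recursive member: advance to the client id embedded in LINK_CLIENT_ID
SELECT HST.CLIENT_ID, HST.NAME, HST.LINK_CLIENT_ID, LNK.[LEVEL] + 1
FROM #PR_LINK_INV_HST HST
JOIN pr_linked LNK
  ON HST.CLIENT_ID =
     CASE WHEN LNK.LINK_CLIENT_ID LIKE '%(%)%'
          THEN SUBSTRING(LNK.LINK_CLIENT_ID,
                         CHARINDEX('(', LNK.LINK_CLIENT_ID) + 1,
                         CHARINDEX(')', LNK.LINK_CLIENT_ID)
                           - CHARINDEX('(', LNK.LINK_CLIENT_ID) - 1)
     END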

Is there any native support available for JSON in SQL SERVER ?

Posted: 05 Sep 2012 05:31 PM PDT

Is there any native support available for JSON in SQL Server? I.e., are there any built-in tools in SSIS which can be used as a source?

not exists versus not in

Posted: 23 Apr 2013 02:00 AM PDT

Looking at these two statements, should they not return the same results?

select *,'I' from pharmdb.pat.dbo.patients where PHPatid not in (select PHPatid from Patients)

select *,'I' from pharmdb.pat.dbo.patients where not exists (select phpatid from Patients inner join pharmdb.pat.dbo.patients p on p.phpatid = patients.PHPatid)

The 2nd statement returns no records.
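The second subquery isn't correlated with the outer row: as written, NOT EXISTS becomes false for every outer row as soon as any matching pair of ids exists anywhere. A correlated form mirrors the NOT IN (and behaves more predictably around NULLs):

select p.*, 'I'
from pharmdb.pat.dbo.patients p
where not exists (select 1
                  from Patients
                  where Patients.PHPatid = p.PHPatid)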

[SQL Server Data Warehousing] Recovery model for DWH databases


thanks Kieran and mike.


We have ETLs which run throughout the night and populate data in the data warehouse and data mart DBs, and the ETLs do not run during the daytime. The database size will be around 300 GB for each DB. As Mike said, we can take full backups of the DBs, but in our case we are in the process of designing a DR strategy for the DWH databases, so transferring a full backup over the network is not feasible (even with compression enabled); hence we are planning weekly fulls and daily differentials with the simple recovery model...


What is your opinion on this solution, or can you suggest some alternate plans?


Thanks for the help




social.technet.microsoft.com/Forums

[SQL Server] Advanced SQL Server 2008 Extended Events with Examples




sqlteam.com
