Wednesday, April 24, 2013

[how to] Creating a PostgreSQL SERIAL column using pgAdmin3



Creating a PostgreSQL SERIAL column using pgAdmin3

Posted: 24 Apr 2013 07:05 PM PDT

When I use pgAdmin3 to create a column of type serial, why does it instead create nextval('prices_id_seq'::regclass)?

Should we keep the type as pgAdmin defines it? Creating the table manually with SQL that defines the id column as type serial still causes the pgAdmin SQL pane to display it as nextval('prices_id_seq'::regclass).

pgAdmin SQL Pane

id integer NOT NULL DEFAULT nextval('prices_id_seq'::regclass)  
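For context, serial in PostgreSQL is not a distinct stored column type but shorthand for an integer column backed by a sequence, and pgAdmin shows the expanded form. A roughly equivalent hand-written definition (the table name here is inferred from the sequence name in the question) would be:

CREATE SEQUENCE prices_id_seq;

CREATE TABLE prices (
    id integer NOT NULL DEFAULT nextval('prices_id_seq'::regclass)
);

ALTER SEQUENCE prices_id_seq OWNED BY prices.id;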

How do you correct a corrupted index in Oracle?

Posted: 24 Apr 2013 06:28 PM PDT

I have this ORA error in our logs:

Caused by: java.sql.BatchUpdateException: ORA-01578: ORACLE data block corrupted (file # 8, block # 22921) ORA-01110: data file 8: '/data/app/oracle/oradata/MYSRVR/datafile/o1_mf_mysrvr_88m82mdj.dbf'

I tried running this in sqlplus:

select segment_name,segment_type,tablespace_name, owner from sys.dba_extents where file_id=8 and 22921 between block_id and block_id + blocks -1;

output is:

SEGMENT_NAME            SEGMENT_TYPE   TABLESPACE_NAME    OWNER
----------------------- -------------  -----------------  ---------------
PK_ZXCATSET_CATID       INDEX          MYSRVR_IDX_TB      MYSRVR

Now I am stuck: that segment name should be the index I need to rebuild (I think). I tried rebuilding it like this:

SQL> alter index PK_ZXCATSET_CATID rebuild tablespace MYSRVR_IDX_TB;

ERROR at line 1: ORA-01418: specified index does not exist

Yet if I run "select index_name from dba_indexes", that index name shows up. What am I doing wrong? This is an Oracle 10.2 server.

Thanks in advance!

EDIT: If I qualify it as MYSRVR.PK_ZXCATSET_CATID the ALTER completes... but I still don't think my problem is fixed!
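For reference, a hedged pair of statements along those lines: since the index is owned by MYSRVR (as the dba_extents output shows), the rebuild has to be schema-qualified, and the index structure can be checked afterwards:

ALTER INDEX MYSRVR.PK_ZXCATSET_CATID REBUILD TABLESPACE MYSRVR_IDX_TB;

-- Optional sanity check of the rebuilt index:
ANALYZE INDEX MYSRVR.PK_ZXCATSET_CATID VALIDATE STRUCTURE;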

In SQL Server 2008R2 can I force a View to use objects within the user's default schema instead of the schema in which the View exists?

Posted: 24 Apr 2013 05:30 PM PDT

A bit of background. I have a base application and most clients use it as standard. However some clients have small code and database customisations. Each of these clients has their own branch and maintenance can be tricky.

I want to consolidate all these into a single database structure (not a single database - we aren't doing multi-tenancy) to enable upgrades to be applied in a much more uniform fashion. I'm still at the proof of concept stage, but the route I was going down would be to have the standard objects stay in the schema they currently exist in (mostly dbo) and have the custom objects reside in a schema for each client.

For example, I could have dbo.users and client1.users which has some additional columns. If I set the default schema for the client to be "client1" then the following query

SELECT * FROM users  

will return data from the client1 schema or the dbo schema depending on which login is connected. This is absolutely perfect for what I'm trying to achieve.
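To illustrate that default-schema behaviour, a minimal sketch (the login and user names here are placeholders, not from the actual setup):

-- Map a client login to a database user whose default schema is client1:
CREATE USER client1_user FOR LOGIN client1_login WITH DEFAULT_SCHEMA = client1;

-- Connected as client1_login, an unqualified name resolves to client1.users if it
-- exists, otherwise it falls back to dbo.users:
SELECT * FROM users;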

The problem I'm running into is with Views. I have many views which are in the dbo schema and refer to the Users table. No matter which user I connect to the database as, these views always select from dbo.users.

So I'm guessing the question I have is: Can I prefix the tables in the view with some variable like "DEFAULT"? e.g.

SELECT u.username, u.email, a.level
FROM DEFAULT.users u
INNER JOIN DEFAULT.accessLevels a ON u.accessID = a.accessID

If this isn't possible and I'm totally barking up the wrong tree, do you have any suggestions as to how I can achieve what I'm setting out to do?

Many thanks.

One to at most one relation or multiple null columns?

Posted: 24 Apr 2013 04:57 PM PDT

I have a table with many attributes that can be grouped into logical groups, so the idea of putting these attributes into separate tables seems attractive. The logical groups don't represent entities themselves. The entity is read far more often than it is written, so one large table looks appropriate, and it also avoids joins in queries. The part I don't like about this approach is the large number of nullable fields that result. Which solution is better?
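For comparison, a minimal sketch of the one-to-at-most-one alternative (table and column names are purely illustrative): each logical group becomes an optional detail table sharing the parent's primary key, so the main table carries no nullable columns for it.

CREATE TABLE entity (
    entity_id integer PRIMARY KEY,
    name      varchar(100) NOT NULL
);

-- One logical group of attributes; a row exists only when the group applies.
CREATE TABLE entity_billing_details (
    entity_id       integer PRIMARY KEY REFERENCES entity(entity_id),
    vat_number      varchar(20),
    billing_address varchar(200)
);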

Table partitioning problem

Posted: 24 Apr 2013 04:55 PM PDT

I need your help with a partitioning problem. We are using a log_event table which logs a lot of events sent by our players (subscription, connection, play, share, etc.). We insert 5,000 rows per second into this table.

This table is quite big (more than 1 billion rows) and its structure is:

CREATE TABLE IF NOT EXISTS `log_event` (
    `id` bigint(20) NOT NULL AUTO_INCREMENT,
    `id_event` smallint(6) NOT NULL,
    `id_inscri` bigint(20) NOT NULL,
    `date` int(11) NOT NULL,
    `data` bigint(20) NOT NULL,
    PRIMARY KEY (`id`),
    KEY `id_event_2` (`id_event`,`data`),
    KEY `id_inscri` (`id_inscri`),
    KEY `date` (`date`),
    KEY `id_event_4` (`id_event`,`date`,`data`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;

In order to increase performance, we would like to partition this table, but :

  • we don't have a real PRIMARY KEY => we are using an AUTO_INCREMENT column which is useless in our SELECT queries. We never use this column, but it's the only way we found to have a PK.

  • we would like to partition on the columns id_event, date and data, but we don't know how to do that because we get the error "#1503 - A PRIMARY KEY must include all columns in the table's partitioning function" (see the sketch just after this list).
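A minimal sketch of one common workaround, assuming a date-based range partitioning is acceptable (the partition boundaries below are only illustrative): include the partitioning column in the primary key, then partition by range on date.

ALTER TABLE log_event
    DROP PRIMARY KEY,
    ADD PRIMARY KEY (`id`, `date`);

ALTER TABLE log_event
    PARTITION BY RANGE (`date`) (
        PARTITION p2013_01 VALUES LESS THAN (UNIX_TIMESTAMP('2013-02-01')),
        PARTITION p2013_02 VALUES LESS THAN (UNIX_TIMESTAMP('2013-03-01')),
        PARTITION pmax     VALUES LESS THAN MAXVALUE
    );

The auto-increment id keeps its role as a row identifier, while having date in the key satisfies the #1503 requirement.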

Do you have any ideas on how to increase performance without losing the ability to run quick SELECTs?

Thanks !

François

After moving database (backup, restore), I have to re-add user

Posted: 24 Apr 2013 04:10 PM PDT

I occasionally move a database (SQL Express 2012) from a development machine to a server, or vice versa, using database backup and restore in SQL Server Management Studio.

Whenever I do this, the applications on the target machine cannot access the database until I delete the user they use, "george", from the database's users (Database, Security, Users in SQL Server Management Studio) and re-add it as owner under Security, Logins, george/Properties, User Mapping.

Is there a better way to do this? It seems a little convoluted.
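One common cause is that the restored database user ends up orphaned from the server login; if that is what is happening here, remapping the user avoids the delete-and-re-add routine (the database name below is a placeholder):

USE db1;   -- placeholder name for the restored database
ALTER USER george WITH LOGIN = george;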

MySQL stored routine performance while using PREPARE

Posted: 24 Apr 2013 07:11 PM PDT

Instead of maintaining stored routines in each database in my current environment, I have decided to create a separate database just for storing stored routines. I mainly use them for reporting. All databases have the same schema structure.

Stored routine body:

PROCEDURE `sp_name`(IN db_name  VARCHAR(50)
                   ,IN stardate DATE
                   ,IN enddate  DATE)
......
SET @sql_query = CONCAT(
'SELECT
    .....
    .....
    .....
FROM ', db_name, '.view
WHERE
    .... stardate, .... enddate .....
GROUP BY
    ..... ,.....');

PREPARE sql_query FROM @sql_query;
EXECUTE sql_query;
DEALLOCATE PREPARE sql_query;

Now I am facing a performance problem. Executing the query directly in each database takes less than 1 second; through the prepared statement in the stored routine it takes 5-6 seconds. Could someone please advise whether there is room for improvement?

Shrinking log files with SMO

Posted: 24 Apr 2013 02:09 PM PDT

I know I can use the following T-SQL to shrink a log file:

BACKUP LOG db1 TO DISK = '\\server\share\db1_log1.trn';
DBCC ShrinkFile([db1_log], 0);

How do I do the same with SMO? I tried:

$server = New-Object Microsoft.SqlServer.Management.Smo.Server()

$dbBackup = New-Object Microsoft.SqlServer.Management.Smo.Backup
$dbBackup.Action = "Log"
$dbBackup.Database = "db1"
$dbBackup.Devices.AddDevice("\\server\share\db1_log1.trn", "File")
$dbBackup.SqlBackup($server)

$db = $server.Databases.Item("db1")   # was $srv in the original, which is never defined
$db.TruncateLog()

But here, I found that the TruncateLog() method does not work with anything higher than SQL 2005. That link recommended using:

$db.RecoveryModel = RecoveryModel.Simple
$db.Alter()
$db.LogFiles[0].Shrink(0, ShrinkMethod.TruncateOnly)

However, switching to Simple Recovery Mode is going to break my log chain and I don't want to do that. Shrinking the log file doesn't break my log chain so I'm looking for an equivalent SMO method to do that. Does such a method exist?

How best to maintain SQL log file sizes

Posted: 24 Apr 2013 04:07 PM PDT

I'm somewhat of a new DBA and I'm managing a SQL Server 2012 instance that has a fair amount of activity. I'm running in Full Recovery mode because we need point in time recovery.

Right now, I'm taking a full backup of the databases and logs every day at 5am. Some of the log files have ballooned up to 300 GB, and even after taking a backup they don't reduce in size. I can get them to reduce in size by running something similar to:

BACKUP LOG db1 TO DISK = '\\server\share\db1_log1.trn';
DBCC ShrinkFile([db1_log], 0);

BACKUP LOG db1 TO DISK = '\\server\share\db1_log2.trn';
DBCC ShrinkFile([db1_log], 0);

BACKUP LOG db1 TO DISK = '\\server\share\db1_log3.trn';
DBCC ShrinkFile([db1_log], 0);

When I check the LSNs of the backup files I see something like:

RESTORE headeronly FROM DISK = N'\\server\share\db1_log1.trn'
FirstLSN:  15781000014686200001
SecondLSN: 15802000000665000001

RESTORE headeronly FROM DISK = N'\\server\share\db1_log2.trn'
FirstLSN:  15802000000665000001
SecondLSN: 15805000000004100001

RESTORE headeronly FROM DISK = N'\\server\share\db1_log3.trn'
FirstLSN:  15805000000004100001
SecondLSN: 15808000000004200001

I don't believe I'm breaking my log chain by shrinking the log files. From reading up on this, I do believe I'm hurting performance, because those shrunk log files have to grow again.

Questions:

  1. Why doesn't the log file shrink after my backups? Is it because there are uncommitted transactions?
  2. At first I was thinking I should shrink the log files after every 5:00 AM backup. After reading up on how bad that is for performance, I now believe I need to take regular log backups every couple of hours during the day. Is that correct? (A sketch of such a job follows this list.)
  3. My normal full backup of the database/logs happens every day at 5:00 AM and sometimes takes 3 hours. If I schedule the log backups to happen every hour, what will happen when the log backup collides with the 5:00 AM backup?
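For illustration, a minimal sketch of such a recurring log backup (the path and naming convention are made up; it would typically be scheduled hourly via SQL Server Agent):

DECLARE @file nvarchar(400) =
    N'\\server\share\db1_log_'
    + CONVERT(nvarchar(8), GETDATE(), 112)                            -- YYYYMMDD
    + N'_' + REPLACE(CONVERT(nvarchar(8), GETDATE(), 108), ':', '')   -- HHMMSS
    + N'.trn';

BACKUP LOG db1 TO DISK = @file;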

DB2 db2fm proccess

Posted: 24 Apr 2013 03:15 PM PDT

The server has been up for 365 days; however, I am seeing some strange repeated processes.

Are these normal?

ps -fea | grep db2fm

db2inst1  643284  229516  29   May 25      - 212564:06 /home/db2inst1/sqllib/bin/db2fm -i db2inst1 -m /home/db2inst1/sqllib/lib/libdb2gcf.a -S
db2inst1  671770  229516  56   May 14      - 227447:02 /home/db2inst1/sqllib/bin/db2fm -i db2inst1 -m /home/db2inst1/sqllib/lib/libdb2gcf.a -S
db2inst1  757794 1237058   0   Apr 19  pts/7  0:00 /bin/sh /home/db2inst1/sqllib/bin/db2cc
db2inst1  774232  229516  30   Sep 25      - 94218:54 /home/db2inst1/sqllib/bin/db2fm -i db2inst1 -m /home/db2inst1/sqllib/lib/libdb2gcf.a -S
db2inst1  962750  229516  30   Jul 18      - 145256:01 /home/db2inst1/sqllib/bin/db2fm -i db2inst1 -m /home/db2inst1/sqllib/lib/libdb2gcf.a -S
db2inst1  999450  229516  29   Aug 17      - 117710:27 /home/db2inst1/sqllib/bin/db2fm -i db2inst1 -m /home/db2inst1/sqllib/lib/libdb2gcf.a -S
db2inst1 1179898  229516  58   Nov 02      - 75788:49 /home/db2inst1/sqllib/bin/db2fm -i db2inst1 -m /home/db2inst1/sqllib/lib/libdb2gcf.a -S

ps -fea | grep db2agent

db2inst1  409770  680100   0   Apr 19      -  0:00 db2agent (DATABASEA) 0
db2inst1  450750  778412   0   Apr 18      -  0:03 db2agent (idle) 0
db2inst1  618688  680100   0   Apr 19      -  0:00 db2agent (idle) 0
db2inst1  651440  680100   0   Nov 17      -  0:20 db2agent (DATABASEA) 0
db2inst1  655508  491676   0   Apr 19      -  0:04 db2agent (idle) 0
db2inst1  684038  680100   0   Mar 23      -  0:03 db2agent (DATABASEA) 0
db2inst1  790706  491676   0   Apr 19      -  0:00 db2agent (idle) 0
db2inst1  880672  680100   0   Apr 19      -  0:00 db2agent (DATABASEA) 0
db2inst1  913438  778412   0   Nov 16      -  0:20 db2agent (idle) 0
db2inst1  946182  491676   0   Apr 19      -  0:00 db2agent (DATABASEA) 0
db2inst1  991312  778412   0   Apr 17      -  0:16 db2agent (idle) 0
db2inst1 1077466  491676   0   Apr 19      -  0:00 db2agent (DATABASEA) 0
db2inst1 1134726  680100   0   Apr 19      -  0:00 db2agent (DATABASEA) 0
db2inst1 1142964  491676   0   Apr 19      -  0:00 db2agent (idle) 0
db2inst1 1233112  491676   0   Apr 19      -  0:00 db2agent (idle) 0
db2inst1 1261748  778412   0   Jun 15      -  0:18 db2agent (idle) 0
db2inst1 1384678  778412   0   Mar 23      -  0:27 db2agent (idle) 0
db2inst1 1404936  680100   0   Apr 19      -  0:00 db2agent (DATABASEA) 0
db2inst1 1421368  778412   0   Mar 22      -  0:04 db2agent (idle) 0
db2inst1 1445936  491676   0   Apr 19      -  0:00 db2agent (DATABASEA) 0
db2inst1 1482864  491676   0   Jun 16      -  0:31 db2agent (idle) 0
db2inst1 1503440  778412   0   Jun 15      -  0:56 db2agent (idle) 0
db2inst1 1519842  778412   0   Mar 23      -  0:00 db2agent (DATABASEA) 0
db2inst1 1531946  680100   0   Apr 19      -  0:00 db2agent (idle) 0
db2inst1 1572884  680100   0   Apr 19      -  0:00 db2agent (idle) 0

Other info

oslevel -g
Fileset                                 Actual Level        Maintenance Level
-----------------------------------------------------------------------------
bos.rte                                 5.3.0.40            5.3.0.0

db2fm -s -S
Gcf module 'fault monitor' is NOT operable
Gcf module '/home/db2inst1/sqllib/lib/libdb2gcf.a' state is AVAILABLE

uptime
02:14PM   up 365 days,  12:51,  6 users,  load average: 6.69, 6.89, 6.97

db2level
DB21085I  Instance "db2inst1" uses "64" bits and DB2 code release "SQL08020"
with level identifier "03010106".
Informational tokens are "DB2 v8.1.1.64", "s040812", "U498350", and FixPak "7"

Msg 21, Level 21, State 1, Line 1 Warning: Fatal error 9001

Posted: 24 Apr 2013 02:37 PM PDT

We recently moved to a SQL Server instance on a VPS, and every now and then we get error 9001 ("The log for database ... is not available") when trying to do an update.

Rebooting the server fixes the problem.

I changed the maximum server memory (in MB) to 2000 because it's a 4GB VPS.

But I just don't know what's causing the error.

How to clean up duplicate objects in SYS/SYSTEM

Posted: 24 Apr 2013 11:37 AM PDT

I have the SQL query below to check for duplicate objects in SYS/SYSTEM.

select OBJECT_NAME, OBJECT_TYPE
from DBA_OBJECTS
where OBJECT_NAME||OBJECT_TYPE in
      (select OBJECT_NAME||OBJECT_TYPE from DBA_OBJECTS where OWNER='SYS')
  and OWNER='SYSTEM'
  and OBJECT_NAME not in ('AQ$_SCHEDULES_PRIMARY', 'AQ$_SCHEDULES', 'DBMS_REPCAT_AUTH');

It returns a large list of duplicated objects. My question/problem is: how do I get rid of these objects? What is the best way to approach this?

MySQL - SQL statement to test minimum length of the word 'ft_min_word_len'

Posted: 24 Apr 2013 11:53 AM PDT

I'm trying to understand FULLTEXT indexes.

Is there some SQL query to test the 'ft_min_word_len' variable?

I've tried to match only four-character words in the example below, but it doesn't work. Why is the third row returned when it contains no match for 'database' or 'data'?

mysql> SHOW VARIABLES LIKE 'ft_min_word_len';
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| ft_min_word_len | 4     |
+-----------------+-------+
1 row in set (0.00 sec)

mysql> SHOW CREATE TABLE articles\G
*************************** 1. row ***************************
       Table: articles
Create Table: CREATE TABLE `articles` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `title` varchar(200) DEFAULT NULL,
  `body` text,
  PRIMARY KEY (`id`),
  FULLTEXT KEY `title` (`title`,`body`)
) ENGINE=MyISAM AUTO_INCREMENT=7 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

mysql> SELECT * FROM articles WHERE MATCH (title,body) AGAINST ('database' WITH QUERY EXPANSION);
+----+-------------------+------------------------------------------+
| id | title             | body                                     |
+----+-------------------+------------------------------------------+
|  1 | MySQL Tutorial    | DBMS stands for DataBase ...             |
|  5 | MySQL vs. YourSQL | In the following database comparison ... |
|  3 | Optimizing MySQL  | In this tutorial we will show ...        |
+----+-------------------+------------------------------------------+
3 rows in set (0.00 sec)

mysql> SELECT * FROM articles WHERE MATCH (title,body) AGAINST ('data' WITH QUERY EXPANSION);
Empty set (0.00 sec)
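A likely explanation for the third row (my reading, not stated in the post): WITH QUERY EXPANSION runs the search twice, adding words from the most relevant rows of the first pass to the second pass, so rows can be returned that contain neither 'database' nor 'data'. Testing ft_min_word_len is clearer with a plain natural-language search:

-- 'DBMS' has four characters (>= ft_min_word_len), so it is indexed and matches row 1:
SELECT * FROM articles WHERE MATCH (title, body) AGAINST ('DBMS');

-- 'DBM' is shorter than ft_min_word_len, so the term is dropped from the search
-- and the query should return an empty set:
SELECT * FROM articles WHERE MATCH (title, body) AGAINST ('DBM');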

how can i modify this trigger with loops and concat

Posted: 24 Apr 2013 12:37 PM PDT

I have the trigger below, which inserts new values into an emplog table. I want the trigger to put, in the description column, the event that took place. For example, if last_name was changed, the description should read something like "old.last_name was changed to new.last_name"; if first_name, gender, dob, marital or ssn were updated, it should do the same. I'm a newbie; how can I achieve this, say using the CONCAT function and loops?

CREATE TRIGGER emplog_update AFTER UPDATE ON emp
FOR EACH ROW
INSERT INTO emplog
VALUES (
    NEW.id
    ,NEW.lastname
    ,NEW.firstname
    ,NEW.gender
    ,NEW.dob
    ,NEW.marital
    ,NEW.SSN
    ,'U'
    ,NULL
    ,USER()
    ,('this is where the description will go')
    );
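A minimal sketch of one way this is often done (reusing the column order from the trigger above; untested): compare OLD and NEW column by column and build the description with CONCAT. Plain IF blocks are usually simpler than a loop here, since a MySQL trigger cannot iterate over column names dynamically.

DELIMITER $$
CREATE TRIGGER emplog_update AFTER UPDATE ON emp
FOR EACH ROW
BEGIN
    DECLARE descr TEXT DEFAULT '';

    IF NOT (OLD.lastname <=> NEW.lastname) THEN
        SET descr = CONCAT(descr, 'lastname changed from ', IFNULL(OLD.lastname, 'NULL'),
                           ' to ', IFNULL(NEW.lastname, 'NULL'), '; ');
    END IF;
    IF NOT (OLD.firstname <=> NEW.firstname) THEN
        SET descr = CONCAT(descr, 'firstname changed from ', IFNULL(OLD.firstname, 'NULL'),
                           ' to ', IFNULL(NEW.firstname, 'NULL'), '; ');
    END IF;
    -- ...repeat the same IF block for gender, dob, marital and SSN...

    INSERT INTO emplog
    VALUES (NEW.id, NEW.lastname, NEW.firstname, NEW.gender, NEW.dob,
            NEW.marital, NEW.SSN, 'U', NULL, USER(), descr);
END$$
DELIMITER ;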

"Rewinding" a Postgresql database

Posted: 24 Apr 2013 02:02 PM PDT

I have heard that PostgreSQL uses an append-only format for its databases, and if this is true I should be able to 'rewind' the database to a previous point in time by removing the commits that came after it.

Anyone know how I can do this?

Thanks

Is there better way to 'touch' a row than performing a dummy update?

Posted: 24 Apr 2013 02:06 PM PDT

I use the xmin system column to implement a form of optimistic locking, and sometimes need to 'touch' the row to bump xmin without actually updating the row. I currently just do a 'dummy' update:

create table t(id integer);
insert into t(id) values(1);
insert into t(id) values(2);

select xmin::text from t where id=1;
/*
|  XMIN |
---------
| 87159 |
*/

update t set id=id where id=1;

select xmin::text from t where id=1;
/*
|  XMIN |
---------
| 87196 |
*/

(SQL Fiddle)

I'm curious whether there is another way of bumping the xid without doing an update, analogous to the unix touch command?

SQL - Grouping results by custom 24 hour period [closed]

Posted: 24 Apr 2013 12:04 PM PDT

I need to create an Oracle 11g SQL report showing daily productivity: how many units were shipped during a 24 hour period. Each period starts at 6am and finishes at 5:59am the next day.

How can I group the results so that they reflect this 24-hour period? I've tried grouping by day, but a day is 00:00 - 23:59, so the results are inaccurate.

The results will cover the past 2 months.
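For illustration, a hedged sketch of the usual approach (the table and column names here are made up): shift the timestamp back six hours before truncating to a day, so that everything from 06:00 up to 05:59 the next morning falls into the same group.

SELECT TRUNC(ship_date - 6/24) AS period_start,
       COUNT(*)                AS units_shipped
FROM   shipments
WHERE  ship_date >= ADD_MONTHS(TRUNC(SYSDATE), -2)
GROUP  BY TRUNC(ship_date - 6/24)
ORDER  BY period_start;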

Many thanks.

Is it better to use multiple one-to-many relationships, or polymorphic?

Posted: 24 Apr 2013 12:48 PM PDT

I've got the following tables:

  • Clicks table
  • Websites table
  • Ads table

Both Ads and Websites have many Clicks (i.e. both models have a one-to-many relation with Clicks). In such cases, is it better to use multiple one-to-many relationships, or would a Clickable polymorphic setup be better?
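A minimal sketch of the two shapes being compared (table and column names are illustrative only):

-- Option 1: separate one-to-many relationships, one nullable FK per parent;
-- exactly one of website_id / ad_id should be non-NULL per row.
CREATE TABLE clicks (
    id         integer PRIMARY KEY,
    website_id integer REFERENCES websites(id),
    ad_id      integer REFERENCES ads(id)
);

-- Option 2: polymorphic "clickable" reference; flexible, but the link cannot be
-- enforced with a foreign key.
CREATE TABLE clicks_poly (
    id             integer PRIMARY KEY,
    clickable_type varchar(20) NOT NULL,   -- 'Website' or 'Ad'
    clickable_id   integer NOT NULL
);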

Live backup without lock

Posted: 24 Apr 2013 02:37 PM PDT

In our company we have a lot of MySQL databases. Some of them are critical, and we want to back up all of them. These databases can't be locked, because the application may change them at any moment. I want to write a script that makes a dump of each database and stores it on a backup disk. What is the best way to achieve this without locking the databases, and to get a consistent dump of all databases from a remote server?

Thanks

GROUP BY with OR/AND clause?

Posted: 24 Apr 2013 01:29 PM PDT

Let's say my database looks like this:

Table: messages

+----+------+----+----------+----------+
| id | from | to | double_1 | double_2 |
+----+------+----+----------+----------+
| 1  | 1    | 2  | 0        | 0        |
| 2  | 1    | 0  | 1        | 2        |
| 3  | 2    | 0  | 2        | 1        |
| 4  | 2    | 1  | 0        | 0        |
| 5  | 2    | 3  | 0        | 0        |
| 6  | 2    | 0  | 1        | 3        |
+----+------+----+----------+----------+

Then in my selection from the table, I want to GROUP BY the following:

  • group rows where double_1 and double_2 are the same, while to = 0
  • group rows where from is the same, while to = $id and double_1 = 0 and double_2 = 0
  • group rows where to is the same, while from = $id and double_1 = 0 and double_2 = 0

Anyone have any ideas how to do that?

EDIT: It seems I caused some misunderstanding, so I will explain better. When I say group rows where double_1 and double_2 are the same..., I mean grouping the rows where both double_1 and double_2 match. For example, take these two rows:

+----+------+----+----------+----------+
| id | from | to | double_1 | double_2 |
+----+------+----+----------+----------+
| 1  | 1    | 0  | 1        | 1        |
| 2  | 1    | 0  | 2        | 2        |
+----+------+----+----------+----------+

They should not be grouped, as row 1's double_1 and double_2 are not the same as row 2's double_1 and double_2. Whereas these would be grouped:

+----+------+----+----------+----------+
| id | from | to | double_1 | double_2 |
+----+------+----+----------+----------+
| 1  | 1    | 0  | 1        | 2        |
| 2  | 1    | 0  | 1        | 2        |
+----+------+----+----------+----------+

I'm very sorry for the misunderstanding; I hope someone here is still willing to help me out.
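Purely as an illustration of one possible shape (assuming $id is a parameter supplied by the application), the three groupings could be produced separately and stacked with UNION ALL:

SELECT 'doubles' AS grp, double_1, double_2, NULL AS from_id, NULL AS to_id, COUNT(*) AS cnt
FROM   messages
WHERE  `to` = 0
GROUP  BY double_1, double_2

UNION ALL

SELECT 'from', NULL, NULL, `from`, NULL, COUNT(*)
FROM   messages
WHERE  `to` = @id AND double_1 = 0 AND double_2 = 0
GROUP  BY `from`

UNION ALL

SELECT 'to', NULL, NULL, NULL, `to`, COUNT(*)
FROM   messages
WHERE  `from` = @id AND double_1 = 0 AND double_2 = 0
GROUP  BY `to`;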

T-SQL Issues With Defining 'AS'

Posted: 24 Apr 2013 04:15 PM PDT

I am creating a fully dynamic application but have run into a bit of a hiccup. There are multiple 'undefined' fields that can be defined by the users. The only problem is redisplaying them. Currently they show as [UDF_0] to [UDF_7], i.e.:

SELECT [TABLE].[UDF_0],
       [TABLE].[UDF_1],
       [TABLE].[UDF_2],
       [TABLE].[UDF_3],
       [TABLE].[UDF_4],
       [TABLE].[UDF_5],
       [TABLE].[UDF_6],
       [TABLE].[UDF_7]
FROM [TABLE]

Would obviously display as:

UDF_0 || UDF_1 || etc...  

What I would like is to display them with something to this effect, but I can't for the life of me figure it out.

EXECUTE PROCEDURE [dbo].[GetProjectInfo] @Project varchar(100)

AS
BEGIN

SELECT [TABLE].[UDF_0] AS (SELECT [TBL_PROJECT_DESC].[UDF_0]
                           FROM [TBL_PROJECT_DESC]
                           WHERE [TBL_PROJECT_DESC].[PROJECT_NAME]=@Project),
       --etc....
FROM [TABLE]

Desired display would be :

Field Name 0 || Field Name 1 || etc...  
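A column alias cannot itself be a subquery, so one hedged sketch (reusing the names from the question; the @Project value is a placeholder) is to build the statement dynamically and let the label row in TBL_PROJECT_DESC supply the aliases:

DECLARE @Project varchar(100) = 'SomeProject';   -- placeholder value
DECLARE @sql nvarchar(max);

SELECT @sql = N'SELECT '
    + N'[TABLE].[UDF_0] AS ' + QUOTENAME(d.[UDF_0]) + N', '
    + N'[TABLE].[UDF_1] AS ' + QUOTENAME(d.[UDF_1]) + N', '
    -- ...repeat for UDF_2 through UDF_6...
    + N'[TABLE].[UDF_7] AS ' + QUOTENAME(d.[UDF_7])
    + N' FROM [TABLE]'
FROM [TBL_PROJECT_DESC] d
WHERE d.[PROJECT_NAME] = @Project;

EXEC sp_executesql @sql;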

Updating a table with more than 850 million rows of data

Posted: 24 Apr 2013 08:44 PM PDT

I have been tasked with writing an update query to update a table with more than 850 million rows of data. Here are the table structures:

Source Tables :

CREATE TABLE [dbo].[SourceTable1](
    [ProdClassID] [varchar](10) NOT NULL,
    [PriceListDate] [varchar](8) NOT NULL,
    [PriceListVersion] [smallint] NOT NULL,
    [MarketID] [varchar](10) NOT NULL,
    [ModelID] [varchar](20) NOT NULL,
    [VariantId] [varchar](20) NOT NULL,
    [VariantType] [tinyint] NULL,
    [Visibility] [tinyint] NULL,
 CONSTRAINT [PK_SourceTable1] PRIMARY KEY CLUSTERED
 (
    [VariantId] ASC,
    [ModelID] ASC,
    [MarketID] ASC,
    [ProdClassID] ASC,
    [PriceListDate] ASC,
    [PriceListVersion] ASC
 ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
   IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON,
   ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90)
)

CREATE TABLE [dbo].[SourceTable2](
    [Id] [uniqueidentifier] NOT NULL,
    [ProdClassID] [varchar](10) NULL,
    [PriceListDate] [varchar](8) NULL,
    [PriceListVersion] [smallint] NULL,
    [MarketID] [varchar](10) NULL,
    [ModelID] [varchar](20) NULL,
 CONSTRAINT [PK_SourceTable2] PRIMARY KEY CLUSTERED
 (
    [Id] ASC
 ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
   IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON,
   ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 91) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]

SourceTable1 contains 52 million rows of data and SourceTable2 contains 400,000 rows of data.

Here is the TargetTable structure

CREATE TABLE [dbo].[TargetTable](
    [ChassisSpecificationId] [uniqueidentifier] NOT NULL,
    [VariantId] [varchar](20) NOT NULL,
    [VariantType] [tinyint] NULL,
    [Visibility] [tinyint] NULL,
 CONSTRAINT [PK_TargetTable] PRIMARY KEY CLUSTERED
 (
    [ChassisSpecificationId] ASC,
    [VariantId] ASC
 ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
   ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 71) ON [PRIMARY]
) ON [PRIMARY]

The relationship between these tables are as follows:

  • SourceTable1.VariantID is related to TargetTable.VariantID
  • SourceTable2.ID is related to TargetTable.ChassisSpecificationId

The update requirement is as follows:

  1. Get the values for VariantType and Visibility from SourceTable1 for each VariantID, having the maximum value in the PriceListVersion column.
  2. Get the value of the ID column from SourceTable2 where the values of ModelID, ProdClassID, PriceListDate and MarketID match with that of SourceTable1.
  3. Now update the TargetTable with the values for VariantType and Visibility where the ChassisspecificationID matches SourceTable2.ID and VariantID matches SourceTable1.VariantID

The challenge is to do this update on live production, with minimum locking. Here is the query I have put together.

-- Check if Temp table already exists and drop if it does
IF EXISTS(
        SELECT NULL
        FROM tempdb.sys.tables
        WHERE name LIKE '#CSpec%'
      )
BEGIN
    DROP TABLE #CSpec;
END;

-- Create Temp table to assign sequence numbers
CREATE TABLE #CSpec
(
    RowID int,
    ID uniqueidentifier,
    PriceListDate varchar(8),
    ProdClassID varchar(10),
    ModelID varchar(20),
    MarketID varchar(10)
);

-- Populate temp table
INSERT INTO #CSpec
SELECT ROW_NUMBER() OVER (ORDER BY MarketID) RowID,
       CS.id,
       CS.pricelistdate,
       CS.prodclassid,
       CS.modelid,
       CS.marketid
FROM   dbo.SourceTable2 CS
WHERE  CS.MarketID IS NOT NULL;

-- Declare variables to hold values used for updates
DECLARE @min            int,
        @max            int,
        @ID             uniqueidentifier,
        @PriceListDate  varchar(8),
        @ProdClassID    varchar(10),
        @ModelID        varchar(20),
        @MarketID       varchar(10);

-- Set minimum and maximum values for looping
SET @min = 1;
SET @max = (SELECT MAX(RowID) FROM #CSpec);

-- Populate other variables in a loop
WHILE @min <= @max
BEGIN
    SELECT
        @ID = ID,
        @PriceListDate = PriceListDate,
        @ProdClassID = ProdClassID,
        @ModelID = ModelID,
        @MarketID = MarketID
    FROM #CSpec
    WHERE RowID = @min;

    -- Use CTE to get relevant values from SourceTable1
    ;WITH Variant_CTE AS
    (
    SELECT  V.variantid,
            V.varianttype,
            V.visibility,
            MAX(V.PriceListVersion) LatestPriceVersion
    FROM    SourceTable1 V
    WHERE   V.ModelID = @ModelID
            AND V.ProdClassID = @ProdClassID
            AND V.PriceListDate = @PriceListDate
            AND V.MarketID = @MarketID
    GROUP BY
            V.variantid,
            V.varianttype,
            V.visibility
    )
    -- Update the TargetTable with the values obtained in the CTE
    UPDATE      SV
    SET         SV.VariantType = VC.VariantType,
                SV.Visibility = VC.Visibility
    FROM        spec_variant SV
    INNER JOIN  TargetTable VC
    ON          SV.VariantId = VC.VariantId
    WHERE       SV.ChassisSpecificationId = @ID
                AND SV.VariantType IS NULL
                AND SV.Visibility IS NULL;

    -- Increment the value of loop variable
    SET @min = @min + 1;
END
-- Clean up
DROP TABLE #CSpec

It takes about 30 seconds when I limit it to 10 iterations by hardcoding the value of the @max variable. However, when I increase the limit to 50 iterations, it takes almost 4 minutes to complete. I am concerned that 400,000 iterations will take multiple days on production. That might still be acceptable, as long as the TargetTable does not get locked up and prevent users from accessing it.
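For comparison, a hedged set-based sketch of the same requirement (using the table names above; untested, and it would probably still need batching, e.g. by ranges of ChassisSpecificationId, to keep locks short):

;WITH LatestVariant AS
(
    SELECT  V.VariantId, V.VariantType, V.Visibility,
            V.ModelID, V.ProdClassID, V.PriceListDate, V.MarketID,
            ROW_NUMBER() OVER (PARTITION BY V.VariantId, V.ModelID, V.ProdClassID,
                                            V.PriceListDate, V.MarketID
                               ORDER BY V.PriceListVersion DESC) AS rn
    FROM    dbo.SourceTable1 V
)
UPDATE  T
SET     T.VariantType = LV.VariantType,
        T.Visibility  = LV.Visibility
FROM    dbo.TargetTable T
JOIN    dbo.SourceTable2 CS ON CS.Id = T.ChassisSpecificationId
JOIN    LatestVariant LV    ON LV.VariantId     = T.VariantId
                           AND LV.ModelID       = CS.ModelID
                           AND LV.ProdClassID   = CS.ProdClassID
                           AND LV.PriceListDate = CS.PriceListDate
                           AND LV.MarketID      = CS.MarketID
                           AND LV.rn = 1
WHERE   T.VariantType IS NULL
  AND   T.Visibility  IS NULL;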

All inputs are welcome.

Thanks, Raj

How to allow a user to create databases only with a certain prefix?

Posted: 24 Apr 2013 06:08 PM PDT

We have a multi-user Linux server. I'd like to give each user the ability to create and manage databases at will but, to keep things sane, require that every database name they create starts with their user name as a prefix. For example, user joe would only be allowed to create databases like joe_app1 and joe_app2, not app1 or jack_app1. Is this possible in PostgreSQL? Or is there another way to separate name-spaces for user-created databases?

What could cause the wrong ID to be inserted?

Posted: 24 Apr 2013 12:12 PM PDT

I have a SQL Server 2008 server (build 10.0.5500). Earlier this week I ran this on a table that already had data in it:

delete from dbo.table
go

dbcc checkident('dbo.table', reseed, 0)

When the user went to create a new record later on, somehow an ID of 0 was inserted into the ID column, instead of the 1 SQL Server usually puts in if identity(1,1) is configured for the ID.

This caused some weird issues, but clearing the data and running the reseed resulted in a 1 being inserted, as expected. I can't duplicate the issue.

For reference, here's the general format for our save sp's:

alter procedure dbo._TableSave
    @pk_id int,
    @field varchar(50)
as
    if (@pk_id is null)
    begin
        set nocount on;

        insert into dbo.Table
        (
            Field
        )
        values
        (
            @field
        );
        select scope_identity();
    end
    else
    begin
        update dbo.Table
        set Field = @field
        where PK_ID = @pk_id

        select @pk_id
    end

Does anyone know what could cause SQL Server to insert a 0 in the ID when it should have been a 1?

Transactions, references and how to enforce double entry bookkeeping? (PG)

Posted: 24 Apr 2013 06:19 PM PDT

Double entry bookkeeping is

a set of rules for recording financial information in a financial accounting system in which every transaction or event changes at least two different nominal ledger accounts.

An account can be "debited" or "credited", and the sum of all credits must be equal to the sum of all debits.

How would you implement this in a Postgres database? Specifying the following DDL:

CREATE TABLE accounts(
    account_id serial NOT NULL PRIMARY KEY,
    account_name varchar(64) NOT NULL
);

CREATE TABLE transactions(
    transaction_id serial NOT NULL PRIMARY KEY,
    transaction_date date NOT NULL
);

CREATE TABLE transactions_details(
    id serial8 NOT NULL PRIMARY KEY,
    transaction_id integer NOT NULL
        REFERENCES transactions (transaction_id)
        ON UPDATE CASCADE
        ON DELETE CASCADE
        DEFERRABLE INITIALLY DEFERRED,
    account_id integer NOT NULL
        REFERENCES accounts (account_id)
        ON UPDATE CASCADE
        ON DELETE RESTRICT
        NOT DEFERRABLE INITIALLY IMMEDIATE,
    amount decimal(19,6) NOT NULL,
    flag varchar(1) NOT NULL CHECK (flag IN ('C','D'))
);

Note: the transactions_details table does not specify an explicit debit/credit account, because the system should be able to debit/credit more than one account in a single transaction.

This DDL creates the following requirement: After a database transaction commits on the transactions_details table, it must debit and credit the same amount for each transaction_id, e.g:

INSERT INTO accounts VALUES (100, 'Accounts receivable');
INSERT INTO accounts VALUES (200, 'Revenue');

INSERT INTO transactions VALUES (1, CURRENT_DATE);

-- The following must succeed
BEGIN;
    INSERT INTO transactions_details VALUES (DEFAULT, 1, 100, '1000'::decimal, 'D');
    INSERT INTO transactions_details VALUES (DEFAULT, 1, 200, '1000'::decimal, 'C');
COMMIT;

-- But this must raise some error
BEGIN;
    INSERT INTO transactions_details VALUES (DEFAULT, 1, 100, '1000'::decimal, 'D');
    INSERT INTO transactions_details VALUES (DEFAULT, 1, 200, '500'::decimal, 'C');
COMMIT;

Is it possible to implement this in a PostgreSQL database? Without specifying additional tables to store trigger states.
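One hedged way to express the rule in PostgreSQL, using only a deferred constraint trigger and no extra state tables (the function and trigger names below are mine):

CREATE OR REPLACE FUNCTION check_transaction_balance() RETURNS trigger AS $$
DECLARE
    diff numeric;
BEGIN
    -- Sum debits minus credits for the transaction touched by this row.
    SELECT COALESCE(SUM(CASE WHEN flag = 'D' THEN amount ELSE -amount END), 0)
      INTO diff
      FROM transactions_details
     WHERE transaction_id = NEW.transaction_id;

    IF diff <> 0 THEN
        RAISE EXCEPTION 'transaction % is unbalanced by %', NEW.transaction_id, diff;
    END IF;
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

-- Deferred, so the check runs at COMMIT time, after all detail rows are in place.
CREATE CONSTRAINT TRIGGER trg_check_balance
    AFTER INSERT OR UPDATE ON transactions_details
    DEFERRABLE INITIALLY DEFERRED
    FOR EACH ROW
    EXECUTE PROCEDURE check_transaction_balance();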

Optimization of a select statement

Posted: 24 Apr 2013 07:04 PM PDT

I'm using MySQL and have a table user_data like this:

user_id         int(10) unsigned
reg_date        int(10) unsigned
carrier         char(1)

reg_date is the Unix timestamp of the registration date, and carrier is the carrier type, whose possible values can ONLY be 'D', 'A' or 'V'.

I need to write a SQL statement that selects the number of users registered per carrier on each day from 2013/01/01 to 2013/01/31. The desired result would look like:

2013/01/01   D   10
2013/01/01   A   31
2013/01/01   V   24
2013/01/02   D    9
2013/01/02   A   23
2013/01/02   V   14
....
2013/01/31   D   11
2013/01/31   A   34
2013/01/31   V   22

Can anyone help me with this question? I'm required to give the BEST answer, which means I can add an index if necessary, but I need to keep the query efficient.

This is what I have right now:

select FLOOR((FROM_UNIXTIME(reg_date)) / 1000000) as reg_day, carrier, count(user_id) as user_count
from user_data
where reg_date >= UNIX_TIMESTAMP('2013-01-01 00:00:00')
  and reg_date < UNIX_TIMESTAMP('2013-02-01 00:00:00')
group by reg_day, carrier;
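A hedged variation (same table; the index name is mine): a covering index on (reg_date, carrier, user_id) lets the range scan and aggregation be satisfied from the index alone, and DATE(FROM_UNIXTIME(...)) reads more clearly than the FLOOR arithmetic:

ALTER TABLE user_data ADD INDEX idx_regdate_carrier (reg_date, carrier, user_id);

SELECT DATE(FROM_UNIXTIME(reg_date)) AS reg_day,
       carrier,
       COUNT(*) AS user_count
FROM   user_data
WHERE  reg_date >= UNIX_TIMESTAMP('2013-01-01 00:00:00')
  AND  reg_date <  UNIX_TIMESTAMP('2013-02-01 00:00:00')
GROUP  BY reg_day, carrier;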

Thanks!

The question has been moved to here and please find all the updates in the link.

Rent weekly cost database design

Posted: 24 Apr 2013 01:03 PM PDT

I have a database containing a table BUILDING, where each row holds details about a building. Another table, BUILDING_UNIT, contains rows with details about a single building unit and refers with a foreign key to the owning BUILDING.ID. BUILDING_UNIT also refers to a table CATEGORY, which tells whether the BUILDING_UNIT is of category A, B, C or D, again with a foreign key pointing to CATEGORY.ID.

Now the final cost of renting a building unit depends on its building, its category, the number of days it is rented, and the specific period of the year. We only rent weekly, so I might as well use weeks only; however, I'd like the design to be as flexible as possible for the future.

I cannot convince myself of a table design which can represent this situation.

Do I have to use a table with coefficients for each day of the year, then a table with coefficients for A, B, C and D, then a table with coefficients for each building, and then somehow calculate a result?

Is there some standard and recognized implementation for problems of this type?

Thank you

EDIT: Note that the solution should abstract away the formula for calculating the cost, which might change in the future. However, I might be asked to make a specific week of the year for building unit X inside building Y cost $300, while the week after costs $600. Generally, building units inside the same building in the same week cost the same, but that might change in the future too, so I'd like to handle all of these specific cases already.
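As one hedged starting point (table and column names below are only illustrative, and the DBMS is not specified in the question): a rate table keyed by unit, category or building plus a date range keeps per-week overrides in data rather than in the formula, with the most specific matching row winning.

CREATE TABLE rental_rate (
    rate_id          integer PRIMARY KEY,
    building_id      integer REFERENCES building(id),
    category_id      integer REFERENCES category(id),
    building_unit_id integer REFERENCES building_unit(id),  -- NULL means "any unit"
    period_start     date NOT NULL,
    period_end       date NOT NULL,
    weekly_cost      decimal(10,2) NOT NULL
);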

How to add 'root' MySQL user back on MAMP?

Posted: 24 Apr 2013 12:03 PM PDT

In phpMyAdmin, I removed the 'root' user by mistake. I was also logged in as 'root'. How can I add the 'root' user back on MAMP?

Referencing database programmatically via T-SQL

Posted: 24 Apr 2013 12:09 PM PDT

I am writing a stored procedure that takes a database name as an argument and returns a table of that database's indexes and their fragmentation level. This stored procedure will live in our DBA database (the DB that contains tables the DBAs use for monitoring and optimizing things). The systems in question are all SQL Server 2008 R2 if that makes a difference.

I have the basic query worked out, but am stuck on trying to provide the indexes' actual names. To the best of my knowledge, that information is contained in each individual database's sys.indexes view. My specific problem is trying to reference that view programmatically from another database's stored procedure.

To illustrate, this is the portion of the query at issue:

FROM sys.dm_db_index_physical_stats(@db_id, NULL, NULL, NULL, NULL) p
INNER JOIN sys.indexes b ON p.[object_id] = b.[object_id]
     AND p.index_id = b.index_id
     AND b.index_id != 0

The query works fine when executed from the database identified by @db_id, because it is using the proper sys.indexes view. If I try to call this from the DBA database, however, it all comes up null, as the sys.indexes view is for the wrong database.

In more general terms, I need to be able to do something like this:

DECLARE @db_name NVARCHAR(255) = 'my_database';
SELECT * FROM @db_name + '.sys.indexes';

or

USE @db_name;  

I have tried switching databases or referencing other databases using combinations of string concatenation and OBJECT_NAME/OBJECT_ID/DB_ID functions and nothing seems to work. I'd appreciate any ideas the community might have, but suspect I will have to retool this stored procedure to reside in each individual database.
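For what it's worth, a hedged sketch of the dynamic-SQL route (the fragmentation columns selected here are illustrative): build the three-part reference to the other database's sys.indexes as a string and execute it with sp_executesql.

DECLARE @db_name sysname = N'my_database';
DECLARE @sql nvarchar(max) = N'
SELECT p.[object_id], b.name AS index_name, p.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(@db), NULL, NULL, NULL, NULL) p
INNER JOIN ' + QUOTENAME(@db_name) + N'.sys.indexes b
        ON p.[object_id] = b.[object_id]
       AND p.index_id = b.index_id
       AND b.index_id != 0;';

EXEC sp_executesql @sql, N'@db sysname', @db = @db_name;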

Thanks in advance for any suggestions.
