Wednesday, April 17, 2013

[how to] PostgreSQL, How to keep only one schema?

PostgreSQL, How to keep only one schema?

Posted: 17 Apr 2013 09:05 PM PDT

I don't want certain users to be able to create schemas; they should only use the public schema.

I'd like to revoke the permission to create schemas, but I can't find anything in the PostgreSQL 9.1 documentation. How can I do this?
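For reference, the kind of statement I've been guessing at is below, though I'm not sure it's the right approach (mydb and admin_role are placeholder names):

    REVOKE CREATE ON DATABASE mydb FROM PUBLIC;   -- schema creation requires the CREATE privilege on the database
    GRANT  CREATE ON DATABASE mydb TO admin_role; -- hypothetical role that should keep the privilege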

SQL Server Transaction Backups vs Logs

Posted: 17 Apr 2013 09:18 PM PDT

I've inherited a moderately-sized business-critical SQL Server 2008 database and am trying to wrap my head around backup planning. (I'm a developer, not a DBA.)

The way our system is set up right now there are two backup systems:

  1. Weekly full backups (.bak) and hourly transaction log (.trn) backups. We keep several sets of these backups and they are regularly shipped offsite.
  2. SQL Server logs (.ldf), with recovery model set to Full. This file is on a separate drive from the main .mdf file, but is otherwise not backed up.

In the event of an emergency restore (or when restoring backups to a dev machine), my procedure is to use the .bak files and then apply the .trn files. We have a script that makes this procedure relatively straightforward.
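For context, the script amounts to roughly the following sequence (the paths and file names here are made up):

    RESTORE DATABASE MyDb FROM DISK = 'D:\Backups\MyDb_full.bak' WITH NORECOVERY;
    RESTORE LOG MyDb FROM DISK = 'D:\Backups\MyDb_0900.trn' WITH NORECOVERY;
    RESTORE LOG MyDb FROM DISK = 'D:\Backups\MyDb_1000.trn' WITH RECOVERY;  -- last log brings the database online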

My questions:

  1. Is it possible to restore the database from the .ldf file? What is it even for?
  2. Is it needlessly redundant to have both of these transaction logs?
  3. Is it important to back up the .ldf file?

Moving one TempdB on 3 instance server

Posted: 17 Apr 2013 09:07 PM PDT

I have a large virtualized SQL Server (Full 2008 R2). I run 3 SQL instances and would like to relocate the tempdb database file to another location, splitting it off from the tempdb log file.

The trouble I am having is that even though the ALTER DATABASE statement executes successfully, restarting that particular instance's SQL Server service does not relocate the database file.
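For reference, the statement I'm running is along these lines (the logical file name and target path are just examples):

    ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDBData\tempdb.mdf');
    -- The new path is recorded in the catalog immediately, but tempdb is only
    -- recreated in the new location when that instance's service next starts.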

Do I have to restart the whole server to move this database?

Oracle SQL, how to filter on a set by a rank?

Posted: 17 Apr 2013 04:24 PM PDT

I am struggling with the following SQL query right now:

http://sqlfiddle.com/#!4/736a7/12

select uc.*
from usercredential uc
inner join users u on u.userid = uc.userid
inner join credential c on c.credentialid = uc.credentialid
inner join healthsystemcredential hsc on hsc.credentialid = c.credentialid
inner join accesslevel ac on hsc.hscredentialid in (
    -- Trying to figure out how to filter here?
    select 1 from dual
)
where c.fulfillmentmethod = 'Attach'
and c.display = 1
and uc.auditstatus <> 0
and ( (uc.auditstatus is null) or (uc.auditstatus = 1 and uc.lastmodified > uc.auditdate) )

Basically I am trying to get UserCredential records, which are many-to-one with Credential. Credential is one-to-many with HealthSystemCredential.

The HealthSystemCredential has a foreign key to AccessLevel, however this AccessLevel column in HealthSystemCredential is actually the Lowest Allowable Access Level. AccessLevel records have a Type Rank between 1 at the lowest and 4 at the highest.

Example: A HealthSystemCredential with an AccessLevel record that has a Type Rank of 2 should be fetched when I filter my query by AccessLevels with Types of 2, 3, or 4.

The problem is I can't figure out how to do this in one query. Is there some way I can get a DENSE_RANK column in a subquery of AccessLevels, apply my filter in the subquery and join it to the table somehow to give me all the applicable HealthSystemCredentials I want?
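Something like the following is the shape I have in mind, though I don't know if it is even close (column names such as accesslevelid and typerank are guesses standing in for the real ones):

    select hsc.*
    from healthsystemcredential hsc
    inner join (
        select al.accesslevelid,
               dense_rank() over (order by al.typerank) as lvl_rank
        from accesslevel al
    ) ranked on ranked.accesslevelid = hsc.accesslevelid
    -- keep credentials whose lowest allowable access level is rank 2 or higher
    where ranked.lvl_rank >= 2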

MySQL shared server regular lockup

Posted: 17 Apr 2013 04:20 PM PDT

I have a web/database server with 5 websites on it, all of which regularly execute fairly complex queries.

Some of these queries appear in the slow log and take around 0.6-0.8 seconds. However, I have a regular occurrence of such queries coming from 2 specific websites, with 2 separate databases, that seem to lock each other up. What I find in this case in top is a MySQL process occupying about 20-30% of a CPU and about 30 Apache processes taking up about 30% CPU each. This situation goes away after about 15-20 minutes, or if I restart the MySQL server. The slow query log shows more slow queries at this time.

The problem never seems to involve queries to only a single database, but always the same 2 databases (and websites).

The server is a VPS with 4 GB of RAM and 5 CPU cores, running Ubuntu 12.04.

Any idea how I could diagnose what is happening?
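For what it's worth, the only snapshots I know to grab while it's happening are along these lines (both standard MySQL commands):

    SHOW FULL PROCESSLIST;        -- what each connection is doing and how long it has been running
    SHOW ENGINE INNODB STATUS\G   -- lock waits and latest deadlock, if the tables involved are InnoDB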

What's the proper term for the diverse systems to which a DW connects?

Posted: 17 Apr 2013 03:39 PM PDT

The data warehouse I've built gets data from transactional databases, web services, flat files, and a few other bits and pieces. It also writes back to some of these, and to analysis tools. I'm building a unified system for passing changes between these different systems, and it occurs to me that I don't have a single simple term that covers them all.

  • "Database" and "service" are correct for some, but not all.
  • "Data source" implies that these only supply data, which is not correct.
  • "System" and "data system" are correct, but very vague.
  • "End point" fits my mental model of spokes connected to the data warehouse hub. It's not true in that some of these systems to talk to each other directly.

What term do you use for all the systems you integrate, as part of a data warehouse or just in general? Ideally I'd like a single word which will make intuitive sense to both DBAs and non-technical parties.

problem running xp_cmdshell when Application Name set in connection string

Posted: 17 Apr 2013 02:58 PM PDT

I have an app on 10 or so servers that runs a few xp_cmdshell statements against MSSQL 2008. It works fine on all the servers except one. To make things worse, I can run all the commands in SQL Server Management Studio, but from the app they don't work. I even made a temporary app to test and it works fine! But in the deployed app, I get a simple SQL error, "Access Denied". I have narrowed it down to the connection string: if I include the application name

Data Source={0};Initial Catalog={1};Persist Security Info=True;User ID={2};Password={3};Application Name=TheGroovyApp  

It throws access denied only when calling xp_cmdshell; normal SQL statements work fine. But if I remove the application name

Data Source={0};Initial Catalog={1};Persist Security Info=True;User ID={2};Password={3}  

It works fine for both normal SQL statements and calls to xp_cmdshell. The strange thing is, it's only happening on one of the ten servers. The only difference is that this server has SP1 and the others do not.

@@VERSION returns

Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (X64)   Apr  2 2010 15:48:46   Copyright (c) Microsoft Corporation  Standard Edition (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1) (VM)   

I am thinking that there is some sort of authentication that can be granted to applications, but I can't seem to find anything. I can replicate it in SQL Server Management Studio by adding

Application Name=TheGroovyApp  

to the Additional Connection Parameters tab on the Connect to Database Engine dialog when you create a new query or change its connection.

The simple test statement I use is

EXEC master..xp_cmdshell 'DIR F:\SomeDirectory'  

If anyone could shed some light on what is happening it would be much appreciated.

How should I set up these tables in my relational database?

Posted: 17 Apr 2013 06:31 PM PDT

I'm working on a project where a person can create a survey with custom questions and display the questions in whatever order they choose. They can break the questions into sections. The problem I'm having is that there are 2 or more groups of people. Each group can have the same question in different sections and/or in a different order. My tables right now are:

Group:    key(gId), gName
Question: key(qId), qName
Section:  key(sId), sName
QOrder:   key(gId, qId), sId, qOrder

Is this okay or is there a better/more efficient way of doing this? Any help would be appreciated.
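If it helps, here is the same design written out as DDL (a sketch only; the types are placeholders, and I've called the first table Groups since GROUP is a reserved word):

    CREATE TABLE Groups   (gId INT PRIMARY KEY, gName VARCHAR(100));
    CREATE TABLE Question (qId INT PRIMARY KEY, qName VARCHAR(255));
    CREATE TABLE Section  (sId INT PRIMARY KEY, sName VARCHAR(100));

    -- One row per (group, question): which section it appears in and its position
    CREATE TABLE QOrder (
        gId    INT NOT NULL REFERENCES Groups(gId),
        qId    INT NOT NULL REFERENCES Question(qId),
        sId    INT NOT NULL REFERENCES Section(sId),
        qOrder INT NOT NULL,
        PRIMARY KEY (gId, qId)
    );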

How does SQL Server generate a query execution plan that adds up to 6,000%?

Posted: 17 Apr 2013 01:55 PM PDT

weird execution plan

Today I was on The Heap and was looking at a query plan for something I thought could be improved. However, it produced something that shook my belief in the SQL Server query optimizer. Can I still trust it if it cannot even count to 100%?

Characteristics of the table:

  • clustered on a non-identity column
  • 12 indexes, one of them being the date_entered column in question
  • 60,000 records
  • 26 columns of various types and length
  • PAGE compressed table

Has anyone seen this before and what causes the plan to look so skewed?


And below from SQL Sentry Plan Explorer

SQL Sentry Plan Explorer - Top Operations view (image)

exec xp_cmdshell 'sc \\<servername/serverip> start <service_name>'

Posted: 17 Apr 2013 03:37 PM PDT

When I execute this from SQL Server against another machine (in the same subnet) I get "Access is denied". But when I ran it from the command prompt (sc \\<servername> start <service_name>) it worked.

Please advise why I am getting access denied when I run it from SQL Server using exec xp_cmdshell.

Best practice for copying tables from server to server

Posted: 17 Apr 2013 07:12 PM PDT

Not being a DBA and not having sysadmin privileges to do a backup/restore, would the following be a good solution for copying a set of tables? (I have a linked server from serverA to serverB)

copyTables

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[copyTables]
AS
-- NOCOUNT ON added to prevent extra result sets from interfering
-- with SELECT statements. XACT_ABORT ON to terminate the transaction
-- and rollback if a run-time error occurs.
SET NOCOUNT, XACT_ABORT ON

BEGIN
    DECLARE @serverName varchar(50), @dbName varchar(50), @schemaName varchar(50)

    SET @serverName = QUOTENAME('serverB')
    SET @dbName = QUOTENAME('db')
    SET @schemaName = QUOTENAME('dbo')

    BEGIN TRY
        BEGIN TRANSACTION

        EXEC [dbo].[copyTable] @serverName, @dbName, @schemaName, 'tbl1', 'copyLog'
        EXEC [dbo].[copyTable] @serverName, @dbName, @schemaName, 'tbl2', 'copyLog'
        EXEC [dbo].[copyTable] @serverName, @dbName, @schemaName, 'tbl3', 'copyLog'
        ...

        COMMIT TRANSACTION
    END TRY
    BEGIN CATCH
        -- Insert error into log table

        ROLLBACK
    END CATCH
END
GO

copyTable

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[copyTable]
    @serverName varchar(50),
    @dbName varchar(50),
    @schemaName varchar(50),
    @tableName varchar(50),
    @logTableName varchar(50)
AS
-- NOCOUNT ON added to prevent extra result sets from interfering
-- with SELECT statements. XACT_ABORT ON to terminate the transaction
-- and rollback if a run-time error occurs.
SET NOCOUNT, XACT_ABORT ON

BEGIN
    DECLARE @localFullName varchar(200), @localShortName varchar(150),
        @foreignFullName varchar(200), @logShortName varchar(150);

    SET @localFullName = QUOTENAME(@dbName) + '.' + QUOTENAME(@schemaName)
        + '.' + QUOTENAME(@tableName);
    SET @localShortName = QUOTENAME(@schemaName) + '.' + QUOTENAME(@tableName);
    SET @foreignFullName = QUOTENAME(@serverName) + '.' + QUOTENAME(@dbName)
        + '.' + QUOTENAME(@schemaName) + '.' + QUOTENAME(@tableName);
    SET @logShortName = QUOTENAME(@logTableName) + '.' + QUOTENAME(@schemaName)
        + '.' + QUOTENAME(@tableName);

    IF EXISTS
    (
        SELECT *
        FROM sys.objects
        WHERE object_id = OBJECT_ID(@localShortName)
            AND type in (N'U')
    )
    BEGIN
        DROP TABLE @localShortName;

        SELECT *
            INTO @localFullName
            FROM @foreignFullName;

        INSERT INTO @logShortName (stamp, [message])
        VALUES
            (
                GETDATE(),
                'Table ' + @foreignFullName + ' was copied'
            );
    END
END

SQL Server 2012 AlwaysOn: need to automatically add databases by script- T-sql or PowerShell

Posted: 17 Apr 2013 02:35 PM PDT

I've installed and successfully configured our SQL Server 2012 AlwaysOn 2-node servers for our new "Intranet" that is coming out. I've gotten AlwaysOn working great, and our Front End servers for the Intranet will be using SharePoint 2013. The glitch is that SharePoint 2013 is configured to add databases automatically to our SQL Server 2012 back end, but NOT to AlwaysOn. In reading about this and in contacting Microsoft MSDN support, the default answer is "you must manually find, select, back-up and then add those new databases individually to get them into AlwaysOn."

But wait; that can be quite a task: constantly checking the SQL Server back-end servers to see what databases were created, then having to add them into AlwaysOn, 24/7! I'm looking for a script or process that will check for new databases, back those new databases up in FULL mode (for being added to AlwaysOn, of course), then add those databases to AlwaysOn, all automatically. Or have this run every 1-2 hours, without user intervention?

What I've come up with so far is this script that actually identifies the newly-added databases, (not yet in AlwaysOn), and then backs them up to a shared location. My next task is to find those newly-added databases and through the various processes needed, get them added to AlwaysOn. This will involve some sort of looping action, I imagine. I'm not a T-SQL/scripting guru; is there any solution or script that I might access that would do this? (add databases to AlwaysOn automatically)?

Please advise; I'm sure I'm not the first person to have this issue. I have seen previous posts on various Internet sites (including this one!), and the solution is either incorrect, or states something like "sure, go ahead and just script that!". Thanks, but I need just a little more detail there.

Thanks again,

-Allen

DECLARE @name VARCHAR(50)      -- database name
DECLARE @path VARCHAR(256)     -- path for backup files
DECLARE @fileName VARCHAR(256) -- filename for backup

-- specify database backup directory
SET @path = '\\atel-web-be2\backups\'

DECLARE db_cursor CURSOR FOR
select name from sys.databases
where group_database_id is null and replica_id is null
    and name not in('master','model','msdb','tempdb')

OPEN db_cursor
FETCH NEXT FROM db_cursor INTO @name

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @fileName = @path + @name + '.BAK'
    BACKUP DATABASE @name TO DISK = @fileName

    FETCH NEXT FROM db_cursor INTO @name
END

CLOSE db_cursor
DEALLOCATE db_cursor
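For the part I'm still missing, my understanding is that the per-database steps would look roughly like the following (the availability group and database names are placeholders, and restoring the backups on the secondaries is not shown):

    -- On the primary replica, after the full (and a log) backup have been taken:
    ALTER AVAILABILITY GROUP [MyAG] ADD DATABASE [NewSharePointDb];

    -- On each secondary replica, after restoring those backups WITH NORECOVERY:
    -- ALTER DATABASE [NewSharePointDb] SET HADR AVAILABILITY GROUP = [MyAG];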

How can I join one to one to (many to one) without using group_concat

Posted: 17 Apr 2013 11:36 AM PDT

I have a table structure as follows:

--
-- Table structure for table `emails`
--

CREATE TABLE IF NOT EXISTS `emails` (
  `ID` int(5) NOT NULL AUTO_INCREMENT,
  `email` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
  `sent` smallint(1) NOT NULL DEFAULT '0',
  PRIMARY KEY (`ID`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci AUTO_INCREMENT=5062 ;

-- --------------------------------------------------------

--
-- Table structure for table `ips`
--

CREATE TABLE IF NOT EXISTS `ips` (
  `ID` int(11) NOT NULL AUTO_INCREMENT,
  `ip` varchar(255) NOT NULL,
  PRIMARY KEY (`ID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=7534 ;

-- --------------------------------------------------------

--
-- Table structure for table `user_items`
--

CREATE TABLE IF NOT EXISTS `user_items` (
  `ID` int(10) NOT NULL AUTO_INCREMENT COMMENT 'Allows sorting by last added..',
  `name` varchar(255) NOT NULL,
  `owner` varchar(255) NOT NULL,
  `folder` int(10) NOT NULL,
  `version` int(5) NOT NULL,
  PRIMARY KEY (`ID`),
  KEY `name` (`name`),
  KEY `folder` (`folder`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=10431 ;

-- --------------------------------------------------------

--
-- Table structure for table `data`
--

CREATE TABLE IF NOT EXISTS `data` (
  `ID` int(10) NOT NULL AUTO_INCREMENT,
  `name` varchar(255) NOT NULL,
  `info` varchar(255) DEFAULT NULL,
  `inserted` varchar(255) NOT NULL,
  `version` int(5) NOT NULL,
  PRIMARY KEY (`ID`),
  KEY `inserted` (`inserted`),
  KEY `version` (`version`),
  KEY `name_version` (`name`,`version`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=7207 ;

-- --------------------------------------------------------

--
-- Table structure for table `data_emails`
--

CREATE TABLE IF NOT EXISTS `data_emails` (
  `ID` int(11) NOT NULL AUTO_INCREMENT,
  `email_id` int(5) NOT NULL,
  `name` varchar(255) NOT NULL,
  `version` int(5) NOT NULL,
  `time` int(255) NOT NULL,
  PRIMARY KEY (`ID`),
  KEY `version` (`version`),
  KEY `name_version` (`name`,`version`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=9849 ;

-- --------------------------------------------------------

--
-- Table structure for table `data_ips`
--

CREATE TABLE IF NOT EXISTS `data_ips` (
  `ID` int(5) NOT NULL AUTO_INCREMENT,
  `ns_id` int(5) NOT NULL,
  `name` varchar(255) NOT NULL,
  `version` int(5) NOT NULL,
  `time` int(255) NOT NULL,
  PRIMARY KEY (`ID`),
  KEY `version` (`version`),
  KEY `name_version` (`name`,`version`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=17988 ;

What I need to achieve is as follows: I need to get each user_item and the data, emails, and ips associated with it. user_items links to data, data_emails, and data_ips by the columns name and version.

data_emails and data_ips link to emails and ips respectively, with email_id/ns_id = ID.

Because multiple joins result in row multiplication, I have had to use nested subqueries. Because a user can have multiple emails and ips associated with an item, I have used group_concat to group them into a single column per row. I then explode this column in my PHP; this in itself seems inefficient, but I see no other way of doing it.
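To make that concrete, the query I use is roughly shaped like this (simplified from the real one; per the schema above, data_ips points at ips through ns_id):

    SELECT ui.ID, ui.name, ui.version,
           (SELECT GROUP_CONCAT(e.email)
              FROM data_emails de
              JOIN emails e ON e.ID = de.email_id
             WHERE de.name = ui.name AND de.version = ui.version) AS emails,
           (SELECT GROUP_CONCAT(i.ip)
              FROM data_ips di
              JOIN ips i ON i.ID = di.ns_id
             WHERE di.name = ui.name AND di.version = ui.version) AS ips
    FROM user_items ui;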

Now I am appreciative that I could speed this up with proper indexes (which I am getting my head fully around atm), but I am doubtful that there would be decent speed improvements. Others have likewise been doubtful.

Without nested subqueries and group_concat, no one on the Internet seems to be able to propose another way of doing this.

I ideally want to be able to do this in a SQL query as I want to integrate search functionality whereby a user for example can search for items he has which have a given email associated with them. This would be incredibly complex/resource intensive if the tables are not joined.

Does ANYONE have any proposals as to how a SQL join such as that described could be implemented without huge load times?

Thanks

How to optimize a query that's running slow on Nested Loops (Inner Join)

Posted: 17 Apr 2013 02:54 PM PDT

Consider the following simple query (only 3 tables involved)

SELECT
    l.sku_id AS ProductId,
    l.is_primary AS IsPrimary,
    v1.category_name AS Category1,
    v2.category_name AS Category2,
    v3.category_name AS Category3,
    v4.category_name AS Category4,
    v5.category_name AS Category5

FROM category c4
JOIN category_voc v4 ON v4.category_id = c4.category_id and v4.language_code = 'en'

JOIN category c3 ON c3.category_id = c4.parent_category_id
JOIN category_voc v3 ON v3.category_id = c3.category_id and v3.language_code = 'en'

JOIN category c2 ON c2.category_id = c3.category_id
JOIN category_voc v2 ON v2.category_id = c2.category_id and v2.language_code = 'en'

JOIN category c1 ON c1.category_id = c2.parent_category_id
JOIN category_voc v1 ON v1.category_id = c1.category_id and v1.language_code = 'en'

LEFT OUTER JOIN category c5 ON c5.parent_category_id = c4.category_id
LEFT OUTER JOIN category_voc v5 ON v5.category_id = c5.category_id and v5.language_code = @lang

JOIN category_link l on l.sku_id IN (SELECT value FROM #Ids) AND
(
    l.category_id = c4.category_id OR
    l.category_id = c5.category_id
)

WHERE c4.[level] = 4 AND c4.version_id = 5

This is a pretty simple query; the only confusing part is the last category join, which is written that way because category level 5 may or may not exist. At the end of the query I am looking for category info per product ID (SKU ID), and that's where the very large table category_link comes in. Finally, the table #Ids is just a temp table containing 10,000 Ids.

When executed, I get the following actual execution plan:

Actual Execution Plan

As you can see, almost 90% of the time is spent in the Nested Loops (Inner Join). Here's extra information on those Nested Loops:

Nested Loops (Inner Join)

Note that the table names don't match exactly because I edited the query table names for readability, but it's pretty easy to match them up (ads_alt_category = category). Is there any way to optimize this query? Also note that in production the temp table #Ids doesn't exist; it's a Table Valued Parameter with the same 10,000 Ids passed to the Stored Procedure.

Additional info:

  • category indices on category_id and parent_category_id
  • category_voc index on category_id, language_code
  • category_link index on sku_id, category_id

Buffer usage for index with no reads

Posted: 17 Apr 2013 02:14 PM PDT

Using SQL Server 2008 R2, the main transactional table in a vendor database is massive compared to all others and has about 14 indexes. Some of these indexes don't get used in our environment, but we can't remove them. That's not a problem, it's just something we have to live with.

The question I have is about why some of these low-or-no-read indexes seem to be using memory - more than other indexes on the same large table that get used often. I would have thought that most of the buffer cache would go towards objects that are read from frequently. The only thing happening on these indexes is write overhead.

For example, one of these low-read indexes is being allocated about 2 GB of memory (58% of the index's total size) and another has 1.7 GB of memory (27% of its size). Meanwhile, the monster-sized and well-used clustered index itself only has 4 gigs (2% of its size). A different NC index with lots of reads only has 100 GB in the buffer cache (5% of its size).
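For anyone wanting to reproduce the measurement, the per-index buffer usage can be pulled from sys.dm_os_buffer_descriptors with something along these lines (a sketch; the table name is a placeholder):

    SELECT i.name AS index_name,
           COUNT(*) * 8 / 1024 AS buffer_mb
    FROM sys.dm_os_buffer_descriptors b
    JOIN sys.allocation_units au ON au.allocation_unit_id = b.allocation_unit_id
    JOIN sys.partitions p
        ON (au.type IN (1, 3) AND au.container_id = p.hobt_id)
        OR (au.type = 2       AND au.container_id = p.partition_id)
    JOIN sys.indexes i ON i.object_id = p.object_id AND i.index_id = p.index_id
    WHERE b.database_id = DB_ID()
      AND p.object_id = OBJECT_ID('dbo.BigVendorTable')  -- placeholder table name
    GROUP BY i.name
    ORDER BY buffer_mb DESC;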

Looking at the physical stats, I can see the fragmentation is pretty bad. That's understandable from all the writes on this table and the non-sequential inserts. I'm not sure if it could be related to memory usage, though.

Looking at the operational stats for these indexes is also interesting.

  • The leaf_ghost_count reminds me that there are just about as many deletes taking place on this table as there are inserts (from a regular cleaning process).
  • One of these low-read indexes has some of the highest page_lock_wait values in the database. Perhaps that's only because of the writes?
  • 2 others have some of the highest page_io_latch_wait values. I understand that latch usage would be related to memory usage, so that makes sense.

I realize this is an abstract question and that I'm not providing many actual stats. I'm just curious about how SQL Server comes to these buffer cache usage decisions and wonder if anyone out there understands it.

How can I verify that a full database restore reflects the exact source database in SQL Server?

Posted: 17 Apr 2013 09:02 PM PDT

We are decommissioning our old SQL Server 2000 Ent. instance in favor of SQL Server 2008 R2 Ent. My planned migration path is:

  1. Terminate client connections (2000)
  2. Full Backup (2000)
  3. Restore (2008 R2)

I am being asked to provide conclusive proof that every single transaction "made it" and that the data is an exact replication of what existed on the 2000 instance.

I hope that I can use the following documentation as evidence:

However, if this is not sufficient, the only thing I can think of is to rip through every row of every table of every database and calculate a checksum (on both instances), as well as get row counts for every table in every database.
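If it comes to that, the per-table comparison I have in mind looks something like this, run on both instances with the results diffed (the table name is illustrative, and CHECKSUM(*) will not handle text/ntext/image columns):

    SELECT COUNT_BIG(*)              AS row_count,
           CHECKSUM_AGG(CHECKSUM(*)) AS table_checksum
    FROM dbo.SomeTable;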

Is there any better way to satisfy the "exact replica" verification criteria? I'm also open to better documentation.

How to recover data from corrupted innoDB table from a crashed mySQL server?

Posted: 17 Apr 2013 08:25 PM PDT

Some background:

I had a MySQL server running on Ubuntu Server. I don't remember exactly which versions it was running, but it was likely Ubuntu 12.04 with MySQL 5.5. Over the course of a day, connections from the client application to the server kept dropping. The next day, the server crashed and MySQL wouldn't start again. I was also getting lots of disk errors, so I decided to attempt to recover the data on a new machine.

What I tried:

I tried to recover the data with Ubuntu 12.04 running MySQL 5.5, but I couldn't get the service running. Before troubleshooting it, I tried the same steps on an Ubuntu Desktop machine running MySQL 5.1 that I had available, and they worked. Here's what I did:

  1. I copied /var/lib/mysql from the crashed server, to the new one
  2. I ran chown -R mysql:mysql /var/lib/mysql
  3. I ran service mysql start

The service started and everything seemed to work, except on one table. This is a table with BLOB data.

After some trial and error, I narrowed the problematic queries to record ids above a certain number (roughly, the last 100 records of a total of 7000 records).

Sample of some queries and outputs:

> select field1 from table1 where field1 = 6900
> (completes successfully)
> select field1 from table1 where field1 <= 6900
> (completes successfully)
> select field1 from table1 where field1 = 6901
> Error 2013 (HY000) at line 1: Lost connection to MySQL server during query

Then, I started trying to dump the data in an attempt to recover it:

> mysqldump -u root -p database1 table1 > table1.sql
> Lost connection to MySQL server during query when dumping table 'table1' at row 6642

I looked into mysqldump's options and decided to try to add -f and the output was different:

> Couldn't execute 'UNLOCK TABLES': MySQL server has gone away (2006)  

To make sure the problem's in this table, I tried:

> mysqldump -u root -p --ignore-table=database1.table1 database1 > database1.sql  

... and the command completed successfully.

I tried to delete one of the offending records and the delete completed successfully. I guess I could delete them all, from the first problematic one onwards, and minimize losses, but is there any hope of recovering them?

What would you suggest I do from here?

Sidenote: I was using Navicat to manage the server and make backups, and strangely enough, restoring that particular table with a recent backup also fails with the connection lost error, which leads me to believe that this table has been having problems for a while now, maybe due to disk errors and not specifically due to the crash.

How to permanently delete a row in oracle forever

Posted: 17 Apr 2013 01:13 PM PDT

I want to delete a record from a table, and no one should ever be able to recover the deleted data. I have done a delete operation, but the record is still in the undo segments, and one can use flashback to retrieve that row.

Executing Child packages without using master package

Posted: 17 Apr 2013 08:30 PM PDT

I have many Fact and Dimension packages which are executed using a VB script. Today one of my seniors proposed that these packages should be executed individually instead of creating a master package that executes the child packages via the Execute Package Task. His reasoning was that the Execute Package Task will create several threads, which will be performance intensive compared to executing the packages individually.

Leaving aside sharing of the config file and other issues, how can executing them individually be more performant than creating a master package for all the child packages?

By the way, the child packages are executed out of process, which is configured in the Execute Package Task.

Can SQL Server 2008 / 2012 have execute permission in a function for a user, but not for a table used in the function?

Posted: 17 Apr 2013 01:54 PM PDT

I have a function that returns a table. This function selects from DB1.dbo.table1.

user1 has SELECT permission on this function only, but when selecting from the function an error comes up stating that the security context does not allow access to DB1.
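A sketch of the setup, with placeholder names (the function lives in a different database from DB1):

    -- In the database that owns the function:
    GRANT SELECT ON dbo.fnGetData TO user1;   -- user1 may select from the function

    -- user1 has no grants at all in DB1, which the function reads from:
    SELECT * FROM dbo.fnGetData();
    -- fails with an error along the lines of: the server principal is not able
    -- to access the database 'DB1' under the current security context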

Can a user have permissions to read only functions but not tables?

Migrating from SQL Server 2005 to SQL Server 2008 R2 on a different box

Posted: 17 Apr 2013 12:26 PM PDT

We currently have a SQL Server 2005 instance hosting a lot of application specific databases (ArchiveManager, SolarWinds, Kaspersky etc). We are planning to deploy a new SQL Server 2008 R2 instance on a new server and slowly migrate across to this instance. All of the applications have fairly straightforward migration plans for changing the database location normally revolving around stopping services, detaching, attaching and rerunning installers.

All looks good so far. My only concern is with SQL Server Agent jobs and logins. Initially I was under the impression that these were all stored in the system databases and that I could simply move those. I understand now that this is not the case. Microsoft KB 918992 suggests running a script to generate a stored procedure, then calling it and running the generated script on the new instance. There seem to be a lot of caveats to it, though. Another option would be to use SSIS to generate a "Transfer Logins" and a "Transfer Jobs" package and run that (both editions are Standard), but I am unversed in SSIS so am a touch scared to use it.
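(As I understand it, the KB's approach boils down to creating the sp_help_revlogin procedure it supplies and then running something like the line below on the old instance, which prints out CREATE LOGIN statements with the original hashed passwords and SIDs to replay on the new one.)

    EXEC master.dbo.sp_help_revlogin;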

I suppose my question boils down to this:

  • Is the Microsoft KB the best way to migrate logins to the new SQL instance?
  • Is the only way to migrate the Agent jobs to the new instance via SSIS?

I am trying to get it as clear in my head as possible for my planning documentation so am very appreciative of any replies/guidance.

MSDTC on SQL Server 2012 with mount point for data disk

Posted: 17 Apr 2013 11:29 AM PDT

I have installed a new SQL Server 2012 instance on a cluster. The installation went fine, but when I added MSDTC as a resource of the instance, things went wrong. Right after adding it, MSDTC was online, but when I tried my first failover it refused to come back online. The error message was "MSDTC failed".

My configuration is: a physical disk (let's say J:) with two mount points, "DATA" and "LOG".

I have read about people who experienced this kind of error with mount points and SQL Server 2008, but nothing about 2012.

I tried creating an "MSDTC" folder on the physical drive (the J: one), but that didn't help.

Does anyone know something about it or know how to configure the MSDTC and force it to use the J: drive?

Thx a lot!

Backing up MySQL database

Posted: 17 Apr 2013 11:22 AM PDT

I have a quite large and rather important DB to back up.

First off, what's the best way to do it?

Secondly, the easiest way I know is to export it through phpMyAdmin, but I'm worried about the options. Although I'm quite sure "Add DROP DATABASE" will not drop the actual databases, I'm interested in what these options are for (see pic).
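For comparison, the other route I'm aware of is mysqldump from the command line, roughly like this (the database name and output file are placeholders):

    mysqldump -u root -p --single-transaction --routines --triggers mydb > mydb_backup.sql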

Thanks.

(screenshot: phpMyAdmin export options)

SQL 2008: Why is *any* user, with minimal permissions able to restart/stop the SQL instance through Management Studio

Posted: 17 Apr 2013 01:26 PM PDT

So I'm on a project where I need to help administer SQL Server 2008. I'm coming from a MySQL background.

Why is it that a basic user, with the most basic/default permissions, is able to restart or stop the entire SQL Server instance through Management Studio, but not view a simple table?

Optimizing queries on a range of timestamps (two columns)

Posted: 17 Apr 2013 03:45 PM PDT

I use PostgreSQL 9.1 on Ubuntu 12.04.

I need to select records inside a range of time: my table time_limits has two timestamp fields and one integer property. There are other info columns, but they are not used in any queries (only for storing information), so I won't discuss them.

create table time_limits (
    start_date_time timestamp,
    end_date_time timestamp,
    id_phi integer,
    primary key(start_date_time, end_date_time, id_phi)
);

This table now contains something like 2,000,000 records.

But the time spent was enormous for a query like:

select * from time_limits as t
where t.id_phi = 0
and t.start_date_time <= timestamp '2010-08-08 00:00:00'
and t.end_date_time >= timestamp '2010-08-08 00:05:00'

that I tried creating the inversed index:

create index idx_inversed on time_limits(id_phi,start_date_time,end_date_time);  

I have the impression that the times are better: the time needed for accessing the records in the middle of the table seems more reasonable (somewhere between 40 and 90 seconds).

But it's still several tens of seconds for values in the middle of the time range, and twice that when targeting the end of the table (chronologically speaking).

So I tried explain/analyze for the first time. You can see the result here on depesz.com.

QUERY PLAN
Bitmap Heap Scan on time_limits  (cost=4730.38..22465.32 rows=62682 width=36) (actual time=44.446..44.446 rows=0 loops=1)
   Recheck Cond: ((id_phi = 0) AND (start_date_time <= '2011-08-08 00:00:00'::timestamp without time zone) AND (end_date_time >= '2011-08-08 00:05:00'::timestamp without time zone))
   ->  Bitmap Index Scan on idx_time_limits_phi_start_end  (cost=0.00..4714.71 rows=62682 width=0) (actual time=44.437..44.437 rows=0 loops=1)
         Index Cond: ((id_phi = 0) AND (start_date_time <= '2011-08-08 00:00:00'::timestamp without time zone) AND (end_date_time >= '2011-08-08 00:05:00'::timestamp without time zone))
 Total runtime: 44.507 ms
(5 rows)

What could I do to optimize the search? You can see all the time is spent scanning the two timestamp columns once id_phi is set to 0. And I don't understand the big scan (60K rows!) on the timestamps; aren't they indexed? (Both by the primary key and the idx_inversed I created.)

Should I change from timestamp types to something else?

I have read (only a little) about GiST and GIN indexes; I just know they can be more efficient under certain conditions for custom types. Should I go this way to get an efficient time index?

Payment methods conceptual and logical model

Posted: 17 Apr 2013 09:02 PM PDT

I need to create conceptual and logical (normalized) models of a parking house according to the requirements below. It looks to me like a very simple concept that doesn't need all tables to have relationships, but then they could not be modelled as entities. I tried asking this on Stack Overflow but got no feedback for a couple of days now.

  1. Three possible methods of payment:

    • a ticket paid on leave,
    • prepaid card with cash credit,
    • prepaid card with "time credit",
  2. Price of ticket depends on time:

    1. 1-2hrs = $0,
    2. 3hrs = $2,
    3. 4hrs = $4,
    4. afterwards x hrs = $(x+1), but max. $20 for 24 hrs (easiest to put these into 24 rows, right? see the tariff sketch below).
  3. A ticket (a) may be allowed a 20% discount (e.g. for shopping in the mall).

  4. Cash credit card uses same prices as tickets but with 40% discount.
  5. Cash credit card can be reloaded.
  6. Time card is paid once and allows parking while valid.

The problem is I don't know how to put those highlighted relations into the logical DB model, or whether to put them there at all. Is it OK practice to have isolated tables in the design?
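To make the pricing part concrete, the 24-row idea from point 2 above would amount to something like this (just a sketch; names and types are placeholders):

    -- One row per started hour, up to the 24-hour cap
    CREATE TABLE tariff (
        hours     INT PRIMARY KEY,      -- 1 .. 24
        price_usd DECIMAL(5,2) NOT NULL
    );
    -- e.g. (1, 0), (2, 0), (3, 2), (4, 4), ..., (24, 20)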

How to check the status of SQL Server?

Posted: 17 Apr 2013 08:21 PM PDT

I need to check the status of SQL Server. When I say status, it's not the status of each database, but rather of the whole SQL Server.

I need to know what queries are running on the server, the number of connections, and which queries are taking too long.
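A sketch of the kind of thing I'm after, using the standard DMVs (I'm not sure these are the right ones):

    -- Currently executing requests, longest-running first
    SELECT r.session_id,
           r.status,
           r.total_elapsed_time,
           t.text AS query_text
    FROM sys.dm_exec_requests r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    ORDER BY r.total_elapsed_time DESC;

    -- Number of user connections
    SELECT COUNT(*) AS connection_count
    FROM sys.dm_exec_sessions
    WHERE is_user_process = 1;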

Can anyone please help me with this?

SQLITE: A problem of tags and products

Posted: 17 Apr 2013 07:42 PM PDT

I'm searching for a way to create a query to do the following:

Let's consider 3 tables:

  • products: List of products
  • tags: list of tags
  • tag_ties: table used to associate a tag to a product

Let's consider this structure for each table:

Products:

  • id (int, autoincrement)
  • name (varchar, name of the product)

Tags:

  • id (int autoincrement)
  • label (varchar, label of the tag)

Tag_ties:

  • id (int, autoincrement)
  • tag_id (int, reference to a tag id)
  • ref_id (int, reference to a product id)

What I want:

Obtain all the products that are tagged with tag ids 10, 11 and 12, for example.

This query does not work, as it returns the products having at least one of the tags:

select
    p.name as name,
    p.id as id
from
    products p inner join tag_ties ties
on
    p.id = ties.ref_id
where
    ties.ref_id = p.id and
    ties.tag_id in (10,11,12)
group by
    p.id
order by
    p.name asc
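(For comparison, a common shape for this kind of "must have all of the tags" query keeps the IN filter but adds a HAVING clause counting the distinct matching tags; a sketch:)

    select
        p.name as name,
        p.id as id
    from
        products p
        inner join tag_ties ties on ties.ref_id = p.id
    where
        ties.tag_id in (10, 11, 12)
    group by
        p.id, p.name
    having
        count(distinct ties.tag_id) = 3   -- one match per requested tag
    order by
        p.name asc;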
