Friday, October 11, 2013

[how to] How to add Master and Slave PostgreSQL 9.1 Database in Rails app



How to add Master and Slave PostgreSQL 9.1 Database in Rails app

Posted: 11 Oct 2013 04:55 PM PDT

I am working on a Rails app using PostgreSQL 9.1 as the database. In PostgreSQL I have configured streaming replication. Now, how do I add the master and slave PostgreSQL databases to the Rails application (in the database.yml file)? I have specified the database details as:

development:
  adapter: postgresql
  encoding: unicode
  database: app_development
  pool: 5
  username: username
  password: password
  host: master ip
  port: port number

development:
  adapter: postgresql
  encoding: unicode
  database: app_development
  pool: 5
  username: username
  password: password
  host: slave ip
  port: port number

development:
  adapter: postgresql
  encoding: unicode
  database: app_development
  pool: 5
  username: username
  password: password
  host: slave ip
  port: port number

But executing this produces the error "ActiveRecord::StatementInvalid: PG::ReadOnlySqlTransaction: ERROR: cannot execute INSERT in a read-only transaction".

How do I specify the master and slave databases in the database.yml file?

Regards, Balu

Help with a PIVOT

Posted: 11 Oct 2013 03:45 PM PDT

I have a view that gives data which looks like this:

Rank        |        Score
--------------------------
1           |            1
2           |          2.5
3           |            5
1           |            7
2           |          2.5
3           |            8
1           |            9
2           |          2.5
3           |            5

What I would like to do is apply a PIVOT so that it appears in a 3x3 matrix. The result is always guaranteed to have 9 rows, with RANK going 123..123..123.

 1   |   2   |   3
------------------------
1      2.5      5
7      2.5      8
9      2.5      5

I've attempted to use a PIVOT, but have been unsuccessful (the part about the aggregate function always confuses me):

SELECT
    *
FROM
    MyView
PIVOT
(
    MAX([Rank]) FOR [Rank] IN ([1], [2], [3])
) pvt

How can I shape my data into the form that I am looking for (PIVOT or otherwise)?

Sorry if the requirement is a bit odd; I'm working with a piece of reporting software that will do what I want, provided I can shape my data into that form.
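
One hedged approach, assuming SQL Server and some ordering column (called SortKey here as a placeholder, since the table needs one to guarantee the 123..123 sequence): number each run of three rows, then pivot Score over Rank.

-- Number the rows, bucket them into groups of three with integer
-- division, then pivot each group into one output row. SortKey is
-- a hypothetical column that preserves the original row order.
WITH numbered AS (
    SELECT [Rank], Score,
           (ROW_NUMBER() OVER (ORDER BY SortKey) - 1) / 3 AS grp
    FROM MyView
)
SELECT [1], [2], [3]
FROM numbered
PIVOT (MAX(Score) FOR [Rank] IN ([1], [2], [3])) pvt
ORDER BY grp;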

How to show NULL results from query?

Posted: 11 Oct 2013 07:38 PM PDT

I'm trying to create a procedure with search filters.

If a parameter is NULL, the filter should fall back to the column's own value.

Here is my problem: when both the parameter and the column value are NULL, I get no results, because NULL never compares equal to NULL.

I know that the predicate COLUMN_NAME IS NULL returns the row even when the value is NULL, but I don't know how to work that into the filter.

I tried to do this with CASE, but without success.

CREATE PROCEDURE SEARCH_QUERYS
        @ID   int,
        @LIVE bit,
        @DATA datetime
AS
BEGIN
        SELECT * FROM TB_SEARCH
        WHERE
        ID = ISNULL(@ID, ID) AND
        LIVE = ISNULL(@LIVE, LIVE) AND
        DATA = ISNULL(@DATA, DATA)
END
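
A common fix, sketched here for the same procedure: treat a NULL parameter as "no filter at all", which also lets NULL column values through. This avoids the ISNULL trap, where a NULL column can never satisfy DATA = ISNULL(@DATA, DATA).

-- When a parameter is NULL, its whole predicate is true, so rows
-- with NULL column values are no longer filtered out.
CREATE PROCEDURE SEARCH_QUERYS
        @ID   int,
        @LIVE bit,
        @DATA datetime
AS
BEGIN
        SELECT *
        FROM TB_SEARCH
        WHERE (@ID   IS NULL OR ID   = @ID)
          AND (@LIVE IS NULL OR LIVE = @LIVE)
          AND (@DATA IS NULL OR DATA = @DATA)
END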

How can I create an ad-hoc domain user on SQL Server

Posted: 11 Oct 2013 05:30 PM PDT

Okay, so here's the problem. I have an application I need to install and run locally for testing and debugging. The application has been designed to run on a domain and as such expects to log into the database using domain credentials.

I can't modify the section of code that handles the logins without a serious amount of work, which I don't have time for this week, nor do I have any ability to add the machine hosting the database to the domain (contractors' machines aren't allowed to be added to the domain under the I.T. policy).

So I'm in a bit of a bind. I cannot use the CREATE LOGIN [DOMAIN\User] From Windows routine as I get the following error:

Windows NT user or group 'DOMAIN\User' not found. Check the name again.  

I can't just insert into the relevant security tables (sys.server_principals, sys.syslogins, etc.) because

Ad hoc updates to system catalogs are not allowed.  

Is there any way I can back door my local SQL Server installation to accept these user credentials from the application without being attached to the domain?

Of course adding the machine to the domain and logging in as a domain user would resolve this issue in a couple of minutes, but political issues prevent that from happening.

COUNT WITH JOIN

Posted: 11 Oct 2013 02:56 PM PDT

I have the following tables:

fss_post(post_id, text)
fss_comment(idComment, post_id, text)

Is it possible to write a query that gives me one row per post_id with the total comment count for that post?

EXAMPLE:

post_id       comment_count
101010        5
101011        0

And so on... Thanks.
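
A minimal sketch of the usual answer: LEFT JOIN the comments so posts with no comments survive, and count the comment key, since COUNT of a column ignores the NULLs produced by non-matching rows and those posts therefore get 0.

SELECT p.post_id,
       COUNT(c.idComment) AS comment_count
FROM fss_post p
LEFT JOIN fss_comment c ON c.post_id = p.post_id
GROUP BY p.post_id;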

Getting MySQL User Variable in Sequence, While Using ORDER BY RAND()

Posted: 11 Oct 2013 05:32 PM PDT

I am running a SELECT statement, which ultimately will be in an INSERT..SELECT statement.

I want a result like this

ID     COUNTER
3      1
7      2
1      3
20     4
12     5

When I run this query, I get everything in sequence:

SELECT 16132, @counter := @counter + 1 AS counter
FROM table t
INNER JOIN (SELECT @counter := -1) b
LIMIT 5;

But I want the rows to be randomized, so I add ORDER BY RAND() at the end of the query, but then I end up with "random" values in the counter column. I think the counter is assigned over the larger, entire result set before the ordering happens. Is there a way to get a counter column that stays in sequence when I use ORDER BY RAND()?
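
One hedged workaround: randomize inside a derived table first, then attach the counter to the already shuffled rows (the table and column names here are illustrative).

SELECT r.id,
       @counter := @counter + 1 AS counter
FROM (
    -- shuffle and limit first, so the counter sees the final order
    SELECT id
    FROM t
    ORDER BY RAND()
    LIMIT 5
) AS r
INNER JOIN (SELECT @counter := 0) AS init;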

Trigger needs dbms_alert declared

Posted: 11 Oct 2013 12:21 PM PDT

A trigger has given me the following:

 ORA-04098: trigger 'adminuser.DB_ERROR_CHANGE_TRIG' is invalid and failed re-validation  

This came up while altering schemas/triggers/etc. from an import of a client database.

I ran

show errors trigger adminuser.db_error_change_trig  

and the results were:

LINE/COL ERROR
-------- -----------------------------------------------------------------
2/5      PL/SQL: Statement ignored
2/5      PLS-00201: identifier 'DBMS_ALERT' must be declared

Now, after googling, I found that I need to grant explicit execute privileges, which I did with

grant execute on dms_public to public  

However, this has not worked.
How can I determine what is going on, and how do I fix this? Also, is dbms_alert schema-specific? My understanding is that the full name of the object is SYS.dbms_alert; is there a separate copy per schema, or is there one global package?
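
For what it's worth, PLS-00201 inside a definer-rights trigger usually means the trigger's owner needs a direct (not role-based) EXECUTE grant on the package, and SYS.DBMS_ALERT is a single, globally shared package. A hedged sketch, run as a privileged user:

GRANT EXECUTE ON SYS.DBMS_ALERT TO adminuser;
ALTER TRIGGER adminuser.db_error_change_trig COMPILE;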

Best Practices for Testing Application Dashboard Data

Posted: 11 Oct 2013 03:16 PM PDT

We have a dashboard that contains metrics based on a 6 month reporting period. Users log in and see their metrics based on the current 6-month reporting period.

Every 6 months, we load new metrics in our DEV/QA environment and configure them for the upcoming reporting period so we can test. But our application and ETL use GETDATE(), so for the ETL to calculate properly, we have been setting the server's date forward so everything calculates and displays as if we were in the future.

Our DBA group is now telling us that this should not be done and that no one else in the industry does it. Can someone give me some examples of what they have done for testing in this situation? Should we be looking at redesigning our ETL and application, and what are some ideas for best practices?

We have unit testing for the application, so we know it will work. We are more worried that the data from the ETL will not be correct on day 1 of the new period. It is very important that the metrics display properly and the data is correct on day 1.

MySQL Query Optimization - Indexes & Joins 12 million+ records

Posted: 11 Oct 2013 12:54 PM PDT

I'm running the following query:

SELECT
    `track_title`.`title`,
    `track`.`id`,
    `artist`.`name` artist,
    `album`.`title` album,
    `st_tag`.`tag`,
    `track`.`year`
FROM
    `track`
JOIN `artist` ON `track`.`artist_id` = `artist`.`id`
JOIN `album` ON `track`.`album_id` = `album`.`id`
JOIN `track_title` ON `track`.`id` = `track_title`.`id`
JOIN `track_tags` ON `track`.`id` = `track_tags`.`track_id`
JOIN `st_tag` ON `track_tags`.`tag_id` = `st_tag`.`id`
WHERE
    `track_title`.`title_sch` LIKE "% love %"
ORDER BY
    `track_title`.`title`
LIMIT
    0, 500;

Right now it takes at least 15 minutes to execute. Both track and track_title have about 12 million records; album and artist have at least a million each. Can this be sped up with better indexes? I've been trying combinations of indexes to speed up the queries, but I'm still having trouble getting it right.

Table structure:

CREATE TABLE `album` (
  `id` int(10) unsigned NOT NULL,
  `title` varchar(1000) default NULL,
  `title_sch` varchar(1000) default NULL,
  `flags` smallint(5) unsigned NOT NULL default '0',
  PRIMARY KEY  (`id`),
  KEY `title_sch` (`title_sch`(255)),
  KEY `album_title` USING BTREE (`title`(255)),
  KEY `album_cluster` USING BTREE (`id`,`title`(255),`title_sch`(255))
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE `artist` (
  `id` int(10) unsigned NOT NULL,
  `name` varchar(1000) default NULL,
  `name_sch` varchar(1000) default NULL,
  `flags` smallint(5) unsigned NOT NULL default '0',
  PRIMARY KEY  (`id`),
  KEY `name_sch` (`name_sch`(255)),
  KEY `name` USING BTREE (`name`(255)),
  KEY `artist_name` USING BTREE (`name`(255)),
  KEY `artist_cluster` USING BTREE (`id`,`name`(255),`name_sch`(255))
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE `st_tag` (
  `id` smallint(5) unsigned NOT NULL auto_increment,
  `tag` varchar(50) NOT NULL,
  `genre` smallint(5) unsigned NOT NULL,
  `parent` smallint(5) unsigned default NULL,
  `depth` tinyint(3) unsigned NOT NULL default '0',
  `display` tinyint(4) NOT NULL default '-1',
  PRIMARY KEY  (`id`)
) ENGINE=MyISAM AUTO_INCREMENT=352 DEFAULT CHARSET=utf8;

CREATE TABLE `track` (
  `id` int(10) unsigned NOT NULL,
  `artist_id` int(10) unsigned NOT NULL,
  `album_id` int(10) unsigned NOT NULL,
  `status` smallint(5) unsigned NOT NULL default '0',
  `flags` smallint(5) unsigned NOT NULL default '0',
  `year` smallint(5) unsigned default NULL,
  `duration` smallint(5) unsigned NOT NULL,
  `track_no` smallint(5) unsigned default NULL,
  `disc_no` tinyint(3) unsigned default NULL,
  `explicit` tinyint(3) unsigned NOT NULL default '0',
  `popularity` tinyint(3) unsigned NOT NULL default '0',
  PRIMARY KEY  (`id`),
  KEY `album_id` (`album_id`),
  KEY `artist_id` (`artist_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE `track_tags` (
  `id` int(10) unsigned NOT NULL auto_increment,
  `track_id` int(10) unsigned NOT NULL,
  `tag_id` smallint(5) unsigned NOT NULL,
  `status` tinyint(3) unsigned NOT NULL default '0',
  PRIMARY KEY  (`id`),
  KEY `track_id` (`track_id`),
  KEY `tag_id` (`tag_id`)
) ENGINE=InnoDB AUTO_INCREMENT=26661380 DEFAULT CHARSET=utf8;

CREATE TABLE `track_title` (
  `id` int(10) unsigned NOT NULL,
  `title` varchar(1000) default NULL,
  `title_sch` varchar(1000) default NULL,
  `version` varchar(100) default NULL,
  PRIMARY KEY  (`id`),
  KEY `title` (`title`(255)),
  KEY `title_sch` (`title_sch`(255)),
  KEY `title_cluster` USING BTREE (`id`,`title`(255),`title_sch`(255))
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
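
A hedged observation: a leading-wildcard pattern like '% love %' cannot use a B-tree index, so the 12-million-row title index gets scanned regardless of what you build. One option, assuming MySQL 5.6+ (where InnoDB supports FULLTEXT) or a MyISAM copy of the table, is a full-text index:

-- Build a full-text index on the searched column, then query it
-- with MATCH ... AGAINST instead of a leading-wildcard LIKE.
ALTER TABLE track_title ADD FULLTEXT INDEX ft_title_sch (title_sch);

SELECT tt.title, t.id
FROM track_title tt
JOIN track t ON t.id = tt.id
WHERE MATCH(tt.title_sch) AGAINST ('love' IN NATURAL LANGUAGE MODE)
LIMIT 500;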

Are de-normalised tables preferred in any aspect of a project? [on hold]

Posted: 11 Oct 2013 10:15 AM PDT

During an interview, my interviewer asked me a question about denormalization of tables and its usage in applications.

Guessing, I answered: yes, it might be. With a denormalized table you have all the columns in a single table, so you do not require any joins, and your biggest plus is performance, provided proper lookup conditions are given. You can also use a denormalized table for reporting.

Is what I answered correct?

use function arguments or variable in postgres function

Posted: 11 Oct 2013 07:37 PM PDT

I want to set the default values for the columns building and floor in my table points using the function test(b, f) below:

Create or replace function test(b text, f text) returns void as
$$
Begin
    Alter table points alter COLUMN building set default b;
    Alter table points alter COLUMN floor set default f;
End;
$$
language plpgsql;

When I run select test('Shopping Mall','03') it gives me the error below:

 column b does not exist  

It seems that I cannot use the function argument b in the ALTER query?
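
That is indeed the issue: ALTER TABLE is utility DDL, and PL/pgSQL does not substitute variables into it, so b is parsed as an identifier. A hedged sketch of the usual fix, building the statement dynamically (format() with %L quotes the value as a literal and is available in 9.1):

CREATE OR REPLACE FUNCTION test(b text, f text) RETURNS void AS
$$
BEGIN
    -- EXECUTE runs a dynamically built string, so the argument
    -- values can be spliced in as properly quoted literals.
    EXECUTE format('ALTER TABLE points ALTER COLUMN building SET DEFAULT %L', b);
    EXECUTE format('ALTER TABLE points ALTER COLUMN floor SET DEFAULT %L', f);
END;
$$
LANGUAGE plpgsql;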

Issue In INSERT INTO SELECT Statement [duplicate]

Posted: 11 Oct 2013 12:46 PM PDT


Can anyone please advise how to rectify the code below:

INSERT INTO table1 values(10, SELECT col1 FROM table2, null, null, 'variable');

Here table1 has only 4 columns (in the actual table there are 100 columns, and I don't want to mention each column and then enter a value).

Is it possible to use a SELECT statement inside VALUES during a table insertion?

This query was not answered earlier, because here I want to insert data into table1 by fetching a few columns' data from table2, while the rest of the data needs some mathematical calculation (you can treat the rest of the data as variables).

If this has been answered, please provide me the link.
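
Two hedged sketches, depending on the intent. If table2 yields exactly one row, a scalar subquery can sit inside VALUES; if one row should be inserted per row of table2, move the literals into a SELECT instead:

-- Scalar subquery: the WHERE filter is a placeholder and must
-- guarantee a single row.
INSERT INTO table1
VALUES (10, (SELECT col1 FROM table2 WHERE id = 1), NULL, NULL, 'variable');

-- One inserted row per row of table2, mixing literals and columns:
INSERT INTO table1
SELECT 10, col1, NULL, NULL, 'variable'
FROM table2;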

Creating domain user login from another domain?

Posted: 11 Oct 2013 10:00 AM PDT

I have two domains named "Domain1" and "Domain2". I have SQL server 2005 in Domain2, and users in Domain1.

How can I create a login in SQL Server 2005 (in Domain2) for the user that is present in Domain1?
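A hedged sketch: the syntax is the same as for any Windows login, but it only resolves if Domain2 trusts Domain1 (a domain trust must already exist between the two domains).

-- SomeUser is a placeholder for the Domain1 account.
CREATE LOGIN [Domain1\SomeUser] FROM WINDOWS;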

db2 database alias not listed via list database directory but still unable to create that database

Posted: 11 Oct 2013 04:13 PM PDT

I am trying to create a database by logging on to the Linux server on which the DB2 server is installed, as the instance owner:

db2 => list database directory

 System Database Directory

 Number of entries in the directory = 1

Database 1 entry:

 Database alias                       = SAMPLE
 Database name                        = SAMPLE
 Local database directory             = /home/mylogin/sqllib
 Database release level               = f.00
 Comment                              =
 Directory entry type                 = Indirect
 Catalog database partition number    = 0
 Alternate server hostname            =
 Alternate server port number         =

db2 => create database testdb2
SQL1005N  The database alias "TESTDB2" already exists in either the local
database directory or system database directory.

db2 => list database directory on /home/mylogin/sqllib

 Local Database Directory on /home/mylogin/sqllib

 Number of entries in the directory = 1

Database 1 entry:

 Database alias                       = SAMPLE
 Database name                        = SAMPLE
 Database directory                   = SQL00001
 Database release level               = f.00
 Comment                              =
 Directory entry type                 = Home
 Catalog database partition number    = 0
 Database member number               = 0

db2 => quit

I also checked whether any instances other than mine exist, but there aren't any.

DB20000I  The QUIT command completed successfully.
-bash-4.1$ db2ilist
mylogin

While the alias testdb2 is not listed by list database directory, DB2 claims that this alias already exists in either the system database directory or the local database directory.

If this is true, how do I get the equivalent of list database directory that lists testdb2 as well?

Way to make a differential backup from a different full backup of the same database

Posted: 11 Oct 2013 05:21 PM PDT

I have the following situation: our SQL Server runs in a cloud service. The database to be backed up is 60 GB. We need to make a local copy of the database for some tests, and it needs to be up to date, but 60 GB is impractical to download daily.

We thought we would make a full backup, download it once, and make differential backups on the following days. Once a week, we can stop database operations, take the last differential backup against last week's full backup, take a new full backup, and then resume database operations. At that moment the local DB, when restored with the last diff backup (made while operations were stopped), holds the same data as the new full backup.

The idea behind this is: is it possible to take the diff backup from the latest full backup and restore it onto our local database? If it is not possible, does anyone have an idea of how we can get all the data, up to date, from our cloud database?
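
Yes, a differential restores on top of the matching full backup. A hedged sketch of the local cycle, with hypothetical file names: restore the weekly full WITH NORECOVERY once, then apply each day's differential; STANDBY keeps the database readable between differentials.

RESTORE DATABASE MyDb
    FROM DISK = 'C:\backups\mydb_full.bak'
    WITH NORECOVERY, REPLACE;

-- Each day, apply the newest differential on top of the same full:
RESTORE DATABASE MyDb
    FROM DISK = 'C:\backups\mydb_diff_tuesday.bak'
    WITH STANDBY = 'C:\backups\mydb_undo.dat';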

Is it safe to kill ANALYZE TABLE query on a MyISAM table?

Posted: 11 Oct 2013 09:26 AM PDT

I have a huge MyISAM table that is blocking my application. I frequently find ANALYZE TABLE queries running on that table. I guess that's because a join is being performed on the table, or at least an index is being used; I'm not sure.

So I'm thinking of running an OPTIMIZE TABLE query on the table to speed things up. But there are currently multiple ANALYZE TABLE queries locking the table, so is it safe to kill such queries? I couldn't find enough information in MySQL's documentation on the matter.

Thanks.

scramble the value of one column and unscramble it?

Posted: 11 Oct 2013 10:26 AM PDT

I want to change all the values in one column, as it is a production database and I don't want to give the data to testers without obfuscation.

Example:

Name             Name
----     -->     ----
ABC              BCD
PQR              QRS

I have looked into scrambling and other approaches, but my issue is that I want to be able to change the obfuscated data back to the original values when needed.

I need a generic script so I can apply it to an arbitrary table if required.
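
One hedged sketch, assuming SQL Server (the platform isn't stated): passphrase-based encryption is reversible on demand, though the obfuscated column must be varbinary to hold the ciphertext.

-- Obfuscate (NameObfuscated is a hypothetical varbinary column):
UPDATE MyTable
SET NameObfuscated = ENCRYPTBYPASSPHRASE('my secret key', Name);

-- Recover the originals when needed:
SELECT CONVERT(varchar(100),
       DECRYPTBYPASSPHRASE('my secret key', NameObfuscated)) AS Name
FROM MyTable;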

Error: "Storage Engine for the Table Doesn't Support Nullable Columns" (SequelPro)

Posted: 11 Oct 2013 02:26 PM PDT

I'm trying to load a very normal .csv file (created with Excel 2011 for Mac) into SequelPro (using MySQL) on my Mac, and I've recently started getting this error consistently. Can anybody tell me what it is and how to fix it?

An error occurred while trying to add the new table 'wblist' by

CREATE TABLE `wblist` (
  `FILE` VARCHAR(255),
  `FIRSTNAME` VARCHAR(255),
  `MIDDLE` VARCHAR(255),
  `LASTNAME` VARCHAR(255),
  `FULLNAME` VARCHAR(255),
  `GENDER` VARCHAR(255),
  `ADDRESS` VARCHAR(255),
  `CITY` VARCHAR(255),
  `STATE` VARCHAR(255),
  `ZIP` VARCHAR(255),
  `PHONE` BIGINT(11),
  `UNIT` VARCHAR(255),
  `JOB` VARCHAR(255),
  `AREA` VARCHAR(255),
  `TIME` VARCHAR(255),
  `MAILINGADDRESS` VARCHAR(255),
  `MAILINGCITY` VARCHAR(255),
  `MAILINGSTATE` VARCHAR(255),
  `MAILINGZIP` VARCHAR(255),
  `ID` BIGINT(11),
  `CONFIDENCE` VARCHAR(255),
  `BIRTHDATE` VARCHAR(255),
  `AGE` INT(11),
  `RACE` VARCHAR(255),
  `ETHNICITY` VARCHAR(255),
  `RELIGION` VARCHAR(255),
  `PARTY` VARCHAR(255),
  `REGISTRATIONDATE` VARCHAR(255),
  `VOTERSTATUS` VARCHAR(255),
  `OtherPhone` VARCHAR(255),
  `POSSIBLEADDRESS` VARCHAR(255),
  `POSSIBLEMAILADDRESS` VARCHAR(255),
  `RECID` VARCHAR(255)
) ENGINE=CSV;

MySQL said: The storage engine for the table doesn't support nullable columns

This is stopping me before I'm able to import the table. Thanks for the help!
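
For what it's worth, the CSV storage engine does not support nullable columns, which is exactly what the error says: every column must be declared NOT NULL (or the table must use another engine, such as InnoDB, for the import). A hedged sketch of the fix:

CREATE TABLE `wblist` (
  `FILE` VARCHAR(255) NOT NULL,
  `FIRSTNAME` VARCHAR(255) NOT NULL,
  -- ... declare every remaining column NOT NULL the same way ...
  `RECID` VARCHAR(255) NOT NULL
) ENGINE=CSV;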

optimizing MySQL for traffic analytics system

Posted: 11 Oct 2013 08:26 PM PDT

Background:

I've developed a URL-shortener system like Bitly with the same features, so the system also tracks clicker info and presents it as graphs (analytics data) to the person who shortened the link. Currently I'm using MySQL and have a table to store click info with this schema:

visit_id (int)
ip (int)
date (datetime)
country
browser
device
os
referrer (varchar)
url_id (int)  // foreign key to the shortened URL

For now, just the url_id field has an index.

The system should present click analytics for the time period the user wants, for example the past hour, the past 24 hours, the past month, and so on.

For example, to generate graphs for the past month, I run the following queries:

SELECT all DAY(date) AS period, COUNT( * )
FROM (
    SELECT *
    FROM visits
    WHERE url_id = '$url_id'
) AS URL
WHERE DATE > DATE_SUB( CURRENT_TIMESTAMP( ) , INTERVAL 1 MONTH )
GROUP BY DAY( DATE )

// another query to display clicker browsers in this period
// another query to display clicker countries in this period
// ...

Issues:

  • for a shortened link with about 500,000 clicks, it takes about 3-4 seconds just to calculate the first query, so about 10-12 seconds for all the queries, which is terrible.
  • running such queries takes a lot of memory and CPU.

Questions:

1- How do I improve and optimize the structure so that the analytics for high-traffic links are shown in less than 1 second (like Bitly and similar web apps) and with less CPU and RAM usage? Should I make an index on the fields date, country, browser, device, os, referrer? If yes, how do I do that for the date field, since I sometimes group clicks by DAY(date), sometimes by HOUR(date), sometimes by MINUTE(date), and so on?

2- Is MySQL suitable for this application? Assume that at maximum my application should handle 100 million links and 10 billion clicks on them in total. Should I consider switching to a NoSQL solution, for example?

3- If MySQL is OK, is my database design and table structure proper and well designed for my application's needs, or do you have better recommendations and suggestions?

UPDATE: I made an index on the referrer column, but it didn't help at all and even hurt performance. I think that's because of the low cardinality of this column (and the others) and the large resulting index size relative to my server's RAM.

I don't think indexing these columns alone would solve my problem; my ideas are about one of these:

1- if staying with MySQL, generating statistics for high-traffic links in a background process instead of calculating them live at request time.

2- using a caching solution like memcached to help MySQL with high-traffic links.

3- using a NoSQL store such as MongoDB and solutions like map-reduce, which I am only poorly familiar with and have never used.

what do you think?
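
On the indexing part of question 1, a hedged sketch: a composite index with the equality column first and the range column second matches the per-link, time-bounded access pattern, and the inner derived table can then be dropped.

ALTER TABLE visits ADD INDEX idx_url_date (url_id, date);

-- The monthly graph query can then range-scan one link's rows only:
SELECT DAY(date) AS period, COUNT(*)
FROM visits
WHERE url_id = 42   -- placeholder link id
  AND date > DATE_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 MONTH)
GROUP BY DAY(date);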

Primary replica set server goes secondary after secondary fails

Posted: 11 Oct 2013 05:26 PM PDT

I have a 2-server replica set in which, after the secondary fails, the primary goes into secondary mode while the secondary is in STARTUP2 (recovering). The problem with this is that I can't use the collection stored in that replica set freely; I'm getting errors when trying to use the collection:

pymongo.errors.OperationFailure: database error: ReplicaSetMonitor no master found for set: rs2  

Sometimes, if I restart the mongod instances, the server rs2-1 is the primary for a while, but after some time (while the secondary is recovering) I see this in the logs of rs2-1 (the primary):

Tue May  7 17:43:40.677 [rsHealthPoll] replSet member XXX.XXX.XXX.XXX:27017 is now in state DOWN
Tue May  7 17:43:40.677 [rsMgr] can't see a majority of the set, relinquishing primary
Tue May  7 17:43:40.682 [rsMgr] replSet relinquishing primary state
Tue May  7 17:43:40.682 [rsMgr] replSet SECONDARY
Tue May  7 17:43:40.682 [rsMgr] replSet closing client sockets after relinquishing primary

Is there an easy way to make the primary keep being primary after the secondary fails? Am I doing something wrong?

Thanks in advance!

MySQL backup InnoDB

Posted: 11 Oct 2013 01:26 PM PDT

I have a VoIP server running 24x7. At low peak hours at least 150+ users are connected. My server runs MySQL with the InnoDB engine on the Windows 2008 platform. I would like to take a full database backup at least twice a day without shutting down my service.

As per Peter Zaitsev, the founder of Percona, mysqldump --single-transaction is not always good.

Read here if you are interested.

As I'm not a DBA, I would like to know which would be the best solution for taking a database backup in my scenario.

Thanks,

Strange characters in mysqlbinlog output

Posted: 11 Oct 2013 11:26 AM PDT

Has anyone experienced this? Data replicates fine, but when output by mysqlbinlog there are hidden characters that break the input.

  • mysqlbinlog Ver 3.3 for Linux at x86_64
  • mysql 5.5.28 server

Thanks! Julie

12c Grid Agent installation issues - Shell path is incorrect or not defined

Posted: 11 Oct 2013 08:26 AM PDT

I am installing 12c Grid Control agent on my server via Enterprise manager.

The target server is a UNIX server with the AIX OS on it. I add the target host manually, enter the authentication information, host name, installation directory, etc., and all goes well until I click the "Deploy Agent" button; then I get the following error:

"Shell path is incorrect or not defined.:/bin/bash(SH_PATH),-c(SH_ARGS) on host MyHostName"

and details are :

"Check the property values in the following files in this order, ssPaths_.properties or sPaths.properties or Paths.properties, in "/u02/app/oracle/middleware/oms/oui/prov/resources"

There isn't a /u02/app directory on my target server, only /u01/app/oracle, so I assume the above directory is on the same server as Enterprise Manager.

I did, however, find a directory containing files of the format ssPaths_.properties: /u01/app/oracle/product/11.2.0.3/oui/prov/resources

Since my platform is AIX, I used the file ssPaths_aix.properties.

On my target server, which bash returns /usr/bin/bash, which ssh returns /usr/bin/ssh, and which sh returns /usr/bin/sh.

So I edited the ssPaths_aix.properties file so that SH_PATH=/usr/bin/sh, but I am still getting the same error. I have also tried setting SH_PATH=/usr/bin/bash, but nothing seems to make any difference. One possibility I am considering is that I am editing the wrong file.

Can anyone help?

DB2 to require password each time

Posted: 11 Oct 2013 04:07 PM PDT

I am using db2inst1 to connect to a database in DB2, which I have installed on my machine. As things stand, the db2inst1 user does not require username/password authentication (it borrows credentials from the OS). I would like to change that and force a username/password to be requested every time a connection is initiated.

More specifically, this is how the authentication configuration looks like:

db2 get dbm cfg|grep -i auth

 GSS Plugin for Local Authorization    (LOCAL_GSSPLUGIN) =
 Server Connection Authentication          (SRVCON_AUTH) = NOT_SPECIFIED
 Database manager authentication        (AUTHENTICATION) = CLIENT
 Alternate authentication           (ALTERNATE_AUTH_ENC) = NOT_SPECIFIED
 Cataloging allowed without authority   (CATALOG_NOAUTH) = NO
 Trusted client authentication          (TRUST_CLNTAUTH) = SERVER
 Bypass federated authentication            (FED_NOAUTH) = NO

db2 connect to dbName

   Database Connection Information

 Database server        = DB2/LINUXX8664 10.1.0
 SQL authorization ID   = DB2INST1
 Local database alias   = DBNAME

db2 connect to dbName user db2inst1 using password

SQL1639N  The database server was unable to perform authentication because
security-related database manager files on the server do not have the required
operating system permissions.  SQLSTATE=08001

I have played with some authentication combinations for AUTHENTICATION and TRUST_CLNTAUTH without much luck.

Connecting to a SQL Server database from a Flash program

Posted: 11 Oct 2013 12:26 PM PDT

I currently have the ability to utilize Microsoft SQL Server 2012. I am developing a project with Adobe Flash Builder 4.7.

If I link my database with Adobe Flash Builder, are there any additional steps I must take in order to make the database live, or, as long as my computer is running, will this database be accessible from any device that uses it?

In other words, is this a LAN-only system, or does it automatically make itself available to the programs I link to it?

Does the order of columns in a table's definition matter?

Posted: 11 Oct 2013 03:17 PM PDT

When defining a table, it's helpful to order the columns in logical groups and the groups themselves by purpose. The logical ordering of columns in a table conveys meaning to the developer and is an element of good style.

That is clear.

What is not clear, however, is whether the logical ordering of columns in a table has any impact on their physical ordering at the storage layer, or if it has any other impact that one might care about.

Apart from the impact on style, does column order ever matter?

RMAN & Archivelog Deletion Policy confusion

Posted: 11 Oct 2013 03:12 PM PDT

I've never been able to fully grasp the notion of DELETE INPUT / DELETE INPUT ALL when backing up archivelogs with RMAN.

Tied to the same issue is the CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP n TIMES TO DISK setting.

Deleting input, or deleting input all, seems counterintuitive to me when the deletion policy is set to 7 times, to match an overall backup policy of 7 days.

Is there anyone here who can explain, to someone who may be just a little dense, how this all plays together? And what should the archivelog deletion policy be if the goal is to have one week's worth of backups available at all times? All backups are backed up weekly to a tape drive. With seven days as the overall policy, and BACKED UP 7 TIMES as the archivelog deletion policy, am I backing logs up 7 times, or 7 times 7?

Thank you for any help anyone can offer.

PostgreSQL: The database cluster initialization failed

Posted: 11 Oct 2013 01:47 PM PDT

Both C:\PostgreSQL and C:\PostgreSQL\data have postgres user with full access and admin rights.

I ran postgresql-9.1.2-1-windows.exe from postgres user as admin. Target C:\PostgreSQL

Every way I try, I get "The database cluster initialization failed."

Questions

  • Is it OK to run everything w/o being a Windows Service?
  • Is there a work-around to install as Windows Service?

I am trying to setup PostGIS to work with GeoDjango.

I was able to install PostGIS manually. I am new to PostgreSQL and am having a crisis of confidence over all of this, coming from MySQL to PostgreSQL for the first time.


Pertinent log output from C:\Users\Larry\AppData\Local\Temp\install-postgresql.log:

WScript.Network initialized...
Called IsVistaOrNewer()...
    'winmgmts' object initialized...
    Version:6.1
    MajorVersion:6
Ensuring we can read the path C: (using icacls) to Larry:
    Executing batch file 'radA3CF7.bat'...
    Output file does not exists...
Called IsVistaOrNewer()...
    'winmgmts' object initialized...
    Version:6.1
    MajorVersion:6
Ensuring we can read the path C:\PostgreSQL (using icacls) to Larry:
    Executing batch file 'radA3CF7.bat'...
    Output file does not exists...
Called IsVistaOrNewer()...
    'winmgmts' object initialized...
    Version:6.1
    MajorVersion:6
Ensuring we can read the path C:\PostgreSQL\data (using icacls) to Larry:
    Executing batch file 'radA3CF7.bat'...
    Output file does not exists...
Called IsVistaOrNewer()...
    'winmgmts' object initialized...
    Version:6.1
    MajorVersion:6
Ensuring we can write to the data directory (using icacls) to Larry:
    Executing batch file 'radA3CF7.bat'...
    Output file does not exists...
Failed to ensure the data directory is accessible (C:\PostgreSQL\data)
    Executing batch file 'radA3CF7.bat'...
    Output file does not exists...
Called Die(Failed to initialise the database cluster with initdb)...
Failed to initialise the database cluster with initdb

Suggestions?

[Articles] Union Benefits



Union Benefits

Posted: 10 Oct 2013 11:00 PM PDT

IT is an industry that hasn't adopted a union, at least not yet. Many IT workers hope it never happens, but what if it does? This Friday Steve Jones asks what benefits you might want from a union.

[MS SQL Server] Not Able to Uninstall SQL Server 2008 R2 from Windows Server 2008



Not Able to Uninstall SQL Server 2008 R2 from Windows Server 2008

Posted: 10 Oct 2013 05:38 PM PDT

HI All, I am not able to uninstall SQL Server 2008 R2 from my Windows Server 2008. The installer is throwing the error "Unable to perform unauthorized operation." In the logs, some registry key exceptions appear, along with one more error that sqlengine_validate_cpu_64 was not able to perform its action. Please let me know if any of you have faced this scenario and how you resolved it. Thanks in advance. Regards, Ajit

Backup & Restore Question

Posted: 10 Oct 2013 11:00 PM PDT

I have one scenario-based query based on this backup schedule:

Full backup: Monday/Thursday @ 10:00 PM
Differential backup: Tuesday/Wednesday/Friday/Saturday @ 10 PM
Transaction log backup: every 2 hours.

Now say my full backup on Thursday is unusable (due to corruption or any other issue). If the DB crashes at 11:00 AM on Saturday, how can I restore the database back up to the maximum usable state?
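
A hedged sketch of the recovery path: Friday's and Saturday's differentials are based on the corrupt Thursday full, so the restore falls back to Monday's full, Wednesday's differential, and then the unbroken log chain up to the crash (file names here are hypothetical).

RESTORE DATABASE MyDb FROM DISK = 'MyDb_Full_Monday.bak' WITH NORECOVERY;
RESTORE DATABASE MyDb FROM DISK = 'MyDb_Diff_Wednesday.bak' WITH NORECOVERY;
-- apply every 2-hourly log backup taken after Wednesday 10 PM, in order,
-- finishing with the last one (plus a tail-log backup if one can be taken):
RESTORE LOG MyDb FROM DISK = 'MyDb_Log_Sat_1000.trn' WITH RECOVERY;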

Restoring Reporting Services to Standby Server

Posted: 11 Oct 2013 01:27 AM PDT

(I'm an RS noob.) I want to set up a semi-standby server for our Reporting Services application and other associated databases. It is acceptable to have a bit of time to manually switch over. Currently I back up the ReportServer and ReportServerTempDB databases from "MyRS_Server" each day, push them to "RS_StandByServer", and restore them there. I have RS configured on "RS_StandByServer", and it appears to have the same settings as "MyRS_Server". What else do I need to do? Copy and restore the RS encryption keys to the standby server?

error i am getting while logging to sql server in my local

Posted: 10 Oct 2013 06:29 PM PDT

A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)

How can I remove the SQL Server error log for CDC?

Posted: 10 Oct 2013 02:58 PM PDT

How can I remove the SQL Server error log for CDC?

DB Backup Is Backing Up Previous Version of Proc

Posted: 10 Oct 2013 06:39 AM PDT

Hi All, I'm experiencing a rather bizarre problem that baffles me. I have a production DB that I back up from our production box and then restore to our dev server for some testing. Yesterday I had to deploy a new version of a stored proc to the production DB; this new version had only some very minor changes. Today I took a backup of the DB and restored it to our dev box. When I diff the DBs (with RedGate SQL Diff) it shows a difference between the two DBs for that proc. The newly restored DB on the dev server has the old version of the procedure!

If I go into the Object Explorer and script out the proc, the old version is in fact what comes out of the newly restored DB (on our dev box). If I do the same on our production machine, the new version of the proc is there. And it's not just that one proc; there are a few others as well, some of which I updated weeks ago. It seems that the backup didn't capture the new version of the proc? Has anyone ever experienced this?

This is the backup command I use:

BACKUP DATABASE DbName
TO DISK = 'C:\SomePath'
WITH COPY_ONLY, INIT, NOSKIP,
     NAME = 'SomeBackupName',
     DESCRIPTION = 'SomeBackupDescription',
     STATS = 10;

Any thoughts would be most appreciated. Thanks! Peter

Upgrade from SQL Server 7.0 to SQL Server 2008 R2

Posted: 10 Oct 2013 07:05 AM PDT

We would like to upgrade from SQL Server 7.0 to SQL Server 2008 R2. What is the most suitable method for upgrading SQL Server 7.0 to SQL Server 2008 R2?

[SQL 2012] Find value of cell from header name and row name (like going down and across)



Find value of cell from header name and row name (like going down and across)

Posted: 02 Oct 2013 08:38 PM PDT

What I want is the value of the cell where x meets y: going down and across, what is the value in that cell? Thank you for your time.

SQL Server Merge Web - Pull replication error

Posted: 11 Oct 2013 01:58 AM PDT

I have some problems with web synchronization for merge replication. I repeated every step described here: http://technet.microsoft.com/en-us/library/ms151763.aspx but I always get the same error:

Security Error ... Unable to connect to the URL [https://web3/SQLReplication/replisapi.dll] during Web synchronization. Ensure that the URL, the login credentials to the Internet, and the proxy server settings are correct and that the Web server is reachable., 00:00:02,0,0,,, 0...

From the subscriber's Internet Explorer I try the URL [https://web3/SQLReplication/replisapi.dll?diag] and it works. I have also repeated this step: [http://technet.microsoft.com/en-us/library/ms152511.aspx]. Can anyone help me?

HADR solution for DW DB of huge size on TBS

Posted: 10 Oct 2013 10:10 PM PDT

Hi Experts, in my environment we are hosting a new project which is a DW with very bulky DBs in the TBs.

- What type of HADR solution is available for Fast Track databases in MS SQL Server 2012?
- What type of HADR is available in SQL Server 2012 for these heavy DBs?

Please advise.

SQL Server Alert that will work for SSAS Proactive Caching Errors in Event Viewer

Posted: 11 Oct 2013 01:20 AM PDT

I have an SSAS cube configured with proactive caching. Sometimes facts arrive just seconds before the dimension data is available in the dimension structure, and SSAS fires the following error in the Event Viewer: "OLE DB error: OLE DB or ODBC error: Operation canceled; HY008". After that, nothing that relies on proactive caching works in that cube. I would like to create an alert to notify the DBA about the error so he can take action to solve the problem as soon as possible. Has anyone tried something similar?

Not getting the email notification that I set up

Posted: 11 Oct 2013 01:10 AM PDT

I've got a SQL job that runs once a day, every day at 7:30 AM. It's supposed to send me an email upon successful completion; however, it hasn't since we migrated from SQL 2005 to SQL 2012. I'm pretty sure that I've got this set up correctly, but it's still not sending me an email upon successful completion. I have myself set up as an operator in SQL, and it has the correct email address. So, what am I doing wrong?

Best way to configure jobs on AlwaysON

Posted: 25 Sep 2013 04:34 AM PDT

Hi, we are planning to configure AlwaysOn with 3 servers: two in the same datacenter with synchronous commit and automatic failover, and a third in a different datacenter as DR with asynchronous commit. What is the best way to configure SQL jobs like reindexing, DBCC CHECKDB, update statistics, and other application-related jobs? The reindexing, update statistics, and CHECKDB jobs should run only on the primary; should we also keep them enabled on the secondaries?
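
One hedged pattern: leave the jobs deployed and enabled on every replica, but guard each step so it only does work where the database is currently primary (MyUserDb is a placeholder for a database in the group).

IF EXISTS (
    SELECT 1
    FROM sys.dm_hadr_availability_replica_states ars
    JOIN sys.availability_databases_cluster adc
        ON ars.group_id = adc.group_id
    WHERE adc.database_name = 'MyUserDb'
      AND ars.is_local = 1
      AND ars.role_desc = 'PRIMARY'
)
BEGIN
    -- reindex / CHECKDB / update statistics work goes here
    PRINT 'Local replica is primary; running maintenance.';
END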

Application compatibility for SQL 2012 Always on, vendor question

Posted: 10 Oct 2013 10:24 PM PDT

[size="4"][font="Times New Roman"]We currently use a SQL 2005 server to host numerous databases for a selection of applications. As this is getting to end-of-life, we are looking to replace it with a new 2012 implementation and hoping to use Always On Availability Groups to provide some HA and DR capabilities. I want to check with the application vendors, (some of which are quire small), that their applications will still work if we move to such an environment. To this end I am creating a document which can be sent to them to ask if they support this, I expect it to read something like:-[/font][/size][size="3"][i][font="Arial"]We are intending to implement a new SQL server 2012 instance and may look to move the backend database(s) for your application [application name] to this. As part of this SQL installation, we also intend to utilise the 'Always On Availability Groups' feature, which means that connections to the database will be made through an 'Availability Group Listener'. To use this feature, connections must be made using TCP/IP protocol.1) Has the application [application name] been tested in this configuration?2) Do you support the use of this configuration of database?3) Are there any special prerequisites which we will need to have in place to support the use of your application in such an environment?4) Does your application support connection strings which use ports other than the default 1433?[/font][/i][/size][size="4"][font="Times New Roman"]Are there any glaring mistakes or omissions?Is there anything else I should be asking?Thanks for your input.[/size][/font]

SQL 2012 - Updating a field based on Row to Row Comparison

Posted: 10 Oct 2013 06:57 AM PDT

I have a table that contains dates and times, with fields Date, ExTime, NewTime, and Status. I order the rows by an ExpKey field that puts them in the right order. I want to do a row-by-row comparison, comparing the second row's ExTime to the first row's NewTime. If ExTime < NewTime, I want to set Status to 1, and then traverse the table row by row, so the second row in the example above becomes the first and a new second row is pulled and used. Here is a sample of what I have now, but for some reason it is not hitting and updating all of the rows:

UPDATE t
SET t.Status = 1
FROM MyTable t
CROSS APPLY (
    SELECT TOP 1 NewTime
    FROM MyTable
    WHERE ID = t.ID AND [Date] = t.[Date]
    ORDER BY ExpKey
) t1
WHERE t.ExTime < t1.NewTime

I have the WHERE clause comparing the ID and Date fields to ensure that the rows belong to the same person; if the ID or dates are not the same, the rows are not attached to the same person, so I would not want to update the status. So basically, if the ID of row 2 = the ID of row 1 and the Date of row 2 = the Date of row 1, I want to compare the ExTime of row 2 and see if it is less than the NewTime of row 1; if so, update the Status field of row 2. Any help figuring out why this only partially works would be appreciated.
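
Since this is SQL Server 2012, a hedged sketch using LAG, which reads the previous row's NewTime within each (ID, Date) group in ExpKey order; updating through the CTE then flags exactly the rows whose ExTime precedes it. The CROSS APPLY above always fetches the group's first row, not the previous one, which is why some rows are missed.

WITH ordered AS (
    SELECT Status, ExTime,
           LAG(NewTime) OVER (PARTITION BY ID, [Date]
                              ORDER BY ExpKey) AS PrevNewTime
    FROM MyTable
)
UPDATE ordered
SET Status = 1
WHERE ExTime < PrevNewTime;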

Removed SQL 2012 Instance from Cluster, now having Kerberos problems. Suggestions?

Posted: 10 Oct 2013 09:03 AM PDT

Interesting dilemma... we were having an issue with the passive node of one of our SQL 2012 active/passive clusters (running on Windows 2008), and while working with Microsoft a clean-up script was executed which basically destroyed the cluster. The end suggestion was to uninstall SQL and rebuild the cluster, but I was able to pull the Cluster registry key from the instance and start MS SQL outside of the cluster. All is working great, but now we're finding Kerberos issues with double hops, such as integrated security with SSRS or deploying SSIS packages: it's not picking up the authenticated credentials and is instead using an anonymous account. I've listed out the SPN info for both the server name and the former cluster name on the server, but I'm unsure what to change or drop to fix this. Any suggestions? Thanks.

Bulk Data

Posted: 10 Oct 2013 06:57 AM PDT

Hi, what process should we follow if we have to retrieve data to the tune of 50 million records from a couple of different data sources (legacy and non-SQL Server) and store them in multiple CSV files? How should we go about it; any example? Thanks

AlwaysOn backup dates for secondary backups

Posted: 10 Oct 2013 06:53 AM PDT

I'm not sure if anyone else has encountered this, but I'm not seeing the last backup dates updated on the primary server of an availability group when the backups are performed on the secondary. This seems like a bug to me if you rely on those dates to verify the time of the last backup. The issue is that the backup data is recorded in msdb on the local server, so the backup gets recorded correctly on the secondary, updating the secondary's backupset table, but you can't view it via the GUI, since you can't view the properties of the database on the secondary (even though it's read-only??). I can verify by running the following:

select database_name, max(backup_finish_date) lastBackup
from msdb.dbo.backupset
where type = 'L'
group by database_name;

I take it that the restore process does not use the msdb info, as the last LSN differs as well between the primary and secondary; however, I have been able to restore databases using a full backup from the primary and secondary log backups with no problems. So I suppose the only way to get a valid last backup date is to create a linked server to the other availability group nodes and join the backupset tables together across all the nodes to find the max(last backup). Any better ideas? It's too bad you can't add the msdb database to the availability group; that would solve some other issues as well, like security changes.

[T-SQL] How to delete multiple CHAR(9) at end of string



How to delete multiple CHAR(9) at end of string

Posted: 10 Oct 2013 05:36 AM PDT

I have a situation where, at the end of a remarks string, there are anywhere from 1 to 9 CHAR(9) characters. Is there any tried and true function or code that can easily, without looping, get rid of those unwanted characters and clean up my remarks column??? HELP... and thank you!!! :w00t:
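
A hedged set-based sketch: count the trailing CHAR(9)s by finding the first non-tab character in the reversed string, then cut them off. Guard separately for strings that are entirely tabs, where PATINDEX returns 0.

DECLARE @s varchar(100) = 'remarks text' + CHAR(9) + CHAR(9) + CHAR(9);

-- PATINDEX on the reversed string gives (trailing tab count + 1);
-- LEN counts trailing tabs (it only trims trailing spaces).
SELECT LEFT(@s,
            LEN(@s) - (PATINDEX('%[^' + CHAR(9) + ']%', REVERSE(@s)) - 1))
       AS cleaned;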

Unable to alter view

Posted: 10 Oct 2013 10:54 AM PDT

I have a view that I cannot make changes to. I just open the view in a query window, make a change, run it, and it never comes back. I can create other views and make changes to other views, but for some reason I can't do it for this view. Is there anything that would cause it not to work? I can't even open it and run it without making changes. Selecting from the view works fine; it comes back right away. Thanks, Tom
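
A hedged first check: an ALTER VIEW that never returns is usually waiting on a schema lock held by another session (an open transaction that used the view, for instance), which this query surfaces.

SELECT session_id, blocking_session_id, wait_type, wait_resource
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;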

Merging one database to another

Posted: 10 Oct 2013 02:07 AM PDT

I am not sure exactly the best way to accomplish this task, so I am looking for some advice. We use Dynamics GP; at the moment this is in an OLTP environment. It also contains about 10 years of historical data. We also need to run a number of reports, but this impacts the OLTP environment. My intent is to create a duplicate copy of the GP database on a separate SQL Server that I can use for reporting requirements. This DB can contain all the data, and then I can purge everything older than three years from the primary OLTP DB. What I then need to do, after the nightly GP post, is take a copy of today's updated information and merge it into the DB on the report server. This way the report DB is always, at worst, one day behind the OLTP DB, which is just fine. And this (sorry for the long-winded explanation) is where my question comes in. I can see that I can use the MERGE command to do what I want, but if any of you are familiar with GP, then you will know how many tables would need to be merged. Is MERGE still the best way to accomplish this task, simply iterating through each table, or is there a better method to perform this data merge for the entire DB? Thanks
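
A hedged per-table sketch of what the nightly MERGE looks like; the table and column names are placeholders, and each GP table would need its own key and column list.

MERGE ReportDB.dbo.SomeGPTable AS tgt
USING OltpDB.dbo.SomeGPTable AS src
    ON tgt.ID = src.ID
WHEN MATCHED THEN
    UPDATE SET tgt.Col1 = src.Col1, tgt.Col2 = src.Col2
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ID, Col1, Col2) VALUES (src.ID, src.Col1, src.Col2);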

Sql script to use a column from one table and populate another table

Posted: 10 Oct 2013 09:33 AM PDT

I have two tables, a Product table and a Rate table.

Product: ProductId, Name
Rate: LevelId, Cost, ProductId

Each product has 7 levels, and the cost for each level is 100, 200, ..., 700. I now need a script that takes all the product IDs and populates the Rate table, so that my end output would look like this:

LevelId  Cost  ProductId
1        100   1
2        200   1
3        300   1
4        400   1
5        500   1
6        600   1
7        700   1
1        100   2

and so on. Currently I insert the first 7 rows manually and then run the query below for every product ID:

INSERT INTO dbo.Rate (LevelID, Cost, ProductId)
SELECT LevelID, Cost, ProductId
FROM dbo.Rate
WHERE ProductId = 1

Can you direct me on how to fully automate my work?
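
A hedged set-based sketch: cross join every product with the seven levels, derive the cost as level * 100, and skip combinations that already exist.

INSERT INTO dbo.Rate (LevelID, Cost, ProductId)
SELECT lvl.n, lvl.n * 100, p.ProductId
FROM dbo.Product p
CROSS JOIN (VALUES (1),(2),(3),(4),(5),(6),(7)) AS lvl(n)
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.Rate r
                  WHERE r.ProductId = p.ProductId
                    AND r.LevelID = lvl.n);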

Newly Available/No Longer Available analysis

Posted: 10 Oct 2013 03:34 AM PDT

Thanks if you can help. I have a data set that contains a current record of available units for each day.

DECLARE @AvailableUnits TABLE (UnitDate DATE, UnitID INT)
INSERT @AvailableUnits (UnitDate, UnitID) VALUES ('10/1/2013', 1)
INSERT @AvailableUnits (UnitDate, UnitID) VALUES ('10/1/2013', 2)
INSERT @AvailableUnits (UnitDate, UnitID) VALUES ('10/1/2013', 3)
INSERT @AvailableUnits (UnitDate, UnitID) VALUES ('10/2/2013', 1)
INSERT @AvailableUnits (UnitDate, UnitID) VALUES ('10/2/2013', 2)
INSERT @AvailableUnits (UnitDate, UnitID) VALUES ('10/2/2013', 3)
INSERT @AvailableUnits (UnitDate, UnitID) VALUES ('10/2/2013', 4)
INSERT @AvailableUnits (UnitDate, UnitID) VALUES ('10/3/2013', 2)
INSERT @AvailableUnits (UnitDate, UnitID) VALUES ('10/3/2013', 3)
INSERT @AvailableUnits (UnitDate, UnitID) VALUES ('10/3/2013', 4)
INSERT @AvailableUnits (UnitDate, UnitID) VALUES ('10/4/2013', 2)
INSERT @AvailableUnits (UnitDate, UnitID) VALUES ('10/4/2013', 3)
INSERT @AvailableUnits (UnitDate, UnitID) VALUES ('10/4/2013', 4)
INSERT @AvailableUnits (UnitDate, UnitID) VALUES ('10/5/2013', 2)
INSERT @AvailableUnits (UnitDate, UnitID) VALUES ('10/5/2013', 3)
INSERT @AvailableUnits (UnitDate, UnitID) VALUES ('10/5/2013', 4)
INSERT @AvailableUnits (UnitDate, UnitID) VALUES ('10/6/2013', 2)
INSERT @AvailableUnits (UnitDate, UnitID) VALUES ('10/6/2013', 3)
INSERT @AvailableUnits (UnitDate, UnitID) VALUES ('10/6/2013', 4)
INSERT @AvailableUnits (UnitDate, UnitID) VALUES ('10/6/2013', 5)
INSERT @AvailableUnits (UnitDate, UnitID) VALUES ('10/7/2013', 2)
INSERT @AvailableUnits (UnitDate, UnitID) VALUES ('10/7/2013', 5)
SELECT * FROM @AvailableUnits

I'd like to compare each day's data against the previous day's and identify any units that are newly available or no longer available. My expected results are:

2013-10-02, 4, 'Newly Available Unit'
2013-10-03, 1, 'Unit No Longer Available'
2013-10-06, 5, 'Newly Available Unit'
2013-10-07, 3, 'Unit No Longer Available'
2013-10-07, 4, 'Unit No Longer Available'

Any day in the series could have results, with the exception of the first day. If the data is unchanged compared to the previous day, I don't want to return results.
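
A hedged sketch (LAG needs SQL Server 2012; on 2008 the previous date can be paired via ROW_NUMBER instead): pair each date with the previous date in the series, then report units present on only one side.

WITH d AS (
    SELECT DISTINCT UnitDate FROM @AvailableUnits
), pairs AS (
    SELECT UnitDate,
           LAG(UnitDate) OVER (ORDER BY UnitDate) AS PrevDate
    FROM d
)
-- units present today but not on the previous date
SELECT p.UnitDate, a.UnitID, 'Newly Available Unit' AS Change
FROM pairs p
JOIN @AvailableUnits a ON a.UnitDate = p.UnitDate
WHERE p.PrevDate IS NOT NULL
  AND NOT EXISTS (SELECT 1 FROM @AvailableUnits b
                  WHERE b.UnitDate = p.PrevDate AND b.UnitID = a.UnitID)
UNION ALL
-- units present on the previous date but not today
SELECT p.UnitDate, b.UnitID, 'Unit No Longer Available'
FROM pairs p
JOIN @AvailableUnits b ON b.UnitDate = p.PrevDate
WHERE NOT EXISTS (SELECT 1 FROM @AvailableUnits a
                  WHERE a.UnitDate = p.UnitDate AND a.UnitID = b.UnitID)
ORDER BY UnitDate, UnitID;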

t-sql 2008 r2 performance

Posted: 10 Oct 2013 04:46 AM PDT

In T-SQL 2008 R2, can you tell me which of the following two options performs better, and why?

1.
IF parameter value = '1' GOTO parm1
ELSE IF parameter value = '2' GOTO parm2
GOTO final
parm1: <do something1>
GOTO final
parm2: <do something2>
final:

2.
IF parameter value = '1' <do something1>
ELSE IF parameter value = '2' <do something2>

[SQL Server 2008 issues] Storage migration for 2 node sql cluster.



Storage migration for 2 node sql cluster.

Posted: 10 Oct 2013 07:07 PM PDT

Hi, we have an active/active SQL 2008 cluster running on a Windows 2008 cluster. A storage migration is being planned for this server. From the DBA perspective, is there any impact on the DBAs? Please also advise me, as a DBA, what precautions and steps need to be taken. A reply is much appreciated. Advance thanks.

Restore Database From Overwritten Old Backups

Posted: 10 Oct 2013 05:59 AM PDT

Hi everyone, I'd like some advice on this. I have one backup file to which the full backup is overwritten every day; suppose ADW.bak has the full backup overwritten to it daily. I want to restore the database from the 5-day-old backup file; how can I do that? When I try to restore, it shows me only the most recent backup.
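
A hedged note: if each daily backup was appended (WITH NOINIT), the older sets are still inside the file and can be listed and restored by position; if the file was written WITH INIT, the old sets were overwritten and are gone.

-- List every backup set in the file; note the Position column.
RESTORE HEADERONLY FROM DISK = 'C:\backups\ADW.bak';

-- Restore the 5-day-old set by its position (3 is a placeholder).
RESTORE DATABASE ADW
FROM DISK = 'C:\backups\ADW.bak'
WITH FILE = 3, REPLACE;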

Need Scripts

Posted: 10 Oct 2013 12:03 AM PDT

I need scripts to identify the following:

1) A script to identify what indexes are set on tables.
2) A script to identify unwanted indexes set on tables.
3) A script to identify new recommended indexes that should be set on tables.
4) A script to check fragmentation of the DB and tables, and guidance on how to resolve it.

Please give me an understanding of this, because a Google search returns a huge number of scripts. :w00t:
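
Some hedged starting points using the standard DMVs (interpreting the numbers still takes judgment, which is why a search returns so many variants):

-- 1) Existing indexes per table:
SELECT OBJECT_NAME(i.object_id) AS table_name, i.name, i.type_desc
FROM sys.indexes i
JOIN sys.tables t ON t.object_id = i.object_id;

-- 2) Rarely used indexes (review candidates, not automatic drops):
SELECT OBJECT_NAME(s.object_id) AS table_name, i.name,
       s.user_seeks, s.user_scans, s.user_updates
FROM sys.dm_db_index_usage_stats s
JOIN sys.indexes i ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.database_id = DB_ID();

-- 3) Missing-index suggestions recorded by the optimizer:
SELECT d.statement AS table_name, d.equality_columns,
       d.inequality_columns, d.included_columns
FROM sys.dm_db_missing_index_details d
WHERE d.database_id = DB_ID();

-- 4) Fragmentation (common rule: reorganize 5-30%, rebuild above ~30%):
SELECT OBJECT_NAME(ps.object_id) AS table_name, i.name,
       ps.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ps
JOIN sys.indexes i ON i.object_id = ps.object_id AND i.index_id = ps.index_id;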

How to update one column if second column matches

Posted: 10 Oct 2013 07:21 AM PDT

Hi, I have 2 tables, and I have to update one from the other as below:

table source: col a, col b
table destination: col d, col e

If source.b = destination.e, then update destination, setting d = a. Any clue how to do this? Shaun
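
A hedged sketch using the join form of UPDATE, with the table and column names as given:

UPDATE dst
SET dst.d = src.a
FROM destination dst
JOIN source src ON src.b = dst.e;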

Issue with Sp_executeSql table variable output

Posted: 10 Oct 2013 01:15 AM PDT

My requirement is that I need to populate @tblAmount dynamically. Here is the code:

DECLARE @AdjustmentBatch_ID INT
DECLARE @Platform VARCHAR(20), @Type INT, @lStr NVARCHAR(1000),
        @TableName VARCHAR(20), @ColName VARCHAR(20), @parameters NVARCHAR(30)

SELECT @Platform = '', @lStr = '', @TableName = '', @ColName = '',
       @AdjustmentBatch_ID = 15

--DECLARE @tblAmount TABLE(LegalEntity VARCHAR(10), Amount MONEY )
--BEGIN TRY

SELECT @Type = type, @Platform = Platform
FROM dbo.Adjustment_Batch ab (NOLOCK)
WHERE ab.AdjustmentBatch_ID = @AdjustmentBatch_ID AND ab.Status = 1

SELECT @TableName = CASE WHEN @Type = 1 THEN 'GLEntry'
                         WHEN @Type = 2 THEN 'GrossPremium' END,
       @ColName   = CASE WHEN @Type = 1 THEN 'BaseAmount'
                         WHEN @Type = 2 THEN 'GrossWrittenPremium' END

SELECT @lStr = N'DECLARE @tblAmount TABLE(LegalEntity VARCHAR(10), Amount MONEY );'
             + N'INSERT INTO @tblAmount (LegalEntity, Amount) '
             + N'SELECT LegalEntity, SUM(CAST(' + @ColName + N' as MONEY)) AS Amount '
             + N'FROM dbo.' + @TableName + N' t(NOLOCK) '
             + N'WHERE t.AdjustmentBatch_ID = ' + CAST(@AdjustmentBatch_ID AS VARCHAR(20))
             + N' GROUP BY t.LegalEntity '

PRINT @lStr
EXEC sp_executesql @lStr,
     @parameters = N'@tblAmount(LegalEntity VARCHAR(10), Amount MONEY ) TABLE OUTPUT',
     @tblAmount

Error:

Msg 137, Level 15, State 2, Line 45
Must declare the scalar variable "@tblAmount".
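
A hedged workaround: a table variable cannot be passed as an OUTPUT parameter to sp_executesql (table-valued parameters are READONLY inputs), but a local temporary table created before the call is visible inside the dynamic batch and survives it.

CREATE TABLE #tblAmount (LegalEntity VARCHAR(10), Amount MONEY);

SELECT @lStr = N'INSERT INTO #tblAmount (LegalEntity, Amount) '
             + N'SELECT LegalEntity, SUM(CAST(' + @ColName + N' AS MONEY)) '
             + N'FROM dbo.' + @TableName + N' t (NOLOCK) '
             + N'WHERE t.AdjustmentBatch_ID = '
             + CAST(@AdjustmentBatch_ID AS NVARCHAR(20))
             + N' GROUP BY t.LegalEntity';

EXEC sp_executesql @lStr;

SELECT * FROM #tblAmount;   -- rows are still here after the dynamic batch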

help count that involves mulitple table joins with foreign keys

Posted: 10 Oct 2013 02:11 PM PDT

I am trying to get a count from two separate tables that are joined via foreign keys. It only seems to be counting from the HIS table; I need results from both HIS and GRD.

DECLARE @ID INT = 4043300;

SELECT HISTORY, ELA, MATH, SCIENCE, FL, VA, Prep,
    CASE WHEN HISTORY >= 0 AND ELA >= 1 AND MATH >= 1 AND SCIENCE >= 1
          AND FL >= 1 AND VA >= 1 AND Prep >= 1
         THEN 'Yes' ELSE 'No' END AS [On Target?]
FROM (
    SELECT
        COUNT(CASE WHEN CRS.U1 = 'A' AND (HIS.MK NOT LIKE '%FD%') OR (GRD.M1 NOT LIKE '%FD%') THEN STU.ID END) AS HISTORY,
        COUNT(CASE WHEN CRS.U1 = 'B' AND (HIS.MK NOT LIKE '%FD%') OR (GRD.M1 NOT LIKE '%FD%') THEN STU.ID END) AS ELA,
        COUNT(CASE WHEN CRS.U1 = 'C' AND (HIS.MK NOT LIKE '%FD%') OR (GRD.M1 NOT LIKE '%FD%') THEN STU.ID END) AS MATH,
        COUNT(CASE WHEN CRS.U1 = 'D' AND (HIS.MK NOT LIKE '%FD%') OR (GRD.M1 NOT LIKE '%FD%') THEN STU.ID END) AS SCIENCE,
        COUNT(CASE WHEN CRS.U1 = 'E' AND (HIS.MK NOT LIKE '%FD%') OR (GRD.M1 NOT LIKE '%FD%') THEN STU.ID END) AS FL,
        COUNT(CASE WHEN CRS.U1 = 'F' AND (HIS.MK NOT LIKE '%FD%') OR (GRD.M1 NOT LIKE '%FD%') THEN STU.ID END) AS VA,
        COUNT(CASE WHEN CRS.U1 = 'G' AND (HIS.MK NOT LIKE '%FD%') OR (GRD.M1 NOT LIKE '%FD%') THEN STU.ID END) AS Prep
    FROM CRS
    INNER JOIN HIS ON CRS.CN = HIS.CN
    INNER JOIN STU ON HIS.PID = STU.ID
    LEFT OUTER JOIN GRD ON CRS.CN = GRD.CN AND dbo.GRD.SN = dbo.STU.SN
    WHERE STU.ID = @ID
) AS derived

Thanks in advance.
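
A hedged observation on the counting logic: AND binds tighter than OR in T-SQL, so each condition is read as (U1 = 'A' AND HIS.MK NOT LIKE '%FD%') OR (GRD.M1 NOT LIKE '%FD%'), which lets any row with a qualifying GRD mark count regardless of the course code. Parenthesizing the OR restores the intended grouping:

-- Sketch for the HISTORY branch; the other six follow the same shape.
COUNT(CASE WHEN CRS.U1 = 'A'
            AND (HIS.MK NOT LIKE '%FD%' OR GRD.M1 NOT LIKE '%FD%')
           THEN STU.ID END) AS HISTORY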

Using One Composite Foreign Key Value As Foreign Key In Another Table

Posted: 10 Oct 2013 02:55 AM PDT

(Diagram: http://www.adukinfo.co.zw/Untitled-1.jpg)

Good day all! I have the tables shown in the image above, and I'm trying to create the relationships. I got stuck creating a relationship between the tables 'member_provider' and 'dependant_provider'. As you can see, member_provider has a composite primary key made up of two foreign keys, 'Member_RowID' and 'Provider_RowID'. I want to maintain this composite primary key and then create a relationship with dependant_provider using Member_Provider_RowID. I have tried adding Member_Provider_RowID to the composite key in member_provider, but when I try to create the relationship I get the following error:

"The columns in table 'member_provider' do not match an existing primary key or UNIQUE constraint"

How would I go about maintaining the uniqueness of the table's two columns? I'm using MS SQL Server 2008... Thanks in advance.
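
A hedged sketch of the usual fix: a foreign key must reference a primary key or UNIQUE constraint, so the composite PK can stay as-is if Member_Provider_RowID gets its own UNIQUE constraint to serve as the FK target.

ALTER TABLE member_provider
ADD CONSTRAINT UQ_member_provider_rowid UNIQUE (Member_Provider_RowID);

ALTER TABLE dependant_provider
ADD CONSTRAINT FK_dependant_member_provider
FOREIGN KEY (Member_Provider_RowID)
REFERENCES member_provider (Member_Provider_RowID);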

Create Function

Posted: 10 Oct 2013 07:51 AM PDT

Hello, I need to create a function that pulls data from a string and loads it into a table. Here is a sample string:

'CHG_FEP_SVC_GRP_ID(1) :: Expected Value = 1 :: Actual Value = 20'

Now I need to load the Expected Value into the Amt1 column and the Actual Value into the Amt2 column, as in the table below:

Error                                                              Amt1  Amt2
CHG_FEP_SVC_GRP_ID(1) :: Expected Value = 1 :: Actual Value = 20   1     20

Please help me with that. Thanks
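
A hedged parsing sketch using CHARINDEX/SUBSTRING against the fixed 'Expected Value =' and 'Actual Value =' markers; the same expressions can be wrapped in a function or used directly in an INSERT ... SELECT.

DECLARE @s varchar(200) =
    'CHG_FEP_SVC_GRP_ID(1) :: Expected Value = 1 :: Actual Value = 20';

-- Positions of the first character after each '=' marker.
DECLARE @e int = CHARINDEX('Expected Value =', @s) + LEN('Expected Value =');
DECLARE @a int = CHARINDEX('Actual Value =', @s) + LEN('Actual Value =');

SELECT RTRIM(LTRIM(SUBSTRING(@s, @e, CHARINDEX('::', @s, @e) - @e))) AS Amt1,
       RTRIM(LTRIM(SUBSTRING(@s, @a, LEN(@s)))) AS Amt2;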

SqlDependency performance

Posted: 10 Oct 2013 07:19 AM PDT

Hi all,

I'm using SqlDependency (SQL Server 2008 R2) to get a notification when data in some tables changes. The query notification is used for cache invalidation.

There are three large tables, and I want to be notified of changes to a small subset of each. The three tables change frequently, but the subset does not.

My question is about performance. How does query notification track changes? Does it affect every query against these three tables (not only the subset)? Performance is a major concern in our system; if SqlDependency impacts it, we need to find another approach.

Best wishes,
William

Getting Database Names from Maintenance Plan

Posted: 10 Oct 2013 06:45 AM PDT

I have some maintenance plans that back up, clean up, check integrity, etc. for all user databases in one maintenance plan. I have since created a database where we want to track each action per database, and a stored procedure that takes two parameters: a database name and a comment/action. How would I go about calling that procedure from the plan for each database?

Thanks in advance.
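
One option is to add an Execute T-SQL Statement task to the plan that loops over the same databases the plan touches and calls the procedure once per database. A sketch, where TrackingDB.dbo.LogMaintenanceAction stands in for the two-parameter procedure described above (names are assumptions):

[code="sql"]
-- Log one row per user database; dbo.LogMaintenanceAction is hypothetical,
-- standing in for the two-parameter procedure from the question.
DECLARE @db SYSNAME;
DECLARE dbs CURSOR LOCAL FAST_FORWARD FOR
    SELECT name FROM sys.databases WHERE database_id > 4;  -- skip system DBs
OPEN dbs;
FETCH NEXT FROM dbs INTO @db;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC TrackingDB.dbo.LogMaintenanceAction @DatabaseName = @db,
                                             @Comment = 'Full backup';
    FETCH NEXT FROM dbs INTO @db;
END
CLOSE dbs;
DEALLOCATE dbs;
[/code]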

need help with log shipping error

Posted: 10 Oct 2013 04:15 AM PDT

Hi all,

I am getting this error:

*** Error: Access to the path '\\sqltest\d$\logshipping' is denied.(mscorlib) ***

The backup job runs, but the copy job is failing.

Replicating a large replicated database

Posted: 10 Oct 2013 04:00 AM PDT

We're starting to hit a wall with our in-house skill set, and hopefully someone in the community can provide some input, or point to some additional online resources to give us some insight. First, some background.

Our source systems are written on a proprietary platform and cannot be accessed on an ad-hoc basis via standard tools. The vendor essentially replicates the proprietary source data to a SQL Server database so that the data is available to query. The problem is that poorly written queries can affect the replication service and degrade performance on the source system, so we made the decision to replicate the data that we need to another server.

At the beginning, transactional replication worked just fine. Now, however, the business requirements have become more complex. We need to take a subset of the replicated data and replicate it again to another server. We filter the data via indexed views and then replicate the indexed views to the destination server as tables: SourceDB --> Repl_DB1 --> Repl_DB2.

Here is where the real trouble starts: the article properties on Repl_DB1 are set to "Drop Existing Object" when the destination object name is in use. Because the destination object is now referenced via a view, we can neither drop nor truncate the destination object. Deleting the data is also not an option, because several of these tables contain hundreds of millions of rows, which would likely overwhelm our transaction log.

So far, it's looking like we need to perform the following steps whenever we have to regenerate a snapshot:

1. Drop the publication from Repl_DB1 to Repl_DB2.
2. Drop all views on Repl_DB1 that reference replicated tables.
3. Regenerate the snapshot of Repl_DB1 using the truncate destination object option.
4. Recreate the views on Repl_DB1.
5. Recreate the publication from Repl_DB1 to Repl_DB2 using the truncate destination option.

This is an option that we would like to avoid, due to the size of the databases. Given the number of publications we have, it could potentially be days before the data on Repl_DB2 is back to the state it was in before the snapshot was recreated on Repl_DB1.

What advice/experience can anyone offer?

SQL reindexing

Posted: 09 Oct 2013 07:31 PM PDT

I am going to incorporate into my reindexing script a rule that any index under a certain size should be ignored. What's the rule of thumb for this? There must be a size below which rebuilding or reorganizing an index just becomes inefficient.
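
The commonly quoted thresholds (from Microsoft's reorganize/rebuild guidance) are to skip indexes under roughly 1,000 pages, reorganize between about 5% and 30% fragmentation, and rebuild above 30%; small indexes often live on mixed extents, so their reported fragmentation is meaningless anyway. A sketch of the filter a script might use:

[code="sql"]
-- List only indexes worth acting on in the current database.
SELECT OBJECT_NAME(ips.object_id)        AS table_name,
       i.name                            AS index_name,
       ips.page_count,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.page_count >= 1000                      -- ignore small indexes
  AND ips.avg_fragmentation_in_percent >= 5;     -- below this, do nothing
[/code]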

TRY/CATCH AND SAVEPOINTS INSIDE TRIGGERS

Posted: 10 Oct 2013 12:52 AM PDT

Hello all,

As always, your ideas/suggestions will be greatly appreciated.

I have created a trigger. The idea is that when there is any DML operation on Table A, the changes should be reflected in Table B. So far so good. Here is the code:

[code="sql"]
ALTER TRIGGER [dbo].[tr_ComponentOnLines] ON [dbo].[tblCFGLine]
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
BEGIN TRY
    DECLARE @temptblInsertUpdate TABLE
    (
        ID INT,
        Line VARCHAR(20),
        LineTypeID INT,
        Activity VARCHAR(20)
    );
    DECLARE @Activity AS VARCHAR(20);
    DECLARE @RowCounter AS INT

    INSERT INTO @temptblInsertUpdate (ID, Line, LineTypeID, Activity)
    SELECT I.LineID, I.LineDesc,
           (SELECT LineTypeID FROM DhubOEE.dbo.LineType WHERE OlympusID = I.LineTypeID),
           CASE WHEN EXISTS (SELECT * FROM deleted) THEN 'UPDATE' ELSE 'INSERT' END AS Activity
    FROM INSERTED I
    UNION ALL
    SELECT D.LineID, D.LineDesc,
           (SELECT LineTypeID FROM DhubOEE.dbo.LineType WHERE OlympusID = D.LineTypeID),
           'DELETE'
    FROM DELETED D
    WHERE NOT EXISTS (SELECT * FROM INSERTED)

    --Check if the Line already exists in Assets.
    SET @RowCounter = (SELECT COUNT(*)
                       FROM @temptblInsertUpdate Temp
                       INNER JOIN DhubOEE.dbo.Asset A ON Temp.Line = A.AssetName)

    SET @Activity = (SELECT TOP(1) Activity FROM @temptblInsertUpdate)

    SAVE TRANSACTION Tr
    BEGIN TRAN
    DECLARE @ErrorValue INT = 0

    IF (@RowCounter = 0)
    BEGIN
        --Create the Line
        INSERT INTO DhubOEE.dbo.Asset (ParentID, AssetName, Path)
        SELECT (SELECT TOP(1) AssetID FROM DhubOEE.dbo.Asset), temp.Line, NULL
        FROM @temptblInsertUpdate temp

        --Update the path of the newly created Asset
        UPDATE DhubOEE.dbo.Asset
        SET Path = '.1.' + CONVERT(VARCHAR(MAX), SCOPE_IDENTITY())
        WHERE AssetID = SCOPE_IDENTITY()

        --Construct the path by updating Line
        INSERT INTO DhubOEE.dbo.Line (OlympusID, Linedesc, LineTypeID, State, AssetID)
        SELECT 30, Asset.AssetName, LineTypeID, 1, SCOPE_IDENTITY()
        FROM DhubOEE.dbo.Asset
        INNER JOIN @temptblInsertUpdate Temp ON Temp.Line = Asset.AssetName
    END

    IF @Activity = 'UPDATE'
    BEGIN
        UPDATE L
        SET L.LineDesc = Temp.Line, L.LineTypeID = Temp.LineTypeID
        FROM DhubOEE.dbo.Line L
        INNER JOIN @temptblInsertUpdate AS Temp ON L.OlympusID = Temp.ID
    END

    IF @Activity = 'DELETE'
    BEGIN
        UPDATE L
        SET L.State = 0
        FROM DhubOEE.dbo.Line L
        INNER JOIN @temptblInsertUpdate AS Temp ON L.OlympusID = Temp.ID
    END

    SET @ErrorValue = @@ERROR
    COMMIT
END TRY
BEGIN CATCH
    IF (@ErrorValue > 0)
    BEGIN
        ROLLBACK TRAN Tr
        PRINT @@TRANCOUNT
    END
END CATCH
END
[/code]

The task is that even if something goes wrong during the operations on Table B (e.g., a foreign key violation), the operations on Table A should continue and complete. The problem is that if something goes wrong on Table B, the operations on Table A are aborted as well.

I understand that when a trigger executes, an implicit transaction starts. Therefore, even if you create another transaction inside the trigger, it is considered a nested transaction, and if an error occurs, theory says the entire batch will roll back. I thought a way to battle this was to bring savepoints on board. However, there is something I am missing or doing wrong: it doesn't work, and the entire set of operations is aborted.

Any ideas? Thanks.
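
For non-fatal errors, one pattern that can work is to turn XACT_ABORT off inside the trigger (triggers default it to ON, which dooms the transaction on most errors), take the savepoint before the Table B work, and roll back only to the savepoint in CATCH. A hedged sketch of the shape, not the full trigger, and note the nested BEGIN TRAN/COMMIT above should go, since COMMIT there only decrements @@TRANCOUNT:

[code="sql"]
-- Inside the trigger body (sketch only):
SET XACT_ABORT OFF;            -- triggers default to XACT_ABORT ON
SAVE TRANSACTION BeforeTableB; -- mark the point to fall back to
BEGIN TRY
    -- ... all the Table B (DhubOEE) work goes here ...
    PRINT 'Table B updated';
END TRY
BEGIN CATCH
    IF XACT_STATE() = 1
        ROLLBACK TRANSACTION BeforeTableB;  -- undo only the Table B work
    -- If XACT_STATE() = -1 the transaction is doomed, and the outer
    -- statement on Table A rolls back no matter what the trigger does.
END CATCH;
[/code]

Errors that doom the transaction (XACT_STATE() = -1) still take Table A down with them; for those, the only fully safe route is to move the Table B work out of the trigger (e.g., queue it to a table and process it separately).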

Backup - Veeam + DPM

Posted: 10 Oct 2013 12:42 AM PDT

Hi,

The company I work for currently uses Veeam to replicate a SQL Server instance every morning at 3 AM (this involves taking a full backup of each database). They also use DPM to back up the databases: an express full backup takes place at 2 AM, and then synchronization takes place every 15 minutes.

I was concerned that the backup chain might be broken by the two products running full database backups on the same databases. To test this, I executed a full backup using DPM, made some changes to a database, and then ran the replication job within Veeam (which takes a full backup). I then allowed a number of DPM synchronization jobs to run (log backups) before restoring the database from DPM.

The restore was successful, despite the fact that the log chain should have been broken (as far as I can tell). If anyone can make sense of what I have written and offer a suggestion as to how this restore was able to succeed, I would be very grateful.

Kind regards,
DBANewbie
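
One plausible explanation is that one of the products takes its full backups as copy-only (VSS-based tools often do), which leaves the log chain and differential base untouched. msdb records this, so it can be checked directly; a sketch, with a placeholder database name:

[code="sql"]
-- Inspect recent backups; is_copy_only = 1 means the backup did not
-- affect the log chain or the differential base.
SELECT bs.database_name, bs.backup_start_date,
       bs.type,            -- D = full, I = differential, L = log
       bs.is_copy_only, bs.first_lsn, bs.last_lsn
FROM msdb.dbo.backupset AS bs
WHERE bs.database_name = 'YourDatabase'   -- placeholder name
ORDER BY bs.backup_start_date DESC;
[/code]

Also worth remembering: full backups never break the log chain in the first place; only a log backup taken elsewhere (or a switch to simple recovery) does, so interleaved fulls plus one product's log backups can still restore cleanly.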

Consistency Checking on a VLDB with Partitioning and a Read-Only Filegroup

Posted: 10 Oct 2013 02:06 AM PDT

The background: I have a VLDB, 1-2 TB, that I need to back up and replicate over to a reporting server for internal resources. Within this database there are basically two categories of tables: staging and prod. The staging tables include a bunch of raw client data tables which make up roughly half the size of the database. The business has mandated we keep this data indefinitely and that it be accessible for validation/auditing purposes. The data in these tables is historical, but since we're only ever working with the last couple of months, my first thought was to partition these tables by year. My next thought was to split those partitions onto read-only and read-write filegroups. That affords me the ability to shorten my backup window by doing filegroup backups, and it makes the backup file I need to copy over much smaller.

The issue: how do I run consistency checks against the live database? DBCC CHECK[fill in the blank] doesn't seem to like read-only filegroups, so what I'm envisioning is running integrity checks on the "replicated" database by making the read-only filegroup read-write there.

Make sense? Am I missing something? Is there a better way to do this (without considerable architectural changes)? Am I totally off base? Any input would be appreciated. Thanks.
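
One option worth testing is DBCC CHECKFILEGROUP, which scopes the consistency checks to a single filegroup. The read-write filegroups can then be checked frequently, and the read-only ones only when they change, or against the restored copy on the reporting server. A sketch, with a placeholder database name:

[code="sql"]
USE VLDB_Database;   -- placeholder database name
GO
-- Check only the active read-write filegroup; run the read-only
-- filegroups on a slower cadence or against the restored copy.
DBCC CHECKFILEGROUP (N'PRIMARY') WITH NO_INFOMSGS;
[/code]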

Compare Tables

Posted: 09 Oct 2013 08:42 PM PDT

Hi,

I have this data first, before I explain my query and my issue:

[code="sql"]
CREATE TABLE staging
(
    customer_id    INT NOT NULL UNIQUE,
    customer_name  VARCHAR(20),
    customer_lname VARCHAR(20),
    [status]       INT
)

INSERT INTO staging (customer_id, customer_name, customer_lname, [status])
VALUES (1, 'James', 'Brown', 1),
       (2, 'Thabo', 'Kgosi', 1),
       (3, 'Horse', 'King', 0),
       (4, 'Tom', 'Smith', 1)

CREATE TABLE final
(
    customer_id    INT NOT NULL UNIQUE,
    customer_name  VARCHAR(20),
    customer_lname VARCHAR(20)
)

INSERT INTO final (customer_id, customer_name, customer_lname)
VALUES (1, 'James', 'Brown'),
       (2, 'Thabo', 'Kgosi'),
       (3, 'Horse', 'King'),
       (4, 'Tom', 'Smith')

CREATE TABLE error
(
    customer_id    INT NOT NULL UNIQUE,
    customer_lname VARCHAR(20)
)
[/code]

Let me explain my data first. I have a staging table; all the records get validated and assigned a [status]. If a record fails verification it gets [status] = 1, otherwise it is passed straight to the final table. Records in staging will be validated again; if they pass, they go through to the final table.

In my query below, I want to check the data in staging with status = 1, then check whether that record is also in final, which would mean it once failed verification. If I find such a record, I write it to the error table. I want to end up with all the records that once failed validation.

[code="sql"]
insert into error
select fn.customer_id, fn.customer_name
from final fn
left join staging st on fn.customer_id = st.customer_id
where in (select * from staging stg
          where stg.[status] = 1 and fn.customer_id = stg.customer_id)
[/code]

I'm struggling with the concept, but I think it should be along the lines of the code I wrote. Please help.
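
If the goal is "rows in final whose staging counterpart has [status] = 1", an EXISTS against staging is probably the simplest shape; the LEFT JOIN and the bare "where in" aren't needed. A sketch using the sample tables above (note that error's second column is customer_lname, so that column is selected rather than customer_name):

[code="sql"]
-- Rows in final that once failed validation (staging.status = 1)
INSERT INTO error (customer_id, customer_lname)
SELECT fn.customer_id, fn.customer_lname
FROM final AS fn
WHERE EXISTS (SELECT 1
              FROM staging AS st
              WHERE st.customer_id = fn.customer_id
                AND st.[status]   = 1);
[/code]

With the sample data this inserts customers 1, 2, and 4; customer 3 has status 0 and is skipped.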

Semi-hypothetical backup question

Posted: 09 Oct 2013 11:43 PM PDT

So I've been commenting in a topic over on Technet, trying to help a guy who needs to ship a copy of his DB to a vendor. Said DB has some sensitive information in it, which he doesn't want the vendor to get. He's working with a copy of the production DB, has updated the sensitive info to "placeholder" data, and is now concerned that someone may be able to retrieve the sensitive data from the transaction log.

His initial procedure was:

1. Back up the DB.
2. Back up the log.
3. Shrink the log (remember, this isn't the production data!).
4. Back up the DB again (this is the one he'd ship to the vendor).

Now, I didn't think that the backup he'd ship would have anything in the TLog that could potentially be "read" with a log reader program to recover the sensitive data. Is this incorrect?

If it would be possible to "read" the sensitive information, would something like this avoid the issue:

1. Make the changes.
2. Flip the DB to simple recovery (he's working with a copy, after all).
3. Issue a CHECKPOINT.
4. Wait a while for the lazy writer to flush the log.
5. Flip back to full recovery.
6. Back up the DB (ship this one to the vendor).

I'm not out to prove the other commenters on Technet wrong, just looking to (try to) increase my knowledge.

Thanks,
Jason

edit SQL tables directly from Excel

Posted: 10 Oct 2013 12:21 AM PDT

Hi, simple question: is it possible to connect to a SQL Server 2008 database table with MS Excel 2010 and edit the fields directly from Excel, the same way as when you edit a table in SSMS? Thanks in advance.

High memory utilization

Posted: 09 Oct 2013 10:11 PM PDT

Hi,

On one of our servers, SQL Server is consuming most of the memory (as seen in Task Manager). Apart from setting a memory limit, is there any way we can reduce SQL Server's memory utilization? I believe setting a memory limit may degrade performance, as SQL Server will have to manage with less memory. How should I deal with this situation? The client is chasing us because memory utilization is high. Please advise.
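
It may help to explain to the client that SQL Server is designed to take as much memory as it is allowed (to cache data) and to yield it back under OS pressure, so high usage in Task Manager is expected, not a leak. The supported control is still 'max server memory', sized to leave the OS and other processes enough headroom:

[code="sql"]
-- Cap the SQL Server memory target; the figure (MB) is an example,
-- not a recommendation -- size it to leave the OS a few GB.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 12288;
RECONFIGURE;
[/code]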

Help

Posted: 09 Oct 2013 10:19 PM PDT

I created a linked server. Below is the error:

The operation could not be performed because OLE DB provider "SQLNCLI" for linked server "EMPLOYEE" was unable to begin a distributed transaction.

Replacing the first character in a column containing values

Posted: 09 Oct 2013 09:38 PM PDT

Hi All,

I have a table like this:

[code="sql"]
CREATE TABLE [dbo].[template_practice]
(
    [value1] [char](10) NOT NULL,
    [value2] [char](9) NOT NULL,
    CONSTRAINT [PK_template_practice] PRIMARY KEY CLUSTERED ([value1] ASC)
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
              ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
[/code]

and data inserted into the table as:

[code="sql"]
INSERT INTO [practicedb].[dbo].[template_practice] ([value1], [value2]) VALUES ('AVINASH', 'ANAND')
GO
INSERT INTO [practicedb].[dbo].[template_practice] ([value1], [value2]) VALUES ('AKASH', 'AKHILA')
GO
INSERT INTO [practicedb].[dbo].[template_practice] ([value1], [value2]) VALUES (' ', ' ')
GO
INSERT INTO [practicedb].[dbo].[template_practice] ([value1], [value2]) VALUES ('KRISH', 'KRISHNA')
GO
INSERT INTO [practicedb].[dbo].[template_practice] ([value1], [value2]) VALUES ('KRUPA', 'KANNA')
GO
[/code]

I want to replace the first character with 'M' only for values starting with 'K'; the remaining values, starting with other characters or blank/NULL, should not change.

I wrote a query like this:

[code="sql"]
USE practicedb
GO
DECLARE @find varchar(20)
SELECT @find = 'K'
UPDATE template_practice
SET value1 = STUFF(value1, CHARINDEX(@find, value1), LEN(@find), 'M')
[/code]

But when I execute it, it shows an error:

Msg 515, Level 16, State 2, Line 4
Cannot insert the value NULL into column 'VALUE1', table 'practicedb.dbo.template_practice'; column does not allow nulls. UPDATE fails.
The statement has been terminated.

Please tell me how to fix this.
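
The error comes from STUFF: when a row contains no 'K', CHARINDEX returns 0, and STUFF with a start position of 0 returns NULL, which the NOT NULL column rejects. Restricting the UPDATE with a WHERE clause means rows that don't start with 'K' are never touched at all. A minimal sketch against the table above:

[code="sql"]
DECLARE @find CHAR(1) = 'K', @repl CHAR(1) = 'M';

-- Replace only the first character, and only where it matches @find;
-- blank rows and rows starting with other letters are never updated,
-- so STUFF can no longer receive a start position of 0.
UPDATE dbo.template_practice
SET    value1 = STUFF(value1, 1, 1, @repl)
WHERE  value1 LIKE @find + '%';
[/code]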

Best Index Strategy...

Posted: 09 Oct 2013 09:16 PM PDT

Is it correct if I use a composite nonclustered index like this:

[b]FirstName + LastName + BirthDate[/b]

Do I then not need the other indexes, i.e. is the first index enough?

First Name + Last Name
First Name

Or do the extra indexes have big advantages for performance?

Best regards,
Nicole ;-)
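
With a composite index, any query that filters on a leading prefix of the key can seek on it, so (FirstName, LastName, BirthDate) also serves FirstName-only and FirstName+LastName searches; the narrower indexes are redundant. A separate index is usually only worth it for queries that lead with a different column. A sketch (table name assumed):

[code="sql"]
CREATE NONCLUSTERED INDEX IX_Person_Name_Birth
    ON dbo.Person (FirstName, LastName, BirthDate);  -- table name assumed

-- All three of these can seek on IX_Person_Name_Birth:
SELECT * FROM dbo.Person WHERE FirstName = 'Nicole';
SELECT * FROM dbo.Person WHERE FirstName = 'Nicole' AND LastName = 'Meyer';
SELECT * FROM dbo.Person WHERE FirstName = 'Nicole' AND LastName = 'Meyer'
                           AND BirthDate = '1980-05-01';

-- This one cannot seek (LastName is not a leading column):
SELECT * FROM dbo.Person WHERE LastName = 'Meyer';
[/code]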

Restore Information SQL DB

Posted: 09 Oct 2013 09:13 PM PDT

One of the users started a restore of a DB and left for home. When we check, we see that the DB is in the RESTORING state. Can any query help identify what percent of the restore is complete, or how much is left before the restoration finishes?
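
For a restore that is still actively running, sys.dm_exec_requests exposes the progress and an estimated finish time:

[code="sql"]
-- Progress of any restore currently executing on the instance
SELECT r.session_id, r.command, r.percent_complete,
       DATEADD(ms, r.estimated_completion_time, GETDATE()) AS estimated_finish
FROM sys.dm_exec_requests AS r
WHERE r.command IN ('RESTORE DATABASE', 'RESTORE LOG');
[/code]

If this returns nothing, the restore session has likely already ended and the database was simply left in RESTORING (e.g., restored WITH NORECOVERY), in which case there is no progress to track, only a restore to finish or roll forward.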

SQL DB Architect Requirements

Posted: 09 Oct 2013 07:16 PM PDT

Hi,

I'm new to this forum and need help from seniors on the requirement below. Thanks in advance.

I have around 8+ years of experience in SQL Server, including T-SQL, migration, 4 years of SSRS development, data modeling, etc. My question: though I have 8+ years of experience in SQL, I don't have a full or partial list of the things a database architect should know. I need a list, something like:

- Should have detailed knowledge of data modeling
- Strong experience in SSRS with tabular, matrix, and chart reports
- SSIS packaging experience with...

If the seniors in this forum can help me with this, it would be a great help. I'm planning to learn and get expertise in the areas where I don't have experience.

Thanks in advance for the help.

Thursday, October 10, 2013

[SQL Server] Who done it? Deleting Replication Jobs...

[SQL Server] Who done it? Deleting Replication Jobs...


Who done it? Deleting Replication Jobs...

Posted: 10 Oct 2013 02:16 PM PDT

I have a vendor that keeps deleting my replication job. Before my company will address the issue, I have to prove who did it. I'm 99% sure they used sp_removedbreplication. Any ideas where I can find this information in SQL Server? I REALLY REALLY want to catch this vendor red-handed.
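
One place to look is the default trace, which is on by default in SQL Server 2005+ and captures object created/deleted events along with the login, host, and application that issued them. A sketch:

[code="sql"]
-- Read the default trace for object-deleted events (EventClass 47).
DECLARE @path NVARCHAR(260);
SELECT @path = path FROM sys.traces WHERE is_default = 1;

SELECT t.StartTime, t.DatabaseName, t.ObjectName,
       t.LoginName, t.HostName, t.ApplicationName
FROM sys.fn_trace_gettable(@path, DEFAULT) AS t
WHERE t.EventClass = 47          -- Object:Deleted
ORDER BY t.StartTime DESC;
[/code]

The default trace rolls over quickly on a busy server, so for an airtight case going forward it may be worth adding a DDL trigger or a server audit that logs drops as they happen.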

trying to KILL a SPID, getting 'Cannot use KILL to kill your own process.'

Posted: 10 Oct 2013 11:06 AM PDT

It's happening on an instance of SQL Server installed on my own machine. Inside a database called Sandbox, I was running a query that didn't want to finish, so I cancelled it. But it appears it's still running, because when I do sp_who2 I get:

54 RUNNABLE Domain MachineName . Sandbox SELECT INTO 31 33 10/10 17:00:06 Microsoft SQL Server Management Studio - Query 54 0

The SPID is 54, and when I try to kill it I get that error. So I created another login for the SQL Server instance with the sysadmin and public server roles checked, logged out as the current user, and logged back in with the new login to kill the SPID. Still can't. I've also rebooted. Still, SPID 54 is there, hogging resources and preventing me from updating a table in the Sandbox database.

I need to kill the SPID because it's locking resources on the database that I need. Not having access is ruining my evening plans. :w00t: How do I solve this problem of a SPID locking my database?

NOTE: I am willing to take almost any action to solve this, short of tossing the laptop, uninstalling SQL Server and reinstalling, or deleting the whole database (it's not backed up, as it's been my sandbox).
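
That error means the KILL is being issued from session 54 itself; the login doesn't matter, only the session does. Opening a brand-new query window (a different SPID) is usually enough:

[code="sql"]
-- Run these from a NEW query window (a different session):
SELECT @@SPID;             -- sanity check: must NOT return 54
KILL 54;                   -- KILL only refuses the session it is issued from
-- If the SPID then sits in ROLLBACK, this reports how far along it is:
KILL 54 WITH STATUSONLY;
[/code]

Note that a cancelled SELECT INTO can spend a long time rolling back, and killing it does not skip that rollback; STATUSONLY at least shows whether it is making progress.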
