Wednesday, October 2, 2013

[how to] How to do automatic failover in PostgreSQL 9.1


How to do automatic failover in PostgreSQL 9.1

Posted: 02 Oct 2013 04:53 PM PDT

I am new to PostgreSQL, and I have the task of implementing replication and failover in a PostgreSQL database. We are using two nodes (a primary and a slave). I have configured streaming replication between them and it works fine, but I am unable to configure automatic failover: when the primary goes down, the slave should be promoted to primary.

I tried Pgpool, but after reading some forums I dropped it; now I am planning to go with repmgr. Is repmgr a good choice? I need a step-by-step configuration of failover in Postgres with repmgr.

I have tried many scripts for failover, but they don't seem to work. Can anyone provide a sample script for the failover process, and the steps to execute it?

Triggers, the flow of the program does not stop [on hold]

Posted: 02 Oct 2013 04:51 PM PDT

I made a simple program on Oracle Forms. The code below is the trigger when the submit button is pressed.

Here it is:

BEGIN
    CREATE_RECORD;
    IF :USERS.USERNAME IS NULL THEN
        MESSAGE('Please enter Username');
        GO_ITEM('USERNAME');
    ELSIF :USERS.PASSWORD IS NULL THEN
        MESSAGE('Please enter Password');
        GO_ITEM('PASSWORD');
    ELSIF :USERS.PASSWORD2 IS NULL THEN
        MESSAGE('Please confirm your Password');
        GO_ITEM('PASSWORD2');
    ELSIF :USERS.PASSWORD != :USERS.PASSWORD2 THEN
        MESSAGE('Password did not match');
        GO_ITEM('PASSWORD2');
    ELSIF :USERS.NAME IS NULL THEN
        MESSAGE('Please enter your Name');
        GO_ITEM('NAME');
    ELSIF :USERS.POSITION IS NULL THEN
        MESSAGE('Please enter your Position');
        GO_ITEM('POSITION');
    END IF;
    IF :USERS.ACCESS_LEVEL = 'admin' THEN
        IF :USERS.ADMIN_PASS = 'eweb1' THEN
            alert := show_alert('USER_CREATED');
            IF alert = alert_button1 THEN
                MESSAGE('OK');
            END IF;
        ELSE
            MESSAGE('Administrator Password did not match');
            GO_ITEM('ADMIN_PASS');
        END IF;
    ELSE
        alert := show_alert('USER_CREATED');
        IF alert = alert_button1 THEN
            /* foo */
        END IF;
    END IF;
END;

When the form encounters an error, e.g. when the second password (password2) does not match and the 'Password did not match' message is shown, it still falls through and executes the following statements instead of stopping and waiting for the button to be pressed again. I hope I can fix this. Thanks.
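The fall-through happens because MESSAGE does not stop trigger execution. A common Forms pattern (a sketch, not from the original post) is to raise FORM_TRIGGER_FAILURE after each validation message so the trigger aborts:

```sql
-- Sketch: abort the trigger after a failed validation (Oracle Forms PL/SQL)
IF :USERS.PASSWORD != :USERS.PASSWORD2 THEN
    MESSAGE('Password did not match');
    GO_ITEM('PASSWORD2');
    RAISE FORM_TRIGGER_FAILURE;  -- execution stops here; later statements never run
END IF;
```

The same RAISE would be added after each of the validation branches above.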

Cannot install DB2 Express-C 10.1 on Mac OS X 10.7.5

Posted: 02 Oct 2013 03:07 PM PDT

I'm having trouble setting up DB2 Express-C 10.1 on my Mac OS X (v10.7.5) machine. My initial install attempt failed, but then I found

https://www.ibm.com/developerworks/community/forums/html/topic?id=77777777-0000-0000-0000-000014927797  

That seemed to work up to the point where I tried to start the db:

$ ./db2start
dyld: Library not loaded: /db2unix/db2galileo/db2_galileo_darwinport/darwin64/s120905/engn/lib/libdb2e.dylib
  Referenced from: /Users/home/sqllib/adm/./db2start
  Reason: image not found
Trace/BPT trap: 5
$

On the other hand, if I try to run db2setup from the expc directory, I get an error message that others have seen. Something like

/Applications/expc/db2/macos/install/db2setup: line 606: /tmp/db2.tmp.22412/db2/macos/install/../java/jre/bin/java: No such file or directory

Where the 5-digit number in the tmp directory name changes on each run.

That is followed by a nice DB2 10.1 intro screen which is suddenly replaced by a blank DB2JDKTester window that says "DB2 SETUP LAUNCHPAD" at the top.

Has anyone else tried to install DB2 on Mac OS X 10.7?

(And, yes, I've seen the other thread with the same title, but that was never actually answered.)

How to get "Lookup" functionality in Access when linking to a SQL table?

Posted: 02 Oct 2013 04:24 PM PDT

I am building a SQL database which will have an Access 2010 front-end.

I would like some of the fields to be lookups in Access (i.e. the user clicks on the field in Access and a drop-down populates). It is fairly straightforward to make a field a lookup for another table in Access, but I can't figure out how to do it in SQL and then propagate the changes.

My SQL knowledge is very basic. Here's an example of how I am creating my SQL tables:

CREATE TABLE RequestStatus (
    RequestStatus varchar(50) PRIMARY KEY
);
INSERT INTO RequestStatus (RequestStatus) VALUES ('Val 1');
INSERT INTO RequestStatus (RequestStatus) VALUES ('Val 2');
INSERT INTO RequestStatus (RequestStatus) VALUES ('Val 3');

CREATE TABLE Projects (
    ID int IDENTITY(1,1) PRIMARY KEY,
    RequestStatus varchar(50) FOREIGN KEY REFERENCES RequestStatus(RequestStatus),
    Quantity varchar(50)
);

I then connect to the database through the ODBC connection in Access.

How can I create my tables in SQL so that the RequestStatus field of my Projects table has the same functionality a lookup table does? For example, being able to click on the RequestStatus attribute of a Project and select "Val 1", "Val 2" or "Val 3" from a list. The above does require the tables to match, but does not provide the drop-down lookup functionality.
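The drop-down itself is a front-end feature; SQL Server can only enforce the relationship via the foreign key. In Access, the usual approach (a sketch, assuming the linked tables above) is to change the bound control on the form to a combo box whose Row Source is a query against the lookup table, such as:

```sql
-- Row Source query for the Access combo box bound to Projects.RequestStatus
SELECT RequestStatus FROM RequestStatus ORDER BY RequestStatus;
```

The foreign key then rejects anything not in the list, while the combo box supplies the picker.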

Why won't Postgresql 9.3 start on Ubuntu?

Posted: 02 Oct 2013 02:02 PM PDT

All,

I have successfully installed PostgreSQL 9.3 from the APT repository on two VMs running Ubuntu 12.04 and 13.04; however, I cannot get it to install properly on my host machine running Ubuntu 12.04.

The install (this time) seems to have gone ok, but perhaps there is an error I'm not understanding:

* No PostgreSQL clusters exist; see "man pg_createcluster"
Setting up postgresql-9.3 (9.3.0-2.pgdg12.4+1) ...
Creating new cluster 9.3/main ...
  config /etc/postgresql/9.3/main
  data   /var/lib/postgresql/9.3/main
  locale en_US.UTF-8
  port   5432
update-alternatives: using /usr/share/postgresql/9.3/man/man1/postmaster.1.gz to provide /usr/share/man/man1/postmaster.1.gz (postmaster.1.gz) in auto mode.

So I then try to add myself as a postgresql user, but I get this:

createuser: could not connect to database postgres: could not connect to server: No such file or directory
    Is the server running locally and accepting
    connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?

I cannot see Postgresql running in system monitor, and there is no file in the /var/run/postgresql/ folder...

EDIT: On the VM's, there is a file in /var/run/postgresql/ called 9.3-main.pid

So... what's going on here that isn't going on in my VMs? Like I said, the other installations on the VMs, including PostGIS and pgAdmin, came in perfectly... no idea why this host machine isn't going through...

Any thoughts appreciated!

-mb

MySQL stores BLOB in innodb buffer pool (innodb_buffer_pool_size)?

Posted: 02 Oct 2013 08:00 PM PDT

I have several databases, all InnoDB; 50 GB are BLOB columns, and only 700 MB are not BLOB data. Tools such as mysqltuner tell me to set innodb_buffer_pool_size = 51G.

Does MySQL store BLOBs in RAM? If so, how can I configure MySQL so that it doesn't use the InnoDB buffer pool to store BLOBs?

Recover original utf8 data from latin1 mysql db

Posted: 02 Oct 2013 12:00 PM PDT

We forgot to change the charsets in our MySQL database after migration to a new server.

Current situation:

mysql> show variables like 'character%';
+--------------------------+----------------------------+
| Variable_name            | Value                      |
+--------------------------+----------------------------+
| character_set_client     | latin1                     |
| character_set_connection | latin1                     |
| character_set_database   | latin1                     |
| character_set_filesystem | binary                     |
| character_set_results    | latin1                     |
| character_set_server     | latin1                     |
| character_set_system     | utf8                       |
| character_sets_dir       | /usr/share/mysql/charsets/ |
+--------------------------+----------------------------+

mysql> show variables like 'collation%';
+----------------------+-------------------+
| Variable_name        | Value             |
+----------------------+-------------------+
| collation_connection | latin1_swedish_ci |
| collation_database   | latin1_swedish_ci |
| collation_server     | latin1_swedish_ci |
+----------------------+-------------------+

But table charset is utf-8:

mysql> show create table inbox\G
*************************** 1. row ***************************
Create Table: CREATE TABLE `inbox` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  # ...
  `message` varchar(2048) DEFAULT NULL
) ENGINE=InnoDB AUTO_INCREMENT=4001 DEFAULT CHARSET=utf8

We're using Node.js (0.10.*) for the backend and the latest felixge node-mysql (the connection is UTF8_GENERAL_CI by default).

Before we found the incorrect config, a lot of messages (~4000) had been saved to the table, most of them UTF-8. Sure enough, we currently see a lot of question marks ?????????? instead of normal text.

I've tried node-iconv, but no luck.

var db = require('mysql').createConnection({
  // credentials
});

var iconv = new require('iconv').Iconv('latin1', 'utf-8');

db.query('select message from inbox where id = 42', function(err, result) {
  var msg = result[0].message;
  console.log(iconv.convert(msg).toString());
  db.destroy();
});

Is there any chance to recover original utf-8 data?
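For what it's worth, literal '?' characters usually mean the original bytes were already replaced during a charset conversion and cannot be recovered. If the text is merely double-encoded (bytes intact but mislabeled), the classic MySQL round-trip recipe applies; this is a sketch under that assumption, not a confirmed fix for this table:

```sql
-- Undo the bogus latin1->utf8 conversion, then reinterpret the raw bytes as utf8
SELECT CONVERT(CAST(CONVERT(message USING latin1) AS BINARY) USING utf8) AS recovered
FROM inbox
WHERE id = 42;
```

If the recovered column still shows question marks, the data was lost at write time.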

Populating joined tables

Posted: 02 Oct 2013 12:01 PM PDT

I have a student table with columns stu_id, name, date of birth, etc., and a marks table that has stu_id as a foreign key. I want to insert a stu_id, say x, into the student table and then have that same id be populated in the marks table automatically.

I have joined the tables using a RIGHT OUTER JOIN. I am new to MySQL and PHP. Please help.
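One way to populate marks automatically (a sketch; assumes the remaining marks columns are nullable or defaulted) is an AFTER INSERT trigger on student:

```sql
DELIMITER $$
CREATE TRIGGER student_after_insert
AFTER INSERT ON student
FOR EACH ROW
BEGIN
    -- copy the newly inserted student's id into marks
    INSERT INTO marks (stu_id) VALUES (NEW.stu_id);
END$$
DELIMITER ;
```

Every INSERT into student then creates a matching marks row without any extra PHP code.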

Transactional Replication: When to delete the initial snapshot

Posted: 02 Oct 2013 12:02 PM PDT

I would like to delete the initial snapshot after I set up transactional replication and everything is working fine.

When is it OK to delete the snapshot that initializes a subscriber?

Append columns with values from a has-many relationship

Posted: 02 Oct 2013 04:14 PM PDT

I have two tables orders and items. Every Order has many Items.

orders
- id
- created_at
- paid_at

items
- id
- title
- amount
- quantity
- order_id

Let's say there are three kinds of items: chair, table, and door.

I would like to create a query that will produce table with following columns:

- order_id
- created_at
- paid_at
- item_id        # for chair
- item_title
- item_quantity
- item_amount
- item_id        # for table
- item_title
- item_quantity
- item_amount
- item_id        # for door
- item_title
- item_quantity
- item_amount

If order has chair item, data in columns for chair will be displayed otherwise it will be empty.

Why am I doing this?

I would like to export this data to excel sheet.

There is a limited number of different items, so there is no problem with millions of columns.
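With a fixed, small set of item kinds, this is a classic conditional-aggregation pivot. A sketch (assuming items.title holds the kind and each order has at most one item of each kind):

```sql
SELECT o.id AS order_id,
       o.created_at,
       o.paid_at,
       MAX(CASE WHEN i.title = 'chair' THEN i.id       END) AS chair_id,
       MAX(CASE WHEN i.title = 'chair' THEN i.quantity END) AS chair_quantity,
       MAX(CASE WHEN i.title = 'chair' THEN i.amount   END) AS chair_amount,
       MAX(CASE WHEN i.title = 'table' THEN i.id       END) AS table_id,
       MAX(CASE WHEN i.title = 'table' THEN i.quantity END) AS table_quantity,
       MAX(CASE WHEN i.title = 'table' THEN i.amount   END) AS table_amount,
       MAX(CASE WHEN i.title = 'door'  THEN i.id       END) AS door_id,
       MAX(CASE WHEN i.title = 'door'  THEN i.quantity END) AS door_quantity,
       MAX(CASE WHEN i.title = 'door'  THEN i.amount   END) AS door_amount
FROM orders o
LEFT JOIN items i ON i.order_id = o.id
GROUP BY o.id, o.created_at, o.paid_at;
```

The LEFT JOIN keeps orders with no matching item, leaving those columns NULL (empty in the export).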

Geography union via cursor returning null

Posted: 02 Oct 2013 11:06 AM PDT

I need to combine multiple geography records into a single entity to store in a spatial data table, but I've learned that the only available function in SQL Server 2008 R2 that accomplishes this is STUnion, which only unions two geography instances at a time. I have upwards of 200 that I need to union, and I'm loath to hand-code that kind of query.

On that note, I wrote up a one-off cursor to iteratively union each geography object from my staging table. However, when executed, the cursor returns a null value without any errors or messages.

DECLARE
    @ShapeUnion GEOGRAPHY,
    @Shape GEOGRAPHY

DECLARE curSpatial CURSOR FOR
SELECT geom
FROM dbo.Boundaries_Staging

OPEN curSpatial

FETCH NEXT FROM curSpatial INTO @Shape

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @ShapeUnion = @ShapeUnion.STUnion(@Shape)
    FETCH NEXT FROM curSpatial INTO @Shape
END

CLOSE curSpatial
DEALLOCATE curSpatial

SELECT @ShapeUnion

Edit: Updated script for anyone curious about unioning multiple geography records. As stated in the comments, I initialized @ShapeUnion prior to the loop. Also, I added .STBuffer() to the field being unioned; this ensures that the minuscule spaces between the spatial data are completely filled.

DECLARE
    @ShapeUnion GEOGRAPHY,
    @Shape GEOGRAPHY

DECLARE curSpatial CURSOR FOR
SELECT geom
FROM dbo.Boundaries_Staging
WHERE geom IS NOT NULL

OPEN curSpatial

FETCH NEXT FROM curSpatial INTO @Shape

SET @ShapeUnion = @Shape

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @ShapeUnion = @ShapeUnion.STUnion(@Shape.STBuffer(1))
    FETCH NEXT FROM curSpatial INTO @Shape
END

CLOSE curSpatial
DEALLOCATE curSpatial

SELECT @ShapeUnion

How to create a stored procedure with several delete in(select) statements?

Posted: 02 Oct 2013 08:31 PM PDT

Our database needs to be periodically cleared of certain items that receive a specific flag (A4) on their id. Our application manages stores and all tables are myISAM. The items can be either in the bought, sold, exchanged, etc tables and they will also exist in the main table that contains all items (the only information the tables share is the id and I can't control that at the moment).

My initial idea was to create the individual delete statements and then place them together in a stored procedure. The stored procedure I created contains:

delimiter $$

create procedure clear_A4(in period char(10))
BEGIN
  delete from mov_pedido where pednot in
    (select distinct titnum from mov_titulo
     where titnum like 'A4%' and titemi <= '2012-01-01' and titvalpag > 0);

  delete from mov_orcame where orcnot in
    (select distinct titnum from mov_titulo
     where titnum like 'A4%' and titemi <= '2012-01-01' and titvalpag > 0);

  delete from mov_nfsven where nfsnum in
    (select distinct titnum from mov_titulo
     where titnum like 'A4%' and titemi <= '2012-01-01' and titvalpag > 0);

  delete from mov_movime where movdoc in
    (select distinct titnum from mov_titulo
     where titnum like 'A4%' and titemi <= '2012-01-01' and titvalpag > 0);

  delete from mov_titulo
  where titnum like 'A4%' and titemi <= '2011-12-01' and titvalpag > 0;
end $$

delimiter ;

These lines refer to items sold. A sale starts with an order, then a quotation, then the actual sale, and finally it gets thrown into the main record. The first table, mov_pedido, has the orders, mov_orcame has the quotations, mov_nfsven has the actual sales, and mov_movime records all the transactions. These tables can only be cleared of items that have been paid for by customers, but this information can only be found in mov_titulo, so I decided to use the select statements with the IN operator. The problem is that I also need to clear the mov_titulo table, and if I delete the items from mov_titulo first, the information used in the select statements is lost.

Reading up on MySQL and multithreaded databases, it seems that if the statements are inside the stored procedure together, they will be processed simultaneously, and because the tables are related that way, it will cause a problem.

My question is: how can I improve the logic of my delete statements to make them work together? Can I improve my stored procedure to deal with this?

Right now I am executing one statement at a time manually, and that works, but I can see problems down the line, such as entering the wrong date in one of the existing twenty-some delete statements, so I am looking for a better way to do this.

Thank you.
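One way to keep the statements consistent without repeating the date predicate (a sketch; assumes the session may create temporary tables) is to capture the ids once and reuse them for every delete, including the final one against mov_titulo:

```sql
CREATE TEMPORARY TABLE tmp_a4 AS
    SELECT DISTINCT titnum
    FROM mov_titulo
    WHERE titnum LIKE 'A4%' AND titemi <= '2012-01-01' AND titvalpag > 0;

DELETE FROM mov_pedido WHERE pednot IN (SELECT titnum FROM tmp_a4);
DELETE FROM mov_orcame WHERE orcnot IN (SELECT titnum FROM tmp_a4);
DELETE FROM mov_nfsven WHERE nfsnum IN (SELECT titnum FROM tmp_a4);
DELETE FROM mov_movime WHERE movdoc IN (SELECT titnum FROM tmp_a4);
DELETE FROM mov_titulo WHERE titnum IN (SELECT titnum FROM tmp_a4);

DROP TEMPORARY TABLE tmp_a4;
```

Statements in a stored procedure run sequentially, not simultaneously, so ordering the mov_titulo delete last is safe once the ids have been captured.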

What can a table constraint do that a column constraint can't?

Posted: 02 Oct 2013 10:16 AM PDT

Had an exam today. One question disturbed me:

  • Q: What can a table constraint do that a column constraint can't do?
  • A: My answer was that only a table constraint can declare a composite primary key.

I guess this is not the only difference. How to answer this question more precisely?
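A short illustration (table and column names are made up): table-level constraints can reference more than one column, which column-level constraints cannot:

```sql
CREATE TABLE enrollment (
    student_id INT,
    course_id  INT,
    start_dt   DATE,
    end_dt     DATE,
    PRIMARY KEY (student_id, course_id),          -- composite key: table level only
    CHECK (end_dt IS NULL OR end_dt >= start_dt)  -- cross-column check: table level only
);
```

Composite UNIQUE and composite FOREIGN KEY constraints fall into the same category.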

Storing statistic results

Posted: 02 Oct 2013 08:57 AM PDT

I'm building a small database for statistics. The statistic itself is really simple, like calculating the cost of an event and dividing it by the number of people who attended to get the cost per attendee, but the database can end up holding loads of events, and it will need to pull in a lot of results at once.

I know that with MySQL I can use the database to do such a simple calculation, so I could run the query every time it's needed, but that can end up being costly if there are a lot of events in, say, a year.

My question is: if I were to store the statistic results in a separate table, I see that as a bit of data duplication. Is that acceptable, or should I do it in a different manner?

Thanks
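For scale, the on-the-fly version is a single aggregate; a sketch with hypothetical events/attendees tables:

```sql
SELECT e.id,
       e.cost / COUNT(a.id) AS cost_per_attendee
FROM events e
JOIN attendees a ON a.event_id = e.id
GROUP BY e.id, e.cost;
```

Whether to materialize the result into a summary table is then a trade-off between the cost of running this aggregate and the bookkeeping needed to keep the duplicate up to date.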

Unknown column error with an index

Posted: 02 Oct 2013 08:55 AM PDT

I created a simple index associated with a timestamp column

CREATE TABLE `data` (
    `id` INT(11) UNSIGNED NOT NULL,
    `ticker` VARCHAR(16) NOT NULL COLLATE 'utf8_unicode_ci',
    `comment` TEXT NOT NULL COLLATE 'utf8_unicode_ci',
    `link` VARCHAR(256) NULL DEFAULT NULL COLLATE 'utf8_unicode_ci',
    `comment_hash` CHAR(32) NOT NULL COLLATE 'utf8_unicode_ci',
    `time` DATETIME NOT NULL,
    `feed` VARCHAR(16) NOT NULL COLLATE 'utf8_unicode_ci',
    `source` INT(11) NOT NULL,
    `active` TINYINT(1) NOT NULL DEFAULT '1',
    `score` VARCHAR(20) NULL DEFAULT NULL COLLATE 'utf8_unicode_ci',
    `dttm` DATETIME NULL DEFAULT NULL,
    PRIMARY KEY (`ticker`, `comment_hash`),
    INDEX `time_index1` (`time`)
)
COLLATE='utf8_unicode_ci'
ENGINE=InnoDB;

However, when I try to do anything with the index, it gives me SQL Error (1054): Unknown column 'time_index1' in 'where clause'.

Sample queries:

select *
from data
where time_index1 = '2013-10-01 17:18:06'

or

select *
from data
order by time_index1
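An index is not addressable in SQL; queries reference the indexed column, and the optimizer uses time_index1 automatically when it helps:

```sql
SELECT * FROM data WHERE `time` = '2013-10-01 17:18:06';

SELECT * FROM data ORDER BY `time`;
```

The backticks are needed here because `time` is also a MySQL function name.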

How to permanently enable the query cache in MySQL?

Posted: 02 Oct 2013 08:43 AM PDT

I want to know if there's a way to permanently set the size of my query cache and to make sure it's always enabled. Right now I'm just using the SET GLOBAL query_cache_size command to set the size, but when the database is restarted, it goes away.

Thanks.
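SET GLOBAL lasts only until restart; the persistent setting lives in the server configuration file. A sketch (assumes a MySQL 5.x my.cnf; 64M is an illustrative size):

```sql
-- In my.cnf under [mysqld]:
--   query_cache_type = 1
--   query_cache_size = 64M
-- After restarting mysqld, verify with:
SHOW VARIABLES LIKE 'query_cache%';
```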

How do you see which database on a server uses the most resources

Posted: 02 Oct 2013 03:44 PM PDT

I have a database server with a couple of databases on it. How can I see where any resource pressure may be coming from?

I would like to get a table of:

  • Database Name
  • Batch Requests per second
  • CPU Time
  • Logical Read
  • Logical Writes
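One approximate starting point (a sketch: it aggregates cached-plan statistics per database, so the numbers reset with the plan cache and some ad hoc plans map to a NULL database):

```sql
SELECT DB_NAME(t.dbid)              AS database_name,
       SUM(qs.execution_count)      AS executions,
       SUM(qs.total_worker_time)    AS cpu_time_us,
       SUM(qs.total_logical_reads)  AS logical_reads,
       SUM(qs.total_logical_writes) AS logical_writes
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS t
GROUP BY DB_NAME(t.dbid)
ORDER BY cpu_time_us DESC;
```

Batch requests per second is a server-level counter, so a per-database figure would need a trace or Extended Events session instead.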

How to access my password in SSIS 2005 via VB.NET

Posted: 02 Oct 2013 10:00 AM PDT

A VB.NET component delivered the following message:

Login failed for user MyDBUser

It is about an OleDbConnection.

By the way, the user works fine together with SQL Server Management Studio.

Retaining a password via the connection manager does not work, even with Save my password enabled. In my case it is not a security issue to retain the password, because the SSIS package will not be published for others. Hence, what's the fastest way, in terms of the VB.NET code, to get access to the (SQL Server) password?

EDIT: Workaround: an SSIS variable was attached to the scope of the required control flow, and Me.readVariable was applied to build the connection string.

Grouping / selecting unique values in a single column [duplicate]

Posted: 02 Oct 2013 09:00 AM PDT

This question already has an answer here:

Here is my table

ID      Field1  Field2      Field3        Field5  Field7
357     357     2013-03-07  08:02:02:275  t02     bBCD00103RG
365     365     2013-03-07  08:02:05:307  t02     bR U00103w
374     374     2013-03-07  08:02:08:322  t02     bR U00103w
474     474     2013-03-07  08:02:41:307  t02     bR U00103w
1378    1378    2013-03-07  02:25:45:447  t02     bR U00103w
1381    1381    2013-03-07  02:25:46:416  t02     bBFU3
1386    1386    2013-03-07  02:25:49:057  t02     bBFU02405LL
1394    1394    2013-03-07  02:25:52:260  t02     bBFU02405LL
1504    1504    2013-03-07  02:26:42:307  t02     bBFU02405LL
1510    1510    2013-03-07  02:26:45:275  t02     bBFU02405LL
1516    1516    2013-03-07  02:26:48:307  t02     bBFP02405LI
1523    1523    2013-03-07  02:26:52:088  t02     bBFP02405LI
1530    1530    2013-03-07  02:26:54:885  t02     bBFP02405LI
1556    1556    2013-03-07  02:27:06:307  t02     bBFP02405LI
1562    1562    2013-03-07  02:27:09:307  t02     bBFP02405LI
1568    1568    2013-03-07  02:27:12:307  t02     bR L02405o
1574    1574    2013-03-07  02:27:15:338  t02     bBCL/
1580    1580    2013-03-07  02:27:18:635  t02     bBCL00103RO
1587    1587    2013-03-07  02:27:21:307  t02     bBCL00103RO
1714    1714    2013-03-07  02:28:21:291  t02     bBCD00103RG
1721    1721    2013-03-07  02:28:24:291  t02     bBCD00103RG
1728    1728    2013-03-07  02:28:27:338  t02     bBCD00103RG
1734    1734    2013-03-07  02:28:30:291  t02     bBCD00103RG
1740    1740    2013-03-07  02:28:33:447  t02     bR U00103w
1996    1996    2013-03-07  02:30:33:291  t02     bR U00103w

The end result must look like this:

ID      Field1  Field2      Field3        Field5  Field7
357     357     2013-03-07  08:02:02:275  t02     bBCD00103RG
365     365     2013-03-07  08:02:05:307  t02     bR U00103w
1381    1381    2013-03-07  02:25:46:416  t02     bBFU3
1386    1386    2013-03-07  02:25:49:057  t02     bBFU02405LL
1516    1516    2013-03-07  02:26:48:307  t02     bBFP02405LI
1568    1568    2013-03-07  02:27:12:307  t02     bR L02405o
1574    1574    2013-03-07  02:27:15:338  t02     bBCL/
1580    1580    2013-03-07  02:27:18:635  t02     bBCL00103RO
1714    1714    2013-03-07  02:28:21:291  t02     bBCD00103RG
1740    1740    2013-03-07  02:28:33:447  t02     bR U00103w

I want the first occurrence of each value of Field7: compare the value of Field7 with the previous row's value, and output the row only if it is different.
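On a dialect with window functions this is a LAG comparison (a sketch; mytable is a placeholder name, and ID is assumed to define the row order):

```sql
SELECT ID, Field1, Field2, Field3, Field5, Field7
FROM (
    SELECT t.*,
           LAG(Field7) OVER (ORDER BY ID) AS prev_field7
    FROM mytable t
) AS x
WHERE prev_field7 IS NULL OR prev_field7 <> Field7;
```

The outer WHERE keeps the first row (prev is NULL) and every row whose Field7 differs from the previous one.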

Single disk for Data, System, User, Temp and Backup when installing Fail Over Cluster?

Posted: 02 Oct 2013 09:45 AM PDT

I have a Windows cluster with two nodes. I am trying to set up a failover cluster using SQL Server 2012.

On both nodes there is 2 TB storage available which I can access as:

C:\Storage_For_Cluster\Volume1

So basically a 2 TB SAN is mapped on both nodes as above.

My question is: can I build the failover cluster with only one disk, with all data saved on it? Or is it better to use a separate drive for each? If you take a look at the screenshot below, you will get an idea of what I am talking about. As you can see, we are using two drives, Z and X, for different options.

Since in this case I only have one drive, can I use it for all these options?

Secondly, MSDTC is not installed yet (which I think is also required for the failover cluster), so I wanted to know: should I install MSDTC on the same drive as mentioned above?

[screenshot: cluster setup dialog showing drives Z and X assigned to different options]

Issues installing MySQL server on Ubuntu 13.04

Posted: 02 Oct 2013 03:29 PM PDT

I'm currently trying to install a MySQL server on my Ubuntu 13.04 machine. The problem is, when I try to install it, I get error messages indicating that not all packages could be downloaded. When I run sudo apt-get install mysql-server, after hanging for a while at reading headers, the console reads as follows:

Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  libaio1 libdbd-mysql-perl libdbi-perl libhtml-template-perl libmysqlclient18
  libnet-daemon-perl libplrpc-perl mysql-client-5.5 mysql-common mysql-server-5.5
  mysql-server-core-5.5
Suggested packages:
  libipc-sharedcache-perl tinyca mailx
The following NEW packages will be installed:
  libaio1 libdbd-mysql-perl libdbi-perl libhtml-template-perl libmysqlclient18
  libnet-daemon-perl libplrpc-perl mysql-client-5.5 mysql-common mysql-server
  mysql-server-5.5 mysql-server-core-5.5
0 upgraded, 12 newly installed, 0 to remove and 3 not upgraded.
Need to get 8,077 kB/24.5 MB of archives.
After this operation, 84.9 MB of additional disk space will be used.
Do you want to continue [Y/n]? y
Err http://us.archive.ubuntu.com/ubuntu/ raring-updates/main mysql-client-5.5 i386 5.5.32-0ubuntu0.13.04.1
  Connection failed [IP: 91.189.91.13 80]
Err http://security.ubuntu.com/ubuntu/ raring-security/main mysql-client-5.5 i386 5.5.32-0ubuntu0.13.04.1
  Connection failed [IP: 91.189.92.190 80]
Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/m/mysql-5.5/mysql-client-5.5_5.5.32-0ubuntu0.13.04.1_i386.deb  Connection failed [IP: 91.189.92.190 80]
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

I've tried the suggested fixes of running apt-get update and such, I've made sure I have no connections blocked, and I've restarted, uninstalled, reinstalled, etc., all to no avail. I've been searching the web all day for solutions I haven't yet tried, but most people with this issue are running an older version of Ubuntu. Suggestions?

MySql one time event never runs?

Posted: 02 Oct 2013 01:25 PM PDT

Please have a look at below events

1) create EVENT Test1
   ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL 20 SECOND
   ON COMPLETION PRESERVE ENABLE
   DO ...

2) create EVENT Test2
   ON SCHEDULE EVERY 20 SECOND STARTS CURRENT_TIMESTAMP
   ON COMPLETION PRESERVE ENABLE
   DO ...

I expect event Test1 to run once after 20 seconds, but it never runs. Event Test2 is working fine.

Any idea? Thanks.

OK, sorry: it is the ALTER that is not working.

At first I ran create EVENT Test1 ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL 20 second ON COMPLETION PRESERVE ENABLE DO ...

Then, shortly after, I ran alter EVENT Test1 ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL 20 second ON COMPLETION PRESERVE ENABLE DO ...

I expected event Test1 to run again in another 20 seconds, but it didn't.

Slow SSRS Report in production

Posted: 02 Oct 2013 02:25 PM PDT

I have an SSRS report which gets its data by firing a series of stored procedures.

Now the report is timing out badly when run in production, yet when I pull down the production database and restore it to development, the report runs fine.

I was thinking of setting up a SQL Server Profiler trace in production; hopefully that will tell me something, e.g. high disk I/O at the time the report is run.

What else should I be doing? Something with perfmon?
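Given that the same database runs fine after a restore, parameter sniffing in the stored procedures is a common suspect (an assumption, not a diagnosis; restoring rebuilds statistics and empties the plan cache, which masks it). A quick way to test:

```sql
-- Force a fresh plan for one of the report's procedures
-- ('dbo.ReportProc' is a placeholder name)
EXEC sp_recompile N'dbo.ReportProc';
-- or add OPTION (RECOMPILE) to the suspect statement inside the procedure
```

If the report speeds up after the recompile, the cached plan, not the hardware, was the problem.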

Cannot install DB2 Express C 10.1 on Mac OS X 10.7.5 [duplicate]

Posted: 02 Oct 2013 11:37 AM PDT

This question already has an answer here:

When I invoke ./db2setup, it cannot find the JRE where it expects it. The Java binary is not found, as follows:

/private/tmp/db2/expc/db2/macos/install/db2setup: line 606: /tmp/db2.tmp.740/db2/macos/install/../java/jre/bin/java: No such file or directory

I successfully installed DB2 Express-C 9.7 last summer, back when I was still using the Apple JDK. I am now getting the JDK from Oracle, as it is the only source I know of.

How do I tell the DB2 install where to find Java?

How to run a cold backup with Linux/tar without shutting down MySQL slave?

Posted: 02 Oct 2013 04:25 PM PDT

I run the following before tar-ing up the data directory:

STOP SLAVE;
FLUSH TABLES WITH READ LOCK;
FLUSH LOGS;

However, tar will sometimes complain that the ibdata* and ib_logfiles* files are updated during the process. What am I missing?

The slave is a cold standby machine, so there are no client processes running while tar is running.

CentOS release 5.6 64bits, MySQL 5.1.49-log source distribution.
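Worth noting (an observation about the setup, not a diagnosis): FLUSH TABLES WITH READ LOCK is released as soon as the session that issued it disconnects, and InnoDB's background threads keep flushing ibdata/ib_logfile regardless of the lock. For a genuinely cold copy of an InnoDB slave, the usual route is a clean shutdown before tar:

```sql
-- Run in a client session before stopping the server:
STOP SLAVE;
FLUSH LOGS;
-- then, outside SQL: mysqladmin shutdown; tar the datadir; restart mysqld; START SLAVE;
```

Alternatively, holding FLUSH TABLES WITH READ LOCK in an open session for the whole tar run only protects MyISAM-style files, not the InnoDB background writes.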

sp_send_dbmail with attachment

Posted: 02 Oct 2013 03:53 PM PDT

SQL Server 2008, connecting via SQL Server Authentication.

I have a sproc in DatabaseA which calls sp_send_dbmail in msdb to send an email with a file attachment. The file is on the db server, not on a remote fileshare.

The SQL account being used is not sysadmin, but does belong to the DatabaseMailUserRole in msdb.

Sending an email without an attachment is fine, but when an attachment is present it gives the error:

The client connection security context could not be impersonated.
Attaching files require an integrated client login

There are a few articles/posts about this out there, but some seem to say conflicting things. I've been looking into impersonation, and one thing that does work in the sproc in DatabaseA is the following:

EXECUTE AS LOGIN = 'sa' -- or any account with sysadmin privileges
EXECUTE msdb..sp_send_dbmail ....
REVERT

I wasn't expecting this to work as I thought to send attachments, you needed to use Windows Authentication. However it does work, but it means the lower privileged SQL account needs to be granted permission to IMPERSONATE the sa (or other sysadmin account).

Doing my due diligence as a dev before unleashing a DBA's nightmare into the wild...

My question is: What is a good/safe way of allowing a user authenticated via SQL Server (non sysadmin) to send email attachments from the local db server disk without opening up a security hole?

Update re: credentials: I've created a new Windows login, created credentials for that account via SSMS, and mapped those credentials to my limited-privileges SQL account. I get the error:

Msg 22051, Level 16, State 1, Line 0
The client connection security context could not be impersonated.
Attaching files require an integrated client login

I must be missing something!

What is the best way to get a random ordering?

Posted: 02 Oct 2013 09:49 AM PDT

I have a query where I want the resulting records to be ordered randomly. It uses a clustered index, so if I do not include an ORDER BY, it will likely return records in the order of that index. How can I ensure a random row order?

I understand that it will likely not be "truly" random, pseudo-random is good enough for my needs.
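The usual idiom ("clustered index" suggests SQL Server; dbo.MyTable is a placeholder name):

```sql
SELECT *
FROM dbo.MyTable
ORDER BY NEWID();   -- a fresh GUID per row yields a pseudo-random order
-- MySQL equivalent: ORDER BY RAND();  PostgreSQL: ORDER BY random();
```

This sorts the whole result set, so on large tables it costs a full sort; for sampling, TABLESAMPLE is the cheaper alternative.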

[MS SQL Server] Create Full Backup with Multiple FileGroups


Create Full Backup with Multiple FileGroups

Posted: 01 Oct 2013 09:28 AM PDT

Hello Everyone

I am in the process of writing a Full Backup Database script, but this database has multiple filegroups. The database is set to Simple Recovery Mode. It has been ages since I have backed up a multi-filegroup database. No one installed the SQL Books Online on this server, and I am going to hang someone for that. Can someone give me the code to create a Full Backup with Multiple FileGroups, that will back up each FileGroup?

This is what I have:

[code="sql"]
BACKUP DATABASE ProdDB
TO DISK = N'F:\Database Backups\ProdDB\ProdDB_Full.bak'
WITH COPY_ONLY, NOFORMAT, INIT,
     NAME = N'ProdDB-Full Database Backup',
     SKIP, NOREWIND, NOUNLOAD, STATS = 10, CHECKSUM;
[/code]

Yes, I know. Great naming convention at this place. I knew that I should not have taken this contract. :)

Thank you in advance for all your assistance, suggestions and comments.

Andrew SQLDBA
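For reference, a full database backup already contains every filegroup, so the script above covers them all. Backing up filegroups individually uses the FILEGROUP clause instead (a sketch; FG_Data is a hypothetical filegroup name, and under the simple recovery model file/filegroup backups are only supported for read-only filegroups):

```sql
BACKUP DATABASE ProdDB
    FILEGROUP = N'PRIMARY',
    FILEGROUP = N'FG_Data'
TO DISK = N'F:\Database Backups\ProdDB\ProdDB_FG.bak'
WITH INIT, STATS = 10, CHECKSUM;
```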

SQL Server 2008 R2 Cluster in Windows Server 2008 R2 - Maintenance Activity - Start/Stop Services

Posted: 01 Oct 2013 10:45 PM PDT

How do I start/stop SQL Server services in a two-node Active/Passive cluster environment?

Shrinking Temp DB

Posted: 02 Oct 2013 01:16 AM PDT

I am trying to shrink the first tempdb mdf, but it is not shrinking... 95% free... just added some new mdf's.

Using this command:

USE [tempdb]
GO
DBCC SHRINKFILE (N'tempdev', 17724)

I get the following:

DbId  FileId  CurrentSize  MinimumSize  UsedPages  EstimatedPages
2     1       4029552      1024         10160      10160

SSAS Change Management

Posted: 01 Oct 2013 01:32 PM PDT

Not sure if this is the correct place to post this, but I am looking to see if there is anywhere I can get a listing of the objects in an SSAS database, with at least a last-changed date and/or a created date. I know that in the database engine, the sysobjects table in the master database has this information for each database on the server. Does anyone know where something like this lives in an SSAS database? Any help is greatly appreciated.

[Articles] Secure Programming


Secure Programming

Posted: 01 Oct 2013 11:00 PM PDT

Writing secure programs is hard. Steve Jones has a few comments on what some of the issues are with training developers.

[SQL 2012] SQL Wait type xe_live_target_tvf


SQL Wait type xe_live_target_tvf

Posted: 02 Oct 2013 02:58 AM PDT

Hi all,

We're having some performance problems with our server, and so checked the wait stats. When I run the following script, taken from Glenn Berry's SQL Server 2012 Diagnostic Information Queries, the highest wait stat, by an order of magnitude, is XE_LIVE_TARGET_TVF. We're running SQL 2012 x64 Standard on a Win2k8 machine. I can't find any information about this wait, and it's not documented on the MSDN dm_os_wait_stats page. Can anyone tell me if it is relevant, or just another wait that should be excluded?

[code="sql"]
WITH Waits
AS (SELECT wait_type,
           CAST(wait_time_ms / 1000. AS DECIMAL(12, 2)) AS [wait_time_s],
           CAST(100. * wait_time_ms / SUM(wait_time_ms) OVER () AS decimal(12,2)) AS [pct],
           ROW_NUMBER() OVER (ORDER BY wait_time_ms DESC) AS rn
    FROM sys.dm_os_wait_stats WITH (NOLOCK)
    WHERE wait_type NOT IN (N'CLR_SEMAPHORE', N'LAZYWRITER_SLEEP', N'RESOURCE_QUEUE',
        N'SLEEP_TASK', N'SLEEP_SYSTEMTASK', N'SQLTRACE_BUFFER_FLUSH', N'WAITFOR',
        N'LOGMGR_QUEUE', N'CHECKPOINT_QUEUE', N'REQUEST_FOR_DEADLOCK_SEARCH',
        N'XE_TIMER_EVENT', N'BROKER_TO_FLUSH', N'BROKER_TASK_STOP', N'CLR_MANUAL_EVENT',
        N'CLR_AUTO_EVENT', N'DISPATCHER_QUEUE_SEMAPHORE', N'FT_IFTS_SCHEDULER_IDLE_WAIT',
        N'XE_DISPATCHER_WAIT', N'XE_DISPATCHER_JOIN', N'SQLTRACE_INCREMENTAL_FLUSH_SLEEP',
        N'ONDEMAND_TASK_QUEUE', N'BROKER_EVENTHANDLER', N'SLEEP_BPOOL_FLUSH',
        N'SLEEP_DBSTARTUP', N'DIRTY_PAGE_POLL', N'HADR_FILESTREAM_IOMGR_IOCOMPLETION',
        N'SP_SERVER_DIAGNOSTICS_SLEEP')),
Running_Waits
AS (SELECT W1.wait_type, wait_time_s, pct,
           SUM(pct) OVER(ORDER BY pct DESC ROWS UNBOUNDED PRECEDING) AS [running_pct]
    FROM Waits AS W1)
SELECT wait_type, wait_time_s, pct, running_pct
FROM Running_Waits
WHERE running_pct - pct <= 99
ORDER BY running_pct
OPTION (RECOMPILE);
[/code]

FULLTEXT search

Posted: 02 Oct 2013 02:57 AM PDT

Hi,

I came across a FULLTEXT catalog that documents tables in a database. The catalog has 5 description tables (tables, columns, triggers, foreign keys and check constraints). If I want to search for "sales" and "products", the SalesDetails table doesn't show in the results, because the query is being built like this (alias corrected from tc to ct):

[code="sql"]
SELECT t0.Id, ct.Rank
FROM CONTAINSTABLE(DescriptionsTables, *, '"*sales*" AND "*products*"') ct
INNER JOIN DescriptionsTables t0 ON ct.[Key] = t0.Id
UNION ALL
SELECT t0.TableId, ct.Rank
FROM CONTAINSTABLE(DescriptionsTablesColumns, *, '"*sales*" AND "*products*"') ct
INNER JOIN DescriptionsTablesColumns t0 ON ct.[Key] = t0.Id
.....
[/code]

Since the text "sales" is only in the table descriptions and "products" only in the table-column descriptions, never in both at the same time, the table doesn't show in the results...

There is a way, that I know of, to make this work: create a table whenever DescriptionsTables is changed (trigger), populate it with the several tables' data, and use it for the full-text search instead of the other 5... But is there an easier way to do this? Merge all the catalogs into just one for the search?! (Probably a stupid question, since the reference for the KEY column is different in all the catalog tables...)

Thanks,
Pedro

ADODB Recordset Updating

Posted: 02 Oct 2013 02:48 AM PDT

The following MS Access VBA code worked well with SQL Server 2005, but produces an error with SQL Server 2012. The error is #3251, "Current recordset does not support updating".

[code="vb"]
Set gCnn = New ADODB.Connection
gCnn.ConnectionString = "Driver={SQL Server Native Client 11.0};Server=COMPUTER-A\NORMARMSSERVER;DSN=ArmsdataDSN;Database=ArmsData;UID=armsdb;PWD=EQ23234AZ;Trusted_Connection=yes;"
gCnn.Open

Dim rst As ADODB.Recordset
Dim sql As String
Set rst = New ADODB.Recordset
rst.CursorLocation = adUseClient
sql = "SELECT * FROM tblQuoteLob WHERE (QuoteID = " & Me!QuoteID & ")"
rst.Open sql, gCnn, adOpenKeyset, adLockOptimistic

If rst.EOF And rst.BOF Then
    'add a new record
    rst.AddNew
    rst!QuoteID = Me!QuoteID
    rst.Update
End If

rst.Close
Set rst = Nothing
[/code]

Any ideas?

The ORDER BY clause is invalid in views, inline functions, derived tables, subqueries, and common table expressions, unless TOP, OFFSET or FOR XML is also specified.

Posted: 02 Oct 2013 12:45 AM PDT

Where can I place my ORDER BY statement to sort ALL the data from the table, not just the 'from' and 'to' records? Everywhere I try to move it, I get the above error. Thanks for any help!

[code="sql"]
USE [JobPortal9_10_13]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-- =============================================
-- Description: <Description,[Get All Jobs Searched Structured SQL],>
-- =============================================
alter Procedure [dbo].[zianGetAllJobsSearched10_1_13FAST]
    -- Add the parameters for the stored procedure here
    @Title varchar(250),
    @Industry int,
    @Industry2 int,
    @Industry3 int,
    @Date int,
    @JobTitle int,
    @JobType int,
    @Experience int,
    @Education int,
    @State int,
    @City int,
    @Salary int,
    @MaxSalary int,
    @fromRec int,
    @toRec int,
    @OrderBy VARCHAR(50),
    @OrderByDirection VARCHAR(10)
WITH RECOMPILE
AS
DECLARE @ActualDate DateTime = cast(((select dateadd(d,@Date,GETDATE()))) as varchar(20));
IF (@Title = '') SET @Title = NULL;

WITH EMPLOYMENT_OPPORTUNITIES_CTE(id, title, contactperson, lastmodified, description,
    workexperiencetypeid, workexperience, jobtypeid, AcademicExperienceTypeId,
    industryid, industryid2, industryid3, salaryminid, salarymaxid, --jobTitle,
    city, state, PostalCode, positions, lastmodified2) --,deadline)
AS
(
    SELECT e.id, title, contactperson, lastmodified, e.description, workexperiencetypeid,
        isnull((select we.[Name] from workexperiencetypes we where we.id=e.workexperiencetypeid),'') as workexperience,
        isnull((select jot.[Name] from jobtypes jot where jot.id=e.jobtypeid),'') as jobtype,
        isnull((select edu.[Name] from Degree edu where edu.Id=e.AcademicExperienceTypeId),'') as education,
        isnull((select ind.[Name] from industries ind where ind.id=e.industryid),'') as industryname,
        isnull((select ind.[Name] from industries ind where ind.id=e.industryid2),'') as industryname2,
        isnull((select ind.[Name] from industries ind where ind.id=e.industryid3),'') as industryname3,
        salaryminid, salarymaxid,
        --isnull((select jt.[Name] from jobTitles jt where jt.id=e.jobtypeid),'') as jobTitle,
        isnull((select ci.[Name] from cities ci where ci.id=c.cityid),'') as city,
        isnull((select r.[AbbreviatedName] from regions r where r.id=c.regionid),'') as state,
        isnull((select comp.[PostalCode] from Companys comp where comp.Id=c.id),'') as PostalCode,
        positions,
        substring(cast(e.LastModified as varchar(20)),1,12) as lastmodified2
        --ROW_NUMBER() OVER (ORDER BY (SELECT 1)) RowNumber
    from EmploymentOpportunities e With (nolock)
    --left outer join companys c on e.officeid=c.id
    --inner join companys c on e.officeid=c.id
    inner join companys c on e.companyid=c.id
    where (@Title IS NULL or title = @Title)
      and (@Industry = 0 OR e.industryid = @Industry)
      and (@Industry2 = 0 OR Industryid2 = @Industry2)
      and (@Industry3 = 0 OR Industryid3 = @Industry3)
      and (@Date = 0 OR lastmodified >= @Date)
      and lastmodified is not null
      and lastmodified > @ActualDate
      --and e.LastModified >''' + cast(((select dateadd(d,@Date,GETDATE()))) as varchar(20)) + ''''
      --and (@JobTitle = 0 OR title = @JobTitle)
      and (@JobType = 0 OR jobtypeid = @JobType)
      and (@Experience = 0 OR workexperiencetypeid = @Experience)
      and (@Education = 0 OR academicexperiencetypeid = @Education)
      and (@State = 0 OR c.RegionId = @State)
      and (@City = 0 OR c.CityId = @City)
      and (@Salary = 0 OR SalaryMinID >= @Salary)
      and (@MaxSalary = 0 OR SalaryMaxID <= @MaxSalary)
)
SELECT id, title as Title, contactperson as ContactPerson, lastmodified,
    description as Description, workexperience, jobtypeid as jobtype,
    academicexperiencetypeid as education, industryid as industryname,
    industryid2 as industryname2, industryid3 as industryname3,
    salaryminid as salary, salarymaxid as maxsalary,
    --jobtitle as jobTitle,
    city + ', ' + state + ', ' + PostalCode as Location,
    positions as Positions,
    --deadline,
    rownumber as RowNumber,
    (select COUNT(*) from EMPLOYMENT_OPPORTUNITIES_CTE) as TotalCount
FROM (
    SELECT id, title, contactperson, lastmodified, description, workexperience,
        jobtypeid, academicexperiencetypeid, industryid, industryid2, industryid3,
        salaryminid, salarymaxid, --jobtitle,
        city, state, PostalCode, --Location,
        positions, --deadline,
        ROW_NUMBER() OVER (ORDER BY LastModified DESC) RowNumber
    FROM EMPLOYMENT_OPPORTUNITIES_CTE
) p1
WHERE RowNumber BETWEEN @fromRec AND @toRec
ORDER BY
    CASE WHEN @OrderBy = 'LastModified' AND @OrderByDirection = 'D' THEN LastModified END DESC,
    CASE WHEN @OrderBy = 'LastModified' AND @OrderByDirection != 'D' THEN LastModified END,
    CASE WHEN @OrderBy = 'City' AND @OrderByDirection = 'D' THEN City END DESC,
    CASE WHEN @OrderBy = 'City' AND @OrderByDirection != 'D' THEN City END,
    CASE WHEN @OrderBy = 'State' AND @OrderByDirection = 'D' THEN State END DESC,
    CASE WHEN @OrderBy = 'State' AND @OrderByDirection != 'D' THEN State END,
    CASE WHEN @OrderBy = 'Title' AND @OrderByDirection = 'D' THEN Title END DESC,
    CASE WHEN @OrderBy = 'Title' AND @OrderByDirection != 'D' THEN Title END
OPTION (Maxdop 8)
[/code]
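One common fix for this pattern (a sketch, not the poster's final code): an ORDER BY on the outer query can only sort the page already selected by the RowNumber filter; to sort ALL the data by the requested column before paging, the CASE expressions have to move into the ROW_NUMBER() OVER (ORDER BY ...) clause so ranking and sorting agree. Column and parameter names below follow the post:

```sql
SELECT *
FROM (
    SELECT *,
           ROW_NUMBER() OVER (ORDER BY
               CASE WHEN @OrderBy = 'Title' AND @OrderByDirection  = 'D' THEN Title END DESC,
               CASE WHEN @OrderBy = 'Title' AND @OrderByDirection != 'D' THEN Title END,
               CASE WHEN @OrderBy = 'City'  AND @OrderByDirection  = 'D' THEN City  END DESC,
               CASE WHEN @OrderBy = 'City'  AND @OrderByDirection != 'D' THEN City  END,
               LastModified DESC) AS RowNumber   -- default/tie-break sort
    FROM EMPLOYMENT_OPPORTUNITIES_CTE
) p1
WHERE RowNumber BETWEEN @fromRec AND @toRec
ORDER BY RowNumber;   -- page rows come back in ranked order
```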

"could not open error log file" on incorrect SQL cluster node

Posted: 01 Oct 2013 11:52 PM PDT

Hi. We have a 2-node SQL 2012 cluster running as VMs in Hyper-V. The cluster hosts 4 SQL instances, running in active-active mode (2 instances per node). For reference, let's call the instances Instance1-4 and the nodes Node1-2.

The problem: Instance3 is currently running on Node2 and there are no errors related to this instance in the event log on Node2. However, on Node1 the application event log is repeatedly spammed by events with ID 17058 with the MSSQL$Instance3 name as the source. The error is "initerrlog: Could not open error log file 'K:\MSSQL11.Instance3\MSSQL\Log\ERRORLOG'. Operating system error = 3 (The system cannot find the path specified.)."

That error makes sense, since the instance is running on Node2 and therefore the disk K: is currently accessible/mapped to Node2 and not Node1. The question is: why does this error appear on Node1, and where is it coming from?

Thanks in advance

Sybase ASE 12 to SQL Server 2012 via SSMA

Posted: 01 Oct 2013 08:14 PM PDT

We're using Sybase ASE 12.5 32-bit to SQL Server 2012 64-bit via SSMA. I'm going to transfer only the data, since I'm done converting the schemas and recreating the tables. The problem is, when I click the Migrate button I get this error: a really generic error, and a really frustrating one, since the logs show no signs of errors understandable by a human. Is anyone else having problems with this one? I've been stuck here for weeks.

[code="plain"]
[Gui: Error] [16260/16] [2013-10-02 04:00:17]: Exception: No such interface supported (Exception from HRESULT: 0x80004002 (E_NOINTERFACE))
[Datamigrator: Error] [16260/15] [2013-10-02 04:23:13]: Exception: No such interface supported (Exception from HRESULT: 0x80004002 (E_NOINTERFACE))
 site: IErrorInfo GetErrorInfo(Int32, Int32)
 source: System.Data
   at System.Data.Common.UnsafeNativeMethods.IErrorRecords.GetErrorInfo(Int32 ulRecordNum, Int32 lcid)
   at System.Data.OleDb.OleDbError..ctor(IErrorRecords errorRecords, Int32 index)
   at System.Data.OleDb.OleDbErrorCollection..ctor(IErrorInfo errorInfo)
   at System.Data.OleDb.OleDbException.CreateException(IErrorInfo errorInfo, OleDbHResult errorCode, Exception inner)
   at System.Data.OleDb.OleDbConnection.ProcessResults(OleDbHResult hresult, OleDbConnection connection, Object src)
   at System.Data.OleDb.OleDbCommand.ExecuteCommandTextErrorHandling(OleDbHResult hr)
   at System.Data.OleDb.OleDbCommand.ExecuteCommandTextForMultpleResults(tagDBPARAMS dbParams, Object& executeResult)
   at System.Data.OleDb.OleDbCommand.ExecuteCommandText(Object& executeResult)
   at System.Data.OleDb.OleDbCommand.ExecuteCommand(CommandBehavior behavior, Object& executeResult)
   at System.Data.OleDb.OleDbCommand.ExecuteReaderInternal(CommandBehavior behavior, String method)
   at System.Data.OleDb.OleDbCommand.ExecuteScalar()
   at Microsoft.SSMA.Framework.Generic.Utilities.DbUtilities.DbCommandWrapper.ExecuteScalar()
   at Microsoft.SSMA.Framework.Generic.DbUtilities.SsmaDbCommand.Execute[T](ExecutionDelegate`1 executionMethod)
   at Microsoft.SSMA.Framework.Generic.DbUtilities.SsmaDbCommand.ExecuteScalar()
   at Microsoft.SSMA.Framework.Sybase.SqlServer.DataMigration.TableDataMigrator.GetTableRowCount(String tableName, IDbConnection connection)
   at Microsoft.SSMA.Framework.Sybase.SqlServer.DataMigration.TableDataMigrator.GetSourceTableRowCount(String tableName, IDataMigrationContext context)
   at Microsoft.SSMA.Framework.Sybase.SqlServer.DataMigration.TableDataMigrator.GetSourceTableRowCountSafe(String tableName, IDataMigrationContext context)
[/code]

importing a populating csv file

Posted: 01 Oct 2013 06:54 PM PDT

I have a CSV file which gets new information about every 30 seconds from a third-party program, and I want that data to be imported into my SQL DB. I tried using the Import/Export Wizard, but it didn't like it because the file was being used by the third-party program. Is anyone able to provide any insights? Apologies for my noobyness, I am brand new to SQL!
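One workable pattern (a sketch; paths and the table name are hypothetical): copy the file out from under the writer first, then import the copy, so the import never touches the file the third-party program has open. Scheduled as a SQL Agent job, this can repeat on whatever interval suits:

```sql
-- 1) Copy the live file so we never read the one being written to.
--    (Requires xp_cmdshell to be enabled; paths are placeholders.)
EXEC master..xp_cmdshell 'copy /Y "C:\feeds\live.csv" "C:\feeds\import.csv"';

-- 2) Import the stable copy into a staging table.
BULK INSERT dbo.FeedData          -- hypothetical target table
FROM 'C:\feeds\import.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);
```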

Need A Tool to Reverse Engineer an ERD

Posted: 01 Oct 2013 08:09 AM PDT

Our friends have done us a solid with Visio. Since SQL 2005 and Visio 2007 I have been reverse engineering ERDs from existing databases. We recently upgraded our SQL Server from 2008 R2 to 2012. We were successfully reverse engineering the 2008 R2 databases (after installing Visio 2010 SP1). Now we cannot reverse engineer the ERD. Thinking, as with previous versions of Visio, that it was because the database engine was built after the Visio release, I upgraded to Visio 2013. Silly me.

[size="2"][b]Database Reverse Engineering has been deprecated in Visio 2013, Standard and Professional.[/b][/size]

What tools are you in the community using to reverse engineer your SQL 2012 databases? What is your opinion of those tools? I will not use the diagram tool in SSMS, for the simple reason that it is live in the database: a change in the diagram becomes a change in the database.

(Yes, I know the model should be built before the DB, etc. But the devs rarely do that. And because I am the DBA it falls to my team to build ERDs. Otherwise we fail our annual IT audits. Don't you love the life of a DBA? :hehe:)

[T-SQL] wish to add rows having NULL values as placeholders for "missing" dates


wish to add rows having NULL values as placeholders for "missing" dates

Posted: 02 Oct 2013 12:06 AM PDT

I have a table that contains stuff happening. There's a datetime stamp column to show on what date the stuff happened. There are days where nothing happens. I want to select from this table and add rows to the result set on the fly for those days on which nothing happened. Reason: I've got to pull it into Excel and need a placeholder for those days.

For the purpose of illustration, I've created a table containing the start datetimes of all backups that have been kicked off for a server called Peaches. This table also contains the name of the server, the size of the backup, and the associated jobid. On some days there are no backups, e.g. 8/05/2013 through 8/13/2013, BUT I still want to generate a row for those days as a placeholder. Can you provide guidance on how to do this?

Here's the DDL.

[code="sql"]
CREATE TABLE [dbo].[CannedBackupJobs](
    [jobid] [int] NULL,
    [SizeTB] [float] NULL,
    [StartTime] [datetime] NULL,
    [ServerName] [varchar](20) NULL)

insert into [dbo].[CannedBackupJobs]
values
(83, 365.226943141887, '2013-08-04 03:20:30.777', 'Peaches'),
(83, 408.830221699759, '2013-08-14 18:26:53.220', 'Peaches'),
(83, 391.654500133873, '2013-08-15 15:44:34.977', 'Peaches'),
(83, 397.063717616127, '2013-08-20 02:10:57.747', 'Peaches'),
(83, 353.803773579467, '2013-08-24 05:56:26.090', 'Peaches');
[/code]
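The usual pattern for this (a sketch against the DDL above, with the date range hard-coded for illustration): generate one row per calendar day with a recursive CTE, then LEFT JOIN the data onto it, so days with no backups come back as NULL placeholders:

```sql
-- Build one row per calendar day in the range, then outer-join the jobs.
;WITH Days AS (
    SELECT CAST('2013-08-01' AS date) AS d
    UNION ALL
    SELECT DATEADD(DAY, 1, d) FROM Days WHERE d < '2013-08-31'
)
SELECT Days.d,
       cbj.ServerName,   -- NULL on days with no backup: the Excel placeholder
       cbj.SizeTB
FROM Days
LEFT JOIN dbo.CannedBackupJobs cbj
       ON CAST(cbj.StartTime AS date) = Days.d
ORDER BY Days.d
OPTION (MAXRECURSION 366);   -- allow up to a year of days
```

A permanent calendar (or tally) table does the same job without the recursion and scales better for long ranges.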

Linking a SQL Server 2005 (Windows Authentication) to a SQL Server 2008 (Mixed Mode)

Posted: 01 Oct 2013 08:20 PM PDT

I am trying to link a production database (SQL Server 2000) that is on Windows authentication to a test database (SQL Server 2008) that is on mixed mode authentication. I have already tried the code below from [url=http://www.sqlservercentral.com/Forums/Topic699311-149-1.aspx#bm699348]this link[/url]:

[code="sql"]
DECLARE @LinkName SYSNAME SET @LinkName = 'PRODUCTIONSERVER'
DECLARE @SrvName SYSNAME SET @SrvName = 'PRODUCTIONSERVER'
DECLARE @LocalLogin SYSNAME SET @LocalLogin = 'sa' --login on test db
DECLARE @RmtLogin SYSNAME SET @RmtLogin = 'DOMAIN\UserName' --win auth login on prod db
DECLARE @RmtPwd SYSNAME SET @RmtPwd = 'password' --win auth password for the login on prod db

IF NOT EXISTS (SELECT * FROM Master..Sysservers WHERE IsRemote = 1 AND SrvName = @LinkName)
BEGIN
    EXECUTE sp_addlinkedserver @server = @LinkName, @srvproduct = '',
        @provider = 'SQLOLEDB', @datasrc = @SrvName
    EXECUTE sp_addlinkedsrvlogin @rmtsrvname = @LinkName, @useself = 'false',
        @locallogin = @LocalLogin, @rmtuser = @RmtLogin, @rmtpassword = @RmtPwd
END
[/code]

...a linked server is successfully added, but when I try a query like:

[code="sql"]
SELECT COUNT(*) FROM PRODUCTIONSERVER.ProdDatabase.dbo.ProdTable
[/code]

I get an error:

[code="plain"]
Msg 18456, Level 14, State 1, Line 0
Login failed for user 'DOMAIN\UserName'.
[/code]

I have also tried logging into the prod database using the mentioned Windows authentication credentials (my personal login) and I can get through. The reason for the linking is that I want to test the merging of live data (from SQL 2000) into the test database (SQL 2008).

Any thoughts on how I can link the servers? Thank you very much. :)
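A likely cause worth noting: with @useself = 'false', the @rmtuser/@rmtpassword pair must be a SQL Server authenticated login on the remote server; a DOMAIN\UserName cannot be supplied that way, which is consistent with the 18456 above. A sketch of the Windows-auth alternative, reusing the linked-server name from the post:

```sql
-- Map local logins to their own Windows credentials instead of supplying
-- a DOMAIN\user as @rmtuser (which only accepts SQL logins).
EXECUTE sp_addlinkedsrvlogin
    @rmtsrvname = 'PRODUCTIONSERVER',
    @useself    = 'true',      -- pass the caller's Windows identity through
    @locallogin = NULL;        -- NULL = applies to all local logins
-- Caveat: a double hop (client -> test server -> prod server) also needs
-- Kerberos delegation configured, or the pass-through fails with 18456.
```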

Need help on SP

Posted: 01 Oct 2013 06:03 AM PDT

I'm trying to create an SP to execute the following:

msdb.dbo.sp_add_jobstep
msdb.dbo.sp_update_jobstep

A 3rd party software installer created about 300-400 jobs. And naturally, the jobs aren't emailing out when they fail. So I'm trying to add a step to each job and alter the 1st step of the job. I can run this for each job, replacing the job_name with the job name that got created; however, it'll take forever.

[quote]
USE MSDB
GO
EXEC sp_add_jobstep @job_name = N'InstallerJob1',
    @step_name = N'EmailOut',
    @subsystem = N'TSQL',
    @command = N'EXEC sp_FailedJob ''InstallerJob1''',
    @on_success_action = 2
GO
EXEC sp_update_jobstep @job_name = N'InstallerJob1',
    @step_id = 1,
    @on_fail_action = 4,
    @on_fail_step_id = 2
GO
[/quote]

However, I thought it would be quicker to execute an SP. This is the SP I have so far.

[quote]
SET QUOTED_IDENTIFIER ON
SET ANSI_NULLS ON
GO
CREATE PROCEDURE sp_Steps
WITH ENCRYPTION
AS
--Purpose: To create job step email notification for
--         jobs created automatically from installer.
DECLARE @jobneed VARCHAR(150)
BEGIN -- begin sp
DECLARE job_cursor CURSOR FOR
    SELECT name FROM msdb.dbo.sysjobs
    WHERE name NOT LIKE 'BACKUP%'
      AND name NOT LIKE 'sys%'
    ORDER BY name;
OPEN job_cursor
FETCH NEXT FROM job_cursor INTO @jobneed
WHILE @@FETCH_STATUS = 0
BEGIN -- begin fetch
    EXEC msdb.dbo.sp_add_jobstep @job_name = N'@jobneed',
        @step_name = N'EmailOut',
        @subsystem = N'TSQL',
        @command = N'EXEC sp_FailedJob ''@jobneed''',
        @on_success_action = 2
    EXEC msdb.dbo.sp_update_jobstep @job_name = N'@jobneed',
        @step_id = 1,
        @on_fail_action = 4,
        @on_fail_step_id = 2
END -- end fetch
close job_cursor
deallocate job_cursor
END -- end proc
GO
[/quote]

Getting the following errors:

Msg 14262, Level 16, State 1, Procedure sp_verify_job_identifiers, Line 67
The specified @job_name ('@jobneed') does not exist.

I think I'm just missing passing the jobneed variable into the param. Am I on the right path?
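Right path, yes: N'@jobneed' is the eight-character literal string, not the variable, which is exactly what the 14262 message says. The loop also never advances, because FETCH NEXT is outside the WHILE body. A sketch of the corrected loop body (sp_FailedJob is the poster's own procedure):

```sql
-- Sketch of the corrected loop body.
DECLARE @cmd NVARCHAR(400);
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @cmd = N'EXEC sp_FailedJob ''' + @jobneed + N'''';  -- build the command text

    EXEC msdb.dbo.sp_add_jobstep
        @job_name = @jobneed,          -- the variable itself, unquoted
        @step_name = N'EmailOut',
        @subsystem = N'TSQL',
        @command = @cmd,
        @on_success_action = 2;

    EXEC msdb.dbo.sp_update_jobstep
        @job_name = @jobneed,
        @step_id = 1,
        @on_fail_action = 4,
        @on_fail_step_id = 2;

    FETCH NEXT FROM job_cursor INTO @jobneed;  -- was missing: advance the cursor
END
```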

Dropping/Recreating Clustered Index

Posted: 01 Oct 2013 05:23 AM PDT

Hi SSC,

I've got a table with a clustered index which needs to be dropped and recreated to add an additional column to it. The table also has three non-clustered indexes on it. I seem to recall reading somewhere that to do that, I should either drop or disable the non-clustered indexes beforehand, but I can't remember exactly why, or which one. Can anyone shed some light on the best practice here?
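For context on why that advice exists: nonclustered indexes store the clustering key, so dropping the clustered index (table becomes a heap) rebuilds them once, and recreating it rebuilds them again. Two sketches (index and table names below are hypothetical):

```sql
-- Option A: change the clustered index key in one operation; nonclustered
-- indexes are rebuilt at most once, instead of twice.
CREATE CLUSTERED INDEX CIX_MyTable
ON dbo.MyTable (ExistingCol, NewCol)
WITH (DROP_EXISTING = ON);

-- Option B: if a drop/recreate is unavoidable, disable the nonclustered
-- indexes first so they are not rebuilt on the drop, then rebuild them once
-- at the end (a disabled index must be rebuilt before it is usable again).
ALTER INDEX IX_MyTable_A ON dbo.MyTable DISABLE;
-- ...drop and recreate the clustered index here...
ALTER INDEX IX_MyTable_A ON dbo.MyTable REBUILD;
```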

BCP text out that includes quotation marks

Posted: 01 Oct 2013 04:45 AM PDT

I'm trying to export a text string that includes " to a text file using BCP. This code works as expected by exporting the word blah into a text file on my C drive:

[code="sql"]
-- Turn on cmdshell
EXEC sp_configure 'xp_cmdshell', 1
reconfigure
go

DECLARE @cmd varchar(1000)
SET @cmd = 'bcp "SELECT ''blah''" queryout "C:\bcpout.txt" -w -T'
EXEC master..xp_cmdshell @cmd
[/code]

However, if I change my text string from 'blah' to include quotation marks so it reads 'blah"blah', it fails to do anything.

[code="sql"]
DECLARE @cmd varchar(1000)
SET @cmd = 'bcp "SELECT ''blah"blah''" queryout "C:\bcpout.txt" -w -T'
EXEC master..xp_cmdshell @cmd
[/code]

Is there a way I can get the quotation marks exported to my text file using BCP?
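The embedded " closes bcp's quoted query argument on the command line before the statement ends, which is why nothing happens. One workaround (a sketch): keep the literal quote character out of the command line entirely by building it inside the query with CHAR(34):

```sql
-- CHAR(34) is the double-quote character, so the command line itself never
-- contains a stray " inside the quoted SELECT.
DECLARE @cmd varchar(1000)
SET @cmd = 'bcp "SELECT ''blah'' + CHAR(34) + ''blah''" queryout "C:\bcpout.txt" -w -T'
EXEC master..xp_cmdshell @cmd
```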

Slow Script

Posted: 01 Oct 2013 01:14 AM PDT

Hello. I ran the following script, which took 3 hours 25 minutes to insert about 40,000 rows. Is there any logic that I can put into this to speed it up? Thank you!

[code="sql"]
insert into T3
select T2.COL1, T2.COL2, T1.COL1, T1.COL2, T1.COL3
from T1
join T2 on T1.COL4 = T2.COL3
where T2.COL4 != 'data1'
  and T1.COL1 is not NULL
  and T1.COL5 is not NULL
  and T1.COL6 = 'data2'
  and CAST(T2.COL1 as VARCHAR)+'-'+CAST(T2.COL2 as VARCHAR)+'-'+CAST(T1.COL1 as VARCHAR)+'-'+CAST(T1.COL2 as VARCHAR)
      not in (select CAST(COL1 as VARCHAR)+'-'+CAST(COL2 as VARCHAR)+'-'+CAST(COL3 as VARCHAR)+'-'+CAST(COL4 as VARCHAR) from T3)
[/code]
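The concatenated NOT IN builds a string per row on both sides and defeats any index on T3. A common rewrite (a sketch with the same column mapping as the post) compares the four columns directly with NOT EXISTS:

```sql
-- Compare columns directly instead of concatenated strings, so an index
-- on T3(COL1, COL2, COL3, COL4) can be used for the anti-join.
insert into T3
select T2.COL1, T2.COL2, T1.COL1, T1.COL2, T1.COL3
from T1
join T2 on T1.COL4 = T2.COL3
where T2.COL4 != 'data1'
  and T1.COL1 is not NULL
  and T1.COL5 is not NULL
  and T1.COL6 = 'data2'
  and not exists (select 1 from T3
                  where T3.COL1 = T2.COL1
                    and T3.COL2 = T2.COL2
                    and T3.COL3 = T1.COL1
                    and T3.COL4 = T1.COL2);
```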

[SQL Server 2008 issues] 2008 RTM VS R2


2008 RTM VS R2

Posted: 01 Oct 2013 06:44 PM PDT

Dear all,

Where can I find the differences between these 2 versions of SQL 2008?

Microsoft SQL Server 2008 (RTM) - 10.0.1600.22 (X64)
vs
Microsoft SQL Server 2008 R2 (SP1) - 10.50.2550.0 (X64)

Regards,
Krishna

connection options to sql server 2008

Posted: 01 Oct 2013 06:58 AM PDT

In a new position that I have, I am told that I will be connecting to Reporting Services for a SQL Server 2008 database. My question is: is this where SSRS 2008 reports are displayed and/or deployed to the report server? Is this where users can access Report Builder? I am asking because I do not know the difference between logging into Reporting Services, Integration Services, and/or Analysis Services, or what each is set up for.

Optimize join to return single value from (one to many)

Posted: 01 Oct 2013 05:10 AM PDT

Hi,

I need to get a column from a JOINed table, but it's one-to-many, so I need to select only a single instance of c2 to keep the original number of records. I tried a SELECT TOP 1 like below, but this suddenly gave me very, very bad performance. Is there any other good way to do this? Is there any way to avoid the correlated query?

[code="sql"]
-- select 100 pk, 'Alpha' c1 into #temp
-- drop table #temp
;with t as (
    select 100 pk, 1 level, 'Alpha' c1, 'ID101' c2 union
    select 100 pk, 2 level, 'Alpha' c1, 'ID101' c2 union
    select 100 pk, 3 level, 'Alpha' c1, 'ID101' c2 union
    select 100 pk, 1 level, 'Alpha' c1, 'ID102' c2 union
    select 100 pk, 2 level, 'Alpha' c1, 'ID102' c2
)
select #temp.pk, #temp.c1
from #temp
join t on t.pk = #temp.pk
where t.level = 1
  AND t.c2 = (select top 1 tt.c2 from t tt where t.pk = tt.pk)
[/code]

Tx
Mario
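One standard way to avoid the correlated TOP 1 (a sketch against the sample CTE above): number the rows per key with ROW_NUMBER() and keep only the first, so each pk joins exactly one row:

```sql
-- Pick one c2 per pk up front, then join once; the explicit ORDER BY in
-- the window makes the "which instance" choice deterministic.
;with ranked as (
    select pk, c1, c2,
           row_number() over (partition by pk order by c2, level) as rn
    from t
    where level = 1
)
select #temp.pk, #temp.c1, ranked.c2
from #temp
join ranked on ranked.pk = #temp.pk and ranked.rn = 1;
```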

approximate timespans for db backups?

Posted: 01 Oct 2013 10:25 AM PDT

Hello, I was wondering if you generally see a fairly consistent timespan for database backups? For example, I just backed up a 5700 MB db and it took about 14 seconds. That's about 407 MB/s. Based on this metric, would it be reasonable to assume that a 57000 MB db would take about 140 seconds to back up? Or do you find that backups become slower or faster at some point?

Also, do you tend to see fairly consistent backup times based on database size between different database instances, or do you find that there are db factors which have a significant effect on backup time? Assume average CPU and RAM between db servers.

Installing a 1t db and best practices to keep 3 data bases in sync

Posted: 01 Oct 2013 05:49 AM PDT

Hi all,

This is the project which I am going to work on:

-- I have to install and configure test/dev/prod databases using SQL 2008, and the database has the potential to grow to 1 TB+ (I need suggestions and best practices on hardware and software); the environment is VMware.
-- I have to keep the prod, dev and test databases in sync all the time.
-- The test database gets new data weekly through SSIS packages.

Thank you to everyone for looking at my post.

Linked server to DB2 won't show catalogs

Posted: 01 Oct 2013 05:02 AM PDT

I have created the ODBC connection within the IBM Configuration Assistant, then created the linked server to the DB2 database. The connection tests successfully, but I can't see the catalog for the DB2 database. Any ideas?

Trying to remove SQL 2008 R2, getting a dialog about needing to access core_shared.msi

Posted: 01 Oct 2013 03:44 AM PDT

I'm trying to remove SQL Server 2008 R2 Developer Edition from my developer machine, in preparation for installing SQL Server 2012. Right now I'm stuck with a dialog box that says:

"SQL Server 2008 R2 SP1 Database Engine Shared
The feature you are trying to use is on a CD-ROM or other removable disk that is not available.
Insert the 'SQL Server 2008 R2 SP1 Database Engine Shared' disk and click OK."

I've tried looking for that DVD, but can't find one with that label on it. I've tried looking at every MSDN DVD that I've got, but am coming up with nothing. I'm not sure where this is. The only options the popup dialog box gives me are OK or Cancel. OK would mean that I believe I've found the core_shared.msi file, but since I haven't, it will fail. That leaves me with Cancel. I'm hoping that doesn't completely cancel the uninstall. At this point all I can do is hit the Cancel button and hope for the best. I'd love to know what will happen: will it leave this "SQL Server 2008 R2 SP1 Database Engine Shared" component behind?

SSRS - Cannot Create Connection

Posted: 01 Oct 2013 01:53 AM PDT

Hi there,

I am new to SSRS and have basically created a bunch of reports (as sysadmin) through Visual Studio, deployed them, and set up the necessary rights, permissions and roles per folder on the localhost URL. Now the fun part: the domain user can access the URL and get to the required folder, but when trying to execute the report the following error is thrown:

An error has occurred during report processing. (rsProcessingAborted)
Cannot create a connection to data source 'RI_TEST_TEXTILES'. (rsErrorOpeningConnection)
For more information about this error navigate to the report server on the local server machine, or enable remote errors

I logged in using the credentials of a domain user that is not part of system admin and I can execute the reports successfully, but as soon as the users try, they get the above error. I have been at this now for two days with no luck; I have literally tried everything.

Thanks,
Max

Truncation Error

Posted: 01 Oct 2013 03:19 AM PDT

In an SSIS package I am using a Data Flow Task and I am getting this warning:

[code="plain"]
Warning 2 Validation warning. Load Daily Folios from NVCORP: {6574291C-EDED-4C84-8E07-FABBCAD8883D}:
Truncation may occur due to retrieving data from database column "ORIGINAL_SRP" with a length of 11
to data flow column "ORIGINAL_SRP" with a length of 3.
NVCORP - STAGE - FoliolinkResvRate_STG - Local.dtsx 0 0
[/code]

But my SP_COLUMNS shows the following for the ORIGINAL_SRP column:

[code="plain"]
TABLE_NAME              COLUMN_NAME  DATA_TYPE TYPE_NAME PRECISION LENGTH
FOLIOLINK_RESV_RATE_STG ORIGINAL_SRP 12        varchar   50        50
[/code]

Any thoughts why it thinks it's only a length of 3?

master.dbo.xp_create_subdir not working for Long Directory names

Posted: 30 Sep 2013 08:46 PM PDT

Hi all,

I have created the attached SP for creating subdirectories (which has since been integrated into my restore tool) using xp_create_subdir. This works fine for shorter names, but for longer names the command behaves strangely and does not create subdirectories. It is unable to take longer names, which is evident from the print commands. This behaviour persists in SQL Server 2005/2008/2008 R2 (not tested in 2012).

For example:
[Create_Dir] 'abcd275_travellinks' works absolutely fine.

As soon as you run the below command, it gives errors, since it is not able to take longer names:
[Create_Dir] 'abcd275_travellinks'

Has someone faced the same issue? Any suggestions would be highly appreciated. Thanks in advance.

performance

Posted: 30 Sep 2013 10:22 PM PDT

Hello,

On one server I installed SQL 2000 and SQL 2008 R2, and linked it to another server that has SQL Server 2000. Suddenly, the speed of all applications that connect to SQL Server 2008 R2 decreased. What is happening? Would you please help me? ;-)

Best regards,
Zohreh

Script for columns' name, data type, length for one table

Posted: 01 Oct 2013 12:36 AM PDT

I am looking for a simple script that returns the column names, data types, and lengths for one table, sorted by column name.
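A minimal sketch using the standard metadata views ('MyTable' is a placeholder for the table in question):

```sql
-- One row per column: name, type, and length, sorted by column name.
SELECT COLUMN_NAME,
       DATA_TYPE,
       CHARACTER_MAXIMUM_LENGTH   -- NULL for non-character types
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'MyTable'      -- hypothetical table name
ORDER BY COLUMN_NAME;
```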

Indexing a column with type 'uniqueidentifier'

Posted: 30 Sep 2013 10:33 PM PDT

Hi,

We have an SP which is giving performance issues. As an example, it has been executed 4 times and has a total elapsed time of 68,961,914. It is the costliest procedure in terms of worker time. By comparison, other procedures have run 18,877 times with a total time of 358,513,681 ms, and 261 times with a total time of 11,155,273. I know the comparisons are not really helpful without giving you more info about each SP, but it's just a guide.

I have run an index tuning query and it has recommended an index on the table the SP uses. The index it recommends is on a uniqueidentifier column. The table currently has one non-clustered index, with a read ratio of 180 to a write ratio of 2,365.

Is it likely that adding this index will aid the performance of this SP? Also, should I go for a clustered index or non-clustered? I realise that adding a clustered index will incur a performance hit while it reorganises the data in the table.

Rebuild index task/job fails every time...

Posted: 03 Jul 2011 07:18 PM PDT

We are using SQL Server 2008 Enterprise 64-bit, and we have created standard database optimization jobs via the maintenance plans. The rebuild index task runs weekly, but it has failed for the last 3 weeks. When I check the history of the job, I encounter the following error after 9 minutes:

[i]Executing query "ALTER INDEX [PK_AboProductType] ON [dbo].[AboProdu...".: 2% complete End Prog... The package execution fa... The step failed.[/i]

But this does not give me the information I need to correct this problem! When I try to run the standard report "Index usage statistics", I get the following error:

[i]Error: Incorrect syntax near '('.[/i]

What can I do to get the job running cleanly again? I don't know where to start :(

Tuesday, October 1, 2013

[SQL Server] Is this right if i will do my project this way?..


Is this right if i will do my project this way?..

Posted: 01 Oct 2013 02:51 PM PDT

Hi everyone, I hope everyone is having a nice day. :-) Is it right if I do my project this way? Please see my attachment.

SQL Agent Job error

Posted: 01 Oct 2013 04:10 AM PDT

Hi,

I am experiencing the error below. I have an "Update NCF Processes" job that has 35 steps. The job runs automatically until it gets to step 15, Create .CSV. I need to find out how to have the job run automatically through all 35 steps. Please help! Thanks.

[code="plain"]
Update NCF Processes,Error,15,SQL04,Update NCF Processes,Create Update.csv,,
Executed as user: NCF\sql_services. Microsoft (R) SQL Server Execute Package Utility Version 10.50.2500.0 for 64-bit
Copyright (C) Microsoft Corporation 2010. All rights reserved.
Started: 12:20:23 PM
Error: 2013-10-01 12:20:53.63  Code: 0xC020200E  Source: Create CSV Flat File Destination [2672]
    Description: Cannot open the datafile "J:\Projects\NS to SF\NS Contacts Update with Org\update.csv".  End Error
Error: 2013-10-01 12:20:53.63  Code: 0xC004701A  Source: Create CSV SSIS.Pipeline
    Description: component "Flat File Destination" (2672) failed the pre-execute phase and returned error code 0xC020200E.  End Error
DTExec: The package execution returned DTSER_FAILURE (1).
Started: 12:20:23 PM  Finished: 12:20:53 PM  Elapsed: 30.202 seconds.
The package execution failed. The step failed.,00:00:30,0,0,,,,0
[/code]

Disable a Calculated Parameter ssrs report using SQL Server 2008 r2

Posted: 30 Sep 2013 08:34 PM PDT

Hi,

I have a report created in SQL Server 2008 R2 in which I'm passing 3 parameters to the query. The third one is calculated on the basis of the other two. When there is no data in the first two fields, the third field is disabled; but when values are inserted into the two fields, the third field becomes enabled. I don't want the third field to be enabled (it should be read-only), even once the calculated data appears in it.

Before value:
[img]http://i869.photobucket.com/albums/ab252/write2varun/2_zps7daa7ee8.jpg[/img]

After value:
[img]http://i869.photobucket.com/albums/ab252/write2varun/2_zps7daa7ee8.jpg[/img]

P.S. I'm new to SSRS reports.

Thank you
