Wednesday, October 2, 2013

[how to] How to do automatic failover in PostgreSQL 9.1



How to do automatic failover in PostgreSQL 9.1

Posted: 02 Oct 2013 04:53 PM PDT

I am new to PostgreSQL, and I have the task of implementing replication and failover for a PostgreSQL database. We are using two nodes (a primary and a slave). I have configured streaming replication between them and it works fine, but I have not been able to configure automatic failover: when the primary goes down, the slave should be promoted to primary.

I have tried pgpool, but after reading some forums I dropped it, and I am now planning to go with repmgr. Is repmgr a good choice? I also need a step-by-step failover configuration for Postgres with repmgr.

I have tried many failover scripts, but none of them seem to work. Can anyone provide a sample script for the failover process, and the steps for how to execute it?
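For illustration, a minimal SQL check that a monitoring or failover script could use to tell which node is currently the standby (this is only a sketch of the detection side; the actual promotion in 9.1 is done with a trigger file or pg_ctl promote, outside SQL):

-- Returns true on a standby that is replaying WAL, false on the primary (9.0+).
SELECT pg_is_in_recovery();

-- On the primary, the standby's replication connection can be inspected (9.1+).
SELECT client_addr, state, sent_location, replay_location
FROM pg_stat_replication;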

Triggers: the flow of the program does not stop [on hold]

Posted: 02 Oct 2013 04:51 PM PDT

I made a simple program in Oracle Forms. The code below is the trigger that runs when the submit button is pressed.

Here it is:

BEGIN
  CREATE_RECORD;

  IF :USERS.USERNAME IS NULL THEN
    MESSAGE('Please enter Username');
    GO_ITEM('USERNAME');
  ELSIF :USERS.PASSWORD IS NULL THEN
    MESSAGE('Please enter Password');
    GO_ITEM('PASSWORD');
  ELSIF :USERS.PASSWORD2 IS NULL THEN
    MESSAGE('Please confirm your Password');
    GO_ITEM('PASSWORD2');
  ELSIF :USERS.PASSWORD != :USERS.PASSWORD2 THEN
    MESSAGE('Password did not match');
    GO_ITEM('PASSWORD2');
  ELSIF :USERS.NAME IS NULL THEN
    MESSAGE('Please enter your Name');
    GO_ITEM('NAME');
  ELSIF :USERS.POSITION IS NULL THEN
    MESSAGE('Please enter your Position');
    GO_ITEM('POSITION');
  END IF;

  IF :USERS.ACCESS_LEVEL = 'admin' THEN
    IF :USERS.ADMIN_PASS = 'eweb1' THEN
      alert := show_alert('USER_CREATED');
      IF alert = alert_button1 THEN
        MESSAGE('OK');
      END IF;
    ELSE
      MESSAGE('Administrator Password did not match');
      GO_ITEM('ADMIN_PASS');
    END IF;
  ELSE
    alert := show_alert('USER_CREATED');
    IF alert = alert_button1 THEN
      /* foo */
    END IF;
  END IF;
END;

When the form encounters an error, for example when the second password (password2) does not match and the 'Password did not match' message is shown, it still continues and executes the following statements instead of stopping and waiting for the button to be pressed again. I hope I can fix this. Thanks.

Cannot install DB2 Express-C 10.1 on Mac OS X 10.7.5

Posted: 02 Oct 2013 03:07 PM PDT

I'm having trouble setting up DB2 Express-C 10.1 on Mac OS X 10.7.5. My initial install attempt failed, but then I found

https://www.ibm.com/developerworks/community/forums/html/topic?id=77777777-0000-0000-0000-000014927797  

That seemed to work up to the point where I tried to start the db:

$ ./db2start
dyld: Library not loaded: /db2unix/db2galileo/db2_galileo_darwinport/darwin64/s120905/engn/lib/libdb2e.dylib
  Referenced from: /Users/home/sqllib/adm/./db2start
  Reason: image not found
Trace/BPT trap: 5
$

On the other hand, if I try to run db2setup from the expc directory, I get an error message that others have seen. Something like:

/Applications/expc/db2/macos/install/db2setup: line 606: /tmp/db2.tmp.22412/db2/macos/install/../java/jre/bin/java: No such file or directory

Where the 5-digit number in the tmp directory name changes on each run.

That is followed by a nice DB2 10.1 intro screen which is suddenly replaced by a blank DB2JDKTester window that says "DB2 SETUP LAUNCHPAD" at the top.

Has anyone else tried to install DB2 on Mac OS X 10.7?

(And, yes, I've seen the other thread with the same title, but that was never actually answered.)

How to get "Lookup" functionality in Access when linking to a SQL table?

Posted: 02 Oct 2013 04:24 PM PDT

I am building a SQL database which will have an Access 2010 front-end.

I would like some of the fields to be lookups in Access (i.e. the user clicks on the field in Access and a drop-down populates). It is fairly straightforward to make a field a lookup to another table in Access, but I can't figure out how to do it in SQL and then propagate the changes.

My SQL knowledge is very basic. Here's an example of how I am creating my SQL tables:

CREATE TABLE RequestTypes (
    RequestType varchar(50) PRIMARY KEY
);
INSERT INTO RequestTypes (RequestType) VALUES ('Val 1');
INSERT INTO RequestTypes (RequestType) VALUES ('Val 2');
INSERT INTO RequestTypes (RequestType) VALUES ('Val 3');

CREATE TABLE Projects (
    ID int IDENTITY(1,1) PRIMARY KEY,
    RequestStatus varchar(50) FOREIGN KEY REFERENCES RequestStatus(RequestStatus),
    Quantity varchar(50)
);

I then connect to the database through the ODBC connection in Access.

How can I create my tables in SQL so that the RequestStatus field of my Projects table has the same functionality a lookup table provides? For example, being able to click on the RequestStatus attribute of a Project and select "Val 1", "Val 2", or "Val 3" from a list. The above does require the values to match but does not provide the "dropdown" lookup functionality.
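For illustration, a hedged sketch of the Projects DDL with the foreign key pointing at the RequestTypes table actually created above (the original references a RequestStatus table that is not shown, so RequestTypes is assumed to be the intended target). The constraint restricts the stored values; the drop-down itself still has to be defined on the Access side, for example as a combo box whose row source is the RequestTypes table:

-- Sketch only: assumes RequestTypes is the intended lookup table.
CREATE TABLE Projects (
    ID int IDENTITY(1,1) PRIMARY KEY,
    RequestStatus varchar(50) NULL
        CONSTRAINT FK_Projects_RequestTypes
        FOREIGN KEY REFERENCES RequestTypes (RequestType),
    Quantity varchar(50)
);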

Why won't Postgresql 9.3 start on Ubuntu?

Posted: 02 Oct 2013 02:02 PM PDT

All,

I have successfully installed PostgreSQL 9.3 from the APT repository on two VMs running Ubuntu 12.04 and 13.04; however, I cannot get it to install properly on my host machine running Ubuntu 12.04.

The install (this time) seems to have gone ok, but perhaps there is an error I'm not understanding:

* No PostgreSQL clusters exist; see "man pg_createcluster"
Setting up postgresql-9.3 (9.3.0-2.pgdg12.4+1) ...
Creating new cluster 9.3/main ...
  config /etc/postgresql/9.3/main
  data   /var/lib/postgresql/9.3/main
  locale en_US.UTF-8
  port   5432
update-alternatives: using /usr/share/postgresql/9.3/man/man1/postmaster.1.gz to provide /usr/share/man/man1/postmaster.1.gz (postmaster.1.gz) in auto mode.

So I then try to add myself as a PostgreSQL user, but I get this:

createuser: could not connect to database postgres: could not connect to server: No such file or directory
    Is the server running locally and accepting
    connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?

I cannot see PostgreSQL running in the system monitor, and there is no file in the /var/run/postgresql/ folder...

EDIT: On the VM's, there is a file in /var/run/postgresql/ called 9.3-main.pid

So... what's going on here that isn't going on in my VMs? Like I said, the other installations on the VMs, including PostGIS and pgAdmin, went perfectly; no idea why this host machine isn't going through.

Any thoughts appreciated!

-mb

Does MySQL store BLOBs in the InnoDB buffer pool (innodb_buffer_pool_size)?

Posted: 02 Oct 2013 08:00 PM PDT

I have several databases, all InnoDB; 50 GB of the data is in BLOB columns and only 700 MB is non-BLOB data. Tools such as mysqltuner tell me to set innodb_buffer_pool_size = 51GB.

Does MySQL store BLOBs in RAM? If so, how can I configure MySQL so that it doesn't use the InnoDB buffer pool (innodb_buffer_pool_size) to store BLOB data?
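For what it's worth, a hedged way to see how much of the buffer pool a given table's pages actually occupy (this assumes MySQL 5.6 or later, where INFORMATION_SCHEMA.INNODB_BUFFER_PAGE exists; the query scans the whole pool and can itself be expensive):

-- Sketch: summarise buffer pool usage per table (MySQL 5.6+ only).
SELECT table_name,
       COUNT(*)                           AS pages_in_pool,
       ROUND(SUM(data_size) / 1048576, 1) AS data_mb
FROM information_schema.innodb_buffer_page
WHERE table_name IS NOT NULL
GROUP BY table_name
ORDER BY data_mb DESC
LIMIT 20;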

Recover original utf8 data from latin1 mysql db

Posted: 02 Oct 2013 12:00 PM PDT

We forgot to change the character sets in our MySQL database after migrating to a new server.

Current situation:

mysql> show variables like 'character%';
+--------------------------+----------------------------+
| Variable_name            | Value                      |
+--------------------------+----------------------------+
| character_set_client     | latin1                     |
| character_set_connection | latin1                     |
| character_set_database   | latin1                     |
| character_set_filesystem | binary                     |
| character_set_results    | latin1                     |
| character_set_server     | latin1                     |
| character_set_system     | utf8                       |
| character_sets_dir       | /usr/share/mysql/charsets/ |
+--------------------------+----------------------------+

mysql> show variables like 'collation%';
+----------------------+-------------------+
| Variable_name        | Value             |
+----------------------+-------------------+
| collation_connection | latin1_swedish_ci |
| collation_database   | latin1_swedish_ci |
| collation_server     | latin1_swedish_ci |
+----------------------+-------------------+

But table charset is utf-8:

mysql> show create table inbox\G
*************************** 1. row ***************************
Create Table: CREATE TABLE `inbox` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  # ...
  `message` varchar(2048) DEFAULT NULL
) ENGINE=InnoDB AUTO_INCREMENT=4001 DEFAULT CHARSET=utf8

We're using Node.js (0.10.*) for the backend and the latest felixge node-mysql (the connection is UTF8_GENERAL_CI by default).

Before we found the incorrect config, a lot of messages (~4000) had been saved to the table, most of them in utf-8. Sure enough, we currently see a lot of question marks (??????????) instead of the normal text.

I've tried node-iconv, but no luck.

var db = require('mysql').createConnection({
  // credentials
});

var iconv = new require('iconv').Iconv('latin1', 'utf-8');

db.query('select message from inbox where id = 42', function(err, result) {
  var msg = result[0].message;
  console.log(iconv.convert(msg).toString());
  db.destroy();
});

Is there any chance to recover original utf-8 data?
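For reference, the usual recovery attempt for latin1/utf-8 mix-ups is a double conversion done in SQL. This is only a sketch, and it only helps if the original bytes were mangled rather than replaced; rows that were already reduced to literal ? characters at insert time cannot be recovered this way:

-- Reinterpret the stored bytes as utf8 instead of latin1 (sketch).
SELECT id,
       CONVERT(CAST(CONVERT(message USING latin1) AS BINARY) USING utf8) AS message_utf8
FROM inbox
WHERE id = 42;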

Populating joined tables

Posted: 02 Oct 2013 12:01 PM PDT

I have a student table with columns stu_id, name, date of birth, etc., and a marks table that has stu_id as a foreign key. I want to insert a stu_id, say x, into the student table and then have that same id be populated in the marks table automatically.

I have joined the tables using a RIGHT OUTER JOIN. I am new to MySQL and PHP. Please help.
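For illustration, two common MySQL patterns for this, as a sketch only (the student and marks column lists shown here are hypothetical, and the first variant assumes stu_id is AUTO_INCREMENT):

-- Variant 1: insert the student, then reuse the generated id for marks.
INSERT INTO student (name, date_of_birth) VALUES ('Jane Doe', '2000-05-17');
INSERT INTO marks (stu_id) VALUES (LAST_INSERT_ID());

-- Variant 2: a trigger, so a marks row is created automatically on every insert.
CREATE TRIGGER student_ai AFTER INSERT ON student
FOR EACH ROW
  INSERT INTO marks (stu_id) VALUES (NEW.stu_id);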

Transactional Replication: When to delete the initial snapshot

Posted: 02 Oct 2013 12:02 PM PDT

I would like to delete the initial snapshot after I set up transactional replication and everything is working fine.

When is it OK to delete the snapshot that initializes a subscriber?

Append columns with values from a has-many relationship

Posted: 02 Oct 2013 04:14 PM PDT

I have two tables, orders and items. Every order has many items.

orders
- id
- created_at
- paid_at

items
- id
- title
- amount
- quantity
- order_id

Let's say that there are 3 kinds of items: chair, table and door.

I would like to create a query that will produce table with following columns:

- order_id
- created_at
- paid_at
- item_id       # for chair
- item_title
- item_quantity
- item_amount
- item_id       # for table
- item_title
- item_quantity
- item_amount
- item_id       # for door
- item_title
- item_quantity
- item_amount

If an order has a chair item, the data in the chair columns will be displayed; otherwise they will be empty.

Why am I doing this?

I would like to export this data to excel sheet.

There is a limited number of different items, so there is no problem with millions of columns.
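For illustration, a hedged sketch of how this pivot could be written with one LEFT JOIN per item kind, assuming the kind is identified by title and each order has at most one item row per kind:

SELECT o.id           AS order_id,
       o.created_at,
       o.paid_at,
       chair.id       AS chair_item_id,
       chair.title    AS chair_title,
       chair.quantity AS chair_quantity,
       chair.amount   AS chair_amount,
       tbl.id         AS table_item_id,
       tbl.title      AS table_title,
       tbl.quantity   AS table_quantity,
       tbl.amount     AS table_amount,
       door.id        AS door_item_id,
       door.title     AS door_title,
       door.quantity  AS door_quantity,
       door.amount    AS door_amount
FROM orders o
LEFT JOIN items chair ON chair.order_id = o.id AND chair.title = 'Chair'
LEFT JOIN items tbl   ON tbl.order_id   = o.id AND tbl.title   = 'Table'
LEFT JOIN items door  ON door.order_id  = o.id AND door.title  = 'Door';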

Geography union via cursor returning null

Posted: 02 Oct 2013 11:06 AM PDT

I need to combine multiple geography records into a single entity to store in a spatial data table, but I've learned that the only available function in SQL Server 2008 R2 that accomplishes this task is STUnion, which only works on two geography instances at a time. I have upwards of 200 that I need to union, and I'm loath to hand-code that kind of query.

On that note, I wrote up a one-off cursor to iteratively union each geography object from my staging table. However, when executed, the cursor returns a null value without any errors or messages.

DECLARE
    @ShapeUnion GEOGRAPHY,
    @Shape GEOGRAPHY

DECLARE curSpatial CURSOR FOR
SELECT
    geom
FROM
    dbo.Boundaries_Staging

OPEN curSpatial

FETCH NEXT FROM curSpatial INTO @Shape

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @ShapeUnion = @ShapeUnion.STUnion(@Shape)
    FETCH NEXT FROM curSpatial INTO @Shape
END

CLOSE curSpatial
DEALLOCATE curSpatial

SELECT @ShapeUnion

Edit: Updated script for anyone curious about unioning multiple geography records. As stated in the comments, I initialized @ShapeUnion prior to the loop. Also, I added .STBuffer() to the field being unioned; this ensures that the minuscule spaces between the spatial data are completely filled.

DECLARE
    @ShapeUnion GEOGRAPHY,
    @Shape GEOGRAPHY

DECLARE curSpatial CURSOR FOR
SELECT
    geom
FROM
    dbo.Boundaries_Staging
WHERE
    geom IS NOT NULL

OPEN curSpatial

FETCH NEXT FROM curSpatial INTO @Shape

SET @ShapeUnion = @Shape

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @ShapeUnion = @ShapeUnion.STUnion(@Shape.STBuffer(1))
    FETCH NEXT FROM curSpatial INTO @Shape
END

CLOSE curSpatial
DEALLOCATE curSpatial

SELECT @ShapeUnion

How to create a stored procedure with several delete in(select) statements?

Posted: 02 Oct 2013 08:31 PM PDT

Our database needs to be periodically cleared of certain items that receive a specific flag (A4) on their id. Our application manages stores, and all tables are MyISAM. The items can be in the bought, sold, exchanged, etc. tables, and they also exist in the main table that contains all items (the only information the tables share is the id, and I can't control that at the moment).

My initial idea was to create the individual delete statements and then place them together in a stored procedure. The stored procedure I created contains:

delimiter $$

create procedure clear_A4(in period char(10))
BEGIN
  delete from mov_pedido where pednot in (select distinct titnum from mov_titulo where titnum like 'A4%' and titemi <= '2012-01-01' and titvalpag > 0);
  delete from mov_orcame where orcnot in (select distinct titnum from mov_titulo where titnum like 'A4%' and titemi <= '2012-01-01' and titvalpag > 0);
  delete from mov_nfsven where nfsnum in (select distinct titnum from mov_titulo where titnum like 'A4%' and titemi <= '2012-01-01' and titvalpag > 0);
  delete from mov_movime where movdoc in (select distinct titnum from mov_titulo where titnum like 'A4%' and titemi <= '2012-01-01' and titvalpag > 0);
  delete from mov_titulo where titnum like 'A4%' and titemi <= '2011-12-01' and titvalpag > 0;
end $$

delimiter ;

These lines refer to items sold. A sale starts with an order, then a quotation, then the actual sale, and finally it gets recorded in the main table. The first table, mov_pedido, has the orders, mov_orcame has the quotations, mov_nfsven has the actual sales, and mov_movime records all the transactions. These tables can only be cleared of items that have been paid for by customers, but this information can only be found in mov_titulo, so I decided to use the SELECT statements with the IN operator. The problem is that I also need to clear the mov_titulo table, and if I delete the items from mov_titulo first, the information used in the SELECT statements is lost.

Reading up on MySQL and multithreaded databases, it seems that if the statements are inside the stored procedure together they will be processed simultaneously, and because the tables are related in that way it will cause a problem.

My question is: how can I improve the logic of my delete statements to make them work together? Can I improve my stored procedure to deal with this?

Right now I am executing one statement at a time manually, and that works, but I can see problems down the line, such as entering the wrong date in one of the existing 20-some delete statements, so I am looking for a better way to do this.
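For illustration, one way to keep the id list available until the end is to capture it once in a temporary table and drive every delete from that. This is a sketch only; note that the original last delete uses a slightly different date ('2011-12-01'), which may or may not be intentional:

CREATE TEMPORARY TABLE tmp_a4_ids AS
  SELECT DISTINCT titnum
  FROM mov_titulo
  WHERE titnum LIKE 'A4%' AND titemi <= '2012-01-01' AND titvalpag > 0;

DELETE FROM mov_pedido WHERE pednot IN (SELECT titnum FROM tmp_a4_ids);
DELETE FROM mov_orcame WHERE orcnot IN (SELECT titnum FROM tmp_a4_ids);
DELETE FROM mov_nfsven WHERE nfsnum IN (SELECT titnum FROM tmp_a4_ids);
DELETE FROM mov_movime WHERE movdoc IN (SELECT titnum FROM tmp_a4_ids);
-- mov_titulo can now be cleared last, from the same captured list.
DELETE FROM mov_titulo WHERE titnum IN (SELECT titnum FROM tmp_a4_ids);

DROP TEMPORARY TABLE tmp_a4_ids;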

Thank you.

What can a table constraint do that a column constraint can't?

Posted: 02 Oct 2013 10:16 AM PDT

I had an exam today. One question bothered me:

  • Q: What can a table constraint do that a column constraint can't do?
  • A: My answer was that only a table constraint can declare a composite primary key.

I guess this is not the only difference. How can this question be answered more precisely?
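For illustration, a minimal sketch of constraints that can only be written at table level because they span more than one column (generic SQL; exact support varies by DBMS):

CREATE TABLE enrollment (
    student_id INT NOT NULL,
    course_id  INT NOT NULL,
    start_date DATE NOT NULL,
    end_date   DATE NOT NULL,
    PRIMARY KEY (student_id, course_id),   -- composite primary key
    UNIQUE (course_id, start_date),        -- composite unique constraint
    CHECK (start_date <= end_date)         -- check involving two columns
);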

Storing statistic results

Posted: 02 Oct 2013 08:57 AM PDT

I'm building a small database for statistics. The statistic itself is really simple, like taking the cost of an event and dividing it by the number of people who attended to get the cost per attendee, but it can end up covering loads of events, and it will require pulling in a lot of results at once.

I know that with MySQL I can have the database do such a simple calculation, so I could run the query every time it is needed, but that can end up being costly if there are a lot of events in, say, a year.
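For context, the on-the-fly version might look something like this (a sketch only; the event and attendance tables and their columns are hypothetical):

SELECT e.id,
       e.cost,
       COUNT(a.id)          AS attendees,
       e.cost / COUNT(a.id) AS cost_per_attendee
FROM event e
JOIN attendance a ON a.event_id = e.id
GROUP BY e.id, e.cost;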

My question is: if I were to store the statistic results in a different table, I see that a little bit as data duplication. Is that acceptable, or should I do it in a different manner?

Thanks

Unknown column error with an index

Posted: 02 Oct 2013 08:55 AM PDT

I created a simple index on a timestamp column:

CREATE TABLE `data` (
    `id` INT(11) UNSIGNED NOT NULL,
    `ticker` VARCHAR(16) NOT NULL COLLATE 'utf8_unicode_ci',
    `comment` TEXT NOT NULL COLLATE 'utf8_unicode_ci',
    `link` VARCHAR(256) NULL DEFAULT NULL COLLATE 'utf8_unicode_ci',
    `comment_hash` CHAR(32) NOT NULL COLLATE 'utf8_unicode_ci',
    `time` DATETIME NOT NULL,
    `feed` VARCHAR(16) NOT NULL COLLATE 'utf8_unicode_ci',
    `source` INT(11) NOT NULL,
    `active` TINYINT(1) NOT NULL DEFAULT '1',
    `score` VARCHAR(20) NULL DEFAULT NULL COLLATE 'utf8_unicode_ci',
    `dttm` DATETIME NULL DEFAULT NULL,
    PRIMARY KEY (`ticker`, `comment_hash`),
    INDEX `time_index1` (`time`)
)
COLLATE='utf8_unicode_ci'
ENGINE=InnoDB;

However when I try to do anything with the index, it gives me SQL Error (1054): Unknown column 'time_index1' in 'where clause'

Sample queries:

select *
from data
where time_index1 = '2013-10-01 17:18:06'

or

select *
from data
order by time_index1
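For reference, an index is not something you reference by name in a WHERE or ORDER BY clause (only index hints use its name). A hedged sketch of the equivalent statements written against the indexed column itself, with the optimizer then deciding whether to use time_index1:

select *
from data
where `time` = '2013-10-01 17:18:06';

select *
from data
order by `time`;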

How to permanently enable the query cache in MySQL?

Posted: 02 Oct 2013 08:43 AM PDT

I want to know if there's a way to permanently set the size of my query cache and to make sure it's always enabled. Right now, I'm just using the SET GLOBAL query_cache_size command to set the size, but when the database is restarted, the setting goes away.
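For reference, a sketch of the runtime statements involved, which are lost on restart; the persistent equivalent is the same two option names placed under [mysqld] in the server's configuration file (my.cnf):

-- Runtime-only settings (sketch): they do not survive a mysqld restart.
SET GLOBAL query_cache_size = 64 * 1024 * 1024;  -- 64 MB
SET GLOBAL query_cache_type = 1;                 -- ON

-- Verify:
SHOW VARIABLES LIKE 'query_cache%';

-- Persistent equivalent in my.cnf under [mysqld]:
--   query_cache_size = 64M
--   query_cache_type = 1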

Thanks.

How do you see which database on a server uses the most resources?

Posted: 02 Oct 2013 03:44 PM PDT

I have a database server with a couple of databases on it. How can I see where any resource pressure may come from?

I would like to get a table of:

  • Database Name
  • Batch Requests per second
  • CPU Time
  • Logical Reads
  • Logical Writes
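For illustration, a hedged T-SQL sketch that approximates the CPU and logical I/O part of that table from the plan cache. It only covers statements that are still cached and attributes each one to the database recorded in its plan, so treat the numbers as indicative rather than exact; batch requests/sec is an instance-wide perfmon counter and isn't broken down per database here:

SELECT DB_NAME(t.dbid)              AS database_name,
       SUM(qs.execution_count)      AS executions,
       SUM(qs.total_worker_time)    AS total_cpu_time_us,
       SUM(qs.total_logical_reads)  AS total_logical_reads,
       SUM(qs.total_logical_writes) AS total_logical_writes
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS t
WHERE t.dbid IS NOT NULL
GROUP BY DB_NAME(t.dbid)
ORDER BY total_cpu_time_us DESC;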

How to access my password in SSIS 2005 from VB.NET

Posted: 02 Oct 2013 10:00 AM PDT

A VB.NET component delivered the following message:

Login failed for user MyDBUser

It is about an OleDbConnection.

By the way, the user works fine with SQL Server Management Studio.

Retaining a password via the connection manager does not work, even with Save my password enabled. In my case it is not a security issue to retain the password, because the SSIS package will not be published for others. Hence, what's the fastest way (in terms of the VB.NET code) to get access to the (SQL Server) password?

EDIT: Workaround: an SSIS variable was attached to the scope of the required control flow, and Me.readVariable was used to concatenate the connection string.

Grouping/selecting unique values in a single column [duplicate]

Posted: 02 Oct 2013 09:00 AM PDT


Here is my table

ID      Field1  Field2      Field3        Field5  Field7
357     357     2013-03-07  08:02:02:275  t02     bBCD00103RG
365     365     2013-03-07  08:02:05:307  t02     bR U00103w
374     374     2013-03-07  08:02:08:322  t02     bR U00103w
474     474     2013-03-07  08:02:41:307  t02     bR U00103w
1378    1378    2013-03-07  02:25:45:447  t02     bR U00103w
1381    1381    2013-03-07  02:25:46:416  t02     bBFU3
1386    1386    2013-03-07  02:25:49:057  t02     bBFU02405LL
1394    1394    2013-03-07  02:25:52:260  t02     bBFU02405LL
1504    1504    2013-03-07  02:26:42:307  t02     bBFU02405LL
1510    1510    2013-03-07  02:26:45:275  t02     bBFU02405LL
1516    1516    2013-03-07  02:26:48:307  t02     bBFP02405LI
1523    1523    2013-03-07  02:26:52:088  t02     bBFP02405LI
1530    1530    2013-03-07  02:26:54:885  t02     bBFP02405LI
1556    1556    2013-03-07  02:27:06:307  t02     bBFP02405LI
1562    1562    2013-03-07  02:27:09:307  t02     bBFP02405LI
1568    1568    2013-03-07  02:27:12:307  t02     bR L02405o
1574    1574    2013-03-07  02:27:15:338  t02     bBCL/
1580    1580    2013-03-07  02:27:18:635  t02     bBCL00103RO
1587    1587    2013-03-07  02:27:21:307  t02     bBCL00103RO
1714    1714    2013-03-07  02:28:21:291  t02     bBCD00103RG
1721    1721    2013-03-07  02:28:24:291  t02     bBCD00103RG
1728    1728    2013-03-07  02:28:27:338  t02     bBCD00103RG
1734    1734    2013-03-07  02:28:30:291  t02     bBCD00103RG
1740    1740    2013-03-07  02:28:33:447  t02     bR U00103w
1996    1996    2013-03-07  02:30:33:291  t02     bR U00103w

The end result must look like this:

ID      Field1  Field2      Field3        Field5  Field7
357     357     2013-03-07  08:02:02:275  t02     bBCD00103RG
365     365     2013-03-07  08:02:05:307  t02     bR U00103w
1381    1381    2013-03-07  02:25:46:416  t02     bBFU3
1386    1386    2013-03-07  02:25:49:057  t02     bBFU02405LL
1516    1516    2013-03-07  02:26:48:307  t02     bBFP02405LI
1568    1568    2013-03-07  02:27:12:307  t02     bR L02405o
1574    1574    2013-03-07  02:27:15:338  t02     bBCL/
1580    1580    2013-03-07  02:27:18:635  t02     bBCL00103RO
1714    1714    2013-03-07  02:28:21:291  t02     bBCD00103RG
1740    1740    2013-03-07  02:28:33:447  t02     bR U00103w

I want the first occurrence of each value of Field7: I want to compare Field7 with its value in the previous row, and output the row only if it is different.
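For illustration, a hedged sketch of one way to express "keep a row only when Field7 differs from the previous row" using a window function (this assumes a DBMS that supports LAG(), such as SQL Server 2012+ or PostgreSQL, orders rows by ID, and uses my_table as a stand-in for the real table name):

WITH ordered AS (
    SELECT t.*,
           LAG(Field7) OVER (ORDER BY ID) AS prev_field7
    FROM my_table t
)
SELECT ID, Field1, Field2, Field3, Field5, Field7
FROM ordered
WHERE prev_field7 IS NULL      -- first row
   OR prev_field7 <> Field7;   -- value changed from the previous row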

Single disk for Data, System, User, Temp and Backup when installing Fail Over Cluster?

Posted: 02 Oct 2013 09:45 AM PDT

I have a Windows cluster with two nodes. I am trying to set up a failover cluster using SQL Server 2012.

On both nodes there is 2 TB storage available which I can access as:

C:\Storage_For_Cluster\Volume1

So basically a 2 TB SAN is mapped on both nodes as above.

My question is: can I build the failover cluster with only one disk, with all data saved on it? Or is it better to use a separate drive for each? If you take a look at the screenshot below, you will get an idea of what I am talking about. As you can see, we are using two drives, Z and X, for different options.

Since in this case I only have one drive, can I use it for all of these options?

Secondly, MSDTC is not installed yet (which I think is also required for a failover cluster), so I wanted to know whether I should install MSDTC on the same drive mentioned above as well.

(Screenshot: cluster disk selection showing the Z and X drives assigned to different options.)

Issues installing MySQL server on Ubuntu 13.04

Posted: 02 Oct 2013 03:29 PM PDT

I'm currently trying to install a MySQL server on my Ubuntu 13.04 machine. The problem is that when I try to install it, I get error messages indicating that not all packages could be downloaded. When I run sudo apt-get install mysql-server, after hanging for a while at reading headers, the console reads as follows:

Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  libaio1 libdbd-mysql-perl libdbi-perl libhtml-template-perl libmysqlclient18 libnet-daemon-perl libplrpc-perl
  mysql-client-5.5 mysql-common mysql-server-5.5 mysql-server-core-5.5
Suggested packages:
  libipc-sharedcache-perl tinyca mailx
The following NEW packages will be installed:
  libaio1 libdbd-mysql-perl libdbi-perl libhtml-template-perl libmysqlclient18 libnet-daemon-perl libplrpc-perl
  mysql-client-5.5 mysql-common mysql-server mysql-server-5.5 mysql-server-core-5.5
0 upgraded, 12 newly installed, 0 to remove and 3 not upgraded.
Need to get 8,077 kB/24.5 MB of archives.
After this operation, 84.9 MB of additional disk space will be used.
Do you want to continue [Y/n]? y
Err http://us.archive.ubuntu.com/ubuntu/ raring-updates/main mysql-client-5.5 i386 5.5.32-0ubuntu0.13.04.1
  Connection failed [IP: 91.189.91.13 80]
Err http://security.ubuntu.com/ubuntu/ raring-security/main mysql-client-5.5 i386 5.5.32-0ubuntu0.13.04.1
  Connection failed [IP: 91.189.92.190 80]
Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/m/mysql-5.5/mysql-client-5.5_5.5.32-0ubuntu0.13.04.1_i386.deb  Connection failed [IP: 91.189.92.190 80]
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

I've tried the suggested fix of running apt-get update and such, I've made sure I have no connections blocked, and I've restarted, uninstalled, reinstalled, etc., all to no avail. I've been searching the web all day for solutions I haven't yet tried, but most people with this issue are running an older version of Ubuntu. Suggestions?

MySQL one-time event never runs?

Posted: 02 Oct 2013 01:25 PM PDT

Please have a look at the events below:

1) create EVENT Test1 ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL 20 second
   ON COMPLETION PRESERVE ENABLE DO ...

2) create EVENT Test2 ON SCHEDULE EVERY 20 SECOND STARTS CURRENT_TIMESTAMP
   ON COMPLETION PRESERVE ENABLE DO ...

I expect event Test1 to run one time after 20 seconds but it never runs. Event Test2 is working fine.

Any idea? Thanks.

OK, sorry, it is the ALTER that is not working.

At first I did: create EVENT Test1 ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL 20 second ON COMPLETION PRESERVE ENABLE DO ...

Then shortly after I did: alter EVENT Test1 ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL 20 second ON COMPLETION PRESERVE ENABLE DO ...

I expected event Test1 to run again in another 20 seconds, but it didn't.
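For what it's worth, a couple of diagnostic queries (a sketch) that show whether the scheduler is running at all and what status and execution time MySQL has recorded for each event:

-- Is the event scheduler actually running?
SHOW VARIABLES LIKE 'event_scheduler';

-- Recorded status, scheduled time and last execution of the two events.
SELECT event_name, status, execute_at, on_completion, last_executed
FROM information_schema.events
WHERE event_name IN ('Test1', 'Test2');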

Slow SSRS Report in production

Posted: 02 Oct 2013 02:25 PM PDT

I have an SSRS report which gets its data by firing a series of stored procedures.

Now the report is timing out badly when run in production, yet when I pull down the production database and restore it to development, the report runs fine.

I was thinking of setting up a SQL Server Profiler trace in production, and hopefully that will tell me something, e.g. high disk I/O at the time it's being run.

What else should I be doing? Something with perfmon?

Cannot install DB2 Express C 10.1 on Mac OS X 10.7.5 [duplicate]

Posted: 02 Oct 2013 11:37 AM PDT


When I invoke ./db2setup, it cannot find the JRE where it expects it. The Java binary is not found, as follows:

/private/tmp/db2/expc/db2/macos/install/db2setup: line 606: /tmp/db2.tmp.740/db2/macos/install/../java/jre/bin/java: No such file or directory

I successfully installed DB2 Express-C 9.7 last summer, back when I was still using the Apple JDK. I am now getting the JDK from Oracle, as it is the only source I know of.

How do I tell the DB2 install where to find Java?

How to run a cold backup with Linux/tar without shutting down MySQL slave?

Posted: 02 Oct 2013 04:25 PM PDT

I run the following before tar-ing up the data directory:

STOP SLAVE;
FLUSH TABLES WITH READ LOCK;
FLUSH LOGS;

However, tar will sometimes complain that the ibdata* and ib_logfile* files were updated during the process. What am I missing?

The slave machine is a cold standby, so there are no client processes running while tar is running.

CentOS release 5.6, 64-bit; MySQL 5.1.49-log, source distribution.

sp_send_dbmail with attachment

Posted: 02 Oct 2013 03:53 PM PDT

SQL Server 2008, connecting via SQL Server Authentication.

I have a sproc in DatabaseA which calls sp_send_dbmail in msdb to send an email with a file attachment. The file is on the db server, not on a remote fileshare.

The SQL account being used is not sysadmin, but does belong to the DatabaseMailUserRole in msdb.

Sending an email without an attachment is fine, but when an attachment is present it gives the error:

The client connection security context could not be impersonated.   Attaching files require an integrated client login  

There are a few articles/posts about this out there, but some seem to say conflicting things. I've been looking into impersonation, and one thing that does work is to do the following in the sproc in DatabaseA:

EXECUTE AS LOGIN = 'sa' -- or any account with sysadmin privileges
EXECUTE msdb..sp_send_dbmail ....
REVERT

I wasn't expecting this to work, as I thought that to send attachments you needed to use Windows Authentication. However, it does work, but it means the lower-privileged SQL account needs to be granted permission to IMPERSONATE sa (or another sysadmin account).

Doing my due diligence as a dev before unleashing a DBA's nightmare into the wild...

My question is: What is a good/safe way of allowing a user authenticated via SQL Server (non sysadmin) to send email attachments from the local db server disk without opening up a security hole?

Update, re: credentials. I've created a new Windows login, created credentials for that account via SSMS, and mapped those credentials to my limited-privileges SQL account. I get the error:

Msg 22051, Level 16, State 1, Line 0
The client connection security context could not be impersonated.
Attaching files require an integrated client login

I must be missing something!

What is the best way to get a random ordering?

Posted: 02 Oct 2013 09:49 AM PDT

I have a query where I want the resulting records to be ordered randomly. The table uses a clustered index, so if I do not include an ORDER BY, it will likely return records in the order of that index. How can I ensure a random row order?

I understand that it will likely not be "truly" random; pseudo-random is good enough for my needs.
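For illustration, the usual pattern in SQL Server is to sort by a per-row random value (a sketch; the mention of a clustered index suggests SQL Server, dbo.MyTable is a stand-in name, and ORDER BY RAND() is the MySQL analogue). It forces a sort, but it shuffles the rows:

-- NEWID() generates a new uniqueidentifier per row, so sorting by it
-- returns the rows in a pseudo-random order.
SELECT TOP (100) *
FROM dbo.MyTable
ORDER BY NEWID();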
