Monday, May 6, 2013

[how to] Tool to view intermediate query results


Tool to view intermediate query results

Posted: 06 May 2013 08:01 PM PDT

OK, I am not sure if this question belongs here, but I am using SQL Server 2008 Express Edition and I've spent two days on a problem where a query seems to be producing a cross join.

I would like to see, visually, the set returned by each join operation.

Is there any tool that can show you intermediate query results? I couldn't find one on Google.
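In the meantime, a low-tech way to inspect intermediate results is to materialize each join step into a temporary table and watch the row counts as you go; a minimal sketch, assuming hypothetical tables TableA and TableB and hypothetical column names:

    -- Step 1: start from the first table and note the row count
    SELECT a.*
    INTO #step1
    FROM TableA AS a;

    SELECT COUNT(*) AS step1_rows FROM #step1;

    -- Step 2: add one join at a time; a sudden jump in the row count
    -- usually points at the join producing the unintended cross product
    SELECT s1.*, b.SomeColumn
    INTO #step2
    FROM #step1 AS s1
    INNER JOIN TableB AS b ON b.AId = s1.Id;

    SELECT COUNT(*) AS step2_rows FROM #step2;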

Range query from month and year

Posted: 06 May 2013 07:53 PM PDT

I'm a newbie in MySQL.

I have a table like this:

ID  month  year  content
1   4      2013  xxxxx
2   5      2013  yyyyy
3   6      2013  zzzzz
4   8      2014  fffff

I want a query that selects rows within a range of month and year.

I have a query like this:

SELECT * FROM UPP WHERE ( month = '4' AND year = '2013' ) AND ( month = '6' AND year = '2013' )   

That query produces no error but returns zero results. Can anyone help me fix it? NB: the month and year columns are integers.

I would very much appreciate your answer.
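For reference, a common way to express an inclusive month/year range is to compare a single combined value, so that rows from April 2013 through June 2013 match; a sketch, assuming the table really is named UPP with integer month and year columns:

    SELECT *
    FROM UPP
    WHERE (year * 100 + month) BETWEEN (2013 * 100 + 4)   -- April 2013
                                   AND (2013 * 100 + 6);  -- June 2013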

Cannot rebuild index, but there's no reason why not?

Posted: 06 May 2013 08:49 PM PDT

I've created a process whereby I am able to rebuild only the indexes that need rebuilding (the process takes an hour and a half if I rebuild them all). While it works beautifully, it gets stuck on one particular index, and I see no reason why it should.

It fails with the following message:

Msg 2725, Level 16, State 2, Line 1

An online operation cannot be performed for index 'I_520CUSTVENDRELIDX' because the index contains column 'MEMO' of data type text, ntext, image, varchar(max), nvarchar(max), varbinary(max), xml, or large CLR type. For a non-clustered index, the column could be an include column of the index. For a clustered index, the column could be any column of the table. If DROP_EXISTING is used, the column could be part of a new or old index. The operation must be performed offline.

However, when I run the query based on a suggestion by this chap, shown below, I get no results:

SELECT *
FROM sys.index_columns AS ic
INNER JOIN sys.columns AS c
    ON ic.object_id = c.object_id
    AND ic.column_id = c.column_id
    AND ((c.system_type_id IN (34,35,99,241)) -- image, text, ntext, xml
      OR (c.system_type_id IN (167,231,165)   -- varchar, nvarchar, varbinary
          AND max_length = -1))
INNER JOIN sys.indexes as si
    on si.object_id = ic.object_id
    AND ic.index_id = si.index_id
inner join sys.tables t
    on t.object_id = ic.object_id
where t.name = 'CONTACTPERSON'
  and si.name = 'I_520CUSTVENDRELIDX'

On top of that, a manual inspection of the index in question shows no text, ntext, image, xml or varchar(MAX), nvarchar(MAX) or varbinary(MAX). Could there be something I'm missing here?

For the record, this is a clustered index.
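Since the index is clustered, the error applies if any column of the underlying table is a LOB type, not just the columns that appear in the index definition (as the message itself says). A quick check that drops the join to sys.index_columns and looks at the whole table, sketched under that assumption:

    SELECT c.name, TYPE_NAME(c.system_type_id) AS type_name, c.max_length
    FROM sys.columns AS c
    WHERE c.object_id = OBJECT_ID('dbo.CONTACTPERSON')
      AND (c.system_type_id IN (34, 35, 99, 241)          -- image, text, ntext, xml
           OR (c.system_type_id IN (167, 231, 165)        -- varchar, nvarchar, varbinary
               AND c.max_length = -1));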

mysqlslap chokes on strings that contain ";" delimiter?

Posted: 06 May 2013 06:31 PM PDT

I'm having a problem passing preexisting files full of SQL statements into mysqlslap.

For example, I have a file named create.sql that contains my table structure (as dumped by mysqldump), with normal ; delimiting.

I also have a file called slap.sql (actually a slightly munged general-log-file, but this is a tiny example that reproduces the error) that contains

INSERT INTO comments VALUES ("I like winks ;) and frowns :(");  

And I run:

mysqlslap --delimiter=";" --create create.sql --query thing.sql  

I get the error:

mysqlslap: Cannot run query INSERT INTO comments VALUES ("I like winks    ERROR : You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '"I like winks' at line 1  

Which is consistent with MySQL terminating the statement at the ; that's in the middle of a string.

What can I do to make mysqlslap take --create data from mysqldump and not hork on semicolons embedded in strings in the --query data? Is it possible that mysqlslap does not follow normal parsing rules when you pass a file in to the --query parameter?

Do you know any data model archive website like this one? [closed]

Posted: 06 May 2013 03:04 PM PDT

I found this website with many data models: http://www.databaseanswers.org/data_models/index.htm It's not bad actually, but just out of curiosity, do you know any other sites like it? Thanks

Writing with multiple psql processes to database seems to damage data

Posted: 06 May 2013 05:31 PM PDT

I have a couple terabytes of CSV data that I am trying to import into a PostgreSQL 8.4 database (on a RedHat 6.2 server), whose data directory is initialized on a multipath hardware RAID. There are four folders of CSV data that need to be imported, and the import script acts according to what it finds in those directories, so right now it's simplest for me to run the import script separately for each directory.

I have run these scripts serially on a Debian server (without multipath) before, waiting for each script to finish, and that worked. However, when I had to re-import later on this RedHat system, I decided to fire up four screen sessions and blast away. Unfortunately, something I'm doing here is destroying the filesystem:

filesystem errors

(the ls access errors are input/output errors)

There should be a postgres data directory here with ownership postgres.postgres named pgsqldb, but it's now gone. Worse, when I drop into a psql prompt to look at the database, the tables are listed, but only data from the first import script has been imported properly. If I stop the postmaster, unmount, and run fsck, I don't get that directory back either.

What's going on here? I was assured the multipath drivers and mounts for the RAID volume in question are working, so I don't think it's the hardware at this point. For reference, each script adds about 105,000 points every couple of seconds to a table in the database.

Here's the import script code:

#!/bin/bash
# run as: /pathtoshfile/cell_import.sh $(pwd)/data_files in IMPORT_DATA dir

for csv_file in $@
do
    myfilename=`basename $csv_file`
#    echo $myfilename
    i=${myfilename:0:4}
    j=${myfilename:5:4}
    grid=${myfilename:17:1}

    echo "loading grid $grid, i=$i, j=$j from file $csv_file"

    psql db <<SQLCOMMANDS
CREATE TEMPORARY TABLE timport (LIKE data10min);
COPY timport (point_date,gmt_time,surface_skin_temp_k,surface_pressure_mb,accum_precip_kg_per_m2,agl_2m_humid_g_per_kg,down_shortwave_rad_flux_w_per_m2,down_longwave_rad_flux_w_per_m2,agl_10m_temp_k,agl_10m_windspd_m_per_s,agl_10m_winddir_deg,agl_50m_temp_k,agl_50m_windspd_m_per_s,agl_50m_winddir_deg,agl_temp_k,agl_80m_windspd_m_per_s,agl_80m_winddir_deg,agl_100m_temp_k,agl_100m_windspd_m_per_s,agl_100m_winddir_deg,agl_200m_temp_k,agl_200m_windspd_m_per_s,agl_200m_winddir_deg) FROM '$csv_file' WITH CSV;
UPDATE timport SET grid_id = '$grid', grid_i=$i, grid_j=$j;
INSERT INTO data10min SELECT * FROM timport;
SQLCOMMANDS
done

Sample script output:

COPY 105408
UPDATE 105408
INSERT 0 105408
loading grid E, i=0135, j=0130 from file /media/backup1/****_DATA/E/0135_0130.****.E.txt
CREATE TABLE
COPY 105408
UPDATE 105408
INSERT 0 105408
loading grid E, i=0135, j=0131 from file /media/backup1/****_DATA/E/0135_0131.****.E.txt
CREATE TABLE
COPY 105408
UPDATE 105408
INSERT 0 105408

When to use MOLAP or Tabular

Posted: 06 May 2013 02:51 PM PDT

What choices and factors do I need to consider when deciding whether my data mart database should use SSAS MOLAP or Tabular?

Is there a guideline on which contexts call for MOLAP and which for Tabular?

pros/cons of different ways to store whether a record is one of two options?

Posted: 06 May 2013 02:31 PM PDT

I am trying to store whether an address is a Work address or a Home address. There will never be another type of address.

I'm wondering what the pros/cons are of the different ways to store this, and whether there is an accepted 'style' for this type of situation which is considered best practice.

Would it be better to:

  1. have a single IsHome bool column, and if it's false, just assume it's a work address?
  2. have both an IsHome and an IsWork column?
  3. have an AddressType column holding an ID that corresponds to another table which lists Work and Home with their IDs?
  4. or something I have not considered?

The third option seems a little cleaner; however, needing to join every time seems inefficient.
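For reference, a minimal sketch of option 3 with a lookup table, using hypothetical table and column names (adjust types to your RDBMS):

    CREATE TABLE AddressType (
        AddressTypeId INT PRIMARY KEY,
        Name          VARCHAR(20) NOT NULL UNIQUE   -- 'Home' or 'Work'
    );

    CREATE TABLE Address (
        AddressId     INT PRIMARY KEY,
        AddressTypeId INT NOT NULL REFERENCES AddressType (AddressTypeId),
        Line1         VARCHAR(200) NOT NULL
        -- ... other address columns ...
    );

    -- The join is against a tiny, indexed lookup table, so the cost is usually negligible;
    -- queries that only filter on the type can skip the join and compare AddressTypeId directly.
    SELECT a.AddressId, a.Line1, t.Name
    FROM Address AS a
    JOIN AddressType AS t ON t.AddressTypeId = a.AddressTypeId
    WHERE t.Name = 'Home';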

Transfer data from DB2 database to Oracle database

Posted: 06 May 2013 02:55 PM PDT

I want to transfer data from an old DB2 system to a new Oracle database. How should I go about doing this?

How can I query data from a linked server, and pass it parameters to filter by?

Posted: 06 May 2013 03:09 PM PDT

I have a really big query that needs to be run on multiple databases, and the results appended to a temp table and returned.

The basic syntax looks something like this:

INSERT INTO #tmpTable (Id, ...)

SELECT T1.Id, ...
FROM Server.Database.dbo.Table1 as T1
INNER JOIN #tmpIds as T ON T1.Id = T.Id

INNER JOIN Server.Database.dbo.Table2 as T2 ON T1.Id = T2.Id
INNER JOIN Server.Database.dbo.Table3 as T3 ON T2.Id = T3.Id
LEFT OUTER JOIN Server.Database.dbo.Table4 as T4 ON T3.Id = T4.Id
LEFT OUTER JOIN Server.Database.dbo.Table5 as T5 ON T4.Id = T5.Id
LEFT OUTER JOIN Server.Database.dbo.Table6 as T6 ON T5.Id = T6.Id

The query runs quickly if run locally on the individual servers; however, it takes a long time to run if it's run from a linked server using the 4-part names like above.

The problem appears to be that it's querying the linked server for the unfiltered result set first, then joining it to the #tmpIds table on the local server afterwards, which makes the query take a very long time to run.

If I hardcode the Ids to filter the result set on the linked server, such as

SELECT T1.Id, ...
FROM Server.Database.dbo.Table1 as T1
-- INNER JOIN #tmpIds as T ON T1.Id = T.Id
INNER JOIN Server.Database.dbo.Table2 as T2 ON T1.Id = T2.Id
INNER JOIN Server.Database.dbo.Table3 as T3 ON T2.Id = T3.Id
LEFT OUTER JOIN Server.Database.dbo.Table4 as T4 ON T3.Id = T4.Id
LEFT OUTER JOIN Server.Database.dbo.Table5 as T5 ON T4.Id = T5.Id
LEFT OUTER JOIN Server.Database.dbo.Table6 as T6 ON T5.Id = T6.Id

WHERE T1.Id IN (1, 2, 3)

it runs quickly in just a few seconds.

Is there a way to run this query so it filters the result set of the query from the linked server by the #tmpId table first, before returning the result set to the local server?

Some things to note

  • The query is very large and complex, and Dynamic SQL is not a viable option for it due to the maintenance nightmare that causes.

    I would be open to suggestions about how to use Dynamic SQL for something else though, such as running a stored procedure or UDF if there's a way to do that over a linked server (tried a few different methods such as sp_executeSQL, OPENROWSET, and OPENQUERY, but those all failed).

  • Because it uses the 4-part naming convention, I cannot use a UDF on the remote server
  • Distributed Transactions are disabled, so the following does not work

    INSERT INTO #table
    EXEC Server.Database.dbo.StoredProcedure @ids
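One thing that is sometimes worth testing in this situation is the REMOTE join hint, which asks SQL Server to perform the join on the remote side so the local #tmpIds values are shipped across instead of the remote table being pulled back unfiltered. This is only a sketch; whether the optimizer honors it depends on the query and the provider, and the remaining joins are elided as in the original:

    SELECT T1.Id, ...
    FROM #tmpIds AS T
    INNER REMOTE JOIN Server.Database.dbo.Table1 AS T1 ON T1.Id = T.Id
    INNER JOIN Server.Database.dbo.Table2 AS T2 ON T1.Id = T2.Id
    -- ... remaining joins as in the original query ...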

TokuDB not faster then mysql

Posted: 06 May 2013 02:32 PM PDT

I have converted a MySQL database with 80,000,000 rows to TokuDB.

Now when I run SELECT COUNT(id) FROM xxx WHERE active=1, it takes 90% of the time the same query took on the original MySQL table.

What do I have to optimize further to make it faster?

Best regards,

Andreas

The table definition:

CREATE TABLE `adsDelivered` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `uid` varchar(40) NOT NULL,
  `_adsDelivered` bigint(20) NOT NULL DEFAULT '0',
  `_campaign` bigint(20) NOT NULL DEFAULT '0',
  `_ad` bigint(20) NOT NULL DEFAULT '0',
  `session` varchar(44) NOT NULL,
  `referer` text NOT NULL,
  `refererDomain` varchar(256) NOT NULL,
  `pageTime` int(11) NOT NULL DEFAULT '0',
  `pageVisibleTime` int(11) NOT NULL DEFAULT '0',
  `browser` varchar(256) NOT NULL,
  `ip` varchar(15) NOT NULL,
  `clicks` int(11) NOT NULL DEFAULT '0',
  `clickTimeLast` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
  `tag` varchar(256) NOT NULL,
  `countryShort` varchar(2) NOT NULL,
  `timeCreated` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `timeUpdated` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
  PRIMARY KEY (`id`),
  UNIQUE KEY `uid` (`uid`),
  KEY `_campaign` (`_campaign`),
  KEY `_ad` (`_ad`),
  KEY `_adsDelivered` (`_adsDelivered`),
  KEY `session` (`session`),
  KEY `tag` (`tag`),
  KEY `ip` (`ip`),
  KEY `countryShort` (`countryShort`),
  KEY `refererDomain` (`refererDomain`)
) ENGINE=TokuDB AUTO_INCREMENT=7420143 DEFAULT CHARSET=utf8;

I have also put the table definition here: http://pastebin.com/yD1gi8ph
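One thing worth checking first is whether the count is doing a full table scan. Note that the query filters on an active column that does not appear in the posted CREATE TABLE at all, so the column and index names below are assumptions for illustration only:

    -- See whether the WHERE clause can use an index at all
    EXPLAIN SELECT COUNT(id) FROM adsDelivered WHERE active = 1;

    -- If "key" is NULL in the EXPLAIN output, an index on the filter column
    -- lets the engine count from the index instead of scanning every row
    ALTER TABLE adsDelivered ADD KEY idx_active (active);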

sql server db log file issues

Posted: 06 May 2013 11:24 AM PDT

We're consolidating data from a bunch of databases into four reporting databases each night.

Because the entire dataset is imported each night, we do not need to be able to restore the data to a point in time, so the databases are in the simple recovery model. Each time we run the import, however, our database .ldf files grow to absurdly large sizes (50+ GB). Is there a way to turn off logging altogether, or to get SQL Server to clear those log files sooner? I'm guessing clearing won't help, since log_reuse_wait_desc is ACTIVE_TRANSACTION.
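Logging cannot be switched off entirely, but under the simple recovery model the log space is reused once the transactions that need it commit, so breaking the import into smaller transactions usually keeps the .ldf from ballooning. A couple of hedged checks, assuming a hypothetical database named Reporting1 and hypothetical table and log file names:

    -- Confirm what is pinning the log
    SELECT name, recovery_model_desc, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = 'Reporting1';

    -- Import (or purge) in smaller batches so each transaction commits and its log space can be reused
    WHILE 1 = 1
    BEGIN
        DELETE TOP (50000) FROM dbo.StagingTable;   -- or INSERT ... SELECT in per-batch chunks
        IF @@ROWCOUNT = 0 BREAK;
    END;

    -- After the import, shrink the log back if it has already grown
    DBCC SHRINKFILE (Reporting1_log, 1024);   -- target size in MB; logical log file name assumed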

Can I get notification when event occur?

Posted: 06 May 2013 12:14 PM PDT

SQL Server has Traces and Extended Events (XEvents). These are used to capture and analyze what is going on with our SQL Server instances. Events are stored for later analysis. For example, if I decide to monitor deadlocks in the database, I just query the trace file to see the history of deadlocks over a period of time. Here is my question:

When an event occurs, in our example a deadlock event, is there a way to get an email notification using msdb.dbo.sp_send_dbmail?
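As an aside, one common pattern is a SQL Server Agent alert on the deadlock error (1205) that notifies an operator by email through Database Mail; a sketch, assuming an operator named 'DBA Team' already exists:

    -- Alert whenever error 1205 (deadlock victim) is raised
    -- Note: 1205 is not written to the error log by default, and alerts only fire for
    -- logged errors, so the message typically has to be marked WITH_LOG (see sp_altermessage).
    EXEC msdb.dbo.sp_add_alert
        @name = N'Deadlock detected',
        @message_id = 1205,
        @severity = 0,
        @enabled = 1,
        @delay_between_responses = 60;

    -- Send the notification by email to an existing operator
    EXEC msdb.dbo.sp_add_notification
        @alert_name = N'Deadlock detected',
        @operator_name = N'DBA Team',
        @notification_method = 1;   -- 1 = email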

How should permissions for anonymous users be modelled?

Posted: 06 May 2013 11:01 AM PDT

I'm designing a web application. There will be users who log into the site, but also anonymous, non-authenticated users, i.e. any member of the public who accesses the site. Users will be assigned to groups, and those groups will be assigned permissions. Some site content will be accessible only to authenticated users, while other content may be marked as publicly accessible. I'm considering how best to model facts such as "this item is publicly accessible". Options that occur to me so far:

  1. Create a special group that a special user "Anonymous" belongs to. Assign permissions to the "Anonymous" group, and pretend that unauthenticated users are all the Anonymous user.

  2. On each content entity, include an attribute such as is_publicly_accessible.

Any thoughts on which is the right way to go? The first option feels like a more consistent approach to permissions, but the notion of some users and groups being special/fake/dummy doesn't feel right.
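For what it's worth, a minimal sketch of option 1 where the anonymous case is just another group rather than a special user; the table and column names are hypothetical and the types are generic SQL to adjust for your RDBMS:

    CREATE TABLE groups (
        group_id     INT PRIMARY KEY,
        name         VARCHAR(50) NOT NULL UNIQUE,
        is_anonymous BOOLEAN NOT NULL DEFAULT FALSE   -- flag the group used for unauthenticated visitors
    );

    CREATE TABLE content (
        content_id INT PRIMARY KEY,
        title      VARCHAR(200) NOT NULL
    );

    CREATE TABLE content_permissions (
        group_id   INT NOT NULL REFERENCES groups (group_id),
        content_id INT NOT NULL REFERENCES content (content_id),
        PRIMARY KEY (group_id, content_id)
    );

    -- "Is this item publicly accessible?" becomes the same query as any other permission check,
    -- with the anonymous group substituted for the visitor's groups.
    SELECT 1
    FROM content_permissions cp
    JOIN groups g ON g.group_id = cp.group_id
    WHERE cp.content_id = 42
      AND g.is_anonymous = TRUE;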

Table corruption: How to perform Innodb Checksum checks in MySQL 5.5 for Windows?

Posted: 06 May 2013 02:14 PM PDT

Having a corrupted MySQL 5.5.31 (Windows) database, my question relates to the top solution provided in "How do you identify InnoDB table corruption?", more precisely to the following script that is supposed to tell you which tables are corrupted:

#!/bin/bash
for i in $(ls /var/lib/mysql/*/*.ibd)
do
  innochecksum -v $i
done

In fact I have two questions:

  1. Where do you execute such a script? I thought the scripting shell in MySQL Workbench would do the job by saving this snippet as a Python file and then executing it; however, it reports invalid syntax already in the "for ..." line.
  2. According to http://dev.mysql.com/doc/refman/5.5/en/innochecksum.html, innochecksum is a utility provided by MySQL/Oracle. However, I cannot find it in the bin or other folders of my MySQL installation. How do I obtain it?

UPDATE: As I did not trust my own MySQL installation, I downloaded the zip files for both the 32-bit and 64-bit versions of 5.5.31, but I can confirm that an innochecksum binary is not included.

Thanks.
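As an aside, that script is a bash script meant to be run from an ordinary shell on the database host (with the MySQL server shut down, since innochecksum works on closed files), not from inside Workbench. On Windows, where the innochecksum binary may not be shipped, a rough in-server alternative is to let the server itself read every page of each suspect table; a sketch with hypothetical names:

    -- Runs inside the server; reports an error if a page checksum fails
    CHECK TABLE mydb.mytable;

    -- A full scan also tends to surface corrupt pages in the error log
    SELECT COUNT(*) FROM mydb.mytable;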

Converting Non Unicode string to Unicode string SSIS

Posted: 06 May 2013 01:44 PM PDT

I am creating a package where I will be exporting data from a database into an empty excel file. When I added only the source and destination components and I ran the package I got a conversion error stating Output column and column "A" cannot convert between unicode and non-unicode string data types.

To fix this I added a data conversion component and converted all the columns to

"Unicode String [DT_WSTR]"

and I no longer received the error. The only problem is that I had about 50 columns where I had to go 1 by 1 and select "Unicode String [DT_WSTR]" from the drop down list. I then had to go into the destination component and map the newly converted columns to my excel file.

My question is, if anyone else has come across this, is there a better, more efficient way to get around having to do all the manual data type conversions? Having to convert and map all the columns one by one doesn't seem too practical, especially if you have a large number of columns.

I understand Excel files are not the best way to go for importing and exporting data, but it is what is required in this particular case.

I might look for a way to just export to a flat text file and then try to convert to Excel as a last step in the package. I'm hoping this won't trigger the same unicode / non-unicode conversion error.
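One way to avoid the per-column Data Conversion component, sketched here with hypothetical column names, is to do the conversion in the source query itself so the columns already arrive as Unicode (DT_WSTR):

    -- OLE DB Source: use a SQL command instead of "table or view"
    SELECT CAST(CustomerName AS NVARCHAR(255)) AS CustomerName,
           CAST(City         AS NVARCHAR(100)) AS City,
           CAST(Country      AS NVARCHAR(100)) AS Country
           -- ... repeat for the remaining varchar columns ...
    FROM dbo.SourceTable;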

Is replication from SQL Server 2008 to PostgreSql possible?

Posted: 06 May 2013 11:05 AM PDT

Is it possible? SQL Server as publisher (master) and PostgreSQL as subscriber (slave)? Any type of replication, really.

MySQL high CPU usage (MyISAM table indexes)

Posted: 06 May 2013 02:18 PM PDT

I have a problem with an inherited MySQL database. From time to time mysqld uses up to 2300% CPU. The only solution is to run service mysql stop and then myisamchk -r on a table. After the indexes have been fixed, I start MySQL and everything is OK.

Any ideas for a permanent solution?

Edit (from the comments):

Using 5.5.29-0ubuntu0.12.04.2-log

  key_buffer = 16M
  max_allowed_packet = 16M
  thread_stack = 128K
  thread_cache_size = 8
  myisam-recover = BACKUP
  max_connections = 500
  #table_cache = 512
  #thread_concurrency = 10
  query_cache_limit = 1M
  query_cache_size = 16M

SELECT SUM(index_length) ndxsize
FROM information_schema.tables
WHERE engine='MyISAM'

returns

+----------+
| ndxsize  |
+----------+
| 59862016 |
+----------+

SELECT SUM(data_length+index_length)/power(1024,2) datndxsize
FROM information_schema.tables
WHERE engine='MyISAM'

returns:

+--------------------+
| datndxsize         |
+--------------------+
| 488.69915199279785 |
+--------------------+

The server has 16 GB of RAM, but it is not a dedicated DB server; it is also running nginx + php-fpm.
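Since the repeated myisamchk -r runs point at recurring MyISAM index corruption (often triggered by crashes or killed connections mid-write), a commonly suggested permanent fix is moving the affected tables to InnoDB, which recovers automatically from its transaction log. A sketch, assuming a hypothetical table name and that nothing relies on MyISAM-only features such as FULLTEXT indexes (not available in InnoDB on 5.5):

    -- List tables that are still MyISAM
    SELECT table_schema, table_name
    FROM information_schema.tables
    WHERE engine = 'MyISAM'
      AND table_schema NOT IN ('mysql', 'information_schema', 'performance_schema');

    -- Convert a table (locks it for the duration of the rebuild)
    ALTER TABLE mydb.mytable ENGINE=InnoDB;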

Total Memory used by SQL Server (64 bit)

Posted: 06 May 2013 12:29 PM PDT

My knowledge on the subject suggests that the perf counter SQL Server:Memory Manager: Total Server Memory only gives you buffer pool memory. There is a column called physical_memory_in_use in a DMV named sys.dm_os_process_memory that gives you the physical working set.

But I am not sure ...

Is there a DMV or perf counter that could tell me the total (buffer pool + non-buffer pool, i.e. MemToLeave) physical memory used by the SQL Server process, for 64-bit SQL Server 2008 R2 and SQL Server 2012 running on a 64-bit Windows OS?
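For reference, a hedged example of reading the process-level working set (buffer pool plus everything else) from that DMV; the column names are as documented for 2008 R2/2012 and may differ slightly on older builds:

    SELECT physical_memory_in_use_kb / 1024.0  AS physical_memory_in_use_mb,
           locked_page_allocations_kb / 1024.0 AS locked_pages_mb,
           memory_utilization_percentage
    FROM sys.dm_os_process_memory;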

MySQL slap with custom query

Posted: 06 May 2013 03:07 PM PDT

I want to conduct a stress test on our MySQL DB. I have the list of queries I need to execute. I have tried using Apache JMeter for this, but it is very time consuming. Is it possible to run mysqlslap with a custom .sql file containing INSERT, UPDATE, and SELECT queries against a specified MySQL database?

Overview of how MongoDB uses its various threads

Posted: 06 May 2013 01:05 PM PDT

On one instance I have MongoDB using ~85 threads. Lacking the time to investigate directly, I am curious:

  1. What are the threads used for? Do they fall into a few high-level usage categories?
  2. How can I control/influence the number of threads used? E.g. max connection params, etc.

upgrade mysql 5.1 to 5.5 on Ubuntu 11.10

Posted: 06 May 2013 07:05 PM PDT

I currently have mysql server 5.1 installed via apt-get on my production Ubuntu 11.10 server

root@Ubuntu-1110-oneiric-64-minimal$ dpkg --get-selections | grep sql-server
mysql-server                    install
mysql-server-5.1                install
mysql-server-core-5.1           install

I would like to upgrade this to 5.6, but the mysql docs seem to suggest upgrading to 5.5 first, and from there to 5.6.

I've seen various lengthy guides describing how to upgrade from 5.1 to 5.5, but they all seem to describe how to upgrade by installing the tarball rather than using the package manager. Is there a simpler way to upgrade using the package manager, given that the current version was installed using apt-get?

Obviously I want my existing configuration and databases to be retained after the upgrade and I will be sure to backup my databases using mysqldump before performing the upgrade.

MySQL information_schema doesn't update

Posted: 06 May 2013 08:05 PM PDT

I have a database, say abc, in a MySQL server. It has only one table named test. The test table uses the InnoDB engine, and I've set innodb_file_per_table to true.

After I run the query delete from abc.test, I want to calculate the database size of abc. Here is the query I use:

SELECT
    table_schema "name",
    sum( IF(engine = "MyISAM", data_length + index_length - data_free,
    data_length + index_length)) "size"
FROM information_schema.TABLES
where table_schema like "abc";

The strange thing is that the database size doesn't decrease at all, even though the data in test is gone.

I've done this kind of test many times; this strange behavior happens sometimes.

I'm using percona mysql server 5.5.29-rel29.4.

Can anybody tell me what is wrong?

Update:

Actually, I use another thread to check the database size periodically.
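Note that this is expected behavior for InnoDB: DELETE marks space as free inside the .ibd file but does not shrink it, and the data_length/index_length figures in information_schema are estimates that are refreshed lazily. A couple of hedged statements if you want to force the numbers to change:

    -- Refresh the statistics that information_schema reports
    ANALYZE TABLE abc.test;

    -- Rebuild the table; with innodb_file_per_table enabled this actually shrinks abc/test.ibd
    OPTIMIZE TABLE abc.test;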

How to succesfully run a batch file in an SQL Agent job?

Posted: 06 May 2013 05:05 PM PDT

I have a SQL Agent job which generates a specific report as a PDF file, copies the PDF to a network directory, and then deletes the PDF file from the source directory.

The SQL job consists of 2 steps:

  1. Generate the report.
  2. Copy the report to the network location.

For step 2 I made a bat-file which handles the copying and removal of the pdf file.

The bat-file is as follows:

set source_dir=C:\Reports\Energie\Uur
set dest_dir=\\KWS2-WEB-SERVER\Share\Reports\Uur

C:\Windows\System32\Robocopy.exe %source_dir% %dest_dir% /MOV /Z

However, when I run the job, it hangs on the second step. The status just stays at "Executing".

This is the line I entered in the step (the location of the bat file to execute):

cmd.exe /c "C:\Reports\rapport_uur_copy.bat"  

My job-settings are as follows:

Step 1

Type: Operating system (CmdExec) On Success: Go to the next step

On Failure: Quit the job reporting failure

Step 2

Type: Operating system (CmdExec)

On Success: Quit the job reporting success

On Failure: Quit the job reporting failure

Some facts:

  • I have read/write permissions on the network directory
  • I run the steps as the Administrator-account (the logged in user, default)
  • Step 1 succeeds
  • I run Windows Server 2008 R2 Standard
  • I have SQL Server 2008 R2
  • When I run the script from the cmd manually, it works (under Administrator account).

MySQL partitioned tables?

Posted: 06 May 2013 02:51 PM PDT

I have a database that supports a web application with several large tables. I'm wondering if partitioned tables will help speed up certain queries. Each of these tables has a column called client_id. Data for each client_id is independent from every other client_id. In other words, web queries will always contain a where clause with a single client_id. I'm thinking this may be a good column on which to partition my large tables.

After reading up on partitioned tables, I'm still a little unsure as to how best to partition. For example, a typical table may have 50 million rows distributed more or less evenly across 35 client_ids. We add new client_ids periodically but in the short term the number of client_ids is relatively fixed.

I was thinking something along these lines:

CREATE TABLE foo (
    id INT NOT NULL PRIMARY KEY,
    ... more column defs here...
    client_id int
    )
PARTITION BY KEY(client_id)
PARTITIONS 35;

My question: is this an optimal strategy for partitioning these types of tables? My tests indicate a considerable speedup over indexing on client_id, but can I do better with some other form of partitioning (e.g. hash or range)?
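Whichever scheme you test, one practical check is whether your typical queries actually prune down to a single partition; note also that KEY (and HASH) partitioning requires client_id to be part of every unique key on the table, including the primary key. A sketch of the pruning check:

    -- Shows which partitions MySQL will touch; ideally the "partitions" column lists just one
    EXPLAIN PARTITIONS
    SELECT *
    FROM foo
    WHERE client_id = 17;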

Experience using ScaleArc in test or production?

Posted: 06 May 2013 11:12 AM PDT

Has anyone had any experience using ScaleArc?

My CTO has asked for my thoughts on it, and I have seen no information out there regarding real-world experiences.

MySQL user defined rollback procedure

Posted: 06 May 2013 03:08 PM PDT

I'm attempting to write my own mini-rollback procedure. I have a table that tracks any updates or deletes to another table using a trigger. I am attempting to make it possible to restore one or more of these tracked changes through the use of a procedure. However, I'm receiving a syntax error with the following:

-- UNDO_CHANGES_PROCEDURE - This copies the values of the work log track table back into the relevant record in the work log table if an accidental edit or deletion is made.
DROP PROCEDURE IF EXISTS UNDO_CHANGES_PROCEDURE;

DELIMITER $$

CREATE PROCEDURE UNDO_CHANGES_PROCEDURE(ID INT(6))
BEGIN
DECLARE var_trig CHAR(8);

SET var_trig = (SELECT TRIGGER_TYPE FROM WORK_LOG_TRACK WHERE WORK_LOG_TRACK.WORK_LOG_EDIT_NUMBER = ID);

IF var_trig = 'Update' THEN
UPDATE WORK_LOG SET ENTRY_NUMBER = WORK_LOG_TRACK.ENTRY_NUMBER_FK, EMPLOYEE_ID_FK = WORK_LOG_TRACK.EMPLOYEE_ID_FK, WORK_ORDER_NUMBER_FK = WORK_LOG_TRACK.WORK_ORDER_NUMBER_FK, ENTRY_TIME = WORK_LOG_TRACK.ENTRY_TIME, WORK_DONE = WORK_LOG_TRACK.WORK_DONE WHERE WORK_LOG_TRACK.WORK_LOG_EDIT_NUMBER = ID AND WORK_LOG.ENTRY_NUMBER = WORK_LOG_TRACK.ENTRY_NUMBER_FK;
ELSE
INSERT INTO WORK_LOG(ENTRY_NUMBER, EMPLOYEE_ID_FK, WORK_ORDER_NUMBER_FK, ENTRY_TIME, WORK_DONE) VALUES (WORK_LOG_TRACK.ENTRY_NUMBER_FK, WORK_LOG_TRACK.EMPLOYEE_ID_FK, WORK_LOG_TRACK.WORK_ORDER_NUMBER_FK, WORK_LOG_TRACK.ENTRY_TIME, WORK_LOG_TRACK.WORK_DONE) WHERE WORK_LOG_TRACK.WORK_LOG_EDIT_NUMBER = ID;
END IF;
END;
$$

DELIMITER ;

The syntax error is reported against my UPDATE statement; any help or suggestions would be appreciated.
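For context, two MySQL patterns that this procedure seems to be reaching for, shown as a sketch against the same (assumed) schema: referencing a second table in an UPDATE requires joining it, and an INSERT that takes its values from another table uses INSERT ... SELECT rather than VALUES ... WHERE:

    -- Update WORK_LOG from the tracked values (multi-table UPDATE syntax)
    UPDATE WORK_LOG wl
    JOIN WORK_LOG_TRACK wlt ON wlt.ENTRY_NUMBER_FK = wl.ENTRY_NUMBER
    SET wl.EMPLOYEE_ID_FK = wlt.EMPLOYEE_ID_FK,
        wl.WORK_DONE      = wlt.WORK_DONE
    WHERE wlt.WORK_LOG_EDIT_NUMBER = ID;

    -- Re-insert a deleted row from the tracked values
    INSERT INTO WORK_LOG (ENTRY_NUMBER, EMPLOYEE_ID_FK, WORK_ORDER_NUMBER_FK, ENTRY_TIME, WORK_DONE)
    SELECT wlt.ENTRY_NUMBER_FK, wlt.EMPLOYEE_ID_FK, wlt.WORK_ORDER_NUMBER_FK, wlt.ENTRY_TIME, wlt.WORK_DONE
    FROM WORK_LOG_TRACK wlt
    WHERE wlt.WORK_LOG_EDIT_NUMBER = ID;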

Enforce a hard limit on write execution time during Amazon RDS write stalls

Posted: 06 May 2013 05:08 PM PDT

I am trying to recover from the situation where Amazon RDS write operations stall (some discussion on why this is here). If I show full processlist, I will see something like this:

318263  myapp1  domU-XX-XX-XX-0F-90-C1.compute-1.internal:55660 mydb    Query   88  Updating    UPDATE `profiles` SET `updated_at` = '2012-05-23 14:24:46', `latitude` = 38.896, `longitude` = -77.0452 WHERE (`profiles`.`id` = 100767)
318264  myapp1  domU-XX-XX-XX-01-60-B1.compute-1.internal:46609 mydb    Query   91  updating    DELETE FROM `unread_message_indices` WHERE (`unread_message_indices`.`sender_id` = 100601) AND (`unread_message_indices`.`recipient_id` = 101515)
318265  myapp1  domU-XX-XX-XX-14-41-C1.compute-1.internal:59277 mydb    Query   88  Updating    UPDATE `inbox_profile_indices` SET `updated_at` = '2012-05-23 14:24:22' WHERE (`inbox_profile_indices`.`id` = 127613)

As you can see from the number after query, these have been in the state of "updating" for 80-90 seconds! I am running the largest DB instance, so clearly something bad is happening on the EBS node on which the DB is running.

In these situations, I would prefer the query fail after 1 or 2 seconds, not wait for over 1 min stuck in the "Updating" state. I am using Ruby's ActiveRecord FYI. What is the best way to force a failure in this case after 2 seconds? Should I use innodb_lock_wait_timeout (Don't think so since these tables aren't locked and besides these times are clearly > 50, which is what it is set at). I believe the optimal approach is to set the per-session read timeout and write timeout. In Ruby on Rails this is done by editing the databases.yml file and entering something like:

production:
  ...
  write_timeout: 2
  read_timeout: 10

Will this approach work? Are there any other approaches I can use to more quickly fail these queries instead of having them hang my application threads indefinitely?

Thanks!

Connect to SQL Server Management Studio over VPN (Hamachi)

Posted: 06 May 2013 01:11 PM PDT

I just got my Hamachi VPN set up. For anyone familiar with Hamachi, I have it set up as a gateway so I'm part of the network when I'm away. Almost everything seems to be working perfectly. I can even backup using Windows Home Server if I want.

I cannot connect to my SQL Server from SQL Server Management Studio. Of course, when I'm at home, everything works perfectly. I can communicate with the database server just fine remotely (i.e., ping). I just can't connect with SSMS. The network configuration is at the default (TCP Enabled).

Does anyone know

  • Why I can't connect?
  • How I can determine why I can't?

Extra info:

  1. Using a Workgroup, not a domain.
  2. Using port 1433 to connect
  3. Connecting to the default instance
