Friday, March 8, 2013

[how to] I cannot connect my Microsoft .NET application to a MySQL Server database through the Neor Profile SQL tool

I cannot connect my Microsoft .NET application to a MySQL Server database through the Neor Profile SQL tool

Posted: 08 Mar 2013 08:42 PM PST

As per the documentation, I used port 4040 to route my .NET web app through Neor Profile SQL, but I am not able to make this work.

Can you help me?

Siva Kishore

reddyrm@gmail.com

What are the best monitoring tools available for MySQL?

Posted: 08 Mar 2013 05:21 PM PST

I have set up a Linux MySQL server and would like to monitor it. I found the Percona plugins for Nagios and Cacti. Are there any other tools available?
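
Worth noting that most such tools (Nagios checks, Cacti templates, MySQL Enterprise Monitor, mytop/innotop) ultimately poll the same server counters, which you can also inspect by hand. A quick manual health check might look like:

    -- A few of the counters monitoring plugins typically poll
    SHOW GLOBAL STATUS LIKE 'Threads_connected';
    SHOW GLOBAL STATUS LIKE 'Slow_queries';
    SHOW GLOBAL STATUS LIKE 'Questions';
    SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads';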

Email alerts and Server Side Tracing

Posted: 08 Mar 2013 04:55 PM PST

This question is in regard to SQL Server 2005 and email alerts on long-running queries.

Does the following sound like the best approach (with the least impact on the server) to receiving alerts about long-running queries? (A sketch of the trace setup follows the list.)

  1. Create a server-side trace that monitors across all databases or a specific database for events 10 (RPC:Completed) and 12 (SQL:BatchCompleted).
  2. Set a trace filter on Duration for greater than n amount of time (for example, greater than 2 seconds).
  3. Separate stored procedure that runs every n seconds (for example 5 or 10) to check for new entries and sends out an email alert. (To detect new entries I would need to keep track of the last entry detected but that is trivial.)
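
A minimal sketch of steps 1-2, assuming SQL Server 2005 and a writable trace directory (the path is a placeholder). Event 10 = RPC:Completed, event 12 = SQL:BatchCompleted; column 13 = Duration, column 1 = TextData. SQL Server 2005 traces record Duration in microseconds.

    DECLARE @traceid int, @maxsize bigint, @duration bigint
    SET @maxsize = 100                 -- MB per rollover file
    SET @duration = 2000000            -- 2 seconds, in microseconds
    EXEC sp_trace_create @traceid OUTPUT, 2, N'C:\Traces\LongQueries', @maxsize
    EXEC sp_trace_setevent @traceid, 10, 13, 1   -- RPC:Completed / Duration
    EXEC sp_trace_setevent @traceid, 10, 1,  1   -- RPC:Completed / TextData
    EXEC sp_trace_setevent @traceid, 12, 13, 1   -- SQL:BatchCompleted / Duration
    EXEC sp_trace_setevent @traceid, 12, 1,  1   -- SQL:BatchCompleted / TextData
    EXEC sp_trace_setfilter @traceid, 13, 0, 4, @duration  -- Duration >= 2 s
    EXEC sp_trace_setstatus @traceid, 1          -- start the trace

The polling procedure in step 3 could then read new rows with fn_trace_gettable() and send the alert via Database Mail (sp_send_dbmail).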

Thanks.

Permissions for a MySQL user which will monitor replication status?

Posted: 08 Mar 2013 05:49 PM PST

I'd like to use a script to monitor the replication status of a MySQL database like in this bash script: https://gist.github.com/wesdeboer/1791632

I'd like to create a user which will only be used to query the status of the replication. So basically this user would just need to run the command:

"SHOW SLAVE STATUS \G"  

What is the minimum set of permissions I need to grant a user to allow this?

(Is this even something which can be GRANTed?)
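
For what it's worth, SHOW SLAVE STATUS is governed by a global privilege, so a sketch of such a user (user name, host, and password are placeholders) might be:

    -- REPLICATION CLIENT covers SHOW SLAVE STATUS and SHOW MASTER STATUS;
    -- SUPER would also work but grants far more than monitoring needs.
    CREATE USER 'repl_monitor'@'localhost' IDENTIFIED BY 'secret';
    GRANT REPLICATION CLIENT ON *.* TO 'repl_monitor'@'localhost';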

Schema design: Use of association (aka: bridge/junction) table vs foreign key constraint with composite index containing a non-key field

Posted: 08 Mar 2013 05:27 PM PST

This is an inventory database for IT assets. The models used are trimmed in order to focus on the problem at hand. Using SQL Server 2008. Thanks for taking the time to read and for any input you can provide.

My design includes a Device table which holds the various devices that can be entered into inventory. Each device has a boolean flag, CanNetwork which states whether a device has network capability, e.g., for most computers CanNetwork = true, for hard drives CanNetwork = false; some printers will be true, and others will be false. You get the idea.

The CanNetwork field determines if network-related information is relevant when an inventory record is created.

Design 1

My first design creates an index on Device.DeviceID and Device.CanNetwork for use in a foreign key constraint with the Inventory table.

Schema #1: Foreign key constraint with index

The NetworkStatus table looks like this in this setup:

+----------------------------------------------------------------------+
| NetworkStatusID | NetworkStatus  | NetworkStatusDescription          |
|----------------------------------------------------------------------|
| 1               | Connected      | Device connected to network.      |
| 2               | Not Connected  | Device not connected to network.  |
| 3               | Do Not Connect | Do not connect device to network. |
+----------------------------------------------------------------------+

I put check constraints on the Inventory table as follows to ensure a network status and network information can only be provided if the device is capable of connecting to a network.

-- Inventory table check constraint: CK_CanNetwork
-- If can't network, then network fields are null
CanNetwork = 0
AND NetworkStatusID IS NULL
AND Hostname IS NULL
AND IPAddress IS NULL
AND MACAddress IS NULL
OR CanNetwork = 1

-- Inventory table check constraint: CK_NetworkStatus
-- If network status is "Connected", then must provide
-- a hostname or ip address
NetworkStatusID = 1 AND Hostname IS NOT NULL
OR NetworkStatusID = 1 AND IPAddress IS NOT NULL
OR NetworkStatusID <> 1

The issue I have with this design is that I'm not sure whether the relationship between Inventory and Device.DeviceID/Device.CanNetwork is a good or bad design decision. Is propagating a non-key field like CanNetwork to other tables bad design? I don't have enough experience with database design to make an informed decision.


Design 2

In this design I thought I would use a bridge/association/junction table to decide which network statuses are valid for a device. It looks like this:

Schema #2: Bridge/Association table

The NetworkStatus table looks like this in this setup (notice the addition of the record with id #4, and the ForCanNetwork field, which specifies that a status is for use with devices that can connect to a network):

+--------------------------------------------------------------------------------------+
| NetworkStatusID | NetworkStatus  | NetworkStatusDescription          | ForCanNetwork |
|--------------------------------------------------------------------------------------|
| 1               | Connected      | Device connected to network.      | True  (1)     |
| 2               | Not Connected  | Device not connected to network.  | True  (1)     |
| 3               | Do Not Connect | Do not connect device to network. | True  (1)     |
| 4               | Incapable      | Cannot connect to networks.       | False (0)     |
+--------------------------------------------------------------------------------------+

Due to this design's granularity, I could theoretically allow any mix of statuses for devices, but I wanted to control it, so I wrote some triggers to insert only the correct mix of statuses depending on whether the device is network capable. Triggers as follows:

-- NetworkStatus table on INSERT trigger
-- Adds a record for each device that
-- matches the capability of the network status
INSERT INTO DeviceNetworkStatus
SELECT i.NetworkStatusID, dev.DeviceID
FROM Device dev
CROSS JOIN inserted i
WHERE dev.CanNetwork = i.ForCanNetwork

-- Device table on INSERT trigger
-- Adds a record for each network status that
-- matches the capability of the new device
INSERT INTO DeviceNetworkStatus
SELECT ns.NetworkStatusID, i.DeviceID
FROM NetworkStatus ns
CROSS JOIN inserted i
WHERE ns.ForCanNetwork = i.CanNetwork

I used the following CHECK constraint on the Inventory table:

-- Inventory table check constraint: CK_NetworkStatus
-- If network status is "Connected", then must provide
-- a hostname or ip address
-- If network status is "Incapable", then network fields
-- must be null
NetworkStatusID = 1 AND Hostname IS NOT NULL
OR NetworkStatusID = 1 AND IPAddress IS NOT NULL
OR NetworkStatusID = 4 AND Hostname IS NULL
   AND IPAddress IS NULL AND MACAddress IS NULL
OR NetworkStatusID <> 1 AND NetworkStatusID <> 4

This design eliminates the need to propagate CanNetwork across the tables. The issue I see is that every device with network capability will have records in DeviceNetworkStatus paired with NetworkStatus ids 1, 2 and 3, while devices that can't connect to a network will be paired only with NetworkStatus id 4. That seems like a lot of extra records that all mean the same thing: devices that can be networked can only use statuses 1, 2 and 3, and devices that can't network only use 4. This design seems more "relationally correct", but it also smells a bit.

Search every column in every table in Sybase Database

Posted: 08 Mar 2013 04:19 PM PST

I've been tasked with creating an application that pulls data from our Micros Point of Sale system. The POS uses a Sybase database running on one of our servers. The database schema is very convoluted. I've been able to figure out most of the schema to gain access to the data I need; however, there are a few things still left to find. I know what they are called in the actual POS, but I can't seem to find them anywhere in the database (although I haven't actually looked through all 200+ tables).

I'm wondering if there's any way to run a query that searches for a specific string in all the columns in all the tables. By the way, I'm using the "Interactive SQL" application that comes with the Sybase database client software to connect to the database.
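
One common trick, sketched here under the assumption that the Micros back end is SQL Anywhere (catalog views differ across Sybase products, and the search string is a placeholder): generate one SELECT per character column from the catalog, then run the generated statements as a second pass.

    -- Emits "SELECT '<table>.<column>' AS hit, COUNT(*) ..." per text column
    SELECT 'SELECT ''' || t.table_name || '.' || c.column_name ||
           ''' AS hit, COUNT(*) FROM "' || t.table_name ||
           '" WHERE "' || c.column_name || '" LIKE ''%SEARCHME%'';'
    FROM SYS.SYSTABLE t
    JOIN SYS.SYSCOLUMN c ON c.table_id = t.table_id
    JOIN SYS.SYSDOMAIN d ON d.domain_id = c.domain_id
    WHERE t.table_type = 'BASE'
      AND d.domain_name IN ('char', 'varchar', 'long varchar');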

Any help you can offer with this is highly appreciated.

Thanks.

Why isn't my includeThreadNamesAsStatementComment JDBC parameter working?

Posted: 08 Mar 2013 03:31 PM PST

The JDBC API lists an includeThreadNamesAsStatementComment parameter that seems to indicate it will prefix all queries with the Java thread name. I thought this would be useful in tying DB activity back to application logs. I added it to my resource URLs, and it seemed to work only part of the time.

Curiously, it was only the application's heartbeat thread, which periodically pings the database as part of a health check, that got tagged. None of the "real" queries were.

Why isn't this working for all queries?

SQL Server link to linked server

Posted: 08 Mar 2013 03:21 PM PST

My client has a Visual FoxPro database that needs to co-mingle with SQL Server data (SELECT only, no updates or deletes).

After quite a bit of searching I found that the VFP OLE DB provider doesn't work with 64-bit SQL Server, so I installed an instance of 32-bit SQL Express. I was able to successfully create a linked server to VFP. The 32-bit instance is installed on a different server than the 64-bit one, but I'm using SSMS on the server with the 32-bit instance and can connect fine via SSMS to the 64-bit instance.

Now I need to link my main 64-bit SQL instance to the 32-bit instance, but I've not had luck. I've tried both the SQL Native Client and the OLE DB Provider for SQL Server, but I keep getting "Server is not found or not accessible." Remote connections are enabled, I've enabled TCP/IP, and tried all manner of server name combinations (.\SQLEXPRESS32, localhost\SQLEXPRESS32, (local)\SQLEXPRESS32, MyServerName\SQLEXPRESS32).

At this point I'm not sure if this is a linked server configuration issue, or a new install of SQL Server configuration issue, so I'm not entirely sure if I'm asking how to link my server or how to ensure I can access my server. Or maybe I'm asking the best way for my 64-bit SQL Server to read VFP data. Any suggestions?
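
For reference, a hedged sketch of registering the 32-bit instance as a linked server on the 64-bit instance (the alias, instance name, and login are placeholders). "Server is not found or not accessible" against a named instance often means the SQL Browser service isn't running or UDP 1434 is blocked, in which case ServerName,port also works for @datasrc.

    EXEC sp_addlinkedserver
        @server     = N'EXPRESS32',                  -- local alias
        @srvproduct = N'',
        @provider   = N'SQLNCLI',
        @datasrc    = N'MyServerName\SQLEXPRESS32';  -- actual instance
    EXEC sp_addlinkedsrvlogin
        @rmtsrvname  = N'EXPRESS32',
        @useself     = N'FALSE',
        @rmtuser     = N'linked_login',              -- SQL login on the 32-bit side
        @rmtpassword = N'secret';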

Update Table A row if Table B row is changed

Posted: 08 Mar 2013 04:28 PM PST

Is it possible to tell the database to execute a little script if a row is changed in a table?

Scenario: the database constantly monitors Table B to see if rows are inserted, updated or deleted. If a row is inserted, updated or deleted, it updates the corresponding row in Table A via the referring identifier.

Setup:

Table A
------------------------
UserID   Name   SalesSum
1        Carl   5
2        Peter  0
3        Oscar  3

Table B
--------------------------
UserID   Timestamp   Sales
1        01:00:00    3
1        02:01:00    1
1        03:54:00    1
3        01:20:00    2
3        02:45:00    1

The point of this is to avoid a subquery every time SalesSum is needed in the application, like:

Select A.Name, SalesSum = (select sum(B.Sales) from [Table B] as B where A.UserID = B.UserID) from [Table A] as A  

But instead "just" doing it easy, nice and clean:

Select Name, SalesSum from [Table A]  

Or is there anything I might have overlooked or am wrong about?
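
This is exactly what a trigger does. A minimal sketch, assuming SQL Server given the T-SQL-style syntax above: an AFTER trigger on Table B that recomputes the sums only for the users touched by the statement.

    CREATE TRIGGER trg_TableB_Sync ON [Table B]
    AFTER INSERT, UPDATE, DELETE
    AS
    BEGIN
        SET NOCOUNT ON;
        UPDATE a
        SET SalesSum = ISNULL((SELECT SUM(b.Sales)
                               FROM [Table B] b
                               WHERE b.UserID = a.UserID), 0)
        FROM [Table A] a
        WHERE a.UserID IN (SELECT UserID FROM inserted
                           UNION
                           SELECT UserID FROM deleted);
    END;

An indexed view over the SUM would be another engine-maintained alternative on SQL Server.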

EDIT:

Many have described the use of inner joins for these sub-calculations. But when trying this, I get a couple of errors: first, I have to include each column in the GROUP BY clause unless it is wrapped in an aggregate function. But when I do that, it errors on the NTEXT column in the clause, because NTEXT values cannot be compared.

Here is an example of the original query:

Select
  U.UserID,        -- Int - Primary Key
  U.Name,          -- Nvarchar(MAX)
  B.OrderID,       -- Int - Primary Key
  B.Comment,       -- Ntext
  SalesAmount = (select sum(C.Sales) from OrdersDetails as C),
  SalesPriceSum = (select sum(C.Sales * C.Price) from OrdersDetails as C)
From
  Orders as B
Inner join
  Users as U on B.UserID = U.UserID
Where
  -- Clauses on finding specific columns based on the Orders table and the Users table
Order By
  U.Name asc,
  B.Period asc

The "new" query, with another inner join, could be something like:

Select
  U.UserID,        -- Int - Primary Key
  U.Name,          -- Nvarchar(MAX)
  B.OrderID,       -- Int - Primary Key
  B.Comment,       -- Ntext
  SalesAmount = sum(C.Sales),
  SalesPriceSum = sum(C.Sales * C.Price)
From
  Orders as B
Inner join
  Users as U on B.UserID = U.UserID
Inner join
  OrdersDetails as C on B.OrderID = C.OrderID
Where
  -- Clauses on finding specific columns in the Orders table
Group By
  U.UserID,
  U.Name,
  B.OrderID,
  B.Comment      <------- This errors because of the Ntext datatype!
Order By
  U.Name asc,
  B.Period asc

As written, this is only an example; in reality there can be a large number of selected columns, especially from the Orders table. All of these would need to be in the GROUP BY clause, but that cannot be done with the NTEXT datatype. So what do I do?
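
One common workaround is to aggregate in a derived table keyed only by OrderID and then join, so wide or NTEXT columns never enter a GROUP BY at all. A sketch using the names from the question:

    -- OrdersDetails is aggregated once per order; B.Comment (ntext)
    -- never has to appear in a GROUP BY.
    Select
      U.UserID,
      U.Name,
      B.OrderID,
      B.Comment,
      C.SalesAmount,
      C.SalesPriceSum
    From Orders as B
    Inner join Users as U on B.UserID = U.UserID
    Inner join (
        select OrderID,
               sum(Sales)         as SalesAmount,
               sum(Sales * Price) as SalesPriceSum
        from OrdersDetails
        group by OrderID
    ) as C on B.OrderID = C.OrderID
    Order By U.Name asc, B.Period asc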

How are DB locks tied to connections and sessions?

Posted: 08 Mar 2013 02:26 PM PST

I've heard of page- and row-level locks, but never heard them discussed with regard to sessions and connections. Today, our (MySQL) DBA was talking about row locks and implied that their scope can be set at the connection or session level. Is this true, or did I misunderstand him?

I didn't realize that locks could be set when a connection is made (or when a session is started) and then released when that connection/session ends. If this is true, what benefits does this have over row-/page-level locks? Thanks in advance.
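
For context, in MySQL's InnoDB row locks are acquired by statements and held for the duration of the enclosing transaction (which belongs to a session/connection); they are not set when the connection opens. A small illustration, with a placeholder table name:

    -- Session 1: the row lock lives for the duration of the transaction,
    -- not the connection; it is released at COMMIT or ROLLBACK.
    START TRANSACTION;
    SELECT * FROM accounts WHERE id = 42 FOR UPDATE;  -- acquires a row lock
    -- ... other work; a second session blocks on the same row here ...
    COMMIT;  -- lock released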

Do large table variables fill up the log in tempdb?

Posted: 08 Mar 2013 04:28 PM PST

I'm running into an issue with a DBA who claims that table variables reside in the ldf of tempdb, and that when a large amount of data is loaded into a table variable, tempdb's log fills up.

The DBA's solution is to use a temporary table instead of a table variable. While I see the justification for using a temp table with large data sets, I don't understand how a temp table is created and stored in the mdf of tempdb whereas a table variable is stored in the ldf. Can someone please shed some light?
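
For what it's worth, both objects are backed by tempdb data pages, and both generate tempdb log records; the real differences are in statistics and recompiles, not storage location. A quick demonstration of the allocation (a sketch; the row count is arbitrary):

    -- Shows this session's tempdb page allocations growing as a
    -- table variable is filled.
    SELECT user_objects_alloc_page_count
    FROM sys.dm_db_session_space_usage
    WHERE session_id = @@SPID;

    DECLARE @t TABLE (filler char(8000));
    INSERT INTO @t SELECT TOP (1000) 'x' FROM sys.all_objects;

    SELECT user_objects_alloc_page_count
    FROM sys.dm_db_session_space_usage
    WHERE session_id = @@SPID;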

Do I need to tweak something in this MySQL setup?

Posted: 08 Mar 2013 03:10 PM PST

I'm running a fairly popular website with the following stats and setup.

Stats: ~250,000 pageviews per day, translating to ~47 queries/second against MySQL.

Setup: the MySQL instance runs alone on a Linode 512 on Debian 6.0. Nothing else runs there, for performance reasons. Using MyISAM and the sample "large" my.cnf.

I see the following figures right now from mysqlreport.

Please advise. The site performance is quite OK, but I'm not sure what the 256 MB key buffer indicates and whether that is tenable in the long run. I'm not very good at understanding the inner workings of MySQL. Also, I can't figure out why the query cache usage is so low, i.e. why it isn't using all of the QC memory.

-- Key --
Buffer used   226.52M of 256.00M  %Used:  88.49
  Current     256.00M
Write hit      64.93%
Read hit      100.00%

-- Questions --
Total         139.38M    47.0/s
  QC Hits      51.28M    17.3/s  %Total:  36.79
  DMS          41.99M    14.2/s           30.12
  Com-         30.75M    10.4/s           22.06
  COM-QUIT     15.36M     5.2/s           11.02
  +Unknown     10.68k     0.0/s            0.01
Slow 10 s         127     0.0/s            0.00  %DMS:   0.00  Log:  ON
DMS            41.99M    14.2/s           30.12
  SELECT       35.92M    12.1/s           25.77         85.56
  UPDATE        3.15M     1.1/s            2.26          7.50
  INSERT        2.66M     0.9/s            1.91          6.33
  DELETE      256.47k     0.1/s            0.18          0.61
  REPLACE           0       0/s            0.00          0.00
Com-           30.75M    10.4/s           22.06
  set-option   15.38M     5.2/s           11.03
  change-db    15.36M     5.2/s           11.02
  show-fields   2.70k     0.0/s            0.00

-- SELECT and Sort --
Scan            1.21M     0.4/s %SELECT:   3.38
Range         175.99k     0.1/s            0.49
Full join          24     0.0/s            0.00
Range check         0       0/s            0.00
Full rng join       0       0/s            0.00
Sort scan       2.51M     0.8/s
Sort range      4.04M     1.4/s
Sort mrg pass 177.95k     0.1/s

-- Query Cache --
Memory usage    2.78M of  16.00M  %Used:  17.38
Block Fragmnt  23.23%
Hits           51.28M    17.3/s
Inserts        33.35M    11.2/s
Insrt:Prune   68.97:1    11.1/s
Hit:Insert     1.54:1

-- Table Locks --
Waited          1.19M     0.4/s  %Total:   1.74
Immediate      67.07M    22.6/s

-- Tables --
Open              227 of  256    %Cache:  88.67
Opened          1.16k     0.0/s

-- Connections --
Max used           27 of  151      %Max:  17.88
Total          15.36M     5.2/s

-- Created Temp --
Disk table      2.16M     0.7/s
Table           2.30M     0.8/s    Size:  16.0M
File          327.26k     0.1/s

-- Threads --
Running             7 of    7
Cached              1 of    8      %Hit:  99.88
Created        19.02k     0.0/s
Slow                0       0/s

-- Aborted --
Clients             4     0.0/s
Connects           26     0.0/s

-- Bytes --
Sent          958.38M   323.0/s
Received      954.98M   321.9/s

Btw, I tried posting this question in the DBA forums of stack exchange prior to here but it's like a grave in the other forums. Appreciate the help.

Upper-case issue in PostgreSQL when importing from a text file

Posted: 08 Mar 2013 02:01 PM PST

I have ruby and rails app, and I have cron task to download text file and import it to the database using

\copy hotels FROM 'db/ActivePropertyList.txt' DELIMITER '|'  

There is a header field called EANHotelID in this text file. My import fails because of this particular field; if I manually rename it to, for example, hotel_id, then the import goes OK.

Is there any way to overcome this issue, as I will be unable to rename it manually every time?

In my database schema the column is called ean_hotel_id.

UPDATE Error Description:

PG::Error: ERROR:  zero-length delimited identifier at or near """"  LINE 1: COPY "hotels" ("","sequence_number","name","address1","addre...  

Text file example

EANHotelId|SequenceNumber|Name|Address1|Address2|City|StateProvince|PostalCode|Country|Latitude|Longitude|AirportCode|PropertyCategory|PropertyCurrency|StarRating|Confidence|SupplierType|Location|ChainCodeID|RegionID|HighRate|LowRate|CheckInTime|CheckOutTime
180997|1|Bristol Metropolitan|Av. Getulio Vargas, 286 - Savassi||Belo Horizonte||30112-020|BR|-19.93301|-43.92641|CNF|1|BRL|.0|52|ESR|Near Liberdade Square||577|90|54||
183714|1|Ibis Vitoria Praia Do Canto|R Joao Da Cruz 385 Praia Do||Vitoria||29055-620|BR|-20.29412|-40.29437|VIX|1|||43|ESR||2141|3711|96.4557|95.4557|12:00 PM|12:00 PM

Table definition

t.column :ean_hotel_id,         "int"
t.column :sequence_number,      "int"
t.column :name,                 "varchar(70)"
t.column :address1,             "varchar(50)"
t.column :address2,             "varchar(50)"
t.column :city,                 "varchar(50)"
t.column :state_province,       "varchar(2)"
t.column :postal_code,          "varchar(15)"
t.column :country,              "varchar(2)"
t.column :latitude,             "numeric(8,5)"
t.column :longitude,            "numeric(8,5)"
t.column :airport_code,         "varchar(3)"
t.column :property_category,    "int"
t.column :property_currency,    "varchar(3)"
t.column :star_rating,          "numeric(2,1)"
t.column :confidence,           "int"
t.column :supplier_type,        "varchar(3)"
t.column :location,             "varchar(80)"
t.column :chain_code_id,        "varchar(5)"
t.column :region_id,            "int"
t.column :high_rate,            "numeric(19,4)"
t.column :low_rate,             "numeric(19,4)"
t.column :check_in_time,        "varchar(10)"
t.column :check_out_time,       "varchar(10)"

For COPY I use the gem postgres-copy (https://github.com/diogob/postgres-copy). In my task I have code to map the column names in the file to the database:

Hotel.pg_copy_from('db/ActivePropertyList.txt', :delimiter => '|',
  :map => {'EANHotelID' => 'ean_hotel_id',
           'SequenceNumber' => 'sequence_number',
           'Name' => 'name'
           .......
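
The "zero-length delimited identifier" error suggests the generated COPY ended up with an empty column name, i.e. the EANHotelID header never mapped. One way around the header entirely, sketched here assuming psql access: name the target columns explicitly in file order and let CSV mode skip the header row (HEADER requires CSV format, which accepts a custom delimiter).

    \copy hotels (ean_hotel_id, sequence_number, name, address1, address2, city, state_province, postal_code, country, latitude, longitude, airport_code, property_category, property_currency, star_rating, confidence, supplier_type, location, chain_code_id, region_id, high_rate, low_rate, check_in_time, check_out_time) FROM 'db/ActivePropertyList.txt' WITH (FORMAT csv, HEADER true, DELIMITER '|')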

Update field1.table1 based on compound join between table1 and table2 - Oracle

Posted: 08 Mar 2013 02:07 PM PST

I want to run update statement on Oracle 11g, but it fails with the following error:

SQL Error: ORA-01427: single-row subquery returns more than one row 01427. 00000 - "single-row subquery returns more than one row"

Here's what I'm trying to do in pseudocode:

Update Table1
Set Flag_Column = 1
Where Table1.ColumnX = Table2.ColumnX
  and Table1.ColumnY = Table2.ColumnY

Is there a way to do this in Oracle SQL?

My code looks like this:

update Table1
set Flag_Column = 1
where ColumnX =
    (select distinct Table1.ColumnX
     from Table1, Table2
     where Table1.ColumnX = Table2.ColumnX
       and Table1.ColumnY = Table2.ColumnY)
and ColumnY =
    (select distinct Table1.ColumnY
     from Table1, Table2
     where Table1.ColumnX = Table2.ColumnX
       and Table1.ColumnY = Table2.ColumnY)
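
ORA-01427 fires because each subquery returns every matching value, not one per row. The usual pattern for this is a correlated EXISTS, which needs no subquery per column:

    -- Flag every Table1 row that has a matching (ColumnX, ColumnY) in Table2.
    UPDATE Table1 t1
    SET    t1.Flag_Column = 1
    WHERE  EXISTS (SELECT 1
                   FROM   Table2 t2
                   WHERE  t2.ColumnX = t1.ColumnX
                   AND    t2.ColumnY = t1.ColumnY);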

Creating indexes with T-SQL scripts vs. rebuilding indexes in a maintenance plan

Posted: 08 Mar 2013 02:15 PM PST

I'm using SQL Server 2008 and I run several (15) scripts each day to bulk insert into tables. Each script drops its table at the beginning and creates indexes within the script. Some of the scripts reference tables that were created by previous scripts. I want to improve the bulk insert operations but also keep the indexes for query performance. Does it make sense to create the indexes in a maintenance plan rather than in the scripts? Will it improve bulk insert performance if I create the indexes in the maintenance plan after all the scripts have run? Thanks.
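
Either way, the pattern that usually matters here is load-then-index: insert into a heap first, then build indexes once, instead of maintaining them row by row during the load. A sketch with hypothetical table, file, and index names:

    -- Load a heap with minimal logging hints, then index once afterwards.
    BULK INSERT dbo.StagingTable
    FROM 'C:\loads\data.dat'
    WITH (TABLOCK, BATCHSIZE = 100000);

    CREATE CLUSTERED INDEX CIX_StagingTable_Key
        ON dbo.StagingTable (KeyCol);
    CREATE NONCLUSTERED INDEX IX_StagingTable_Lookup
        ON dbo.StagingTable (LookupCol);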

Moving site to amazon EC2 - better to have 1 large instance or 2 medium instances?

Posted: 08 Mar 2013 11:40 AM PST

I currently have a website running on Windows 2008 R2 Standard along with SQL Server 2012 Express, all on the same box. The site gets around 700-900 visits a day, although that is slowly going up (yea us!).

The database itself is around 2 GB in size and makes use of full-text search. The bulk of the full-text search is done against a product table which currently has about 36,000 records in it, and that's slowly growing.

We're leaning towards moving the site over to amazon EC2.

I have 2 basic questions

  1. Would it be better, performance-wise, to move to EC2 or to keep it on a dedicated box? The reason we're considering the move is the ability to expand capacity if/when the website becomes busier, without any downtime. We've been putting a lot of effort into getting more traffic and it's slowly starting to pay off.

  2. If we do move it, would we be better off getting one large EC2 instance and putting everything on it, or with one medium EC2 instance and one medium RDS instance? I don't think there's anything really heavy going on with the website itself; most of the resources, I think, are being used by the database. Highest memory usage for the w3 process is around 200 MB.

I don't want the site to run any slower than it's running right now; faster would always be nice. Budget is around $250/mo, but we're willing to go higher if we see performance gains. Using either one large or two medium instances looks like it'll cost around $170/mo, and that includes a Web edition license for SQL Server, which would be an upgrade for us as we're running Express right now.

Thanks

Are Schemas SQL Standard? [closed]

Posted: 08 Mar 2013 10:43 AM PST

Well, I have evidence that it's not universally supported.

By schema I do not mean "the set of data structures".

Definition of schema (for this question):

A database contains one or more named schemas, which in turn contain tables. Schemas also contain other kinds of named objects, including data types, functions, and operators.

SQLite3

Really?

MySQL

Segmentation fault (joke. not supported...right?)

Oracle

ALTER SESSION SET CURRENT_SCHEMA = <schema name>  

Postgres

SET search_path TO myschema,public;  

SQLServer

I think it is supported.

Well, I have evidence that the answer is no, but I need to find where it says that schemas are not standard. They seem to be, at first thought.

Thanks!

TimesTen performance as compared to 10g

Posted: 08 Mar 2013 10:59 AM PST

I am new here, so please be nice...

I have a question regarding TimesTen and Oracle. I ran an experiment with both pieces of software and noticed that query speed on a table is slower on TimesTen than on Oracle.

The experiment setup was as follows:

  • 500,000 rows of data
  • count(*) query on a column which is NOT a candidate key

Table definition:

CREATE TABLE menu(DISH_ID NUMBER NOT NULL PRIMARY KEY, price NUMBER(7))

Query: SELECT COUNT(*) FROM menu WHERE price <= 50. The price value increases uniformly up to the largest value in the DB. The time taken for the query to be executed 1000 times was recorded.

The query speed for different ranges is consistently slower on TimesTen than on Oracle. No indexes were built on the search key. Is there a particular reason for this?

Other notes: I ran the same experiment but built an index on the search key on both TimesTen and 10g, and the difference is stark, in TimesTen's favour.

Why would running DBCC FREEPROCCACHE result in different data being returned?

Posted: 08 Mar 2013 10:32 AM PST

I experienced a situation where a query run directly in SSMS was giving a different result set than the same query run from code on a web server, using the same database.

After spending a lot of time ensuring the queries were identical, I ran DBCC FREEPROCCACHE on the off chance it might help, and to my surprise, it corrected the issue. My understanding was that running this command should only affect performance, not the data set returned.

Am I missing something here?

Edit:

By different results, I mean the same number of rows were returned with different numeric values.

The query that was giving incorrect results was using this TVF, which is what generates the data that was incorrect:

alter function [dbo].[tfMAPAE_ARevenue] (@start datetime)
returns table
as
-- M-monthly AP-account portfolio A-actual E-external ARevenue
return (
    select AccountPortfolioID, M01, M02, M03, M04, M05, M06, M07, M08, M09, M10, M11, M12
    from (
        select isnull(pr.AccountPortfolioID, - pr.CompanyID) as AccountPortfolioID,
               ear.Amount,
               'M' + right(cast(datediff(mm, sd.Start, ear.YearMonth) + 101 as varchar(3)), 2) as ColName
        from dwvMPRAEARevenue ear
        inner join Project pr on (ear.ProjectID = pr.ProjectID)
        -- force to the 1st day of the month
        inner join (
            select cast(floor(cast(@start as float)) - (day(@start) - 1) as datetime) as Start
        ) sd on (1 = 1)
        where sd.Start <= ear.YearMonth and ear.YearMonth < dateadd(yy, 1, sd.Start)
    ) ear
    pivot(sum(ear.Amount) for ColName in (M01, M02, M03, M04, M05, M06, M07, M08, M09, M10, M11, M12)) aear
)

Delete a variable number of records in chronological order

Posted: 08 Mar 2013 10:12 AM PST

Our developers have been using a cursor within a stored procedure to delete old password history. The number of records to delete is being passed by variable.

DECLARE hist_cursor CURSOR LOCAL FOR
    SELECT history_nr
    FROM usr_pwd_hist
    WHERE usr_id = @usr_id
    ORDER BY history_nr ASC

OPEN hist_cursor

WHILE @to_delete > 0
BEGIN
    FETCH NEXT FROM hist_cursor INTO @hist_val

    DELETE FROM usr_pwd_hist WHERE CURRENT OF hist_cursor

    SET @to_delete = @to_delete - 1
END

CLOSE hist_cursor;
DEALLOCATE hist_cursor;

I would like to replace this with a set-based approach. I can't do a simple TOP statement because the number of records to delete is a variable; I can't use TOP with a variable without dynamic SQL, and by policy we don't allow dynamic SQL in production.

I'm considering the approach below, but it makes me nervous, as I know that Microsoft is planning to change the way SET ROWCOUNT affects data modification statements. By putting the delete targets in a subquery I should be OK with future SQL Server versions, but I'm still wondering if there is a better way to delete a variable number of records in chronological order.

SET ROWCOUNT @to_delete;  /* limit records to be deleted */
DELETE FROM usr_pwd_hist WHERE history_nr IN
(
    SELECT history_nr
    FROM usr_pwd_hist
    WHERE usr_id = @usr_id
    ORDER BY history_nr ASC
);
SET ROWCOUNT 0;  /* return rowcount to default setting */
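
Two things worth knowing here: ORDER BY inside an IN subquery is not legal T-SQL (it is only allowed with TOP), and since SQL Server 2005, TOP accepts a variable without dynamic SQL. That allows a set-based delete with no ROWCOUNT at all:

    -- Deletes the @to_delete oldest history rows for one user.
    WITH oldest AS (
        SELECT TOP (@to_delete) history_nr
        FROM usr_pwd_hist
        WHERE usr_id = @usr_id
        ORDER BY history_nr ASC
    )
    DELETE FROM oldest;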

Complex query with multiple normalized fields

Posted: 08 Mar 2013 03:42 PM PST

We have a fairly simple table structure, but with a LOT of fields per table (talking 40+). This data is initially produced in plain-text, user-readable tables, but then it is translated into higher-performance, easier to query tables before being installed for use in production.

What we do is, wherever possible and reasonable we translate certain fields into enumerated values, and keep track of the enumerations in a MasterEnum table. There are usually 20-25 enumerated fields out of 40 or so.

Sample table structure:

Plain text version:

|  PartNumber  |  Manufacturer  |  SomeData   |  SomeMoreData  |  SomeTextData ...
----------------------------------------------------------------------------------
|  1x9kdah     |  GizmoCorp     |  ThisIsData |  OtherData     |  ThisStaysText ...
|  8xcjkzh     |  GadgetInc     |  MoreData   |  OtherData2    |  ThisTooStaysText ...

Target table sample structure:

|  PartNumber  |  Manufacturer  |  SomeData  |  SomeMoreData  |  SomeTextData ...
---------------------------------------------------------------------------------
|  1x9kdah     |       1        |     1      |       1        |  ThisStaysText ...
|  8xcjkzh     |       2        |     2      |       2        |  ThisTooStaysText ...

Master Enumeration Table Structure

|  FieldName     |  InputText  |  ValueCode |
---------------------------------------------
|  Manufacturer  |  GizmoCorp  |  1         |
|  Manufacturer  |  GadgetInc  |  2         |
|  SomeData      |  ThisIsData |  1         |
|  SomeData      |  MoreData   |  2         |
|  SomeMoreData  |  OtherData  |  1         |
|  SomeMoreData  |  OtherData2 |  2         |

We have a means of doing this translation that works, and works well; however, it's a little on the slow side, since all the processing is done in Java via Spring/Hibernate. My question is:

Is there a way to write a single query that accomplishes all the above translations? (Note that we have an excellent way of keeping track of our field definitions programmatically, so generating complex SQL queries on the fly is not an issue.) If it is not possible in a single query, how would I structure queries to iterate over the individual fields and make sure that, as the translations happen, the inserted data remains associated with the correct rows?

Note that it is safe to assume the target table is always empty at the beginning of the process.
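
A single INSERT...SELECT with one join against MasterEnum per enumerated field is the usual shape for this; source and target table names below are placeholders.

    -- One MasterEnum join per enumerated field. LEFT JOINs would let
    -- unmapped text through as NULL instead of dropping the row.
    INSERT INTO TargetParts (PartNumber, Manufacturer, SomeData, SomeMoreData, SomeTextData)
    SELECT p.PartNumber,
           me1.ValueCode,
           me2.ValueCode,
           me3.ValueCode,
           p.SomeTextData
    FROM PlainTextParts p
    JOIN MasterEnum me1 ON me1.FieldName = 'Manufacturer' AND me1.InputText = p.Manufacturer
    JOIN MasterEnum me2 ON me2.FieldName = 'SomeData'     AND me2.InputText = p.SomeData
    JOIN MasterEnum me3 ON me3.FieldName = 'SomeMoreData' AND me3.InputText = p.SomeMoreData;

Since the field definitions are tracked programmatically, the join list can be generated mechanically, one join per enumerated field.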

Unique Identifier with Extra Characters Still Matching in Select

Posted: 08 Mar 2013 10:41 AM PST

We are using SQL Server 2012 with a uniqueidentifier column, and we've noticed that when doing selects with additional characters added onto the end (so no longer 36 chars) it still returns a match to a UUID.

For example:

select * from some_table where uuid = '7DA26ECB-D599-4469-91D4-F9136EC0B4E8'

returns the row with uuid 7DA26ECB-D599-4469-91D4-F9136EC0B4E8

but if you run

select * from some_table where uuid = '7DA26ECB-D599-4469-91D4-F9136EC0B4E8EXTRACHARS'

it also returns the row with the uuid 7DA26ECB-D599-4469-91D4-F9136EC0B4E8

SQL Server seems to ignore all characters beyond the first 36 when doing its selects. Is this a bug/feature, or something that can be configured?
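
As far as I know this is documented conversion behaviour rather than a bug: the literal is converted to the column's uniqueidentifier type for the comparison, and character-to-uniqueidentifier conversion truncates input beyond 36 characters. Easy to see in isolation:

    -- Only the first 36 characters participate in the conversion.
    SELECT CAST('7DA26ECB-D599-4469-91D4-F9136EC0B4E8EXTRACHARS' AS uniqueidentifier);
    -- returns 7DA26ECB-D599-4469-91D4-F9136EC0B4E8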

It's not a massive issue as we have validation on the front end for the length but it doesn't seem correct behaviour to me.

Thanks

MySQL 5.6 on Mac OS X 10.6.8: problems when granting permissions to root

Posted: 08 Mar 2013 02:55 PM PST

I'm having serious problems with a MySQL 5.6 instance on a Mac Server.

We had to upgrade MySQL to a newer version, but it turned out to be a bad idea, as we lost control of it. We had a backup of the /data directory and the my.cnf file as well, so we tried setting an init_file to restore the previous password and its permissions. We created a text file with this content:

UPDATE mysql.user SET Password=PASSWORD('myOriginalPassword') WHERE User='root';
FLUSH PRIVILEGES;

We have double-checked the init_file permissions (we even gave it a chmod 777, to make sure), but something is not working. When we run mysqld_safe --init_file=/usr/local/mysql/bin/init_file it tries to start the server, but then it shuts itself down. The log is as follows:

130308 13:23:52 mysqld_safe Starting mysqld daemon with databases from /usr/local/mysql/data  2013-03-08 13:23:53 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).  2013-03-08 13:23:53 18979 [Warning] Setting lower_case_table_names=2 because file system for /usr/local/mysql/data/ is case insensitive  2013-03-08 13:23:53 18979 [Note] Plugin 'FEDERATED' is disabled.  2013-03-08 13:23:53 18979 [Note] InnoDB: The InnoDB memory heap is disabled  2013-03-08 13:23:53 18979 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins  2013-03-08 13:23:53 18979 [Note] InnoDB: Compressed tables use zlib 1.2.3  2013-03-08 13:23:53 18979 [Note] InnoDB: CPU does not support crc32 instructions  2013-03-08 13:23:53 18979 [Note] InnoDB: Initializing buffer pool, size = 128.0M  2013-03-08 13:23:53 18979 [Note] InnoDB: Completed initialization of buffer pool  2013-03-08 13:23:53 18979 [Note] InnoDB: Highest supported file format is Barracuda.  2013-03-08 13:23:53 18979 [Note] InnoDB: 128 rollback segment(s) are active.  2013-03-08 13:23:53 18979 [Note] InnoDB: Waiting for purge to start  2013-03-08 13:23:53 18979 [Note] InnoDB: 1.2.10 started; log sequence number 19440939  2013-03-08 13:23:53 18979 [ERROR] /usr/local/mysql/bin/mysqld: unknown option '--skip-locking'  2013-03-08 13:23:53 18979 [ERROR] Aborting    2013-03-08 13:23:53 18979 [Note] Binlog end  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'partition'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'PERFORMANCE_SCHEMA'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_SYS_DATAFILES'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_SYS_TABLESPACES'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN_COLS'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_SYS_FIELDS'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_SYS_COLUMNS'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_SYS_INDEXES'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_SYS_TABLESTATS'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_SYS_TABLES'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_FT_INDEX_TABLE'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_FT_INDEX_CACHE'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_FT_CONFIG'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_FT_BEING_DELETED'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_FT_DELETED'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_FT_INSERTED'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_FT_DEFAULT_STOPWORD'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_METRICS'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_BUFFER_POOL_STATS'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE_LRU'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX_RESET'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_CMPMEM_RESET'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_CMPMEM'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_CMP_RESET'  
2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_CMP'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_LOCK_WAITS'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_LOCKS'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_TRX'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'InnoDB'  2013-03-08 13:23:53 18979 [Note] InnoDB: FTS optimize thread exiting.  2013-03-08 13:23:53 18979 [Note] InnoDB: Starting shutdown...  2013-03-08 13:23:54 18979 [Note] InnoDB: Shutdown completed; log sequence number 19440949  2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'BLACKHOLE'  2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'ARCHIVE'  2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'MRG_MYISAM'  2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'MyISAM'  2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'MEMORY'  2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'CSV'  2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'sha256_password'  2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'mysql_old_password'  2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'mysql_native_password'  2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'binlog'  2013-03-08 13:23:54 18979 [Note] /usr/local/mysql/bin/mysqld: Shutdown complete    130308 13:23:54 mysqld_safe mysqld from pid file /usr/local/mysql/data/server.myServerName.com.pid ended  

The problem clearly comes from the [ERROR] lines, but as we haven't knowingly used --skip-locking anywhere, it's just confusing. If we try to log into mysql using mysql -r -p and the password we set, it can't connect to the server, as it is not started. Any ideas where this problem may come from?
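
The aborting option almost certainly comes from the restored my.cnf: skip-locking was removed in MySQL 5.5, so a 5.6 server refuses to start with it. Assuming that is the culprit, renaming the option should let the server come up and run the init_file:

    # my.cnf: the old MyISAM option was renamed; replace
    #   skip-locking
    # with
    skip-external-locking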

Table Size Analysis on SQL Server 2000

Posted: 08 Mar 2013 02:37 PM PST

Our SQL Server 2000 database's .mdf file is 27 GB, which seems much larger than is plausible. Using the following query we tried to analyse table sizes:

select cast(object_name(id) as varchar(50)) AS name,
    sum(CASE WHEN indid < 2 THEN rows END) AS rows,
    sum(reserved) * 8 AS reserved,
    sum(dpages) * 8 AS data,
    sum(used - dpages) * 8 AS index_size,
    sum(reserved - used) * 8 AS unused
from sysindexes with (nolock)
where indid in (0, 1, 255) and id > 100
GROUP BY id with rollup
ORDER BY sum(reserved) * 8 desc

The results were as follows:

Name           Rows       Reserved  Data     Index_Size  Unused
NULL           15274279   26645456  5674592  17361464    3609400
BigTable         875966   16789712   471096  13349816    2968800
  1. How can we find out which objects are causing this massive NULL space usage?
  2. It seems that approx 26GB are "reserved" for NULL, 16GB for BigTable - is this basically a waste of space or are real records involved?
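
Before hunting for mystery objects, note that the NULL row is an artifact of the query itself: GROUP BY id WITH ROLLUP appends a grand-total row whose id is NULL, so those ~26 GB are the sum over all tables, not space reserved for anything named NULL. Labeling the rollup row makes this explicit:

    select case when grouping(id) = 1 then 'TOTAL (all tables)'
                else cast(object_name(id) as varchar(50)) end as name,
        sum(CASE WHEN indid < 2 THEN rows END) AS rows,
        sum(reserved) * 8 AS reserved,
        sum(dpages) * 8 AS data,
        sum(used - dpages) * 8 AS index_size,
        sum(reserved - used) * 8 AS unused
    from sysindexes with (nolock)
    where indid in (0, 1, 255) and id > 100
    group by id with rollup
    order by sum(reserved) * 8 desc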

EMS SQL manager permission problem

Posted: 08 Mar 2013 12:40 PM PST

I have a strange problem with EMS SQL Manager.

I'm using MySQL 5.5 on Linux servers. One of my developers needs to edit stored procedures, so I granted him:

GRANT CREATE ROUTINE, ALTER ROUTINE ON `testdb`.* TO 'testuser'@'192.168.13.11'  

When he clicked "edit stored procedure" in EMS SQL Manager, what he got was:

SELECT command denied to user 'testuser'@'192.168.13.11' for table 'user'  

I found out that EMS SQL Manager asks for the SELECT privilege on some tables, so I needed to grant this user some extra permissions:

GRANT SELECT ON `mysql`.`user` TO 'testuser'@'192.168.13.11';
GRANT SELECT ON `mysql`.`db` TO 'testuser'@'192.168.13.11';
GRANT SELECT ON `mysql`.`tables_priv` TO 'testuser'@'192.168.13.11';
GRANT SELECT ON `mysql`.`columns_priv` TO 'testuser'@'192.168.13.11';
GRANT SELECT ON `mysql`.`procs_priv` TO 'testuser'@'192.168.13.11';

So my question is: why does EMS SQL Manager need those privileges? I can create and drop procedures in the mysql command-line client without a problem.

Write differences between varchar and nvarchar

Posted: 08 Mar 2013 12:25 PM PST

Currently in our SQL Server 2012 database we're using varchar, and we'd like to change that to nvarchar. I've generated a script to do that.

My question is: are there any differences in how SQL Server writes to varchar columns vs. nvarchar columns? We have a number of backend procedures that I'm concerned about.

Edit:
Not sure if this helps, but the columns don't have indexes, f/k, or constraints on them.
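
One concrete difference your backend procedures will feel: nvarchar stores UTF-16, so every value doubles in size, and plain varchar literals are implicitly converted on write (N'' literals avoid the conversion). A quick check:

    SELECT DATALENGTH('abc')  AS varchar_bytes,   -- 3
           DATALENGTH(N'abc') AS nvarchar_bytes;  -- 6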

how to dump a single stored procedure from a database

Posted: 08 Mar 2013 12:34 PM PST

I need to create a dump file that contains a single stored procedure from a database, not all of the routines from that database.
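
Assuming MySQL (the "routines" phrasing suggests mysqldump, whose --routines switch dumps them all), SHOW CREATE emits the one statement needed; redirect the client's output to a file to get a one-procedure "dump". Database and procedure names below are placeholders.

    SHOW CREATE PROCEDURE mydb.my_procedure\G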

Deriving formulas for input/output

Posted: 08 Mar 2013 06:34 PM PST

I'm currently enrolled in a DBS class and am having a problem with an assignment. I've searched around and have been unable to understand what I'm meant to do with this derivation formula.

A plant file with TREE-GENUS as the key field includes records with the following TREE-GENUS values: Tsuga, Ficus , Arbutus, Quercus, Melaleuca, Tristaniopsis, Cornus, Sequoiadendron, Lithocarpus, Liriodendron, Pittosporum.
Suppose that records with these search field values are inserted into a random (heap) file with a maximum of 3 records per block. Derive a formula for the expected number of disk I/O to scan these records and to search for a particular record

I've been using some software that was given with the assignment; it also asks for the maximum number of blocks allowed, which is not given by the above brief. I'm not really sure how to derive a formula for this. I've assumed that, because there are 3 records per block, 4 blocks are required, and that a random heap file uses 1 disk I/O per read/write.

If this is a larger topic than is worth explaining, a link to a few reliable pages would also be helpful.
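
For what it's worth, the standard textbook derivation, assuming one block per disk I/O, n records, and a blocking factor of bfr records per block:

    b = \lceil n / bfr \rceil = \lceil 11 / 3 \rceil = 4 \text{ blocks}

A full scan then costs b = 4 block reads, and a linear search for one particular record costs on average

    E[\text{reads}] \approx b / 2 = 2

block reads (b = 4 in the worst case, e.g. when the record is absent).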

How do you extend the tablespace when using innodb_file_per_table?

Posted: 08 Mar 2013 02:34 PM PST

With innodb_file_per_table off, you can create multiple tablespaces on multiple devices if necessary to manage growth, balance I/O, etc.

With the option on, how do you control growth of the files? Do they autoextend? And can you set a maximum and then extend the tablespace for a given table onto another filesystem if necessary?
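
A few server variables worth checking while investigating (a sketch; exact behaviour varies by version): per-table .ibd files autoextend on their own and have no per-table cap, and as far as I know you cannot split one table's .ibd across filesystems the way innodb_data_file_path segments allow, though MySQL 5.6 added a DATA DIRECTORY option on CREATE TABLE for placing a table's .ibd on another filesystem.

    SHOW VARIABLES LIKE 'innodb_file_per_table';
    SHOW VARIABLES LIKE 'innodb_autoextend_increment';  -- applies to the shared tablespace
    SHOW VARIABLES LIKE 'innodb_data_file_path';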

Window functions cause awful execution plan when called from a view with external parametrized 'where' clause

Posted: 08 Mar 2013 02:34 PM PST

I had this issue a long time ago; I found a workaround which suited me and forgot about it. But now there's this question on SO, so I'm willing to bring the problem up again.


There's a view that joins few tables in a very straightforward way (orders + order lines).

When queried without a where clause, the view returns several million lines.
However, noone ever calls it like that. The usual query is

select * from that_nasty_view where order_number = 123456;  

This returns about 10 records out of 5m.

An important thing: the view contains a window function, rank(), which is partitioned exactly by the field on which the view is always queried:

rank() over (partition by order_number order by detail_line_number)  

Now, if this view is queried with literal parameters in the query string, exactly as shown above, it returns the rows instantly. The execution plan is fine:

  • Index seek on both tables using the indices on order_number (returns 10 rows).
  • Calculating windows over the returned tiny result.
  • Selecting.

However, when the view is called in a parametrized way, things get nasty:

  • Index scan on all tables ignoring indices. Returns 5m rows.
  • Huge join.
  • Calculating windows over all partitions (about 500k windows).
  • Filter to take 10 rows out of 5m.
  • Select

This happens in all cases when parameters are involved. It can be SSMS:

declare @order_number int = 123456;  select * from that_nasty_view where order_number = @order_number;  

It can be an ODBC client, such as Excel:

select * from that_nasty_view where order_number = ?  

Or it can be any other client that uses parameters and not sql concatenation.

If the window function is removed from the view, it runs perfectly quickly, regardless of whether or not it's queried with parameters.

My workaround was to remove the offending function and reapply it at a later stage.

But, what gives? Is it genuinely a bug in how SQL Server 2008 handles window functions?
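
Not an answer to whether it's a bug, but the usual workaround when a parameterized plan must see the actual value is to force recompilation at the statement, so the optimizer can embed the parameter and push the filter below the window function (reliable on recent SQL Server 2008 builds):

    declare @order_number int = 123456;
    select *
    from that_nasty_view
    where order_number = @order_number
    option (recompile);  -- lets the optimizer treat @order_number as a literal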
