Friday, March 8, 2013

[how to] I can not connect to NeorSQL tool from my Microsoft .NET application to MySQL Server database

[how to] I can not connect to NeorSQL tool from my Microsoft .NET application to MySQL Server database


I can not connect to NeorSQL tool from my Microsoft .NET application to MySQL Server database

Posted: 08 Mar 2013 08:42 PM PST

As per documentation, i used port 4040 to connect .NET web app with Neor Profile, but I am not able to make this work.

Can you help me.

Siva Kishore

reddyrm@gmail.com

What are some best monitoring tools available for mysql?

Posted: 08 Mar 2013 05:21 PM PST

I have a setup a linux mysql server and would like to monitor it. I found percona plugins for nagios and cacti. Are there any other tools available?

Email alerts and Server Side Tracing

Posted: 08 Mar 2013 04:55 PM PST

This question is in regards to SQL 2005 and email alerts on long running queries.

Does this sound like the best approach (with the least impact to the server) to receiving alerts to long running queries is the following:

  1. Create a server side trace that monitors across all databases or a specific database for events 10 (RPC:Completed) and 12 (SQL:Completed).
  2. Set a trace filter on Duration for greater than n amount of time (for example, greater than 2 seconds).
  3. Separate stored procedure that runs every n seconds (for example 5 or 10) to check for new entries and sends out an email alert. (To detect new entries I would need to keep track of the last entry detected but that is trivial.)

Thanks.

Permissions for a MySQL user which will monitor replication status?

Posted: 08 Mar 2013 05:49 PM PST

I'd like to use a script to monitor the replication status of a MySQL database like in this bash script: https://gist.github.com/wesdeboer/1791632

I'd like to create a user which will only be used to query the status of the replication. So basically this user would just need to run the command:

"SHOW SLAVE STATUS \G"  

What is the minimum set of permissions I need to grant a user to allow this?

(Is this even something which can be GRANTed?

Schema design: Use of association (aka: bridge/junction) table vs foreign key constraint with composite index containing a non-key field

Posted: 08 Mar 2013 05:27 PM PST

This is an inventory database for IT assets. The models used are trimmed in order to focus on the problem at hand. Using SQL Server 2008. Thanks for taking the time to read and for any input you can provide.

My design includes a Device table which holds the various devices that can be entered into inventory. Each device has a boolean flag, CanNetwork which states whether a device has network capability, e.g., for most computers CanNetwork = true, for hard drives CanNetwork = false; some printers will be true, and others will be false. You get the idea.

The CanNetwork field determines if network-related information is relevant when an inventory record is created.

Design 1

My first design uses an index on Device.DeviceID and Device.CanNetwork to use in a foreign key constraint with the Inventory table.

Schema #1: Foreign key constraint with index

The NetworkStatus table looks like this in this setup:

+----------------------------------------------------------------------+  | NetworkStatusID | NetworkStatus  | NetworkStatusDescription          |  |----------------------------------------------------------------------|  | 1               | Connected      | Device connected to network.      |  | 2               | Not Connected  | Device not connected to network.  |  | 3               | Do Not Connect | Do not connect device to network. |  +----------------------------------------------------------------------+  

I put check constraints on the Inventory table as follows to ensure a network status and network information can only be provided if the device is capable of connecting to a network.

-- Inventory table check constraint: CK_CanNetwork  -- If can't network, then network fields are null  CanNetwork = 0  AND NetworkStatusID IS NULL  AND Hostname IS NULL  AND IPAddress IS NULL  AND MACAddress IS NULL  OR CanNetwork = 1    -- Inventory table check constraint: CK_NetworkStatus  -- If network status is "Connected", then must provide  -- a hostname or ip address  NetworkStatusID = 1 AND Hostname IS NOT NULL  OR NetworkStatusID = 1 AND IPAddress IS NOT NULL  OR NetworkStatusID <> 1  

The issue I have with this design is I'm not sure if the relationship with Inventory and Device.DeviceID/Device.CanNetwork is a good or bad design decision. Is propagating a non-key field like CanNetwork to other tables a bad design? I don't have enough experience with database design to make an informed decision.


Design 2

In this design I thought I would use a bridge/association/junction table to decide which network statuses are valid for a device. It looks like this:

Schema #2: Bridge/Association table

The NetworkStatus table looks like this in this setup (Notice the addition of record with id #4, and the ForCanNetwork field which specifies that this status is for use with devices that can can connect to a network):

+--------------------------------------------------------------------------------------+  | NetworkStatusID | NetworkStatus  | NetworkStatusDescription          | ForCanNetwork |  |--------------------------------------------------------------------------------------|  | 1               | Connected      | Device connected to network.      | True  (1)     |  | 2               | Not Connected  | Device not connected to network.  | True  (1)     |  | 3               | Do Not Connect | Do not connect device to network. | True  (1)     |  | 4               | Incapable      | Cannot connect to networks.       | False (0)     |  +--------------------------------------------------------------------------------------+  

Due to this design's granularity, I could theoretically allow any mix of statuses for devices with this design, but I wanted to control it so I wrote some triggers to only insert the correct mix of statuses depending on whether the device is network capable. Triggers as follows:

-- NetworkStatus table on INSERT trigger  -- Adds a record for each device that  -- matches the capability of the network status  INSERT INTO DeviceNetworkStatus  SELECT i.NetworkStatusID, dev.DeviceID  FROM Device dev  CROSS JOIN  inserted i  WHERE dev.CanNetwork = i.ForCanNetwork    -- Device table on INSERT trigger  -- Adds a record for each network status that  -- matches the capability of the new device  INSERT INTO DeviceNetworkStatus  SELECT ns.NetworkStatusID, i.DeviceID  FROM NetworkStatus ns  CROSS JOIN  inserted i  WHERE ns.ForCanNetwork = i.CanNetwork  

I used the following CHECK constraint on the Inventory table:

-- Inventory table check constraint: CK_NetworkStatus  -- If network status is "Connected', then must provide  -- a hostname or ip address  -- If network status is "Incapable", then network fields  -- must be null  NetworkStatusID = 1 AND Hostname IS NOT NULL  OR NetworkStatusID = 1 AND IPAddress IS NOT NULL  OR NetworkStatusID = 4 AND Hostname IS NULL  AND IPAddress IS NULL AND MACAddress IS NULL  OR NetworkStatusID <> 1 AND NetworkStatusID <> 4  

This design eliminates the need to propagate CanNetwork across the tables. The issue I see with this design is that every device that has network capability will have records in DeviceNetworkStatus paired with NetworkStatus ids 1, 2 and 3, while devices that can't connect to a network will be paired only with NetworkStatus id 4. It seems like a lot of extra records that all mean the same thing: devices that can be networked can only use statuses 1, 2 and 3, and devices that can't network only use 4. This design seems to be more "relationally correct", but also smells a bit.

Search every column in every table in Sybase Database

Posted: 08 Mar 2013 04:19 PM PST

I'm been taxed with the task of creating an application that pulls data from our Micros Point of Sales system. The POS is using a Sybase database running on one of our servers. The database schema is very convoluted. I've been able to figure out most of the schema to gain access to the data I need, however, there are a few things still left to find. I know what they are called in the actual POS, but I can't seem to find them anywhere in the database (although I haven't actually looked through all 200+ tables).

I'm wondering if there's any way to run a query to search for a specific string in all the columns in all the tables. Btw, I'm using the "Interactive SQL" application that comes with the Sybase Database Client software to connect to the database.

Any help you can offer with this is highly appreciated.

Thanks.

Why isn't my includeThreadNamesAsStatementComment JDBC parameter working?

Posted: 08 Mar 2013 03:31 PM PST

The JDBC API lists a includeThreadNamesAsStatementComment parameter that seems to indicate it will prefix all queries with the Java thread ID. I thought this would be useful in tying back DB activity to application logs. I added it to my resource URLs and it seemed to only be working part of the time.

Curiously it was only the applications heartbeat thread that periodically pings the database as part of a health check. None of the "real" queries were getting tagged.

Why isn't this working for all queries?

SQL Server link to linked server

Posted: 08 Mar 2013 03:21 PM PST

My client has a Visual FoxPro database that needs to co-mingle with SQL Server data (SELECT only, no updates or deletes).

After quite a bit of searching I found the VFP OLEDB provider doesn't work with 64-bit SQL, so I installed an instance of 32-bit SQL Express. I was able to successfully create linked server to VFP. The 32-bit instance is installed on a different server than the 64-bit, but I'm using SSMS on the server with the 32-bit instance and can connect via SSMS fine to the 64-bit instance.

Now I need to link my main 64-bit SQL instance to the 32-bit instance, but I've not had luck. I've tried both the SQL Native Client and the OLE DB Provider for SQL Server, but I keep getting "Server is not found or not accessible." Remote connections are enabled, I've enabled TCP/IP, and tried all manner of server name combinations (.\SQLEXPRESS32, localhost\SQLEXPRESS32, (local)\SQLEXPRESS32, MyServerName\SQLEXPRESS32).

At this point I'm not sure if this is a linked server configuration issue, or a new install of SQL Server configuration issue, so I'm not entirely sure if I'm asking how to link my server or how to ensure I can access my server. Or maybe I'm asking the best way for my 64-bit SQL Server to read VFP data. Any suggestions?

Update Table A row if Table B row is changed

Posted: 08 Mar 2013 04:28 PM PST

Is is anyhow possible to tell the database to execute a little script, if a row is changed in a table?

Scenario: The database constantly monitoring Table B to see if rows are inserted, updated or deleted. If a row is inserted, updated or deleted: Update the row in Table A with the referring identifier.

Setup:

Table A  ------------------------  UserID   Name   SalesSum  1        Carl   5  2        Peter  0  3        Oscar  3      Table B  --------------------------  UserID   Timestamp   Sales  1        01:00:00    3  1        02:01:00    1  1        03:54:00    1  3        01:20:00    2  3        02:45:00    1  

The point of this being not to do a subquery every time the SalesSum is needed in the application, like:

Select A.Name, SalesSum = (select sum(B.Sales) from [Table B] as B where A.UserID = B.UserID) from [Table A] as A  

But instead "just" doing it easy, nice and clean:

Select Name, SalesSum from [Table A]  

Or is there anything I might have overseen or am wrong about?

EDIT:

Many of described the uses of inner joins for these sub-calculations. But when "trying" this, I get a couple of errors: First I have to include each column in the GROUP BY clause unless it is defined by via an Aggregate function. But when trying to do this, then it errors when having a NTEXT datatype column in the clause, because it cannot be compared.

Here is an example of the original query:

Select   U.UserID,                                                                 ' Int - Primary Key   U.Name,                                                                   ' Nvarchar(MAX)   B.OrderID,                                                                ' Int - Primary Key   B.Comment,                                                                ' Ntext   SalesAmount = (select sum(C.Sales) from OrdersDetails as C),   SalesPriceSum = (select sum(C.Sales * C.Price) from OrdersDetails as C)  From   Orders as B  Inner join   Users as U on B.UserID = u.UserID  Where   ' Clauses on finding specific columns based on the Orders table and the Users table  Order By   U.Name asc   B.Period asc  

The "new" query, with another inner join, could be something like:

Select   U.UserID,                                                                 ' Int - Primary Key   U.Name,                                                                   ' Nvarchar(MAX)   B.OrderID,                                                                ' Int - Primary Key   B.Comment,                                                                ' Ntext   SalesAmount = sum(C.Sales),   SalesPriceSum = sum(C.Sales * C.Price)  From   Orders as B  Inner join   Users as U on B.UserID = u.UserID  Inner join   OrdersDetails as C on B.OrderID = C.OrderID  Where   ' Clauses on finding specific columns in Order table  Group By   U.UserID   U.Name   B.OrderID   B.Comment     <------- This errors because of the Ntext datatype!  Order By   U.Name asc   B.Period asc  

As written, this is only an example, and therefore there can be a large number of selected columns - specially from the Orders table. All these needs to be in the Group By clauses. But this cannot be done with the Ntext datatypes. So what do I do?

How are DB locks tied to connections and sessions?

Posted: 08 Mar 2013 02:26 PM PST

I've heard of page- and row-level locks, but never heard them used with regards to sessions and connections. Today, our (MySQL) DBA was talking about row locks and implied that their scope can be set at the connection or session level. Is this true, or did I misunderstand him?

I didn't realize that locks could be set when a connection is made (or when a session is started) and then released when that connection/session ends. If this is true, what benefits does this have over row-/page-level locks? Thanks in advance.

Does large table variables fill up logs in tempdb?

Posted: 08 Mar 2013 04:28 PM PST

I'm running into an issue with a DBA who claims that table variables reside in ldf of the tempdb and when large amount of data is loaded into the table variable, the tempdb's logs fill up.

The DBA's solution is to use temporary table instead of table variable. While I see the justification to use the temp table in case of large data sets, I don't understand how temp table is created and stored in the mdf of tempdb where as the table variable is stored in ldf. Can someone please throw some light?

Do I need tweak something for this MySQL setup?

Posted: 08 Mar 2013 03:10 PM PST

I'm running a fairly popular website with the following stats and setup.

stats: ~250,000 pageviews per day, rendering ~47 queries / second to MySQL setup: the MySQL instance is running alone on a Linode512 on Debian 6.0. Nothing else runs there, for performance reasons. Using MyISAM and the ini for "large" mysql.

I see the following figures right now from MYSQLREPORT.

Please advice. The site performance is quite OK but I'm not sure what the 256 MB key buffer indicates and if that is tannable in the long run or not. I'm not very good at understanding the inner workings of MYSQL. Also, I can't figure out why the QC is being hit so low i.e. why isn't it using all of the QC.

-- Key --  Buffer used   226.52M of 256.00M  %Used:  88.49    Current     256.00M               Write hit      64.93%   Read hit      100.00%    -- Questions --  Total         139.38M    47.0/s    QC Hits      51.28M    17.3/s  %Total:  36.79    DMS          41.99M    14.2/s           30.12    Com-         30.75M    10.4/s           22.06    COM-QUIT     15.36M     5.2/s           11.02    +Unknown     10.68k     0.0/s            0.01  Slow 10 s         127     0.0/s            0.00  %DMS:   0.00  Log:  ON  DMS            41.99M    14.2/s           30.12    SELECT       35.92M    12.1/s           25.77         85.56    UPDATE        3.15M     1.1/s            2.26          7.50    INSERT        2.66M     0.9/s            1.91          6.33    DELETE      256.47k     0.1/s            0.18          0.61    REPLACE           0       0/s            0.00          0.00  Com-           30.75M    10.4/s           22.06    set-option   15.38M     5.2/s           11.03    change-db    15.36M     5.2/s           11.02    show-fields   2.70k     0.0/s            0.00    -- SELECT and Sort --  Scan            1.21M     0.4/s %SELECT:   3.38  Range         175.99k     0.1/s            0.49  Full join          24     0.0/s            0.00  Range check         0       0/s            0.00  Full rng join       0       0/s            0.00  Sort scan       2.51M     0.8/s  Sort range      4.04M     1.4/s  Sort mrg pass 177.95k     0.1/s    -- Query Cache --  Memory usage    2.78M of  16.00M  %Used:  17.38  Block Fragmnt  23.23%  Hits           51.28M    17.3/s  Inserts        33.35M    11.2/s  Insrt:Prune   68.97:1    11.1/s  Hit:Insert     1.54:1    -- Table Locks --  Waited          1.19M     0.4/s  %Total:   1.74  Immediate      67.07M    22.6/s    -- Tables --  Open              227 of  256    %Cache:  88.67  Opened          1.16k     0.0/s    -- Connections --  Max used           27 of  151      %Max:  17.88  Total          15.36M     5.2/s    -- Created Temp --  Disk table      2.16M     0.7/s  Table           2.30M     0.8/s    Size:  16.0M  File          327.26k     0.1/s    -- Threads --  Running             7 of    7  Cached              1 of    8      %Hit:  99.88  Created        19.02k     0.0/s  Slow                0       0/s    -- Aborted --  Clients             4     0.0/s  Connects           26     0.0/s    -- Bytes --  Sent          958.38M   323.0/s  Received      954.98M   321.9/s`  

Btw, I tried posting this question in the DBA forums of stack exchange prior to here but it's like a grave in the other forums. Appreciate the help.

UPPER Case issue postgreSQL when importing from text file

Posted: 08 Mar 2013 02:01 PM PST

I have ruby and rails app, and I have cron task to download text file and import it to the database using

\copy hotels FROM 'db/ActivePropertyList.txt' DELIMITER '|'  

There is a header field which is called EANHotelID in this text file. My import fails because of this particular filed and if I manually rename it to, for example hotel_id, then import goes OK.

Is there any way to overcome this issue because I will be unable to rename it every time manually?

In my database schema the column is called ean_hotel_id.

UPDATE Error Description:

PG::Error: ERROR:  zero-length delimited identifier at or near """"  LINE 1: COPY "hotels" ("","sequence_number","name","address1","addre...  

Text file example

EANHotelId|SequenceNumber|Name|Address1|Address2|City|StateProvince|PostalCode|Country|Latitude|Longitude|AirportCode|PropertyCategory|PropertyCurrency|StarRating|Confidence|SupplierType|Location|ChainCodeID|RegionID|HighRate|LowRate|CheckInTime|CheckOutTime  180997|1|Bristol Metropolitan|Av. Getulio Vargas, 286 - Savassi||Belo Horizonte||30112-020|BR|-19.93301|-43.92641|CNF|1|BRL|.0|52|ESR|Near Liberdade Square||577|90|54||  183714|1|Ibis Vitoria Praia Do Canto|R Joao Da Cruz 385 Praia Do||Vitoria||29055-620|BR|-20.29412|-40.29437|VIX|1|||43|ESR||2141|3711|96.4557|95.4557|12:00 PM|12:00 PM  

Table definition

  t.column :ean_hotel_id,         "int"    t.column :sequence_number,      "int"    t.column :name,                 "varchar(70)"    t.column :address1,             "varchar(50)"    t.column :address2,             "varchar(50)"    t.column :city,                 "varchar(50)"    t.column :state_province,       "varchar(2)"    t.column :postal_code,          "varchar(15)"    t.column :country,              "varchar(2)"    t.column :latitude,             "numeric(8,5)"    t.column :longitude,            "numeric(8,5)"    t.column :airport_code,         "varchar(3)"    t.column :property_category,    "int"    t.column :property_currency,    "varchar(3)"    t.column :star_rating,          "numeric(2,1)"    t.column :confidence,           "int"    t.column :supplier_type,        "varchar(3)"    t.column :location,             "varchar(80)"    t.column :chain_code_id,        "varchar(5)"    t.column :region_id,            "int"    t.column :high_rate,            "numeric(19,4)"    t.column :low_rate,             "numeric(19,4)"    t.column :check_in_time,        "varchar(10)"    t.column :check_out_time,       "varchar(10)"  

For COPY I use gem postgres-copy (https://github.com/diogob/postgres-copy). in my task I have the code to map table columns names in the file to the database

Hotel.pg_copy_from('db/ActivePropertyList.txt', :delimiter => '|',   :map => {'EANHotelID' => 'ean_hotel_id',      'SequenceNumber' => 'sequence_number',      'Name' => 'name'       .......  

Update field1.table1 based on compound join between table1 and table2 - Oracle

Posted: 08 Mar 2013 02:07 PM PST

I want to run update statement on Oracle 11g, but it fails with the following error:

SQL Error: ORA-01427: single-row subquery returns more than one row 01427. 00000 - "single-row subquery returns more than one row"

Here's what I'm trying to do in pseudocode:

Update Table1  Set Flag_Column=1  Where  Table1.ColumnX=Table2.ColumnX  and  Table1.ColumnY=Table2.ColumunY  

Is there a way to do this in Oracle SQL?

My code looks like this:

update Table1  set Flag_Column=1  where ColumnX=      (select distinct Table1.ColumnX      from Table1, Table2      where Table1.ColumnX=Table2.ColumnX      and Table1.ColumnY=Table2.ColumnY)  and ColumnY=      (select distinct Table1.ColumnY      from Table1, Table2      where Table1.ColumnX=Table2.ColumnX      and Table1.ColumnY=Table2.ColumnY)  

Creating indexes with t-sql scrips vs rebuild indexes in maintenance plan

Posted: 08 Mar 2013 02:15 PM PST

I'm using SQL Server 2008 and I am running several (15) scripts each day to bulk insert to tables. Each script drops the table at the beginning. I create indexes in the scripts. Some of the scripts reference the tables that were created with previous scripts. I want to improve the bulk insert operations but also want to keep the indexes to improve query performances. Does it make sense to create the indexes using maintenance plan rather than creating them in the script? Will it improve the bulk insert performance if I create the indexes in the maintenance plan at the end of all scripts run? Thanks.

Moving site to amazon EC2 - better to have 1 large instance or 2 medium instances?

Posted: 08 Mar 2013 11:40 AM PST

I currently have a website which is running on windows 2008 r2 standard along with sql 2012 express. All on the same box. The site gets around 700-900 visits a day,although that is slowly going up (yea us!).

The database itself is around 2gb in size and makes use of full-text search. The bulk of the full text search is done against a product table which currently has about 36,000 records in it and that's slowly growing.

We're leaning towards moving the site over to amazon EC2.

I have 2 basic questions

  1. Would it be better performance wise moving it to EC2 or keeping it on a dedicated box? The reason we're considering the move is the ability to expand capacity if/when the website becomes busier without any downtime. We've been putting a lot of effort into getting more traffic and it's slowly starting to pay off.

  2. If we do move it, would we be better off getting 1 large EC2 instance and putting everything on that one instance or would we be better off with 1 medium EC2 and 1 medium RDS instance? I don't think there's anything really heavy going on with the website itself, most of the resource i think are being used by the database. Highest mem usage for the w3 process is around 200megs.

I don't want the site to run any slower then it's running right now. Faster would always be nice. Budget is around $250/mo but we're willing to go higher if we see performance gains. Using either 1 large or 2 medium instances looks like it'll cost around $170/mo and that includes a WEB license for sql, which would be an upgrade for us as we're running express right now.

Thanks

Are Schemas SQL Standard? [closed]

Posted: 08 Mar 2013 10:43 AM PST

Well I have evidence that it's not supported.

For schema I do not mean "the set of data structure".

Definition of schema (for this questions) :

A database contains one or more named schemas, which in turn contain tables. Schemas also contain other kinds of named objects, including data types, functions, and operators.

SQLite3

Really?

MySQL

Segmentation fault (joke. not supported...right?)

Oracle

ALTER SESSION SET CURRENT_SCHEMA = <schema name>  

Postgres

SET search_path TO myschema,public;  

SQLServer

I think is supported

Well I have evidence that the answer is NO, but I need to get where it says that schemas are not standard. I seems to be on first thought

Thanks!

TimesTen performance as compared to 10g

Posted: 08 Mar 2013 10:59 AM PST

I am new here, so please be nice...

I have a question regarding TimesTen & Oracle. I have attempted an experiment with both pieces of software, and realize that the query speed on a table is slower on TimesTen than on Oracle.

The experiment setup was as follows:

  • 500,000 rows of data
  • count(*) query on a column which is NOT a candidate key

Table definition:

CREATE TABLE menu(DISH_ID NUMBER NOT NULL PRIMARY KEY, price NUMBER(7))

Query: SELECT COUNT(*) FROM menu WHERE price <= 50. The price value increases uniformly until the largest value in the DB. Time taken for the query to be executed 1000 times was recorded.

The query speed for different ranges is consistently slower on TimesTen, as compared to Oracle. No indexes were built on the search key. Is there a particular reason for this?

Other notes: I ran the same experiment, but built an index on the search key on both TimesTen & on 10g, and the difference is stark on TimesTen's favour.

Why would running DBCC FREEPROCCACHE result in different data being returned?

Posted: 08 Mar 2013 10:32 AM PST

I experienced a situation where a query run directly in SSMS was giving a different result set than the same query run from code on a web server, using the same database.

After spending a lot of time ensuring the queries were identical, I ran DBCC FREEPROCCACHE on the off chance it might help, and to my surprise, it corrected the issue. My understanding was that running this command should only affect performance, not the data set returned.

Am I missing something here?

Edit:

By different results, I mean the same number of rows were returned with different numeric values.

The query that was giving incorrect results was using this TVF, which is what generates the data that was incorrect:

alter function [dbo].[tfMAPAE_ARevenue] (@start datetime)  returns table  as  -- M-monthly AP-account portfolio A-actual E-external ARevenue  return (          select AccountPortfolioID, M01, M02, M03, M04, M05, M06, M07, M08, M09, M10, M11, M12          from (              select isnull(pr.AccountPortfolioID, - pr.CompanyID) as AccountPortfolioID, ear.Amount, 'M' + right(cast(datediff(mm, sd.Start, ear.YearMonth) + 101 as varchar(3)), 2) as ColName              from dwvMPRAEARevenue ear              inner join Project pr on (ear.ProjectID = pr.ProjectID)              -- force to the 1st day of the month              inner join (                  select cast(floor(cast(@start as float)) - (day(@start) - 1) as datetime) as Start                  ) sd on (1 = 1)              where sd.Start <= ear.YearMonth and ear.YearMonth < dateadd(yy, 1, sd.Start)              ) ear          pivot(sum(ear.Amount) for ColName in (M01, M02, M03, M04, M05, M06, M07, M08, M09, M10, M11, M12)) aear          )  

Delete a variable number of records by a chronological order

Posted: 08 Mar 2013 10:12 AM PST

Our developers have been using a cursor within a stored procedure to delete old password history. The number of records to delete is being passed by variable.

DECLARE hist_cursor CURSOR LOCAL FOR      SELECT history_nr   FROM usr_pwd_hist   WHERE usr_id = @usr_id      ORDER BY    history_nr ASC    OPEN hist_cursor    WHILE @to_delete > 0  BEGIN      FETCH NEXT FROM hist_cursor INTO @hist_val        DELETE FROM usr_pwd_hist WHERE CURRENT OF hist_cursor        SET @to_delete = @to_delete-1  END    CLOSE hist_cursor;   DEALLOCATE hist_cursor;  

I would like to replace this with a set based approach. I can't do a simple top statement because the number of records to delete is a variable. I can't use a top with a variable without dynamic sql and by policy we don't allow dynamic sql in production.

I'm considering this approach below but it makes me nervous as I know that Microsoft is planning on changing the way ROWCOUNT affects return results. By putting the delete targets in a subquery I should be ok with future SQL versions, but I'm still wondering if there is a better way to delete a variable number of records by a chronological order.

SET ROWCOUNT @to_delete;  /* limit records to be deleted */  DELETE FROM usr_pwd_hist WHERE history_nr IN  (      SELECT history_nr       FROM usr_pwd_hist      WHERE usr_id = @usr_id      ORDER BY history_nr ASC    );  SET ROWCOUNT 0;  /* return rowcount to default setting */  

Complex query with multiple normalized fields

Posted: 08 Mar 2013 03:42 PM PST

We have a fairly simple table structure, but with a LOT of fields per table (talking 40+). This data is initially produced in plain-text, user-readable tables, but then it is translated into higher-performance, easier to query tables before being installed for use in production.

What we do is, wherever possible and reasonable we translate certain fields into enumerated values, and keep track of the enumerations in a MasterEnum table. There are usually 20-25 enumerated fields out of 40 or so.

Sample table structure:

Plain text version:

      |  PartNumber   |  Manufacturer  |  SomeData  |  SomeMoreData  |  SomeTextData ...      ----------------------------------------------------------------------------------      |  1x9kdah      |  GizmoCorp     | ThisIsData |  OtherData     |  ThisStaysText ...      |  8xcjkzh      |  GadgetInc     | MoreData   |  OtherData2    |  ThisTooStaysText ...  

Target table sample structure:

      |  PartNumber  |  Manufacturer  |  SomeData   | SomeMoreData  |  SomeTextData ...      -------------------------------------------------------------------------------------      |  1x9kdah     |       1        |    1        |      1        |  ThisStaysText ...      |  8xcjkzh     |       2        |    2        |      2        |  ThisTooStaysText ...  

Master Enumeration Table Structure

      |  FieldName     |  InputText  |  ValueCode |      ---------------------------------------------      |  Manufacturer  |  GizmoCorp  |  1         |      |  Manufacturer  |  GadgetInc  |  2         |      |  SomeData      |  ThisIsData |  1         |      |  SomeData      |  MoreData   |  2         |      |  SomeMoreData  |  OtherData  |  1         |      |  SomeMoreData  |  OtherData2 |  2         |  

We have a means of doing this translation that works and works well; however it's a little on the slow side since all the processing is done in Java via Spring/Hibernate. My question is:

Is there a way to write a single query that would accomplish all the above translations? (Note that we have an excellent way of keeping track of our field definitions programmaticly, so generating complex SQL queries on the fly is not an issue). If it is not possible to do it in a single query, how would I structure queries to iterate over the individual fields and make sure that as the translations happen the data is inserted into the new table remains associated with the correct rows?

Note that it is safe to assume the target table is always empty at the beginning of the process.

Unique Identifier with Extra Characters Still Matching in Select

Posted: 08 Mar 2013 10:41 AM PST

We are using SQL Server 2012 with a unique identifier and we've noticed that when doing selects with additional characters added onto the end (so not 36 chars) it still returns a match to a UUID.

For example:

select * from some_table where uuid = '7DA26ECB-D599-4469-91D4-F9136EC0B4E8'

returns the row with uuid 7DA26ECB-D599-4469-91D4-F9136EC0B4E8

but if you run

select * from some_table where uuid = '7DA26ECB-D599-4469-91D4-F9136EC0B4E8EXTRACHARS'

it also returns the row with the uuid 7DA26ECB-D599-4469-91D4-F9136EC0B4E8

SQL Server seems to ignore all characters beyond the 36 when doing its selects. Is this a bug/feature or something that can configured?

It's not a massive issue as we have validation on the front end for the length but it doesn't seem correct behaviour to me.

Thanks

MySQL5.6 on Mac OS X 10.6.8 problems when granting permissions to root

Posted: 08 Mar 2013 02:55 PM PST

I'm having serious problems with a MySQL 5.6 instance on a Mac Server.

We had to upgrade MySQL to a newer version, but it turned to be a bad idea, as we lost control to it. We had a backup of the /data directory and the my.cnf file as well. However, when setting an init_file to restore the previous password and its permissions. So we created a text file with this content:

UPDATE mysql.user SET Password=PASSWORD('myOriginalPassword') WHERE User='root';  FLUSH PRIVILEGES;  

We have double checked the init_file permissions (we even gave it a chmod 777, to make sure it worked), but something is not working. When we run mysqld_safe --init_file=/usr/local/mysql/bin/init_file it tries to start the server, but then it shuts itself down. The log is as follows:

130308 13:23:52 mysqld_safe Starting mysqld daemon with databases from /usr/local/mysql/data  2013-03-08 13:23:53 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).  2013-03-08 13:23:53 18979 [Warning] Setting lower_case_table_names=2 because file system for /usr/local/mysql/data/ is case insensitive  2013-03-08 13:23:53 18979 [Note] Plugin 'FEDERATED' is disabled.  2013-03-08 13:23:53 18979 [Note] InnoDB: The InnoDB memory heap is disabled  2013-03-08 13:23:53 18979 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins  2013-03-08 13:23:53 18979 [Note] InnoDB: Compressed tables use zlib 1.2.3  2013-03-08 13:23:53 18979 [Note] InnoDB: CPU does not support crc32 instructions  2013-03-08 13:23:53 18979 [Note] InnoDB: Initializing buffer pool, size = 128.0M  2013-03-08 13:23:53 18979 [Note] InnoDB: Completed initialization of buffer pool  2013-03-08 13:23:53 18979 [Note] InnoDB: Highest supported file format is Barracuda.  2013-03-08 13:23:53 18979 [Note] InnoDB: 128 rollback segment(s) are active.  2013-03-08 13:23:53 18979 [Note] InnoDB: Waiting for purge to start  2013-03-08 13:23:53 18979 [Note] InnoDB: 1.2.10 started; log sequence number 19440939  2013-03-08 13:23:53 18979 [ERROR] /usr/local/mysql/bin/mysqld: unknown option '--skip-locking'  2013-03-08 13:23:53 18979 [ERROR] Aborting    2013-03-08 13:23:53 18979 [Note] Binlog end  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'partition'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'PERFORMANCE_SCHEMA'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_SYS_DATAFILES'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_SYS_TABLESPACES'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN_COLS'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_SYS_FIELDS'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_SYS_COLUMNS'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_SYS_INDEXES'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_SYS_TABLESTATS'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_SYS_TABLES'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_FT_INDEX_TABLE'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_FT_INDEX_CACHE'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_FT_CONFIG'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_FT_BEING_DELETED'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_FT_DELETED'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_FT_INSERTED'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_FT_DEFAULT_STOPWORD'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_METRICS'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_BUFFER_POOL_STATS'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE_LRU'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX_RESET'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_CMPMEM_RESET'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_CMPMEM'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_CMP_RESET'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_CMP'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_LOCK_WAITS'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_LOCKS'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'INNODB_TRX'  2013-03-08 13:23:53 18979 [Note] Shutting down plugin 'InnoDB'  2013-03-08 13:23:53 18979 [Note] InnoDB: FTS optimize thread exiting.  2013-03-08 13:23:53 18979 [Note] InnoDB: Starting shutdown...  2013-03-08 13:23:54 18979 [Note] InnoDB: Shutdown completed; log sequence number 19440949  2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'BLACKHOLE'  2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'ARCHIVE'  2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'MRG_MYISAM'  2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'MyISAM'  2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'MEMORY'  2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'CSV'  2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'sha256_password'  2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'mysql_old_password'  2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'mysql_native_password'  2013-03-08 13:23:54 18979 [Note] Shutting down plugin 'binlog'  2013-03-08 13:23:54 18979 [Note] /usr/local/mysql/bin/mysqld: Shutdown complete    130308 13:23:54 mysqld_safe mysqld from pid file /usr/local/mysql/data/server.myServerName.com.pid ended  

The problem may definitely come from the [Error] lines, but as we haven't used the --skip-locking, it's just confusing. If we try to log into mysql using mysql -r -p and set the password we used, it can't connect to the server, as it is not started. Any ideas where this problem may come from?

Table Size Analysis on SQL Server 2000

Posted: 08 Mar 2013 02:37 PM PST

Our SQL Server 2000 database .mdf file is 27Gb large which seems much larger than is plausible. Using the following query we tried to analyse table sizes:

select cast(object_name(id) as varchar(50)) AS name,      sum(CASE WHEN indid<2 THEN rows END) AS rows,      sum(reserved)*8 AS reserved,      sum(dpages)*8 AS data,      sum(used-dpages)*8 AS index_size,      sum(reserved-used)*8 AS unused  from sysindexes with (nolock)      where indid in(0,1,255) and id>100       GROUP BY id with rollup      ORDER BY sum(reserved)*8 desc  

The results were as follows:

Name           Rows       Reserved  Data     Index_Size  Unused  NULL           15274279   26645456  5674592  17361464    3609400  BigTable         875966   16789712   471096  13349816    2968800  
  1. How can we find out which objects are causing this massive NULL space usage?
  2. It seems that approx 26GB are "reserved" for NULL, 16GB for BigTable - is this basically a waste of space or are real records involved?

EMS SQL manager permission problem

Posted: 08 Mar 2013 12:40 PM PST

I have a strange problem with EMS SQL Manager.

I'm using MySQL 5.5 with linux servers. One of my developers need to edit store procedure, so I grant him:

GRANT CREATE ROUTINE, ALTER ROUTINE ON `testdb`.* TO 'testuser'@'192.168.13.11'  

When he clicked edit store in EMS SQL Manager, what he got is:

SELECT command denied to user 'testuser'@'192.168.13.11' for table 'user'  

I find out that EMS SQL Manager ask for select privilege in some tables, so I need to grant this user some extra permissions.

GRANT SELECT ON `mysql`.`user` TO 'testuser'@'192.168.13.11';  GRANT SELECT ON `mysql`.`db` TO 'testuser'@'192.168.13.11';  GRANT SELECT ON `mysql`.`tables_priv` TO 'testuser'@'192.168.13.11';  GRANT SELECT ON `mysql`.`columns_priv` TO 'testuser'@'192.168.13.11';  GRANT SELECT ON `mysql`.`procs_priv` TO 'testuser'@'192.168.13.11';  

So my question is, WHY EMS SQL Manager need those privileges? I can create, drop procedure in mysql command line client without problem.

Write differences between varchar and nvarchar

Posted: 08 Mar 2013 12:25 PM PST

Currently in our SQL Server 2012 database, we're using varchar, and we'd like to change that nvarchar. I've generated a script to do that.

My question is are there any differences in how SQL Server writes to varchar columns vs. nvarchar columns? We have a number of backend procedures that I'm concerned about.

Edit:
Not sure if this helps, but the columns don't have indexes, f/k, or constraints on them.

how to dump a single stored procedure from a database

Posted: 08 Mar 2013 12:34 PM PST

I need to create a dump file that contains a single stored procedure from a database. Not all routines from that database

Deriving formulas for input/output

Posted: 08 Mar 2013 06:34 PM PST

I'm currently enrolled in a DBS class and am having problem with an assignment. I've searched around and have been unable to understand what it is I'm meant to be doing with this derivation formula.

A plant file with TREE-GENUS as the key field includes records with the following TREE-GENUS values: Tsuga, Ficus , Arbutus, Quercus, Melaleuca, Tristaniopsis, Cornus, Sequoiadendron, Lithocarpus, Liriodendron, Pittosporum.
Suppose that records with these search field values are inserted into a random (heap) file with a maximum of 3 records per block. Derive a formula for the expected number of disk I/O to scan these records and to search for a particular record

I've been using some software that was given with the assignment and it also asks what are the maximum number of blocks that are allowed and that is not given by the above brief. I'm not really sure how to derive a formula for this. I've assumed that because there are 3 records per block there are 4 blocks required and that a random heap file uses 1 disk i/o per write/read.

If this is a larger topic than is worth explaining a link to a reliable few pages is also helpful.

How do you extend the tablespace when using innodb_file_per_table?

Posted: 08 Mar 2013 02:34 PM PST

With innodb_file_per_table off, you can create multiple tablespaces on multiple devices if necessary to manage growth, balance I/O, etc.

With the option on, how do you control growth of the files? Do they autoextend? And can you set a maximum and then extend the tablespace for a given table onto another filesystem if necessary?

Window functions cause awful execution plan when called from a view with external parametrized 'where' clause

Posted: 08 Mar 2013 02:34 PM PST

I had this issue long time ago, I found a workaround which suited me and forgot about it. But now there's that question on SO so I'm willing to bring this problem up.


There's a view that joins few tables in a very straightforward way (orders + order lines).

When queried without a where clause, the view returns several million lines.
However, noone ever calls it like that. The usual query is

select * from that_nasty_view where order_number = 123456;  

This returns about 10 records out of 5m.

An important thing: the view contains a window function, rank(), which is partitioned exactly by the field using which the view is always queried:

rank() over (partition by order_number order by detail_line_number)  

Now, if this view is queried with literal parameters in the query string, exactly as shown above, it returns the rows instantly. The execution plan is fine:

  • Index seek on both tables using the indices on order_number (returns 10 rows).
  • Calculating windows over the returned tiny result.
  • Selecting.

However, when the view is called in a parametrized way, things get nasty:

  • Index scan on all tables ignoring indices. Returns 5m rows.
  • Huge join.
  • Calculating windows over all partitions (about 500k windows).
  • Filter to take 10 rows out of 5m.
  • Select

This happens in all cases when parameters are involved. It can be SSMS:

declare @order_number int = 123456;  select * from that_nasty_view where order_number = @order_number;  

It can be an ODBC client, such as Excel:

select * from that_nasty_view where order_number = ?  

Or it can be any other client that uses parameters and not sql concatenation.

If the window function is removed from the view, it runs perfectly quickly, regardless of whether or not it's quieried with parameters.

My workaround was to remove the offending function and reapply it at a later stage.

But, what gives? Is it genuinely a bug in how SQL Server 2008 handles window functions?

[SQL Server] Performance Issue - Memory

[SQL Server] Performance Issue - Memory


Performance Issue - Memory

Posted: 07 Mar 2013 08:33 PM PST

Hi AllOn one of my SQL Servers, I am noticing what I think is memory pressurePLE is +-100Buffer Cache Hit Ratio is 99%I would think that with a low PLE comes a low Buffer Cache Hit Ratio??I am noticing PAGEIOLATCH waits on top of my wait_stats list - I'm assuming that this ties into the low PLE.Server: SQL 2008 ENT on Windows 2008 ENTMemory: Server memory - 50GB, Memory allocated to SQL - 42GBLocked pages in memory is setI have had some user complaints about performanceWhat is the recommended method to approach thisThanks

Using multiple variables in a Query / Stored Procedure

Posted: 08 Mar 2013 02:15 AM PST

Hello all,I've looked online and have found numerous articles pertaining to my issue here, I'm not exactly sure that it correlates perfectly, however.The below query is functional and works properly, however I want to be able to enter 'BUY,SELL' for @TC and have it output the data for both BUY and SELL with regards to column 'TransCode'when @TC = BUY it will spit out the proper data when TransCode = Buy when @TC = SELL it will also spit out the proper data when TransCode = SELLHowever, I'm looking for the ability to combine both 'BUY,SELL' for @TC and have it return both BUY and SELL.Essentially I'm looking for the ability to eventually have my WHERE clause read:WHERE @TC in ('BUY','SELL','CONTRIBUTION') and have it return data when multiples are selected There are many other variables in the TransCode field other than BUY and SELL (HOLD, CONTRIBUTION, ON ACCOUNT etc..) so bringing back the entire column is not an option... any ideas?declare @idnum varcharselect @idnum = 1declare @TC varchar(50)select @TC = 'BUY'IF @TC = 'ALL'BEGINSELECT @TC as TC, @idnum as IDNum, B.GroupDetail, C.SecID, C.TransCode, C.Quantity, B.GroupId, C.PortfolioIDFROM ERTutTransactions CINNER JOIN FinalGroupDetail BON C.PortfolioID = B.PortfolioIDAND B.GroupId = 1ORDER BY SecIDEND ELSESELECT @TC as TC, @idnum as IDNum, B.GroupDetail, C.SecID, C.TransCode, C.Quantity, B.GroupId, C.PortfolioIDFROM ERTutTransactions CINNER JOIN FinalGroupDetail BON C.PortfolioID = B.PortfolioIDAND B.GroupId = 1WHERE TransCode = @TCORDER BY SecIDEND

Had a job failure but it doesn't appear in the job history

Posted: 08 Mar 2013 01:36 AM PST

Hello,I am documenting job failures in the last two months and I cannot find the one that we had in that period. I looked in the job history and also in the event logs. Can a job fail without recording it? Is there something I have to set to make it write to the job history (I've never known that to be the case)? Thank you in advance.

Update multiple columns in multiple rows

Posted: 07 Mar 2013 07:54 PM PST

I have a table where I need to update multiple columns in each record (multiple rows) to match a single record already in the table.For example:[code]Col 1 Col 2 Col 3 Col 410000 20.00 1 010001 0 0 110002 15.00 1 010003 0 0 0[/code]So, for the above I would like to update values in Col 2, Col 3, Col 4 for records in (10001, 10002, 10003) to match values in Col 2, Col 3, Col 4 where Col 1 = 10000I would like the data to end up like:[code]Col 1 Col 2 Col 3 Col 410000 20.00 1 010001 20.00 1 010002 20.00 1 010003 20.00 1 0[/code]Is it possible to do this in a single statement?

Business Intelligence SQL Server 2008 where to start?

Posted: 07 Mar 2013 04:57 PM PST

Hi guys,Can you recommend a site or a book where I can learn BI for sql server 2008. I know there are tons of books out there by searching google. But I want your recommendations which could save me lots of learning time. regards,JM

Delete trigger with parameters from webform

Posted: 07 Mar 2013 10:56 PM PST

I have a web form with a gridview of a sql table that will allow the user to delete a row. This is a supervisory task that is necessary since updating the table would be a logistical nightmare. I would like to use a delete trigger within SQL Server 2008 R2 that will move the information into a delete history table before it deletes the row. The question that I have is I want to pass 3 parameters to the history table at the same time. Also, in the delete triggers how do I identify fields that need to be inserted into the history table and how do I identify the parameters. One parameter comes from a drop down list on the form. The other two are current date and time which can be calculated either in the trigger or on the form. I also need to pass the username which is cached using Windows Authentication on the form. I have the form completed and the trigger started, but I am getting stuck on the insert syntax especially relating to the parameters.

Worktables & Hash Tables

Posted: 29 Jan 2013 09:14 PM PST

Hi AllAfter testing and researching - I want to make sure that my understanding is correct regarding Worktables & Hash Tables>Worktables & Hash Tables are 2 different things>Hash tables are only created when there is a Hash Join.>Worktables are created when immediate results need to be stored somewhere>Worktables & Hash Tables are both created in Memory first and spill to tempdb if necessary>A Merge join will not employ a worktable if both inputs are uniquePlease advise if I am on the right track here?Thanks

[Articles] Writing

[Articles] Writing


Writing

Posted: 07 Mar 2013 11:00 PM PST

This Friday's poll asks about one of Steve Jones' favorite activities.

[MS SQL Server] Determine database grow per month

[MS SQL Server] Determine database grow per month


Determine database grow per month

Posted: 07 Mar 2013 06:59 PM PST

Is it possible to determinate how much disk space an specific database grow per month?in sql server 2008Any one advice me,, ThanksJerry

DB Restore - Stopping All Connections

Posted: 08 Mar 2013 02:46 AM PST

I'm working on database restores on a server and have run into some issues. We have multiple databases that are being slammed by applications at all points throughout the day. When doing a normal restore with "kill connections" through Redgate and/or SQL, the restores will fail about 50% of the time because of active connections. My new restore script is complicated but starts with setting each database to single_user. This has helped and now restores fail about 30% of the time but it's still not good enough.I have what I believe to be one of two options, either find all of the tasks/apps that hit our databases, which will probably be a couple day process, and stop those during the restores. Or find a good way to shut down all connections to the entire server except for the restore process. I'd prefer to do the later but don't know of a good way to do it automatically and through SQL Server. Can anyone help?Thanks.

Linked server

Posted: 07 Mar 2013 03:16 PM PST

Hi What is the linked server and how it will work?how can we connect or configure the linked server.can anyone please elaborate this because i am new to this

How can you issue a full backup and not interrupt the LSN's

Posted: 07 Mar 2013 02:38 PM PST

My two cents worth...I was reading an article the other day on junior dba questions and saw the question:How can you issue a full backup and not interrupt the LSN's?*Issue a copy only backup.(http://www.mssqltips.com/sqlservertip/1626/junior-sql-server-dba-interview-questions/)I have seen this answer many times and always thought it to be a half truth as I was a initially a bit confused about the role of copy_only backups when they first appeared and some answers mislead me a bit.To me it reads as though any full backup done outside the daily full backup will break the chain, or will be needed in order to restore to point in time (or the last log backup).Very happy to be corrected but is it true that the Log Chain is only interrupted by the full backup [b]IF [/b]the backup regime does Differential backups? If the backup regime only consists of a Full Backup and Log backups then you can do as many full backups as you want (and delete them) and the log chain will not be broken. i.e you can restore a backup from 6 months ago and restore to point in time as long as you can restore every backup log since in sequence -- regardless of continuing daily backups.thanks

Database Refresh

Posted: 13 Feb 2013 07:11 PM PST

Hi,What is the database refresh. how it will work.How to do the database refresh in sql server real time environment.Thanks in advance.

Database Timeouts : Error -2 Severity 11

Posted: 07 Mar 2013 09:34 PM PST

Hi Folks,at times with heavy workload, my devs tell me they receive a lot of database timeouts. A the database I got 2 stored procs which get heavily executed. If I run the queries at ssms they take about 1000 ms. The execution plan looks good. But the cpu goes up to 100% and I got a lot of SOS_SCHEDULER_YIELDS. I've allready gone through all the points of chapter 3 in the book "the accidental dba", but none of the reasosn fit. Why is the query so ressource intensive?

how to know the delta data volume processed every day in mb

Posted: 07 Mar 2013 05:57 PM PST

Hi I wanted to know if there is a way to figure out how many bytes of data have been processed while performing the delta on a table. I can easily get the delta count of every delta job run, but I'm not aware if there is a way to figure out how many bytes are processed based on the delta row counts or other. We use Informatica for our ETLs if this helps to get the delta data volume info.Please help and suggest of all possible ways to get this info.

[SQL 2012] Newbie Question - Saving SSIS Packages

[SQL 2012] Newbie Question - Saving SSIS Packages


Newbie Question - Saving SSIS Packages

Posted: 07 Dec 2012 02:49 AM PST

This is a really basic question but I can't seem to figure out how to save a copy of an SSIS package to the SQL Server. I've saved it locally but I want to upload it to the server. When I used BIDS, I just: 1. Double-clicked on the package2. Chose file > Save Copy of Package As3. Selected SQL Server as the package location and entered the server nameNow I'm using SSDT. When I choose the same option, "Save Copy of Package As" it doesn't show the option for package location.I was able to upload it by logging into integration services from SSMS and importing the package but I thought there had to be a way to do it from SSDT.Sorry, I know it's a stupid question. I've only been at this for 2 months. :unsure:

This index operation requires 8192 KB of memory per DOP

Posted: 08 Mar 2013 12:52 AM PST

One of my sproc has the following bit of SQL:[code="sql"]IF NOT EXISTS (SELECT 1 FROM sys.columns WHERE name = 'idxMatchKey1' AND OBJECT_ID = OBJECT_ID(@SourceTable + '_keys_')) EXEC('ALTER TABLE dbo.' + @SourceTable + '_keys_ add idxMatchKey1 VARBINARY(8000)') EXEC('UPDATE dbo.' + @SourceTable + '_keys_ SET idxMatchKey1 = HASHBYTES(''SHA2_512'', mkPostOut+mkPostIn+mkName1) WHERE NULLIF(mkPostOut, '''') IS NOT NULL AND NULLIF(mkPostIn, '''') IS NOT NULL AND NULLIF(mkName1, '''') IS NOT NULL') IF NOT EXISTS (SELECT 1 FROM sys.indexes WHERE name = 'idx_MatchKey1' AND OBJECT_ID = OBJECT_ID(@SourceTable + '_keys_')) EXEC ('CREATE INDEX idx_MatchKey1 ON dbo.' + @SourceTable + '_keys_(idxMatchKey1)')[/code]This has been running fine. The only thing that has changed is that I've created additional tempdb files, moved temdb to a different drive and restarted SQL Server.This is the full error message:[b]Warning! The maximum key length is 900 bytes. The index 'idx_MatchKey1' has maximum length of 8000 bytes. For some combination of large values, the insert/update operation will fail.The statement has been terminated.Msg 8606, Level 17, State 1, Line 1This index operation requires 8192 KB of memory per DOP. The total requirement of 131336 KB for DOP of 16 is greater than the sp_configure value of 8192 KB set for the advanced server configuration option "index create memory (KB)". Increase this setting or reduce DOP and rerun the query.[/b]Please advise?

BIDS (SQL Server Business Intellence Studio) has been replaced by Data Tools (SSDT)??

Posted: 07 Mar 2013 07:08 PM PST

I have a SQL Server 2012 enterprise Instance and I've been asked to install 'BIDS' on several 32 bit XP Clients. When I attempted to use my SQL Server 2012 installation media - I couldn't because the media would not run on a 32bit client (surprise surprise)Made sense so I hit google.. Then I discoverd that BIDS (SQL Server Business Intellence Studio) has been replaced by Data Tools (SSDT) in SQL Server 2012... is that correct? If that is the case, does anyone know if I install the BIDS on the XP Client will they be able to use it against the SQL Server 2012 instance? Not sure if it is forward compatible.If it's NOT compatible, I figure my two options are: stand up a 64 bit Windows 7 client for them and install the 2012 Data toolsTry and get a hold of SQL Server 2012 32bit install media (I suspect this does not exist)Any help or suggestions would be greatly appreciated!!

Setting up SQL Server 2012 AlwaysOn, can't get error msgs to clear

Posted: 07 Mar 2013 09:00 AM PST

I'm trying to set up an SQL 2012 Availability Group between 'ServerA' and 'ServerB'On the source server (ServerA) I get the following message in the 'AlwaysOn High Availability' tab in SQL Server Configuration Manager: 'An error occurred while loading the AlwaysOn High Availability properties [return code: 0x80070005]. This server is running Windows Server 2008 R2 Enterprise and SQL Server 2012 SP1 Enterprise edition, but I can't confirm if the hotfix in KB2494036 has been installed as I don't have admin rights on the server (don't ask). The server admin said he was going to install it last night but I haven't confirmed with him, and he's left the office for the night.On the destination server (ServerB) I get the standard error message saying that SQL 2012 Enterprise Edition, Windows Server 2008 and KB2494036 need to be installed. This server is also running Windows Server 2008 R2 Enterprise and SQL Server 2012 SP1 Enterprise edition, and I can confirm the hotfix has been installed.When I try to run the Availability Group wizard on ServerA, on the Replicas tab, when I put in the credentials for ServerB, I get the error message: "Cannot connect to QA-SQL-LOAD1\VRS70. AlwaysOn feature is not enabled on SQL Server instance 'QA-SQL-LOAD1\VRS70'. (Microsoft.SqlServer.Management.HadrTasks)"Getting clear installation and troubleshooting documentation on setting this up has been quite challenging (which might be part of my problem...) - does anyone have any ideas on where to go from here?TIA,Mike

[T-SQL] Is a doomed transaction inevitable if deadlock occurs in a trigger (SQL Server 2008 R2)?

[T-SQL] Is a doomed transaction inevitable if deadlock occurs in a trigger (SQL Server 2008 R2)?


Is a doomed transaction inevitable if deadlock occurs in a trigger (SQL Server 2008 R2)?

Posted: 27 Feb 2013 01:37 AM PST

Hi all, Below are two scenarios, however, what I am trying to accomplish is to catch deadlock errors and re-try the DML that was chosen as the deadlock victim. Does anyone know of way to accomplish what I am trying to accomplish without a doomed transaction given the constraints? I have seen many posts that this appears to be a limitation of the dbengine, but am looking for confirmation. In both scenarios, I have a SProc - let's called it "X" - that is called mostly from within triggers (that performs some denormalized calculations and cannot be changed at this time) - in which deadlocks are often occurring. Scenario 1 •In SProc X, I want to catch any 1205 errors, and re-try the query that receives the 1205 error•However, whenever I re-try the query I receive an error that the transaction is "The current transaction cannot be committed and cannot support operations that write to the log file. Roll back the transaction" Scenario 2 •In SProc Y, that runs DML statements that will fire triggers that call SProc X, I want to catch any 1205 errors, and re-try the query that receives the 1205 error•If SProc Y is called outside any transaction, I can begin a new transaction and rollback if a deadlock occurred and successfully retry the DML•However, if SProc Y is called inside a transaction, I need to perform a SAVE TRAN and rolback to the savepoint. Whenever I re-try the query I receive an error that the transaction is "The current transaction cannot be committed and cannot be rolled back to a savepoint. Roll back the entire transaction.". For my logic to work, I cannot rollback the entire transaction, so SProc Y only performs as expected outside a transaction TIA, Dave

XML nodes extraction

Posted: 07 Mar 2013 09:02 PM PST

Hi, Thanks for your help in advance, working with XML nodes for the first time and finding it frustrating. I am trying to build a relational table from some xml. setting each xml level to a new column in a table.[code="sql"]CREATE TABLE ClientInfo( ID INT PRIMARY KEY IDENTITY, XMLInfo XML);INSERT INTO ClientInfo (XMLInfo)VALUES(N'<?xml version="1.0" ?><Root> <key>29</key> <children> <key>500</key> <children> <key>3000</key> <children> <key>70000</key> </children> </children> </children> <key>50</key> <children> <key>600</key> <children> <key>3600</key> <children> <key>9862</key> </children> <children> <key>9863</key> </children> <children> <key>9864</key> </children> </children> </children> <key>70</key> <children> <key>900</key> <children> <key>3700</key> <children> <key>7862</key> </children> <children> <key>6863</key> </children> <children> <key>5864</key> </children> </children> <children> <key>5400</key> <children> <key>7892</key> </children> <children> <key>6563</key> </children> <children> <key>5464</key> </children> </children> </children></Root>');select * from [dbo].[ClientInfo]SELECT aref.value('(key/text())[1]', 'varchar(50)') as Column1, bref.value('(key/text())[1]', 'varchar(50)') as Column2, cref.value('(key/text())[1]', 'varchar(50)') as Column3, dref.value('(key/text())[1]', 'varchar(50)') as Column4FROM ClientInfo CROSS APPLY XmlInfo.nodes('//Root') AS Level1(aref) CROSS APPLY aref.nodes('children') AS Level2(bref) CROSS APPLY bref.nodes('children') AS Level3(cref) CROSS APPLY cref.nodes('children') AS Level4(dref)GO[/code]When I run this query column 1 just has the first ID in this case 29 ? I would very much like it to report the other key's adjacent to their relevant children keys. Why is this not working for me.Any useful material you can forward on would be gratefully appreciated.Many Thanks,Oliver

Syntax error when using IN statement inside inner query

Posted: 07 Mar 2013 06:59 PM PST

Hi,I'm getting a syntax error on the following query designed to return the newest status from a one to many relationship based on a dynamic list of IDs passed via an IN statement. The inner query runs fine on its own, as does the entire query when the IN statement is removed. The error I keep getting is "Error in list of values in IN clause. Unable to parse query text." Any assistance with this would be greatly appreciated.SELECT ToBeActionedBy, COUNT(ToBeActionedBy) AS ActionCount FROM (SELECT (SELECT TOP (1) ToBeActionedBy FROM dbo.PStatus AS PStatus_1 WHERE (POID = dbo.POrder.ID) ORDER BY ID DESC) AS ToBeActionedBy FROM dbo.POrder INNER JOIN dbo.Address ON dbo.POrder.Company_Name = dbo.Address.Company_Name WHERE (AccountNumber IN ('PARISH') AND (dbo.Address.ID IN (11743,11899)) AND (Complete = 0)) AS A WHERE (ToBeActionedBy Is Not NULL) GROUP BY ToBeActionedBy

Time Problem

Posted: 07 Mar 2013 01:27 AM PST

Hi,I'm trying to figure out a way to calculate a weighted average, but having trouble with the time part.Here are some example records:Time Value2013-03-02 13:40:00 502013-03-02 14:00:00 1002013-03-02 14:20:00 2002013-03-02 14:30:00 1502013-03-02 14:50:00 3002013-03-02 15:00:00 200What I'm trying to do is figure a way to find the minute differences by each hour. For example:Time Minute Difference2013-03-02 14:00:00 202013-03-02 14:20:00 202013-03-02 14:30:00 102013-03-02 14:50:00 202013-03-02 15:00:00 102013-03-02 17:00:00 120Could someone please help me?Thank you.

Urgent help need to verify DateDiff is correct?

Posted: 07 Mar 2013 06:35 AM PST

Hi Friends,my requirement is- "SmartLaborII Work Order End Date" is NOT greater than "SmartLaborII Previous Work Order End Date"]-and the code I'm using is-//datediff(day,[WorkForce_WorkOrder].[Work Order End Date_WO] ,[WorkForce_WorkOrder].[Previous Work Order End Date_WO] ) >= '0'//Since I'm new to TSQL world, wanted to know if my above code for my requirement is correct ? if not what should be the right code and why please?Kind RegardsDhananjay

Query Help

Posted: 07 Mar 2013 06:49 AM PST

Hi, I am in need of some Sum and Grouping help. The below query is returning multiple rows and I only want one returned With the open amounts Summed and 1 voucher amount representing a summary of the three below rows.Thanks for any help you can provide.[IMG]http://i1306.photobucket.com/albums/s580/mtb1973/APTable_zps51b9e6be.jpg[/IMG] [code="sql"]SELECT 'SourceName' AS SourceDB , 'NAME' AS 'SiteName' , RTRIM(APIBH.IDVEND) AS VendorNumber , RTRIM(APIBH.IDINVC) AS VoucherNumber , APIBH.AMTGROSDST AS VoucherTotalDomestic , (APIBH.AMTGROSDST * APIBH.EXCHRATEHC) AS VoucherTotalUSD , (APIBH.AMTGROSDST - ISNULL(APTCP.AMTPAYM, 0)- APTCP.AMTERNDISC) AS OpenAmountDomestic , ((APIBH.AMTGROSDST * APIBH.EXCHRATEHC) - ISNULL(APTCP.AMTPAYM, 0)* APIBH.EXCHRATEHC)- (APTCP.AMTERNDISC * APIBH.EXCHRATEHC) AS OpenAmountUSD , dbo.udf_convert_int_date(APIBH.DATEBUS) AS PostedDate , dbo.udf_convert_int_date(APTCR.DATEBUS) AS AppliedDate , dbo.udf_convert_int_date(APIBH.DATEINVC) AS AgingDate , dbo.udf_convert_int_date(APIBH.DATEDUE) AS DueDate , dbo.udf_convert_int_date(APIBH.DATEINVC) AS DocumentDate , NULL AS ReceivedDate , CASE WHEN (APTCR.DATERMIT) = 0 THEN NULL ELSE dbo.udf_convert_int_date(APTCR.DATERMIT) END AS PaidDate , CASE WHEN (APIBH.DATEDISC) = 0 THEN NULL ELSE dbo.udf_convert_int_date(APIBH.DATEDISC) END AS DiscountDate , CONVERT(bigint, CASE WHEN APIBH.AMTGROSDST - ISNULL(APTCP.AMTPAYM, 0) = 0 THEN dbo.udf_convert_int_date(APTCR.DATERMIT) ELSE GETDATE() END - dbo.udf_convert_int_date(APIBH.DATEINVC)) AS AgeDays , CONVERT(bigint, dbo.udf_convert_int_date(APIBH.DATEDUE) - CASE WHEN APIBH.AMTGROSDST - ISNULL(APTCP.AMTPAYM, 0) = 0 THEN dbo.udf_convert_int_date(APTCR.DATERMIT) ELSE GETDATE() END) AS DueDays , APIBH.CODECURN AS CurrencyCode , APIBH.IDTRX, ISNULL(APTCP.AMTPAYM, 0) AS PayAmt ,APIBH.EXCHRATEHC AS EffectiveExchangeRateFROM MyTable.APIBH AS APIBH LEFT OUTER JOIN MyTable.APTCP AS APTCP ON APIBH.IDVEND = APTCP.IDVEND AND APIBH.IDINVC = APTCP.IDINVC INNER JOIN MyTable.APTCR AS APTCR ON APTCP.BATCHTYPE = APTCR.BTCHTYPE AND APTCP.CNTBTCH = APTCR.CNTBTCH AND APTCP.CNTRMIT = APTCR.CNTENTRWHERE (1 = 1) AND (APIBH.ERRBATCH = 0) --AND (APIBH.FISCYR >= '2010') AND (APIBH.IDTRX <> 32) AND APIBH.IDINVC = '010106'[/code]

Coalesce question?

Posted: 07 Mar 2013 02:55 AM PST

I am trying to concatenate the URL column into one csv value for each ApplicantID. I am not having luck achieving the results I am looking for. Any help most appreciated...SELECTFASTFacultyApps.LastName,COALESCE([FacultyApps_TEST].[dbo].[VideoLinks].URL + ',', '') + [FacultyApps_TEST].[dbo].[VideoLinks].URL AS VidListFROM[FacultyApps_TEST].[dbo].FASTFacultyAppsJOIN [FacultyApps_TEST].[dbo].[VideoLinks]ON[FacultyApps_TEST].[dbo].[VideoLinks].ApplicantID = [FacultyApps_TEST].[dbo].FASTFacultyApps.IDWHERE[FacultyApps_TEST].[dbo].FASTFacultyApps.isArchived = 0RESULT:[img]http://www.danforthcenter.org/GMDW/images/Results.jpg[/img]TABLES:[img]http://www.danforthcenter.org/GMDW/images/Vids.jpg[/img][img]http://www.danforthcenter.org/GMDW/images/Applicants.jpg[/img]TIA,Andrew

determine first instance as one type and all other instances as another type

Posted: 07 Mar 2013 03:06 AM PST

Need to identify the first / earliest certDate matching cert_id as 'Initial" and each additional cert_ID as 'Re-certification' for each Field_id for each ABR_ID. [code="plain"]ABR_ID CERT_ID Field_ID certDate31183 31996 DR 1987-12-13 00:00:00.00031183 31997 SV 1995-02-28 00:00:00.00031183 31998 SV 2004-07-16 00:00:00.000[/code]Given the example above cert_ids 31996 and 31997 would be considered as Initial certification and cert_ID 31998 would be considered a Re-certification.Build the table[code="sql"]SET ANSI_NULLS ONGOSET QUOTED_IDENTIFIER ONGOCREATE TABLE [dbo].[jwmTestCert]( [abr_ID] [nvarchar](10) NOT NULL, [cert_id] [int] NOT NULL, [field_id] [nvarchar](3) NOT NULL, [certDate] [datetime] NOT NULL, [ExpireYear] [nvarchar](4) NULL, [status] [nvarchar](25) NOT NULL, [statusDate] [datetime] NOT NULL) ON [PRIMARY]GO[/code]Populate[code="sql"]insert into jwmTestCert (abr_ID,Cert_ID, Field_ID, certDate) values (31183,31996,'dr', '1987-12-13')insert into jwmTestCert (abr_ID,Cert_ID, Field_ID, certDate) values (31183,31997,'sv', '1995-02-28')insert into jwmTestCert (abr_ID,Cert_ID, Field_ID, certDate) values (31183,31996,'sv', '2004-07-16')[/code]Thank you for your time.

RANK ISSUE

Posted: 07 Mar 2013 04:57 AM PST

NOTICE THE DESIRED OUTPUT ON THE BOTTOM. SELECT 'table1' TABLENAME,'col1' COLNAMEINTO #temp UNION ALL SELECT 'table1','col2'UNION ALL SELECT 'table1','col3'UNION ALL SELECT 'table2','col1'UNION ALL SELECT 'table2','col2'UNION ALL SELECT 'table3','col1'SELECT *FROM #temp --DESIRED OUTPUT ID COL:--ID TABLENAME COLNAME--1 table1 col1--2 table1 col2--3 table1 col3--1 table2 col1--2 table2 col2--1 table3 col1

How to add item No. in SELECT

Posted: 07 Mar 2013 02:29 AM PST

Hello,I have a table like :[code="sql"]DECLARE @TEST1 TABLE (C_NAME varchar(10), REQ_ITEM VARCHAR(5))INSERT INTO @TEST1(C_NAME,REQ_ITEM) VALUES('John', 'Item1')INSERT INTO @TEST1(C_NAME,REQ_ITEM) VALUES('John', 'Item2')INSERT INTO @TEST1(C_NAME,REQ_ITEM) VALUES('Jake', 'Item2')INSERT INTO @TEST1(C_NAME,REQ_ITEM) VALUES('Jake', 'Item3')INSERT INTO @TEST1(C_NAME,REQ_ITEM) VALUES('Jake', 'Item4')INSERT INTO @TEST1(C_NAME,REQ_ITEM) VALUES('Jake', 'Item5')INSERT INTO @TEST1(C_NAME,REQ_ITEM) VALUES('Joe', 'Item1')INSERT INTO @TEST1(C_NAME,REQ_ITEM) VALUES('Joe', 'Item5')INSERT INTO @TEST1(C_NAME,REQ_ITEM) VALUES('Jane', 'Item3')[/code]How can I show something like :[code="plain"]C_NAME ITEM_NO REQ_ITEM-------------------------------------John 01 Item1John 02 Item2Jake 01 Item2Jake 02 Item3Jake 03 Item4Jake 04 Item5Joe 01 Item1Joe 02 Item5Jane 01 Item3[/code]Thanks in advance.

[SQL Server 2008 issues] Extract text value from a column having alphanumeric value

[SQL Server 2008 issues] Extract text value from a column having alphanumeric value


Extract text value from a column having alphanumeric value

Posted: 07 Mar 2013 04:52 PM PST

Hi All,I am using SQL server 2008. In my database I have a table sat "Employee" having a column say "Height" now in that cocolumnaeave value along with the units e.g. 5ft or 65in like this. Now I want to write a query that will seseparateut the value on the basis of test and numeric value. Something like this:Select Height[text] as HeightValue, Height[numeric] as HeightUnit from Employeeand I should get the result as HeightValue HeightUnit5 ft65 inSomething like the above example.Can we have such query? Please suggest.Regards,Girish Nehte

Query optimization

Posted: 07 Mar 2013 04:09 PM PST

Hello friends,I have table contains millions of records.When i fired query to get record by id. Query get millions of rows for that it take 2-3 second time.How can i optimize query?My sql query is:SELECT * FROM Table1 WHERE intId = @intId AND bintTimeStamp >= @bintStartTimeStamp AND bintTimeStamp < @bintEndTimeStamp ORDER BY bintTimeStamp

Inserting currency symbols in money datatypes

Posted: 07 Mar 2013 03:21 PM PST

Hi,I heard we can insert currency Symbols like $ to Money and small money datatypes. But while i am trying i realized we can insert $ into money datatype but it will not display symbol while viewing table content. Am i missing anything or it is like that only? If so What is need of money datatype because we can use decimal or nvarchar only?Thank you

confused With View , Function and Store Procedure

Posted: 03 Mar 2013 06:08 PM PST

i get confused where to use function , store procedure and view.plz help me;-)

How to find all column dependencies including triggers?

Posted: 07 Mar 2013 10:03 AM PST

Hi,There was a situation when a column was removed from a table via some external web application, and after that a bunch of "invalid column name" sql errors generated in app's log.The reason was that due to apparently app's bug, it did not update table's trigger that was still using the deleted column name in its code... What would be a best way to see ALL column dependencies?Thanks!

modifying a SP

Posted: 07 Mar 2013 08:03 AM PST

We already have a SP which selects data from db.I need to modify the SP so that it selects one more column called supervisor.The code to get the supervisor data is SELECT * INTo #sStaffNamesFROM (select p.personid,p.LastName + N', ' + p.FirstName AS Supervisor, S.StaffID---from Staff as s inner join people as pon s.PersonID = p.personid)A select peoplelinkid ,staffassigned,Supervisorfrom clientinfo as ci LEFT JOIN #StaffNames as s ON Ci.StaffAssigned=S.StaffIDwhere ci.staffassigned<>0 drop table #sstaffnamesThis code works perfectly and I am able to get the desired result.Now the problem is I am not able to figure out how to put the above code in a SP which selects data.Thanks in advance

Maintenance Plan issue

Posted: 07 Mar 2013 03:14 PM PST

Hi all, I created a maintenance plan to perform backups for all the databases in my test box. but the job is failing every time.it is not able to take the backup of a one particular user created database and taking backups of all other databases. can any one has any idea on this issue.

Create File without xp_cmdShell

Posted: 07 Mar 2013 04:54 AM PST

Hello,Is there a way to create a text file in the file system (Win) without xp_cmdshell / sp_configure?RegardsNicole ;-)

AD groups and login permissions

Posted: 07 Mar 2013 08:24 AM PST

Here's the situation.I have some logs that seems to show someone executing a procedure I don't think they should have been able to execute. They shouldn't even be able to get to the database, much less execute this proc.The permissions are handled through AD groups, and this person doesn't belong to any of the groups that can access the database, from what I can tell. But I wonder if there's some kind of nesting (groups belonging to another group) thing going on.Anyway, my question, is given a person's domain login, can I find out what effective SQL permissions they have?

looping through databases

Posted: 07 Mar 2013 07:29 AM PST

I am working on a sql task where i have to loop through all the databases in the server and do operations on those databases.. For example: selecting db1 and executing some select statements, selecting db2 and executing some select statements so on.. here is the code [quote]DECLARE @Loop intDECLARE @DBName varchar(max)declare @maxRow intset @Loop = 1;SET @DBName = ''SELECT @maxRow=max(database_id) FROM sys.databases WHILE (@Loop <= @maxRow)BEGIN SELECT TOP 1 @DBName = d.Name FROM master.sys.databases d WHERE d.Name > @DBName AND d.database_id not in (1, 2, 3, 4) and d.state_desc = 'ONLINE' ORDER BY d.Name set @Loop = @Loop+1; PRINT @DBNAME END[/quote]But the result is[quote]db1db2db3db4db4db4[/quote]I know the culprit is here[quote]SELECT @maxRow=max(database_id) FROM sys.databases WHILE (@Loop <= @maxRow)[/quote]But I couldn't go further.. Confused how to approach

Archiving

Posted: 14 Jan 2013 02:13 AM PST

MSSQL2008 with 1 TB Harddisk spaceDatabases: DB1 - data for 2012.DB1_Arch - data for 2011.*** Inherited this server; looks like he idea was DB1 is the production; and DB1_Arch is the archive of DB1 [u]Question 1[/u]: When I issue the following query[quote]SELECT file_id, name, type_desc, physical_name, size /1024 / 1024 AS gbsize, max_size/1024/1024 AS gbmaxsizeFROM sys.database_files ;[/quote]I get [b]96 gbsize [/b]for Data file and [b]2gb[/b] for Log file for DB1. However, when I do Reports, Standard Reports, Disk Usage on DB1, I get 790 xxx.xx MB for data file size (which is about [b]770gb[/b]) and 18 xxx.xx MB (or about [b]17gb[/b]) in log file size. Which is right? [u]Question 2[/u]: There's a request to archive 2012 data to DB1_Archive; then make a backup of it to a different file server. Then delete data from the current DB1 database. Immediate issue I'm facing - if DB1 is in fact 770gb, and total disk is only 1 TB; how do I archive to DB1_Archive first before deleting from DB1?? The math for the space just isn't there. Any suggestions?Thanks

Slow Cascade Stored Procedure & Hang

Posted: 07 Mar 2013 07:29 AM PST

Hi All,So we have this stored procedure that runs fine most of the times, but we have occasionally encountered that it hangs infinitely and also is very slow. I have pasted the stored procedure code below. Let me know what you'll thinkSELECT @rowCounter = 1, @totalrows = @@ROWCOUNT WHILE @rowCounter <= @totalrows BEGIN SELECT @currentId = tempId FROM @temp WHERE row = @rowCounter SELECT @newModeledCost = case when not exists (select 1 from dbo.DIM_SCENARIO where SCENARIO0_Name = SCENARIO and SCENARIO2_Name = 'Model') then ISNULL(DriverValue1,0)*ISNULL(DriverValue2,0)*ISNULL(UnitA,0)*ISNULL(UnitB,0)+ISNULL(FixedCost,0) else (ISNULL(unita,0) * (ISNULL(DriverValue1,0)/ISNULL(NULLIF(DriverValue2,0),1))* ISNULL(UnitB,0))+ISNULL(FixedCost,0) end , @oldModeledCost = ISNULL(ModeledCost,0), @newOct = (ISNULL(@newModeledCost,0) * (ISNULL(Oct, 0) /ISNULL(NULLIF(@oldModeledCost,0),1))), @newNov = (ISNULL(@newModeledCost,0) * (ISNULL(Nov, 0) /ISNULL(NULLIF(@oldModeledCost,0),1))), @newDec = (ISNULL(@newModeledCost,0) * (ISNULL(Dec, 0) /ISNULL(NULLIF(@oldModeledCost,0),1))), @newJan = (ISNULL(@newModeledCost,0) * (ISNULL(Jan, 0) /ISNULL(NULLIF(@oldModeledCost,0),1))), @newFeb = (ISNULL(@newModeledCost,0) * (ISNULL(Feb, 0) /ISNULL(NULLIF(@oldModeledCost,0),1))), @newMar = (ISNULL(@newModeledCost,0) * (ISNULL(Mar, 0) /ISNULL(NULLIF(@oldModeledCost,0),1))), @newApr = (ISNULL(@newModeledCost,0) * (ISNULL(Apr, 0) /ISNULL(NULLIF(@oldModeledCost,0),1))), @newMay = (ISNULL(@newModeledCost,0) * (ISNULL(May, 0) /ISNULL(NULLIF(@oldModeledCost,0),1))), @newJun = (ISNULL(@newModeledCost,0) * (ISNULL(Jun, 0) /ISNULL(NULLIF(@oldModeledCost,0),1))), @newJul = (ISNULL(@newModeledCost,0) * (ISNULL(Jul, 0) /ISNULL(NULLIF(@oldModeledCost,0),1))), @newAug = (ISNULL(@newModeledCost,0) * (ISNULL(Aug, 0) /ISNULL(NULLIF(@oldModeledCost,0),1))), @newSep = (ISNULL(@newModeledCost,0) * (ISNULL(Sep, 0) /ISNULL(NULLIF(@oldModeledCost,0),1))) FROM dbo.Calculations WHERE BudgetId = @currentId UPDATE dbo.Calculations SET ModeledCost = @newModeledCost, Oct = @newOct, Nov = @newNov, Dec = @newDec, Jan = @newJan, Feb = @newFeb, Mar = @newMar, Apr = @newApr, May = @newMay, Jun = @newJun, Jul = @newJul, Aug = @newAug, Sep = @newSep, Username = 'Cascade', lastmodified = getdate() WHERE BudgetId = @currentId AND @oldModeledCost <> 0 SET @rowCounter = @rowCounter + 1 END

computed column or trigger

Posted: 07 Mar 2013 04:25 AM PST

[code="sql"]CREATE TABLE dbo.#score ( StudentID int IDENTITY (1,1) NOT NULL , Math int , science int ,History int ,Sports int ,Least_score int);-- Insert values into the table.INSERT INTO dbo.#score (Math, science,History,Sports)VALUES (5, 6,8,10), (3, 7,0,9)select * from #scoreI want to make a least_score as a computed column or by use of some trigger .So that the least_score column should get updated automatically with the least value out of Math,Science,History and sports.In my case least_score for student_id=1 should be 5 and for student_id=2 it should be 0.How we can do this?.update #scoreset Least_score=5where StudentID=1update #scoreset Least_score=0where StudentID=2select * from #score[/code]

Oracle to SQL Server Migration - Database Size Estimation in SQL Server

Posted: 07 Mar 2013 03:51 AM PST

Hi we are migrating Oracle database to SQL Server the database size in Oracle is 5 TB. what wil be the size of this Oracle database in SQL Server once it is migrated. There might be the concerns abt what datatypes i am using, indexes and all. Counting all this concerns in general can i have a rough estimate..Thanks a ton in advance! :)

How can I delete Un_installed servers from the sql server Config manager&gt;

Posted: 07 Mar 2013 04:28 AM PST

i have 12 instances that i have installed and Uninstalled from my laptops and sql services are still showing in the configuration manager? thanks

Multiple Transaction Log Files

Posted: 07 Mar 2013 04:47 AM PST

We have multiple transaction log files configured for some of our high volume clients; this is done for disaster prevention reasons. Should transaction log backups fail repeatedly during late night activities, and no tech notification go out in time, a secondary log file generally lets stuff keep going on a 'spillover' volume until backup and truncation issues are resolved.Our issue with this strategy is: once SQL Server has hit a threshold that requires utilization of this secondary log file, after backup issues are resolved, SQL Server appears to continue to use this secondary file, while leaving the primary log file alone. What is the reason for this? How do we ensure SQL Server will return to using this primary log file (perhaps it's on a SAN and resides on higher performance drives than the failover file does...)Thanks!

Table variable subquery as column

Posted: 07 Mar 2013 04:15 AM PST

Hi All,I'm trying to use a value from a table variable's column that matches w/a passed in variable as a subquery for one of the main query's column. I'm trying the below, but the values for @Region and @Queue are showing up as blank in a SSRS report field which are pulling from it:ALTER PROCEDURE [dbo].[udp_WorkQueuesReport] -- Add the parameters for the stored procedure here @FromDt as date, @ThruDt as date, @Region as varchar(max), @Queue as varchar(max)ASBEGIN-- SET NOCOUNT ON added to prevent extra result sets from -- interfering with SELECT statements. SET NOCOUNT ON; Declare @RegionTbl as table(RegionName varchar(50)) Declare @QueueTbl as table(QueueName varchar(50)) Insert @RegionTbl select Value from hpexprpt.dbo.split(@Region,',') Insert @QueueTbl select Value from hpexprpt.dbo.split(@Queue,',') select users.last_name + ',' + users.first_name as [User ID] , (Select RegionName from @RegionTbl where RegionName = @Region) as Region , (Select QueueName from @QueueTbl where QueueName = @Queue) as Queue ... from hpexpprod.dbo.work_items join...where...Any pointers would be greatly appreciated...Thanks in advance!

User Cal Licensing Question

Posted: 07 Mar 2013 03:31 AM PST

If I buy SQL User CAL, does that allow the user to be able to access ANY SQL server, or would I have to have a second SQL User CAL for the user to access a 2nd instance?Thanks!

Failure to calculate super-latch promotion threshold

Posted: 18 Sep 2012 12:26 AM PDT

I saw this error message in the error log of one of my SQL Server 2008 R2 servers today (with SP2 installed, v10.50.4000).[b]"Warning: Failure to calculate super-latch promotion threshold."[/b]I've never seen that message, can't find anything online about it, and don't know what the impact is.The only other relevant information I can give is that there was a series of DBCC CHECKDB commands occurring at the time (our nightly database checks).Any additional information would be appreciated.Thanks,John

trace 1204, 1222 on

Posted: 06 Mar 2013 10:52 PM PST

I have these two traces on, TraceFlag Status Global Session1204 1 1 01222 1 1 0But all I am getting in the error log is[298] SQLServer Error: 1205, Transaction (Process ID 200) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction. [SQLSTATE 40001] (ConnExecuteCachableOp)

Msg 911, Level 16, State 1, Line 1 Database 'databasename' does not exist. Make sure that the name is entered correctly.

Posted: 30 Apr 2012 11:02 PM PDT

Dear All,Would anyone be able to let me know how to resolve the following error message please? Msg 911, Level 16, State 1, Line 1Database 'databasename' does not exist. Make sure that the name is entered correctly.Thank you in advance!

Memory leak on witness instance

Posted: 27 Feb 2013 06:25 AM PST

We have a high availability sql server environment with synchronous mirroring and witness which has been working mostly fine for more than one year.But suddenly, last monday, the witness server became unresponsiveness and the errorlog showed lots of errors about insufficient memory. Looking at the memory information dumped to the errorlog by sql server I found that most of the usable memory was taken up by sqlutilities:>MEMORYCLERK_SQLUTILITIES (node 0)> SinglePage Allocator 986240I do now know whether this is normal or not. This accounts for 94% of available memory because the witness instance has server memory setting configured 1GB. To correct the incidence I restarted mssqlserver service on the witness, but 22 hours after that the same issue happened again. I am pretty clueless about what went wrong so I restarted the witness once more time and then switched all our databases to asynchronous mode without witness. This seemed to work as the witness instance (now idle) ceased to consume memory.The only other thing that came to my attention is that one of the main database servers (the one acting as secondary) has a network interface down. This server has two network interfaces bridged by a "Broadcom Advanced Server Program Driver" to provide a failsafe link. So even with one interface down this server seems to be working normally.Any ideas?

Need this complicated query

Posted: 07 Mar 2013 01:02 AM PST

Hi,I need a query which will have an input parameter @SerNum1, the query will copy all the ParamNum, ParamVal, Lock and Id of that serial number. Now I will send in a new @SerNum2 and this new SerNum2 will be inserted into the same table, column and all that copied data of the first SerNum1 will be pasted next to the new SerNum2.Please look at the attached PNG file for the structure of the table and cloumns.

Event time from deadlock in system_health different from time in error log

Posted: 07 Mar 2013 01:44 AM PST

Can anyone explain why I would have an event time of 2013-03-07 05:39:21.027 for a deadlock in the ring buffer, but when I look at the event in the SQL Error Log (we have flag 1222 on) it shows a time of 2013-03-06 11:39:30 PM. This is quite a time difference and I thought the ring buffer was synchronous. Any thoughts?

sqlcode vs sp

Posted: 07 Mar 2013 12:44 AM PST

hi friends i have small doubt in sql plese tell me how to Determine when to use stored procedures vs. SQL in the code

Fixed column names for Dynamic PIVOT Result

Posted: 06 Mar 2013 08:26 PM PST

Hello everybody,I'm using two dynamic Pivot queries(which are basically the same except for the filter on date) to build a final result.Both dynamic Pivot produce a result with 12 columns where the first one is a nvarchar and the other 11 columns are values.My final result for the column names must be like the following: sta_rep,0,1,2,3....23See attachment for more details.Thank you.

Search This Blog