Monday, March 18, 2013

[SQL Server 2008 issues] Best Approach to Archive DB



Best Approach to Archive DB

Posted: 17 Mar 2013 08:39 PM PDT

Example - We have a DB which has loads of data and we want to archive it to some other server. Scenario - the source DB has 5 years of data in it; we want to move that 5 years of data to a destination DB, and then keep only one month of data in the source DB, which means that on a monthly basis we will have less data in the source DB. Please help me with the best approach??
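
A minimal sketch of the monthly batch idea, with hypothetical names (an archive database called ArchiveDB, a table dbo.Orders and a date column OrderDate); rows older than one month are copied to the destination and then removed from the source. For very large volumes you would normally do this in batches inside a loop rather than in one statement:

[code="sql"]
BEGIN TRANSACTION;

-- copy everything older than one month to the archive database
INSERT INTO ArchiveDB.dbo.Orders
SELECT *
FROM dbo.Orders
WHERE OrderDate < DATEADD(MONTH, -1, GETDATE());

-- then remove the archived rows from the source
DELETE FROM dbo.Orders
WHERE OrderDate < DATEADD(MONTH, -1, GETDATE());

COMMIT TRANSACTION;
[/code]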

Finding Comma in the string

Posted: 17 Mar 2013 06:24 PM PDT

Guys,

Declare @string VARCHAR(MAX)
SET @string IN ('sql','server','2008')

Here, how do I find out whether a comma exists in the above string, or whether it is a single value? I want to solve the example below:

[code="sql"]SELECT CASE
           WHEN @string = 'some string' THEN 'XYZ'
           WHEN @string IN (,) THEN 'abc' ---- this will be a comma-separated multiple value ('sql','server','2008')
       END
FROM my_table[/code]

If a comma exists in the string then I want to print the value.
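
A minimal sketch of one way to test for a comma in T-SQL, assuming the whole value is held in a single variable; CHARINDEX returns 0 when the search string is not found:

[code="sql"]
DECLARE @string VARCHAR(MAX);
SET @string = 'sql,server,2008';

SELECT CASE
           WHEN CHARINDEX(',', @string) > 0 THEN 'abc'  -- a comma exists: treat as a multi-value list
           ELSE 'XYZ'                                   -- no comma: treat as a single value
       END AS result;
[/code]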

SQL architecture

Posted: 17 Mar 2013 05:27 PM PDT

Can anyone explain the SQL Server architecture?

Transactional replication on two computers

Posted: 17 Mar 2013 12:26 AM PDT

Hello

I am trying to create transactional replication with updatable subscriptions on two PCs (between instances on one machine the database replication is OK). On PC1 I created the publication (including tables); on PC2 I created the subscription (I logged on to PC1, chose the database and table). Next I wrote data to the created linked server and I get this error:

http://tinypic.com/view.php?pic=vnijk1&s=6

Screens, step by step:

http://tinypic.com/view.php?pic=2vlo009&s=6
http://tinypic.com/view.php?pic=nb9qaa&s=6

Writing this data to the linked server is OK, because between instances it works. Help me :-(

can we install 2008 R2 integration services and 2012 integration services on the same server

Posted: 14 Mar 2013 09:59 AM PDT

Hi DBAs,

I have 2012 Integration Services installed on a server. I am trying to connect to Oracle but it is failing. I worked with the same SSIS package in 2008 R2, where I can connect to Oracle and import data. So I am planning to install SQL Server 2008 R2 SSIS on the same box where 2012 is installed. My database engine will be 2012. Can I do this? Please advise.

Thanks a lot in advance

XLSX Import Error: result code db_sec_E_auth_failed

Posted: 17 Mar 2013 09:31 AM PDT

I'm trying to import a 190 MB .XLSX file to a SQL 2008 (not R2) server. First, I installed the "Office 12.0 Access Database Engine OLE DB Provider", but I get the error: "Operation Could Not Be Completed. No Error message available. Result code DB_SEC_E_AUTH_FAILED." I found a suggestion to go to the properties tab and insert the path and file name, which now gives me the error: DB_E_ERRORSOCCURRED (0x80040E21), System.Data. Any thoughts? I have a 2005 server available, but assumed the 2008 (not R2) server would be better to try.
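
If the import wizard keeps failing, one alternative worth trying is querying the ACE provider directly with OPENROWSET. A sketch, assuming a hypothetical file path C:\Data\big.xlsx and a sheet named Sheet1; ad hoc distributed queries must be enabled, and the 32/64-bit edition of the provider must match the SQL Server instance:

[code="sql"]
SELECT *
INTO dbo.ImportedData   -- hypothetical target table
FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                'Excel 12.0 Xml;HDR=YES;Database=C:\Data\big.xlsx',
                'SELECT * FROM [Sheet1$]');
[/code]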

send Table name to procedure as a parameter

Posted: 17 Mar 2013 08:11 AM PDT

Hi,

I created a procedure in my database. This SP selects all rows from a table:

[code][b]CREATE PROCEDURE FillTable
AS
BEGIN
    SELECT * FROM [dbo].[TableName]
END[/b][/code]

Now I want to send the name of the table as a parameter to the SP, and then use this name (parameter) in the SELECT command. How can I do it this way, or any other way you know?
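
A table name cannot be used directly as a parameter in a static SELECT, so the usual pattern is dynamic SQL. A minimal sketch, assuming the table lives in the dbo schema, using QUOTENAME to guard against injection:

[code="sql"]
CREATE PROCEDURE FillTable
    @TableName sysname
AS
BEGIN
    DECLARE @sql nvarchar(max);
    -- build the statement with the quoted table name, then execute it
    SET @sql = N'SELECT * FROM dbo.' + QUOTENAME(@TableName) + N';';
    EXEC sp_executesql @sql;
END
[/code]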

Sunday, March 17, 2013

[how to] LOAD DATA INFILE on a MyISAM table blocks user read-queries, can we get around this?



LOAD DATA INFILE on a MyISAM table blocks user read-queries, can we get around this?

Posted: 17 Mar 2013 08:40 PM PDT

I've got a large reference table, 100GB, in a MyISAM table; loading it from scratch goes well (roughly 4 hours) using LOAD DATA INFILE.

But we have 1GB each day we want to update in a batch process using LOAD DATA INFILE. Unfortunately LOAD DATA INFILE blocks users from querying against that table.

Is there a way I can get around this?

For example:

  • Can I load the new data into another table and insert into the primary table from that table, or something similar, to allow loading the data and read queries against the primary table at the same time? (A sketch of this idea follows the list.)
  • Can I do some kind of partitioning with new tables added daily, joined by a view? Would this be a bad idea with such a large table?
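
A sketch of the staging-table idea from the first bullet, with hypothetical names reference_main, reference_staging and a hypothetical file path; the slow LOAD DATA runs against the staging copy, so readers of the main table are only affected during the final, much shorter insert:

    -- load the daily batch into an identical, otherwise unused staging table
    LOAD DATA INFILE '/data/daily_batch.csv' INTO TABLE reference_staging;

    -- copy into the main table; this is the only step that touches the main table
    INSERT INTO reference_main SELECT * FROM reference_staging;

    -- empty the staging table for the next day's batch
    TRUNCATE TABLE reference_staging;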

Get the count of items related to nested category in sql

Posted: 17 Mar 2013 06:30 PM PDT

I have two tables: categories and items. I have stored categories using a nested set structure. Categories have items. Items can only be added to leaf nodes of a root category.

For eg: Categories

Vehicles
    Bikes
        Bajaj
    Automobiles
Art & Antiques
    Amateur Art

Items can be added to category Bajaj, Automobiles and Amateur Art in this case.

Let's say there are 2 items inside Bajaj, 5 items inside Automobiles, and 2 inside Amateur Art.

For root level categories I want to display as follow:

- Vehicles (7 items)
- Art & Antiques (2 items)

How can I do this ?

Here is the sql dump to work with some sample data

--
-- Table structure for table `categories`
--

CREATE TABLE IF NOT EXISTS `categories` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `parent_id` int(11) DEFAULT NULL,
  `title` varchar(64) COLLATE utf8_unicode_ci NOT NULL,
  `lft` int(11) NOT NULL,
  `lvl` int(11) NOT NULL,
  `rgt` int(11) NOT NULL,
  `root` int(11) DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `IDX_3AF34668727ACA70` (`parent_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci AUTO_INCREMENT=28 ;

--
-- Dumping data for table `categories`
--

INSERT INTO `categories` (`id`, `parent_id`, `title`, `lft`, `lvl`, `rgt`, `root`) VALUES
(22, NULL, 'Vehicles', 1, 0, 8, 22),
(23, 22, 'Bikes', 2, 1, 5, 22),
(24, 23, 'Bajaj', 3, 2, 4, 22),
(25, 22, 'Automobiles', 6, 1, 7, 22),
(26, NULL, 'Art & Antiques', 1, 0, 4, 26),
(27, 26, 'Amateur Art', 2, 1, 3, 26);

-- --------------------------------------------------------

--
-- Table structure for table `items`
--

CREATE TABLE IF NOT EXISTS `items` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `category_id` int(11) NOT NULL,
  `title` varchar(100) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `FK_403EA91BA33E2D84` (`category_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=5 ;

--
-- Dumping data for table `items`
--

INSERT INTO `items` (`id`, `category_id`, `title`) VALUES
(1, 24, 'Pulsor 150 cc'),
(2, 24, 'Discover 125 cc'),
(3, 27, 'Art of dream'),
(4, 25, 'Toyota Car');

--
-- Constraints for dumped tables
--

--
-- Constraints for table `categories`
--
ALTER TABLE `categories`
  ADD CONSTRAINT `FK_3AF34668727ACA70` FOREIGN KEY (`parent_id`) REFERENCES `categories` (`id`) ON DELETE SET NULL;

--
-- Constraints for table `items`
--
ALTER TABLE `items`
  ADD CONSTRAINT `FK_403EA91BA33E2D84` FOREIGN KEY (`category_id`) REFERENCES `categories` (`id`) ON DELETE CASCADE;

Root nodes have NULL in the parent_id field.

I was able to fetch for roots using this query:

SELECT c.id, c.title, cte.co
FROM categories c
JOIN (
    SELECT c0_.id, c0_.root, COUNT(i.id) co
    FROM categories c0_
    JOIN items i ON c0_.id = i.category_id
    WHERE c0_.rgt = 1 + c0_.lft
    GROUP BY c0_.id
) cte ON cte.root = c.id
WHERE c.parent_id IS NULL

The above query works for root level category. Now when the user clicks on root level category, I want to do the same.

For example, when somebody clicks on Vehicles I should get:

Bikes (2)
Automobiles (5)

For that I tried :

SELECT c.id, c.title, cte.co
FROM categories c
JOIN (
    SELECT c0_.id, c0_.root, COUNT(i.id) co
    FROM categories c0_
    JOIN items i ON c0_.id = i.category_id
    WHERE c0_.rgt = 1 + c0_.lft
    GROUP BY c0_.id
) cte ON cte.root = c.id
WHERE c.parent_id = 1

This returned an empty result set. What is wrong with this query?

SQL Server update query on linked server causing remote scan

Posted: 17 Mar 2013 04:58 PM PDT

I have a SQL Server 2012 instance set up as a linked server on a SQL Server 2008 server.

The following query executes in less than 1 second:

SELECT kg.IdGarment
FROM Products p
INNER JOIN ZAMStaffRentals r ON r.ProductID = p.UniqueID
INNER JOIN ZAMCustomer_Staff s ON r.StaffID = s.UniqueID
INNER JOIN ZAMRentalGarments g ON g.StaffRentalID = r.UniqueID
INNER JOIN [REMOTE_SERVER].[REMOTE_DB].dbo.t_ProcessIT_Garment kg ON g.BarcodeRFID = kg.IdGarment
INNER JOIN ZAMKannChanges log ON log.GarmentID = g.UniqueID
WHERE log.ActionType = 'I'

t_ProcessIT_Garment contains 37,000 rows, the query returns two records, the IdGarment column is the Primary Key. No problem here.

However, if I run this query to do a remote update, it takes 24 seconds, and 2 rows are affected:

UPDATE [REMOTE_SERVER].[REMOTE_DB].dbo.t_ProcessIT_Garment SET
    IdGarment = IdGarment
FROM Products p
INNER JOIN ZAMStaffRentals r ON r.ProductID = p.UniqueID
INNER JOIN ZAMCustomer_Staff s ON r.StaffID = s.UniqueID
INNER JOIN ZAMRentalGarments g ON g.StaffRentalID = r.UniqueID
INNER JOIN [REMOTE_SERVER].[REMOTE_DB].dbo.t_ProcessIT_Garment kg ON g.BarcodeRFID = kg.IdGarment
INNER JOIN ZAMKannChanges log ON log.GarmentID = g.UniqueID
WHERE log.ActionType = 'I' ;

I tested using IdGarment = IdGarment to keep things simple. The execution plan shows it uses Remote Query for the first query, but Remote Scan for the second query, which has 100% of the cost.

The table joins are identical in both queries, why is it using Remote Scan for the second query, and how do I fix this?

Synchronize creation of rows between tables

Posted: 17 Mar 2013 04:47 PM PDT

Let's say I have two tables, posts(id_pk, title) and posts_content(post_id_pk, blob). Whenever I create a row in posts, I want a complementary row to be created in posts_content, with the same id. If I just use auto_increment on both, when two posts are created simultaneously, they might get switched. What is the correct way to do this?
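
A minimal sketch of the usual pattern, reusing the table names above (the content values are just placeholders): let only posts auto-increment, then reuse that id for posts_content via LAST_INSERT_ID() inside one transaction:

    START TRANSACTION;

    INSERT INTO posts (title) VALUES ('My first post');

    -- LAST_INSERT_ID() is per-connection, so two sessions inserting at the
    -- same time each see their own id and the rows cannot get switched
    INSERT INTO posts_content (post_id_pk, `blob`)
    VALUES (LAST_INSERT_ID(), 'post body goes here');

    COMMIT;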

MySQL 5.6 OS X 10.8: how tell if my.cnf read?

Posted: 17 Mar 2013 07:41 PM PDT

With mysqld running, I expect to see that /etc/my.cnf has been used by at least one of the commands

ps aux | grep mysql | grep -v grep
mysqladmin variables | grep defaults-file

showing the value of --defaults-file. But I see no such item. Why not?

I think /etc/my.cnf is being read, since if I put an erroneous entry in its [mysqld] section, e.g., error = true then when I try to start mysqld it aborts with an ERROR message.

I'm using the installation by running mysql-5.6.10-osx10.7-x86_64.pkg from mysql-5.6.10-osx10.7-x86_64.dmg.

Minimize locking on UPDATE with multiple WHERE columns

Posted: 17 Mar 2013 04:21 PM PDT

I have a problematic query that's causing a lock timeout:

UPDATE  SET row1=
WHERE row1 IS NULL AND row2 >
ORDER BY row2
LIMIT 100

I have two issues here:

  1. I have multiple servers committing the query simultaneously, they lock each other out and I get lock timeout or even deadlock sometimes. Optimally the servers should update mutually exclusive rows so locking shouldn't happen at all. Is there a way I can skip any locked row in the update?

  2. If I can't avoid locking, and I already have an index for row1 and another one for row2, will Innodb lock all rows that satisfy any condition in the WHERE clause or only the ones that satisfy both conditions? If the answer is the former, can I add an index for the two columns together or do I also need to remove the indexes that I have (for each column separately)?

SQL Server 2005 encryption - What Grants should I give

Posted: 17 Mar 2013 04:56 PM PDT

I use Mixed Mode Authentication and I want my database users (members of the public role) to have the ability to JUST execute the procedures that encrypt/decrypt data (using Certificate and Symmetric keys).

I have already given them Grant Execute on the procedures. What other Grants should I apply?

Thanx!

SQL Server 2005 encrypted database works on new PC without restore keys!

Posted: 17 Mar 2013 02:15 PM PDT

Something tells me that it was not normal.

Let me take it from the begining. In an SQL Server Express 2005 I needed to encrypt one column. I did the following.

CREATE MASTER KEY ENCRYPTION
BY PASSWORD = 'blahblah';

CREATE CERTIFICATE my_certificate
ENCRYPTION BY PASSWORD = 'blahblah'
WITH SUBJECT = 'Password',
EXPIRY_DATE = '12/31/2099';

CREATE SYMMETRIC KEY my_symmetric_key
WITH ALGORITHM = AES_256
ENCRYPTION BY CERTIFICATE my_certificate;

I created procedures to encrypt, decrypt the column and everything worked OK.

I then took the database (SQL Server 2005, Mixed Mode) and restored it to a new PC with an SQL Server 2008 Express.

WITHOUT restoring the KEY and Certificate, I launched my application, which uses the stored procedures to encrypt/decrypt, and SURPRISINGLY everything worked in the new database!! I could decrypt previously encrypted data successfully.

Is this normal??

What is the use of backing-up the Key and the Certificates then?

Thanx in advance

Arbitrarily ordering records in a table

Posted: 17 Mar 2013 02:12 PM PDT

A common need when using a database is to access records in order. For example, if I have a blog, I want to be able to reorder my blog posts in arbitrary order. These entries often have lots of relationships, so a relational database seems to make sense.

The common solution that I have seen is to add an integer column order:

| id | title          | order |
+----+----------------+-------+
| 0  | Lorem ipsum    | 3     |
| 1  | Dolor sit      | 2     |
| 2  | Amet, consect  | 0     |
| 3  | Elit fusce     | 1     |

Then, we can sort the rows by order to get them in the proper order.

However, this seems clumsy:

  • If I want to move record 0 to the start, I have to reorder every record
  • If I want to insert a new record in the middle, I have to reorder every record after it
  • If I want to remove a record, I have to reorder every record after it

It's easy to imagine situations like:

  • Two records have the same order
  • There are gaps in the order between records

These could happen fairly easily for a number of reasons.

This is the approach that applications like Joomla take:

Example of Joomla's approach to ordering

You could argue that the interface here is bad, and that instead of humans directly editing numbers, they should use arrows or drag-and-drop—and you'd probably be right. But behind the scenes, the same thing is happening.

Some people have proposed using a decimal to store order, so that you can use "2.5" to insert a record in between the records at order 2 and 3. And while that helps a little, it's arguably even messier because you can end up with weird decimals (where do you stop? 2.75? 2.875? 2.8125?)

Is there a better way to store order in a table?

connecting to oracle database on Windows

Posted: 17 Mar 2013 07:41 AM PDT

I have two machines on my network:

  1. pc-1 is running on Windows XP SP3, IP 192.168.1.15
  2. pc-2 is running on Windows Vista, IP 192.168.1.16

Now I have loaded Oracle 11gR2 on pc-2 (Vista) and the listener is running with the following settings: address=192.168.1.16 and port=1065.

I am trying to connect to Oracle database via pc-1 (XP) using jdbc thin driver but I get the error The network adapter could not establish the connection.

The pc-2 is reachable, I checked it by ping 192.168.1.16

I tried a few things:

  • tnsping 192.168.1.16:1065 gives error TNS-12535: TNS:operation timed out

  • sqlplus sys/sys@//192.168.1.16:1065/ora as sysdba but it gives error ERROR:ORA-12170: TNS:Connect timeout occurred

  • TELNET 192.168.1.16 1065 gives the error Connecting To 192.168.1.16...Could not open connection to the host, on port 1065: Connect failed. The firewall on pc-2 (Vista + Oracle) is turned off, so I don't think a firewall is blocking port 1065.

Can anyone please specify in detail what I should do? I am trying this for the first time, so I have no idea what is going wrong.

Some of the commands output on PC-2

  • netstat -a

OUTPUT
Local Address          Foreign Address     State
192.168.1.16:1065      computername:0      LISTENING

Permission denied in Postgres

Posted: 17 Mar 2013 11:25 AM PDT

I added the user myuser to Postgres.

Then, through the PgAdmin III GUI, I added the database mydatabase and I restored it from a backup file. So, the owner of mydatabase is the "administrator" user postgres.

Then, I tried to give to myuser all the rights to access and modify the mydatabase.

What I did is log in to psql as administrator:

psql -d template1 -U postgres    

and then I run this query:

GRANT ALL PRIVILEGES ON DATABASE mydatabase TO myuser  

Now, I can use myuser to log, but if a try to do a simple query I get this error:

ERROR:  permission denied for relation table_name  

Am I missing something? Can you help me solve this?

Thank you very much.
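
A possible cause worth checking: GRANT ... ON DATABASE only covers database-level rights (CONNECT, CREATE, TEMP); it grants nothing on the tables inside it. A sketch of the table-level grants, assuming the restored tables live in the public schema and you are connected to mydatabase as postgres (the ON ALL TABLES form needs PostgreSQL 9.0 or later):

    GRANT USAGE ON SCHEMA public TO myuser;
    GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO myuser;
    GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA public TO myuser;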

MongoDB: Aggregation workaround by helper collection

Posted: 17 Mar 2013 06:55 AM PDT

Since I regularly run up against MongoDB's document size limit and also the aggregation framework's group limitation of 16MB, depending on the amount of data I need to process, I am looking for a simple workaround.

The solution must also fit for single nodes (and must not require multiple mongos).

In a few words, it's an aggregation for finding

(a) the number (count) of uniques in a field and group the number by another field

(b) the frequency or count how often a unique id appeared.

An example could make the task easier to understand: Lets say there is a collection with 2 fields, the last and surname.

(ex a) How many different surnames exist for each last name? Result could be: There are 1000 unique (distinct) surnames for last name "smith", 500 different surnames for people with last name "schulz" etc.

=> this could also be solved with distinct count and/or mapreduce etc. no limitation is exceeded. anyways, it seems to be a slow table scan and aggregation framework does the task much faster.

(ex b) How popular are surnames for each last name? Result: with the last name "smith" there are 500 different surnames which appear only once, 200 surnames which appear 2 times, 45 are listed 3 times in our data, ...

=> here it is very comfortable to work with mongodb's aggregation framework which handles this task fast and properly. However, trying the task with lots of uniques in a 15GB collection, it worries about the 16M limitation.

The first workaround was successful but is only a quick'n'dirty, non-scalable trick: instead of processing all the data and running $group afterwards, I filter ($match) for each $group item. With the example from above, I look up all distinct last names, run one aggregation per last name, and reduce the pipeline data before aggregation starts.

This avoids the 16M limit issue if the amount of data for a single $group-item (last name) is not too much. => only quick'n'dirty workaround.

Another idea is a help collection for the COUNTs.

I could iterate each input document and write into another collection the surnames and one field per lastname with the counter.

E.g. surname=John count_with_Lastname.smith=10, count_with_Lastname.brown=2, ...

Building this helper collection is very slow, isn't it? When incrementing +1 which is 1 update/write per input document, I fear this could take forever :-(

Is there a best practice or are there tips for solving this task (without multiple nodes and without Hadoop or alternatives)?

Please let me know

Search within same ID across several rows

Posted: 17 Mar 2013 06:42 AM PDT

My table has the following structure:

+----+---------+------------+-------------+
| id | word_id | meaning_id | sentence_id |
+----+---------+------------+-------------+
| 1  | 1       | 15333      | 1           |
| 2  | 1       | 12355      | 1           |
| 3  | 1       | 768        | 1           |
| 4  | 2       | 12455      | 1           |
| 5  | 2       | 9581       | 1           |
| 6  | 3       | 41         | 1           |
| 7  | 4       | 125        | 1           |
| 8  | 1       | 17854      | 2           |
| 9  | 2       | 35214      | 2           |
| 10 | 3       | 12455      | 2           |
| 11 | 3       | 988        | 2           |
+----+---------+------------+-------------+

I have a list of meaning_ids and I want to get all the sentence_ids that contain all of the meaning_ids. I do not care what the sentence it is, as long as the sentence contains all meaning_ids (= sentence_id is the same).

Let's say I have the following meaning_ids: 12455 and 15333. The query should return sentence_id 1 because that is the only sentence that contains both.

How can I achieve this with MySQL?
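
A sketch of one way to do this in MySQL, with word_meanings as a hypothetical name for the table shown above; it keeps only the sentences that contain every meaning_id in the list:

    SELECT sentence_id
    FROM word_meanings
    WHERE meaning_id IN (12455, 15333)
    GROUP BY sentence_id
    HAVING COUNT(DISTINCT meaning_id) = 2;  -- 2 = number of meaning_ids searched for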

MySQL: Can't find .ibd files after turning on innodb_file_per_table

Posted: 17 Mar 2013 06:27 AM PDT

This a question about MySQL. Let me give the specifics of my setup:

  • Mac OSX 10.6.8
  • MySQL 5.5.28
  • Workbench 5.2.44 for all administration
  • InnoDB engine exclusively

In order to manage limited hard drive space on my laptop, I'd like to turn on the innodb_file_per_table option so that each table gets its own .ibd file instead of all data going into the ibdata1 file. I researched various sources, including Stack Overflow, and they all suggest the same basic steps:

  1. export existing tables

  2. turn off MySQL server

  3. add innodb_file_per_table under the [mysqld] heading of the my.cnf file

  4. delete ibdata1, ib_logfile0, and ib_logfile1 - which for me are located in /etc/local/mysql/data

  5. restart MySQL server

  6. at this point, newly created tables or tables imported from the original dump should each get their own .ibd file.

I followed the steps and there appears to be no problem. Various checks that I did within Workbench all indicate that the innodb_file_per_table option is turned on. However, when I create new tables or import the old table, I'm not seeing any new .ibd files, and it seems like ibdata1 is growing.

I've deleted the ibdata1 file multiple times, created new server instances, and tried various approaches to the my.cnf file. I am assuming that the files will show up in /etc/local/mysql/data and this might be incorrect - I've searched in other possible locations, but with no luck.

Please let me know if I made a mistake in turning on innodb_file_per_table or if I am simply looking for the .ibd files in the wrong place.

Additional Info:

After some further investigation, encouraged by Spencer7593's response, it seems like innodb_file_per_table is in fact working correctly (specifically, disk space is freed up as soon as I drop a table). However, I still cannot see the .ibd files. When I create/import a schema, a new directory with the same name is created in /usr/local/mysql/data - this is where I expected to find the .ibd files, and Spencer7593's response suggests this as well. But in both Console and Finder, I get 'permission denied' messages when I try to look inside it. Moreover, the size of this directory in Finder displays as 0 or '--'. Some other observations:

  1. ibdata1 does still grow (and doesn't shrink) as you add data, but at a much slower rate than the actual table. This makes sense because the InnoDB engine still needs this file, even when innodb_file_per_table is turned on, so things should still be written to it.

  2. My implementation works even though I don't include the line innodb_data_home_dir = /etc/local/mysql/data in the my.cnf file.

Adding a bool column; tracking untouched vs. false

Posted: 17 Mar 2013 03:05 PM PDT

[backstory] I have a simple database of our current Widget inventory. It averages just dozen columns across five or six tables, but has a huge number of records already.

Some Widgets came with caps, some did not, but we've actually never kept track before. Now management wants users to be able to keep track of whether each individual Widget arrives with a cap or not. We will not be doing an inventory to check our current stock, but when I add the new column for "Arrived Capped," there will be no data on everything received before today. What is the usual, best-practices approach to scenarios like this?

When adding a string column, the meaning of "" is obvious; no data has been entered. But I'm adding a bool, thus existing records will default to a value that does indicate something: FALSE.

My first thought was to make the user portal have, rather than a checkbox, a radio button pair for this question. Upon creating a new record, or even returning to old records with false-falses, if they click Yes or No, that value is recorded, plus another bool indicates this question was actually answered manually. That is; if the first bool is F, but the second bool is T, then the first bool is not a 'default false'.

Their hand-waving solution was to have records dated before today displayed as "unknown" (this was nixed as existing records could not then manually be given a user-verified value of false). Should I use a faux-bool int and think of it in trinary? Or, back to using a separate tracking bit/bool, should I make a new table for that sort of data? Is this a common concept? Any naming conventions?
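
One common pattern is to let the column itself be three-valued by allowing NULL, instead of carrying a second tracking bit. A sketch, with dbo.Widgets and WidgetID as hypothetical names: NULL means "never answered", while 0 and 1 are explicit user answers, so existing stock is distinguishable from a user-verified false:

    ALTER TABLE dbo.Widgets
        ADD ArrivedCapped BIT NULL;   -- NULL = unknown/untouched, 0 = no cap, 1 = capped

    -- existing records stay NULL; the portal writes 0 or 1 only when a user answers
    UPDATE dbo.Widgets
    SET ArrivedCapped = 1
    WHERE WidgetID = 42;              -- hypothetical key column and value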

would that design be good, reasonable, stable or not?

Posted: 17 Mar 2013 03:16 PM PDT

A company wants to control 4-5 different ASP.NET WebForms and MVC projects via one web panel and also one MSSQL database.
The idea is from a software engineer who works at a serious IT education company.
As I said, there are 4-5 different web projects: an e-shop, an intranet project, and other business web projects.
The engineer wants to use one SQL database, with views for reads and stored procedures for CRUD, shared by all projects.
How does this affect the development process, and which approach would work well with it: an ORM or ADO.NET?
Is there any possibility of future problems?

Migrate SQL Server database to multiple files (shrinkfile emptyfile vs log shipping)

Posted: 17 Mar 2013 03:25 PM PDT

I have a SQL Server 2008 database in production that we are moving onto a new server. The current database has a single ~400GB .MDF file. The new server will be running SQL Server 2012, and we are running mirrored Intel 910 SSDs. These drives will present us with 4x 200GB partitions.

To make this work, we will need to split the single .MDF into 4 smaller ones using DBCC SHRINKFILE with EMPTYFILE. We have done this in test, and it still takes ~ 3.5 hours to do which is too long. The existing database is OLTP, and 365/24/7 and I know blocking will occur during this process, so we can't do it on production first.

My question: is there a way to back up and restore the database to the new server in a temp location, create the new files, EMPTY the temp .MDF into the new locations, and then apply transaction logs after? That way we can move the data while the current old production is up and running, then do a short shutdown, apply logs, and bring up the new DB.

Or are there any other options to get from Server A with one file and Server B with 4 files on different drives with minimal downtime?

Handling concurrent access to a key table without deadlocks in SQL Server

Posted: 17 Mar 2013 11:53 AM PDT

I have a table that is used by a legacy application as a substitute for IDENTITY fields in various other tables.

Each row in the table stores the last used ID LastID for the field named in IDName.

Occasionally the stored proc gets a deadlock - I believe I've built an appropriate error handler; however I'm interested to see if this methodology works as I think it does, or if I'm barking up the wrong tree here.

I'm fairly certain there should be a way to access this table without any deadlocks at all.

The database itself is configured with READ_COMMITTED_SNAPSHOT = 1.

First, here is the table:

CREATE TABLE [dbo].[tblIDs](
    [IDListID] [int] NOT NULL CONSTRAINT PK_tblIDs PRIMARY KEY CLUSTERED IDENTITY(1,1),
    [IDName] [nvarchar](255) NULL,
    [LastID] [int] NULL,
);

And the nonclustered index on the IDName field:

CREATE NONCLUSTERED INDEX [IX_tblIDs_IDName] ON [dbo].[tblIDs]
(
    [IDName] ASC
)
WITH (
    PAD_INDEX = OFF
    , STATISTICS_NORECOMPUTE = OFF
    , SORT_IN_TEMPDB = OFF
    , DROP_EXISTING = OFF
    , ONLINE = OFF
    , ALLOW_ROW_LOCKS = ON
    , ALLOW_PAGE_LOCKS = ON
    , FILLFACTOR = 80
);

GO

Some sample data:

INSERT INTO tblIDs (IDName, LastID) VALUES ('SomeTestID', 1);
INSERT INTO tblIDs (IDName, LastID) VALUES ('SomeOtherTestID', 1);
GO

The stored procedure used to update the values stored in the table, and return the next ID:

CREATE PROCEDURE [dbo].[GetNextID](
    @IDName nvarchar(255)
)
AS
BEGIN
    /*
        Description:    Increments and returns the LastID value from tblIDs
                        for a given IDName
        Author:         Max Vernon
        Date:           2012-07-19
    */
    DECLARE @Retry int;
    DECLARE @EN int, @ES int, @ET int;
    SET @Retry = 5;
    DECLARE @NewID int;
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    SET NOCOUNT ON;
    WHILE @Retry > 0
    BEGIN
        BEGIN TRY
            BEGIN TRANSACTION;
            SET @NewID = COALESCE((SELECT LastID FROM tblIDs WHERE IDName = @IDName),0)+1;
            IF (SELECT COUNT(IDName) FROM tblIDs WHERE IDName = @IDName) = 0
                INSERT INTO tblIDs (IDName, LastID) VALUES (@IDName, @NewID)
            ELSE
                UPDATE tblIDs SET LastID = @NewID WHERE IDName = @IDName;
            COMMIT TRANSACTION;
            SET @Retry = -2; /* no need to retry since the operation completed */
        END TRY
        BEGIN CATCH
            IF (ERROR_NUMBER() = 1205) /* DEADLOCK */
                SET @Retry = @Retry - 1;
            ELSE
            BEGIN
                SET @Retry = -1;
                SET @EN = ERROR_NUMBER();
                SET @ES = ERROR_SEVERITY();
                SET @ET = ERROR_STATE()
                RAISERROR (@EN,@ES,@ET);
            END
            ROLLBACK TRANSACTION;
        END CATCH
    END
    IF @Retry = 0 /* must have deadlock'd 5 times. */
    BEGIN
        SET @EN = 1205;
        SET @ES = 13;
        SET @ET = 1
        RAISERROR (@EN,@ES,@ET);
    END
    ELSE
        SELECT @NewID AS NewID;
END
GO

Sample executions of the stored proc:

EXEC GetNextID 'SomeTestID';

NewID
2

EXEC GetNextID 'SomeTestID';

NewID
3

EXEC GetNextID 'SomeOtherTestID';

NewID
2

EDIT:

I've added a new index, since the existing index IX_tblIDs_IDName is not being used by the SP; I assume the query processor is using the clustered index since it needs the value stored in LastID. Anyway, this index IS used by the actual execution plan:

CREATE NONCLUSTERED INDEX IX_tblIDs_IDName_LastID ON dbo.tblIDs
(
    IDName ASC
)
INCLUDE
(
    LastID
)
WITH (FILLFACTOR = 100, ONLINE=ON, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON);

EDIT #2:

I've taken the advice that @AaronBertrand gave and modified it slightly. The general idea here is to refine the statement to eliminate unnecessary locking, and overall to make the SP more efficient.

The code below replaces the code above from BEGIN TRANSACTION to END TRANSACTION:

BEGIN TRANSACTION;

SET @NewID = COALESCE((SELECT LastID FROM dbo.tblIDs WHERE IDName = @IDName), 0) + 1;

IF @NewID = 1
    INSERT INTO tblIDs (IDName, LastID) VALUES (@IDName, @NewID);
ELSE
    UPDATE dbo.tblIDs SET LastID = @NewID WHERE IDName = @IDName;

COMMIT TRANSACTION;

Since our code never adds a record to this table with 0 in LastID, we can make the assumption that if @NewID is 1 then the intention is to append a new ID to the list; otherwise we are updating an existing row in the list.

Finding out the hosts blocked by mysql server

Posted: 17 Mar 2013 09:42 AM PDT

Can someone tell me how to list the hosts which are blocked by the MySQL server because they crossed the max_connect_errors limit? Is there any table in which the MySQL server keeps this data? I am using mysql-server-5.1.63.

Materialized view for a yet-to-be-defined DBlink

Posted: 17 Mar 2013 07:42 AM PDT

In our product we have a main application with an oracle 11g DB and an optional module with another DB.

We want the two to communicate through a dblink, and I was told to put some of the code for the communication in the main application. In practice, since the optional module may not actually exist at the time the main application is deployed, this means writing code for tables that don't exist yet, but whose structure we know: the ones in the optional module.

We can encapsulate the dblink inside materialized views in the main module and refrain from calling remote procedures from there; however, to my understanding, we can't actually define materialized views on non-existent dblinks.

Perhaps we could define dummy materialized views, e.g. empty ones defined on dual, that get properly redefined once the optional module is installed and the necessary dblink is created, with a procedure calling dynamic plsql.

What do you think about it? How would you achieve that?

TIA

Andrea

EDIT: currently I have to evaluate the opportunity of having the communication code in the main application without having the optional module already deployed. So I'm really interested in hearing how I can get to create materialized views over yet-to-be-defined dblinks and whether my proposed solution is reasonable.

Thanks.

How to influence Oracle backup output size?

Posted: 17 Mar 2013 06:42 AM PDT


I have two environments, one Prod and one Dev. Dev is created from Prod data and both have a similar database size. However, the backup output size varies greatly: Prod is about 300 MB while Dev is always above 1 GB. Both use incremental level 1 backups.

Why would this be happening?

Delete word, its meanings, its meaning's example sentences from DB

Posted: 17 Mar 2013 12:42 PM PDT

I have three tables as below (simplified for demonstration):

words
=====
integer id
text    word

meanings
========
integer id
integer word_id
text    meaning

examples
========
integer id
integer meaning_id
text    sentence

where word_id stores the id of the word in the words table and meaning_id stores the id of the meaning in the meanings table. I am trying to figure out a SQL query that, given a word's id, deletes the word with all its meanings and example sentences at one time. Is such a SQL query possible to compose? If so, how?

Edit1: I am using SQLite3 as the database.

Edit2: I figured the following solution which requires 3 sql queries in order:

DELETE FROM examples WHERE meaning_id IN (SELECT id FROM meanings WHERE word_id=the_given_id);
DELETE FROM meanings WHERE word_id=the_given_id;
DELETE FROM words WHERE id=the_given_id;

I'm still looking for the answer to my question: is the whole process possible to be done in one query?
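
One way to get single-statement behavior in SQLite is to declare the relationships with ON DELETE CASCADE. A sketch, assuming the schema can be (re)created this way and reusing the placeholder the_given_id from above; note SQLite only enforces this when foreign keys are enabled for the connection:

    PRAGMA foreign_keys = ON;

    CREATE TABLE meanings (
        id INTEGER PRIMARY KEY,
        word_id INTEGER REFERENCES words(id) ON DELETE CASCADE,
        meaning TEXT
    );

    CREATE TABLE examples (
        id INTEGER PRIMARY KEY,
        meaning_id INTEGER REFERENCES meanings(id) ON DELETE CASCADE,
        sentence TEXT
    );

    -- with cascading in place, one statement removes the word, its meanings and their examples
    DELETE FROM words WHERE id = the_given_id;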

MySQL LOAD DATA INFILE slows by 80% after a few gigs of input with InnoDB engine

Posted: 17 Mar 2013 06:30 AM PDT

I'm loading a 100GB file via LOAD DATA INFILE. I've had good success with MyISAM, a few hours and done.

I'm trying it now using InnoDB. The load starts fast at over 10MB/sec (watching the table file growth, file_per_table is turned on).

But after about 5GB of data it slows down to the 2-4MB/sec range, as I get over 20GB it was down around 2MB/sec.

InnoDB buffer pools size is 8G. And I've done the following prior to running the LOAD DATA INFILE command:

SET @@session.sql_log_bin=0;
SET autocommit=0;
SET unique_checks=0;
SET foreign_key_checks=0;
alter table item_load disable keys;
//Run LOAD DATA INFILE....

I can't see the reason why it's starting off well and slowing down over time.

Also, using the same settings, I ran the same LOAD DATA INFILE command against the table using InnoDB and MyISAM with a 5GB test dataset; MyISAM was 20x faster:

InnoDB:

mysql> LOAD DATA CONCURRENT LOCAL INFILE '/tmp/item' REPLACE INTO TABLE item_load;
Query OK, 2630886 rows affected, 6 warnings (21 min 25.38 sec)
Records: 2630886  Deleted: 0  Skipped: 0  Warnings: 6

MyISAM:

mysql> LOAD DATA CONCURRENT LOCAL INFILE '/tmp/item' REPLACE INTO TABLE item_load;
Query OK, 2630886 rows affected, 6 warnings (1 min 2.52 sec)
Records: 2630886  Deleted: 0  Skipped: 0  Warnings: 6

Anything else I should consider trying? The MyISAM engine is able to keep up the load rate much better.


Additional details:

  • I've tried loading the files individually, no difference.

  • Incidentally, I have 150 files of 500MB each, within each file the keys are sorted.

  • After getting 40GB in overnight, 12h later, the load rate was down to 0.5MB/sec, meaning the operation is, practically speaking, impossible.

  • I haven't found any other answers to similar questions on other forums; it seems to me that InnoDB doesn't cope well with loading large amounts of data into tables more than a few GB in size.

How can I optimize this query and support multiple SKUs?

Posted: 17 Mar 2013 08:42 AM PDT

My current query can only select one SKU at a time. I can leave salesite_id constant. If there is a way to also have varying salesite_ids, that would be good too, but it is not necessary. Any suggestions on indexes would also be much appreciated.

SELECT
    available - (
        SELECT COALESCE(sum(quantity), 0)
        FROM product_locks
        WHERE sku = 'sku1'
    ) - (
        SELECT COALESCE(sum(quantity), 0)
        FROM
            orderlineitems
            INNER JOIN responses_authnets ON responses_authnets.id = orderlineitems.response_id
        WHERE
            sku = 'sku1' AND responses_authnets.salesite_id = 'site_id_1'
    ) AS free,
    available AS total,
    sku,
    on_hold
FROM product_inventories
WHERE sku = 'sku1' AND salesite_id = 'site_id_1';

Query to find and replace text in all tables and fields of a mysql db

Posted: 17 Mar 2013 02:42 PM PDT

I need to run a query to find and replace some text in all tables of a mysql database.

I found this query, but it only looks for the text in the tbl_name table and just in the column field.

update tbl_name set column=REPLACE(column, 'fuschia', 'fuchsia');   

I need it to look in all tables and all fields: (everywhere in the database)
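
There is no single built-in statement for this in MySQL, but the per-column updates can be generated from information_schema. A sketch, with 'your_database' as a placeholder schema name; it produces one UPDATE per text-like column, which you then review and run:

    SELECT CONCAT('UPDATE `', table_name, '` SET `', column_name, '` = REPLACE(`',
                  column_name, '`, ''fuschia'', ''fuchsia'');') AS stmt
    FROM information_schema.columns
    WHERE table_schema = 'your_database'
      AND data_type IN ('char', 'varchar', 'tinytext', 'text', 'mediumtext', 'longtext');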

PostgreSQL: The database cluster initialization failed

Posted: 17 Mar 2013 08:51 PM PDT

Both C:\PostgreSQL and C:\PostgreSQL\data have postgres user with full access and admin rights.

I ran postgresql-9.1.2-1-windows.exe from postgres user as admin. Target C:\PostgreSQL

Every way I try, I get "The database cluster initialization failed."

Questions

  • Is it OK to run everything w/o being a Windows Service?
  • Is there a work-around to install as Windows Service?

I am trying to setup PostGIS to work with GeoDjango.

I was able to manually install PostGIS. New to PostgreSQL and I am having a crisis of confidence over all of this. Coming from MySQL to PostgreSQL for the first time.


Pertinent log output from C:\Users\Larry\AppData\Local\Temp\install-postgresql.log:

WScript.Network initialized...
Called IsVistaOrNewer()...
    'winmgmts' object initialized...
    Version:6.1
    MajorVersion:6
Ensuring we can read the path C: (using icacls) to Larry:
    Executing batch file 'radA3CF7.bat'...
    Output file does not exists...
Called IsVistaOrNewer()...
    'winmgmts' object initialized...
    Version:6.1
    MajorVersion:6
Ensuring we can read the path C:\PostgreSQL (using icacls) to Larry:
    Executing batch file 'radA3CF7.bat'...
    Output file does not exists...
Called IsVistaOrNewer()...
    'winmgmts' object initialized...
    Version:6.1
    MajorVersion:6
Ensuring we can read the path C:\PostgreSQL\data (using icacls) to Larry:
    Executing batch file 'radA3CF7.bat'...
    Output file does not exists...
Called IsVistaOrNewer()...
    'winmgmts' object initialized...
    Version:6.1
    MajorVersion:6
Ensuring we can write to the data directory (using icacls) to Larry:
    Executing batch file 'radA3CF7.bat'...
    Output file does not exists...
Failed to ensure the data directory is accessible (C:\PostgreSQL\data)
    Executing batch file 'radA3CF7.bat'...
    Output file does not exists...
Called Die(Failed to initialise the database cluster with initdb)...
Failed to initialise the database cluster with initdb

Suggestions?

[SQL Server] I wish to compose Normalised tables for the below requirement. Please help.



I wish to compose Normalised tables for the below requirement. Please help.

Posted: 17 Mar 2013 06:51 AM PDT

There is a requirement, as below, to create the data model and estimate the number of tables needed as of now:

  • Openings
  • Resumes available per opening
  • Candidates contacted
  • Skills required (programming languages only: JAVA/.NET/C C++/Perl etc.)
  • OS knowledge (e.g. Windows/Linux/UNIX etc.)
  • Candidate status (Reject/Select/Hold)
  • Interview details

Assumption: a CandidateID is created only when a ResumeID is available for a candidate, once contacted.

I just gave the first crack at it, which is below. Please correct it to the most granular form:

1) Opening(OpenID, OpenName)
2) Resumes(ResumeID, CandidateID, LastDateOfUpdate, Resume)
3) ResumeAvailability(OpeningID, ResumeID) - (derived table)
4) Skills(SkillID, SkillName, SkillDuration)
5) SkillAvailability(SkillId, CandidateID) - (derived table)
6) OS(OperSID, OperSName, OperSDuration)
7) OSAvailability(OperSID, CandidateID) - (derived table)
8) InterviewDetails(InterviewID, EmployeeID, CandidateID, DOI, Rating, Status_F) ; Date of Interview - DOI ; Status_F (entries allowed: S, R, H only)

Can you please guide me to make this more appropriate to real life.

[T-SQL] Pivot Query with Count



Pivot Query with Count

Posted: 17 Mar 2013 12:41 AM PDT

Hi there

I am fairly new to SQL Server T-SQL and I have a strong MS Access background. I can't get my head around pivot queries in T-SQL. I have 3 columns with data in them:

1) City
2) Delivery_type
3) Date_Delivered

I want to create a pivot query that has the format with 14 columns:

1) City (grouped)
2) Delivery_type (grouped)
3) Jan (count of the delivery dates in the month of Jan)
4) Feb (count of the delivery dates in the month of Feb)
...
14) Dec (count of the delivery dates in the month of Dec)

If you can guide me in the right direction I will appreciate it. Thx
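
A sketch of the PIVOT shape this usually takes in T-SQL, assuming the data sits in a table called dbo.Deliveries (a hypothetical name); each month column counts the delivery dates whose month number matches:

[code="sql"]
SELECT City, Delivery_type,
       [1] AS Jan, [2] AS Feb, [3] AS Mar, [4] AS Apr, [5] AS May, [6] AS Jun,
       [7] AS Jul, [8] AS Aug, [9] AS Sep, [10] AS Oct, [11] AS Nov, [12] AS [Dec]
FROM (
    -- derive the month number once, keep the date so it can be counted
    SELECT City, Delivery_type, MONTH(Date_Delivered) AS DeliveryMonth, Date_Delivered
    FROM dbo.Deliveries
) AS src
PIVOT (
    COUNT(Date_Delivered)
    FOR DeliveryMonth IN ([1],[2],[3],[4],[5],[6],[7],[8],[9],[10],[11],[12])
) AS p;
[/code]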

problem in select command

Posted: 16 Mar 2013 10:53 AM PDT

Hi, please look at this select command:

[code="sql"][b]select top 2 * from dbo.NFFeeds order by id desc[/b][/code]

For my database the result is 2 rows: the first with id=33 and the second with id=32. With this command I get the last two rows from my table. How can I get these two rows like this: the first row with id=32 and the second row with id=33?
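
A minimal sketch of one way to flip the order: a derived table keeps TOP 2 picking the newest rows, while the outer query presents them ascending:

[code="sql"]
SELECT t.*
FROM (
    SELECT TOP 2 * FROM dbo.NFFeeds ORDER BY id DESC
) AS t
ORDER BY t.id ASC;
[/code]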

[SQL Server 2008 issues] Upgrading/moving databases from 2005 to 2008



Upgrading/moving databases from 2005 to 2008

Posted: 10 Mar 2013 04:17 AM PDT

After moving databases from 2005 to 2008, do we need to run any queries (like update usage, which we do when going from 2000 to 2005)? I will change the compatibility level from 90 to 100 after taking the necessary precautions. Anything else I am missing?

Data not inserting

Posted: 16 Mar 2013 12:20 AM PDT

Hi, I have written a query to check the insertion but it's not working, and I'm not getting any error either:

Declare @valuebased varchar(13)='0000000000'
Declare @valueDiscount varchar(13)='0000000000'
Declare @ProductDiscount varchar(13)='0000000000'
Declare @IsBooklet BIT
Declare @vouchertypeid int, @voucherno varchar(13), @denomination int, @createddate datetime, @expirydate datetime, @voucherstatus int, @quantity int, @isbookletid bit

If @IsBooklet = 1
BEGIN
    DECLARE @count int
    SET @count = 1
    WHILE (@count < 10)
    BEGIN
        SET @voucherno = 'VVB'+CONVERT(varchar(10),MAX(@valuebased)+1)
        INSERT INTO GV_Voucher (VoucherTypeId,VoucherNo, Denomination, CreatedDate, ExpiryDate, VoucherStatusId, TransactionID, Quantity, IsBooklet)
        Values (1,@voucherno,10,GETDATE(),GETDATE(),1,'a1',12,0)
        SET @valuebased = @valuebased + 1
        SET @count = @count + 1
    END
END

problem executing the stored procedure

Posted: 16 Mar 2013 04:16 AM PDT

Hi

I created an SP in my database with this command:

USE [MyDatabase];
GO
CREATE PROCEDURE test
AS
    SET NOCOUNT ON;
    SELECT id, feed1, feed2 FROM NFFeeds

I can see this SP under Stored Procedures in my database. So I write this command in a query window:

use [MyDatabase]
exec dbo.test

At this point a red underline appears under 'test'. If I put the mouse cursor on it, this error shows as a tooltip: "Could not find stored procedure 'test'". But if I execute the SP, it executes normally and I can see the result. What does this mean?

SQL Server 2008 R2 - Failed to connect to SQL Server

Posted: 16 Mar 2013 04:41 AM PDT

Hello everyone, I am having an issue and I am not sure why. I believe I have everything set properly: I have an AD DC, the SQL Server is joined properly, and I have the server action account set properly. There is no firewall on the SQL Server, and the server that I am trying to make the connection from is joined to the AD fine. The user account involved is added correctly; it does, by the way, have to be added to the local administrators group on the SQL box for this app to work. I really do not want to reinstall all over again. Also, named pipes and TCP/IP are enabled on the SQL Server, and allow remote connections is enabled as well. I don't know where else to look to fix this issue. The server involved is a VM on an ESXi 5.1 host with 2 GB of RAM, a 40 GB system drive, a 40 GB data drive, a 20 GB log file drive and a 20 GB page file drive. I hope that is enough information. Sorry guys, my keyboard has some keys that stick so I make typos occasionally. Thanks in advance.

Adding to Sales Quantities

Posted: 16 Mar 2013 03:23 AM PDT

Hi, I have a table that forecasts our sales for the upcoming year. It contains the following columns: Year (DATE), Company (VARCHAR), Product (VARCHAR), ProductType (VARCHAR), LastYearsSales (INT), ForecastSales (INT). I would like to increase the ForecastSales for particular ProductTypes in the table. So, for example, for all the LEISURE products I would like to increase the forecasted sales figures by 1000. I don't want to add 1000 to each LEISURE product but share the 1000 over all of the LEISURE products, i.e. if there were 10 LEISURE products in the table then increase each of their sales by 100 (1000/10). Thanks in advance. BO
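
A sketch of one way to spread the increase, assuming the table is called SalesForecast (a hypothetical name); the subquery counts the matching rows so each one gets an equal share:

[code="sql"]
UPDATE SalesForecast
SET ForecastSales = ForecastSales
    + 1000 / (SELECT COUNT(*) FROM SalesForecast WHERE ProductType = 'LEISURE')  -- integer division: 1000/10 = 100
WHERE ProductType = 'LEISURE';
[/code]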

Changing passphrase for EncryptByPassPhrase

Posted: 15 Mar 2013 10:48 PM PDT

I am using EncryptByPassPhrase (http://technet.microsoft.com/en-us/library/ms190357(v=sql.105).aspx) for encrypting some columns. But if I want to change the passphrase in the future, how can I change it?
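
Since the passphrase is baked into each encrypted value, changing it generally means re-encrypting the data: decrypt each value with the old passphrase and encrypt it again with the new one. A sketch, with dbo.MyTable and EncryptedCol as hypothetical names and placeholder passphrases:

[code="sql"]
-- re-encrypt existing values under the new passphrase
UPDATE dbo.MyTable
SET EncryptedCol = EncryptByPassPhrase('new passphrase',
                       DecryptByPassPhrase('old passphrase', EncryptedCol));
[/code]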

Issue in JIRA

Posted: 15 Mar 2013 10:59 PM PDT

I need help with JIRA. I have created one sub-task for an issue in JIRA, but now I need to remove the sub-task and I am unable to find the delete option. Could anyone help me with this?

Saturday, March 16, 2013

[SQL Server] Error "Must declare scalar variable ..." when using cursor



Error "Must declare scalar variable ..." when using cursor

Posted: 16 Mar 2013 09:03 AM PDT

Hi,

Hope you can help me. Every week KPIs for our HR dept. have to be generated. The problem with this is that the KPIs from the previous week have to be overwritten by the KPIs from the current week. I'll spare you the reason for this, but I tried to solve it using a cursor. The cursor variables store the difference in days between today and 7 days back, 14 days back, 21 days back etc. (and store the corresponding day and year). So for the current week this day-difference is 0; for the previous week it is 7; 2 weeks ago it is 14, 3 weeks ago it is 21, etc. When I run the query for the current week, the current week and all the previous weeks have to be inserted into a table and all the existing weeks in the table have to be deleted. The delete part of the query is not shown below; only the insert part. When I run the following query I get the error:

"Msg 137, Level 15, State 2, Line 99
Must declare the scalar variable "@kpi_cursor"."

I came across several topics when I googled around, but no solution so far. Probably there is a simple solution to the problem. Hope someone can help me with fixing the error. The query is:

[code="sql"]
declare @Created varchar(50)
declare @ReferenceDay date
declare @Type varchar(10)

set @Created = getdate()
set @ReferenceDay = getdate()
set @Type = 'Prognosis'

-- Cursor variables
DECLARE @DaysJump int
DECLARE @Date varchar(10)
DECLARE @DateYear varchar(4)
-- End cursor variables

DECLARE @Days int
SET @Days = @Days + 7

DECLARE kpi_cursor CURSOR FOR
SELECT @Days as jump, GETDATE() - @Days as jump_date, YEAR(GETDATE() - @Days) as jump_year
WHERE YEAR(GETDATE() - @Days) >= 2013

OPEN kpi_cursor
FETCH NEXT FROM kpi_cursor INTO @DaysJump, @Date, @DateYear

WHILE @@FETCH_STATUS = 0
BEGIN
INSERT INTO Q_KPIs_HRM_Test
    ( KPI, SubKPI, Type, Yr, RefDate, Week, Weekday, StartDate, EndDate, BV, Value, Syscreated, Syscreator )
/* 6. Employees out based on 711-workflow */
SELECT 'Employees out' as KPI,
    '' as SubKPI,
    @Type as Type,
    YEAR(a.EndDate) as Yr, /* Year of contract enddate */
    CONVERT(VARCHAR(10), @ReferenceDay, 105) as RefDay,
    (select dbo.udf_GetISOWeekNumberFromDate(a.EndDate)) as Week,
    DATENAME(DW, @ReferenceDay) as Weekday,
    CONVERT(varchar(50), (GETDATE() - 6 - @DaysJump), 105) as StartDate,
    CONVERT(varchar(50), (GETDATE() - @DaysJump), 105) as EndDate,
    h.costcenter as BV,
    COUNT(*) as Value,
    CONVERT(VARCHAR(10), @Created, 105) as Syscreated,
    '4' as Syscreator
FROM Absences a
LEFT OUTER JOIN humres h ON a.EmpID = h.res_id
WHERE ISNULL(a.hid, 1) > 0
    and isnull(h.res_id, 999999) > 5000
    and a.Type = 711
    and a.Status <> 2
    and a.EndDate > GETDATE() - @DaysJump
GROUP BY h.costcenter, a.EndDate

UNION ALL

/* 7. Employees in based on 500-workflow */
SELECT 'Employees in' as KPI,
    '' as SubKPI,
    @Type as Type,
    YEAR(a.StartDate) as Yr, /* Year of contract startdate */
    CONVERT(VARCHAR(10), @ReferenceDay, 105) as RefDay,
    (select dbo.udf_GetISOWeekNumberFromDate(a.StartDate)) as Week,
    DATENAME(DW, @ReferenceDay) as Weekday,
    CONVERT(varchar(50), (GETDATE() - 6 - @DaysJump), 105) as StartDate,
    CONVERT(varchar(50), (GETDATE() - @DaysJump), 105) as EndDate,
    h.costcenter as BV,
    COUNT(*) as Value,
    CONVERT(VARCHAR(10), @Created, 105) as Syscreated,
    '4' as Syscreator
FROM Absences a
LEFT OUTER JOIN humres h ON a.EmpID = h.res_id
WHERE ISNULL(a.hid, 1) > 0
    and isnull(h.res_id, 999999) > 5000
    and a.Type = 500
    and a.Status <> 2
    and a.StartDate > GETDATE() - @DaysJump
GROUP BY h.costcenter, a.StartDate

FETCH NEXT FROM @kpi_cursor INTO @DaysJump, @Date, @DateYear
END

CLOSE kpi_cursor
DEALLOCATE kpi_cursor
[/code]

Msg 245, Level 16, State 1, Line 1 Conversion failed when converting the varchar value 'Noida' to data type int.

Posted: 16 Mar 2013 04:24 AM PDT

CREATE TABLE EMPLOYEE (
    EMP_ID INT PRIMARY KEY,
    FIRSTNAME nvarchar(100),
    LASTNAME nvarchar(100),
    Salary INT,
    CITY nvarchar(100)
)

insert INTO EMPLOYEE VALUES(2, 'Monu', 'Rathor', 4789.00, 'Agra');
insert INTO EMPLOYEE VALUES(4, 'Rahul', 'Saxena', 5567.00, 'London');
insert INTO EMPLOYEE VALUES(5, 'Prabhat', 'Kumar', 4467.00, 'Bombay');

BEGIN TRANSACTION
Declare @B varchar(100)
SET @B = 'Nodia';
Declare @D varchar(100)
SET @D = 'Delhi'

Update employee SET city =
(Case
    WHEN city = 'Agra' THEN @B
    WHEN city = 'London' THEN @D
    ELSE 0
END)
END

The error faced was as present in the subject. Please help.

I also tried the below way:

BEGIN TRANSACTION
Update employee SET city =
(Case
    WHEN city = 'Agra' THEN 'Delhi'
    WHEN city = 'London' THEN 'Noida'
    ELSE 0
END)
END

BUT I CAME ACROSS:
Msg 102, Level 15, State 1, Line 9
Incorrect syntax near 'END'.

Please HELP
