Thursday, February 28, 2013

[how to] unable to change engine to MyISAM in mysql


unable to change engine to MyISAM in mysql

Posted: 28 Feb 2013 08:07 PM PST

I have installed MySQL 5.6.10 on Mac OS X 10.6.5.

Now the issue is that MySQL is using InnoDB as its default engine.

I have checked which engines are supported:

show engines\G
*************************** 1. row ***************************
      Engine: FEDERATED
     Support: NO
     Comment: Federated MySQL storage engine
Transactions: NULL
          XA: NULL
  Savepoints: NULL
*************************** 2. row ***************************
      Engine: MRG_MYISAM
     Support: YES
     Comment: Collection of identical MyISAM tables
Transactions: NO
          XA: NO
  Savepoints: NO
*************************** 3. row ***************************
      Engine: MyISAM
     Support: YES
     Comment: MyISAM storage engine
Transactions: NO
          XA: NO
  Savepoints: NO
*************************** 4. row ***************************
      Engine: BLACKHOLE
     Support: YES
     Comment: /dev/null storage engine (anything you write to it disappears)
Transactions: NO
          XA: NO
  Savepoints: NO
*************************** 5. row ***************************
      Engine: CSV
     Support: YES
     Comment: CSV storage engine
Transactions: NO
          XA: NO
  Savepoints: NO
*************************** 6. row ***************************
      Engine: MEMORY
     Support: YES
     Comment: Hash based, stored in memory, useful for temporary tables
Transactions: NO
          XA: NO
  Savepoints: NO
*************************** 7. row ***************************
      Engine: ARCHIVE
     Support: YES
     Comment: Archive storage engine
Transactions: NO
          XA: NO
  Savepoints: NO
*************************** 8. row ***************************
      Engine: InnoDB
     Support: DEFAULT
     Comment: Supports transactions, row-level locking, and foreign keys
Transactions: YES
          XA: YES
  Savepoints: YES
*************************** 9. row ***************************
      Engine: PERFORMANCE_SCHEMA
     Support: YES
     Comment: Performance Schema
Transactions: NO
          XA: NO
  Savepoints: NO

I want to change the default to MyISAM, so I changed the setting in my.cnf:

[mysqld]
default-storage-engine=MyISAM

After that I restarted MySQL from System Preferences, but when I check the engines again and create a table, it is still created with InnoDB, not MyISAM.

Can anyone tell me how to change the default engine to MyISAM?

Here is the my.cnf file content:

# For advice on how to change settings please see
# http://dev.mysql.com/doc/refman/5.6/en/server-configuration-defaults.html

[mysqld]
default-storage-engine=MyISAM

# Remove leading # and set to the amount of RAM for the most important data
# cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
# innodb_buffer_pool_size = 128M

# Remove leading # to turn on a very important data integrity option: logging
# changes to the binary log between backups.
# log_bin

# These are commonly set, remove the # and set as required.
# basedir = .....
# datadir = .....
# port = .....
# server_id = .....
# socket = .....

# Remove leading # to set options mainly useful for reporting servers.
# The server defaults are faster for transactions and fast SELECTs.
# Adjust sizes as needed, experiment to find the optimal values.
# join_buffer_size = 128M
# sort_buffer_size = 2M
# read_rnd_buffer_size = 2M

# sql_mode=STRICT_TRANS_TABLES
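A couple of things worth checking, hedged since the full setup isn't visible here: on OS X the server may be reading a different my.cnf than the one edited (the search order is printed by `mysqld --verbose --help`), the option also has an underscore spelling, and the default can be changed at runtime without a restart. A minimal sketch:

    -- check what the running server actually picked up
    SHOW VARIABLES LIKE 'default_storage_engine';

    -- change it on the running server (persist it in my.cnf as well)
    SET GLOBAL default_storage_engine = MyISAM;

    -- or force the engine per table, which overrides the default entirely
    CREATE TABLE t1 (id INT) ENGINE=MyISAM;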

"custom archiver out of memory" error when restoring large DB using pg_restore

Posted: 28 Feb 2013 05:37 PM PST

I'm trying to restore a local copy of a 30 GB DB (with only one table) dumped using the Heroku wrappers (described here) using pg_restore. The Heroku DB is on 9.1.7, and I have had the same result using 9.1.5 and 9.1.7 under OS X 10.8.

Using the command:

pg_restore --clean --no-acl --no-owner -d db_name -U username -h localhost -v db.dump  

I get the following output on the command line ("cleaning" commands omitted) after 20 minutes and seeing the DB size grow to about 30 GB:

pg_restore: creating SCHEMA public
pg_restore: creating COMMENT SCHEMA public
pg_restore: creating EXTENSION plpgsql
pg_restore: creating COMMENT EXTENSION plpgsql
pg_restore: creating TABLE video_data
pg_restore: creating SEQUENCE video_data_id_seq
pg_restore: creating SEQUENCE OWNED BY video_data_id_seq
pg_restore: executing SEQUENCE SET video_data_id_seq
pg_restore: creating DEFAULT id
pg_restore: restoring data for table "video_data"
pg_restore(11938) malloc: *** mmap(size=18446744071605936128) failed (error code=12)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
pg_restore: [custom archiver] out of memory
pg_restore: *** aborted because of error

Turning up log level to DEBUG1 reveals the error:

...(many of these checkpoint hints)...
LOG:  checkpoints are occurring too frequently (15 seconds apart)
HINT:  Consider increasing the configuration parameter "checkpoint_segments".
ERROR:  extra data after last expected column
CONTEXT:  COPY video_data, line 463110542: "3599794500:59:01497 797942  2012    23422-08-96 2400892 08.794219   08 0617 2342118769-08 72    115 2046225 1..."
STATEMENT:  COPY video_data (id, video_id, views, favorites, comments, created_at, likes, dislikes) FROM stdin;

I have tried increasing SHMMAX and SHMALL to 2 GB, increasing checkpoint_segments to 30, and upgrading PG to match patch levels. A search reveals a small handful of people with this problem several years back, and no resolution.

Any recommendations?
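The mmap size in the error is close to 2^64, which together with the later "extra data after last expected column" suggests a corrupt or truncated dump (for example from a non-binary-safe transfer) rather than a memory-tuning problem. Before any tuning, a quick hedged sanity check is to ask pg_restore merely to list the archive's table of contents:

    # if the archive header/TOC itself is damaged, this fails fast
    # without touching the database
    pg_restore -l db.dump > /dev/null && echo "archive TOC readable"

If that works, re-downloading the dump and comparing sizes or checksums against the original is the next cheap test.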

How much data is needed to show MySQL Cluster's performance scaling vs. InnoDB

Posted: 28 Feb 2013 03:36 PM PST

I am evaluating MySQL Cluster as a possible replacement for an InnoDB schema. So far, I have tested it with 10s of MB of data, and found MySQL Cluster slower than InnoDB; however, I have been told MySQL Cluster scales much better.

How much data does it take to show a performance benefit to MySQL Cluster vs. an InnoDB schema? Or, is there a better way to demonstrate MySQL Cluster's merits?

Open source tools to manage Data Quality

Posted: 28 Feb 2013 03:30 PM PST

Are there open source (or commercial) tools that can report data quality issues in a data warehouse using the Kimball star schema model? It doesn't have to be the silver bullet. I'm just looking for something to start with that can give basic details of data quality.

Load balancing after failover in a SQL Server cluster

Posted: 28 Feb 2013 03:24 PM PST

This is what I was asked today in my interview for an SQL Server DBA position:

It is an active-active cluster configuration. The primary node is in NY and the other node is in Rio. The NY server fails and the failover cluster takes over its activities, but the Rio server becomes overloaded because of the additional load. What would you do to solve the overload problem?

Please help, as I am a newbie to clustering concepts.

Thanks. GTT

Compressions vs shrinking a database

Posted: 28 Feb 2013 01:40 PM PST

Are compressing and shrinking a database in SQL Server the same thing? If I compress a table, will it decrease query and DML performance?
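They are different operations. Shrinking (DBCC SHRINKDATABASE / SHRINKFILE) releases unused space from the database files back to the OS and tends to fragment indexes; data compression (ROW or PAGE, an Enterprise Edition feature in most versions) changes how rows are stored on pages, usually trading some CPU on reads and DML for less I/O. A minimal sketch of each, with object names as placeholders:

    -- shrinking: returns free space in the files to the OS
    DBCC SHRINKDATABASE (MyDb);

    -- compression: rebuilds one table with page compression
    ALTER TABLE dbo.MyTable REBUILD WITH (DATA_COMPRESSION = PAGE);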

Find all the queries executed recently on a database

Posted: 28 Feb 2013 01:21 PM PST

[I am a beginner level T-SQL Programmer]
[..and hopefully I am on the right stack exchange site]

I would like to get a list of all the queries I executed (in the least, those I executed today since morning). I need to make a report on the execution times of the queries.

Online searching didn't yield much useful info. The only query I found that seemed close is:

SELECT
    deqs.last_execution_time AS [Time],
    dest.TEXT AS [Query]
FROM
    sys.dm_exec_query_stats AS deqs
    CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest
ORDER BY
    deqs.last_execution_time DESC

This query returned some odd results (most of them a bunch of sprocs). Moreover, all these results show queries executed since this afternoon (I need queries from this morning).

I couldn't find anything in the previous questions (if similar question has already been asked, please point me to it).

I saw some suggestions regarding SQL Profiler, but I guess profiler would help me only if I have already started the trace (correct me if I am wrong).

Can someone suggest how I should go about getting a list of all queries that have been executed on the database since morning (including their execution times)?

[It would be helpful (not a requirement) if I can also somehow get the username of the user who executed the query]
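For what the plan cache can still tell you, a variant of the same DMV query with timing columns may get closer; note the cache only holds plans that haven't been evicted, so morning queries may already be gone, and per-user attribution isn't available this way (that needs a trace or Extended Events session started beforehand). A sketch:

    SELECT deqs.creation_time,
           deqs.last_execution_time,
           deqs.execution_count,
           deqs.total_elapsed_time / 1000 AS total_elapsed_ms,   -- the DMV reports microseconds
           dest.text AS query_text
    FROM sys.dm_exec_query_stats AS deqs
    CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest
    WHERE deqs.last_execution_time >= CAST(GETDATE() AS date)
    ORDER BY deqs.last_execution_time DESC;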

MySQL specific database configuration file

Posted: 28 Feb 2013 12:48 PM PST

In MySQL's configuration file I've globally disabled autocommit, like so:

[mysqld]
autocommit=0

However, I need autocommit turned on for one specific Ruby on Rails database. It could be set for the user or the database itself (it doesn't matter). I was thinking it would look something like this:

[mysqld]
autocommit=0
execute_sql="Custom SQL to set autocommit for a database"
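There is no per-database option like this in my.cnf; autocommit is a server-wide default plus a per-session variable. One workaround, sketched here, is to leave the global at 0 and have the Rails application's connection setup issue a session override right after connecting:

    -- run once per connection by the application that needs it
    SET SESSION autocommit = 1;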

Book / Tutorials / Website - Creating Custom C Database

Posted: 28 Feb 2013 05:03 PM PST

I want to create a custom database for my project. Kindly recommend a good book, tutorial, or website that provides detailed information.

The features I want to implement are full-text search, a server/client application, and encryption of the text, since all of it is currently stored in plain text.

The application should be cross-platform, compiled with MinGW32 through Cygwin to keep it portable.

mysql query - how to delete duplicates based on highest int?

Posted: 28 Feb 2013 10:48 AM PST

I'm running a reverse-auction website and I need to delete the duplicate bids, keeping only the highest amount. lastPriceBid_rounded is the "amount"; a duplicate is defined by the userId.

+----------------------+--------------+------+-----+-------------------+-----------------------------+
| Field                | Type         | Null | Key | Default           | Extra                       |
+----------------------+--------------+------+-----+-------------------+-----------------------------+
| lastPriceBid         | varchar(125) | YES  |     | NULL              |                             |
| dateBidded           | varchar(125) | YES  |     | NULL              |                             |
| owner                | varchar(65)  | YES  |     | NULL              |                             |
| status               | int(11)      | YES  |     | 0                 |                             |
| userId               | varchar(125) | YES  |     | NULL              |                             |
| siteId               | int(11)      | YES  |     | NULL              |                             |
| lastPriceBid_rounded | int(11)      | YES  |     | NULL              |                             |
| from_nickname        | varchar(125) | YES  |     | NULL              |                             |
| id                   | int(11)      | NO   | PRI | NULL              | auto_increment              |
| time_updated         | timestamp    | NO   |     | CURRENT_TIMESTAMP | on update CURRENT_TIMESTAMP |
+----------------------+--------------+------+-----+-------------------+-----------------------------+

So far I've tried

DELETE n1
FROM offers n1, offers n2
WHERE n1.lastPriceBid_rounded < n2.lastPriceBid_rounded
  AND n1.userId = n2.userId;

but it doesn't seem to work well: I still have some duplicates (I can see them when I use SELECT DISTINCT).
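One plausible reason the self-join misses rows: two bids by the same user with the same lastPriceBid_rounded tie, so neither is strictly less than the other and both survive. A hedged variant that breaks ties with the primary key:

    DELETE n1
    FROM offers n1
    JOIN offers n2
      ON  n1.userId = n2.userId
      AND (n1.lastPriceBid_rounded < n2.lastPriceBid_rounded
           OR (n1.lastPriceBid_rounded = n2.lastPriceBid_rounded
               AND n1.id < n2.id));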

Which full-text index should I create for the XOOPS engine to achieve better search results?

Posted: 28 Feb 2013 06:04 PM PST

In one of our projects we use the XOOPS engine to manage content. In the MySQL slow query log, most of the queries look like the following:

SELECT p.uid, f.forum_id, p.topic_id, p.poster_name, p.post_time, f.forum_name, p.post_id, p.subject
FROM xps33_bb_posts p, xps33_bb_posts_text pt, xps33_bb_forums f
WHERE p.post_id = pt.post_id
  AND p.approved = 1
  AND p.forum_id = f.forum_id
  AND f.forum_id IN (1,4,61,7,9,17,20,45,35,44,38,39,43,53,54,55,56,57,58,60,14,29,40,26,18,41,33,24,32,59,25)
  AND ((p.subject LIKE '%rivi%' OR pt.post_text LIKE '%orvi%')
   AND (p.subject LIKE '%care%' OR pt.post_text LIKE '%gor%'))
ORDER BY p.post_time DESC LIMIT 0, 5;

I can't change the queries, as that would involve changing the engine, which is not an option at the moment. But I can help the engine search faster. As I understand it, since these tables use the MyISAM engine I can create full-text indexes, which should make searching faster; am I right?

So, in general, which indexes should I create to keep queries like this from running for a long time?
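One caveat first: a MyISAM FULLTEXT index only accelerates MATCH ... AGAINST predicates; LIKE '%word%' with a leading wildcard cannot use either a B-tree or a FULLTEXT index, so an index alone will not speed up the queries exactly as written. A sketch of what the indexes and a rewritten predicate would look like, if changing the generated SQL ever becomes possible:

    ALTER TABLE xps33_bb_posts      ADD FULLTEXT INDEX ft_subject   (subject);
    ALTER TABLE xps33_bb_posts_text ADD FULLTEXT INDEX ft_post_text (post_text);

    -- only a rewritten predicate can use them:
    ... WHERE MATCH(pt.post_text) AGAINST ('+orvi +gor' IN BOOLEAN MODE)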

Making a message system

Posted: 28 Feb 2013 12:34 PM PST

I have created a messaging system with MySQL; the structure is as follows:

`message_id` int(11) NOT NULL AUTO_INCREMENT,
`message_from` int(11) NOT NULL,
`message_to` int(11) NOT NULL,
`message_body` varchar(255) NOT NULL,
`message_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`message_deleted` tinyint(1) NOT NULL DEFAULT '0',
`message_read` tinyint(1) NOT NULL DEFAULT '0',
PRIMARY KEY (`message_id`),
KEY `message_from` (`message_from`),
KEY `message_to` (`message_to`)

Currently the PK is an AUTO_INCREMENT, so I am wondering whether this kind of key is efficient for this type of system. The main reason I am asking is that I have been having issues with my load-more function: because messages get deleted, the PKs are not consecutive, e.g. 1, 2, 3, 10, 255, 300. The query I run for the more function is:

(SELECT MAX(message_id)
 FROM member_messages
 WHERE (message_to = :userId AND message_from = :friendId)
    OR (message_to = :friendId AND message_from = :userId)) AS max_id,
FROM member_messages AS MSG
WHERE (MSG.message_to = :userId AND MSG.message_from = :friendId)
   OR (MSG.message_to = :friendId AND MSG.message_from = :userId)
  AND MSG.message_deleted = '0'
HAVING MSG.message_id >= max_id - 20
   AND MSG.message_id <= :upperId
ORDER BY message_id ASC

In effect this query should return the last 20 messages entered, but because the PKs are not consecutive, sometimes no messages are returned. E.g. if max_id is 200, then 200 - 20 = 180, and the messages with IDs in that range may be deleted or not associated with this member (userId) and recipient (friendId). Any insight into this problem, and whether I should change the key type or change the query itself? Thanks for your time.

EDIT

Just a side note: I used ASC because I wanted the newest messages to appear at the bottom of the DIV, while the fetched results would be older and appear at the top.
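Rather than doing arithmetic on ids (which assumes they are consecutive), one common pattern is to page by position: fetch the 20 newest qualifying rows below the last id already shown, and reverse them in the application for display. A hedged sketch:

    SELECT *
    FROM member_messages
    WHERE ((message_to = :userId AND message_from = :friendId)
        OR (message_to = :friendId AND message_from = :userId))
      AND message_deleted = '0'
      AND message_id < :upperId
    ORDER BY message_id DESC
    LIMIT 20;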

String handling in postgres 8.4

Posted: 28 Feb 2013 05:07 PM PST

This started as a question about indexes, but when I got my answer I realized I would have to convert it into 8.4 syntax (I can't believe format() does not exist there).

I thought I was handling the variables correctly, but for some reason I can't get the typing to function properly:

CREATE OR REPLACE FUNCTION add_column(schema_name TEXT, table_name TEXT,
  column_name TEXT, data_type TEXT) RETURNS BOOLEAN AS $BODY$
DECLARE
  _tmp text;
BEGIN
  EXECUTE 'SELECT COLUMN_NAME FROM information_schema.columns WHERE
       table_schema='''+|| schema_name+'''
       AND table_name='''+||table_name+'''
       AND column_name='''+||column_name+''''
  INTO _tmp;

  IF _tmp IS NOT NULL THEN
    RAISE NOTICE USING MESSAGE = 'Column '+||column_name+' already exists in '+||schema_name+'.'+||table_name;
    RETURN FALSE;
  END IF;

  EXECUTE format('ALTER TABLE %I.%I ADD COLUMN %I %s;', schema_name,
    table_name, column_name, data_type);

  RAISE NOTICE USING MESSAGE = 'Column '+||column_name+' added to '+||schema_name+'.'+||table_name;
  RETURN TRUE;
END;
$BODY$ LANGUAGE 'plpgsql';

Here is the error message I get:

ERROR: operator does not exist: text + unknown

Does anyone know how I can get this to work in 8.4?
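The error comes from `+`, which is not a string operator in Postgres; in 8.4 concatenation is `||`, and quote_literal() takes care of the embedded quoting that format() would otherwise handle (format() arrived in 9.1). A sketch of the dynamic query rebuilt that way:

    EXECUTE 'SELECT column_name FROM information_schema.columns'
         || ' WHERE table_schema = ' || quote_literal(schema_name)
         || ' AND table_name = '     || quote_literal(table_name)
         || ' AND column_name = '    || quote_literal(column_name)
    INTO _tmp;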

postgres GROUP BY and ORDER BY problem

Posted: 28 Feb 2013 07:46 PM PST

I have two tables like this:

CREATE TABLE cmap5 (
    name     VARCHAR(2000),
    lexemes  TSQUERY
);

and

CREATE TABLE IF NOT EXISTS synonyms_all_gin_tsvcolumn (
    cid       int NOT NULL references pubchem_compounds_index(cid) ON UPDATE CASCADE ON DELETE CASCADE,
    name      VARCHAR(2000) NOT NULL,
    synonym   VARCHAR(2000) NOT NULL,
    tsv_syns  TSVECTOR,
    PRIMARY KEY (cid, name, synonym)
);

My current query is:

SELECT s.cid, s.synonym, c.name, ts_rank(s.tsv_syns, c.lexemes, 16)
FROM synonyms_all_gin_tsvcolumn s, cmap5 c
WHERE c.lexemes @@ s.tsv_syns

and the output is:

  cid   | synonym                                | name (query)              | rank
--------+----------------------------------------+---------------------------+----------
5474706 | 10-Methoxyharmalan                     | 10-methoxyharmalan        | 0.0901673
   1416 | (+/-)12,13-EODE                        | 12,13-EODE                | 0.211562
5356421 | LEUKOTOXIN B (12,13-EODE)              | 12,13-EODE                | 0.211562
 180933 | 1,4-Chrysenequinone                    | 1,4-chrysenequinone       | 0.211562
5283035 | 15-Deoxy-delta-12,14-prostaglandin J2  | 15-delta prostaglandin J2 | 0.304975
5311211 | 15-deoxy-delta 12 14-prostaglandin J2  | 15-delta prostaglandin J2 | 0.304975
5311211 | 15-deoxy-Delta(12,14)-prostaglandin J2 | 15-delta prostaglandin J2 | 0.304975
5311211 | 15-Deoxy-delta-12,14-prostaglandin J2  | 15-delta prostaglandin J2 | 0.304975
5311211 | 15-Deoxy-delta 12, 14-Prostaglandin J2 | 15-delta prostaglandin J2 | 0.304975

I would like to return the name matches of all rows in cmap5 against my main table, ranked by the ts_rank function, but for each row in cmap5 I want to:

-- select only the best X cids for each query (GROUP BY cid)
-- or ORDER BY my results as 1 + ts_rank / count(cid)

To get the best match I added SELECT DISTINCT ON (c.name), but when the rank is tied I want to get the cid with more matches to the query. I have tried adding a simple GROUP BY at the end of the query, but I get an error. How could I do this?
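One way to phrase "best row per cmap5 name, ties broken by how many synonyms the cid matches" is DISTINCT ON with a multi-key ORDER BY; the correlated count is only a sketch and may be slow on large data:

    SELECT DISTINCT ON (c.name)
           s.cid, s.synonym, c.name,
           ts_rank(s.tsv_syns, c.lexemes, 16) AS rank
    FROM synonyms_all_gin_tsvcolumn s
    JOIN cmap5 c ON c.lexemes @@ s.tsv_syns
    ORDER BY c.name,
             ts_rank(s.tsv_syns, c.lexemes, 16) DESC,
             (SELECT count(*)
              FROM synonyms_all_gin_tsvcolumn s2
              WHERE s2.cid = s.cid
                AND c.lexemes @@ s2.tsv_syns) DESC;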

SQL Server 2005 Replication

Posted: 28 Feb 2013 02:06 PM PST

I am in the process of creating replication between 2 remote servers; server 1 is the Distributor and Publisher, and server 2 is the Subscriber.

server 1: Windows Server 2003, 192.168.10.1, connected by VPN, SQL Server 2005, domain1.local

server 2: Windows Server 2003, 192.168.10.6, connected by VPN, SQL Server 2005, domain2.local

When I set up replication everything looked fine until I looked at the sync status, which said:

The agent could not be started

An exception occurred while executing a Transact-SQL statement or batch

SQLServerAgent error: request to run job server1-username (blah blah blah)
from user 'sa' refused because the job is already running from a request
by user 'sa'. Changed database context to ... (error 22022)

I have cleared jobs in the server agent as well as restarted the service.

Could this have something to do with authentication between two non-trusted domains? I can browse and even control each SQL Server via Management Studio, just not set up replication.

Yes, I can manage each SQL Server in SSMS, and we are using merge replication with a snapshot.

How to Correctly Estimate the Average Number of I/O Operations my RAID 10 disk can offer?

Posted: 28 Feb 2013 03:41 PM PST

The average Input/Output Operations per Second a disk can sustain is a very important factor in the tuning of MySQL / InnoDB databases, as it drives the speed with which Dirty InnoDB Buffer Pool pages are flushed back to the disk.

Is there a method or formula I can use to estimate the average I/O operations per second my RAID 10 array can sustain?
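A common rule of thumb, hedged because controllers and caches muddy it considerably: with N spindles each sustaining D IOPS, RAID 10 delivers roughly N × D for reads and N × D / 2 for writes (each logical write costs one physical write per mirror side). For a mixed workload with read fraction R and write fraction W, the sustainable figure is approximately N × D / (R + 2W). For example, 8 drives at ~175 IOPS each give about 1,400 read IOPS, 700 write IOPS, and roughly 1,000 IOPS at a 70/30 read/write mix.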

Mysqldump tables excluding some fields

Posted: 28 Feb 2013 11:15 AM PST

Is there a way to mysqldump a table without some fields?

Let me explain:
I have a MySQL database called tests. In tests I have 3 tables: USER, TOTO and TATA. I just want to mysqldump some fields of table USER, so excluding some fields like mail, ip_login, etc.

How can I do this?
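mysqldump itself works per table, not per column, so one workaround is to stage just the wanted columns in a scratch table and dump that; the column names here are only examples:

    CREATE TABLE USER_export AS
        SELECT id, login, created_at     -- everything except mail, ip_login, ...
        FROM USER;
    -- then, from the shell: mysqldump tests USER_export > user_export.sql
    DROP TABLE USER_export;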

How to do something like UPDATE DELAYED in MySQL

Posted: 28 Feb 2013 12:29 PM PST

I have an averages table that should keep track of an average value over time. I don't want to have a row for each value, just a single row that continuously updates the average. What I've come up with is this:

set @value = 4;
set @name = "myAverageValue";
UPDATE `timing` SET
  `max` = greatest(`max`, @value),
  `average` = `average` + ((@value - `average`) / (`count` + 1)),
  `count` = `count` + 1
WHERE `name` = @name

Many clients may be doing this at the same time, and I don't want there to be any locking issues. I don't care what order the updates are run in, since in the end it will all end up the same. I just want to have a query that sends the UPDATE to the database, and it will process it eventually, similar to an INSERT DELAYED. Does UPDATE LOW_PRIORITY do this, or does that cause the client to wait until it is available?
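For what it's worth: MySQL has no UPDATE DELAYED, and LOW_PRIORITY is not asynchronous. It only matters for table-locking engines (MyISAM, MEMORY, MERGE), where it makes the statement wait until no readers are active, and the client still blocks until the update finishes. On InnoDB it is ignored, and since this statement touches a single row under row-level locking, issuing it normally may already be cheap enough:

    -- syntax only; the client still waits for the statement to complete
    UPDATE LOW_PRIORITY `timing`
    SET `count` = `count` + 1
    WHERE `name` = @name;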

SQL Server 2012 database backup successfully report but no backup file

Posted: 28 Feb 2013 06:29 PM PST


I have created a maintenance plan in SQL Server 2012 that should back up the database every day. But there is no database backup file when I look in the folder where the backups must be stored, even though the SQL Server logs show this maintenance plan completing successfully.

What is the problem?

I am using SQL Server 2012. The operating system is Windows Server 2008 R2.

Thank you for paying attention.


phpmyadmin - Login issue, takes me back to the login page after logging in

Posted: 28 Feb 2013 03:08 PM PST

I set up phpMyAdmin quite a while ago and it worked well, and I got everything configured and working. However, when I went to log in recently, it just took me back to the login page. The credentials are correct, but nevertheless I always end up at the login screen.

I can't find anything in my error logs, and in the access logs everything is status code 200, except the POST which is 302 followed by 304s. None of that really stands out.

I'm using Nginx+PHP5-FPM+MySQL if that changes anything.

What types of things might I have changed that would cause this sort of activity?

Replication master binlog rotation when network is unavailable

Posted: 28 Feb 2013 02:29 PM PST

I recently experienced an issue where the binlog file on the master rotated while network connectivity between the slave and master was unavailable.

After solving the network issue, the slave was not able to resume following the master from the binlog file and position it had been using.

This was solved by purging the binlogs on the master up to the latest one, and pointing the slave at that binlog with the position it had been following in the previous one.

Is this behaviour normal?

Take individual MySQL database offline

Posted: 28 Feb 2013 05:29 PM PST

MySQL does not have any SQL commands or internal mechanisms for

  • making an individual database unavailable / offline
  • moving an individual database

This being the case, how can you take an individual database offline?
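The closest workaround, since the server has no per-database OFFLINE state, is usually to make the database unreachable by revoking access (the user and host here are examples):

    REVOKE ALL PRIVILEGES ON mydb.* FROM 'appuser'@'%';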

Choosing shard key and friendly URL Ids for my MongoDB

Posted: 28 Feb 2013 07:29 PM PST

I have decided to use MongoDB as the database for a web application. However, I am having some difficulty getting started, and I hope you can help me with a few questions.

I am developing my application in ASP.NET with MongoDB as the back-end. I intend to start with a single server + 1 replica, but I want to build it right so I won't have problems sharding the database in the future if I have to.

One of my biggest problems is choosing the right shard key and friendly URLs for my website.

I have a folders collection, with files as an embedded collection inside the folders collection. Each user can create any number of folders and add files to them. Each folder belongs to one user. I wanted to know the best shard key for this type of collection. Many queries will query by the user, getting a folder and its items by querying the folders collection by its unique id. I will also use the id in the URL to get the folder and its files: e.g. mywebsite.com/folder/[the_id_of_the_folder]

I will also use paging in my application, so I need to query the data (also in a sharded environment) and get, for example, the last 10 records, or page 2 with 10 records, all ordered by the last time they were inserted/updated.

  • So my first question is: what is the best shard key to use for a single machine, considering that I will have to shard in the future?
  • Does the shard key have to be the primary unique id of the document in MongoDB?
  • How can I generate more user-friendly URLs? I prefer a numerical value instead of a GUID (is there an option to convert it?)

Help will be very much appreciated, as I am stuck and cannot continue until I solve this.

How to write a shell script to check the mysql database backup status.

Posted: 28 Feb 2013 08:29 PM PST

I have MySQL production databases running on the main (Linux) server, and every night they are backed up to a local server. Now I want a shell script that checks whether every database backup is up to date (for example: I have twenty databases on the main server, and I want to know whether all of them have been backed up completely), and compares each database's current size with its previous size. In short, I want this script to assure me that my backup is good, and to leave a record whenever it isn't.

Please help me find a solution to this problem.
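A minimal sketch of such a check, assuming one gzipped dump per database per day named <db>_YYYY-MM-DD.sql.gz under a backup directory (both the path and the naming scheme are assumptions to adjust):

    #!/bin/sh
    BACKUP_DIR=/backup/mysql
    for db in $(mysql -N -e 'SHOW DATABASES' | grep -vE '^(information_schema|performance_schema)$'); do
        today="$BACKUP_DIR/${db}_$(date +%F).sql.gz"
        yday="$BACKUP_DIR/${db}_$(date -d yesterday +%F).sql.gz"
        if [ ! -s "$today" ]; then
            echo "MISSING backup for $db"
        elif [ -s "$yday" ] && [ "$(stat -c%s "$today")" -lt "$(stat -c%s "$yday")" ]; then
            echo "backup for $db is smaller than yesterday's"
        fi
    done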

Creating a New Database Across a Network with SQL Server 2008 R2

Posted: 28 Feb 2013 08:08 PM PST

I appreciate that the above should not be done ('Creating a New Database Across a Network with SQL Server 2008 R2') for all of the reasons outlined here but a client wants to know how to do this and cannot be told otherwise.

To set up and test this I have two laptops connected across a local area network (machine one is 'VAIOE', machine two is 'VAION'). I am attempting to create a new database from VAIOE, running SQL Server 2008 R2 Developer Edition, across the network on a shared drive on VAION called 'CShare' (VAION's C: drive). I have enabled all of the appropriate permissions such that, from VAIOE, I am able to navigate to '\\VAION\CShare\Databases\' and create a new Access database (for example).

Now, to attempt to create a database (from VAIOE onto the C: drive of VAION) using SQL Server 2008 R2 Management Studio, I open a new script and write (using master):

CREATE DATABASE [networkedR2] ON PRIMARY
( NAME = N'networkedR2', FILENAME = N'\\VAION\CShare\aaData\aaSSDBs\networkedR2.mdf' ,
  SIZE = 3072KB , MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB )
LOG ON
( NAME = N'networkedR2_log', FILENAME = N'\\VAION\CShare\aaData\aaSSDBs\networkedR2_log.ldf' ,
  SIZE = 1024KB , MAXSIZE = 2048GB , FILEGROWTH = 10%)
GO

and I get the following error

Msg 5133, Level 16, State 1, Line 1
Directory lookup for the file "\\VAION\CShare\aaData\aaSSDBs\networkedR2.mdf" failed
with the operating system error 5(failed to retrieve text for this error. Reason: 15105).
Msg 1802, Level 16, State 1, Line 1
CREATE DATABASE failed. Some file names listed could not be created. Check related errors.

indicating I do not have the correct permissions. Can someone tell me what I have done wrong here, as I have hit a wall?

Edit: I am aware that for earlier versions of SQL Server the DBCC TRACEON(1807, -1) flag was required. This should not be the case here.
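Two hedged observations: operating system error 5 is access denied, and the account that needs rights on the share is the one the Database Engine service runs as, not the interactive login that can browse the share. Also, UNC-hosted database files are only officially supported without trace flag 1807 starting with SQL Server 2012, so on 2008 R2 the flag may still be required. From 2008 R2 SP1 onward, the service account can be checked from T-SQL:

    SELECT servicename, service_account
    FROM sys.dm_server_services;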

Is MySQL Replication Affected by a High-Latency Interconnect?

Posted: 28 Feb 2013 12:26 PM PST

We've got a vanilla master and slave MySQL setup that reside in different data centers, and another slave in the same datacenter as the master.

The bandwidth between the datacenters is pretty high (in network benchmarks we've done we can reach 15 MB/second), but latency exists: around 28 ms. That's not high by any means, but it is much higher than the latency within a single datacenter.

Occasionally, we experience serious lag (2000 seconds and more) on the remote slave, while the local slave stays up to date. Looking at the lagging remote slave, the SQL thread usually spends its time waiting for the IO thread to update the relay log, while the master shows "waiting for net" or something of the sort at the same time.

So it points to the network, but we still have free bandwidth when this happens.

My question is: can the latency between the datacenters affect the performance of replication? Does the slave IO thread just stream the events until the master stops sending them, or does it poll the master somehow between events?

Sql Anywhere 11: Restoring incremental backup failure

Posted: 28 Feb 2013 07:08 PM PST

We want to create remote incremental backups after a full backup. This will allow us to restore in the event of a failure and bring up another machine with as close to real time backups as possible with SQL Anywhere network servers.

We are doing a full backup as follows:

dbbackup -y -c "eng=ServerName.DbName;uid=dba;pwd=sql;links=tcpip(host=ServerName)" c:\backuppath\full

This makes a backup of the database and log files and can be restored as expected. For incremental backups I've tried both live and incremental transaction logs with a renaming scheme if there are multiple incremental backups:

dbbackup -y -t -c "eng=ServerName.DbName;uid=dba;pwd=sql;links=tcpip(host=ServerName)" c:\backuppath\inc

dbbackup -y -l -c "eng=ServerName.DbName;uid=dba;pwd=sql;links=tcpip(host=ServerName)" c:\backuppath\live

However, on restore I always receive an error when applying the transaction logs to the database:

10092: Unable to find table definition for table referenced in transaction log

The transaction log restore command is:

dbeng11 "c:\dbpath\dbname.db" -a "c:\backuppath\dbname.log"  

The error doesn't specify which table it can't find, but this is a controlled test and no tables are being created or dropped. I insert a few rows, then kick off an incremental backup before attempting to restore.

Does anyone know the correct way to do incremental backup and restore on Sql Anywhere 11?

UPDATE: Thinking it might be related to the complexity of the target database, I made a new blank database and network service, added one table with two columns, and inserted a few rows. I made a full backup, then inserted and deleted a few more rows and committed the transactions, then made an incremental backup. Applying the incremental transaction-log backups after restoring the full backup failed with the same error.

Edit:

You can follow this link to see the same question with slightly more feedback on SA: http://sqlanywhere-forum.sybase.com/questions/4760/restoring-incrementallive-backup-failure

[SQL Server] Current date data problem in SSRS


Current date data problem in SSRS

Posted: 28 Feb 2013 06:52 AM PST

Hi all,

Trying to get data by month for the current month and the previous month in SSRS, but I haven't been able to get it. I was using this statement in the procedure:

IF @CurrentDate IS NULL
BEGIN
    SET @CurrentDate = DATEADD(MM, -1, GETDATE())
END

This gives me records up to January, but not for February. I want February in there too. How should I write this? Hope this makes sense.

Thanks
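If the goal is rows for both the previous month and the current month, one hedged sketch (SomeDateCol is a placeholder for the procedure's date column) is a half-open range from the first day of the previous month to the first day of the next month:

    IF @CurrentDate IS NULL
        SET @CurrentDate = GETDATE();

    ... WHERE SomeDateCol >= DATEADD(MONTH, DATEDIFF(MONTH, 0, @CurrentDate) - 1, 0)
          AND SomeDateCol <  DATEADD(MONTH, DATEDIFF(MONTH, 0, @CurrentDate) + 1, 0)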

Removing a secondary tempdb file

Posted: 27 Feb 2013 08:55 PM PST

Hello.

I created a secondary tempdb file on our testing server the other day, and now I wish to remove it. I have run:

USE tempdb
ALTER DATABASE tempdb REMOVE FILE tempdev2

and I got an error saying:

The file "D:\tempdb\tempdb2.ndf" has been modified in the system catalog. The new path will be used the next time the database is started.
Msg 5042, Level 16, State 1, Line 1
The file 'tempdb2' cannot be removed because it is not empty.

When I restarted the instance, sure enough, the file was still there. Do I need to run some kind of shrink operation first?

Any help would be great - thanks very much
Steven
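Yes: the file has to be emptied first. DBCC SHRINKFILE with the EMPTYFILE option migrates its pages to the other tempdb files, after which the REMOVE FILE usually succeeds (and since tempdb is recreated at startup, a restart right after often completes it):

    USE tempdb;
    GO
    DBCC SHRINKFILE (N'tempdev2', EMPTYFILE);
    GO
    ALTER DATABASE tempdb REMOVE FILE tempdev2;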

DBMail service stopped - Job notification sent

Posted: 28 Feb 2013 04:45 AM PST

Database Mail was installed on a SQL Server 2008 platform. A profile, alerts, and job monitoring were set up. Due to other environment problems, all alerts and job monitoring were disabled; we had other issues which would have sent 50-100 emails a day (I had a few hours of that). Once those issues were resolved, alerts and job monitoring were re-enabled, and the SQL Server Agent was bounced.

Sending a test DB mail failed (Error: 14641 - Database Mail is stopped). Running sysmail_help_status_sp showed that Database Mail was stopped.

While researching (I decided not to run sysmail_start_sp initially), I noticed 2 job notification emails sent well after bouncing the SQL agent (two days later). Running sysmail_help_status_sp yielded the same - Database Mail was still stopped.

My question: how is it that the Database Mail service appears stopped, yet DB mails are generated? Comments / URLs are appreciated.

Thank you.
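For completeness, the queue can be started and re-inspected from msdb; this doesn't explain the two late notifications, but it confirms the state either way:

    EXEC msdb.dbo.sysmail_start_sp;        -- starts the Database Mail queue
    EXEC msdb.dbo.sysmail_help_status_sp;  -- should now report STARTED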

[Articles] More Evolution, More Complexity


More Evolution, More Complexity

Posted: 27 Feb 2013 11:00 PM PST

As SQL Server advances and evolves, Steve Jones thinks it gets more complex, not necessarily easier to administer.

[MS SQL Server] Applying checkpoint


Applying checkpoint

Posted: 28 Feb 2013 04:03 AM PST

Hi,

Do you need access to master in order to run CHECKPOINT in your query? Please advise. The login I am using has db_owner rights but is not mapped to master. Is this good enough to run CHECKPOINT?

SueTons.
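CHECKPOINT acts on the current database, and per the documentation its permission defaults to sysadmin, db_owner and db_backupoperator in that database; no mapping to master is needed:

    USE YourDb;   -- db_owner membership here is sufficient
    CHECKPOINT;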

Strange error, cannot script out jobs from SQL.

Posted: 28 Feb 2013 04:01 AM PST

When trying to script out a current job on our MSX SQL Server, we receive this error:

"Unable cast object of type 'system.dbnull' to type 'system.string'. (microsoft.sqlserver.smo)"

I cannot find a lot of information on this topic. What would allow this job to run, yet prevent it from being scripted out and moved to our other MSX server?

Sudden slow performance from distributor to subscriber in transactional replication?

Posted: 27 Feb 2013 09:35 AM PST

We've had transactional replication enabled for 6-8 months to update a secondary database instance that's used for reporting purposes. It's a single subscriber, set to pull.

At some point over the last few days the subscriber apparently stopped being able to keep up with the replicated data, and now I'm sitting on anywhere from 2-3 million undistributed commands at any given point. Stuff is making it from the distributor to the subscriber, just very slowly apparently.

The problem of course being that I can't find any reason for this sudden change. Nothing has changed from a network/SAN/server perspective that we can find, there are no errors in the SQL or SQL agent logs, and the Windows event logs are also quite boring. These servers are connected via multiple 10GbE links to a core switch, and our networking gear looks pretty bored; same deal on our SAN.

If I spend some time sifting through the commands in the distributor db, nothing really crazy jumps out at me other than the sheer volume of commands sitting in there. If I look on the subscriber all I see are updates being applied, without any blocking or anything unusual going on there. If I insert a trace token, latency from Publisher to Distributor is 1-2 seconds, while Distributor to Subscriber has still been pending after around 7-8 hours. I've tried restarting the various SQL agent jobs; nothing really changes there.

Any suggestions on what to look at? I'm running out of ideas here.

Multiple Transaction Logs, will removing one break Log Shipping?

Posted: 27 Feb 2013 06:45 AM PST

I have a production server where a former DBA created an additional LDF file for a given database - it is quite small (5GB).

The database itself is about 100GB in the PRIMARY filegroup, with a 55 GB index filegroup and its primary log file sitting at about 50GB; they created the smaller 5GB log file on a different drive.

I'd like to remove this smaller log file altogether by issuing

[code="sql"]USE MyDB;
DBCC SHRINKFILE (N'MyDB_log' , EMPTYFILE);
IF @@ERROR = 0
BEGIN
    ALTER DATABASE MyDB REMOVE FILE MyDB_log;
END[/code]

Will this work, or will it break my existing log shipping configuration? My guess is that I'd be okay, but I just want to be certain.

Any recommendations from the experts?

Does the adrenaline rush ever subside when rebooting a DB server.

Posted: 27 Feb 2013 06:04 AM PST

I work in a small shop as the accidental DBA, and for the first time I rebooted our production DB server after applying CU9 to fix some issues we were having. It went fine, but until the server booted up and SQL started I had that adrenaline rush of "what happens if...". Does that ever get better when you have to do something to a major point of failure? (We have a backup server, but it would take several hours to get it up and running again.) This time everything went fine and I had proper backups, but it was still nerve-wracking.

Job Execution Details

Posted: 28 Feb 2013 12:03 AM PST

Hi Experts,

One of our maintenance jobs, which was not supposed to run on weekdays, executed 3 times today. The job history shows it was invoked by the system account. Is there any way we can figure out how the job was started?

TIA
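A starting point is msdb's job history; the message column of each step usually records how the run ended and was requested (the job name here is an example):

    SELECT j.name, h.step_name, h.run_date, h.run_time, h.message
    FROM msdb.dbo.sysjobhistory AS h
    JOIN msdb.dbo.sysjobs AS j ON j.job_id = h.job_id
    WHERE j.name = N'MyMaintenanceJob'
    ORDER BY h.run_date DESC, h.run_time DESC;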

sysfiles and master_files out of sync

Posted: 27 Feb 2013 09:38 PM PST

Hi all,

I apologise if this has been raised before, but I couldn't find it using search... I've come across a situation on one of my servers where the entries in sysfiles and master_files are out of sync for the master database. As far as I was aware this is a situation that should not be able to happen, and all tests I've done on my dev server with changing the logical file name (admittedly not on the master database) have failed to recreate it.

Here is some TSQL I'm using:

[code="sql"]use master;
go

--reports master log file logical name as 'master_log'
select name, fileid
from sysfiles
order by fileid;

--reports master log file logical name as 'mastlog'
select name, [file_id]
from sys.master_files
where database_id = db_id()
order by [file_id];

select FILEPROPERTY('master_log', 'SpaceUsed');
select FILEPROPERTY('mastlog', 'SpaceUsed');[/code]

and the result sets I'm seeing:

[code="other"]name       fileid
---------- ------
master     1
master_log 2

name       file_id
---------- -----------
master     1
mastlog    2

----------
68

----------
NULL[/code]

The server is Win2k8 with SP1, and the SQL Server instance is 2k8R2 with SP1. Any help and/or advice on how things could have got into this state would be much appreciated!

Russell

SQL Database code comparison

Posted: 27 Feb 2013 05:21 AM PST

Good day all,

I am in the process of setting up a full-blown test and development environment. We are primarily an ASP.NET C# shop. I will start using Team Foundation Service for my code and source control. I am wondering about a good approach to managing SQL changes such as: table changes, stored procedure changes, function changes.

Is there software that can track changes between 2 databases? I understand that you can track changes within TFS, but I have some doubts about that... Anyway, any input is greatly appreciated.

delete a large amount of rows in a table

Posted: 27 Feb 2013 05:18 AM PST

Please help me make this query more efficient. I have been waiting more than 1 hour for the query to complete and it did not finish, and I have about 25 large tables to delete from. This is what I wrote, but it is very slow:

SELECT * INTO WORKTBL
FROM [CHARACTER_01_DBF].[dbo].SKILLINFLUENCE_TBL
WHERE m_idplayer NOT IN (SELECT DISTINCT(m_idPlayer) COLLATE DATABASE_DEFAULT
                         FROM [LOGGING_01_DBF].[dbo].[LOG_LOGIN_TBL]
                         WHERE End_Time > '20130123015613')

DROP TABLE [CHARACTER_01_DBF].[dbo].SKILLINFLUENCE_TBL

SELECT * INTO [CHARACTER_01_DBF].[dbo].SKILLINFLUENCE_TBL FROM WORKTBL

DROP TABLE WORKTBL

Do you know any better method? Something much faster than hours and hours of waiting for it to complete?
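One alternative worth testing, sketched under the assumption that only a minority of rows is being removed: delete in batches in place instead of rewriting the whole table, which avoids the double copy and keeps each transaction short:

    WHILE 1 = 1
    BEGIN
        DELETE TOP (10000) t
        FROM [CHARACTER_01_DBF].[dbo].SKILLINFLUENCE_TBL AS t
        WHERE NOT EXISTS (SELECT 1
                          FROM [LOGGING_01_DBF].[dbo].[LOG_LOGIN_TBL] AS l
                          WHERE l.m_idPlayer COLLATE DATABASE_DEFAULT = t.m_idplayer
                            AND l.End_Time > '20130123015613');
        IF @@ROWCOUNT = 0 BREAK;
    END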

[SQL 2012] SQL Server 2012 Certification Study Guides


SQL Server 2012 Certification Study Guides

Posted: 28 Feb 2013 01:51 AM PST

I've finally embarked upon the certification path after (too many) years of broken resolutions. I'm choosing to go the study guide route rather than a boot camp or other course, as it allows me to also fit in the study during quiet periods in my current job.

I'm starting from scratch, so the path is easy: 70-461, 70-462 and 70-463 for the MCSA, followed by 70-464 and 70-465 for the Data Platform MCSE and (if I still have the will to live) 70-466 and 70-467 for the BI MCSE.

I have the O'Reilly study guides for the 3 MCSA exams and am forging my way through them. However, I was just browsing for study guides for the MCSE exams and am unable to find any. Does anyone have any details of study guides for 70-464, 70-465, 70-466 or 70-467, please? Ideally both printed and electronic format, but either would be fine.

Thanks in advance.

Apologies - I should have posted this in the Career/Certification forum.

Page life expectancy diving to sub-10 on a 128 GB server

Posted: 27 Feb 2013 09:36 AM PST

Last weekend we migrated our primary ERP system from a Windows 2003, SQL 2008 R2 server to a Windows 2008 R2, SQL 2012 server. The new servers are much beefier than the old ones, including an extra 32 GB of RAM. The ERP database is ~500 GB, of which a good amount is audit data. On the old server I generally had page life expectancy values around 1000 or more for normal daily activity.

On the new server I have been capturing perfmon stats for the past week, and I see the PLE jumping all over the place. It climbs into the several hundreds but then suddenly dives to ridiculously low values like 12 or even 7. The server usage profile has not changed; it's still the same users and applications doing the same things they were doing last week.

The instance is set to have min 70 GB, max 110 GB allocated to SQL. The service account has the lock pages in memory permission.

Can anyone think of some setting I might have missed during migration, or some new option to be set in SQL 2012 that I've overlooked, that could explain this?

Downgrade Replication

Posted: 27 Feb 2013 02:02 PM PST

Is it possible to set up database replication from SQL Server 2012 to SQL Server 2008?

[T-SQL] converting Varchar details into nvarchar of my table


converting Varchar details into nvarchar of my table

Posted: 27 Feb 2013 09:29 PM PST

Hai friends,

I created a table like this:

create table wish(
    id int identity,
    name nvarchar(100),
    city varchar(100),
    comments nvarchar(max)
)

insert into wish values('rama','chennai','happy birthdy')

I have also inserted wishes in Tamil, Telugu and Hindi. Unfortunately, the web page inserted the comment values in varchar mode, so comments entered in Tamil display as ??????. Now I want to retrieve the Tamil text as nvarchar. How can I modify the varchar data that is already inserted into nvarchar?
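Two separate problems here, hedged as general guidance: the column type has to be NVARCHAR, and the literals must be inserted with the N prefix; otherwise they are converted through the varchar code page first and non-Latin characters become '?'. Rows already stored as '?' have lost the original text and must be re-inserted.

    -- widen the column so it can hold Unicode:
    ALTER TABLE wish ALTER COLUMN city NVARCHAR(100);

    -- and always insert with the N prefix:
    INSERT INTO wish (name, city, comments)
    VALUES (N'rama', N'chennai', N'happy birthday');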

Grouping records by time interval

Posted: 27 Feb 2013 11:58 PM PST

Hi everyone,

I have been searching for a few days now to try to solve a problem, but without success; what I did find on the net wasn't what I want. We have a large DB with a lot of tables. From those tables I have to use 2 tables in my query.

The query has to group all the records from the table 'Tussen_Tickets' per time interval of 15 minutes, based on the column Ticket_Closed_DateTime, and also group them per Event_Channel, which is a column in the table. The intervals can be found in the second table, named 'Tijdsintervallen'. The result also has to represent all the intervals that can be found in the table 'Tijdsintervallen'. This means that per day all 96 intervals have to be represented in the result. If no results are found for an interval in the table 'Tussen_Tickets', zero has to be filled in for the columns DVBIntern, DVBExtern, Tijdsduur, WrapUp, OnHold and Talk.

The result has to have the following columns: Date (datetime), Interval from the table Tijdsintervallen, and the columns Event_Channel, DVBIntern, DVBExtern, Tijdsduur, WrapUp, OnHold and Talk from the table Tussen_Tickets.

I hope I'm clear and complete enough with my question and explanation. Who is willing to help me out with this query? In the attachment you will find a text file to create the 2 test tables and fill them with data.

Thank you very much for your help and assistance.

Greetz,
Geert
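Without the attachment the exact columns of Tijdsintervallen are unknown, so here is only the bucketing half as a sketch: floor each closing time to its 15-minute interval and group. Surfacing the empty intervals then takes a LEFT JOIN from Tijdsintervallen (cross-joined with the dates and channels) with the measures wrapped in ISNULL(..., 0):

    SELECT CAST(Ticket_Closed_DateTime AS date) AS [Date],
           DATEADD(MINUTE,
                   (DATEDIFF(MINUTE, 0, Ticket_Closed_DateTime) / 15) * 15,
                   0) AS IntervalStart,
           Event_Channel,
           COUNT(*) AS Tickets
    FROM Tussen_Tickets
    GROUP BY CAST(Ticket_Closed_DateTime AS date),
             DATEADD(MINUTE, (DATEDIFF(MINUTE, 0, Ticket_Closed_DateTime) / 15) * 15, 0),
             Event_Channel;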

Is a doomed transaction inevitable if deadlock occurs in a trigger (SQL Server 2008 R2)?

Posted: 27 Feb 2013 01:37 AM PST

Hi all,

Below are two scenarios. What I am trying to accomplish is to catch deadlock errors and re-try the DML that was chosen as the deadlock victim. Does anyone know of a way to accomplish this without a doomed transaction, given the constraints? I have seen many posts suggesting this is a limitation of the dbengine, but am looking for confirmation.

In both scenarios, I have a SProc - let's call it "X" - that is called mostly from within triggers (it performs some denormalized calculations and cannot be changed at this time) - in which deadlocks are often occurring.

Scenario 1
• In SProc X, I want to catch any 1205 errors, and re-try the query that received the 1205 error.
• However, whenever I re-try the query I receive the error "The current transaction cannot be committed and cannot support operations that write to the log file. Roll back the transaction."

Scenario 2
• In SProc Y, which runs DML statements that fire triggers that call SProc X, I want to catch any 1205 errors, and re-try the query that received the 1205 error.
• If SProc Y is called outside any transaction, I can begin a new transaction, roll back if a deadlock occurred, and successfully retry the DML.
• However, if SProc Y is called inside a transaction, I need to perform a SAVE TRAN and roll back to the savepoint. Whenever I re-try the query I receive the error "The current transaction cannot be committed and cannot be rolled back to a savepoint. Roll back the entire transaction.". For my logic to work, I cannot roll back the entire transaction, so SProc Y only performs as expected outside a transaction.

TIA,
Dave

A complicated situation

Posted: 28 Feb 2013 12:07 AM PST

Hi,

I am working on a project where resistance to some antibiotics is studied. We have a database of patients checked for resistance. The result of the test for each antibiotic comes back as either "1" for "not resistant" or "2" for "resistant". A simplified table is something like this:

[code="sql"]DECLARE @Test_TBL TABLE(ID VARCHAR(3), City VARCHAR(10), AntiBiotic VARCHAR(20), Result INT)
INSERT INTO @Test_TBL(ID,City,Antibiotic, Result) VALUES('1', 'Denver', 'AMP', 1)
INSERT INTO @Test_TBL(ID,City,Antibiotic, Result) VALUES('1', 'Denver', 'TET', 2)
INSERT INTO @Test_TBL(ID,City,Antibiotic, Result) VALUES('2', 'Denver', 'SPT', 2)
INSERT INTO @Test_TBL(ID,City,Antibiotic, Result) VALUES('3', 'New York', 'AMP', 2)
INSERT INTO @Test_TBL(ID,City,Antibiotic, Result) VALUES('3', 'New York', 'SPT', 2)
INSERT INTO @Test_TBL(ID,City,Antibiotic, Result) VALUES('4', 'Boston', 'AMP', 2)
INSERT INTO @Test_TBL(ID,City,Antibiotic, Result) VALUES('4', 'Boston', 'TET', 2)
INSERT INTO @Test_TBL(ID,City,Antibiotic, Result) VALUES('4', 'Boston', 'STR', 2)
INSERT INTO @Test_TBL(ID,City,Antibiotic, Result) VALUES('5', 'New York', 'AMP', 2)
INSERT INTO @Test_TBL(ID,City,Antibiotic, Result) VALUES('5', 'New York', 'STR', 2)
INSERT INTO @Test_TBL(ID,City,Antibiotic, Result) VALUES('6', 'Denver', 'TET', 2)
INSERT INTO @Test_TBL(ID,City,Antibiotic, Result) VALUES('6', 'Denver', 'SPT', 1)
INSERT INTO @Test_TBL(ID,City,Antibiotic, Result) VALUES('6', 'Denver', 'STR', 2)[/code]

Now, I need to make a report that looks like this:

[img]http://apzec.ca/images/SQLServerCentral/capture.jpg[/img]

I appreciate any help.
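The screenshot isn't reproduced here, but if the report is a count of resistant vs. non-resistant tests per city and antibiotic, conditional aggregation is the usual building block; a sketch against the sample table:

    SELECT City,
           AntiBiotic,
           SUM(CASE WHEN Result = 2 THEN 1 ELSE 0 END) AS Resistant,
           SUM(CASE WHEN Result = 1 THEN 1 ELSE 0 END) AS NotResistant
    FROM @Test_TBL
    GROUP BY City, AntiBiotic
    ORDER BY City, AntiBiotic;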

How to concatenate group of rows

Posted: 27 Feb 2013 10:28 AM PST

Hi,

I have a data structure as follows: [Name], [Task], and the data is:

John, task1
John, task2
Joe, task1
Joe, task3
Joe, task4
Jane, task1
Jane, task4
...

I wonder how to use SELECT to produce the following result:

John, task1;task2
Joe, task1;task3;task4
Jane, task1;task4
...

Thanks for any help.
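The standard pre-2017 T-SQL idiom for this is a correlated FOR XML PATH subquery; the table name NameTask is an assumption:

    SELECT t.Name,
           STUFF((SELECT ';' + t2.Task
                  FROM NameTask AS t2
                  WHERE t2.Name = t.Name
                  FOR XML PATH('')), 1, 1, '') AS Tasks
    FROM NameTask AS t
    GROUP BY t.Name;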

sql pivot problem with unknown number of records to columns.

Posted: 27 Feb 2013 06:23 PM PST

Hi, please help me out with this scenario. Table:

ID  NAME
1   a
1   b
1   c
1   d
2   e
2   f
3   g
3   h
3   i
4   j
5   K
5   L
5   m
5   N
5   O
5   P

Required output:

id  name1  name2  name3  name4  name5  name6
1   a      b      c
2   e      f
3   g      h      i
4   j
5   k      l      m      n      o      p

I need a query which gives this output.
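With a fixed upper bound of columns (six in the sample), ROW_NUMBER plus conditional aggregation does it; a truly unknown column count needs the same statement generated with dynamic SQL. The table name is an example:

    WITH numbered AS (
        SELECT ID, NAME,
               ROW_NUMBER() OVER (PARTITION BY ID ORDER BY NAME) AS rn
        FROM SourceTbl
    )
    SELECT ID,
           MAX(CASE WHEN rn = 1 THEN NAME END) AS name1,
           MAX(CASE WHEN rn = 2 THEN NAME END) AS name2,
           MAX(CASE WHEN rn = 3 THEN NAME END) AS name3,
           MAX(CASE WHEN rn = 4 THEN NAME END) AS name4,
           MAX(CASE WHEN rn = 5 THEN NAME END) AS name5,
           MAX(CASE WHEN rn = 6 THEN NAME END) AS name6
    FROM numbered
    GROUP BY ID;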

select rows into colums

Posted: 27 Feb 2013 10:19 PM PST

Dear T-SQL'ers,

I have a test table (see script below) with the following result:

name    length
balk1   7
balk1   6
balk1   9
stof1   6
stof2   6
stof3   6
stof4   6
stof5   6
stof5   7
stof6   6
stof7   6
stof8   6
stof9   6
stof9   7
stof10  6
stof11  6
stof12  6

Now I would like the result to be like this:

balk1  stof1  stof2  stof3  stof4  stof5  stof6  stof7  stof8  stof9  stof10  stof11  stof12
6      6      6      6      6      6      6      6      6      6      6       6       6
7                                  7                           7
9

How should my query be? Thanks a lot in advance for your help :)

[code]SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[test](
    [name] [varchar](50) NULL,
    [length] [int] NULL
) ON [PRIMARY]
GO
insert into test (name, length) values ('balk1', 7)
insert into test (name, length) values ('balk1', 6)
insert into test (name, length) values ('balk1', 9)
insert into test (name, length) values ('stof1', 6)
insert into test (name, length) values ('stof2', 6)
insert into test (name, length) values ('stof3', 6)
insert into test (name, length) values ('stof4', 6)
insert into test (name, length) values ('stof5', 6)
insert into test (name, length) values ('stof5', 7)
insert into test (name, length) values ('stof6', 6)
insert into test (name, length) values ('stof7', 6)
insert into test (name, length) values ('stof8', 6)
insert into test (name, length) values ('stof9', 6)
insert into test (name, length) values ('stof9', 7)
insert into test (name, length) values ('stof10', 6)
insert into test (name, length) values ('stof11', 6)
insert into test (name, length) values ('stof12', 6)
select * from test
SET ANSI_PADDING OFF
GO[/code]

Calculation with aliases

Posted: 15 Feb 2013 12:35 AM PST

Hi,

I have the following query:

[code="sql"]USE ICP;
GO
DECLARE @beginDatum Datetime
DECLARE @eindDatum Datetime

SELECT T.Ticket_ID, UA.Useraccount_First_Name, UA.Useraccount_Last_Name, A.Description,
       T.Ticket_Accepted_DateTime, T.Ticket_Closed_DateTime,
       DATEDIFF(second, T.Ticket_Accepted_DateTime, T.Ticket_Closed_DateTime) AS Tijdsduur,
       SUM(CONVERT(DECIMAL(4, 3), TA.Ticket_Action_Description)) AS WrapUp,
       SUM(CONVERT(DECIMAL(4, 3), TAA.Ticket_Action_Description)) AS OnHold
FROM Tickets AS T
LEFT OUTER JOIN Application_Useraccounts AS UA ON T.Ticket_Accepted_Useraccount_ID = UA.Useraccount_ID
LEFT OUTER JOIN Tickets_Actions AS TA ON T.Ticket_ID = TA.Ticket_ID AND TA.Ticket_Action_Type_ID = 3
LEFT OUTER JOIN Tickets_Actions AS TAA ON T.Ticket_ID = TAA.Ticket_ID AND TAA.Ticket_Action_Type_ID = 4
INNER JOIN Authorities AS A ON T.Authority_ID = A.Authority_ID
WHERE (T.Ticket_Closed_DateTime BETWEEN @beginDatum AND DATEADD(day, 1, @eindDatum))
GROUP BY T.Ticket_ID, UA.Useraccount_First_Name, UA.Useraccount_Last_Name, A.Description,
         T.Ticket_Accepted_DateTime, T.Ticket_Closed_DateTime
ORDER BY UA.Useraccount_First_Name, UA.Useraccount_Last_Name, A.Description,
         T.Ticket_Accepted_DateTime, T.Ticket_Closed_DateTime[/code]

Now I need to add another column to the result, based on a calculation using the other aliases: Tijdsduur - WrapUp - OnHold AS Talk. What's the best way to accomplish this?

I tried the following query, but it does not work:

[code="sql"]USE ICP;
GO
DECLARE @beginDatum Datetime
DECLARE @eindDatum Datetime

SELECT Tijdsduur, WrapUp, OnHold, Tijdsduur - WrapUp - OnHold AS Talk,
       T.Ticket_ID, UA.Useraccount_First_Name, UA.Useraccount_Last_Name, A.Description,
       T.Ticket_Accepted_DateTime, T.Ticket_Closed_DateTime
FROM (SELECT T.Ticket_ID, UA.Useraccount_First_Name, UA.Useraccount_Last_Name, A.Description,
             T.Ticket_Accepted_DateTime, T.Ticket_Closed_DateTime,
             DATEDIFF(second, T.Ticket_Accepted_DateTime, T.Ticket_Closed_DateTime) AS Tijdsduur,
             SUM(CONVERT(DECIMAL(4, 3), TA.Ticket_Action_Description)) AS WrapUp,
             SUM(CONVERT(DECIMAL(4, 3), TAA.Ticket_Action_Description)) AS OnHold
      FROM Tickets AS T
      LEFT OUTER JOIN Application_Useraccounts AS UA ON T.Ticket_Accepted_Useraccount_ID = UA.Useraccount_ID
      LEFT OUTER JOIN Tickets_Actions AS TA ON T.Ticket_ID = TA.Ticket_ID AND TA.Ticket_Action_Type_ID = 3
      LEFT OUTER JOIN Tickets_Actions AS TAA ON T.Ticket_ID = TAA.Ticket_ID AND TAA.Ticket_Action_Type_ID = 4
      INNER JOIN Authorities AS A ON T.Authority_ID = A.Authority_ID) AS dtSums
WHERE (T.Ticket_Closed_DateTime BETWEEN @beginDatum AND DATEADD(day, 1, @eindDatum))
GROUP BY T.Ticket_ID, UA.Useraccount_First_Name, UA.Useraccount_Last_Name, A.Description,
         T.Ticket_Accepted_DateTime, T.Ticket_Closed_DateTime
ORDER BY UA.Useraccount_First_Name, UA.Useraccount_Last_Name, A.Description,
         T.Ticket_Accepted_DateTime, T.Ticket_Closed_DateTime[/code]
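The pattern that fixes both problems (aliases referenced in the same SELECT, and the outer query referencing T, which only exists inside the derived table): compute the aliases in the derived table, including its WHERE and GROUP BY, and treat them as ordinary columns outside. Reduced to essentials, dropping the name/authority joins for brevity:

    SELECT dt.Ticket_ID,
           dt.Tijdsduur, dt.WrapUp, dt.OnHold,
           dt.Tijdsduur - dt.WrapUp - dt.OnHold AS Talk
    FROM (
        SELECT T.Ticket_ID,
               DATEDIFF(second, T.Ticket_Accepted_DateTime, T.Ticket_Closed_DateTime) AS Tijdsduur,
               SUM(CONVERT(DECIMAL(4, 3), TA.Ticket_Action_Description))  AS WrapUp,
               SUM(CONVERT(DECIMAL(4, 3), TAA.Ticket_Action_Description)) AS OnHold
        FROM Tickets AS T
        LEFT OUTER JOIN Tickets_Actions AS TA
               ON T.Ticket_ID = TA.Ticket_ID AND TA.Ticket_Action_Type_ID = 3
        LEFT OUTER JOIN Tickets_Actions AS TAA
               ON T.Ticket_ID = TAA.Ticket_ID AND TAA.Ticket_Action_Type_ID = 4
        WHERE T.Ticket_Closed_DateTime BETWEEN @beginDatum AND DATEADD(day, 1, @eindDatum)
        GROUP BY T.Ticket_ID, T.Ticket_Accepted_DateTime, T.Ticket_Closed_DateTime
    ) AS dt;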

How to display Columns based on other table values

Posted: 27 Feb 2013 04:28 AM PST

I have the tables below:

1) student (sno, name, subject, cost, city)
2) studentPT (sno, Ptname, EnrollDate)
3) StudentDrill (sno, DrillName, EnrollDate)

Now the requirement is to display sno, name, cost, PTCol (if a record exists, based on sno, in the studentPT table, then 'Yes' + EnrollDate, otherwise 'Not Enrolled'), and DrillCol (if a record exists, based on sno, in the StudentDrill table, then 'Yes' + EnrollDate, otherwise 'Not Enrolled').

How can I display those last two columns based on these rules? Please kindly help me.

Thank you very much in advance,
Asittii
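A sketch using LEFT JOINs and CASE; the date formatting (style 120) is an arbitrary choice:

    SELECT s.sno,
           s.name,
           s.cost,
           CASE WHEN pt.sno IS NULL THEN 'Not Enrolled'
                ELSE 'Yes ' + CONVERT(varchar(10), pt.EnrollDate, 120) END AS PTCol,
           CASE WHEN sd.sno IS NULL THEN 'Not Enrolled'
                ELSE 'Yes ' + CONVERT(varchar(10), sd.EnrollDate, 120) END AS DrillCol
    FROM student AS s
    LEFT JOIN studentPT    AS pt ON pt.sno = s.sno
    LEFT JOIN StudentDrill AS sd ON sd.sno = s.sno;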

How compare a list to a table

Posted: 27 Feb 2013 12:54 AM PST

Hi,

I have a list of colors like {White, Black, Green}, and I have a table with the structure [Name];[Color];[IDX]. The data inside my table is:

item1;Black;1
item2;Green;2
item3;Yellow;3
item4;White;4
...

I wonder if it is possible to write a SELECT that compares each row to the list and, if the color is not included in the list, returns a word like "Other". So the rows returned from the above would be:

item1;Black;1
item2;Green;2
item3;Other;3
item4;White;4
...

Currently I fetch through the table, compare each row to the list items, and if the color is not in the list I update the field and replace it with "Other". However, this procedure takes long, and as my table's record count increases I need to find another way.

Any help will be appreciated. Thanks.
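A single pass with CASE avoids the updates entirely; the table name Items is an assumption:

    SELECT [Name],
           CASE WHEN [Color] IN ('White', 'Black', 'Green')
                THEN [Color]
                ELSE 'Other' END AS [Color],
           [IDX]
    FROM Items;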
