Wednesday, July 17, 2013

[how to] SQL Server does not exist or access denied from a VMWare guest

SQL Server does not exist or access denied from a VMWare guest

Posted: 17 Jul 2013 08:33 PM PDT

I've configured SQL Server 2012 to allow remote connections, but I still cannot connect to it from a VMware Workstation guest (using both NAT and bridged networking).

I can ping the host machine from the guest VM when using NAT (the same is not true for bridge config).

Can you help me?
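One way to narrow this down (a sketch; `HOSTNAME` is a placeholder for the host machine's name or IP) is to test from the guest whether the SQL Server port is actually reachable, since "does not exist or access denied" usually means the TCP connection itself failed rather than a login problem:

```shell
# Assumes the default instance on TCP port 1433; named instances
# use dynamic ports and also need UDP 1434 (SQL Browser) reachable.
nc -zv HOSTNAME 1433
```

If the port is closed, check that the TCP/IP protocol is enabled in SQL Server Configuration Manager and that the host's firewall allows inbound connections on 1433 from the VM's network.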

Tag search + mysql + pagination

Posted: 17 Jul 2013 07:44 PM PDT

I have a table "tags" in my database which is updated with many new rows every second. The user can see the tag list on a page, ordered by added_time. I am trying to use the LIMIT and OFFSET options of MySQL, so it lists the first 50 results with a "show more" button; clicking "show more" loads the next result set.

It is possible that, by the time the user clicks "show more", some 10 tags have been newly added to the "tags" table, so 10 rows from the previous result set will reappear in the second result set, and so on. Is there any way to show the changes in the first set and then load the second set?
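One common way around this drift (a sketch, assuming the table has an auto-increment `id` and an `added_time` column; column names are illustrative) is keyset pagination: instead of LIMIT ... OFFSET, remember the boundary ids the user has already seen and fetch rows relative to them, so newly inserted rows cannot shift the window:

```sql
-- First page: newest 50 tags.
SELECT id, name, added_time
FROM tags
ORDER BY id DESC
LIMIT 50;

-- "Show more": pass the smallest id from the previous page.
SELECT id, name, added_time
FROM tags
WHERE id < :last_seen_id
ORDER BY id DESC
LIMIT 50;

-- Refreshing the first set: fetch only rows newer than the largest seen id.
SELECT id, name, added_time
FROM tags
WHERE id > :first_seen_id
ORDER BY id DESC;
```

Because each page is anchored to an id rather than an offset, inserts between clicks only affect the "newer rows" query, never the next page.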

Open source tools for Oracle schema compares

Posted: 17 Jul 2013 04:28 PM PDT

I'm looking for an open source tool to generate DDL scripts; many tools call these synchronization scripts. We essentially want to compare the schemas of different environments (e.g. DEV to QA, QA to PROD) to make object migrations/deployments a little easier to manage. Do open source tools like this exist for Oracle?

SSIS 2012 Create environment variable fails

Posted: 17 Jul 2013 08:33 PM PDT

I'm working on a script to port an Environment from one server to another. I am running into an issue calling catalog.create_environment_variable, wherein I get the error "The data type of the input value is not compatible with the data type of the 'String'." coming out of the proc "check_data_type_value."

What's strange is that if I let the GUI script out the variables, the resulting query works:

DECLARE @var sql_variant = N'\\myserver\ssisdata'

EXEC [catalog].[create_environment_variable]
    @variable_name = N'FolderBase'
,   @sensitive = False
,   @description = N''
,   @environment_name = N'Development'
,   @folder_name = N'POC'
,   @value = @var
,   @data_type = N'String'
GO

However, taking this script approach isn't working. The legwork I've done indicates this error message is usually resolved by using nvarchar data type instead of varchar. However, that's not the case for my stuff.

The error occurs at line 108 of the following script. My assumption is that it's something wonky with the sql_variant, but I have no idea what that thing is.

USE SSISDB;
GO

DECLARE
    @folder_id bigint
,   @folder_name nvarchar(128) = N'POC'
,   @environment_name nvarchar(128) = N'Development'
,   @environment_description nvarchar(1024)
,   @reference_id bigint
,   @variable_name nvarchar(128)
,   @data_type nvarchar(128)
,   @sensitive bit
,   @value sql_variant
,   @description nvarchar(1024);

IF NOT EXISTS
(
    SELECT * FROM catalog.folders AS F WHERE F.name = @folder_name
)
BEGIN
    EXECUTE catalog.create_folder
        @folder_name = @folder_name
    ,   @folder_id = @folder_id OUTPUT;

    PRINT CONCAT('Folder "', @folder_name, '" has been created with a folder_id of ', @folder_id);
END

IF NOT EXISTS
(
    SELECT * FROM catalog.environments AS E
    WHERE E.name = @environment_name
    AND E.folder_id = (SELECT F.folder_id FROM catalog.folders AS F WHERE F.name = @folder_name)
)
BEGIN
    PRINT CONCAT('Creating environment ', @environment_name);

    EXECUTE catalog.create_environment
        @folder_name = @folder_name
    ,   @environment_name = @environment_name
    ,   @environment_description = @environment_description;
END

DECLARE @EnvironmentVariables TABLE
(
    folder_name nvarchar(128)
,   environment_name nvarchar(128)
,   variable_name nvarchar(128)
,   description nvarchar(1024)
,   data_type nvarchar(128)
,   sensitive bit
,   value sql_variant
);

INSERT INTO @EnvironmentVariables
SELECT
    E.folder_name
,   E.environment_name
,   S.name
,   S.description
,   S.type
,   S.sensitive
,   S.value
FROM
(
    SELECT 'FolderBase','Root for ssis processing','String',CAST(0 AS bit),'\\myserver\ssisdata'
    UNION ALL SELECT 'AuditConnectionString','Conn to audit db','String',CAST(0 AS bit),'Data Source=SQLETL01;Initial Catalog=Audit;Provider=SQLNCLI11.1;Integrated Security=SSPI;Auto Translate=False;'
) AS S (name, description, type, sensitive, value)
CROSS APPLY
(
    SELECT
        E.name AS environment_name
    ,   F.name AS folder_name
    FROM
        catalog.folders AS F
        INNER JOIN catalog.environments AS E
            ON E.folder_id = F.folder_id
    WHERE
        F.name = @folder_name
        AND E.name = @environment_name
) E;

DECLARE Csr CURSOR FORWARD_ONLY STATIC FOR
SELECT
    EV.variable_name
,   EV.description
,   EV.data_type
,   EV.sensitive
,   EV.value
FROM
    @EnvironmentVariables AS EV;

OPEN Csr;
FETCH NEXT FROM Csr INTO
    @variable_name
,   @description
,   @data_type
,   @sensitive
,   @value;

WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRY
        -- THERE BE MONSTERS AHEAD
        -- The data type of the input value is not compatible with the data type of the 'String'.
        EXECUTE catalog.create_environment_variable
            @variable_name = @variable_name
        ,   @sensitive = @sensitive
        ,   @description = @description
        ,   @environment_name = @environment_name
        ,   @folder_name = @folder_name
        ,   @value = @value
        ,   @data_type = @data_type;
    END TRY
    BEGIN CATCH
        SELECT
            @folder_name        AS folder_name
        ,   @environment_name   AS environment_name
        ,   @variable_name      AS variable_name
        ,   @data_type          AS data_type
        ,   @sensitive          AS sensitive
        ,   @value              AS value
        ,   @description        AS description
        ,   ERROR_NUMBER()      AS error_number    -- the number of the error
        ,   ERROR_SEVERITY()    AS error_severity  -- the severity
        ,   ERROR_STATE()       AS error_state     -- the error state number
        ,   ERROR_PROCEDURE()   AS error_procedure -- the proc or trigger where the error occurred
        ,   ERROR_LINE()        AS error_line      -- the line number inside the routine
        ,   ERROR_MESSAGE()     AS error_message;  -- the complete text of the error message
    END CATCH

    FETCH NEXT FROM Csr INTO
        @variable_name
    ,   @description
    ,   @data_type
    ,   @sensitive
    ,   @value;
END

CLOSE Csr;
DEALLOCATE Csr;

Specific disadvantages to storing infrequently accessed large files in a large database

Posted: 17 Jul 2013 12:30 PM PDT

I have heard many times across the web that it is, for various reasons, a bad idea to store files directly in a database; the preferred method is to store filenames in the database and the files themselves on the filesystem. Most explanations, however, seem to assume that the stored files (in my case, 2 MB longtext objects containing XML) need to be retrieved along with the query. I've seen a few comments in passing, however, suggesting that by storing these files directly in your database you are bloating it and thereby "interfere with IO or query performance of the rest of your database."

Under what circumstances does this happen, and why?

It is my understanding that BLOB and LONGTEXT values over a certain size are stored externally anyway. Does the database still attempt to load the entire LONGTEXT value when its respective row is used in operations? My main concern comes from the fact that most queries against the table containing these LONGTEXT rows do not actually need the XML at all. Does simply having these cumulatively massive chunks of data sitting in the same table really affect the performance of other queries against that table?
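A common mitigation for this worry (a sketch; table and column names are illustrative, not from the original schema) is to move the rarely-needed LONGTEXT into a 1:1 side table, so the hot table stays narrow and its scans never touch the XML pages:

```sql
-- Hot table: queried frequently, no large columns.
CREATE TABLE documents (
    id      INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    name    VARCHAR(255) NOT NULL,
    created DATETIME NOT NULL
) ENGINE=InnoDB;

-- Cold table: only touched when the XML itself is needed.
CREATE TABLE document_bodies (
    document_id INT UNSIGNED NOT NULL PRIMARY KEY,
    body_xml    LONGTEXT NOT NULL,
    FOREIGN KEY (document_id) REFERENCES documents (id)
) ENGINE=InnoDB;
```

Queries that don't need the XML then never read or cache the large pages, regardless of how the storage engine handles off-page values.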

Move multiple SQL Server 2008 databases from C: to D: at once

Posted: 17 Jul 2013 02:01 PM PDT

I have SQL Server 2008 R2 with 323 databases consuming some 14 GB on my C: drive, a fast SSD.

Because I want to reclaim some space on my C: drive, I would like to move them to my D: drive.

I have found this MSDN article, but that seems to be the procedure to move only one database.

Is there an automatic way or script to move all my databases at once?
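One way to avoid writing 323 statements by hand (a sketch; the target path D:\SQLData is an assumption, and you still have to take each database offline, move the physical files, and bring it back online) is to generate the ALTER DATABASE ... MODIFY FILE statements from sys.master_files:

```sql
-- Generate MODIFY FILE statements for every user database file on C:.
SELECT 'ALTER DATABASE [' + DB_NAME(database_id) + '] MODIFY FILE (NAME = ['
       + name + '], FILENAME = ''D:\SQLData\'
       + RIGHT(physical_name, CHARINDEX('\', REVERSE(physical_name)) - 1)
       + ''');'
FROM sys.master_files
WHERE physical_name LIKE 'C:\%'
  AND database_id > 4;  -- skip the system databases
```

Run the generated script, set each database OFFLINE, move the .mdf/.ldf files to the new location, then set each database ONLINE again; the same generation trick works for the OFFLINE/ONLINE statements.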

How to optimize MySQL for large blob updates

Posted: 17 Jul 2013 11:45 AM PDT

I have a table that holds large BLOBs (up to 10 MB) and is updated frequently. The problem is that UPDATE statements can take up to 1 second to execute, and due to the app design this blocks the UI. I need to speed up those UPDATE/INSERT statements. Is there a way to do it by tuning the MySQL server, the storage engine, etc.?

The table in question is InnoDB, and I have tried turning compression on, but that didn't seem to make a lot of difference. The client is on the same machine as the server, so there is no network overhead. The server is MySQL 5.5.

How to insert in table from remote stored procedure without creating a distributed transaction?

Posted: 17 Jul 2013 12:39 PM PDT

How can I insert in a local table from a remote stored procedure without creating a distributed transaction, in SQL Server 2005? I want to run a stored procedure on a remote server and use the output as the source for an insert into a local table.
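One commonly suggested workaround (a sketch; the linked server, procedure, and table names are placeholders) is to wrap the remote procedure call in OPENQUERY, which executes it on the linked server and returns a rowset you can insert locally without the INSERT ... EXEC form that promotes to a distributed transaction:

```sql
-- RemoteSrv is a linked server; the procedure runs remotely and
-- only its result set crosses the wire.
INSERT INTO dbo.LocalTable (col1, col2)
SELECT col1, col2
FROM OPENQUERY(RemoteSrv, 'SET FMTONLY OFF; EXEC RemoteDb.dbo.usp_GetRows;');
```

The SET FMTONLY OFF prefix helps SQL Server 2005 discover the procedure's result-set shape; note the procedure must return exactly one result set for this to work.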

One Form with Fields from two Tables [migrated]

Posted: 17 Jul 2013 10:34 AM PDT

I am very new to Access 2007 and am trying to create one form which has multiple tabs on it. Four of the tabs represent information from one table and two of the tabs represent information from a second table.

There is a relationship between the two tables: Enrollment Table!ID has a lookup relationship to Medical Info!ID Lookup.

If a new record is created using the form, which automatically assigns an ID to the client record in the Enrollment table, I would like a corresponding record to be automatically created in the Medical Info table.

The problem I am having is that if I create a new client with the form and move to the Medical Info tab on the form, I can successfully display the ID Lookup results; however, no record is created in the Medical Info table unless I enter data into one of its fields first.

If this makes sense: is there a way to write the record to the Medical Info table at the point the Client ID is created, which would establish the relationship between "Client ID" and "ID Lookup" without having to enter data into another field first?

MySQL SELECT query on multiple tables takes too long - 12 minutes

Posted: 17 Jul 2013 12:29 PM PDT

The MySQL query below takes more than 12 minutes even though indexing is in place. Can you throw some light on how to bring down the response time? Thanks.

select count(*)
from IMSI t1 USE INDEX(imsi_index),
     StdSubServiceProfile t4 USE INDEX(msisdn_mode),
     Auc t3
where t1.mSubIdentificationNumberId = t3.mSubIdentificationNumberId
  and t1.mobileNetworkCodeId = t3.mobileNetworkCodeId
  and t1.mobileCountryCodeId = t3.mobileCountryCodeId
  and t4.associatedMainMsisdn = t1.mainMSISDN
  and t3.authenticationSubscriberType = 1
  and (t4.networkAccessMode = 1 or t4.networkAccessMode = 2);

EXPLAIN for the above query is :

id  select_type  table  type    possible_keys  key         key_len  ref                                                                                      rows     Extra
1   SIMPLE       t4     ALL     msisdn_mode    NULL        NULL     NULL                                                                                     2374015  Using where with pushed condition
1   SIMPLE       t1     ref     imsi_index     imsi_index  13       hlr.t4.associatedMainMsisdn                                                              11       Using where
1   SIMPLE       t3     eq_ref  PRIMARY        PRIMARY     12       hlr.t1.mSubIdentificationNumberId,hlr.t1.mobileNetworkCodeId,hlr.t1.mobileCountryCodeId  1        Using where

INDEX for the table t1 (empty Sub_part/Packed/Comment columns omitted):

Table  Non_unique  Key_name    Seq_in_index  Column_name                 Collation  Cardinality  Null  Index_type
IMSI   0           PRIMARY     1             mSubIdentificationNumberId  NULL       NULL               HASH
IMSI   0           PRIMARY     2             mobileNetworkCodeId         NULL       NULL               HASH
IMSI   0           PRIMARY     3             mobileCountryCodeId         NULL       2374031            HASH
IMSI   1           imsi_index  1             mainMSISDN                  A          NULL         YES   BTREE

Index for the table t4:

Table                 Non_unique  Key_name     Seq_in_index  Column_name               Collation  Cardinality  Null  Index_type
StdSubServiceProfile  0           PRIMARY      1             gsmServiceProfileId       NULL       NULL               HASH
StdSubServiceProfile  0           PRIMARY      2             suMSubscriptionProfileId  NULL       NULL               HASH
StdSubServiceProfile  0           PRIMARY      3             suMSubscriberProfileId    NULL       2374015            HASH
StdSubServiceProfile  1           msisdn_mode  1             associatedMainMsisdn      A          NULL         YES   BTREE
StdSubServiceProfile  1           msisdn_mode  2             networkAccessMode         A          NULL         YES   BTREE

Restore SQL Server 2000 Database in use

Posted: 17 Jul 2013 11:36 AM PDT

I need to restore a SQL Server 2000 database from a backup of about 2 days ago. I have just backed it up and have tried to restore it, but it says it is in use.

What is the best way to find out who is still using it and then how to disconnect them?

I imagine taking the DB offline is a simple way of doing it, but probably not the safest?
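A sketch of the usual approach on SQL Server 2000 (the database name is a placeholder): list the connections with sp_who2, then force everyone off by switching to single-user mode with immediate rollback before restoring:

```sql
-- See who is connected (the DBName column shows the database).
EXEC sp_who2;

-- Kick all other connections and roll back their open transactions.
ALTER DATABASE MyDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

RESTORE DATABASE MyDb FROM DISK = 'D:\backup\MyDb.bak' WITH REPLACE;

ALTER DATABASE MyDb SET MULTI_USER;
```

ROLLBACK IMMEDIATE is safer than taking the database offline by hand, since it rolls back in-flight transactions cleanly rather than severing them.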

Percona Playback - Strange Locking issues

Posted: 17 Jul 2013 12:34 PM PDT

Problem while replaying logs

I am in the process of benchmarking a new DB node (specs at the end) and have run across some strange behavior:

As described here, I:

  • Created a dump (innobackupex FTW)
  • Logged all my queries for an hour
  • Set up my new DB (same my.cnf as the live DB, only with a higher innodb_buffer_pool_size)
  • Started the replay of my slow query log

As per the documentation:

percona-playback --mysql-host=127.0.0.1 \
  --mysql-user=root --mysql-schema=my_db \
  --query-log-file=slow.log

This works fine for about 15 minutes, then I start getting strange locking problems:

Error during query: Lock wait timeout exceeded; try restarting transaction, number of tries 0  

I started debugging my current load on the db and found that only one single query was running:

(taken from innodb status)

---TRANSACTION 1C5264768, ACTIVE 44 sec inserting
mysql tables in use 1, locked 1
LOCK WAIT 2 lock struct(s), heap size 376, 1 row lock(s)
MySQL thread id 4289, OS thread handle 0x7f7fb0779700, query id 77515 localhost 127.0.0.1 root update
insert into sessions (a, b, c, d, e, e, f, g, h, i, j, k, l, m, n, o, p, q) values (0, 682,
------- TRX HAS BEEN WAITING 44 SEC FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 4549 page no 7875876 n bits 104 index `PRIMARY` of table `production`.`sessions` trx id 1C5264768 lock_mode X insert intention waiting
------------------
TABLE LOCK table `production`.`sessions` trx id 1C5264768 lock mode IX
RECORD LOCKS space id 4549 page no 7875876 n bits 104 index `PRIMARY` of table `production`.`sessions` trx id 1C5264768 lock_mode X insert intention waiting
---TRANSACTION 1C526475D, ACTIVE (PREPARED) 452 sec
2 lock struct(s), heap size 376, 1 row lock(s)
MySQL thread id 1722, OS thread handle 0x7f7fb083d700, query id 77311 localhost 127.0.0.1 root
Trx read view will not see trx with id >= 1C526475E, sees < 1C525BA04
TABLE LOCK table `production`.`sessions` trx id 1C526475D lock mode IX
RECORD LOCKS space id 4549 page no 7875876 n bits 104 index `PRIMARY` of table `production`.`sessions` trx id 1C526475D lock_mode X
----------------------------
END OF INNODB MONITOR OUTPUT

And only one table open:

mysql> SHOW OPEN TABLES FROM production WHERE In_use != 0;
+------------+----------+--------+-------------+
| Database   | Table    | In_use | Name_locked |
+------------+----------+--------+-------------+
| production | sessions |      1 |           0 |
+------------+----------+--------+-------------+
1 row in set (0.00 sec)

This situation stays like this for about 3-4 minutes and then suddenly playback continues.

These issues do not happen on the live DB: we have some locking issues there, but we have never exceeded the innodb_lock_wait_timeout value.

I am most likely missing something obvious, but for the life of me I can't figure it out: why would the replay hang like that, or better yet, why would MySQL remain in this lock state?

The relevant entries in the slow log are from our jee server:

XA START 0xbe681101606ce8d1676630322c7365727665722c5035313337,0x676630322c7365727665722c50353133372c00,0x4a5453;
insert into sessions (a, b, c, d, e, e, f, g, h, i, j, k, l, m, n, o, p, q) values (0, 682, ...);
XA END 0xbe681101606ce8d1676630322c7365727665722c5035313337,0x676630322c7365727665722c50353133372c00,0x4a5453;

Does Hibernate's transaction handling have anything to do with the way the lock is generated and not released?

Server Specs

  • Ubuntu 12.04.2 LTS
  • percona-server-server-5.5 version 5.5.32-rel31.0-549.precise

Relevant config:

max_connections         = 1500
sort_buffer_size        = 1M
thread_cache_size       = 1000
max_heap_table_size     = 512M
tmp_table_size          = 512M
join_buffer_size        = 67108864
expand_fast_index_creation = ON
open_files_limit        = 65535
table_definition_cache  = 4096
table_open_cache        = 262144
max_allowed_packet      = 16M
thread_stack            = 192K
query_cache_limit       = 1M
query_cache_size        = 512M
thread_concurrency      = 8
query_cache_type        = 1
long_query_time         = 2
log_slave_updates       = 1
expire_logs_days        = 10
max_binlog_size         = 100M

Innodb config:

default_storage_engine   = InnoDB
innodb_file_per_table    = 1
innodb_old_blocks_time   = 1000
innodb_buffer_pool_size  = 163456M
innodb_log_file_size     = 256M
innodb_flush_method      = O_DIRECT
innodb_read_io_threads   = 4
innodb_write_io_threads  = 4
innodb_doublewrite       = FALSE
innodb_flush_log_at_trx_commit = 2

Thanks for any help or experience in this area!

Best way to put checksum on all pages

Posted: 17 Jul 2013 02:18 PM PDT

I have a SQL Server database with the PAGE_VERIFY option set to NONE. When I set this to CHECKSUM, only pages changed from that point onwards get their checksum calculated and stored on the page.

I'm looking for the best way to put checksums on at least all table and index data.

My initial idea would be to drop and recreate all clustered indexes. For tables that have no clustered index I would create one and then drop it again.

That should result in at least one write operation on each data and index page, but it's a bit heavy-handed. Does anyone have a better idea?
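The rebuild idea can be scripted; a sketch (the database name is a placeholder, sp_MSforeachtable is an undocumented helper, and rebuilding every index is I/O-heavy, so schedule it in a maintenance window):

```sql
-- Turn checksums on for newly written pages.
ALTER DATABASE MyDb SET PAGE_VERIFY CHECKSUM;

-- Rewriting every page of an index stamps the checksum;
-- ALTER INDEX ALL ... REBUILD does this per table.
-- Note: heap pages are NOT rewritten by ALTER INDEX, which is why
-- the create-then-drop clustered index trick is needed for heaps.
EXEC sp_MSforeachtable 'ALTER INDEX ALL ON ? REBUILD;';
```

Tables that are heaps still need the temporary clustered index (or a rebuild via ALTER TABLE) to get their data pages rewritten.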

Start Oracle 12c enterprise manager in Windows 7

Posted: 17 Jul 2013 01:19 PM PDT

I have installed Oracle 12c on Windows 7. The installation was successful, and the RDBMS and the database that I created are working properly.

Then I tried to access Enterprise Manager via the web at https://localhost:5500/em, as the installer told me to do, but it doesn't connect.

If I type netstat I get:

TCP [::1]:5500   My-PC:49439  TIME_WAIT
TCP [::1]:5500   My-PC:49440  TIME_WAIT
TCP [::1]:5500   My-PC:49441  TIME_WAIT
TCP [::1]:49442  My-PC:5500   TIME_WAIT

I am using a proxy but I disabled it for local locations.

I tried this http://docs.oracle.com/cd/E25054_01/doc.1111/e24473/emctl.htm with no success, because I couldn't find the emctl file or the services it talks about. I have 5 Oracle services running instead.

Thanks in advance.

Autogrowth of database with its file name

Posted: 17 Jul 2013 09:54 AM PDT

I use the query below to list the auto-growth events that have occurred for the databases on a SQL Server instance.

DECLARE @trcfilename VARCHAR(1000);
SELECT @trcfilename = path FROM sys.traces WHERE is_default = 1;

SELECT COUNT(*) AS no_of_autogrowths,
       SUM(duration/(1000*60)) AS time_in_min,
       Filename
FROM ::fn_trace_gettable(@trcfilename, default)
WHERE (EventClass = 92 OR EventClass = 93)
GROUP BY Filename;

It outputs the number of auto-growths, the time taken by the auto-growths, and the logical name of the file. But I want the physical name of the file (the .mdf and .ldf file names) instead of the logical name. I don't know whether sys.traces can give me the physical name; please help me with an alternate way to do it.
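A sketch of one way to get there: the default trace records the database id and the logical file name, so joining the trace output to sys.master_files on both exposes physical_name (assumes SQL Server 2005 or later, where sys.master_files exists):

```sql
DECLARE @trcfilename VARCHAR(1000);
SELECT @trcfilename = path FROM sys.traces WHERE is_default = 1;

SELECT COUNT(*)                  AS no_of_autogrowths,
       SUM(T.duration/(1000*60)) AS time_in_min,
       T.Filename                AS logical_name,
       MF.physical_name          -- full .mdf/.ldf path
FROM ::fn_trace_gettable(@trcfilename, default) AS T
JOIN sys.master_files AS MF
  ON MF.database_id = T.DatabaseID
 AND MF.name = T.Filename
WHERE T.EventClass IN (92, 93)
GROUP BY T.Filename, MF.physical_name;
```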

SQLServer audit query that fires a trigger

Posted: 17 Jul 2013 11:30 AM PDT

Using SQL Server, is there a way to audit, from inside a trigger, the SQL statement that fired it?

I need to know the SQL query that fires a trigger on a database, without using Profiler.
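One commonly suggested sketch (table names are illustrative): inside the trigger, capture the current batch for this session with DBCC INPUTBUFFER and log it. Note that it returns the whole batch that fired the trigger, not just the single triggering statement:

```sql
CREATE TRIGGER trg_audit ON dbo.MyTable
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- DBCC INPUTBUFFER returns (EventType, Parameters, EventInfo);
    -- EventInfo holds the text of the current batch for this SPID.
    DECLARE @buffer TABLE (EventType nvarchar(30), Parameters int, EventInfo nvarchar(4000));
    INSERT INTO @buffer
    EXEC ('DBCC INPUTBUFFER(@@SPID) WITH NO_INFOMSGS');

    INSERT INTO dbo.AuditLog (captured_at, sql_text)
    SELECT GETDATE(), EventInfo FROM @buffer;
END;
```

This adds overhead to every write on the table, so it's best kept to short investigations rather than permanent auditing.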

Thanks

Bulk database update/insert from CSV file

Posted: 17 Jul 2013 09:53 AM PDT

I am implementing application specific data import feature from one database to another.

I have a CSV file containing say 10000 rows. These rows need to be inserted/updated into database.

Some of those rows may already be present in the database, which means those need to be updated. Rows not present in the database need to be inserted.

One possible solution is to read the file line by line, check for the entry in the database, and build insert/update queries accordingly. But this process may take a long time to create and execute the update/insert queries. Sometimes my CSV file may have millions of records.

Is there any other faster way to achieve this feature?
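A standard pattern in MySQL (a sketch; table and column names are illustrative, and it assumes the target table has a unique key so existing rows can be detected): bulk-load the CSV into a staging table, then do a single set-based upsert:

```sql
-- 1. Bulk-load the file into a staging table in one shot.
LOAD DATA LOCAL INFILE '/tmp/import.csv'
INTO TABLE staging_items
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES
(item_code, name, price);

-- 2. Insert new rows and update existing ones in one statement,
--    matching on the table's unique key (here item_code).
INSERT INTO items (item_code, name, price)
SELECT item_code, name, price FROM staging_items
ON DUPLICATE KEY UPDATE
    name  = VALUES(name),
    price = VALUES(price);
```

Both steps are set-based, so millions of rows avoid the per-row round trips of the line-by-line approach.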

Oracle db_name, db_unique_name, and ORACLE_HOME

Posted: 17 Jul 2013 05:02 PM PDT

I'm studying Oracle and I would like to understand the uniqueness constraints when assigning these parameters.

Suppose a machine with a single OS (host) and two different physical databases. Are they forced to use different db_name values in the following cases?

  • They share ORACLE_HOME
  • They don't share ORACLE_HOME

How does db_unique_name come into play here? What is its point?

So far, from reading forums and the official docs, I have arrived at the following assumptions:

  • Two databases with the same DB_NAME (without DB_UNIQUE_NAME set) running from the same ORACLE_HOME is not possible, on Linux at least. It is possible with different ORACLE_HOMEs but highly undesirable, and in this case Windows does not allow it. Can somebody confirm this?

  • It seems standby/production systems usually use databases with the same name, but I am not sure whether you are free to do this only in a shared ORACLE_HOME configuration, only in separate ones, or in both cases.

Altering the location of Oracle-Suggested Backup

Posted: 17 Jul 2013 09:58 AM PDT

On one database, the Oracle-Suggested Backup scheduled from Enterprise Manager always ends up in the recovery area, despite RMAN configuration showing that device type disk format points elsewhere.

As far as I can see, the scheduled backup job is simply:

run {
  allocate channel oem_disk_backup device type disk;
  recover copy of database with tag 'ORA_OEM_LEVEL_0';
  backup incremental level 1 cumulative
    copies=1 for recover of copy with tag 'ORA_OEM_LEVEL_0' database;
}

Asking RMAN to show all reveals that device type disk is indeed configured to store elsewhere:

CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/s01/backup/PROD11/PROD11_%U';

If I run the script manually, the backup set is placed at the above location; when the script is run from the job scheduler, the backup set goes to the RECO group on ASM.

Why might Oracle still choose to dump the backupset to the db_recovery_file_dest?

Ultimately, how can I change the backup destination?
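One workaround to pin the destination regardless of what the scheduler's session picks up (a sketch; the path is the one from the CONFIGURE output above) is to give the job's channel an explicit FORMAT, which takes precedence over both the configured default and the fast recovery area for the backup sets:

```
run {
  allocate channel oem_disk_backup device type disk
    format '/s01/backup/PROD11/PROD11_%U';
  recover copy of database with tag 'ORA_OEM_LEVEL_0';
  backup incremental level 1 cumulative
    copies=1 for recover of copy with tag 'ORA_OEM_LEVEL_0' database;
}
```

Note the datafile image copies maintained by "recover copy of database" may still follow their own destination rules; this only pins the incremental backup sets.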

Database restructure - beneficial?

Posted: 17 Jul 2013 08:09 PM PDT

I have a table for email messages. Then, I have a table that has the message parts.

The Parts table contains field_id, message_id, and data columns.

At the time I used "Parts" to name the table and yet used field_id for the column (just an FYI).

So, for example, a part with field_id 2 would be the subject of the message. I have parts for subject, date, htmlbody, and textbody. Due to this structure, I run approximately 2 more queries per email (one for the parts, and another for the email addresses associated with the email) than if I were to push all the data into the messages table. I found this structure to be best, but I'm beginning to think it might be wrong and not best for performance.

My question is: would it be in my best interest to restructure the database? I'd rather not.

I was thinking about moving htmlbody, textbody, subject, and date to the messages table. Another solution would be to grab all the emails and their data from the Parts table in one query: grab all the ids in one query, and then do an IN(ids) for the second query.

CREATE TABLE IF NOT EXISTS `messages` (
  `id` int(10) NOT NULL AUTO_INCREMENT,
  `user_id` int(10) NOT NULL,
  `account_folder_id` int(10) NOT NULL,
  `hash` varchar(255) NOT NULL,
  `uid` int(10) NOT NULL,
  `seen` tinyint(1) NOT NULL,
  `flagged` tinyint(1) NOT NULL,
  `date_created` int(11) NOT NULL DEFAULT '0',
  `last_modified` int(11) NOT NULL DEFAULT '0',
  PRIMARY KEY (`id`),
  UNIQUE KEY `hash` (`hash`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

CREATE TABLE IF NOT EXISTS `message_parts_data` (
  `id` int(10) NOT NULL AUTO_INCREMENT,
  `message_id` int(10) NOT NULL,
  `field_id` int(10) NOT NULL,
  `data` text NOT NULL,
  `date_created` int(11) NOT NULL DEFAULT '0',
  `last_modified` int(11) NOT NULL DEFAULT '0',
  PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

CREATE TABLE IF NOT EXISTS `fields` (
  `id` int(10) NOT NULL AUTO_INCREMENT,
  `name` text,
  PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

INSERT INTO `fields` (`id`, `name`) VALUES
(1, 'To'),
(2, 'Subject'),
(3, 'Date'),
(4, 'From'),
(5, 'CC'),
(7, 'ReplyTo'),
(8, 'textHtml'),
(9, 'textPlain'),
(11, 'Forward');

Thanks

Error while restoring a Database from an SQL dump

Posted: 17 Jul 2013 08:21 PM PDT

I am extremely new to MySQL and am running it on Windows. I am trying to restore a database from a dump file, but I get the following error:

$ mysql -u root -p -h localhost -D database -o < dump.sql
ERROR: ASCII '\0' appeared in the statement, but this is not allowed unless option --binary-mode is enabled and mysql is run in non-interactive mode. Set --binary-mode to 1 if ASCII '\0' is expected. Query: 'SQLite format 3'.

I tried mysql -u root -p -h localhost -D database --binary-mode -o < dump.sql, but this gave me "ERROR at line 1: Unknown command '\☻'". It is a 500 MB dump file, and when I view its contents with gVim, all I can see is expressions and data that are not comprehensible. Also, when I try to copy contents from the file to post here, all I can copy is "SQLite format 3", which seems strange.
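That "SQLite format 3" header suggests the file is not a SQL dump at all but a raw SQLite database file. A sketch of recovering a real SQL dump from it (assumes the sqlite3 command-line tool is installed, and the dialect may still need manual fixes before MySQL accepts it):

```shell
# Produce a textual SQL dump from the SQLite database file,
# then load that into MySQL.
sqlite3 dump.sql .dump > real_dump.sql
mysql -u root -p -h localhost -D database < real_dump.sql
```

SQLite's dump syntax is not fully MySQL-compatible (types, quoting, PRAGMA lines), so expect to clean up real_dump.sql for larger schemas.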

MYSQL Timezone support

Posted: 17 Jul 2013 02:21 PM PDT

We have a shared hosting plan, and the host says that, to get MySQL time zone support on a shared plan, I can create the time-zone-related tables in our own database and populate them with the required data (data from our local MySQL time zone tables). How can I view the code/syntax behind MySQL's CONVERT_TZ function?
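For reference, a minimal example of how CONVERT_TZ normally behaves (named zones only work once the mysql.time_zone% tables are populated; numeric offsets work regardless):

```sql
-- Numeric offsets: no time zone tables needed.
SELECT CONVERT_TZ('2013-07-17 12:00:00', '+00:00', '+05:30');
-- returns 2013-07-17 17:30:00

-- Named zones: requires the mysql.time_zone tables to be loaded,
-- e.g. via: mysql_tzinfo_to_sql /usr/share/zoneinfo | mysql mysql
SELECT CONVERT_TZ('2013-07-17 12:00:00', 'UTC', 'Asia/Kolkata');
```

Note CONVERT_TZ is built into the server, so there is no SQL source to view; replicating it by hand means looking up the offsets in the time zone tables yourself.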

Thanks Arun

replication breaks after upgrading master

Posted: 17 Jul 2013 11:21 AM PDT

I have a replication setup with a 5.1.30 master and a 5.5.16 slave, and replication is working well.

Now I have upgraded the MySQL master to 5.1.47.

As far as I know, we have to turn off binary logging with sql_log_bin=0 before running the mysql_upgrade program, in order to upgrade the replication setup as well.

But the problem here is that the binary log was not turned off while the mysql_upgrade program was running.

The reason I found is that in 5.1, sql_log_bin is a session variable, and mysql_upgrade runs in another session.

So how do I upgrade the replication setup along with the server, without any breakage of replication?

Any suggestions are really useful.

Custom sp_who/sp_whoUsers

Posted: 17 Jul 2013 06:57 PM PDT

I need to allow a client in a dev DW SQL 2K8R2 environment to view and kill processes, but I do not want to grant VIEW SERVER STATE to this person (he's a former SQL DBA and is considered a potential internal threat).

When I run the following, it returns one row as if the user ran the sp themselves with their current permissions.

USE [master]
GO

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

CREATE PROCEDURE [dbo].[usp_who] WITH EXECUTE AS OWNER
AS
BEGIN
    SET NOCOUNT ON;
    EXEC master.dbo.sp_who;
END

Changing the "with execute as" to "self" (I'm a sysadmin) returns the same results. I've also tried the query below instead of calling sp_who, and it too only returns one row.

select * from sysprocesses  

It seems that the context isn't switching, or persisting, throughout the execution of the procedure. And this is to say nothing of how I'm going to allow this person to "kill" processes.

Does anyone have a solution or some suggestions for this seemingly unique problem?
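One possible explanation: EXECUTE AS OWNER only switches the database-level context, so server-level permissions such as VIEW SERVER STATE are not carried along. The usual workaround is module signing, sketched below (certificate name, login name, and password are placeholders, and this assumes the procedure lives in master as shown above):

```sql
USE master;
GO
-- Certificate used only to sign the procedure:
CREATE CERTIFICATE WhoCert
    ENCRYPTION BY PASSWORD = 'Pl@ceholder-Passw0rd'
    WITH SUBJECT = 'Sign usp_who';
GO
ADD SIGNATURE TO dbo.usp_who
    BY CERTIFICATE WhoCert WITH PASSWORD = 'Pl@ceholder-Passw0rd';
GO
-- A login created from the certificate carries the server permission;
-- callers of the signed procedure inherit it only inside the procedure:
CREATE LOGIN WhoCertLogin FROM CERTIFICATE WhoCert;
GRANT VIEW SERVER STATE TO WhoCertLogin;
```

The certificate login is never used to connect; it exists only so the signed procedure can see server state.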

Need to suppress rowcount headers when using \G

Posted: 17 Jul 2013 12:21 PM PDT

Is there a command to suppress the rowcount headers and asterisks when using '\G' to execute a SQL statement? I am executing mysql with the -s and --skip-column-name options, but these don't suppress the rowcounts.

How to search whole MySQL database for a particular string

Posted: 17 Jul 2013 03:21 PM PDT

Is it possible to search all of a database's tables (every row and column) to find a particular string?

I have a database named A with about 35 tables. I need to search for the string "hello", and I don't know in which table it is saved. Is that possible?

Using MySQL.

I am a Linux admin and am not familiar with databases; it would be really helpful if you could explain the query as well.
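One hedged approach: use information_schema to generate one search statement per text column (the database name 'A' and the string 'hello' come from the question; you then run the generated statements):

```sql
-- Generates a SELECT per character column in database A; each
-- generated statement counts rows containing 'hello' in that column.
SELECT CONCAT(
    'SELECT ''', TABLE_NAME, '.', COLUMN_NAME, ''' AS hit, COUNT(*) ',
    'FROM `', TABLE_NAME, '` ',
    'WHERE `', COLUMN_NAME, '` LIKE ''%hello%'';'
) AS search_stmt
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'A'
  AND DATA_TYPE IN ('char', 'varchar', 'tinytext', 'text',
                    'mediumtext', 'longtext');
```

Copy the generated statements back into the mysql client to run them; any that return a count greater than zero point at the table and column holding the string.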

multivalued weak key in ER database modeling

Posted: 17 Jul 2013 01:21 PM PDT

I was wondering about this, since I didn't find any clarification for it. I want to store movies that exist in different formats (DVD, Blu-ray, etc.), where the price of each format differs, as does the stock quantity of each format. So I came up with this:

example

Is this correct from a design perspective? Does this imply redundancy? I don't understand how this would be stored in a table. Would it be better to do it like this:

enter image description here

Thanks in advance.

EDIT: I'll add some more descriptive information about what I want to store at this point in the design. I want to store information about sales. For each movie the company has, I need to store format, price, and stock quantity. I will also need to store customer information: a unique ID, name, surname, address, the movies he/she has already bought, and a credit card number. Finally, I will have a basket that temporarily keeps items (let's suppose items other than movies exist too) that the customer wants to buy.
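Since the diagrams may not render here, a hedged relational sketch of one common way to resolve a multivalued, format-dependent attribute: the per-format price and quantity move into their own table keyed by (movie, format). All names and types below are placeholders, not taken from the diagrams:

```sql
CREATE TABLE Movie (
    movie_id INT PRIMARY KEY,
    title    VARCHAR(200) NOT NULL
);

-- One row per (movie, format) pair; price and quantity live here,
-- so nothing is repeated on the Movie row.
CREATE TABLE MovieFormat (
    movie_id INT NOT NULL,
    format   VARCHAR(20) NOT NULL,   -- 'DVD', 'Blu-ray', ...
    price    DECIMAL(8,2) NOT NULL,
    quantity INT NOT NULL,
    PRIMARY KEY (movie_id, format),
    FOREIGN KEY (movie_id) REFERENCES Movie(movie_id)
);
```

This avoids redundancy: the title is stored once, and each format's price/stock is a separate row rather than a repeated movie record.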

Microsoft Office Access database engine could not find the object 'tableName'

Posted: 17 Jul 2013 05:21 PM PDT

First, a little background: I am using MS Access to link to tables in an Advantage database. I created a System DSN. In the past, in Access, I've created a new database and, using the external data wizard, successfully linked to tables. Those databases and the linked tables are working fine.

Now I am trying to do the same thing: create a new Access DB and link to this same DSN. I get as far as seeing the tables, but after making my selection I get the error: "The Microsoft Office Access database engine could not find the object 'tableSelected'. Make sure the object exists and that you spell its name and the path name correctly."

I've tried creating another data source (system and user) with no luck. Environment is Win XP, Access 2007, Advantage DB 8.1.

Sql Anywhere 11: Restoring incremental backup failure

Posted: 17 Jul 2013 09:54 AM PDT

We want to create remote incremental backups after a full backup. This will allow us to restore in the event of a failure and bring up another machine with as close to real time backups as possible with SQL Anywhere network servers.

We are doing a full backup as follows:

dbbackup -y -c "eng=ServerName.DbName;uid=dba;pwd=sql;links=tcpip(host=ServerName)" c:\backuppath\full

This makes a backup of the database and log files and can be restored as expected. For incremental backups I've tried both live and incremental transaction logs with a renaming scheme if there are multiple incremental backups:

dbbackup -y -t -c "eng=ServerName.DbName;uid=dba;pwd=sql;links=tcpip(host=ServerName)" c:\backuppath\inc

dbbackup -y -l -c "eng=ServerName.DbName;uid=dba;pwd=sql;links=tcpip(host=ServerName)" c:\backuppath\live

However, on applying the transaction logs on restore I always receive an error when applying the transaction logs to the database:

10092: Unable to find table definition for table referenced in transaction log

The transaction log restore command is:

dbeng11 "c:\dbpath\dbname.db" -a "c:\backuppath\dbname.log"  

The error doesn't specify what table it can't find but this is a controlled test and no tables are being created or dropped. I insert a few rows then kick off an incremental backup before attempting to restore.

Does anyone know the correct way to do incremental backup and restore on Sql Anywhere 11?

UPDATE: Thinking it may be related to the complexity of the target database, I made a new blank database and network service, then added one table with two columns and inserted a few rows. I made a full backup, then inserted and deleted a few more rows and committed the transactions, then made an incremental backup. This also failed with the same error when attempting to apply the incremental transaction-log backups after restoring the full backup...

Edit:

You can follow this link to see the same question with slightly more feedback on SA: http://sqlanywhere-forum.sybase.com/questions/4760/restoring-incrementallive-backup-failure

[SQL Server] SQL 2008 R2 - Maximum memory

[SQL Server] SQL 2008 R2 - Maximum memory


SQL 2008 R2 - Maximum memory

Posted: 16 Jul 2013 06:05 PM PDT

Hi all, I am using a SQL 2008 R2 DB. As per the MSDN document, the max memory that can be allocated to an instance is 1 GB. In the SQL Server properties it is showing 2147483647 MB, which is the default setting. The machine on which this server is installed has 4 GB of RAM. Will it make any difference if I increase the SQL Server max memory to more than 1 GB, e.g. 2 GB? We have noticed that sometimes memory utilization on the server is more than 90%. Please advise. Regards
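For reference, a hedged sketch of how the max server memory setting is changed from T-SQL (2048 MB is just the example value from the question; check your edition's actual limits before relying on the 1 GB figure, which applies to Express edition):

```sql
-- 'max server memory (MB)' is an advanced option, so expose it first:
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'max server memory (MB)', 2048;
RECONFIGURE;
```

The same change can be made on the Memory page of the server properties dialog in SSMS.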

Not your usual documentation question

Posted: 17 Jul 2013 03:31 AM PDT

I am a server/SQL admin in a large organization. We have thousands of SQL databases on many clusters and individual servers supporting many departments. Each major department has its own DBAs who (we hope) document their database structures. The issue we have is not the actual database documentation, but rather the high-level "what is out there", "who owns it", "what application connects to it", etc. We do have some documentation that has been compiled over the years, but nothing comprehensive and complete. Us admins have been asking management to get involved in this for years now (apps, servers, SQL, etc.) and nothing went forward, so I have taken it on as my personal task in what little free time I can scrape up.

Before I spend too much time building requirements, I wanted to see if there are any tools, templates, scripts, etc. that have been created for this purpose. Commercial or non-commercial is OK. Something that goes beyond just SQL to document servers, applications, etc. would be a huge benefit. Tools that would scan the servers for info would be great, because none of us really has time to manually discover and document the basic data. I have been hunting for something like this without much luck.

Can anyone make some suggestions? Thanks
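Not a full inventory tool, but as a starting point for the "what is out there" question, a hedged sketch that lists each database on one instance with its owner and creation date (run per instance and collect the output centrally):

```sql
SELECT name,
       SUSER_SNAME(owner_sid) AS owner,
       create_date,
       state_desc
FROM sys.databases
ORDER BY name;
```

Scripting this across servers (e.g. via a registered-servers group in SSMS) gives a rough first-pass inventory while you evaluate dedicated discovery tools.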

Stupid Question: Return Object Explorer to its original position

Posted: 26 Nov 2010 07:57 AM PST

Stupid question, but I accidentally moved Object Explorer from its original default position in SSMS. By default, Object Explorer is on the left and the query window on the right. Now Object Explorer seems to have been "undocked" and the query window occupies the screen from left to right, such that if I open a new query it opens "behind" Object Explorer. How do I restore the original "default" view in SSMS?

[Articles] Data Philanthropy

[Articles] Data Philanthropy


Data Philanthropy

Posted: 16 Jul 2013 11:00 PM PDT

The idea of giving back to the world some of your success is something many successful businesses have always done. Steve Jones thinks they should consider giving not just physical resources, but data as well.


[MS SQL Server] Right click on a Database, select properties Error, cannot show requested dialog.

[MS SQL Server] Right click on a Database, select properties Error, cannot show requested dialog.


Right click on a Database, select properties Error, cannot show requested dialog.

Posted: 02 Jul 2013 11:54 PM PDT

I right-click on a database and select Properties, and get an error: cannot show requested dialog. The error reads:

cannot show requested dialog (SqlMgmt)
Property Size is not available for Database '[DataWarehouse]'. This property may not exist for this object, or may not be retrievable due to insufficient access rights. (Microsoft.SqlServer.Smo)

I'm running Database Consistency Checker. Has anyone experienced this problem?

SQL 2008 R2 Enterprise edition compressed backup to be restored on Standard edition

Posted: 16 Jul 2013 10:29 PM PDT

Hi, I have a SQL 2008 R2 Enterprise edition compressed backup which is to be restored on SQL 2008 R2 Standard edition. I see from MSDN that it can be done, but how? Will a simple RESTORE command work, or is there a difference in the RESTORE command? I did a RESTORE VERIFYONLY on Standard edition and it failed, although it passed on Enterprise edition. Thanks
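For what it's worth, restoring a compressed backup uses the ordinary RESTORE syntax; compression is transparent on restore. A hedged sketch (the disk path and logical file names below are placeholders; use FILELISTONLY to discover the real logical names first):

```sql
-- Inspect the backup's logical file names:
RESTORE FILELISTONLY FROM DISK = N'C:\Backups\MyDb.bak';

-- Then restore, moving files to paths valid on the Standard box:
RESTORE DATABASE MyDb
FROM DISK = N'C:\Backups\MyDb.bak'
WITH MOVE 'MyDb'     TO 'C:\Data\MyDb.mdf',
     MOVE 'MyDb_log' TO 'C:\Data\MyDb_log.ldf',
     RECOVERY;
```

If VERIFYONLY fails on Standard but passes on Enterprise, the usual suspects are media/path problems or a backup taken with Enterprise-only features other than compression, rather than compression itself.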

Understanding minimal logging using INSERT... SELECT

Posted: 16 Jul 2013 09:47 PM PDT

Hi all,

I'm trying to understand more about minimal logging for the INSERT... SELECT operation in SQL Server.

I have a database in the simple recovery model, and a clustered table that's about ~20GB in size, and I want to create an exact duplicate of that table. The table does have an IDENTITY(1,1) column, but that is [i]not[/i] the cluster of the table; I mention this because I don't know whether it will make a difference. As such, the insert statement has to be preceded with SET IDENTITY_INSERT ON for the table.

My transaction log for that database is presized to 20GB, with 16 VLFs when I run DBCC LOGINFO. Autogrowth is set to grow in 5GB chunks with no upper restriction.

If I run the insert with TABLOCK, it still seems to be fully logged; my t-log gets full and starts autogrowing. I'm not using an ORDER BY clause on my SELECT, however; would this affect the operation? Both tables are clustered on the same key, however.

I've also tried using trace flag 610, which seemed to stop the log from growing at first, but then it started to grow in tiny increments and generated a fresh set of VLFs for each increment, resulting in > 5000 being created.

If DDL is needed, I'm happy to post it, but I'm not sure whether that will add anything in this instance, so I haven't prepared it yet.

Cheers,
Matthew
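For concreteness, the pattern under discussion looks roughly like this (a sketch; table and column names are assumed, not from the post):

```sql
-- IDENTITY_INSERT requires an explicit column list:
SET IDENTITY_INSERT dbo.TargetTable ON;

INSERT INTO dbo.TargetTable WITH (TABLOCK) (Id, Col1, Col2)
SELECT Id, Col1, Col2
FROM dbo.SourceTable;

SET IDENTITY_INSERT dbo.TargetTable OFF;
```

Whether this is minimally logged depends on recovery model, the TABLOCK hint, and whether the target has a clustered index with existing data, which is why trace flag 610 enters the picture for non-empty clustered targets.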

Full backups on any database, will it affect the LSN chain ?

Posted: 16 Jul 2013 10:20 PM PDT

Hi all, I have one question: if we take a full backup of a database, will it affect the LSN chain? Please reply.

Will Transactional Log Backups slow the growth of Database Log file

Posted: 16 Jul 2013 05:32 PM PDT

Could somebody please clear up the confusion I'm having? Is it correct to say that if I do transaction log backups, it will reduce or slow the rate of growth of the database log file (.ldf), so that shrinking the log file will not be needed? (Shrinking causes disk fragmentation if carried out over a long time.) The problem is I have huge database log files (.ldf) which add up after a few weeks/months, causing me to run out of disk space...
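For reference, a hedged sketch of a routine log backup (database name and path are placeholders). In the full recovery model, regular log backups let SQL Server reuse log space internally, which is what keeps the .ldf from growing:

```sql
BACKUP LOG MyDb
TO DISK = N'C:\Backups\MyDb_20130717.trn';
```

Scheduled, say, every 15-30 minutes via an Agent job, this truncates the inactive portion of the log after each backup, so the file size stabilizes instead of growing without bound.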

[SQL 2012] Query for Report

[SQL 2012] Query for Report


Query for Report

Posted: 17 Jul 2013 12:20 AM PDT

Hi all,

I am not sure if I am posting on the correct forum, but I need to find a solution to this. This must be a really stupid thing to ask, but somehow I am not able to figure it out.

I have a table which has EmployeeID, Status, dateTime. Every time a user enters the door, a record is written with EmployeeID, Status (IN/OUT), and datetime.

I need a stored procedure or a query which will help me generate a report of how much time the user spent inside the room (basically the difference between In and Out). There is one twist: the employee can go in and come out multiple times.

I need to generate a report of each of the employee's visits, as well as a total time-spent report for the day.

Note: If there is a requirement to change the tables, I can do it and modify the software to make the entries accordingly.

Thanks in advance.
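A hedged sketch of one approach, assuming the table is called dbo.DoorLog with the columns from the question, and that IN/OUT rows strictly alternate per employee: pair each IN with the next OUT for that employee.

```sql
-- Per-visit durations; SUM over this per employee/day gives the total.
SELECT e.EmployeeID,
       e.[dateTime]                                   AS TimeIn,
       x.TimeOut,
       DATEDIFF(MINUTE, e.[dateTime], x.TimeOut)      AS MinutesInside
FROM dbo.DoorLog AS e
CROSS APPLY (
    SELECT MIN(o.[dateTime]) AS TimeOut
    FROM dbo.DoorLog AS o
    WHERE o.EmployeeID = e.EmployeeID
      AND o.Status = 'OUT'
      AND o.[dateTime] > e.[dateTime]
) AS x
WHERE e.Status = 'IN';
```

If unmatched IN rows are possible (employee still inside), x.TimeOut comes back NULL and can be filtered or substituted as needed.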

database showing restoring does not add back in to always on high availability group

Posted: 17 Jul 2013 02:19 AM PDT

I have primary, secondary, and disaster recovery servers. I have a database added to AlwaysOn high availability; since we work on a failover cluster, it showed as AlwaysOn on all the servers. I don't know how, but I found the DB stuck showing "restoring...", and the AlwaysOn feature no longer enabled for that particular DB. When I try to add the DB back to one of the availability groups it says:

"Failed to join the 'DB' to the availability group 'groupname' on the availability replica 'server name'. The remote copy of DB is not recovered far enough to enable database mirroring or to join it to the availability group. You need to apply the missing log records to the remote database by restoring the current log backups from the primary database. (MS SQL Server error: 1408)"

I did restore WITH NORECOVERY from the primary, but now I cannot find a way to add it back to AlwaysOn. Can anyone show me a way, or am I missing anything here? Rookie here :hehe:
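For reference, the usual sequence on the secondary is to restore the latest log backup(s) from the primary WITH NORECOVERY until the database is "recovered far enough", then join it to the group. A hedged sketch using the names from the error message (backup path is a placeholder):

```sql
-- On the secondary replica: bring the copy up to date.
RESTORE LOG DB
FROM DISK = N'C:\Backups\DB_log.trn'
WITH NORECOVERY;

-- Then join the local copy to the availability group:
ALTER DATABASE DB SET HADR AVAILABILITY GROUP = groupname;
```

Error 1408 generally means at least one log backup taken after the full backup still needs to be applied before the join will succeed.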

Execution Plan Incomplete

Posted: 16 Jul 2013 09:06 AM PDT

I'm trying to show a developer how to rewrite a stored procedure. It's a pretty scary SP (hundreds of lines of T-SQL in a cursor loop doing a bunch of selects - all to create a dynamic SQL statement). I know we're all cringing in horror at this point, but what I wanted to do was show him just how bad the performance would be, and what it would do to SQL Server, by walking him through the execution plan.

When I include the actual execution plan, the SP that normally runs about 6 - 10 seconds now runs for over a minute (which I presume is the time spent generating the plan). The problem comes from the plan not finishing. I scrolled through it, and when I got to the end of the plan screen, it was cut off halfway through a query.

The question is: is there a way to display the entire plan? Any switches or tricks for getting the entire plan to display?

Thanks
Tom

VAS/MemToLeave and the Plan Cache

Posted: 16 Jul 2013 11:41 PM PDT

Hello, could someone let me know whether the plan cache is located within MemToLeave, or is it separate? I know that MemToLeave is reserved before the buffer pool when SQL starts up, but what about the plan cache? Thanks in advance, Andrew

Find Data Source

Posted: 16 Jul 2013 10:37 PM PDT

Hi team, I have a table called 'all_data' with columns id and data_src. Records are not inserted into this table directly; they are inserted by triggers and stored procedures. I'm new to this environment; how can I find where and when records are entering a particular table? Please suggest.
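One hedged starting point: search the module definitions for any trigger or procedure that references the table by name (a sketch; 'all_data' comes from the question):

```sql
-- Find triggers/procedures/functions whose source mentions all_data:
SELECT OBJECT_NAME(object_id)                       AS object_name,
       OBJECTPROPERTY(object_id, 'IsTrigger')       AS is_trigger,
       OBJECTPROPERTY(object_id, 'IsProcedure')     AS is_procedure
FROM sys.sql_modules
WHERE definition LIKE '%all_data%';
```

From there, reading each matching module shows the code paths that write to the table; the "when" can then be captured with a lightweight trace or an audit column if one doesn't exist.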

NETWORK SERVICE cant read system view

Posted: 16 Jul 2013 07:08 AM PDT

I'm trying to run the following query from a web service:

SELECT last_user_update FROM sys.dm_db_index_usage_stats

I get an error when I try this, saying that the current user does not have permission. Here's what I know:

- The web service runs as NT AUTHORITY\NETWORK SERVICE
- NT AUTHORITY\NETWORK SERVICE has the "public" role on the database
- The view sys.dm_db_index_usage_stats has two SELECT permission options, one with a blank grantor and one with "dbo" as grantor. "public" is given access to the one with dbo as the grantor. I tried to check the other select box, but SQL quietly unchecks it when I close the window, so I'm basically not able to change the permissions on this view.

Is there a way I can grant access to sys.dm_db_index_usage_stats to NT AUTHORITY\NETWORK SERVICE? Or is there another way I can discover the last access time on a table that does not require access to sys.dm_db_index_usage_stats?
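A likely explanation: sys.dm_db_index_usage_stats is a server-scoped DMV, so SELECT on it requires VIEW SERVER STATE at the login level rather than database-level permissions, which is why the checkbox doesn't stick. A hedged sketch (this assumes granting a server permission to the service account is acceptable in your environment):

```sql
USE master;
GRANT VIEW SERVER STATE TO [NT AUTHORITY\NETWORK SERVICE];
```

If granting the permission broadly is not acceptable, the module-signing pattern (a certificate-signed procedure that wraps the DMV query) limits the permission to a single procedure.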

[T-SQL] check if index exists

[T-SQL] check if index exists


check if index exists

Posted: 17 Jul 2013 01:20 AM PDT

Hi, I am creating a script for a nonclustered index; I need to check whether a particular index on a particular column of a particular table exists or not.
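One common check uses sys.indexes, sketched below (index, table, and column names are placeholders):

```sql
-- Create the index only if it does not already exist:
IF NOT EXISTS (
    SELECT 1
    FROM sys.indexes
    WHERE name = 'IX_MyTable_MyColumn'
      AND object_id = OBJECT_ID('dbo.MyTable')
)
    CREATE NONCLUSTERED INDEX IX_MyTable_MyColumn
        ON dbo.MyTable (MyColumn);
```

Checking by name is the simplest form; checking by key column instead means joining sys.index_columns and sys.columns.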

delete rows

Posted: 16 Jul 2013 05:36 AM PDT

Hi, I need to move data to another table and then remove it from the old one, which means deleting 30 million rows. I know it takes time, but is there any way I can do it faster, such as in batches?
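A hedged sketch of the usual batched-delete loop (table names, key column, and the batch size of 10,000 are all placeholders; deleting only rows already copied to the new table keeps the operation restartable):

```sql
WHILE 1 = 1
BEGIN
    -- Delete a batch of rows that have already been moved:
    DELETE TOP (10000) o
    FROM dbo.OldTable AS o
    WHERE EXISTS (SELECT 1 FROM dbo.NewTable AS n WHERE n.id = o.id);

    IF @@ROWCOUNT = 0 BREAK;
END
```

Small batches keep each transaction (and the log growth per transaction) bounded; in full recovery, log backups between batches help too.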

How to get the first date of my table?

Posted: 16 Jul 2013 11:34 PM PDT

Hi friends, I have a table like the one below:

create table student
(
    class_attend datetime,
    name varchar(20)
)

insert into student values ('02-07-2013', 'A')
insert into student values ('03-07-2013', 'A')
insert into student values ('04-07-2013', 'A')
insert into student values ('05-07-2013', 'A')
insert into student values ('06-07-2013', 'A')
insert into student values ('07-07-2013', 'A')
insert into student values ('08-07-2013', 'A')

My required output is the first date on which the student appeared.

Expected output:
===========
02-07-2013
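A hedged sketch of one answer: MIN of the attendance-date column per student (adjust the column name to match your CREATE TABLE):

```sql
SELECT name, MIN(class_attend) AS first_date
FROM student
GROUP BY name;
```

GROUP BY name makes this work for any number of students, not just 'A'.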

Select All Records from Temporary Table and Assign to VARCHAR?

Posted: 16 Jul 2013 10:17 PM PDT

Hello! I am new to T-SQL and require some help with a problem I'm having. I have a stored procedure where I create a temporary table:

Create Table #TempSourceOutput
(
    SourceKey varchar(50),
    SourceID varchar(50),
    SourceComment varchar(max)
)

and I can successfully populate it using:

Insert Into #TempSourceOutput (SourceKey, SourceID, SourceComment)
Select @SourceKey, Cast(@SourceID as varchar), @SourceComment

However, I would like to get all the values from this table and list them (along with other variables) in a varchar, resulting in something like this:

Declare @TargetData varchar(max)
Set @TargetData = 'Target Project Name'
Set @msg = 'Are you sure you wish to copy the following records to: ' + @TargetData + '?'
    + 'AA 12 Test Comment'
    + 'AB 13 Another Test Comment'
    + 'AC 14 And another one...'

(where the lines AA, AB, AC are the result set of #TempSourceOutput - i.e. columns SourceKey + SourceID + SourceComment)

I would like to return it all as one complete varchar. Is it at all possible, and if so, how?

Many thanks for your help in advance.

Best wishes
Winnie
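One hedged sketch, using the variable-concatenation pattern (works on SQL Server 2008; note the row order produced this way is technically undefined, so add an ORDER BY-driven approach such as FOR XML PATH if ordering matters):

```sql
DECLARE @TargetData varchar(max) = 'Target Project Name';
DECLARE @list varchar(max) = '';

-- Append one line per row of the temp table:
SELECT @list = @list + SourceKey + ' ' + SourceID + ' '
             + SourceComment + CHAR(13) + CHAR(10)
FROM #TempSourceOutput;

DECLARE @msg varchar(max) =
    'Are you sure you wish to copy the following records to: '
    + @TargetData + '?' + CHAR(13) + CHAR(10) + @list;
```

CHAR(13) + CHAR(10) inserts the line breaks between records.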

Problem with FOR XML EXPLICIT

Posted: 16 Jul 2013 01:45 AM PDT

Hello everybody! I'm new to FOR XML... So here is my problem: I've got a query that returns the correct result regarding rows, nesting etc...

[code="sql"]
SELECT TAG,
       Parent,
       [Type!1!Type], [Type!1!TypeID], [Type!1!TypeName], [Type!1!TypeValue],
       [Property!2!Type], [Property!2!PropID], [Property!2!PropName],
       [Value!3!Type], [Value!3!ValID], [Value!3!ValName], [Value!3!ValValue]
FROM (
    -- get all Types from TBL_Value_Types
    SELECT DISTINCT
           1 AS TAG,
           NULL AS Parent,
           'Type' AS [Type!1!Type],
           T.TypeID AS [Type!1!TypeID],
           T.TypeName AS [Type!1!TypeName],
           T.Value AS [Type!1!TypeValue],
           T.DisplayOrder AS [Type!1!TypeDisplayOrder],
           NULL AS [Property!2!Type],
           NULL AS [Property!2!PropID],
           NULL AS [Property!2!PropName],
           NULL AS [Property!2!PropDisplayOrder],
           NULL AS [Value!3!Type],
           NULL AS [Value!3!ValID],
           NULL AS [Value!3!ValName],
           NULL AS [Value!3!ValValue],
           NULL AS [Value!3!ValDisplayOrder]
    FROM dbo.TBL_Value_Types AS T
    LEFT JOIN TBL_Value_Properties AS P ON P.TypeID_FK = T.TypeID
    LEFT JOIN TBL_Value_Property_Values AS V ON V.PropID_FK = P.PropID

    UNION

    -- get all Properties from TBL_Value_Properties
    SELECT 2 AS TAG,
           1 AS Parent,
           NULL AS [Type!1!Type],
           T.TypeID,
           T.TypeName,
           T.DisplayOrder,
           T.Value,
           'Property' AS [Property!2!Type],
           P.PropID,
           P.PropName,
           P.DisplayOrder,
           NULL AS [Value!3!Type],
           NULL AS [Value!3!ValID],
           NULL AS [Value!3!ValName],
           NULL AS [Value!3!ValValue],
           NULL AS [Value!3!ValDisplayOrder]
    FROM dbo.TBL_Value_Types AS T
    LEFT JOIN TBL_Value_Properties AS P ON P.TypeID_FK = T.TypeID
    LEFT JOIN TBL_Value_Property_Values AS V ON V.PropID_FK = P.PropID

    UNION

    -- get all Values from TBL_Value_Property_Values
    SELECT 3 AS TAG,
           2 AS Parent,
           NULL AS [Type!1!Type],
           T.TypeID,
           T.TypeName,
           T.DisplayOrder,
           T.Value,
           NULL AS [Property!2!Type],
           P.PropID,
           P.PropName,
           P.DisplayOrder,
           'Value' AS [Value!3!Type],
           V.ValID,
           V.ValName,
           V.ValValue,
           V.DisplayOrder
    FROM dbo.TBL_Value_Types AS T
    LEFT JOIN TBL_Value_Properties AS P ON P.TypeID_FK = T.TypeID
    LEFT JOIN TBL_Value_Property_Values AS V ON V.PropID_FK = P.PropID
    WHERE V.ValID IS NOT NULL
) AS TEMP
ORDER BY [Type!1!TypeID],
         [Type!1!TypeDisplayOrder],
         [Property!2!PropDisplayOrder],
         [Value!3!ValDisplayOrder],
         [Type!1!Type]
[/code]

If I add "FOR XML AUTO", the result itself (meaning rows, nesting etc...) still looks fine, but it's not the desired XML structure. If I add "FOR XML EXPLICIT", the result is missing several rows. After further investigation, it seems that the last "Type" has lost all its properties and values...

Any comments are welcome!!! Thanks in advance....

hsackmann

Unexpected Behaviour With SQL Server :,(

Posted: 16 Jul 2013 03:45 PM PDT

There is an SP which usually takes around 3 minutes to execute. For the last few days it has been taking around 30-35 minutes. So we rebuilt all the indexes and updated the statistics. After that we ran the same SP (which used to take 3 minutes); after rebuilding and updating the statistics, the SP executed relatively fast, taking some 3-4 minutes. After that, there were no operations on the DB for the next 5-6 hours. After those 5-6 hours we tried to run the SP again, but it again took 30-35 minutes.

We have observed a common pattern: when we rebuild the indexes and update the statistics, the SP runs quickly. But that is not a feasible solution - rebuilding and updating statistics every time we have to run the SP. Moreover, for fragmentation to come into the picture, we would need to do some inserts/updates/deletes, which we are not doing. So why does the SP run smoothly only after a rebuild and statistics update?

error

Posted: 16 Jul 2013 06:13 AM PDT

hi

select * from tableA with(nolock) a
join tableB with(nolock) b
on a.id = b.id
where a.date > getdate()

I am getting the error: incorrect syntax near 'a'.
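The likely cause (a hedged guess from the error text): when a table hint is written with WITH, the table alias must come before the hint, not after it. A sketch of the corrected form:

```sql
SELECT *
FROM tableA AS a WITH (NOLOCK)
JOIN tableB AS b WITH (NOLOCK)
    ON a.id = b.id
WHERE a.date > GETDATE();
```

Writing the alias after WITH (NOLOCK) produces exactly the "Incorrect syntax near 'a'" error shown.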

[SQL Server 2008 issues] PAE

[SQL Server 2008 issues] PAE


PAE

Posted: 16 Jul 2013 06:44 PM PDT

Can we use both /3GB and PAE together?

Issue with DATEDIFF returning Negative Values

Posted: 16 Jul 2013 08:12 AM PDT

I have written the following query, and it is returning negative values for some records when calculating the number of days between two dates:

set dateformat mdy

select b.VINTAGE,
       FLOOR(DATEDIFF(day, b.BOOK_MTH, CAST(right(a.asofdate, 6) as DATE)) / 30),
       b.CR_SCR_BAND,
       case when a.DAYSPD < 1 then 'CURRENT'
            when a.DAYSPD between 1 and 30 then '<30DPD'
            when a.DAYSPD between 31 and 60 then '30DAY'
            when a.DAYSPD > 60 then '60+DAY'
            else 'ERROR'
       end as 'ME_DQ',
       count(a.account),
       sum(a.currtotalbal - a.CurrDisc) as 'BALANCE'
from dbo.ME_MASTER a
left join #temp b on a.ACCOUNT = b.ACCOUNT
where a.OpenDt >= 1110101
  and VINTAGE is not null
group by b.VINTAGE,
         FLOOR(DATEDIFF(day, b.BOOK_MTH, CAST(right(a.asofdate, 6) as DATE)) / 30),
         b.CR_SCR_BAND,
         case when a.DAYSPD < 1 then 'CURRENT'
              when a.DAYSPD between 1 and 30 then '<30DPD'
              when a.DAYSPD between 31 and 60 then '30DAY'
              when a.DAYSPD > 60 then '60+DAY'
              else 'ERROR'
         end
order by b.VINTAGE,
         FLOOR(DATEDIFF(day, b.BOOK_MTH, CAST(right(a.asofdate, 6) as DATE)) / 30),
         b.CR_SCR_BAND,
         case when a.DAYSPD < 1 then 'CURRENT'
              when a.DAYSPD between 1 and 30 then '<30DPD'
              when a.DAYSPD between 31 and 60 then '30DAY'
              when a.DAYSPD > 60 then '60+DAY'
              else 'ERROR'
         end

set dateformat mdy

select top 10
       b.VINTAGE,
       FLOOR(DATEDIFF(day, b.BOOK_MTH, CAST(right(a.asofdate, 6) as DATE)) / 30) as 'MTH_ON_BOOKS',
       b.CR_SCR_BAND,
       case when a.DAYSPD < 1 then 'CURRENT'
            when a.DAYSPD between 1 and 30 then '<30DPD'
            when a.DAYSPD between 31 and 60 then '30DAY'
            when a.DAYSPD > 60 then '60+DAY'
            else 'ERROR'
       end as 'ME_DQ',
       a.account,
       (a.currtotalbal - a.CurrDisc) as 'BALANCE',
       cast(right(a.OpenDt, 6) as date) as 'Book Date',
       CAST(right(a.ASOFDATE, 6) as date) as 'Month End'
from dbo.ME_MASTER a
left join #temp b on a.ACCOUNT = b.ACCOUNT
where a.OpenDt >= 1110101
  --and b.VINTAGE is null
  and FLOOR(DATEDIFF(day, b.BOOK_MTH, CAST(right(a.asofdate, 6) as DATE)) / 30) < 0
order by b.VINTAGE,
         FLOOR(DATEDIFF(day, b.BOOK_MTH, CAST(right(a.asofdate, 6) as DATE)) / 30),
         b.CR_SCR_BAND,
         case when a.DAYSPD < 1 then 'CURRENT'
              when a.DAYSPD between 1 and 30 then '<30DPD'
              when a.DAYSPD between 31 and 60 then '30DAY'
              when a.DAYSPD > 60 then '60+DAY'
              else 'ERROR'
         end

I would rather be using DATEDIFF by months, but no matter which I choose, I get negative values on some records. This sometimes occurs on accounts where the two dates do not cross a year threshold. I have read several potential solutions, but none seem to make sense. Any help would be greatly appreciated. Thanks!
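One general point worth checking: DATEDIFF returns a negative number whenever the first (start) date is later than the second (end) date, so negative values usually mean BOOK_MTH falls after the parsed asofdate for those rows, e.g.:

```sql
-- DATEDIFF(unit, startdate, enddate) is signed:
SELECT DATEDIFF(day, '2013-07-17', '2013-07-01');  -- returns -16
```

With the RIGHT(..., 6) string slicing and SET DATEFORMAT in play, it's also worth verifying that the sliced asofdate actually parses to the intended date on the offending rows.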

Find the host name

Posted: 16 Jul 2013 06:46 PM PDT

Dear all, I need to know the host name from which my database has been accessed - that is, which host performed DDL/DML operations within a date range. Say I want to know the host names within the range 16/07/2013 13:00:00 to 16/07/2013 14:00:00. I don't have any trace file. Please help me find this out; I am stuck. Regards, Akbar

SQL

Posted: 16 Jul 2013 06:28 PM PDT

I searched several sites but was unable to find an exact document on SQL Server architecture. Can anyone send me the architecture of SQL Server?

Shrink database while restoring

Posted: 02 Aug 2011 08:03 PM PDT

Can we shrink the database files while restoring from backup?

Time-Out Occured while estimating the compress savings with Huge Table (100Gigs)

Posted: 16 Jul 2013 03:29 PM PDT

Hi,

While checking the estimated compression savings on one of our huge tables (100+ GB), I ran into a time-out buffer latch error and am not able to proceed further. I'm facing the same issue while rebuilding the database where this table exists, and am not able to rebuild the database. I appreciate any advice.

Environment details:
===================
Windows 2003 server
SQL Server 2008
RAM: 128 GB
CPU: 24
===================

[b]sp_estimate_data_compression_savings 'Export','SE_VARCS_DBF',NULL,NULL,'PAGE'
Msg 845, Level 17, State 1, Line 1
Time-out occurred while waiting for buffer latch type 2 for page (1:24887741), database ID 12.[/b]

Thanks,
Nagarjun.

looping through all user databases

Posted: 16 Jul 2013 02:45 PM PDT

Hi guys,

I'm quite new to SQL Server commands. How can I enhance the below script so that it runs against all user databases if executed in master? Thanks for your help!

SET NOCOUNT ON;
DECLARE @objectid int;
DECLARE @indexid int;
DECLARE @partitioncount bigint;
DECLARE @schemaname nvarchar(130);
DECLARE @objectname nvarchar(130);
DECLARE @indexname nvarchar(130);
DECLARE @partitionnum bigint;
DECLARE @partitions bigint;
DECLARE @frag float;
DECLARE @command nvarchar(4000);
DECLARE @dbid smallint;

-- Conditionally select tables and indexes from the sys.dm_db_index_physical_stats function
-- and convert object and index IDs to names.
SET @dbid = DB_ID();

SELECT [object_id] AS objectid,
       index_id AS indexid,
       partition_number AS partitionnum,
       avg_fragmentation_in_percent AS frag,
       page_count
INTO #work_to_do
FROM sys.dm_db_index_physical_stats (@dbid, NULL, NULL, NULL, N'LIMITED')
WHERE avg_fragmentation_in_percent > 10.0  -- Allow limited fragmentation
  AND index_id > 0                         -- Ignore heaps
  AND page_count > 25;                     -- Ignore small tables

-- Declare the cursor for the list of partitions to be processed.
DECLARE partitions CURSOR FOR
    SELECT objectid, indexid, partitionnum, frag
    FROM #work_to_do;

-- Open the cursor.
OPEN partitions;

-- Loop through the partitions.
WHILE (1 = 1)
BEGIN
    FETCH NEXT FROM partitions
    INTO @objectid, @indexid, @partitionnum, @frag;

    IF @@FETCH_STATUS < 0 BREAK;

    SELECT @objectname = QUOTENAME(o.name),
           @schemaname = QUOTENAME(s.name)
    FROM sys.objects AS o
    JOIN sys.schemas AS s ON s.schema_id = o.schema_id
    WHERE o.object_id = @objectid;

    SELECT @indexname = QUOTENAME(name)
    FROM sys.indexes
    WHERE object_id = @objectid AND index_id = @indexid;

    SELECT @partitioncount = count(*)
    FROM sys.partitions
    WHERE object_id = @objectid AND index_id = @indexid;

    -- 30 is an arbitrary decision point at which to switch between reorganizing and rebuilding.
    IF @frag < 30.0
        SET @command = N'ALTER INDEX ' + @indexname + N' ON ' + @schemaname + N'.' + @objectname + N' REORGANIZE';
    IF @frag >= 30.0
        SET @command = N'ALTER INDEX ' + @indexname + N' ON ' + @schemaname + N'.' + @objectname + N' REBUILD';
    IF @partitioncount > 1
        SET @command = @command + N' PARTITION=' + CAST(@partitionnum AS nvarchar(10));

    EXEC (@command);
    PRINT N'Executed: ' + @command;
END

-- Close and deallocate the cursor.
CLOSE partitions;
DEALLOCATE partitions;

-- Drop the temporary table.
DROP TABLE #work_to_do;
GO
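One hedged option for running a per-database script everywhere: wrap it in the undocumented sp_MSforeachdb procedure (or build your own loop over sys.databases). A minimal sketch of the wrapper pattern:

```sql
-- '?' is replaced with each database name; note the doubled quotes
-- inside the dynamic string.
EXEC sp_MSforeachdb N'
IF ''?'' NOT IN (''master'', ''model'', ''msdb'', ''tempdb'')
BEGIN
    USE [?];
    PRINT DB_NAME();
    -- place the maintenance script body here
END';
```

Since sp_MSforeachdb is undocumented and unsupported, a hand-rolled cursor over sys.databases executing the script via sp_executesql is the more robust alternative.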

Doing Insert/Update/Delete from Oracle Source to SQL Server Dest.

Posted: 16 Jul 2013 08:36 AM PDT

Hello, I have an Oracle database located on a remote machine, which I can connect to through an ODBC connection, and I have SQL Server 2008 Standard edition on another server. Now I have to insert, update, or delete records in the SQL Server destination by comparing with the Oracle source. There is a unique key column in that table and a last_update column (in both source and destination) which I can use for comparison. Is there any method for doing an incremental load from the Oracle source to the SQL Server destination? Thanks & Regards

Finding the Top X Values Each Day from History Table

Posted: 16 Jul 2013 04:53 AM PDT

I am trying to find the top 750 balances on each day from a history table of accounts on file in a consumer credit portfolio. Clearly this is easy if I do each day by itself, but I don't know how to find it for each day in a single query. I will also need to further confine my results by the days past due of each record on each day.

Data table:
ASOFDATE (the effective date of the record in the history table)
ACCOUNT
BALANCE
DAYSPD

There are more data elements in the table, but these are the only ones I need.

Thoughts?
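A common sketch for "top N per group": ROW_NUMBER() partitioned by day (the table name dbo.HistoryTable is a placeholder; the columns come from the question):

```sql
WITH ranked AS (
    SELECT ASOFDATE, ACCOUNT, BALANCE, DAYSPD,
           ROW_NUMBER() OVER (PARTITION BY ASOFDATE
                              ORDER BY BALANCE DESC) AS rn
    FROM dbo.HistoryTable
)
SELECT ASOFDATE, ACCOUNT, BALANCE, DAYSPD
FROM ranked
WHERE rn <= 750;
```

A DAYSPD filter can go either inside the CTE (rank only qualifying rows) or in the outer WHERE (rank everything, then filter), depending on the intended semantics.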

Get total of field values

Posted: 16 Jul 2013 08:50 AM PDT

Hi, I am a newbie and need to get the grand total of the new_TotalNoofPlanMembers field, and am having some difficulty as this isn't working for me:

SELECT new_MembershipNumber AS 'Membership ID',
       FullName AS 'Member Name',
       new_SalesChannelOnlyforPrivateMembersName AS 'Bill To Corp. Name',
       new_TotalNoofPlanMembers AS 'Total # Plan Members'
       sum(new_TotalNoofPlanMembers) as 'Grand Total'
FROM dbo.FilteredContact
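Two problems stand out (hedged reading of the query): a comma is missing before the SUM, and a plain SUM can't be mixed with non-aggregated columns without a GROUP BY. A windowed SUM sidesteps both, putting the grand total on every row:

```sql
SELECT new_MembershipNumber                          AS [Membership ID],
       FullName                                      AS [Member Name],
       new_SalesChannelOnlyforPrivateMembersName     AS [Bill To Corp. Name],
       new_TotalNoofPlanMembers                      AS [Total # Plan Members],
       SUM(new_TotalNoofPlanMembers) OVER ()         AS [Grand Total]
FROM dbo.FilteredContact;
```

The empty OVER () makes the aggregate run across the whole result set while each detail row is still returned.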

Finding Missing values

Posted: 16 Jul 2013 04:49 AM PDT

Hello everyone,

My name is Thomas and I am brand new to T-SQL. I was thrown into writing our company reports a little over two months ago. I am completely self-taught, so forgive any ignorance I may show. I was thrown into this out of necessity; we are a non-profit and can't afford any more IT staff. However, I have found that I really like reports!

My question is: how do I find a missing value? We want to know which clients who have had a visit after 01/01/2013 are also missing a Client Intake Form with a form create date after 01/01/2013. This information will be pulled from two tables: the table containing the document information and the table containing the visit information. They are joined on "PatientID".

The client intake form is a text value in the "Document Description" column, along with several other documents such as referrals, ROIs, etc. I can easily pull the information on who has an intake form, but I'm struggling with who does not. Please see my "who does" query:

Select Distinct
    dbo.DocumentList.id AS 'DocumentID',
    dbo.DocumentList.patientId AS 'ClientID',
    dbo.DocumentList.createdDT AS 'Document Date',
    docTypeDescription AS 'Document Name',
    visitdate
From dbo.DocumentList
Inner JOIN dbo.dsCHCChargesAndPayments
    ON dbo.dsCHCChargesAndPayments.[Patient: Patient ID] = patientId
Where docTypeDescription IN ('Client Intake Information Form')
  and visitdate > '2013-01-01 00:00:00.000'
  and createdDT > '2013-01-01 00:00:00.000'

Thanks, any help is appreciated!
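The usual pattern for "who does not" is NOT EXISTS: take the clients with a qualifying visit, and exclude those who have a matching intake document. A hedged sketch using the table and column names from the query above:

```sql
SELECT DISTINCT c.[Patient: Patient ID] AS ClientID
FROM dbo.dsCHCChargesAndPayments AS c
WHERE c.visitdate > '2013-01-01'
  AND NOT EXISTS (
      SELECT 1
      FROM dbo.DocumentList AS d
      WHERE d.patientId = c.[Patient: Patient ID]
        AND d.docTypeDescription = 'Client Intake Information Form'
        AND d.createdDT > '2013-01-01'
  );
```

NOT EXISTS is generally preferable to NOT IN here because it behaves correctly when the subquery could return NULLs.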

latch

Posted: 16 Jul 2013 01:02 AM PDT

Can someone explain the difference between a latch and a lock?

A product that reports versions, editions and product name of SQL Server installs

Posted: 16 Jul 2013 03:31 AM PDT

Hello, I am looking for a product that reports versions, editions, and product names of SQL Server installs and instances in my network. Is there something out there on the market? Any free or very inexpensive tools? Thanks.
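For each instance you can already reach, the built-in SERVERPROPERTY function reports this directly; a sketch (network-wide discovery still needs a scanning tool, but this covers the per-instance reporting):

```sql
SELECT SERVERPROPERTY('MachineName')    AS MachineName,
       SERVERPROPERTY('InstanceName')   AS InstanceName,
       SERVERPROPERTY('ProductVersion') AS Version,
       SERVERPROPERTY('ProductLevel')   AS ServicePackLevel,
       SERVERPROPERTY('Edition')        AS Edition;
```

Run against a registered-servers group in SSMS, one query collects this for every known instance at once.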

Query The start up type of a service

Posted: 16 Jul 2013 01:53 AM PDT

I am putting together a list of security checks which will eventually have a front end for audit purposes. Does anybody know a script which will return the startup types of the services: SQL Server, Agent, Browser? I want to ensure that SQL Server is automatic and that the Browser is disabled on each instance. Any help is much appreciated.
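On SQL Server 2008 R2 SP1 and later, the DMV sys.dm_server_services reports the engine and Agent services from within T-SQL (the Browser is a shared service and does not appear there, so it would still need a WMI or registry check). A sketch:

```sql
-- Startup type and status of the engine-related services
-- for the instance you are connected to.
SELECT servicename,
       startup_type_desc,   -- Automatic / Manual / Disabled
       status_desc,
       service_account
FROM sys.dm_server_services;
```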

How to measure the SQL Server MTL Memory Usage.

Posted: 16 Jul 2013 04:12 AM PDT

Hi, I'm having issues with the following scenario; please help if anyone can measure the MTL usage.

DB environment details:
Windows/SQL Server 2008 R2, 64-bit; SQL max memory 12 GB (BPool area); 3 GB for MTL (non-BPool); max worker threads 512.

Application environment details:
Solaris server, 64-bit, 32 GB RAM; 4 application instances with 20 SQL connections each; the application uses ODBC drivers to connect to the SQL database.

- The application keeps restarting when running with the default memory setting, due to insufficient memory.
- The application works fine when the default MTL setting is changed to 3 GB using the -g switch at SQL Server startup.

In the test lab server we did not have any issues after allocating the custom memory at startup using -g (3 GB). Using the query below, MTL does not cross more than 100 MB:

SELECT type,
       SUM(single_pages_kb) AS BPool,
       SUM(multi_pages_kb)  AS MTL
FROM sys.dm_os_memory_clerks
WHERE multi_pages_kb > 0
GROUP BY type
ORDER BY 3 DESC

Questions:
- Why does the application restart with the default memory setting? By default there should be at least 1 GB, and my application is not consuming even a quarter of that, yet it still restarts.
- Similarly, why does the application stop restarting when the MTL portion is increased to a higher value?
- Which query will help me measure MTL usage against the allocated 3 GB? I'd like output like:
  Allocated MTL area: 3 GB
  Consumed MTL: 250 MB
  Free MTL space: 2822 MB

Based on that script I will scale up the memory usage of the C/C++ applications. For easy understanding I have attached the app and DB layers. Please share your thoughts on measuring the memory usage.

reasons

Posted: 16 Jul 2013 12:07 AM PDT

What are the possible reasons for a query timing out? What are the possible reasons for the log file not truncating? If I have a log file of 5 GB, how many VLFs will be present?
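On the VLF question: the count is not determined by the log's size alone, it depends on how the log grew (initial size plus each growth increment adds a batch of VLFs), but you can count them directly:

```sql
-- Returns one row per virtual log file (VLF) in the current
-- database's transaction log; the row count is the VLF count.
DBCC LOGINFO;
```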

Need to restore one file into a database

Posted: 16 Jul 2013 03:13 AM PDT

Hi all, first of all, this is a development server. We have a database over 1 TB in size with partitioned tables; it was restored several months ago. Now, if I select from any of the partitioned tables, I get an error message recommending that I run DBCC CHECKDB, and it points to a particular file, which I guess may be corrupted. For now I am running DBCC CHECKTABLE against the table in question, but my question is not about this. I have a fresh production backup. Can I restore just one file into our database? If it is possible, how should I do this restore? Thanks.
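SQL Server does support restoring a single file, but only within the same backup lineage: a file from a production backup cannot be merged into a dev database that was restored separately months ago, so in that case a full restore is needed. For a database whose lineage does match, a sketch with hypothetical file and backup names:

```sql
-- Hypothetical names: restore one damaged file from a full backup,
-- then roll it forward with the log backups taken since.
RESTORE DATABASE BigDb
    FILE = 'BigDb_Data2'                  -- logical file name
    FROM DISK = 'D:\Backups\BigDb_full.bak'
    WITH NORECOVERY;

RESTORE LOG BigDb
    FROM DISK = 'D:\Backups\BigDb_log1.trn'
    WITH RECOVERY;                        -- brings the file online
```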

SSMS can't connect to SQL Server, no error, no end.

Posted: 16 Jul 2013 02:46 AM PDT

Hi all, I really need help on this. One QA's computer can't connect to any SQL Server instance using SSMS. Whether I connect using a SQL login or a Windows login, it just runs forever; no errors show up, it just keeps on running. Apart from this computer, other computers in the same domain have no problem connecting to the servers. The firewall is turned off on this computer, and other applications can connect using both SQL and Windows logins; only SSMS and OSQL cannot. Can anybody shed some light? Thanks,

LS

Posted: 16 Jul 2013 01:12 AM PDT

I added a secondary database file on the E drive to the primary database, where the database is configured for log shipping. How can I restore the file to the secondary server, where there is no E drive? What happens to my log shipping?
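The log backup that contains the file-add can be restored manually on the secondary with WITH MOVE to redirect the new file to a drive that exists there; after that, the log shipping restore job can continue normally. A sketch with hypothetical database and file names:

```sql
-- Hypothetical names: restore the log backup that adds the new file,
-- redirecting it to a drive that exists on the secondary server.
RESTORE LOG MyDb
    FROM DISK = 'D:\LogShip\MyDb_20130716.trn'
    WITH MOVE 'MyDb_Data2' TO 'F:\Data\MyDb_Data2.ndf',
         NORECOVERY;   -- keep the secondary in a restoring state
```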

service accounts

Posted: 16 Jul 2013 01:31 AM PDT

What are the differences between configuring the service accounts as network accounts versus local accounts while installing SQL Server?

service pack on cluster

Posted: 15 Jul 2013 08:55 PM PDT

I have a 2-node cluster, and while we were applying the service pack on the passive node, the active node suddenly went down and failed over to the passive node. Will the SQL instance fail over? If it does, can you please tell me whether the service pack got applied to my SQL instance? And please tell me the best way to patch a SQL cluster? Thanks, Santosh.

Does compressed data stay compressed in transactional replication SQL 2008 R2

Posted: 15 Jul 2013 10:58 PM PDT

Does compressed data stay compressed in transactional replication on SQL Server 2008 R2? We are planning to use transactional replication to replicate compressed data from one SQL 2008 R2 (SP2) server to another SQL 2008 R2 (SP2) server. Are there any settings I need to make on the published article to retain the compression?
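By default the subscriber table is created without the compression attribute; the article's schema options include a bit (0x400000000, per the sp_addarticle documentation) that replicates the data and index compression settings. One way to check whether an article has it set, with hypothetical names:

```sql
-- Run in the publication database. Hypothetical article name:
-- bit 0x400000000 must be set in schema_option for the subscriber
-- table to be created with the same compression settings.
SELECT name,
       schema_option
FROM dbo.sysarticles
WHERE name = 'MyCompressedTable';
```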

Obtaining SSMS errors when an SSRS report returns an error, without logging on to SSMS each time

Posted: 15 Jul 2013 11:33 PM PDT

Hello people,

I have a problem in that if an SSRS report runs a procedure which returns an error, the message displayed at the SSRS end is very inadequate. For various reasons, we have been advised not to turn on remote errors to display the SSMS error message, which we need to see in order to diagnose the problem.

Therefore, I wonder whether there is any way to write a query that will return the error message the procedure raised, given some information about the user / procedure / report or whatever (we normally get told about errors pretty soon after they happen, so we may be able to just order the error messages by time, if only we can find them). We cannot simply log into our live environment for every error to look at it in SSMS. Unfortunately there are a lot of existing procedures, and we would rather avoid adding CATCH blocks to capture errors in every single one, which would be a large amount of work.

I have had a bit of a fumble around the trace logs and so on, but have not found the error messages anywhere helpful... any ideas? Are the error messages stored somewhere?

Example: I make a report based on this procedure:

[code="sql"]ALTER PROCEDURE [dbo].[RETURN_AN_ERROR]
AS
CREATE TABLE #TestMeUp
(
    Id    INT,
    Value VARCHAR(10)
)

INSERT INTO #TestMeUp
SELECT 1, 'this'  UNION
SELECT 2, 'that'  UNION
SELECT 3, 'the'   UNION
SELECT 4, 'other' UNION
SELECT 5, 'and'   UNION
SELECT 6, 'look'  UNION
SELECT 7, 'another' UNION
SELECT 1, 'one'

SELECT Id,
       (SELECT B.Value FROM #TestMeUp AS B WHERE B.Id = A.Id) AS Value
FROM #TestMeUp AS A

DROP TABLE #TestMeUp[/code]

The report gives this unhelpful error:

[quote]An error has occurred during report processing. (rsProcessingAborted)
Query execution failed for dataset 'Error'. (rsErrorExecutingCommand)
For more information about this error navigate to the report server on the local server machine, or enable remote errors[/quote]

whereas SSMS gives this much more useful error message (which we would like to see from outside of SSMS):

[quote](8 row(s) affected)
Msg 512, Level 16, State 1, Procedure RETURN_AN_ERROR, Line 26
Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression.[/quote]

Thanks for looking!
Mark

dynamic index rebuild based on fragmentation level

Posted: 15 Jul 2013 09:00 PM PDT

Hi guys, does anyone have a dynamic script which will list and rebuild all indexes with average fragmentation greater than 30%? I've seen one somewhere but can't recall where... thanks
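A minimal sketch of the usual approach, driven by sys.dm_db_index_physical_stats (for production use, a full-featured maintenance solution such as Ola Hallengren's scripts is the common recommendation):

```sql
-- Sketch: build ALTER INDEX REBUILD statements for every index in
-- the current database fragmented above 30%, then print them for
-- review before executing.
DECLARE @sql NVARCHAR(MAX) = N'';

SELECT @sql = @sql
    + N'ALTER INDEX ' + QUOTENAME(i.name)
    + N' ON ' + QUOTENAME(s.name) + N'.' + QUOTENAME(o.name)
    + N' REBUILD;' + CHAR(13)
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i
     ON i.object_id = ps.object_id AND i.index_id = ps.index_id
JOIN sys.objects AS o ON o.object_id = i.object_id
JOIN sys.schemas AS s ON s.schema_id = o.schema_id
WHERE ps.avg_fragmentation_in_percent > 30
  AND ps.index_id > 0          -- skip heaps
  AND ps.page_count > 100;     -- ignore tiny indexes

PRINT @sql;   -- review, then run: EXEC sp_executesql @sql;
```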

Scan count vs logical reads

Posted: 15 Jul 2013 10:25 PM PDT

I have two versions of a query:

1. Scan count 136, logical reads 2,776,907, CPU time = 230,848 ms, elapsed time = 55,753 ms.
2. Scan count 0, logical reads 89,225,788, CPU time = 437,696 ms, elapsed time = 46,195 ms.

The only difference I can see is that query 2 takes less elapsed time. Which is better here? Both execution plans have lots of hash joins. Thanks

Foreign Key

Posted: 15 Jul 2013 11:28 PM PDT

Hi, if I disable an FK and then re-enable it WITH CHECK CHECK, will this lock the table while the data is being validated, please? Thanks
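For reference, the syntax in question, with hypothetical table and constraint names (the WITH CHECK phrase makes SQL Server scan the existing rows to validate them, which is what marks the constraint trusted again):

```sql
-- Hypothetical names: disable a foreign key, then re-enable and
-- re-validate it. The validation scan reads the whole table.
ALTER TABLE dbo.Orders NOCHECK CONSTRAINT FK_Orders_Customers;

-- ... bulk load or other maintenance ...

ALTER TABLE dbo.Orders WITH CHECK CHECK CONSTRAINT FK_Orders_Customers;
```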
