Wednesday, July 17, 2013

[how to] SQL Server does not exist or access denied from a VMWare guest

SQL Server does not exist or access denied from a VMWare guest

Posted: 17 Jul 2013 08:33 PM PDT

I've configured SQL Server 2012 to allow remote connections; however, I still cannot connect to it from a VMware Workstation guest (using both NAT and bridged networking).

I can ping the host machine from the guest VM when using NAT (the same is not true for the bridged configuration).

Can you help me?

Tag search + mysql + pagination

Posted: 17 Jul 2013 07:44 PM PDT

I have a table "tags" in my database that receives many new rows every second. Users see the tag list on a page, ordered by added_time. I am trying to use MySQL's LIMIT and OFFSET options: the page lists the first 50 results with a "show more" button, and clicking "show more" loads the next result set.

It is possible that, by the time the user clicks "show more", some 10 tags have been newly added to the "tags" table, so 10 rows from the previous result set show up again in the second result set, and it goes on like this. Is there any way to show the first set and then the second set without these duplicates?
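A common way to avoid rows from the previous page reappearing is keyset (seek) pagination instead of LIMIT/OFFSET: remember the added_time and id of the last row shown, and fetch only rows strictly older than it. A sketch, assuming a hypothetical tags(id, name, added_time) table; the :last_added_time/:last_id placeholders would be bound by the application:

```sql
-- First page: newest 50 tags
SELECT id, name, added_time
FROM tags
ORDER BY added_time DESC, id DESC
LIMIT 50;

-- "Show more": seek past the last row the user saw. Rows inserted in
-- the meantime are all newer, so they cannot shift this page's contents.
SELECT id, name, added_time
FROM tags
WHERE added_time < :last_added_time
   OR (added_time = :last_added_time AND id < :last_id)
ORDER BY added_time DESC, id DESC
LIMIT 50;
```

A composite index on (added_time, id) keeps the seek cheap even as the table grows.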

Open source tools for Oracle schema compares

Posted: 17 Jul 2013 04:28 PM PDT

I'm looking for an open source tool to generate DDL scripts; many tools call these synchronization scripts. We essentially want to compare the schemas of different environments (e.g., DEV to QA, QA to PROD) to manage object migrations/deployments a little more easily. Do open source tools like this exist for Oracle?

SSIS 2012 Create environment variable fails

Posted: 17 Jul 2013 08:33 PM PDT

I'm working on a script to port an environment from one server to another. I am running into an issue calling catalog.create_environment_variable, where I get the error "The data type of the input value is not compatible with the data type of the 'String'." coming out of the proc check_data_type_value.

What's strange is that if I let the GUI script out the variables, the resulting query works:

DECLARE @var sql_variant = N'\\myserver\ssisdata'

EXEC [catalog].[create_environment_variable]
    @variable_name = N'FolderBase'
,   @sensitive = False
,   @description = N''
,   @environment_name = N'Development'
,   @folder_name = N'POC'
,   @value = @var
,   @data_type = N'String'
GO

However, this scripted approach isn't working. The legwork I've done indicates that this error message is usually resolved by using the nvarchar data type instead of varchar. However, that's not the case here.

The error occurs at line 108 of the following script. My assumption is that it's something wonky with the sql_variant, but I have no idea what.

USE SSISDB;
GO

DECLARE
    @folder_id bigint
,   @folder_name nvarchar(128) = N'POC'
,   @environment_name nvarchar(128) = N'Development'
,   @environment_description nvarchar(1024)
,   @reference_id bigint
,   @variable_name nvarchar(128)
,   @data_type nvarchar(128)
,   @sensitive bit
,   @value sql_variant
,   @description nvarchar(1024);

IF NOT EXISTS
(
    SELECT * FROM catalog.folders AS F WHERE F.name = @folder_name
)
BEGIN
    EXECUTE catalog.create_folder
        @folder_name = @folder_name
    ,   @folder_id = @folder_id OUTPUT;

    PRINT CONCAT('Folder "', @folder_name, '" has been created with a folder_id of ', @folder_id)
END

IF NOT EXISTS
(
    SELECT * FROM catalog.environments AS E WHERE E.name = @environment_name
    AND E.folder_id = (SELECT F.folder_id FROM catalog.folders AS F WHERE F.name = @folder_name)
)
BEGIN
    PRINT CONCAT('Creating environment ', @environment_name);

    EXECUTE catalog.create_environment
        @folder_name = @folder_name
    ,   @environment_name = @environment_name
    ,   @environment_description = @environment_description;
END

DECLARE
    @EnvironmentVariables TABLE
(
    folder_name nvarchar(128)
,   environment_name nvarchar(128)
,   variable_name nvarchar(128)
,   description nvarchar(1024)
,   data_type nvarchar(128)
,   sensitive bit
,   value sql_variant
);

INSERT INTO
    @EnvironmentVariables
SELECT
    E.folder_name
,   E.environment_name
,   S.name
,   S.description
,   S.type
,   S.sensitive
,   S.value
FROM
(
    SELECT 'FolderBase','Root for ssis processing','String',CAST(0 AS bit),'\\myserver\ssisdata'
    UNION ALL SELECT 'AuditConnectionString','Conn to audit db','String',CAST(0 AS bit),'Data Source=SQLETL01;Initial Catalog=Audit;Provider=SQLNCLI11.1;Integrated Security=SSPI;Auto Translate=False;'
) AS S (name, description, type, sensitive, value)
CROSS APPLY
(
    SELECT
        E.name AS environment_name
    ,   F.name AS folder_name
    FROM
        catalog.folders AS F
        INNER JOIN
            catalog.environments AS E
            ON E.folder_id = F.folder_id
    WHERE
        F.name = @folder_name
        AND E.name = @environment_name
) E;

DECLARE Csr CURSOR FORWARD_ONLY STATIC FOR
SELECT
    EV.variable_name
,   EV.description
,   EV.data_type
,   EV.sensitive
,   EV.value
FROM
    @EnvironmentVariables AS EV;

OPEN Csr;
FETCH NEXT FROM Csr INTO
    @variable_name
,   @description
,   @data_type
,   @sensitive
,   @value;

WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRY
        -- THERE BE MONSTERS AHEAD
        -- The data type of the input value is not compatible with the data type of the 'String'.
        EXECUTE catalog.create_environment_variable
            @variable_name = @variable_name
        ,   @sensitive = @sensitive
        ,   @description = @description
        ,   @environment_name = @environment_name
        ,   @folder_name = @folder_name
        ,   @value = @value
        ,   @data_type = @data_type
    END TRY
    BEGIN CATCH
        SELECT
            @folder_name        AS folder_name
        ,   @environment_name   AS environment_name
        ,   @variable_name      AS variable_name
        ,   @data_type          AS data_type
        ,   @sensitive          AS sensitive
        ,   @value              AS value
        ,   @description        AS description
        ,   ERROR_NUMBER()      AS error_number    -- the number of the error
        ,   ERROR_SEVERITY()    AS error_severity  -- the severity
        ,   ERROR_STATE()       AS error_state     -- the error state number
        ,   ERROR_PROCEDURE()   AS error_procedure -- the stored procedure or trigger where the error occurred
        ,   ERROR_LINE()        AS error_line      -- the line number inside the routine that caused the error
        ,   ERROR_MESSAGE()     AS error_message;  -- the complete text of the error message
    END CATCH

    FETCH NEXT FROM Csr INTO
        @variable_name
    ,   @description
    ,   @data_type
    ,   @sensitive
    ,   @value;
END

CLOSE Csr;
DEALLOCATE Csr;

Specific disadvantages to storing infrequently accessed large files in a large database

Posted: 17 Jul 2013 12:30 PM PDT

I have heard many times across the web that it is, for various reasons, a bad idea to store files directly in a database; the preferred method is to store filenames in the database and the files themselves on the filesystem. Most explanations, however, seem to assume that the stored files (in my case, ~2 MB longtext objects containing XML) need to be retrieved along with the query. I've seen a few comments in passing, however, suggesting that storing these files directly in your database bloats it and thereby "interferes with IO or query performance of the rest of your database."

Under what circumstances does this happen, and why?

It is my understanding that blob and longtext values over a certain size are stored externally anyway. Does the database still attempt to load the entire longtext value when its row is used in operations? My main concern comes from the fact that most queries against the table containing these longtext rows do not actually need the XML for anything. Does simply having these cumulatively massive chunks of data sitting in the same table really affect the performance of other queries against that table?

Move multiple SQL Server 2008 databases from C: to D: at once

Posted: 17 Jul 2013 02:01 PM PDT

I have SQL Server 2008 R2 with 323 databases consuming some 14 GB on my C: drive, a fast SSD.

Because I want to reclaim some space on my C: drive, I would like to move them to my D: drive.

I have found this MSDN article, but that seems to be the procedure to move only one database.

Is there an automated way, or a script, to move all my databases at once?
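Since the documented procedure has to be repeated per database, one option is to generate the ALTER DATABASE ... MODIFY FILE statements for every user database from sys.master_files. A hedged sketch: the D:\Data target path is an assumption, and after running the generated statements you would still need to take each database offline, move the physical files, and bring it back online:

```sql
-- Generate MODIFY FILE statements pointing every user database file at D:\Data.
-- Review the generated output before executing it; system databases are excluded.
SELECT 'ALTER DATABASE ' + QUOTENAME(d.name) +
       ' MODIFY FILE (NAME = ' + QUOTENAME(f.name) +
       ', FILENAME = ''D:\Data\' +
       RIGHT(f.physical_name, CHARINDEX('\', REVERSE(f.physical_name)) - 1) + ''');'
FROM sys.master_files AS f
JOIN sys.databases    AS d ON d.database_id = f.database_id
WHERE d.database_id > 4;   -- skip master, tempdb, model, msdb
```

The new path only takes effect the next time each database is started, which is why the offline/move/online step is still required.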

How to optimize MySQL for large blob updates

Posted: 17 Jul 2013 11:45 AM PDT

I have a table that holds large BLOBs (up to 10M), and is updated frequently. The problem is that UPDATE statements can take up to 1 second to execute, and due to app design this blocks UI. I need to speed up those UPDATE/INSERT statements. Is there a way to do it by tuning MySQL server/storage engine/etc?

The table in question is InnoDB, and I have tried turning compression on, but that didn't seem to make much difference. The client is on the same machine as the server, so there is no network overhead. The server is MySQL 5.5.

How to insert in table from remote stored procedure without creating a distributed transaction?

Posted: 17 Jul 2013 12:39 PM PDT

How can I insert in a local table from a remote stored procedure without creating a distributed transaction, in SQL Server 2005? I want to run a stored procedure on a remote server and use the output as the source for an insert into a local table.
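One commonly suggested workaround is to pull the remote procedure's result set through OPENQUERY, so the remote execution happens outside the local transaction and no DTC enlistment occurs. A sketch; the linked server name REMOTESRV and the procedure dbo.GetRows are hypothetical:

```sql
-- Insert locally from a remote proc without promoting to a distributed transaction.
-- SET FMTONLY OFF is a SQL 2005-era trick so metadata discovery does not
-- short-circuit the procedure call.
INSERT INTO dbo.LocalTable (col1, col2)
SELECT col1, col2
FROM OPENQUERY(REMOTESRV, 'SET FMTONLY OFF; EXEC TargetDb.dbo.GetRows;');
```

Note that OPENQUERY needs the remote batch to return a single, determinable result set shape.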

One Form with Fields from two Tables [migrated]

Posted: 17 Jul 2013 10:34 AM PDT

I am very new to Access 2007 and am trying to create one form which has multiple tabs on it. Four of the tabs represent information from one table and two of the tabs represent information from a second table.

There is a relationship between the two tables: Enrollment Table!ID has a lookup relationship to Medical Info!ID Lookup.

When a new record is created using the form, which automatically assigns an ID to the client record in the Enrollment Table, I would like a corresponding record to be automatically created in the Medical Info table.

The problem I am having is that if I create a new client with the form and move to the Medical Info tab, I can successfully display the ID Lookup results; however, no record is created in the Medical Info table unless I enter data into one of its fields first.

If this makes sense: is there a way to write the record to the Medical Info table at the point the Client ID is created, which would establish the relationship between "Client ID" and "ID Lookup" without having to enter data into another field first?

MySQL SELECT query takes too long for multiple tables - 12 minutes

Posted: 17 Jul 2013 12:29 PM PDT

The MySQL query below takes more than 12 minutes even though indexes are in place. Can you throw some light on how to bring down the response time? Thanks.

select count(*)
from IMSI t1 USE INDEX(imsi_index),
     StdSubServiceProfile t4 USE INDEX(msisdn_mode),
     Auc t3
where t1.mSubIdentificationNumberId = t3.mSubIdentificationNumberId
  and t1.mobileNetworkCodeId = t3.mobileNetworkCodeId
  and t1.mobileCountryCodeId = t3.mobileCountryCodeId
  and t4.associatedMainMsisdn = t1.mainMSISDN
  and t3.authenticationSubscriberType = 1
  and (t4.networkAccessMode = 1 or t4.networkAccessMode = 2);

EXPLAIN for the above query is :

| id | select_type | table | type   | possible_keys | key        | key_len | ref                         | rows    | Extra                             |
|  1 | SIMPLE      | t4    | ALL    | msisdn_mode   | NULL       | NULL    | NULL                        | 2374015 | Using where with pushed condition |
|  1 | SIMPLE      | t1    | ref    | imsi_index    | imsi_index | 13      | hlr.t4.associatedMainMsisdn |      11 | Using where                       |
|  1 | SIMPLE      | t3    | eq_ref | PRIMARY       | PRIMARY    | 12      | hlr.t1.mSubIdentificationNumberId,hlr.t1.mobileNetworkCodeId,hlr.t1.mobileCountryCodeId | 1 | Using where |

Index for the table t1 (IMSI):

| Key_name   | Seq_in_index | Column_name                | Index_type | Cardinality |
| PRIMARY    | 1            | mSubIdentificationNumberId | HASH       |             |
| PRIMARY    | 2            | mobileNetworkCodeId        | HASH       |             |
| PRIMARY    | 3            | mobileCountryCodeId        | HASH       | 2374031     |
| imsi_index | 1            | mainMSISDN                 | BTREE      |             |

Index for the table t4 (StdSubServiceProfile):

| Key_name    | Seq_in_index | Column_name              | Index_type | Cardinality |
| PRIMARY     | 1            | gsmServiceProfileId      | HASH       |             |
| PRIMARY     | 2            | suMSubscriptionProfileId | HASH       |             |
| PRIMARY     | 3            | suMSubscriberProfileId   | HASH       | 2374015     |
| msisdn_mode | 1            | associatedMainMsisdn     | BTREE      |             |
| msisdn_mode | 2            | networkAccessMode        | BTREE      |             |

Restore SQL Server 2000 Database in use

Posted: 17 Jul 2013 11:36 AM PDT

I need to restore a SQL Server 2000 database to a backup of about 2 days ago. I have just backed it up and have tried to restore it, but it says it is in use.

What is the best way to find out who is still using it and then how to disconnect them?

I imagine taking the DB offline is a simple way of doing it, but probably not the safest?
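On SQL Server 2000 you can list the connections with sp_who2 and KILL them individually, or force everyone off in one step before the restore. A sketch; the database name MyDb and backup path are placeholders:

```sql
-- See who is connected (SPID, login, host, database)
EXEC sp_who2;

-- Roll back open transactions and drop all other connections immediately
ALTER DATABASE MyDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

RESTORE DATABASE MyDb
FROM DISK = 'D:\Backups\MyDb.bak'
WITH REPLACE;

ALTER DATABASE MyDb SET MULTI_USER;
```

SINGLE_USER WITH ROLLBACK IMMEDIATE is generally safer than taking the database offline, because it rolls back in-flight transactions cleanly rather than cutting sessions off.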

Percona Playback - Strange Locking issues

Posted: 17 Jul 2013 12:34 PM PDT

Problem while replaying logs

I am in the process of benchmarking a new DB node (specs at the end) and have run across some strange behavior:

As described here, I:

  • Created a dump (innobackupex FTW)
  • Logged all my queries for an hour
  • Set up my new DB (same my.cnf as the live DB, only with a higher innodb_buffer_pool_size)
  • Started the replay of my slow query log

As per the documentation:

percona-playback --mysql-host=127.0.0.1 \
  --mysql-user=root --mysql-schema=my_db \
  --query-log-file=slow.log

This works fine for about 15 minutes, then I start getting strange locking problems:

Error during query: Lock wait timeout exceeded; try restarting transaction, number of tries 0  

I started debugging my current load on the db and found that only one single query was running:

(taken from innodb status)

---TRANSACTION 1C5264768, ACTIVE 44 sec inserting
mysql tables in use 1, locked 1
LOCK WAIT 2 lock struct(s), heap size 376, 1 row lock(s)
MySQL thread id 4289, OS thread handle 0x7f7fb0779700, query id 77515 localhost 127.0.0.1 root update
insert into sessions (a, b, c, d, e, e, f, g, h, i, j, k, l, m, n, o, p, q) values (0, 682,
------- TRX HAS BEEN WAITING 44 SEC FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 4549 page no 7875876 n bits 104 index `PRIMARY` of table `production`.`sessions` trx id 1C5264768 lock_mode X insert intention waiting
------------------
TABLE LOCK table `production`.`sessions` trx id 1C5264768 lock mode IX
RECORD LOCKS space id 4549 page no 7875876 n bits 104 index `PRIMARY` of table `production`.`sessions` trx id 1C5264768 lock_mode X insert intention waiting
---TRANSACTION 1C526475D, ACTIVE (PREPARED) 452 sec
2 lock struct(s), heap size 376, 1 row lock(s)
MySQL thread id 1722, OS thread handle 0x7f7fb083d700, query id 77311 localhost 127.0.0.1 root
Trx read view will not see trx with id >= 1C526475E, sees < 1C525BA04
TABLE LOCK table `production`.`sessions` trx id 1C526475D lock mode IX
RECORD LOCKS space id 4549 page no 7875876 n bits 104 index `PRIMARY` of table `production`.`sessions` trx id 1C526475D lock_mode X
----------------------------
END OF INNODB MONITOR OUTPUT

And only one table open:

mysql> SHOW OPEN TABLES FROM production WHERE In_use != 0;
+------------+----------+--------+-------------+
| Database   | Table    | In_use | Name_locked |
+------------+----------+--------+-------------+
| production | sessions |      1 |           0 |
+------------+----------+--------+-------------+
1 row in set (0.00 sec)

This situation stays like this for about 3-4 minutes and then suddenly playback continues.

These issues do not happen on the live DB: we have some locking issues there, but we have never exceeded the innodb_lock_wait_timeout value.

I am most likely missing something obvious, but for the life of me I can't figure it out: why would the replay hang like that, or better yet, why would MySQL remain in this lock state?

The relevant entries in the slow log come from our JEE server:

XA START 0xbe681101606ce8d1676630322c7365727665722c5035313337,0x676630322c7365727665722c50353133372c00,0x4a5453;
insert into sessions (a, b, c, d, e, e, f, g, h, i, j, k, l, m, n, o, p, q) values (0, 682, ...);
XA END 0xbe681101606ce8d1676630322c7365727665722c5035313337,0x676630322c7365727665722c50353133372c00,0x4a5453;

Does Hibernate's transaction handling have anything to do with the way the lock is generated and not released?

Server Specs

  • Ubuntu 12.04.2 LTS
  • percona-server-server-5.5 version 5.5.32-rel31.0-549.precise

Relevant config:

max_connections         = 1500
sort_buffer_size        = 1M
thread_cache_size       = 1000
max_heap_table_size     = 512M
tmp_table_size          = 512M
join_buffer_size        = 67108864
expand_fast_index_creation = ON
open_files_limit        = 65535
table_definition_cache  = 4096
table_open_cache        = 262144
max_allowed_packet      = 16M
thread_stack            = 192K
query_cache_limit       = 1M
query_cache_size        = 512M
thread_concurrency      = 8
query_cache_type        = 1
long_query_time         = 2
log_slave_updates       = 1
expire_logs_days        = 10
max_binlog_size         = 100M

Innodb config:

default_storage_engine   = InnoDB
innodb_file_per_table    = 1
innodb_old_blocks_time   = 1000
innodb_buffer_pool_size  = 163456M
innodb_log_file_size     = 256M
innodb_flush_method      = O_DIRECT
innodb_read_io_threads   = 4
innodb_write_io_threads  = 4
innodb_doublewrite       = FALSE
innodb_flush_log_at_trx_commit = 2

Thanks for any help or experience in this area!

Best way to put checksum on all pages

Posted: 17 Jul 2013 02:18 PM PDT

I have a SQL Server database with the PAGE_VERIFY option set to NONE. When I set this to CHECKSUM, only pages changed from that point onwards get their checksum calculated and stored on the page.

I'm looking for the best way to put checksums on at least all table and index data.

My initial idea would be to drop and recreate all clustered indexes. For tables that have no clustered index I would create one and then drop it again.

That should result in at least one write operation on each data and index page, but it's a bit heavy-handed. Does anyone have a better idea?
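A lighter-touch alternative, assuming SQL Server 2008 or later, is to rebuild each table in place: ALTER TABLE ... REBUILD rewrites the heap or clustered index pages (stamping checksums as it goes), and ALTER INDEX ALL ... REBUILD covers the nonclustered indexes. A sketch that generates the statements for review:

```sql
-- Generate one REBUILD pair per user table; run the output during a
-- maintenance window, since rebuilds are IO- and log-intensive.
SELECT 'ALTER TABLE ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name) + ' REBUILD; ' +
       'ALTER INDEX ALL ON ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name) + ' REBUILD;'
FROM sys.tables  AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id;
```

This avoids temporarily creating clustered indexes on heaps, though it still rewrites every page, which is inherent to getting checksums stamped.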

Start Oracle 12c enterprise manager in Windows 7

Posted: 17 Jul 2013 01:19 PM PDT

I have installed Oracle 12c on Windows 7. The installation was successful, and both the RDBMS and the database I created are working properly.

Then I tried to access Enterprise Manager via the web at https://localhost:5500/em, as the installer told me to do, but it doesn't connect.

If I type netstat I get:

TCP    [::1]:5500     My-PC:49439    TIME_WAIT
TCP    [::1]:5500     My-PC:49440    TIME_WAIT
TCP    [::1]:5500     My-PC:49441    TIME_WAIT
TCP    [::1]:49442    My-PC:5500     TIME_WAIT

I am using a proxy but I disabled it for local locations.

I tried this http://docs.oracle.com/cd/E25054_01/doc.1111/e24473/emctl.htm with no success, because I couldn't find the emctl file or the services it talks about. I have 5 Oracle services running instead.

Thanks in advance.

Autogrowth of database with its file name

Posted: 17 Jul 2013 09:54 AM PDT

I use the query below to get the auto-growth events that occurred for the databases on a SQL Server instance.

DECLARE @trcfilename VARCHAR(1000);
SELECT @trcfilename = path FROM sys.traces WHERE is_default = 1;

SELECT COUNT(*)                AS no_of_autogrowths,
       SUM(duration/(1000*60)) AS time_in_min,
       Filename
FROM ::fn_trace_gettable(@trcfilename, DEFAULT)
WHERE (EventClass = 92 OR EventClass = 93)
GROUP BY Filename;

It outputs the number of auto-growths, the time taken by them, and the logical name of the file. But I want the physical name of the file (the .mdf and .ldf file names) instead of the logical name. I don't know whether I can get the physical name from sys.traces; please help me with an alternate way to do it.
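The default trace only carries the logical file name, but it can be joined to sys.master_files, which holds physical_name, on database id and logical name. A hedged sketch along the lines of the original query:

```sql
DECLARE @trcfilename VARCHAR(1000);
SELECT @trcfilename = path FROM sys.traces WHERE is_default = 1;

SELECT COUNT(*)                  AS no_of_autogrowths,
       SUM(t.duration/(1000*60)) AS time_in_min,
       mf.physical_name          -- full .mdf/.ldf path instead of the logical name
FROM ::fn_trace_gettable(@trcfilename, DEFAULT) AS t
JOIN sys.master_files AS mf
  ON mf.database_id = t.DatabaseID
 AND mf.name        = t.FileName   -- trace FileName is the logical file name
WHERE t.EventClass IN (92, 93)     -- data file / log file auto-grow
GROUP BY mf.physical_name;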

SQLServer audit query that fires a trigger

Posted: 17 Jul 2013 11:30 AM PDT

Using SQL Server, is there a way to audit, from inside a trigger, the SQL that fired it?

I need to know the SQL query that fires a trigger on a database, without using Profiler.

Thanks
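One trick often suggested for this is to read the firing session's input buffer from inside the trigger. A hedged sketch: dbo.SomeTable and the audit table dbo.TriggerAudit are hypothetical, and DBCC INPUTBUFFER returns the outermost batch of the session, not necessarily the exact statement:

```sql
CREATE TRIGGER trg_audit ON dbo.SomeTable
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @buf TABLE (EventType nvarchar(30), Parameters int, EventInfo nvarchar(4000));

    -- EventInfo holds the last batch sent by the session that fired this trigger
    INSERT INTO @buf
    EXEC ('DBCC INPUTBUFFER(@@SPID) WITH NO_INFOMSGS');

    INSERT INTO dbo.TriggerAudit (audit_time, sql_text)
    SELECT GETDATE(), EventInfo FROM @buf;
END
```

The caller needs sufficient permission to run DBCC INPUTBUFFER, which is a consideration if ordinary users fire the trigger.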

Bulk database update/insert from CSV file

Posted: 17 Jul 2013 09:53 AM PDT

I am implementing an application-specific data import feature from one database to another.

I have a CSV file containing, say, 10,000 rows. These rows need to be inserted or updated in the database.

It may be the case that some of the rows are already present in the database, which means they need to be updated; if not present, they need to be inserted.

One possible solution is to read the file line by line, check for the entry in the database, and build insert/update queries accordingly. But this process may take a long time to create and execute those queries. Sometimes my CSV file may have millions of records.

Is there any faster way to implement this feature?
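Assuming MySQL (adjust for your engine), the usual fast path is to bulk-load the CSV into a staging table and then do a single set-based upsert, instead of row-by-row existence checks. A sketch with a hypothetical target(id, name, price) table keyed on id:

```sql
CREATE TABLE staging LIKE target;

LOAD DATA LOCAL INFILE '/tmp/data.csv'
INTO TABLE staging
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;             -- skip the header row

-- One statement inserts new rows and updates the ones that already exist
INSERT INTO target (id, name, price)
SELECT id, name, price FROM staging
ON DUPLICATE KEY UPDATE
    name  = VALUES(name),
    price = VALUES(price);

DROP TABLE staging;
```

The upsert relies on target having a PRIMARY KEY or UNIQUE index to detect duplicates, and the bulk load is typically orders of magnitude faster than per-row round trips.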

Oracle db_name, db_unique_name, and ORACLE_HOME

Posted: 17 Jul 2013 05:02 PM PDT

I'm studying Oracle and I would like to understand the uniqueness constraints that apply when assigning these parameters.

Suppose a machine with a single OS (host) and two different (physical) databases. Are they forced to use different db_name values in the following cases?

  • They share ORACLE_HOME
  • They don't share ORACLE_HOME

How does this change when db_unique_name enters the game? What's its point?

So far, from reading forums and the official docs, I have arrived at the following assumptions:

  • Two databases with the same DB_NAME (without DB_UNIQUE_NAME set) running from the same ORACLE_HOME is not possible, on Linux at least. It is possible with different ORACLE_HOMEs, though highly undesirable, and in that case Windows does not allow it. Can somebody confirm this?

  • It seems standby/production systems usually use databases with the same name, but I am not sure whether you are free to do this only in a shared ORACLE_HOME configuration, a separate one, or both.

Altering the location of Oracle-Suggested Backup

Posted: 17 Jul 2013 09:58 AM PDT

On one database, the Oracle-Suggested Backup scheduled from Enterprise Manager always ends up in the recovery area, despite RMAN configuration showing that device type disk format points elsewhere.

As far as I can see, the scheduled backup job is simply:

run {
  allocate channel oem_disk_backup device type disk;
  recover copy of database with tag 'ORA_OEM_LEVEL_0';
  backup incremental level 1 cumulative
    copies=1 for recover of copy with tag 'ORA_OEM_LEVEL_0' database;
}

Asking RMAN to show all reveals that device type disk is indeed configured to store elsewhere:

CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/s01/backup/PROD11/PROD11_%U';

If I run the script manually, the backup set is placed at the above location; when the script is run from the job scheduler, the backup set goes to the RECO group on ASM.

Why might Oracle still choose to dump the backupset to the db_recovery_file_dest?

Ultimately, how can I change the backup destination?

Database restructure - beneficial?

Posted: 17 Jul 2013 08:09 PM PDT

I have a table for email messages. Then, I have a table that has the message parts.

The Parts table contains field_id, message_id, and data columns.

(At the time I named the table Parts, yet used field_id for the column. Just an FYI.)

So, for example, a field_id of 2 would be the subject of the message. I have parts for subject, date, htmlbody, and textbody. Due to this structure, I run approximately two more queries per email (one for the parts, and another for the email addresses associated with the email) than if I were to push all the data into the messages table. I thought this structure was best, but I'm beginning to think it might be wrong and not best for performance.

My question is: would it be in my best interest to restructure the database? I'd rather not.

I was thinking about moving htmlbody, textbody, subject, and date into the messages table. Another solution would be to grab all the emails and their data from the Parts table in one query: I could grab all the ids in one query and then do an IN(ids) for the second query.

CREATE TABLE IF NOT EXISTS `messages` (
  `id` int(10) NOT NULL AUTO_INCREMENT,
  `user_id` int(10) NOT NULL,
  `account_folder_id` int(10) NOT NULL,
  `hash` varchar(255) NOT NULL,
  `uid` int(10) NOT NULL,
  `seen` tinyint(1) NOT NULL,
  `flagged` tinyint(1) NOT NULL,
  `date_created` int(11) NOT NULL DEFAULT '0',
  `last_modified` int(11) NOT NULL DEFAULT '0',
  PRIMARY KEY (`id`),
  UNIQUE KEY `hash` (`hash`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

CREATE TABLE IF NOT EXISTS `message_parts_data` (
  `id` int(10) NOT NULL AUTO_INCREMENT,
  `message_id` int(10) NOT NULL,
  `field_id` int(10) NOT NULL,
  `data` text NOT NULL,
  `date_created` int(11) NOT NULL DEFAULT '0',
  `last_modified` int(11) NOT NULL DEFAULT '0',
  PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

CREATE TABLE IF NOT EXISTS `fields` (
  `id` int(10) NOT NULL AUTO_INCREMENT,
  `name` text,
  PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

INSERT INTO `fields` (`id`, `name`) VALUES
(1, 'To'),
(2, 'Subject'),
(3, 'Date'),
(4, 'From'),
(5, 'CC'),
(7, 'ReplyTo'),
(8, 'textHtml'),
(9, 'textPlain'),
(11, 'Forward');

Thanks

Error while restoring a Database from an SQL dump

Posted: 17 Jul 2013 08:21 PM PDT

I am extremely new to MySQL and am running it on Windows. I am trying to restore a Database from a dumpfile in MySQL, but I get the following error:

$ mysql -u root -p -h localhost -D database -o < dump.sql
ERROR: ASCII '\0' appeared in the statement, but this is not allowed unless option --binary-mode is enabled and mysql is run in non-interactive mode. Set --binary-mode to 1 if ASCII '\0' is expected. Query: 'SQLite format 3'.

I tried $ mysql -u root -p -h localhost -D database --binary-mode -o < dump.sql, but this gave me ERROR at line 1: Unknown command '\☻'. It is a 500 MB dump file, and when I view its contents in gVim, all I can see is expressions and data that are not comprehensible. Also, when I try to copy content from the file to post here, all I can copy is: SQLite format 3. This seems strange.

MYSQL Timezone support

Posted: 17 Jul 2013 02:21 PM PDT

We have a shared hosting plan, and the host says they do provide MySQL time zone support on it. I can create the time-zone-related tables in our database and populate them with the required data (data from our local MySQL time zone tables). How can I view the syntax and usage of MySQL's CONVERT_TZ function?

Thanks Arun
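For reference, CONVERT_TZ is a built-in native function, so there is no SQL source to view; its signature is CONVERT_TZ(dt, from_tz, to_tz). Numeric offsets always work, while named zones only work once the mysql.time_zone% tables are populated:

```sql
-- Works without the time zone tables (numeric offsets):
SELECT CONVERT_TZ('2013-07-17 12:00:00', '+00:00', '+05:30');
-- -> 2013-07-17 17:30:00

-- Requires the mysql.time_zone% tables to be loaded (returns NULL otherwise):
SELECT CONVERT_TZ('2013-07-17 12:00:00', 'UTC', 'America/New_York');
```

On self-managed servers the tables are typically loaded with mysql_tzinfo_to_sql; on shared hosting you would populate copies of those tables as the host allows.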

replication breaks after upgrading master

Posted: 17 Jul 2013 11:21 AM PDT

I have a replication setup with a 5.1.30 master and a 5.5.16 slave, and replication is working well.

Now I have upgraded the MySQL master to 5.1.47.

As far as I know, we have to turn off binary logging with sql_log_bin=0 before running the mysql_upgrade program, in order to upgrade the replication setup as well.

But the problem here is that the binary log was not turned off while the mysql_upgrade program was running.

The reason, I found, is that in 5.1 sql_log_bin is a session variable, and mysql_upgrade runs in another session.

So how do I upgrade the replication setup along with the server, without any breakage of replication?

Any suggestions are really useful.

Custom sp_who/sp_whoUsers

Posted: 17 Jul 2013 06:57 PM PDT

I need to allow a client in a dev DW SQL 2008 R2 environment to view and kill processes, but I do not want to grant VIEW SERVER STATE to this person (he's a former SQL DBA and is considered a potential internal threat).

When I run the following, it returns one row, as if the user had run the sp themselves with their current permissions.

USE [master]
GO

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

CREATE PROCEDURE [dbo].[usp_who] WITH EXECUTE AS OWNER
AS
BEGIN
    SET NOCOUNT ON;
    EXEC master.dbo.sp_who;
END

Changing the "with execute as" to "self" (I'm a sysadmin) returns the same results. I've also tried the query below instead of calling sp_who, and it still returns only one row.

select * from sysprocesses  

It seems that the context isn't switching, or isn't persisting, throughout the execution of the procedure. And this is to say nothing of how I'm going to allow this person to kill processes.

Does anyone have a solution or some suggestions for this seemingly unique problem?
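Not from the original post, but one commonly suggested pattern for this situation is module signing: sign the procedure with a certificate and grant the server-level permission to a login created from that certificate, so the caller never holds VIEW SERVER STATE directly. A sketch with hypothetical names:

```sql
USE master;
GO
-- Hypothetical certificate name and password:
CREATE CERTIFICATE WhoCert
    ENCRYPTION BY PASSWORD = 'StrongP@ssw0rd!'
    WITH SUBJECT = 'Sign usp_who';
GO
ADD SIGNATURE TO dbo.usp_who
    BY CERTIFICATE WhoCert WITH PASSWORD = 'StrongP@ssw0rd!';
GO
CREATE LOGIN WhoCertLogin FROM CERTIFICATE WhoCert;
GRANT VIEW SERVER STATE TO WhoCertLogin;
-- If killing sessions is later wrapped in a signed procedure as well,
-- KILL requires ALTER ANY CONNECTION:
-- GRANT ALTER ANY CONNECTION TO WhoCertLogin;
```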

Need to suppress rowcount headers when using \G

Posted: 17 Jul 2013 12:21 PM PDT

Is there a command to suppress the rowcount headers and asterisks when using '\G' to execute a SQL statement? I am executing mysql with the -s and --skip-column-name options, but these don't suppress the rowcounts.
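For what it's worth, since \G output is plain text, the "*** N. row ***" separator lines can be filtered out in the shell. A sketch with simulated output (in practice the text would be piped from mysql -s --skip-column-names):

```shell
# Simulated \G output; the real pipeline would be:
#   mysql -s --skip-column-names -e 'SELECT ... \G' | grep -v '^\*'
printf '*************************** 1. row ***************************\n id: 1\nname: foo\n' \
  | grep -v '^\*'
```

This keeps only the column/value lines and drops every separator row.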

How to search whole MySQL database for a particular string

Posted: 17 Jul 2013 03:21 PM PDT

Is it possible to search a whole database's tables (every row and column) to find a particular string?

I have a database named A with about 35 tables. I need to search for the string "hello", and I don't know in which table it is stored. Is this possible?

Using MySQL.

I am a Linux admin and I am not familiar with databases; it would be really helpful if you could explain the query as well.
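One common workaround (my suggestion, not from the post) is to dump the database to a text file and grep it; the table name is visible in the matching INSERT/CREATE statements. A sketch with a simulated dump, since producing dump.sql needs a live server:

```shell
# Real step (needs a running MySQL server; 'A' is the database name):
#   mysqldump -u root -p --extended-insert=FALSE A > dump.sql
# Simulated dump content for illustration:
printf "INSERT INTO t1 VALUES ('say hello');\nINSERT INTO t2 VALUES ('bye');\n" > dump.sql
grep -in "hello" dump.sql    # the matching INSERT line names the table
```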

multivalued weak key in ER database modeling

Posted: 17 Jul 2013 01:21 PM PDT

I was wondering about this since I didn't find any clarification for it. I want to store movies that exist in different formats (DVD, Blu-ray, etc.), where the price and the stock quantity of each format differ, so I came up with this:

(image: proposed ER diagram)

Is this correct from a design perspective? Does this imply redundancy? I don't understand how this will be stored in a table. Would it be better to do it like this:

(image: alternative ER diagram)

Thanks in advance.

EDIT: I'll add some more descriptive information about what I want to store at this point of the design. I want to store information about sales. For each movie the company carries, I need to store the format, price, and stock quantity. I will also need to store customer information: a unique id, name, surname, address, the movies he/she has already bought, and a credit card number. Finally, there will be a basket that temporarily keeps items (let's suppose items other than movies exist too) that the customer wants to buy.
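A sketch of how the multivalued format attribute maps to tables, under assumed names (not the poster's actual schema): one row per (movie, format) combination, with the weak entity identified by the owner's key plus the format.

```sql
CREATE TABLE movie (
    movie_id INT PRIMARY KEY,
    title    VARCHAR(200) NOT NULL
);

CREATE TABLE movie_format (
    movie_id INT          NOT NULL REFERENCES movie(movie_id),
    format   VARCHAR(20)  NOT NULL,   -- 'dvd', 'bluray', ...
    price    DECIMAL(8,2) NOT NULL,
    quantity INT          NOT NULL,
    PRIMARY KEY (movie_id, format)    -- identifying key: owner id + partial key
);
```

Price and quantity live on movie_format, so no movie-level data is repeated per format.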

Microsoft Office Access database engine could not find the object 'tableName'

Posted: 17 Jul 2013 05:21 PM PDT

First, a little background: I am using MS Access to link to tables in an Advantage database. I created a System DSN. In the past, in Access, I've created a new database and, using the external data wizard, successfully linked to tables. Those databases and the linked tables are working fine.

Now I am trying to do the same thing: create a new Access db and link to this same DSN. I get as far as seeing the tables, but after making my selection I get the error: "The Microsoft Office Access database engine could not find the object 'tableSelected'. Make sure the object exists and that you spell its name and the path name correctly."

I've tried creating another data source (system and user) with no luck. Environment is Win XP, Access 2007, Advantage DB 8.1.

Sql Anywhere 11: Restoring incremental backup failure

Posted: 17 Jul 2013 09:54 AM PDT

We want to create remote incremental backups after a full backup. This will allow us to restore in the event of a failure and bring up another machine with as close to real time backups as possible with SQL Anywhere network servers.

We are doing a full backup as follows:

dbbackup -y -c "eng=ServerName.DbName;uid=dba;pwd=sql;links=tcpip(host=ServerName)" c:\backuppath\full

This makes a backup of the database and log files and can be restored as expected. For incremental backups I've tried both live and incremental transaction logs with a renaming scheme if there are multiple incremental backups:

dbbackup -y -t -c "eng=ServerName.DbName;uid=dba;pwd=sql;links=tcpip(host=ServerName)" c:\backuppath\inc

dbbackup -y -l -c "eng=ServerName.DbName;uid=dba;pwd=sql;links=tcpip(host=ServerName)" c:\backuppath\live

However, on applying the transaction logs on restore I always receive an error when applying the transaction logs to the database:

10092: Unable to find table definition for table referenced in transaction log

The transaction log restore command is:

dbeng11 "c:\dbpath\dbname.db" -a "c:\backuppath\dbname.log"  

The error doesn't specify which table it can't find, but this is a controlled test and no tables are being created or dropped. I insert a few rows, then kick off an incremental backup before attempting to restore.

Does anyone know the correct way to do incremental backup and restore on Sql Anywhere 11?

UPDATE: Thinking it might be related to the complexity of the target database, I made a new blank database and network service, added one table with two columns, and inserted a few rows. I made a full backup, then inserted and deleted a few more rows and committed the transactions, then made an incremental backup. This also failed with the same error when applying the incremental transaction log after restoring the full backup.

Edit:

You can follow this link to see the same question with slightly more feedback on SA: http://sqlanywhere-forum.sybase.com/questions/4760/restoring-incrementallive-backup-failure
