Tuesday, April 30, 2013

[how to] Lck_m_s suspended queries after running stored proc


Lck_m_s suspended queries after running stored proc

Posted: 30 Apr 2013 08:01 PM PDT

Hi all, can anyone advise? I am running a cursor inside a stored proc which does a SELECT and an UPDATE.

Then I run a process which may call the same stored proc multiple times concurrently, and queries end up suspended waiting on LCK_M_S locks.

How do I tune the stored proc to resolve this issue? We tried removing the cursor and using a simple SELECT and UPDATE query, but we face the same issue. Please advise, thanks.
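
A minimal sketch of one common mitigation, assuming the database is named MyDatabase (a placeholder) and the workload can tolerate row-versioned reads: read committed snapshot isolation lets readers see the last committed row version instead of queueing on shared (LCK_M_S) locks behind the proc's updates.

ALTER DATABASE MyDatabase
    SET READ_COMMITTED_SNAPSHOT ON
    WITH ROLLBACK IMMEDIATE;   -- kicks out other sessions, so run in a maintenance window

Whether this is appropriate depends on the workload; the blocking chain should still be confirmed first with sys.dm_exec_requests and sys.dm_tran_locks.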

OK to put temp tablespace on volatile storage or to omit it from backups? (PostgreSQL)

Posted: 30 Apr 2013 07:07 PM PDT

I would intuit that it's fine, but I just want to make sure there are no gotchas from a recovery point of view:

If I were to lose my temp tablespace upon system crash, would this prevent proper crash recovery?

Also, if I were to omit the temp tablespace from the base backup, would that prevent proper backup recovery?
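
For context, a minimal sketch of how a dedicated temp tablespace is typically set up, assuming a hypothetical directory /mnt/fastdisk/pg_temp that already exists and is owned by the postgres OS user:

CREATE TABLESPACE temp_space LOCATION '/mnt/fastdisk/pg_temp';

-- point temporary objects and sort spill files at it;
-- can also be set in postgresql.conf instead of per session
SET temp_tablespaces = 'temp_space';

The recovery questions above are about what happens if that directory disappears after a crash or is excluded from the base backup.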

PostgreSQL datasource URL?

Posted: 30 Apr 2013 03:27 PM PDT

My first attempt to use PostgreSQL:

I have the elephant icon on the top right: running on port 5432.

I want to use LibreOffice as an interface:

  • Connect to an existing database: PostgreSQL
  • Datasource URL: ???

What should I put here? The database will be on the same computer where LibreOffice is running. I tried localhost:5432 with no success.
By the way, I have not created the database yet. Can I do it from LibreOffice?

Thank you.

Platform: Mac OSX 10.7 - PostgreSQL 9.2.2 - LibreOffice 4.0
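
For reference, the native PostgreSQL driver in LibreOffice Base generally expects a libpq-style connection string in the Datasource URL field rather than host:port alone; a hedged example, where mydb is a placeholder for a database that already exists on the server:

dbname=mydb host=localhost port=5432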

Deleted a file accidentally, I need to remove its entry, I don't need its data because it contains only indexes

Posted: 30 Apr 2013 05:29 PM PDT

I have a C# program that works with a big database, and I need the database not to be the bottleneck of my program. To increase performance, I created a filegroup with a single data file located on a ramdisk. I accidentally changed the size of the ramdisk and the file was deleted. I thought that if I removed all of the objects in that filegroup, I could rebuild those indexes later, but when I try to remove the filegroup or the file, or even when I try to add any file or filegroup, the following message appears:

Microsoft SQL Server Management Studio

Alter failed for Database 'xxxxxx'. (Microsoft.SqlServer.Smo)

For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=10.50.2500.0+((KJ_PCU_Main).110617-0038+)&EvtSrc=Microsoft.SqlServer.Management.Smo.ExceptionTemplates.FailedOperationExceptionText&EvtID=Alter+Database&LinkId=20476

------------------------------ ADDITIONAL INFORMATION:

An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)

The operating system returned error 21(failed to retrieve text for this error. Reason: 15105) to SQL Server during a write at offset 0x0000002c668000 in file 'R:\DBIndexes.mdf'. Additional messages in the SQL Server error log and system event log may provide more detail. This is a severe system-level error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online. (Microsoft SQL Server, Error: 823)

For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=823&LinkId=20476

*UPDATE*

When I detached the database, I couldn't re-attach it. It now says: Unable to open the physical file "R:\DBIndexes.mdf". Operating system error 2: "2(failed to retrieve text for this error. Reason: 15105)".

It was working fine before I detached it; the only problem was that orphaned file entry!

Is there a behavior change between SQL Server 2000 and SQL Server 2008 R2 OPTION (FAST 1)?

Posted: 30 Apr 2013 04:43 PM PDT

This is question 2 of 2 related to the FAST query hint. Background Information (as an aside, I'm not trying to garner more rep, I'm trying to be true to only asking one question per question).

Our legacy ERP was running on SQL Server 2000 EE and now it's on 2008 R2 EE. We're noticing many more blocked commands in the 2008 R2 environment. The 2008 R2 environment is running with trace flags 4199 and 1119.

I noticed a difference between the MSDN documentation on hints in 2000 vs. 2008 R2, where the FAST hint description is expanded to note that the query will continue to be processed once the specified n rows have been returned. Is this just a more thorough document, is this a change in behavior for this hint over the years, or something different?

How does OPTION (FAST 1) actually interact with a client?

Posted: 30 Apr 2013 05:02 PM PDT

This is question 1 of 2 related to OPTION (FAST 1);

We've just upgraded our ERP database from SQL 2000 EE to 2008 R2 EE and we've been noticing increased blocking in the database. I've narrowed it down to what I believe to be the offending statement in the vendor's code which is:

SELECT MAX(column)
FROM [table]
WHERE <condition>
OPTION (FAST 1);

The spid leaves an open transaction and takes a lock on the table, blocking all other clients. However, the calling client no longer seems to be interacting with the server to tell the server that it's received the data to end the session.

Reading the documentation on Query Hints, I saw this statement

FAST number_rows

Specifies that the query is optimized for fast retrieval of the first number_rows. This is a nonnegative integer. After the first number_rows are returned, the query continues execution and produces its full result set.

This makes me wonder: if the client has somehow broken communication, would the server keep processing the full result set after the first n rows are returned and leave the transaction open? The process is an internal one, so I can't really watch an end user execute the session, and this does not happen every time the internal process runs. However, it is only ever used by the internal process.

Having read Remus' answer on SO, it seems like the hint is overkill for the simplicity of the query. Looking at the query, if they're receiving more than one result from an ungrouped MAX then something's very fishy.

So, as I prepare to work with the vendor, I was wondering if I could begin to accurately pin our blocking issues on the fact that this query hint is being used.

Please feel free to edit/request edits as I know this may actually be unclear.

Are shards distributed randomly across machines in MongoDB or are they in order?

Posted: 30 Apr 2013 02:17 PM PDT

Are shards distributed randomly across machines in MongoDB or are they in order?

For example, if I had a MongoDB cluster that was sharded across two machines and my shard key was a person's name, would names starting with 'a-m' be on the first machine (or replica set) and names starting with 'n-z' be on the second machine?

Thanks so much!

UPDATE table based on the same table

Posted: 30 Apr 2013 02:02 PM PDT

I have a table with product descriptions, and each product description has a product_id and a language_id. What I want to do is update all of the rows with a language_id of 2 so that their fields match those of the row with the same product_id and a language_id of 1.

So far I've tried the following query, but I'm getting errors indicating that MySQL doesn't want to update a table where the table's also being used in the subquery.

UPDATE products_description AS pd
SET pd.products_seo = (
        SELECT pd2.products_seo
        FROM products_description AS pd2
        WHERE pd2.language_id = 1
          AND pd2.products_id = pd.products_id
    )
WHERE pd.language_id <> 1

Is there a "simple" way around this limitation in MySQL? Or any "tricks"? I'm a little surprised that my query doesn't work, as it seems logical.
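
One widely used workaround is MySQL's multi-table UPDATE syntax, joining the table to itself instead of using a correlated subquery; a sketch assuming the columns shown above:

UPDATE products_description AS pd
JOIN products_description AS src
    ON  src.products_id = pd.products_id
    AND src.language_id = 1            -- the source-language row
SET pd.products_seo = src.products_seo
WHERE pd.language_id <> 1;             -- every other language row is overwritten

Rows without a matching language_id = 1 sibling are simply left untouched by the inner join, rather than being set to NULL.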

Publishing stored proc execution in transactional replication

Posted: 30 Apr 2013 02:29 PM PDT

In SQL 2008 R2, I am going to update a table in a way that will affect 25 million rows; the table is currently replicated (transactional replication). To minimize the impact on replication, can I create a stored procedure, wrap the update statement inside it, and add that stored procedure to the publication?

By doing this, when the stored proc is executed it will replicate the execution of the stored procedure. My question is: how does SQL Server know to replicate only the stored proc execution (i.e. the actual EXEC myupdateproc T-SQL command) instead of the underlying table data being updated on the publisher (i.e. actually replicating 25 million row updates)?
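
For reference, publication of stored procedure execution is configured per article with the 'proc exec' article type; a minimal sketch, where the publication and procedure names are placeholders:

EXEC sp_addarticle
    @publication   = N'MyPublication',
    @article       = N'MyUpdateProc',
    @source_owner  = N'dbo',
    @source_object = N'MyUpdateProc',
    @type          = N'proc exec';   -- replicate the EXEC call itself, not the row changes it produces

With this article type the EXEC statement is what gets forwarded, so the subscriber re-runs the update locally instead of receiving 25 million individual row changes.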

SQL Server indexes file damaged, was on ramdisk!

Posted: 30 Apr 2013 03:17 PM PDT

I have put some of my indexes into a filegroup that contains one file, and that file is on a ramdisk; performance was on fire! But the file was deleted by mistake (the file contains only index information). I tried to delete that file from the database properties, but an error occurred:

Microsoft SQL Server Management Studio

Alter failed for Database 'xxxxxx'. (Microsoft.SqlServer.Smo)

For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=10.50.2500.0+((KJ_PCU_Main).110617-0038+)&EvtSrc=Microsoft.SqlServer.Management.Smo.ExceptionTemplates.FailedOperationExceptionText&EvtID=Alter+Database&LinkId=20476

------------------------------ ADDITIONAL INFORMATION:

An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)


The operating system returned error 21(failed to retrieve text for this error. Reason: 15105) to SQL Server during a write at offset 0x0000002c668000 in file 'R:\DBIndexes.mdf'. Additional messages in the SQL Server error log and system event log may provide more detail. This is a severe system-level error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online. (Microsoft SQL Server, Error: 823)

For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=823&LinkId=20476

I have no problem rebuilding the indexes, but I need to fix the database!

I deleted all of the indexes stored in that file and tried to delete the filegroup, but the same error occurred.


*UPDATE*

When I detached the database, I couldn't re-attach it. It now says: Unable to open the physical file "R:\DBIndexes.mdf". Operating system error 2: "2(failed to retrieve text for this error. Reason: 15105)".

It was working fine before I detached it; the only problem was that orphaned file entry!

Multithreading caching in SQL CLR

Posted: 30 Apr 2013 12:40 PM PDT

Is there any mechanism available to a SQLCLR assembly for caching that will still work when the assembly is registered as "safe"?

  • The cache needs to be thread-safe, so it can be shared by multiple SQL Server pids.
  • The cache needs to be held in memory for performance - tempdb is not fast enough.
  • The assembly can't be registered as "unsafe".

See this post on StackOverflow for further details. (Sorry for the cross-post, not sure how else to rephrase the question for this site).

Identical subquery optimisation in an update

Posted: 30 Apr 2013 11:46 AM PDT

update activitybooking set `submitted`='1' where id='958'
    and (select SUM(pool1_count) from
            (select pool1_count from `activitybooking` where `abt`='12' and
                (id='958' or `submitted`='1' or `submitted`='3' or `submitted`='8')) as temp_pool1_count) < 10
    and (select SUM(pool2_count) from
            (select pool2_count from `activitybooking` where `abt`='12' and
                (id='958' or `submitted`='1' or `submitted`='3' or `submitted`='8')) as temp_pool2_count) < 5
    and (select SUM(pool3_count) from
            (select pool3_count from `activitybooking` where `abt`='12' and
                (id='958' or `submitted`='1' or `submitted`='3' or `submitted`='8')) as temp_pool3_count) < 20

The statement above has three identical subqueries and seems to execute them independently. How can I best rewrite the update to perform the subquery only once?

(The idea of this query is to only perform the update if doing so would not cause the sum of pool1_count to exceed the maximum allowed pool1_count and the same for pool2, pool3 ....)
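
One possible rewrite, assuming the columns implied above, computes all three sums once in a derived table and joins it into a multi-table UPDATE; MySQL materializes the derived table, so activitybooking is aggregated a single time:

UPDATE activitybooking AS ab
CROSS JOIN (
    SELECT SUM(pool1_count) AS p1,
           SUM(pool2_count) AS p2,
           SUM(pool3_count) AS p3
    FROM activitybooking
    WHERE abt = '12'
      AND (id = '958' OR submitted IN ('1', '3', '8'))
) AS totals
SET ab.submitted = '1'
WHERE ab.id = '958'
  AND totals.p1 < 10
  AND totals.p2 < 5
  AND totals.p3 < 20;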

PostgreSQL stored function that returns arbitrary resultset

Posted: 30 Apr 2013 03:10 PM PDT

I would like to write a PostgreSQL stored function that essentially behaves like the stored procedures I know and love from MSSQL and MySQL, where I can just wrap a query that takes no parameters and have it return that resultset, without having to specify the format of the output and change that definition every time I update the query. Is this even possible in PostgreSQL?

I've tried the following using PostgreSQL 9.2:

CREATE OR REPLACE FUNCTION test() RETURNS SETOF record

Which gives me the following error:

ERROR: a column definition list is required for functions returning "record"

I've also tried:

CREATE OR REPLACE FUNCTION test() RETURNS table ()

but apparently that's invalid syntax.
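
One workaround that avoids declaring an output column list is to return a cursor rather than rows; a minimal sketch, where some_table stands in for the real query:

CREATE OR REPLACE FUNCTION test()
RETURNS refcursor AS $$
DECLARE
    c refcursor := 'test_cursor';            -- name it so the caller can FETCH from it
BEGIN
    OPEN c FOR SELECT * FROM some_table;     -- any query; no fixed column list required
    RETURN c;
END;
$$ LANGUAGE plpgsql;

-- usage (must stay inside one transaction so the cursor remains open):
BEGIN;
SELECT test();
FETCH ALL IN test_cursor;
COMMIT;

The trade-off is that callers have to FETCH from the cursor instead of treating the function like a table in a plain SELECT.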

Restore SQL Server 2008 R2 Express from Description

Posted: 30 Apr 2013 06:09 PM PDT

When backing up SQL Server we have the opportunity to input a description of the backup, yet none of the Microsoft tools will show that description when restoring. Is there a way for me to review the descriptions of the backups so I can select the right one?

Thanks.
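
The description is kept in the backup history tables in msdb (and in the backup file header), so one way to review it is a query like the sketch below; the TOP value is arbitrary and the disk path is a placeholder:

SELECT TOP (50)
       bs.database_name,
       bs.backup_start_date,
       bs.name,
       bs.description,               -- the free-text description entered at backup time
       bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf
    ON bmf.media_set_id = bs.media_set_id
ORDER BY bs.backup_start_date DESC;

-- or, for a single backup file:
RESTORE HEADERONLY FROM DISK = N'C:\Backups\MyDatabase.bak';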

Using dynamic sql inside Oracle stored procedure

Posted: 30 Apr 2013 09:38 AM PDT

Assuming I have the following procedure

CREATE PROCEDURE foo (table1_id  IN TABLE1.table1_id%type,
                      table1_val IN TABLE1.table1_value%type)
AS
  SQL_UPDATE VARCHAR2(500) := 'UPDATE TABLE1 SET table1_value =:1 WHERE table1_id = :2';
BEGIN
  --.....
  --1 :
    EXECUTE IMMEDIATE SQL_UPDATE USING foo.table1_val, foo.table1_id;
  --2 :
    UPDATE TABLE1 SET table1_value = foo.table1_val WHERE table1_id = foo.table1_id;
END;

Besides the style/readability, is there any performance penalty for using the dynamic query (1) compared to the static one (2) in such cases (I mean when dynamic SQL is absolutely avoidable)?

Thank you.

Difference between accessing SQL Server by instance name and cluster name [closed]

Posted: 30 Apr 2013 05:12 PM PDT

I have installed failover clustering feature on two servers with Windows Server 2012 and then I installed SQL Server 2012 on both of them. I also enabled SQL Server 2012 AlwaysOn feature. Now I can access each node by instance name. I can also connect using failover cluster name. What is the difference between accessing SQL Server by instance name and cluster name?

SQL Server: Worse performance on a new server

Posted: 30 Apr 2013 02:46 PM PDT

We've been on a dedicated server (single quad-core, 6 GB RAM) and are moving to a new dedicated server (2x hex-core, 32 GB RAM). Both are Windows Server 2008, SQL Server 2008. The performance on the new server is slightly worse than the old, slower server.

In testing, our ASP.NET application runs 10 - 20% slower. Running individual expensive queries with STATISTICS IO and STATISTICS TIME shows 10 - 20% greater elapsed time on the new server. SQL Query Profile shows higher CPU usage on expensive queries.

Task Manager on the new server shows sqlserver.exe is consuming 22 GB of RAM, but the CPU values always stay very low.

I've updated all statistics, rebuilt or reorganized indexes, etc. Execution plans should be cached on the new server at this point, given the amount of testing I've done. If there are any missing indexes (I don't think there are), they affect the old and new servers equally. The new server has a restored backup of the same data that is on the old one.

I'd expected that the performance on the new server would be better, but of more concern is load. If the old server is performing better even under load, then what will happen when this new, slightly worse server has to take that load?

What else could I be missing here?

EDIT: MAXDOP set to 6.

Old server has OS, databases, and tempdb's on the same physical drives (RAID 10). Total of 4 15k 3 Gb/s 3.5 inch SAS. New server has three drive sets: OS on RAID 1, database on RAID 10, tempdb on RAID 5. Total of 9 15K 6 Gb/s 2.5 Inch SAS.

Old server has 1 x Intel Xeon E5620 2.40 GHz Quad-Core 8 Threads (w/ H/T). New server has 2 x Intel Xeon E5-2640 2.5 GHz Six-Core 12 Threads (w/ H/T).

Issues converting MyISAM table to InnoDB (auto column issue)

Posted: 30 Apr 2013 09:51 AM PDT

I'm having issues trying to convert a table from MyISAM to InnoDB in MySQL 5.6.

The following is the table dump:

--
-- Table structure for table `companies`
--

DROP TABLE IF EXISTS `companies`;
/*!40101 SET @saved_cs_client     = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE `companies` (
  `uid` int(20) NOT NULL,
  `cid` int(20) NOT NULL AUTO_INCREMENT,
  `cname` varchar(500) NOT NULL,
  `rfc` varchar(20) NOT NULL,
  `address` varchar(1000) NOT NULL,
  `dbUseExternal` tinyint(1) NOT NULL DEFAULT '0',
  `dbHost` varchar(50) NOT NULL,
  `dbPort` varchar(50) NOT NULL,
  `dbUser` varchar(50) NOT NULL,
  `dbPass` varchar(50) NOT NULL,
  `dbSSL` varchar(50) NOT NULL,
  `dbDriver` varchar(50) NOT NULL,
  `dbName` varchar(50) NOT NULL,
  `status` int(10) NOT NULL,
  PRIMARY KEY (`uid`,`cid`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;
/*!40101 SET character_set_client = @saved_cs_client */;

It works as MyISAM. But, if I try to convert it to InnoDB (or if I try editing this dump to insert it on the command line as a sql file), I get the following error:

Incorrect table definition; there can be only one auto column and it must be defined as a key

I understand that error, or at least I thought I did. I'm not using more than one AUTO_INCREMENT column, and it is part of the primary key.

Also, the information I've found about this error always involves an obvious missing key or a duplicate AUTO_INCREMENT definition. One more thing I see commonly stated is that the rule is the same for MyISAM and InnoDB.

So, why does it work for MyISAM and not for InnoDB?

Thanks in advance for any comments.

Francisco
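
A likely direction for a fix, sketched under the assumption that the error comes from cid not being the leftmost column of any index: InnoDB requires the AUTO_INCREMENT column to be the first column of some key, while MyISAM is content with it being the second column of a composite primary key. Adding a key that leads with cid while switching engines usually resolves it:

ALTER TABLE `companies`
    ADD KEY `idx_cid` (`cid`),   -- gives InnoDB an index that starts with the auto column
    ENGINE = InnoDB;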

Master updates Slave but not Master itself

Posted: 30 Apr 2013 12:46 PM PDT

I have a master/slave setup in which applications point only to the master. Yesterday the master crashed; the front indication panel showed "multi bit error on dimm detected" in orange.

After the reboot the master went through recovery and came up, and the applications started using it normally, but the error message is still shown on the front indication panel. Now, however, the slave is stuck because of a primary key constraint violation on one table.

The problem I face is that the master has table data only up to what exists in the master's binlog, while the slave has table data that is not in the master's binlog. Below are the table details comparing the master table and the slave table.

Last info in binlog timestamp is 2013-04-29 02:13:11
System shut down timestamp is 2013-04-29 02:54

On Master
=========

mysql> select * from audit where id=11298907;
+----------+----------------+---------------------+-----------+
| id       | ipaddress      | dated               | msisdn    |
+----------+----------------+---------------------+-----------+
| 11298907 | 82.25.226.183  | 2013-04-29 02:13:11 | 998282821 |
+----------+----------------+---------------------+-----------+

mysql> select * from audit where id > 11298907 limit 1;
+----------+----------------+---------------------+-----------+
| id       | ipaddress      | dated               | msisdn    |
+----------+----------------+---------------------+-----------+
| 11298908 | 82.25.226.183  | 2013-04-29 04:31:13 | 992828111 |
+----------+----------------+---------------------+-----------+

On Slave
========

mysql> select * from audit where id=11298907;
+----------+----------------+---------------------+-----------+
| id       | ipaddress      | dated               | msisdn    |
+----------+----------------+---------------------+-----------+
| 11298907 | 82.25.226.183  | 2013-04-29 02:13:11 | 998282821 |
+----------+----------------+---------------------+-----------+

mysql> select * from audit where id > 11298907 limit 1;
+----------+----------------+---------------------+-----------+
| id       | ipaddress      | dated               | msisdn    |
+----------+----------------+---------------------+-----------+
| 11298908 | 82.25.226.183  | 2013-04-29 02:13:12 | 762616173 |
+----------+----------------+---------------------+-----------+

Both servers are NTP synced to GST, and they stayed in sync after the master reboot too. It took almost 2 hours to bring the master server back up. How can the slave have data that the master is not aware of? Has anyone come across such a thing?

InnoDB tuning with 1G of ram limit

Posted: 30 Apr 2013 09:42 AM PDT

I am trying to calculate variable moving averages crossover with variable dates.

That is: I want to prompt the user for 3 values and 1 option. The input is through a web front end so I can build/edit the query based on input or have multiple queries if needed.

X = 1st moving average term (N day moving average. Any number 1-N)
Y = 2nd moving average term (N day moving average. Any number 1-N)
Z = Amount of days back from present to search for the occurrence of:
option = Over/Under: (> or <. X passing over Y, or X passing under Y)

X day moving average passing over OR under Y day moving average within the past Z days.

My database is structured:

daily_data

id  stock_id  date  adj_close  

And:

stocks

stock_id  symbol  

I have a btree index on:

daily_data(stock_id, date, adj_close)

stock_id

We are creating an open stock analytic system where users can perform trend analysis. I have a database containing 3500 stocks and their price histories going back to 1970.

This query will be running every day in order to find stocks that match certain criteria for example:

10 day moving average crossing over 20 day moving average within 5 days

20 day crossing UNDER 10 day moving average within 5 days

55 day crossing UNDER 22 day moving average within 100 days

But each user may be interested in a different analysis so I cannot just store the moving average with each row, it must be calculated.

The following query has yet to return anything to me after MANY hours of being run:

SET @X:=5;
SET @Y:=3;
set @Z:=10;
set @option:='under';

select stock_id from (
   SELECT stock_id,
      datediff(current_date(), date) days_ago,
      adj_close,
      (
      SELECT
           AVG(adj_close) AS moving_average
      FROM
           daily_data T2
      WHERE
           (
                SELECT
                     COUNT(*)
                FROM
                     daily_data T3
                WHERE
                     date BETWEEN T2.date AND T1.date
           ) BETWEEN 1 AND @X
      ) move_av_1,
      (
      SELECT
           AVG(adj_close) AS moving_average
      FROM
           daily_data T2
      WHERE
           (
                SELECT
                     COUNT(*)
                FROM
                     daily_data T3
                WHERE
                     date BETWEEN T2.date AND T1.date
           ) BETWEEN 1 AND @Y
      ) move_av_2
   FROM
      daily_data T1
   where
      datediff(current_date(), date) <= @z
) x
where
   case when @option ='over'  and move_av_1 > move_av_2 then 1 else 0 end +
   case when @option ='under' and move_av_2 > move_av_1 then 1 else 0 end
   > 0
order by stock_id, days_ago

I am currently running MySQL with InnoDB but am open to other engines (Postgres/Oracle maybe?) if there would be a significant speed increase.

The main problem is I don't have much hardware available for this.

Currently I am trying to run it on a virtual private server with 1 GB of RAM, running Ubuntu.

I have my own desktop I could try running this on (8-core CPU / 16 GB RAM / SSD), but it's Windows only.

Any guidance on how to improve the query / what to run it on would be very helpful.

Sharded key-value store using MongoDB

Posted: 30 Apr 2013 10:51 AM PDT

Would like to set up a key-value store that is sharded across multiple machines.

We are currently using MongoDB, is there a reason why we shouldn't use MongoDB for this purpose?

We also use Redis, however for this use case, we would like to use the hard drive and Redis is in-RAM only.

How can I set Timeout by View, User or Role?

Posted: 30 Apr 2013 11:54 AM PDT

For SQL 2008 R2, data views.

Looking for a timeout control using Microsoft SQL Server Management Studio (SSMS) that is NOT at the Server Level, and/or is NOT dependent on query timeout as set by application initiating the query.

I have been unable to find timeout controls by View, User or Role using SSMS.

There are server level timeouts (remote query timeout http://technet.microsoft.com/en-us/library/ms189040.aspx ), but as I understand it, that would also impact the main application's use of the database, which lives on a different server (the main application gets to define its own limits).

I found DBPROP_COMMANDTIMEOUT http://msdn.microsoft.com/en-us/library/windows/desktop/ms712980(v=vs.85).aspx but not seeing any way to control it by View. And this http://serverfault.com/questions/242300/set-command-timeout-from-sql-server-2005-rather-than-through-the-code says "Command timeouts are always set by the client"

Considerations: These are connections for reporting from the production database of a major application, where the archived datasets (midnight last night) are not sufficiently current. We have a requirement to allow some access; we have a responsibility to not let that access adversely impact the application.

mysql optimize table crash

Posted: 30 Apr 2013 12:52 PM PDT

When I try OPTIMIZE TABLE `table` on a MyISAM table of about 300 MB, the table ends up crashed and must be repaired. What could cause this problem? The same problem occurs on other tables over 300 MB.
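
The underlying cause would need to be confirmed from the MySQL error log, but a check-and-repair pass is the usual first step; a sketch:

CHECK TABLE `table` EXTENDED;    -- reports why MyISAM marked the table as crashed
REPAIR TABLE `table`;            -- USE_FRM is a last resort only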

Is it possible to have extra tables in a Slave with MySQL Replication

Posted: 30 Apr 2013 01:04 PM PDT

As my title mentions, I have a master and a slave database.

The master is for operational data and my slave is mainly for reporting.

The issue is that I need to create extra tables on the reporting side that can't exist on the master, but with the way my replication is currently set up (the simplest configuration mentioned in the official docs), this breaks the replication.

How could I add tables on the slave without the master caring about it? Is it even possible?
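
Slave-only tables generally work as long as nothing replicated from the master ever references them. To be defensive about it, a sketch of a slave-side filter, assuming the reporting tables live in their own schema named reporting:

[mysqld]
# this slave skips replicated events touching the reporting-only schema
replicate-wild-ignore-table = reporting.%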

Replication issue - CREATE SELECT alternative?

Posted: 30 Apr 2013 02:04 PM PDT

I have a MySQL 5.1 slave for our BI team.

They need to run some CREATE ... SELECT statements with big SELECT queries (several million rows).

As CREATE ... SELECT is DDL, if replication tries to update rows in the same tables the SELECT reads from, replication is blocked until the CREATE ... SELECT finishes and releases its locks.

Do you know a good non-blocking alternative to those CREATE ... SELECT statements?

I thought of SELECT ... INTO OUTFILE followed by LOAD DATA INFILE, but that would fill up our disks, as BI guys like to do... :)

Max.

Unable to change engine to MyISAM in MySQL

Posted: 30 Apr 2013 09:58 AM PDT

I have installed MySQL 5.6.10 on Mac OS X 10.6.5. My issue is that MySQL is using InnoDB as its default engine.

I have checked which engines are supported in MySQL:

show engines\G
*************************** 1. row ***************************
      Engine: FEDERATED
     Support: NO
     Comment: Federated MySQL storage engine
Transactions: NULL
          XA: NULL
  Savepoints: NULL
*************************** 2. row ***************************
      Engine: MRG_MYISAM
     Support: YES
     Comment: Collection of identical MyISAM tables
Transactions: NO
          XA: NO
  Savepoints: NO
*************************** 3. row ***************************
      Engine: MyISAM
     Support: YES
     Comment: MyISAM storage engine
Transactions: NO
          XA: NO
  Savepoints: NO
*************************** 4. row ***************************
      Engine: BLACKHOLE
     Support: YES
     Comment: /dev/null storage engine (anything you write to it disappears)
Transactions: NO
          XA: NO
  Savepoints: NO
*************************** 5. row ***************************
      Engine: CSV
     Support: YES
     Comment: CSV storage engine
Transactions: NO
          XA: NO
  Savepoints: NO
*************************** 6. row ***************************
      Engine: MEMORY
     Support: YES
     Comment: Hash based, stored in memory, useful for temporary tables
Transactions: NO
          XA: NO
  Savepoints: NO
*************************** 7. row ***************************
      Engine: ARCHIVE
     Support: YES
     Comment: Archive storage engine
Transactions: NO
          XA: NO
  Savepoints: NO
*************************** 8. row ***************************
      Engine: InnoDB
     Support: DEFAULT
     Comment: Supports transactions, row-level locking, and foreign keys
Transactions: YES
          XA: YES
  Savepoints: YES
*************************** 9. row ***************************
      Engine: PERFORMANCE_SCHEMA
     Support: YES
     Comment: Performance Schema
Transactions: NO
          XA: NO
  Savepoints: NO

I want to change it to MyISAM, so I have changed the setting in my.cnf:

[mysqld]
default-storage-engine=MyISAM

After that I restarted MySQL from System Preferences. But if I check the engines again and create a table, it is still being created with InnoDB, not MyISAM.

Can anyone tell me how would I change it to MyISAM?

Here is my my.cnf file content:

# For advice on how to change settings please see
# http://dev.mysql.com/doc/refman/5.6/en/server-configuration-defaults.html

[mysqld]
default-storage-engine=MyISAM

# Remove leading # and set to the amount of RAM for the most important data
# cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
# innodb_buffer_pool_size = 128M

# Remove leading # to turn on a very important data integrity option: logging
# changes to the binary log between backups.
# log_bin

# These are commonly set, remove the # and set as required.
# basedir = .....
# datadir = .....
# port = .....
# server_id = .....
# socket = .....

# Remove leading # to set options mainly useful for reporting servers.
# The server defaults are faster for transactions and fast SELECTs.
# Adjust sizes as needed, experiment to find the optimal values.
# join_buffer_size = 128M
# sort_buffer_size = 2M
# read_rnd_buffer_size = 2M

# sql_mode=STRICT_TRANS_TABLES
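
A quick sanity check that the running server actually read this file and applied the setting; the SET GLOBAL line is a temporary override that only affects new connections:

SHOW VARIABLES LIKE 'default_storage_engine';   -- should report MyISAM if my.cnf was picked up

SET GLOBAL default_storage_engine = MyISAM;     -- test override without editing my.cnf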

How to drop a DB2 instance when the instance owner was removed

Posted: 30 Apr 2013 10:04 AM PDT

This is a real sticky situation. I was handed over a machine (running an AIX 7.1), and my first task was to re-install DB2 server on it. But someone before me had conveniently removed an instance owner account, and probably recreated it. Now, the problem is this:

1) When I try to uninstall DB2, it says the instance is active and has to be dropped first.

2) When I try to drop this instance, DB2 says there is no such instance.

I am quite new to DB2 administration and not sure how to proceed here. Any help is appreciated.

Thanks

How to Convert Horizontal to Vertical Array?

Posted: 30 Apr 2013 11:04 AM PDT

I need to create a query (suitable for Standard Edition) that has data from multiple columns (Columns 1-6 with corresponding Date Started and Date Completed data) displayed vertically, but also has the column name in the preceding column to identify it, along with other data (Record Number, Status).

Sample data:

+--------------+--------+------------+-------------+---------------+
| RecordNumber | Status | ColumnName | DateStarted | DateCompleted |
+--------------+--------+------------+-------------+---------------+
|            1 | Open   | Column 1   | 2012-01-01  | 2012-02-01    |
|            2 | Hold   | Column 2   | 2012-01-03  | 2012-03-01    |
|            1 | Open   | Column 3   | 2012-02-05  | 2012-04-06    |
|            3 | Closed | Column 4   | 2012-05-10  | 2012-07-25    |
|            2 | Hold   | Column 5   | 2012-03-09  | 2012-04-01    |
|            1 | Open   | Column 6   | 2012-10-10  | 2012-12-12    |
+--------------+--------+------------+-------------+---------------+

DECLARE @Data AS TABLE
(
    RecordNumber    integer NOT NULL,
    [Status]        varchar(10) NOT NULL,
    ColumnName      varchar(10) NOT NULL,
    DateStarted     date NOT NULL,
    DateCompleted   date NOT NULL
);

INSERT @Data
(
    RecordNumber,
    [Status],
    ColumnName,
    DateStarted,
    DateCompleted
)
VALUES
    (1, 'Open', 'Column 1', '20120101', '20120201'),
    (2, 'Hold', 'Column 2', '20120103', '20120301'),
    (1, 'Open', 'Column 3', '20120205', '20120406'),
    (3, 'Closed', 'Column 4', '20120510', '20120725'),
    (2, 'Hold', 'Column 5', '20120309', '20120401'),
    (1, 'Open', 'Column 6', '20121010', '20121212');

How to script out push subscription creation at the subscriber?

Posted: 30 Apr 2013 04:05 PM PDT

I'm trying to set up a push subscription to a SQL Server publication from the subscriber.

I could set up the subscription at the publisher using the Replication Wizard in Management Studio. However, I would prefer to script the process relative to the subscriber so I can automate the deployment of a new SQL Server subscriber instance.

Initially, I'm happy to prompt for the name of the publisher before deployment. If I can get this working, I will look for a way to inject the correct value for my environment automatically.

What is a simple way to do this for a SQL Server instance that has to create multiple subscriptions at different publishers?

I'm open to using any supported SQL Server scripting solution: SMO, RMO, Sqlcmd, WMI, PSDrive, even pure T-SQL.

I've attempted to solve this problem in two ways. The first is a complete solution using T-SQL, but it involves some manual steps.

Using T-SQL

I have a manual solution in T-SQL. The solution is based on the output of the Management Studio Replication Script Generator output.

Using Management Studio, I run the following script to generate a T-SQL script that I can run at the publisher:

PRINT N'
EXECUTE MyDatabase.dbo.sp_addsubscription
  @publication = N''MyPublication'',
  @subscriber = ''' + CAST(SERVERPROPERTY('ServerName') AS SYSNAME) + ''',
  @destination_db = ''SubscriberDatabase'',
  @subscription_type = N''Push'',
  @sync_type = N''automatic'',
  @article = N''all'',
  @update_mode = N''read only'',
  @subscriber_type = 0;

EXECUTE MyDatabase.dbo.sp_addpushsubscription_agent
  @publication = N''MyPublication'',
  @subscriber = ''' + CAST(SERVERPROPERTY('ServerName') AS SYSNAME) + ''',
  @subscriber_db = ''SubscriberDatabase'',
  @job_login = null,
  @job_password = null,
  @subscriber_security_mode = 1,
  @frequency_type = 64,
  @frequency_interval = 1,
  @frequency_relative_interval = 1,
  @frequency_recurrence_factor = 0,
  @frequency_subday = 4,
  @frequency_subday_interval = 5,
  @active_start_time_of_day = 0,
  @active_end_time_of_day = 235959,
  @active_start_date = 0,
  @active_end_date = 0,
  @dts_package_location = N''Distributor'';';

On the MYSUBSCRIBER instance, the output would look like this:

EXECUTE MyDatabase.dbo.sp_addsubscription
  @publication = N'MyPublication',
  @subscriber = 'MYSUBSCRIBER',
  @destination_db = 'SubscriberDatabase',
  @subscription_type = N'Push',
  @sync_type = N'automatic',
  @article = N'all',
  @update_mode = N'read only',
  @subscriber_type = 0;

EXECUTE MyDatabase.dbo.sp_addpushsubscription_agent
  @publication = N'MyPublication',
  @subscriber = 'MYSUBSCRIBER',
  @subscriber_db = 'SubscriberDatabase',
  @job_login = null,
  @job_password = null,
  @subscriber_security_mode = 1,
  @frequency_type = 64,
  @frequency_interval = 1,
  @frequency_relative_interval = 1,
  @frequency_recurrence_factor = 0,
  @frequency_subday = 4,
  @frequency_subday_interval = 5,
  @active_start_time_of_day = 0,
  @active_end_time_of_day = 235959,
  @active_start_date = 0,
  @active_end_date = 0,
  @dts_package_location = N'Distributor';

I copy the output and execute the script at the publisher instance to set up the subscription.

I think I can't automate this in pure T-SQL without editing the script before running it, because T-SQL by design does not handle user input.

Using PowerShell and RMO

PowerShell has simple ways to process user input, so this seems like a good way to prototype the automation process.

MSDN has an eight-step guide to set up a push subscription using the .NET Replication Management Objects (RMO).

Here are the first two steps:

  1. Create a connection to the Publisher by using the ServerConnection class.
  2. Create an instance of the TransPublication class by using the Publisher connection from step 1. Specify Name, DatabaseName, and ConnectionContext.

I'm trying to translate these steps into a PowerShell script, but I can't get past step 2.

In the following code examples, I use fictional object names. I believe this does not affect the answerability of the question because the error message is identical when I use the real object names.

First attempt: setting the properties

My first attempt is to create the TransReplication object then set its properties. The code looks like this:

Add-Type -AssemblyName "Microsoft.SqlServer.Rmo, Version=11.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91";

$Publisher = New-Object Microsoft.SqlServer.Management.Common.ServerConnection MyServer

$Publication = New-Object Microsoft.SqlServer.Replication.TransPublication
$Publication.Name = 'MyPublication'
$Publication.DatabaseName = 'MyDatabase'
$Publication.ConnectionContext = $Publisher

When I execute this script, I see the following error:

Exception setting "ConnectionContext": "Cannot convert the "server='(local)';Trusted_Connection=true;multipleactiveresultsets=false" value
of type "Microsoft.SqlServer.Management.Common.ServerConnection" to type "Microsoft.SqlServer.Management.Common.ServerConnection"."
At line:8 char:14
+ $Publication. <<<< ConnectionContext = $Publisher
    + CategoryInfo          : InvalidOperation: (:) [], RuntimeException
    + FullyQualifiedErrorId : PropertyAssignmentException

It looks like it's failing because it can't convert the type ServerConnection to the type ServerConnection. I don't understand how this could fail for the stated reason, because the value is already of the required type.

Second attempt: overloading the constructor

My second attempt is to specify the property values of the TransReplication object in the constructor. The code looks like this:

Add-Type -AssemblyName "Microsoft.SqlServer.Rmo, Version=11.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91";

$Publisher = New-Object Microsoft.SqlServer.Management.Common.ServerConnection MyServer

$Publication = New-Object Microsoft.SqlServer.Replication.TransPublication 'MyPublication', 'MyDatabase', $Publisher

When I execute this script, I see the following error:

New-Object : Cannot find an overload for "TransPublication" and the argument count: "3".
At line:5 char:26
+ $Publication = New-Object <<<<  -TypeName Microsoft.SqlServer.Replication.TransPublication 'MyPublication', 'MyDatabase', $Publisher
    + CategoryInfo          : InvalidOperation: (:) [New-Object], MethodException
    + FullyQualifiedErrorId : ConstructorInvokedThrowException,Microsoft.PowerShell.Commands.NewObjectCommand

It looks like the New-Object cmdlet can't find the three-argument constructor documented by MSDN:

public TransPublication(
    string name,
    string databaseName,
    ServerConnection connectionContext
)

Parameters

As far as I can tell, I'm overloading the constructor correctly.

Am I doing something wrong? Is there something unusual about my environment? Am I better off using another solution?

Can I monitor the progress of importing a large .sql file in sqlite3 using zenity --progress?

Posted: 30 Apr 2013 03:04 PM PDT

I'm trying to monitor the progress of a sqlite3 command importing a large .sql file into a database using zenity --progress.

I've tried the following which will import the file, however progress is not shown:

sqlite3 DATABASE < import_file.sql | zenity --progress --percentage=0 --auto-close  

I know I need to provide zenity a numeric source for the progress, but don't really know how to obtain the number.

Can anyone help me?
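
One approach, assuming the pv utility is installed: let pv read the .sql file, feed it to sqlite3, and use pv's numeric progress output (written to stderr by -n) to drive zenity:

( pv -n import_file.sql | sqlite3 DATABASE ) 2>&1 \
  | zenity --progress --percentage=0 --auto-close

Note that any sqlite3 error messages also arrive on the same stream, so for anything beyond a quick monitor you may want to redirect sqlite3's stderr to a log file instead.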
