Sunday, March 24, 2013

[how to] Can SQL Server 2000 and SQL Server 2012 Coexist?

Can SQL Server 2000 and SQL Server 2012 Coexist?

Posted: 24 Mar 2013 09:35 PM PDT

We have OLTP databases on SQL Server 2000 on my client's production server. We are planning to create a data warehouse on the same server, but using SQL Server 2012. Can I install it without any issues?
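Whatever the side-by-side support story turns out to be, a quick sanity check after installing is to connect to each instance separately and confirm which engine answers (the 2012 install would normally be a named instance; the query below is just a version check, not from the original post):

    SELECT SERVERPROPERTY('InstanceName')   AS instance_name,
           SERVERPROPERTY('Edition')        AS edition,
           SERVERPROPERTY('ProductVersion') AS product_version;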

MySQL 5.1.67 - Replication failure, master log truncated or corrupted

Posted: 24 Mar 2013 04:28 PM PDT

I have run into some issues with replication.

After an in-place upgrade of MySQL from 5.0.77 to 5.1.67 (Percona release 14.4), replication between the master and multiple slaves is causing problems. The slaves are at the same level as the master.

The replication mode is mixed (SBR and RBR)

The master seems to be truncating the log incorrectly, seemingly at random, and the slaves subsequently stop replicating due to the corrupt log being sent through.

mysqlbinlog also segfaults when trying to read the log.

This is the error being thrown by the slaves (table names/server names obfuscated):

    Last_Error: Could not execute Update_rows event on table ozdb1.tableA; Incorrect string value: '\x8EIQ\x13)\x00...' for column 'Description1' at row 2, Error_code: 1366; Incorrect string value: '\x8E\x08\x0F\x0F\x0F\x0F...' for column 'Description2' at row 1, Error_code: 1366; Incorrect string value: '\xBD\x06DINI...' for column 'Description3' at row 1, Error_code: 1366; Corrupted replication event was detected, Error_code: 1610; handler error HA_ERR_CORRUPT_EVENT; the event's master log apdb041-bin.005603, end_log_pos 1070537659

Currently the only way to recover is via a full refresh of the slaves.

Would there be an issue on the master server? Nothing in the disk, network, or logging indicates a reason why the MySQL master would incorrectly truncate the log.

What could be causing the master to incorrectly truncate or not close off the log properly?

I'd be happy to provide any extra detail which might be required to help answer this question!
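Not from the original post, but for anyone chasing the same symptom: comparing what the master reports against what each slave has read helps narrow down where the log gets cut.

    -- on the master
    SHOW MASTER STATUS;
    SHOW BINARY LOGS;

    -- on each slave (compare Master_Log_File / Read_Master_Log_Pos with the master's output)
    SHOW SLAVE STATUS\G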

Index before or after bulk load using load infile?

Posted: 24 Mar 2013 03:44 PM PDT

I have a database with over 1B rows and two columns that are indexed (in addition to the PK). Is it better to have the indexes defined on the table before the LOAD DATA INFILE, or to add them after the data has been loaded?

A couple of notes regarding data size and system:

  • System is Linux with 8 cores and 32 GB memory (currently maxed out unless I move to new hardware)
  • DB is 1B rows, which in raw data size is 150 GB.
  • Database is MyISAM and is mainly read-only after it's loaded.
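For MyISAM specifically, a common middle ground is to keep the index definitions on the table but disable the non-unique ones during the bulk load, so they are rebuilt by sort afterwards. A minimal sketch (table name, file path, and delimiters are placeholders):

    ALTER TABLE mytable DISABLE KEYS;

    LOAD DATA INFILE '/path/to/data.csv'
    INTO TABLE mytable
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n';

    -- rebuilds the non-unique indexes in one pass; a larger myisam_sort_buffer_size helps here
    ALTER TABLE mytable ENABLE KEYS;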

SQL Server database design for "archived but available" data

Posted: 24 Mar 2013 08:14 PM PDT

We have this large database (>1TB) that we intend to "shrink". The database revolves around one main entity, let's call it "Visit". For discussion, let's say it is a database for a medical practice.

There are a total of 30 visit "types", such as procedure, annual, follow-up, immunisation, etc., each of which is a subsidiary table to "Visit", e.g. "visit_immuno".

The database has accumulated some 12 years of data since 2000. Someone has proposed that we keep about 3 years of data in the "live" version and have the rest live in an "old_data" database. The date is ONLY stored in the "Visit" table since it is normalised. The Visit table also contains a ROWVERSION column and a BIGINT pseudo-identity (clustered) column. For all intents and purposes, let's say the clustering key is populated by a SEQUENCE (SQL Server 2012 Enterprise) - we shall name it cid.

The visit.date is not always in the same order as the clustering key, for example when a doctor goes on extended visitations and returns with his "briefcase" of data, it gets merged into the main table. There are also some updates to the "visit" table that will cause the ROWVERSION column to be out of sync with both the cid and date columns - to put it simply, neither ROWVERSION nor cid would make suitable partition keys for this reason.

The business rule for removing data from the "live" is that the visit.date must be greater than 36 months and a child visit_payment record must exist. Also, the "old_data" database does not contain any of the base tables except visit%.

So we end up with:

Live DB (daily use) - All tables
Old-Data DB - older data for the visit% tables

The proposal calls for a Combined DB that is a shell containing Synonyms to ALL the base tables in the Live DB (except visit%) plus Views that UNION ALL across the visit% tables in the two databases.

Assuming the same indexes are created in the Old-Data DB, will the queries perform well on the UNION-ALL Views? What type of query patterns might trip up the execution plan for the UNION-ALL Views?
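For reference, a minimal sketch of the proposed shell objects in the Combined DB (the database names LiveDB/OldData and table names here are illustrative, not from the post):

    -- one synonym per non-visit% base table
    CREATE SYNONYM dbo.patient FOR LiveDB.dbo.patient;

    -- one UNION ALL view per visit% table
    CREATE VIEW dbo.visit_immuno
    AS
    SELECT * FROM LiveDB.dbo.visit_immuno
    UNION ALL
    SELECT * FROM OldData.dbo.visit_immuno;

One design point worth noting: the optimizer can only skip one side of such a view when it can prove that branch returns nothing, typically via trusted CHECK constraints on a column present in each member table. Since the date lives only on Visit, queries that filter only on date may still probe both branches.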

SSRS Timeline Chart for each day / minute

Posted: 24 Mar 2013 01:59 PM PDT

I'm asked to make a timeline bar chart with SSRS 2012. There should be 1 chart per day/employee, and this chart should have an x-axis of 1440 minutes (or 24 hours). The bar should have 2 different colors, which represent the 2 different statuses of the employee (working / not working).

The table from which I get the data has following columns:

    Starttime_working (datetime), Endtime_Working (datetime), Duration_worktime (int), Duration_till_next_worktime(int)  

There can be many of these entries per day for one employee. The duration columns are in seconds.

I think I have to transform this data to get a result like:

    2013-03-02 16:52:00 - working
    2013-03-02 16:53:00 - working
    2013-03-02 16:54:00 - working
    2013-03-02 16:55:00 - working
    2013-03-02 16:56:00 - not working
    2013-03-02 16:57:00 - not working

Is this the right data format for generating a timeline chart? Are there other ways to get what I want? Any general tips for creating a timeline chart with SSRS?
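That per-minute shape does chart easily. One way to produce it in T-SQL is to join the intervals to a numbers/tally table; everything below other than the four columns listed above (the source table dbo.WorkLog and the tally table dbo.Numbers) is an assumption:

    -- dbo.Numbers(n) is assumed to hold integers 0, 1, 2, ... 1439 or more
    SELECT DATEADD(minute, n.n, w.Starttime_working) AS minute_slot,
           'working' AS status
    FROM   dbo.WorkLog AS w
    JOIN   dbo.Numbers AS n
           ON n.n >= 0
          AND n.n < DATEDIFF(minute, w.Starttime_working, w.Endtime_Working)

    UNION ALL

    SELECT DATEADD(minute, n.n, w.Endtime_Working),
           'not working'
    FROM   dbo.WorkLog AS w
    JOIN   dbo.Numbers AS n
           ON n.n >= 0
          AND n.n < w.Duration_till_next_worktime / 60;   -- seconds -> minutes

In the chart, minute_slot would go on the category axis and status would drive the bar colour.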

For sample MySQL runtime, is this performant or should I have concerns?

Posted: 24 Mar 2013 01:22 PM PDT

For this sample MySQL runtime, is it performant or should I have concerns? If so, what are the key concerns? Please note that persistent connections are set to TRUE for my application, which I believe impacts the Aborted counters. Please advise.

How to connect PowerShell to a SQL Server named instance?

Posted: 24 Mar 2013 05:23 PM PDT

When I run

    # Load the SMO assembly
    [void][reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo");
    $ServerName = "[local\s12]";

    # Get a server object which corresponds to the **default** instance
    $srv = New-Object -TypeName Microsoft.SqlServer.Management.SMO.Server $ServerName

I am getting an error

Failed to connect to server local\s12.  

Cascading Inserts in MySql

Posted: 24 Mar 2013 02:08 PM PDT

I have a users table that has a one-to-one relationship to a user_preferences table (primary/foreign key user_id). When a new row is added to the users table (new id added), is there a way to set up the relationship with the user_preferences table so that a row with the new user id is also added to it?
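There is no cascading insert as such in MySQL (cascading applies to updates and deletes), but an AFTER INSERT trigger gets the same effect. A minimal sketch, reusing the table names from the question and assuming users.id is AUTO_INCREMENT and user_preferences has usable defaults for its other columns:

    DELIMITER //

    CREATE TRIGGER users_ai
    AFTER INSERT ON users
    FOR EACH ROW
    BEGIN
        -- NEW.id is the freshly generated users.id
        INSERT INTO user_preferences (user_id) VALUES (NEW.id);
    END//

    DELIMITER ;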

Closure of attributes definition clarification

Posted: 24 Mar 2013 11:50 AM PDT

A database textbook I am using states [1]:

The closure of {A1, A2, ..., An} under the FD's in S is the set of attributes B such that every relation that satisfies all the FD's in set S also satisfies A1 A2 ... An -> B.

I'm not trying to nitpick, but shouldn't it say "...the set of all attributes B..." or am I reading it too strictly? The blockquote above seems to indicate that any attribute functionally determined by {A1, A2, ..., An} is a closure, yet the algorithms given later in the text seem to define the closure as all attributes functionally determined.

[1] Ullman, Jeffrey D., and Jennifer Widom. A First Course in Database Systems. 3rd ed. Upper Saddle River, NJ: Pearson/Prentice Hall, 2008.
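A small worked example of my own (not from the book) showing why the quoted wording is shorthand for "the set of all such attributes B": take R(A, B, C, D) with S = {A -> B, B -> C}.

    {A}+ under S:
        start with {A}
        A -> B applies, add B    -> {A, B}
        B -> C applies, add C    -> {A, B, C}
        nothing determines D     => {A}+ = {A, B, C}

Each of B and C individually satisfies the condition in the definition, but the closure itself is the whole set {A, B, C}, which is what the later algorithm computes.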

MySQL Slave Lagging Almost Daily - Ver 5.5.17 (Master & Slave)

Posted: 24 Mar 2013 11:44 AM PDT

I am observing this lag continuously. Investigating the binlog events, I found the following information: the slave is executing the relay logs slowly.

 1. Would changing the InnoDB file format to Barracuda improve the situation?

Please advise.

    Master : Row Based :
    =====================
    +------------+
    | version()  |
    +------------+
    | 5.5.17-log |
    +------------+
    1 row in set (0.00 sec)

    +--------------------------+-------------------+
    | Variable_name            | Value             |
    +--------------------------+-------------------+
    | binlog_format            | ROW               |
    | date_format              | %Y-%m-%d          |
    | datetime_format          | %Y-%m-%d %H:%i:%s |
    | default_week_format      | 0                 |
    | innodb_file_format       | Antelope          |
    | innodb_file_format_check | ON                |
    | innodb_file_format_max   | Antelope          |
    | time_format              | %H:%i:%s          |
    +--------------------------+-------------------+
    8 rows in set (0.00 sec)

    Slave : Statement Based :
    =========================
    +-----------+
    | version() |
    +-----------+
    | 5.5.17    |
    +-----------+
    1 row in set (0.00 sec)

    +--------------------------+-------------------+
    | Variable_name            | Value             |
    +--------------------------+-------------------+
    | binlog_format            | STATEMENT         |
    | date_format              | %Y-%m-%d          |
    | datetime_format          | %Y-%m-%d %H:%i:%s |
    | default_week_format      | 0                 |
    | innodb_file_format       | Antelope          |
    | innodb_file_format_check | ON                |
    | innodb_file_format_max   | Antelope          |
    | time_format              | %H:%i:%s          |
    +--------------------------+-------------------+
    8 rows in set (0.00 sec)

    *************************** 1. row ***************************
                   Slave_IO_State: Waiting for master to send event
                      Master_Host: dware-master
                      Master_User: replica
                      Master_Port: 3306
                    Connect_Retry: 60
                  Master_Log_File: binlog.001620
              Read_Master_Log_Pos: 328943038
                   Relay_Log_File: relay-bin.000109
                    Relay_Log_Pos: 444602036
            Relay_Master_Log_File: binlog.001607
                 Slave_IO_Running: Yes
                Slave_SQL_Running: Yes
                  Replicate_Do_DB:
              Replicate_Ignore_DB:
               Replicate_Do_Table:
           Replicate_Ignore_Table:
          Replicate_Wild_Do_Table:
      Replicate_Wild_Ignore_Table:
                       Last_Errno: 0
                       Last_Error:
                     Skip_Counter: 0
              Exec_Master_Log_Pos: 444601893
                  Relay_Log_Space: 7062540762
                  Until_Condition: None
                   Until_Log_File:
                    Until_Log_Pos: 0
               Master_SSL_Allowed: No
               Master_SSL_CA_File:
               Master_SSL_CA_Path:
                  Master_SSL_Cert:
                Master_SSL_Cipher:
                   Master_SSL_Key:
            Seconds_Behind_Master: 32298
    Master_SSL_Verify_Server_Cert: No
                    Last_IO_Errno: 0
                    Last_IO_Error:
                   Last_SQL_Errno: 0
                   Last_SQL_Error:
      Replicate_Ignore_Server_Ids:
                 Master_Server_Id: 1
    1 row in set (0.00 sec)

    +--------------------------------+-------+
    | Variable_name                  | Value |
    +--------------------------------+-------+
    | flush                          | OFF   |
    | flush_time                     | 0     |
    | innodb_adaptive_flushing       | ON    |
    | innodb_flush_log_at_trx_commit | 2     |
    | innodb_flush_method            |       |
    +--------------------------------+-------+
    5 rows in set (0.00 sec)

Observing the binlogs, I found delayed-insert events:

    [root@dware-master binlog]# mysqlbinlog -v --base64-output=DECODE-ROWS --start-position=444601893 binlog.001607 | head -30
    /*!40019 SET @@session.max_insert_delayed_threads=0*/;
    /*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
    DELIMITER /*!*/;
    # at 444601893
    #130323 21:35:43 server id 1  end_log_pos 444601987  Query  thread_id=216095  exec_time=1  error_code=0
    SET TIMESTAMP=1364099743/*!*/;
    SET @@session.pseudo_thread_id=216095/*!*/;
    SET @@session.foreign_key_checks=1, @@session.sql_auto_is_null=0, @@session.unique_checks=1, @@session.autocommit=1/*!*/;
    SET @@session.sql_mode=0/*!*/;
    SET @@session.auto_increment_increment=1, @@session.auto_increment_offset=1/*!*/;
    /*!\C utf8 *//*!*/;
    SET @@session.character_set_client=33,@@session.collation_connection=33,@@session.collation_server=33/*!*/;
    SET @@session.time_zone='SYSTEM'/*!*/;
    SET @@session.lc_time_names=0/*!*/;
    SET @@session.collation_database=DEFAULT/*!*/;
    BEGIN
    /*!*/;
    # at 444601987
    # at 444602072
    # at 444603110
    # at 444604148
    # at 444605186
    # at 444606224
    # at 444607262
    # at 444608300
    # at 444609338
    # at 444610372
    # at 444611410
    # at 444612448
    # at 444613486

Expanding parent-child tree with additional tables

Posted: 24 Mar 2013 08:42 PM PDT

I have read-only access to a table (Oracle database) which contains a hierarchical structure, represented via parent-child relationships.

I need to add another table, which will expand the existing hierarchical tree.
The new table needs to add levels to the tree, including entire sub-trees.

The problem is this:
Any node in the expansion table needs to have a single parent, yet that parent can be in either the original table, or in the expansion table.

How can I accomplish this safely?

Things I thought of so far:

  • Giving each node in the expanded table two parents (one from each table)
    Problem: data integrity!
  • Each node will have one parent, without knowing which table contains it
    Problem: querying will be difficult.
    Problem 2: the same ID could refer to two different nodes (in different tables)
  • Each node will have one parent, with another column specifying which table contains it
    Problem: querying is still difficult.
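One way to make querying less painful with the third approach is to hide the two-table split behind a UNION ALL view carrying a source discriminator; hierarchical queries then run across both tables at once. A sketch only (table and column names are made up, and it assumes the IDs of the two tables cannot collide, e.g. by using disjoint ranges):

    CREATE OR REPLACE VIEW all_nodes AS
    SELECT id, parent_id, name, 'ORIG' AS source FROM original_tree
    UNION ALL
    SELECT id, parent_id, name, 'EXT'  AS source FROM expansion_tree;

    -- walk the combined tree
    SELECT LEVEL, id, parent_id, source
    FROM   all_nodes
    START WITH parent_id IS NULL
    CONNECT BY PRIOR id = parent_id;

A plain foreign key cannot reference a view, so the "parent must exist in one of the two tables" rule would still need a trigger or an application-level check.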

Can the OUTPUT clause create a table?

Posted: 24 Mar 2013 09:20 PM PDT

I'm doing an update like this:

    UPDATE dbo.Table1
    SET BirthDate = b.BirthDate
    FROM Table1 a
    JOIN Table2 b
        ON a.ID = b.ID

And I want to use the OUTPUT clause to back up my changes.

    UPDATE dbo.Table1
    SET BirthDate = b.BirthDate
    OUTPUT
        inserted.Id, inserted.BirthDate AS New_BirthDate,
        deleted.BirthDate AS Old_BirthDate
        INTO OutputTable
    FROM Table1 a
    JOIN Table2 b
        ON a.ID = b.ID

What I want to know is if there is a way for the OUTPUT clause to create the table OutputTable or do I have to make sure it already exists before running the statement?
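For what it's worth, OUTPUT ... INTO only writes into a table (or table variable) that already exists; it does not create one. A sketch of pre-creating the target, assuming ID is an int and BirthDate is a date:

    CREATE TABLE dbo.OutputTable
    (
        Id            int  NOT NULL,
        New_BirthDate date NULL,
        Old_BirthDate date NULL
    );

Another option is a one-off SELECT TOP (0) ... INTO dbo.OutputTable against the source columns, which clones the exact data types before the UPDATE runs.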

Why can't we write DDL statements directly in a PL/SQL block?

Posted: 24 Mar 2013 12:07 PM PDT

Why can't we write DDL statements directly in a PL/SQL block? For example, when I write

    CREATE OR REPLACE PROCEDURE test IS
    BEGIN
        truncate table table_name; -- error
    END test;
    /

But,

    CREATE OR REPLACE PROCEDURE test IS
    BEGIN
        execute immediate 'truncate table table_name'; -- works fine
    END test;
    /

Why does the second one execute successfully?

Effective MySQL table/index design for a 35-million-row+ table with 200+ corresponding columns (double), any combination of which may be queried

Posted: 24 Mar 2013 01:41 PM PDT

I am looking for advice on table/index design for the following situation:

I have a large table (stock price history data, InnoDB, 35 million rows and growing) with a compound primary key (assetid (int), date (date)). In addition to the pricing information, I have 200 double values that need to correspond to each record.

    CREATE TABLE `mytable` (
      `assetid` int(11) NOT NULL,
      `date` date NOT NULL,
      `close` double NOT NULL,
      `f1` double DEFAULT NULL,
      `f2` double DEFAULT NULL,
      `f3` double DEFAULT NULL,
      `f4` double DEFAULT NULL,
       ... skip a few ...
      `f200` double DEFAULT NULL,
      PRIMARY KEY (`assetid`, `date`)
    ) ENGINE=`InnoDB` DEFAULT CHARACTER SET latin1 COLLATE latin1_swedish_ci
      ROW_FORMAT=COMPACT CHECKSUM=0 DELAY_KEY_WRITE=0
      PARTITION BY RANGE COLUMNS(`date`) PARTITIONS 51;

I initially stored the 200 double columns directly in this table for ease of update and retrieval, and this had been working fine, as the only querying done on this table was by assetid and date (these are religiously included in any query against this table), and the 200 double columns were only read. My database size was around 45 GB.

However, now I have the requirement to be able to query this table by any combination of these 200 columns (named f1, f2, ..., f200), for example:

    select * from mytable
    where assetid in (1,2,3,4,5,6,7,....)
    and date > '2010-1-1' and date < '2013-4-5'
    and f1 > -0.23 and f1 < 0.9
    and f117 > 0.012 and f117 < .877
    etc, etc

I have not historically had to deal with this much data, so my first instinct was that indexes were needed on each of these 200 columns, or I would wind up with large table scans. To me this meant I needed a table for each of the 200 columns, with the primary key, the value, and an index on the value. So I went with that.

    CREATE TABLE `f1` (
      `assetid` int(11) NOT NULL DEFAULT '0',
      `date` date NOT NULL DEFAULT '0000-00-00',
      `value` double NOT NULL DEFAULT '0',
      PRIMARY KEY (`assetid`, `date`),
      INDEX `val` (`value`)
    ) ENGINE=`InnoDB` DEFAULT CHARACTER SET latin1 COLLATE latin1_swedish_ci
      ROW_FORMAT=COMPACT CHECKSUM=0 DELAY_KEY_WRITE=0;

I filled up and indexed all 200 tables. I left the main table intact with all 200 columns, as regularly it is queried over an assetid and date range and all 200 columns are selected. I figured that leaving those columns in the parent table (unindexed) for read purposes, and then additionally having them indexed in their own tables (for join filtering), would be most performant. I ran EXPLAIN on the new form of the query:

    select count(p.assetid) as total
    from mytable p
    inner join f1 f1 on f1.assetid = p.assetid and f1.date = p.date
    inner join f2 f2 on f2.assetid = p.assetid and f2.date = p.date
    where p.assetid in (1,2,3,4,5,6,7)
    and p.date >= '2011-01-01' and p.date < '2013-03-14'
    and (f1.value >= 0.96 and f1.value <= 0.97 and f2.value >= 0.96 and f2.value <= 0.97)

Indeed, my desired result was achieved: EXPLAIN shows that the rows scanned are much smaller for this query. However, I wound up with some undesirable side effects.

1) My database went from 45 GB to 110 GB. I can no longer keep the DB in RAM (I have 256 GB of RAM on the way, however).

2) Nightly inserts of new data now need to be done 200 times instead of once.

3) Maintenance/defragmentation of the new 200 tables takes 200 times longer than for just the one table. It cannot be completed in a night.

4) Queries against the f1, etc. tables are not necessarily performant. For example:

    select min(value) from f1
    where assetid in (1,2,3,4,5,6,7)
    and date >= '2013-3-18' and date < '2013-3-19'

The above query, while EXPLAIN shows it looking at < 1000 rows, can take 30+ seconds to complete. I assume this is because the indexes are too large to fit in memory.

Since that was a lot of bad news, I looked further and found partitioning. I implemented partitions on the main table, partitioned on date every 3 months. Monthly seemed to make sense to me, but I have read that once you get over 120 partitions or so, performance suffers. Partitioning quarterly will leave me under that for the next 20 years or so. Each partition is a bit under 2 GB. I ran EXPLAIN PARTITIONS and everything seems to be pruning properly, so regardless I feel the partitioning was a good step, at the very least for analyze/optimize/repair purposes.

I spent a good deal of time with this article

http://ftp.nchu.edu.tw/MySQL/tech-resources/articles/testing-partitions-large-db.html

My table is currently partitioned with the primary key still on it. The article mentions that primary keys can make a partitioned table slower, but if you have a machine that can handle it, primary keys on the partitioned table will be faster. Knowing I have a big machine on the way (256 GB RAM), I left the keys on.

So as I see it, here are my options:

Option 1

1) Remove the extra 200 tables and let the query do table scans to find the f1, f2, etc. values. Non-unique indexes can actually hurt performance on a properly partitioned table. Run an EXPLAIN before the user runs the query and deny it if the number of rows scanned is over some threshold I define. Save myself the pain of the giant database. Heck, it will all be in memory soon anyway.

Sub-question:

Does it sound like I have chosen an appropriate partition scheme?

Option 2

Partition all 200 tables using the same 3-month scheme. Enjoy the smaller row scans and allow the users to run larger queries. Now that they are partitioned, at least I can manage them one partition at a time for maintenance purposes. Heck, it will all be in memory soon anyway. Develop an efficient way to update them nightly.

Sub-question:

Do you see a reason to avoid primary key indexes on these f1, f2, f3, f4, ... tables, knowing that I always have assetid and date when querying? It seems counterintuitive to me, but I am not used to data sets of this size. That would shrink the database a bunch, I assume.

Option 3

Drop the f1, f2, f3 columns in the master table to reclaim that space, and do 200 joins if I need to read 200 features; maybe it won't be as slow as it sounds.

Option 4

You all have a better way to structure this than I have thought of so far.

* NOTE: I will soon be adding another 50-100 of these double values to each item, so I need to design knowing that is coming.

Thanks for any and all help.

Update #1 - 3/24/2013

I went with the idea suggested in the comments I got below and created one new table with the following setup:

    create table `features` (
      assetid int,
      date    date,
      feature varchar(4),
      value   double
    );

I partitioned the table in 3 month intervals.

I blew away the earlier 200 tables so that my database was back down to 45 GB and started filling up this new table. A day and a half later, it completed, and my database now sits at a chubby 220 GB!

It does allow the possibility of removing these 200 values from the master table, as I can get them from one join, but that would really only give me back 25 GB or so, maybe.

I asked it to create a primary key on assetid, date, feature and an index on value, and after 9 hours of chugging it really hadn't made a dent and seemed to freeze up, so I killed that part off.

I rebuilt a couple of the partitions, but it did not seem to reclaim much, if any, space.

So that solution looks like it probably isn't going to be ideal. Do rows take up significantly more space than columns, I wonder? Could that be why this solution took up so much more space?

I came across this article

http://www.chrismoos.com/2010/01/31/mysql-partitioning-tables-with-millions-of-rows

It gave me an idea. He says:

"At first, I thought about RANGE partitioning by date, and while I am using the date in my queries, it is very common for a query to have a very large date range, and that means it could easily span all partitions."

Now I am range partitioning by date as well, but I will also be allowing searches by large date ranges, which will decrease the effectiveness of my partitioning. I will always have a date range when I search; however, I will also always have a list of assetids. Perhaps my solution should be to partition by assetid and date, where I identify typically searched assetid ranges (which I can come up with; there are standard lists, S&P 500, Russell 2000, etc.). This way I would almost never look at the entire data set.
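If that route looks attractive, MySQL 5.5's RANGE COLUMNS does accept a multi-column partitioning key, so assetid ranges and date can be combined; rows are compared tuple-wise against the boundary lists. A rough, illustrative sketch only (the boundary values are made up and would need to follow the real assetid groupings):

    ALTER TABLE mytable
    PARTITION BY RANGE COLUMNS (assetid, `date`) (
        PARTITION p_low_2012  VALUES LESS THAN (1000, '2012-01-01'),
        PARTITION p_low_max   VALUES LESS THAN (1000, MAXVALUE),
        PARTITION p_mid_2012  VALUES LESS THAN (2000, '2012-01-01'),
        PARTITION p_mid_max   VALUES LESS THAN (2000, MAXVALUE),
        PARTITION p_rest      VALUES LESS THAN (MAXVALUE, MAXVALUE)
    );

This keeps the primary key (assetid, date) covering all partitioning columns, which MySQL requires; whether pruning actually improves for the typical asset lists would need to be confirmed with EXPLAIN PARTITIONS.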

Any more thoughts/comments would be appreciated.

Thanks.

SQL Server: sync databases between Enterprise and Web editions

Posted: 24 Mar 2013 11:14 AM PDT

We have an Enterprise edition of SQL Server 2012. We need to synchronize its data to a Web edition database. What would be the best option to do this? Further, we need to provide high availability for the website, so the plan is to use two identical databases on the Web edition. Only one will be active at a time. Periodically the Enterprise instance needs to push the data to the inactive database, make it active, and then sync the same data to the other one.

What would be the best solution to achieve the above?

SQL Server in a state of confusion: does the database exist or doesn't it?

Posted: 24 Mar 2013 08:43 PM PDT

Got a really weird, annoying problem. Somehow the instance of SQL Server 2008 R2 running on our server has gotten somewhat corrupted.

First, we noticed that the database we created yesterday was missing. So we looked around and found that it was still there, but detached. We tried to attach the mdf but got a message which was something like "The file is currently in use".

I thought that was odd, so I restarted SQL Server. Same thing... okay, time for drastic measures... so I stopped the service, zipped up the mdf, started the service, unzipped it and then tried to restore it. The above message was gone, but then I got:

Cannot attach a database with the same name as an existing database

Ouch. Of course it's not showing in the database explorer, so no idea what's going on... last resort:

DROP DATABASE [DatabaseName]  

Of course that didn't work; that tells me the database does not exist. So I'm stuck... at one point SQL Server thinks the database does exist and at another point it thinks it does not. Obviously it's in a state of confusion.

Has anyone seen this before? Got any ideas on how to fix it?
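Not from the original post, but a quick way to see where the engine still has that name registered before trying anything else (substitute the real database name):

    SELECT name, state_desc, user_access_desc
    FROM   sys.databases
    WHERE  name = N'DatabaseName';

    -- physical files the instance still thinks belong to that database
    SELECT DB_NAME(database_id) AS db, name AS logical_name, physical_name, state_desc
    FROM   sys.master_files
    WHERE  DB_NAME(database_id) = N'DatabaseName';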

MySQL data too long error

Posted: 24 Mar 2013 04:34 PM PDT

One of the columns in my table was initially created as a varchar(1000). Now there is a need to increase the size of that column to hold around 5000 characters. I used an ALTER statement to increase the size of that column. DESC table_name as well as SHOW CREATE TABLE table_name clearly indicate the new size of that column to be 5000 characters. However, interestingly, when I try to insert any data exceeding 1000 characters, the error ERROR 1406 (22001): Data too long for column shows up. Out of desperation I changed the data type to text, and it's still limited to 1000 characters. Any suggestions?

I created another dummy table with a column size of varchar(5000) and that works just fine. The engine used is InnoDB and the default charset is UTF8.

Postgres TupleTable like data structure

Posted: 24 Mar 2013 10:10 AM PDT

I'm writing my own GROUP BY operator (a non-hashed implementation) and I'm currently looking for a good data structure to store the result groups. There are two requirements in my mind: the table needs to be efficiently expandable, and the tuples within each group need to be sorted.

I carefully examined the implementation of the standard GROUP BY (both the hashed and sorted implementations); the closest I got is to leverage parts of the TupleHashTable, but before going through with this plan, I need to make sure I'm not overlooking an existing data structure.

Consequences of using "innodb_flush_method = O_DIRECT" without a battery-backed write cache, or on a KVM guest?

Posted: 24 Mar 2013 12:51 PM PDT

MySQL 5.5.29, InnoDB - 128 GB RAM - 32 cores - RAID 10 SSD.

Our server, a dedicated KVM guest on a bare-metal host, runs our heavy read-write database. Everything is file-per-table. The InnoDB buffer pool is 96 GB, with 2 x 1 GB log files that take about 20 minutes of writes to fill at peak time.

How bad of a situation would it be if O_DIRECT (we are currently running on the default) were enabled during a high workload without a battery-backed write cache, and a total crash were to occur on the OS or parent host, or the power was cut?

Does a battery-backed write cache make a difference if the server is a VM guest of the parent anyway?


MySQL Dump configuration file

Posted: 24 Mar 2013 03:51 PM PDT

I modified the my.cnf file so that innodb_data_file_path points somewhere other than the default path, but the mysqldump command still seems to be trying to dump from the default path.

I found that if I give the option --defaults-file=xxxx on the command line I can change it, but is there a way to configure mysqldump to use this option without specifying it on the command line?

TMD auto create

Posted: 24 Mar 2013 10:51 AM PDT

I used MySQL as part of a project and found it to be a very practical tool.

I have a question about MySQL regarding how it creates temporary tables of type TMD in /var/lib/mysql/. This partition takes as much space as the original table!

I do not know what mechanism creates these tables automatically, but I'd like this feature to be disabled, or at least to configure MySQL to put the temporary tables in /tmp rather than /var/lib/mysql.

How can I set a default session sql_mode for a given user?

Posted: 24 Mar 2013 11:15 AM PDT

I want to have a default session sql_mode that is different from the global sql_mode.

I can set it manually, but I'd like to do it automatically.

I thought I could add it to .my.cnf, but that doesn't work. I tried adding it to the [mysql] group in .my.cnf, and it does not throw an error.

Yet when I connect, my session sql_mode still inherits the global sql_mode.

I'm using MySQL 5.5.16.
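One workaround I'm aware of (a sketch, not specific to 5.5.16) is init_connect, which the server executes for every new connection from a user without the SUPER privilege. It is server-wide rather than truly per-user, so it only fits if all non-SUPER sessions should get the same mode:

    -- run once as an administrative user, or set init_connect in the [mysqld] section of my.cnf
    SET GLOBAL init_connect = "SET sql_mode = 'STRICT_ALL_TABLES,NO_ENGINE_SUBSTITUTION'";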

SQL Server 2008 R2: restore a backup to a new database

Posted: 24 Mar 2013 09:02 AM PDT

I have a full backup of a database. I want to update some records in a table in the current database from this old backup, without affecting other records.

What is the best way to do this?

Basically I want to get some records from the old backup and apply those old values to the current database without doing a full restore over it; what are my options?
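If a restore is acceptable as long as it does not touch the live database, the usual route is to restore the backup under a different name and then update across the two databases. A sketch only; every file path, logical file name, table, and column below is a placeholder:

    RESTORE DATABASE MyDb_Old
    FROM DISK = N'D:\Backups\MyDb_full.bak'
    WITH MOVE N'MyDb'     TO N'D:\Data\MyDb_Old.mdf',
         MOVE N'MyDb_log' TO N'D:\Data\MyDb_Old_log.ldf';

    UPDATE cur
    SET    cur.SomeColumn = old.SomeColumn
    FROM   MyDb.dbo.SomeTable     AS cur
    JOIN   MyDb_Old.dbo.SomeTable AS old
           ON old.ID = cur.ID
    WHERE  cur.ID IN (1, 2, 3);   -- restrict to the records that need the old values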

Cleaning up transaction replication del/ins/upd stored procedures?

Posted: 24 Mar 2013 01:51 PM PDT

I'm referring to the ones that start with:

dbo.sp_MSdel...
dbo.sp_MSins...
dbo.sp_MSupd...

I've noticed they are not removed when I run exec sp_dropsubscription on the Publisher and then run sp_removedbreplication on the subscriber (against the database). I also tried sp_subscription_cleanup, to no avail.

Do these have to be deleted manually if I want them removed?

Subscriber is SQL 2000
Publisher is SQL 2008 (Distribution database is on the Publisher)

Users cannot view tables in non-default schema in SSMS

Posted: 24 Mar 2013 05:29 PM PDT

I'm having an issue setting the VIEW DEFINITION permission appropriately at the schema level for one of my users. I've created the schema TestSchema and added some tables. The user currently has permissions set to access and modify the tables (SELECT, UPDATE, DELETE, etc.) through the db_datareader and db_datawriter roles. However, they cannot see any of the tables in the SSMS object explorer.

I've tried granting permissions to view definitions:

grant view definition on SCHEMA :: [TestSchema] to [User]  

That didn't work. I tried setting the table-level permission:

grant view definition on [TestSchema].[NewTable] to [User]  

That also didn't work. Then I tried just a blanket grant:

grant view definition to [User]  

And that did work; they can now see TestSchema, as well as other schemas that they shouldn't have access to.

My goal here is to allow the user to view all tables within a given schema. How do I accomplish that? If I should be able to do this by default, what permissions should I be looking at to find why I can't?

What are the differences between SQL I/O Sim and SQL IO tools?

Posted: 24 Mar 2013 06:58 PM PDT

I want to test my different SQL Server alternatives with a benchmark tool. I found two tools by Microsoft:

  1. SQLIO Disk Subsystem Benchmark Tool
  2. SQL IO Sim

What are the differences between them? Are they the same thing?

Thanks!

SELECT TOP in MySQL

Posted: 24 Mar 2013 04:16 PM PDT

How can I do this in MySQL?

SELECT TOP 50 PERCENT * FROM table  

What's the simplest way to do it without running two separate queries (if possible)?
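MySQL has no TOP ... PERCENT, and LIMIT cannot take a subquery directly, so this generally ends up as a counted LIMIT. One workaround sketch using a prepared statement (placeholders are allowed in LIMIT there); the table and ordering column are examples:

    SET @half := (SELECT CAST(CEIL(COUNT(*) / 2) AS UNSIGNED) FROM my_table);

    PREPARE half_stmt FROM 'SELECT * FROM my_table ORDER BY id LIMIT ?';
    EXECUTE half_stmt USING @half;
    DEALLOCATE PREPARE half_stmt;

It still counts the table once before selecting, so it is not strictly a single query, but it avoids round-tripping the count through the application.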
