[how to] Can SQL Server 2000 and SQL Server 2012 Coexist?
- Can SQL Server 2000 and SQL Server 2012 Coexist?
- MySQL 5.1.67 - Replication failure, master log truncated or corrupted
- Index before or after bulk load using load infile?
- SQL Server database design for "archived but available" data
- SSRS Timeline Chart for each day / minute
- For sample MySQL runtime, is this performant or should I have concerns?
- How to connect Powershell to the SQL Server named instance?
- Cascading Inserts in MySQL
- Closure of attributes definition clarification
- MySQL Slave Lagging Almost Daily - Ver 5.5.17 (Master & Slave)
- Expanding parent-child tree with additional tables
- Can the OUTPUT clause create a table?
- Why can't we write DDL statements directly in a PL/SQL block
- Effective MySQL table/index design for a 35-million-row+ table with 200+ corresponding columns (double), any combination of which may be queried
- SQL Server sync databases between Enterprise and Web editions
- SQL Server in a state of confusion: does the database exist or doesn't it?
- MySQL data too long error
- Postgres TupleTable like data structure
- Consequences of using "innodb_flush_method = O_DIRECT" without a battery-backed write cache, or on a KVM guest?
- MySQL Dump configuration file
- TMD auto create
- How can I set a default session sql_mode for a given user?
- SQL Server 2008 R2 Restore a back up to a new database
- Cleaning up transaction replication del/ins/upd stored procedures?
- Users cannot view tables in non-default schema in SSMS
- What are the differences between SQL I/O Sim and SQL IO tools?
- SELECT TOP in MySQL
Can SQL Server 2000 and SQL Server 2012 Coexist? Posted: 24 Mar 2013 09:35 PM PDT We have OLTP databases on SQL Server 2000 on my client's production server. We are planning to create a data warehouse on the same server, but using SQL Server 2012. Can I install it without any issues?
MySQL 5.1.67 - Replication failure, master log truncated or corrupted Posted: 24 Mar 2013 04:28 PM PDT I have run into some issues with replication. After an in-place upgrade of MySQL from 5.0.77 to 5.1.67 rel 14.4 (Percona), replication between a master and multiple slaves is failing. The slaves are at the same version as the master, and the replication mode is mixed (SBR and RBR). The master seems to be truncating the binary log incorrectly, seemingly at random, and the slaves subsequently stop replicating because a corrupt log is sent through. mysqlbinlog also segfaults when trying to read the log. This is the error thrown by the slaves (table names/server names obfuscated):

Currently the only way to recover is via a full refresh of the slaves. Could there be an issue on the master server? Nothing in the disk, network, or server logs indicates a reason why the master would truncate the log incorrectly. What could be causing the master to truncate the log incorrectly or not close it off properly? I'd be happy to provide any extra detail that might help answer this question!
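Binary-log truncation on a master is often tied to durability settings rather than to replication itself, so a reasonable first diagnostic step is to check (and, if needed, tighten) how aggressively the binlog is synced. A minimal sketch, assuming the defaults are still in place:

```sql
-- On the master: inspect the binlogs and the current durability settings.
SHOW BINARY LOGS;
SHOW VARIABLES LIKE 'sync_binlog';
SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';

-- sync_binlog = 1 fsyncs the binary log after every commit, which prevents
-- a partially written (truncated-looking) binlog after a crash, at some
-- write-performance cost.
SET GLOBAL sync_binlog = 1;
```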
Index before or after bulk load using load infile? Posted: 24 Mar 2013 03:44 PM PDT I have a database with over 1B rows and two columns that are indexed (in addition to the PK). Is it better to have the indexes pre-defined on the table before the LOAD DATA INFILE, or better to index after the data has been loaded? A couple of notes regarding data size and system:
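The notes did not survive extraction, but the trade-off itself can be sketched. For a load of this size, building the secondary indexes after the data is in place is usually faster, because each index can be built in one sorted pass instead of being maintained row by row. A minimal sketch with hypothetical table, file, and index names:

```sql
-- MyISAM: suspend non-unique index maintenance for the duration of the load.
ALTER TABLE big_table DISABLE KEYS;
LOAD DATA INFILE '/tmp/data.csv' INTO TABLE big_table
  FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';
ALTER TABLE big_table ENABLE KEYS;   -- rebuilds the indexes in one pass

-- InnoDB: drop the secondary indexes, load, then recreate them.
ALTER TABLE big_table DROP INDEX idx_col_a, DROP INDEX idx_col_b;
LOAD DATA INFILE '/tmp/data.csv' INTO TABLE big_table
  FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';
ALTER TABLE big_table ADD INDEX idx_col_a (col_a), ADD INDEX idx_col_b (col_b);
```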
SQL Server database design for "archived but available" data Posted: 24 Mar 2013 08:14 PM PDT We have a large database (>1TB) that we intend to "shrink". The database revolves around one main entity; let's call it "Visit". For discussion, say it is a database for a medical practice. There are a total of 30 visit "types", such as procedure, annual, follow-up, immunisation, etc., each of which is a subsidiary table to "Visit", e.g. "visit_immuno". The database has accumulated some 12 years of data since 2000. Someone has proposed that we keep about 3 years of data in the "live" version and have the rest live in an "old_data" database. The date is stored ONLY in the "Visit" table, since the schema is normalised, and the business rule for removing data from "live" is driven by that date. So we end up with: a Live DB (daily use) with all tables, and an Old-Data DB holding the older data for the same tables. The proposal calls for a Combined DB that is a shell containing synonyms to ALL the base tables in the other two databases. Assuming the same indexes are created in the Old-Data DB, how would queries against the Combined DB be expected to perform?
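The mechanics of the proposed shell database are straightforward; whether performance holds up is the open question. A minimal sketch of the synonym-plus-view pattern, with hypothetical database names (LiveDB, OldDataDB, CombinedDB):

```sql
USE CombinedDB;
GO
-- Synonyms can point at tables in other databases on the same instance.
CREATE SYNONYM dbo.visit_immuno_live FOR LiveDB.dbo.visit_immuno;
CREATE SYNONYM dbo.visit_immuno_old  FOR OldDataDB.dbo.visit_immuno;
GO
-- A view that presents the live and archived partitions as one table.
CREATE VIEW dbo.visit_immuno_all AS
    SELECT * FROM dbo.visit_immuno_live
    UNION ALL
    SELECT * FROM dbo.visit_immuno_old;
GO
```

Note that the optimizer cannot eliminate either branch of such a UNION ALL view unless it can prove a date predicate excludes it, which is worth testing before committing to the design.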
SSRS Timeline Chart for each day / minute Posted: 24 Mar 2013 01:59 PM PDT I'm asked to make a timeline bar chart with SSRS 2012. There should be one chart per day per employee, and the chart should have an x-axis of 1440 minutes (24 hours). The bar should have two different colors, representing the two different statuses of the employee (working / not working). The table from which I get the data has the following columns (the duration columns are in seconds), and there can be many such entries per day for one employee. I think I have to transform this data into a per-minute result. Is this the right data format for generating a timeline chart? Are there other ways to get what I want? Any general tips for creating a timeline chart with SSRS?
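Expanding the interval rows into one row per minute is one workable shape for the chart's 1440-point axis, since SSRS can then color each minute by status. A hedged sketch; every table and column name here (StatusLog, EmployeeId, WorkDate, StartSecond, DurationSeconds, Status) is hypothetical, as the original column list did not survive:

```sql
-- Build a 0..1439 minute axis and join each status interval onto it.
;WITH minutes AS (
    SELECT TOP (1440)
           ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1 AS MinuteOfDay
    FROM sys.all_objects AS a CROSS JOIN sys.all_objects AS b
)
SELECT s.EmployeeId, s.WorkDate, m.MinuteOfDay, s.[Status]
FROM dbo.StatusLog AS s
JOIN minutes AS m
  ON  m.MinuteOfDay >= s.StartSecond / 60
  AND m.MinuteOfDay <  (s.StartSecond + s.DurationSeconds) / 60;
```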
For sample MySQL runtime, is this performant or should I have concerns? Posted: 24 Mar 2013 01:22 PM PDT For the sample MySQL runtime statistics, is this performant or should I have concerns? If so, what are the key concerns? Please note that persistent connections are set to TRUE for my application, which I believe inflates the Aborted counters. Please advise.
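The counters referred to can be pulled directly, which makes the question easier for others to assess; with persistent connections, elevated Aborted_clients on its own is common and not necessarily a problem:

```sql
SHOW GLOBAL STATUS LIKE 'Aborted%';   -- Aborted_clients / Aborted_connects
SHOW GLOBAL STATUS LIKE 'Threads%';   -- connection/thread churn
SHOW GLOBAL STATUS LIKE 'Uptime';     -- to put the counters in context
```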
How to connect Powershell to the SQL Server named instance? Posted: 24 Mar 2013 05:23 PM PDT When I run the command, I am getting an error.
Cascading Inserts in MySQL Posted: 24 Mar 2013 02:08 PM PDT I have a users table that has a one-to-one relationship to a user_preferences table (primary/foreign key user_id). When a new row is added to the users table (a new id), is there a way to set up the relationship between the two tables so that a row with the new user id is also added to user_preferences?
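MySQL has no declarative "cascading insert", but an AFTER INSERT trigger achieves this. A minimal sketch, assuming the id column on users is named id:

```sql
DELIMITER //
CREATE TRIGGER trg_users_after_insert
AFTER INSERT ON users
FOR EACH ROW
BEGIN
    -- Create the matching preferences row for every new user.
    INSERT INTO user_preferences (user_id) VALUES (NEW.id);
END//
DELIMITER ;
```

Any NOT NULL columns on user_preferences without defaults would need values supplied here as well.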
Closure of attributes definition clarification Posted: 24 Mar 2013 11:50 AM PDT A database textbook I am using states [1]:

I'm not trying to nitpick, but shouldn't it say "...the set of all attributes B..."? Or am I reading it too strictly? The quoted definition seems to indicate that any attribute functionally determined by {A1, A2, ..., An} is the closure, yet the algorithms given later in the text define the closure as the set of all attributes functionally determined. [1] Ullman, Jeffrey D., and Jennifer Widom. A First Course in Database Systems. 3rd ed. Upper Saddle River, NJ: Pearson/Prentice Hall, 2008.
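The quoted passage itself did not survive here; the standard definition in that text runs along the following lines (reconstructed from the surrounding discussion, so treat it as an approximation rather than the verbatim quote):

```latex
\{A_1, A_2, \ldots, A_n\}^{+} \;=\;
\{\, B \mid \text{every relation satisfying the given FDs also satisfies } A_1 A_2 \cdots A_n \rightarrow B \,\}
```

Read as a set, the two phrasings coincide; the asker's point is that the prose version sounds like it describes a single attribute B rather than the set of all such attributes.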
MySQL Slave Lagging Almost Daily - Ver 5.5.17 (Master & Slave) Posted: 24 Mar 2013 11:44 AM PDT I am observing this lag on an almost daily basis. Investigating the binlog events, I found that the slave is executing the relay logs slowly. Observing the bin logs, I also found delayed insert events. Please advise.
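The log excerpts did not survive, but the standard lag diagnostics on the slave are worth recording alongside them; whether they explain this particular lag is an assumption:

```sql
SHOW SLAVE STATUS\G                   -- Seconds_Behind_Master, relay log position
SHOW PROCESSLIST;                     -- is the SQL thread stuck on one statement?
SHOW GLOBAL STATUS LIKE 'Slave%';     -- slave thread counters
```

On 5.5 the slave applies the relay log with a single SQL thread, so a master with many concurrent writers can outpace it even when the slave hardware is comparable.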
Expanding parent-child tree with additional tables Posted: 24 Mar 2013 08:42 PM PDT I have read-only access to a table (Oracle database) which contains a hierarchical structure, represented via parent-child. I need to add another table, which will expand the existing hierarchical tree. The problem is: how can I accomplish this safely? Things I thought of so far:
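The list of approaches was lost in extraction. One common pattern, given only read access to the base table, is a union view over the read-only table and a supplemental table of your own; a minimal sketch with hypothetical schema and table names (ro_schema.tree, my_schema.tree_ext), assuming the new rows use ids that cannot collide with the existing ones:

```sql
CREATE OR REPLACE VIEW combined_tree AS
    SELECT id, parent_id, name FROM ro_schema.tree
    UNION ALL
    SELECT id, parent_id, name FROM my_schema.tree_ext;

-- Hierarchical queries then run against the view instead of the base table.
SELECT LEVEL, id, parent_id, name
FROM   combined_tree
START WITH parent_id IS NULL
CONNECT BY PRIOR id = parent_id;
```

This leaves the source table untouched, which is the safety property being asked for; the risks to manage are id collisions and rows in tree_ext whose parents disappear from the read-only table.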
Can the OUTPUT clause create a table? Posted: 24 Mar 2013 09:20 PM PDT I'm doing an update, and I want to use the OUTPUT clause to back up my changes. What I want to know is whether there is a way for the OUTPUT clause to create the table OutputTable, or whether I have to make sure it already exists before running the statement.
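OUTPUT ... INTO cannot create its target; the table (or table variable) must exist before the statement runs. A minimal sketch with hypothetical table and column names:

```sql
-- Create the target first; OUTPUT INTO will not create it.
CREATE TABLE dbo.OutputTable (id INT, OldValue VARCHAR(50), NewValue VARCHAR(50));

UPDATE dbo.SomeTable
SET    SomeColumn = 'new'
OUTPUT inserted.id, deleted.SomeColumn, inserted.SomeColumn
INTO   dbo.OutputTable (id, OldValue, NewValue)
WHERE  SomeColumn = 'old';

-- If creating a table on the fly is the goal, SELECT ... INTO can snapshot
-- the before-image ahead of the update instead.
SELECT * INTO dbo.BackupTable FROM dbo.SomeTable WHERE SomeColumn = 'old';
```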
Why can't we write DDL statements directly in a PL/SQL block Posted: 24 Mar 2013 12:07 PM PDT Why can't we write DDL statements directly in a PL/SQL block? When I write one directly it fails, but the second form executes successfully. Why?
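The two code samples were lost in extraction; assuming they were the classic pair, the contrast looks like this. PL/SQL is compiled against a fixed schema, so static DDL cannot appear in a block, while EXECUTE IMMEDIATE defers parsing of the DDL to run time:

```sql
-- Fails to compile (PLS-00103): static DDL is not allowed in PL/SQL.
BEGIN
    CREATE TABLE t (x NUMBER);
END;
/

-- Succeeds: the DDL is parsed and executed at run time as dynamic SQL.
BEGIN
    EXECUTE IMMEDIATE 'CREATE TABLE t (x NUMBER)';
END;
/
```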
Effective MySQL table/index design for a 35-million-row+ table with 200+ corresponding columns (double), any combination of which may be queried Posted: 24 Mar 2013 01:41 PM PDT I am looking for advice on table/index design for the following situation. I have a large table (stock price history data, InnoDB, 35 million rows and growing) with a compound primary key (assetid (int), date (date)). In addition to the pricing information, I have 200 double values that need to correspond to each record. I initially stored the 200 double columns directly in this table for ease of update and retrieval, and this had been working fine, as the only querying done on this table was by assetid and date (these are religiously included in any query against this table), and the 200 double columns were only read. My database size was around 45 GB.

However, I now have the requirement to query this table by any combination of these 200 columns (named f1, f2, ... f200), for example filtering on ranges of several of the f-columns at once. I have not historically had to deal with this large an amount of data, so my first instinct was that indexes were needed on each of these 200 columns, or I would wind up with large table scans. To me this meant I needed a table for each of the 200 columns, with the primary key, a value, and an index on the value. So I went with that: I filled up and indexed all 200 tables. I left the main table intact with all 200 columns, as it is regularly queried over an assetid/date range with all 200 columns selected. I figured that leaving those columns in the parent table (unindexed) for read purposes, and additionally having them indexed in their own tables (for join filtering), would be most performant. I ran EXPLAIN on the new form of the query, and indeed my desired result was achieved: EXPLAIN shows that far fewer rows are scanned. However, I wound up with some undesirable side effects:

1) My database went from 45 GB to 110 GB. I can no longer keep the DB in RAM (I have 256 GB of RAM on the way, however).
2) Nightly inserts of new data now need to be done 200 times instead of once.
3) Maintenance/defragmentation of the 200 new tables takes 200 times longer than for the one table. It cannot be completed in a night.
4) Queries against the f1, etc. tables are not necessarily performant. For example, a query that EXPLAIN says looks at fewer than 1000 rows can take 30+ seconds to complete. I assume this is because the indexes are too large to fit in memory.

Since that was a lot of bad news, I looked further and found partitioning. I implemented partitions on the main table, partitioned on date every 3 months. Monthly seemed to make sense, but I have read that once you get over 120 partitions or so, performance suffers; partitioning quarterly will keep me under that for the next 20 years or so. Each partition is a bit under 2 GB. I ran EXPLAIN PARTITIONS and everything seems to be pruning properly, so regardless, I feel the partitioning was a good step, at the very least for analyze/optimize/repair purposes. I spent a good deal of time with this article: http://ftp.nchu.edu.tw/MySQL/tech-resources/articles/testing-partitions-large-db.html

My table is currently partitioned with the primary key still on it. The article mentions that primary keys can make a partitioned table slower, but if you have a machine that can handle it, primary keys on a partitioned table will be faster. Knowing I have a big machine on the way (256 GB RAM), I left the keys on. So, as I see it, my options are:

Option 1: Remove the extra 200 tables and let queries do table scans to find the f1, f2, etc. values. Non-unique indexes can actually hurt performance on a properly partitioned table. Run an EXPLAIN before the user runs the query and deny it if the number of rows scanned is over some threshold I define. Save myself the pain of the giant database. Heck, it will all be in memory soon anyway. Sub-question: does it sound like I have chosen an appropriate partition scheme?

Option 2: Partition all 200 tables using the same 3-month scheme. Enjoy the smaller row scans and allow users to run larger queries. Now that the tables are partitioned, at least I can manage them one partition at a time for maintenance purposes. Heck, it will all be in memory soon anyway. Develop an efficient way to update them nightly. Sub-question: do you see a reason to avoid primary key indexes on these f1, f2, f3, f4, ... tables, knowing that I always have assetid and date when querying? That seems counter-intuitive to me, but I am not used to data sets of this size. It would shrink the database a bunch, I assume.

Option 3: Drop the f1, f2, f3 columns from the master table to reclaim that space, and do 200 joins if I need to read 200 features; maybe it won't be as slow as it sounds.

Option 4: You all have a better way to structure this than I have thought of so far.

NOTE: I will soon be adding another 50-100 of these double values to each item, so I need to design knowing that is coming. Thanks for any and all help.

Update #1 - 3/24/2013: I went with the idea suggested in the comments and created one new table holding (assetid, date, feature, value) rows. I partitioned it in 3-month intervals, blew away the earlier 200 tables so that my database was back down to 45 GB, and started filling up the new table. A day and a half later it completed, and my database now sits at a chubby 220 GB! It does allow the possibility of removing these 200 values from the master table, as I can get them from one join, but that would really only give me back 25 GB or so. I asked it to create a primary key on (assetid, date, feature) and an index on value, and after 9 hours of chugging it really hadn't made a dent and seemed to freeze up, so I killed that part off. I rebuilt a couple of the partitions, but it did not seem to reclaim much, if any, space. So that solution looks like it probably isn't going to be ideal. Do rows take up significantly more space than columns, I wonder? Could that be why this solution took up so much more space? I came across this article: http://www.chrismoos.com/2010/01/31/mysql-partitioning-tables-with-millions-of-rows — it gave me an idea. Where he says "At first, I thought about RANGE partitioning by date, and while I am using the date in my queries, it is very common for a query to have a very large date range, and that means it could easily span all partitions": I am range partitioning by date as well, and will also be allowing searches by large date range, which will decrease the effectiveness of my partitioning. However, I will always have a date range when I search, and I will also always have a list of assetids. Perhaps my solution should be to partition by assetid and date, where I identify typically searched assetid ranges (which I can come up with; there are standard lists: S&P 500, Russell 2000, etc.). This way I would almost never look at the entire data set. Any more thoughts/comments would be appreciated. Thanks.
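For reference, the shape of the new table described in the update, reconstructed as a sketch (column types and partition boundaries are illustrative; only the column roles come from the question):

```sql
CREATE TABLE asset_features (
    assetid INT      NOT NULL,
    date    DATE     NOT NULL,
    feature SMALLINT NOT NULL,   -- 1..200, identifies f1..f200
    value   DOUBLE   NOT NULL,
    PRIMARY KEY (assetid, date, feature)
) ENGINE=InnoDB
PARTITION BY RANGE (TO_DAYS(date)) (
    PARTITION p2013q1 VALUES LESS THAN (TO_DAYS('2013-04-01')),
    PARTITION p2013q2 VALUES LESS THAN (TO_DAYS('2013-07-01')),
    PARTITION pmax    VALUES LESS THAN MAXVALUE
);
```

The size blow-up is expected with this shape: each of the 200 values now carries a repeated (assetid, date, feature) key plus per-row InnoDB overhead, where the wide-table layout paid for the key once per row.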
SQL Server sync databases between Enterprise and Web editions Posted: 24 Mar 2013 11:14 AM PDT We have an Enterprise edition of SQL Server 2012. We need to synchronize its data to a Web edition database. What would be the best option to do this? Further, we need to provide high availability on the website, so the plan is to use two identical databases on the Web edition, only one of which is active at a time. Periodically the Enterprise instance needs to push the data to the inactive database, make it active, and then sync the same data to the other one. What would be the best solution to achieve the above?
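One option that fits the active/inactive pattern is log shipping (or a scripted equivalent). A minimal sketch with hypothetical names and paths; the standby stays in NORECOVERY until the switchover:

```sql
-- One-time seed on the web server:
RESTORE DATABASE SalesDB_B
FROM DISK = N'\\share\ship\SalesDB_full.bak'
WITH NORECOVERY,
     MOVE N'SalesDB'     TO N'D:\Data\SalesDB_B.mdf',
     MOVE N'SalesDB_log' TO N'D:\Data\SalesDB_B.ldf';

-- Repeating cycle: ship log backups from the Enterprise instance ...
BACKUP LOG SalesDB TO DISK = N'\\share\ship\SalesDB_001.trn';
-- ... and restore them on the inactive copy, still NORECOVERY.
RESTORE LOG SalesDB_B FROM DISK = N'\\share\ship\SalesDB_001.trn' WITH NORECOVERY;

-- At switchover, bring the standby online (it then stops receiving logs).
RESTORE DATABASE SalesDB_B WITH RECOVERY;
```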
SQL Server in a state of confusion: does the database exist or doesn't it? Posted: 24 Mar 2013 08:43 PM PDT I've got a really weird, annoying problem. Somehow the instance of SQL Server 2008 R2 running on our server has become somewhat corrupted. First, we noticed that the database we created yesterday was missing. So we looked around and found that it was still there, but detached. So we tried to attach the mdf but got a message suggesting the database already existed. I thought that was odd, so I restarted SQL Server. Same thing... okay, time for drastic measures... so I stopped the service, zipped up the mdf, started the service, unzipped the copy, and then tried to restore it. The earlier message was gone, but then I got:

Ouch. Of course it's not showing in Object Explorer, so I have no idea what's going on... and my last resort didn't work either: SQL Server tells me the database does not exist. So I'm stuck: at one point SQL Server thinks the database exists and at another point it thinks it doesn't. It's obviously in a state of confusion. Has anyone seen this before? Any ideas on how to fix it?
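Two things worth trying, sketched with a hypothetical database name: check what the instance's metadata actually says, and, if the database is genuinely absent from it, attach the copied mdf under a new name to sidestep the stale entry:

```sql
-- What does the instance believe right now?
SELECT name, state_desc FROM sys.databases WHERE name = N'MyDb';

-- If it is absent, attach the preserved mdf under a fresh name;
-- ATTACH_REBUILD_LOG copes with a missing or invalid log file.
CREATE DATABASE MyDb_recovered
ON (FILENAME = N'C:\Data\MyDb.mdf')
FOR ATTACH_REBUILD_LOG;
```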
MySQL data too long error Posted: 24 Mar 2013 04:34 PM PDT One of the columns in my table was initially created as varchar(1000). Now there is a need to increase the size of that column to hold around 5000 characters. I used an ALTER statement to widen it, but it fails with a "data too long" error. I created another dummy table with a column of varchar(5000) and that works just fine. The engine used is InnoDB and the default charset is utf8.
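The statement itself is a one-liner, so the interesting part is why it fails on this table and not the dummy one. A sketch with hypothetical names, plus two checks that usually narrow it down (the sql_mode in effect and the widest value currently stored):

```sql
ALTER TABLE my_table MODIFY my_column VARCHAR(5000) CHARACTER SET utf8;

SELECT @@sql_mode;                                 -- strict modes turn warnings into errors
SELECT MAX(CHAR_LENGTH(my_column)) FROM my_table;  -- widest existing value
```

One thing to keep in view: in utf8, varchar(5000) reserves up to 15000 bytes of the 65535-byte row limit, so other wide columns on the same table can push the ALTER over the edge even when a single-column dummy table works.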
Postgres TupleTable like data structure Posted: 24 Mar 2013 10:10 AM PDT I'm writing my own TupleTable-like data structure. I carefully examined the implementation of the standard TupleTable.
Consequences of using "innodb_flush_method = O_DIRECT" without a battery-backed write cache, or on a KVM guest? Posted: 24 Mar 2013 12:51 PM PDT MySQL 5.5.29, InnoDB, 128 GB RAM, 32 cores, RAID 10 SSD. Our server, a dedicated KVM guest on bare metal, hosts our heavy read-write DB. Everything is file-per-table. innodb_buffer_pool_size is 96 GB with 2 x 1 GB log files, and about 20 minutes of writes fill those logs at peak time. How bad would it be if O_DIRECT (we currently run the default) were enabled during a high workload without a battery-backed write cache, and a total crash occurred on the OS or parent host, or the power was cut? Does a battery-backed write cache make a difference anyway if the server is a VM guest of the parent?
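For reference, the setting under discussion as a my.cnf sketch; whether it is safe here depends entirely on what the virtualised storage stack does with flush requests:

```ini
[mysqld]
# O_DIRECT bypasses the OS page cache for data files; InnoDB still issues
# fsyncs, so durability after a crash or power cut comes down to whether
# the device honours them. A battery-backed controller cache can ack
# flushes safely; a KVM layer that ignores or fakes them cannot.
innodb_flush_method = O_DIRECT
```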
MySQL Dump configuration file Posted: 24 Mar 2013 03:51 PM PDT I modified the configuration file. I found that if I give the option ...
TMD auto create Posted: 24 Mar 2013 10:51 AM PDT I used MySQL as part of a project and found it to be a very practical tool. I have a question about how MySQL creates temporary tables of type TMD. I do not know what mechanism creates such a table automatically, but I'd like this feature to be disabled, or at least to configure MySQL to put the temporary tables in a different location.
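The automatic creation of these temporary files cannot be disabled outright, but their location can be moved. A my.cnf sketch with a hypothetical path; this is the closest available knob, so treat it as an assumption about what is wanted:

```ini
[mysqld]
# Directory used for the server's temporary files and on-disk temp tables.
tmpdir = /mnt/scratch/mysql_tmp
```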
How can I set a default session sql_mode for a given user? Posted: 24 Mar 2013 11:15 AM PDT I want to have a default session sql_mode that is different from the global sql_mode. I can set it manually, but I'd like it to be set automatically. I thought I could add it to the configuration, yet when I connect, the session still inherits the global sql_mode. I'm using MySQL 5.5.16.
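One standard way to get a per-connection default that differs from the global is init_connect, which runs for every connection from a user without the SUPER privilege (the mode value here is illustrative):

```sql
SET GLOBAL init_connect = "SET SESSION sql_mode = 'STRICT_ALL_TABLES'";
```

Since SUPER users skip init_connect, administrative sessions keep the global default; this matches the per-user intent of the question only if the target users are non-SUPER.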
SQL Server 2008 R2 Restore a back up to a new database Posted: 24 Mar 2013 09:02 AM PDT I have a full backup of a database. I want to update some records in some tables of the current database with values from this old backup, without affecting other records. What is the best way to do this? Basically, I want to get some records from the old backup and apply the old values to the current database without doing a restore over it. What are my options?
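A side-by-side restore avoids touching the live database at all: restore the backup under a new name, then copy just the wanted values across with a keyed UPDATE. A minimal sketch; all names, logical file names, and paths are hypothetical:

```sql
RESTORE DATABASE MyDb_Old
FROM DISK = N'C:\Backups\MyDb_full.bak'
WITH MOVE N'MyDb'     TO N'C:\Data\MyDb_Old.mdf',
     MOVE N'MyDb_log' TO N'C:\Data\MyDb_Old.ldf';

UPDATE cur
SET    cur.SomeColumn = old.SomeColumn
FROM   MyDb.dbo.SomeTable     AS cur
JOIN   MyDb_Old.dbo.SomeTable AS old ON old.Id = cur.Id
WHERE  cur.Id IN (1, 2, 3);   -- whichever rows need the old values back
```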
Cleaning up transaction replication del/ins/upd stored procedures? Posted: 24 Mar 2013 01:51 PM PDT I'm referring to the ones that start with dbo.sp_MSdel... I've noticed they are not removed when I remove the subscription. Do these have to be deleted manually if I want them removed? The subscriber is SQL 2000.
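If they survive the subscription teardown, removing them by hand is safe once nothing references them; listing them first keeps the cleanup honest (pattern from the question, extended to the ins/upd variants; SQL 2000 catalog syntax):

```sql
SELECT name
FROM   sysobjects
WHERE  type = 'P'
AND   (name LIKE 'sp[_]MSdel%'
    OR name LIKE 'sp[_]MSins%'
    OR name LIKE 'sp[_]MSupd%');
```

Each one can then be dropped with DROP PROCEDURE once you have confirmed no remaining subscription uses it.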
Users cannot view tables in non-default schema in SSMS Posted: 24 Mar 2013 05:29 PM PDT I'm having an issue granting a user the ability to see tables in a non-default schema. I tried granting permission to view definitions; that didn't work. I tried setting a table-level permission; that also didn't work. Then I tried a blanket grant, and that did work: they can now see the tables. My goal here is to allow the user to view all tables within a given schema. How do I accomplish that? If I should be able to do this by default, what permissions should I be looking at to find out why I can't?
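The original GRANT statements were lost, but the schema-scoped form is the one that matches the goal: it covers every table in the schema at once, and VIEW DEFINITION is what makes the objects show up in SSMS's metadata views. Schema and user names are hypothetical:

```sql
GRANT SELECT          ON SCHEMA::reporting TO some_user;
GRANT VIEW DEFINITION ON SCHEMA::reporting TO some_user;
```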
What are the differences between SQL I/O Sim and SQL IO tools? Posted: 24 Mar 2013 06:58 PM PDT I want to test my different SQL Server storage alternatives with a benchmark tool. I found two tools by Microsoft: SQLIOSim and SQLIO. What are the differences between them? Are they the same thing? Thanks!
SELECT TOP in MySQL Posted: 24 Mar 2013 04:16 PM PDT How can I do this in MySQL? What's the simplest way to do it without running two separate queries (if possible)?
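Assuming the lost snippet was T-SQL's SELECT TOP, the MySQL equivalent is LIMIT, which needs only the one query (table and column names hypothetical):

```sql
SELECT *
FROM   scores
ORDER  BY score DESC
LIMIT  10;                 -- the TOP 10 rows

-- With an offset, e.g. rows 11-20:
SELECT * FROM scores ORDER BY score DESC LIMIT 10 OFFSET 10;
```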