[how to] Is SQL Server data compression categorically good for read-only databases?
- Is SQL Server data compression categorically good for read-only databases?
- What are the different ways to keep track of active and archived data?
- Deployment Manager Vs DBA Cage Match On Pushing Releases To Prod
- mysql cloned server with better hardware has worse performance
- Is there a query to set a table's engine to InnoDB if it's not already?
- Ghost replication jobs in replication monitor
- When exporting a table to Excel, double numbers are changed to date type [migrated]
- SSIS Raw Data Import Into SQL Server 2008 R2 - Truncation and Condensed Data Type
- SSIS Data Flow Task Excel to SQL table NULL value will not work for small INT datatype
- Reset every Oracle sequence to the MAX value of its target table PK?
- Determine if existing Trigger acts on field DB2
- Research studies on electronic vs paper record keeping accuracy
- Database stuck in restoring and snapshot unavailable
- Comparing two tables for a UUID change and fix it
- SQL Server 2008: How to send an email when a step in a SQL Server agent job fails, but overall job succeeds
- Why should an application not use the sa account
- Bandwidth comparison between log shipping and transactional replication
- Why upgrade SQL Server 2000?
- Database design - do I need another table?
- event scheduler not called
- For an InnoDB-only DB, which of these elements can be removed?
- Run Multiple Remote Jobs
- FETCH API_CURSOR causing open transaction in tempDB
- Cannot find MySQL Server Config Wizard for MySQL 5.6 build
- How to export a SQL Server 2008 diagram to PDF?
- MySQL Benchmark on FreeBSD and Ubuntu
- What permissions are needed to view dependencies?
Is SQL Server data compression categorically good for read-only databases? Posted: 03 Apr 2013 03:10 PM PDT Some literature on SQL Server data compression that I have read states that the write cost increases to about four times what would normally be required. It also seems to treat this as the primary downside to data compression, strongly implying that for a read-only archive database, performance will (with few exceptions) be improved by using data compression on 100%-filled pages.
For the purposes of this question, you can limit the context to PAGE-level compression of a big (> 1 TB) database, but additional comments are always welcome. References: SQL Server Storage Engine Blog (the DW scenario shows compression to be very advantageous)
Both of the above are demonstrably biased towards recommending page compression for DW-style databases (read-intensive/exclusive, big-data operations). |
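One way to test the claim against your own data, rather than the literature's, is SQL Server's built-in estimator followed by an actual rebuild. A minimal sketch; the table name is a placeholder:

    -- Estimate PAGE-compression savings for one table before committing to it.
    EXEC sp_estimate_data_compression_savings
         @schema_name      = 'dbo',
         @object_name      = 'FactArchive',   -- placeholder table name
         @index_id         = NULL,            -- all indexes
         @partition_number = NULL,            -- all partitions
         @data_compression = 'PAGE';

    -- If the estimate looks good, compress and rebuild the table.
    ALTER TABLE dbo.FactArchive REBUILD WITH (DATA_COMPRESSION = PAGE);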
What are the different ways to keep track of active and archived data? Posted: 03 Apr 2013 06:47 PM PDT I'm looking for different ways to keep track of both active and archived data so I can weigh their pros and cons. The system: I have a computer with a database on it. The database has several tables; one contains a list of users that can use the computer, and several are for auditing (user 1 did this, user 2 did that, etc.). This database is a slave of a master database in which a Content Management System is used to, say, add a new user and see reports on which user did what. Example: As stated above, I have a table (let's call it users) that keeps track of all the users allowed to use the computer. As time goes by, users will be added and removed. The problem is that the audit tables record a user ID, so if the user is removed I lose the user information because the rows can't be joined. One idea I had was to use MySQL triggers so that when a user is added, an insert trigger fires and copies the data to an 'archived' user table (let's call it users_archive); a sketch of this follows below. That way the computer can use users to determine whether a user has permission to use it, and reports can use users_archive. This seems like the easiest and simplest way to do it, but I can't find any other approaches via Google searching to compare it against. |
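A minimal sketch of the trigger idea described above, assuming users and users_archive share an (id, name) structure; the column lists are placeholders to be expanded to the real schema:

    -- Copy every new user into the archive table so audit rows can always be joined.
    DELIMITER //
    CREATE TRIGGER users_after_insert
    AFTER INSERT ON users
    FOR EACH ROW
    BEGIN
        INSERT INTO users_archive (id, name)
        VALUES (NEW.id, NEW.name);
    END//
    DELIMITER ;

Rows in users_archive would then only ever be inserted, never deleted, so historical audit joins keep working after a user is removed from users.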
Deployment Manager Vs DBA Cage Match On Pushing Releases To Prod Posted: 03 Apr 2013 05:01 PM PDT Our company has now hired a full-time release engineer for the MS Windows side, and we are thinking about a release management process for MS SQL Server. If we have confidence in their ability to deploy SQL Server scripts, is it common to just have the release manager do it, or is it something the DBAs typically do? Our enterprise is growing fast and our DBAs are somewhat overloaded (surprise, surprise, what else is new); we can hire more, but that's another matter. How do you manage this? Do you allow a release manager to have access to prod to roll out changes? Do you take away rights and turn them on when they need to release? I'm thinking I will give them access to a sproc that grants them prod access for an hour but logs who calls it; a sketch of this follows below. Or am I totally off, and is this something a DBA should always manage? Any thoughts would be appreciated! Edit/Update: Also, what happens when we encounter anomalies? For example, a dev stated that 'these tables should match this other environment' (by environment I mean a customer prod environment, not qa/stage/etc.). Usually they would do a spot check. I did a checksum and noticed issues, which ended up being just whitespace issues. In a case like this, do we push it back to the release manager/QA person to fix after doing basic troubleshooting? Another example: we have scripts from about 20 developers, and sometimes they depend on each other. The ordering of the scripts was wrong. I can't keep up with 20 developers' work and also manage the data, but after some troubleshooting we discovered the issue and changed the order. Is this something the DBA should typically be deeply involved in, or is it fair, after basic testing and a look over, to send it back to the release manager and devs to fix? |
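The timed-access procedure mentioned above could look something like the following hedged sketch; all names are hypothetical, and the one-hour revocation would be a companion Agent job rather than part of the procedure itself:

    -- Hypothetical sketch: grant release access via a custom role and log the caller.
    CREATE TABLE dbo.ReleaseAccessLog (
        grantee      sysname  NOT NULL,
        requested_by sysname  NOT NULL,
        granted_at   datetime NOT NULL DEFAULT GETDATE()
    );
    GO
    CREATE PROCEDURE dbo.GrantReleaseAccess
        @grantee sysname
    AS
    BEGIN
        SET NOCOUNT ON;
        -- Assumes a custom db_release role holding the deployment permissions.
        EXEC sp_addrolemember @rolename = 'db_release', @membername = @grantee;
        INSERT INTO dbo.ReleaseAccessLog (grantee, requested_by)
        VALUES (@grantee, ORIGINAL_LOGIN());
    END
    GO
    -- A scheduled Agent job would call sp_droprolemember for grants older than an hour.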
mysql cloned server with better hardware has worse performance Posted: 03 Apr 2013 02:16 PM PDT I am intrigued. I have migrated one MySQL server to another machine (both are simple desktop computers), and the new machine has performed worse than the old one. The comparison between old and new is like this:
I have installed pt-config-diff from Percona Toolkit and it shows: For example, this query takes 20 seconds on the old server and 30 seconds on the new one (the tables are the same; MyISAM engine): Am I missing something here? |
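Since the config files were compared, one hedged next step is to compare the runtime variables directly on both servers; for MyISAM workloads, the key cache and per-session buffers are the usual suspects:

    -- Run on both servers and diff the output.
    SHOW GLOBAL VARIABLES
    WHERE Variable_name IN
          ('key_buffer_size', 'read_buffer_size', 'read_rnd_buffer_size',
           'sort_buffer_size', 'table_open_cache', 'query_cache_size');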
Is there a query to set a table's engine to InnoDB if it's not already? Posted: 03 Apr 2013 06:52 PM PDT I can set a (MyISAM) table's engine to InnoDB using the query ALTER TABLE tbl ENGINE=InnoDB, which takes some time to complete for large tables. It appears, though, that if I run the same query again on the same table, it takes some time again (much less, but still considerable). I would expect the second run to be instantaneous, since the engine is already set to InnoDB. My questions are:
|
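One hedged way to make the conversion conditional is to consult information_schema first and emit the ALTER only when it is needed; the schema and table names are placeholders:

    -- Produces the ALTER statement only for tables that are not already InnoDB;
    -- feed the output back to the server or wrap it in a prepared statement.
    SELECT CONCAT('ALTER TABLE `', table_schema, '`.`', table_name,
                  '` ENGINE=InnoDB;') AS stmt
    FROM   information_schema.tables
    WHERE  table_schema = 'mydb'       -- placeholder schema
      AND  table_name   = 'mytable'    -- placeholder table
      AND  engine      <> 'InnoDB';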
Ghost replication jobs in replication monitor Posted: 03 Apr 2013 01:13 PM PDT Has anyone ever seen this issue, and does anyone know its cause? We have transactional replication with pull subscriptions. At 2 am this morning it appeared that the job (which runs every 15 minutes) had stalled out. We stopped and restarted the job; transactions started flowing again and we assumed all was OK. We then noticed that the update rate was really slow, and upon investigation we found that although the job for the pull subscription is definitely stopped, it shows as running in Replication Monitor. We ended up restarting the SQL Agent on the subscription server, and now the speed is back to normal, but we show three running jobs. I'd like to understand this better and determine the cause. We know that restarting the agent got us running again, but there is still obviously something wrong. |
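A hedged way to cross-check what Replication Monitor claims against what SQL Server Agent is actually doing is to query the job activity tables on the subscriber directly:

    -- Jobs Agent currently considers running in the latest Agent session:
    -- a start time but no stop time.
    SELECT j.name, a.start_execution_date
    FROM   msdb.dbo.sysjobactivity AS a
    JOIN   msdb.dbo.sysjobs        AS j ON j.job_id = a.job_id
    WHERE  a.session_id = (SELECT MAX(session_id) FROM msdb.dbo.syssessions)
      AND  a.start_execution_date IS NOT NULL
      AND  a.stop_execution_date  IS NULL;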
When exporting a table to Excel, double numbers are changed to date type [migrated] Posted: 03 Apr 2013 12:06 PM PDT I have a table column whose type is decimal, and it stores double numbers. I want to export this table to Excel, so I export the table as a CSV file. But when I open the CSV file in Excel, the decimal values are displayed as dates. I need them to remain numbers. |
SSIS Raw Data Import Into SQL Server 2008 R2 - Truncation and Condensed Data Type Posted: 03 Apr 2013 11:00 AM PDT [Intro] I have written a two-package SSIS solution to perform a scheduled hourly update of data from multiple tables in a single db (SRC) to a single table on another db (TRG). I get the following errors, and I understand them: [ADO NET Destination [328]] Error: An exception has occurred during data insertion; the message returned from the provider is: * String or binary data would be truncated. * The statement has been terminated. I know that the data from SRC in some columns is too large for the matching column in TRG. For instance, SRC.CompositeAddress is nvarchar(50) and TRG.Addr is char(6) (the same size as its Condensed Data Type), which does not make sense for an address. [Question] My question revolves around not really understanding the Condensed Data Type. Aside from data in SRC being too large for the columns in TRG, I am wondering whether the Condensed Data Type is affecting my SSIS import? |
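Independent of the Condensed Data Type question, a hedged way to enumerate every column where TRG is declared smaller than SRC (and will therefore truncate) is to compare the two INFORMATION_SCHEMA views; this assumes both databases live on the same instance, and the table names are placeholders:

    SELECT s.COLUMN_NAME,
           s.DATA_TYPE AS src_type, s.CHARACTER_MAXIMUM_LENGTH AS src_len,
           t.DATA_TYPE AS trg_type, t.CHARACTER_MAXIMUM_LENGTH AS trg_len
    FROM   SRC.INFORMATION_SCHEMA.COLUMNS AS s
    JOIN   TRG.INFORMATION_SCHEMA.COLUMNS AS t
           ON t.COLUMN_NAME = s.COLUMN_NAME
    WHERE  s.TABLE_NAME = 'SourceTable'     -- placeholder
      AND  t.TABLE_NAME = 'TargetTable'     -- placeholder
      AND  t.CHARACTER_MAXIMUM_LENGTH < s.CHARACTER_MAXIMUM_LENGTH;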
SSIS Data Flow Task Excel to SQL table NULL value will not work for small INT datatype Posted: 03 Apr 2013 11:14 AM PDT I have a package that imports Excel data into a SQL table. One of the columns in the table is of the smallint data type, and it allows nulls. The Excel file column whose data is supposed to go into this column will always contain either a small integer or, in some cases, "NULL". I have noticed that when my source Excel file has a NULL in this particular column, I get the following error
If my source Excel file contains only small integer values, then the package runs fine. I am not sure how I can fix this problem. I have tried changing data types in the physical table as well as using different data types (via the Data Conversion component in SSIS), and I still keep getting the same error message. Does anyone have any ideas for how I can get around this issue? Even though the column in the SQL table is set to allow NULL, I guess it is reading the NULL from the Excel source file as text or a string and not as an actual NULL. Any suggestions? |
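One hedged workaround, sidestepping SSIS type coercion entirely, is to land the Excel column in a varchar staging column and convert the literal string 'NULL' during the final insert; all names are placeholders:

    -- Turn the text 'NULL' coming from Excel into a real NULL, then cast.
    INSERT INTO dbo.Target (small_int_col)
    SELECT CAST(NULLIF(LTRIM(RTRIM(raw_col)), 'NULL') AS smallint)
    FROM   dbo.Staging;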
Reset every Oracle sequence to the MAX value of its target table PK? Posted: 03 Apr 2013 11:44 AM PDT I finally got around to migrating from MySQL to Oracle and was pleasantly surprised by just how good a job the SQL Developer migration tool did. It even took my AUTOINCREMENT fields and created sequences for my tables in their place. It also did a good job of migrating the data. The only real loose end I see is that when it created the sequences, it did not take into account the existing data in the tables; instead, all of the sequences start with a NEXTVAL of 1. I can imagine a simple PL/SQL script that sets the next value of a specific sequence based on the MAX value in the primary key column of a specific table, but then I would have to do this over a hundred times, and I just don't have the patience for that. I wonder if there is some way to write a PL/SQL script that uses the data dictionary to do this dynamically for every table/sequence pair in an entire schema; a sketch follows below. Does anybody have any better ideas for how to handle this? I ran out of interns, BTW. |
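A hedged sketch of such a script, assuming single-column numeric primary keys and a <TABLE>_SEQ naming convention; both assumptions must be adapted to the actual schema:

    DECLARE
      v_max  NUMBER;
      v_next NUMBER;
      v_seq  VARCHAR2(30);
    BEGIN
      FOR pk IN (SELECT c.table_name, cc.column_name
                   FROM user_constraints  c
                   JOIN user_cons_columns cc
                     ON cc.constraint_name = c.constraint_name
                  WHERE c.constraint_type = 'P')
      LOOP
        v_seq := pk.table_name || '_SEQ';   -- naming-convention assumption

        -- Only touch tables that actually have a matching sequence.
        FOR s IN (SELECT sequence_name FROM user_sequences
                   WHERE sequence_name = v_seq)
        LOOP
          EXECUTE IMMEDIATE
            'SELECT NVL(MAX(' || pk.column_name || '), 0) FROM ' || pk.table_name
            INTO v_max;
          EXECUTE IMMEDIATE
            'SELECT ' || v_seq || '.NEXTVAL FROM dual' INTO v_next;

          IF v_max > v_next THEN
            -- Jump the sequence forward in one step, then restore the increment.
            EXECUTE IMMEDIATE 'ALTER SEQUENCE ' || v_seq ||
                              ' INCREMENT BY ' || (v_max - v_next);
            EXECUTE IMMEDIATE 'SELECT ' || v_seq || '.NEXTVAL FROM dual' INTO v_next;
            EXECUTE IMMEDIATE 'ALTER SEQUENCE ' || v_seq || ' INCREMENT BY 1';
          END IF;
        END LOOP;
      END LOOP;
    END;
    /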
Determine if existing Trigger acts on field DB2 Posted: 03 Apr 2013 12:38 PM PDT I am eventually going to be writing a trigger on a field named "user2". Currently most of the values in this field are blank, but there are some seemingly random rows with a value of '00' in the field. What I was wondering is: is there any way to determine whether an existing trigger is adding these values to the field? If not, what are some methods I can use to figure out how this data is getting there? Thanks, Andrew |
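For the first part, DB2 LUW keeps trigger definitions in the catalog, so a hedged starting point is to list any trigger on the table whose body mentions the column; the schema and table names are placeholders, and the text search is case-sensitive:

    SELECT trigname, trigtime, trigevent
    FROM   syscat.triggers
    WHERE  tabschema = 'MYSCHEMA'       -- placeholder
      AND  tabname   = 'MYTABLE'        -- placeholder
      AND  LOCATE('user2', text) > 0;   -- trigger body mentions the column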
Research studies on electronic vs paper record keeping accuracy Posted: 03 Apr 2013 11:17 AM PDT Does anyone know of any research studies testing whether electronic or paper record keeping is more accurate than the other? Obviously electronic record keeping is more efficient, but efficient doesn't necessarily mean more accurate at recording information. I have a client that is holding on dearly to their old paper system because of perceptions about data accuracy. I'd love to be able to point to a research study that addresses which way of doing things is more accurate. |
Database stuck in restoring and snapshot unavailable Posted: 03 Apr 2013 11:00 AM PDT I tried to restore my database from a snapshot. This took around a minute to complete the last couple of times. When I did it today, it had not completed after around 30 minutes and the SPID was in a suspended state. I stopped the query, and now my database is stuck in the restoring state and my snapshot is unavailable. Am I screwed? |
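If no further restore operations are pending, manually recovering the database usually brings it out of the restoring state; a minimal sketch, with the database name as a placeholder:

    -- Bring the database online without applying anything further.
    RESTORE DATABASE MyDB WITH RECOVERY;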
Comparing two tables for a UUID change and fix it Posted: 03 Apr 2013 01:26 PM PDT I have two tables whose differences I'm trying to reconcile in PostgreSQL. Table A is old and needs updating. Table B is an updated, schema-identical version of Table A, whose data I have in a temporary table in Table A's database. Unfortunately, some time after the two databases diverged, someone changed the UUIDs of records in Table B, and I need Table A to match Table B. The schema for both tables is: I need to search through Table A and Table B, match records based on template_folder_uuid_parent and heading, and then set the UUID of the Table A record to the UUID from Table B. Once changed in Table A, the UUID will cascade correctly. |
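A hedged sketch of the reconciling update, assuming the key column is literally named uuid (the posted schema was lost, so adjust the table and column names):

    UPDATE table_a AS a
    SET    uuid = b.uuid
    FROM   table_b AS b
    WHERE  a.template_folder_uuid_parent = b.template_folder_uuid_parent
      AND  a.heading                     = b.heading;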
SQL Server 2008: How to send an email when a step in a SQL Server agent job fails, but overall job succeeds Posted: 03 Apr 2013 02:34 PM PDT I have a SQL Server job with six steps. Steps 5 and 6 must run regardless of any failures in the first four steps, so those four steps are set to skip to step 5 on failure. However, if steps 5 and 6 then succeed, the whole job is regarded as a success. I have an email notification set up for job failure, but I do not receive an email if any of the first four steps fail, because the overall job is considered a success. I would like to receive one. It would not be ideal to split the first four steps into a separate job, as they must complete before steps 5 and 6 begin. Please can anyone give me advice to solve this problem so that:
Thanks very much in advance for your help. |
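One hedged workaround is to add a final step that inspects the current run's history and sends the email itself when an earlier step failed; the job name, mail profile, and recipient are placeholders, and Database Mail is assumed to be configured:

    -- Run as the last job step: warn if any step failed in today's runs.
    DECLARE @failed int;

    SELECT @failed = COUNT(*)
    FROM   msdb.dbo.sysjobhistory AS h
    JOIN   msdb.dbo.sysjobs       AS j ON j.job_id = h.job_id
    WHERE  j.name       = 'MyJob'              -- placeholder
      AND  h.step_id    > 0                    -- skip the job-outcome row
      AND  h.run_status = 0                    -- 0 = failed
      AND  h.run_date   = CONVERT(int, CONVERT(char(8), GETDATE(), 112));

    IF @failed > 0
        EXEC msdb.dbo.sp_send_dbmail
             @profile_name = 'MailProfile',    -- placeholder
             @recipients   = 'dba@example.com',-- placeholder
             @subject      = 'MyJob: a step failed although the job succeeded',
             @body         = 'At least one step failed in today''s run.';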
Why should an application not use the sa account Posted: 03 Apr 2013 11:25 AM PDT My first question ever, please be gentle. I understand that the sa account enables complete control over a SQL Server instance and all its databases, users, permissions, etc. I have an absolute belief that applications should not use the sa password without a well-argued, business-focused reason why. Answers to this question include a lot of my reasoning for an IT-focused discussion. I am being forced into accepting a new service management system that WILL NOT work unless it uses the sa password. I never had time to work out why when setting up an evaluation, but the server team tried to install it to use a fixed role I had set up incorporating dbcreator and other permissions I thought it would require, which failed. I then let the server team install with the sa account but run under an account in the db_owner role for its database, but that failed too. Grumpily, I tried to get it to run with an account in the sysadmin role, but even that failed, and not with useful error messages that would let me work out what was going on without spending more time than I had available. It will only work with the sa account, with the password stored in clear text in the config file. When I queried this and the server team talked to the vendor, they got the worrying answer of 'What's the problem with that?' and then 'Well, we can look at scrambling the password.' Scrambling, ffs. I know that there are ways and means to restrict access to the file, but it is just another weakness in the security, in my opinion. Anyway, my question is: could someone point me at some documentation that I can use to explain to the business why this is a bad thing and should be a big no-no? I work in a field that means I need to take security seriously. I have been struggling to make the business understand, and I may ultimately be out-ranked anyway, but I need to try. |
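For contrast while making the case, the least-privilege alternative is short enough to show the vendor; a hedged sketch with placeholder names, and with the role choice purely illustrative:

    -- The application gets its own login with data access only in its own database.
    CREATE LOGIN app_svc WITH PASSWORD = 'ChangeMe!2013';   -- placeholder password
    GO
    USE AppDb;                                              -- placeholder database
    CREATE USER app_svc FOR LOGIN app_svc;
    EXEC sp_addrolemember 'db_datareader', 'app_svc';
    EXEC sp_addrolemember 'db_datawriter', 'app_svc';
    -- Anything beyond this (DDL, server-level rights) is granted explicitly and
    -- justified, rather than inherited wholesale from sa/sysadmin.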
Bandwidth comparison between log shipping and transactional replication Posted: 03 Apr 2013 12:15 PM PDT Which technique uses more network bandwidth:
Can someone share any benchmarks for this? What would be the memory and I/O impact on the primary server under each technique? Thanks, Piyush Patel |
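No canonical benchmarks seem to exist; one hedged way to measure the log-shipping side on your own system is to total the log-backup bytes actually shipped per day (the database name is a placeholder, and CONVERT(date, ...) needs SQL Server 2008 or later):

    SELECT CONVERT(date, backup_start_date)  AS day,
           SUM(backup_size) / 1048576.0      AS log_backup_mb
    FROM   msdb.dbo.backupset
    WHERE  database_name = 'MyDB'            -- placeholder
      AND  type = 'L'                        -- log backups only
    GROUP  BY CONVERT(date, backup_start_date)
    ORDER  BY day;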
Why upgrade SQL Server 2000? Posted: 03 Apr 2013 02:23 PM PDT I am responsible for a large legacy application, mostly written in classic ASP, running against SQL Server 2000. I have been considering upgrading to a newer version, but most of the information I've found online has been very general. Mostly, I've read that we'll benefit from performance gains, better security, and new features. Can anyone lay out any specific reasons why we should upgrade? I have reviewed the upgrade process and the lists of breaking changes, behavioral changes, etc., and do not anticipate any problems with upgrading other than price. However, despite seeing many recommendations to upgrade, I've yet to see any concrete reasons other than performance (not an issue for us). |
Database design - do I need another table? Posted: 03 Apr 2013 05:42 PM PDT I am trying to make a database that follows a form the company uses. When a client walks in, the members of staff fill in the form and the data is recorded. The form on paper is 10 pages long. The first time a client is seen, the entire form is filled in and the client gets a clientID. I have split the form into sections that make sense, like accommodation and employment. I know I can link these tables together with the clientID; that's the simple part. Now, when a client returns, the form comes out again, but this time only certain parts are filled in, whatever the client's needs are. The records for most parts don't need updating, but a new record needs inserting. What would be the best way around this? At the moment I have, for example, a table called client with an id and name, another table called accommodation with clientid and address, and another table called employment with clientid and employer. But how do I go about it when a client comes in to let us know he has a new employer? I can't update the current record, as that is still needed, but I would need to add a new record for the client. Would this mean I would have to add a lookup table for all my current tables? |
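The usual pattern for "new employer, but keep the old record" is a history table keyed by client and date, inserted into rather than updated; a hedged MySQL-flavoured sketch with placeholder names:

    -- Every change becomes a new row; the newest row per client is current.
    CREATE TABLE employment_history (
        id          INT AUTO_INCREMENT PRIMARY KEY,
        client_id   INT  NOT NULL,
        employer    VARCHAR(100),
        recorded_on DATE NOT NULL,
        FOREIGN KEY (client_id) REFERENCES client (id)   -- assumes client(id)
    );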
event scheduler not called Posted: 03 Apr 2013 01:01 PM PDT I had created an event in the event scheduler, which looks like this: This event was not called on the 1st of the month. So I tried: does a NULL in the db column mean that no database is assigned to it? Please help me solve this. |
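The event definition itself was lost above, but the most common cause of events silently not firing is worth checking first; a minimal sketch:

    -- Events never fire unless the scheduler is ON.
    SHOW VARIABLES LIKE 'event_scheduler';
    SET GLOBAL event_scheduler = ON;   -- needs SUPER; persist it in my.cnf too
    SHOW EVENTS;                       -- confirm the event exists and its schedule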
For an InnoDB-only DB, which of these elements can be removed? Posted: 03 Apr 2013 11:36 AM PDT So, I'm trying to set up a Drupal 7 my.cnf file that combines best practices from various performance blogs. I'm realizing, though, that some of them are older than others, and many don't assume InnoDB. So, of this list of configs, which are irrelevant if you're building for InnoDB? |
Run Multiple Remote Jobs Posted: 03 Apr 2013 04:01 PM PDT I need to manually run a job on more than 150 SQL Server instances (SQL Server 2000, remote) from a SQL Server 2005 instance (the local server). The job is the same on all these instances; it just calls a stored procedure without parameters, which is also the same across all the instances. These jobs are on a schedule, but now they want me to manually run the job for all the instances, or for specified instances, upon request. What is the best practice for this? I have tried OPENROWSET to call the remote stored procedure, but each run of the job takes a couple of minutes, so if I use a loop to run all these jobs, they will run one by one, and that's a long time. Ideally, it should run the stored procedure on each instance without waiting for it to finish. More ideally, it should run the job on each instance without waiting for it to finish, so it leaves a record in the job history on each instance. And the stored procedure is from a third party, so it can't be altered. Update: since the 'people' ask for this to be initiated from an SSRS report, using SSRS to call some T-SQL or a proc on the server would be most appropriate. The problem I have now is that when calling msdb.dbo.sp_start_job on remote SQL Server 2000 instances from the local server using OPENQUERY or OPENROWSET, I get the following error: Cannot process the object "exec msdb.dbo.sp_start_job @job_name = 'xxx'". The OLE DB provider "SQLNCLI" for linked server "xxx" indicates that either the object has no columns or the current user does not have permissions on that object. I guess this may be because sp_start_job doesn't return anything, since I can use OPENQUERY/OPENROWSET to call other remote procs without problems. So, any workaround? Update: I have found it is actually pretty simple in T-SQL: EXEC [linkedServerName].msdb.dbo.sp_start_job @job_name = 'test2'. So I don't need to use OPENROWSET/OPENQUERY at all, since all the remote SQL Server 2000 instances are already added as remote servers; a sketch that loops over them follows below. |
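A hedged sketch of that loop; because sp_start_job returns as soon as the job starts, the loop does not wait for each instance to finish (the server list source is a placeholder):

    DECLARE @srv sysname, @sql nvarchar(500);

    DECLARE c CURSOR LOCAL FAST_FORWARD FOR
        SELECT name FROM sys.servers WHERE is_linked = 1;   -- or your own list table

    OPEN c;
    FETCH NEXT FROM c INTO @srv;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- sp_start_job is asynchronous: it returns once the job has started.
        SET @sql = N'EXEC [' + @srv + N'].msdb.dbo.sp_start_job @job_name = N''test2''';
        EXEC (@sql);
        FETCH NEXT FROM c INTO @srv;
    END
    CLOSE c;
    DEALLOCATE c;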
FETCH API_CURSOR causing open transaction in tempDB Posted: 03 Apr 2013 02:59 PM PDT A SELECT statement run from Oracle 11gR1 against SQL Server 2005 using the gateway leaves an open transaction in tempdb. The user is a datareader in database XYZ. I am not sure why the open transaction is in tempdb when the statement run is a SELECT. Has anyone seen this issue before? Thanks in advance, sekhar |
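A hedged starting point for identifying the session that is holding the transaction open (the SPID below is a placeholder for whatever DBCC reports):

    -- Find the oldest open transaction in tempdb, then map it to a session.
    DBCC OPENTRAN ('tempdb');

    SELECT spid, hostname, program_name, open_tran
    FROM   master..sysprocesses
    WHERE  spid = 123;                 -- placeholder: the SPID reported above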
Cannot find MySQL Server Config Wizard for MySQL 5.6 build Posted: 03 Apr 2013 12:36 PM PDT Not sure if this is the right Stack Exchange site for this, but... I am trying to reconfigure my MySQL instance but cannot find the config wizard. I looked here: http://dev.mysql.com/doc/refman/5.0/en/mysql-config-wizard-starting.html As one user pointed out, the config |
How to export a SQL Server 2008 diagram to PDF? Posted: 03 Apr 2013 11:01 AM PDT I want to export my database diagram to PDF or an image format. How can I do this? |
MySQL Benchmark on FreeBSD and Ubuntu Posted: 03 Apr 2013 01:59 PM PDT I'd like to benchmark two DB servers; one is running Ubuntu, the other FreeBSD. Unfortunately, we are unable to install SysBench on the FreeBSD OS. Could anyone recommend any alternatives? Or should I look into creating some intense queries based on the schema? |
What permissions are needed to view dependencies? Posted: 03 Apr 2013 11:02 AM PDT My company is moving to a much more restrictive security model in the near future. As part of that, we are changing specific servers to the new model and testing all our existing processes. One of the processes I use a lot utilizes the SMO framework to script out database objects in dependency order so we can run those scripts on another server. The dependency-order piece is very important, as we have a lot of nested objects (think views that reference other views). With the new security model in place, this script stopped working correctly. I tested in SSMS as well (SSMS 2012 against a 2008 R2 instance) and using Even more troubling, if I run I did a bit of research and couldn't find a definitive answer, so I'm hoping someone can help me out: what specific permissions are needed for a user to accurately view dependencies in SQL Server 2008 R2 and/or SQL Server 2012 (we are upgrading soon)? |
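For what it's worth, dependency queries generally return rows only for objects whose metadata the caller is allowed to see, so a hedged first test is to grant VIEW DEFINITION and re-run the SMO script (the user and object names are placeholders):

    -- Metadata visibility is the usual culprit: without VIEW DEFINITION (or
    -- ownership), dependency queries silently omit objects.
    GRANT VIEW DEFINITION TO [restricted_user];                          -- database-wide
    GRANT VIEW DEFINITION ON OBJECT::dbo.SomeView TO [restricted_user]; -- narrower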