Wednesday, April 3, 2013

[how to] Is SQL Server data compression categorically good for read-only databases?

Is SQL Server data compression categorically good for read-only databases?

Posted: 03 Apr 2013 03:10 PM PDT

Some of the literature on SQL Server data compression that I have read states that the write cost increases to about four times what would normally be required. It also seems to imply that this is the primary downside to data compression, strongly implying that for a read-only archive database, performance will (with few exceptions) be improved by using data compression on 100%-filled pages.

  1. Are the statements above true?
  2. What are the primary "variations" between data compression and otherwise (for reading)?

    • "CPU +x%"?
    • "IO -y%"?
    • page split occurrence?
    • tempdb usage?
    • RAM usage?
  3. And for writing?

For the purpose of this question, you can limit the context to PAGE-level compression of a big (> 1TB) database, but additional comments are always welcome.


References:

SQL Server Storage Engine Blog (The DW scenario shows compression to be very advantageous)
Data Compression: Strategy, Capacity Planning and Best Practices

A more detailed approach to deciding what to compress involves analyzing the workload characteristics for each table and index. It is based on the following two metrics:

U: The percentage of update operations on a specific table, index, or partition, relative to total operations on that object. The lower the value of U (that is, the table, index, or partition is infrequently updated), the better candidate it is for page compression.
S: The percentage of scan operations on a table, index, or partition, relative to total operations on that object. The higher the value of S (that is, the table, index, or partition is mostly scanned), the better candidate it is for page compression.

Both of the above are demonstrably biased towards recommending page compression for DW-style databases (read-intensive/exclusive, big-data operations).
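For reference, the whitepaper derives U and S from sys.dm_db_index_operational_stats. A rough T-SQL sketch along those lines (the denominator below is my approximation of the whitepaper's formula, not a verbatim copy) might look like this:

-- Approximate U (update %) and S (scan %) per index from the operational stats DMV.
SELECT  o.name AS table_name,
        i.name AS index_name,
        s.leaf_update_count * 100.0 /
            NULLIF(s.range_scan_count + s.leaf_insert_count + s.leaf_delete_count
                 + s.leaf_update_count + s.leaf_page_merge_count
                 + s.singleton_lookup_count, 0) AS update_pct_U,
        s.range_scan_count * 100.0 /
            NULLIF(s.range_scan_count + s.leaf_insert_count + s.leaf_delete_count
                 + s.leaf_update_count + s.leaf_page_merge_count
                 + s.singleton_lookup_count, 0) AS scan_pct_S
FROM    sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS s
JOIN    sys.objects AS o ON o.object_id = s.object_id
JOIN    sys.indexes AS i ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE   o.is_ms_shipped = 0;

Low U and high S for an object make it a good page-compression candidate by the whitepaper's criteria.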

What are the different ways to keep track of active and archived data?

Posted: 03 Apr 2013 06:47 PM PDT

I'm looking for different ways to keep track of both active and archived data so I can weigh their pros and cons.

The system: I have a computer with a database on it. The database has several tables: one contains a list of users that are allowed to use the computer, and several others are used for auditing (user 1 did this, user 2 did that, etc.). This database is a slave of a master database, where a Content Management System is used to, say, add a new user and see reports on what user did what.

Example: As stated above, I have a table (let's call it users) that keeps track of all the users that are allowed to use the computer. As time goes by, users will be added and removed. The problem is that the audit tables keep track of a user ID, so if a user is removed I lose the user information because the rows can no longer be joined. One idea I had was to use MySQL's triggers: when a user is added, an insert trigger fires and copies the row to an 'archived' user table (let's call it users_archive). That way the computer can use users to determine whether a user has permission to use it, and reports can use users_archive.

This seems like the simplest way to do it, but a Google search hasn't turned up any other approaches to compare it against.
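For illustration, a minimal sketch of the trigger idea (MySQL; the id and name columns are assumptions, so copy whatever columns the real users table actually has):

-- Hypothetical archive trigger: copy every new user row into users_archive.
DELIMITER $$
CREATE TRIGGER users_after_insert
AFTER INSERT ON users
FOR EACH ROW
BEGIN
    INSERT INTO users_archive (id, name)
    VALUES (NEW.id, NEW.name);
END$$
DELIMITER ;

A matching trigger on UPDATE (or simply never deleting from users_archive) keeps the audit joins working after a user is removed from users.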

Deployment Manager Vs DBA Cage Match On Pushing Releases To Prod

Posted: 03 Apr 2013 05:01 PM PDT

Our company has now hired a full-time release engineer for the MS Windows side, and we are thinking about a release management process for MS SQL Server.

If we have confidence in their ability to deploy SQL Server scripts, is it common to just have the release manager do it, or is it something that the DBAs typically do? Our enterprise is growing fast and our DBAs are somewhat overloaded (surprise, surprise, what else is new); we can hire more, but that's another matter.

How do you guys manage this? Do you allow a release manager to have access to prod to roll out changes? Do you take away rights and turn them on when they need to release? I'm thinking I will give them access to a sproc that grants them prod access for an hour and logs who calls it.
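For what it's worth, a very rough T-SQL sketch of that "temporary access" idea (the role, user, and logging table names are hypothetical, and in practice the procedure would need to be owned by or signed with a principal that is allowed to change role membership):

-- Hypothetical sketch: log the caller, then grant a time-boxed elevated role.
CREATE PROCEDURE dbo.usp_GrantReleaseWindow
WITH EXECUTE AS OWNER
AS
BEGIN
    INSERT INTO dbo.ReleaseAccessLog (GrantedTo, GrantedAt)
    VALUES (ORIGINAL_LOGIN(), SYSDATETIME());

    EXEC sp_addrolemember @rolename = 'db_owner', @membername = 'ReleaseManagerUser';
    -- A scheduled Agent job (not shown) would call sp_droprolemember an hour later.
END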

Or am I totally off, and this is something a DBA should always manage?

Any thoughts would be appreciated!

Edit:

Update: Also, what happens when we encounter anomalies? For example, a dev stated that 'these tables should match this other environment' (by environment I mean a customer prod environment, not qa/stage/etc.). Usually they would do a spot check. I did a checksum and noticed issues, which ended up being just whitespace differences. In a case like this, do we push it back to the release manager/QA person to fix after doing basic troubleshooting?

Another example: we have scripts from about 20 developers, and sometimes they are dependent on each other. The ordering of the scripts was wrong. I can't keep up with 20 developers' work and also manage the data, but after some troubleshooting we discovered the issue and changed the order. Is this something the DBA should typically be deeply involved in, or is it fair, after basic testing and a look over, to send it back to the release manager and devs to fix?

MySQL cloned server with better hardware has worse performance

Posted: 03 Apr 2013 02:16 PM PDT

I am intrigued. I have migrated one MySQL server to another machine (both are simple desktop computers) and the new machine has performed worse than the old one. The old-to-new comparison looks like this:

  • RAM: 2 GB to 4 GB
  • Disk: 77 MB/s to 133 MB/s
  • CPU: Core2Duo E8400 (11969.53 BogoMIPS) to Phenom X2 Dual (13600.96 BogoMIPS)
  • Ubuntu: 10.10 to 12.10
  • MySQL: 5.1 to 5.5 (default distrib package)

I have installed pt-config-diff from percona toolkit and it shows:

root@ametista2-sp:~# pt-config-diff h=localhost h=192.168.11.249 --ask-pass
Enter MySQL password:
Enter MySQL password:
32 config differences
Variable                  ametista2-sp (new)        ametista-sp (old)
========================= ========================= =========================
collation_connection      utf8_unicode_ci           utf8_general_ci
collation_database        utf8_unicode_ci           utf8_general_ci
collation_server          utf8_unicode_ci           utf8_general_ci
completion_type           NO_CHAIN                  0
concurrent_insert         AUTO                      1
datadir                   /dados/mysql-databases/   /var/lib/mysql/
general_log_file          /dados/mysql-databases... /var/lib/mysql/ametist...
hostname                  ametista2-sp              ametista-sp
ignore_builtin_innodb     OFF                       ON
innodb_change_buffering   all                       inserts
innodb_file_format_check  ON                        Barracuda
innodb_version            1.1.8                     1.0.17
log_slow_queries          OFF                       ON
long_query_time           10.000000                 2.000000
max_binlog_cache_size     18446744073709547520      4294963200
max_seeks_for_key         18446744073709551615      4294967295
max_write_lock_count      18446744073709551615      4294967295
myisam_max_sort_file_size 9223372036853727232       2146435072
myisam_mmap_size          18446744073709551615      4294967295
optimizer_switch          index_merge=on,index_m... index_merge=on,index_m...
pid_file                  /var/run/mysqld/mysqld... /var/lib/mysql/ametist...
slow_query_log            OFF                       ON
slow_query_log_file       /dados/mysql-databases... /var/log/mysql/mysql-s...
socket                    /dados/mysql-databases... /var/run/mysqld/mysqld...
sort_buffer_size          2097152                   2097144
sql_auto_is_null          OFF                       ON
sql_slave_skip_counter    0
storage_engine            InnoDB                    MyISAM
table_definition_cache    400                       256
table_open_cache          400                       64
version                   5.5.29-0ubuntu0.12.04.2   5.1.61-0ubuntu0.10.10....
version_compile_machine   x86_64                    i686

For example, this query takes 20 seconds on the old machine and 30 seconds on the new one (the tables are identical, MyISAM engine):

3   17:24:06
SELECT tblEmpresasPOSITIVAS_LEFTJOINCnae.*
FROM   tblEmpresasPOSITIVAS_LEFTJOINCnae
WHERE  (CO_CNPJ_CEI LIKE '00000000%')
ORDER BY CO_CNPJ_CEI ASC
LIMIT 0, 30000
5759 row(s) returned

Am I missing something here?

Is there a query to set a table's engine to innodb if it's not already?

Posted: 03 Apr 2013 06:52 PM PDT

I can set a (MyISAM) table's engine to InnoDB using the query:

alter table tablename engine=InnoDB  

which takes some time to complete for large tables. It appears, though, that if I run the same query again on the same table, it takes some time again (much less, but still considerable). I would expect the second run to be instantaneous, since the engine is already set to InnoDB.

My questions are:

  • Is there a single query that conditionally sets the engine, for example (see the sketch after this list)

    alter table tablename engine=InnoDB <if not already innodb>

  • Why does the second query have such a delay? (out of curiosity mostly)
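A sketch of the conditional approach from the first bullet, using information_schema (MySQL; 'tablename' is a placeholder):

-- Check the current engine first, and only ALTER when it differs.
SELECT ENGINE
FROM   information_schema.TABLES
WHERE  TABLE_SCHEMA = DATABASE()
  AND  TABLE_NAME = 'tablename';

-- Only if the result is not 'InnoDB':
ALTER TABLE tablename ENGINE = InnoDB;

Checking first avoids the rebuild entirely when the table is already InnoDB, which also sidesteps the delay asked about in the second bullet.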

Ghost replication jobs in replication monitor

Posted: 03 Apr 2013 01:13 PM PDT

Has anyone ever seen this issue, and does anyone know its cause? We have transactional replication with pull subscriptions. At 2am this morning it appeared that the job (which runs every 15 minutes) stalled out. We stopped and restarted the job. Transactions started flowing again and we assumed all was OK. We then noticed that the update rate was really slow, and upon investigation we found what is in this image: although the job from the pull subscription is definitely stopped, it shows as running in replication monitor. We ended up restarting the SQL Agent on the subscription server and now the speed is back to normal, but we now show three running jobs. I'd like to understand this better and determine the cause. We know that restarting the agent got us running again, but there is still obviously something wrong.

Bizarre behavior

When exporting a table to Excel, decimal numbers are converted to dates [migrated]

Posted: 03 Apr 2013 12:06 PM PDT

I have a table column whose type is decimal, and it stores numbers like this:

2.3
2.0
3.3
4.2

I want to export this table to Excel, so I export the table as a CSV file.

But when I open the CSV file in Excel, data like 4.2 is displayed as 04.02.2013.

I need 4.2 to be displayed as 4.2 in Excel. How can I do this?

SSIS Raw Data Import Into SQL Server 2008 R2 - Truncation and Condensed Data Type

Posted: 03 Apr 2013 11:00 AM PDT

[Intro] I have written a two-package SSIS solution to perform a scheduled hourly update of data from multiple tables in a single db (SRC) to a single table in another db (TRG).

I get the following errors and I understand them:

[ADO NET Destination [328]] Error: An exception has occurred during data insertion, the message returned from the provider is: * String or binary data would be truncated. * The statement has been terminated.

I know that the data from SRC in some columns is too large for the matching column in TRG. For instance, SRC.CompositeAddress is nvarchar(50) and TRG.Addr is char(6) (the same size as the Condensed Data Type), which does not make sense for an address.

[Question] My question revolves around not really understanding the condensed data type. Aside from the SRC data being too large for the TRG columns, I am wondering whether the condensed data type is affecting my SSIS import.
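As an aside, the offending columns can usually be located by comparing the longest source values against the target definitions. A hedged T-SQL sketch (the table and column names below are placeholders, not the real schema):

-- Longest value currently in the source column...
SELECT MAX(LEN(CompositeAddress)) AS max_source_length
FROM   SRC.dbo.SourceTable;

-- ...versus the declared size of the target column.
SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
FROM   TRG.INFORMATION_SCHEMA.COLUMNS
WHERE  TABLE_NAME = 'TargetTable'
  AND  COLUMN_NAME = 'Addr';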

SSIS Data Flow Task Excel to SQL table NULL value will not work for small INT datatype

Posted: 03 Apr 2013 11:14 AM PDT

I have a package that imports Excel data into a SQL table. One of the columns in the table is of the smallint data type and it allows nulls. The Excel column that feeds this table column will always contain either a small integer or, in some cases, the text "NULL".

I have noticed that when my source excel file has a NULL in this particular column I get the following error

There was an error with input column "Copy of Place of Service" (8582) on input "OLE DB Destination Input" (8459). The column status returned was: "The value could not be converted because of a potential loss of data.".

If my source excel file only contains small int values then the package runs fine. I am not sure how I can fix this problem. I have tried changing data types in the Physical table as well as using different DataTypes (via the DataConversion component in SSIS) and I still keep getting the same error message.

Does anyone have any ideas of how I can get around this issue? Even though the column in the SQL table is checked to allow NULL I guess it is reading the NULL from the Excel source file as text or string and not as an actual NULL. Any suggestions?

Reset every Oracle sequence to the MAX value of its target table PK?

Posted: 03 Apr 2013 11:44 AM PDT

I finally got around to migrating from MySQL to Oracle and was pleasantly surprised just how well of a job the SQLDeveloper Migration tool did. It even took my AUTOINCREMENT fields and created sequences for my tables in their place. It also did a good job of migrating the data as well.

The only real loose end I see is that when it created the sequences, it did not take into account the existing data in the tables, so all of the sequences start with a NEXTVAL of 1.

I can imagine a simple PL/SQL script that sets the next value of a specific sequence based on the MAX value in the primary key column of a specific table, but then I would have to do this over a hundred times, and I just don't have the patience for that.

I wonder if there is some way I can write a PL/SQL script that uses metadata from the data dictionary to do this dynamically for every table/sequence pair in an entire schema? Does anybody have any better ideas on how to handle this? I ran out of interns, BTW.
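To make the idea concrete, here is a rough PL/SQL sketch. It assumes each sequence is named <TABLE>_SEQ and each table has a single-column numeric primary key; adjust the join to whatever convention the migration tool actually used, and test it on a copy first.

-- Bump every <TABLE>_SEQ past the MAX value of its table's primary key.
DECLARE
    v_max  NUMBER;
    v_next NUMBER;
BEGIN
    FOR rec IN (
        SELECT ucc.table_name, ucc.column_name, us.sequence_name
        FROM   user_constraints uc
        JOIN   user_cons_columns ucc ON ucc.constraint_name = uc.constraint_name
        JOIN   user_sequences us     ON us.sequence_name = ucc.table_name || '_SEQ'
        WHERE  uc.constraint_type = 'P'
    ) LOOP
        EXECUTE IMMEDIATE
            'SELECT NVL(MAX(' || rec.column_name || '), 0) FROM ' || rec.table_name
            INTO v_max;
        EXECUTE IMMEDIATE
            'SELECT ' || rec.sequence_name || '.NEXTVAL FROM dual' INTO v_next;
        IF v_max > v_next THEN
            -- Temporarily widen the increment to jump past MAX, then restore it.
            EXECUTE IMMEDIATE 'ALTER SEQUENCE ' || rec.sequence_name ||
                              ' INCREMENT BY ' || (v_max - v_next);
            EXECUTE IMMEDIATE
                'SELECT ' || rec.sequence_name || '.NEXTVAL FROM dual' INTO v_next;
            EXECUTE IMMEDIATE 'ALTER SEQUENCE ' || rec.sequence_name ||
                              ' INCREMENT BY 1';
        END IF;
    END LOOP;
END;
/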

Determine if existing Trigger acts on field DB2

Posted: 03 Apr 2013 12:38 PM PDT

I am eventually going to be writing a trigger on a field named "user2". Currently most of the values in this field are blank, but there are some seemingly random rows with values of '00' for the field.

What I was wondering is: is there any way to determine whether there is an existing trigger that is adding these values to the field? If not, what are some methods I can use to figure out how this data is getting there?
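For what it's worth, on DB2 for Linux/UNIX/Windows the trigger source is stored in the catalog, so it can be searched for the column name (the schema and table names here are placeholders; DB2 for i or z/OS uses different catalog views):

-- Find triggers defined on the table whose body mentions USER2.
SELECT trigschema, trigname, tabschema, tabname
FROM   syscat.triggers
WHERE  tabschema = 'MYSCHEMA'
  AND  tabname   = 'MYTABLE'
  AND  UPPER(text) LIKE '%USER2%';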

Thanks,

Andrew

Research studies on electronic vs paper record keeping accuracy

Posted: 03 Apr 2013 11:17 AM PDT

Does anyone know of any research studies to test whether electronic or paper record keeping is more accurate than the other? Obviously electronic record keeping is more efficient, but efficient doesn't necessarily mean better accuracy in recording information.

I have a client that is holding on dearly to their old paper system because of perceptions about data accuracy. I'd love to be able to point to a research study that addresses which way of doing things is more accurate.

Database stuck in restoring and snapshot unavailable

Posted: 03 Apr 2013 11:00 AM PDT

I tried to restore my database from a snapshot. This usually took around a minute to complete the last couple of times. When I did it today, it didn't complete for around 30 minutes and the spid was in a suspended state. I stopped the query and now my database is stuck in restoring state and my snapshot is unavailable. Am I screwed?

USE master;
RESTORE DATABASE QA FROM DATABASE_SNAPSHOT = 'QA_Snap_Testing';
GO

Comparing two tables for a UUID change and fixing it

Posted: 03 Apr 2013 01:26 PM PDT

I have two tables whose differences I'm trying to reconcile in PostgreSQL.

Table A is old and needs updating.

Table B is an updated, schema identical version of Table A which I have the data for in a temporary table in the database of Table A.

Unfortunately some time after the two databases diverged someone changed the UUIDs of records in table B and I need table A to match table B.

The schema for both tables is:

CREATE TABLE A
(
    uuid VARCHAR(36) NOT NULL,
    template_folder_uuid_parent VARCHAR(36),
    heading VARCHAR(255) NOT NULL,
    image VARCHAR(100),
    downloaded BOOL NOT NULL,
    allow_in TEXT NOT NULL,
    template_group VARCHAR(255) NOT NULL,
    country_filter VARCHAR(1024) NOT NULL,
    user_id INT,
    date_created TIMESTAMP,
    date_modified TIMESTAMP,
    is_modified BOOL NOT NULL,
    image_path VARCHAR(255)
);

I need to search through Table A and Table B and match records based on template_folder_uuid_parent and heading, then set the UUID of the Table A record to the UUID from Table B. Once changed in Table A the UUID will cascade correctly.
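A rough sketch of that update (PostgreSQL; b_import is a placeholder name for the temporary copy of Table B, and it assumes heading plus parent really does identify rows uniquely):

-- Rekey table A from the imported copy of table B, matching on parent + heading.
UPDATE a
SET    uuid = b.uuid
FROM   b_import AS b
WHERE  a.heading = b.heading
  AND  a.template_folder_uuid_parent IS NOT DISTINCT FROM b.template_folder_uuid_parent
  AND  a.uuid <> b.uuid;

IS NOT DISTINCT FROM is used so that rows with a NULL parent still match; run it inside a transaction and check the row count before committing.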

SQL Server 2008: How to send an email when a step in a SQL Server agent job fails, but overall job succeeds

Posted: 03 Apr 2013 02:34 PM PDT

I have a SQL Server job with 6 steps. Steps 5 and 6 must run regardless of any failures in the first four steps, so those first four steps are set to skip to step 5 if they fail.

However, if steps 5 and 6 then succeed, the whole job is regarded as a success. I have an email notification set up for job failure, but I do not receive an email if any of the first four steps fail, because the overall job is considered a success. I would like to receive one.

It would not be ideal to split out the first four steps into a separate job, as they must be completed before steps 5 and 6 begin.

Please can anyone give me advice to solve this problem so that:

  1. Steps 5 and 6 run even when any of steps 1-4 fail.
  2. Steps 5 and 6 begin strictly not before steps 1-4 complete.
  3. When any of steps 1-4 fail, an email notification is sent indicating the step that failed.

Thanks very much in advance for your help.
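One commonly suggested pattern, sketched here with placeholder job, profile, and recipient names rather than as a confirmed fix, is to add a final step that checks msdb for failed steps in today's run of the job and sends the mail itself:

-- Final job step: e-mail if any of steps 1-4 failed earlier in today's run.
IF EXISTS (
    SELECT 1
    FROM   msdb.dbo.sysjobhistory AS h
    JOIN   msdb.dbo.sysjobs AS j ON j.job_id = h.job_id
    WHERE  j.name = 'MyJob'
      AND  h.step_id BETWEEN 1 AND 4
      AND  h.run_status = 0   -- 0 = failed
      AND  h.run_date = CONVERT(int, CONVERT(char(8), GETDATE(), 112))
)
BEGIN
    EXEC msdb.dbo.sp_send_dbmail
         @profile_name = 'MailProfile',
         @recipients   = 'dba@example.com',
         @subject      = 'MyJob: one or more of steps 1-4 failed',
         @body         = 'Check msdb.dbo.sysjobhistory for details.';
END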

Why should an application not use the sa account

Posted: 03 Apr 2013 11:25 AM PDT

My first question ever, please be gentle. I understand that the sa account enables complete control over a SQL Server and all the databases, users, permissions etc.

I have an absolute belief that applications should not use the sa account without a watertight, business-focused reason why. The answers to this question include a lot of my reasoning for an IT-focused discussion.

I am being forced into accepting a new service management system that WILL NOT work unless it uses the sa account. I never had time to work out why when setting up an evaluation, but the server team tried to install it using a fixed role I had set up incorporating dbcreator and the other permissions I thought it would require, which failed. I then let the server team install it with the sa account but run under an account in the db_owner role for its database, but that failed too. Grumpily, I tried to get it to run with an account in the sysadmin role, but even that failed, and not with error messages useful enough to let me work out what was going on without spending more time than I had available. It will only work with the sa account and the password stored in clear text in the config file.

When I queried this and the server team talked to the vendor, they got the worrying answer of 'What's the problem with that?' followed by 'Well, we can look at scrambling the password.' Scrambling, ffs.

I know that there are ways and means to restrict access to the file, but in my opinion it is just another weakness in the security.

Anyway, my question is: could someone point me at some documentation that I can use to explain to the business why this is a bad thing and should be a big no-no? I work in a field where I need to take security seriously, and I have been struggling to make the business understand. Ultimately I may be out-ranked anyway, but I need to try.

Bandwidth comparison between log shipping and transactional replication

Posted: 03 Apr 2013 12:15 PM PDT

Which technique uses more network bandwidth:

  1. Log shipping
  2. Transactional Replication

Can someone share any benchmarks for this?

What would be the memory and I/O impact on the primary server with each of these techniques?

Thanks,

Piyush Patel

Why upgrade SQL Server 2000?

Posted: 03 Apr 2013 02:23 PM PDT

I am responsible for a large legacy application, mostly written in classic ASP, running against SQL Server 2000. I have been considering upgrading to a newer version, but most of the information I've found online has been very general. Mostly, I've read that we'll benefit from performance gains, better security, and new features.

Can anyone lay out any specific reasons why we should upgrade? I have reviewed the upgrade process and the lists of breaking changes, behavioral changes, etc., and do not anticipate any problems with upgrading other than price. However, despite seeing many recommendations to upgrade, I've yet to see any concrete reasons other than performance (which is not an issue for us).

Database design - do I need another table?

Posted: 03 Apr 2013 05:42 PM PDT

I am trying to make a database that follows a form the company uses. When a client walks in, the members of staff have to fill in a form and the data is recorded. The form on paper is 10 pages long. The first time a client is seen, the entire form is filled in and the client gets a clientID.

I have split the form into sections that make sense, like accommodation and employment. I know I can link these tables together with the client's ID. That's the simple part.

Now when a client returns, the form comes out again, but this time only certain parts are filled in, whatever the client's needs are. The records for most parts don't need updating, but a new record needs inserting. What would be the best way around this?

So at the moment I have, for example, a table called client with an id and name, another table called accommodation with clientid and address, and another table called employment with clientid and employer.

But how do I go about it when a client comes in to let us know he has a new employer? I can't update the current record, as that history is needed, but I would need to add a new record for the client.

Would this mean I would have to add a lookup table for all my current tables?
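For illustration only (hypothetical names, sketched in MySQL syntax): one common shape is to make tables like employment append-only history tables with their own key, so a new employer is a new row rather than an update, and the current employer is simply the latest row per client.

-- Employment kept as history; client(id) is the table described in the question.
CREATE TABLE employment (
    employment_id INT AUTO_INCREMENT PRIMARY KEY,
    clientid      INT NOT NULL,
    employer      VARCHAR(255) NOT NULL,
    recorded_on   DATE NOT NULL,
    FOREIGN KEY (clientid) REFERENCES client (id)
);

-- Current employer = most recent row per client.
SELECT e.clientid, e.employer
FROM   employment AS e
JOIN  (SELECT clientid, MAX(recorded_on) AS latest
       FROM   employment
       GROUP BY clientid) AS m
  ON   m.clientid = e.clientid AND m.latest = e.recorded_on;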

event scheduler not called

Posted: 03 Apr 2013 01:01 PM PDT

I created an event which looks like this:

mysql> show create event event1\G
*************************** 1. row ***************************
               Event: event1
            sql_mode:
           time_zone: SYSTEM
        Create Event: CREATE DEFINER=`root`@`localhost` EVENT `event1`
                      ON SCHEDULE EVERY 1 MONTH STARTS '2013-02-02 00:00:00'
                      ON COMPLETION NOT PRESERVE ENABLE
                      DO BEGIN update samp set col1 = col1 + 1; END
character_set_client: utf8
collation_connection: utf8_general_ci
  Database Collation: latin1_swedish_ci
1 row in set (0.00 sec)

This event was not called on the 1st of the month. So I ran show processlist\g to find out whether it is running or not; it showed the following output:

mysql> show processlist\g;
+-----+-----------------+-----------+------+---------+------+-----------------------------+------+
| Id  | User            | Host      | db   | Command | Time | State                       | Info |
+-----+-----------------+-----------+------+---------+------+-----------------------------+------+
| 136 | event_scheduler | localhost | NULL | Daemon  | 1855 | Waiting for next activation | NULL |
+-----+-----------------+-----------+------+---------+------+-----------------------------+------+

So does NULL in the db column mean that no DB is assigned to it?
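For reference, the scheduler state and the event's last execution time can be checked directly (a quick sketch):

-- Is the scheduler actually on, and has the event ever fired?
SHOW VARIABLES LIKE 'event_scheduler';

SELECT EVENT_SCHEMA, EVENT_NAME, STATUS, STARTS, LAST_EXECUTED
FROM   information_schema.EVENTS
WHERE  EVENT_NAME = 'event1';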

Please help me to solve it.

For a InnoDB only DB, which of these elements can be removed?

Posted: 03 Apr 2013 11:36 AM PDT

So, I'm trying to set up a Drupal 7 my.cnf file that combines best practices from various performance blogs. I'm realizing, though, that some of them are older than others, and many don't assume InnoDB. Of this list of settings, which are irrelevant if you're building for InnoDB only?

[client]
port = 3306
socket = /var/run/mysqld/mysqld.sock

[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
nice = 0
open-files-limit = 4096

[mysqld]
port = 3306
user = mysql
default_storage_engine
default-storage-engine = InnoDB
socket = /var/run/mysqld/mysqld.sock
pid_file = /var/run/mysqld/mysqld.pid
basedir = /usr
tmpdir = /tmp
lc-messages-dir = /usr/share/mysql
local-infile = 0
automatic_sp_privileges = 0
safe-user-create = 1
secure-auth = 1
secure-file-priv = /tmp
symbolic-links = 0
key_buffer_size = 32M
myisam-recover = BACKUP,FORCE
concurrent_insert = 2
max_allowed_packet = 16M
max_connect_errors = 1000000
datadir = /var/lib/mysql
tmp_table_size = 64M
max_heap_table_size = 64M
query_cache_type = 1
query_cache_size = 0
query_cache_limit = 8M
query_cache_min_res_unit = 1K
default-storage-engine = InnoDB
thread_stack = 256K
thread_cache_size = 128
max_connections = 128
open_files_limit = 65535
skip-locking
skip-bdb
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
binlog_cache_size = 256K
sync_binlog = 256
expire_logs_days = 14
max_binlog_size = 1G
binlog_do_db = include_database_name
binlog_ignore_db = include_database_name
max_user_connections = 150
key_buffer = 16M
key_cache_block_size = 4K
bulk_insert_buffer_size = 8M
innodb_buffer_pool_size = 6G
myisam_sort_buffer_size = 64M
join_buffer_size = 8M
read_buffer_size = 2M
sort_buffer_size = 3M
read_rnd_buffer_size = 64M
table_cache = 4096
table_definition_cache = 4096
table_open_cache = 16384
optimizer_search_depth = 4
collation-server = utf8_general_ci
interactive_timeout = 400
wait_timeout = 300
connect_timeout = 10
thread_concurrency = 8
back_log = 2048
open-files = 10000
query_prealloc_size = 65536
query_alloc_block_size = 131072

Run Multiple Remote Jobs

Posted: 03 Apr 2013 04:01 PM PDT

I need to manually run a job on more than 150 SQL Server instances (SQL Server 2000, remote) from a SQL Server 2005 instance (the local server). The job is the same on all these instances; it just calls a stored procedure without parameters, which is also the same across all the instances. These jobs are on a schedule, but now they want me to manually run the job on all instances, or on specified instances, upon request.

What is the best practice for this? I have tried openrowset to call the remote stored procedure. But each run of the job takes couple of minutes, so if I use a loop to run all these jobs, it will run one by one and that's a long time. Ideally, it should be able to run the stored procedure on each instance without waiting for it to finish. More ideally, it should be able to run the job on each instance without waiting for it to finish, so it can leave a record in the job history on each instance.

And the stored procedure is from a third party so it can't be altered.

update:

since the 'people' ask for this to be initialised from an SSRS report, using SSRS to call some T-SQL/proc on the server would be most appropriate. The problem I have now is that when calling msdb.dbo.sp_start_job on remote SQL Server 2000 instances from the local server using OPENQUERY or OPENROWSET, I get the following error:

Cannot process the object "exec msdb.dbo.sp_start_job @job_name = 'xxx'". The OLE DB provider "SQLNCLI" for linked server "xxx" indicates that either the object has no columns or the current user does not have permissions on that object.

I guess this may be because sp_start_job doesn't return anything, since I can use OPENQUERY/OPENROWSET to call other remote procs without problems. Is there any workaround?

update:

I have found that it is actually pretty simple in T-SQL:

EXEC [linkedServerName].msdb.dbo.sp_start_job @job_name = 'test2'

So I don't actually need to use OPENROWSET/OPENQUERY, since all the remote SQL Server 2000 instances are already added as remote servers.
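Building on that, a rough T-SQL sketch for starting the same job across many instances without waiting (sp_start_job returns as soon as the job is started; the WHERE clause is an assumption, so adjust it to however the 150 servers are actually registered):

-- Start job 'test2' on every registered linked server, one after another.
DECLARE @srv sysname, @sql nvarchar(500);

DECLARE server_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT name FROM sys.servers WHERE is_linked = 1;

OPEN server_cur;
FETCH NEXT FROM server_cur INTO @srv;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'EXEC [' + @srv + N'].msdb.dbo.sp_start_job @job_name = N''test2'';';
    EXEC (@sql);
    FETCH NEXT FROM server_cur INTO @srv;
END

CLOSE server_cur;
DEALLOCATE server_cur;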

FETCH API_CURSOR causing open transaction in tempDB

Posted: 03 Apr 2013 02:59 PM PDT

A SELECT statement run from Oracle 11gR1 against SQL Server 2005 using Gateway leaves an open transaction in tempdb. The user is a datareader in database XYZ. I am not sure why the open tran is in tempdb when the statement being run is a SELECT.
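For what it's worth, the session that owns the open transaction can usually be identified with a generic diagnostic like the following (a sketch, not anything specific to the Oracle Gateway):

-- Which session is holding the open transaction reported in tempdb?
DBCC OPENTRAN ('tempdb');

SELECT st.session_id, s.login_name, s.host_name, s.program_name
FROM   sys.dm_tran_session_transactions AS st
JOIN   sys.dm_exec_sessions AS s ON s.session_id = st.session_id;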

Has anyone seen this issue before?

Thanks in advance sekhar

Cannot find MySQL Server Config Wizard for MySQL 5.6 build

Posted: 03 Apr 2013 12:36 PM PDT

Not sure if this is the right Stack Exchange site for this but...

I am trying to reconfigure my MySQL instance but cannot find the config wizard. I looked here: http://dev.mysql.com/doc/refman/5.0/en/mysql-config-wizard-starting.html

As one user pointed out, the config .exe file is not included in versions above 5.5. Does anyone know how to reconfigure the MySQL server instance?

How do I export a SQL Server 2008 diagram to a PDF file?

Posted: 03 Apr 2013 11:01 AM PDT

I want to export my database diagram to PDF or an image format. How can I do this?
I am working with SQL Server 2008 R2.

MySQL Benchmark on FreeBSD and Ubuntu

Posted: 03 Apr 2013 01:59 PM PDT

I'd like to benchmark two DB servers; one is running Ubuntu, the other FreeBSD. Unfortunately we are unable to install SysBench on the FreeBSD OS. Could anyone recommend any alternatives? Or should I look into creating some intense queries based on the schema?

What permissions are needed to view dependencies?

Posted: 03 Apr 2013 11:02 AM PDT

My company is moving to a much more restrictive security model in the near future.

As part of that we are changing specific servers to the new model and testing all our existing processes. One of the processes that I use a lot utilizes the SMO framework to script out database objects in dependency order, so we can run those scripts on another server.

The dependency order piece is very important as we have a lot of nested objects (think views that reference other views).

With the new security model in place, this script stopped working correctly.

I tested in SSMS as well (SSMS 2012 against a 2008r2 instance) and using View Dependencies on a view that references another view in the same DB doesn't show the referenced view under Objects on which [this view] depends.

Even more troubling, if I run sp_depends on the view in question I do get an accurate list of dependencies.

I did a bit of research and couldn't find a definitive answer, so I'm hoping someone can help me out:

What specific permissions are needed for a user to accurately view dependencies in SQL Server 2008 R2 and/or SQL Server 2012 (we are upgrading soon)?
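For context, the 2008+ dependency functions can be tested directly under the restricted account; metadata visibility (typically granted via VIEW DEFINITION) is the usual suspect, though that is an assumption to verify rather than a confirmed answer. The view name below is a placeholder:

-- Run as the restricted login: does the nested view appear at all?
SELECT referenced_schema_name, referenced_entity_name
FROM   sys.dm_sql_referenced_entities(N'dbo.MyView', N'OBJECT');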
