Tuesday, April 23, 2013

[how to] Build career as DBA

Build career as DBA

Posted: 23 Apr 2013 09:11 PM PDT

If someone is interested in building her/his career as a DBA, is a programming language required? In other words, is it possible to become a DBA without knowing a programming language? Kindly guide me, because I want to build my career in this field.

one vs two column index difference when doing JOIN query?

Posted: 23 Apr 2013 08:41 PM PDT

Let's suppose that alfa, beta and gamma contain millions of rows, so we obviously need to create indexes to get optimal performance for this query:

    SELECT *
    FROM alfa
    JOIN beta  ON beta.id  = alfa.beta_id
    JOIN gamma ON gamma.id = alfa.gamma_id
    WHERE beta.someattribute = 'X'
      AND gamma.anotherone   = 'Y'
      AND alfa.something     = 'Z'

Now, what is the optimal way to create indexes on the alfa table? Is it better to create one index across two columns (beta_id, gamma_id) or two separate single-column indexes? (I am not sure how the JOIN query is interpreted internally.)

DB : Oracle 11g 11.1.0.7.0
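For reference, the two options being compared might look like the following sketch (the index names are made up; the column names come from the query above):

    -- Option 1: one composite index covering both join columns
    CREATE INDEX alfa_beta_gamma_ix ON alfa (beta_id, gamma_id);

    -- Option 2: two separate single-column indexes
    CREATE INDEX alfa_beta_ix  ON alfa (beta_id);
    CREATE INDEX alfa_gamma_ix ON alfa (gamma_id);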

update statement with self join

Posted: 23 Apr 2013 08:21 PM PDT

I needed to copy information from one row of a table to another. I noticed I could do the following:

    update address
    set col1 = a2.col1,
        col2 = a2.col2,
        .....etc
    from address a1, address a2
    where a1.id = @to and a2.id = @from

However, the above SQL seems kind of ambiguous.
How do I know it's going to update row @to with data from @from and not the other way around?

Edit: While I have not done extensive testing, this query does seem to work when I run it against our development database. It simply updates the one @to row with values from the @from row.
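A minimal sketch of the same update with the target aliased explicitly (same table, columns and variables as above), which makes it unambiguous that the @to row is the one being modified:

    UPDATE a1
    SET    col1 = a2.col1,
           col2 = a2.col2
    FROM   address a1
           JOIN address a2 ON a2.id = @from
    WHERE  a1.id = @to;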

Storing data as rows as opposed to columns

Posted: 23 Apr 2013 03:11 PM PDT

Here's a typical way I would store data (obviously not storing the password in plain text)

    USER TABLE

    | UserID | UserName | FullName | EMail     | Password |
    |--------|----------|----------|-----------|----------|
    | 1      | userAAA  | User Aaa | aa@aa.com | aAaaA    |
    | 2      | userBBB  | User Bbb | bb@bb.com | bBbbB    |
    | 3      | userCCC  | User Ccc | cc@cc.com | cCccC    |

Is there anything wrong with storing it in the following manner?

    USER TABLE                    ATTRIBUTES TABLE

    | UserID | UserName |         | AttributeID | Attribute |
    |--------|----------|         |-------------|-----------|
    | 1      | userAAA  |         | 1           | Full Name |
    | 2      | userBBB  |         | 2           | E-Mail    |
    | 3      | userCCC  |         | 3           | Password  |

    ATTRIBUTE_VALUES TABLE

    | UserID | AttributeID | AttributeValue |
    |--------|-------------|----------------|
    | 1      | 1           | User Aaa       |
    | 1      | 2           | aa@aa.com      |
    | 1      | 3           | aAaaA          |
    | 2      | 1           | User Bbb       |
    | 2      | 2           | bb@bb.com      |
    | 2      | 3           | bBbbB          |
    | 3      | 1           | User Ccc       |
    | 3      | 2           | cc@cc.com      |
    | 3      | 3           | cCccC          |

The huge benefit I see here is the ability to easily add additional attributes to the user. But I really wanted to get another opinion on this.
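For comparison, reading one user back out of the attribute layout might look something like this sketch (assuming the tables are named Users, Attributes and AttributeValues rather than the spaced names shown above):

    SELECT u.UserName,
           MAX(CASE WHEN a.Attribute = 'Full Name' THEN v.AttributeValue END) AS FullName,
           MAX(CASE WHEN a.Attribute = 'E-Mail'    THEN v.AttributeValue END) AS EMail
    FROM   Users u
           JOIN AttributeValues v ON v.UserID = u.UserID
           JOIN Attributes a      ON a.AttributeID = v.AttributeID
    GROUP BY u.UserName;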

Using a table swap for a data load but I have triggers

Posted: 23 Apr 2013 07:01 PM PDT

I'm working on rewriting a table load. The current method is a daily job that deletes from the table, then reloads it. The complaint is that if the load fails, the users have no data. The users have said they are OK if the load fails and they have to use yesterday's data.

My solution is to do a table swap.

  1. Create a staging table that has exactly the same structure, indexes, etc.
  2. Then the daily job does the following (the rename steps are sketched in T-SQL after this list):
    1. Delete from the staging table
    2. Load the staging table
    3. Begin a transaction to keep everyone out of the tables until I'm done
    4. Rename Live to Backup
    5. Rename Staging to Live
    6. Rename Backup to Staging
    7. Commit the transaction
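A minimal T-SQL sketch of steps 3 to 7, assuming sp_rename and the Table1_Live/Table1_Stage names used below:

    BEGIN TRANSACTION;
    EXEC sp_rename 'dbo.Table1_Live',   'Table1_Backup';
    EXEC sp_rename 'dbo.Table1_Stage',  'Table1_Live';
    EXEC sp_rename 'dbo.Table1_Backup', 'Table1_Stage';
    COMMIT TRANSACTION;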

This works fine on 3 of the 5 tables I'm working with. The other 2 have triggers. Because each trigger contains code that references the table name (of course), it can't handle the rename of the object it is attached to.

I start with this:

    Table1_Live
      |
      -- Trigger for Table1_Live

    Table1_Stage
      |
      -- Trigger for Table1_Stage

After I do my renames I get the following

    Table1_Stage
      |
      -- Trigger for Table1_Live

    Table1_Live
      |
      -- Trigger for Table1_Stage

The benefits of this method are that the downtime for the users is minimal, I have yesterday's data, and if anything goes wrong I can roll back to yesterday's data.

My question is: does anyone know a better method that avoids the trigger problem, or a way I can swap my triggers between the two tables so that the code remains correct after the rename?

Oh, and I'm somewhat embarrassed to admit it, but even though the database is SQL Server 2005 (we are upgrading soon), the requirement is that the load be done using a DTS package. My plan is for the rest of the process to be done in a stored procedure that can be called from the DTS package (or SSIS, if I can ever talk them into it).

ALLOW_SNAPSHOT_ISOLATION and READ_COMMITTED_SNAPSHOT

Posted: 23 Apr 2013 06:30 PM PDT

Most forums and examples online suggest having both ALLOW_SNAPSHOT_ISOLATION and READ_COMMITTED_SNAPSHOT set to ON whenever someone asks a question about snapshots, row versioning, or similar topics.

I guess the word SNAPSHOT in both settings gets a little confusing. My understanding is that, in order for the database engine to use row versioning instead of locks for the default READ COMMITTED behavior, only READ_COMMITTED_SNAPSHOT needs to be set to ON, regardless of the ALLOW_SNAPSHOT_ISOLATION setting.

ALLOW_SNAPSHOT_ISOLATION, on the other hand, only needs to be set to ON to allow snapshot isolation when starting a transaction (e.g. SET TRANSACTION ISOLATION LEVEL SNAPSHOT), regardless of the READ_COMMITTED_SNAPSHOT setting.

The only reason to have both settings set to ON is when you need both READ COMMITTED row versioning and snapshot isolation.

My question is: is my understanding incorrect in some way? Do these two settings always have to be set to ON together (especially for READ COMMITTED row versioning)?
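For reference, the two database-level settings in question (a sketch with a hypothetical database name):

    ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON;   -- row versioning for the default READ COMMITTED level
    ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;  -- permits SET TRANSACTION ISOLATION LEVEL SNAPSHOT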

Switch MySQL lag slave to new master and keep lagging (5.1)

Posted: 23 Apr 2013 02:03 PM PDT

We have the following scenario:

MySQL setup

A set of slaves replicates from the master; a separate standby master (a.k.a. a slave with binlogs) replicates from the same master; and a lag slave, powered by pt-slave-delay, replicates from the same source.

We also utilise mha4mysql to switch slaves from one master to another in case of need.

All would be dandy, if not for lag slaves.

Problem

Let's say the lag slave is running 12 hours behind the master. We want to switch the lag slave from the Master to the Standby Master in such a way that the failover occurs immediately (i.e. CHANGE MASTER is executed on the lag slave at the same time as on the regular slaves) and the lag slave keeps lagging 12 hours.

Issues

I know that a slave keeps track of the master's binlog position in its own relay log, so I can calculate the master's position as of 12 hours in the past. However, as far as I understand it, the Standby Master (a.k.a. the slave with binlogs) has no mapping between its own relay logs and its own binlogs.

Question

Is there a reliable way to calculate the position on the Standby Master that we need to switch to (discarding the rest of the relay logs on the lag slave, obviously) and restart replication 12 hours behind, but on the new master?
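For reference, the switch itself would be an ordinary CHANGE MASTER; the hard part this question is about is computing the MASTER_LOG_FILE/MASTER_LOG_POS values on the standby. A sketch with hypothetical host and coordinates:

    STOP SLAVE;
    CHANGE MASTER TO
        MASTER_HOST     = 'standby-master.example.com',  -- hypothetical host
        MASTER_LOG_FILE = 'mysql-bin.000123',            -- hypothetical: standby binlog file ~12 hours in the past
        MASTER_LOG_POS  = 4;                             -- hypothetical position
    START SLAVE;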

NOTE: we are running MySQL 5.1, so 5.5 (or 5.6 with GTIDs) isn't a solution for now, unfortunately.

Thank you.

Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'

Posted: 23 Apr 2013 04:17 PM PDT

I have 4 SQL Server 2008 instances from which I need to pull data using OPENROWSET. I am adding this data to a table on server 1. Servers 1 and 2 are in a cluster, as are servers 3 and 4. I am remoted into server 1 and connected to server 1; running OPENROWSET against servers 3 and 4 works just fine.

But for 1 and 2 I get this error:

Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'

All servers have the same users (including the one I am logged in as).
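For context, the calls involved are of this general shape (a sketch with hypothetical server, database and table names):

    SELECT *
    FROM OPENROWSET('SQLNCLI',
                    'Server=Server2;Trusted_Connection=yes;',        -- hypothetical target instance
                    'SELECT Col1, Col2 FROM SomeDb.dbo.SomeTable');  -- hypothetical remote query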

Any ideas? Thank you.

Optimize UNION query in MYSQL

Posted: 23 Apr 2013 01:24 PM PDT

I have a problem with a UNION query in MySQL. We have 10 million players on our website and we would like to select players with a multi-criteria system. For example, selecting US people, men, > 35 years old.

We are using "vertical partitioning": one table per criterion. For example:

    * user_country
        - id_user
        - id_country

We would like to do this kind of query:

    SELECT id_inscri FROM userdata_langue
    WHERE id_langue = '43'

    UNION

    SELECT id_inscri FROM userdata_sexe
    WHERE sexe = '2'

    UNION

    SELECT id_inscri FROM userdata_nb_jour
    WHERE nb_jour >= '31'

    UNION

    SELECT id_inscri FROM userdata_last
    WHERE last <= '2013-04-07'
      AND last >= '2013-04-03';

Do you have any idea how to optimize that?

Thanks !

François

----- More details: EXPLAIN of the query:

    id    select_type   table             type   possible_keys  key        key_len  ref    rows     Extra
    1     PRIMARY       userdata_langue   ref    id_langue      id_langue  1        const  398846   Using index
    2     UNION         userdata_sexe     ref    sexe           sexe       1        const  1667137  Using index
    3     UNION         userdata_nb_jour  range  nb_jour        nb_jour    2        NULL   5830     Using where; Using index
    4     UNION         userdata_last     range  last           last       3        NULL   371614   Using where; Using index
    NULL  UNION RESULT  <union1,2,3,4>    ALL    NULL           NULL       NULL     NULL   NULL

SHOW CREATE TABLE

    CREATE TABLE `userdata_langue` (
      `id_inscri` bigint(20) NOT NULL,
      `id_langue` tinyint(3) unsigned NOT NULL,
      PRIMARY KEY (`id_inscri`),
      KEY `id_langue` (`id_langue`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1

How can the InnoDB ibdata1 file grow by 5X even with innodb_file_per_table set?

Posted: 23 Apr 2013 01:23 PM PDT

I have innodb_file_per_table set and just today my ibdata1 file jumped from 59M to 323M after I made several changes to an 800M table to reduce it to about 600M. That particular table's .ibd file was reduced but the server's ibdata1 file went crazy. Any ideas?
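One quick check worth doing (a sketch): confirm the setting was actually in effect on the server when the table was rebuilt.

    SHOW GLOBAL VARIABLES LIKE 'innodb_file_per_table';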

Change SQL Server 2012 Version from registry?

Posted: 23 Apr 2013 01:17 PM PDT

I am creating an automated build of SQL Server which is run from a batch file. Is there any way, by changing the registry or some other method, to decide which SQL Server version (either Enterprise or Standard) will be installed, or are the two versions still separate builds?

If they are separate builds, I guess I could get around it by having a "master" batch file call an appropriate "child" batch file, but this would mean having a folder which contains both sets of binaries, which I'd rather avoid.

When converting a table valued function to inline, why do I get a lazy spool?

Posted: 23 Apr 2013 12:18 PM PDT

I have a table-valued function that I want to convert to an inline one to improve performance. It sets variables to the values of four bit columns in one row of a table (looking up the row using a function parameter as the key), then runs one or more of four different insert statements (differing only by the WHERE clause) to populate a temp table based on which of the bit variables have a value of 1, and then returns the contents of the temp table.

I rewrote it to be one SELECT (with a CTE to get the four bit column values) using four OR clauses in the WHERE to get the same result.

However, the plan cost jumped from .003 to over 2.5. On the other hand, the actual performance is better judging by the output from SET STATISTICS TIME ON. 75% of the cost of the inline version is for a lazy spool. Is there a good way to avoid the lazy spool and improve performance even more? The output could be anywhere from one row to thousands.

Best way to synchronize several databases in different computers

Posted: 23 Apr 2013 01:39 PM PDT

I have to build a POS program for college in Java. The problem is that I don't know how to link the different computers to a master computer that handles the changes to the database and sends them to the other PCs.

I've thought the best way is for every computer to have its own copy of the database, make changes directly to this local copy, and later send them to the master PC, which returns the changes made by the other computers. Alternatively, I could use only one database on the master, but I don't know if that is going to respond well when all the computers are working at the same time.

Do you have any alternative? Or will it work fine with either of the approaches I've thought of?

Is there slowdown inserting into an InnoDB table that has no index set?

Posted: 23 Apr 2013 12:26 PM PDT

I have an old application with lots of InnoDB tables that have no indexes at all, not even a primary key or the like.

Those tables only contain a few thousand rows.

Would it be faster to INSERT data into these tables if I set a primary key (that I don't need otherwise)?
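For reference, the kind of change being considered might look like this sketch (hypothetical table and column names):

    ALTER TABLE my_table
        ADD COLUMN id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY FIRST;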

How can I set Timeout by View, User or Role?

Posted: 23 Apr 2013 07:30 PM PDT

I also posted this question at http://superuser.com/questions/586148/how-can-i-set-timeout-by-view-user-or-role. I have searched ServerFault for an answer and found neither the question nor an answer.

For SQL 2008 R2, data views.

I am looking for a timeout control, usable from Microsoft SQL Server Management Studio (SSMS), that is NOT at the server level and/or is NOT dependent on the query timeout set by the application initiating the query.

I have been unable to find timeout controls by View, User or Role using SSMS.

There are server-level timeouts (remote query timeout, http://technet.microsoft.com/en-us/library/ms189040.aspx), but as I understand it, that would also impact the main application's use of the database; the main application lives on a different server and gets to define its own limits.

I found DBPROP_COMMANDTIMEOUT (http://msdn.microsoft.com/en-us/library/windows/desktop/ms712980(v=vs.85).aspx), but I am not seeing any way to control it by view. And the question "Set Command Timeout from SQL Server 2005 rather than through the code?" says "Command timeouts are always set by the client".

Considerations: These are connections for reporting from the production database of a major application, where the archived datasets (midnight last night) are not sufficiently current. We have a requirement to allow some access; we have a responsibility to not let that access adversely impact the application.

How can I identify the number of times a View is called in SQL Server?

Posted: 23 Apr 2013 12:23 PM PDT

How can I identify the number of times a View was called in SQL Server?

I am looking for an answer similar to the one below, which shows how it can be done for stored procedures:

How can I monitor the call count per stored procedure per timespan?
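One commonly used starting point is the plan cache DMVs; a sketch (with a hypothetical view name) that only counts executions whose plans are still cached:

    SELECT st.text, SUM(qs.execution_count) AS executions
    FROM   sys.dm_exec_query_stats AS qs
           CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    WHERE  st.text LIKE '%MyViewName%'   -- hypothetical view name
    GROUP BY st.text;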

mysqldump: Got error: 1017: Can't find file: 'drupal_install_test' (errno: 2) when using LOCK TABLES

Posted: 23 Apr 2013 04:01 PM PDT

I'm trying to back up a Drupal site database but I'm having some issues. When I ran the following command:

mysqldump -uroot -p drupaSite > drupaSite.sql

I get the following error:

mysqldump: Got error: 1017: Can't find file: 'drupal_install_test' (errno: 2) when using LOCK TABLES

If I try to query the table, I get the same error:

    mysql> select * from drupal_install_test;
    ERROR 1017 (HY000): Can't find file: 'drupal_install_test' (errno: 2)

I checked the status of the tables in the database:

show table status from drupaSite;

I get the following output:

    | drupal_install_test | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | Can't find file: 'drupal_install_test' (errno: 2) |

I ran the following query:

SELECT * FROM information_schema.tables WHERE table_name='drupal_install_test'\G

I get the following output:

    *************************** 1. row ***************************
      TABLE_CATALOG: NULL
       TABLE_SCHEMA: drupaSite
         TABLE_NAME: drupal_install_test
         TABLE_TYPE: BASE TABLE
             ENGINE: NULL
            VERSION: NULL
         ROW_FORMAT: NULL
         TABLE_ROWS: NULL
     AVG_ROW_LENGTH: NULL
        DATA_LENGTH: NULL
    MAX_DATA_LENGTH: NULL
       INDEX_LENGTH: NULL
          DATA_FREE: NULL
     AUTO_INCREMENT: NULL
        CREATE_TIME: NULL
        UPDATE_TIME: NULL
         CHECK_TIME: NULL
    TABLE_COLLATION: NULL
           CHECKSUM: NULL
     CREATE_OPTIONS: NULL
      TABLE_COMMENT: Can't find file: 'drupal_install_test' (errno: 2)

I ran the following query:

CHECKSUM TABLE drupal_install_test;  

I got the following output:

    +-------------------------------+----------+
    | Table                         | Checksum |
    +-------------------------------+----------+
    | drupaSite.drupal_install_test |     NULL |
    +-------------------------------+----------+
    1 row in set, 1 warning (4.34 sec)

I ran the following query:

CHECK TABLE drupal_install_test;

and I get the following output:

    +-------------------------------+-------+----------+---------------------------------------------------+
    | Table                         | Op    | Msg_type | Msg_text                                          |
    +-------------------------------+-------+----------+---------------------------------------------------+
    | drupaSite.drupal_install_test | check | Error    | Can't find file: 'drupal_install_test' (errno: 2) |
    | drupaSite.drupal_install_test | check | error    | Corrupt                                           |
    +-------------------------------+-------+----------+---------------------------------------------------+
    2 rows in set (0.02 sec)

My question is: how can I fix this so that I can back up the database and restore it on another server? The site is working just fine; I just need to migrate it to another server. I would really appreciate your help.

Cannot Utilize Maximum CPU and Memory Usage for MySQL

Posted: 23 Apr 2013 12:37 PM PDT

Good day.

I know this may be a duplicate of other questions; however, I have applied all the suggestions in many of those threads, and I still have the same problem.

I have a single stored procedure working with at most 3 tables. When I run the procedure, only 30% of my CPU is used and about 25% of my RAM.

I am sitting with a CPU with 4 cores and 16GB RAM.

my.ini looks as follows:

    [client]
    port        = 3306
    socket      = /tmp/mysql.sock

    [mysqld]
    port        = 3306
    socket      = /tmp/mysql.sock
    skip-external-locking
    key_buffer_size = 512M
    max_allowed_packet = 32M
    table_open_cache = 512
    sort_buffer_size = 2M
    read_buffer_size = 2M
    read_rnd_buffer_size = 8M
    myisam_sort_buffer_size = 128M
    thread_cache_size = 16
    query_cache_size = 32M
    thread_concurrency = 0

    log-bin=mysql-bin
    binlog_format=mixed
    server-id   = 1

    innodb_buffer_pool_size = 12G
    innodb_log_buffer_size = 256M
    innodb_flush_log_at_trx_commit = 2
    innodb_read_io_threads = 64
    innodb_write_io_threads = 64

    [mysqldump]
    quick
    max_allowed_packet = 16M

    [mysql]
    no-auto-rehash

    [myisamchk]
    key_buffer_size = 128M
    sort_buffer_size = 128M
    read_buffer = 2M
    write_buffer = 2M

    [mysqlhotcopy]
    interactive-timeout

Is it the nature of the procedure being called that is causing MySQL to underutilize the hardware, or is it my configuration?

I was running XAMPP but then realised it was 32-bit so I switched to the 64-bit version of WAMP. I use a 32-bit MySQLWorkbench to run queries.

I am using the InnoDB engine.

Using MySQL Ver 14.14 Distrib 5.5.24 Win64 (x86).

Getting a SQL statement to refer to more than 2 tables?

Posted: 23 Apr 2013 04:04 PM PDT

I have the following tables:

StaffName

    ============================================================================
    |  Name    |   Income_Group   |    Educational_Level  |   Country          |
    ============================================================================
    |  Jack    |   5              |    5                  |   1                |
    |  Jill    |   3              |    3                  |   4                |
    |  Jane    |   1              |    4                  |   6                |
    |  June    |   4              |    2                  |   7                |
    ============================================================================

Country

    ==================================================
    | ID   |  Country   |  Country_Description       |
    ==================================================
    | 1    | America    |                            |
    | 7    | Japan      |                            |
    ==================================================

IncomeGroup

    =======================================================
    | ID   |  Income_Range        |  Description          |
    =======================================================
    | 1    | Below US$2500        |                       |
    | 5    | US$9000 to US$12000  |                       |
    =======================================================

EducationalLevel

    ============================================================
    | ID   |  Educational_Level   |  Country_Description       |
    ============================================================
    | 1    | PhD                  |                            |
    | 7    | Master               |                            |
    ============================================================

My intention is to pull the corresponding values from the other tables into the main table and display something like the following:

Intended Result

    =======================================================================================
    |  Name    |   Income_Group              |    Educational_Level  |   Country          |
    =======================================================================================
    |  John    |   US$9000 to US$12000       |    PhD                |   America          |
    |  KoKo    |   US$5000 to US$7000        |    Master             |   Japan            |
    |  Kane    |   US$1000 to US$2000        |    B.Degree           |   Thailand         |
    |  Ali     |   US$8200 to US$9200        |    College            |   Malaysia         |
    =======================================================================================

I tried using the following sql:

    select
        s.name, s.Income_Group, s.Educational_Level, s.Country
    from
        StaffName s, Country c, IncomeGroup ig, EducationalLevel el
    where
        s.Income_Group = c.ID
        AND s.Educational_Level = ig.id
        AND s.Country = el.id

But it returns no results.

There are no foreign or primary keys on any of the tables.

What could I be missing?
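For comparison, a sketch of the join written with each lookup table matched against its own column and the descriptive columns selected (note that the query above pairs Income_Group with Country.ID, and so on):

    select
        s.Name,
        ig.Income_Range,
        el.Educational_Level,
        c.Country
    from
        StaffName s
        join IncomeGroup ig      on ig.ID = s.Income_Group
        join EducationalLevel el on el.ID = s.Educational_Level
        join Country c           on c.ID  = s.Country;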

DPM and AlwaysOn - Can they work

Posted: 23 Apr 2013 01:11 PM PDT

How does AlwaysOn tie into DPM for SQL Server 2012? Can we use one or the other for high availability? Can they work together, or must they be used separately? Thanks.

PostgreSQL and default Schemas

Posted: 23 Apr 2013 05:23 PM PDT

Whenever I create a brand new database in PostgreSQL Maestro it creates the following list of default schemas:

[screenshot: list of default schemas]

Now, from my understanding, schemas are like folders for organization, etc. So the question I have is: are all these schemas needed when I create a new DB? If so, what are they used for on the PG side, since I won't ever use them myself?

I can understand information_schema, as this is the default for an install of MySQL on a server, though I don't get why each database would need its own as opposed to one for the entire server; but to each DB type its own, I guess.
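For reference, a sketch of how to list the schemas present in a database, independent of the client tool:

    SELECT schema_name
    FROM   information_schema.schemata
    ORDER BY schema_name;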

LATCH_EX Waits on Resource METADATA_SEQUENCE_GENERATOR

Posted: 23 Apr 2013 12:57 PM PDT

We have a process that generates an inventory report. On the client side, the process splits off a configurable number of worker threads, each of which builds a chunk of data for the report that corresponds to one store out of many (potentially thousands, typically dozens). Each worker thread calls a web service that executes a stored procedure.

The database process for processing each chunk gathers a bunch of data into a #Temporary table. At the end of each processing chunk, the data is written to a permanent table in tempdb. Finally, at the end of the process, one thread on the client side requests all the data from the permanent tempdb table.

The more users that run this report, the slower it gets. I analyzed the activity in the database. At one point, I saw 35 separate requests all blocked at one point in the process. All these SPIDs had waits on the order of 50 ms of type LATCH_EX on resource METADATA_SEQUENCE_GENERATOR (00000010E13CA1A8). One SPID holds this resource, and all the others are blocked. I did not find anything about this wait resource in a web search.

The table in tempdb that we are using does have an IDENTITY(1,1) column. Are these SPIDs waiting for the IDENTITY column? What methods could we use to reduce or eliminate the blocking?

The server is part of a cluster. The server is running 64-bit SQL Server 2012 Standard Edition SP1 on 64-bit Windows 2008 R2 Enterprise. The server has 64 GB RAM and 48 processors, but the database can only use 16 because it is the standard edition.

(Note that I'm not thrilled by the design of using a permanent table in tempdb to hold all this data. Changing that would be an interesting technical and political challenge, but I'm open to suggestions.)

UPDATE 4/23/2013

We've opened a support case with Microsoft. I'll keep this question updated as we learn more.

Normalization/normal forms - May a field describe another field?

Posted: 23 Apr 2013 06:03 PM PDT

Like this:

    CREATE TABLE persons(
        id serial8 NOT NULL PRIMARY KEY,
        name varchar,
        -- A lot of other fields
        date_of_birth timestamp with time zone,
        date_of_birth_precision varchar(16),
        CHECK (date_of_birth_precision IN ('Years','Months','Days','Hours','Minutes'))
    );

date_of_birth_precision describes the precision of date_of_birth.

I wonder if it violates this rule (because I don't fully understand the rule):

Every non-prime attribute of R is non-transitively dependent (i.e. directly dependent) on every superkey of R.

MySQL data too long error

Posted: 23 Apr 2013 05:03 PM PDT

One of the columns in my table was initially created as a varchar(1000). Now there is a need to increase the size of that column to hold around 5000 characters. I used an ALTER statement to increase the size of the column. DESC table_name as well as SHOW CREATE TABLE table_name clearly indicate the new size of that column to be 5000 characters. However, interestingly, when I try to insert any data exceeding 1000 characters, the error ERROR 1406 (22001): Data too long for column shows up. Out of desperation I changed the datatype to text, and it is still limited to 1000 characters. Any suggestions?

I created another dummy table with a column size of varchar(5000) and that works just fine. The engine used is InnoDB and the default charset is UTF8.
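For reference, a sketch of the kind of ALTER described, with hypothetical table and column names:

    ALTER TABLE my_table
        MODIFY my_column VARCHAR(5000);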

Why is SQL running the same query longer from another connection?

Posted: 23 Apr 2013 02:06 PM PDT

Here is the issue overview: Why does my stored procedure run faster when executed locally vs. remotely?

Don't jump to any conclusions just yet; let me explain what I mean...

Here is the setup:
A Windows 2008R2 (correction: 2003) application server executes a stored procedure that performs some action (what it's doing isn't really important at this point in time). This stored procedure is executed on the SQL Server over a TCP/IP connection to the database server. The DB server is physically located right next to the application server, and they are connected to each other via 1GB NICs to a 1GB switch. The DB server is running SQL 2005 SP2 Enterprise Edition and has 16GB of memory and several vLUNs striped across 48 15k drives in an HP EVA FC-connected SAN. From all indicators thus far, there are no I/O, memory, or CPU constraints or limits being hit. Trace flag 1118 is on, and tempdb is split across 8 files on their own vLUN. Data and TLogs also have their own vLUNs.

So, here is what I am seeing:
Using SQLCMD on the database server, with SQL Profiler running from the same DB server, I can execute the stored procedure and I see that the execution starts immediately and completes with a duration of about 2,100 ms and an IO of about 1200.

Using SQLCMD on the application server, with SQL Profiler running from the DB server, I can execute the same exact stored procedure, with the SAME exact parameters, and I see that the execution starts immediately and completes with a duration of about 110,000 ms and an IO of about 1200.

The query results in 1 row, with 4 columns [INT, INT, VARCHAR(50), VARCHAR(100)]

ASIDE: (I know the query is a train wreck; this is a regulated system and I cannot change it on a live production server, so please don't make any suggestions about doing so. The next version has been rewritten to be better.)

From everything we can see, there is no reason we should be seeing differences like this, but what is happening is that the .NET application that calls this query from the application server is timing out waiting for the response.

We have checked locking and blocking, wait states, query plans, IO contention, CPU contention, memory contention, and network saturation/utilization; performed index rebuilds on all indexes; updated all stats; and checked a handful of other items, but we haven't come up with anything that points to why this is happening.

Please ask more questions if you have any, and make recommendations as you come up with them; depending on the impact (remember this is a production environment) we will try them and respond back.

-Thanks! -Chris

consequences of using "innodb_flush_method = O_DIRECT" without having a battery backed write cache? or on a KVM guest?

Posted: 23 Apr 2013 01:26 PM PDT

MySQL 5.5.29 InnoDB - 128GB RAM - 32 cores - RAID 10 SSD.

Our server, a dedicated KVM guest on a 'bare metal' host, is our heavy read-write DB server. Everything is file-per-table. The InnoDB buffer pool is 96GB, with two 1GB log files and about 20 minutes of writes needed to fill up those logs at peak time.

How bad a situation would it be if O_DIRECT (we are currently running the default) were enabled during a high workload without a battery-backed write cache, and a total crash were to occur on the OS or the parent host, or the power were cut?

Does a battery-backed write cache make a difference if the server is a VM guest of the parent anyway?


MySQL Dump configuration file

Posted: 23 Apr 2013 04:03 PM PDT

I modified the my.cnf file so that innodb_data_file_path points somewhere other than the default path. But the mysqldump command seems to be trying to dump from the default path.

I found that if I give the option --defaults-file=xxxx on the command line I can change it, but is there a way to configure mysqldump to use this option without specifying it on the command line?

How can I set a default session sql_mode for a given user?

Posted: 23 Apr 2013 12:03 PM PDT

I want to have a default session sql_mode that is different from the global sql_mode.

I can set it manually, but I'd like to do it automatically.

I thought I could add it to .my.cnf, but that doesn't work. I tried adding it to the [mysql] group in .my.cnf, and it does not throw an error.

Yet when I connect, my session sql_mode still inherits the global sql_mode.

I'm using MySQL 5.5.16.
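One technique sometimes used for this is the init_connect server variable, whose statements run for every connecting user that does not have the SUPER privilege; it is not truly per-user, but it does leave privileged accounts on the global setting. A sketch with a hypothetical mode value:

    SET GLOBAL init_connect = "SET SESSION sql_mode = 'STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION'";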

Users cannot view tables in non-default schema in SSMS

Posted: 23 Apr 2013 07:03 PM PDT

I'm having an issue setting the VIEW DEFINITION permission appropriately at the schema level for one of my users. I've created the schema TestSchema and added some tables. The user currently has permissions to access and modify the tables (SELECT, UPDATE, DELETE, etc.) through the db_datareader and db_datawriter roles. However, they cannot see any of the tables in the SSMS Object Explorer.

I've tried granting permissions to view definitions:

grant view definition on SCHEMA :: [TestSchema] to [User]  

That didn't work. I tried setting the table-level permission:

grant view definition on [TestSchema].[NewTable] to [User]  

That also didn't work. Then I tried just a blanket grant:

grant view definition to [User]  

And that did work; they can now see TestSchema, as well as other schemas that they shouldn't have access to.

My goal here is to allow the user to view all tables within a given schema. How do I accomplish that? If I should be able to do this by default, what permissions should I be looking at to find why I can't?

SSRS calculated field strange behavior

Posted: 23 Apr 2013 01:09 PM PDT

I have a dataset with a calculated field:

    = IIF(String.IsNullOrEmpty(Fields!Event.Value), "\t", Fields!Event.Value) & ", " &
      IIF(String.IsNullOrEmpty(Fields!Action.Value), "\t", Fields!Action.Value) & ", " &
      IIF(String.IsNullOrEmpty(Fields!RequestedBy.Value), "\t", Fields!RequestedBy.Value)

This field is then accessed inside a Table within the report using a lookup expression:

=Join(LookupSet(Fields!contactid.Value, Fields!ContactParticipantValue.Value, Fields!RowSummary.Value, "EventRequests"), Environment.NewLine)  

When I run this, initially everything works fine and the correct data is shown in the cell. However, there is a warning displayed:

Warning 1   [rsRuntimeErrorInExpression] The Value expression for the field 'RowSummary' contains an error: (processing): (field.ExprHost != null)  0     

Then if I go to the next page, all of the cells now contain #Error instead of the 'RowSummary' info. If I then go back to the previous page, which used to have all the correct data in it, all of the cells are now replaced with #Error.

I have found a workaround whereby I add a hidden list container to the END of the report and set its DataSet property to the 'EventRequests' dataset that contains the calculated field. I don't have to add any fields or make this list visible; somehow, just having the list on the report prevents the #Error from replacing all of my data. This does not completely solve the problem, because when I export to PDF I still receive errors.

Anyone know why this is happening?
