[how to] Build career as DBA
- Build career as DBA
- one vs two column index difference when doing JOIN query?
- update statement with self join
- Storing data as rows as opposed to columns
- Using a table swap for a data load but I have triggers
- ALLOW_SNAPSHOT_ISOLATION and READ_COMMITTED_SNAPSHOT
- Switch MySQL lag slave to new master and keep lagging (5.1)
- Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'
- Optimize UNION query in MySQL
- How can Innodb ibdata1 file grow by 5X even with innodb_file_per_table set?
- Change SQL Server 2012 Version from registry?
- When converting a table valued function to inline, why do I get a lazy spool?
- Best way to synchronize several databases in different computers
- Is there slowdown inserting into an InnoDB table that has no index set?
- How can I set Timeout by View, User or Role?
- How can I identify the number of times a View is called in SQL Server?
- mysqldump: Got error: 1017: Can't find file: 'drupal_install_test' (errno: 2) when using LOCK TABLES
- Cannot Utilize Maximum CPU and Memory Usage for MySQL
- Getting sql statement to refer to more than 2 tables?
- DPM and AlwaysOn - Can they work
- PostgreSQL and default Schemas
- LATCH_EX Waits on Resource METADATA_SEQUENCE_GENERATOR
- Normalization/normal forms - May a field describe another field?
- MySQL data too long error
- Why is SQL running the same query longer from another connection?
- consequences of using "innodb_flush_method = O_DIRECT" without having a battery backed write cache? or on a KVM guest?
- MySQL Dump configuration file
- How can I set a default session sql_mode for a given user?
- Users cannot view tables in non-default schema in SSMS
- SSRS calculated field strange behavior
Build career as DBA Posted: 23 Apr 2013 09:11 PM PDT I wanted to know: if someone is interested in building his or her career as a DBA, is a programming language required? I mean, without knowing a programming language, is it possible to become a DBA? Kindly guide me, because I want to build my career in this field.
one vs two column index difference when doing JOIN query? Posted: 23 Apr 2013 08:41 PM PDT Let's suppose that alfa, beta and gamma contain millions of rows, so we obviously need to create indexes to get optimal performance for this query: Now, what is the optimal way to create indexes on the alfa table? Is it better to create one index across two columns (beta_id, gamma_id) or two separate indexes, one for each column? (I am not sure how a JOIN query is internally interpreted.) DB: Oracle 11g 11.1.0.7.0
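The query itself was stripped from the digest, but the trade-off can be sketched against a hypothetical schema (table and column names beyond alfa/beta/gamma's ids are assumptions):

```sql
-- Option 1: one composite index covering both join columns.
CREATE INDEX idx_alfa_beta_gamma ON alfa (beta_id, gamma_id);

-- Option 2: two separate single-column indexes.
CREATE INDEX idx_alfa_beta  ON alfa (beta_id);
CREATE INDEX idx_alfa_gamma ON alfa (gamma_id);

-- For a query joining on both columns, e.g.:
SELECT a.*
FROM   alfa a
JOIN   beta  b ON b.id = a.beta_id
JOIN   gamma g ON g.id = a.gamma_id
WHERE  b.name = 'x'
  AND  g.name = 'y';
-- the composite index can satisfy both join predicates with a single
-- B-tree probe, while separate indexes typically force the optimizer
-- to pick one index (or combine them, e.g. via a bitmap AND in
-- Oracle), which is often less efficient for this access pattern.
```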
update statement with self join Posted: 23 Apr 2013 08:21 PM PDT I needed to copy information from one row of a table to another. I noticed I could do the following: However, the above SQL seems kind of ambiguous. Edit: Note: while I have not done extensive testing, this query does seem to work as I try it against our development database. It simply updates the one @to row with results from the @from row.
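The statement was stripped from the digest; a T-SQL self-join UPDATE of the kind described (the @from/@to variables come from the question, but the table and column names here are assumptions) might look like this:

```sql
-- Copy selected columns from the row keyed by @from
-- onto the row keyed by @to, using a self join.
DECLARE @from int = 1, @to int = 2;

UPDATE dst
SET    dst.col_a = src.col_a,
       dst.col_b = src.col_b
FROM   my_table AS dst
JOIN   my_table AS src
       ON  src.id = @from
       AND dst.id = @to;
```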
Storing data as rows as opposed to columns Posted: 23 Apr 2013 03:11 PM PDT Here's a typical way I would store data (obviously not storing the password in plain text): Is there anything wrong with storing it in the following manner? The huge benefit I see here is the ability to easily add additional attributes for the user. But I really wanted to get another opinion on this.
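Both layouts in the question were stripped; the contrast described is the classic one between a conventional wide table and an entity-attribute-value (EAV) layout (column names here are assumptions):

```sql
-- Conventional layout: one column per attribute.
CREATE TABLE users (
    id            INT PRIMARY KEY,
    username      VARCHAR(50),
    password_hash VARCHAR(255),
    email         VARCHAR(100)
);

-- EAV layout ("rows as opposed to columns"): one row per attribute.
-- Easy to extend with new attributes, but harder to query,
-- type-check, and constrain.
CREATE TABLE user_attributes (
    user_id    INT,
    attr_name  VARCHAR(50),
    attr_value VARCHAR(255),
    PRIMARY KEY (user_id, attr_name)
);
```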
Using a table swap for a data load but I have triggers Posted: 23 Apr 2013 07:01 PM PDT I'm working on rewriting a table load. The current method is a daily job that deletes from the table, then re-loads it. The complaint is that if the load fails the users have no data. The users have said they are OK if the load fails and they have to use yesterday's data. My solution is to do a table swap.
This works fine on 3 of the 5 tables I'm working with. The other 2 have triggers. Because the trigger code references the table name (of course), it can't handle the rename of the object it is attached to. I start with this: After I do my renames I get the following The benefits of the method are that downtime for the users is minimal, I have yesterday's data, and if anything goes wrong I can roll back to yesterday's data. My question is: does anyone know a better method without the trigger problem, or a way that I can swap my triggers between the two tables so that the code remains correct after the rename? Oh, and I'm somewhat embarrassed to admit that even though the database is 2005 (we are upgrading soon), the requirement is that the load be done using a DTS package. My plan is for the rest of the process to be done in a stored procedure that can be called from the DTS package (or SSIS if I can ever talk them into it).
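The rename statements were stripped from the digest; a table swap of the kind described is commonly done with sp_rename, with the triggers dropped and recreated against the new current table (object names here are assumptions):

```sql
BEGIN TRANSACTION;

-- Swap yesterday's table out and the freshly loaded staging table in.
EXEC sp_rename 'dbo.MyTable',         'MyTable_old';
EXEC sp_rename 'dbo.MyTable_staging', 'MyTable';

-- A trigger stays attached to the renamed object, and its body still
-- references the old name. One option is to drop it and re-run the
-- stored CREATE TRIGGER script against dbo.MyTable here.
DROP TRIGGER dbo.trg_MyTable_audit;  -- hypothetical trigger name

COMMIT TRANSACTION;
```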
ALLOW_SNAPSHOT_ISOLATION and READ_COMMITTED_SNAPSHOT Posted: 23 Apr 2013 06:30 PM PDT Most of the forums and examples online always suggest having both settings ON. I guess the word SNAPSHOT in both settings gets a little confusing. I thought that, in order for the database engine to use row versioning instead of locks for the default READ_COMMITTED behavior, the database only needs READ_COMMITTED_SNAPSHOT set to ON, while ALLOW_SNAPSHOT_ISOLATION only controls whether sessions may explicitly use the SNAPSHOT isolation level. The only reason to have these two settings set to ON together is when one needs both READ COMMITTED row versioning AND snapshot isolation. My question is: is my understanding incorrect in some way? And do these two settings always have to be set to ON together (especially for READ COMMITTED row versioning)?
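The two database options the question contrasts are set independently; a minimal sketch (the database name is an assumption):

```sql
-- Enable row versioning for the default READ COMMITTED level only:
ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;

-- Independently allow sessions to request SNAPSHOT isolation:
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- A session opting in to snapshot isolation (requires the latter):
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
```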
Switch MySQL lag slave to new master and keep lagging (5.1) Posted: 23 Apr 2013 02:03 PM PDT We have the following scenario: a set of slaves replicates from the master; a separate standby master (a.k.a. slave with binlogs) replicates from the same master; a lag slave, powered by We also utilise mha4mysql to switch slaves from one master to another in case of need. All would be dandy, if not for lag slaves. Problem: Let's say the lag slave is running 12 hours behind the master. Switch the lag slave from Master to Standby Master in such a way that failover occurs immediately (i.e. Issues: While I know that the slave keeps track of the master binlog position in its own relay log, and I can calculate the master position 12 hours in the past, as far as I understand it the Standby Master (aka slave with binlogs) has no relation between its own relay logs and its own binlogs. Question: Is there a reliable way to calculate the position on the Standby Master we need to switch to (discarding the rest of the relay logs on the lag slave, obviously) and restart replication from 12 hours behind, but on the new master? NOTE: we are running MySQL 5.1, so 5.5 (or 5.6 with GTIDs) isn't a solution for now, unfortunately. Thank you.
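The mechanics involved can be sketched with standard 5.1 replication commands; the hard part the question asks about is step 2, and the host, file name, and position below are hypothetical:

```sql
-- 1. On the lag slave: stop replication and note where it is.
STOP SLAVE;
SHOW SLAVE STATUS\G  -- Relay_Master_Log_File / Exec_Master_Log_Pos

-- 2. Translate that old-master position into a (file, position)
--    pair in the standby master's OWN binlogs, 12 hours in the past
--    (this mapping is what 5.1 does not track for you).

-- 3. Given such a pair, repointing is the usual CHANGE MASTER TO:
CHANGE MASTER TO
    MASTER_HOST     = 'standby-master.example.com',
    MASTER_LOG_FILE = 'mysql-bin.000123',
    MASTER_LOG_POS  = 4;
START SLAVE;
```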
Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON' Posted: 23 Apr 2013 04:17 PM PDT I have 4 SQL Server 2008 instances from which I need to pull data using But for 1 and 2 I get this error:
All servers have the same users (including the one I am logged in as). Any ideas? Thank you.
Optimize UNION query in MySQL Posted: 23 Apr 2013 01:24 PM PDT I have a problem with a UNION query in MySQL. We have 10 million players on our website and we would like to select players with a multi-criteria system. For example, selecting US people, men, > 35 years old. We are using "vertical partitioning": 1 table per criterion. For example: We would like to do this kind of query: Do you have any idea how to optimize that? Thanks! François ----- More details: EXPLAIN of the query: SHOW CREATE TABLE
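The table definitions and the query were stripped from the digest; with one table per criterion, an all-criteria selection is usually expressed as chained joins rather than a UNION, since UNION returns players matching any single criterion (table and column names are assumptions):

```sql
-- Hypothetical per-criterion tables, each keyed by player_id:
--   players_country(player_id, country)
--   players_gender(player_id, gender)
--   players_age(player_id, age)

-- Players matching ALL criteria (US, male, over 35):
SELECT c.player_id
FROM   players_country c
JOIN   players_gender  g ON g.player_id = c.player_id
JOIN   players_age     a ON a.player_id = c.player_id
WHERE  c.country = 'US'
  AND  g.gender  = 'M'
  AND  a.age     > 35;
```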
How can Innodb ibdata1 file grow by 5X even with innodb_file_per_table set? Posted: 23 Apr 2013 01:23 PM PDT I have innodb_file_per_table set, and just today my ibdata1 file jumped from 59M to 323M after I made several changes to an 800M table to reduce it to about 600M. That particular table's .ibd file was reduced, but the server's ibdata1 file went crazy. Any ideas?
Change SQL Server 2012 Version from registry? Posted: 23 Apr 2013 01:17 PM PDT I am creating an automated build of SQL Server which is run from a batch file. Is there any way to change the registry, or another method, of deciding which SQL Server edition (either Enterprise or Standard) will be installed, or are the two editions still separate builds? If they are separate builds, I guess I could get around it by having a "master" batch file call an appropriate "child" batch file, but this would mean having a folder which contains both sets of binaries, which I'd rather avoid.
When converting a table valued function to inline, why do I get a lazy spool? Posted: 23 Apr 2013 12:18 PM PDT I have a table-valued function that I want to convert to inline to improve performance. It sets variables to the values of four bit columns in one row of a table (looking up the row using a function parameter as the key), then runs one or more of four different insert statements (differing only by the WHERE clause) to populate a temp table based on which of the bit variables have a value of 1, and then returns the content of the temp table. I rewrote it as one SELECT (with a CTE to get the four bit column values) using four OR clauses in the WHERE to get the same result. However, the plan cost jumped from .003 to over 2.5. On the other hand, the actual performance is better judging by the output from SET STATISTICS TIME ON. 75% of the cost of the inline version is for a lazy spool. Is there a good way to avoid the lazy spool and improve performance even more? The output could be anywhere from one row to thousands.
Best way to synchronize several databases in different computers Posted: 23 Apr 2013 01:39 PM PDT I must do a POS program for college in Java; the problem is that I don't know how to link the different computers to a master computer that handles the changes to the database and sends them to the other PCs. I've thought the best way is for every computer to have its own copy of the database, with changes made directly on this copy and later sent to the master PC, which returns the changes made by the other computers. Alternatively, I could use only one database on the master, but I don't know if this is going to respond well when all the computers are working at the same time. Do you have any alternatives? Or will it work fine with either of the approaches I've thought of?
Is there slowdown inserting into an InnoDB table that has no index set? Posted: 23 Apr 2013 12:26 PM PDT I have an old application with lots of InnoDB tables that have no indexes at all, not even a primary ID or such. Those tables only contain a few thousand rows. Would it be faster to
How can I set Timeout by View, User or Role? Posted: 23 Apr 2013 07:30 PM PDT I also posted this question at http://superuser.com/questions/586148/how-can-i-set-timeout-by-view-user-or-role I have searched ServerFault for an answer and not found the question or an answer. For SQL 2008 R2, data views. I'm looking for a timeout control using Microsoft SQL Server Management Studio (SSMS) that is NOT at the server level, and/or is NOT dependent on the query timeout set by the application initiating the query. I have been unable to find timeout controls by View, User or Role using SSMS. There are server-level timeouts (remote query timeout, http://technet.microsoft.com/en-us/library/ms189040.aspx ), but as I understand it, that would also impact the main application's use of the database, which lives on a different server (the main application gets to define its own limits). I found DBPROP_COMMANDTIMEOUT http://msdn.microsoft.com/en-us/library/windows/desktop/ms712980(v=vs.85).aspx but I'm not seeing any way to control it by View. And this Set Command Timeout from SQL Server 2005 rather than through the code? says "Command timeouts are always set by the client". Considerations: These are connections for reporting from the production database of a major application, where the archived datasets (midnight last night) are not sufficiently current. We have a requirement to allow some access; we have a responsibility to not let that access adversely impact the application.
How can I identify the number of times a View is called in SQL Server? Posted: 23 Apr 2013 12:23 PM PDT How can I identify the number of times a View was called in SQL Server? I am looking for an answer similar to the one below, which shows how it can be done for stored procedures: How can I monitor the call count per stored procedure per timespan?
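One common approach (a sketch, not from the original thread, and the view name is hypothetical) is to look for the view's name in the text of cached query plans, which only counts executions whose plans are still in the cache:

```sql
-- Approximate execution counts for cached plans referencing the view.
SELECT st.text,
       qs.execution_count
FROM   sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE  st.text LIKE '%MyViewName%';
```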
mysqldump: Got error: 1017: Can't find file: 'drupal_install_test' (errno: 2) when using LOCK TABLES Posted: 23 Apr 2013 04:01 PM PDT I'm trying to back up a drupal site database but I'm having some issues. When I ran the following command: mysqldump -uroot -p drupaSite > drupaSite.sql I got the following error: mysqldump: Got error: 1017: Can't find file: 'drupal_install_test' (errno: 2) when using LOCK TABLES If I try to query the table I get the same error. I checked the status of the table in the database: show table status from drupaSite; I get the following output: I ran the following query: SELECT * FROM information_schema.tables WHERE table_name='drupal_install_test'\G I get the following output: I ran the following query: I got the following output: I ran the following query: CHECK TABLE drupal_install_test; and I get the following output: My question is how I can fix this in a way that lets me back up the database and restore it on another server. The site is working just fine; I need to migrate the server. I would really appreciate your help, guys.
Cannot Utilize Maximum CPU and Memory Usage for MySQL Posted: 23 Apr 2013 12:37 PM PDT Good day. I know this may be a duplicate of other questions; however, I have applied all the suggestions in many of the threads, but I remain with the same problem. I have a single stored procedure working with max I am sitting with a my.ini that looks as follows: Is it the nature of the procedure called which is causing MySQL to underutilize the hardware, or is it my configuration? I was running I am using the Using
Getting sql statement to refer to more than 2 tables? Posted: 23 Apr 2013 04:04 PM PDT I have the following tables: StaffName, Country, IncomeGroup, EducationalLevel. My intention is to get all the values from the other tables into the main table and display them something like the following: Intended Result. I tried using the following SQL: But it returns no results. There are no foreign or primary keys on any of the tables. What could I be missing?
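The attempted statement was stripped from the digest; referring to more than two tables is a matter of chaining JOIN clauses, one per table (the join columns below are assumptions, since the real schema isn't shown):

```sql
SELECT s.staff_name,
       c.country_name,
       i.income_group,
       e.educational_level
FROM   StaffName s
JOIN   Country          c ON c.staff_id = s.staff_id
JOIN   IncomeGroup      i ON i.staff_id = s.staff_id
JOIN   EducationalLevel e ON e.staff_id = s.staff_id;
-- Without keys, missing or mismatched ON conditions commonly yield
-- either zero rows or an inflated cross-product of rows.
```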
DPM and AlwaysOn - Can they work Posted: 23 Apr 2013 01:11 PM PDT How does AlwaysOn tie into DPM for SQL Server 2012? Can we use one or the other for high availability? Can they work together, or must they be used separately? Thanks
PostgreSQL and default Schemas Posted: 23 Apr 2013 05:23 PM PDT Whenever I create a brand new database in PostgreSQL Maestro it creates the following list of default schemas: Now, from my understanding, schemas are like folders for organization, etc. So the question I have is: are all these schemas needed when I create a new DB? If so, what are they used for on the PG side, as I won't ever use them myself? I can understand
LATCH_EX Waits on Resource METADATA_SEQUENCE_GENERATOR Posted: 23 Apr 2013 12:57 PM PDT We have a process that generates an inventory report. On the client side, the process splits off a configurable number of worker threads to build a chunk of data for the report that corresponds to one store out of many (potentially thousands, typically dozens). Each worker thread calls a web service that executes a stored procedure. The database process for each chunk gathers a bunch of data into a #Temporary table. At the end of each processing chunk, the data is written to a permanent table in tempdb. Finally, at the end of the process, one thread on the client side requests all the data from the permanent tempdb table. The more users that run this report, the slower it gets. I analyzed the activity in the database. At one point, I saw 35 separate requests all blocked at one point in the process. All these SPIDs had on the order of 50 ms waits of type The table in tempdb that we are using does have an The server is part of a cluster. The server is running 64-bit SQL Server 2012 Standard Edition SP1 on 64-bit Windows 2008 R2 Enterprise. The server has 64 GB RAM and 48 processors, but the database can only use 16 because it is Standard edition. (Note that I'm not thrilled by the design of using a permanent table in tempdb to hold all this data. Changing that would be an interesting technical and political challenge, but I'm open to suggestions.) UPDATE 4/23/2013: We've opened a support case with Microsoft. I'll keep this question updated as we learn more.
Normalization/normal forms - May a field describe another field? Posted: 23 Apr 2013 06:03 PM PDT Like this:
I wonder if it violates this rule (because I don't fully understand the rule): Every non-prime attribute of R is non-transitively dependent (i.e. directly dependent) on every superkey of R.
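The example table was stripped from the digest, but the situation the rule targets can be sketched with hypothetical columns: if one non-key field merely describes another non-key field, the described field depends on the key only transitively, which is exactly what the quoted 3NF rule forbids:

```sql
-- Violates 3NF: category_name depends on category_id, which in turn
-- depends on the key (product_id -> category_id -> category_name).
CREATE TABLE products (
    product_id    INT PRIMARY KEY,
    category_id   INT,
    category_name VARCHAR(50)
);

-- Normalized: the describing field moves to its own table.
CREATE TABLE categories (
    category_id   INT PRIMARY KEY,
    category_name VARCHAR(50)
);
```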
MySQL data too long error Posted: 23 Apr 2013 05:03 PM PDT One of the columns in my table was initially created as a varchar(1000). Now there is a need to increase the size of that column to hold around 5000 characters. I used the I created another dummy table with a column size of varchar(5000) and that works just fine. The engine used is InnoDB and the default charset is UTF8.
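The statement that failed was stripped from the digest; the usual way to widen the column in MySQL (table and column names are assumptions) is:

```sql
-- Widen the column in place; existing data is preserved.
ALTER TABLE my_table MODIFY my_column VARCHAR(5000);
```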
Why is SQL running the same query longer from another connection? Posted: 23 Apr 2013 02:06 PM PDT Here is the issue overview: Why does my stored procedure run faster when executed locally vs remotely? Don't jump to any conclusions just yet; let me explain what I mean... Here is the setup: So, here is what I am seeing: Using SQLCMD on the application server, with SQL Profiler running from the DB server, I can execute the same exact stored procedure, with the SAME exact parameters, and I see that the execution starts immediately and completes with a duration of about 110,000 ms with an IO of about 1200. The query results in 1 row, with 4 columns [INT, INT, VARCHAR(50), VARCHAR(100)]. ASIDE: (I know the query is a train wreck; this is a regulated system and I cannot change it on a live production server, so please don't make any suggestions about doing so. The next version has been rewritten to be better.) From everything we can see, there is no reason that we should be seeing differences like this, but what is happening is that the .NET application that calls this query from the application server is timing out waiting for the response. We have checked locking and blocking, WAIT states, query plans, IO contention, CPU contention, MEM contention, NETWORK saturation/utilization, performed index rebuilds on all indexes, updated all stats, and a handful of other items, but haven't come up with anything that points to why this is happening. Please ask more questions if you have any, and make recommendations as you come up with them; depending on the impact (remember this is a production environment) we will try them and respond back. -Thanks! -Chris
consequences of using "innodb_flush_method = O_DIRECT" without having a battery backed write cache? or on a KVM guest? Posted: 23 Apr 2013 01:26 PM PDT MySQL 5.5.29, InnoDB - 128GB RAM - 32 cores - RAID 10 SSD. Our server, a dedicated KVM guest on a 'baremetal' host, is hosting our heavy read-write DB server. Everything is file-per-table. innodb_buffer_pool is 96GB with 1GBx2 log_file_size, with about 20 minutes of writes to fill up those logs at peak time. How bad of a situation would it be if O_DIRECT (we are currently running on the default) was enabled during a high workload without a battery-backed write cache, and a total crash were to occur on the OS or parent host, or the power was cut? Does a battery-backed write cache make a difference if the server is a VM guest of the parent anyway?
MySQL Dump configuration file Posted: 23 Apr 2013 04:03 PM PDT I modified th I found that if I give the option
How can I set a default session sql_mode for a given user? Posted: 23 Apr 2013 12:03 PM PDT I want to have a default session sql_mode that is different from the global sql_mode. I can set it manually, but I'd like to do it automatically. I thought I could add it to Yet, when I connect, the session still inherits the global sql_mode. I'm using MySQL 5.5.16.
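One workaround sometimes used for this (a sketch, not from the original question; the mode chosen is an example) is the `init_connect` server variable, which runs for every new connection from a non-SUPER user:

```sql
-- Executed at connect time for non-SUPER users only, so it is not
-- truly per-user; a per-user choice would need conditional logic
-- inside the init_connect string.
SET GLOBAL init_connect = "SET SESSION sql_mode = 'STRICT_ALL_TABLES'";
```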
Users cannot view tables in non-default schema in SSMS Posted: 23 Apr 2013 07:03 PM PDT I'm having an issue setting the I've tried granting permissions to view definitions: That didn't work. I tried setting the table-level permission: That also didn't work. Then I tried just a blanket grant: And that did work; they can now see My goal here is to allow the user to view all tables within a given schema. How do I accomplish that? If I should be able to do this by default, what permissions should I be looking at to find out why I can't?
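The statements tried were stripped from the digest; schema-scoped grants of the kind described usually look like this (schema and user names are assumptions):

```sql
-- Let TestUser see object metadata for everything in schema [test]:
GRANT VIEW DEFINITION ON SCHEMA::[test] TO [TestUser];

-- Or grant read access to the whole schema, which also makes the
-- tables visible in SSMS's Object Explorer:
GRANT SELECT ON SCHEMA::[test] TO [TestUser];
```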
SSRS calculated field strange behavior Posted: 23 Apr 2013 01:09 PM PDT I have a dataset with a calculated field: This field is then accessed inside a Table within the report using a lookup expression: When I run this, initially everything works fine and the correct data is shown in the cell. However, there is a warning displayed: Then if I go to the next page, all of the cells now contain #Error instead of the 'RowSummary' info. If I then go back to the previous page, which used to have all the correct data in it, all of the cells are now replaced with #Error. I have found a workaround whereby I add a hidden list container to the END of the report and set its DataSet property to the 'EventRequests' dataset with the calculated field in it. I don't have to add any fields or make this list visible, but somehow just having the list on the report prevents the #Error from replacing all of my data. This does not solve the problem completely, because when I export to PDF I still receive errors. Does anyone know why this is happening?
You are subscribed to email updates from Recent Questions - Database Administrators Stack Exchange.