[how to] How to improve TempDb on RAMDisk showing par performance
- How to improve TempDb on RAMDisk showing par performance
- Fill factor based on index ranges
- Need help for creating Church Database [on hold]
- How to optimise T-SQL query using Execution Plan
- Importing delimited files into SQL server
- How identify tables with millions of entries
- Import .bak file in Microsoft SQL Server 2008 Service Pack 3
- Is there a standard formula to calculate the optimal resource required by SQL Server base on the Ram size of the server
- how to generate a range of number by text box [on hold]
- What is a standard or conventional name for a column representing the display order of the rows? [on hold]
- Can I force a user to use WITH NOLOCK?
- my.cnf validation
- Monthly backup of SQL server DB to PostgreSQL?
- How do I migrate varbinary data to Netezza?
- How can I get my linked server working using Windows authentication?
- Connection to local SQL Server 2012 can be established from SSMS 2008 but not from SSMS 2012
- Is there a way to find the least recently used tables in a schema?
- How to find Oracle home information on Unix?
- How to handle "many columns" in OLAP RDBMS
- Tuning advisor with Extended events?
- Mysql DB server hits 400% CPU
- Database stuck in restoring and snapshot unavailable
- Multiple database servers for performance vs failover
- Workspace Memory Internals
- Slow backup and extremely slow restores
How to improve TempDb on RAMDisk showing par performance Posted: 02 Aug 2013 07:23 PM PDT Given two SQL Server instances, where the second instance is configured with a RAMDisk for tempdb, and the following test case, measure the total runtime for these cascading selects. The runtimes for me came out the same (~15s vs ~15s). One CPU maxes out for the entire test period. Is there a way to spread those queries across CPUs (is that tempdb file partitioning)?
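What the asker calls "tempdb file partitioning" is usually done by adding extra tempdb data files; a minimal sketch, assuming the RAMDisk is mounted at R: (drive letter, file names, and sizes are illustrative):

```sql
-- Add tempdb data files so allocation work can spread across schedulers.
-- Drive letter, file names, and sizes are assumptions.
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'R:\tempdb2.ndf', SIZE = 256MB, FILEGROWTH = 64MB);
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev3, FILENAME = 'R:\tempdb3.ndf', SIZE = 256MB, FILEGROWTH = 64MB);
```

Note that multiple files relieve allocation contention; they do not parallelise a query that gets a serial plan, which would explain a single CPU maxing out either way.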
Fill factor based on index ranges Posted: 02 Aug 2013 05:31 PM PDT I'm designing a Postgres database for an events app. The app lists events sorted by when they start. Initially the app displays only 30 events; as users scroll through the list, more events are fetched from the database. In reduced form, the queries (depending on the direction in which the user is scrolling) are: I plan on clustering the table on Almost all of the events that users add will have a Is there a way to have Postgres cluster the events table such that the fill factor for events where
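Postgres sets fill factor per table or per index, not per key range, so the closest available knobs look like this; a sketch assuming an events table with a starts_at column (both names are assumptions):

```sql
-- Fillfactor applies uniformly to a whole index or table in Postgres;
-- it cannot vary by key range. Table and column names are assumptions.
CREATE INDEX events_starts_at_idx ON events (starts_at) WITH (fillfactor = 90);
ALTER TABLE events SET (fillfactor = 90);
CLUSTER events USING events_starts_at_idx;  -- one-time physical reordering
```

CLUSTER is a one-off operation, so it has to be rerun (or approximated with pg_repack-style tooling) as new events arrive.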
Need help for creating Church Database [on hold] Posted: 02 Aug 2013 03:00 PM PDT I am creating a database for a church's tithe collection. The church collects tithe every Sunday. My problem is that I don't know whether I have to create a table for each month of the year, and keep creating tables for every year, or whether there is a way out. Currently I have created a database with twelve tables in it, one for each month of the year, but I am wondering what to do for the years to come. Do I have to create more tables for every year? I am using MS SQL Server 2005 with Visual Basic 2005 as the front end. I will be very grateful if an expert out there can help me. Thanks.
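The standard answer to this kind of design is one table with a date column rather than a table per month; a minimal sketch (table and column names are illustrative):

```sql
-- One table holds every Sunday's collections for all years;
-- filter by date range instead of adding tables. Names are assumptions.
CREATE TABLE TitheCollection (
    CollectionId INT IDENTITY(1,1) PRIMARY KEY,
    MemberId     INT           NOT NULL,
    CollectedOn  DATETIME      NOT NULL,  -- SQL Server 2005 has no DATE type
    Amount       DECIMAL(10,2) NOT NULL
);

-- Example: total collected in March 2013.
SELECT SUM(Amount) AS MonthTotal
FROM TitheCollection
WHERE CollectedOn >= '20130301' AND CollectedOn < '20130401';
```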
How to optimise T-SQL query using Execution Plan Posted: 02 Aug 2013 02:30 PM PDT I have a SQL query that I have spent the past two days trying to optimise using trial and error and the execution plan, but to no avail. Please forgive me for doing this, but I will post the entire execution plan here. I have made the effort to make the table and column names in the query and execution plan generic, both for brevity and to protect my company's IP. The execution plan can be opened with SQL Sentry Plan Explorer. I have done a fair amount of T-SQL, but using execution plans to optimise my queries is a new area for me and I have really tried to understand how to do it. So, if anyone could help me with this and explain how this execution plan can be deciphered to find ways to optimise the query, I would be eternally grateful. I have many more queries to optimise; I just need a springboard to help me with this first one. This is the query: What I have found is that the third statement (commented as being slow) is the part that is taking the most time. The two statements before it return almost instantly. The execution plan is available as XML at this link. Better to right-click and save, then open it in SQL Sentry Plan Explorer or some other viewing software, rather than opening it in your browser. If you need any more information from me about the tables or data, please don't hesitate to ask.
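For measuring which statement dominates while iterating, per-statement I/O and timing output is a common companion to the graphical plan; a small sketch:

```sql
-- Report logical reads and CPU/elapsed time per statement while testing.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- ... run the slow statement here ...

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
```

In the plan itself, the usual starting points are the highest-cost operators, large gaps between estimated and actual row counts, and warnings such as sort or hash spills.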
Importing delimited files into SQL server Posted: 02 Aug 2013 12:07 PM PDT I am trying to import a large pipe-delimited (|) file into SQL Server. I know basically nothing about the data; I just want to get it imported. When I go to Database -> Tasks -> Import, I use the advanced option to suggest types and provide padding. The problem is that the routine does not go through the whole file, even when I specify an absurdly large number of rows (1000000000), so I am constantly hitting a truncation error, changing the types, restarting the import, and so on. Is there a better way to do this? Note: the file is not on the same machine as SQL Server.
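One way to sidestep the wizard's type guessing is to stage every column as wide varchar with BULK INSERT and convert types afterwards; a sketch, where the UNC path, table name, and column count are assumptions:

```sql
-- Stage the raw file with generous column widths; convert types later.
CREATE TABLE dbo.RawImport (
    Col1 VARCHAR(4000),
    Col2 VARCHAR(4000),
    Col3 VARCHAR(4000)   -- extend to match the real column count
);

BULK INSERT dbo.RawImport
FROM '\\fileserver\share\bigfile.txt'   -- UNC path, since the file is remote
WITH (FIELDTERMINATOR = '|', ROWTERMINATOR = '\n', TABLOCK);
```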
How identify tables with millions of entries Posted: 02 Aug 2013 03:20 PM PDT On a Debian server with Apache and MySQL, how can I find out whether any one table is getting spammed? I host lots of different blogs, WordPress sites, wikis, and so on for different customers. It seems some PHP applications are not protected against spamming, so some tables get really big and slow down the whole server. I need a script that monitors all tables. Or is there a simple tool I could install to get a report when something weird happens?
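As a starting point for monitoring, the biggest tables can be listed straight from the data dictionary; a sketch (row counts are estimates for InnoDB, so treat them as a rough signal):

```sql
-- Largest tables across all customer schemas by approximate row count.
SELECT table_schema, table_name, table_rows,
       ROUND((data_length + index_length) / 1024 / 1024, 1) AS size_mb
FROM information_schema.tables
WHERE table_schema NOT IN ('mysql', 'information_schema')
ORDER BY table_rows DESC
LIMIT 20;
```

Run from cron and diffed day over day, this would flag any table that suddenly balloons.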
Import .bak file in Microsoft SQL Server 2008 Service Pack 3 Posted: 02 Aug 2013 12:39 PM PDT Please, someone help me... I am using Microsoft SQL Server 2008 Service Pack 3. I have a backup of my database as a .bak file. What should I do? Please help me.
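A .bak file is restored rather than imported; a minimal sketch, where the paths and logical file names are assumptions to be read off the FILELISTONLY output:

```sql
-- First inspect the backup to learn the logical file names.
RESTORE FILELISTONLY FROM DISK = 'C:\backups\mydb.bak';

-- Then restore, relocating the data and log files as needed.
RESTORE DATABASE MyDb
FROM DISK = 'C:\backups\mydb.bak'
WITH MOVE 'MyDb_Data' TO 'C:\data\MyDb.mdf',
     MOVE 'MyDb_Log'  TO 'C:\data\MyDb.ldf',
     RECOVERY;
```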
Is there a standard formula to calculate the optimal resource required by SQL Server base on the Ram size of the server Posted: 02 Aug 2013 12:40 PM PDT The server that SQL Server runs on has 8 GB of RAM. Is there a standard formula that DBAs use to gauge the minimum and maximum resources to be allocated to SQL Server based on the server's RAM size? I need to know how many MB are optimal for these settings:
My research got me to this link: Guideline. But I think the best solution is to understand how he arrived at those figures.
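There is no single standard formula, but a common rule of thumb is to leave a few GB for the OS and grant the rest to SQL Server; a sketch with illustrative values for an 8 GB box:

```sql
-- 6144 MB max / 1024 MB min are illustrative values, not a prescribed formula.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 6144;
EXEC sp_configure 'min server memory (MB)', 1024;
RECONFIGURE;
```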
how to generate a range of number by text box [on hold] Posted: 02 Aug 2013 07:58 PM PDT I have 3 fields: MIN, MAX and SN. I use a form to enter numbers into MIN and MAX. For example, if MIN is 10 and MAX is 20, then SN should list from 10 to 20, for a total of 11 records in the table. What is the easiest way to do it? Thanks. Here is my code:

```vba
Private Sub xx()
    Dim i As Integer
    For i = [Forms]![MAIN]![MIN] To [Forms]![MAIN]![MAX]
        [SN] = i
    Next i
End Sub
```

I tried DoCmd.RunCommand acCmdSave and DoCmd.RunCommand acCmdSaveRecord before Next i, but both save the result in one field. What command can save each counting result?
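A set-based alternative that avoids the loop entirely is a single append query in Access SQL, assuming a helper table Numbers with an integer column n pre-filled to cover the needed range (both names are hypothetical):

```sql
-- Insert one row per value between the two text boxes.
-- Numbers(n) is a hypothetical pre-filled integer helper table.
INSERT INTO MyTable (SN)
SELECT n
FROM Numbers
WHERE n BETWEEN [Forms]![MAIN]![MIN] AND [Forms]![MAIN]![MAX];
```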
What is a standard or conventional name for a column representing the display order of the rows? [on hold] Posted: 02 Aug 2013 04:59 PM PDT For example, consider a junction table associating a product and its pictures. What is a common, conventional, short, general name for "some_column" if it represents the display order of the photos? "order", "sort", and "sequence" are out, as they are keywords.
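A sketch of the junction table using one frequently seen choice, display_order (the name is a suggestion, not an established standard):

```sql
-- "display_order" sidesteps the reserved words ORDER, SORT, and SEQUENCE.
-- Table and column names are illustrative.
CREATE TABLE product_picture (
    product_id    INT NOT NULL,
    picture_id    INT NOT NULL,
    display_order INT NOT NULL,
    PRIMARY KEY (product_id, picture_id)
);
```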
Can I force a user to use WITH NOLOCK? Posted: 02 Aug 2013 11:43 AM PDT Can I force a user's queries to always run with the hint NOLOCK, so that they type an ordinary query but what is executed on the server carries the NOLOCK hint? THIS QUESTION IS NOT:
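There is no hook that rewrites a user's query text, but the same read behaviour can be applied session-wide; a sketch of the usual workaround rather than a literal NOLOCK injection:

```sql
-- READ UNCOMMITTED gives the same dirty-read semantics as WITH (NOLOCK)
-- on every table touched in the session. Table name is illustrative.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

SELECT * FROM dbo.SomeTable;  -- behaves as though WITH (NOLOCK) were present
```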
my.cnf validation Posted: 02 Aug 2013 11:19 AM PDT We have moved our server from an old 8 GB RAM server to a new 16 GB RAM server so that we could get better performance. The server is still consuming a lot of memory. The tables in the database are not designed for InnoDB. The DB's physical file size is approximately 2.8 GB. The my.cnf parameters are: Can anyone validate my.cnf and suggest why it is taking so much memory?
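The settings that dominate MySQL's footprint can be read back from the running server; worst-case usage is roughly the global buffers plus the per-thread buffers multiplied by max_connections:

```sql
-- The variables that account for most of MySQL's memory use.
SHOW VARIABLES WHERE Variable_name IN
    ('key_buffer_size', 'innodb_buffer_pool_size',
     'sort_buffer_size', 'read_buffer_size', 'read_rnd_buffer_size',
     'join_buffer_size', 'tmp_table_size', 'max_connections');
```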
Monthly backup of SQL server DB to PostgreSQL? Posted: 02 Aug 2013 11:37 AM PDT The company I'm working for has a SQL Server with read-only access. They use Crystal Reports hooked up to PostgreSQL for reporting. Is there any way to move all the data from the MSSQL DB to PostgreSQL without user interaction? That seems to be the caveat to what I'm trying to do: they need to be able to run this report after I leave without having to interact with it during the process. Or am I looking at this the wrong way? Is there a way to save a "snapshot" of the SQL Server DB that can be manipulated in Crystal Reports? The ultimate goal is that, since the DB is dynamic, we need a static DB at the end of the month that all the reports can be run against without worrying about it changing.
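One unattended pattern is a scheduled CSV hand-off: bcp exports on the SQL Server side and COPY loads on the Postgres side; a sketch where the table, share, and paths are assumptions:

```sql
-- Postgres side of a monthly CSV hand-off. The SQL Server side would
-- first export with something like:
--   bcp "SELECT * FROM dbo.Sales" queryout \\share\sales.csv -c -t, -S server -T
TRUNCATE monthly_sales;
COPY monthly_sales FROM '/import/sales.csv' WITH (FORMAT csv);
```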
How do I migrate varbinary data to Netezza? Posted: 02 Aug 2013 01:19 PM PDT I got a warning message while migrating DDL from SQL Server to Netezza:
I'm wondering whether this kind of data conversion will cause issues such as truncation of data.
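Netezza has no native varbinary type, so one commonly used workaround (an assumption here, not necessarily what the migration tool does) is to carry the bytes as a hex string and size the target VARCHAR at twice the source length:

```sql
-- SQL Server side: style 2 converts binary to hex without the 0x prefix.
-- Each source byte becomes two characters in the target column.
SELECT CONVERT(VARCHAR(MAX), BinaryCol, 2) AS BinaryAsHex
FROM dbo.SourceTable;  -- table and column names are illustrative
```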
How can I get my linked server working using Windows authentication? Posted: 02 Aug 2013 04:05 PM PDT I'm trying to get a linked server to ServerA created on another server, ServerB, using "Be made using the login's current security context" in a domain environment. I read that I'd need SPNs created for the service accounts that run SQL Server on each of the servers in order to enable Kerberos. I've done that, and both now show the authentication scheme to be Kerberos; however, I'm still facing the error: In Active Directory, I can see that the service account for ServerB is trusted for delegation to MSSQLSvc, but I noticed that the service account for ServerA does not yet have "trust this user for delegation" enabled. Does the target server also need that option enabled? Is anything else necessary to be able to use the current Windows login with a linked server?
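For reference, the check the asker presumably used to confirm Kerberos can be run on each hop; NTLM in this output is the classic sign that the double hop will fail:

```sql
-- Kerberos is required on the first hop for delegation to work;
-- NTLM means the credentials cannot be forwarded to the linked server.
SELECT session_id, auth_scheme
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;
```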
Connection to local SQL Server 2012 can be established from SSMS 2008 but not from SSMS 2012 Posted: 02 Aug 2013 03:49 PM PDT I have two SQL Server instances running on my local machine. The first is SQL Server 2008 R2 Enterprise Edition (named MSSQLSERVER) and the second is SQL Server 2012 Business Intelligence Edition. My problem is with SSMS 2012, which can connect to remote servers but not to the local 2012 instance; I can, however, connect to this instance from SSMS 2008. The error message I get when trying to log in is:
I must point out that I don't have the necessary privileges to access SQL Server Configuration Manager (blocked by Group Policy). Any help would be appreciated.
Is there a way to find the least recently used tables in a schema? Posted: 02 Aug 2013 05:49 PM PDT Is there a way to find the least recently used tables in a MySQL schema, besides going into the data directories? I was hoping there was a metadata or status trick, but Update_Time in SHOW TABLE STATUS and INFORMATION_SCHEMA is always NULL.
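For the record, the metadata query looks like this; UPDATE_TIME is maintained for MyISAM tables but stays NULL for InnoDB in the MySQL versions of this era, which matches what the asker sees:

```sql
-- Least recently updated tables first; the schema name is an assumption.
SELECT table_schema, table_name, update_time
FROM information_schema.tables
WHERE table_schema = 'mydb'
ORDER BY update_time IS NULL, update_time;
```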
How to find Oracle home information on Unix? Posted: 02 Aug 2013 08:49 PM PDT I need help finding the Oracle home path corresponding to a database instance in a RAC environment. I am aware of a few ways to achieve this; I am listing them below to avoid getting the same answers.
I am trying to find a generic way that will work across all Oracle versions, and it should not depend on anything outside a DBA's reach. Do you have any way other than those listed above? Many thanks in advance.
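One approach often cited (it may already be on the asker's list) reads the environment from inside the database via PL/SQL:

```sql
-- Requires EXECUTE on sys.dbms_system; run from SQL*Plus.
SET SERVEROUTPUT ON
DECLARE
    v_home VARCHAR2(256);
BEGIN
    sys.dbms_system.get_env('ORACLE_HOME', v_home);
    dbms_output.put_line(v_home);
END;
/
```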
How to handle "many columns" in OLAP RDBMS Posted: 02 Aug 2013 12:49 PM PDT I have a fact that has around 1K different numerical attributes (i.e. columns). I would like to store this in a column-oriented DB and perform cube analysis on it. I tried to design a star schema, but I'm not sure how to handle this many columns. Normalising it sounds wrong, but I can't just have flat columns either. The combinations of attributes are also too diverse for a simple dimension table, even if I were to reduce the numerical values into categories (ranges), which is an option. I thought about storing them as XML or JSON for each row, but that doesn't sound great either. If it helps, I'm planning to use Amazon Redshift for the DB. Note: we have a strong preference for Redshift, as it fits perfectly with at least a few other operations we do on this data. Hence I want to avoid other technologies like HBase if possible.
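For comparison against 1K flat columns, a key-value ("tall and skinny") fact layout trades column count for row count and a pivot at query time; a sketch with illustrative names:

```sql
-- Each numeric attribute becomes a row rather than a column.
CREATE TABLE fact_measurement (
    entity_id    BIGINT   NOT NULL,
    attribute_id SMALLINT NOT NULL,  -- joins to a small attribute dimension
    measured_at  DATE     NOT NULL,
    value        DOUBLE PRECISION
);

-- Aggregating one attribute across the cube:
SELECT measured_at, AVG(value)
FROM fact_measurement
WHERE attribute_id = 42
GROUP BY measured_at;
```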
Tuning advisor with Extended events? Posted: 02 Aug 2013 03:30 PM PDT With SQL Trace I was able to analyze captured workloads with the Database Engine Tuning Advisor to obtain basic performance-tuning recommendations (missing indexes, statistics, ...). Now, with SQL 2012 and Extended Events, how can I do something similar? Thanks
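In 2012 the Tuning Advisor still consumes trace files rather than Extended Events output, so one interim approach is simply to capture the workload with an XE session for manual analysis; a sketch (the session name and file path are assumptions):

```sql
-- Record completed statements to a file target.
CREATE EVENT SESSION TuningCapture ON SERVER
ADD EVENT sqlserver.sql_statement_completed
    (ACTION (sqlserver.sql_text, sqlserver.database_id))
ADD TARGET package0.event_file (SET filename = N'C:\xe\TuningCapture.xel');

ALTER EVENT SESSION TuningCapture ON SERVER STATE = START;
```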
Mysql DB server hits 400% CPU Posted: 02 Aug 2013 11:49 AM PDT I have been facing a problem with my database server for about a month. Below are the observations I see when it hits the top, then drains down within 5 minutes. When I check the process list, I see DML and SQL queries halted for some minutes, and everything processes very slowly, whereas every query is indexed appropriately and most of the time returns in less than 1 second when executed to serve the application.
The URL below shows SHOW ENGINE INNODB STATUS\G and SHOW OPEN TABLES output at the time of a spike, which subsided within 5 minutes. In rare scenarios, maybe once in two months, I see the processes take 5 to 8 hours to drain back to normal. The whole time I watch the processor utilization and how the load gradually splits its tasks, and keep monitoring the processes, InnoDB status, and I/O status; I need not do anything to bring it down. It serves the applications promptly, and after some time it drains down to normal. Can you find anything suspicious in the URL, such as locks or OS waits? Any suggestion on what to triage first, or what could have caused such spikes? http://tinyurl.com/bm5v4pl -> "show innodb status \G and show open tables at DB spikes." Also, there are some concerns that I would like to share with you.
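A hedged set of snapshots worth taking the moment a spike begins, to separate lock waits from genuine CPU work:

```sql
-- What is running, what InnoDB is waiting on, and how hot the locks are.
SHOW FULL PROCESSLIST;
SHOW ENGINE INNODB STATUS\G
SHOW GLOBAL STATUS LIKE 'Threads_running';
SHOW GLOBAL STATUS LIKE 'Innodb_row_lock%';
```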
Database stuck in restoring and snapshot unavailable Posted: 02 Aug 2013 10:49 AM PDT I tried to restore my database from a snapshot. This usually took around a minute to complete the last couple of times. When I did it today, it hadn't completed after around 30 minutes, and the spid was in a suspended state. I stopped the query, and now my database is stuck in the restoring state and my snapshot is unavailable. Am I screwed?
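If no further backups need to be applied, the usual way out of a perpetual RESTORING state is to recover the database explicitly (the database name is a placeholder):

```sql
-- Bring the database online out of the RESTORING state.
RESTORE DATABASE MyDb WITH RECOVERY;
```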
Multiple database servers for performance vs failover Posted: 02 Aug 2013 06:49 PM PDT If I have two database servers and am looking for maximum performance rather than high availability, what configuration would be best? Assuming the architecture is two load-balanced web/app servers in front of two DB servers, will I be able to have both DB servers active with synced data, in a web1-to-db1, web2-to-db2 setup? Is this active/active? I'm also aware that the two DB servers can each host their own schema to manually split the DB needs of the app. In that case daily backups would be fine; we don't have mission-critical data. If it matters, we have around 3,000-7,000 simultaneous users.
Workspace Memory Internals Posted: 02 Aug 2013 01:19 PM PDT From reading SQL Server 2008 Internals and Troubleshooting (borrowed from my local library in Illinois) by Christian Bolton, Brent Ozar, et al., and from lots of searching on the web, I am trying to confirm my understanding of SQL Server. I would appreciate it if someone could confirm or correct it. Every query or operation that requires a query memory grant needs workspace memory. In general, queries using Sort, Hash Match join, Parallelism (not sure about this), Bulk Insert (not sure), Index Rebuild, etc. need query workspace memory. Workspace memory is part of the SQL Server buffer pool (it is allocated as part of the buffer pool), and maximum workspace memory is 75% of the memory allocated to the buffer pool. By default a single query cannot get more than 25% of workspace memory (in SQL 2008/2012, controlled out of the box by the Resource Governor default workload group). Seeking confirmation of my understanding: 1) Considering a system with 48 GB RAM and max server memory configured to 40 GB, does this mean max workspace memory is limited to 30 GB, and a single query cannot get more than 10 GB of workspace (query) memory? So if you have a bad query working with a billion rows that does a massive hash join and needs more than 10 GB of workspace memory, would it even go through the memory grant queue, or would it spill to disk right away? 2) If a query doing a massive sort operation has been assigned 5 MB of workspace memory, and during execution the query processor realizes that, due to bad statistics or missing indexes, the query actually needs 30 MB, it will immediately spill to tempdb. Even if the system has plenty of workspace memory available, once the query exceeds its granted workspace memory during execution it has to spill to disk. Is my understanding correct?
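A practical way to watch these mechanics, rather than a confirmation of the numbers above, is to query the memory-grant DMVs while a big sort or hash runs:

```sql
-- Per-query grants: requested vs granted vs actually used.
SELECT session_id, requested_memory_kb, granted_memory_kb,
       used_memory_kb, max_used_memory_kb
FROM sys.dm_exec_query_memory_grants;

-- Overall state of the query-memory semaphore, including waiters.
SELECT * FROM sys.dm_exec_query_resource_semaphores;
```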
Slow backup and extremely slow restores Posted: 02 Aug 2013 01:49 PM PDT I don't normally work with MySQL but with MS SQL Server, and I am having issues restoring a dump backup of a 9 GB database. I converted it to MS SQL and it takes a grand total of 4 minutes to restore, but the MySQL DB takes over an hour on the same server. The MySQL database is using InnoDB; is there an alternative way to speed up the restores? Both databases are on the same machine, Windows 2008 R2 in a VM with a dynamic SAN. Correction: it takes MS SQL 1 minute to restore, and 1 hour to restore the same database in MySQL. EDIT: mysql.ini (with commented lines removed):
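A commonly suggested way to speed up loading a mysqldump into InnoDB is to relax constraint checks and commit once at the end; a sketch (the dump file name is a placeholder):

```sql
-- Run in the mysql client around the import.
SET autocommit = 0;
SET unique_checks = 0;
SET foreign_key_checks = 0;

SOURCE dump.sql;

SET foreign_key_checks = 1;
SET unique_checks = 1;
COMMIT;
```

Setting innodb_flush_log_at_trx_commit = 2 for the duration of the restore is another frequently cited lever, at the cost of durability during the load.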