[how to] Lock Pages in Memory keeps SQL database engine stuck at allocating 60MB RAM?
- Lock Pages in Memory keeps SQL database engine stuck at allocating 60MB RAM?
- What do you call numerical coding systems that are left-aligned?
- Is this compound index unnecessary?
- SQL Server: Additional CPUs slow down batch
- Easily, quickly replace MySQL table?
- Table Design for multiple user-defined assignments
- How to configure MySQL Enterprise Cluster
- What do you call a relationship between two entities without a foreign key?
- retrieving data speed tweaks MS SQL 2005
- Converting .TPS (TopSpeed) to SQL
- MySQL: sysbench test - InnoDB vs Memory tables
- SQL Query to fetch data from 4 different tables [migrated]
- On Oracle 12c, permitting and disallowing crash recovery
- Import Oracle schema data without losing modifications of stored procedures
- Query getting periodically stuck in 'copying to tmp table' state, never completes
- Accumulo table design methodology
- Disaster Recovery for PostgreSQL 9.0
- Oracle Patch Update
- Replicated Database Log File Maintenance
- SQL Server Designers, Failed Saves, and Generated Scripts
- How to avoid empty rows in SSIS Excel Destination?
- How to add rows/columns to the table at runtime in SSRS 2008
- How to disable oracle's MAX_ENABLED_ROLES limit
- In MySQL, does the order of the columns in a WHERE clause affect query performance, and why?
- Delete a word, its meanings, and its meanings' example sentences from DB
- MySQL concurrent INSERTs
- How can I optimize this query and support multiple SKUs?
- How to modify an update in Oracle so it performs faster?
- Query to find and replace text in all tables and fields of a mysql db
Lock Pages in Memory keeps SQL database engine stuck at allocating 60MB RAM? Posted: 13 Sep 2013 08:17 PM PDT I'm using Windows Server 2003 Enterprise with Microsoft SQL Server 2005 Standard Edition. The problem is that SQL Server's memory always gets stuck at allocating 1.7GB of RAM at most, even though I have 11GB of RAM free. According to this thread, whose starter had the same issue: http://www.sqlservercentral.com/Forums/Topic499500-146-1.aspx#bm499829 . I tried adding /PAE to my boot.ini, granting the Network Service account that runs sqlservr.exe the Lock Pages in Memory right, and reconfiguring SQL Server to use AWE to allocate memory. But I had no luck: after restarting the server, the SQL database engine could not allocate more than 60MB of RAM, which is far worse than I expected. So after that I had to revert the Lock Pages in Memory setting - remove the Network Service account from the Lock Pages in Memory option - and restart the server, and it went back to the first problem: the SQL Server database engine again stuck at allocating 1.7GB of RAM. So does Lock Pages in Memory keep the SQL database engine stuck at allocating 60MB of RAM? And how can I resolve the original problem? |
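For reference, on 32-bit SQL Server 2005 the AWE path involves both a boot-time switch and server configuration. A minimal sketch of the server-side part, assuming a sysadmin login and illustrative memory values (this shows the standard configuration, not a fix for the 60MB symptom itself):

```sql
-- Assumes 32-bit SQL Server 2005, /PAE in boot.ini, and the service account
-- granted the "Lock Pages in Memory" user right. Values are illustrative.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'awe enabled', 1;                 -- takes effect after a service restart
EXEC sp_configure 'max server memory (MB)', 10240;  -- cap explicitly; leave headroom for the OS
RECONFIGURE;
```

With AWE, capping 'max server memory (MB)' matters: AWE-mapped memory is not released dynamically, so an uncapped instance can starve the OS.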
What do you call numerical coding systems that are left-aligned? Posted: 13 Sep 2013 08:28 PM PDT I'm writing an import mechanism for the NAICS database. I have a few questions about this code format. I've seen it before and I like the setup. I'm going to ask some other questions about best practices and navigation of this data, and I'd like to simply refer to it by the right name. Essentially this is an example of the data. So if Bituminous Coal Underground Mining were your organization type, your code would be What is such a scheme called - is there a term for this kind of organization of data? I want to call it something like recursive-base10 or recursive-decimal; is there a name for it, though? |
Is this compound index unnecessary? Posted: 13 Sep 2013 03:57 PM PDT I have a large table which contains sensor data, along with the fields: I run queries against it using these fields almost exclusively. There are multiple sensors, and different sensors might have the same timestamp for a given set of data, so neither index can be unique. I created three indexes: one for each of these columns (not unique), and a compound one for both (unique). The table is constantly being written to, and queries to read data seem increasingly slow. My question is: is the compound index unnecessary? Would it be faster to have only the two separate indexes? (Or remove those and keep only the compound index?) No other columns are used for filtering query data. My question is similar to this one: Do I need separate indexes for each type of query, or will one multi-column index work? |
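One relevant rule of thumb: a compound index also serves queries that filter on its leftmost column alone, so the single-column index on that column is usually redundant. A sketch in MySQL-style syntax with hypothetical table/column names (the question doesn't show the actual schema):

```sql
-- Hypothetical names. The leftmost-prefix rule means this compound index
-- also covers queries filtering on sensor_id alone.
CREATE UNIQUE INDEX ix_readings_sensor_ts
    ON readings (sensor_id, recorded_at);

-- The single-column sensor index is now redundant and just slows writes.
DROP INDEX ix_readings_sensor ON readings;

-- Keep a separate index on recorded_at only if you query by time range
-- across all sensors; the compound index cannot serve that efficiently.
```

On a write-heavy table, every extra index adds per-insert maintenance cost, so dropping the redundant one helps both sides.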
SQL Server: Additional CPUs slow down batch Posted: 13 Sep 2013 02:48 PM PDT I'm running SQL Server 2012 on Windows 2008R2 in a virtualized environment. I've observed the following under both VMware Workstation 9 and Hyper-V 2012R1, and I don't know how to address it. I've got a batch that takes around 5 minutes to run when there is a single CPU in the virtual machine. Bumping it up to anywhere from 2-8 CPUs causes it to take over 10 minutes to run. Watching Task Manager, I see little if any parallel execution and lots of context switching. If I limit sqlservr.exe to a single CPU by setting the processor affinity in Task Manager, the time drops back down to 5 minutes. The particular batch I'm running makes heavy use of cursors and dynamic SQL, which cannot be eliminated. The query has been profiled and optimized, statistics are all up to date, and indexes are rebuilt. Is there anything I can do to SQL Server to get better behavior? This seems not right. I would like to add additional CPU resources to the VM so that they can be used if necessary, without a drastic performance hit for serialized processing. The CPU is an i7-4770K with VT-x enabled, both with and without hyperthreading enabled. |
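A less drastic alternative to OS-level processor affinity is capping parallelism inside SQL Server, either instance-wide or per query. A sketch (the table name in the per-query form is hypothetical):

```sql
-- Instance-wide cap; cursor-heavy serial batches often behave best at 1.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 1;
RECONFIGURE;

-- Or per statement, leaving the rest of the workload free to parallelize
-- (dbo.SomeTable is a placeholder):
SELECT COUNT(*) FROM dbo.SomeTable OPTION (MAXDOP 1);
```

This keeps the extra vCPUs available to other sessions while stopping the optimizer from generating parallel plans for the serial batch.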
Easily, quickly replace MySQL table? Posted: 13 Sep 2013 06:02 PM PDT We have a process which regenerates a table from scratch, drops the original table, and moves the new table into the original's place. This should happen very quickly, but sometimes a website request hits the table after it's been dropped but before the new one is renamed. Apart from coding the website to be more robust when there's a database error, is there an easier way to do this? |
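MySQL's RENAME TABLE can swap several tables in a single atomic statement, which removes the window where the table doesn't exist. A sketch with hypothetical table names:

```sql
-- Build the replacement off to the side (hypothetical names).
CREATE TABLE t_new LIKE t;
-- ... repopulate t_new here ...

-- Atomic swap: at no point does a query see `t` missing.
RENAME TABLE t TO t_old, t_new TO t;

DROP TABLE t_old;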
Table Design for multiple user-defined assignments Posted: 13 Sep 2013 12:21 PM PDT I have a Now users of a web application can define their own weather type definitions (meaning that they can define that "Hot" corresponds to when the temperature is above 26 centigrade, etc.). I currently have three tables: The issue is that pulling data every 15 minutes means that one year's worth of data contains 35,040 records. And the more users create their own weather definitions, the |
How to configure MySQL Enterprise Cluster Posted: 13 Sep 2013 11:38 AM PDT I have 2 MySQL Enterprise servers and I want to set them up as a cluster with active and standby roles. How do I configure this? Thank you. |
What do you call a relationship between two entities without a foreign key? Posted: 13 Sep 2013 09:54 AM PDT Let's say I have two tables that are related in that one table contains a field that is a key to the other table. In other words, it would be a 1:1 or 1:* depending on constraints. Let's call it a Customer/Orders relationship to give it some context. However, let's say the requirement is that there be no referential integrity, because one table needs to be archived without affecting the rest of the system. In our case, they want to archive the customers but leave the orders in the system. We can ignore the fact that we now have dangling references; they are just a fact of life when you need to prune systems in this way. What would you call this kind of "loose" relationship where there is no actual FK? How might you illustrate this in an ERD without implying there is an FK, while still showing that there is a relationship? I realize that the proper method might be to use an FK anyway and null the reference when archiving the customer, but that adds extra complexity they don't want to deal with. Further, they don't want to update the orders table after it has been finalized. They want to be able to find that customer number and go back in the archive and find the customer if need be. |
retrieving data speed tweaks MS SQL 2005 Posted: 13 Sep 2013 12:25 PM PDT I have a database in Microsoft SQL Server 2005. I have a table with 3 columns, namely: Columns I have an application that depends mainly on the data in this table. The application retrieves data from this table with a query like: But this query takes around 2 minutes to retrieve the data. This is my problem: the application does real-time processing and needs to retrieve the data in about 5 seconds. How can I tweak things on the server so the retrieval time decreases? I indexed the table by the HASHKEY column when I created it, but retrieving data still takes too long. Is there any database setting I can change to decrease the time? I will welcome any type of solution, but I need to solve this; I am not very expert at this. Also, HASHKEY is just a random 19-digit value, with no relation to the other values. The result of the following query is: Result: Time taken: 2 minutes, 1 second EDIT This is the script to create the table: I will ask for a random set of around 30000 hashkeys at a time, in no particular order. When I had around 20000000 rows in the table, the query took less than 2 seconds, but now the retrieval time keeps increasing. EDIT And here is the script to create the index - the only index on this table. Please help me. |
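With ~30,000 random keys per request, one common pattern is to load the keys into a temp table and join, rather than building a giant IN list that must be parsed and planned each time. A sketch with assumed names, since the actual schema isn't shown:

```sql
-- Hypothetical schema; assumes an index exists on dbo.DataTable(HASHKEY).
-- decimal(19,0) holds any 19-digit value; bigint tops out partway through 19 digits.
CREATE TABLE #keys (HASHKEY decimal(19,0) PRIMARY KEY);

-- Batched INSERTs (or BULK INSERT) of the ~30,000 requested keys go here.

SELECT d.*
FROM dbo.DataTable AS d
JOIN #keys AS k
  ON k.HASHKEY = d.HASHKEY;
```

The join lets the optimizer do one index-driven lookup pass instead of evaluating a huge predicate list, and the temp table's primary key keeps the probe side sorted and duplicate-free.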
Converting .TPS (TopSpeed) to SQL Posted: 13 Sep 2013 10:26 AM PDT I have an older application that uses TopSpeed as the database engine. I want to use some of the data contained in these tables in another application that uses SQL. To accomplish this, I purchased the TPS ODBC driver and used Access to move the data from the TPS tables to an SQL database by using the linked tables feature. This works fine, but I'm looking for an automated solution (plus, the Access way is messy). Is there a tool out there that could help? |
MySQL: sysbench test - InnoDB vs Memory tables Posted: 13 Sep 2013 08:40 AM PDT I've run some tests to investigate a performance issue on a new HP Gen8 server. I created two tables; the first one uses System details: Prepare stage: InnoDB Memory (heap) Testing stage: Sysbench - read-only test - single table with 1 mln rows - data size 559MB (527MB data + 31MB indexes) InnoDB Total time: 16.3648s, TPS (transactions per second): 6111.40 Memory (heap) This test ran much longer and I had to stop it, as the load on the server was very high - even though this is an in-memory table!? |
SQL Query to fetch data from 4 different tables [migrated] Posted: 13 Sep 2013 08:23 AM PDT I have four tables. I need to fetch data from one table with a WHERE condition; the output contains IDs from the three other tables, and using those IDs I need to get the corresponding names. I need a query that returns something like below:
|
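Resolving IDs in one table to names held in three others is a straight multi-join. A sketch with entirely hypothetical table and column names, since the actual schema isn't shown:

```sql
-- Hypothetical schema: orders holds the foreign IDs; the other three
-- tables map each ID to a display name.
SELECT o.id,
       c.customer_name,
       p.product_name,
       s.status_name
FROM   orders    AS o
JOIN   customers AS c ON c.id = o.customer_id
JOIN   products  AS p ON p.id = o.product_id
JOIN   statuses  AS s ON s.id = o.status_id
WHERE  o.created_at >= '2013-01-01';   -- placeholder for the actual condition
```

If some IDs may be missing from the lookup tables, switch the JOINs to LEFT JOINs so those rows still appear with NULL names.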
On Oracle 12c, permitting and disallowing crash recovery Posted: 13 Sep 2013 09:21 AM PDT A developer who is testing his application on an Oracle 12c database has requested the following for two users:
and
I have not worked extensively with Oracle and I am having a hard time tracking down how to honour this request. Any pointers or references would be appreciated. |
Import Oracle schema data without losing modifications of stored procedures Posted: 13 Sep 2013 05:57 PM PDT I have this scenario:
We usually do as follows, because it's the only way to get the data updated properly - an import with
The problem is:
What would be an import-based solution to this? EDIT: I failed to mention that prod is Solaris and dev is RedHat. |
Query getting periodically stuck in 'copying to tmp table' state, never completes Posted: 13 Sep 2013 12:44 PM PDT I am running WordPress on a dedicated server with a MySQL backend. I have a query that usually takes <1 second to execute, but periodically this query gets stuck in a 'copying to tmp table' state and stays that way indefinitely until it is killed or mysqld is restarted. After restarting mysqld the problem goes away, and the (identical) query once again takes <1 second to execute. This leads me to believe this is a configuration problem. How do I go about solving it? The query itself is not too intensive, and my server is not experiencing any sudden traffic spikes. The tables are all InnoDB format. Here is my my.cnf: http://pastebin.com/9UMPxfAr The query: An EXPLAIN of the query: http://pastebin.com/m5ndBfVX And the output of "SHOW ENGINE INNODB STATUS" while a query is stuck in the 'copying to tmp table' state: http://pastebin.com/h0xv4Sfa |
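One configuration angle worth checking: MySQL keeps implicit temporary tables in memory only up to the smaller of tmp_table_size and max_heap_table_size, spilling to disk beyond that. A sketch (256M is an illustrative value, not a recommendation for every server):

```sql
-- Both must be raised together; the effective in-memory cap is the smaller one.
SET GLOBAL tmp_table_size      = 268435456;   -- 256M, illustrative
SET GLOBAL max_heap_table_size = 268435456;

-- If this counter climbs quickly, implicit temp tables are spilling to disk:
SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables';
```

Note also that temp tables containing TEXT/BLOB columns always go to disk regardless of these settings, which is a common reason a WordPress query hangs in 'copying to tmp table'.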
Accumulo table design methodology Posted: 13 Sep 2013 10:23 AM PDT I am just getting started with Accumulo and NoSQL databases and I am looking for some discussion on table design. I get the key value structure that is seen in the manual. However, if I am trying to recreate a relational database, I am not sure how relationships work. Can someone explain to some degree how to setup and "Hello World" database (i.e., manager-employee database). I want to use key-value implementation. |
Disaster Recovery for PostgreSQL 9.0 Posted: 13 Sep 2013 07:18 PM PDT We have a number of PostgreSQL 9.0 servers. We use binary replication to keep a hot standby instance of each. The only problem is that if someone drops the master, with or without intentions, the drop cascades to the replicas as well. I'm looking at possible ways to avoid this. One possible option seems to be Point-in-Time Recovery. I'm just wondering what a good design for this could be. Any ideas? Let's assume the master is compromised and we lose everything we have there. How can we avoid losing the replica, or at least have a way to bring it back if it's dropped? |
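A PITR design that survives a destructive statement on the master pairs periodic base backups with continuous WAL archiving, then restores to a point just before the damage. A sketch with 9.0-era settings (all paths and the target time are illustrative):

```
# postgresql.conf on the master (PostgreSQL 9.0-era settings; paths illustrative)
wal_level       = archive
archive_mode    = on
archive_command = 'cp %p /archive/%f'

# recovery.conf placed in a restored base backup, stopping just before the bad DROP
restore_command      = 'cp /archive/%f %p'
recovery_target_time = '2013-09-13 12:00:00'
```

Because the archive holds every WAL segment, the replay can stop at any chosen moment, unlike streaming replication, which faithfully replays the DROP onto the standby.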
Oracle Patch Update Posted: 13 Sep 2013 11:20 AM PDT We have an Oracle RAC production environment with primary and secondary DBs. Our DBA has asked to update the Oracle version from 11.2.0.1.0 (64-bit) to 11.2.0.3 (64-bit) with patches 6880880, 10404530, 16803769 and 16803775. In our current database we have shared storage, ACL settings, security settings, gateway/heterogeneous connectivity, Data Guard, Data Guard broker, a backup policy, and Oracle Client installed on other machines. The DBA has estimated that he needs to do the installation and settings from scratch and test. So, when the version is updated, do we really need to reconfigure and reinstall everything (shared storage, ACL settings, security settings, gateway/heterogeneous connectivity, Data Guard, Data Guard broker, backup policy, and Oracle Client on other machines)? If yes, that's fine; if not, I need to justify it. I can understand that testing would be required. |
Replicated Database Log File Maintenance Posted: 13 Sep 2013 03:20 PM PDT I have a database on the publisher that is involved in replication (the publication is configured for merge and transactional). I'm trying to regain control of the log file for this particular database (VLF count, size, etc.). Is there anything I need to do (or be cautious of) with the replication setup before performing any maintenance on the log file? I am not an expert in the area of replication and cannot find anything solid that provides guidance as to what measures should be taken. Edit: This would include working on the distribution database as well; data retention was not configured at all for some reason. |
SQL Server Designers, Failed Saves, and Generated Scripts Posted: 13 Sep 2013 02:09 PM PDT I am a big fan of the simple diagramming tool that comes with SSMS and use it frequently. When I save changes to the model, I have it configured to automatically generate the change scripts that go along with the save. I then save (and source-control) the resulting change script. This works great and is an important piece of the process my teams use. What occasionally happens is that a save fails, and I still get the option to save my change script. I then fix the problem and save again (which results in another change script). I'm never clear on what I need to do at this point to maintain a consistent set of change scripts. There seems to be overlap between the two scripts (the failed and the successful), but they are not identical. If I want to continue to use this feature, what should I do with the resulting script as soon as I get a failed save of the model? |
How to avoid empty rows in SSIS Excel Destination? Posted: 13 Sep 2013 08:20 PM PDT Does anyone have a way to avoid empty rows when using SSIS to export to Excel? Here's a simple example of one data flow task: OLE DB Source: Data Conversion (to handle the annoying UNICODE / NON-UNICODE deal): The end result is either of the two below, depending on the value of "FirstRowHasColumnName" in the Excel Connection Manager. Note the blank rows. |
How to add rows/columns to the table at runtime in SSRS 2008 Posted: 13 Sep 2013 10:20 AM PDT Usually we design a table to have x number of rows and y number of columns in a report. But how can we create a report which adds rows and columns dynamically at run time based on the result of the source query? For example, I want to list studentId, StudentName, and any course each student has enrolled in. As the number of courses differs from one person to another, I should add the rows and related columns for courses at run time based on the query result. How can this be done? For example: Thanks for your help in advance. |
How to disable oracle's MAX_ENABLED_ROLES limit Posted: 13 Sep 2013 04:20 PM PDT How can I disable Oracle's MAX_ENABLED_ROLES limit, or increase the limit's value? [Oracle 10g (Win32)] |
In MySQL, does the order of the columns in a WHERE clause affect query performance, and why? Posted: 13 Sep 2013 01:20 PM PDT I have a query that doesn't use any indexes: The |
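For what it's worth, the optimizer normalizes ANDed predicates before planning, so reordering them does not change the execution plan; this is easy to verify with EXPLAIN. A sketch with a hypothetical table:

```sql
-- Hypothetical table/columns. Both statements produce the same EXPLAIN output
-- in MySQL: predicate order in an ANDed WHERE clause is not an index hint.
EXPLAIN SELECT * FROM orders WHERE status = 'open' AND customer_id = 42;
EXPLAIN SELECT * FROM orders WHERE customer_id = 42 AND status = 'open';
```

What does matter is column order inside a compound index definition, which is a separate question from WHERE-clause order.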
Delete a word, its meanings, and its meanings' example sentences from DB Posted: 13 Sep 2013 05:20 PM PDT I have three tables as below (simplified for demonstration): where, Edit1: I am using SQLite3 as the database. Edit2: I figured out the following solution, which requires three SQL queries run in order: I'm still looking for the answer to my question: can the whole process be done in one query? |
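If the schema can be (re)declared with cascading foreign keys, a single DELETE on the word removes the whole chain. A sketch with hypothetical table names, since the actual schema is only summarized above; note that SQLite requires the pragma on every connection:

```sql
-- SQLite; hypothetical tables standing in for the three described.
PRAGMA foreign_keys = ON;  -- must be enabled per connection

CREATE TABLE words    (id INTEGER PRIMARY KEY, word TEXT);
CREATE TABLE meanings (id INTEGER PRIMARY KEY,
                       word_id INTEGER REFERENCES words(id) ON DELETE CASCADE,
                       meaning TEXT);
CREATE TABLE examples (id INTEGER PRIMARY KEY,
                       meaning_id INTEGER REFERENCES meanings(id) ON DELETE CASCADE,
                       sentence TEXT);

-- One statement deletes the word, its meanings, and their example sentences.
DELETE FROM words WHERE word = 'ephemeral';
```

The cascade fires from words to meanings to examples, so the three-query sequence collapses into one.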
MySQL concurrent INSERTs Posted: 13 Sep 2013 08:20 AM PDT I have a MySQL database with InnoDB tables. There are different client processes making |
How can I optimize this query and support multiple SKUs? Posted: 13 Sep 2013 12:20 PM PDT My current query can only select one SKU at a time. I can leave |
How to modify an update in Oracle so it performs faster? Posted: 13 Sep 2013 02:20 PM PDT I have this query: The trouble I am having is that this query takes a long time to run. I don't know whether it is possible to run it in parallel, or whether it would be easier to update through a cursor in a pipelined function. What would you suggest? This is all the information that I believe is relevant. This is the execution plan of the internal select: Table data: This is the script of the historical table: This is the other table: The temporary table is the result of FEE_SCHEDULE_HISTORICAL minus FEE_SCHEDULE |
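Large correlated UPDATEs in Oracle often rewrite well as a MERGE, optionally with parallel DML. A sketch using the two table names mentioned above but hypothetical column names, since the actual query isn't shown:

```sql
-- Column names are assumptions. MERGE lets Oracle join the source once
-- (typically a hash join) instead of re-running a correlated subquery per row.
ALTER SESSION ENABLE PARALLEL DML;

MERGE /*+ PARALLEL(t 4) */ INTO fee_schedule t
USING fee_schedule_historical h
   ON (t.schedule_id = h.schedule_id)
WHEN MATCHED THEN
  UPDATE SET t.fee_amount = h.fee_amount;

COMMIT;  -- parallel DML requires a commit before the table can be re-queried
```

Whether the parallel hint helps depends on table size, I/O capacity, and the execution plan, so it is worth comparing plans with and without it.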
Query to find and replace text in all tables and fields of a mysql db Posted: 13 Sep 2013 06:20 PM PDT I need to run a query to find and replace some text in all tables of a MySQL database. I found this query, but it only looks for the text in the tbl_name table and only in the column field. I need it to look in all tables and all fields (everywhere in the database): |
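One common approach is to generate the per-column UPDATE statements from information_schema, review them, then execute them. A sketch ('old text' / 'new text' are placeholders):

```sql
-- Emits one UPDATE ... REPLACE(...) statement per text-typed column in the
-- current schema. Review the output before running it.
SELECT CONCAT('UPDATE `', table_name, '` SET `', column_name,
              '` = REPLACE(`', column_name, '`, ''old text'', ''new text'');') AS stmt
FROM information_schema.columns
WHERE table_schema = DATABASE()
  AND data_type IN ('char', 'varchar', 'text', 'tinytext', 'mediumtext', 'longtext');
```

This keeps the replacement inside the server (no dump-edit-reload cycle), and restricting to text types avoids corrupting numeric or date columns.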