[how to] oracle huge hash_area_size
- oracle huge hash_area_size
- Cannot initialize the data source object of OLE DB provider “Microsoft.ACE.OLEDB.12.0” for linked server
- Mysql 5.6 Failure
- After Rackspace server creation, PostgreSQL query planner doesn't work as expected
- Create Scheme that allows for fluidity between entities
- Database Tasks in C#
- What is the difference between leaf and non-leaf In SQL Server?
- Best way to perform SQL Server replication over multiple servers?
- MySQL backup InnoDB
- Collect CPU, reads and writes when replaying SQL Profiler Trace
- Getting timeouts and backup errors after mirroring and log shipping is setup
- Ms-Access 2007-2010 SQL - from single row record how to create multiple rows
- SQL Server 2008 R2 replication high delivery latency
- Refresh Postgres Linked Tables in MS Access
- How can I verify I'm using SSL to connect to mysql?
- Database design that handles data growth with time
- Strange characters in mysqlbinlog output
- High Buffer I/O SQL Server 2008 R2 causes?
- Where are PostgreSQL databases stored on my computer?
- QGIS PostGIS Authentication Failure with "trust" option
- Proper indexes or anything else to optimize
- Connecting to a SQL Server database from a Flash program
- Oracle Express edition on Ubuntu - control file missing
- Find the first gap in an aggregation of integers in SQL Server
- mysql cannot start because of init.d folder?
- One Big Database vs. Several Smaller Ones
- Is it possible to mysqldump a subset of a database required to reproduce a query?
oracle huge hash_area_size Posted: 15 Mar 2013 08:09 PM PDT I am wondering what the maximum setting for the hash_area_size parameter is. I'm trying to perform a huge join which ends up allocating 10GB of temp space and completes in an hour. 10GB is not that much by today's standards, and I have the RAM available in the system, so it should be possible to process that in memory. Is there a way to tell Oracle to do that? When I try to set hash_area_size to 10GB it complains, because the parameter seems to be limited to 32-bit integers.
Cannot initialize the data source object of OLE DB provider "Microsoft.ACE.OLEDB.12.0" for linked server Posted: 15 Mar 2013 02:25 PM PDT I'm trying to establish a linked server from SQL Server 2008 R2 to an Access database. The Access database is not password protected. As I have seen in other posts (specifically this one), this is usually due to the current user not having access to the Temp folder inside the NetworkService folder. Since we are running this on Windows Server 2008, the directory structure is quite different from what most are alluding to. I have allowed access to every temp folder in the "root:\Documents and Settings\" directory, to no avail. This is the procedure I am using to add the linked server: It creates the linked server with no problem, but I am not able to view the tables/views of the database. Likewise, my ASP.NET application cannot access it either. I have tried both ACE and JET (64-bit, by installing the Data Connectivity Components for Office), and neither works. I have also tried configuring the ACE provider with "Dynamic Parameter" and "Allow InProcess" set to true. Additionally, I tried upping the memory available to the MSSQLSERVER service by adding "-g512;" in front of the "startup parameters" string in SQL Server Configuration Manager, to rule out memory issues. If anyone could shed some light on this, that would be fantastic! Thanks!
Mysql 5.6 Failure Posted: 15 Mar 2013 08:25 PM PDT I have the following in I check the error file to find this information: I installed using the rpm package from the MySQL web site. What went wrong?
After Rackspace server creation, PostgreSQL query planner doesn't work as expected Posted: 15 Mar 2013 06:32 PM PDT We created an image of one of our database servers in Rackspace. Then we created a new server using that image, expecting things to work. However, the index performance we have seen seems to be bad, and even the query plans look different when comparing the two servers. Is there something related to image restore that would cause this behavior? If so, what might need to be done to fix the issue? Should I look somewhere else? Thanks very much! I can provide more information if needed, so let me know what would be useful.
Create Scheme that allows for fluidity between entities Posted: 15 Mar 2013 01:16 PM PDT We are a small, rapidly evolving company, and we recently investigated switching databases from SQL Server to a graph database. We like the graph approach because of the fluidity and openness it offers. For instance, with a graph DB you can add or remove relationships quite easily, whereas in an RDBMS it is much more difficult. Is it possible to have some or all of that fluidity in an RDBMS like SQL Server?
Database Tasks in C# Posted: 15 Mar 2013 07:02 PM PDT We have two database environments, a "Production" database and a "Reporting" database, located on different physical Windows Server 2003 boxes. The reporting DB is derived from the production database and contains some modified table schemas and additional "pre-cooked" data tables which would be too costly to otherwise process on the fly. What I had originally created was a database script (MS SQL) on our reporting server to grab the live production database at 1 AM and restore it in our reporting environment by running all the scripts to get it up and going, such as the table restores and the modification of the data. The problem with this approach is that I quickly began to realize a) how long it takes (there is no option of putting a 'modifieddate' on any production records) and b) how a junior DB developer (me) should probably not dive into this stuff yet, because he's more of a programmer :). Would it be 'best practice' to create a Windows service which basically runs the restore at 1 AM and sends an email if the process failed? I'm not sure whether writing a C# component will allow me the level of granularity that MS SQL scripts allow. So is using a programming language to accomplish this task nuts, and should it all be done in the database? How would you recommend I run the syncing of the two environments? Let me know what you think!
What is the difference between leaf and non-leaf In SQL Server? Posted: 15 Mar 2013 08:28 PM PDT I've been running some index usage reports, and I'm trying to get a definition of leaf and non-leaf. There seem to be both leaf and non-leaf Inserts, Updates, Deletes, Page Merges, and Page Allocations. I really don't know what these mean, or whether one is better than the other. If someone could give a simple definition of each, and also explain why leaf versus non-leaf matters, it would be appreciated!
Best way to perform SQL Server replication over multiple servers? Posted: 15 Mar 2013 12:00 PM PDT My current setup requires me to use three separate SQL Server 2012 instances (installed on 3 separate machines, separated geographically) and have the data replicated across all 3. Any changes made in the 7 tables within my database need to be replicated based on where the change was made; for example, if I add a record on Server A, it needs to be added on Server B and Server C. If a record is added on Server B, it needs to be added on Server A and Server C. Same thing with Server C. From what I've read, bidirectional transactional replication seems to be the best avenue for me to use in SQL Server 2012 Standard. The issue I'm having is that no one has a very good guide to bidirectional transactional replication that really explains much. I think I've pored over Hilary Cotter's and Microsoft's information a million times, and I still can't seem to get it to work. The fluff data they provide in examples just isn't practical, and no one explains how to do it between two physical machines; they assume you're using the same machine and just replicating to another instance. Am I doing something wrong, or am I going about this the wrong way?
MySQL backup InnoDB Posted: 15 Mar 2013 10:34 AM PDT I have a VoIP server running 24x7. Even at off-peak hours at least 150 users are connected. My server runs MySQL with the InnoDB engine on the Windows 2008 platform. I'd like to take at least two full database backups without shutting down my service. According to Peter Zaitsev, the founder of Percona, mysqldump --single-transaction is not always good (read here if you are interested). As I'm not a DBA, I'd like to know which would be the best backup solution in my scenario. Thanks,
Collect CPU, reads and writes when replaying SQL Profiler Trace Posted: 15 Mar 2013 09:48 AM PDT I'm trying to replay a trace on a test environment with SQL Profiler in order to benchmark a database prior to making some performance tweaks. I intend to replay the same trace at various stages to compare the stats and measure the improvements. I've been following the post at keepitsimpleandfast, which seems good; however, I'm not getting the CPU, Reads and Writes data saved when I replay the trace in step 6. I had added the CPU, Reads and Writes columns to the events when I set up the TSQL_Replay template for the original trace, but I don't have any options in Replay Configuration to select events or columns. Strangely, when I run a separate trace first and then replay the original trace, nothing gets traced at all! The events are different, though: instead of RPC:Completed I get Replay Result Set Event and Replay Result Row Event; would this make a difference? Does anyone know either how to amend the columns logged when you replay a trace, or how to set up another trace to trace the replay?
Getting timeouts and backup errors after mirroring and log shipping is setup Posted: 15 Mar 2013 01:00 PM PDT The OS is Windows Server 2003 32-bit. SQL Server is SQL Server 2008 SP3 32-bit. Today I finished configuring database mirroring and log shipping for 3 databases on this SQL instance and started receiving timeouts (connection and query timeouts from our applications) and log backup errors (the log backup is performed by the log shipping executable). We never had these kinds of issues before, when database mirroring was not active and log backups were performed via our custom SSIS package instead of log shipping (previously the log backup was set up for only 1 database every 15 minutes; now it is for 3 databases every 5 minutes). More details: 1) Connection timeout from one of our applications: 2) Query timeout from another of our applications: 3) Log backup failure (all of these are created at the same time in the Event log)
These errors are not that frequent, maybe 5-6 times a day; I think most are error messages related to the failed log backup. How can I better troubleshoot these issues? Is there a way to see the detailed memory situation for SQL Server and trace exactly which queries/connections are timing out?
Ms-Access 2007-2010 SQL - from single row record how to create multiple rows Posted: 15 Mar 2013 01:39 PM PDT I need help updating the table below using SQL in an MS Access 2007-2010 database. I have a table called table1 with the following entries:

table1:
+------+------+-------------+
| col1 | col2 | col3        |
+------+------+-------------+
| A1   | abcd | 123;456;789 |
| B2   | efgh | 246;369     |
| C3   | ijk  |             |
+------+------+-------------+

Using SQL, I want to transform the table (specifically, split the multiple values in column col3 into individual rows) as shown below:

+------+------+------+
| col1 | col2 | col3 |
+------+------+------+
| A1   | abcd | 123  |
| A1   | abcd | 456  |
| A1   | abcd | 789  |
| B2   | efgh | 246  |
| B2   | efgh | 369  |
| C3   | ijk  |      |
+------+------+------+

How do I do it? I have tried various forums but could not come up with an exact solution so far. Appreciate your help. Thanks, Jak.
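Access SQL has no built-in string splitter, so answers to questions like this usually lean on VBA or a numbers table. As a hedged illustration of just the target result set, here is a sketch using Python's sqlite3 and a recursive CTE; the dialect differs from Access, so treat it as a description of the transformation, not as runnable Access SQL.

```python
import sqlite3

# Split the semicolon-delimited col3 of table1 into one row per value,
# reproducing the "after" table from the question. SQLite's recursive
# CTE peels one item off the front of the remaining string each step.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE table1 (col1 TEXT, col2 TEXT, col3 TEXT);
    INSERT INTO table1 VALUES
        ('A1', 'abcd', '123;456;789'),
        ('B2', 'efgh', '246;369'),
        ('C3', 'ijk',  NULL);
""")
rows = con.execute("""
    WITH RECURSIVE split(col1, col2, item, rest) AS (
        -- seed: append a trailing ';' so every item ends with one
        SELECT col1, col2, NULL, col3 || ';'
        FROM table1 WHERE col3 IS NOT NULL
        UNION ALL
        SELECT col1, col2,
               substr(rest, 1, instr(rest, ';') - 1),
               substr(rest, instr(rest, ';') + 1)
        FROM split WHERE rest <> ''
    )
    SELECT col1, col2, item FROM split WHERE item IS NOT NULL
    UNION ALL
    SELECT col1, col2, col3 FROM table1 WHERE col3 IS NULL
    ORDER BY col1, item
""").fetchall()
for r in rows:
    print(r)
```

The empty-col3 row (C3) is carried through unchanged, matching the desired output in the question.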
SQL Server 2008 R2 replication high delivery latency Posted: 15 Mar 2013 09:25 AM PDT I am seeing an unusually high delivery latency between our distributor and subscribers, and I do not understand why. In this configuration we have 3 SQL Servers using transactional push replication to replicate data from one master server to two reporting servers. We have 9 publications. The distribution agents for most publications show under 5ms, but one shows 2000+ms to both subscribers. The suspect publication has only 4 small articles (tables) that rarely, if ever, change. I've checked and each table has a primary key. I've also checked the @status parameter for each article according to the MS KB: The distribution agent may experience high latency when you configure transactional replication with articles that are configured not to replicate changes as parameterized statements. I'm tempted to start dropping articles to find out if one particular table is the culprit. Does anyone have any suggestions as to what I can look at?
Refresh Postgres Linked Tables in MS Access Posted: 15 Mar 2013 10:42 AM PDT Does anybody know a VBA procedure that automatically links and refreshes Postgres linked tables (via ODBC) in MS Access 2010?
How can I verify I'm using SSL to connect to mysql? Posted: 15 Mar 2013 02:23 PM PDT I have configured my server to allow SSL, and have modified my client ~/.my.cnf so I use SSL: When I log in with my client and view the status, it lists a cipher on the SSL line: Without installing something like Wireshark to verify that the connection is secure, can I assume that I'm connecting via SSL based on this information?
Database design that handles data growth with time Posted: 15 Mar 2013 12:43 PM PDT I have a big table that stores video rental data with the following columns:
Assuming every day there are thousands of inserts into this table, within a year I could have a million rows of data. The search on the data records involves range queries on several columns and will always contain a To maintain this table, I could perform a query to transfer data with Additional note: I have tested a few composite indexes, and the query time can range from a few seconds to 50s if the row count gets to 5 million. Range queries can be hard to optimize, so I am looking for other options, like keeping the table at a manageable size.
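The "transfer older data out of the big table" idea the question sketches can be illustrated with a small Python/sqlite3 script. The table and column names (rentals, rent_date) and the cutoff date are invented for the example, since the original column list was not preserved in this digest.

```python
import sqlite3

# Archive-then-delete sketch: move rows older than a cutoff into an
# archive table inside one transaction, so a failure can't lose rows.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE rentals         (id INTEGER PRIMARY KEY, title TEXT, rent_date TEXT);
    CREATE TABLE rentals_archive (id INTEGER PRIMARY KEY, title TEXT, rent_date TEXT);
    INSERT INTO rentals VALUES
        (1, 'Alien',  '2012-01-05'),
        (2, 'Amelie', '2012-11-20'),
        (3, 'Argo',   '2013-03-01');
""")
cutoff = "2013-01-01"
with con:  # one transaction: copy, then delete
    con.execute("INSERT INTO rentals_archive "
                "SELECT * FROM rentals WHERE rent_date < ?", (cutoff,))
    con.execute("DELETE FROM rentals WHERE rent_date < ?", (cutoff,))

live = con.execute("SELECT COUNT(*) FROM rentals").fetchone()[0]
archived = con.execute("SELECT COUNT(*) FROM rentals_archive").fetchone()[0]
print(live, archived)
```

The live table stays small enough for the range queries, while the archive keeps history; on a real MySQL/SQL Server setup, date-based partitioning serves the same goal without the manual copy.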
Strange characters in mysqlbinlog output Posted: 15 Mar 2013 08:26 AM PDT Has anyone experienced this? Data replicates fine but when output in mysqlbinlog there are hidden characters that break the input?
Thanks! Julie
High Buffer I/O SQL Server 2008 R2 causes? Posted: 15 Mar 2013 08:17 AM PDT I am trying to troubleshoot performance on a SQL Server that we have for our document management system, and today performance has hit an all-time low. Looking at the MDW I set up recently, I can see a huge spike in Buffer I/O. The combined SQL wait time is in the region of 1500-2000 ms, which is considerably higher than normal (around 500 ms), and a large proportion of this increase is Buffer I/O. Not being a DBA, and doing this out of necessity, I am massively out of my depth. Is there any way to tell why there has been a large increase in Buffer I/O? Is this likely due to a query pulling large amounts of data, or an incorrect configuration of SQL Server memory limits? Is there anything specific (or general, for that matter) I should be looking at, or DMVs I can query, to help troubleshoot? The server is 2008 R2, as is SQL Server. It is a VM running two vCPUs and 8GB RAM, with the disks hosted on a separate array on a SAN. EDIT: I think I may have misread this graph on the server. I actually think it is Lock Wait that has increased dramatically, not Buffer I/O. The colours are far too similar for me to pick out at a glance :(
Where are PostgreSQL databases stored on my computer? Posted: 15 Mar 2013 04:40 PM PDT This might be a stupid question, but I'm really new to this. I'm making a Postgres database on the postgres server, which I started up with this command: And then I ran the command: Which supposedly created the database. But where is this database? Where can I find it on my computer?
QGIS PostGIS Authentication Failure with "trust" option Posted: 15 Mar 2013 09:37 AM PDT I'm trying to set up a PostGIS db using PostgreSQL 9.1 on Ubuntu 12.04. I've set up the database, created a user account "jay" with no password, added spatial functionality following section 2.5 here, and set my pg_hba.conf for I can connect to the database using pgAdmin III, requiring no password, but the connection fails when I try to connect via PostGIS or with QuantumGIS. Here's a screencap of the connection window in QuantumGIS: Looking into the problem a bit, I came across this post, which led me to try connecting via the terminal using I then tried setting a password for Any ideas?
Proper indexes or anything else to optimize Posted: 15 Mar 2013 08:40 AM PDT I need help to optimize the query to avoid using Table
Connecting to a SQL Server database from a Flash program Posted: 15 Mar 2013 10:40 AM PDT I currently have the ability to utilize Microsoft SQL Server 2012. I am developing a project with Adobe Flash Builder 4.7. If I link my database with Adobe Flash Builder, are there any additional steps I must take to make the database live, or, as long as my computer is running, will this database be accessible from any device that is utilizing it? In other words, is this a LAN-only system, or does it automatically make itself available to the programs I link to it?
Oracle Express edition on Ubuntu - control file missing Posted: 15 Mar 2013 01:40 PM PDT I have installed Oracle Express Edition on Ubuntu as mentioned here. I am facing issues when I try to create a sample table. Started Oracle Started sqlplus Executed the CREATE command After some research on the web, I tried to shut down and restart Oracle: Shutdown command Started the Oracle instance I realized that the control file is missing at So I tried to create the control file as follows: Tried to create the sample table again So I tried to issue the following command What should be done next? I am clueless, as I am not a database guy. Note: Output of
Find the first gap in an aggregation of integers in SQL Server Posted: 15 Mar 2013 02:03 PM PDT Let's say I have a table called dbo.GroupAssignment.

GroupID | Rank
--------+-----
   1    |  1
   1    |  2
   1    |  3
   2    |  1
   2    |  3
   2    |  4
   3    |  2
   3    |  4
   3    |  5

The PK is (GroupID, Rank). Normally, the ranks within a group are a contiguous sequence of integers starting from 1. But it's possible for a GroupAssignment to get removed, leaving a gap. When a new assignment is made for the group, I want to fill the first gap available. So, how could I calculate this in SQL Server?
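One common way to compute the first free rank per group: use 1 if rank 1 is absent, otherwise take the smallest rank whose successor is missing, plus one. Here is a sketch using Python's sqlite3 with the question's sample data; the same query shape should translate to T-SQL, though that translation is untested here ("Rank" is quoted since it is a keyword in newer engines).

```python
import sqlite3

# First-gap query over the dbo.GroupAssignment sample from the question.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE GroupAssignment (GroupID INTEGER, "Rank" INTEGER,
                                  PRIMARY KEY (GroupID, "Rank"));
    INSERT INTO GroupAssignment VALUES
        (1,1),(1,2),(1,3), (2,1),(2,3),(2,4), (3,2),(3,4),(3,5);
""")
first_gap = """
    SELECT CASE
        -- gap at the very start: rank 1 itself is free
        WHEN NOT EXISTS (SELECT 1 FROM GroupAssignment
                         WHERE GroupID = :g AND "Rank" = 1) THEN 1
        -- otherwise: smallest rank whose successor is missing, plus one
        ELSE (SELECT MIN("Rank") + 1 FROM GroupAssignment AS ga
              WHERE ga.GroupID = :g
                AND ga."Rank" + 1 NOT IN (SELECT "Rank" FROM GroupAssignment
                                          WHERE GroupID = :g))
    END
"""
gaps = {g: con.execute(first_gap, {"g": g}).fetchone()[0] for g in (1, 2, 3)}
print(gaps)
```

For the sample data this yields 4 for group 1 (no gap, so the next slot), 2 for group 2, and 1 for group 3.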
mysql cannot start because of init.d folder? Posted: 15 Mar 2013 08:27 AM PDT After I set up MySQL 5.5, I changed the data dir to another folder and modified the related config in my.cnf. This is quite simple, but MySQL cannot start anymore, even after reinstalling or leaving the data dir unchanged. The error log has no useful information, so I cannot find the reason.
But if I use the command as the start script, MySQL starts normally.
Stranger still, if I use /usr/share/mysql/mysql.server start instead of /etc/init.d/mysql start, MySQL also starts without error, even though these two files are the same. So in the end I had to use cp -l /usr/share/mysql/mysql.server /etc/init.d/mysql to fix the problem. I really want to know why, and whether there is a better solution!
One Big Database vs. Several Smaller Ones Posted: 15 Mar 2013 08:54 AM PDT We have a situation where we can (A) deploy instances of an application in one MySQL database using table prefixing, or (B) use a different MySQL database for each instance of the application, e.g.: Setup "A": The end result is one large DB with many tables. Setup "B": The end result is many databases, each with some tables. All things being equal (e.g., amount of data, number of app instances, etc.), what are the pros and cons of going with either approach? What would be detrimental to database performance and maintenance? The application is PHP 5 based, runs over Apache 2.x, and we're running MySQL 5.x. Many thanks for your time and thoughts!
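The two layouts can be mocked up in a few lines with SQLite standing in for MySQL: setup "A" as prefixed tables in one database, setup "B" as one schema per attached database. This is only a toy illustration of the addressing difference, not a performance comparison.

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Setup "A": one database, per-instance table prefixes.
con.execute("CREATE TABLE app1_users (name TEXT)")
con.execute("CREATE TABLE app2_users (name TEXT)")

# Setup "B": a separate database per instance, same table names in each.
con.execute("ATTACH DATABASE ':memory:' AS app1")
con.execute("ATTACH DATABASE ':memory:' AS app2")
con.execute("CREATE TABLE app1.users (name TEXT)")
con.execute("CREATE TABLE app2.users (name TEXT)")

con.execute("INSERT INTO app1_users VALUES ('alice')")
con.execute("INSERT INTO app1.users VALUES ('alice')")

# Same data, different addressing: a name prefix in "A", a schema in "B".
a = con.execute("SELECT name FROM app1_users").fetchall()
b = con.execute("SELECT name FROM app1.users").fetchall()
print(a, b)
```

In MySQL terms, setup "B" makes per-instance backup, DROP DATABASE, and GRANT statements single operations on one database, while setup "A" keeps everything in a single dump; that trade-off is usually where the pros-and-cons discussion starts.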
Is it possible to mysqldump a subset of a database required to reproduce a query? Posted: 15 Mar 2013 01:22 PM PDT

Background: I would like to provide the subset of my database required to reproduce a query.

Question: Is there a way that I can incorporate this select statement into a script that dumps the queried data into a new database, such that the database could be installed on a new MySQL server, and the statement would work with the new database? The new database should not contain records in addition to those that have been used in the query.

Update: For clarification, I am not interested in a CSV dump of query results. What I need is to dump the database subset so that it can be installed on another machine, and then the query itself can be reproducible (and modifiable with respect to the same dataset).

Example: For example, my analysis might query a subset of data that requires records from multiple (in this example, 3) tables:
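As a rough illustration of what such a subset dump would contain, here is a sketch with SQLite standing in for mysqldump: it copies only the rows the query touches into a fresh database file, where the query then reproduces the same result. The table names (customers, orders) and the filter are invented for the example; with real mysqldump the analogous move is the --where option per table.

```python
import os
import sqlite3
import tempfile

# Source database with more rows than the query needs.
src = sqlite3.connect(":memory:")
src.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, cust_id INTEGER, total REAL);
    INSERT INTO customers VALUES (1, 'ann'), (2, 'bob'), (3, 'cid');
    INSERT INTO orders VALUES (10, 1, 5.0), (11, 1, 7.5), (12, 3, 2.0);
""")

# Copy only the rows the query touches into a new database file.
subset_path = os.path.join(tempfile.mkdtemp(), "subset.db")
src.execute("ATTACH DATABASE ? AS subset", (subset_path,))
src.executescript("""
    CREATE TABLE subset.customers AS
        SELECT * FROM customers
        WHERE id IN (SELECT cust_id FROM orders WHERE total > 4);
    CREATE TABLE subset.orders AS
        SELECT * FROM orders WHERE total > 4;
""")
src.commit()
src.close()

# The new file reproduces the query with no extra rows.
sub = sqlite3.connect(subset_path)
result = sub.execute("""
    SELECT c.name, SUM(o.total)
    FROM customers c JOIN orders o ON o.cust_id = c.id
    GROUP BY c.name
""").fetchall()
print(result)
```

The key property the question asks for holds: the subset file contains only the rows used by the query, and the unmodified query runs against it.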
You are subscribed to email updates from Recent Questions - Database Administrators - Stack Exchange.