[how to] LOAD DATA INFILE on a MyISAM table blocks user read-queries, can we get around this?
- LOAD DATA INFILE on a MyISAM table blocks user read-queries, can we get around this?
- Get the count of items related to nested category in sql
- SQL Server update query on linked server causing remote scan
- Synchronize creation of rows between tables
- MySQL 5.6 OS X 10.8: how tell if my.cnf read?
- Minimize locking on UPDATE with multiple WHERE columns
- SQL Server 2005 encryption - What Grants should I give
- SQL Server 2005 encrypted database works on new PC without restore keys!
- Arbitrarily ordering records in a table
- connecting to oracle database on Windows
- Permission denied in Postgres
- MongoDB: Aggregation workaround by helper collection
- Search within same ID across several rows
- MySQL: Can't find .ibd files after turning on innodb_file_per_table
- Adding a bool column; tracking untouched vs. false
- would that design be good, reasonable, stable or not?
- Migrate SQL Server database to multiple files (shrinkfile emptyfile vs log shipping)
- Handling concurrent access to a key table without deadlocks in SQL Server
- Finding out the hosts blocked by mysql server
- Materialized view for a yet-to-be-defined DBlink
- How to influence Oracle backup output size?
- Delete word, its meanings, its meaning's example sentences from DB
- MySQL LOAD DATA INFILE slows by 80% after a few gigs of input with InnoDB engine
- How can I optimize this query and support multiple SKUs?
- Query to find and replace text in all tables and fields of a mysql db
- PostgreSQL: The database cluster initialization failed
LOAD DATA INFILE on a MyISAM table blocks user read-queries, can we get around this? Posted: 17 Mar 2013 08:40 PM PDT I've got a large (100GB) MyISAM reference table. Loading it from scratch with LOAD DATA INFILE goes well (about 4 hours). But we have roughly 1GB of new data each day that we want to apply in a batch process, also using LOAD DATA INFILE. Unfortunately, LOAD DATA INFILE blocks users from querying the table while it runs. Is there a way I can get around this?
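One workaround, sketched here as a suggestion rather than anything from the original post, is the CONCURRENT modifier: on a MyISAM table with no free blocks in its middle (no deleted rows), it lets SELECTs proceed while the load runs, at the cost of a somewhat slower load. File and table names below are hypothetical.

    -- Allows concurrent reads during the load on a hole-free MyISAM table
    LOAD DATA CONCURRENT INFILE '/data/daily_delta.csv'
    INTO TABLE reference_data
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\n';

An alternative is to load into a copy of the table and swap it in with an atomic RENAME TABLE, though with a 100GB table the copy itself becomes the expensive step.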
Get the count of items related to nested category in sql Posted: 17 Mar 2013 06:30 PM PDT I have two tables: categories and items. I have stored categories using the nested set structure, and categories have items. Items can only be added to the leaf nodes of a root category; in this example, items can be added to the categories Bajaj, Automobiles and Amateur Art. Let's say there are 2 items inside Bajaj, 5 items inside Automobiles, and 2 inside Amateur Art. For root-level categories I want to display the total item count under each root. How can I do this? Here is the SQL dump with some sample data to work with; root nodes have NULL in the parent_id field. I was able to fetch the counts for roots, and that query works for root-level categories. Now when the user clicks on a root-level category, I want to do the same for its subcategories; e.g. when somebody clicks on Vehicles I should get the counts under it. What I tried returned an empty result set. What is wrong with my query?
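A minimal sketch of the usual nested-set counting query, assuming categories(id, name, lft, rgt, parent_id) and items(id, category_id); all of those names are assumptions:

    -- Count items under each root by joining every descendant node to its root
    SELECT root.id, root.name, COUNT(i.id) AS item_count
    FROM categories AS root
    JOIN categories AS node
      ON node.lft BETWEEN root.lft AND root.rgt   -- node is root or a descendant
    LEFT JOIN items AS i
      ON i.category_id = node.id
    WHERE root.parent_id IS NULL                  -- root-level categories only
    GROUP BY root.id, root.name;

For the click-through case, replace WHERE root.parent_id IS NULL with WHERE root.id = the clicked category's id; an empty result usually means the lft/rgt bounds in the subtree join excluded everything, which is worth checking first.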
SQL Server update query on linked server causing remote scan Posted: 17 Mar 2013 04:58 PM PDT I have a SQL Server 2012 instance set up as a linked server on a SQL Server 2008 server. The following query executes in less than 1 second:
However, if I run this query to do a remote update, it takes 24 seconds and 2 rows are affected. The table joins are identical in both queries, so why is it using a Remote Scan for the second query, and how do I fix this?
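A Remote Scan typically means the local server is pulling the remote table across and filtering it locally. One hedged sketch of the usual fix, with hypothetical names, is an updatable OPENQUERY, which pushes the whole statement to the remote server:

    -- The filter runs remotely; only matching rows come back to be updated
    UPDATE OPENQUERY([LinkedSrv],
        'SELECT OrderStatus FROM SalesDb.dbo.Orders WHERE OrderID = 42')
    SET OrderStatus = 'Shipped';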
Synchronize creation of rows between tables Posted: 17 Mar 2013 04:47 PM PDT Let's say I have two tables, and whenever a row is created in one of them I need a corresponding row created in the other.
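The question body is brief, so this is only a generic sketch of the usual approach, assuming MySQL/InnoDB and hypothetical tables orders and order_audit: an AFTER INSERT trigger creates the companion row inside the same transaction, so the two tables cannot drift apart.

    CREATE TRIGGER trg_orders_after_insert
    AFTER INSERT ON orders
    FOR EACH ROW
      -- runs atomically with the triggering INSERT on a transactional engine
      INSERT INTO order_audit (order_id, created_at) VALUES (NEW.id, NOW());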
MySQL 5.6 OS X 10.8: how tell if my.cnf read? Posted: 17 Mar 2013 07:41 PM PDT How can I tell whether the my.cnf I edited is actually being read by MySQL 5.6 on OS X 10.8? I believe I know which installation the server is running from, but I can't confirm that my configuration file is being picked up.
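One low-tech check, assuming you can restart the server: put an unmistakable value into the my.cnf you believe is being read, restart mysqld, and compare.

    -- In the my.cnf under test:
    --   [mysqld]
    --   max_connections = 123
    SHOW VARIABLES LIKE 'max_connections';
    -- A reported value of 123 proves that this file is being read.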
Minimize locking on UPDATE with multiple WHERE columns Posted: 17 Mar 2013 04:21 PM PDT I have a problematic UPDATE that filters on several WHERE columns and is causing a lock timeout; I'd like to minimize the locking it performs.
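Since the query itself isn't shown, this is only a generic sketch with hypothetical names: InnoDB locks every row it examines, so an index matching the UPDATE's WHERE columns keeps the locks confined to the rows actually changed.

    CREATE INDEX idx_jobs_status_owner ON jobs (status, owner_id);

    -- With the index, only rows matching both predicates are examined and locked
    UPDATE jobs
    SET status = 'done'
    WHERE status = 'pending'
      AND owner_id = 42;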
SQL Server 2005 encryption - What Grants should I give Posted: 17 Mar 2013 04:56 PM PDT I use Mixed Mode Authentication and I want my database users (members of the public role) to have the ability to execute just the procedures that encrypt/decrypt data (using a certificate and symmetric keys). I have already given them GRANT EXECUTE on the procedures. What other grants should I apply? Thanks!
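A sketch of the grants this usually requires, with hypothetical key, certificate and user names: opening a symmetric key protected by a certificate generally needs VIEW DEFINITION on the key plus CONTROL on the certificate, on top of EXECUTE on the procedures.

    GRANT VIEW DEFINITION ON SYMMETRIC KEY::MySymKey TO [AppUser];
    GRANT CONTROL ON CERTIFICATE::MyCert TO [AppUser];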
SQL Server 2005 encrypted database works on new PC without restore keys! Posted: 17 Mar 2013 02:15 PM PDT Something tells me that this was not normal. Let me take it from the beginning. In a SQL Server Express 2005 database I needed to encrypt one column, so I created procedures to encrypt and decrypt it, and everything worked OK. I then took the database (SQL Server 2005, Mixed Mode) and restored it on a new PC running SQL Server 2008 Express. WITHOUT restoring the key and certificate, I launched my application, which uses the stored procedures to encrypt/decrypt, and SURPRISINGLY everything worked in the new database! I could decrypt previously encrypted data successfully. Is this normal? What is the use of backing up the key and the certificates then? Thanks in advance.
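One plausible explanation, assuming the procedures open the database master key by password: in that case everything needed for decryption lives inside the database itself and travels with the backup, so nothing extra has to be restored. The key, certificate and table names below are hypothetical.

    OPEN MASTER KEY DECRYPTION BY PASSWORD = 'the_dmk_password';
    OPEN SYMMETRIC KEY MySymKey DECRYPTION BY CERTIFICATE MyCert;
    SELECT CONVERT(varchar(256), DECRYPTBYKEY(SecretCol)) FROM dbo.SecretTable;
    CLOSE SYMMETRIC KEY MySymKey;
    CLOSE MASTER KEY;

Backing up the key and certificate still matters when the master key is protected only by the service master key, which does not travel with the database.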
Arbitrarily ordering records in a table Posted: 17 Mar 2013 02:12 PM PDT A common need when using a database is to access records in order. For example, if I have a blog, I want to be able to reorder my blog posts in arbitrary order. These entries often have lots of relationships, so a relational database seems to make sense. The common solution that I have seen is to add an integer order column and sort the rows by it. However, this seems clumsy:
It's easy to imagine situations arising, such as duplicate or conflicting order values, and these could happen fairly easily for a number of reasons. This is the approach that applications like Joomla take. You could argue that the interface here is bad, and that instead of humans directly editing numbers they should use arrows or drag-and-drop, and you'd probably be right. But behind the scenes, the same thing is happening. Some people have proposed using a decimal to store order, so that you can use "2.5" to insert a record in between the records at order 2 and 3. And while that helps a little, it's arguably even messier, because you can end up with weird decimals (where do you stop? 2.75? 2.875? 2.8125?). Is there a better way to store order in a table?
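One refinement of the integer approach, sketched with hypothetical names, is to leave gaps between position values so that a typical insert or move touches only the row in question, and a full renumbering happens rarely:

    CREATE TABLE posts (
        id       INT PRIMARY KEY,
        title    VARCHAR(200) NOT NULL,
        position INT NOT NULL
    );

    INSERT INTO posts VALUES (1, 'first', 1000), (2, 'second', 2000), (3, 'third', 3000);

    -- Insert between 'first' and 'second' without renumbering anything
    INSERT INTO posts VALUES (4, 'in between', 1500);

    SELECT title FROM posts ORDER BY position;

This has the same failure mode as the decimal idea (gaps do run out), but renumbering becomes an occasional maintenance step instead of a per-edit event.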
connecting to oracle database on Windows Posted: 17 Mar 2013 07:41 AM PDT I have two machines on my network: pc-1 running Windows XP and pc-2 running Windows Vista.
Now I have loaded Oracle 11gR2 on pc-2 (Vista) and the listener is running. I am trying to connect to the Oracle database from pc-1 (XP). pc-2 is reachable from pc-1, and I have tried a few things without success.
Can anyone please spell out in detail what I should do? I am trying this for the first time, so I have no idea what is going wrong.
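Since the listener settings did not survive above, only a hedged sketch is possible; it assumes the default port 1521 and a service name of orcl, both of which should be checked with lsnrctl status on pc-2. Also worth ruling out: the Vista firewall blocking port 1521.

    -- An EZCONNECT test from pc-1 (user/password/service are placeholders):
    --     sqlplus system/password@//pc-2:1521/orcl
    --
    -- Or the equivalent tnsnames.ora entry on pc-1:
    --     ORCL_PC2 =
    --       (DESCRIPTION =
    --         (ADDRESS = (PROTOCOL = TCP)(HOST = pc-2)(PORT = 1521))
    --         (CONNECT_DATA = (SERVICE_NAME = orcl))
    --       )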
Permission denied in Postgres Posted: 17 Mar 2013 11:25 AM PDT I added a new user, and then through the PgAdminIII GUI I added a database. Then I tried to give the new user privileges on that database: I logged in to psql as administrator and ran a GRANT query. But when I connect as that user I still get "permission denied". Am I missing something? Can you help me solve this? Thank you very much.
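A common cause, shown here as a hedged sketch with assumed names: GRANT ... ON DATABASE only covers connecting and creating schemas, not the objects inside, so table access needs its own grants.

    GRANT CONNECT ON DATABASE mydb TO myuser;
    GRANT USAGE ON SCHEMA public TO myuser;
    GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO myuser;
    -- sequences back serial columns, so INSERTs usually need this too
    GRANT USAGE, SELECT ON ALL SEQUENCES IN SCHEMA public TO myuser;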
MongoDB: Aggregation workaround by helper collection Posted: 17 Mar 2013 06:55 AM PDT Since I regularly run up against MongoDB's document size limit and also the aggregation framework's >16M group limitations, depending on the amount of data I need to process, I am looking for a simple workaround. The solution must also fit single nodes (it must not require multiple mongos). Said in a few words, it's an aggregation for counting value combinations across two fields.
An example could make the task easier to understand: let's say there is a collection with two fields, lastname and surname.
=> this could also be solved with a distinct count and/or map-reduce, etc., and no limitation is exceeded; however, that amounts to a slow collection scan, and the aggregation framework does the task much faster.
=> here it is very comfortable to work with MongoDB's aggregation framework, which handles this task quickly and properly. However, trying the task with lots of unique values in a 15GB collection runs into the 16M limitation. The first workaround was successful, but it is only a quick'n'dirty, non-scalable trick: instead of processing all the data and running $group afterwards, I filter ($match) for each $group item. With the example from above, I look up all distinct last names, run one aggregation per last name, and so reduce the pipeline data before aggregation starts. This avoids the 16M limit issue as long as the amount of data for a single $group item (last name) is not too much; still, it is only a quick'n'dirty workaround. Another idea is a helper collection for the COUNTs: I could iterate over each input document and write into another collection one document per surname, with one counter field per last name, e.g. surname=John, count_with_lastname.smith=10, count_with_lastname.brown=2, ... Building this helper collection is very slow, isn't it? Incrementing by +1 means one update/write per input document, and I fear this could take forever. Is there a best practice or are there tips for solving this task (without multiple nodes and without Hadoop or alternatives)? Please let me know.
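For reference, a mongo-shell sketch of the match-then-group workaround described above, with a hypothetical collection name; each per-lastname run stays far below the 16M limit:

    // one small aggregation per distinct last name
    db.people.distinct("lastname").forEach(function (ln) {
        var res = db.people.aggregate([
            { $match: { lastname: ln } },                      // shrink the input first
            { $group: { _id: "$surname", n: { $sum: 1 } } }    // count surnames within it
        ]);
        printjson(res);   // or write the counts into a helper collection here
    });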
Search within same ID across several rows Posted: 17 Mar 2013 06:42 AM PDT My table has the following structure: each row ties a sentence_id to a single value, so one sentence spans several rows. I have a list of sentence_ids; let's say I have the following sentence_ids: 12455 and 15333. The query should return the values those sentences have in common. How can I achieve this with MySQL?
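A guess at the intended query, assuming a structure like sentence_words(sentence_id, word): keep only the words that appear under every listed sentence_id.

    SELECT word
    FROM sentence_words
    WHERE sentence_id IN (12455, 15333)
    GROUP BY word
    HAVING COUNT(DISTINCT sentence_id) = 2;  -- 2 = number of ids in the IN list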
MySQL: Can't find .ibd files after turning on innodb_file_per_table Posted: 17 Mar 2013 06:27 AM PDT This is a question about MySQL. Let me give the specifics of my setup first.
In order to manage limited hard drive space on my laptop, I'd like to turn on the innodb_file_per_table option.
I followed the steps and there appeared to be no problem. Various checks that I did within Workbench all indicate that the option is enabled, yet I can't find any .ibd files on disk. Please let me know if I made a mistake in turning innodb_file_per_table on. Additional info: after some further investigation, encouraged by Spencer7593's response, it seems the setting only applies to tables created after it was enabled.
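A likely explanation, offered as an assumption about this setup: innodb_file_per_table only affects tables created or rebuilt after it is enabled, so pre-existing tables stay inside ibdata1 and never get .ibd files until rebuilt.

    -- Rebuilding a table moves it into its own .ibd file (name assumed)
    ALTER TABLE mydb.mytable ENGINE = InnoDB;
    -- Note: ibdata1 itself never shrinks; reclaiming its space requires a full
    -- dump, removal of the data files, and a reload.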
Adding a bool column; tracking untouched vs. false Posted: 17 Mar 2013 03:05 PM PDT
When adding a string column, the meaning of "" is obvious: no data has been entered. But I'm adding a bool, so existing records will default to a value that does indicate something: FALSE. My first thought was to give the user portal, rather than a checkbox, a radio-button pair for this question. Upon creating a new record, or even returning to old records with false-falses, if they click Yes or No, that value is recorded, plus another bool indicates that this question was actually answered manually. That is: if the first bool is F but the second bool is T, then the first bool is not a 'default false'. Their hand-waving solution was to have records dated before today displayed as "unknown" (this was nixed, as existing records could not then manually be given a user-verified value of false). Should I use a faux-bool int and think of it in trinary? Or, back to using a separate tracking bit/bool, should I make a new table for that sort of data? Is this a common concept? Any naming conventions?
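The common trinary here is simply a nullable column, where NULL means "never answered"; no second tracking bool and no extra table. A sketch with hypothetical names (BIT in SQL Server, BOOLEAN elsewhere):

    ALTER TABLE survey_responses ADD opted_in BIT NULL;
    -- NULL -> untouched: existing rows stay NULL, so no false "false"
    -- 0    -> the user explicitly answered No
    -- 1    -> the user explicitly answered Yes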
would that design be good, reasonable, stable or not? Posted: 17 Mar 2013 03:16 PM PDT A company wants to control 4-5 different ASP.NET WebForms and MVC projects via one web panel and also one MS SQL database. Would that design be good, reasonable and stable, or not?
Migrate SQL Server database to multiple files (shrinkfile emptyfile vs log shipping) Posted: 17 Mar 2013 03:25 PM PDT I have a SQL Server 2008 database in production that we are moving onto a new server. The current database has a single ~400GB data file. To make this work, we will need to split that single file into several files on separate drives. My question: is there a way to back up and restore the database to the new server in a temp location, create the new files, EMPTY the temp file into them, and then move everything into place? Or are there any other options to get from Server A with one file to Server B with 4 files on different drives, with minimal downtime?
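A sketch of the restore-then-redistribute variant, with hypothetical file names and paths, to run on the restored copy on the new server:

    ALTER DATABASE MyDb ADD FILE (NAME = MyDb_data2, FILENAME = 'E:\Data\MyDb_2.ndf', SIZE = 100GB);
    ALTER DATABASE MyDb ADD FILE (NAME = MyDb_data3, FILENAME = 'F:\Data\MyDb_3.ndf', SIZE = 100GB);
    ALTER DATABASE MyDb ADD FILE (NAME = MyDb_data4, FILENAME = 'G:\Data\MyDb_4.ndf', SIZE = 100GB);

    -- Push pages out of the original file into the new ones
    DBCC SHRINKFILE (MyDb_data1, EMPTYFILE);

Two caveats: the primary data file can never be fully emptied (system allocation pages stay in it), and EMPTYFILE moves pages one at a time, so on 400GB it is slow; that is the trade-off to weigh against a log-shipping cutover.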
Handling concurrent access to a key table without deadlocks in SQL Server Posted: 17 Mar 2013 11:53 AM PDT I have a table that is used by a legacy application as a substitute for identity values. Each row in the table stores the last used ID for one key name. Occasionally the stored proc gets a deadlock; I believe I've built an appropriate error handler, but I'm interested to see whether this methodology works as I think it does, or whether I'm barking up the wrong tree here. I'm fairly certain there should be a way to access this table without any deadlocks at all. There is the table itself, the nonclustered index on its Name column, some sample data, and the stored procedure used to update the values stored in the table and return the next ID, with sample executions. EDIT: I've added a new index, since the existing index IX_tblIDs_Name is not being used by the SP; I assume the query processor is using the clustered index since it needs the value stored in LastID. Anyway, this new index IS used by the actual execution plan. EDIT #2: I've taken the advice that @AaronBertrand gave and modified the proc slightly. The general idea here is to refine the statement to eliminate unnecessary locking, and overall to make the SP more efficient. Since our code never adds a record to this table with 0 in LastID, we can take advantage of that fact.
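For reference, the deadlock-free shape this usually converges on is a single atomic UPDATE that increments and reads in one statement, leaving no read-then-write window for another session to interleave into. A sketch using the post's table, with an assumed Name column and a hypothetical key value:

    DECLARE @Name sysname = N'SomeKeyName',  -- hypothetical key name
            @NextID INT;

    -- increment and capture in one atomic statement
    UPDATE dbo.tblIDs
    SET @NextID = LastID = LastID + 1
    WHERE [Name] = @Name;

    SELECT @NextID AS NextID;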
Finding out the hosts blocked by mysql server Posted: 17 Mar 2013 09:42 AM PDT Can someone tell me how to list the hosts which are blocked by the MySQL server because they crossed the max_connect_errors limit? Is there any table in which the MySQL server keeps this data? I am using mysql-server-5.1.63.
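As far as I know, MySQL 5.1 keeps the blocked-host cache only in memory and exposes no table for it (the performance_schema.host_cache table arrived in 5.6), so the cache can only be cleared wholesale:

    FLUSH HOSTS;  -- empties the host cache, unblocking all blocked hosts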
Materialized view for a yet-to-be-defined DBlink Posted: 17 Mar 2013 07:42 AM PDT In our product we have a main application with an Oracle 11g DB and an optional module with another DB. We want the two to communicate through a dblink, and I was told to put the code for the communication in the main application. In practice, since the optional module may not actually exist at the time the main application is deployed, this means writing code against tables that don't exist yet but whose structure we know: the ones in the optional module. We can encapsulate the dblink inside materialized views in the main module and refrain from calling remote procedures from there; however, to my understanding, we can't actually define materialized views on non-existent dblinks. Perhaps we could define dummy materialized views, e.g. empty ones defined on dual, that get properly redefined once the optional module is installed and the necessary dblink is created, using a procedure that runs dynamic PL/SQL. What do you think about it? How would you achieve that? TIA Andrea. EDIT: currently I have to evaluate the possibility of having the communication code in the main application without the optional module already being deployed. So I'm really interested in hearing how I can create materialized views over yet-to-be-defined dblinks and whether my proposed solution is reasonable. Thanks.
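A sketch of the dummy-then-redefine idea, with all object names hypothetical: install a placeholder with the right column shape built on DUAL, then replace it via dynamic SQL once the dblink exists.

    -- At install time: an empty placeholder with the final column shape
    CREATE MATERIALIZED VIEW mv_remote_data AS
      SELECT CAST(NULL AS NUMBER)        AS id,
             CAST(NULL AS VARCHAR2(100)) AS name
      FROM dual WHERE 1 = 0;

    -- Later, after the optional module and its dblink are created:
    BEGIN
      EXECUTE IMMEDIATE 'DROP MATERIALIZED VIEW mv_remote_data';
      EXECUTE IMMEDIATE 'CREATE MATERIALIZED VIEW mv_remote_data AS
                         SELECT id, name FROM remote_table@optional_link';
    END;
    /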
How to influence Oracle backup output size? Posted: 17 Mar 2013 06:42 AM PDT I have two environments, one Prod and one Dev. Dev was created from Prod data and both have a similar database size. However, the backup output size varies greatly: Prod is about 300 MB while Dev is always above 1 GB. Both use incremental level 1 backups. Why would this be happening?
Delete word, its meanings, its meaning's example sentences from DB Posted: 17 Mar 2013 12:42 PM PDT I have three tables (simplified for demonstration): words, the meanings of each word, and the example sentences for each meaning. Edit 1: I am using SQLite3 as the database. Edit 2: I figured out the following solution, which requires 3 SQL queries run in order. I'm still looking for the answer to my question: can the whole process be done in one query?
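In SQLite a single DELETE can do all three if the schema declares cascading foreign keys; a sketch, since the real table definitions aren't shown above:

    PRAGMA foreign_keys = ON;  -- SQLite requires this per connection

    CREATE TABLE words (
        id   INTEGER PRIMARY KEY,
        word TEXT NOT NULL
    );
    CREATE TABLE meanings (
        id      INTEGER PRIMARY KEY,
        word_id INTEGER NOT NULL REFERENCES words(id) ON DELETE CASCADE,
        meaning TEXT NOT NULL
    );
    CREATE TABLE examples (
        id         INTEGER PRIMARY KEY,
        meaning_id INTEGER NOT NULL REFERENCES meanings(id) ON DELETE CASCADE,
        sentence   TEXT NOT NULL
    );

    -- one statement: meanings and their example sentences go automatically
    DELETE FROM words WHERE word = 'target';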
MySQL LOAD DATA INFILE slows by 80% after a few gigs of input with InnoDB engine Posted: 17 Mar 2013 06:30 AM PDT I'm loading a 100GB file via LOAD DATA INFILE. I've had good success with MyISAM: a few hours and done. I'm trying it now using InnoDB. The load starts fast, at over 10MB/sec (watching the table file grow). But after about 5GB of data it slows down to the 2-4MB/sec range, and as I got over 20GB it was down to around 2MB/sec. The InnoDB buffer pool size is 8G, and I tuned a number of session and server settings before running the LOAD DATA INFILE command. I can't see why it starts off well and slows down over time. Also, using the same settings, I ran the same LOAD DATA INFILE command with the table using InnoDB and MyISAM and a 5GB test dataset; MyISAM was 20x faster. Anything else I should consider trying? The MyISAM engine is able to keep up the load rate much better.
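The poster's exact settings aren't shown, so here is the commonly suggested set for bulk-loading InnoDB, plus the usual explanation for the slowdown: once the growing primary-key and secondary indexes no longer fit in the buffer pool, every out-of-order row forces random I/O, which MyISAM's simpler structure largely avoids.

    SET unique_checks = 0;       -- skip secondary unique checks during the load
    SET foreign_key_checks = 0;  -- skip FK validation during the load
    SET sql_log_bin = 0;         -- if binlogging is on and replicas don't need it

Sorting the input file by primary key before loading, and raising innodb_log_file_size, are the other two levers that usually help keep the rate flat.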
How can I optimize this query and support multiple SKUs? Posted: 17 Mar 2013 08:42 AM PDT My current query can only select one SKU at a time; how can I optimize it so that it supports a list of SKUs?
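Without the original query, only a guess at the intended shape is possible; with hypothetical names, the usual change is a single IN list instead of one query per SKU:

    SELECT sku, description, price
    FROM products
    WHERE sku IN ('SKU-001', 'SKU-002', 'SKU-003');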
Query to find and replace text in all tables and fields of a mysql db Posted: 17 Mar 2013 02:42 PM PDT I need to run a query to find and replace some text in all tables of a MySQL database. I found a query, but it only looks for the text in the tbl_name table and just in the column field. I need it to look in all tables and all fields, everywhere in the database.
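One common approach, a sketch with a placeholder database name and search strings: use information_schema to generate one UPDATE ... REPLACE() per text column, then review and run the generated statements.

    SELECT CONCAT(
        'UPDATE `', table_name, '` SET `', column_name, '` = REPLACE(`',
        column_name, '`, ''old text'', ''new text'');'
    ) AS stmt
    FROM information_schema.columns
    WHERE table_schema = 'your_db'
      AND data_type IN ('char', 'varchar', 'tinytext', 'text', 'mediumtext', 'longtext');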
PostgreSQL: The database cluster initialization failed Posted: 17 Mar 2013 08:51 PM PDT I ran postgresql-9.1.2-1-windows.exe as the postgres user, run as administrator. Every way I try, I get "The database cluster initialization failed."
I am trying to set up PostGIS to work with GeoDjango, and I was able to install PostGIS manually. I'm new to PostgreSQL, coming over from MySQL for the first time, and I'm having a crisis of confidence over all of this. The pertinent log output is in C:\Users\Larry\AppData\Local\Temp\install-postgresql.log. Suggestions?