[how to] MySQL Forcing close of thread 946 user
- MySQL Forcing close of thread 946 user
- Postgres: count(*) vs count(id)
- Dropping a group of schemas with similar name patterns
- MySQL replication: most important config parameters for performance on slave server?
- Database for opening times of locations
- Add Contents in a Column and make them 0
- How to purge logs by using Flashback feature in Oracle Database
- How do I connect to a database with a blank password using a shell script?
- Getting SELECT to return a constant value even if zero rows match
- Cannot create PostgreSQL user
- Oracle transactions deadlock
- PostgreSQL: setting high work_mem does not avoid disk merge
- SAP Business Warehouse 0APO_LOCNO_ATTR datasource extraction
- How to make a copy from a Emergency Database?
- Maintenance in MySQL when innodb_file_per_table is disabled
- slow queries - set weight to token type in tsquery - postgresql
- how to import mysql system "mysql" database
- SUPER privilege not defined for master user in Amazon MySQL RDS
- How to import a table's data into MySQL from SQL Server?
- Is it possible to pipe the result of a mysqldump straight to rsync as the source argument?
- Amazon RDS for MySQL vs installing MySQL on an Amazon EC2 instance
- MySQL Workbench sync keeps requesting the same changes
- Understanding COUNT() as `count`
- Deleting Data From Multiple Tables
- mysql performance / cache configuration enigma
- Minimizing Indexed Reads with Complex Criteria
- Difference between HAS_MANY and BELONGS_TO in Yii framework
- How to re-create the ##MS_PolicyEventProcessingLogin## principal
MySQL Forcing close of thread 946 user Posted: 27 Apr 2013 07:11 PM PDT My MySQL box keeps shutting down and coming back up. Below is a snippet of the log file. In quite a number of places I notice this line: Forcing close of thread 946 user.
Postgres: count(*) vs count(id) Posted: 27 Apr 2013 08:01 PM PDT I saw in the documentation the difference between count(*) and count(id). My question is about Postgres' internal optimizations: is it smart enough to recognize that count(id) on a column that can never be NULL is equivalent to count(*)? I took a look at the output of EXPLAIN, but it didn't make this clear to me.
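A quick way to check what the planner actually does is to compare the plans yourself. This is a generic sketch, not taken from the post; the table and column names are made up:

```sql
-- Hypothetical table: id is declared NOT NULL (it is the primary key).
CREATE TABLE items (
    id   serial PRIMARY KEY,
    name text
);

-- Compare the plans; on 8.4/9.x both typically show a full Seq Scan plus
-- Aggregate, i.e. count(id) is not rewritten into count(*): the per-row
-- NOT NULL check is simply evaluated anyway.
EXPLAIN ANALYZE SELECT count(*)  FROM items;
EXPLAIN ANALYZE SELECT count(id) FROM items;
```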
Dropping a group of schemas with similar name patterns Posted: 27 Apr 2013 04:58 PM PDT Consider a situation where one needs to perform a bunch of essentially identical operations, with the only variable being the name of some object. In my case, I need to drop some schemas that all share a common name prefix, and ideally I would write a single wildcard DROP SCHEMA, by analogy with a Unix shell glob. Of course such a command doesn't exist, so how can one do this with a single command? My understanding is that this cannot be done portably. I'm using PostgreSQL 8.4, but methods for more recent versions of PG are fine too. It would be nice if the solution had a dry-run or dummy option, so one could see what commands were going to be run before actually running them. Perhaps a way to just print the commands? Also, an indication of how to deal with more general patterns than the example given would be nice.
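Since the original statements were stripped from the digest, here is a minimal sketch of the usual approach: generate the DROP statements from the catalog and either print them (dry run) or execute them dynamically. The 'foo_%' prefix is a placeholder for whatever pattern is actually needed:

```sql
-- Dry run: print the statements that would be executed.
SELECT 'DROP SCHEMA ' || quote_ident(nspname) || ' CASCADE;'
FROM pg_namespace
WHERE nspname LIKE 'foo_%';

-- Execute them (DO blocks need PostgreSQL 9.0+; on 8.4 put the same
-- loop inside a plpgsql function and call it once).
DO $$
DECLARE
    s text;
BEGIN
    FOR s IN SELECT nspname FROM pg_namespace WHERE nspname LIKE 'foo_%' LOOP
        EXECUTE 'DROP SCHEMA ' || quote_ident(s) || ' CASCADE';
    END LOOP;
END $$;
```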
MySQL replication: most important config parameters for performance on slave server? Posted: 27 Apr 2013 12:43 PM PDT I'm setting up a MySQL master-slave replication pair. Since, if I understand correctly, the slave only replays updates/inserts, what are the most critical configuration parameters I can adjust in the slave's my.cnf to improve its performance?
Database for opening times of locations Posted: 27 Apr 2013 03:33 PM PDT I'm designing a database for opening times and came up with this solution. The database will be MySQL. The requirements are that a location has standard opening times for each day of the week and can also have special opening times, for example on Christmas. So to get the opening times for a location I would first search 'special_opening' for the given day and, if that returns no results, load the data from 'opening' instead. Is this a legitimate way to model the database? Are there better ways to do it?
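The post's diagram was stripped from the digest; a minimal sketch of the two-table idea under the stated assumptions (column names are guesses) might look like:

```sql
-- Regular weekly hours, one row per location, weekday and interval.
CREATE TABLE opening (
    location_id INT NOT NULL,
    weekday     TINYINT NOT NULL,      -- 0 = Monday ... 6 = Sunday
    open_time   TIME NOT NULL,
    close_time  TIME NOT NULL,
    PRIMARY KEY (location_id, weekday, open_time)
);

-- Exceptions for specific dates (holidays etc.); a closed day can be
-- modelled with NULL times. One exception interval per day in this sketch.
CREATE TABLE special_opening (
    location_id INT NOT NULL,
    day         DATE NOT NULL,
    open_time   TIME NULL,
    close_time  TIME NULL,
    PRIMARY KEY (location_id, day)
);
```

Checking special_opening first and falling back to opening is a common and legitimate pattern; the main alternative is to materialise every date into a calendar-style table, which simplifies queries at the cost of many more rows.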
Add Contents in a Column and make them 0 Posted: 27 Apr 2013 07:50 PM PDT http://sqlfiddle.com/#!3/96f11/3 In the above fiddle, I need the output described there. The UserID column is common to the Filtered table and the Main table. I need to take the Amt values from the Main table for the UserIDs present in the Filtered table, add them up, add that total to the Amt column of the row whose UserID is 'Admin', and set the corresponding Amt values to zero. I need to apply this update to the Main table. Can you help me do this?
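The fiddle itself is not reproduced here, so this is only a guess at the intended update, assuming tables named Main(UserID, Amt) and Filtered(UserID) as described (SQL Server syntax, matching the fiddle URL):

```sql
-- 1) Add the total of the filtered users' amounts to the Admin row.
UPDATE Main
SET Amt = Amt + (SELECT SUM(m.Amt)
                 FROM Main m
                 JOIN Filtered f ON f.UserID = m.UserID)
WHERE UserID = 'Admin';

-- 2) Then zero out the rows that were just rolled up.
UPDATE m
SET m.Amt = 0
FROM Main m
JOIN Filtered f ON f.UserID = m.UserID;
```

The order matters: sum into the Admin row first, then zero the source rows.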
How to purge logs by using Flashback feature in Oracle Database Posted: 27 Apr 2013 08:40 AM PDT I've configured a flash recovery area for an Oracle database. My questions are: 1. How can I delete Flashback Database logs, or is there a retention policy that handles that? 2. After enabling archive log mode and the flash recovery area, are archived logs stored in the archive log destination (not in the FRA) purged automatically, or do I have to back them up and delete them via RMAN? Thanks,
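A hedged sketch of the relevant knobs (not taken from the post): flashback logs inside the FRA are managed by Oracle itself according to DB_FLASHBACK_RETENTION_TARGET and FRA space pressure, so you tune the retention rather than deleting them by hand, while archived logs outside the FRA are only removed when you delete them yourself, typically through RMAN:

```sql
-- Flashback Database logs: set the retention target (in minutes);
-- Oracle reuses and deletes flashback logs in the FRA on its own.
ALTER SYSTEM SET DB_FLASHBACK_RETENTION_TARGET = 1440 SCOPE=BOTH;

-- Archived redo logs outside the FRA are not purged automatically.
-- From RMAN (its own CLI, not SQL*Plus), something like:
--   RMAN> DELETE ARCHIVELOG ALL BACKED UP 1 TIMES TO DEVICE TYPE DISK;
```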
How do I connect to a database with a blank password using a shell script? Posted: 27 Apr 2013 03:25 PM PDT Above is the shell script I use to load (source) the database, but it still asks for a password. I just want to know how I can load the database when the root password is blank. If the root password is not blank, it works fine.
Getting SELECT to return a constant value even if zero rows match Posted: 27 Apr 2013 12:05 PM PDT Consider a SELECT statement that returns a column for the rows matching a condition. When no rows match, it returns nothing at all. How would one make such SQL return at least a constant/default value even if zero rows match? BTW, it's PostgreSQL 8.4.
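The original statement was stripped from the digest; below is a generic sketch of two standard tricks on a hypothetical table t(id, val). Both work on PostgreSQL 8.4:

```sql
-- Trick 1: aggregate functions always return exactly one row,
-- so wrap the value and substitute a default when nothing matched.
SELECT COALESCE(MAX(val), 0) AS val
FROM t
WHERE id = 123;

-- Trick 2: UNION ALL with a fallback row that only appears
-- when the real query produced no rows.
SELECT val FROM t WHERE id = 123
UNION ALL
SELECT 0 WHERE NOT EXISTS (SELECT 1 FROM t WHERE id = 123);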
Cannot create PostgreSQL user Posted: 27 Apr 2013 01:27 PM PDT I'm using PostgreSQL 9.1.9 on Ubuntu 13.04. Following the steps in a StackOverflow question, I tried to create a user/role in PostgreSQL, but it failed. How can I solve this problem?
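The exact command and error message were stripped from the digest. For reference, a role can be created from psql while connected as a superuser; this is a generic sketch with placeholder names, not the poster's command:

```sql
-- Run as a superuser (e.g. after: sudo -u postgres psql)
CREATE ROLE myuser WITH LOGIN PASSWORD 'secret';
CREATE DATABASE mydb OWNER myuser;
```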
Oracle transactions deadlock Posted: 27 Apr 2013 02:43 PM PDT How do I roll back all active transactions on an Oracle DB? I ran a query and see 4 transactions in ACTIVE status.
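The query from the post was stripped; here is a hedged sketch of how one usually inspects and terminates the sessions that own active transactions. There is no single "roll back everything" statement; killing a session forces its open transaction to roll back:

```sql
-- Find the sessions that own active transactions.
SELECT s.sid, s.serial#, s.username, t.status
FROM   v$transaction t
JOIN   v$session s ON s.taddr = t.addr;

-- Killing a session rolls back its transaction.
-- Substitute the sid/serial# values reported above.
ALTER SYSTEM KILL SESSION '123,45678' IMMEDIATE;
```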
PostgreSQL: setting high work_mem does not avoid disk merge Posted: 27 Apr 2013 10:19 AM PDT This is not quite my day with Postgres. On my server machine with PostgreSQL 9.2.3 I have set work_mem to 4MB to avoid an external merge sort on disk, yet the sort still spills to disk. Why is 4MB not enough? In the Postgres wiki, there is this note:
So I assumed it would be the same in my case. EDIT: If I create an index on the sorted column first, then 4MB is finally enough. It seems that the index leads to lower memory usage during the sort. If anyone would be so kind as to explain all this behaviour, I would be very grateful.
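To see where the memory goes, it helps to compare the Sort Method reported by EXPLAIN ANALYZE before and after raising work_mem for the session. A generic sketch (the table and column are placeholders, not from the post):

```sql
-- Raise work_mem for this session only and re-check the plan.
SET work_mem = '64MB';

EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM big_table ORDER BY some_column;
-- Look for:  Sort Method: external merge  Disk: ...kB   (spilled to disk)
--       vs.  Sort Method: quicksort  Memory: ...kB      (fit in work_mem)
-- With an index on some_column the sort node can disappear entirely,
-- which is why creating the index made 4MB "enough".
```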
SAP Business Warehouse 0APO_LOCNO_ATTR datasource extraction Posted: 27 Apr 2013 12:58 PM PDT We are currently importing 0APO_LOCNO_ATTR from several different source systems. I want to be able to import the field PRECISID into BW from this datasource. Here is what I have discovered/tried so far.
How to make a copy from an Emergency Database? Posted: 27 Apr 2013 09:04 AM PDT I have a database on a very old computer (Windows XP, SQL Server 8.0). Yesterday the database was marked as suspect, so I put it into the EMERGENCY state; now the database is read-only. The only way to make it functional again is to copy it into another database. Previously I would use Backup to save the database to a file and restore it into another database, but since the database is now in emergency mode that approach no longer works. I tried Import/Export into another database, but there is a problem: some key columns in some tables of the old database are marked as Identity Specification and are generated automatically, while the same columns in the new database are not, so a NULL error is raised. Why doesn't Import/Export make an exact copy of the old database, and how can I make an exact copy of it?
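For the identity problem specifically, a hedged sketch (table and column names are placeholders): script the target tables so the key columns keep their IDENTITY property, then temporarily allow explicit values with SET IDENTITY_INSERT while copying the rows across:

```sql
-- In the target database, keep the original key values while copying.
SET IDENTITY_INSERT dbo.MyTable ON;

INSERT INTO dbo.MyTable (Id, Name)
SELECT Id, Name
FROM OldDb.dbo.MyTable;      -- the read-only EMERGENCY database as source

SET IDENTITY_INSERT dbo.MyTable OFF;
```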
Maintenance in MySQL when innodb_file_per_table is disabled Posted: 27 Apr 2013 04:04 PM PDT I have read your post and I completely understand that running OPTIMIZE TABLE in an environment where innodb_file_per_table is disabled does not shrink the global ibdata1 tablespace. But what if I need to perform index maintenance on InnoDB tables with the ANALYZE TABLE command: will that also grow the shared tablespace? What other alternatives are there to improve performance or perform maintenance on the InnoDB engine when using a single tablespace, without letting ibdata1 grow out of control? Regards, Matthew
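For context, a hedged sketch of the two statements in question: ANALYZE TABLE only refreshes index statistics and does not rebuild anything, whereas OPTIMIZE TABLE on InnoDB maps to a table rebuild, which is what adds space to the shared ibdata1 when file-per-table is off:

```sql
-- Cheap: refresh index cardinality statistics only.
ANALYZE TABLE mydb.mytable;

-- Expensive: rebuilds the table (this is effectively what OPTIMIZE TABLE
-- does for InnoDB); with innodb_file_per_table = OFF the rebuilt copy
-- lands inside ibdata1, which never shrinks back.
ALTER TABLE mydb.mytable ENGINE=InnoDB;
```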
slow queries - set weight to token type in tsquery - postgresql Posted: 27 Apr 2013 08:04 PM PDT PostgreSQL version 9.2.3! I'm working on a database for mapping chemical names. My main table contains approximately 91 million records and is indexed with GIN. I want to query it with multiple names (currently 100 entries at a time), which I first put into a query table, build a tsquery column from the names, and index with GiST. Main table structure: I was trying different approaches, so for testing the GIN index I created a clone, then: The query table is: As with the main table, I fill it via COPY FROM through a temp table and then add the tsquery column: The query is basically a join between both tables: lexemes is the GiST-indexed tsquery column on my query table, whereas tsv_syns is the GIN-indexed tsvector column in the main names table, the one with 91 million records. The query is intended to match names, exact matches if possible, and it works very well for such a large table: normal names, containing only letters, can be retrieved in mere microseconds. The problem comes when the name strings contain numbers. to_tsvector and to_tsquery create one token for each number, which together make queries for this kind of entry rather slow: instead of a few milliseconds, they take approximately 1-2 seconds each. I would like to reduce this query time to a few milliseconds like the other entries, but I don't know how. I have tested with and without ts_rank and found that ranking adds only half a second to the total query, if it makes any difference at all, so that's not my problem. Some sample queries follow (result columns: cid | name | synonym | tsv vector). I wonder what the best way to make these last queries faster would be. I have tried a pre-processing script that removes all the numbers; it speeds the search up to about 3 seconds in total, but then I miss the exact/closest match I was looking for in some cases, so that's no use. Other approaches came to mind: I think weighting token types could be a good solution for me, but as far as I have seen it cannot be done. Tsvectors/tsqueries can be labelled, but not individual token types. Or is there a way to label tokens differently within the same tsvector? The same goes for changing the parser: it might lead me to wrong matches, although since it keeps the positional information it might perform well; I'm not sure how I should do this, though. My postgres.conf parameters: I have tried lower amounts of shared_buffers and effective_cache_size (16GB and 32GB respectively) with no difference in performance from the current settings, so I'm planning to change them back to those limits. I tried a GiST index on querytree(lexemes); it didn't make much difference. I'm a little bit lost and would appreciate any ideas or possible solutions to speed up my queries. Thanks :) PS: Any recommendations for NoSQL DBs that could improve performance?
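The exact table definitions and queries were stripped from the digest; a small illustration of the tokenisation issue and one possible weighting workaround follows. All names are assumptions and the tsvector output shown is only indicative:

```sql
-- Numbers become separate tokens, which bloats the query:
SELECT to_tsvector('simple', '1,2-dichloro-4-nitrobenzene');
-- roughly: '1':1 '2':2 'dichloro':3 '4':4 'nitrobenzene':5  (illustrative)

-- Workaround sketch: store a "letters only" form with a higher weight
-- than the full form, then rank matches accordingly (assumed columns).
UPDATE names
SET tsv_syns =
       setweight(to_tsvector('simple', regexp_replace(synonym, '[0-9]+', ' ', 'g')), 'A')
    || setweight(to_tsvector('simple', synonym), 'D');
```

This does not weight token types directly (PostgreSQL has no such feature), but the A/D weights let ts_rank prefer matches on the alphabetic part while still keeping the numeric tokens available for exact matches.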
how to import mysql system "mysql" database Posted: 27 Apr 2013 05:04 PM PDT This question might already be answered, but it's almost impossible to Google. When I perform a full backup of the MySQL server and then reinstall it from scratch, I want to import the FULL dump of all databases, including the "mysql" system database. I did that successfully, but now, even though all the user records are in the grant tables, I can't even change their passwords; I always get the same error.
Restarting mysqld is of no help. Answer: it was fixed by executing a single statement; now all users work as before.
SUPER privilege not defined for master user in Amazon MySQL RDS Posted: 27 Apr 2013 01:04 PM PDT I have created a medium instance on Amazon RDS in the Asia Pacific (Singapore) region. I have created my master user with a master password, and it works/connects fine with Workbench installed on my local PC. However, when I try to create a function on that instance, it shows me the following error:
On my instance, the variable log_bin_trust_function_creators shows OFF. When I then try to change the variable with SET GLOBAL, it gives me another error:
I don't know how to solve this error. Can anybody help?
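For reference (not from the post): on RDS the master user does not have SUPER, so SET GLOBAL fails; the variable has to be changed through a DB parameter group in the RDS console or CLI instead, after which the new value can be verified from SQL:

```sql
-- This is what fails on RDS without the SUPER privilege:
-- SET GLOBAL log_bin_trust_function_creators = 1;

-- After setting log_bin_trust_function_creators = 1 in the instance's
-- DB parameter group and applying it, confirm from any client:
SHOW VARIABLES LIKE 'log_bin_trust_function_creators';
```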
How to import a table's data into MySQL from SQL Server? Posted: 27 Apr 2013 02:04 PM PDT I am trying to export a table from SQL Server 2008 R2 to MySQL 5.5. For this I am using a migration tool, but I hit an error. The error may be occurring because the table in SQL Server has a column with a particular data type. Please provide your expert answers. If it's not possible through that tool, what is the alternative?
Is it possible to pipe the result of a mysqldump straight to rsync as the source argument? Posted: 27 Apr 2013 03:04 PM PDT Is it possible to pipe the result of a mysqldump straight to rsync as the source argument? Conceptually, I was thinking something like: I've seen people pipe the result to mysql for a one-liner backup/restore solution, but I was curious whether it is possible with rsync. You know, because rsync is magic :) Thanks for your time!
Amazon RDS for MySQL vs installing MySQL on an Amazon EC2 instance Posted: 27 Apr 2013 11:04 AM PDT At work, we host all our webservers on Amazon EC2 and have usually used MySQL databases installed on the same box as our Apache webserver, communicating with them locally. RDS, being a dedicated database service provided by the same company as EC2, seems like it ought to be the obviously better option. However, when I look at the pricing for the two options (see http://aws.amazon.com/ec2/pricing and http://aws.amazon.com/rds/pricing) it seems that an RDS server costs almost twice as much as an EC2 server for a box with the same specs. Given that I'm capable of handling backups myself, and that EC2 offers the same ability to scale up the instance as required that RDS does, I can't see any reason at all to use RDS instead of EC2. It seems like I'm probably missing something big, though, because if I were right, nobody would use RDS. What exactly am I missing, and what are the advantages of RDS over installing your own database on an EC2 instance?
MySQL Workbench sync keeps requesting the same changes Posted: 27 Apr 2013 06:04 PM PDT I am using MySQL Workbench, and when I try to "synchronize" it with my remote database, it keeps detecting some changes to make. Specifically, the most recurrent ones are:
I complied and executed all the queries it gave me (and added the semicolon that they forgot). MySQL didn't complain and executed them. However, it didn't help: I can run the sync 20 times in a row and it will still request the same useless changes.
Understanding COUNT() as `count` Posted: 27 Apr 2013 11:04 AM PDT I'm currently learning how to build a site in PHP and MySQL. However, I can't seem to understand COUNT() as `count`. I get the principles of COUNT, of 0 || 1, and how a query returns all the values that pertain to it, but I don't see how COUNT() as `count` works. Anyhow, this is how the code I'm writing goes - so we have a working example - and where I first became perplexed. If anyone can explain, it would be a great help!
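A tiny sketch of what the alias does (generic table, not the poster's code): COUNT(*) computes one aggregate value, and `AS count` simply names that result column so the client (e.g. PHP) can read it by key:

```sql
-- Returns a single row with one column named `count`.
SELECT COUNT(*) AS `count`
FROM users
WHERE active = 1;
-- In PHP the value is then available as $row['count'].
```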
Deleting Data From Multiple Tables Posted: 27 Apr 2013 10:52 AM PDT Suppose I have a table called UNIVERSITY containing university names. These university IDs are (obviously) used in many tables within the database (named e.g. Education), suppose 10 tables. Q: Now what happens if I delete one university? A: The universityID field in the other tables becomes NULL. But I don't want that; rather, when I delete one university from the UNIVERSITY table, all rows that reference it in all 10 tables should be deleted too. What is the shortest and easiest MySQL query for this operation? NOTE: I'm using PHP.
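A hedged sketch using InnoDB foreign keys with ON DELETE CASCADE, so that deleting the university removes the referencing rows automatically (column names and the Education example table are assumptions):

```sql
-- Each referencing table (here: Education, as one of the 10) declares
-- a foreign key with ON DELETE CASCADE. All tables must be InnoDB.
ALTER TABLE Education
    ADD CONSTRAINT fk_education_university
    FOREIGN KEY (universityID) REFERENCES UNIVERSITY (universityID)
    ON DELETE CASCADE;

-- Then a single statement cleans up everything:
DELETE FROM UNIVERSITY WHERE universityID = 42;
```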
mysql performance / cache configuration enigma Posted: 27 Apr 2013 10:04 AM PDT I have two MySQL 5.1 instances (say A and B) hosting the same database schema. If I run the same query (with MySQL Workbench) on both instances, I don't understand why I get very different response times on subsequent executions. On instance A, the first execution takes 0.688s and the second takes 0.683s, while on instance B the timings of subsequent executions differ markedly. It looks like there's a cache configuration difference between the two instances, but I can't find it. Comparing the server variables of the two instances didn't reveal it. Just to mention, instance A is our test environment and instance B is our production environment. Edit (recommended by @Rick James): The following variables are strictly identical in both environments. The actual SELECT: The EXPLAIN SELECT (same in both environments): The CREATE TABLE statement (exactly the same on both, except the constraint names):
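One thing worth comparing when subsequent runs behave differently is the query cache; a generic check, not taken from the post:

```sql
-- Run on both instances and compare.
SHOW VARIABLES LIKE 'query_cache%';   -- query_cache_type, query_cache_size, ...
SHOW GLOBAL STATUS LIKE 'Qcache%';    -- hits, inserts, lowmem prunes

-- SQL_NO_CACHE bypasses the query cache, so timings reflect real work:
-- SELECT SQL_NO_CACHE ... ;
```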
Minimizing Indexed Reads with Complex Criteria Posted: 27 Apr 2013 12:04 PM PDT I'm optimizing a Firebird 2.5 database of work tickets. They're stored in a table declared roughly as shown. I generally want to find the first ticket that hasn't been processed and is in a given status. My processing loop would be:
Nothing too fancy. If I'm watching the database while this loop runs, I see the number of indexed reads climb with each iteration. The performance doesn't seem to degrade terribly as far as I can tell, but the machine I'm testing on is pretty quick. However, I've received reports of performance degradation over time from some of my users. I've got an index on the relevant columns. -- Edits for comments -- In Firebird you limit row retrieval with FIRST/ROWS, so when I say "first", I'm just asking it for a limited record set matching the criteria.
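Since the DDL and loop queries were stripped from the digest, here is a generic Firebird sketch of a "first unprocessed ticket" fetch and an index to support it (all names are assumptions):

```sql
-- Composite index so the engine can find candidate rows directly.
CREATE INDEX idx_tickets_status_processed
    ON tickets (status, processed);

-- Fetch only the first matching ticket instead of scanning all candidates.
SELECT FIRST 1 id
FROM tickets
WHERE processed = 0
  AND status = 'Q'
ORDER BY id;
```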
Difference between HAS_MANY and BELONGS_TO in Yii framework Posted: 27 Apr 2013 10:23 AM PDT I am new to the Yii framework. I discovered the HAS_MANY and BELONGS_TO relation types, but I'm not clear on the difference between them. Can someone give some real-life examples with a diagram that will clear up my doubts?
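The relation names mirror where the foreign key lives; a minimal SQL illustration with hypothetical tables shows the two directions:

```sql
-- A user HAS_MANY posts; a post BELONGS_TO a user.
CREATE TABLE user (
    id   INT PRIMARY KEY,
    name VARCHAR(50)
);

CREATE TABLE post (
    id      INT PRIMARY KEY,
    user_id INT,                       -- the FK sits on the "belongs to" side
    title   VARCHAR(200),
    FOREIGN KEY (user_id) REFERENCES user (id)
);
```

In Yii terms, the model for the table holding the foreign key (post) declares BELONGS_TO, while the model on the other end (user) declares HAS_MANY.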
How to re-create the ##MS_PolicyEventProcessingLogin## principal Posted: 27 Apr 2013 04:35 PM PDT I'm getting a bunch of errors in my SQL Server logs about a missing ##MS_PolicyEventProcessingLogin## principal. How do I recreate this principal?
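A hedged sketch of the usual repair (not from the post): these errors typically mean the database-level users mapped to this login are missing, or the login itself was dropped, so the mapping is recreated in master and msdb. Run as sysadmin and adjust to the databases named in your errors:

```sql
-- If the server-level login itself was dropped, recreate it disabled:
-- CREATE LOGIN [##MS_PolicyEventProcessingLogin##] WITH PASSWORD = '<strong random password>';
-- ALTER LOGIN [##MS_PolicyEventProcessingLogin##] DISABLE;

-- Re-map the database users that Policy-Based Management expects:
USE master;
CREATE USER [##MS_PolicyEventProcessingLogin##]
    FOR LOGIN [##MS_PolicyEventProcessingLogin##];
GO
USE msdb;
CREATE USER [##MS_PolicyEventProcessingLogin##]
    FOR LOGIN [##MS_PolicyEventProcessingLogin##];
GO
```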