[how to] MySQL: Alter Table on very active table gives "Waiting for table metadata lock" |
- MySQL: Alter Table on very active table gives "Waiting for table metadata lock"
- dividing the comma separated strings into columns
- Measuring SQL execution time in PostgreSQL?
- How to avoid duplicate entries in SELECT statement SQL?
- get time format in postgresql
- Specified key was too long; max key length is 1000 bytes in mysql 5.6
- How is "BIG DATA or Hadoop" in current IT industry? [on hold]
- SQL Server 2005 - Query optimization for fetching large number of rows from table with 750 million rows
- Choose security group for restore-db-instance-to-point-in-time
- How to measure the performance of a SQL Server database? [on hold]
- How to DRY record metadata in SQL Server
- How to securely connect app and database servers?
- How to get SQL Server 2012 to use the invariant culture in format()?
- multiple line text values in mysqldump text export file
- Any advantage of creating table in temporary tablespace - Oracle
- Altering the location of Oracle-Suggested Backup
- MYSQL Timezone support
- Database Design Confusion
- replication breaks after upgrading master
- The client need 'SUPER' privileges to the reports DB in order to create and maintain my Functions
- extproc env variables oracle 11g
- Need to suppress rowcount headers when using \G
- multivalued weak key in ER database modeling
- Payment methods conceptual and logical model
- Microsoft Office Access database engine could not find the object 'tableName'
- MYSQL 5.5 Fail start Fedora 16
- Sql Anywhere 11: Restoring incremental backup failure
- What are database statistics, and how can I benefit from them?
MySQL: Alter Table on very active table gives "Waiting for table metadata lock" Posted: 15 Sep 2013 06:44 PM PDT At my work, we've recently upgraded to MySQL 5.5 and since then have been bitten a few times by the new (and intended) behavior of metadata locking. Basically, whenever we run an ALTER TABLE statement on a fairly active table, the table becomes inaccessible to both reads and writes, and threads are shown hanging, basically forever, with the "Waiting for table metadata lock" message. I've done a bit of research and found good explanations of this in the MySQL manual and here, and see that this is expected and actually intended behavior. What I haven't found anywhere online is a graceful way to mitigate the obvious problems this causes in production. The only way we have been able to run ALTER TABLE statements on our more active tables since upgrading is by disabling read and write permissions for any usernames our application runs under, killing all processes that are still running, and then running the ALTER TABLE statements. This works, but results in many errors from our app, both from background processes and our web site. In the past (i.e. before 5.5), schema changes were no problem. I understand the reason for the new behavior, but it causes obvious problems in production, and I figure folks in the community must have workarounds. Any ideas would be appreciated. Thanks! |
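A minimal sketch of one mitigation, assuming MySQL 5.5 or later (the table name and column change below are hypothetical placeholders): cap how long the ALTER is allowed to wait for the metadata lock, so it fails fast instead of queuing every other session behind it.

```sql
-- Sketch only: fail fast rather than blocking all traffic behind a pending
-- metadata lock. `mytable` and the added column are hypothetical.
SET SESSION lock_wait_timeout = 5;   -- seconds to wait for the metadata lock
ALTER TABLE mytable ADD COLUMN notes TEXT NULL;
-- If this errors with a lock wait timeout, retry during a quieter moment;
-- nothing else has been blocked in the meantime.
```

For tables that are never quiet, tools such as Percona Toolkit's pt-online-schema-change take a different route: they build a modified copy of the table in the background and swap it in, avoiding the long metadata lock on the original entirely.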
dividing the comma separated strings into columns Posted: 15 Sep 2013 06:51 PM PDT I have a problem with dividing comma-separated values.
For example, I want the columns divided as I used
Output was I'm using Oracle 10g |
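Since the original query and output were lost from the post, here is a hedged sketch of the common Oracle 10g approach: REGEXP_SUBSTR with a CONNECT BY row generator splits one delimited string into rows (the sample string 'a,b,c' is made up for illustration).

```sql
-- Split a comma-separated string into one row per value (Oracle 10g+).
-- The sample data is hypothetical; 10g lacks REGEXP_COUNT, so the
-- element count is derived from the number of commas.
SELECT REGEXP_SUBSTR(val, '[^,]+', 1, LEVEL) AS part
FROM (SELECT 'a,b,c' AS val FROM dual)
CONNECT BY LEVEL <= LENGTH(val) - LENGTH(REPLACE(val, ',')) + 1;
-- returns three rows: a, b, c
```

To get separate columns instead of rows, select `REGEXP_SUBSTR(val, '[^,]+', 1, n)` once per position n in a single SELECT.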
Measuring SQL execution time in PostgreSQL? Posted: 15 Sep 2013 09:05 PM PDT MySQL has a performance_schema database that allows one to capture SQL statement execution time data in a table (e.g. performance_schema.events_statements_history_long ; useful MySQL link). I was wondering if a similar set of tools existed in PostgreSQL? |
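The closest PostgreSQL equivalent is the pg_stat_statements contrib module, which aggregates execution counts and timings per normalized statement. A sketch, assuming PostgreSQL 9.2-era column names:

```sql
-- Requires shared_preload_libraries = 'pg_stat_statements' in postgresql.conf
-- (and a server restart), then once per database:
CREATE EXTENSION pg_stat_statements;

-- Cumulative timing per normalized statement, heaviest first
-- (times are in milliseconds in this version):
SELECT query, calls, total_time, total_time / calls AS avg_ms
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;
```

For a single statement rather than aggregates, `EXPLAIN ANALYZE` or psql's `\timing` toggle serve the same purpose.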
How to avoid duplicate entries in SELECT statement SQL? Posted: 15 Sep 2013 02:22 PM PDT

+----+-------------+---------+---------+---------------------+-------+
| id | translation | id_word | id_user | added               | tuser |
+----+-------------+---------+---------+---------------------+-------+
| 17 | допомагати  |       4 |       1 | 2013-08-29 14:52:20 |     2 |
| 17 | допомагати  |       4 |       1 | 2013-08-29 14:52:20 |     1 |
|  5 | когось      |       1 |       1 | 2013-08-27 23:35:09 |     1 |
|  4 | хто-небудь  |       1 |       1 | 2013-08-27 23:35:09 |  NULL |
|  1 | хтось       |       1 |       1 | 2013-08-27 23:34:17 |     2 |
|  1 | хтось       |       1 |       1 | 2013-08-27 23:34:17 |     1 |
+----+-------------+---------+---------+---------------------+-------+

As you can see, I have duplicate entries in id. I have to avoid duplicates as follows: if there is a row with tuser = 1, I have to remove the other entries with the same id, id_user and id_word from the result. If no row for that id, id_user and id_word has tuser = 1, all of them must be shown. I need only one row with tuser = 1 for each unique combination of id, id_word and id_user. |
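One way to express that rule, assuming a hypothetical table name `translations` with the columns shown above: keep every tuser = 1 row, and keep other rows only when no tuser = 1 sibling exists for the same key.

```sql
-- Sketch: `translations` is a hypothetical name for the table above.
SELECT t.*
FROM translations t
WHERE t.tuser = 1
   OR NOT EXISTS (
        SELECT 1
        FROM translations x
        WHERE x.id      = t.id
          AND x.id_word = t.id_word
          AND x.id_user = t.id_user
          AND x.tuser   = 1
      );
```

Against the sample data this keeps the tuser = 1 rows for ids 17, 5 and 1, drops their tuser = 2 siblings, and keeps the id 4 row because it has no tuser = 1 sibling. If exact tuser = 1 duplicates can exist, add DISTINCT to collapse them.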
get time format in postgresql Posted: 15 Sep 2013 03:36 PM PDT If I want to get the Ubuntu system time, I have two options: How can I find out whether PostgreSQL is using UTC or a UTC offset, such as US/Eastern? |
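A short sketch of how to check this from any PostgreSQL session:

```sql
-- Show the server's configured time zone and how a timestamp renders in it:
SHOW timezone;
SELECT current_setting('TimeZone') AS tz, now() AS server_now;

-- Per-session override, if you want everything reported in UTC:
SET timezone = 'UTC';
```

Note that `US/Eastern` is a named zone with daylight-saving rules rather than a fixed UTC offset, so `now()` rendered under it will shift between -04 and -05 across the year.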
Specified key was too long; max key length is 1000 bytes in mysql 5.6 Posted: 15 Sep 2013 08:40 PM PDT One of the application servers is internally creating the database on my I am getting the following error I have set my default engine to My current engines are as follows Now I really don't know how I would get rid of it, as the application server itself creates the database automatically in MySQL, so I don't have control over it. I am using |
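The 1000-byte key limit is MyISAM's, so this error usually means the tables ended up on MyISAM despite the intended default. A hedged sketch of how to verify and switch (the table name is hypothetical):

```sql
-- Confirm what new tables will be created as:
SHOW VARIABLES LIKE 'default_storage_engine';

-- Make InnoDB the default for future tables (also set it in my.cnf so it
-- survives a restart, since the app server recreates its schema itself):
SET GLOBAL default_storage_engine = InnoDB;

-- Hypothetical example: rebuild an affected table on InnoDB:
ALTER TABLE app_table ENGINE = InnoDB;
```

Keep in mind that with utf8, each character costs 3 bytes toward the key limit, so very long VARCHAR indexes may still need a prefix index even on InnoDB.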
How is "BIG DATA or Hadoop" in current IT industry? [on hold] Posted: 15 Sep 2013 10:48 AM PDT I would like to do a course & certification in BIGDATA. However I would like to verify from Database experts like you regarding the same.
|
SQL Server 2005 - Query optimization for fetching large number of rows from table with 750 million rows Posted: 15 Sep 2013 10:20 AM PDT Brief on the application: this is an audio fingerprinting application, being developed in Java with a Microsoft SQL Server 2005 database. I have one application to create fingerprints of original songs and put these fingerprints in the database. To store a fingerprint in the database I have this table: The application processes a song and takes 100 samples per second, so around 15000 samples for a complete song. These sample values are stored in the database, one row per sample, as {HASHKEY, NOTE_ID, TIMEOFFSET}. For the fingerprint of a complete song, I may have around 15000 rows in the fp_core table. I am planning to put fingerprints of 50000 songs in the database, so around 750 million rows will be in the fp_core table. I have another application to process recordings and detect the songs played in them. The process is: create a set of HASHKEYs from the recorded audio, the same as when fingerprinting an original song. The recorded audio generates around 20000-30000 HASHKEYs. Then the application retrieves rows from the fp_core table for all matching HASHKEYs generated from the recording. To retrieve data from the fp_core table, I am filling all these HASHKEYs of the recording into one more table: then I am joining these two tables to retrieve all matching rows; the query is: I have the following indexes: Problem: retrieving data using the above query is very slow, taking around 40 seconds. Right now, here are the stats: Query: Result: Plan: Can anybody help me? |
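Without the actual query plan, a common first step for this shape of workload is to cluster the big table on the join key, so each probe becomes a short range seek over physically adjacent rows. A sketch using the column names from the question (the index name and the staging table name `fp_input` are hypothetical, since the post's table definitions were lost):

```sql
-- Cluster fp_core on HASHKEY so all rows for one hash are stored together
-- (index name is hypothetical; building this on 750M rows takes a while).
CREATE CLUSTERED INDEX IX_fp_core_hashkey
    ON fp_core (HASHKEY);

-- Deduplicate the recording's keys before the join to shrink the probe side:
SELECT c.HASHKEY, c.NOTE_ID, c.TIMEOFFSET
FROM (SELECT DISTINCT HASHKEY FROM fp_input) AS i
JOIN fp_core AS c
    ON c.HASHKEY = i.HASHKEY;
```

If the plan still shows a scan of fp_core, updated statistics and an index on the staging table's HASHKEY column are the next things to check.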
Choose security group for restore-db-instance-to-point-in-time Posted: 15 Sep 2013 08:17 AM PDT When using the AWS CLI restore-db-instance-to-point-in-time command, I can't figure out how to set the security group. The doc on this page says:
Is it possible to override this parameter to use the same security group as the original instance? |
How to measure the performance of a SQL Server database? [on hold] Posted: 15 Sep 2013 07:34 AM PDT I have the task of improving the performance of a SQL Server 2012 database (one of 4 in an instance) by 60%, confirmed by corresponding statistics. So I need to measure the "performance" of the database before performance tuning and optimization, and after. Which metrics are best suited for this? Trying to answer the obvious questions ahead: I/O (hardware) bottlenecks are absent, since SQL Server runs on a virtual rack with plenty of physical resources under it. The database is used by approx. 60 users (mostly 8 hours a day) with widely varying load (per sec). This is a company management task, so the results of this task should be easy to grasp. |
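A sketch of the kind of baseline that supports a before/after comparison, using SQL Server's built-in DMVs (no object names from the question are assumed):

```sql
-- Heaviest queries by cumulative elapsed time since the plan cache was last
-- cleared (elapsed times are reported in microseconds):
SELECT TOP 10
       qs.execution_count,
       qs.total_elapsed_time / qs.execution_count AS avg_elapsed_us,
       qs.total_logical_reads,
       SUBSTRING(st.text, 1, 200) AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_elapsed_time DESC;

-- Where the instance spends its time waiting, which tells you what kind of
-- tuning (CPU, I/O, locking) will move the needle:
SELECT TOP 10 wait_type, wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;
```

Capturing these snapshots before and after tuning gives concrete, management-friendly numbers: average query duration, logical reads, and dominant wait types.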
How to DRY record metadata in SQL Server Posted: 15 Sep 2013 11:52 AM PDT There is a database that has been given to me, and all tables contain these columns:
However, it seems that this design is not DRY, and it's really hard to maintain because this database contains around 300 tables. On the other hand, the modification history, creation history and lots of other actions at the database level matter to us. Is there any better way to record DDL and DML operations? |
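For the DDL side, one audit table plus a database-level DDL trigger can replace per-table bookkeeping. A sketch, with hypothetical object names:

```sql
-- One audit table for the whole database (names are illustrative):
CREATE TABLE dbo.DdlAudit (
    EventTime  DATETIME NOT NULL DEFAULT GETDATE(),
    LoginName  SYSNAME  NOT NULL DEFAULT SUSER_SNAME(),
    EventData  XML      NOT NULL
);
GO
-- Fires on every CREATE/ALTER/DROP in this database:
CREATE TRIGGER trg_DdlAudit ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
    INSERT INTO dbo.DdlAudit (EventData)
    VALUES (EVENTDATA());
GO
```

For DML history, features such as Change Data Capture or change tracking (available in some SQL Server editions) record row changes without adding columns to all 300 tables.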
How to securely connect app and database servers? Posted: 15 Sep 2013 08:02 PM PDT (Updated) For a start, I have my app and database on separate servers. It's easy to connect them, except I am not sure how to secure my database server. Here's what I've already done:
Options considered (but not employed):
So, what else do I need, or is this good enough? How else do others do it? Please be as detailed as possible (a link to a tutorial or to whatever you are suggesting would help a lot). |
How to get SQL Server 2012 to use the invariant culture in format()? Posted: 15 Sep 2013 08:16 AM PDT
I'm trying to get the built-in It is said in the documentation that the function accepts a .NET culture identifier as the third parameter. The identifier for the invariant culture is a blank string:
That does not work with SQL Server, however:
It is also documented that the invariant culture is associated with the English language but not with any country/region. One would think this allows one to pass So how do I make SQL Server use the invariant culture? (Note: I'm interested in making the built-in function work. I already have my own CLR functions that do this; I was going to remove them in favor of the now-built-in functionality.)
|
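Since the invariant culture formats numbers and dates the same way as en-US for most format strings, passing 'en-US' is a close stand-in when SQL Server rejects the blank culture name — a workaround, not a true invariant culture:

```sql
-- Approximating the invariant culture with en-US in FORMAT():
SELECT FORMAT(1234567.891, 'N2', 'en-US');                 -- 1,234,567.89
SELECT FORMAT(CAST('2013-09-15' AS DATE), 'd', 'en-US');   -- 9/15/2013
```

The two cultures are not guaranteed identical for every format specifier, so a CLR function remains the only exact route if byte-for-byte invariant output matters.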
multiple line text values in mysqldump text export file Posted: 15 Sep 2013 10:20 AM PDT I'm trying to export a +100 mil record table into a txt file. My plan is to split the txt file into small pieces by size or line count, then import. I have one text field that has multiple lines, like blog post text; in the txt export file it is exported as multiple lines, which I want to be 1 line per row so I can process the file by lines. I tried various fields-terminated-by, lines-terminated-by and fields-escaped-by parameters for the export, but nothing turned that multi-line text into a single, quoted, comma-separated line. It does quote well when I export the data in sql format, but I haven't succeeded in converting the newline characters in the text field to \n\r or \n or whatever those characters are. Even if I escape them, they are still exported as newlines inside the quotes. |
Any advantage of creating table in temporary tablespace - Oracle Posted: 15 Sep 2013 07:20 AM PDT My on-site DBA told me to create a particular table in the temporary tablespace. Is there any advantage to doing so? |
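Permanent tables cannot actually live in a temporary tablespace (Oracle rejects that with ORA-02195), so the DBA most likely meant a global temporary table, whose segments are allocated in TEMP. A sketch with a hypothetical name:

```sql
-- A global temporary table: its rows are private to each session, stored in
-- the TEMP tablespace, and discarded on commit or disconnect.
CREATE GLOBAL TEMPORARY TABLE staging_rows (
    id  NUMBER,
    val VARCHAR2(100)
) ON COMMIT PRESERVE ROWS;   -- use ON COMMIT DELETE ROWS to clear per transaction
```

The advantages are reduced redo generation for scratch data and automatic cleanup, which suits intermediate results in batch jobs.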
Altering the location of Oracle-Suggested Backup Posted: 15 Sep 2013 03:21 PM PDT On one database, the Oracle-Suggested Backup scheduled from Enterprise Manager always ends up in the recovery area, despite the RMAN configuration showing that the device type disk format points elsewhere. As far as I can see, the scheduled backup job is simply: Asking RMAN to If I run the script manually, the backup set is placed at the above location; when the script is run from the job scheduler, the backup set goes to the RECO group on ASM. Why might Oracle still choose to dump the backup set to the Ultimately, how can I change the backup destination? |
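A hedged sketch of the usual checks, since the job's actual script was lost from the post (the path below is a made-up example):

```
-- In RMAN, confirm what the scheduled job's channels actually inherit,
-- then pin the disk format explicitly:
RMAN> SHOW ALL;
RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/u02/backup/%U';
```

One thing worth knowing: when db_recovery_file_dest is set and a BACKUP command runs without an explicit FORMAT clause, backups default to the fast recovery area regardless of channel configuration, and the EM-generated job may issue exactly such a command. Adding FORMAT to the job's script, or changing db_recovery_file_dest, are the two levers for moving the destination.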
MYSQL Timezone support Posted: 15 Sep 2013 05:21 PM PDT We have a shared hosting plan, and they are saying that they do not provide MySQL timezone support on a shared hosting plan. Can I create the timezone-related tables in our database and populate them with the required data (data from our local MySQL timezone tables)? How can I view the code syntax for the MySQL CONVERT_TZ function? Thanks, Arun |
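A short sketch of how CONVERT_TZ behaves once the mysql.time_zone* tables are populated (on a self-managed server they are normally loaded with the mysql_tzinfo_to_sql utility):

```sql
-- Named zones require the time zone tables to be loaded:
SELECT CONVERT_TZ('2013-09-15 12:00:00', 'UTC', 'US/Eastern');
-- returns 2013-09-15 08:00:00 (EDT is UTC-4 on that date)

-- Numeric offsets work even without the tables:
SELECT CONVERT_TZ('2013-09-15 12:00:00', '+00:00', '-04:00');
-- returns 2013-09-15 08:00:00
```

Note that CONVERT_TZ looks up zone names only in the `mysql` system schema, so copies of those tables placed in your own database will not be consulted; shared-hosting support would need the host to load them.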
Database Design Confusion Posted: 15 Sep 2013 03:57 AM PDT I am making a PhoneGap-based ERP solution for schools, in which every school can be configured individually and the whole school managed from that single application. A main administrator configures the school, i.e., adds all the subjects, teachers, classes, timetables, results, etc. The application also has roles for students, teachers, parents and admins, in which the admin can define the permissions that all the different roles have, plus add and delete permissions as well. The application also manages the attendance, timetable, results and profiles of all the students and staff. I can think of only two ways of doing this:
I just can't decide which design to go forward with. Thanks in advance. |
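The two designs in questions like this are usually one schema/database per school versus one shared schema keyed by school. A sketch of the shared-schema option (all names are illustrative, not from the question):

```sql
-- Shared-schema multi-tenancy: every row carries its school's key.
CREATE TABLE schools (
    school_id INT PRIMARY KEY,
    name      VARCHAR(100) NOT NULL
);

CREATE TABLE students (
    student_id INT PRIMARY KEY,
    school_id  INT NOT NULL REFERENCES schools (school_id),
    full_name  VARCHAR(100) NOT NULL
);

-- Every query is then scoped by tenant:
SELECT full_name FROM students WHERE school_id = 42;
```

The shared schema is simpler to operate and upgrade; per-school databases give stronger isolation at the cost of running N copies of every migration. Which wins depends on how many schools and how much per-school customization is expected.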
replication breaks after upgrading master Posted: 15 Sep 2013 01:21 PM PDT I have a replication setup with a 5.1.30 master and a 5.5.16 slave, and the replication is working well. Now I have upgraded the MySQL master to 5.1.47. As far as I know, we have to turn off the binary log with sql_log_bin=0 before using the mysql_upgrade program, in order to upgrade the replication setup as well, but the problem here is that the binary log was not turned off while the mysql_upgrade program was running. The reason, I found, is that in 5.1 sql_log_bin is a session variable, and the mysql_upgrade program runs in another session. So how do I upgrade the replication along with the server without any breakage of the replication setup? Any suggestions are really welcome. |
The client need 'SUPER' privileges to the reports DB in order to create and maintain my Functions Posted: 15 Sep 2013 03:20 AM PDT I have a problem with one of my clients. I provided SUPER privileges to the client James on the 'Reports' database, but he responds: "I still need SUPER privileges to the reports DB in order to create and maintain my Functions. So I'm still unable to move forward." He can log in, but even after being granted SUPER privileges he cannot create or maintain any functions on the reports DB server. How do I work on this? |
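When binary logging is enabled, MySQL demands SUPER (or the trust flag below) for CREATE FUNCTION, which is the usual cause of this symptom. A sketch of the standard fix (the account name is hypothetical):

```sql
-- Run as an administrator: relax the binlog safety check so routine
-- privileges, not SUPER, are enough to create stored functions.
SET GLOBAL log_bin_trust_function_creators = 1;

-- Then grant the client routine privileges scoped to the one schema:
GRANT CREATE ROUTINE, ALTER ROUTINE, EXECUTE
    ON reports.* TO 'james'@'%';
```

This keeps SUPER out of the client's hands entirely, which is also the safer arrangement; set the variable in my.cnf as well so it survives restarts.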
extproc env variables oracle 11g Posted: 15 Sep 2013 09:20 AM PDT I have Oracle 11g with extproc configured separately in listener.ora. Users report that some environment variables that should be exported are not set. Where does extproc get its environment from, besides the ENV clause in its definition in listener.ora? Do the variables come from the shell that started the listener? Why do variables included in ENV not appear? How can I efficiently check which env variables extproc has set? |
Need to suppress rowcount headers when using \G Posted: 15 Sep 2013 02:21 PM PDT Is there a command to suppress the rowcount headers and asterisks when using '\G' to execute a SQL statement? I am executing mysql with the |
multivalued weak key in ER database modeling Posted: 15 Sep 2013 04:21 PM PDT I was wondering, since I didn't find any clarification for this. I want to store movies that exist in different formats (DVD, Blu-ray, etc.), where the price of each format differs and so does the quantity of each format, so I came up with this: Is this correct from a design perspective? Does this imply redundancy? I don't understand how this will be stored in a table. Would it be better to do it like this: Thanks in advance. EDIT: I'm adding some more descriptive information about what I want to store at this point of the design. I want to store information about sales. For each movie the company carries, I need to store the format, price and stock quantity. I will also need to store customer information: a unique id, name, surname, address, the movies that he/she has already bought, and his/her credit card number. Finally, I will have a basket that temporarily keeps items (let's suppose that items other than movies exist as well) that the customer wants to buy. |
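Since the original diagrams were lost from the post, here is a sketch of how the weak-entity version typically maps to tables (names are illustrative): format becomes part of the dependent table's key, so price and quantity are stored once per (movie, format) with no repeated movie data.

```sql
CREATE TABLE movie (
    movie_id INT PRIMARY KEY,
    title    VARCHAR(200) NOT NULL
);

-- The weak entity: identified by its owner's key plus the format value.
CREATE TABLE movie_format (
    movie_id INT          NOT NULL REFERENCES movie (movie_id),
    format   VARCHAR(20)  NOT NULL,   -- 'dvd', 'bluray', ...
    price    DECIMAL(6,2) NOT NULL,
    quantity INT          NOT NULL,
    PRIMARY KEY (movie_id, format)
);
```

This answers the redundancy worry: each movie's title appears once, and each format row carries only its own price and quantity.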
Payment methods conceptual and logical model Posted: 15 Sep 2013 01:20 AM PDT I need to create conceptual and logical (normalized) models of a parking house according to the requirements below. It looks to me like a very simple concept that doesn't need all tables to have relationships - but then they could not be modelled as entities. I tried asking this on Stack Overflow but have gotten no feedback for a couple of days now.
The problem is I don't know how to add those highlighted relations to the logical DB model, or whether even to put them there. Is it acceptable practice to have isolated tables in the design? |
Microsoft Office Access database engine could not find the object 'tableName' Posted: 15 Sep 2013 06:21 PM PDT First a little background: I am using MS Access to link to tables in an Advantage database. I created a System DSN. In the past, in Access, I've created a new database and, using the external data wizard, successfully linked to tables. Those databases and the linked tables are working fine. Now I am trying to do the same thing: create a new Access DB and link to this same DSN. I get as far as seeing the tables, but after making my selection I get the error: "The Microsoft Office Access database engine could not find the object 'tableSelected'. Make sure the object exists and that you spell its name and the path name correctly." I've tried creating another data source (System and User) with no luck. Environment is Win XP, Access 2007, Advantage DB 8.1 |
MYSQL 5.5 Fail start Fedora 16 Posted: 15 Sep 2013 12:21 PM PDT I installed mysql and mysql-server from the repos (MySQL version 5.5). Then I tried to start it, but got an error. Here is the log: It was a fresh installation, nothing was changed prior to that; I just ran yum update. Here is the systemctl status trace |
Sql Anywhere 11: Restoring incremental backup failure Posted: 15 Sep 2013 11:21 AM PDT We want to create remote incremental backups after a full backup. This will allow us to restore in the event of a failure and bring up another machine with as close to real time backups as possible with SQL Anywhere network servers. We are doing a full backup as follows: This makes a backup of the database and log files and can be restored as expected. For incremental backups I've tried both live and incremental transaction logs with a renaming scheme if there are multiple incremental backups: However, on applying the transaction logs on restore I always receive an error when applying the transaction logs to the database:
The transaction log restore command is: The error doesn't specify which table it can't find, but this is a controlled test and no tables are being created or dropped. I insert a few rows, then kick off an incremental backup before attempting to restore. Does anyone know the correct way to do an incremental backup and restore on SQL Anywhere 11? UPDATE: Thinking it might be related to the complexity of the target database, I made a new blank database and network service, then added one table with two columns and inserted a few rows. I made a full backup, then inserted and deleted a few more rows and committed the transactions, then made an incremental backup. This also failed with the same error when attempting to apply the incremental backups of the transaction logs after restoring the full backup... Edit: You can follow this link to see the same question with slightly more feedback on SA: http://sqlanywhere-forum.sybase.com/questions/4760/restoring-incrementallive-backup-failure |
What are database statistics, and how can I benefit from them? Posted: 15 Sep 2013 06:19 AM PDT I've heard mention of statistics that SQL Server keeps by default. What are they tracking, and how can I use this information to improve my database? |
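In SQL Server, statistics are the histograms and density information the optimizer keeps per index or column to estimate how many rows a predicate will match; stale statistics lead to bad cardinality estimates and bad plans. A sketch of how to inspect and refresh them (the object names are hypothetical):

```sql
-- When were each table's statistics last updated?
SELECT name, STATS_DATE(object_id, stats_id) AS last_updated
FROM sys.stats
WHERE object_id = OBJECT_ID('dbo.Orders');

-- Inspect the histogram and density for one statistics object:
DBCC SHOW_STATISTICS ('dbo.Orders', IX_Orders_Date);

-- Rebuild after large data changes so estimates match reality:
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;
```

By default SQL Server creates and auto-updates statistics on its own; manual updates mainly help after bulk loads or when a query's estimated and actual row counts diverge badly.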