[how to] mysql tried to restart itself 3 or more times, but it failed finally
- mysql tried to restart itself 3 or more times, but it failed finally
- Cannot change root access in MySQL to %
- "Error at line 1: ORA-00942: table or view does not exist." Sqlplus basic use, querying. Unable to retrieve schema from table "Course" or "emp"
- MySQL database datafiles in windows SystemData directory
- List of variable defaults for InnoDB engine
- Find records in same table with different company id but some other same values
- Return table based on 1, 2, 3, 4, 5 or 0 selected search options
- PostGIS Query that selects from over 124GB of Indexed data, how to Optimize?
- pt-table-checksum - Diffs cannot be detected because no slaves were found (1)
- Cron job for Mysql Incremental backup
- Removing superfluous tables with a synchronization?
- InnoDB query duration inconsistent
- TOAST Table Growth Out of Control - FULLVAC Does Nothing
- "SQL1042C An unexpected system error" when creating a database
- mysql replication goes out of sync for some tables
- Force View's query plan to update?
- MySQL+Web-App Performance Issues using different servers
- SQL Server database schema (and likely, some data changes) - how to auto-distribute over many database instances
- MySQL slap with custom query
- Overview of how MongoDB uses its various threads
- MySQL information_schema doesn't update
- How to successfully run a batch file in an SQL Agent job?
- MySQL partitioned tables?
- Limit memory used for mongoDb
- Pgpool, Postgresql and Apache tuning (1000 concurrent spatial queries)
- innodb_file_format Barracuda
mysql tried to restart itself 3 or more times, but it failed finally Posted: 05 Jul 2013 09:26 PM PDT This has now happened twice, possibly for the same reason each time: MySQL tried to restart itself three or more times, but ultimately failed. Here is the MySQL log, and following is the content of my.cnf. The server has 1.5 GB of RAM, and I can easily start MySQL by hand with "service mysqld start". Does anyone know what happened? Please help!
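With only 1.5 GB of RAM, a failed automatic restart is often the kernel's OOM killer shooting down mysqld, or an InnoDB buffer pool sized larger than available memory. A rough checklist to run on the server (the log paths are assumptions — check `log_error` in my.cnf for the real one):

```shell
# Did the kernel kill mysqld for using too much memory?
dmesg | grep -iE 'oom|killed process'
grep -i 'out of memory' /var/log/messages

# The last lines of the MySQL error log usually name the real failure
tail -n 100 /var/log/mysqld.log
```

If the OOM killer is the culprit, the fact that a manual `service mysqld start` works fits: memory pressure at crash time is gone by the time you restart by hand.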
Cannot change root access in MySQL to % Posted: 05 Jul 2013 08:43 PM PDT This is a new MySQL Server install. I created my root password as part of the setup (on CentOS 6.4). When I connect to the server through a terminal, I can connect to MySQL and issue commands using my root password. select current_user; gives me: If I do: I get: But when I do: Here's what I get: Am I not supposed to see another line for root@% in addition to root@localhost? The real issue I'm having is that I can't connect to MySQL from outside of localhost (I'm currently logged in through a terminal session), and if the MySQL server is not giving root universal access (root@%), that would explain the problem. When I try to connect using PHP (from my local Mac), the following is the returned MySQLi object: I realize that granting root access from % is not a great idea, but at this point I'm trying to figure out why I can't connect to the MySQL server at all; once I solve that, I will restrict access.
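MySQL treats `root@localhost` and `root@'%'` as two separate accounts, and a default install only creates the localhost ones. A minimal sketch of creating the wildcard-host account (the password 'secret' is a placeholder):

```sql
-- Run as root@localhost
CREATE USER 'root'@'%' IDENTIFIED BY 'secret';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;
FLUSH PRIVILEGES;

-- Verify both accounts now exist
SELECT user, host FROM mysql.user WHERE user = 'root';
```

Even with the account in place, remote connections can still fail if my.cnf has `skip-networking` or `bind-address = 127.0.0.1`, or if the firewall blocks port 3306 — worth checking those before blaming the grants.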
"Error at line 1: ORA-00942: table or view does not exist." Sqlplus basic use, querying. Unable to retrieve schema from table "Course" or "emp" Posted: 05 Jul 2013 06:38 PM PDT I installed Oracle APEX 11g on my computer, which runs 64-bit Windows 7. When I open the command prompt and try to execute the following query: "Select * from course;", I receive the following error: "Error at line 1: ORA-00942: table or view does not exist." I also receive that error when trying to execute "select * from emp;". I have also tried connecting as a DBA and I receive the same error, and the same thing happens when I query after logging in with my regular user id and password. I would like to learn SQL*Plus, but right now I just want to learn how to query tables. I understand the queries above, yet I receive the error. Can someone please help? Thank you.
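ORA-00942 usually just means the table is not in the schema you are logged into. A quick way to see what your current user can actually query, and to reach the classic EMP demo table if the sample schema exists (whether SCOTT is installed and unlocked depends on the installation — an assumption here):

```sql
-- What tables does the current user own?
SELECT table_name FROM user_tables;

-- What tables can the current user see at all?
SELECT owner, table_name FROM all_tables WHERE table_name IN ('EMP', 'COURSE');

-- EMP traditionally lives in the SCOTT sample schema; qualify it:
SELECT * FROM scott.emp;
```

If `all_tables` shows no COURSE table anywhere, it was simply never created, and the fix is to create it (or run the sample-schema scripts) rather than to change how you query.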
MySQL database datafiles in windows SystemData directory Posted: 05 Jul 2013 02:45 PM PDT When I create a database and some tables in MySQL Server, it creates some data files in Windows. Each table has three files with these suffixes: I don't know what those are exactly. Please explain what they are and what they are used for. Thanks in advance.
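Three files per table suggests the MyISAM engine (an assumption, since the suffixes were stripped from the question): `.frm` holds the table definition, `.MYD` the data, and `.MYI` the indexes. InnoDB tables with `innodb_file_per_table` instead produce a `.frm` plus a `.ibd` file. To find where MySQL keeps these files:

```sql
-- The directory under which each database gets its own subfolder
SHOW VARIABLES LIKE 'datadir';

-- Which engine a given table uses (table name is a placeholder)
SHOW TABLE STATUS LIKE 'my_table';
```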
List of variable defaults for InnoDB engine Posted: 05 Jul 2013 02:29 PM PDT I would like to list the defaults for the global variables that are InnoDB-specific. The problem is that the following commands don't list any InnoDB variables: However, I can see the runtime ones with:
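One way to get compiled-in defaults, as opposed to the values the running server is using, is to ask the mysqld binary itself (binary name and client flags may differ per platform — a sketch):

```shell
# Compiled-in defaults, ignoring any my.cnf (server need not be running)
mysqld --no-defaults --verbose --help | grep -i innodb

# Runtime values, for comparison
mysql -e "SHOW GLOBAL VARIABLES LIKE 'innodb%'"
```

Diffing the two outputs shows exactly which InnoDB settings your my.cnf has changed from the defaults.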
Find records in same table with different company id but some other same values Posted: 05 Jul 2013 12:59 PM PDT I have a subscriptions table that looks roughly like: table subscriptions I would like to find records that: A) are from different companies (company id) but have the same city/state B) are from different companies (company id) but have the same subscription date For the address case, I was doing: Is this an efficient and accurate way to do it?
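The usual shape for this is a self-join where the equality columns match and the company ids differ; using `<` instead of `<>` keeps each pair from appearing twice. Column names below are assumptions based on the description:

```sql
-- A) different company, same city/state
SELECT a.id, b.id, a.city, a.state
FROM subscriptions AS a
JOIN subscriptions AS b
  ON  a.city  = b.city
  AND a.state = b.state
  AND a.company_id < b.company_id;   -- "<" avoids mirrored duplicates

-- B) different company, same subscription date
SELECT a.id, b.id, a.subscription_date
FROM subscriptions AS a
JOIN subscriptions AS b
  ON  a.subscription_date = b.subscription_date
  AND a.company_id < b.company_id;
```

For the join to stay fast as the table grows, a composite index such as `(city, state, company_id)` for case A (and `(subscription_date, company_id)` for case B) would help.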
Return table based on 1, 2, 3, 4, 5 or 0 selected search options Posted: 05 Jul 2013 03:45 PM PDT Scenario: Let's say I have several tables for a software ticketing system: Developer, Supervisor, TicketType, TicketStatus and a Ticket table to join them on. Assume I have done my inner joins and now have a complete table listing all assigned ticket info. Obviously the SQL for this is pretty basic: So that's the DEFAULT static portion of the report. This is what initializes first; however, the client, customer, manager, etc. wants the option of selecting zero, one, or more fields to sort data on for all fields in the report. For example: The user may choose 'app' from the TicketName category, 'Josh' as Developer, and BETWEEN date1 AND date2, while TicketStatus remains NULL or ' '. The table should display all 'app' tickets assigned to 'Josh' between the selected dates, while not breaking due to TicketStatus not being selected. Any ticket status should appear. The user could choose not to select any values, in which case the report should default to the example above, without breaking. Meaning I cannot use NULL as a placeholder because it would cause the initial report not to initialize (unless I did something wrong?) The user can basically mix and match any variation of the fields, and the table should sort on those selected fields. Question: How can someone return a table based on all or no options being selected, without breaking the query? Additional Info: Using SQL Server 2008 R2, Business Intelligence Development Studio, T-SQL. I have tried declaring variables and using LIKE statements, but this requires the user to select an option before the report will return anything. The user may not know what the field values are from memory.
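The common T-SQL pattern for optional filters is a "catch-all" predicate per parameter: a NULL parameter disables its own filter, so zero selections returns everything. A sketch under assumed column and parameter names (not the report's actual ones):

```sql
CREATE PROCEDURE dbo.SearchTickets
    @TicketName varchar(100) = NULL,
    @Developer  varchar(100) = NULL,
    @Status     varchar(50)  = NULL,
    @DateFrom   date         = NULL,
    @DateTo     date         = NULL
AS
SELECT t.*
FROM Ticket AS t
WHERE (@TicketName IS NULL OR t.TicketName   = @TicketName)
  AND (@Developer  IS NULL OR t.Developer    = @Developer)
  AND (@Status     IS NULL OR t.TicketStatus = @Status)
  AND (@DateFrom   IS NULL OR t.AssignedDate >= @DateFrom)
  AND (@DateTo     IS NULL OR t.AssignedDate <= @DateTo)
OPTION (RECOMPILE);  -- build a plan for the parameters actually supplied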
PostGIS Query that selects from over 124GB of Indexed data, how to Optimize? Posted: 05 Jul 2013 03:39 PM PDT I have a query: Everything that I am selecting from is indexed. The reports table is 164GB with an index size of 124GB. The EXPLAIN ANALYZE information is: The last time I ran the query it took ~50 minutes. I understand that I will eventually reach the point where going through a table with hundreds of millions of records will take a long time, and that the network will become a factor, but I ideally need to increase the time frame of the query from 1 hour to two months. Any help is greatly appreciated.
pt-table-checksum - Diffs cannot be detected because no slaves were found (1) Posted: 05 Jul 2013 10:13 AM PDT I'm new to the Percona tools. I am trying to use But I get the error. My.cnf Do I have to use a DSN? And how does it work?
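By default pt-table-checksum discovers slaves via the processlist/`SHOW SLAVE HOSTS`, which fails when the slaves have no `report_host` set. You can point it at the slaves explicitly; host names and credentials below are placeholders:

```shell
# Run on the master; discover slaves via SHOW SLAVE HOSTS
# (requires report_host to be set in each slave's my.cnf)
pt-table-checksum h=master.example.com,u=checksum_user,p=secret \
  --recursion-method=hosts

# Or register each slave in a DSN table and reference it:
#   CREATE TABLE percona.dsns (id INT PRIMARY KEY, dsn VARCHAR(255));
#   INSERT INTO percona.dsns VALUES (1, 'h=slave1.example.com');
pt-table-checksum h=master.example.com,u=checksum_user,p=secret \
  --recursion-method=dsn=h=master.example.com,D=percona,t=dsns
```

So yes — the DSN table is one supported answer: each row is a DSN string describing one slave the tool should check for diffs.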
Cron job for Mysql Incremental backup Posted: 05 Jul 2013 12:02 PM PDT I want to run a cron job for MySQL backups. It worked, but "mysqlbackup" is not working. The MySQL documentation says that we need to use the "mysqlbackup" command rather than the "mysqldump" command for incremental backups.
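One likely cause: `mysqlbackup` ships with MySQL Enterprise Backup, not with the Community server, so it may simply not be installed or not on cron's minimal PATH. Assuming it is installed, a sketch of a full-plus-incremental scheme (paths, credentials, and schedule are placeholders):

```shell
# Weekly full backup
mysqlbackup --user=backup --password=secret \
  --backup-dir=/backups/full backup

# Nightly incremental, based on whatever backup ran last
mysqlbackup --user=backup --password=secret \
  --incremental --incremental-base=history:last_backup \
  --incremental-backup-dir=/backups/inc-$(date +%F) backup
```

In the crontab, always use the absolute path and capture output, since cron's environment hides errors: `30 2 * * * /usr/bin/mysqlbackup ... >> /var/log/mysqlbackup.log 2>&1`.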
Removing superfluous tables with a synchronization? Posted: 05 Jul 2013 10:11 AM PDT I'm using SQL Server 2008 R2's replication functionality to update my subscriber database through a transactional pull subscription. When I mark it for reinitialization, it does fix the schema and data of any modified local tables that exist in the publication snapshot, but it doesn't remove any new tables (and presumably SPs, triggers, etc.) that have been added. Is there any way to get the synchronization to remove superfluous objects like tables that don't exist in the publication snapshot, in addition to updating and adding existing objects?
InnoDB query duration inconsistent Posted: 05 Jul 2013 12:23 PM PDT I am running a series of UPDATE commands on a nearly empty InnoDB table, and around 1 out of every 20-30 queries will inexplicably take 10 times as long as the others. For example, the first 20 updates will take 20ms, and the 21st update will suddenly take 200ms. I've set up an incredibly basic test: I insert a single row into the table, and then I have a C# console program that does a series of updates: This is the output I see from the program: If I run "SHOW PROFILE FOR QUERY" with the 152ms time, both the "Updating" and "query end" values are abnormally high. If I switch the table to MyISAM, the query duration is perfect, but I don't want table-locking. Does anyone have a guess as to what is making InnoDB act this way?
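Periodic spikes in "query end" often point at redo-log flushing: by default InnoDB syncs the log to disk at every commit, and every so often a flush or checkpoint stalls one unlucky query. A hedged experiment to confirm the theory (this relaxes durability — up to ~1 second of commits can be lost on a crash):

```sql
-- Flush the redo log once per second instead of at every commit
SET GLOBAL innodb_flush_log_at_trx_commit = 2;

-- Checkpoint stalls also shrink with a larger redo log
-- (needs a my.cnf change and restart, e.g. innodb_log_file_size = 256M)
SHOW GLOBAL VARIABLES LIKE 'innodb_log_file_size';
```

If the outliers disappear with the setting at 2, the spikes were fsync latency on commit, which MyISAM never pays because it does not have a transactional log — matching the observation that MyISAM timings were flat.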
TOAST Table Growth Out of Control - FULLVAC Does Nothing Posted: 05 Jul 2013 10:54 AM PDT Recently, I've had a PostgreSQL 8.2.11 server upgraded to 8.4 in order to take advantage of autovacuum features and be in line with 30ish other PGSQL servers. This was done by a separate IT group who administrates the hardware, so we don't have much choice on any other upgrades (won't see 9+ for a while). The server exists in a very closed environment (isolated network, limited root privileges) and runs on RHEL5.5 (i686). After the upgrade, the database has constantly been growing to the tune of 5-6 GB a day. Normally, the database, as a whole, is ~20GB; currently, it is ~89GB. We have a couple other servers which run equivalent databases and actually synchronize the records to each other via a 3rd party application (one whose inner workings I do not have access to). The other databases are ~20GB as they should be. Running the following SQL, it's fairly obvious there's an issue with a particular table and, more specifically, its TOAST table. Which produces: relation | size ------------------------------------+--------- pg_toast.pg_toast_16874 | 89 GB fews00.warmstates | 1095 MB ... (20 rows) This TOAST table is for a table called "timeseries" which saves large records of blobbed data. A full VACUUM does nothing. I have
REINDEXed the table which freed some space (~1GB). I can't CLUSTER the table as there isn't enough space on disk for the process, and I'm waiting to rebuild the table entirely as I'd like to find out why it is so much bigger than equivalent databases we have. Ran a query from the PostgreSQL wiki here - "Show Database Bloat", and this is what I get: current_database | schemaname | tablename | tbloat | wastedbytes | iname | ibloat | wastedibytes -----------------+------------+--------------------------------+--------+-------------+---------------------------------+--------+-------------- ptrdb04 | fews00 | timeseries | 1.0 | 0 | idx_timeseries_synchlevel | 0.0 | 0 ptrdb04 | fews00 | timeseries | 1.0 | 0 | idx_timeseries_localavail | 0.0 | 0 ptrdb04 | fews00 | timeseries | 1.0 | 0 | idx_timeseries_expirytime | 0.0 | 0 ptrdb04 | fews00 | timeseries | 1.0 | 0 | idx_timeseries_expiry_null | 0.0 | 0 ptrdb04 | fews00 | timeseries | 1.0 | 0 | uniq_localintid | 0.0 | 0 ptrdb04 | fews00 | timeseries | 1.0 | 0 | pk_timeseries | 0.1 | 0 ptrdb04 | fews00 | idx_timeseries_expiry_null | 0.6 | 0 | ? | 0.0 | 0 It looks like the database doesn't consider this space as "empty," at all, but I just don't see where all the disk space is coming from! I suspect that this database server is deciding to use 4-5x as much disk space to save the same records pulled from the other data servers. My question is this: Is there a way I can verify the physical disk size of a row? I'd like to compare the size of one row on this database to another "healthy" database. Thanks for any help you can provide! UPDATE 1 I ended up rebuilding the table from a dumped schema due to its size (couldn't leave it alone for another day). After synchronizing the data, via the software synch process, the TOAST table was ~35GB; however, I could only account for ~9GB of it from that blob column which should be the longest in terms of values. Not sure where the other 26GB is coming from. 
I have CLUSTERed, VACUUM FULLed, and REINDEXed, to no avail. The postgresql.conf files on the local and remote data servers are exactly the same. Is there any reason this database might be trying to store each record with a larger footprint on disk? UPDATE 2 - Fixed I finally decided to just completely rebuild the database from the ground up, even going as far as to reinstall the PostgreSQL84 packages on the system. The database path was reinitialized and the tablespaces wiped clean. The 3rd party software synchronization process repopulated the tables, and the final size came out to be ~12GB! Unfortunately, this in no way helps to identify the exact source of the issue. I'm going to watch it for a day or two to see if there are any major differences in how the revitalized database handles the TOAST table, and I'll post those results here.
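To the question of verifying the physical size of a row: PostgreSQL can report the on-disk size of a tuple and of an individual (possibly compressed, possibly TOASTed) column. A sketch, with `id` and `blob_col` standing in for the real column names:

```sql
-- Per-row physical sizes, largest first
SELECT id,
       pg_column_size(t.*)      AS row_bytes,          -- whole tuple as stored
       pg_column_size(blob_col) AS blob_stored_bytes,  -- after TOAST compression
       octet_length(blob_col)   AS blob_raw_bytes      -- uncompressed length
FROM   fews00.timeseries AS t
ORDER  BY pg_column_size(t.*) DESC
LIMIT  20;

-- Table plus its TOAST table and indexes, total
SELECT pg_size_pretty(pg_total_relation_size('fews00.timeseries'));
```

Comparing `blob_stored_bytes` between the bloated server and a healthy one for the same logical record would show directly whether each row really occupies 4-5x the space.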
"SQL1042C An unexpected system error" when creating a database Posted: 05 Jul 2013 11:37 AM PDT I installed DB2 Express-C 10.1 on OS X 10.8.4. I installed it in user mode, and it seems to have created an instance with my username. Now, I am trying to create a database by running What am I missing here? How can I create a simple database from the command line (I prefer not to bother with Data Studio, as it isn't available for OS X)? Note that for the install to go through, I increased the OS X shared memory, per this recommendation.
mysql replication goes out of sync for some tables Posted: 05 Jul 2013 11:48 AM PDT We are running MySQL 5.1.61 on Red Hat systems and have the following setup: one master and four slaves replicating from the master. We recently added a new slave, and over a few days we have noticed that on the newly added slave some tables (not all) lose some records. This happens only on this slave and it is not regular; over a period of 3 weeks this issue seems to have happened on 5-7 days. We use statement-based replication. I am not sure why this happens on only one slave. There seem to be no errors in the MySQL error logs. The only difference between the old slaves and the new slave is that the new slave has slightly less RAM than the other ones, but the new slave is not being used for anything right now. Is there a way to troubleshoot this issue to see why it happens on only one slave? Could it be network-related or anything else? Any pointers on where to start looking? Here is the memory info Old slave New slave
Force View's query plan to update? Posted: 05 Jul 2013 02:48 PM PDT I have a View whose query plan appears to be cached; is there a way to force the View's plan to be recalculated on each access?
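Assuming this is SQL Server (the question doesn't say), there are two usual levers: evict the cached plans that reference the view, or force recompilation per query. The view name below is a placeholder:

```sql
-- Mark every cached plan that references the view for recompilation
-- on its next use
EXEC sp_recompile N'dbo.MyView';

-- Or force a fresh plan on each individual access
SELECT * FROM dbo.MyView OPTION (RECOMPILE);
```

If the plan keeps going stale, the underlying cause is often out-of-date statistics on the base tables, in which case `UPDATE STATISTICS` on those tables is the more durable fix than recompiling on every access.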
MySQL+Web-App Performance Issues using different servers Posted: 05 Jul 2013 12:48 PM PDT We are having a performance issue with our MySQL servers that does not make any sense. I have read countless articles from many people (mostly Percona) and have made my.cnf tweaks. We even managed to squeeze out another 30% more TPS thanks to those Percona articles. However, our problem is with our in-house web app (a Tomcat/Java/Apache model). It performs poorly when connected to certain servers - the better-hardware servers. Here is the symptom: If we point our test application server (Ubuntu, Apache, Tomcat, Java) to server MYSQL02, the application's performance is acceptable. However, if we point the application to MYSQL01 or MYSQL03 (and these two boxes are idle!), the application's performance is poor. There are high latencies. Example: We cannot figure out why! The MySQL servers and MONyog do NOT report any problems! If we execute the statements (100's of them) manually, they return instant results, and their explain plans show they are all using indexes. We do NOT get any slow-query, deadlock, or contention notifications. Here is some basic information about our MySQL systems. They are all DEDICATED MySQL servers: PROD (current production, not in replication farm, standalone) MYSQL01 MYSQL02 MYSQL03 We used sysbench to test and tweak all the above systems, and here are the test results with notes. NOTE: TPS = Transactions Per Second Results before applying any new tweaks: Results after my.cnf tweaks: We are unsure why MYSQL01's performance is so poor. We can only surmise that there is an OS, RAID card, or BIOS setting that may be improperly set. I am leaning towards the RAID card/configuration. The only way to know for sure is to shut down this server and scrutinize the configuration. A reload may be necessary.
However, since our ultimate plan is to make the current PROD hardware the primary production MySQL server, we may leave MYSQL01 alone for now and repurpose the hardware after migrating to the 5.5 farm. However, we can't migrate until we figure out why our application behaves so poorly on certain hardware. Does anyone have any suggestions?
SQL Server database schema (and likely, some data changes) - how to auto-distribute over many database instances Posted: 05 Jul 2013 10:48 AM PDT Our development involves a SQL Server database (which might also be Oracle or Postgres later), and we sometimes make database schema changes or other interventions in the database. What solutions exist to create a "patch" or "script" to distribute these changes to other installations of the same database (we do not have direct access to these)? It needs to alter the database schema and execute SQL and/or other complex, pre-programmed data alterations as defined by the person who initiates/designs the change. On other instances, a system admin should be able to just run some program/press a button so these changes are applied automatically. In addition, it is a plus if such a solution can take a database snapshot and derive a "difference" of the contents of a particular table that would then be distributed. The solution can be commercial. Thanks in advance!
MySQL slap with custom query Posted: 05 Jul 2013 06:48 PM PDT I want to conduct a stress test on our MySQL DB. I have the list of queries I need to execute. I have tried using Apache JMeter for this, but it is very time-consuming. Is it possible to run mysqlslap with a custom .sql file containing INSERT, UPDATE, and SELECT queries on a specified MySQL database?
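Yes — mysqlslap can replay a file of delimiter-separated statements via `--query`. A sketch (paths, schema name, and counts are placeholders):

```shell
mysqlslap --user=root -p \
  --create-schema=mydb \
  --query=/path/to/queries.sql --delimiter=";" \
  --concurrency=25 --iterations=10
```

One caution worth verifying on your version: mysqlslap can drop the schema it was pointed at when the run finishes, so test against a throwaway database first (newer releases have a `--no-drop` option to prevent this).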
Overview of how MongoDB uses its various threads Posted: 05 Jul 2013 01:48 PM PDT On one instance I have MongoDB using ~85 threads. Lacking the time to investigate directly, I am curious:
MySQL information_schema doesn't update Posted: 05 Jul 2013 08:48 PM PDT I have a database, say After I run the query The strange thing is that I find the database size doesn't decrease at all, although the data in "test" is gone. I've done this kind of test many times, and this strange behavior happens sometimes. I'm using Can anybody tell me what is wrong? Update: Actually, I use another thread to check the database size periodically.
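Assuming the table is InnoDB, this can be expected: deleting rows does not shrink the data files — the space is marked reusable inside the tablespace, and the sizes in `information_schema.TABLES` are estimates anyway. A way to see what is really happening (schema name is a placeholder):

```sql
-- data_free grows after deletes while the file size stays constant
SELECT table_name, engine, data_length, index_length, data_free
FROM   information_schema.tables
WHERE  table_schema = 'your_db';

-- With innodb_file_per_table enabled, this rebuilds the table and
-- actually shrinks its .ibd file
OPTIMIZE TABLE your_db.test;
```

Without `innodb_file_per_table`, everything lives in the shared ibdata1 file, which never shrinks regardless of how much data is deleted.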
How to successfully run a batch file in an SQL Agent job? Posted: 05 Jul 2013 07:19 PM PDT I have a SQL Agent job which generates a specific report as a PDF file, then copies the PDF to a network directory and deletes the PDF file from the source directory. The SQL job consists of 2 steps: 1. Generate the report 2. Copy the report to the network location. For step 2 I made a bat file which handles the copying and removal of the PDF file. The bat file is as follows: However, when I run the job, it hangs on the second step. The status just stays on "Executing". This is the line which I put in the step (location of the bat file to execute): My job settings are as follows: Step 1 Type: Operating system (CmdExec) On Success: Go to the next step On Failure: Quit the job reporting failure Step 2 Type: Operating system (CmdExec) On Success: Quit the job reporting success On Failure: Quit the job reporting failure Some facts:
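A CmdExec step that hangs on "Executing" is very often a command in the bat file waiting for input (an overwrite prompt, for instance) or a mapped drive letter that does not exist for the SQL Agent service account. A minimal sketch of a copy-and-delete bat file written around those pitfalls (all paths are placeholders, not the asker's actual ones):

```bat
@echo off
rem Use UNC paths: the Agent service account has no mapped drives.
rem /Y suppresses the overwrite prompt so the step can never block on input.
copy /Y "D:\Reports\report.pdf" "\\fileserver\reports\" || exit /b 1
del /Q "D:\Reports\report.pdf"
exit /b 0
```

The explicit exit codes matter: SQL Agent decides success or failure from the process exit code, and the service account needs write permission on the network share.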
MySQL partitioned tables? Posted: 05 Jul 2013 03:48 PM PDT I have a database that supports a web application with several large tables. I'm wondering if partitioned tables will help speed up certain queries. Each of these tables has a column called client_id. Data for each client_id is independent from every other client_id. In other words, web queries will always contain a where clause with a single client_id. I'm thinking this may be a good column on which to partition my large tables. After reading up on partitioned tables, I'm still a little unsure how best to partition. For example, a typical table may have 50 million rows distributed more or less evenly across 35 client_ids. We add new client_ids periodically, but in the short term the number of client_ids is relatively fixed. I was thinking something along these lines: My question: Is this an optimal strategy for partitioning these types of tables? My tests indicate a considerable speedup over indexing on client_id, but can I do better with some other form of partitioning (i.e. hash or range)?
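For a slowly growing set of ids where every query filters on a single client_id, KEY (or HASH) partitioning is the low-maintenance option, since new client_ids land in existing partitions without DDL. A sketch with an assumed table name:

```sql
ALTER TABLE big_table
  PARTITION BY KEY (client_id)
  PARTITIONS 16;

-- Caveat: every PRIMARY/UNIQUE key must include the partitioning column,
-- so a typical layout becomes:
--   PRIMARY KEY (id, client_id)
```

Queries with `WHERE client_id = ?` then prune to a single partition. LIST partitioning (one partition per client_id) prunes just as well but requires an `ALTER TABLE ... ADD PARTITION` every time a client is added, which is why it is usually reserved for genuinely fixed sets.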
Limit memory used for mongoDb Posted: 05 Jul 2013 09:48 AM PDT Is there any way to limit the RAM used by MongoDB on Debian? I've been looking for a solution for about 8 hours, but have found nothing.
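mongod of this era has no built-in memory cap — it memory-maps its data files and lets the OS page cache use whatever is free — so the usual workaround on Debian is an OS-level limit via cgroups. A sketch, assuming the cgroup-tools (libcgroup) utilities and the v1 memory controller are available; the 4G value and config path are examples:

```shell
# Create a memory cgroup and cap it at 4 GB
cgcreate -g memory:mongodb
echo 4G > /sys/fs/cgroup/memory/mongodb/memory.limit_in_bytes

# Start mongod inside the cgroup
cgexec -g memory:mongodb mongod --config /etc/mongodb.conf
```

Note the cap includes the mapped files' resident pages, so setting it too low trades RAM for page faults rather than making the working set smaller.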
Pgpool, Postgresql and Apache tuning (1000 concurrent spatial queries) Posted: 05 Jul 2013 08:48 AM PDT I'm trying to configure a load-balancing system. I have a Python script, invoked through mod_wsgi on Apache, that generates a query and executes it on pgpool: request -> wsgi python -> pgpool -> postgresql. Pgpool is configured as a load balancer using 4 servers with 24GB RAM and a 350GB SSD each. Our DB is about 150GB and a query takes about 2 seconds. These are the configurations: Pgpool
Apache (mpm_prefork)
PostgreSQL
It doesn't seem to be working. When I try to submit more than 150 concurrent queries, I get this error from the Python script, although the pgpool log file doesn't contain any errors:
Any ideas?
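Failures that start at exactly ~150 concurrent requests suggest a connection-count ceiling somewhere in the chain rather than a database slowdown — 150 is a common distro default for Apache's prefork MaxClients, for example. The arithmetic to check, sketched as an illustrative pgpool.conf fragment (values are examples, not recommendations):

```
# pgpool.conf
num_init_children = 200   # max concurrent clients pgpool will accept;
                          # Apache's MaxClients must not exceed this
max_pool          = 1     # backend connections cached per child; keep
                          # num_init_children * max_pool <=
                          #   (PostgreSQL max_connections
                          #    - superuser_reserved_connections)
child_life_time   = 300
connection_cache  = on
```

If any layer's limit is smaller than the layer in front of it, excess requests either queue or error out at the client, which matches errors appearing in the Python script while pgpool's log stays clean.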
innodb_file_format Barracuda Posted: 05 Jul 2013 11:26 AM PDT I have a couple of questions for those more familiar with this. Most of my instances have been running Antelope despite having support for Barracuda. I was looking to play around with some compressed InnoDB tables. My understanding is that this is only available under the Barracuda format.
From what I've read and gathered from my tests, the answers are: Yes. Yes. I'm not sure. Update I've been running with some Dynamic and some Compressed tables in various instances since this post, without issue. Further, I had neglected to read http://dev.mysql.com/doc/refman/5.5/en/innodb-file-format-identifying.html at the time.
So tables will be created as Antelope even if you allow Barracuda. The mixing is unavoidable unless you specify every table with ROW_FORMAT=DYNAMIC or as a compressed table. There is no indication that you should do a complete dump and reload when introducing your first Barracuda table (such as is recommended when upgrading major versions of MySQL).
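The moving parts described above can be sketched concretely for MySQL 5.5 (both server variables are dynamic; the table is an illustrative example):

```sql
-- Both are required before Barracuda row formats take effect
SET GLOBAL innodb_file_format    = Barracuda;
SET GLOBAL innodb_file_per_table = ON;

-- New tables still default to Antelope (Compact) unless a Barracuda
-- row format is requested explicitly:
CREATE TABLE t_compressed (
  id      INT PRIMARY KEY,
  payload TEXT
) ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;

-- Verify which row format a table actually got
SHOW TABLE STATUS LIKE 't_compressed';
```

Because the row format is chosen per table, existing Antelope tables coexist with new Barracuda ones in the same instance, which is consistent with the mixed setup running without issue above.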
You are subscribed to email updates from Recent Questions - Database Administrators Stack Exchange.