[how to] MySQL Database "Table Doesn't Exist" When Clicked in phpMyAdmin
- MySQL Database "Table Doesn't Exist" When Clicked in phpMyAdmin
- How to split/explode comma delimited string field into SQL query
- MySQL needs more space
- Design DB for users with different information fields?
- Transactions' order of commitment within Serializable schedules
- MySQL: Optimizing for large but discrete data sets
- Cannot create stored procedure
- synchronizing local and server database
- Should I join datetime to a date using cast or range?
- Booted by MySQL Error (2003) 10060 midway through work
- insufficient privileges while executing oracle stored procedure?
- One Materialized View in Two Refresh Groups
- "ORA-03113: end-of-file on communication channel" on startup
- Create Login command error
- MySQL gives me: "Can't open and lock privilege tables: Table 'host' is read only"
- Percona Xtradb Cluster : How to speed up insert?
- How to design a table where each row has 5K boolean attributes?
- Is there a way to implement a cross-database task on SQL Server 2012 with the Availability Groups feature?
- Migrating from SQL Server to MySQL using MySQL Workbench tool
- Restoring database to UNC path on local drive
- how to run Db2 export command in shell
- Import from incremental backups to a new host in Oracle 11g
- InnoDB Tablespace critical error in great need of a fix
- Synchronize mysql databases between local and hosted servers automatically
- effective mysql table/index design for 35 million rows+ table, with 200+ corresponding columns (double), any combination of which may be queried
- SELECTing multiple columns through a subquery
- Designing Simple Schema for Disaggregation of Demand Forecast
- Cast to date is sargable but is it a good idea?
- T-SQL Table Valued Function to Split a Column on commas
MySQL Database "Table Doesn't Exist" When Clicked in phpMyAdmin Posted: 12 Oct 2013 02:36 PM PDT I recently updated MAMP (the localhost stack for Mac) to the latest version, 2.2, in order to get the latest versions of Apache, MySQL, and PHP. After the upgrade, all my localhost websites are unusable: they can't load in the browser (with MAMP running). I see that the MySQL database files end in .frm (the table format files). When I click on a table in phpMyAdmin, it says "Table does not exist," even though the table is listed in phpMyAdmin and present in the folder for that particular database inside MAMP/db. How do I fix this so that I can edit my websites locally?
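One quick check, as a sketch assuming the usual cause (the .frm definition files survived the MAMP upgrade but the InnoDB system tablespace, ibdata1, did not come along): tables whose .frm exists without its InnoDB data are still listed, but report no storage engine. The schema name below is a placeholder for one of your databases.

    -- Orphaned .frm files show up with a NULL engine in the data dictionary.
    SELECT table_name, engine
    FROM information_schema.tables
    WHERE table_schema = 'your_db'   -- hypothetical: substitute your database name
      AND engine IS NULL;

If this lists your tables, the usual remedy is to stop MySQL and restore the old MAMP ibdata1 (and ib_logfile*) files alongside the .frm files.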
How to split/explode comma delimited string field into SQL query Posted: 12 Oct 2013 05:57 AM PDT I have a field containing a comma-delimited list of values, and I want to use those values in a search query.
Used directly, this doesn't work, so I need some way to split the string. I'm using the T-SQL dialect of Sybase ASA 9 (SQL Anywhere). The way I see it, I should create my own function that loops through the string, extracts each element based on the position of the delimiter, and inserts the elements into a temporary table, which the function then returns as its result.
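A minimal sketch of that loop approach, written in generic T-SQL (ASA 9's T-SQL dialect is close, though details such as temp-table declaration may differ; the table and column names in the final SELECT are hypothetical):

    DECLARE @list varchar(8000), @pos int;
    SET @list = 'red,green,blue';
    CREATE TABLE #items (item varchar(256));

    WHILE LEN(@list) > 0
    BEGIN
        SET @pos = CHARINDEX(',', @list);
        IF @pos = 0
        BEGIN
            INSERT INTO #items VALUES (@list);   -- last element
            SET @list = '';
        END
        ELSE
        BEGIN
            INSERT INTO #items VALUES (LEFT(@list, @pos - 1));
            SET @list = SUBSTRING(@list, @pos + 1, LEN(@list));  -- advance past the comma
        END
    END

    SELECT * FROM products WHERE category IN (SELECT item FROM #items);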
MySQL needs more space Posted: 12 Oct 2013 04:55 AM PDT I am using a program to import a Wikipedia dump into my local MySQL server. The program is still running; I started it four days ago. Unfortunately, drive C: is about to become full. I have two HDDs connected to my PC, 80 GB each, and the second HDD is empty. How can I prevent the program from throwing an exception? It has no pause option. Is it possible to use the second HDD in this scenario?
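If the import can be stopped cleanly, one option (a sketch, assuming the data lives in the InnoDB system tablespace; all paths and sizes below are placeholders) is to let InnoDB extend onto the second drive: cap the first data file at its exact current size and add an autoextending second file on D:. The stated size must match the real file size exactly, or mysqld will refuse to start.

    [mysqld]
    # leave the home dir empty so absolute paths can point at different drives
    innodb_data_home_dir =
    innodb_data_file_path = C:/mysql/data/ibdata1:50G;D:/mysql-data/ibdata2:10M:autoextend

If innodb_file_per_table is enabled instead, moving the whole data directory to the second drive and pointing datadir at it is the simpler route.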
Design DB for users with different information fields? Posted: 12 Oct 2013 10:46 AM PDT Let's say I want to register all faculty of a university; they are in different fields and have different résumé information. There are some common fields and some field-specific ones: for example, CS faculty, medicine faculty, and economics faculty would each have their own set of attributes, and we may also have students in the system. I have a first guess at a design, but I'm new to the DB field and don't have enough experience designing databases, so I wanted to know the best approach in this kind of situation. I'm after a general answer: I don't know whether the choice of DBMS matters, and I don't mind switching to another open-source database (SQL or NoSQL).
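One common shape for this, sketched in generic SQL with hypothetical field-specific columns (MySQL would want explicit FOREIGN KEY clauses), is the supertype/subtype pattern: shared columns live in one table, and each field gets its own table of specific columns.

    CREATE TABLE person (
        person_id  INT PRIMARY KEY,
        name       VARCHAR(100) NOT NULL,
        department VARCHAR(50)  NOT NULL   -- 'CS', 'Medicine', 'Economics', 'Student', ...
    );

    CREATE TABLE cs_faculty (
        person_id     INT PRIMARY KEY REFERENCES person (person_id),
        github_handle VARCHAR(100)         -- hypothetical CS-specific field
    );

    CREATE TABLE medicine_faculty (
        person_id      INT PRIMARY KEY REFERENCES person (person_id),
        license_number VARCHAR(50)         -- hypothetical medicine-specific field
    );

The main alternative is an attribute-value table (person_id, attribute, value), which trades schema rigor for flexibility when new fields appear often.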
Transactions' order of commitment within Serializable schedules Posted: 12 Oct 2013 02:31 AM PDT The following diagram was taken from a book (T1 and T2 are transactions which read and write database objects A and B). For convenience, I have quoted below the few lines of text in that book which describe the diagram.
My question is about this statement: "Even though the actions of T1 and T2 are interleaved, the result of this schedule is equivalent to running T1 (in its entirety) and then running T2." How can this be true if T2 commits before T1? Please give a detailed answer.
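For illustration, a schedule of the following shape has the property described (an assumed reconstruction, not necessarily the book's exact figure; time runs left to right):

    T1: R(A) W(A)            R(B) W(B)                 Commit
    T2:            R(A) W(A)            R(B) W(B) Commit

Every pair of conflicting operations (on A and on B) has T1's action first, so the schedule is conflict-equivalent to the serial schedule T1 then T2, even though T2's COMMIT comes first: equivalence is decided by the order of conflicting reads and writes, not by the order of the commit records.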
MySQL: Optimizing for large but discrete data sets Posted: 12 Oct 2013 10:17 AM PDT In brief: I'm developing a database that handles GTFS datasets from multiple transit agencies. Each dataset contains millions of rows in its stop_times.txt file (and thus in the corresponding table). Updating the table gets slower and slower as it grows. I can deal with a couple of million rows from a single agency, but what happens when I add 10 more feeds? 50? The data sets are completely independent of one another; I won't be trying to join information across DART, MTA, and Transport for London. I feel like it would be very bad database design, but I'm tempted to create a separate table for each feed and forget about the whole thing. I'm sure this has been answered somewhere, but I really don't know what to search for. I've read up a bit on partitioning, but I'm not sure whether it will solve my problem. Would adding a hash partition on my table solve it? Here's my current table structure: Thanks in advance for the help.
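Since the feeds are never joined, one middle ground (a sketch; it assumes a feed identifier column such as feed_id exists and is part of the primary key, which MySQL partitioning requires of every unique key) is to partition by feed rather than create a table per agency:

    ALTER TABLE stop_times
    PARTITION BY LIST (feed_id) (
        PARTITION p_dart VALUES IN (1),
        PARTITION p_mta  VALUES IN (2),
        PARTITION p_tfl  VALUES IN (3)
    );

Queries and bulk reloads that name a single feed_id then touch only that feed's partition, which behaves much like the separate-table idea without the schema sprawl.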
Cannot create stored procedure Posted: 12 Oct 2013 08:13 AM PDT I entered the following statement into the MySQL 5.6 command-line client, but received the error below. I haven't even been able to add the END// and DELIMITER ; after the SELECT statement. Separately, once the stored procedure has been created successfully, how do I CALL it using Java code rather than the command line? Kindly assist. Greatly appreciated!
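The usual culprit in the command-line client is the delimiter: the client splits input on ; so the procedure body gets cut off mid-definition. A minimal sketch (the procedure and table names are hypothetical):

    DELIMITER //
    CREATE PROCEDURE get_employee (IN p_id INT)
    BEGIN
        SELECT * FROM employees WHERE id = p_id;
    END//
    DELIMITER ;

    CALL get_employee(42);

From Java, the conventional route is JDBC's CallableStatement: prepareCall("{call get_employee(?)}"), set the parameter, then execute. No command line is involved.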
synchronizing local and server database Posted: 12 Oct 2013 08:19 AM PDT I am developing billing-system software. For this I have created a database containing many tables; it lives on the local system, and all transaction data is stored in this local database. I would like to keep backup copies of the tables on a server. Whenever the user of the billing software wants, he or she can upload the data to the server (only newly added data should be uploaded when existing data is already there). If the local data is corrupted or gets deleted for some reason, it can be downloaded back from the database server. All of these features should be available from within the billing software. How do I do this?
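One common approach (a sketch; the table and column names are hypothetical) is to stamp every row with a last-modified time and upload only rows newer than the previous successful sync:

    CREATE TABLE invoices (
        invoice_id    INT PRIMARY KEY,
        amount        DECIMAL(10,2),
        last_modified TIMESTAMP DEFAULT CURRENT_TIMESTAMP
                                ON UPDATE CURRENT_TIMESTAMP
    );

    -- rows changed since the last upload; the app stores that timestamp locally
    SELECT * FROM invoices WHERE last_modified > '2013-10-11 00:00:00';

MySQL replication is the heavier-weight alternative if the server copy must track the local one continuously.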
Should I join datetime to a date using cast or range? Posted: 12 Oct 2013 05:28 AM PDT This question is a take-off from the excellent one posed here: Cast to date is sargable but is it a good idea? In my case, I am not concerned with a single-row predicate: one table holds datetime values, and I am joining them to a table of dates. My question is which is preferable: casting the datetime column down to a date, or joining on a range that brackets each date. I expect to stay on the order of 2M rows on the datetime side. Should I expect the same behavior from both approaches? My generalized use case is to treat my events table like a calendar table.
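To make the two candidates concrete (a T-SQL-flavored sketch with hypothetical table and column names):

    -- Option 1: cast the datetime side down to a date
    SELECT c.cal_date, COUNT(e.event_id) AS events
    FROM calendar AS c
    LEFT JOIN events AS e
        ON CAST(e.event_time AS date) = c.cal_date
    GROUP BY c.cal_date;

    -- Option 2: half-open range against the same calendar row
    SELECT c.cal_date, COUNT(e.event_id) AS events
    FROM calendar AS c
    LEFT JOIN events AS e
        ON e.event_time >= c.cal_date
       AND e.event_time <  DATEADD(DAY, 1, c.cal_date)
    GROUP BY c.cal_date;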
Booted by MySQL Error (2003) 10060 midway through work Posted: 12 Oct 2013 10:24 AM PDT I was working on some queries when my HeidiSQL froze. I tried to reopen the connection and got good old MySQL error (2003) (10060). It had worked just fine before that. I haven't made any firewall changes, and I checked the "white list" of IPs on AWS; it is still fine. I have encountered this error code before, but never in the middle of a session with no changes. Thoughts? Edit 1: Edit 2:
insufficient privileges while executing oracle stored procedure? Posted: 12 Oct 2013 08:26 AM PDT I'm getting an "insufficient privileges" error while executing the following Oracle stored procedure. I'm using Oracle Database 10g Express Edition. I used the post Update oracle sql database from CSV to build this SP, and I could compile the stored procedure successfully. I have all the rights for this Oracle user because I'm the admin; I have granted every right I could think of. But when I execute the SP I get the error. Update: I'm not trying to update a user or a password, yet the error message reads as though I were trying to modify user details. When I run the same code step by step outside the stored procedure, it executes without any problem. What could be the reason, and how can I resolve the issue?
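A frequent cause worth ruling out (stated as an assumption, since the SP source is not shown): inside a definer-rights stored procedure, Oracle disables privileges that were granted through roles, including DBA, so a statement that works interactively can raise ORA-01031 inside the procedure. Two standard remedies, with hypothetical object names:

    -- grant the needed privilege directly to the owning user, not via a role
    GRANT READ, WRITE ON DIRECTORY csv_dir TO app_owner;

    -- or compile the procedure with invoker's rights so roles stay enabled
    CREATE OR REPLACE PROCEDURE load_csv AUTHID CURRENT_USER AS
    BEGIN
        NULL;  -- body elided; the point is the AUTHID clause
    END;
    /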
One Materialized View in Two Refresh Groups Posted: 12 Oct 2013 02:40 AM PDT I have five materialized views that I want to refresh on two occasions: every Sunday, and on the 1st of every month. I created a refresh group for the weekly schedule and that works fine, but when I try to create a second refresh group for the monthly schedule I get an error. Can you have a materialized view in only one refresh group? What options do I have to refresh it on different intervals?
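A materialized view can indeed belong to only one refresh group. One workaround (a sketch with a hypothetical MV name) is to skip the groups and schedule two independent jobs that each run a refresh:

    BEGIN
      DBMS_SCHEDULER.CREATE_JOB(
        job_name        => 'MV_WEEKLY_REFRESH',
        job_type        => 'PLSQL_BLOCK',
        job_action      => 'BEGIN DBMS_MVIEW.REFRESH(''MV_SALES''); END;',
        repeat_interval => 'FREQ=WEEKLY; BYDAY=SUN',
        enabled         => TRUE);

      DBMS_SCHEDULER.CREATE_JOB(
        job_name        => 'MV_MONTHLY_REFRESH',
        job_type        => 'PLSQL_BLOCK',
        job_action      => 'BEGIN DBMS_MVIEW.REFRESH(''MV_SALES''); END;',
        repeat_interval => 'FREQ=MONTHLY; BYMONTHDAY=1',
        enabled         => TRUE);
    END;
    /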
"ORA-03113: end-of-file on communication channel" on startup Posted: 12 Oct 2013 06:26 AM PDT I have been reading posts here, on Oracle support, and anywhere else I can find for the last three days and I've given up on this problem... An Oracle database hung. Shutdown of the database sat for a few hours and then it quit. It wouldn't restart. The server was restarted. Oracle was restarted. Going step by step: startup nomount works, alter database mount works, alter database open returns ORA-03113. This is all on localhost - not over the network. The machine has no firewall of any kind running. Any idea how to get past this ORA-03113 error? I've been on the phone with support in India for the last 4.5 hours and I haven't found anyone helpful yet. |
Create Login command error Posted: 12 Oct 2013 03:26 PM PDT
MySQL gives me: "Can't open and lock privilege tables: Table 'host' is read only" Posted: 12 Oct 2013 12:26 PM PDT I am having a problem restoring a MySQL database. My primary database ran MySQL 5.1, and now I am trying to copy it to MySQL 5.5. The database was backed up using Xtrabackup, and this server runs Ubuntu 12.04.3 LTS. I have followed all the steps to restore using Xtrabackup: this created the database files, which I have copied to a tmp directory, and I have modified the configuration accordingly. Now when I start the MySQL server I get this error:
Here is what I have tried:
Can someone please point me in the right direction? I am not sure what's wrong with the permissions.
Percona Xtradb Cluster : How to speed up insert? Posted: 12 Oct 2013 02:26 PM PDT I recently installed a cluster of 3 full master nodes based on Percona XtraDB (a very easy install), but now I need to do some tuning to increase INSERT/UPDATE throughput. Currently I run around 100 inserts and around 400 updates every 5 minutes. All of these operations took less than 3 minutes on my old single-server architecture; now, with 3 nodes, they take more than 5 minutes. Is there any tuning I can do to speed up these operations? Here is my current cnf configuration. Here are the hardware configurations of the 3 servers: Node #1 Node #2 Node #3 UPDATE: There are currently around 2.4M records (24 fields each) in the table concerned by the INSERT/UPDATE statements (6 fields indexed).
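One thing to check first (a sketch; the table is hypothetical): in Galera-based clusters, every autocommitted statement is certified across the cluster at commit time, so roughly 500 single-statement commits pay roughly 500 round trips. Wrapping each 5-minute batch in a single transaction pays that cost once:

    START TRANSACTION;
    INSERT INTO readings (sensor_id, ts, val) VALUES (1, NOW(), 0.42);
    UPDATE readings SET val = 0.43 WHERE sensor_id = 1;
    -- ... the rest of the batch ...
    COMMIT;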
How to design a table where each row has 5K boolean attributes? Posted: 12 Oct 2013 11:26 AM PDT I have about 2M rows, and each row looks like the following:
one integer column (V) and about 5K boolean columns (B1, B2, ..., B5K) associated with that integer. Due to the limit on the maximum number of columns allowed per row, I have moved the boolean columns (attributes) into a separate table. This design works all right when I try to find V's that match one boolean column, for example finding V's where the 2nd boolean attribute is true. But the query becomes awful when I have to find V's that match multiple boolean columns, sometimes even all 5K of them, such as finding V's with B1=true, B2=false, B3=true, ..., and B5K=false. My primary uses of the tables would be the following two:
I'm thinking about constructing a varchar[5K] field to store the boolean sequence to handle use case 2, but it seems like too much waste in space, since each boolean only requires one bit and I would be allocating a byte. What would be the best way to go about this?
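If the attribute table stores only the TRUE attributes as (v, attr_id) pairs, the multi-condition search becomes relational division plus an anti-join. A sketch, with hypothetical table and column names, for "B1 and B3 true, B2 false":

    SELECT a.v
    FROM attrs AS a
    WHERE a.attr_id IN (1, 3)            -- the attributes that must be true
    GROUP BY a.v
    HAVING COUNT(*) = 2                  -- all of them present
       AND NOT EXISTS (                  -- and none of the must-be-false ones
           SELECT 1 FROM attrs AS x
           WHERE x.v = a.v AND x.attr_id IN (2));

On the bit-string idea, a BINARY(625) column (5,000 bits / 8) rather than one byte per flag would avoid the 8x waste, at the cost of doing the bit arithmetic in the application.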
Is there a way to implement a cross-database task on SQL Server 2012 with the Availability Groups feature? Posted: 12 Oct 2013 10:25 AM PDT We use SQL Server 2012 and its new Availability Groups (AG) feature. There is a task for moving old data from some tables in one database to another database, and the two databases are included in different availability groups. Previously (before using the AG feature) the task was handled by adding the second server instance as a linked server and running the transfer as a distributed transaction.
Unfortunately, distributed transactions are not supported for AG, because databases may become inconsistent in case of a failover (http://technet.microsoft.com/en-us/library/ms366279.aspx). Is there some way to implement this task while keeping the AG feature and without implementing rollback logic for the exception cases?
Migrating from SQL Server to MySQL using MySQL Workbench tool Posted: 12 Oct 2013 10:24 AM PDT I'm trying to migrate a few tables from SQL Server to MySQL using the MySQL Workbench migration wizard. Structure migration works fine, but when I get to the data migration section it throws an error for one table:
Based on that, what I can understand is that it imposes a limit on certain columns. Any clue how I can get this to work? Thanks
Restoring database to UNC path on local drive Posted: 12 Oct 2013 10:26 AM PDT When I try to restore a database using a RESTORE command with a local UNC path, I get an error:
If I use a local drive letter instead, then it works, and this command restores the database to the same folder. So why is there an error when I specify the network path?
how to run Db2 export command in shell Posted: 12 Oct 2013 09:26 AM PDT I am trying to run the following DB2 command through the Python pyodbc module. IBM DB2 command: "DB2 export to C:\file.ixf of ixf select * from emp_hc". I connect to the DSN successfully using the pyodbc module, and SELECT statements work fine, but when I try to execute the following from Python IDLE 3.3.2: cursor.execute(" export to ? of ixf select * from emp_hc",r"C:\file.ixf") I get: pyodbc.ProgrammingError: ('42601', '[42601] [IBM][CLI Driver][DB2/LINUXX8664] SQL0104N An unexpected token "db2 export to ? of" was found following "BEGIN-OF-STATEMENT". Expected tokens may include: "". SQLSTATE=42601\r\n (-104) (SQLExecDirectW)') and with cursor.execute(" export to C:\file.ixf of ixf select * from emp_hc") I get: Traceback (most recent call last): File "", line 1, in cursor.execute("export to C:\myfile.ixf of ixf select * from emp_hc") pyodbc.ProgrammingError: ('42601', '[42601] [IBM][CLI Driver][DB2/LINUXX8664] SQL0007N The character "\" following "export to C:" is not valid. SQLSTATE=42601\r\n (-7) (SQLExecDirectW)') Am I doing something wrong? Any help will be greatly appreciated. From what I have come to know, db2 export is a command run in the shell, not through SQL via ODBC, but I am confused: what does that mean, and how do I run the command in the shell? Any guide or small quick tutorial would be great.
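EXPORT is indeed a CLP (shell) command rather than SQL, which is why the ODBC route rejects it. DB2 does expose it to SQL clients through the ADMIN_CMD stored procedure, though. A sketch (note the output path is interpreted on the DB2 server, which per the error text is a Linux box, so a Windows-style C:\ path cannot work here):

    CALL SYSPROC.ADMIN_CMD(
        'EXPORT TO /tmp/emp_hc.ixf OF IXF SELECT * FROM emp_hc');

That single statement can be passed through cursor.execute() like any other SQL. Running it in the shell instead means invoking the db2 command-line processor on a machine with a DB2 client installed: connect with "db2 connect to <your database>", then issue the export command verbatim.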
Import from incremental backups to a new host in Oracle 11g Posted: 12 Oct 2013 05:26 PM PDT I am using Oracle 11g. I would like to know whether it is possible to import from incremental level 0 and level 1 backups to a new host using RMAN. If yes, how can I do that? For level 1 I am using the differential method.
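It is possible; the usual shape (a sketch, assuming the backup pieces have been copied to the new host and the instance has been started from a restored spfile and controlfile; the path is hypothetical) is to catalog the copied pieces and let RMAN lay down the level 0 and roll the level 1 differentials forward:

    RMAN> CATALOG START WITH '/backups/prod/';
    RMAN> RESTORE DATABASE;
    RMAN> RECOVER DATABASE;
    RMAN> ALTER DATABASE OPEN RESETLOGS;

RECOVER DATABASE applies the level 1 incrementals first and archived redo after them automatically; DUPLICATE DATABASE is the alternative when the copy must receive a new DBID.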
InnoDB Tablespace critical error in great need of a fix Posted: 12 Oct 2013 07:26 AM PDT Link to screenshot: http://www.nouvellesduquartier.com/i/1/p/Munin_%20Critical_MySql_InnoDB_.JPG (The value reported is outside the allowed range: bytes free, free, gauge, warn, critical.) Question: could the error shown in the screenshot be the reason my site is very slow? If so, I really need help to fix it, since I am far from being an engineer! Thank you in advance.
Synchronize mysql databases between local and hosted servers automatically Posted: 12 Oct 2013 05:26 AM PDT We have many websites with development, staging, and production servers, and many developers working on many projects. We need a solution for synchronizing a developer's database with the staging database; once that works, we can move the changes to the live database. The synchronization needs to be fully automatic, so that developers don't have to run a tool every single time.
Posted: 12 Oct 2013 06:26 PM PDT I am looking for advice on table/index design for the following situation: I have a large table (stock price history data, InnoDB, 35 million rows and growing) with a compound primary key (assetid (int), date (date)). In addition to the pricing information, I have 200 double values that need to correspond to each record. I initially stored the 200 double columns directly in this table for ease of update and retrieval, and this had been working fine, as the only querying done on this table was by assetid and date (these are religiously included in any query against this table), and the 200 double columns were only read. My database size was around 45 GB.

However, I now have the requirement to be able to query this table by any combination of these 200 columns (named f1, f2, ..., f200), for example:

I have not historically had to deal with this large an amount of data before, so my first instinct was that indexes were needed on each of these 200 columns, or I would wind up with large table scans. To me this meant I needed a table for each of the 200 columns, with a primary key, a value, and an index on the values. So I went with that. I filled up and indexed all 200 tables. I left the main table intact with all 200 columns, as it is regularly queried over an assetid and date range with all 200 columns selected. I figured that leaving those columns in the parent table (unindexed) for read purposes, and additionally having them indexed in their own tables (for join filtering), would be most performant. I ran explains on the new form of the query. Indeed, my desired result was achieved: explain shows me that the rows scanned are much smaller for this query. However, I wound up with some undesirable side effects.

1) My database went from 45 GB to 110 GB. I can no longer keep the DB in RAM (I have 256 GB of RAM on the way, however).
2) Nightly inserts of new data now need to be done 200 times instead of once.
3) Maintenance/defrag of the new 200 tables takes 200 times longer than just the one table; it cannot be completed in a night.
4) Queries against the f1, etc. tables are not necessarily performant. For example, the above query, while explain shows it looking at < 1000 rows, can take 30+ seconds to complete. I assume this is because the indexes are too large to fit in memory.

Since that was a lot of bad news, I looked further and found partitioning. I implemented partitions on the main table, partitioned on date every 3 months. Monthly seemed to make sense to me, but I have read that once you get over 120 partitions or so, performance suffers; partitioning quarterly will keep me under that for the next 20 years or so. Each partition is a bit under 2 GB. I ran explain partitions and everything seems to be pruning properly, so regardless, I feel the partitioning was a good step, at the very least for analyze/optimize/repair purposes. I spent a good deal of time with this article: http://ftp.nchu.edu.tw/MySQL/tech-resources/articles/testing-partitions-large-db.html

My table currently is partitioned with the primary key still on it. The article mentions that primary keys can make a partitioned table slower, but if you have a machine that can handle it, primary keys on the partitioned table will be faster. Knowing I have a big machine on the way (256 GB of RAM), I left the keys on.

So, as I see it, here are my options:

Option 1: Remove the extra 200 tables and let the query do table scans to find the f1, f2, etc. values. Non-unique indexes can actually hurt performance on a properly partitioned table. Run an explain before the user runs the query and deny it if the number of rows scanned is over some threshold I define. Save myself the pain of the giant database. Heck, it will all be in memory soon anyway. Sub-question: does it sound like I have chosen an appropriate partition scheme?

Option 2: Partition all 200 tables using the same 3-month scheme. Enjoy the smaller row scans and allow the users to run larger queries. Now that they are partitioned, at least I can manage them one partition at a time for maintenance purposes. Heck, it will all be in memory soon anyway. Develop an efficient way to update them nightly. Sub-question: do you see a reason to avoid primary key indexes on these f1, f2, f3, f4, ... tables, knowing that I always have assetid and date when querying? It seems counterintuitive to me, but I am not used to data sets of this size. That would shrink the database a bunch, I assume.

Option 3: Drop the f1, f2, f3 columns in the master table to reclaim that space. Do 200 joins if I need to read 200 features; maybe it won't be as slow as it sounds.

Option 4: You all have a better way to structure this than I have thought of so far.

NOTE: I will soon be adding another 50-100 of these double values to each item, so I need to design knowing that is coming. Thanks for any and all help.

Update #1 - 3/24/2013: I went with the idea suggested in the comments I got below and created one new table with the following setup. I partitioned the table in 3-month intervals, blew away the earlier 200 tables so that my database was back down to 45 GB, and started filling up this new table. A day and a half later it completed, and my database now sits at a chubby 220 GB! It does allow the possibility of removing these 200 values from the master table, as I can get them from one join, but that would really only give me back 25 GB or so. I asked it to create a primary key on (assetid, date, feature) and an index on value, and after 9 hours of chugging it really hadn't made a dent and seemed to freeze up, so I killed that part off. I rebuilt a couple of the partitions, but it did not seem to reclaim much, if any, space. So that solution looks like it probably isn't going to be ideal. Do rows take up significantly more space than columns, I wonder; could that be why this solution took up so much more space?

I came across this article: http://www.chrismoos.com/2010/01/31/mysql-partitioning-tables-with-millions-of-rows. It gave me an idea, where he says: "At first, I thought about RANGE partitioning by date, and while I am using the date in my queries, it is very common for a query to have a very large date range, and that means it could easily span all partitions." Now I am range partitioning by date as well, but I will also be allowing searches by large date ranges, which will decrease the effectiveness of my partitioning. I will always have a date range when I search, but I will also always have a list of assetids. Perhaps my solution should be to partition by assetid and date, where I identify typically searched assetid ranges (which I can come up with; there are standard lists: S&P 500, Russell 2000, etc.). This way I would almost never look at the entire data set. Then again, I am primary-keyed on assetid and date anyway, so maybe that wouldn't help much. Any more thoughts/comments would be appreciated. Thanks.
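For reference, a sketch of what that one new table's DDL might look like, consistent with the description above of an (assetid, date, feature, value) row-per-feature table with quarterly partitions (the names and partition bounds are assumptions):

    CREATE TABLE asset_feature (
        assetid  INT      NOT NULL,
        `date`   DATE     NOT NULL,
        feature  SMALLINT NOT NULL,      -- 1..200, one row per f-column
        value    DOUBLE   NOT NULL,
        PRIMARY KEY (assetid, `date`, feature)
    ) ENGINE=InnoDB
    PARTITION BY RANGE COLUMNS (`date`) (
        PARTITION p2013q4 VALUES LESS THAN ('2014-01-01'),
        PARTITION pmax    VALUES LESS THAN (MAXVALUE)
    );

The space blow-up is expected with this shape: each double now carries the assetid/date/feature key plus InnoDB row overhead, so 8 bytes of payload can easily become 25+ bytes per row, which is roughly what the 45 GB to 220 GB jump reflects.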
SELECTing multiple columns through a subquery Posted: 12 Oct 2013 01:26 PM PDT I am trying to SELECT two columns from a subquery in the following query, but I am unable to do so. I tried creating an alias table, but still couldn't get them. Basically, I am trying to fetch both columns from the same subquery row. The query I have works, but it seems overkill, as the same row is fetched twice.
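The standard rewrite is to turn the subquery into a derived table and join it once, so both columns come from a single fetch. A sketch with hypothetical names:

    SELECT o.order_id, c.name, c.email
    FROM orders AS o
    JOIN (
        SELECT customer_id, name, email
        FROM customers
    ) AS c ON c.customer_id = o.customer_id;

One join replaces the two correlated scalar subqueries, so the matching row is located once instead of twice.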
Designing Simple Schema for Disaggregation of Demand Forecast Posted: 12 Oct 2013 04:26 PM PDT I am doing a simple database design task as a training exercise, where I have to come up with a basic schema design for the following case: I have a parent-child hierarchy of products (for example, Raw Material > Work in Progress > End Product).
Demand forecasting is usually done at a higher level in the hierarchy (the Raw Material or Work in Progress level) and has to be disaggregated to a lower level (End Product). There are 2 ways in which the demand forecast can be disaggregated from a higher level to a lower level:
The forecast shall be viewable in weekly buckets for the next 6 months, and the ideal format should be as follows. The PRODUCT_HIERARCHY table could look like this, and the ORDERS table might look like this, where:
How to store the forecast? What would be a good basic schema for such a requirement? My idea for selecting orders into 26 weekly buckets is below, but this will give weekly buckets starting from today, irrespective of the day of the week. How can I convert them to Sunday-to-Saturday weeks in Oracle? Please help with the design of this database structure. (I will be using Oracle 11g.)
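On the bucketing sub-question: Oracle's TRUNC(d, 'IW') returns the Monday of d's ISO week regardless of NLS settings, and shifting by one day converts that into Sunday-to-Saturday buckets. A sketch against the ORDERS table described above (column names assumed):

    SELECT TRUNC(order_date + 1, 'IW') - 1 AS week_start,   -- the preceding Sunday
           SUM(quantity)                   AS total_qty
    FROM   orders
    GROUP  BY TRUNC(order_date + 1, 'IW') - 1
    ORDER  BY week_start;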
Cast to date is sargable but is it a good idea? Posted: 12 Oct 2013 05:25 AM PDT In SQL Server 2008 the date datatype was added. In this Connect item you can see that casting a datetime column to date is sargable and may use an index on the datetime column. The other option you have is to use a range instead. Are these queries equally good, or should one be preferred over the other?
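Side by side, the two forms in question (with hypothetical table and column names):

    -- equality via cast, recognized as sargable in SQL Server 2008+
    SELECT COUNT(*)
    FROM events
    WHERE CAST(event_time AS date) = '2013-10-12';

    -- the equivalent half-open range
    SELECT COUNT(*)
    FROM events
    WHERE event_time >= '2013-10-12'
      AND event_time <  '2013-10-13';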
T-SQL Table Valued Function to Split a Column on commas Posted: 12 Oct 2013 07:55 AM PDT I wrote a table-valued function in Microsoft SQL Server 2008 to take a comma-delimited column in a database and spit out separate rows for each value. For example, "one,two,three,four" would return a new table with only one column, containing the values one, two, three, four. Does this code look error-prone to you guys? When I test it, it just runs forever and never returns anything. This is getting really disheartening, especially since there are no built-in split functions on MS SQL Server (why?!) and all the similar functions I've found on the web are absolute trash or simply irrelevant to what I'm trying to do. Here is the function:
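The classic cause of a split TVF that never returns is a loop that fails to advance past the delimiter, so CHARINDEX keeps finding the same comma forever. A working sketch of the same idea (an assumed reconstruction, not necessarily the original code):

    CREATE FUNCTION dbo.SplitList (@list nvarchar(max), @delim nchar(1))
    RETURNS @out TABLE (item nvarchar(4000))
    AS
    BEGIN
        DECLARE @pos int = CHARINDEX(@delim, @list);
        WHILE @pos > 0
        BEGIN
            INSERT @out VALUES (LEFT(@list, @pos - 1));
            SET @list = SUBSTRING(@list, @pos + 1, LEN(@list));  -- the crucial advance
            SET @pos  = CHARINDEX(@delim, @list);
        END
        IF LEN(@list) > 0
            INSERT @out VALUES (@list);      -- trailing element
        RETURN;
    END

    -- usage: SELECT item FROM dbo.SplitList(N'one,two,three,four', N',');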