[how to] MySQL #1064 error on line 2
- MySQL #1064 error on line 2
- oracle 11g control files error
- Query mysql database through memcache
- MySQL Migration from Slave to Master
- mongos and mongod have the same default port?
- How to switch Quiescing to Normal Mode in Oracle?
- Should I deploy Mongodb sharding for 50 collections?
- How do I monitor bandwidth used by SQL Server on port 1433?
- /usr/sbin/mysqld was deleted and replaced by another mysqld with different inode
- creating a report on the mysql jasperReport server
- Oracle remove and place (re-install) database on 10g
- SQL Agent embedded PowerShell script in CmdExec step fails with import-module sqlps
- MySQL error 2006 MySQL server has gone away after upgrade to 5.6.10 using homebrew on Mac OS X 10.8.3
- When SQL Server uses Multi Page Allocations
- Is the key_buffer_size applicable to myisam tmp tables?
- Optimize UNION query in MySQL
- Update one table from another table while sorting that table based on one column
- increasing mysql table open cache?
- select count(*) in mysql 5.5 innodb-- rewrite advice?
- Why does this procedure raise a privilege error?
- MySQL 5.6 on Mac OS X 10.6.8 problems when granting permissions to root
- Importing data with a lot of SELECTS during transaction
- PostgreSQL replication for archiving
- Replicate RDS-MySQL to a non-amazon host?
- How to prevent high memory, CPU and time consumption by MySQL restore?
- How do I find my current SCN?
- How can I profile SQL Azure?
MySQL #1064 error on line 2 Posted: 07 Jul 2013 09:04 PM PDT I'm quite new to SQL and I seem to be running into an issue with the following code: It returns the following error: #1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'NOT NULL UNIQUE AUTO_INCREMENT,
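The CREATE TABLE statement did not survive this digest, so the following is a hedged guess: a #1064 reported *near* 'NOT NULL UNIQUE AUTO_INCREMENT' usually means the parser choked just before those keywords, i.e. the column's data type was omitted. A minimal sketch (table and column names are invented):

```sql
-- Fails with #1064: no data type between the column name and its attributes
-- CREATE TABLE users (id NOT NULL UNIQUE AUTO_INCREMENT, name VARCHAR(50));

-- Works: give the column a type; note AUTO_INCREMENT also requires the
-- column to be indexed (PRIMARY KEY or UNIQUE)
CREATE TABLE users (
  id   INT NOT NULL AUTO_INCREMENT,
  name VARCHAR(50),
  PRIMARY KEY (id)
);
```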
oracle 11g control files error Posted: 07 Jul 2013 05:08 PM PDT I use Oracle 11g R2, and on startup in SQL*Plus I faced this problem. I then went to the alert log file and saw these errors:
Query mysql database through memcache Posted: 07 Jul 2013 02:16 PM PDT I have been trying to use memcache with MySQL 5.6 but I cannot figure out how to do this. I have added an entry into innodb_memcache.containers so I can query the database through memcache. What I would usually do with memcache is get the value for a key like this: but in MySQL 5.6 there could potentially be many tables specified, so using PHP, how do I specify the table I am trying to query against through the memcache layer?
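For context, the InnoDB memcached plugin in MySQL 5.6 lets a client pick the container (i.e. the mapped table) by prefixing the key with `@@container_name.`. A sketch, with all schema/table/column names invented for illustration:

```sql
-- Register a mapping in the plugin's configuration table
-- (every name below is a placeholder, not from the question):
INSERT INTO innodb_memcache.containers
  (name, db_schema, db_table, key_columns, value_columns,
   flags, cas_column, expire_time_column, unique_idx_name_on_key)
VALUES
  ('products', 'shop', 'products', 'sku', 'price',
   'flags', 'cas_col', 'exp_col', 'PRIMARY');

-- The memcached client then selects this container per request:
--   get @@products.ABC123
-- which in PHP would look like: $memcache->get('@@products.ABC123');
```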
MySQL Migration from Slave to Master Posted: 07 Jul 2013 04:26 PM PDT I recently had a problem with our server. It required us to install a new hard drive and cPanel, and set up the old hard drive as a slave. How can I migrate the MySQL database from the slave drive to the master drive? The slave contains the tables as MYD, MYI, and frm files. The database is about 20 GB. I'm assuming SSH is the best way to do this, but I am not very familiar with SSH commands and cannot find out how. I've already set up the new database (on the fresh install on the master drive) with the same name, users, and passwords.
mongos and mongod have the same default port? Posted: 07 Jul 2013 11:18 AM PDT Is it true that by default mongos and mongod use the same port, 27017? I had to pick a different port for one of the two, for example --port 37017 for mongos. Did anyone have the same confusion? Thanks,
How to switch Quiescing to Normal Mode in Oracle? Posted: 07 Jul 2013 10:40 AM PDT I have two Oracle database instances, ORCL11 and ORCL10. I've configured them, and the setup looks correct. I connect to the replication admin user on ORCL10 and run a SQL command, then connect to the replication admin user on ORCL11 to see the group status. It displays gname = SCOTT_REPG and status = NORMAL, but if I connect to the replication admin on ORCL10, it displays gname = SCOTT_REPG and STATUS = QUIESCING. I thought this would not have any impact, so I tested my replication on ORCL10: an insert works on table DEPT on ORCL10 (1 row created), but when I switch to ORCL11 and run a simple SQL query, it doesn't show anything. I've searched on Google, but there is no tutorial on how to fix this. I think the problem is how to switch QUIESCING to NORMAL mode on ORCL10. I've tried this way to fix it: But it still doesn't solve the problem. Would you help me fix this? Thanks!
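The exact commands the asker tried were lost from this digest, but in Oracle Advanced Replication the usual way to move a master group out of QUIESCING/QUIESCED and back to NORMAL is to resume master activity as the replication administrator. A hedged sketch, assuming the group name from the question:

```sql
-- Run as the replication administrator on the master definition site.
-- Resume normal replication activity for the stuck master group:
BEGIN
  DBMS_REPCAT.RESUME_MASTER_ACTIVITY(gname => 'SCOTT_REPG');
END;
/

-- Then verify that the group left the quiesced state:
SELECT gname, status
FROM   dba_repgroup
WHERE  gname = 'SCOTT_REPG';
```

If there are outstanding administrative requests or deferred transactions, they may need to be pushed or resolved before the resume takes effect.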
Should I deploy Mongodb sharding for 50 collections? Posted: 07 Jul 2013 05:19 PM PDT I have a 10-node cluster that runs about 50 concurrent jobs. Each job needs to read/write a separate collection, so I have roughly 50 collections. Each collection has about 20M records. Most of the time, jobs only need to do sequential reads/writes. For simplicity, I could deploy a single instance of mongodb with no replication and no sharding, and have 50 separate collections. But the single node where mongodb is running becomes a hotspot, and the other 9 nodes can't share the read/write load. So I would like to leverage the resources and balance the load. But I guess a 20M-record collection is not worth sharding, especially since I only need to do sequential reads/writes. I thought about merging the 50 collections into one big collection, but I stumbled on the document size limitation, which is 16 MB. Any suggestions? Thanks,
How do I monitor bandwidth used by SQL Server on port 1433? Posted: 07 Jul 2013 06:55 AM PDT I have a single server running a .NET web application and a SQL Server database (2008 Standard). I'm planning to move the database onto a separate server, but in order to provision the network hardware I'd like to benchmark the data throughput between the web application and the database. Can port 1433 be monitored internally? If so, is there any tool native to Windows 2008 R2 that can do this, or would I need some 3rd-party application like Wireshark? My connection string is referencing the database using Essentially I'm trying to determine whether I need a Gbit or 100 Mbit connection between the web server and database server. Any thoughts on this would be much appreciated.
/usr/sbin/mysqld was deleted and replaced by another mysqld with different inode Posted: 07 Jul 2013 07:50 PM PDT This happened on three of our mysql servers running on RHEL. While checking for open and deleted files, I found that the mysqld in use is deleted (as seen in lsof) and was replaced by a similar mysqld (in /usr/sbin/mysqld) with a different inode. The size and blocks of both files (deleted and current) are the same, and the new mysqld (not in use) seems to be the same version as the deleted one. I am trying to figure out what could have caused this (there are no cronjobs running on the system). I checked the system logs, database logs, and yum logs and found nothing relevant. Any input is appreciated. Thanks!
creating a report on the mysql jasperReport server Posted: 07 Jul 2013 09:50 AM PDT I have a local MySQL database containing a sample data mart, and I have installed JasperReports Server on my computer. Using just this tool, I want to connect to the local MySQL database and create a PDF report (or any report) without using any tool except JasperReports Server. Can I do this? How? Note: the report will be built directly on all the data residing in the MySQL database (no filtering). I want to see the report on the JasperReports report page, and if any change occurs in the MySQL database, it should be carried over to the report page of the JasperReports server.
Oracle remove and place (re-install) database on 10g Posted: 07 Jul 2013 05:49 AM PDT I need to put together a plan to execute some change scripts against an Oracle 10g database, which will add some values, change some values, drop a table, and create a new table. I have a step for creating a DMP of the original database before the changes and for running a script that will execute all the changes. But I need a back-out plan if things go wrong. I was wondering what the best steps are to drop the database and import the old DMP file. I know about the sqlplus command: But what is the best way to drop the existing database? Is it just Thanks in advance. K
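One common back-out approach here is to not drop the whole database at all, but only the application schema that the change scripts touched, then re-import that schema from the export taken beforehand. A sketch under that assumption (the schema name and file name are placeholders):

```sql
-- Back-out sketch: drop only the affected schema, not the database.
-- SCOTT stands in for whatever schema the change scripts modified.
DROP USER scott CASCADE;

-- Recreate the user before importing into it:
CREATE USER scott IDENTIFIED BY tiger
  DEFAULT TABLESPACE users
  QUOTA UNLIMITED ON users;
GRANT connect, resource TO scott;

-- Then, from the OS shell, re-import the pre-change export, e.g.:
--   imp system/password FILE=before_changes.dmp FROMUSER=scott TOUSER=scott
```

This avoids the much heavier operation of dropping and recreating the entire 10g database just to undo schema-level changes.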
SQL Agent embedded PowerShell script in CmdExec step fails with import-module sqlps Posted: 07 Jul 2013 12:50 PM PDT SQL Server 2008 R2, PowerShell 2.1. I am trying to create a SQL Agent job that dynamically backs up all non-corrupted SSAS databases on an instance without the use of SSIS. In my SQL Agent job, when I create a CmdExec step and point it to a PowerShell script file (.ps1) like this: the job executes successfully (or at least gets far enough to only encounter logic or other syntax issues). This approach won't work for a final solution, because there is a requirement to keep the PowerShell script internal to SQL. So I have a different CmdExec step that embeds the PowerShell script like so: However, when executed with the embedded script, the job errors out quickly with the following response:
Why can't I reference the module from an embedded script, but doing so in a ps1 file works just fine? |
MySQL error 2006 MySQL server has gone away after upgrade to 5.6.10 using homebrew on Mac OS X 10.8.3 Posted: 07 Jul 2013 04:49 AM PDT I upgraded my installation of MySQL on Mac OS X 10.8.3 using homebrew
Everything seemed to go smoothly, but... not really, as it turns out. I cannot add a new user (neither through the command line nor through phpMyAdmin). I always get the error
I can create new databases without a problem, and I can install, for example, Joomla 3.1.1 with no problems. However, when I try to install a new extension into my Joomla installation, I get errors. For example, I get the following error when I try to install a component from NoNumber: The same thing happens for many other extensions. I tried to start mysql with
and it started with no problem. I looked up issues where error #2006 was reported and how to solve them; they referred to modifying /etc/my.cnf, but there is no /etc/my.cnf configuration file on my system. There wasn't one with the previous installation of MySQL 5.5.29 (also installed with homebrew) either, and I had no problems then. I ran The output from Exploring a little further, I tried to revert to my previous install of MySQL using
This resulted in the following output: I then accessed mysql from the command line and ran status: As you can see above, it reports But it also says So something isn't syncing up. Furthermore, when I show databases, I get the following output: But I know I have more than just an So, somehow, it seems to me that mysql is not connecting to the right server, as phpMyAdmin reports the same server information that mysql's status command reports: At this point, I am completely lost and would really appreciate some help.
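Since the referenced config and error output were lost from this digest, here is a hedged starting point: error #2006 ("server has gone away") is frequently a packet-size or timeout issue, and both limits can be inspected and raised at runtime without a my.cnf at all (the values below are illustrative):

```sql
-- Check the current limits:
SHOW VARIABLES LIKE 'max_allowed_packet';
SHOW VARIABLES LIKE 'wait_timeout';

-- Raise them for the running server (illustrative values):
SET GLOBAL max_allowed_packet = 64 * 1024 * 1024;  -- 64 MB
SET GLOBAL wait_timeout = 28800;
```

For a persistent fix with a homebrew install, the server will also read a my.cnf placed under the brew prefix (commonly /usr/local/etc/my.cnf), though the exact search path depends on how the formula was built.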
When SQL Server uses Multi Page Allocations Posted: 07 Jul 2013 06:49 PM PDT What are the consumers of MPA (multi-page allocations) in SQL Server? I know data pages are always 8 KB. Is there a situation where data/index pages make use of MPA? It makes sense that execution plans can use MPA, as they can exceed 8 KB. There is a blog here that suggests use of MPA, but it refers to stored procedures (one with 500 parameters). In the attached screenshot I see an execution plan using around 11 MB; does this use MPA? Is there a way to confirm that memory allocation for the execution plan is using multi-page allocation? My confusion is what actually uses the MPA (multi-page allocator). The example in the link I posted shows a complex execution plan which will require contiguous allocation over 8 KB. An answer below suggests that there are many things, like linked servers or extended stored procedures, that can use MPA, which for some reason I cannot agree with. For example, extended stored procedures can be C++ code using HeapAlloc, which SQL Server has little control over to manage its memory (it has to be at the Windows OS level). Extended stored procedures, even in SQL 2012, still consume memory outside the buffer pool, so they have nothing to do with multi-page allocation. The same applies to linked servers, especially if you use third-party providers (e.g. Oracle).
Is the key_buffer_size applicable to myisam tmp tables? Posted: 07 Jul 2013 04:50 PM PDT I have a database about 750 GB in size. It's all InnoDB. Larger analytical queries often need to group by several columns or use DISTINCT, so it's common that MySQL will have to create tmp tables. The tmp tables fit into memory. My cache hit ratio (Key_reads / Key_read_requests) is 0. When MySQL creates these tmp tables, I'm guessing it doesn't create pseudo-indexes that would be served by the key_buffer_size.
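A hedged way to check what is actually happening: internal temporary tables start out in the MEMORY engine and are only converted to on-disk MyISAM when they outgrow the configured limits, so the key cache rarely comes into play when they fit in memory. The status counters make this visible:

```sql
-- Internal tmp tables stay in memory while they fit under both of these:
SHOW VARIABLES LIKE 'tmp_table_size';
SHOW VARIABLES LIKE 'max_heap_table_size';

-- How often queries spilled to on-disk MyISAM vs. stayed in memory:
SHOW GLOBAL STATUS LIKE 'Created_tmp%tables';
```

If Created_tmp_disk_tables stays near zero relative to Created_tmp_tables, that would be consistent with the observed Key_reads of 0.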
Optimize UNION query in MySQL Posted: 07 Jul 2013 09:03 PM PDT I have a problem with a UNION query in MySQL. We have 10 million players on our website, and we would like to select players with a multi-criteria system. For example, selecting US people, men, more than 35 years of age. We are using "vertical partitioning": one table per criterion. For example: We would like to do this kind of query: How do I optimize that? ----- More details Explain output of the query: SHOW CREATE TABLE
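The actual query was lost from this digest, but for an AND of several criteria ("US people, men, more than 35") spread across one-table-per-criterion, a join on the player id is generally what the optimizer handles best, rather than a UNION. A sketch with invented table and column names:

```sql
-- One table per criterion; the intersection of criteria is a join
-- on the shared player id (all names below are placeholders):
SELECT c.player_id
FROM   country_us  AS c
JOIN   gender_men  AS g ON g.player_id = c.player_id
JOIN   age_over_35 AS a ON a.player_id = c.player_id;
```

With an index on player_id in each criterion table, MySQL can drive the join from the most selective criterion instead of materializing and deduplicating UNION branches.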
Update one table from another table while sorting that table based on one column Posted: 07 Jul 2013 03:50 PM PDT This is the problem I'm trying to figure out in MySQL. We have an old table that contains some forms submitted by our users. Somehow, the previous decision was that each time a user came to this survey, a new form would be submitted. So in the old table we easily have several rows with the same Firstname and Lastname but different values in the other columns, and there's a timestamp column Date_Submission as well. Now we are trying to move everything to a new table, but this time, for each person we only keep one row, and we want to keep the latest old data for that user (like email, phone number, etc.). I could do the following:
Apparently this won't give me the "latest" old data for each person. So I tried this one:
But then MySQL complains with:
So I'm wondering: what's the correct way to achieve this?
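The queries that were tried did not survive this digest, but the standard MySQL pattern for "latest row per person" is a multi-table UPDATE joined to a MAX(Date_Submission) derived table, which sidesteps the ORDER BY/LIMIT restriction in multi-table updates. A sketch, with table names and the copied columns (email, phone) as placeholders:

```sql
-- Update each person's row in the new table from their most recent
-- submission in the old table (new_table/old_table/email/phone are
-- hypothetical names):
UPDATE new_table AS n
JOIN   old_table AS o
  ON   o.Firstname = n.Firstname
 AND   o.Lastname  = n.Lastname
JOIN (
  SELECT Firstname, Lastname, MAX(Date_Submission) AS max_date
  FROM   old_table
  GROUP  BY Firstname, Lastname
) AS latest
  ON   latest.Firstname = o.Firstname
 AND   latest.Lastname  = o.Lastname
 AND   latest.max_date  = o.Date_Submission
SET n.email = o.email,
    n.phone = o.phone;
```

The derived table pins each (Firstname, Lastname) pair to its newest Date_Submission, so only that one old row feeds the update.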
increasing mysql table open cache? Posted: 07 Jul 2013 02:50 PM PDT I often read that it is best to increase this variable slowly. Can someone explain why? My status indicates that I should increase it... What is best practice / "slowly"? Thanks!
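One likely reason for the "slowly" advice: every cached open table holds file descriptors, so a large jump can run into the OS or open_files_limit ceiling. A hedged sketch of the usual check-and-adjust loop (the value is illustrative):

```sql
-- If Opened_tables grows quickly relative to uptime, the cache is
-- probably too small:
SHOW GLOBAL STATUS    LIKE 'Opened_tables';
SHOW GLOBAL VARIABLES LIKE 'table_open_cache';
SHOW GLOBAL VARIABLES LIKE 'open_files_limit';

-- Raise in moderate steps, then re-check the counters after some load:
SET GLOBAL table_open_cache = 2048;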
select count(*) in mysql 5.5 innodb-- rewrite advice? Posted: 07 Jul 2013 11:50 AM PDT I need advice on how to rewrite a SELECT COUNT(*) query for InnoDB tables in MySQL 5.5. In the new environment it is very slow... The query execution plan looks simple enough, but it is very slow.
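Since the query itself was lost from this digest, here is a hedged general note: InnoDB stores no exact row count, so COUNT(*) scans an index, and the cost depends on which index is scanned. A sketch (table and column names are invented):

```sql
-- A narrow secondary index gives COUNT(*) far fewer pages to scan
-- than the clustered primary key (names below are placeholders):
ALTER TABLE big_table ADD INDEX ix_narrow (small_col);

-- The optimizer can then satisfy the count from the smallest index:
SELECT COUNT(*) FROM big_table;
```

If an exact count must return instantly, the usual alternative is a summary/counter table maintained by the application or by triggers.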
Why does this procedure raise a privilege error? Posted: 07 Jul 2013 08:50 AM PDT I am having trouble getting the following procedure to run on a remote MySQL database; the given error is privilege-based (#1227). Locally, the procedure runs fine. QUESTIONS
MySQL 5.6 on Mac OS X 10.6.8 problems when granting permissions to root Posted: 07 Jul 2013 10:50 AM PDT I'm having serious problems with a MySQL 5.6 instance on a Mac server. We had to upgrade MySQL to a newer version, but it turned out to be a bad idea, as we lost control of it. We had a backup of the /data directory and the my.cnf file as well. However, setting an init_file to restore the previous password and its permissions did not work. We created a text file with this content: We have double-checked the init_file permissions (we even gave it a chmod 777 to make sure), but something is not working. When we run The problem may well come from the [Error] lines, but as we haven't used --skip-locking, it's just confusing. If we try to log into mysql using
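The contents of the asker's init_file were lost from this digest. For reference, a minimal init_file that resets root's password on MySQL 5.6 (where mysql.user still has the Password column) might look like this, assuming it is passed to the server as --init-file=/path/to/reset.sql:

```sql
-- Hypothetical init_file contents for a 5.6 server; the password is
-- a placeholder. The file is executed once at startup.
UPDATE mysql.user SET Password = PASSWORD('NewPassword') WHERE User = 'root';
FLUSH PRIVILEGES;
```

If the server logs show the init_file was never read, the path or the file's ownership relative to the mysqld user is the first thing to check.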
Importing data with a lot of SELECTS during transaction Posted: 07 Jul 2013 01:50 PM PDT I'm using SQL Server 2008 R2 and I have a I need to import about 1 million products into the This import should run as quickly as possible (<= 6 h), and I need to perform additional SELECTs per data row from the tables during import to validate the data. The data source is a CSV file, so I'm making a transaction for each line, which represents an article (entry for the During import it should also be possible to read data from the mentioned tables with a different connection. Currently this import runs for over 24 h (the SELECTs to validate the data, which I have to do for every line of the CSV during import, take their toll). Any advice on how to improve import performance in such a case? I can split the CSV data in a preprocessing step into parts (e.g. by last character of the product number) so that the parts don't interfere at row level, and run these parts in different threads simultaneously if that could improve the import speed somehow. I mention this because the import machine and the SQL Server machine have low CPU utilization during import, so I guess a lot of time is wasted on network I/O (the machines are connected with gigabit ethernet).
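One direction worth considering, sketched here with entirely hypothetical table, column, and file names: replace the per-row round trips with a bulk load into a staging table, then do the validation set-based, so the million per-row SELECTs collapse into a handful of statements.

```sql
-- Bulk-load the CSV into a staging table in one shot
-- (path, terminators, and all names are placeholders):
BULK INSERT dbo.ProductStaging
FROM 'C:\import\products.csv'
WITH (FIELDTERMINATOR = ';', ROWTERMINATOR = '\n', FIRSTROW = 2);

-- One set-based validation pass replaces a SELECT per CSV line:
INSERT INTO dbo.Product (ProductNumber, Name, Price)
SELECT s.ProductNumber, s.Name, s.Price
FROM   dbo.ProductStaging AS s
WHERE  NOT EXISTS (SELECT 1
                   FROM   dbo.Product AS p
                   WHERE  p.ProductNumber = s.ProductNumber);
```

This also explains the low CPU on both machines: with one transaction and several validating SELECTs per row, most of the elapsed time is per-statement network latency rather than actual work.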
PostgreSQL replication for archiving Posted: 07 Jul 2013 06:49 AM PDT I've looked around a bit and haven't found a very clear strategy or method for what I'm trying to do, which is surprising because I would think it would be a fairly common practice. I currently use Bucardo to replicate in a Master Slave setup. This works fine except I want the Slave to retain all records ever written to it. The master is a smaller system which will need to have its tables truncated periodically. I don't want these truncates to be carried over to the Slave. I would think this would be a fairly common practice but have been stumped finding a solution that will work. Could anyone point me in the right direction? The right direction doesn't necessarily need to involve using Bucardo either. Thanks |
Replicate RDS-MySQL to a non-amazon host? Posted: 07 Jul 2013 02:37 PM PDT I have a fairly large MySQL (5.1) database running in Amazon as an RDS instance. I'm working on migrating it out of Amazon. To do that smoothly I'd like to set up the new hardware as a read-only replica of the RDS instance. I know RDS supports replication within RDS. Is there any way to configure it to replicate to a host which is not in RDS? |
How to prevent high memory, CPU and time consumption by MySQL restore? Posted: 07 Jul 2013 07:50 AM PDT I have a local test machine with 2 GB RAM and a dual-core processor. I imported a dump on it using
Restore took approximately 25 minutes. Question: why did it take so long? I have already optimized my server settings, and the dump contains extended inserts and disables keys. Is 700 MB too large for this kind of machine? What I actually want now is to convert all the tables of that database to InnoDB. So I executed a simple shell command:
Now when I run innodb.sql, one of my tables takes more than 20 minutes to convert. It contains only 1,378,397 records. In total it takes more than 30 minutes. In the meantime, memory usage by the mysqld daemon was 73%; CPU usage was OK at the time. How can I minimize the time consumption? Should I change the MySQL server settings or something else? If anybody wants my my.cnf settings, I will share them.
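The generated conversion script was lost from this digest; presumably it emits one ALTER per table. Each conversion is a full table rebuild, so the time is dominated by InnoDB's write settings. A hedged sketch of a commonly suggested combination for a one-off bulk rebuild:

```sql
-- Relax log flushing for the duration of the conversion job
-- (trades a little crash durability for speed):
SET GLOBAL innodb_flush_log_at_trx_commit = 2;

-- The conversion itself, one statement per table:
ALTER TABLE mytable ENGINE = InnoDB;

-- Restore the durable default afterwards:
SET GLOBAL innodb_flush_log_at_trx_commit = 1;
```

Note that innodb_buffer_pool_size, the setting with the biggest impact on a rebuild like this, is not dynamic in 5.x-era servers and has to be raised in my.cnf before restarting.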
How do I find my current SCN? Posted: 07 Jul 2013 11:44 AM PDT Given any version of Oracle:
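For Oracle 10g and later, two standard ways to read the current SCN are worth noting here (older releases lack v$database.current_scn, so the options narrow with version):

```sql
-- Requires SELECT privilege on v$database (10g+):
SELECT current_scn FROM v$database;

-- Via the flashback package (also 10g+):
SELECT dbms_flashback.get_system_change_number FROM dual;
```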
How can I profile SQL Azure? Posted: 07 Jul 2013 02:30 PM PDT I am writing a web site that uses SQL Azure heavily. However, it is painfully slow. Is there an easy way to profile the live SQL Azure instance?
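SQL Azure does not support the classic SQL Server Profiler, so the usual substitute is querying the execution-stats DMVs directly. A sketch listing the top statements by accumulated CPU:

```sql
-- Top 10 statements by total worker (CPU) time on the Azure database:
SELECT TOP 10
       qs.total_worker_time,
       qs.execution_count,
       SUBSTRING(qt.text,
                 qs.statement_start_offset / 2 + 1,
                 ((CASE qs.statement_end_offset
                     WHEN -1 THEN DATALENGTH(qt.text)
                     ELSE qs.statement_end_offset
                   END) - qs.statement_start_offset) / 2 + 1) AS query_text
FROM   sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
ORDER  BY qs.total_worker_time DESC;
```

Sorting by total_logical_reads or total_elapsed_time instead gives an I/O- or latency-oriented view of the same data.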