[how to] ERROR 1166 (42000) at line 65203: Incorrect column name ''
- ERROR 1166 (42000) at line 65203: Incorrect column name ''
- MySQL 1366 Error on Replication Slave
- Mysterious merge replication problems creating a subscription
- Does it make a difference to add the DEFAULT keyword in CREATE TABLE?
- dcexec login failure
- Max Memory settings on Active/Active SQL Server 2008 R2 Cluster
- Why does MySQL allow HAVING to use SELECT aliases?
- Understanding disk requirement to make failover cluster
- Problem with PostgreSQL and the geqo optimizer
- SQL Server 2008 Convert String to Datetime question
- When was Torn Page Detection and Checksum introduced to SQL Server and what are the upgrade behaviors?
- Enforce a constraint problem
- Does TRIGGER improve the performance?
- Restrict access to rows by username
- Managing constant changes to database
- Alter mysql database engine doesn't work from cluster to other
- MS Access: error 3045 while linking to back-end
- Security for Oracle linked server from SQL Server
- Optimizing mysql queries running on 50 million rows
- Statistical Analysis of Data that has to be done in an order?
- Record versioning and promotion
- Are there any disadvantages to partitioning on financial year?
- MongoDB: move documents before capping
- Cannot initialize the data source object of OLE DB provider "Microsoft.ACE.OLEDB.12.0" for linked server
- SUPER privilege not defined for master user in Amazon MySQL RDS
- Is it possible to pipe the result of a mysqldump straight to rsync as the source argument?
- Deleting Data From Multiple Tables
- Does SQL Server read all of a COALESCE function even if the first argument is not NULL?
ERROR 1166 (42000) at line 65203: Incorrect column name '' Posted: 24 Sep 2013 08:49 PM PDT I have a big snapshot database, about 44GB in size. Importing it fails:

    $ mysql -u root < mydata.sql
    ERROR 1166 (42000) at line 65203: Incorrect column name ''

The MySQL version is:

    $ head -n 10 mydata.sql
    -- MySQL dump 10.13  Distrib 5.5.22, for Linux (x86_64)
    --
    -- Host: localhost    Database:
    -- ------------------------------------------------------
    -- Server version    5.5.22-55-log

This is a snapshot of the lines around the one causing the issue:

    $ sed -n '65200,65203p' < mydata.sql
    /*!50001 SET character_set_client = latin1 */;
    /*!50001 SET character_set_results = latin1 */;
    /*!50001 SET collation_connection = latin1_swedish_ci */;
    /*!50001 CREATE ALGORITHM=UNDEFINED */

How can I resolve this issue?
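The failing line is the start of a view definition (the /*!50001 CREATE ... */ block), and error 1166 with an empty name usually means the view's SELECT list contains an expression whose alias is empty. A minimal sketch of how the error arises and the usual fix (table and column names are assumptions, not taken from the dump):

    -- Reproduces ERROR 1166: an expression aliased to an empty identifier.
    CREATE VIEW v AS
    SELECT CONCAT(first_name, ' ', last_name) AS ``    -- Incorrect column name ''
    FROM people;

    -- The fix is to edit the dump so the expression carries a real alias:
    CREATE VIEW v AS
    SELECT CONCAT(first_name, ' ', last_name) AS full_name
    FROM people;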
MySQL 1366 Error on Replication Slave Posted: 24 Sep 2013 06:01 PM PDT I've got two pairs of servers (each pair with identical hardware and OS) with replication running on both pairs, under MySQL 5.5.34 on Windows Server 2008 R2. I'm trying to track down a problem that has occurred twice now on one of the pairs: I get a 1366 error, "Incorrect integer value '\x00' for column Flag at row 1", for the following call: […] When I issue the […] On the other pair I never get this error when replicating. The type of Flag is […]. I haven't been able to identify any differences between the two pairs yet. Ideas?
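The statement and the column definition were lost from the digest, but error 1366 hinges on strict SQL mode, which is a per-server setting. A minimal sketch (table and column names are assumptions) of how the same bytes can fail on one pair and succeed on the other:

    CREATE TABLE t (Flag TINYINT);

    SET sql_mode = 'STRICT_TRANS_TABLES';
    INSERT INTO t (Flag) VALUES (x'00');   -- ERROR 1366: Incorrect integer value: '\x00'

    SET sql_mode = '';
    INSERT INTO t (Flag) VALUES (x'00');   -- accepted with a warning; stores 0

Comparing SELECT @@GLOBAL.sql_mode on both slave pairs would be a natural first check.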
Mysterious merge replication problems creating a subscription Posted: 24 Sep 2013 09:08 PM PDT We have a merge publication which had been mostly working OK for a long time (years). Recently an article was added; we were informed the snapshot was out of date, so we regenerated it. Then we got error messages during initialization and found this problem in the verbose error logs: […] ...fast forward to now. In experiments we found that yes, it is possible to create a merge publication on the publisher and subscribe on our subscriber server, but only when using a different database. We went through this just to rule out permissions etc. and to determine it was truly the subscriber DB that was the problem. So it seemed logical to remove replication from the afflicted DB in hopes of cleaning out any buried metadata mess: […] So far so good. However, now I get the exact same error, but much earlier in the process, specifically when executing: […] (above edited slightly). Is there some remaining problem lurking in the metadata? The original theory on the ANSI_PADDING stuff was that there were in fact some tables in the subscriber DB with computed columns, so that seemed related. However, some flipping of ANSI_PADDING settings (and Googling) didn't really lead to a conclusive answer. Thanks so much for reading, and let me know of any other info I can provide that may help. Oh yeah, publisher and subscriber are both SQL Server 2005.
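Since the theory involves ANSI_PADDING, one quick sanity check is to list user-table string columns stored with ANSI_PADDING OFF on the subscriber. A sketch (the catalog views exist in SQL Server 2005):

    SELECT o.name AS table_name, c.name AS column_name
    FROM sys.columns AS c
    JOIN sys.objects AS o ON o.object_id = c.object_id
    WHERE c.is_ansi_padded = 0                 -- created under SET ANSI_PADDING OFF
      AND TYPE_NAME(c.system_type_id) IN ('char', 'varchar', 'binary', 'varbinary')
      AND o.type = 'U'
    ORDER BY o.name, c.name;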
Does it make a difference to add the DEFAULT keyword in CREATE TABLE? Posted: 24 Sep 2013 10:12 AM PDT The MySQL manual says: […] Someone made a point in the comments, saying: […] Does it make any difference if the […]?
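The quoted manual passage was lost; this question is usually about the optional DEFAULT keyword in table options, so, assuming that context, a sketch of the two forms the manual treats as identical:

    -- With the optional keyword:
    CREATE TABLE t1 (c VARCHAR(10))
      DEFAULT CHARACTER SET = utf8
      DEFAULT COLLATE = utf8_general_ci;

    -- Without it; MySQL parses both to the same table definition:
    CREATE TABLE t2 (c VARCHAR(10))
      CHARACTER SET = utf8
      COLLATE = utf8_general_ci;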
dcexec login failure Posted: 24 Sep 2013 11:23 AM PDT I have a SQL Server 2008 R2 instance that I have been trying to get Data Collection set up on. I get everything set up, but no data gets uploaded. I've traced the root of the problem to a failure in the SQL Agent jobs where they call dcexec. Isolating that out, I can reproduce the issue by running: […] I run the above command from a PowerShell window that I've opened with RunAs Administrator. The account I am running the above with is a domain account that is a local administrator on the server. In addition, the account also has the sysadmin role on the SQL instance. The error that I get is: […] That is the same error I see in the history on the SQL Server Agent job. Note that if I execute dcexec with a bad instance name, such as: […] then, as expected, I get a different error: […] Any ideas on how to successfully log in to the SQL instance with dcexec?
Max Memory settings on Active/Active SQL Server 2008 R2 Cluster Posted: 24 Sep 2013 02:04 PM PDT […] The only time the instances are on the same node is when I am patching the other node. Brent Ozar's "maximum server memory" recommendation is to leave Windows 4GB or 10%, whichever is more: http://www.brentozar.com/archive/2012/11/how-to-set-sql-server-max-memory-for-vmware/ Since this is a cluster, how should I set the max memory on each node? Should I treat each as a standalone server? That would make sure memory on each node is not wasted. However, after a node failure, the max memory total for all 4 instances would exceed the system memory of the single node. Will this cause any issues in the timeframe until we get the second node recovered? Do I need to lower the max memory setting on the 4 instances until the secondary node is recovered? Or is SQL Server smart enough to keep working (using the page file if necessary)?
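Worth noting: max server memory is a dynamic setting, so one option is to treat each node as standalone and shrink the caps only while doubled up after a failover. A sketch (the 28 GB figure is an assumption, not from the post):

    EXEC sys.sp_configure 'show advanced options', 1;
    RECONFIGURE;

    -- Reduce this instance's cap while four instances share one node:
    EXEC sys.sp_configure 'max server memory (MB)', 28672;
    RECONFIGURE;

SQL Server will also keep running over-committed, paging as needed, but response times usually suffer until the caps are adjusted.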
Why does MySQL allow HAVING to use SELECT aliases? Posted: 24 Sep 2013 05:01 PM PDT In SQL, as far as I know, the logical query processing order, which is the conceptual interpretation order, starts with FROM in the following way: FROM, WHERE, GROUP BY, HAVING, SELECT, ORDER BY.
Following this list it's easy to see why you can't use SELECT aliases in a WHERE clause: the alias hasn't been created yet. T-SQL (SQL Server) follows this strictly, and you can't use SELECT aliases until you've passed SELECT. But in MySQL it's possible to use SELECT aliases in the HAVING clause, even though HAVING should (logically) be processed before the SELECT clause. How can this be possible? To give an example: […] The statement is invalid in T-SQL (because HAVING refers to the SELECT alias) but works just fine in MySQL. Based upon this, I'm wondering: […]
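The example statement did not survive the digest; a hedged sketch of the kind of query at issue (table and alias names are assumptions):

    SELECT customer_id, COUNT(*) AS order_count
    FROM orders
    GROUP BY customer_id
    HAVING order_count >= 10;   -- MySQL resolves the alias; T-SQL raises "Invalid column name"

    -- The portable form repeats the expression instead of the alias:
    -- HAVING COUNT(*) >= 10;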
Understanding disk requirement to make failover cluster Posted: 24 Sep 2013 10:14 AM PDT Please take a look at the following screenshot. I am trying to build a SQL Server 2012 failover cluster using two nodes. As you know, we need a shared SAN which will be used by both nodes to store data, but I have some confusion regarding it. I was under the impression that we need one drive, e.g. Z, for the "Temp DB directory", which will be shared by both nodes to store data. But someone told me that these will actually be two drives, one per node, which will appear to the nodes as one. So I am not sure which is true. Maybe we are both saying the same thing!?
Problem with PostgreSQL and the geqo optimizer Posted: 24 Sep 2013 11:33 AM PDT I'm trying to compare the two execution plans (exhaustive search and geqo) from the optimizer, but I always get the same plan from both types. These are the variables I use for geqo: […] My queries have 8-12 […]. I tried to add more variables to my queries to force the optimizer to choose another plan: […] But like I said, I always get the same plan as the plan from the exhaustive search.
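The poster's variables were lost; for reference, a minimal sketch of flipping between the two planners (the threshold values are assumptions):

    -- GEQO only engages once a query has at least geqo_threshold FROM items
    -- (the default is 12, so 8-12 relations may never trigger it):
    SET geqo = on;
    SET geqo_threshold = 2;
    SET geqo_effort = 5;                -- 1..10
    EXPLAIN SELECT * FROM a JOIN b USING (id) JOIN c USING (id);

    -- Exhaustive planner for comparison:
    SET geqo = off;
    EXPLAIN SELECT * FROM a JOIN b USING (id) JOIN c USING (id);

With few relations GEQO often converges on the same join order as the exhaustive search, so identical plans are not necessarily a misconfiguration.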
SQL Server 2008 Convert String to Datetime question Posted: 24 Sep 2013 09:24 PM PDT I have a string column in a table that displays data as […] I'm trying to convert this column to datetime, but I can't figure out how to extract only the date and change the data type. […] Conversion failed when converting date and/or time from character string. Thank you
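The sample value was lost, so this is a hedged sketch only (the column name, table name, and style 101 for mm/dd/yyyy strings are assumptions):

    -- Take the date part of the string and convert with an explicit style:
    SELECT CONVERT(date, LEFT(DateStr, 10), 101) AS date_only
    FROM dbo.MyTable;

    -- Locate the rows behind "Conversion failed":
    SELECT DateStr
    FROM dbo.MyTable
    WHERE ISDATE(LEFT(DateStr, 10)) = 0;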
When was Torn Page Detection and Checksum introduced to SQL Server and what are the upgrade behaviors? Posted: 24 Sep 2013 02:33 PM PDT There are two different page verify options in modern SQL Server: Torn Page Detection and Checksum (None is, of course, also an option). I believe Checksum was introduced in SQL Server 2005 and that upgrading or restoring a DB from a prior version would maintain its previous page verify method, i.e. there was no implicit upgrade. The problem is that we have a production database that went into production on SQL Server 2000 and has since moved to a SQL Server 2008 R2 server. Page Verify is set to None when I had been expecting Torn Page Detection. Going back this amount of time, we seem to think the DB was originally developed in SQL Server 7.0 and then migrated to SQL Server 2000, which may explain the observed result. I was wondering when Torn Page Detection and Checksum became features of SQL Server, and how they behave when a database is migrated or upgraded to newer versions.
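A quick way to audit the current settings and switch an upgraded database over (the database name is a placeholder):

    SELECT name, page_verify_option_desc
    FROM sys.databases;

    ALTER DATABASE [YourDb] SET PAGE_VERIFY CHECKSUM;
    -- Existing pages only acquire checksums as they are next written.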
Enforce a constraint problem Posted: 24 Sep 2013 12:54 PM PDT This business model is for a case management database. It is closely modeled on the idea of a file folder representing the phase and a sequential checklist representing the stages. A case consists of a phase that can have one or more stages. A phase can only have one stage that is "Current" or open at any one point in time. A case can only start from one type of stage, but can progress to any one of a number of stages that are end types. In this business model there are many different types of phases and stages. An example: you apply for a license. The process always starts with you submitting a form, but can have different endings: the application is approved, or rejected, or sent back for more information. Problem: […]
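The problem statement itself was lost, but the "only one Current stage per phase" rule above is the part that usually needs declarative enforcement. A hedged sketch, assuming SQL Server 2008+ and invented names:

    -- A filtered unique index permits any number of closed stages per phase,
    -- but at most one row flagged as current:
    CREATE UNIQUE INDEX UX_Stage_OneCurrentPerPhase
        ON dbo.Stage (PhaseId)
        WHERE IsCurrent = 1;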
Does TRIGGER improve the performance? Posted: 24 Sep 2013 03:26 PM PDT I understand that […] Replacing two or three queries with a trigger-based query, as with the trigger […] Additional information: the database is MySQL 5.5 with InnoDB.
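A hedged sketch of the pattern in question (table and column names are assumptions): a trigger maintaining a summary row in place of a second statement issued by the application.

    CREATE TRIGGER orders_after_insert
    AFTER INSERT ON orders
    FOR EACH ROW
      UPDATE customer_stats
         SET order_count = order_count + 1
       WHERE customer_id = NEW.customer_id;

The trigger's work still runs inside the same transaction as the INSERT, so this mainly saves client round trips rather than server-side work.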
Restrict access to rows by username Posted: 24 Sep 2013 01:22 PM PDT In my database I store data of different users (e.g. addresses or invoices) in the corresponding tables. I want to make sure that a logged-in user only has access to their own data in the tables (so that a user cannot read, e.g., an address of another user). Currently this is done in the application accessing the MySQL server. Because the application will be split into multiple independent parts, written in different languages, I'm looking for a solution that is closer to the database; otherwise I need to make sure that the access rules are equal in every application. While I also have the alternative of a middleware on my list, I'm looking for an in-database solution in the first place. Currently I already have a structure running in a test environment (it is a shortened version for illustration). But I would like to know if there is a better solution to achieve this. I already know that a […] Tables: […] Views: […] Constraints: […]
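One common in-database pattern for MySQL (identifiers are assumptions): a DEFINER view filters rows by the connected account, and users are granted rights on the view only, never on the base table.

    CREATE DEFINER = 'root'@'localhost' SQL SECURITY DEFINER VIEW user_invoices AS
      SELECT i.*
      FROM invoices AS i
      WHERE i.owner = SUBSTRING_INDEX(USER(), '@', 1);
    -- USER() reports the connected client even inside a DEFINER view,
    -- whereas CURRENT_USER() would report the definer.

    GRANT SELECT ON mydb.user_invoices TO 'alice'@'%';
    -- No grants on mydb.invoices itself, so the filter cannot be bypassed.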
Managing constant changes to database Posted: 24 Sep 2013 03:22 PM PDT Here's the deal: I'm in charge of creating a website that will host small web apps. These apps will be photo contests, creative sentences and other similar gibberish. The thing is, since apps will be constantly changing and new ones will come up, what's the best way to save the config specs for each one? Should I create new tables and fields as I need them? Or should I serialize this data and put it in one field? In the first case, instead of having one apps table, I was thinking of having a table for each app; this way I could have a more modular way of dealing with apps, making changes easier, or more painful (in case the changes apply to all the apps). Or, if I serialize everything, then I would only have one table and wouldn't need to make changes to the table, but this option seems a bit lame to me.
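A third option sits between those two (a sketch, not from the original post): a generic key/value settings table, which avoids both per-app DDL and opaque serialized blobs.

    CREATE TABLE app_config (
      app_id INT NOT NULL,
      name   VARCHAR(64) NOT NULL,
      value  TEXT NULL,
      PRIMARY KEY (app_id, name)
    );

    -- Adding a new setting is an INSERT, not an ALTER:
    INSERT INTO app_config VALUES (1, 'max_photo_size_kb', '2048');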
Alter mysql database engine doesn't work from cluster to other Posted: 24 Sep 2013 10:22 AM PDT I got into this issue when I tried to alter my database table engine to […] It did not throw any error, but did not work. I just want to know why this did not work, as I got it from […] For a client requirement I may need to alter the engine on an installed database without losing any data. Any ideas or help that anybody can provide to resolve the issue?
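One detail that fits the symptom (hedged, since the actual statement was lost): when the target engine is unavailable and NO_ENGINE_SUBSTITUTION is not in sql_mode, ALTER TABLE leaves the table unchanged and only raises a warning. A sketch for checking:

    ALTER TABLE mytable ENGINE = InnoDB;
    SHOW WARNINGS;   -- "Unknown storage engine" here means nothing was changed

    -- Confirm which engine the table actually uses:
    SELECT engine
    FROM information_schema.tables
    WHERE table_schema = DATABASE() AND table_name = 'mytable';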
MS Access: error 3045 while linking to back-end Posted: 24 Sep 2013 12:22 PM PDT In our environment multiple users each work with their own copy of the front-end, with tables linked to a single networked back-end (both files are .mdb). Each user has permissions to read and write to the location of the BE, and both FE and BE are set to "shared access". Recently a single user started getting error 3045, "Could Not Use Database; File Already in Use", while trying to open forms using linked data. The same error appears while trying to re-link the tables using the manager. All other users can still work normally. We tried restarting his workstation and removing local files and re-copying the FE, and it didn't work. There is no .ldb file left after all users close the FE; compacting after that and restarting the FE of the unlucky user didn't help either. Said user is working on AC2010, but some others are working on AC2007 and AC2003. Please help! Edit: the BE is stored on Windows Server 2003.
Security for Oracle linked server from SQL Server Posted: 24 Sep 2013 04:22 PM PDT I'm trying to make the linked server from SQL Server 2005 to Oracle more secure by not having user/pass in the "Be made using this security context:" fields, so only a specific user can access the data from the linked server. So I tried mapping a SQL security user to the Oracle user/pass by inserting it in the "Local server login to remote server login mappings:" area, then setting "For a login not defined in the list above, connections will:" to "Not be made". When I click OK, I get: […] Is it not possible to map a SQL login to an Oracle login? Is there any way to get this to work?
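For reference, the T-SQL equivalent of those dialog settings (server, login, and credential names are placeholders):

    -- Map one specific SQL login to the Oracle credentials:
    EXEC master.dbo.sp_addlinkedsrvlogin
         @rmtsrvname  = N'ORACLE_LINK',
         @useself     = N'False',
         @locallogin  = N'AppLogin',
         @rmtuser     = N'oracle_user',
         @rmtpassword = N'oracle_pass';

    -- Drop the catch-all mapping so every other login is refused:
    EXEC master.dbo.sp_droplinkedsrvlogin
         @rmtsrvname = N'ORACLE_LINK',
         @locallogin = NULL;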
Optimizing mysql queries running on 50 million rows Posted: 24 Sep 2013 12:31 PM PDT Is there any technique/advice for optimizing the performance of the queries below? According to my needs, my average DB size will be approximately 30 million rows per day, so every second shaved off will make a huge difference. My DB engine is InnoDB and I'm using a 1-core CPU with 2GB RAM. […] Exec time: 7.5 sec (5.5 mil. rows) […] Exec time: 88.4 sec (5.7 mil. rows) Any help would be appreciated.
Statistical Analysis of Data that has to be done in an order? Posted: 24 Sep 2013 09:22 PM PDT Bear with me - this is the first time I've tried this in SQL Server; normally I have been doing it on the front end ;) I am implementing some analysis on time-coded data series. This is not super complicated stuff, but some of it requires numbers we do not store in the database, which have to be calculated by aggregating the numbers with a specific algorithm IN ORDER. To give an example: […] This cannot be pre-calculated due to dynamic filtering - there are a number of filters that can be applied to the data. So far - in the past - I pulled the data into the application; now, for the standard stuff, I plan to try to keep it in SQL Server. My problem now is - I can see how that works (acceptably) in SQL Server: […] But if I put that into a view... and then filter out rows, the sum is still calculated from the beginning. And I need a view because I want (need) to map that standard analysis data into an ORM (so dynamic SQL is out). Anyone have an idea how to do that?
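The poster's working query was lost; a hedged sketch of an ordered running sum (SQL Server 2012+ window syntax; table and column names are assumptions):

    SELECT m.series_id,
           m.ts,
           m.value,
           SUM(m.value) OVER (PARTITION BY m.series_id
                              ORDER BY m.ts
                              ROWS UNBOUNDED PRECEDING) AS running_sum
    FROM dbo.measurements AS m;

Wrapped in a view, the window is computed before any outer WHERE is applied, so filtered-out rows still contribute to the sum, which matches the behavior described.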
Record versioning and promotion Posted: 24 Sep 2013 07:22 PM PDT Let's say we have this hierarchy: […] Now, this will not reflect real life completely, but: […] 3. Each city can modify laws. After they modify a law, it is available to the city itself and to the regions within the city. After a 3-step approval process, it can become a country-level law, and all children will inherit the new version of the law. Each city can still modify the new version of a law to have changes. Currently we solve this issue like this: we have a table […] Because of this we have to use table-valued functions in our application to filter the specific records that are relevant to the current location (some laws come from the country level, some laws come from the city level, some laws come from the organization level). This makes the database much slower. We cannot use native unique constraints on our fields, because all fields must be unique only within a branch, so we have to use custom check constraints to keep data clean. Also, inserting and editing records is much slower, since we have to promote some records to the country level and then apply data transformations for the other countries. Currently we have around 150 tables, and we use a lot of joins. I'm not sure what's the best way to model complex hierarchical data with versioning. (I'm thinking we need an enhanced version of source control...)
Are there any disadvantages to partitioning on financial year? Posted: 24 Sep 2013 08:22 PM PDT Our current setup has one table per financial year (May 1 - April 30). Each table has approx 1.5 million rows. We have about 8 years of data, and will obviously be adding each year. The majority of queries are within the financial year/one partition. Either […] My plan is to have a range partition on an InnoDB table, e.g. […] This means that the PK has to become […] Are there any significant disadvantages to partitioning compared to having an unpartitioned table? I know that means the PK is now length 12 and all further indexes will have that prepended to them. Does that make a difference? The table needs to work faster on reads than writes, and there are a fair few indexes on it. We do sometimes need to query across all time or over "the last X months", but this is pretty rare. The main advantages of moving to a single table are eliminating the logic in the application that works out which table to insert/update/select from, and not needing to compute unions in those situations where we need more than one table.
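The DDL was lost; a hedged sketch of a range partition by financial year (all names and the year encoding are assumptions). Note that MySQL requires the partitioning column to appear in every unique key, including the primary key, which is where the longer PK comes from:

    CREATE TABLE transactions (
      id       INT UNSIGNED NOT NULL AUTO_INCREMENT,
      fin_year SMALLINT NOT NULL,          -- e.g. 2013 = May 2013 through April 2014
      tx_date  DATE NOT NULL,
      amount   DECIMAL(12,2) NOT NULL,
      PRIMARY KEY (id, fin_year)           -- fin_year must be part of the PK
    ) ENGINE = InnoDB
    PARTITION BY RANGE (fin_year) (
      PARTITION p2012 VALUES LESS THAN (2013),
      PARTITION p2013 VALUES LESS THAN (2014),
      PARTITION pmax  VALUES LESS THAN MAXVALUE
    );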
MongoDB: move documents before capping Posted: 24 Sep 2013 11:22 AM PDT The cappedCollection concept works well for most of my projects, where cleaning old data without care makes sense. For other projects, I need a more complex and safe concept. The requirement is nearly the same as […] Similar to the logrotate rules, I'd like the main collection not to become too large => capped by size; if possible, capping by timestamp might be a plus. This sounds like a cappedCollection, but I do not want any data loss when it's capped. The old data should be stored into another db's collection that must be […] Example: […] Before removing old documents, they are safely moved into […] No data is lost and the main collection remains small and fast. Storing the data in another database avoids db locks, e.g. repairDatabase tasks on an archive file will not affect or delay the main collection. Is there a good practice for achieving this - as reliably and automated as possible - without writing all the data transfer logic for a cronjob, which would handle the data transfer but must never ever be missed, because data is lost if capping starts before the old data is copied into the archive?
Cannot initialize the data source object of OLE DB provider "Microsoft.ACE.OLEDB.12.0" for linked server Posted: 24 Sep 2013 02:14 PM PDT This is going to be a long story, I'm afraid ;-( I have to read some text files into a SQL database. This database is on a SQL 2008 R2 x64 database server. I want to use a linked server, so after some research I found that I had to install the Microsoft.ACE.OLEDB.12.0 x64 provider. I did this on my local machine, and after I created my linked server I had some trouble reading text files. The provider for the linked server was set to Microsoft.ACE.OLEDB.12.0, the data source pointed to the correct folder, and the provider string was set to "TEXT", but I could not test the connection. It was throwing this error: […] After some research, I found that the SQL service was running under the NETWORK SERVICE account. When I changed this to Local Account, everything was working fine. Note: I also had to change some settings for the provider; "Allow inprocess" is checked, otherwise it was not possible to browse the files in the configure-data-source folder. But now I try to implement this solution on my customer's network and I cannot get it to work. It is also a SQL 2008 R2 x64 DB server, and the Microsoft.ACE.OLEDB.12.0 provider is installed, but it is still throwing the mentioned error. So I asked them which account is used to run the SQL service. This is an AD account belonging to the local Administrators group of this server, and therefore it has enough privileges as far as I can see. The user who is testing is connected to the database engine with a Windows AD account, also belonging to the administrators group. But when he tries to test the connection, it throws the same error. Another thing I tried is using OPENROWSET; this also gives an unspecified error. And yes, I have set the "Ad hoc distributed queries" option using the next statement: […] So here I've got my development box, where everything is working fine, but when I deploy to the test server it has these connection problems. Actually I don't know what to do or check anymore; I have been working and searching for several days now. Does anyone have some other suggestions I can try?
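For reference, the elided configuration statement was presumably something like the following, shown with a typical OPENROWSET test against a text folder (the path and file name are placeholders):

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'Ad Hoc Distributed Queries', 1;
    RECONFIGURE;

    SELECT *
    FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                    'Text;Database=C:\ImportFolder\',
                    'SELECT * FROM data.txt');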
SUPER privilege not defined for master user in Amazon MySQL RDS Posted: 24 Sep 2013 05:22 PM PDT I have created one medium instance on Amazon RDS in the Asia Pacific (Singapore) region. I have created my master user with a master password, and it is working/connecting fine with Workbench installed on my local PC. When I try to create a function on that instance, it shows me the following error: […] On my instance, the variable log_bin_trust_function_creators shows OFF. Now when I go to change that variable using […] it gives me another error: […] I don't know how to solve this error. Can anybody help?
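The elided errors are presumably the usual SUPER-privilege refusals; a hedged sketch of the failing statement and the standard RDS workaround:

    -- Fails on RDS because the master user lacks SUPER:
    SET GLOBAL log_bin_trust_function_creators = 1;
    -- ERROR 1227 (42000): Access denied; you need (at least one of) the SUPER privilege(s)

    -- The fix lives outside SQL: set log_bin_trust_function_creators = 1 in the
    -- instance's DB parameter group (RDS console or CLI) and apply the group.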
Is it possible to pipe the result of a mysqldump straight to rsync as the source argument? Posted: 24 Sep 2013 06:22 PM PDT Is it possible to pipe the result of a mysqldump straight to rsync as the source argument? Conceptually, I was thinking something like: […] I've seen people pipe the result to mysql for their one-liner backup solution, but I was curious whether it is possible with rsync. You know - 'cause rsync is magic :) Thanks for your time!
Deleting Data From Multiple Tables Posted: 24 Sep 2013 03:01 PM PDT Suppose I've a table called UNIVERSITY containing university names: […] Now these universities' IDs have (obviously) been used in many tables within the database (named, e.g., Education), suppose 10 tables. Q: Now what happens if I delete one university? A: The universityID field in the other tables becomes NULL. But I don't want that; rather, when I delete one university from the UNIVERSITY table, all its occurrences in all 10 tables should be deleted too. What would be the shortest and easiest MySQL query for this operation? NOTE: I'm using PHP.
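A hedged sketch of the usual approach (table and column names are assumptions): declare each referencing column as a foreign key with ON DELETE CASCADE, so one DELETE on the parent cleans up every child table.

    -- Requires InnoDB tables:
    ALTER TABLE student
      ADD CONSTRAINT fk_student_university
      FOREIGN KEY (universityID) REFERENCES UNIVERSITY (universityID)
      ON DELETE CASCADE;   -- repeat for each of the 10 referencing tables

    DELETE FROM UNIVERSITY WHERE universityID = 1;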
Does SQL Server read all of a COALESCE function even if the first argument is not NULL? Posted: 24 Sep 2013 02:31 PM PDT I'm using a T-SQL […] If, for example, […] I know with the VB.NET […]
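A hedged sketch of the concern (the expensive function is hypothetical): COALESCE is documented as shorthand for a CASE expression, so evaluation generally short-circuits.

    -- If @cached is not NULL, the ELSE branch is normally never evaluated:
    SELECT COALESCE(@cached, dbo.ExpensiveFunction(@id));

    -- The documented expansion:
    SELECT CASE WHEN @cached IS NOT NULL THEN @cached
                ELSE dbo.ExpensiveFunction(@id)
           END;

Documented caveats exist for aggregates and subqueries, where an argument can be evaluated more than once.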