[how to] Refactoring SQL Server to work around Anti-Semi Join SQL 2005 bug
- Refactoring SQL Server to work around Anti-Semi Join SQL 2005 bug
- PostgreSQL: column of type timestamp expression varchar ... How do I cast the expression and why?
- Illegal mix of collations for operation 'UNION'
- Adding an index to a system catalog in Postgres
- Replication error
- Window functions cause error "The Parallel Data Warehouse (PDW) features are not enabled."
- Understanding MS SQL Server Date Types
- Optimising Large SQL Server Tables
- Estimated Time to Create a Database with Microsoft Access and Python
- Running the DBT2 test results in 0.00 NOTPM
- MySQL row does not persist, but primary key with auto-increment is incremented
- AlwaysOn AG, DTC with failover
- Grant Execution Permissions on SQL Server 2005 Failing
- How do I determine if a column is defined as a serial data type instead of an integer based off the catalog?
- Setting up replication alerts
- Getting "ORA-00942: table or view does not exist" while table does exist
- Datatype range Clarification [on hold]
- How to handle "or" possibilities in MongoDB
- Mongo repair database "ERROR: error: exception cloning object in local.system.replset"
- Check and remove iteratively without an index
- Security for Oracle linked server from SQL Server
- App for rapid prototyping of relational data structures
- Statistical Analysis of Data that has to be done in an order?
- Clear schema from database without dropping it
- SUPER privilege not defined for master user in Amazon MySQL RDS
- How to import a table's data into MySQL from SQL Server?
- Is it possible to pipe the result of a mysqldump straight to rsync as the source argument?
- Conditional compilation of SQL Server stored procedure
- Minimizing Indexed Reads with Complex Criteria
- Calculating percentage of a row over total sum
Refactoring SQL Server to work around Anti-Semi Join SQL 2005 bug Posted: 26 Jul 2013 09:14 PM PDT I built the following SQL Server query, but it is hitting the anti-semi join defect in SQL Server 2005, which results in inaccurate cardinality estimates (1 -- urgh!) and runs forever. Since it is a long-standing production SQL Server I can't easily suggest upgrading versions, and as such I cannot force the trace flag 4199 hint on this specific query. I'm having a hard time refactoring the WHERE ... AND NOT IN (SELECT ...) clause. Would anyone care to help? I've made sure to use the best joins based on clustered key pairs. Note that the Employee_MSG_Index table is 500m rows, doc2 is 1.5b rows, and SIS is ~500m rows. Any help would be appreciated!
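Since the query itself was stripped from this digest, here is a minimal sketch of the usual refactor for this defect: rewriting NOT IN (SELECT ...) as a correlated NOT EXISTS on the clustered key pair. The table names come from the post; the key column names are assumptions.

```sql
-- Hedged sketch: EmployeeID/MsgID are hypothetical stand-ins for the
-- clustered key pair mentioned in the post.
SELECT e.EmployeeID, e.MsgID
FROM dbo.Employee_MSG_Index AS e
WHERE NOT EXISTS
(
    SELECT 1
    FROM dbo.SIS AS s
    WHERE s.EmployeeID = e.EmployeeID
      AND s.MsgID      = e.MsgID
);
```

NOT EXISTS with equality predicates on both key columns tends to get a saner anti-semi join plan and also sidesteps the NULL-semantics trap of NOT IN.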
PostgreSQL: column of type timestamp expression varchar ... How do I cast the expression and why? Posted: 26 Jul 2013 06:35 PM PDT This is infuriating. I have a query, in psql, which gives me the exact row I want from one table, and I just want to INSERT INTO other_table SELECT .... FROM backup_table WHERE ... Here are the (barely sanitized) queries: ... which returns exactly one row. I try to insert that into my other table with something like: ... (the 'id' is pulled from the query in lieu of the name, though it doesn't matter when I change that to match the exact query). It gives the following error message: (The actual INSERT lists over 60 column names because this particular pair of tables is a nightmare which I've inherited and cannot yet normalize. However, the only detail which should be relevant to this question is that I'm excluding the ts column from the INSERT because it's set automatically in the target column. Changes to the target table, mytable in these examples, trigger an insertion into the source, my_log here.) I'm trying to restore a row which was removed from the main table. The only column on which I can distinguish among the rows in my_log to select the most recent is the timestamp which, as the error message indicates, is of type: ts | timestamp without time zone | not null in my_log and ts | timestamp without time zone | not null default now() in mytable. I've tried many different incantations of ::date and ::timestamp, date() and cast(ts as text) and so on ... both on the literal time/date string and on the ts column in the WHERE clause. None of that seems to help. So, what am I doing wrong, and why does syntax which works just fine for a SELECT query fail in the SELECT clause of an INSERT INTO?
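Since the actual statements were stripped from the digest, here is a minimal sketch of the shape that usually resolves this error: list the target columns explicitly (omitting ts) and cast the literal rather than the column. mytable and my_log come from the post; the other column names are placeholders.

```sql
-- Hedged sketch: id/col_a/col_b stand in for the ~60 real columns.
INSERT INTO mytable (id, col_a, col_b)
SELECT id, col_a, col_b
FROM my_log
WHERE ts = '2013-07-26 18:35:00'::timestamp
ORDER BY ts DESC
LIMIT 1;
```

If the column lists on both sides line up positionally and ts is excluded from both, PostgreSQL has no reason to coerce a varchar into the timestamp column, which is the usual cause of this complaint.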
Illegal mix of collations for operation 'UNION' Posted: 26 Jul 2013 06:09 PM PDT I have the database and tables below. It was running fine until I ran this code to change the collation; now the website is down and shows this error: . I don't know what to do now: should I continue converting all the tables, or not? I have also converted the database to utf8. I should have backed up my database, but it's too late. Is there any rollback for this, or is there a fix? Thanks. EDIT: where there is
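For reference, the usual per-table conversion looks like the sketch below (mytable is a placeholder). Finishing the conversion so that every table involved in the UNION shares one character set and collation is what normally clears this error.

```sql
-- Hedged sketch: repeat for each table referenced by the failing UNION.
ALTER TABLE mytable CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;
```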
Adding an index to a system catalog in Postgres Posted: 26 Jul 2013 06:03 PM PDT I'm having a situation very similar to the one described here:
and, while the application itself appears to be working fine, some queries involving system catalogs are very slow. In particular, I need to calculate the on-disk size of each schema using something like this: which is dog-slow. I'm thinking of adding an index on
I'm also worried by the fact that modification of system catalogs seemed to be completely disabled in Postgres 9.0 and 9.1 (I'm using 9.2). So, the question is: what are the gotchas in adding an index to a system catalog in Postgres, and will I be nuts if I (eventually) do that on a production system?
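For context, a per-schema size query of the kind described presumably looks something like this sketch built on pg_total_relation_size; the slow part is the scan over pg_class.

```sql
-- Hedged sketch of a per-schema on-disk size report.
SELECT n.nspname AS schema_name,
       pg_size_pretty(SUM(pg_total_relation_size(c.oid))::bigint) AS total_size
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'  -- heap tables; their indexes/TOAST are counted by the function
GROUP BY n.nspname
ORDER BY SUM(pg_total_relation_size(c.oid)) DESC;
```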
Replication error Posted: 26 Jul 2013 05:19 PM PDT We have a slave server that has stopped replicating due to the following error: What could be the cause of this error? And what would be a way to fix it? The version of both master and slave is MySQL 5.5.30. What else I can't figure out is how a temporary table could lead to such an error (on the master): The last lines from the master error log: Additional info:
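Since the error text itself did not survive the digest, the usual first diagnostic steps on the slave are sketched below; skipping an event is only safe once you have confirmed the data is already consistent.

```sql
-- Run in the mysql client on the slave:
SHOW SLAVE STATUS\G
-- Last_SQL_Errno / Last_SQL_Error identify the event that broke replication.

-- Only if that event is verified safe to skip:
STOP SLAVE;
SET GLOBAL sql_slave_skip_counter = 1;
START SLAVE;
```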
Window functions cause error "The Parallel Data Warehouse (PDW) features are not enabled." Posted: 26 Jul 2013 09:03 PM PDT I have the following virtual column, However, when I execute that, I get the following. This is where it gets interesting though: this works: And, further, this works How come the desired statement doesn't work? Where is this documented? The version information was requested; this is what I see in Help → About. The result from
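The statements themselves were stripped from the digest, but this error is commonly reported when a windowed aggregate uses ORDER BY in the OVER clause on a pre-2012 engine: the 2008 R2 parser accepts the syntax (it shipped for PDW), while the box product refuses to run it. A hedged sketch with hypothetical names:

```sql
-- Typically raises the PDW error on SQL Server 2008 R2 (needs 2012+):
SELECT OrderDate,
       SUM(Amount) OVER (ORDER BY OrderDate) AS RunningTotal
FROM dbo.Orders;

-- Typically works, since ranking functions have allowed ORDER BY since 2005:
SELECT OrderDate,
       ROW_NUMBER() OVER (ORDER BY OrderDate) AS rn
FROM dbo.Orders;
```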
Understanding MS SQL Server Date Types Posted: 26 Jul 2013 04:00 PM PDT Consider the following: Output: Now, I already understand from the documentation that What I don't understand, though, is that Consider if I want to know how many integer days are represented by a But due to the endianness, you get 1966080 days! To get the correct result of 30 days, you have to reverse it: Or, of course, you can do this: But that means internally somewhere it is reversing the bytes anyway. So why did they switch endianness? I only care because I'm working on a custom UDT in SQLCLR, and the binary order of the bytes does seem to matter there, but these built-in types seem much more flexible. Does SQL Server have something internal where each type gets to provide its own sorting algorithm? And if so, is there a way I can tap into that for my custom UDT? See also a related (but different) question on Stack Overflow.
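To make the byte-order discussion concrete, here is a small sketch of the behavior described above: the first four bytes of a datetime hold the signed day count since 1900-01-01.

```sql
DECLARE @d datetime;
SET @d = '19000131';  -- 30 days after the 1900-01-01 epoch

SELECT CAST(@d AS binary(8));                                -- 0x0000001E00000000
SELECT CAST(SUBSTRING(CAST(@d AS binary(8)), 1, 4) AS int);  -- 30: the day portion
SELECT DATEDIFF(DAY, 0, @d);                                 -- 30: the safe route
```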
Optimising Large SQL Server Tables Posted: 26 Jul 2013 03:28 PM PDT I have a large table with around 50 million rows and 15 columns. Whenever I read, I always need all columns, so I can't split them. I have a clustered index on the table with 4 keys (all INT), and I always read data using those keys; if I look at the query plan, it shows 84% for the clustered index seek. But the performance is still slow. My queries are fairly simple, like this I'm not using any IN operator, sub-queries, or inline functions here... which I know would obviously make it slow. I've looked at partitioning but am not sure about it: can I get some performance improvement by partitioning? Or is there anything else I can do? I'm using SQL Server 2012 Enterprise Edition.
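For what it's worth, a clustered seek is only as narrow as the leading key columns your predicates cover; a hedged sketch with hypothetical names:

```sql
-- Hedged sketch: BigTable clustered on (Key1, Key2, Key3, Key4), all INT.
DECLARE @k1 int, @k2 int, @k3 int, @k4 int;
SELECT @k1 = 1, @k2 = 2, @k3 = 3, @k4 = 4;

-- Equality on all four keys: a narrow clustered index seek.
SELECT *
FROM dbo.BigTable
WHERE Key1 = @k1 AND Key2 = @k2 AND Key3 = @k3 AND Key4 = @k4;

-- Equality on a leading prefix only: still a seek, but a range that reads
-- (and returns) many more rows, which can dominate the elapsed time.
SELECT *
FROM dbo.BigTable
WHERE Key1 = @k1 AND Key2 = @k2;
```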
Estimated Time to Create a Database with Microsoft Access and Python Posted: 26 Jul 2013 03:19 PM PDT How long on average would it take to create a Python program for a customer database in Access? The process would involve parsing some 50-page requirement Word documents and miscellaneous Excel tables, and then inputting the gathered information into the Access database. The database has several tables: Requirements, Category, Subcategory, Film Specifications, MainDoc, Doctype, Customer, Facilities, and Products. The Category, Subcategory, Customer, Facilities and Products tables will be filled with other information not found in the documents. The only information that would be inputted would be the categorization of requirements, the requirements themselves, the responses, and the document information (ID, etc.). How long would this take for someone unfamiliar with Access or Python? I'm merely trying to get an estimate, as I'm planning to create this database soon. This may be the wrong board to ask this on, so if it is, please let me know.
Running the DBT2 test results in 0.00 NOTPM Posted: 26 Jul 2013 03:15 PM PDT I'm trying to run the MySQL DBT2 Benchmark Tool. I have unzipped the archive and run the following commands: But I got the following output: During the execution, Any idea why no transactions are executed?
MySQL row does not persist, but primary key with auto-increment is incremented Posted: 26 Jul 2013 03:35 PM PDT I have a MySQL database that is acting oddly. I insert a new row and observe that for a brief period of time (less than 30 seconds), the row persists as expected in the table. After this time, though, the row vanishes (despite no other queries being executed). This table has an integer ID as a primary key, with auto-increment set, and the primary key is auto-incremented as expected for new rows. This leads me to believe there is not some kind of transactional rollback (and my table is MyISAM, so that shouldn't be possible anyways) or anything else that is somehow reverting the database to a previous state. What logs should I be checking to see what is going on here? The contents of my '/var/log/mysql/error.log' are below, but I don't see anything unusual. I noted the 'crashed' mark on the appname/users table, but mysqlcheck suggests the table is OK. Any thoughts? |
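Given the 'crashed' mark you saw, it may be worth checking and, if needed, repairing the MyISAM table directly (the table name is taken from the post):

```sql
CHECK TABLE appname.users;
-- Only if CHECK reports the table as crashed:
REPAIR TABLE appname.users;
```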
AlwaysOn AG, DTC with failover Posted: 26 Jul 2013 04:13 PM PDT Problem: How can I run the Distributed Transaction Coordinator (DTC) on all servers in an AlwaysOn Availability Group (AG)? I do NOT need to maintain transactions over failover/switchover events. Setup: I have a Windows Failover Cluster (WSFC) with three Windows 2008 R2 servers, all running SQL 2012. Two servers are in one data center and are part of an AlwaysOn Failover Cluster Instance (FCI), while the third server is in a second data center. The WSFC is a multi-subnet cluster. Here's a sketch of the setup: I've been able to install and configure DTC to work between the two FCI nodes because they are on the same subnet and share storage. I have configured a couple of AGs and they have been working fine. This screenshot shows DTC installed on the FCI: This screenshot shows that I can configure DTC on one of the FCI nodes (whichever is active): I'd like to migrate an application that uses DTC onto this cluster and use an AG. I've read that DTC is not supported with AGs (Reference). I have not been able to find a way to configure DTC on the third node in the second data center. When I try to configure DTC on the third node, it seems to be unavailable, as shown in this screenshot: In Brent Ozar's Free Setup Checklist PDF for Availability Groups he lists: Cluster Installation... 29. If an FCI is involved, configure DTC per your Planning section decisions. In the comments on SQL Server 2012 AlwaysOn Availability Groups Rock, Brent says that "... nothing changes when AGs are in play. Just keep in mind that databases in an Availability Group don't support transactional consistency when failed over together to another replica..." This makes it seem that DTC can be used with Availability Groups as long as you understand that transactions will not be maintained in an AG switchover. I would not need it to maintain transactions from the FCI nodes. I would just need DTC available for the application to use in the case of a catastrophic disaster (where I lost my primary data center). How do I configure DTC on my third node? Or is it the case that I'm just out of luck when it comes to using AGs with an application that needs DTC?
Grant Execution Permissions on SQL Server 2005 Failing Posted: 26 Jul 2013 03:36 PM PDT This is an interesting question that is similar to some asked before, but with an interesting difference. I am attempting to add the execute permission for a user to a stored procedure using T-SQL on a SQL Server 2005 instance. This should be straightforward, and I've done it dozens of times before: The user is a local SQL Server user; it is the owner of the Immediately I receive an error:
I've checked the permissions here and there doesn't appear to be anything unusual with this user account, with the exception of the ownership of the schema. However, when I apply the permissions manually through SSMS and the securables tab in the user account, they are applied without a problem. I am executing this with a separate sysadmin-elevated account. My question is: why is this happening in T-SQL and not in SSMS? Any ideas would be appreciated. Thanks!
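For comparison, a minimal schema-qualified grant looks like the sketch below (procedure and user names are placeholders). If the failing T-SQL omits the schema, name resolution may differ from what SSMS scripts behind the scenes, which could explain the discrepancy.

```sql
-- Hedged sketch: usp_MyProcedure / MyUser are placeholders.
GRANT EXECUTE ON OBJECT::dbo.usp_MyProcedure TO [MyUser];
```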
How do I determine if a column is defined as a serial data type instead of an integer based off the catalog? Posted: 26 Jul 2013 02:52 PM PDT So I'm currently creating some SQL to read through the Postgres (9.1) catalogs to build table definitions. However, I am encountering a problem with the SERIAL/BIGSERIAL data types. Example: It gives me the database name (db), schema name (cruft), table name (temp), column name (id), default value (nextval( ... )), and data type (bigint and int8 ... NOT bigserial). I realize that I could just check to see if the default value is a sequence - but I don't believe that would be 100% accurate, since I could manually create a sequence and then create a non-serial column whose default value is that sequence. Does anyone have a suggestion for how I might accomplish this? Anything other than checking the default value for a nextval(*_seq)?
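One catalog-only approach that avoids parsing defaults: a serial column leaves an 'auto' dependency (deptype = 'a') in pg_depend between its sequence and the column, while a sequence that is merely referenced from a default does not. A sketch:

```sql
-- Lists every column whose sequence is owned by it, i.e. serial/bigserial.
SELECT a.attrelid::regclass AS table_name,
       a.attname            AS column_name,
       seq.relname          AS sequence_name
FROM pg_depend d
JOIN pg_class     seq ON seq.oid = d.objid AND seq.relkind = 'S'
JOIN pg_attribute a   ON a.attrelid = d.refobjid AND a.attnum = d.refobjsubid
WHERE d.classid    = 'pg_class'::regclass
  AND d.refclassid = 'pg_class'::regclass
  AND d.deptype    = 'a';
```

One caveat: ALTER SEQUENCE ... OWNED BY creates the same dependency, so this detects "owned" sequences, which is as close to "serial" as the catalogs record.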
Setting up replication alerts Posted: 26 Jul 2013 03:27 PM PDT I'm trying to configure alerts for replication. Currently, on my test setup, I run everything under the user 'sa'. When I try to set up an alert for agent failure I get the following error: Login failed for user 'sa'. This error is shown when I try to add a new operator. Any suggestions on how to fix this problem? Thanks.
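If the dialog keeps failing, it may be worth ruling out the GUI by creating the operator directly in T-SQL; the operator name and address below are placeholders.

```sql
EXEC msdb.dbo.sp_add_operator
     @name = N'ReplicationDBA',            -- placeholder
     @enabled = 1,
     @email_address = N'dba@example.com';  -- placeholder
```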
Getting "ORA-00942: table or view does not exist" while table does exist Posted: 26 Jul 2013 02:50 PM PDT I'm fairly new to Oracle database. I have installed I successfully created a table and inserted some data and performed some selects: Then I created a new user with the CONNECT privilege: Then I created a new role with appropriate object privileges: And granted the role to the user: Next I exited sqlplus with But when I try to select from the table it says: What I'm missing here ?! |
Datatype range Clarification [on hold] Posted: 26 Jul 2013 04:00 PM PDT What would happen if we over-sized a data type (e.g., assigning varchar(max) or varchar(500) to a field whose records only ever contain "Yes" or "No")?
How to handle "or" possibilities in MongoDB Posted: 26 Jul 2013 07:01 PM PDT I'm new to MongoDB and by no means an expert in DB design in general. This feels like a really basic question that's probably been answered a million times, but I'm having a surprisingly hard time finding an answer for it: is there a good way to easily handle either/or choices in the DB, or structure it in a way that makes it easy to deal with in code? Let's say I'm building a cocktail database in MongoDB. I'm adding the entry for a Manhattan.
So I might do a cocktails collection with an entry like: Which is fine, but how do I get the rye in there? I don't think I'd want to do "ingredient" : "Bourbon or Rye Whiskey", would I? Is it better, for the purpose of searching later, to have them separated out? What's the best practice here? Also, an even trickier drink would be something like a Martini, where I would actually use a different garnish for vodka than for gin, so I'd want to give the user the option to choose the base liquor they're working with and then give the proper garnish accordingly. Any suggestions? Thanks!
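One common way to model this, sketched in the mongo shell with hypothetical field names: store the alternatives as an array, which keeps each option individually searchable.

```js
// Hedged sketch: the field names are placeholders.
db.cocktails.insert({
    name: "Manhattan",
    ingredients: [
        { amount: "2 oz", options: ["Bourbon", "Rye Whiskey"] },
        { amount: "1 oz", options: ["Sweet Vermouth"] }
    ],
    garnish: "Maraschino Cherry"
});

// Array fields match on any element, so this finds every rye-capable drink:
db.cocktails.find({ "ingredients.options": "Rye Whiskey" });
```

For the Martini case, the per-option garnish could live next to each option (e.g. options as subdocuments with their own garnish field), so choosing the base liquor selects the garnish too.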
Mongo repair database "ERROR: error: exception cloning object in local.system.replset" Posted: 26 Jul 2013 07:04 PM PDT I'm trying to repair a mongo database and am running into the following error early on: (replica set and host names have been sanitized for public posting) To attempt the repair I shut down the mongod and started it up with My mongod.conf consists of just two lines: The .conf normally did have a replSet declaration, but this was removed for the repair, as I'm trying to run the repair on a secondary. (Per the docs, repairs can only be done on primaries or standalones, so I need to make it look like a standalone temporarily.) Any idea what the problem could be?
Check and remove iteratively without an index Posted: 26 Jul 2013 07:08 PM PDT Without sync priority or performance requirements on the task, I need to check all documents once per day and remove some of them. The find filter is not static but can be defined within a db engine script function like With .find() - no filter/condition in brackets - the function will check some non-indexed fields and decide whether the document must be removed or not.
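A minimal shell sketch of that pattern, assuming a hypothetical collection mycol and a placeholder predicate shouldRemove:

```js
// Hedged sketch: placeholder predicate over non-indexed fields.
var shouldRemove = function (doc) { return doc.expired === true; };

// Iterate everything once, removing by _id (always indexed) so the
// delete itself stays cheap even though the scan is unindexed.
db.mycol.find().forEach(function (doc) {
    if (shouldRemove(doc)) {
        db.mycol.remove({ _id: doc._id });
    }
});
```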
Furthermore, is there a variable available for checking the mongodb status, such as whether it is locked or how far it lags in sync (because of other queries and tasks on the DB)? If so, I'd prefer to delay the check_doc task, which has the lowest priority. However, so far I could not find a way to decrease priority or a flag to check the status/delay.
Security for Oracle linked server from SQL Server Posted: 26 Jul 2013 03:14 PM PDT I'm trying to make the linked server from SQL Server 2005 to Oracle more secure by not having a user/pass in the "Be made using this security context:" fields, so that only a specific user can access the data through the linked server. So I tried mapping a SQL security user to an Oracle user/pass by inserting it in the "Local server login to remote server login mappings:" area, then setting "For a login not defined in the list above, connections will:" to "Not be made". When I click OK, I get: Is it not possible to map a SQL login to an Oracle login? Is there any way to get this to work?
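For what it's worth, the same mapping the dialog builds can be attempted in T-SQL, which sometimes surfaces a clearer error; every name below is a placeholder.

```sql
-- Map one local SQL login to one Oracle credential:
EXEC sp_addlinkedsrvlogin
     @rmtsrvname  = N'ORACLE_LINK',   -- placeholder linked server name
     @useself     = 'FALSE',
     @locallogin  = N'ReportUser',    -- placeholder local login
     @rmtuser     = N'oracle_user',   -- placeholder Oracle account
     @rmtpassword = N'oracle_pass';

-- Remove the catch-all mapping so all other logins cannot use the link:
EXEC sp_droplinkedsrvlogin @rmtsrvname = N'ORACLE_LINK', @locallogin = NULL;
```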
App for rapid prototyping of relational data structures Posted: 26 Jul 2013 05:43 PM PDT What are some apps (especially web apps) that provide an Extremely Lightweight user interface for building, inserting test data into, and querying a relational data structure? The app should have some kind of "visual" interface (even if only a dropdown) for defining relationships between properties (columns in the RDBMS world). The schema (if there is one), data, and any relationships should be exportable in a common format and convention (something based on JSON maybe). An API for interacting with the base programmatically would be nice (REST and JSON for example), but since I can't find anything that fits the above criteria, I'll settle for prototype-and-then-export functionality. |
Statistical Analysis of Data that has to be done in an order? Posted: 26 Jul 2013 08:43 PM PDT Bear with me - this is the first time I've tried this in SQL Server; normally I have been doing it on the front end ;) I am implementing some analysis on time-coded data series. This is not super-complicated stuff, but some of it requires numbers we do not store in the database, which have to be calculated by aggregating the numbers with a specific algorithm IN ORDER. To give an example:
This cannot be pre-calculated due to dynamic filtering - there are a number of filters that can be applied to the data. So far - in the past - I pulled the data into the application; now, for the standard stuff, I plan to try to keep it in SQL Server. My problem is this - I can see how that works (acceptably) in SQL Server: But if I put that into a view... and then filter out rows, the sum is still calculated from the beginning. And I need a view because I want (need) to map that standard analysis data into an ORM (so dynamic SQL is out). Anyone have an idea how to do that?
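One way out, sketched under the assumption of SQL Server 2012+ and hypothetical names: an inline table-valued function applies its parameters before the running sum is computed (unlike a view, where outer filters are applied after the window function), and many ORMs can map a TVF much like a view.

```sql
-- Hedged sketch: TimeSeries(EventTime, Value) is a placeholder table.
CREATE FUNCTION dbo.fn_RunningTotals (@From datetime2, @To datetime2)
RETURNS TABLE
AS RETURN
    SELECT t.EventTime,
           t.Value,
           SUM(t.Value) OVER (ORDER BY t.EventTime
                              ROWS UNBOUNDED PRECEDING) AS RunningSum
    FROM dbo.TimeSeries AS t
    WHERE t.EventTime >= @From      -- filters applied BEFORE the window runs
      AND t.EventTime <  @To;
```

On 2005/2008, where the ROWS frame isn't available, a correlated subquery over the ordering column is the usual fallback.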
Clear schema from database without dropping it Posted: 26 Jul 2013 07:43 PM PDT I'm working on a school project where I have a SQL Server with a database for my team. I have already imported a local database created with Entity Framework. Now the model has changed; table properties were added/deleted, and I want to update my full database. However, the teachers didn't give us create rights, so dropping the whole database isn't really an option. Now my question is: is it possible to drop all the tables currently in the database and just import the newly created one without problems? Or do I really need to drop the whole database?
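If dropping every table is acceptable, one common approach is to generate the DROP statements from the catalog views; review the output before executing, and drop foreign key constraints first if tables reference each other.

```sql
-- Generate a DROP TABLE statement for every user table:
SELECT 'DROP TABLE ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name) + ';'
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
ORDER BY s.name, t.name;
```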
SUPER privilege not defined for master user in Amazon MySQL RDS Posted: 26 Jul 2013 03:43 PM PDT I have created a medium instance on Amazon RDS in the Asia Pacific (Singapore) region. I have created my master user with a master password, and it is working/connecting fine with Workbench installed on my local PC. When I go to create a function on that instance, it shows me the following error:
On my instance, the variable (log_bin_trust_function_creators) shows OFF. Now when I go to change that variable using it gives me another error:
I don't know how to solve this error. Can anybody help?
How to import a table's data into MySQL from SQL Server? Posted: 26 Jul 2013 06:43 PM PDT I am trying to export a table from SQL Server 2008 R2 to MySQL 5.5. For this I am using Here this error may be occurring because the table in SQL Server has a column with data type Please provide your expert answers. If not possible through
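If the tool route keeps failing, one commonly used fallback (sketched with placeholder names) is to export the SQL Server table to a flat file, e.g. with bcp, and bulk-load it on the MySQL side:

```sql
-- Hedged sketch: load a CSV exported from SQL Server into a MySQL table.
LOAD DATA LOCAL INFILE '/tmp/mytable.csv'
INTO TABLE mytable
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\r\n';
```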
Is it possible to pipe the result of a mysqldump straight to rsync as the source argument? Posted: 26 Jul 2013 04:43 PM PDT Is it possible to pipe the result of a mysqldump straight to rsync as the source argument? Conceptually, I was thinking something like: I've seen people pipe the result to mysql for their one-liner backup solution, but I was curious if it was possible with rsync. You know --- 'cause rsync is magic :) Thanks for your time!
Conditional compilation of SQL Server stored procedure Posted: 26 Jul 2013 07:34 PM PDT Short version: Is there some way to conditionally compile chunks of T-SQL code in a SQL Server Database Project using SQL Server Data Tools for Visual Studio 2010? I'm using SQL Server Data Tools in Visual Studio 2010 to work on an experimental SQL Server Express database. The eventual destination, if things work well, would be an enterprise SQL Server platform. I have both a 2008 instance on one box and a 2012 instance on another, because my company is in the process of migrating from 2008 to 2012 for its many enterprise databases as well. In other programming languages I have used, preprocessor directives make it easy to conditionally compile parts of a code base. The most common uses for this are to have different code for different platforms in restricted sections, or to exclude debugging output code from release builds. Both of these could be very helpful in some stored procedures I'm working on. Is there anything like this available? I know I can use Example:
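One mechanism that works in SSDT projects, sketched below: SQLCMD variables declared in the project are substituted at publish time, so branching on a variable's value approximates conditional compilation ($(DebugBuild) is a hypothetical variable).

```sql
-- Hedged sketch: DebugBuild would be declared on the project's SQLCMD
-- Variables page and given a value per publish profile.
IF '$(DebugBuild)' = 'True'
BEGIN
    PRINT 'Debug tracing enabled';
END
```

The substitution is textual, so the branch still exists at run time; it just evaluates a constant baked in at publish.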
Minimizing Indexed Reads with Complex Criteria Posted: 26 Jul 2013 02:43 PM PDT I'm optimizing a Firebird 2.5 database of work tickets. They're stored in a table declared as such: I generally want to find the first ticket that hasn't been processed and is in My processing loop would be:
Nothing too fancy. If I'm watching the database while this loop runs, I see the number of indexed reads climb with each iteration. The performance doesn't seem to degrade terribly, as far as I can tell, but the machine I'm testing on is pretty quick. However, I've received reports of performance degradation over time from some of my users. I've got an index on -- Edits for comments -- In Firebird you limit row retrieval like: So when I say "first", I'm just asking it for a limited record set where
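For reference, the retrieval shape described above usually looks like this in Firebird, where FIRST limits the rows returned; all names below are placeholders.

```sql
-- Hedged sketch: grab the earliest unprocessed, queued ticket.
SELECT FIRST 1 id
FROM tickets
WHERE processed = 0
  AND status = 'QUEUED'
ORDER BY priority, entered_on;
```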
Calculating percentage of a row over total sum Posted: 26 Jul 2013 06:42 PM PDT Apologies for the bad title; I wasn't sure what would make a good title for this. This is currently a (simplified view of the) data I'm working with: I need to calculate the percentage of the total commission each agent is responsible for. So, for Agent Smith, the percentage would be calculated as So, my expected data would be I have a function returning the commission for each agent. I have another function returning the percentage as Is there a way of having the I was thinking of something along the lines of a two-part query: the first part would fetch the I am limited to using SQL, and I'm running on Oracle 10g R2.
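Since this is Oracle 10g, the analytic function RATIO_TO_REPORT can compute each row's share of the total in a single pass, without a separate total query; the table and column names below are placeholders.

```sql
-- Hedged sketch: one pass computes each agent's share of all commissions.
SELECT agent,
       commission,
       ROUND(100 * RATIO_TO_REPORT(commission) OVER (), 2) AS pct_of_total
FROM agent_commissions;
```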