[how to] Query_cache doesn't work with join
- Query_cache doesn't work with join
- SSRS switch statement
- Calculating the number of times a group occurs
- mysql doesn't have write access in data dir?
- Which schema is better for a shopping project?
- MSG 666 when running an insert query on 80M-row indexed table
- Locking the table so that no transactions can complete
- Export/Import Subset of data in postgres
- How do I set up high availability at the instance level? [on hold]
- SQL Server - How to determine ideal memory for instance?
- How to use OR in query with priority?
- Change MySQL database name in SQL file
- Can SQL update script update all rows even though there is where clause
- MySQL : Extract SQL statements from binlog
- How to drop SQL Server database currently in use and in Single user mode
- inner joins with where clause for no values?
- PostgreSQL allocate memory for each connection
- Combining multiple left join for one single line
- psql, record separators in the data
- Incorrect information in .frm file after a power outage?
- How to access a SQL Server database from other computer connected to the same workgroup?
- Queryplan changes depending on filter values
- How to build a database that contains only the delta from yesterday
- Why does Log Shipping .TRN file copy just stop
- Is it a bad practice to always create a transaction?
- Oracle schema import is not importing all the tables present in the schema dump file
- Selecting with varbinary(max) criteria (in the where clause)
Query_cache doesn't work with join Posted: 06 Sep 2013 07:13 PM PDT I have a simple join query, but for some reason query_cache won't cache it, even though query_cache is on.
Both tables are InnoDB with the utf8 charset.
SSRS switch statement Posted: 06 Sep 2013 04:17 PM PDT I have a report with a variable number of columns. I can hide columns with the My thought was to create a multivalue column whose value depends on a switch statement. The idea is:
It works if I run the report at the maximum drill-down; that is, if I run it for a specific state, the sites in that state show. But if I go up levels in the hierarchy I get
Calculating the number of times a group occurs Posted: 06 Sep 2013 04:17 PM PDT I'm trying to figure out how many times a particular grouping has occurred in my database. I have two relevant tables. Let's say I want to know what applications were used in a given session. I can use: to get the distinct program names, but I'm interested in how often particular groups of applications are used in a given session. So essentially: how often are these three applications grouped together? Is there a way to get results in the form of
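The question's table definitions were stripped from the digest, so the sketch below assumes a simple usage log linking sessions to the programs used in them; all names here are illustrative. The idea is to reduce each session to its distinct set of programs, then count how often each exact set occurs:

```python
import sqlite3
from collections import Counter

# Hypothetical stand-in for the question's two tables: a log of which
# program was used in which session.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE usage_log (session_id INTEGER, program_name TEXT);
INSERT INTO usage_log VALUES
  (1, 'word'), (1, 'excel'), (1, 'word'),
  (2, 'word'), (2, 'excel'),
  (3, 'word');
""")

# One distinct set of programs per session...
pairs = conn.execute(
    "SELECT DISTINCT session_id, program_name FROM usage_log").fetchall()
groups = {}
for sid, prog in pairs:
    groups.setdefault(sid, set()).add(prog)

# ...then count how often each exact group occurs across sessions.
group_counts = Counter(tuple(sorted(g)) for g in groups.values())
print(group_counts)  # the pair (excel, word) occurs in two sessions
```

In MySQL the same reduction can be done in SQL alone by building a canonical group string per session with `GROUP_CONCAT(DISTINCT program_name ORDER BY program_name)` and then grouping and counting those strings.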
mysql doesn't have write access in data dir? Posted: 06 Sep 2013 03:16 PM PDT I am running MySQL on my Windows 7 64-bit development machine at work and I can't seem to get around a database corruption issue. In my unit tests it tells me that the storage engine (which is InnoDB) is issuing an error message of -1. In my MySQL error log I am reading a log message that says but looking through my file permissions it shows that both myself and SYSTEM have full access. I have run mysqld from the command line as well as from a service, restarted my computer, and even reinstalled MySQL in a vain attempt to fix things so I can get back to work. I changed my my.ini file to see if that would fix it. I am at a loss as to how to fix this. Did I miss something in my my.ini file that I should change? Should I be looking for gremlins mucking around with my user permissions? Is this the beginning of the end?
Which schema is better for a shopping project? Posted: 06 Sep 2013 03:06 PM PDT I'm working on a business-to-customer project that has a variety of product types. There are a few properties like name, description, and brand_id that every product has, but there are also many specialized properties for different products. I'm looking for a reasonable solution for handling different types of products like cell phones and air conditioners. For example, a cell phone has a property called CPU whereas an air conditioner doesn't, and an air conditioner has a property called BTU whereas a cell phone doesn't. The thing I want to build is a classic category-based product system. Users will be able to create categories that contain different fields like CPU or BTU, and when they create a product they will be able to enter values for these fields. In the SQL world, I see that many projects use a schema like this one: However, I wonder if there is a better solution for this problem. Two different solutions came to mind. As you may have noticed, there is a column called category_values in the products table. This column is JSON type (which PostgreSQL has) and I will index the keys that I need to search. The other schema: in this schema, when a user creates a category the system will create a new table with its fields on the database at runtime, and I will create a view that generates the SQL selects on the fly and executes them. Which way should I choose, or do you have any other solutions?
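The schema the question alludes to was stripped out, but the classic category-based layout it describes is an entity-attribute-value (EAV) design. Here is a minimal sketch of that layout; every table and column name is an assumption, not the question's actual schema:

```python
import sqlite3

# Assumed EAV layout: categories define fields, products belong to a
# category, and per-product field values live in one generic table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE categories (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE category_fields (
    id INTEGER PRIMARY KEY,
    category_id INTEGER REFERENCES categories(id),
    name TEXT);
CREATE TABLE products (
    id INTEGER PRIMARY KEY,
    category_id INTEGER REFERENCES categories(id),
    name TEXT);
CREATE TABLE product_field_values (
    product_id INTEGER REFERENCES products(id),
    field_id INTEGER REFERENCES category_fields(id),
    value TEXT);

INSERT INTO categories VALUES (1, 'cell phone'), (2, 'air conditioner');
INSERT INTO category_fields VALUES (1, 1, 'CPU'), (2, 2, 'BTU');
INSERT INTO products VALUES (1, 1, 'Phone X'), (2, 2, 'Cooler Y');
INSERT INTO product_field_values VALUES (1, 1, 'octa-core'), (2, 2, '12000');
""")

# Fetch a product together with its category-specific fields.
rows = conn.execute("""
SELECT p.name, f.name, v.value
FROM products p
JOIN product_field_values v ON v.product_id = p.id
JOIN category_fields f ON f.id = v.field_id
WHERE p.id = 1;
""").fetchall()
print(rows)
```

The trade-off versus the JSON-column or table-per-category alternatives is that EAV keeps the schema fixed while pushing typing and validation into application code.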
MSG 666 when running an insert query on 80M-row indexed table Posted: 06 Sep 2013 02:20 PM PDT Strangely, my stored procedure started to receive Msg 666 for some input data. The stored procedure fails on the last step, when it tries to insert a row into a table with the following structure: This is essentially a table that connects all referenced entities together. I inherited this design from a previous developer. :) Fragmentation for both indexes is low (<25%). However, PK_TableName fragmentation grows quickly, since the amount of activity on the table is quite intense. Table size: So, when I try to run a very simple query, for some D_Id values I get the following message:
Query example: For example, when I set D_Id to some values, it fails; '14' for example. If I set D_Id to other values (1, 2, 3, ..., 13, 15, 16, ...), the query runs fine. I suspect there's something really bad going on with the indexes, but I cannot get to the bottom of this. Why does it fail?
Locking the table so that no transactions can complete Posted: 06 Sep 2013 01:37 PM PDT I am using the following procedure to do some duplicate checking before inserting into This stored procedure is not throwing any error, but it appears to hang indefinitely. It may be that there's some problem with my staging table schema, so here it is: In the master table there will be millions of records, and the staging table will hold at most 800k records at a time.
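The procedure itself was stripped from the digest, but duplicate checks that loop row by row are a common cause of long-held locks at these row counts. A single set-based statement in one short transaction usually avoids that; the table and column names below are assumptions for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE master  (id INTEGER PRIMARY KEY, phone TEXT UNIQUE);
CREATE TABLE staging (id INTEGER PRIMARY KEY, phone TEXT);
INSERT INTO master  VALUES (1, '111'), (2, '222');
INSERT INTO staging VALUES (1, '222'), (2, '333'), (3, '444');
""")

# One set-based INSERT instead of a per-row loop: copy only staging rows
# whose phone is not already present in master.
with conn:  # a single short transaction
    conn.execute("""
        INSERT INTO master (phone)
        SELECT s.phone
        FROM staging s
        WHERE NOT EXISTS (SELECT 1 FROM master m WHERE m.phone = s.phone);
    """)

total = conn.execute("SELECT COUNT(*) FROM master").fetchone()[0]
print(total)  # 4: the two originals plus the two non-duplicates
```

In SQL Server the same pattern is written as `INSERT ... SELECT ... WHERE NOT EXISTS` (or `MERGE`), keeping the lock duration to one statement.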
Export/Import Subset of data in postgres Posted: 06 Sep 2013 01:04 PM PDT Using the following query, I'm able to export two tables, selecting 500 items randomly from the first: I want to import this data into my testing database, but I run into problems. How can I make this work?
How do I set up high availability at the instance level? [on hold] Posted: 06 Sep 2013 12:26 PM PDT I have two database instances (Instance A, Instance B) of SQL Server, each one hosted on a separate VM running Windows Server 2012. Instance A has SQL Server 2008 R2; instance B has SQL Server 2012. I'm not a DBA, and I'm completely lost as to how to provide high availability for both of these instances. Ideally, I'd like someone to point me in the right direction to a solution that allows these instances to fail over gracefully and automatically if we have problems, preferably at the instance level, since I have a bunch of databases in each instance. I have enough hardware to host these instances somewhere else on the network, and several gigabits of network bandwidth to get this done. Please go ahead and assume that I am an idiot.
SQL Server - How to determine ideal memory for instance? Posted: 06 Sep 2013 11:46 AM PDT We have some virtual machines that have X memory allocated to them. This amount of memory is somewhat random: it was allocated to the machine because that was the amount of memory the physical machine had, because of a vendor recommendation, or because we threw out a best guess as to how much memory we thought the instance would require. I understand the more memory the better, but I would also like to avoid over-allocating memory to a VM when it isn't necessary; the memory could be better utilized by another machine. What would be the best way to determine an ideal amount of memory per instance that is actually active and being used? Are there other counters we should be looking at in addition to page life expectancy? We have instances with PLEs of 10k+ and others with 100k+. Any insight is much appreciated. Thanks, Sam
How to use OR in query with priority? Posted: 06 Sep 2013 09:56 AM PDT In a simple query like How to give a priority to get only one of the I mean getting rows with In other words, do not use
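The actual conditions were stripped from the digest, so the exact intent is unclear; but a common reading of "OR with priority" is: return rows matching the first condition, and fall back to the second condition only when no row matches the first. One portable way is to rank each condition and keep only the rows at the best rank found (column names and values below are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (id INTEGER, val TEXT);
INSERT INTO t VALUES (1, 'a'), (2, 'b'), (3, 'b');
""")

# Rank matches: val='a' gets priority 1, val='b' priority 2; then keep only
# rows whose priority equals the best (lowest) priority present.
rows = conn.execute("""
SELECT id, val FROM t
WHERE (CASE WHEN val = 'a' THEN 1 WHEN val = 'b' THEN 2 END) =
      (SELECT MIN(CASE WHEN val = 'a' THEN 1 WHEN val = 'b' THEN 2 END)
       FROM t WHERE val = 'a' OR val = 'b');
""").fetchall()
print(rows)  # [(1, 'a')] -- the 'b' rows would appear only if no 'a' row existed
```

This avoids running two separate queries and works the same in MySQL, Postgres, and SQL Server.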
Change MySQL database name in SQL file Posted: 06 Sep 2013 09:36 AM PDT I have a 2 GB MySQL backup file that contains several databases, and I would like to change the name of one of the databases. What approaches can I use to tackle this? I have used sed, but it ends up making changes where it should not.
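A blanket sed replace rewrites every occurrence of the name, including ones inside row data. A safer sketch, shown here in Python, is to rewrite only the database-level statements a `mysqldump --databases` file emits (`CREATE DATABASE ...` and `USE ...` with a backtick-quoted identifier); the dump excerpt below is invented for illustration:

```python
import re

# Hypothetical dump excerpt shaped like mysqldump --databases output.
dump = """\
CREATE DATABASE /*!32312 IF NOT EXISTS*/ `olddb` /*!40100 DEFAULT CHARACTER SET utf8 */;
USE `olddb`;
INSERT INTO t VALUES ('text mentioning olddb should stay');
"""

def rename_db(sql, old, new):
    # Only touch backtick-quoted identifiers on CREATE DATABASE / USE lines,
    # so data that merely mentions the old name is left alone.
    pattern = re.compile(
        r"^(?P<stmt>(CREATE DATABASE.*|USE )`)" + re.escape(old) + r"`",
        re.MULTILINE)
    return pattern.sub(lambda m: m.group("stmt") + new + "`", sql)

out = rename_db(dump, "olddb", "newdb")
print(out)
```

For a 2 GB file the same logic should be applied line by line rather than on one big string, streaming from the old file to a new one.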
Can SQL update script update all rows even though there is where clause Posted: 06 Sep 2013 09:16 AM PDT I ran this script earlier today The result said 4 rows affected, but when I checked, the entire table had been updated: the whole message column in that table was changed even though only 4 rows satisfy the where condition. Has anyone ever faced this before? I cannot seem to find a way to explain what would have caused this. Any help would be great.
MySQL : Extract SQL statements from binlog Posted: 06 Sep 2013 11:08 AM PDT I am trying the following command to extract SQL statements from the binlog, for use in another database (Amazon RDS): But I got the following error: I read in the documentation that mysqlbinlog exits with an error if a row event is found that must be displayed using BINLOG. Is there any workaround to extract SQL statements from the binlog? Update 1: I ran the command again with the option --verbose, but I did not get any return on the command line. When I browse output4.sql, I can see some valid SQL plus some commented SQL like this: Update 2: I am following the tutorial here https://engineering.gosquared.com/migrating-mysql-to-amazon-rds I want to update Amazon RDS from the output of the binlog. Currently Amazon RDS does not allow writing from the binlog, so I am trying to extract the SQL statements from the binlog so I can update Amazon RDS. Is there any way to update Amazon RDS from the binlog?
How to drop SQL Server database currently in use and in Single user mode Posted: 06 Sep 2013 11:35 AM PDT I have a database on SQL Server 2008 which I want to drop. Currently it is in single user mode and it is in use. returns and I do not know how to identify the session I have to kill. An attempt to set it offline yields
inner joins with where clause for no values? Posted: 06 Sep 2013 09:22 AM PDT I have 2 tables: Table1 (Parent varchar, Child varchar) and Table2 (C1 varchar, PC varchar). Sample data: Requirement: I need the Table2.C1 values for which Table2.PC = Table1.Child, but the Child values must not appear among Table1.Parent's values. I'm using the query below in MySQL: It is giving an empty set, but there are values in Child that are the same as in PC yet not in Parent. Where am I going wrong?
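The question's own query was stripped, but the requirement as stated is a join plus an anti-join: match PC to Child, then exclude Children that also occur as a Parent. A minimal sketch with invented sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Table1 (Parent TEXT, Child TEXT);
CREATE TABLE Table2 (C1 TEXT, PC TEXT);
INSERT INTO Table1 VALUES ('a', 'b'), ('b', 'c');
INSERT INTO Table2 VALUES ('x', 'b'), ('y', 'c'), ('z', 'q');
""")

# C1 values whose PC matches a Child that never appears as a Parent.
# Child 'b' is also a Parent, so only 'c' qualifies, giving C1 = 'y'.
rows = conn.execute("""
SELECT t2.C1
FROM Table2 t2
JOIN Table1 t1 ON t2.PC = t1.Child
WHERE t1.Child NOT IN (SELECT Parent FROM Table1);
""").fetchall()
print(rows)  # [('y',)]
```

Note that `NOT IN` misbehaves if the Parent column contains NULLs; in that case `NOT EXISTS` or a `LEFT JOIN ... IS NULL` anti-join is the safer MySQL idiom.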
PostgreSQL allocate memory for each connection Posted: 06 Sep 2013 09:10 AM PDT When we configure the memory settings for a Postgres DB, what is the recommended memory allocation for each connection? Is there a formula to apply? I know about 25% of the server's memory should be allocated, but how do we allocate based on the number of DB connections, and how do we decide the maximum number of connections to allow? Also, in a multi-node environment, can we allocate more connections for each node (max connections in Postgres-ds.xml) than what is actually allowed in the DB?
Combining multiple left join for one single line Posted: 06 Sep 2013 10:50 AM PDT We have a query as below. The structure of the tables is as below: Sample output is: The problem now is how to show, say for A1, all the alert messages on one line, and the same for A2, etc. Currently each alert message is on a different line.
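Since the original tables and query were stripped, here is a generic sketch of the usual fix: aggregate the joined rows per key with a string-concatenation aggregate, so each device gets one line. Table and column names are assumptions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE alerts (device TEXT, message TEXT);
INSERT INTO alerts VALUES
  ('A1', 'disk full'), ('A1', 'high temp'), ('A2', 'offline');
""")

# Collapse the per-device alert messages onto one line each.
rows = conn.execute("""
SELECT device, GROUP_CONCAT(message, '; ') AS messages
FROM alerts
GROUP BY device
ORDER BY device;
""").fetchall()
for device, messages in rows:
    print(device, '->', messages)
```

MySQL has the same `GROUP_CONCAT`; in SQL Server the equivalent is `STRING_AGG` (or the older `FOR XML PATH` trick), and in Postgres `string_agg`. The left joins stay as they are; only the outer query gains the `GROUP BY`.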
psql, record separators in the data Posted: 06 Sep 2013 11:20 AM PDT I want to use psql to list all of the databases on a Postgres server, to be parsed by a script. This command lists them: but the output shows an obvious issue: the records are separated by newlines, but also contain newlines. Using the -R option I can change the record separator, but it seems that no matter what I change it to, there's the risk of that string appearing in the data. Is it possible to instead tell psql to replace the newlines in the data with something else? (And then what if that string also appears in the data?) I'd also tried to set the record separator to a null character with such sequences as The other option I know of to list all databases is: but that requires me to give the password for the postgres user, so it's not desirable. Perhaps there's another way to get a list of the names of all databases?
Incorrect information in .frm file after a power outage? Posted: 06 Sep 2013 09:55 AM PDT
SHOW SLAVE STATUS\G: /var/log/mysqld.log:
and this is the The partition definitions on the first slave look like this: I've tried to copy this file to the second slave, then used So the questions are:
PS: I know I can rebuild the second slave, but I think this is an interesting challenge.
How to access a SQL Server database from other computer connected to the same workgroup? Posted: 06 Sep 2013 10:31 AM PDT I have created a C# application which uses a SQL Server database. I have other computers connected to me and to each other in a workgroup. I have shared my C# application with others. When they open the application they get the error
But the application is working fine on my PC. The connection string I am using is which is stored in a What must I do? I have enabled TCP/IP on the server, but the same error persists. Is some change needed in the connection string, or something else? Please help. Thank you.
Queryplan changes depending on filter values Posted: 06 Sep 2013 01:17 PM PDT I created a clustered index on a table expecting it to make queries with ranges perform better, but different values in the where clause can produce different query plans: one uses the clustered index and one does not. My question is: what can I do to make the DBMS use the better query plan? Or better yet, should I change my schema to something better? Details:
Tables: Query: An area has many locations; every location has either a zip or a key. Execution plans: Here is an Here is the Edit: Added
How to build a database that contains only the delta from yesterday Posted: 06 Sep 2013 07:18 PM PDT I need to know what has been changed in my database since last night. Is it possible to extract this data from the LDF file and build a new database that contains the delta? For example, let's say I have a table for users, and now a new user was added and one of the users updated his home address. I need to be able to build a new database whose users table will contain two records: 1. the new user (with a new column to indicate whether it is a new or updated record); 2. the user that updated his record (it would be nice to know which field has been updated). BTW, I have two SQL Servers that I can use (2008 and 2012). Thanks in advance.
Why does Log Shipping .TRN file copy just stop Posted: 06 Sep 2013 02:18 PM PDT I apologize in advance for a long post, but I have had it up to here with this error of having to delete the LS configuration and start it over for any DB that gets it. I have LS set up on 3 Win2k8R2 servers (primary, secondary, monitor) with 100 databases whose transaction logs are backed up and shipped from the primary to the secondary and monitored by the monitor. Backups and copies run every 15 minutes, and then the ones older than 24 hours are deleted. Some DBs are very active and some not so much, but all are shipped regardless, for uniformity's sake (basically to make the secondary server identical to the primary). Some DBs are for SP2010 and the majority for an in-house app. The issue is that after all LS configs are set up, all works well for about 3 to 4 days; then I go to the Transaction LS Status report on the secondary and see that, at random, some LS jobs have an Alert Status because the time since the last copy is over 45 minutes, so no restore has occurred. This seems random, and the only errors I see are from an SP2010 DB (WebAnalyticsServiceApplication_ReportingDB_77a60938_##########), which I believe is a reports DB that gets created weekly, so LS cannot figure out which is the last copy to back up or to restore. I posted here regarding that and I have yet to find a permanent solution. For my main error (time since last copy) I have not seen anything that could have caused it, and I don't get any messages (even though some alert statuses have been ignored for 3 days). Anyway, I would really appreciate any input on understanding what's causing this and how I could fix it. Thanks.
Is it a bad practice to always create a transaction? Posted: 06 Sep 2013 09:34 AM PDT Is it a bad practice to always create a transaction? For example, is it good practice to create a transaction for nothing but one simple What is the cost of creating a transaction when it is not really necessary? Even if you are using an isolation level like
Oracle schema import is not importing all the tables present in the schema dump file Posted: 06 Sep 2013 10:18 AM PDT I have exported an existing Oracle schema from another machine and then imported it on my local machine. The import was successful, but some tables which are present in the export dump file were not imported. Here are the export and import commands I have used. We are using Oracle 10g EE. What could be going wrong? Can you please suggest a solution to this issue?
Selecting with varbinary(max) criteria (in the where clause) Posted: 06 Sep 2013 10:58 AM PDT Basic info
Background (skip if not interested): A project I'm maintaining uses an ORM, which apparently stored my enum values (which inherit from Byte) as binary-serialized .NET objects in a varbinary(max) field. I only found out this was happening after a new requirement emerged dictating that my code run under medium trust. Since the .NET binary formatter needs full trust to be called, it started crashing on the enums. To clean up the mess I need to create migration scripts that will convert these varbinary(max) values back to integer values. There are only a handful of different values, so it shouldn't be a big problem (I thought). The problem: I am able to get string representations of the blobs when selecting: It returns a string '0x...' (an array of hexadecimal values). But when I try to select on the column using copy-and-paste, making sure I have the exact binary equivalent: it does not return any records. So is it possible to solve this using a client (such as Management Studio)? If so, what would be the correct syntax?
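A likely culprit (the pasted query was stripped, so this is an assumption about it) is quoting the hex value: in T-SQL a binary constant must be written unquoted, `WHERE col = 0xDEADBEEF`, because `'0xDEADBEEF'` is a character string and never equals a varbinary value. The same type mismatch can be reproduced in SQLite, whose blob literal syntax is `X'..'`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, data BLOB)")
conn.execute("INSERT INTO t VALUES (1, X'DEADBEEF')")

# Comparing the blob column to its *string* representation finds nothing...
as_string = conn.execute(
    "SELECT COUNT(*) FROM t WHERE data = '0xDEADBEEF'").fetchone()[0]
print(as_string)  # 0

# ...but a proper binary literal (X'..' here, unquoted 0x.. in T-SQL) matches.
as_blob = conn.execute(
    "SELECT COUNT(*) FROM t WHERE data = X'DEADBEEF'").fetchone()[0]
print(as_blob)  # 1
```

So in Management Studio, pasting the `0x...` string from the results grid into the WHERE clause works as long as it is left unquoted.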
You are subscribed to email updates from Recent Questions - Database Administrators Stack Exchange.