[how to] What can I do to make MySQL use the expected indices?
- What can I do to make MySQL use the expected indices?
- Wanted: Android versions of JDBC drivers for common RDBMSes
- Backup daily for MySQL, MariaDB or PostgreSQL
- Backup daily for MySQL, MariaDB or PostgreSQL
- Is there an effective way to create extended event data that includes stacktraces in SQL Server on a local db instance?
- Timezones in Oracle 10
- The job failed. The owner (pc\user) of job FULL DB BACKUP does not have server access
- MySQL query optimization when no row is returned
- Use cases for hstore vs json datatypes in postgresql
- Best practice: Unions or a derived table?
- Database design - People and Organisations
- MySQL database user specified as a definer
- Restoring differential backup from SQL Server 2005 to SQL Server 2012
- MySQL-problem in Fetching Result set in Desired Format
- Many:Many with Shared Relation
- SQL Server: The multi-part identifier "xxx" could not be bound
- DB comparer field by field for all tables
- Oracle 11g delete by unique key slow
- Fulltext stoplist replication
- MySQL hogging memory
- SSIS keeps force changing excel source string to float
- SQL server ODBC connection for Active Directory user who doesn't log in to Windows
- Process attempted to unlock a resource it does not own
- Setting DATEFIRST to 6
- Postgresql Retrieve / Find Primary Key Serial Autonumber field for Audit logs
- How does MySQL determine the 'index first key' and 'index last key' with indexes?
- compare the same table
- sql server database sharding - what to do with common data / non sharded data
- Possible to make MySQL use more than one core?
What can I do to make MySQL use the expected indices? Posted: 02 May 2013 09:16 PM PDT I've got 4 tables, two containing resources and the others containing historical changes. One pair uses indexes correctly; the other doesn't, even though both are structured almost identically. I've tried changing the order of the primary keys and the order of the other keys, and I've tried changing the table structures so they use the same column names and the same key names in both tables, but nothing seems to make the query use the correct index. Irrelevant columns have been removed for brevity. These two tables are working as expected. These tables aren't working as expected (required).
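The usual first levers here are explicit index hints and fresh statistics. A minimal sketch, assuming hypothetical table and index names (the original schemas aren't shown above):

```sql
-- resource_history / idx_resource_changed are hypothetical names.
-- FORCE INDEX makes a table scan look prohibitively expensive, so the
-- optimizer uses the named index whenever it can.
SELECT h.resource_id, h.changed_at
FROM resource_history AS h FORCE INDEX (idx_resource_changed)
WHERE h.resource_id = 42
ORDER BY h.changed_at DESC;

-- Stale index statistics are a common reason two near-identical tables
-- get different plans; ANALYZE TABLE refreshes them.
ANALYZE TABLE resource_history;
```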
Wanted: Android versions of JDBC drivers for common RDBMSes Posted: 02 May 2013 04:40 PM PDT I've got a Java-based application library I'm trying to port to Android. The package presently supports:
...and can easily be configured to use any other SQL92-compliant RDBMS that has a JDBC driver. It would be nice to keep this flexibility. However, I've been having trouble converting over the JDBC drivers. I've asked about this on Stack Overflow, but no solution yet. Meanwhile, I'm wondering if anyone has successfully converted any of these JDBC driver packages and made them available via the internet. There should be no license issues with any of these packages, since they are access libraries and not the DBMS itself - all these vendors give away the JDBC drivers! But I've been looking and haven't found anything. Anyone?
Backup daily for MySQL, MariaDB or PostgreSQL Posted: 02 May 2013 07:07 PM PDT Good day: I wonder if there is a program with which I can create daily backups with these requirements: the first time, make a full backup; the second time, only the changes since the last backup; and so on. Best regards. P.S.: Only with some software or script, not PHP. Thanks!
Backup daily for MySQL, MariaDB or PostgreSQL Posted: 02 May 2013 08:59 PM PDT Good day: I wonder if there is a program with which I can create daily backups with these requirements:
Best regards. P.S.: Only with some software, not languages.
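For the MySQL case, the usual full-plus-incremental pattern is a periodic full dump plus the binary logs written since; a minimal server-side sketch, assuming binary logging is already enabled:

```sql
-- Rotate to a fresh binary log so everything since the last full backup
-- sits in closed log files that can be archived as the increment.
FLUSH BINARY LOGS;

-- List the logs so a backup script knows which files to copy.
SHOW BINARY LOGS;

-- After archiving, drop logs older than the retention window.
PURGE BINARY LOGS BEFORE DATE_SUB(NOW(), INTERVAL 7 DAY);
```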
Is there an effective way to create extended event data that includes stacktraces in SQL Server on a local db instance? Posted: 02 May 2013 02:34 PM PDT I read Paul Randal's article on getting the symbols for SQL Server. I have a theory that I could get the extended events to read the symbols from the symbol server cache directory by setting the _NT_SYMBOL_PATH environment variable. The only problem is I can't create the right conditions for any data to show up when I create an extended events session with sqlos.spinlock_backoff. I tried the script in this pdf. I tried HammerDB and I tried this script with AdventureWorks. I don't get any spinlocks. I tried setting max server memory at 256 megs (my default setting on my laptop) and I tried it at 2 gigs. The extended events session was running, but no data showed up.
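For reference, a session of the kind described looks roughly like this (the session name is mine); sqlos.spinlock_backoff only fires under genuine spinlock contention, which is hard to provoke on an idle laptop:

```sql
-- package0.callstack records the raw call stack; symbols are resolved
-- later (e.g. via _NT_SYMBOL_PATH) when the captured data is read.
CREATE EVENT SESSION [spinlock_stacks] ON SERVER
ADD EVENT sqlos.spinlock_backoff
    (ACTION (package0.callstack))
ADD TARGET package0.ring_buffer;

ALTER EVENT SESSION [spinlock_stacks] ON SERVER STATE = START;
```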
Timezones in Oracle 10 Posted: 02 May 2013 04:10 PM PDT I have an application that stores its timestamps in Oracle using UTC (Coordinated Universal Time). Another application reads these timestamps from the database, but there is no convenient way to convert the timestamps to another timezone automatically (from within that application). Is it possible to adjust session settings in Oracle to return timezone-adjusted timestamps for SELECT queries?
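Session-level conversion is possible, with one caveat: it is automatic only for TIMESTAMP WITH LOCAL TIME ZONE columns, while plain TIMESTAMP columns holding UTC need an explicit expression. A sketch with hypothetical names:

```sql
-- Automatic for TIMESTAMP WITH LOCAL TIME ZONE columns once the
-- session time zone is set:
ALTER SESSION SET TIME_ZONE = 'Europe/Berlin';

-- Explicit conversion for plain TIMESTAMP columns known to hold UTC
-- (events / created_at are hypothetical):
SELECT FROM_TZ(created_at, 'UTC') AT LOCAL AS local_ts
FROM events;
```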
The job failed. The owner (pc\user) of job FULL DB BACKUP does not have server access Posted: 02 May 2013 02:37 PM PDT SQL Server 2008 R2 is failing to execute the hourly log backup.
The account has administrator privileges. Any idea why it's failing?
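This error usually means the job's owner login no longer resolves against the server. The common fix, sketched here, is to reassign the job to a login that certainly has access (the new owner below is an assumption; the job name comes from the error message):

```sql
-- Reassign ownership of the failing job to a login with server access.
EXEC msdb.dbo.sp_update_job
     @job_name = N'FULL DB BACKUP',
     @owner_login_name = N'sa';
```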
MySQL query optimization when no row is returned Posted: 02 May 2013 06:00 PM PDT I am using EXPLAIN to figure out what is happening in my query, which is: I see "Using where" in the Extra column of the result set, but this only happens if I use a valid u1_id. If I run the same query on the same server with an invalid u1_id, I get "Using where; Using temporary; Using filesort" in the Extra column. I assume this means that when the user_id is not in the table, MySQL takes another route to process the query. Does anyone know why this happens and how I can prevent it? EDIT Here is my table:
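The query and schema aren't shown above, so this sketch only illustrates how to compare the two plans and the usual remedy; all names are hypothetical:

```sql
-- Compare the plan for an existing vs. a non-existent id.
EXPLAIN SELECT * FROM messages WHERE u1_id = 100 ORDER BY created_at; -- exists
EXPLAIN SELECT * FROM messages WHERE u1_id = -1  ORDER BY created_at; -- doesn't

-- A composite index covering both the filter and the sort usually keeps
-- the plan on the plain "Using where" path in both cases.
ALTER TABLE messages ADD INDEX idx_u1_created (u1_id, created_at);
```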
Use cases for hstore vs json datatypes in postgresql Posted: 02 May 2013 02:10 PM PDT In PostgreSQL, the hstore and json datatypes seem to have very similar use cases. When would you choose to use one vs. the other? Initial thoughts:
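As a quick feel for the difference as it stood at the time (hstore indexable and queryable, json mostly validated storage), a minimal sketch:

```sql
CREATE EXTENSION IF NOT EXISTS hstore;

CREATE TABLE docs (
    id    serial PRIMARY KEY,
    attrs hstore,  -- flat key/value pairs, GIN/GiST-indexable
    body  json     -- nested structure, validated on input, opaque to indexes
);

CREATE INDEX docs_attrs_idx ON docs USING gin (attrs);

INSERT INTO docs (attrs, body)
VALUES ('color => red, size => L',
        '{"color": "red", "dims": {"w": 10, "h": 20}}');

-- Containment queries on hstore can use the GIN index; pre-9.3 json has
-- no comparable operators.
SELECT * FROM docs WHERE attrs @> 'color => red'::hstore;
```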
Best practice: Unions or a derived table? Posted: 02 May 2013 03:40 PM PDT I've inherited a medium-sized database with a terrible schema. The sanitized portion in question is like so: And two similar tables, customers and locations: So, yes, when an invoice is not yet fulfilled, it's in the "pending" table; when the company's finished with it, it gets moved to the "done" table. So any given invoice will only be in one of the two tables. Unfortunately, we're still actively using the proprietary software that runs on this schema, so I am powerless to fix it. The "pending" table will usually have around 9,000 rows, whereas the "done" table is ~800,000 and counting. For two years (the full extent of my LAMP/MySQL experience), I have been writing UNIONs to capture information from the customers or locations tables, with the first query joining customers/locations/both against "pending" and the second joining customers/locations/both against "done". I've recently had a nagging feeling that writing two nearly identical queries UNIONed together is probably newbish and should be done differently. So I finally applied my more recent learning to a problem I had "solved" in my first few weeks of working with MySQL and realized that I can just use a derived table with a UNION inside that subquery to achieve the same results with less typing. The execution time is similar, usually 0.01s for the derived query and slightly less for the UNIONed query, but the derived-table query uses far fewer characters and requires less time to type out. So my question for those with more experience: is using a derived table subquery more sane and maintainable in the long run? It's easier for me, the programmer, so that makes me suspicious that maybe it's not the best practice! Sample queries below:
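Since the sample queries aren't shown above, here is a hedged reconstruction of the two shapes being compared, with hypothetical invoice/customer columns:

```sql
-- Shape 1: two near-identical SELECTs, each with its own join, UNIONed.
SELECT c.name, p.total
FROM pending p JOIN customers c ON c.id = p.customer_id
UNION ALL
SELECT c.name, d.total
FROM done d JOIN customers c ON c.id = d.customer_id;

-- Shape 2: UNION the two invoice tables once inside a derived table,
-- then write each join a single time.
SELECT c.name, i.total
FROM (
    SELECT customer_id, total FROM pending
    UNION ALL
    SELECT customer_id, total FROM done
) AS i
JOIN customers c ON c.id = i.customer_id;
```

Because any invoice lives in exactly one of the two tables, UNION ALL is safe and avoids the duplicate-elimination pass that plain UNION performs.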
Database design - People and Organisations Posted: 02 May 2013 02:18 PM PDT Databases are not my primary skill and I am finding I need some assistance, please. The software we are building has "Customers". A Customer can be either a Person or an Organisation. I really want to create an optimal schema for this. I have these considerations.
I would like the app to scale decently, so the schema should fit this choice. I am trying to achieve something like the following. I would really appreciate some direction, help and assistance on how I should create an optimal schema for relating the above. I have attached a rough visual schema hierarchy - I know I'm way off, and I'm sure I'm making errors. Is it possible to get a merged set of results depending on whether it's a Person or an Organisation? Joining a Person and Contact/Address to Customer is simple, but how do you join an Organisation's primary Contact/Address? Is there an easier way to achieve my spider-diagram-looking schema?
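A common answer to this shape of problem is the party (supertype/subtype) pattern: one row per party, with Person and Organisation as subtype tables, and Customer, Contact and Address all referencing the party. A minimal sketch; every name below is mine, not from the question:

```sql
CREATE TABLE party (
    party_id   INT PRIMARY KEY AUTO_INCREMENT,
    party_type ENUM('PERSON', 'ORGANISATION') NOT NULL
);

CREATE TABLE person (
    party_id   INT PRIMARY KEY,
    first_name VARCHAR(100),
    last_name  VARCHAR(100),
    FOREIGN KEY (party_id) REFERENCES party (party_id)
);

CREATE TABLE organisation (
    party_id        INT PRIMARY KEY,
    name            VARCHAR(200),
    primary_contact INT,  -- the person acting as the org's primary contact
    FOREIGN KEY (party_id)        REFERENCES party (party_id),
    FOREIGN KEY (primary_contact) REFERENCES person (party_id)
);

-- Customers (and likewise contacts/addresses) reference party_id, so one
-- join works whether the customer is a person or an organisation.
CREATE TABLE customer (
    customer_id INT PRIMARY KEY AUTO_INCREMENT,
    party_id    INT NOT NULL,
    FOREIGN KEY (party_id) REFERENCES party (party_id)
);
```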
MySQL database user specified as a definer Posted: 02 May 2013 10:54 AM PDT I have a view in my database; the problem is the error below. Error SQL query: MySQL said:
I Googled for a solution: the user is created for a specific host and not globally. How do I create the user globally?
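"Global" in MySQL terms is an account whose host part is the % wildcard, matching the view's DEFINER. A sketch with hypothetical user, password and database names:

```sql
-- Create the account for any host and grant the needed privileges.
CREATE USER 'someuser'@'%' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON somedb.* TO 'someuser'@'%';
FLUSH PRIVILEGES;

-- Alternatively, redefine the view with an account that already exists:
--   ALTER DEFINER = 'existinguser'@'localhost' VIEW somedb.someview AS ...;
```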
Restoring differential backup from SQL Server 2005 to SQL Server 2012 Posted: 02 May 2013 12:32 PM PDT We are migrating a client to a new server. Their old server has SQL Server 2005. I attempted to restore a differential backup, but I received an error about the LSN chain being broken. Please note I have done this many times; the full backup is restored on the new server WITH NORECOVERY. I made a test database and got the same result. I then took the test database files and moved them to a server running SQL Server 2008 R2, and the backup restored correctly. Does anyone know why the differential backups from SQL Server 2005 will not restore properly on SQL Server 2012? And how can I fix this?
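For reference, the standard sequence being described, with hypothetical database and file names:

```sql
-- Restore the full backup first, leaving the database in RESTORING state.
RESTORE DATABASE ClientDB
    FROM DISK = N'C:\backups\ClientDB_full.bak'
    WITH NORECOVERY, REPLACE;

-- Then apply the differential. This fails if anything (another full
-- backup, for instance) has reset the differential base in between.
RESTORE DATABASE ClientDB
    FROM DISK = N'C:\backups\ClientDB_diff.bak'
    WITH RECOVERY;
```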
MySQL-problem in Fetching Result set in Desired Format Posted: 02 May 2013 11:18 AM PDT I have 2 tables with the following columns. The query is: The result set returned is as follows. One person can have many item types and can order the same item many times. I want to show the record set in such a manner that, no matter whether the item was purchased by the person or not, if the item is present in the record set I draw a column for it and show the corresponding values (order_date, status, etc.) sequentially; if an item was purchased 3 times, I want to show the item and its corresponding records 3 times. For example, if there are 3 items A, B, C, then the output should be shown on the screen as:

PID   A             B            C
1     Y 2/5/2012    N 5/5/2012   Y 7/5/2012
      Y 17/5/2012
2     Y 12/6/2012   N 15/6/2012

Now suppose there is another item. What I am doing is first fetching the result set from the above query and then traversing through it to check for the existence of each item; if the item is present I show it and its corresponding values, so I first have to traverse the whole result set to get the item_type and then show it.
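The tables and query aren't shown above; as a hedged sketch, this kind of row-to-column pivot is usually done with conditional aggregation (all names below are guesses):

```sql
-- One output column pair per known item_type.
SELECT  o.person_id AS PID,
        MAX(CASE WHEN o.item_type = 'A' THEN o.status     END) AS a_status,
        MAX(CASE WHEN o.item_type = 'A' THEN o.order_date END) AS a_date,
        MAX(CASE WHEN o.item_type = 'B' THEN o.status     END) AS b_status,
        MAX(CASE WHEN o.item_type = 'B' THEN o.order_date END) AS b_date
FROM orders o
GROUP BY o.person_id;
-- Repeat purchases need a per-person, per-item sequence number (a user
-- variable, or ROW_NUMBER() in modern MySQL) so each purchase gets a row.
```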
Many:Many with Shared Relation Posted: 02 May 2013 04:39 PM PDT I'm modelling data with multiplicity like this: Each Composition/Anthology related pair must share a Composer. Also, each Anthology must contain at least one Composition. How would you recommend I model this? Here is one possible representation with nearly enforced consistency (it doesn't enforce the 1+ Composition : 0+ Anthology multiplicity). However, it duplicates FK_Composer in lots of places (which, as a side annoyance, breaks some of my Entity Framework features). Note: I'm also trying to hack this out at the business logic and ORM layers and have hit obstacles there as well.
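For the consistency requirement itself, the usual trick is exactly the duplication being described: carry the composer key through composite foreign keys so the link table can only pair a Composition and an Anthology with the same Composer. A sketch with my own names:

```sql
CREATE TABLE Composition (
    composition_id INT NOT NULL PRIMARY KEY,
    composer_id    INT NOT NULL,
    UNIQUE (composition_id, composer_id)  -- target for the composite FK
);

CREATE TABLE Anthology (
    anthology_id INT NOT NULL PRIMARY KEY,
    composer_id  INT NOT NULL,
    UNIQUE (anthology_id, composer_id)
);

CREATE TABLE AnthologyComposition (
    anthology_id   INT NOT NULL,
    composition_id INT NOT NULL,
    composer_id    INT NOT NULL,  -- shared by both parents, by construction
    PRIMARY KEY (anthology_id, composition_id),
    FOREIGN KEY (composition_id, composer_id)
        REFERENCES Composition (composition_id, composer_id),
    FOREIGN KEY (anthology_id, composer_id)
        REFERENCES Anthology (anthology_id, composer_id)
);
```

The "at least one Composition per Anthology" rule, like most minimum-cardinality rules, is easier to enforce in the application or a trigger than declaratively.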
SQL Server: The multi-part identifier "xxx" could not be bound Posted: 02 May 2013 11:56 AM PDT The following is a The last 2
Any ideas?
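The failing statements aren't shown above, but the most common way to hit this error is mixing old-style comma joins with ANSI join clauses, sketched here with hypothetical tables:

```sql
-- Fails: in a comma join, t1 is not in scope for the LEFT JOIN's ON
-- clause, giving "The multi-part identifier 't1.id' could not be bound".
SELECT t1.col1, t2.col2
FROM table1 AS t1, table2 AS t2
LEFT JOIN table3 AS t3 ON t1.id = t3.id;

-- Works: explicit JOINs keep every alias in scope.
SELECT t1.col1, t2.col2
FROM table1 AS t1
JOIN table2 AS t2 ON t2.id = t1.id
LEFT JOIN table3 AS t3 ON t3.id = t1.id;
```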
DB comparer field by field for all tables Posted: 02 May 2013 10:20 AM PDT I have a live DB and a development DB. When I make a change to the development DB, I want to find those changes later when I am deploying my application to the server. For example, I may have added fields, tables and so on, and I want to be able to detect those changes. Could you please let me know some well-known tools that are free and do this job for me, so I don't have to write the changes down in Notepad and apply them later?
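Short of a dedicated tool, a rough structural diff can be pulled from information_schema alone, assuming both schemas are reachable from one MySQL server as livedb and devdb (names are mine):

```sql
-- Columns present (or typed differently) in devdb but missing in livedb.
SELECT table_name, column_name, column_type
FROM information_schema.columns
WHERE table_schema = 'devdb'
  AND (table_name, column_name, column_type) NOT IN (
        SELECT table_name, column_name, column_type
        FROM information_schema.columns
        WHERE table_schema = 'livedb');
```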
Oracle 11g delete by unique key slow Posted: 02 May 2013 11:30 AM PDT We have been plagued by a deadlock issue for quite some time that relates to deletes and inserts on the same table. It happens sporadically, but has been increasing in frequency. It would manifest itself as a deadlock error. Initially, we could not see the bind variables because we were using Hibernate, and we could not turn on show_sql for Hibernate since there is far too much traffic on other Hibernate-managed POJOs in other areas of the application. We had thought Hibernate might be doing something with its cache at unexpected times. Recently, I removed Hibernate for the table in question, replaced it with JDBC, and now log EVERY touch (every SELECT/INSERT/UPDATE/DELETE) against that table. After doing that, I've only seen 1 "blocker", so I don't know if the problem is fixed by virtue of removing Hibernate and doing everything via JDBC. However, I would be remiss if I didn't attempt to run down that blocker instance. Here's what I see:
I verified with our DBAs that:
Does anyone have any insight or recommendations on what might cause a DELETE statement by unique ID to take 10 seconds when there are no other touches on that table for that row (no select, update or insert by that unique ID)? I realize that there probably isn't a precise and exact answer to this question; I'm just hoping someone can share insight or anecdotal information from their own experiences. Thank you sincerely in advance.
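One cause worth ruling out - an assumption on my part, not something established above - is an unindexed foreign key on a child table: deleting a parent row then takes a full table lock on the child, which can stall an otherwise trivial delete. A sketch that lists FK columns with no matching index:

```sql
SELECT c.table_name, cc.column_name
FROM user_constraints c
JOIN user_cons_columns cc
     ON cc.constraint_name = c.constraint_name
WHERE c.constraint_type = 'R'  -- referential (foreign key) constraints
  AND NOT EXISTS (
      SELECT 1
      FROM user_ind_columns ic
      WHERE ic.table_name      = cc.table_name
        AND ic.column_name     = cc.column_name
        AND ic.column_position = cc.position);
```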
Fulltext stoplist replication Posted: 02 May 2013 05:15 PM PDT In MS SQL Server 2008 R2 there is replication of a table with a fulltext index on it, but the stoplist associated with the replicated fulltext index doesn't replicate. Is there any possibility of replicating the stoplist as well?
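Given that the stoplist doesn't come across, the usual workaround is to script it and recreate it on the subscriber; a sketch with hypothetical names:

```sql
-- Recreate the stoplist on the subscriber...
CREATE FULLTEXT STOPLIST MyStoplist;
ALTER FULLTEXT STOPLIST MyStoplist ADD 'example' LANGUAGE 1033;

-- ...and attach it to the fulltext index there.
ALTER FULLTEXT INDEX ON dbo.MyTable SET STOPLIST MyStoplist;
```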
MySQL hogging memory Posted: 02 May 2013 10:15 AM PDT An installation of MySQL 5.6.10 on a virtualized Ubuntu 12.04 is exhibiting massive memory hogging. The mysqld process claims the entire available memory within a couple of hours of uptime and forces the host to swap: It has grown as large as 50 GB once and has thus significantly outgrown the data set itself: Usually, I am able to free ~3 GB by issuing While the system was swapping, I could observe rather high numbers for "swap out" counters (vmstat is showing ~1k pages/second during bursts) and hardly anything at all swapped back in (a few dozen pages per minute). I first suspected memory leakage but have not found anything supporting this hypothesis so far. SHOW INNODB STATUS indicates that the buffer pool is only partially filled: The server has a total of 80-90 connections, most of which are reported to be in "Sleep" state by SHOW PROCESSLIST. The memory-sensitive options set are The tuning-primer.sh script calculates sane values for memory usage: Binlog is enabled and the host has a replication slave attached to it (although results were not all that different at the time this was not the case). innodb_file_per_table is enabled by default in 5.6 and the databases host a total of ~1,300 tables. What means do I have to identify the possible causes of the apparently unlimited growth? After reading "How MySQL uses memory" I suspected that temporary tables might be the culprit. If they are not being released correctly for whatever reason, they could accumulate pretty quickly. The application querying the database issues a lot of nested, complicated queries, so temporary tables would be heavily in use according to the referenced docs. I tried checking whether killing/resetting existing (idle) connections would significantly reduce memory usage once mysqld has reached ~20 GB - it would not, so this is either not related to connection state or the memory is leaking in a way unaffected by closing the connection. How would I verify whether in-memory temporary tables are occupying a significant amount of memory? The STATUS variables and the INFORMATION_SCHEMA do not seem to have this information. MySQL's memory usage appears hard to debug - the counters available seem not to account for the larger part of the usage I am seeing. I might be missing something, though. I also have a MyISAM-based replication slave attached to the InnoDB master taking similar (read-only) loads - it does not show any signs of excessive memory usage (mysqld RSS is continuously < 1 GB), so the problem appears to be specific to the InnoDB configuration.
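On 5.6 there is no direct per-component memory accounting, but the temporary-table theory can at least be bounded with the counters and caps that do exist:

```sql
-- Implicit temp tables created so far, and how many spilled to disk;
-- each in-memory one is capped by the smaller of the two variables below.
SHOW GLOBAL STATUS LIKE 'Created_tmp%';
SHOW GLOBAL VARIABLES LIKE 'tmp_table_size';
SHOW GLOBAL VARIABLES LIKE 'max_heap_table_size';
```

Later releases (5.7's performance_schema memory instrumentation and the sys schema) make this directly observable; on 5.6 these counters plus per-connection arithmetic are about all there is.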
SSIS keeps force changing excel source string to float Posted: 02 May 2013 12:25 PM PDT There is a column in Excel that is supposed to be text, but SSIS sees numeric text there and automatically makes it double-precision float [DT_R8]. I can change it manually in the Output branch's External and Output columns, but the Error Output simply refuses to let me change the respective column. Error at Extract Stations [Excel Source [1]]: The data type for "output "Excel Source Error Output" (10)" cannot be modified in the error "output column "Group No" (29)". Error at Extract Stations [Excel Source [1]]: Failed to set property "DataType" on "output column "Group No" (29)". I have tried modifying the package XML. I tried IMEX=1 and TypeGuessRows=0, but none of that has fixed my problem. Does this have any fix at all? The Excel field to be imported into the SQL nvarchar field reads, for example, but is being written to the SQL table as I put in data viewers and the fields show correctly all the way through execution, but I guess when it hits the OLE DB Destination it somehow converts it, which is wrong. I also have values such as V321.1 and V213.34, for example, which are definitely not integers and need to be stored as varchar.
SQL server ODBC connection for Active Directory user who doesn't log in to Windows Posted: 02 May 2013 02:05 PM PDT Do you think it is possible to create a SQL Server ODBC connection for an Active Directory user who doesn't log in to Windows? Ideally this type of user would be used in a batch process: one person logs in, creates the batch process, and runs it as another user. Note: I don't want to use SQL Server authentication; I'd like to use Active Directory instead. Thanks.
Process attempted to unlock a resource it does not own Posted: 02 May 2013 01:05 PM PDT SQL Server 2005 SP4, 32-bit. I have a DBCC CHECKDB job running nightly. Last night, soon after the job started, I got the errors below. The database is NOT in suspect mode, and CHECKDB comes back clean when I run it now. The database is fully accessible. Should I be concerned? I'd hate to go back to a backup at this point.
Setting DATEFIRST to 6 Posted: 02 May 2013 05:05 PM PDT I would like to set DATEFIRST to 6 permanently. But: I get many languages in syslanguages, and none of them has the datefirst value I need. Questions: Can I add another language based on an existing one and set the field? Can I modify an existing syslanguage? All I want is to set it to 6 permanently, because I need it to get the right week number. Thank you for your help!
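SET DATEFIRST is per-session, which is why the permanent route leads through the login's default language. A sketch of the session form plus a DATEFIRST-independent workaround for week arithmetic:

```sql
-- Per-session only; resets with every new connection.
SET DATEFIRST 6;
SELECT @@DATEFIRST;

-- DATEFIRST-agnostic weekday: this expression yields the same value
-- regardless of the session setting, so week logic stops depending on it.
SELECT (DATEPART(weekday, GETDATE()) + @@DATEFIRST) % 7;
```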
Postgresql Retrieve / Find Primary Key Serial Autonumber field for Audit logs Posted: 02 May 2013 06:07 PM PDT I need to find the primary key field name in each of a number (approx. 70) of tables to create audit logs. All the primary keys are of type serial (auto-incrementing integer) and all have names in the format TablenameID, e.g. a table with the name "person" will have the serial primary key called "PersonID". The only way I can find to do this is: http://wiki.postgresql.org/wiki/Retrieve_primary_key_columns The disadvantage is the extra time this will add to the creation of the audit logs; from EXPLAIN (on an empty table): Is there a better / faster way of doing this? The serial autonumber key name can be changed at this stage of the project.
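For reference, the catalog lookup from the linked wiki page boils down to roughly this (the table name is an example); pg_index and pg_attribute are cached system catalogs, so it is typically sub-millisecond:

```sql
SELECT a.attname
FROM pg_index i
JOIN pg_attribute a ON a.attrelid = i.indrelid
                   AND a.attnum = ANY (i.indkey)
WHERE i.indrelid = 'person'::regclass
  AND i.indisprimary;
```

Given the strict TablenameID naming convention, an audit trigger could also skip the lookup entirely and derive the column name from TG_TABLE_NAME.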
How does MySQL determine the 'index first key' and 'index last key' with indexes? Posted: 02 May 2013 09:05 PM PDT In other words, how does MySQL find the result range with indexes in the first step when it's executing a select query?
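Conceptually: the optimizer rewrites the WHERE clause into range endpoints, and the storage engine locates each endpoint with a B-tree descent from the root. A small illustration with a hypothetical table:

```sql
-- Here the 'index first key' is the B-tree entry for k = 10 and the
-- 'index last key' the entry for k = 20; the rows between them are read
-- by walking the linked leaf pages. EXPLAIN shows the range access.
EXPLAIN SELECT * FROM t WHERE k BETWEEN 10 AND 20;
```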
compare the same table Posted: 02 May 2013 11:05 AM PDT I am facing an issue with the following query. When I execute the query, it takes very long. I broke the query into two parts and compared them with a shell script, but is there any chance to do it with one query? Any suggestions welcome.
sql server database sharding - what to do with common data / non sharded data Posted: 02 May 2013 06:05 PM PDT We have a very large scale enterprise level database. As part of our business model all web users hit our web servers at the same time each month, which in turn hammers our SQL box. The traffic is very heavy and continues to grow heavier as the company grows. SQL proc optimization has been performed and hardware has already been scaled up to a very high level. We are looking to shard the database now to ensure that we can handle company growth and future loads. We have decided what particular data should be sharded. It is a subset of our database which is highly utilized. However, my question is regarding the non-sharded data which is common/universal. An example of data like this may be an Inventory table, or possibly an Employee table, user table, etc. I see two options to handle this common/universal data: 1) Design 1 - Place the common/universal data in an external database. All writes will occur here. This data will then be replicated down to each shard, allowing each shard to read this data and inner join to it in T-SQL procs. 2) Design 2 - Give each shard its own copy of all common/universal data. Let each shard write locally to these tables and utilize SQL merge replication to update/sync this data on all other shards. Concerns about design #1: 1) Transactional issues: if you have a situation in which you must write or update data in a shard and then write/update a common/universal table in one stored proc, for instance, you will no longer be able to do this easily. The data now exists on separate SQL instances and databases. You may need to involve MS DTC to see if you can wrap these writes into a transaction, since they are in a separate database. Performance is a concern here and possible rewrites may be involved for procs that write to sharded and common data. 2) A loss of referential integrity: it is not possible to do cross-database referential integrity. 3) Recoding large areas of the system so that it knows to write common data to the new universal database but read common data from the shards. 4) Increased database trips: like #1 above, when you run into a situation in which you must update sharded data and common data, you are going to make multiple round trips to accomplish this, since the data is now in separate databases. There is some network latency here, but I am not as worried about this issue as the above three. Concerns about design #2: in design #2 each shard gets its own instance of all common/universal data. This means that all code that joins to or updates common data continues to work/run just as it does today. There is very little recoding/rewriting needed from the development team. However, this design completely depends on merge replication to keep data in sync across all shards. The DBAs are highly skilled but are very concerned that merge replication may not be able to handle this, and that should merge replication fail, recovery from that failure is not great and could impact us very negatively. I am curious to know whether anyone has gone with design option #2. I am also curious to know if I am overlooking a third or fourth design option that I do not see. Thank you in advance.
Possible to make MySQL use more than one core? Posted: 02 May 2013 10:10 AM PDT I've been presented with some dedicated MySQL servers that never use more than a single core. I'm more developer than DBA for MySQL, so I need some help. Setup The servers are quite hefty with an OLAP/DataWarehouse (DW) type load:
Note: The biggest DB is the one replicated from the OLTP DR server, and the DW is loaded from this. It isn't a full DW: just the last 6 weeks to 6 months, so it is smaller than the OLTP DB.
Questions
Other notes
Edit 1
Edit 2
To test
Let me know if I've missed anything important. Cheers. Update Changed innodb_flush_method + 3 x thread settings per RolandoMySQLDBA's answer.
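For context on the core question: a single MySQL query executes on a single thread, hence one core; multi-core usage comes from running queries concurrently. The knobs that usually matter, as a sketch (values illustrative, not recommendations):

```sql
-- Dynamic: 0 removes InnoDB's admission throttle so concurrent queries
-- can spread across cores.
SET GLOBAL innodb_thread_concurrency = 0;

-- Startup-only options that belong in my.cnf rather than SET GLOBAL:
--   innodb_read_io_threads / innodb_write_io_threads (default 4 each)
--   innodb_buffer_pool_instances (5.5+)
SHOW GLOBAL VARIABLES LIKE 'innodb%thread%';
```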