[how to] SQL Max() revision number
- SQL Max() revision number
- Apply in a CSV file to multiple databases
- Database Market Share [closed]
- Is there a way that I can export a list of all table triggers on a server?
- Which database is designed for storing and incrementing values?
- How can I calculate time spent at work in a single column?
- Logging only MySQL logins on a heavily used system
- Defining the Relationship - Some Basic Help [closed]
- Does this query make sense?
- Integer comparison in MySQL: does it cause lots of overhead?
- Logging and saving user activity
- Linked server error not caught by TRY-CATCH
- Key Lookup and Full-text index
- Large time difference between two almost identical simple queries... why?
- Upgrade MySQL 5.1 to 5.5 on Ubuntu 11.10
- MySQL information_schema doesn't update
- How do I add a database in IBM DB2
- What is the maximum number of rows in a clustered index on a datetime column?
- Can I configure MySQL's typecasting to consider 0 != 'foo'?
- Select MaxDate where hh:mm:ss makes the difference
- Change Data Capture and the __$update_mask binary
- How to successfully run a batch file in a SQL Agent job?
- MySQL partitioned tables?
- Best way to defrag/compact a database for archival purposes
- Using MySQL EXPLAIN to analyze the performance of a function
- How to scale MySQL transaction wrapped queries?
- MySQL user defined rollback procedure
- How do I generate a tnsnames.ora file to connect to an Oracle Database?
- How to drop multiple tables with a common prefix in one query?
SQL Max() revision number Posted: 07 Mar 2013 06:49 PM PST This SQL returns a table with everything unique, which is what I want. However, if there are two (or more) event numbers, I need it to return the line with the highest revision number. I have read about the MAX() function, but I'm not having any success with it. Here's what the output looks like:
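A common shape for this kind of greatest-per-group query is to join the table back to a MAX() aggregate. This is only a sketch, since the original schema and sample output were not shown; the names events, event_number, and revision are assumptions:

```sql
-- Sketch: keep only the row with the highest revision per event number.
-- Table and column names are assumptions, not the asker's actual schema.
SELECT e.*
FROM events AS e
JOIN (
    SELECT event_number, MAX(revision) AS max_revision
    FROM events
    GROUP BY event_number
) AS latest
  ON  e.event_number = latest.event_number
  AND e.revision     = latest.max_revision;
```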
Apply in a CSV file to multiple databases Posted: 07 Mar 2013 03:08 PM PST Recently, I have been working with a client who has asked us to provide their dataset for use in a mobile application we are making for them. The requirements for this project include the ability to update the local dataset on the mobile devices with any changes that our client delivers to us. The local databases are to be stored in SQLite, per the mobile device standard. The Problems:
Approach: Currently, the solution we are looking into involves using one of our backend MySQL servers to maintain a backup of the database stored in the mobile SQLite databases. When a new dataset arrives, we were planning to build a new database from the new data and then use the Linux diff and patch commands to update the server-side data. However, with this approach, knowing what or how to update in the mobile phones' local databases gives us some issues; I am currently unaware of any way to use the diff/patch output from MySQL in either the Android or iOS environments. Ideal Approach: Preferably, when we receive an entirely new dataset, we would like to detect just the changes in the new CSV against the current state of the database. Then, with this list of changes, we would generate INSERT/UPDATE/DELETE statements that could apply each change to both the MySQL and SQLite databases. We would store each of the INSERT/UPDATE/DELETE statements along with the date updated, so that upon request each of the phones can be brought up to the newest data. However, I am also currently unaware of a way to detect changes from a new CSV against an existing database, which makes constructing the INSERT/UPDATE/DELETE statements difficult. The Questions:
I am currently seeking advice on either improvements to our design or the proposed implementation. I can't be the first person to try to reflect changes from a CSV in a database, or to synchronize changes in a server-side database to a local offline database. Thank you all in advance for your help; I look forward to hearing alternative approaches to this problem.
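One way to detect the changes, sketched below under the assumption of a MySQL staging table items_staging that mirrors a live table items with a key column id and a payload column: load the CSV into the staging table, then derive the INSERT/UPDATE/DELETE sets with outer joins.

```sql
-- Sketch: diff a freshly loaded CSV against the live table (MySQL).
-- items, items_staging, id, and payload are assumed names.
LOAD DATA LOCAL INFILE '/tmp/new_dataset.csv'
INTO TABLE items_staging
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;

-- In the CSV but not in the live table -> generate INSERTs.
SELECT s.id FROM items_staging s
LEFT JOIN items i ON i.id = s.id
WHERE i.id IS NULL;

-- In both, but changed -> generate UPDATEs.
SELECT s.id FROM items_staging s
JOIN items i ON i.id = s.id
WHERE i.payload <> s.payload;

-- In the live table but missing from the CSV -> generate DELETEs.
SELECT i.id FROM items i
LEFT JOIN items_staging s ON s.id = i.id
WHERE s.id IS NULL;
```

The statements derived this way can be stored with a timestamp and replayed against both the MySQL master copy and each phone's SQLite database.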
Database Market Share [closed] Posted: 07 Mar 2013 02:44 PM PST Does anyone have any knowledge of, or references for, the latest database market share figures? Basically, I'm looking to see how the following stack up in terms of support in the industry: Oracle, IBM DB2, Microsoft SQL Server, MySQL, PostgreSQL, and Sybase ASE. I've seen data points from Gartner studies but was unable to view them on Gartner's site; I did find an old data point from 2008. Is there a better source for this information elsewhere?
Is there a way that I can export a list of all table triggers on a server? Posted: 07 Mar 2013 01:50 PM PST I have two SQL Server instances that I manage: one is a SQL Server 2000 instance and the other is 2005. Somewhere on these servers, I recall setting up a couple of table triggers that were executed under certain conditions. I need to look up these triggers as a point of reference for a new project, but for the life of me I cannot find them. Is there some crafty SQL statement I can execute that iterates over all my databases and, subsequently, over each table, and outputs any triggers associated with the tables?
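One possibility is a sketch using the undocumented sp_MSforeachdb procedure, which ships with both SQL Server 2000 and 2005; in sysobjects, type 'TR' marks triggers on both versions:

```sql
-- Sketch: list every trigger in every database on the instance.
EXEC sp_MSforeachdb 'USE [?];
SELECT ''?''                    AS database_name,
       OBJECT_NAME(parent_obj)  AS table_name,
       name                     AS trigger_name
FROM sysobjects
WHERE type = ''TR'';';
```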
Which database is designed for storing and incrementing values? Posted: 07 Mar 2013 12:02 PM PST We're trying to build a stats server that stores visits for each document in a section for a specific date, so we need a table that has the following parameters:
The plan is then to increment the visit counter on each hit. Now, we tried to do this using MongoDB (albeit a very simple instance of it) and it turns out this is too write-intensive for it. Is there another database system that's better suited for this task? Or should we just start growing the MongoDB infrastructure that we're using?
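For comparison, a plain relational approach handles this with an atomic upsert; a minimal sketch (MySQL/InnoDB, with assumed table and column names):

```sql
-- Sketch: one row per (document, section, date), bumped on every visit.
CREATE TABLE visit_counts (
    document_id INT  NOT NULL,
    section_id  INT  NOT NULL,
    visit_date  DATE NOT NULL,
    visits      INT  NOT NULL DEFAULT 0,
    PRIMARY KEY (document_id, section_id, visit_date)
);

-- Insert the row on first visit, increment it thereafter.
INSERT INTO visit_counts (document_id, section_id, visit_date, visits)
VALUES (42, 7, CURRENT_DATE, 1)
ON DUPLICATE KEY UPDATE visits = visits + 1;
```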
How can I calculate time spent at work in a single column? Posted: 07 Mar 2013 06:42 PM PST I would like to calculate the amount of time spent working from rows of data that contain a single date/time field. The first occurrence on a given day can be considered the start time, with the next occurrence being the end time, alternating back and forth on any given day. So if I start at 9:00am, go on lunch at 12:00pm, return from lunch at 12:30pm, then leave for the day at 5:00pm, my data would look like: Here is a further sample (note that my locale uses a different date/time format). I would like to know the total time spent at work for any given day, and would also like to be able to tally the time spent over, for instance, any given week or month.
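One way to pair the punches, sketched in T-SQL (SQL Server 2008+) under the assumption of a table punches with a single punch_time column: number each day's rows, treat odd row numbers as "in" punches, and join each to the following row.

```sql
-- Sketch: pair alternating in/out rows and sum the worked intervals.
WITH numbered AS (
    SELECT punch_time,
           CAST(punch_time AS DATE) AS work_day,
           ROW_NUMBER() OVER (PARTITION BY CAST(punch_time AS DATE)
                              ORDER BY punch_time) AS rn
    FROM punches
)
SELECT i.work_day,
       SUM(DATEDIFF(MINUTE, i.punch_time, o.punch_time)) AS minutes_worked
FROM numbered AS i
JOIN numbered AS o
  ON  o.work_day = i.work_day
  AND o.rn       = i.rn + 1
WHERE i.rn % 2 = 1            -- odd rows are "in" punches
GROUP BY i.work_day;
```

Weekly or monthly tallies then just group the daily results by the relevant period.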
Logging only MySQL logins on a heavily used system Posted: 07 Mar 2013 06:23 PM PST I have an audit requirement where they want all database access (logins) logged. We have a very large distributed system of hundreds of MySQL servers (shards, slaves, slaves of slaves) serving a thousand queries per second. As a result, I can't just turn on general query logging as recommended in "Audit logins on MySQL database"; it would be multiple GB/day per instance and kill our disk I/O. I've looked and found two options for discretionary logging with filtering: an Oracle plugin and a McAfee plugin. The Oracle one requires paying for MySQL Enterprise, and with hundreds of DB servers I'm not prepared to start doing that today. The McAfee one is brand new and I'm worried about stability on a large-scale system like ours. Are there any other good solutions to log only logins to our MySQL databases? It doesn't have to be integral to MySQL; we'd consider UNIX-level port-sniffing shenanigans if they'd scale.
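One lightweight possibility is MySQL's init_connect hook, which runs a statement for every new connection. A sketch, with two caveats: init_connect is not executed for users with the SUPER privilege, so administrative logins would be missed, and every audited user needs INSERT rights on the log table or their connections will fail:

```sql
-- Sketch: record each login without the general query log.
CREATE TABLE audit.logins (
    login_time    DATETIME     NOT NULL,
    user_host     VARCHAR(255) NOT NULL,
    connection_id INT          NOT NULL
);

SET GLOBAL init_connect =
    'INSERT INTO audit.logins VALUES (NOW(), CURRENT_USER(), CONNECTION_ID());';
```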
Defining the Relationship - Some Basic Help [closed] Posted: 07 Mar 2013 09:58 AM PST I thought I had this figured out, but I am starting to doubt myself on something that should be simple. I have a table for Group and a table for Activity. Each Group has one Activity; different Groups can have the same Activity. Right now I have the foreign key in the Group table connected to the primary key in the Activity table. I started to doubt myself when I began entering data into the Access database table Group: when I got to the foreign key field to enter the Activity ID, I did not get the (+) symbol to allow a drop-down choice. Should the Group table have a foreign key to the Activity table (the way I have it now), or should the Activity table have a foreign key for Group ID? I am having a rough week, with stomach flu, and I am just not on my game. Thank you.
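For what it's worth, the described design matches the standard pattern; a generic DDL sketch (Group is bracketed because it is a reserved word):

```sql
-- Sketch: the FK belongs on the "many" side (Group), pointing at the
-- "one" side (Activity), since each group has exactly one activity and
-- an activity can be shared by many groups.
CREATE TABLE Activity (
    ActivityID   INTEGER PRIMARY KEY,
    ActivityName VARCHAR(100)
);

CREATE TABLE [Group] (
    GroupID    INTEGER PRIMARY KEY,
    GroupName  VARCHAR(100),
    ActivityID INTEGER NOT NULL REFERENCES Activity (ActivityID)
);
```

In Access, a missing drop-down on the FK field usually means the field was not defined as a lookup field, not that the relationship is backwards.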
Does this query make sense? Posted: 07 Mar 2013 09:16 AM PST I recently migrated a database from Sybase ASE to Microsoft SQL Server 2008 R2. One category of queries broke in transit. According to Sybase, this is a valid query, but Microsoft SQL Server is stricter, so I get an error. While looking at the query to try to correct it, I realized that it does not make sense to me. Isn't it exactly the same as the alternative below? Or maybe I should write it another way? There are several queries like this in one application and I need to fix them without breaking anything else. Please help.
Integer comparison in MySQL: does it cause lots of overhead? Posted: 07 Mar 2013 09:46 AM PST I have an 8-digit number in my DB. I want to do a SELECT on the table in MySQL (InnoDB) and get the list of all numbers that are equal to or smaller than my number, then sort them from larger to smaller. I store the number as an integer. Let's say my number is 12040670 and after the SELECT I get this list: 10340070, 12005608, ... etc. Then I sort them from large to small. If my table contains 1,000,000 records and I run such a comparison, do you think it causes a lot of load on the DB server? How about indexing the column? Does indexing help lower the overhead when it comes to integer comparison?
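In general, integer comparison itself is cheap; the cost is in how many rows must be scanned and sorted. A sketch of why an index helps here (table and column names are assumptions):

```sql
-- Sketch: a B-tree index serves both the range predicate and the
-- descending sort, so no full scan or filesort is needed.
ALTER TABLE numbers ADD INDEX idx_num (num);

EXPLAIN
SELECT num
FROM numbers
WHERE num <= 12040670
ORDER BY num DESC;
-- Expect "range" access on idx_num with no "Using filesort".
```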
Logging and saving user activity Posted: 07 Mar 2013 02:44 PM PST Database: Oracle 11gR2. OS: Windows Server 2008 R2. I'm not a trained DBA; I just have to run things on an Oracle database temporarily, which is used to perform OLAP processes. I need to automate user activity logging (about 30 users) and save this data (as a text file) if possible. What I need to log: It would be great to automatically save these logs on an hourly basis. I've done some research on RMAN, but it seems complicated for me at this stage. Thanks.
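For login and session activity specifically, Oracle's standard auditing is a better fit than RMAN (RMAN is a backup and recovery tool, not an auditing one). A sketch, assuming you can set audit_trail and restart the instance:

```sql
-- Sketch: record connects/disconnects in the database audit trail.
ALTER SYSTEM SET audit_trail = DB SCOPE = SPFILE;  -- requires a restart
AUDIT SESSION;

-- Review (and spool to a text file) on an hourly schedule:
SELECT username, action_name, timestamp, logoff_time
FROM   dba_audit_session
WHERE  timestamp > SYSDATE - 1/24;
```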
Linked server error not caught by TRY-CATCH Posted: 07 Mar 2013 09:04 AM PST I am setting up a job to loop through a list of linked servers and execute a specific query against each one. I am trying to execute the query inside a TRY-CATCH block so that if there's a problem with one particular server I can log it but then carry on with the other servers. The query I'm executing inside the loop looks something like this: If there is a problem connecting to the server, the code just fails immediately and doesn't transfer to the CATCH block. For example, I created a linked server with a name that I know doesn't exist. When executing the above I just get: I've read the BOL documentation on TRY-CATCH. Does anyone know why these errors aren't caught correctly?
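A workaround often suggested for this class of problem: probe the linked server first with sp_testlinkedserver, which raises a catchable error instead of severing the batch. A sketch (the linked server name is a placeholder, and EXEC ... AT assumes RPC Out is enabled):

```sql
BEGIN TRY
    EXEC master.dbo.sp_testlinkedserver N'MyLinkedServer'; -- throws if unreachable
    EXEC ('SELECT 1') AT [MyLinkedServer];                 -- the real query goes here
END TRY
BEGIN CATCH
    PRINT 'Skipping server: ' + ERROR_MESSAGE();           -- log it, then carry on
END CATCH;
```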
Key Lookup and Full-text index Posted: 07 Mar 2013 03:58 PM PST I have the following query:
My execution plan shows that the Key Lookup (Clustered) operation accounts for 80+% of the cost:
Every column mentioned is in its own non-clustered index, Title is additionally in the full-text index, and there's a clustered index on the primary key as well. Is there a way to include the full-text index in a covering index, or something similar?
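A full-text index cannot carry extra columns the way a covering index can, but the key lookup can usually be eliminated by letting a nonclustered index carry the selected columns. A sketch with assumed table and column names, since the query itself wasn't shown:

```sql
-- Sketch: cover the query's residual columns so the full-text match
-- no longer needs a clustered-key lookup per row.
CREATE NONCLUSTERED INDEX IX_Posts_Title_Covering
ON dbo.Posts (Title)
INCLUDE (Summary, CreatedDate);   -- whatever columns the SELECT needs
```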
Large time difference between two almost identical simple queries... why? Posted: 07 Mar 2013 10:37 AM PST I'm trying to understand why there is so much difference in execution time and CPU usage between two simple queries that differ only in a computation. The first query's statistics: 67,000 logical reads, 3,000 ms CPU time, 800 ms elapsed time. The second query's statistics: 20 logical reads, 0 ms CPU time, 0 ms elapsed time. Table "twg" has a nonclustered index on the "Metadata" field (its primary key fields don't appear in the query). Table "Metadata" has its field "Metadata" as the primary key, and therefore a clustered index on it. As you can see, the only difference is specifying the concrete values that, at the moment, result from the computation "(Metadata % 1000) = 100". Shouldn't the SQL engine compute the valid values for Metadata first and then apply the filter? Is this computation so time-consuming that it causes such a big difference in the performance numbers? Thanks in advance!
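The usual explanation is sargability: wrapping the column in an expression forces the engine to evaluate it for every row, while a bare-column predicate lets it seek the index. A sketch of the difference, using the tables named in the question (the literal values are examples):

```sql
-- Slow: the expression hides Metadata from the index, so every row
-- must be read and tested.
SELECT * FROM twg WHERE (Metadata % 1000) = 100;

-- Fast: literal values allow index seeks on the nonclustered
-- Metadata index.
SELECT * FROM twg WHERE Metadata IN (100, 1100, 2100);
```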
Upgrade MySQL 5.1 to 5.5 on Ubuntu 11.10 Posted: 07 Mar 2013 05:37 PM PST I currently have MySQL Server 5.1 installed via apt-get on my production Ubuntu 11.10 server. I would like to upgrade this to 5.6, but the MySQL docs seem to suggest upgrading to 5.5 first, and from there to 5.6. I've seen various lengthy guides describing how to upgrade from 5.1 to 5.5, but they all seem to describe how to upgrade by installing the tarball rather than using the package manager. Is there a simpler way to upgrade using the package manager if the current version was installed using apt-get? Obviously I want my existing configuration and databases to be retained after the upgrade, and I will be sure to back up my databases using mysqldump beforehand.
MySQL information_schema doesn't update Posted: 07 Mar 2013 06:22 PM PST I have a database, say test. After I run a query that removes all the data, the strange thing is that I find the database size doesn't decrease at all, although the data in test is gone. I've done this kind of test many times, and this strange behavior happens sometimes. Can anybody tell me what is wrong? Update: Actually, I use another thread to check the database size periodically.
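Two things are worth knowing here, sketched below: InnoDB's size figures in information_schema.tables are estimates that can lag behind changes, and with a shared ibdata file, space freed by deletes is reused rather than returned to the filesystem, so the reported size may legitimately not shrink.

```sql
-- Sketch: refresh the statistics before sampling the size.
ANALYZE TABLE test.my_table;   -- my_table is a placeholder name

SELECT table_name, data_length, index_length, data_free
FROM   information_schema.tables
WHERE  table_schema = 'test';
```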
How do I add a database in IBM DB2 Posted: 07 Mar 2013 09:29 AM PST I'm supposed to migrate a MySQL database to IBM DB2, but I thought I'd take baby steps before doing something like that. So for now, I'd just like to add a database. I'll admit that I have no experience at all with IBM DB2. Here's what I did so far:
Can I have some help with this? Where do I go from here, and how do I create a database? EDIT: I added a port number (12345), which then allowed me to create a database, albeit with an error:
EDIT: Additional information: - Using Windows 7 32-bit
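For reference, a minimal sketch of creating a database from the DB2 command line processor (TESTDB is an arbitrary name; run these from a DB2 command window, or prefix each with db2 in an ordinary shell):

```sql
CREATE DATABASE TESTDB;
CONNECT TO TESTDB;
-- Verify it is registered:
LIST DATABASE DIRECTORY;
```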
What is the maximum number of rows in a clustered index on a datetime column? Posted: 07 Mar 2013 03:55 PM PST I would like to know: what is the maximum number of rows a table can hold with a (non-unique) clustered index on a datetime column in SQL Server 2008 R2? Thanks.
Can I configure MySQL's typecasting to consider 0 != 'foo'? Posted: 07 Mar 2013 05:40 PM PST This article points out the following security problem with MySQL: here we have nice, random tokens that a user must prove they have in order to reset their account. But the user manages to submit a reset token of 0, and MySQL converts the stored string token to a number for the comparison, under which any non-numeric string equals 0. Even if the reset token starts with some numbers, the situation is not much better: MySQL ignores all characters but the initial numbers when making this comparison, so it considers a string such as '123abc' equal to 123. Can I configure MySQL not to do this kind of typecasting? If, for example, it cast the number to a string instead, the comparison would fail as it should. Note: if the query is run with the token quoted as a string, the problem does not arise. That bug has been patched in Rails; it now converts all request parameters to strings, so it would never build a query with an integer. But I still think MySQL should be configurable to prevent this.
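A short demonstration of the behavior, plus one query-side guard (as far as I know there is no server setting that disables this coercion; users and token are placeholder names):

```sql
SELECT 0 = 'abcdef';     -- 1: a non-numeric string casts to 0
SELECT 123 = '123abc';   -- 1: characters after the leading digits are ignored
SELECT '0' = 'abcdef';   -- 0: string-to-string comparison behaves as expected

-- Forcing the comparison into string territory avoids the spurious match:
SELECT * FROM users WHERE token = CAST(0 AS CHAR);  -- matches only token = '0'
```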
Select MaxDate where hh:mm:ss makes the difference Posted: 07 Mar 2013 11:12 AM PST How do I select the maximum date from a table when hh:mm:ss is included? My query is the following, and it returns the latest date, but if there are different times recorded, the date is returned twice. What I want is to get the absolute max date. This query returns the following table: The record I want is: Does anyone know how to filter the records down to the max time? Thanks for the help.
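Since only one row is wanted, ordering and taking the first row avoids grouping entirely; a T-SQL sketch with assumed names (in MySQL the equivalent would be ORDER BY ... LIMIT 1):

```sql
-- Sketch: the single latest row, including its time of day.
SELECT TOP (1) *
FROM   records
ORDER  BY record_date DESC;
```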
Change Data Capture and the __$update_mask binary Posted: 07 Mar 2013 10:56 AM PST We're using CDC to capture changes made to a production table. The changed rows are being exported out to a data warehouse (Informatica). I know that the __$update_mask column stores which columns were updated in varbinary form, and I know I can use a variety of CDC functions to find out from that mask what those columns were. My question is this: can anyone explain the logic behind that mask so that we can identify the changed columns over in the warehouse? Since we're processing outside of the server, we don't have easy access to those MSSQL CDC functions, and I would rather just break down the mask myself in code; the performance of the CDC functions on the SQL end is problematic for this solution. In short, I'd like to identify changed columns by hand from the __$update_mask field. Update: As an alternative, sending a human-readable list of changed columns over to the warehouse was also acceptable. We found this could be performed with performance far greater than our original approach. The CLR answer to this question meets this alternative and includes details of interpreting the mask for future visitors; however, the accepted answer using XML PATH is the fastest yet for the same final result.
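For future visitors, the mask decodes as one bit per captured column, ordered by column_ordinal and counted from the least-significant bit of the last byte of the varbinary. A T-SQL sketch (the capture instance name and mask value are placeholders):

```sql
DECLARE @mask VARBINARY(128) = 0x0A;  -- example __$update_mask value

SELECT cc.column_name
FROM   cdc.captured_columns AS cc
JOIN   cdc.change_tables    AS ct ON ct.object_id = cc.object_id
WHERE  ct.capture_instance = 'dbo_MyTable'
AND    CONVERT(INT, SUBSTRING(@mask,
                              DATALENGTH(@mask) - ((cc.column_ordinal - 1) / 8),
                              1))
       & POWER(2, (cc.column_ordinal - 1) % 8) > 0;
```

The same byte/bit arithmetic ports directly to code running outside the server.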
How to successfully run a batch file in a SQL Agent job? Posted: 07 Mar 2013 03:28 PM PST I have a SQL Agent job which generates a specific report as a PDF file, then copies the PDF to a network directory and deletes the PDF in the source directory. The job consists of 2 steps: 1. Generate the report. 2. Copy the report to the network location. For step 2 I made a bat file which handles the copying and removal of the PDF file. The bat file is as follows: However, when I run the job, it hangs on the second step; the status just stays on "Executing". This is the line I put in the step (the location of the bat file to execute): My job settings are as follows: Step 1 - Type: Operating system (CmdExec); On Success: Go to the next step; On Failure: Quit the job reporting failure. Step 2 - Type: Operating system (CmdExec); On Success: Quit the job reporting success; On Failure: Quit the job reporting failure. Some facts:
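Two common causes of exactly this hang are mapped drive letters (which don't exist for the Agent service account) and interactive prompts such as copy's overwrite confirmation. A sketch of a step definition that avoids both (job, step, and path names are assumptions; inside the bat file, prefer copy /Y and full UNC paths):

```sql
EXEC msdb.dbo.sp_add_jobstep
    @job_name          = N'Report job',
    @step_name         = N'Copy PDF to network share',
    @subsystem         = N'CmdExec',
    @command           = N'cmd /c "C:\Jobs\copy_report.bat"',
    @on_success_action = 1,   -- quit reporting success
    @on_fail_action    = 2;   -- quit reporting failure
```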
MySQL partitioned tables? Posted: 07 Mar 2013 11:33 AM PST I have a database that supports a web application with several large tables. I'm wondering if partitioned tables would help speed up certain queries. Each of these tables has a column called client_id, and data for each client_id is independent of every other client_id. In other words, web queries will always contain a WHERE clause with a single client_id, so I'm thinking this may be a good column on which to partition my large tables. After reading up on partitioned tables, I'm still a little unsure how best to partition. For example, a typical table may have 50 million rows distributed more or less evenly across 35 client_ids. We add new client_ids periodically, but in the short term the number of client_ids is relatively fixed. I was thinking something along these lines: My question: is this an optimal strategy for partitioning these types of tables? My tests indicate a considerable speedup over indexing on client_id, but can I do better with some other form of partitioning (i.e. hash or range)?
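A minimal sketch of the hash variant (note that MySQL requires every unique key, including the primary key, to contain the partitioning column, so the PK may need to become a composite such as (id, client_id)):

```sql
-- Sketch: spread rows across partitions by client_id.
ALTER TABLE big_table          -- big_table is a placeholder name
    PARTITION BY HASH (client_id)
    PARTITIONS 36;
```

With every query filtering on a single client_id, partition pruning limits each query to one partition, which is consistent with the speedup described.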
Best way to defrag/compact a database for archival purposes Posted: 07 Mar 2013 01:10 PM PST We've got a SQL Server instance that's used for email archiving (courtesy of a 3rd-party archiving package). Every so often, the software is rolled over to a new empty database. We've done this quarterly in the past, but we're looking to do it monthly now. The amount of data being archived is about 15-20 GB per month, and the bulk of the data resides in only a handful of tables (usually 2-4). Once we roll over to a new database, the old one is used on a strictly read-only basis. What I'd like to do is optimize it into a nice, tight data file, with all the tables/indexes contiguous, a very high fill factor, and not much empty space at the end of the data file. Also, we're using Standard Edition on this server, with all the limitations that implies (otherwise I'd be using data compression already). A few possibilities I can think of:
The final step in every case would be setting the database to read-only mode. What other good/better options are there? My concern is moving the data over in such a way as to preserve a high fill factor, in a logically contiguous fashion. Edit: I should mention that about 75% of the data seems to be stored in image (LOB) columns.
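A Standard Edition friendly sketch of the dense-rebuild route (object and file names are placeholders; note that an index rebuild does not compact LOB pages, which matters given the image columns):

```sql
ALTER INDEX ALL ON dbo.ArchiveTable
    REBUILD WITH (FILLFACTOR = 100);

-- TRUNCATEONLY releases only the trailing free space, so it does not
-- re-fragment pages the way a full shrink would.
DBCC SHRINKFILE (N'Archive_Data', TRUNCATEONLY);

ALTER DATABASE ArchiveDB SET READ_ONLY;
```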
Using MySQL EXPLAIN to analyze the performance of a function Posted: 07 Mar 2013 02:41 PM PST You can precede a SELECT statement with EXPLAIN to analyze it, but is there a way to analyze the performance of the queries executed inside a stored function?
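EXPLAIN won't descend into a stored function, but the session profiler can at least show where the time goes when the function runs; a sketch (my_function is a placeholder):

```sql
SET profiling = 1;
SELECT my_function(42);
SHOW PROFILES;              -- statements with their durations
SHOW PROFILE FOR QUERY 1;   -- stage-level breakdown of the first one
```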
How to scale MySQL transaction-wrapped queries? Posted: 07 Mar 2013 10:33 AM PST So I'm working on an app that wraps many of its SQL queries in a transaction, and I'm trying to think of some ways to scale the DB layer up. Up until now we've been throwing hardware at the problem, slowly upgrading our Rackspace cloud servers, but we're reaching the upper echelons of their cloud server offerings. I'm thinking the standard R/W split won't work for us because the transaction gets stuck to a single server, eliminating the benefits of the R/W split. (Is that right?) I've looked at dbShards and ScaleBase, both of which look like interesting solutions to the problem, but as a cash-strapped startup we'd prefer to use something OSS or devote some man-hours to creating a sharding strategy instead. So I suppose the question boils down to: are Galera/MySQL Cluster and sharding my only two options for scaling this setup? EDIT: To answer some more questions... We manage a great deal of financial and inventory-related data; for this reason we wrap most of our SQL traffic in a transaction so it's all or nothing. We have ~100 tables in our DB, with ~30 of them getting a great deal of use. The most commonly used tables have anywhere from 15MM to 35MM rows. Our transactions span 20-30 tables, with joins between ~10 of them. The queried tables are rather large and we have made some inroads with respect to query optimization and proper indexing, but no one is a fully fledged DBA in house... closest is me, haha. A lot of our DB traffic is machine generated; we do a lot of syncing with third-party data sources. As such, most of the SQL queries are wrapped in a transaction to ensure data consistency when performing all the calculations and syncing with the third parties (and to allow for a rollback if the link to the third party gets disrupted). Based on our Cacti monitoring, it looks like a great deal of our SQL traffic is reads (~85% R to ~15% W). Just as a note, we haven't hit too much of a bottleneck with our current cloud server, but I'm doing my due diligence ahead of time to make sure we don't run into the wall without a plan.
MySQL user defined rollback procedure Posted: 07 Mar 2013 12:33 PM PST I'm attempting to write my own mini-rollback procedure. I have a table that tracks any updates or deletes to another table using a trigger, and I am attempting to make it possible to restore one or more of these tracked changes through a procedure. However, I'm receiving a syntax error with the following: The syntax error relates to my UPDATE statement; any help or suggestions would be appreciated.
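Since the original procedure wasn't shown, here is only a generic sketch of the shape such a restore usually takes (all names are assumptions); note that in an UPDATE the SET clause must precede WHERE, a common source of syntax errors in hand-written procedures:

```sql
DELIMITER $$
CREATE PROCEDURE restore_changes (IN p_audit_id INT)
BEGIN
    -- Replay one audited change back into the base table.
    UPDATE tracked_table AS t
    JOIN   audit_log     AS a ON a.row_id = t.id
    SET    t.value = a.old_value
    WHERE  a.audit_id = p_audit_id;
END $$
DELIMITER ;
```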
How do I generate a tnsnames.ora file to connect to an Oracle Database? Posted: 07 Mar 2013 08:33 PM PST I have installed Oracle 11g, and I can connect locally. Do I need to generate the tnsnames.ora file myself? If so, where do I place it? If not, how does Oracle generate it for me? If I do need to generate it, what is the appropriate syntax for the file?
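Typically you create or edit the file yourself (the Net Configuration Assistant can also generate it), and it lives in $ORACLE_HOME/network/admin by default. A minimal sketch of an entry, with placeholder alias, host, port, and service values:

```
# Sketch of a tnsnames.ora entry; ORCL, the host, port, and
# service_name are all placeholders.
ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl)
    )
  )
```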
How to drop multiple tables with a common prefix in one query? Posted: 07 Mar 2013 12:11 PM PST I'm using Microsoft SQL Server 2008. My question is: how can I drop multiple tables with a common prefix in one query? The table names look something like:
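There's no single DROP TABLE ... LIKE statement, but dynamic SQL gets close; a sketch for SQL Server 2008 (the prefix is a placeholder, and the backslash escape is only needed because the assumed prefix ends in an underscore):

```sql
DECLARE @sql NVARCHAR(MAX) = N'';

-- Build one DROP TABLE statement per matching table.
SELECT @sql = @sql + N'DROP TABLE ' + QUOTENAME(s.name) + N'.'
                   + QUOTENAME(t.name) + N'; '
FROM sys.tables  AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE t.name LIKE N'prefix\_%' ESCAPE N'\';

-- Inspect @sql before running it:
PRINT @sql;
EXEC sys.sp_executesql @sql;
```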