[how to] I cannot connect from my Microsoft .NET application to a MySQL Server database through the Neor Profile SQL tool |
- I cannot connect from my Microsoft .NET application to a MySQL Server database through the Neor Profile SQL tool
- What are the best monitoring tools available for MySQL?
- Email alerts and Server Side Tracing
- Permissions for a MySQL user which will monitor replication status?
- Schema design: Use of association (aka: bridge/junction) table vs foreign key constraint with composite index containing a non-key field
- Search every column in every table in Sybase Database
- Why isn't my includeThreadNamesAsStatementComment JDBC parameter working?
- SQL Server link to linked server
- Update Table A row if Table B row is changed
- How are DB locks tied to connections and sessions?
- Do large table variables fill up the logs in tempdb?
- Do I need to tweak something for this MySQL setup?
- Upper-case issue in PostgreSQL when importing from a text file
- Update field1.table1 based on compound join between table1 and table2 - Oracle
- Creating indexes with T-SQL scripts vs rebuilding indexes in a maintenance plan
- Moving site to amazon EC2 - better to have 1 large instance or 2 medium instances?
- Are Schemas SQL Standard? [closed]
- TimesTen performance as compared to 10g
- Why would running DBCC FREEPROCCACHE result in different data being returned?
- Delete a variable number of records by a chronological order
- Complex query with multiple normalized fields
- Unique Identifier with Extra Characters Still Matching in Select
- MySQL5.6 on Mac OS X 10.6.8 problems when granting permissions to root
- Table Size Analysis on SQL Server 2000
- EMS SQL manager permission problem
- Write differences between varchar and nvarchar
- How to dump a single stored procedure from a database
- Deriving formulas for input/output
- How do you extend the tablespace when using innodb_file_per_table?
- Window functions cause awful execution plan when called from a view with external parametrized 'where' clause
I cannot connect from my Microsoft .NET application to a MySQL Server database through the Neor Profile SQL tool Posted: 08 Mar 2013 08:42 PM PST As per the documentation, I used port 4040 to connect my .NET web app through Neor Profile, but I am not able to make this work. Can you help me? Siva Kishore reddyrm@gmail.com |
What are the best monitoring tools available for MySQL? Posted: 08 Mar 2013 05:21 PM PST I have set up a Linux MySQL server and would like to monitor it. I found Percona plugins for Nagios and Cacti. Are there any other tools available? |
Email alerts and Server Side Tracing Posted: 08 Mar 2013 04:55 PM PST This question is in regards to SQL 2005 and email alerts on long-running queries. Is the following the best approach (with the least impact on the server) to receiving alerts on long-running queries:
Thanks. |
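A low-impact alternative to a continuous server-side trace is polling sys.dm_exec_requests from a SQL Agent job and mailing only when something runs long; a minimal sketch (the 60-second threshold, Database Mail profile name, and recipient are assumptions):

```sql
-- Run from a SQL Agent job every few minutes (SQL Server 2005+).
DECLARE @body NVARCHAR(MAX);

SELECT @body = COALESCE(@body + CHAR(13), '')
     + 'session ' + CAST(r.session_id AS VARCHAR(10))
     + ' running for ' + CAST(r.total_elapsed_time / 1000 AS VARCHAR(10)) + 's'
FROM sys.dm_exec_requests r
WHERE r.session_id > 50               -- skip system sessions
  AND r.total_elapsed_time > 60000;   -- longer than 60 seconds (milliseconds)

IF @body IS NOT NULL
    EXEC msdb.dbo.sp_send_dbmail
         @profile_name = 'AlertsProfile',    -- assumed Database Mail profile
         @recipients   = 'dba@example.com',  -- assumed recipient
         @subject      = 'Long-running queries detected',
         @body         = @body;
```

Polling a DMV this way is far cheaper than tracing every statement, at the cost of only catching queries still running when the job fires.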
Permissions for a MySQL user which will monitor replication status? Posted: 08 Mar 2013 05:49 PM PST I'd like to use a script to monitor the replication status of a MySQL database like in this bash script: https://gist.github.com/wesdeboer/1791632 I'd like to create a user which will only be used to query the status of the replication. So basically this user would just need to run the command: What is the minimum set of permissions I need to grant a user to allow this? (Is this even something which can be |
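For a monitoring user that only runs SHOW SLAVE STATUS, REPLICATION CLIENT is the documented minimum privilege; a sketch (user name, host, and password are placeholders):

```sql
-- REPLICATION CLIENT covers SHOW SLAVE STATUS and SHOW MASTER STATUS.
CREATE USER 'replmon'@'localhost' IDENTIFIED BY 'choose-a-password';
GRANT REPLICATION CLIENT ON *.* TO 'replmon'@'localhost';

-- The monitoring script then only needs:
-- mysql -u replmon -p -e 'SHOW SLAVE STATUS\G'
```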
Schema design: Use of association (aka: bridge/junction) table vs foreign key constraint with composite index containing a non-key field Posted: 08 Mar 2013 05:27 PM PST This is an inventory database for IT assets. The models used are trimmed in order to focus on the problem at hand. Using SQL Server 2008. Thanks for taking the time to read and for any input you can provide. My design includes a
Design 1
My first design uses an index on The I put check constraints on the The issue I have with this design is I'm not sure if the relationship with
Design 2
In this design I thought I would use a bridge/association/junction table to decide which network statuses are valid for a device. It looks like this: The Due to this design's granularity, I could theoretically allow any mix of statuses for devices with this design, but I wanted to control it, so I wrote some triggers to only insert the correct mix of statuses depending on whether the device is network capable. Triggers as follows: I used the following CHECK constraint on the This design eliminates the need to propagate |
Search every column in every table in Sybase Database Posted: 08 Mar 2013 04:19 PM PST I've been tasked with creating an application that pulls data from our Micros Point of Sale system. The POS is using a Sybase database running on one of our servers. The database schema is very convoluted. I've been able to figure out most of the schema to gain access to the data I need; however, there are a few things still left to find. I know what they are called in the actual POS, but I can't seem to find them anywhere in the database (although I haven't actually looked through all 200+ tables). I'm wondering if there's any way to run a query to search for a specific string in all the columns in all the tables. Btw, I'm using the "Interactive SQL" application that comes with the Sybase Database Client software to connect to the database. Any help you can offer with this is highly appreciated. Thanks. |
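There is no built-in "search everything" command, but the system catalog can generate one query per text column for you. Assuming the Micros back end is Sybase SQL Anywhere (the catalog view names differ on Sybase ASE), a sketch:

```sql
-- Emits one SELECT per character-typed column; run the generated
-- statements in Interactive SQL and note which ones return rows.
-- Replace SEARCHME with the string you are hunting for.
SELECT 'SELECT ''' || t.table_name || '.' || c.column_name
    || ''' FROM "' || t.table_name
    || '" WHERE "' || c.column_name || '" LIKE ''%SEARCHME%'';'
FROM SYS.SYSTABLE t
JOIN SYS.SYSCOLUMN c ON c.table_id  = t.table_id
JOIN SYS.SYSDOMAIN d ON d.domain_id = c.domain_id
WHERE t.table_type = 'BASE'
  AND d.domain_name IN ('char', 'varchar', 'long varchar');
```

Expect this to be slow on 200+ tables, since every generated statement is a full scan.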
Why isn't my includeThreadNamesAsStatementComment JDBC parameter working? Posted: 08 Mar 2013 03:31 PM PST The JDBC API lists an includeThreadNamesAsStatementComment parameter that seems to indicate it will prefix all queries with the Java thread ID. I thought this would be useful in tying DB activity back to application logs. I added it to my resource URLs, and it seemed to only be working part of the time. Curiously, it was only the application's heartbeat thread that periodically pings the database as part of a health check. None of the "real" queries were getting tagged. Why isn't this working for all queries? |
SQL Server link to linked server Posted: 08 Mar 2013 03:21 PM PST My client has a Visual FoxPro database that needs to co-mingle with SQL Server data (SELECT only, no updates or deletes). After quite a bit of searching I found the VFP OLEDB provider doesn't work with 64-bit SQL, so I installed an instance of 32-bit SQL Express. I was able to successfully create linked server to VFP. The 32-bit instance is installed on a different server than the 64-bit, but I'm using SSMS on the server with the 32-bit instance and can connect via SSMS fine to the 64-bit instance. Now I need to link my main 64-bit SQL instance to the 32-bit instance, but I've not had luck. I've tried both the SQL Native Client and the OLE DB Provider for SQL Server, but I keep getting "Server is not found or not accessible." Remote connections are enabled, I've enabled TCP/IP, and tried all manner of server name combinations (.\SQLEXPRESS32, localhost\SQLEXPRESS32, (local)\SQLEXPRESS32, MyServerName\SQLEXPRESS32). At this point I'm not sure if this is a linked server configuration issue, or a new install of SQL Server configuration issue, so I'm not entirely sure if I'm asking how to link my server or how to ensure I can access my server. Or maybe I'm asking the best way for my 64-bit SQL Server to read VFP data. Any suggestions? |
Update Table A row if Table B row is changed Posted: 08 Mar 2013 04:28 PM PST Is it at all possible to tell the database to execute a little script if a row is changed in a table? Scenario: The database constantly monitors Table B to see if rows are inserted, updated or deleted. If a row is inserted, updated or deleted: update the row in Table A with the referring identifier. Setup: The point of this is not to do a subquery every time the SalesSum is needed in the application, like: But instead "just" doing it easily, nicely and cleanly: Or is there anything I might have overlooked or am wrong about? EDIT: Many have described the use of inner joins for these sub-calculations. But when "trying" this, I get a couple of errors: First, I have to include each column in the GROUP BY clause unless it is defined via an aggregate function. But when trying to do this, it errors when an NTEXT datatype column is in the clause, because it cannot be compared. Here is an example of the original query: The "new" query, with another inner join, could be something like: As written, this is only an example, and therefore there can be a large number of selected columns - especially from the Orders table. All of these need to be in the GROUP BY clause. But this cannot be done with the NTEXT datatypes. So what do I do? |
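One common way to keep a denormalized total current, which also sidesteps the GROUP BY/NTEXT problem because the aggregate runs only over the detail table, is an AFTER trigger on Table B; a sketch in which all table and column names (Orders, OrderLines, OrderId, Amount, SalesSum) are assumptions for illustration:

```sql
-- Recomputes the cached total for every order touched by the statement.
CREATE TRIGGER trg_OrderLines_SalesSum
ON OrderLines
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE o
    SET o.SalesSum = ISNULL(t.Total, 0)
    FROM Orders o
    LEFT JOIN (SELECT OrderId, SUM(Amount) AS Total
               FROM OrderLines
               GROUP BY OrderId) t ON t.OrderId = o.OrderId
    WHERE o.OrderId IN (SELECT OrderId FROM inserted
                        UNION
                        SELECT OrderId FROM deleted);
END;
```

With the total maintained by the trigger, the application reads Orders.SalesSum directly and no NTEXT column ever has to appear in a GROUP BY.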
How are DB locks tied to connections and sessions? Posted: 08 Mar 2013 02:26 PM PST I've heard of page- and row-level locks, but never heard them used with regards to sessions and connections. Today, our (MySQL) DBA was talking about row locks and implied that their scope can be set at the connection or session level. Is this true, or did I misunderstand him? I didn't realize that locks could be set when a connection is made (or when a session is started) and then released when that connection/session ends. If this is true, what benefits does this have over row-/page-level locks? Thanks in advance. |
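For InnoDB specifically, row locks are scoped to the transaction rather than to the connection or session as such: they are acquired as statements execute and released at COMMIT or ROLLBACK. A sketch against an assumed accounts table:

```sql
-- Session 1: the row lock is taken inside a transaction and held
-- until COMMIT/ROLLBACK, not for the life of the connection.
START TRANSACTION;
SELECT * FROM accounts WHERE id = 7 FOR UPDATE;  -- locks row id = 7

-- Session 2 now blocks on the same row:
--   UPDATE accounts SET balance = 0 WHERE id = 7;  -- waits

-- Back in session 1:
COMMIT;  -- lock released; session 2's UPDATE proceeds
```

The connection/session angle the DBA likely meant: if a connection dies or a session ends mid-transaction, the server rolls the transaction back and its locks go with it.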
Do large table variables fill up the logs in tempdb? Posted: 08 Mar 2013 04:28 PM PST I'm running into an issue with a DBA who claims that table variables reside in the ldf of tempdb, and that when a large amount of data is loaded into the table variable, tempdb's logs fill up. The DBA's solution is to use a temporary table instead of a table variable. While I see the justification for using the temp table in the case of large data sets, I don't understand how a temp table is created and stored in the mdf of tempdb whereas the table variable is stored in the ldf. Can someone please shed some light? |
Do I need to tweak something for this MySQL setup? Posted: 08 Mar 2013 03:10 PM PST I'm running a fairly popular website with the following stats and setup. Stats: ~250,000 pageviews per day, rendering ~47 queries/second to MySQL. Setup: the MySQL instance is running alone on a Linode 512 on Debian 6.0. Nothing else runs there, for performance reasons. Using MyISAM and the ini for "large" MySQL. I see the following figures right now from mysqlreport. Please advise. The site performance is quite OK, but I'm not sure what the 256 MB key buffer indicates and whether that is tenable in the long run. I'm not very good at understanding the inner workings of MySQL. Also, I can't figure out why the QC is being hit so little, i.e. why isn't it using all of the QC. Btw, I tried posting this question in the DBA forums of Stack Exchange prior to here, but it's like a grave in the other forums. Appreciate the help. |
Upper-case issue in PostgreSQL when importing from a text file Posted: 08 Mar 2013 02:01 PM PST I have a Ruby on Rails app, and I have a cron task to download a text file and import it to the database using There is a header field called EANHotelID in this text file. My import fails because of this particular field, and if I manually rename it to, for example, hotel_id, then the import goes OK. Is there any way to overcome this issue, because I will be unable to rename it manually every time? In my database schema the column is called ean_hotel_id. UPDATE Error Description: Text file example Table definition For COPY I use the gem postgres-copy (https://github.com/diogob/postgres-copy). In my task I have the code to map table column names in the file to the database |
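The usual cause here is identifier case folding: unquoted names fold to lower case in PostgreSQL, so a header of EANHotelID will never match a column named ean_hotel_id. Giving COPY an explicit column list and telling it to skip the header line sidesteps the file's spelling entirely; a sketch (table name, column list, and tab delimiter are assumptions):

```sql
-- HEADER true discards the first line, so EANHotelID in the file
-- no longer has to match any column name.
COPY hotels (ean_hotel_id, name, city)
FROM '/tmp/hotels.txt'
WITH (FORMAT csv, DELIMITER E'\t', HEADER true);
```

If you stay with postgres-copy's column mapping instead, the same rule applies: map the file's EANHotelID header to the lower-case ean_hotel_id column explicitly.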
Update field1.table1 based on compound join between table1 and table2 - Oracle Posted: 08 Mar 2013 02:07 PM PST I want to run update statement on Oracle 11g, but it fails with the following error:
Here's what I'm trying to do in pseudocode: Is there a way to do this in Oracle SQL? My code looks like this: |
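Without the exact error text, the usual fix for updating one table from a compound join is either a correlated UPDATE or a MERGE; a MERGE sketch in which every table and column name is an assumption standing in for the pseudocode above:

```sql
-- Updates table1.field1 from table2 wherever the two-column join matches.
MERGE INTO table1 t1
USING table2 t2
ON (t1.key_a = t2.key_a AND t1.key_b = t2.key_b)
WHEN MATCHED THEN
  UPDATE SET t1.field1 = t2.field1;
```

MERGE avoids the ORA-01427 "single-row subquery returns more than one row" class of errors that a naive `SET field1 = (SELECT ...)` often hits, provided the join itself matches at most one table2 row per table1 row.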
Creating indexes with T-SQL scripts vs rebuilding indexes in a maintenance plan Posted: 08 Mar 2013 02:15 PM PST I'm using SQL Server 2008 and I run several (15) scripts each day to bulk insert into tables. Each script drops the table at the beginning. I create indexes in the scripts. Some of the scripts reference tables that were created by previous scripts. I want to improve the bulk insert operations but also want to keep the indexes to improve query performance. Does it make sense to create the indexes in a maintenance plan rather than in the scripts? Will it improve bulk insert performance if I create the indexes in the maintenance plan after all the scripts run? Thanks. |
Moving site to amazon EC2 - better to have 1 large instance or 2 medium instances? Posted: 08 Mar 2013 11:40 AM PST I currently have a website which is running on windows 2008 r2 standard along with sql 2012 express. All on the same box. The site gets around 700-900 visits a day,although that is slowly going up (yea us!). The database itself is around 2gb in size and makes use of full-text search. The bulk of the full text search is done against a product table which currently has about 36,000 records in it and that's slowly growing. We're leaning towards moving the site over to amazon EC2. I have 2 basic questions
I don't want the site to run any slower than it's running right now. Faster would always be nice. Budget is around $250/mo, but we're willing to go higher if we see performance gains. Using either 1 large or 2 medium instances looks like it'll cost around $170/mo, and that includes a Web license for SQL, which would be an upgrade for us as we're running Express right now. Thanks |
Are Schemas SQL Standard? [closed] Posted: 08 Mar 2013 10:43 AM PST Well, I have evidence that it's not supported. By schema I do not mean "the set of data structures". Definition of schema (for this question):
SQLite3: Really? MySQL: Segmentation fault (joke - not supported... right?) Oracle, Postgres, SQL Server: I think it is supported. Well, I have evidence that the answer is NO, but I need to find where it says that schemas are not standard. It seems to be, on first thought. Thanks! |
TimesTen performance as compared to 10g Posted: 08 Mar 2013 10:59 AM PST I am new here, so please be nice... I have a question regarding TimesTen & Oracle. I have attempted an experiment with both pieces of software, and realize that the query speed on a table is slower on TimesTen than on Oracle. The experiment setup was as follows:
Table definition:
Query: The query speed for different ranges is consistently slower on TimesTen compared to Oracle. No indexes were built on the search key. Is there a particular reason for this? Other notes: I ran the same experiment but built an index on the search key on both TimesTen and 10g, and the difference is starkly in TimesTen's favour. |
Why would running DBCC FREEPROCCACHE result in different data being returned? Posted: 08 Mar 2013 10:32 AM PST I experienced a situation where a query run directly in SSMS was giving a different result set than the same query run from code on a web server, using the same database. After spending a lot of time ensuring the queries were identical, I ran Am I missing something here? Edit: By different results, I mean the same number of rows were returned with different numeric values. The query that was giving incorrect results was using this TVF, which is what generates the data that was incorrect: |
Delete a variable number of records by a chronological order Posted: 08 Mar 2013 10:12 AM PST Our developers have been using a cursor within a stored procedure to delete old password history. The number of records to delete is being passed by variable. I would like to replace this with a set based approach. I can't do a simple top statement because the number of records to delete is a variable. I can't use a top with a variable without dynamic sql and by policy we don't allow dynamic sql in production. I'm considering this approach below but it makes me nervous as I know that Microsoft is planning on changing the way ROWCOUNT affects return results. By putting the delete targets in a subquery I should be ok with future SQL versions, but I'm still wondering if there is a better way to delete a variable number of records by a chronological order. |
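For what it's worth, SQL Server 2005 and later accept a variable directly in TOP, so neither dynamic SQL nor SET ROWCOUNT is needed, and the ORDER BY inside the CTE makes the chronological target selection explicit; a sketch with assumed table and column names:

```sql
-- Delete the @n oldest password-history rows, no dynamic SQL required.
DECLARE @n INT;
SET @n = 100;   -- number of rows to remove (assumed)

;WITH OldRows AS (
    SELECT TOP (@n) *
    FROM dbo.PasswordHistory       -- assumed table name
    ORDER BY CreatedDate ASC       -- oldest first (assumed column)
)
DELETE FROM OldRows;
```

Deleting through the CTE is also immune to the planned SET ROWCOUNT behavior changes, since ROWCOUNT is never involved.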
Complex query with multiple normalized fields Posted: 08 Mar 2013 03:42 PM PST We have a fairly simple table structure, but with a LOT of fields per table (talking 40+). This data is initially produced in plain-text, user-readable tables, but then it is translated into higher-performance, easier-to-query tables before being installed for use in production. What we do is, wherever possible and reasonable, we translate certain fields into enumerated values and keep track of the enumerations in a MasterEnum table. There are usually 20-25 enumerated fields out of 40 or so. Sample table structure:

Plain text version:

| PartNumber | Manufacturer | SomeData   | SomeMoreData | SomeTextData     ...
------------------------------------------------------------------------------
| 1x9kdah    | GizmoCorp    | ThisIsData | OtherData    | ThisStaysText    ...
| 8xcjkzh    | GadgetInc    | MoreData   | OtherData2   | ThisTooStaysText ...

Target table sample structure:

| PartNumber | Manufacturer | SomeData | SomeMoreData | SomeTextData     ...
----------------------------------------------------------------------------
| 1x9kdah    | 1            | 1        | 1            | ThisStaysText    ...
| 8xcjkzh    | 2            | 2        | 2            | ThisTooStaysText ...

Master Enumeration Table Structure:

| FieldName    | InputText  | ValueCode |
-----------------------------------------
| Manufacturer | GizmoCorp  | 1         |
| Manufacturer | GadgetInc  | 2         |
| SomeData     | ThisIsData | 1         |
| SomeData     | MoreData   | 2         |
| SomeMoreData | OtherData  | 1         |
| SomeMoreData | OtherData2 | 2         |

We have a means of doing this translation that works and works well; however, it's a little on the slow side since all the processing is done in Java via Spring/Hibernate. My question is: Is there a way to write a single query that would accomplish all the above translations? (Note that we have an excellent way of keeping track of our field definitions programmatically, so generating complex SQL queries on the fly is not an issue.)
If it is not possible to do it in a single query, how would I structure queries to iterate over the individual fields and make sure that, as the translations happen, the data inserted into the new table remains associated with the correct rows? Note that it is safe to assume the target table is always empty at the beginning of the process. |
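Based on the sample tables above, a single INSERT ... SELECT that joins MasterEnum once per enumerated field can do the whole translation in one pass, and since the field list is known programmatically, the repeated join pattern is easy to generate. A sketch (the source and target table names are assumptions):

```sql
-- One join against MasterEnum per enumerated field; text fields pass through.
INSERT INTO TargetTable (PartNumber, Manufacturer, SomeData, SomeMoreData, SomeTextData)
SELECT p.PartNumber,
       m1.ValueCode,
       m2.ValueCode,
       m3.ValueCode,
       p.SomeTextData
FROM PlainTextTable p
JOIN MasterEnum m1 ON m1.FieldName = 'Manufacturer' AND m1.InputText = p.Manufacturer
JOIN MasterEnum m2 ON m2.FieldName = 'SomeData'     AND m2.InputText = p.SomeData
JOIN MasterEnum m3 ON m3.FieldName = 'SomeMoreData' AND m3.InputText = p.SomeMoreData;
```

With 20-25 enumerated fields this means 20-25 self-joins to MasterEnum; an index on (FieldName, InputText) keeps each lookup cheap, and inner joins will silently drop rows whose text has no enumeration yet (use LEFT JOINs if you need to detect those instead).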
Unique Identifier with Extra Characters Still Matching in Select Posted: 08 Mar 2013 10:41 AM PST We are using SQL Server 2012 with a unique identifier and we've noticed that when doing selects with additional characters added onto the end (so not 36 chars) it still returns a match to a UUID. For example: select * from some_table where uuid = '7DA26ECB-D599-4469-91D4-F9136EC0B4E8' returns the row with uuid 7DA26ECB-D599-4469-91D4-F9136EC0B4E8 but if you run select * from some_table where uuid = '7DA26ECB-D599-4469-91D4-F9136EC0B4E8EXTRACHARS' it also returns the row with the uuid 7DA26ECB-D599-4469-91D4-F9136EC0B4E8 SQL Server seems to ignore all characters beyond the 36 when doing its selects. Is this a bug/feature or something that can configured? It's not a massive issue as we have validation on the front end for the length but it doesn't seem correct behaviour to me. Thanks |
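This matches SQL Server's documented string-to-uniqueidentifier conversion: the literal is converted to a uniqueidentifier before the comparison, and the conversion parses only the leading GUID and ignores trailing characters. It can be seen directly:

```sql
-- The conversion stops after the first 36 characters of the string.
SELECT CONVERT(uniqueidentifier,
               '7DA26ECB-D599-4469-91D4-F9136EC0B4E8EXTRACHARS') AS parsed;
-- parsed = 7DA26ECB-D599-4469-91D4-F9136EC0B4E8
```

So it is documented behaviour rather than a configurable option; if strict matching matters, compare against the column cast to a string, or keep the front-end length validation you already have.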
MySQL 5.6 on Mac OS X 10.6.8: problems when granting permissions to root Posted: 08 Mar 2013 02:55 PM PST I'm having serious problems with a MySQL 5.6 instance on a Mac server. We had to upgrade MySQL to a newer version, but it turned out to be a bad idea, as we lost control of it. We had a backup of the /data directory and the my.cnf file as well. However, when setting an init_file to restore the previous password and its permissions. So we created a text file with this content: We have double-checked the init_file permissions (we even gave it a chmod 777 to make sure it worked), but something is not working. When we run The problem may definitely come from the [Error] lines, but as we haven't used --skip-locking, it's just confusing. If we try to log into mysql using |
Table Size Analysis on SQL Server 2000 Posted: 08 Mar 2013 02:37 PM PST Our SQL Server 2000 database The results were as follows:
|
EMS SQL Manager permission problem Posted: 08 Mar 2013 12:40 PM PST I have a strange problem with EMS SQL Manager. I'm using MySQL 5.5 on Linux servers. One of my developers needs to edit a stored procedure, so I granted him: When he clicked edit stored procedure in EMS SQL Manager, what he got was: I found out that EMS SQL Manager asks for the SELECT privilege on some tables, so I needed to grant this user some extra permissions. So my question is: WHY does EMS SQL Manager need those privileges? I can create and drop procedures in the mysql command-line client without problems. |
Write differences between varchar and nvarchar Posted: 08 Mar 2013 12:25 PM PST Currently in our SQL Server 2012 database, we're using My question is are there any differences in how SQL Server writes to Edit: |
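On the storage side, the core write difference is easy to demonstrate: NVARCHAR stores UTF-16, two bytes per character for ASCII data, so rows, pages, and log records roughly double in size for the same text compared with VARCHAR:

```sql
-- DATALENGTH returns bytes, not characters.
SELECT DATALENGTH(CAST('hello'  AS VARCHAR(10)))  AS varchar_bytes,   -- 5
       DATALENGTH(CAST(N'hello' AS NVARCHAR(10))) AS nvarchar_bytes;  -- 10
```

The trade-off is that NVARCHAR can hold any Unicode text without a code-page dependency, while VARCHAR is limited to the column's collation code page.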
how to dump a single stored procedure from a database Posted: 08 Mar 2013 12:34 PM PST I need to create a dump file that contains a single stored procedure from a database. Not all routines from that database |
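mysqldump's --routines switch dumps every routine in the database, so for a single procedure it is simpler to pull the definition directly; a sketch with placeholder database and procedure names:

```sql
-- Prints the full CREATE statement for one routine.
SHOW CREATE PROCEDURE my_db.my_proc;

-- Or pull the body from the data dictionary (may be NULL without
-- sufficient privileges on the routine):
SELECT ROUTINE_DEFINITION
FROM information_schema.ROUTINES
WHERE ROUTINE_SCHEMA = 'my_db'
  AND ROUTINE_NAME   = 'my_proc'
  AND ROUTINE_TYPE   = 'PROCEDURE';
```

Redirect either query's output to a file from the mysql client to get your one-procedure dump.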
Deriving formulas for input/output Posted: 08 Mar 2013 06:34 PM PST I'm currently enrolled in a DBS class and am having a problem with an assignment. I've searched around and have been unable to understand what it is I'm meant to be doing with this derivation formula.
I've been using some software that was given with the assignment, and it also asks what the maximum number of blocks allowed is, which is not given by the brief above. I'm not really sure how to derive a formula for this. I've assumed that because there are 3 records per block, 4 blocks are required, and that a random heap file uses 1 disk I/O per write/read. If this is a larger topic than is worth explaining, a link to a few reliable pages would also be helpful. |
How do you extend the tablespace when using innodb_file_per_table? Posted: 08 Mar 2013 02:34 PM PST With innodb_file_per_table off, you can create multiple tablespaces on multiple devices if necessary to manage growth, balance I/O, etc. With the option on, how do you control growth of the files? Do they autoextend? And can you set a maximum and then extend the tablespace for a given table onto another filesystem if necessary? |
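With innodb_file_per_table enabled, each table's .ibd file autoextends on its own, and the innodb_data_file_path setting governs only the shared system tablespace (ibdata1), which still holds the undo logs and data dictionary. A minimal my.cnf sketch (file names and sizes are assumptions):

```ini
[mysqld]
innodb_file_per_table = 1
# Applies to the shared tablespace only, not to per-table .ibd files:
innodb_data_file_path = ibdata1:1G:autoextend:max:10G
```

Per-table files cannot be individually capped; placing one on another filesystem traditionally means symlinking the database directory, while MySQL 5.6+ adds CREATE TABLE ... DATA DIRECTORY and transportable tablespaces for this.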
Window functions cause awful execution plan when called from a view with external parametrized 'where' clause Posted: 08 Mar 2013 02:34 PM PST I had this issue a long time ago; I found a workaround which suited me and forgot about it. But now there's that question on SO, so I'm willing to bring this problem up. There's a view that joins a few tables in a very straightforward way (orders + order lines). When queried without a This returns about 10 records out of 5m. An important thing: the view contains a window function, Now, if this view is queried with literal parameters in the query string, exactly as shown above, it returns the rows instantly. The execution plan is fine:
However, when the view is called in a parametrized way, things get nasty:
This happens in all cases when parameters are involved. It can be SSMS: It can be an ODBC client, such as Excel: Or it can be any other client that uses parameters and not SQL concatenation. If the window function is removed from the view, it runs perfectly quickly, regardless of whether or not it's queried with parameters. My workaround was to remove the offending function and reapply it at a later stage. But what gives? Is it genuinely a bug in how SQL Server 2008 handles window functions? |
You are subscribed to email updates from Recent Questions - Database Administrators - Stack Exchange |