[how to] Tool to view intermediate query results
- Tool to view intermediate query results
- Range query from month and year
- Cannot rebuild index, but there's no reason why not?
- mysqlslap chokes on strings that contain ";" delimiter?
- Do you know any data model archive website like this one? [closed]
- Writing with multiple psql processes to database seems to damage data
- When to use MOLAP or Tabular
- pros/cons of different ways to store whether a record is one of two options?
- Transfer data from DB2 database to Oracle database
- How can I query data from a linked server, and pass it parameters to filter by?
- TokuDB not faster than MySQL
- sql server db log file issues
- Can I get notification when an event occurs?
- How should permissions for anonymous users be modelled?
- Table corruption: How to perform Innodb Checksum checks in MySQL 5.5 for Windows?
- Converting Non Unicode string to Unicode string SSIS
- Is replication from SQL Server 2008 to PostgreSql possible?
- MySQL high CPU usage (MyISAM table indexes)
- Total Memory used by SQL Server (64 bit)
- MySQL slap with custom query
- Overview of how MongoDB uses its various threads
- upgrade mysql 5.1 to 5.5 on Ubuntu 11.10
- MySQL information_schema doesn't update
- How to successfully run a batch file in an SQL Agent job?
- MySQL partitioned tables?
- Experience using ScaleArc in test or production?
- MySQL user defined rollback procedure
- Enforce a hard limit on write execution time during Amazon RDS write stalls
- Connect to SQL Server Management Studio over VPN (Hamachi)
Tool to view intermediate query results Posted: 06 May 2013 08:01 PM PDT I am not sure if this question belongs here, but I am using SQL Server 2008 Express Edition and I've spent two days on a problem where there seems to be an unintended cross join when running a query. I would like to see the result set returned by each join operation visually. Is there any tool that can show intermediate query results? I couldn't find one on Google.
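If no dedicated tool turns up, one low-tech way to see the set returned per join, assuming a hypothetical query over made-up tables Orders and Customers, is to materialise each join stage into a temp table and count it before adding the next join:

    -- Sketch only: table and column names are invented to illustrate the technique.
    SELECT o.OrderId, o.CustomerId
    INTO   #stage1
    FROM   dbo.Orders AS o;

    SELECT COUNT(*) AS stage1_rows FROM #stage1;   -- baseline row count

    SELECT s.OrderId, c.CustomerName
    INTO   #stage2
    FROM   #stage1 AS s
    JOIN   dbo.Customers AS c ON c.CustomerId = s.CustomerId;

    SELECT COUNT(*) AS stage2_rows FROM #stage2;   -- a sudden jump here points at this join

    DROP TABLE #stage1;
    DROP TABLE #stage2;

SSMS's "Include Actual Execution Plan" option also reports actual row counts per join operator, which often exposes an accidental cross join without rewriting anything.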
Range query from month and year Posted: 06 May 2013 07:53 PM PDT I'm a newbie in MySQL. I have a table with separate integer month and year columns, and I want a query that selects rows within a month-and-year range. The query I have produces no error but returns zero results. Can anyone help me fix this query? NB: the month and year columns are integers. I'm very appreciative of your answers.
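Since the full table and query are not reproduced above, here is only a sketch of one common fix, assuming integer columns named `month` and `year` in a table `t`: collapse the pair into a single comparable number so BETWEEN behaves correctly across year boundaries.

    -- Sketch: the table name and the example range (Nov 2012 .. Apr 2013) are assumptions.
    SELECT *
    FROM   t
    WHERE  (`year` * 100 + `month`)
           BETWEEN (2012 * 100 + 11)
               AND (2013 * 100 + 4);

Comparing `year` and `month` with two independent BETWEEN conditions is the usual cause of the zero-result symptom, because it excludes months that are valid only in the first or last year of the range.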
Cannot rebuild index, but there's no reason why not? Posted: 06 May 2013 08:49 PM PDT I've created a process whereby I am able to rebuild only the indexes that need rebuilding (the process takes an hour and a half if I rebuild them all), and while it works beautifully, it gets stuck on one particular index and I see no reason why it should. It fails with the following message:
However, when I run the query based on a suggestion by this chap, shown below, I get no results. On top of that, a manual inspection of the index in question shows no text, ntext, image, xml, varchar(MAX), nvarchar(MAX) or varbinary(MAX) columns. Could there be something I'm missing here? For the record, this is a clustered index.
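If the elided error is the usual "online index operations cannot be performed because the table contains LOB columns" case (an assumption, since the message itself is not shown), remember that for a clustered index the whole table counts, not just the key columns. A hedged check, with a placeholder table name:

    -- Sketch: replace dbo.YourTable with the table that owns the failing clustered index.
    SELECT c.name AS column_name, t.name AS type_name, c.max_length
    FROM   sys.columns AS c
    JOIN   sys.types   AS t ON t.user_type_id = c.user_type_id
    WHERE  c.object_id = OBJECT_ID(N'dbo.YourTable')
      AND (t.name IN (N'text', N'ntext', N'image', N'xml')
           OR (t.name IN (N'varchar', N'nvarchar', N'varbinary') AND c.max_length = -1));

If a LOB column shows up, rebuilding that one index with ONLINE = OFF (or handling it in a separate non-online pass) is the usual workaround.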
mysqlslap chokes on strings that contain ";" delimiter? Posted: 06 May 2013 06:31 PM PDT I'm having a problem passing pre-existing files full of SQL statements into mysqlslap. For example, I have a file of create statements and a second file of queries, and when I run mysqlslap against them I get an error, which is consistent with MySQL terminating the statement at the ";" that appears in the middle of a string. What can I do to make mysqlslap accept these files?
Do you know any data model archive website like this one? [closed] Posted: 06 May 2013 03:04 PM PDT I found this website with many data models. It's not bad actually, but just out of curiosity, do you know any other sites like it? http://www.databaseanswers.org/data_models/index.htm Thanks
Writing with multiple psql processes to database seems to damage data Posted: 06 May 2013 05:31 PM PDT I have a couple of terabytes of CSV data that I am trying to import into a PostgreSQL 8.4 database (on a RedHat 6.2 server), whose data directory is initialized on a multipath hardware RAID. There are four folders of CSV data that need to be imported, and the import script acts according to what it finds in those directories, so right now it's simplest for me to run the import script separately for each folder. I have run these scripts serially on a Debian server (without multipath) before, waiting for each script to finish, and that worked. However, when I had to re-import later on this RedHat system, I decided to fire up four import processes at once, one per folder. Afterwards, listing the data directory fails (the ls access errors are input/output errors), and the postgres data directory that should be there, with the expected ownership, can no longer be read. What's going on here? I was assured the multipath drivers and mounts for the RAID volume in question are working, so I don't think it's the hardware at this point. For reference, each script adds about 105,000 points every couple of seconds to a table in the database. Here's the import script code, followed by sample script output:
When to use MOLAP or Tabular Posted: 06 May 2013 02:51 PM PDT What choices and factors do I need to consider when deciding whether my data mart database should be fronted by SSAS MOLAP or Tabular? Is there a guideline for which contexts suit MOLAP and which suit Tabular?
pros/cons of different ways to store whether a record is one of two options? Posted: 06 May 2013 02:31 PM PDT I am trying to store whether an address is a Work address or a Home address. There will never be another type of address. I'm wondering what the pros/cons are of the different ways to store this, and whether there is an accepted 'style' for this type of situation that is considered best practice. Would it be better to just have a single text column holding the value, a single boolean/bit flag, or a separate address-type lookup table referenced by a foreign key? The third option seems a little cleaner; however, needing to join every time seems inefficient.
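For concreteness, a MySQL-flavoured sketch of the shapes these options usually take; the table and column names are illustrative, not taken from the original post:

    -- Option A: a constrained text/enum column on the address row
    CREATE TABLE address_a (
        id        INT PRIMARY KEY,
        line1     VARCHAR(100),
        addr_type ENUM('HOME', 'WORK') NOT NULL
    );

    -- Option B: a boolean flag (TRUE = work, FALSE = home)
    CREATE TABLE address_b (
        id      INT PRIMARY KEY,
        line1   VARCHAR(100),
        is_work BOOLEAN NOT NULL
    );

    -- Option C: a lookup table referenced by a foreign key
    CREATE TABLE address_type (
        id   INT PRIMARY KEY,
        name VARCHAR(10) NOT NULL    -- 'Home', 'Work'
    );
    CREATE TABLE address_c (
        id      INT PRIMARY KEY,
        line1   VARCHAR(100),
        type_id INT NOT NULL,
        FOREIGN KEY (type_id) REFERENCES address_type (id)
    );

Note that the join in option C is only needed when you want the human-readable label; filtering on type_id alone is as cheap as filtering on the flag.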
Transfer data from DB2 database to Oracle database Posted: 06 May 2013 02:55 PM PDT I want to transfer data from an old DB2 system to a new Oracle database. How should I go about doing this?
How can I query data from a linked server, and pass it parameters to filter by? Posted: 06 May 2013 03:09 PM PDT I have a really big query that needs to be run on multiple databases, with the results appended to a temp table and returned. The basic syntax looks something like this: the query runs quickly if run locally on the individual servers, however it takes a long time to run from a linked server using the 4-part names like above. The problem appears to be that it queries the linked server for the unfiltered result set first, and only then joins it to the local table. If I hardcode the IDs to filter the result set on the linked server, it runs quickly, in just a few seconds. Is there a way to run this query so it filters the result set from the linked server by the local list of IDs first? Some things to note:
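One common way to push the filter across, sketched below with placeholder names (LinkedSrv, RemoteDb, dbo.BigTable and the #results table are all assumptions), is to build the remote statement as a string and execute it on the linked server with EXEC ... AT, so only the filtered rows come back over the wire:

    -- Requires RPC OUT to be enabled on the linked server definition.
    CREATE TABLE #results (Id INT, Col1 INT, Col2 INT);   -- shape is a placeholder

    DECLARE @ids NVARCHAR(MAX) = N'1,2,3';                -- list of IDs gathered locally
    DECLARE @sql NVARCHAR(MAX) =
        N'SELECT Id, Col1, Col2
          FROM   RemoteDb.dbo.BigTable
          WHERE  Id IN (' + @ids + N');';

    INSERT INTO #results (Id, Col1, Col2)
    EXEC (@sql) AT LinkedSrv;                             -- the WHERE clause runs remotely

OPENQUERY only accepts a literal string, which is why the dynamic form is the usual route when the filter values are only known at run time.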
Posted: 06 May 2013 02:32 PM PDT i have converted a mysql DB with 80.000.000 entries to TOKUDB. Now when i make a select count(id) from xxx where active=1 it takes 90% of the time of the normal mysql request. What do i have to further optimize, that it is faster ? Best regards, Andreas The table definition: I have put the code here: http://pastebin.com/yD1gi8ph this is the table. |
sql server db log file issues Posted: 06 May 2013 11:24 AM PDT We're consolidating data from a bunch of databases into four reporting databases each night. Because the entire dataset is imported each night, we do not need to be able to restore the data to a point in time, so the databases are in simple recovery mode. Each time we run the import, however, our database ldf files grow to absurdly large sizes (50+ GB). Is there a way to turn off the logging altogether, or to get SQL Server to clear those log files sooner? I'm guessing no for clearing, as the log_reuse_wait_desc is ACTIVE_TRANSACTION.
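Logging itself cannot be switched off, but in simple recovery the log becomes reusable as soon as the open transaction commits, so the usual fix is to load in smaller committed batches and confirm what is pinning the log; a sketch with a placeholder database name:

    -- ACTIVE_TRANSACTION here means one long-running open transaction is holding the whole log.
    SELECT name, log_reuse_wait_desc
    FROM   sys.databases
    WHERE  name = N'ReportingDb1';

    -- Once the import commits in batches, the oversized file can be shrunk back one time.
    -- The logical log file name below is an assumption; check sys.master_files for the real one.
    DBCC SHRINKFILE (N'ReportingDb1_log', 1024);   -- target size in MB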
Can I get notification when an event occurs? Posted: 06 May 2013 12:14 PM PDT SQL Server has Traces and XEvents. These are used to capture and analyze what is going on with our SQL Server instances, and the events are stored so they can be analyzed later. For example, if I decide to monitor deadlocks in the database, I just query the trace file to see the history of deadlocks over a period of time. Here is my question: when an event occurs (in our example, a deadlock event), is there a way to get an email notification using msdb.dbo.sp_send_dbmail?
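One approach, sketched here with placeholder names, is a WMI-based SQL Agent alert on the DEADLOCK_GRAPH event that fires a job whose single step calls msdb.dbo.sp_send_dbmail (the procedure is sp_send_dbmail; there is no xp_send_dbmail):

    -- Assumes Database Mail is configured and a job named 'Notify deadlock' already exists
    -- whose step runs msdb.dbo.sp_send_dbmail with your profile and recipients.
    EXEC msdb.dbo.sp_add_alert
         @name          = N'Deadlock occurred',
         @wmi_namespace = N'\\.\root\Microsoft\SqlServer\ServerEvents\MSSQLSERVER',
         @wmi_query     = N'SELECT * FROM DEADLOCK_GRAPH',
         @job_name      = N'Notify deadlock';

The instance name in the WMI namespace and the Agent's token-replacement setting may need adjusting for a named instance.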
How should permissions for anonymous users be modelled? Posted: 06 May 2013 11:01 AM PDT I'm designing a web application. There will be users who log into the site, but also anonymous, non-authenticated users, i.e. any member of the public who accesses the site. Users will be assigned to groups, and those groups will be assigned permissions. Some site content will be accessible only to authenticated users, while other content may be marked as publicly accessible. I'm considering how best to model facts such as "this item is publicly accessible". Options that occur to me so far:
Any thoughts on which is the right way to go? The first option feels like a more consistent approach to permissions, but the notion of some users and groups being special/fake/dummy doesn't feel right.
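To make the first option concrete, a minimal sketch with hypothetical table names: a built-in "Anonymous" group is granted permissions through exactly the same tables as real groups, so the access check has one shape for everyone.

    CREATE TABLE user_group (
        id         INT PRIMARY KEY,
        name       VARCHAR(50) NOT NULL,
        is_builtin BOOLEAN NOT NULL DEFAULT FALSE   -- TRUE for the special Anonymous/Everyone rows
    );

    CREATE TABLE permission (
        group_id INT NOT NULL,
        item_id  INT NOT NULL,
        can_view BOOLEAN NOT NULL,
        PRIMARY KEY (group_id, item_id),
        FOREIGN KEY (group_id) REFERENCES user_group (id)
    );

    -- "Publicly accessible" then simply means the Anonymous group has can_view on the item.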
Table corruption: How to perform Innodb Checksum checks in MySQL 5.5 for Windows? Posted: 06 May 2013 02:14 PM PDT Having a corrupted MySQL 5.5.31 (Windows) database, my question relates to the top solution provided in How do you identify InnoDB table corruption?, more precisely to the script it gives that is supposed to tell you which tables are corrupted. In fact I have two questions: 1) Where do you execute such a script? I thought the scripting shell in MySQL Workbench would do the job by saving this snippet as a Python file and then executing it; however, it reports invalid syntax already on the "for ..." line. 2) According to http://dev.mysql.com/doc/refman/5.5/en/innochecksum.html innochecksum is a utility provided by MySQL/Oracle. However, I cannot find it in the bin or other folders of my MySQL installation. How do I obtain it? UPDATE: As I did not trust my own MySQL installation, I downloaded the zip files for both the 32- and 64-bit versions of 5.5.31 but can confirm that an innochecksum binary is not included. Thanks.
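As a stop-gap while innochecksum is unavailable, a server-side check can be run from any client; a hedged sketch (schema and table names are examples), with the caveat that checking a badly corrupted InnoDB table can itself crash the server, so take a file-level backup first:

    -- Runs InnoDB's own consistency checks on the named tables.
    CHECK TABLE mydb.orders, mydb.customers;

    -- Or generate a CHECK TABLE statement for every InnoDB table in a schema:
    SELECT CONCAT('CHECK TABLE `', table_schema, '`.`', table_name, '`;') AS stmt
    FROM   information_schema.tables
    WHERE  engine = 'InnoDB' AND table_schema = 'mydb';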
Converting Non Unicode string to Unicode string SSIS Posted: 06 May 2013 01:44 PM PDT I am creating a package where I will be exporting data from a database into an empty Excel file. When I added only the source and destination components and ran the package, I got a conversion error stating that the output column and column "A" cannot convert between Unicode and non-Unicode string data types. To fix this I added a Data Conversion component and converted all the columns to "Unicode String [DT_WSTR]", and I no longer received the error. The only problem is that I had about 50 columns where I had to go one by one and select "Unicode String [DT_WSTR]" from the drop-down list. I then had to go into the destination component and map the newly converted columns to my Excel file. My question is, for anyone else who has come across this, is there a better, more efficient way to get around having to do all the manual data type conversions? Having to convert and map all the columns one by one doesn't seem practical, especially with a large number of columns. I understand Excel files are not the best way to go for importing and exporting data, but it is what is required in this particular case. I might look for a way to just export to a flat text file and then convert to Excel as a last step in the package; I'm hoping this won't trigger the same Unicode/non-Unicode conversion error.
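One way to sidestep the per-column Data Conversion work, assuming the source is a SQL Server query you can edit, is to cast the columns to NVARCHAR in the OLE DB Source itself so the pipeline is Unicode from the start; the column names below are invented:

    -- Used as the source query of the OLE DB Source component; the casts replace
    -- the 50 manual "Unicode String [DT_WSTR]" conversions.
    SELECT CAST(col1 AS NVARCHAR(50))  AS col1,
           CAST(col2 AS NVARCHAR(100)) AS col2,
           CAST(col3 AS NVARCHAR(255)) AS col3
    FROM   dbo.SourceTable;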
Is replication from SQL Server 2008 to PostgreSql possible? Posted: 06 May 2013 11:05 AM PDT Is it possible? SQL Server as the publisher (master) and PostgreSQL as the subscriber (slave)? Any type of replication, really.
MySQL high CPU usage (MyISAM table indexes) Posted: 06 May 2013 02:18 PM PDT I have a problem with an inherited MySQL database. From time to time mysqld uses up to 2300% CPU. The only solution is to service mysql stop and run myisamchk -r on a table. After the indexes have been fixed, I start MySQL and everything is OK. Any ideas for a permanent solution? Edit (from the comments): Using 5.5.29-0ubuntu0.12.04.2-log with key_buffer = 16M, max_allowed_packet = 16M, thread_stack = 128K, thread_cache_size = 8, myisam-recover = BACKUP, max_connections = 500, #table_cache = 512, #thread_concurrency = 10, query_cache_limit = 1M, query_cache_size = 16M. A query for the total index size returns ndxsize = 59862016 (bytes), and a query for the total data plus index size returns datndxsize = 488.69915199279785 (presumably MB). The server has 16GB of RAM, but it is not a dedicated DB server: it is also running nginx + php-fpm.
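Given the figures quoted (roughly 57 MB of MyISAM index against a 16 MB key_buffer), one low-risk experiment, sketched below, is to size the key cache so every index fits; the 128M value is an assumption, not a measured recommendation:

    -- Total MyISAM index size across all schemas (a query of this shape produced the 59862016 figure).
    SELECT SUM(index_length) AS ndxsize
    FROM   information_schema.tables
    WHERE  engine = 'MyISAM';

    -- Try a larger key cache at run time, then persist it as key_buffer = 128M in my.cnf.
    SET GLOBAL key_buffer_size = 128 * 1024 * 1024;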
Total Memory used by SQL Server (64 bit) Posted: 06 May 2013 12:29 PM PDT My knowledge of the subject suggests that the perf counter SQL Server:Memory Manager: Total Server Memory only gives you the buffer pool memory. There is a column called physical_memory_in_use_kb in a DMV named sys.dm_os_process_memory that gives you the physical working set. But I am not sure ...
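A hedged sketch that pulls the two numbers side by side (column names as in SQL Server 2008 R2 and later):

    -- Working set of the whole sqlservr.exe process; this covers more than the buffer pool.
    SELECT physical_memory_in_use_kb / 1024 AS process_physical_mb
    FROM   sys.dm_os_process_memory;

    -- The Memory Manager counter mentioned above, for comparison.
    SELECT cntr_value / 1024 AS total_server_memory_mb
    FROM   sys.dm_os_performance_counters
    WHERE  counter_name = 'Total Server Memory (KB)';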
MySQL slap with custom query Posted: 06 May 2013 03:07 PM PDT I want to conduct a stress test on our MySQL DB. I have the list of queries I need to execute. I have tried using Apache JMeter for this but it is very time consuming. Is it possible to run mysqlslap with a custom .sql file containing INSERT, UPDATE and SELECT queries against a specified MySQL database?
Overview of how MongoDB uses its various threads Posted: 06 May 2013 01:05 PM PDT On one instance I have MongoDB using ~85 threads. Lacking the time to investigate directly, I am curious: what do these threads do, and is there an overview of how MongoDB uses them?
upgrade mysql 5.1 to 5.5 on Ubuntu 11.10 Posted: 06 May 2013 07:05 PM PDT I currently have MySQL Server 5.1 installed via apt-get on my production Ubuntu 11.10 server. I would like to upgrade this to 5.6, but the MySQL docs seem to suggest upgrading to 5.5 first, and from there to 5.6. I've seen various lengthy guides describing how to upgrade from 5.1 to 5.5, but they all seem to describe how to upgrade by installing the tarball rather than using the package manager. Is there a simpler way to upgrade using the package manager, given that the current version was installed with apt-get? Obviously I want my existing configuration and databases to be retained after the upgrade, and I will be sure to back up my databases first.
MySQL information_schema doesn't update Posted: 06 May 2013 08:05 PM PDT I have a database, say "test". After I run a query that deletes its data, the strange thing is that I find the database size reported doesn't decrease at all, even though the data in "test" is gone. I've done this kind of test many times, and this strange behavior happens sometimes. I'm using MySQL. Can anybody tell me what is wrong? Update: actually, I use another thread to check the database size periodically.
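If the tables are InnoDB, two things are worth ruling out (a sketch with placeholder object names): the sizes in information_schema are estimates that can be served from cached statistics, and deleted space is not given back to the filesystem unless the table has its own tablespace and is rebuilt.

    -- Refresh the statistics before reading the size; the innodb_stats_on_metadata setting also
    -- affects whether querying information_schema recalculates them.
    ANALYZE TABLE test.my_table;

    SELECT table_name, data_length, index_length, data_free
    FROM   information_schema.tables
    WHERE  table_schema = 'test';

    -- With innodb_file_per_table enabled, a rebuild shrinks the .ibd file on disk:
    -- OPTIMIZE TABLE test.my_table;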
How to successfully run a batch file in an SQL Agent job? Posted: 06 May 2013 05:05 PM PDT I have a SQL Agent job which generates a specific report as a PDF file, then copies the PDF to a network directory and deletes the PDF file from the source directory. The job consists of 2 steps: 1. Generate the report. 2. Copy the report to the network location. For step 2 I made a bat file which handles the copying and removal of the PDF file. The bat file is as follows: However, when I run the job, it hangs on the second step; the status just stays on "Executing". This is the line I entered in the step (the location of the bat file to execute): My job settings are as follows: Step 1 - Type: Operating system (CmdExec); On Success: Go to the next step; On Failure: Quit the job reporting failure. Step 2 - Type: Operating system (CmdExec); On Success: Quit the job reporting success; On Failure: Quit the job reporting failure. Some facts:
MySQL partitioned tables? Posted: 06 May 2013 02:51 PM PDT I have a database that supports a web application with several large tables. I'm wondering if partitioned tables will help speed up certain queries. Each of these tables has a column called client_id. Data for each client_id is independent from every other client_id; in other words, web queries will always contain a WHERE clause with a single client_id. I'm thinking this may be a good column on which to partition my large tables. After reading up on partitioned tables, I'm still a little unsure as to how best to partition. For example, a typical table may have 50 million rows distributed more or less evenly across 35 client_ids. We add new client_ids periodically, but in the short term the number of client_ids is relatively fixed. I was thinking of something along these lines: My question: is this an optimal strategy for partitioning these types of tables? My tests indicate a considerable speedup over indexing on client_id, but can I do better with some other form of partitioning (i.e. hash or range)?
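For the pattern described (every query carries exactly one client_id, about 35 values, new ones added now and then), HASH or KEY partitioning on client_id avoids touching the DDL for each new client, at the cost of a few clients sharing a partition. A hedged sketch; the table layout and partition count are assumptions, not the poster's actual DDL:

    -- MySQL requires every unique key, including the primary key, to contain the partitioning column.
    CREATE TABLE big_table (
        id        BIGINT      NOT NULL,
        client_id INT         NOT NULL,
        payload   VARCHAR(255),
        PRIMARY KEY (id, client_id)
    ) ENGINE = InnoDB
    PARTITION BY HASH (client_id)
    PARTITIONS 36;

    -- A query with WHERE client_id = ? should then prune to a single partition:
    EXPLAIN PARTITIONS SELECT * FROM big_table WHERE client_id = 7;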
Experience using ScaleArc in test or production? Posted: 06 May 2013 11:12 AM PDT Has anyone had any experience using ScaleArc? My CTO has asked my thoughts on it, and I have seen no information out there regarding real-world experiences.
MySQL user defined rollback procedure Posted: 06 May 2013 03:08 PM PDT I'm attempting to write my own mini-rollback procedure. I have a table that tracks any updates or deletes to another table using a trigger, and I am attempting to make it possible to restore one or more of these tracked changes through a procedure. However, I'm receiving a syntax error with the following: the error relates to my UPDATE statement; any help or suggestions would be appreciated.
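Without the procedure body it is hard to pinpoint the error, but the two usual culprits in MySQL are forgetting to change the DELIMITER around the procedure and a malformed multi-table UPDATE; a hedged sketch of the general shape, with invented table and column names:

    DELIMITER $$

    CREATE PROCEDURE restore_change(IN p_audit_id INT)
    BEGIN
        -- Copy the saved values from the audit row back onto the live row.
        UPDATE live_table AS l
        JOIN   audit_table AS a ON a.row_id = l.id
        SET    l.col1 = a.old_col1,
               l.col2 = a.old_col2
        WHERE  a.audit_id = p_audit_id;
    END$$

    DELIMITER ;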
Enforce a hard limit on write execution time during Amazon RDS write stalls Posted: 06 May 2013 05:08 PM PDT I am trying to recover from the situation where Amazon RDS write operations stall (there is some discussion elsewhere on why this happens). If I show full processlist, I see something like this: as you can see from the number after the query, these have been in the "updating" state for 80-90 seconds! I am running the largest DB instance, so clearly something bad is happening on the EBS node on which the DB is running. In these situations, I would prefer the query to fail after 1 or 2 seconds, not wait for over a minute stuck in the "updating" state. I am using Ruby's ActiveRecord, FYI. What is the best way to force a failure in this case after 2 seconds? Should I use innodb_lock_wait_timeout? (I don't think so, since these tables aren't locked, and besides, these times are clearly > 50, which is what it is set to.) I believe the optimal approach is to set per-session read and write timeouts. In Ruby on Rails this is done by editing the database.yml file and entering something like: will this approach work? Are there any other approaches I can use to more quickly fail these queries instead of having them hang my application threads indefinitely? Thanks!
Connect to SQL Server Management Studio over VPN (Hamachi) Posted: 06 May 2013 01:11 PM PDT I just got my Hamachi VPN set up. For anyone familiar with Hamachi, I have it set up as a gateway so I'm part of the network when I'm away. Almost everything seems to be working perfectly; I can even back up using Windows Home Server if I want. But I cannot connect to my SQL Server from SQL Server Management Studio. Of course, when I'm at home, everything works perfectly. I can communicate with the database server just fine remotely (i.e., ping); I just can't connect with SSMS. The network configuration is at the default (TCP enabled). Does anyone know why SSMS cannot connect in this setup? Extra info: