[how to] Can I force mysql server to resolve the external ip to localhost?
- Can I force mysql server to resolve the external ip to localhost?
- Multi-master quorum offsetting performance gain through distribution
- What is the best data modelling technique for a credit business organization? [on hold]
- Mysqlbinlog statement reading
- How large are blocks of data in SQL Server?
- Would adding indexes to my foreign keys improve performance on this MySQL query?
- Creating Indexed View GROUP BY Epoch Date
- Audit Queries in SQL 2008 Including Start Stop Times
- Why does Log Shipping .TRN file copy just stop
- How to determine master in mysql master-slave
- MySQL 5.6 Delayed Replication - cancel delay questions
- Oracle OEM Database Backup Failure
- ORA-40341: Access violation on model storage object in Oracle?
- Cumulative Game Score SQL
- TOAST Table Growth Out of Control - FULLVAC Does Nothing
- mysqldump freezing on a specific table
- Longest prefix search in Oracle
- In place upgrade from MySQL 5.5 to 5.6.11 removes all users from user table
- SSRS 2008 R2 setup issue
- Get all the databases in DB2 through the web
- Is there an effective way to create extended event data that includes stacktraces in SQL Server on a local db instance?
- Database design for an E-commerce website
- Shrink database operation in maintenance plan failed
- Why would mysql "show global status" query be taking 15 minutes?
- Is there a way to export Oracle's UNDO?
- What is the difference between int(8) and int(5) in mysql?
- MySQL table relations, inheritance or not?
- Temporarily Disable Sql Replication
Can I force mysql server to resolve the external ip to localhost? Posted: 09 Jul 2013 08:07 PM PDT I've finally succeeded with SSH tunneling. My goal is to open the MySQL server to local users only (user@localhost etc.) while providing remote access for my customers through SSH tunneling. The problem is when I connect through the mysql command line tool ( Well, it makes sense, since my root exists in But then, is there a way for me to somehow tell MySQL that this IP address is actually the server's address, and that root should be allowed to connect? I've got no clue where to start. To me, if such a thing existed, it would be some kind of IP resolving table...
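A sketch of why the tunnel itself can sidestep the problem (an assumption on my part, not stated in the post): if the SSH tunnel terminates at 127.0.0.1 on the server, MySQL sees the client as local and the existing root@localhost account matches, with no IP-resolving table needed. MySQL matches the client's apparent host against the Host column of mysql.user, where % and _ act as wildcards; `host_matches` below is a simplified, hypothetical stand-in for that logic, not MySQL's actual code:

```python
import re

def host_matches(pattern: str, client_host: str) -> bool:
    """Simplified sketch of MySQL's account-host matching:
    % matches any sequence, _ matches a single character."""
    regex = re.escape(pattern).replace("%", ".*").replace("_", ".")
    return re.fullmatch(regex, client_host) is not None

# Tunneled with `ssh -L 3306:127.0.0.1:3306 user@server` and connecting
# to 127.0.0.1, the server sees a local client, so root@localhost matches:
print(host_matches("localhost", "localhost"))   # True
print(host_matches("192.168.%", "localhost"))   # False
print(host_matches("192.168.%", "192.168.1.5")) # True
```

With that setup, no remote-host account is needed at all: customers tunnel in, and the server only ever sees local connections.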
Multi-master quorum offsetting performance gain through distribution Posted: 09 Jul 2013 01:33 PM PDT Suppose we have a multi-master p2p setup (3 masters) that requires a quorum of 2 to write. Then, in order to get full consistency, reads also require a quorum of 2, because on a write the 3rd server may not yet be updated and can serve outdated data. Correct? Then, don't the extra quorum IO requests nearly completely offset the purpose of having a multi-master distribution model? Sure, there are three servers instead of one, but each request becomes three requests, so each server doesn't get a lighter load. Additionally, isn't this even worse for 2-phase-commit scenarios? Thanks!
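For readers who want the arithmetic behind the question: the consistency condition for quorum systems is R + W > N, and with N = 3, W = 2 that indeed forces R = 2, as the poster says. A quick sketch of the overlap rule and the resulting per-server load (the request count is illustrative):

```python
def quorums_overlap(n: int, w: int, r: int) -> bool:
    """Any read quorum intersects any write quorum iff r + w > n."""
    return r + w > n

N, W, R = 3, 2, 2
print(quorums_overlap(N, W, R))  # True: every read sees the latest write
print(quorums_overlap(N, 1, R))  # False: a write quorum of 1 can be missed

# The IO amplification the question describes: each logical request
# fans out to a quorum, so with quorum 2 over 3 servers each server
# handles 2/3 of all requests instead of 1/3.
requests = 900
per_server = requests * W / N
print(per_server)  # 600.0
```

So distribution still buys availability and some load spreading (2/3 instead of 3/3 per server), but not the naive 1/N reduction.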
What is the best data modelling technique for a credit business organization? [on hold] Posted: 09 Jul 2013 01:33 PM PDT I would like to know: what is the difference between a notation and a technique? Which notation is most intuitive? Which modelling technique is best when designing a database for a small business organization?
Mysqlbinlog statement reading Posted: 09 Jul 2013 01:11 PM PDT I am trying to perform a point-in-time recovery on a specific database using a snapshot and MySQL's transaction logs. I am using the following to pull statements from the time of the snapshot to the time of recovery: The resulting statements that mysqlbinlog produces include INSERT/UPDATE/DELETE statements for another database that has the form: So functionally I'm getting statements from two different databases: one is our production database, the other is our reporting database, distinguished by the '_reporting' suffix. It would appear that our application is inserting into the secondary database while still using the primary one, and the binlogs associate both statements with the primary database. Am I correct that mysqlbinlog is going to read out statements for both databases as long as they are executed after a
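The behaviour described is consistent with how mysqlbinlog's --database filter works for statement-based logging: filtering keys off the *default* database set by USE, so statements touching a second database while the first is the default are still attributed to the first. A rough Python sketch of that filtering rule (the sample statements are invented for illustration):

```python
import re

def filter_by_default_db(binlog_text: str, wanted_db: str) -> list:
    """Keep only statements executed while `wanted_db` was the default
    database -- a rough mimic of mysqlbinlog --database filtering."""
    current_db = None
    kept = []
    for line in binlog_text.splitlines():
        m = re.match(r"use\s+`?(\w+)`?", line, re.IGNORECASE)
        if m:
            current_db = m.group(1)  # USE switches the default database
            continue
        if line.strip() and current_db == wanted_db:
            kept.append(line)
    return kept

sample = """use `prod`
INSERT INTO t VALUES (1)
use `prod_reporting`
INSERT INTO r VALUES (2)"""
print(filter_by_default_db(sample, "prod"))  # ['INSERT INTO t VALUES (1)']
```

Note the implication: a statement such as `INSERT INTO prod_reporting.r ...` issued while `use prod` was in effect would still be kept under "prod" — which is exactly the cross-database leakage the questioner observed.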
How large are blocks of data in SQL Server? Posted: 09 Jul 2013 01:05 PM PDT I'm working in SQL Server 2008 R2 and have created a query that gathers and sums the sizes of the data files and log files. However, I can't find how much actual disk space a single block of SQL data takes up on disk, so I can convert the result into something more meaningful. Here is the script: How large is one block of space? Would it be easy to convert those two integer variables into something more meaningful for a sysadmin?
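To answer the unit question directly: SQL Server stores data in 8 KB pages, and the size columns in views such as sys.master_files are counts of those 8 KB pages. Converting a page count into megabytes is then plain arithmetic (in T-SQL the equivalent expression would be `size * 8.0 / 1024`):

```python
# SQL Server's fundamental storage unit is the 8 KB page;
# sys.master_files.size and similar columns count these pages.
PAGE_BYTES = 8 * 1024

def pages_to_mb(pages: int) -> float:
    return pages * PAGE_BYTES / (1024 * 1024)

print(pages_to_mb(128))    # 1.0   -> 128 pages = 1 MB
print(pages_to_mb(12800))  # 100.0 -> a 100 MB data file
```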
Would adding indexes to my foreign keys improve performance on this MySQL query? Posted: 09 Jul 2013 02:35 PM PDT Consider the following query: Using a profiling tool, I got this report: This is a MySQL database. All the tables are InnoDB with utf8_unicode_ci. The primary keys on each table are called
My thinking is that since I'm selecting from locations first, I would want an index on the foreign keys that are being referenced. I read here (indexes, foreign keys and optimization) that MySQL requires indexes on all foreign keys. Does explicitly defining the foreign keys in an InnoDB schema improve performance? Sorry for being such a total n00b. Thanks for the help!
Creating Indexed View GROUP BY Epoch Date Posted: 09 Jul 2013 04:24 PM PDT I have a few big tables with about 6 billion rows that I was looking to optimize. The clustered key is Epoch (Unix date time, the number of seconds that have passed since 1970) and customer ID. This table records usage data per customer per product type. For example, if this were for a telco, TypeID 1 is a local call and the value is how many minutes that customer used. TypeID 2 is an international call, and the value is how many minutes were used in that hour for that customer. Let's say TypeID 3 is a special discounted rate for domestic calling. The data is stored in 1-hour intervals. I want the indexed view to store the aggregated 24-hour value, so when we run a query for one day per customer, it only has to look up 1 row in the indexed view instead of 24 rows in the base table. This is the base table: We don't care about Aggregate or RowID for our reporting purposes, so I figure the indexed view will look like this: EDIT: Sample base data (I left out the columns we don't need in this case; just assume the ID columns are there). Each "TypeID" will have a value assigned to it, which can be 0. For example,
Let's assume all the other VALUE columns are 0 for the remainder of the day, since the system went down and no one could use their phones after 2 AM. I want my indexed view to record the value column aggregated per day, per CustomerID and TypeID. A sample would be:
Audit Queries in SQL 2008 Including Start Stop Times Posted: 09 Jul 2013 12:56 PM PDT I have SQL Server 2008 Enterprise. I know how to set up auditing on the SQL server, but it looks like the audit logs don't include query start/stop times. We currently use a product called Splunk to do a lot of data analysis at our organization. I would like to be able to feed Splunk our machine performance data along with a SQL query audit log, to be able to correlate high CPU usage to long-running queries and things like that. However, I would need query start/stop times to be able to do that. It doesn't look like the built-in auditing has that ability. Is there any other way to achieve this?
Why does Log Shipping .TRN file copy just stop Posted: 09 Jul 2013 12:57 PM PDT I apologize in advance for a long post, but I have had it up to here with this error and with having to delete the LS configuration and start over for any DB that gets it. I have LS set up on 3 Win2k8R2 servers (primary, secondary, monitor) with 100 databases' transactions backed up and shipped from the primary to the secondary and monitored by the monitor. Backups and copies are run every 15 minutes, and the ones older than 24 hours are deleted. Some DBs are very active and some not so much, but all are shipped regardless, for uniformity's sake (basically to make the secondary server identical to the primary). Some DBs are for SP2010 and the majority for an in-house app. The issue is that after all LS configs are set up, all works well for about 3 to 4 days; then I go to the Transaction LS Status report on the secondary and see that, randomly, some LS jobs have an Alert Status because the time since the last copy is over 45 minutes, so no restore has occurred. This seems random, and the only errors I see are from an SP2010 DB (WebAnalyticsServiceApplication_ReportingDB_77a60938_##########), which I believe is a reports DB that gets created weekly; LS cannot figure out which is the last copy to back up or to restore. I posted here regarding that, and I have yet to find a permanent solution. For my main error (time since last copy) I have not seen anything that could have caused it, and I don't get any messages (even though some alert statuses have been ignored for 3 days). Anyway, I would really appreciate any input on understanding what's causing this and how I could fix it. Thanks.
How to determine master in mysql master-slave Posted: 09 Jul 2013 09:07 PM PDT I am setting up MySQL master-slave replication and am trying to figure out how to handle the failover situation where I promote the slave to master (in the event that the master goes down). My application server needs to direct all writes to the current master, but I cannot use server-level HA between the master and slave (heartbeat, keepalived), since the two DB servers are on completely different subnets in different physical locations. I think this is something that I need to handle at the application level. I can query the two servers and ask which one is a master, then perform all queries against that one. Is there a query in MySQL to see if the current server is a master in a master-slave replica?
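One common application-level check (a sketch, with hypothetical client-library plumbing): SHOW SLAVE STATUS returns no rows on a server that is not configured as a replica, and one row of status columns on a slave, so the role can be inferred from the result set:

```python
def role_from_slave_status(slave_status_rows: list) -> str:
    """Classify a server from the rows SHOW SLAVE STATUS returned.
    An empty result means the server is not acting as a slave."""
    if not slave_status_rows:
        return "master"
    row = slave_status_rows[0]
    running = (row.get("Slave_IO_Running") == "Yes"
               and row.get("Slave_SQL_Running") == "Yes")
    return "slave" if running else "broken-slave"

print(role_from_slave_status([]))  # 'master'
print(role_from_slave_status([{"Slave_IO_Running": "Yes",
                               "Slave_SQL_Running": "Yes"}]))  # 'slave'
```

Checking the server's read_only variable as well is a common belt-and-braces addition, since a freshly promoted master may still carry stale slave state.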
MySQL 5.6 Delayed Replication - cancel delay questions Posted: 09 Jul 2013 02:13 PM PDT I'm using MySQL 5.6 master/slave replication, with the delay set to 2 hours. My questions are: Q1. What is the proper way to cancel the delay at the slave - i.e., "roll forward" all the changes executed at the master? Q2. Let's say the master became totally unavailable. What is the proper way to cancel the delay and "roll forward" the slave so that it applies all changes from the master? I've tried to: - stop slave; - change master to master_delay = 0; - start slave; (At this moment the master is still unavailable, and the slave IO thread status is Connecting.) After the steps listed above, the relay log bin files are deleted at the slave host and the Delay value becomes 0, but the changes from the master are NOT applied... If anybody could provide some useful tips, I'd very much appreciate it. Best regards, Avi Vainshtein
Oracle OEM Database Backup Failure Posted: 09 Jul 2013 02:19 PM PDT I am trying to back up an Oracle database from OEM, but upon completion the job report says the job failed with the following error: A DBA set this database up, but he is not currently available. I have tried running a crosscheck and a delete expired in RMAN, but experienced the same problem when trying again. The path it is looking in: C:\APP\RM\FLASH_RECOVERY_AREA\RONNIE\ARCHIVELOG\2013_06_10\O1_MF_1_1508_8VDHNOLY_.ARC ...does not exist on the server, but it did exist on the original machine from which the DBA copied the database. The path containing the archive log now is: C:\APP\CS\FLASH_RECOVERY_AREA\RONNIE\ARCHIVELOG\2013_06_10\O1_MF_1_1508_8VDHNOLY_.ARC Can anyone help? Thanks
ORA-40341: Access violation on model storage object in Oracle? Posted: 09 Jul 2013 08:31 PM PDT While I was trying to drop a table, it threw the following error in Oracle SQL Developer: The tables are temporary tables created while pushing data into the database using Oracle R Enterprise. The names of the tables are: I need to drop all these tables, as they occupy a large amount of space in my database. While googling, I found this link, but it provides no clues toward a solution.
Cumulative Game Score SQL Posted: 09 Jul 2013 08:11 PM PDT I developed a game recently, and the database runs on MS SQL. Here is my database structure: Table: Player Table: GameResult I have written SQL listing the top 50 players, sorted by highest score (DESC) and timetaken (ASC). The SQL below allows me to get the result for each puzzle ID. I'm not sure if it is 100% right, but I believe it is correct. Questions: 1) I need to modify the SQL to produce a cumulative rank across 3 puzzle IDs. For example, for puzzles 1, 2 and 3 it should sort by the highest sum of score (DESC) and the sum of timetaken (ASC). 2) I also need an overall ranking across all of the possible puzzles 1 to 7. 3) Each player is only allowed to appear on the list once. The player who played first and reached the highest score first is ranked 1st. I tried using a CTE with UNION, but the SQL statement doesn't work. I hope the gurus here can help me out on this. Much appreciated.
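The cumulative ranking asked for in question 1 is, in relational terms, a GROUP BY player over puzzles 1-3, ordered by SUM(score) DESC then SUM(timetaken) ASC. A Python sketch of that logic on invented data, to make the intended ordering concrete:

```python
from collections import defaultdict

# (player, puzzle_id, score, timetaken) -- illustrative sample rows
results = [
    ("alice", 1, 90, 30), ("alice", 2, 80, 40), ("alice", 3, 70, 50),
    ("bob",   1, 95, 20), ("bob",   2, 85, 25), ("bob",   3, 60, 80),
]

totals = defaultdict(lambda: [0, 0])  # player -> [sum_score, sum_time]
for player, puzzle, score, time_taken in results:
    if puzzle in (1, 2, 3):
        totals[player][0] += score
        totals[player][1] += time_taken

# highest total score first; ties broken by lowest total time
ranking = sorted(totals.items(), key=lambda kv: (-kv[1][0], kv[1][1]))
print([p for p, _ in ranking])
# ['alice', 'bob'] -- both 240 points, but alice was faster (120s vs 125s)
```

The same shape translates to SQL as a grouped subquery ordered by the two aggregates, with a window function such as ROW_NUMBER over that ordering to enforce one row per player.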
TOAST Table Growth Out of Control - FULLVAC Does Nothing Posted: 09 Jul 2013 12:51 PM PDT Recently, I've had a PostgreSQL 8.2.11 server upgraded to 8.4 in order to take advantage of autovacuum features and be in line with 30ish other PGSQL servers. This was done by a separate IT group who administers the hardware, so we don't have much choice on any other upgrades (we won't see 9+ for a while). The server exists in a very closed environment (isolated network, limited root privileges) and runs on RHEL5.5 (i686). After the upgrade, the database has constantly been growing to the tune of 5-6 GB a day. Normally, the database as a whole is ~20GB; currently, it is ~89GB. We have a couple of other servers which run equivalent databases and actually synchronize the records to each other via a 3rd-party application (whose inner workings I do not have access to). The other databases are ~20GB, as they should be. Running the following SQL, it's fairly obvious there's an issue with a particular table and, more specifically, its TOAST table. Which produces:
relation | size
------------------------------------+---------
pg_toast.pg_toast_16874 | 89 GB
fews00.warmstates | 1095 MB
... (20 rows)
This TOAST table is for a table called "timeseries", which saves large records of blobbed data. I've performed a
REINDEXed the table, which freed some space (~1GB). I can't CLUSTER the table, as there isn't enough space on disk for the process, and I'm waiting to rebuild the table entirely, as I'd like to find out why it is so much bigger than the equivalent databases we have. I ran a query from the PostgreSQL wiki here - "Show Database Bloat" - and this is what I get:
current_database | schemaname | tablename | tbloat | wastedbytes | iname | ibloat | wastedibytes
-----------------+------------+--------------------------------+--------+-------------+---------------------------------+--------+--------------
ptrdb04 | fews00 | timeseries | 1.0 | 0 | idx_timeseries_synchlevel | 0.0 | 0
ptrdb04 | fews00 | timeseries | 1.0 | 0 | idx_timeseries_localavail | 0.0 | 0
ptrdb04 | fews00 | timeseries | 1.0 | 0 | idx_timeseries_expirytime | 0.0 | 0
ptrdb04 | fews00 | timeseries | 1.0 | 0 | idx_timeseries_expiry_null | 0.0 | 0
ptrdb04 | fews00 | timeseries | 1.0 | 0 | uniq_localintid | 0.0 | 0
ptrdb04 | fews00 | timeseries | 1.0 | 0 | pk_timeseries | 0.1 | 0
ptrdb04 | fews00 | idx_timeseries_expiry_null | 0.6 | 0 | ? | 0.0 | 0
It looks like the database doesn't consider this space "empty" at all, but I just don't see where all the disk space is coming from! I suspect that this database server is deciding to use 4-5x as much disk space to save the same records pulled from the other data servers. My question is this: Is there a way I can verify the physical disk size of a row? I'd like to compare the size of one row on this database to another, "healthy" database. Thanks for any help you can provide! UPDATE 1 I ended up rebuilding the table from a dumped schema due to its size (I couldn't leave it alone for another day). After synchronizing the data via the software sync process, the TOAST table was ~35GB; however, I could only account for ~9GB of it from the blob column, which should be the longest in terms of values. Not sure where the other 26GB is coming from.
CLUSTERed, VACUUM FULLed, and REINDEXed to no avail. The postgresql.conf files between the local and remote data servers are exactly the same. Is there any reason this database might be trying to store each record with more space on disk? UPDATE 2 - Fixed I finally decided to just completely rebuild the database from the ground up - even going as far as reinstalling the PostgreSQL84 packages on the system. The database path was reinitialized and the tablespaces wiped clean. The 3rd-party software synchronization process repopulated the tables, and the final size came out to be ~12GB! Unfortunately, this in no way helps to identify the exact source of the issue. I'm going to watch it for a day or two and see if there are any major differences in how the revitalized database handles the TOAST table, and I'll post those results here. Relation Size
INFO: "timeseries": found 12699 removable, 681961 nonremovable row versions in 58130 out of 68382 pages
DETAIL: 0 dead row versions cannot be removed yet. There were 105847 unused item pointers. 0 pages are entirely empty. CPU 0.83s/2.08u sec elapsed 33.36 sec.
INFO: vacuuming "pg_toast.pg_toast_17269"
INFO: scanned index "pg_toast_17269_index" to remove 2055849 row versions
DETAIL: CPU 0.37s/2.92u sec elapsed 13.29 sec.
INFO: "pg_toast_17269": removed 2055849 row versions in 518543 pages
DETAIL: CPU 8.60s/3.21u sec elapsed 358.42 sec.
INFO: index "pg_toast_17269_index" now contains 7346902 row versions in 36786 pages
DETAIL: 2055849 index row versions were removed. 10410 index pages have been deleted, 5124 are currently reusable. CPU 0.00s/0.00u sec elapsed 0.01 sec.
INFO: "pg_toast_17269": found 1286128 removable, 2993389 nonremovable row versions in 1257871 out of 2328079 pages
DETAIL: 0 dead row versions cannot be removed yet. There were 18847 unused item pointers. 0 pages are entirely empty. CPU 26.56s/13.04u sec elapsed 714.97 sec.
INFO: analyzing "fews00.timeseries"
INFO: "timeseries": scanned 30000 of 68382 pages, containing 360192 live rows and 0 dead rows; 30000 rows in sample, 821022 estimated total rows
The only noticeable difference after the rebuild (other than disk usage) is INFO: "pg_toast_17269": found 1286128 removable, 2993389 nonremovable row versions, as @CraigRinger mentioned in a comment. The nonremovable row count is much smaller than before. New question: Can other tables affect the size of another table (via foreign keys and such)? Rebuilding the table did nothing, yet rebuilding the whole database proved to fix the problem.
mysqldump freezing on a specific table Posted: 09 Jul 2013 12:57 PM PDT I dumped a database ( When the dump was blocked: On the other hand, I can dump the table This works well and quickly! The table What is the next step in investigating why I can't dump the whole database? How can I make it work?
Longest prefix search in Oracle Posted: 09 Jul 2013 08:38 PM PDT I have a list of phone number prefixes defined for a large number of zones (in the query defined by gvcode and cgi). I need to efficiently find the longest prefix that matches a given number PHONE_NR. I use an inverted LIKE clause on the field digits (which contains prefixes in the form +48%, +49%, +1%, +1232% and so on). Therefore I can't use a normal index on that field. I managed to get a substantial improvement by using an IOT on the gvcode and cgi fields (which are the first two columns of the primary key). I also looked at some Oracle Text indexes, but can't find one that will match longer input against a shorter prefix in the table. Is there any other way to perform such a search that is faster than this approach? Here is the query, which gives a list of all matched prefixes (I sort it afterwards on digits length).
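One alternative to the inverted LIKE that often performs better (a general technique, not taken from the post): generate every leading substring of the dialled number and join those with *equality* against an indexed prefix column, so a normal index becomes usable. The core matching logic, sketched in Python:

```python
def longest_prefix(prefixes, phone_nr: str):
    """Return the longest stored prefix (without the trailing %)
    that phone_nr starts with, or None if nothing matches."""
    # The set-based trick: every leading substring of phone_nr is a
    # candidate; an equality join against an indexed column can serve it.
    candidates = {phone_nr[:i] for i in range(1, len(phone_nr) + 1)}
    best = None
    for p in prefixes:
        if p in candidates and (best is None or len(p) > len(best)):
            best = p
    return best

prefixes = ["+48", "+49", "+1", "+1232"]
print(longest_prefix(prefixes, "+12325550123"))  # '+1232'
print(longest_prefix(prefixes, "+48601000000"))  # '+48'
print(longest_prefix(prefixes, "+7999"))         # None
```

In SQL the analogue is a join of the prefix table against the set of SUBSTR(PHONE_NR, 1, n) values for n up to the number's length, ordering by prefix length descending and taking the first row; since a phone number has at most ~15 digits, that is a small, index-friendly candidate set.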
In place upgrade from MySQL 5.5 to 5.6.11 removes all users from user table Posted: 09 Jul 2013 01:52 PM PDT On Windows, I upgraded from 5.1 to 5.5 with no problem.
All good, but going from 5.5 to 5.6:
but I get: If I look at the
Thanks.
SSRS 2008 R2 setup issue Posted: 09 Jul 2013 04:52 PM PDT I have installed SSRS 2008 R2 on my desktop and server. When I hit the Reports link on my desktop, all I get to see is this. I can't create a new folder or data source or anything of the sort. On the server where I am attempting to set up SSRS 2008 R2, all I get is a white screen that shows the virtual folder name in large fonts, followed by the version of the Reporting Services server on the next line. This is not leaving me any clues as to what needs to be fixed. On both PCs I am using the credentials of the local admin. Any clues on what needs to be fixed?
Get all the databases in DB2 through the web Posted: 09 Jul 2013 02:17 PM PDT I would like to get all the databases available on a DB2 instance from a C# application. The CLP command to be used is How can I issue this command from C#? Is there another SELECT statement I can use to get all the databases on one DB2 instance?
Is there an effective way to create extended event data that includes stacktraces in SQL Server on a local db instance? Posted: 09 Jul 2013 06:43 PM PDT I read Paul Randal's article on getting the symbols for SQL Server. I have a theory that I could get Extended Events to read the symbols from the symbol server cache directory by setting the _NT_SYMBOL_PATH environment variable. The only problem is I can't create the right conditions for any data to show up when I create an Extended Events session with sqlos.spinlock_backoff. I tried the script in this PDF. I tried HammerDB, and I tried this script with AdventureWorks. I don't get any spinlocks. I tried setting max server memory at 256 MB (my default setting on my laptop) and I tried it at 2 GB. The Extended Events session was running, but no data showed up.
Database design for an E-commerce website Posted: 09 Jul 2013 06:52 PM PDT I am new to database design. I am designing a database for an e-commerce website. There are a lot of products to be updated, but while designing the product specification table I do not understand whether I should put all the specifications in one table or use different tables for different products. For example, consider the products mobile and book, each having unique specifications (such as color, size, cost and model for a mobile, and title, ISBN, author, cost, year_of_publication, etc. for a book). If there were only a small number of products it would be possible for me to design this, but when there are thousands of products it takes a lot of time. Can anyone tell me how to design my database to manage this situation?
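Two standard answers to this design problem are (a) one detail table per product category keyed to a common products table, or (b) a single entity-attribute-value (EAV) table that stores one row per product per attribute. A sketch of the EAV shape, with invented names and data, to make the trade-off concrete:

```python
# EAV: one generic table (product_id, attribute, value) replaces a
# wide table per product type. Names and rows are illustrative only.
product_attributes = [
    (1, "type",   "mobile"),
    (1, "color",  "black"),
    (1, "model",  "X100"),
    (2, "type",   "book"),
    (2, "author", "D. Knuth"),
    (2, "isbn",   "978-0201896831"),
]

def specs_for(product_id: int) -> dict:
    """Reassemble one product's specification from its EAV rows."""
    return {attr: val for pid, attr, val in product_attributes
            if pid == product_id}

print(specs_for(2))
# {'type': 'book', 'author': 'D. Knuth', 'isbn': '978-0201896831'}
```

EAV keeps the schema stable as new product types appear, at the cost of weaker typing and clumsier queries; per-category detail tables are the usual alternative when categories are few and stable.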
Shrink database operation in maintenance plan failed Posted: 09 Jul 2013 11:17 AM PDT See updates below I have a nightly DB maintenance plan and it's causing some strange behaviour. About 1 out of 10 runs, it causes the DB to not respond to queries. It takes about 5 minutes for the DBM plan to complete, at which point it starts responding again. The error logs don't seem to point to any problem at all: I'm not quite sure what info is needed to diagnose the problem. Please let me know what you need and I'll do my best to get it. I managed to find this error; not sure if it helps. dm_os_waiting_tasks Update: DB Shrink removed. Still hanging. I'm 99% sure it's the rebuild index task. Here are a few example lines from the generated SQL Update 2013-07-09: So I'm continuing to have problems with this maintenance plan. The shrink operation was removed a few months ago and the rebuild index task was modified. I'm continuing to have failures, and the web site that is driven by this DB continues to go down for a few minutes on each failure. I managed to dig up the job history task, and it appears that the update statistics task is now failing. See log below: Should I not be doing an update statistics task in my nightly backup? Or is this pointing to a different issue? Update 2013-07-09: The maintenance plan logs to file. Here is an entry from 2013-07-08: Related SQL: Here is an entry from 2013-07-04: Related SQL: Here is an entry from 2013-07-03: Related SQL: All of the maintenance plan tasks are generated by the GUI wizard tool. As you can see, they fail on different steps every time. What should I do here? Should I increase the timeout? Or will that just take longer to fail? Should I modify the queries? ALLOW_ROW_LOCKS = OFF? ONLINE = ON? Should I remove the rebuild index task completely? Should I remove the update statistics task completely? Thanks for your input, Tomas
Why would mysql "show global status" query be taking 15 minutes? Posted: 09 Jul 2013 08:52 PM PDT I'm reviewing the slow log, and on one of my slaves the average time for SHOW GLOBAL STATUS is 914 seconds. Any idea how to determine the cause of this?
Is there a way to export Oracle's UNDO? Posted: 09 Jul 2013 07:52 PM PDT I tried the exp utility to dump the whole database. It looks like this exports only the latest version of the data, skipping the undo log. Using flashback queries I see: What I'm trying to do is to capture DB changes and make a backup for later use, with the ability to flash back to a timestamp. With an RMAN backup I have a similar situation: Update: I managed to do what I needed only by increasing the undo retention and directly copying the data files, with control file modification on the cloned instance.
What is the difference between int(8) and int(5) in mysql? Posted: 09 Jul 2013 11:52 AM PDT I found out that if you have a field defined as INT(8) without ZEROFILL, it will behave exactly like INT(5): in both cases the maximum value is Or am I missing something? I found this question: http://dba.stackexchange.com/a/370/12923
So there seems to be no difference, then.
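To make the linked answer concrete: the (8) and (5) are *display widths* that only matter with ZEROFILL; storage size and value range are those of INT either way. A sketch of what the display width does on retrieval:

```python
# INT's range is identical for INT(5) and INT(8); the parenthesised
# number is only a zero-padding width applied with ZEROFILL.
INT_MAX_SIGNED = 2**31 - 1  # 2147483647

def zerofill(value: int, display_width: int) -> str:
    """Roughly what ZEROFILL does when a column value is read back."""
    return str(value).zfill(display_width)

print(zerofill(42, 5))  # '00042'
print(zerofill(42, 8))  # '00000042'
print(INT_MAX_SIGNED)   # 2147483647 -- the max for both definitions
```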
MySQL table relations, inheritance or not? Posted: 09 Jul 2013 05:52 PM PDT I'm building a micro CMS, using MySQL as the RDBMS and the Doctrine ORM for mapping. I would like to have two types of pages: Static Page and Blog Page. A static page would have page_url and page_content stored in the database. A blog page would have page_url, but no page_content. A blog would have posts, categories... Let's say I have a route like this: This is a page with a page URL that can be home, or news, or blog... That page can be either a Static Page, in which case I would just print page_content, or it can be a Blog Page, and then I would print the latest posts as content. How should I relate these Static Page and Blog Page tables? Is this inheritance, since both are pages with their own URL but different content? Should I use inheritance, so that both Static and Blog page extend a Page that holds page_url? Or should I make another table, page_types, and store information about the available page types there?
Temporarily Disable Sql Replication Posted: 09 Jul 2013 03:37 PM PDT Due to a business need, I may need to disable transactional replication in my environment (SQL 2000 -> SQL 2008). Functionally, I understand this to mean that I will need to drop subscriptions and articles. Is keeping the create scripts enough to restore replication to its original state once the conflicting need is addressed? Thanks.
You are subscribed to email updates from Recent Questions - Database Administrators Stack Exchange.