[how to] mysql replication successful but slave not replicating
- mysql replication successful but slave not replicating
- Mistake during Oracle 11g PITR
- how to find similar word with more similarities (length less than or equal)
- How to view the full, flattened query underlying a Postgresql view?
- How to ensure that date-range queries involving multiple time zones are sargable
- createdb command error in postgres sql
- passing --single-transaction in mysqldump command line tool
- dead lock when updating
- grant execute on utl_file to user in Oracle 11g XE
- SQL Index order and performance based on cardinality and data
- Multiple SQL Server data files on same SAN disk
- Oracle Undo tablespace users
- Creating the MySQL slow query log file
- Breaking Semisynchronous Replication in MySQL 5.5
- how to add attachment(text file) to database mail?
- DB2 Server Table Space Locked
- Efficient way to move rows across the tables?
- unable to login oracle as sysdba
- "connectivity libraries needed are not found" error in IBM Data Studio
- Connecting to AWS RDS DBInstance with MySQL workbench
- mysql replication delay very serious
- SQL developer: Setup debugger for plsql
- Missing quotation - openquery to oracle
- Data dictionary best practices in SQL Server 2008 r2
- Maintenance Plan fails but the query it generates Runs
mysql replication successful but slave not replicating Posted: 11 May 2013 06:07 PM PDT I have created a MySQL master-slave configuration and things look fine. "SHOW MASTER STATUS;" on the slave doesn't show any error; this is the output. However, replication on the slave doesn't seem to be happening when the master undergoes a change. On the master: SHOW MASTER STATUS. On the slave, SHOW PROCESSLIST yields: Am I missing something?
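Two diagnostic commands worth running; note that SHOW MASTER STATUS on the slave only reports the slave's own binary log position, while the replication state lives in SHOW SLAVE STATUS (standard MySQL statements):

    -- Run on the SLAVE; both thread columns must say Yes:
    SHOW SLAVE STATUS\G
    -- Key fields: Slave_IO_Running, Slave_SQL_Running,
    --             Last_IO_Error, Last_SQL_Error, Seconds_Behind_Master.

    -- If the threads are stopped:
    START SLAVE;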
Mistake during Oracle 11g PITR Posted: 11 May 2013 01:11 PM PDT I tried using SET UNTIL TIME and mistyped the date. Can anyone help me understand how to get my backups into a manageable state? After the accidental recovery, most of my backup sets disappeared. I recovered them and used 'catalog recovery area', and they're listed in 'list backupset'. But something still isn't right. When I do a PITR now, I get messages that my dbf files aren't available, and the 'list backupset' seems to show backup sets, but they are listed differently from the files that weren't included in the 'bad' recovery. Gists with the error and the list of backup sets are here: https://gist.github.com/akinsgre/5561254
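For reference, a sketch of the RMAN point-in-time syntax involved (the date literal is illustrative); a mistyped value here sends the restore to the wrong moment:

    RUN {
      SET UNTIL TIME "TO_DATE('2013-05-10 14:00:00','YYYY-MM-DD HH24:MI:SS')";
      RESTORE DATABASE;
      RECOVER DATABASE;
    }
    -- After an incomplete recovery the database is opened with:
    -- ALTER DATABASE OPEN RESETLOGS;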
how to find similar word with more similarities (length less than or equal) Posted: 11 May 2013 12:42 PM PDT How can I find words similar to an input, restricted to words whose length is less than or equal to the input's? My query using @inp does not return any data, but I want it to show:
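A guess at the intent, since the schema and dialect are not shown (assuming T-SQL; the table dbo.words and column word are hypothetical):

    DECLARE @inp varchar(50) = 'example';

    -- Words no longer than the input, ranked by SOUNDEX similarity (0-4):
    SELECT word
    FROM   dbo.words
    WHERE  LEN(word) <= LEN(@inp)
    ORDER  BY DIFFERENCE(word, @inp) DESC;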
How to view the full, flattened query underlying a Postgresql view? Posted: 11 May 2013 05:04 PM PDT If I have a view on a view on a view (etc.), is there a way to show the full query that this expands/flattens to? |
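There is no single built-in that recursively inlines every level, but two standard tools get close: pg_get_viewdef prints one layer at a time, and EXPLAIN VERBOSE shows the plan after the rewriter has flattened all views (the view name below is a placeholder):

    -- Definition of one layer; repeat for each view it references:
    SELECT pg_get_viewdef('outer_view'::regclass, true);

    -- Plan text after all view layers have been expanded by the rewriter:
    EXPLAIN VERBOSE SELECT * FROM outer_view;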
How to ensure that date-range queries involving multiple time zones are sargable Posted: 11 May 2013 08:06 PM PDT I am building a web analytics application for search engine traffic only. You can see some screenshots here: http://myhappyanalytics.com/ It works similarly to Google Analytics, but it only saves and shows you data from search traffic: visitors, keywords, pages and page views. Since it's an application that will store a large number of rows, I want to make sure it won't overload the server in the first month after launch. I am currently using MySQL with the InnoDB engine, and this is the database structure for the 4 main tables: Right now, with a website that has 30,000 monthly visits, some queries are slow, because what I need to do is select between dates and then, for the charts, group data by day. What I use right now for the date is the field "date_add", with DATETIME column type; I store the date in UTC and then convert it to the timezone of the website. I think the main problem is that I am doing too many conversions on the date_add field (for selecting, for comparing and for grouping), and I am also adding or subtracting the timezone offset. I also don't know if I should index the date field. Example query that I use to get the data for the visits chart: One thing I had in mind is to:
So do you think these changes will improve performance? Or is there a better way to do it? PS: For the production server I was thinking of starting with a dedicated server with some 16-32 GB of RAM, because I know that giving more memory to the MySQL buffers is also very important.
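A sketch of the usual fix, assuming the table is called visitors and the site's zone is Europe/Bucharest (both placeholders): keep date_add in UTC with an index on it, and convert the range boundaries instead of the column, so the WHERE clause stays sargable. CONVERT_TZ requires MySQL's time zone tables to be loaded.

    -- Index the UTC column once:
    ALTER TABLE visitors ADD INDEX ix_visitors_date_add (date_add);

    -- Boundaries are converted to UTC; the indexed column is left untouched:
    SELECT DATE(CONVERT_TZ(date_add, 'UTC', 'Europe/Bucharest')) AS local_day,
           COUNT(*) AS visits
    FROM   visitors
    WHERE  date_add >= CONVERT_TZ('2013-05-01 00:00:00', 'Europe/Bucharest', 'UTC')
      AND  date_add <  CONVERT_TZ('2013-06-01 00:00:00', 'Europe/Bucharest', 'UTC')
    GROUP  BY local_day;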
createdb command error in postgres sql Posted: 11 May 2013 10:27 AM PDT I am using the following command for creating a database in PostgreSQL: createdb -D pg_default -E UTF8 -h localhost -p 5432 -U pramil -W pramil mydb but this command prompts for a password. When I enter the localhost server password, it fails with: createdb: could not connect to database postgres: fe_sendauth: no password supplied Can anyone help me?
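A sketch of the usual workarounds, assuming it is the pramil role's password being asked for. Note that createdb's -W flag only forces a password prompt and takes no argument, so "-W pramil mydb" would be parsed as two positional arguments, which may itself be the problem:

    # Supply the role's password via the environment for one call:
    PGPASSWORD='secret' createdb -E UTF8 -h localhost -p 5432 -U pramil mydb

    # Or store it in ~/.pgpass (format host:port:database:username:password):
    echo 'localhost:5432:*:pramil:secret' >> ~/.pgpass
    chmod 0600 ~/.pgpass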
passing --single-transaction in mysqldump command line tool Posted: 11 May 2013 04:29 AM PDT I'm using a MySQL GUI tool to back up my InnoDB database, and I use the --single-transaction option. I have noticed, however, that if I back up using the command-line mysqldump tool, the backup process speeds up noticeably. I am wondering if it is possible to pass --single-transaction on the command line as well.
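A minimal command-line sketch (credentials and database name are placeholders):

    # --single-transaction takes a consistent snapshot for InnoDB tables
    # without locking them for the duration of the dump:
    mysqldump --single-transaction -u root -p mydatabase > mydatabase.sql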
dead lock when updating Posted: 11 May 2013 11:06 AM PDT The error is:
grant execute on utl_file to user in Oracle 11g XE Posted: 11 May 2013 02:06 AM PDT Then I get: ORA-24372: invalid object for describe How can I access UTL_FILE from other users in 11g XE? I already tried with PUBLIC as well.
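For reference, the grant itself is a one-liner run as SYS; and since ORA-24372 usually means the object being described is invalid rather than inaccessible, recompiling the package is worth a try (a sketch; the grantee name is a placeholder):

    -- Grant execute to one user (or to PUBLIC):
    GRANT EXECUTE ON SYS.UTL_FILE TO myuser;

    -- ORA-24372 suggests the package is invalid; recompile it:
    ALTER PACKAGE SYS.UTL_FILE COMPILE;
    ALTER PACKAGE SYS.UTL_FILE COMPILE BODY;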
SQL Index order and performance based on cardinality and data Posted: 11 May 2013 05:06 PM PDT I've been working quite a bit on a query that's not operating very efficiently on a DB2 database. The thing will return up to 30k rows or so and take up to 10 seconds. I've been working on getting it to run faster and I think I've finally tracked the problem down to an improper index. A new and more appropriate index should fix it, but I'm not entirely sure how the index should be ordered, as there appear to be different orderings for different data. The query itself isn't terribly complicated. Here's a brief rundown: A few details about these tables:
The join between A and B is taking a third of the query cost. The explain plan shows a FETCH operation that uses a particular index four times. I was able to get a VERY detailed explain plan report and found that this operation is returning all 400k rows and then performing the join, which only needs about 26k of them. I found a short-circuit method that added this to the predicate: Logically this doesn't affect the result set, but it does tweak the optimizer. Adding it cut the query time in half. Checking the detailed explain showed that it added an additional filter factor that reduced the number of estimated rows in that FETCH operation to 26k or so, which is about right. This in turn reduced the estimate for the number of rows from A that would be returned overall, and enabled index scans on A that hadn't been used before because the optimizer thought it could be grabbing nearly half of A. The result was a much faster query. However, the COALESCE is a hack and not suited for production, though it was handy for finding the issue. So now it falls to creating a new index on B to fix this problem. The DB2 query analyzer suggested an index on all six fields, starting with B.TYPE_CODE and following with the rest in no particular logical order. This would cover all selected fields and the table itself would not need to be touched, which certainly has its advantages. It has also been suggested that an index on (B.A_ID1, B.A_ID2, B.TYPE_CODE) could be used, as the more selective columns come first, which would narrow the results faster. I've seen different suggestions based on the cardinality of the data, so I'm wondering if anyone has some advice on how to construct the index here. I've been doing a lot of reading on indexes lately, but it can be difficult to find a good guide on certain aspects. UPDATE: Well, DB2 refuses to use the new index and keeps using the existing index of (TYPE_CODE, B.ID1, B.ID2), and those latter two are the primary key for B, not to be confused with B.A_ID1, B.A_ID2. I think it has to do with the optimized query grabbing B.ID1 and B.ID2, even though the original query doesn't touch those fields, and using them as a SORTKEY. It may be that any index will need to include those fields.
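A sketch of the two candidate indexes under discussion, using the question's placeholder names; the second variant also carries B's primary-key columns, in case the optimizer keeps demanding them for its SORTKEY:

    -- Join columns first, then the equality filter:
    CREATE INDEX IX_B_AIDS_TYPE ON B (A_ID1, A_ID2, TYPE_CODE);

    -- Variant that also includes B's PK columns:
    CREATE INDEX IX_B_AIDS_TYPE_PK ON B (A_ID1, A_ID2, TYPE_CODE, ID1, ID2);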
Multiple SQL Server data files on same SAN disk Posted: 11 May 2013 08:06 AM PDT I'm currently in the process of creating a new database, and have previously only ever used a single data file and a single log file. I've done some research online regarding the benefits of multiple data files, but opinions are mixed about their value on a single disk. So my question is: if the only disk available to me is a SAN disk, would I see any performance benefits (such as multithreading/parallel processing) from splitting the data into 4 separate files, or would I be better off with a single file? Also, would there be any performance benefit to separating the indexes (and possibly an archive table) into their own filegroups on separate files on the same disk, or would this only provide administrative benefits? Thanks!
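For what it's worth, the filegroup split is cheap to set up if you later want to test it; a sketch with hypothetical names and paths (whether it helps on a single SAN LUN can only be settled by measuring):

    -- Add a filegroup for indexes and one data file inside it:
    ALTER DATABASE MyDb ADD FILEGROUP FG_Indexes;
    ALTER DATABASE MyDb ADD FILE (
        NAME       = N'MyDb_Indexes1',
        FILENAME   = N'E:\SQLData\MyDb_Indexes1.ndf',
        SIZE       = 1GB,
        FILEGROWTH = 256MB
    ) TO FILEGROUP FG_Indexes;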
Oracle Undo tablespace users Posted: 11 May 2013 03:06 AM PDT Can anybody help me find out which users are using the undo tablespace, and how much, in Oracle Database 11g R2? Any pointers would be much appreciated.
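A sketch using the standard v$ views; used_ublk multiplied by the undo tablespace block size gives the bytes of undo held per active transaction:

    -- Active transactions and the undo they hold, per session:
    SELECT s.sid,
           s.username,
           t.used_ublk,   -- undo blocks in use
           t.used_urec    -- undo records in use
    FROM   v$transaction t
    JOIN   v$session     s ON s.taddr = t.addr
    ORDER  BY t.used_ublk DESC;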
Creating the MySQL slow query log file Posted: 11 May 2013 10:06 AM PDT What do I need to do to generate the slow query log file in MySQL? I did: What more do I need to do?
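A minimal sketch (MySQL 5.1+; the file path is illustrative). The same settings can go in my.cnf under [mysqld]:

    -- Enable at runtime without a restart:
    SET GLOBAL slow_query_log      = 'ON';
    SET GLOBAL slow_query_log_file = '/var/log/mysql/mysql-slow.log';
    SET GLOBAL long_query_time     = 1;  -- seconds; slower queries are logged

    -- Verify:
    SHOW VARIABLES LIKE 'slow_query%';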
Breaking Semisynchronous Replication in MySQL 5.5 Posted: 11 May 2013 04:06 PM PDT I've set up semisynchronous replication between two MySQL 5.5 servers running on Windows 7. My application is running and updating the master server's database, and the same updates are applied on the slave database server. But for unknown reasons, replication sometimes breaks. On running the command: It gives this status: Ideally, in semisynchronous replication, when the sync breaks the status should show OFF, since the master is not receiving any acknowledgement from the slave. Please help us in this regard.
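For diagnosis, the semisync plugin exposes its state and fallback counters directly; rpl_semi_sync_master_timeout (default 10000 ms) controls when the master silently falls back to asynchronous replication (a sketch):

    -- On the master:
    SHOW STATUS LIKE 'Rpl_semi_sync_master_status';   -- ON = semisync active
    SHOW STATUS LIKE 'Rpl_semi_sync_master_no_tx';    -- commits not acknowledged
    SHOW VARIABLES LIKE 'rpl_semi_sync_master_timeout';

    -- On the slave:
    SHOW STATUS LIKE 'Rpl_semi_sync_slave_status';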
how to add attachment(text file) to database mail? Posted: 11 May 2013 06:06 AM PDT I have a scenario: daily I run a SQL job to apply new updates to one table. This job creates one text file per day, containing all the new updates. I can send a mail to the client saying the job completed successfully; now I need to send him the text file as an attachment. Is there any way to send an attachment through the GUI (SQL Server job settings)? I can't run a script. I googled this scenario but found no information on doing it from the GUI; I could only find ways to do it with scripts.
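As far as I know, the built-in job-notification mail cannot attach files, so this usually becomes a final job step that calls sp_send_dbmail, which does support attachments (profile, recipient, and path below are placeholders):

    EXEC msdb.dbo.sp_send_dbmail
         @profile_name     = 'MyMailProfile',
         @recipients       = 'client@example.com',
         @subject          = 'Daily update job completed',
         @body             = 'The daily update finished; file attached.',
         @file_attachments = 'C:\Jobs\daily_updates.txt';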
DB2 Server Table Space Locked Posted: 11 May 2013 06:06 PM PDT At work we keep receiving the following DataException, seemingly at random, when one of our processes tries to write to or access a table for one of our clients: Has anyone encountered this? I'm not the person who primarily does administrative tasks on our databases, but even they seem to be having difficulty finding the root of this problem. Any suggestions? Has anyone encountered this before? This error comes up for only one of our clients at a time, and it generally seems to rotate. We have Rackspace service, but they won't be of much help unless we can provide screenshots, etc. at the exact moment this occurs. Apologies if this post is too vague; please let me know what information to supply to clarify things. I'm one of the developers in my office, but I don't primarily handle the administrative tasks on our databases. Edit: We spoke with IBM, and this could possibly be caused by some sort of virus scan being run by IBM/Rackspace as part of maintenance? They said this kind of dubiously, though, so I doubt it is the culprit, because tables remained locked for variable amounts of time.
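When it recurs, capturing the table space state at that moment may narrow things down. Assuming DB2 9.7 or later, the MON_GET_TABLESPACE table function reports it directly (a sketch):

    -- State of every table space across all members; anything other than
    -- NORMAL (e.g. BACKUP_IN_PROGRESS, QUIESCED_SHARE) is a lead:
    SELECT TBSP_NAME, TBSP_STATE
    FROM   TABLE(MON_GET_TABLESPACE(NULL, -2)) AS T;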
Efficient way to move rows across the tables? Posted: 11 May 2013 04:06 AM PDT This is a somewhat long question, as I would like to explain all the details of the problem.

System Description
We have a queue of incoming messages from external system(s). Messages are immediately stored in e.g. the INBOX table. A few thread workers fetch a job chunk from the table (first mark some messages with UPDATE, then SELECT the marked messages). The workers do not process the messages; they dispatch them to different internal components (called 'processors'), depending on the message command. Each message contains several text fields (the longest is around 200 varchars), a few ids, some timestamp(s), etc.; 10-15 columns total. Each internal component (i.e. processor) that processes messages works differently. Some process the message immediately; others trigger some long operation, even communicating via HTTP with other parts of the system. In other words, we cannot just process a message from the INBOX and then remove it; we must work with that message for a while (async task). Still, there are not too many processors in the system, up to 10. Messages are all internal, i.e. it is not important for the user to browse them, paginate, etc. The user may require a list of processed relevant messages, but that's not a mission-critical feature, so it does not have to be fast. Some invalid messages may be deleted sometimes. It is important to emphasize that the expected traffic might be quite high, and we don't want bottlenecks caused by bad database design. The database is MySQL.

Decision
One of the decisions is not to have one big table for all messages, with some flag column indicating the various message states. The idea is to have a table per processor, and to move messages around. For example, received messages are stored in INBOX, then moved by the dispatcher to e.g. the PROCESSOR_1 table, and finally moved to the ARCHIVE table. There should not be more than 2 such movements. While in the processing state, we do allow flags for indicating processing-specific states, if any. In other words, the PROCESSOR_X table may track the state of the messages, since the number of currently processing messages will be significantly smaller. The reason for this is not to use one BIG table for everything.

Question
Since we are moving messages around, I wonder how expensive this is with high volumes. Which of the following scenarios is better: (A) to have all the separate similar tables, as explained, and move complete message rows, e.g. read the complete row from INBOX, write it to the PROCESSOR table (with some additional columns), delete it from INBOX; or (B) to prevent physical movement of the content, have one big MESSAGES table that just stores the content (and still not the state). We would still have the other tables, as explained above, but they would contain just the IDs of messages plus additional columns. So now, when a message is about to move, we physically move much less data: just the IDs. The rest of the message remains in the MESSAGES table, unmodified, all the time. In other words, is there a penalty in a SQL join between one small and one huge table? Thank you for your patience; I hope I was clear enough.
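A sketch of what option (A)'s movement looks like as one atomic unit: claim a chunk, copy it, delete it (table and column names are placeholders; InnoDB assumed):

    START TRANSACTION;

    -- Claim a chunk for this worker:
    UPDATE inbox
    SET    worker_id = 42
    WHERE  worker_id IS NULL
    LIMIT  100;

    -- Move the claimed rows to the processor's table:
    INSERT INTO processor_1 (id, command, payload, received_at)
    SELECT id, command, payload, received_at
    FROM   inbox
    WHERE  worker_id = 42;

    DELETE FROM inbox WHERE worker_id = 42;

    COMMIT;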
unable to login oracle as sysdba Posted: 11 May 2013 07:06 PM PDT I just got 11gR2 running and was able to connect as SYSDBA. I shut down and started up the database to mount a new pfile. Now I cannot log in as SYSDBA. My parameter for the password file is: I am using SQL*Plus on the server itself; this is not a remote connection. Here's some more information:
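A sketch of the usual checks: remote_login_passwordfile governs password-file logins, while "/ as sysdba" relies on OS group membership (SID, password, and paths below are placeholders):

    SQL> SHOW PARAMETER remote_login_passwordfile   -- expect EXCLUSIVE

    $ orapwd file=$ORACLE_HOME/dbs/orapwORCL password=MySysPwd entries=5
    $ sqlplus / as sysdba    # works only if the OS user is in the dba group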
"connectivity libraries needed are not found" error in IBM Data Studio Posted: 11 May 2013 03:06 PM PDT UPDATE I am getting the following error when I try to create a new database in IBM Data Studio v3.1.1.0. I have already started the instance using command. After searching exhaustively, I am not able to find any help on the internet regarding this error. |
Connecting to AWS RDS DBInstance with MySQL workbench Posted: 11 May 2013 09:06 AM PDT I am trying to evaluate AWS RDS as a possible DB for our application. I have created a DBInstance (micro) and added an IP security group. I am able to connect to the DBInstance from the XAMPP shell command line and run queries from there. But when I try to connect to the same DBInstance with Workbench, it gives me the error 'MySQL server has gone away'. Following are the steps I followed to connect with Workbench:
My database is completely new; I have just created a plain DB and am trying to connect with Workbench right away. I am using Workbench version 5.2.25. Any ideas on what I am missing? And yes, I am just a beginner with MySQL and AWS. Edit: As asked by RolandoMySQLDBA, I have run the query. I tried using both
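'MySQL server has gone away' from a client that can otherwise reach the instance usually comes down to packet size or idle timeouts; these are the variables to compare between the working shell session and Workbench (a sketch, run from the working connection; on RDS they are changed through a DB parameter group rather than SET GLOBAL):

    SHOW VARIABLES LIKE 'max_allowed_packet';
    SHOW VARIABLES LIKE 'wait_timeout';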
mysql replication delay very serious Posted: 11 May 2013 01:19 PM PDT We use an SSD disk for the master database and a SATA disk for the slave. Under high write load (300 writes/s), the slave delay is very serious. I have set:
on the slave, but the delay continues. The slave is for backup only; there are no read or write requests against it. Seconds_Behind_Master: 92265, and it keeps increasing. Some conf:
    innodb_buffer_pool_size = 15G
    innodb_log_file_size = 1G
    innodb_log_buffer_size = 32M
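For a backup-only replica, the usual lever is relaxing durability on the slave, since a crash there costs only a re-sync; a my.cnf sketch with illustrative values (note that MySQL 5.5's single-threaded replication SQL thread remains the hard ceiling on apply rate):

    [mysqld]
    # Flush the InnoDB redo log once per second instead of at every commit:
    innodb_flush_log_at_trx_commit = 2
    # Let the OS schedule binary log flushes:
    sync_binlog = 0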
SQL developer: Setup debugger for plsql Posted: 11 May 2013 05:06 AM PDT I'm trying to debug PL/SQL remotely, but I can't; the database returns an error. What should I do to fix this and start debugging? UPD
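The most common cause is missing debug privileges for the connecting user; a sketch of the grants (run as a DBA), plus the network ACL that Oracle 11g additionally requires for the debugger's JDWP callback to the client (user and host names are placeholders):

    GRANT DEBUG CONNECT SESSION TO myuser;
    GRANT DEBUG ANY PROCEDURE TO myuser;

    -- 11g only: allow the database to open the callback connection:
    BEGIN
      DBMS_NETWORK_ACL_ADMIN.create_acl(
        acl         => 'debugger.xml',
        description => 'JDWP debugging',
        principal   => 'MYUSER',
        is_grant    => TRUE,
        privilege   => 'connect');
      DBMS_NETWORK_ACL_ADMIN.assign_acl(
        acl  => 'debugger.xml',
        host => 'client-host');
      COMMIT;
    END;
    /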
Missing quotation - openquery to oracle Posted: 11 May 2013 07:13 AM PDT I am not able to successfully run code with OPENQUERY to an Oracle server. Please do not take the field names/data types into account, as I can present only a part of the whole procedure; it's quite long. I believe the problem lies in the quotation marks, etc. The procedure compiles all right, but each time I execute it, an error occurs: I am lost: where the heck should I put those missing quotation marks?
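The usual trap: every single quote that should reach Oracle has to be doubled inside the T-SQL string literal, and doubled again if the whole statement is built dynamically (a sketch; linked-server and object names are placeholders):

    -- Static form: '' inside the literal becomes ' on the Oracle side.
    SELECT *
    FROM OPENQUERY(ORACLE_LINK,
        'SELECT col1 FROM schema.some_table WHERE status = ''ACTIVE''');

    -- Dynamic form: four quotes in the outer string yield '' in the inner one.
    DECLARE @sql nvarchar(max) =
        N'SELECT * FROM OPENQUERY(ORACLE_LINK,
           ''SELECT col1 FROM schema.some_table WHERE status = ''''ACTIVE'''''')';
    EXEC (@sql);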
Data dictionary best practices in SQL Server 2008 r2 Posted: 11 May 2013 08:06 PM PDT We are interested in sharing the metadata and data dictionary among the team. I know that we can use Extended Properties for this purpose, but in my experience they get out of date easily, because team members tend to forget to update them or skip that step. I'm wondering if there is a more convenient way to create a data dictionary that can be maintained with the least amount of effort and time. Thank you.
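For what it's worth, the extended-property route gets easier to keep current when it is scriptable; a sketch using the MS_Description name that SSMS itself uses, plus a query to flag undocumented columns so the gap check can run automatically (object names are placeholders):

    -- Document a column:
    EXEC sys.sp_addextendedproperty
         @name = N'MS_Description',
         @value = N'Date the row was received, in UTC',
         @level0type = N'SCHEMA', @level0name = N'dbo',
         @level1type = N'TABLE',  @level1name = N'Visits',
         @level2type = N'COLUMN', @level2name = N'date_add';

    -- Find columns nobody has documented yet:
    SELECT t.name AS table_name, c.name AS column_name
    FROM   sys.columns c
    JOIN   sys.tables  t ON t.object_id = c.object_id
    LEFT   JOIN sys.extended_properties ep
           ON ep.major_id = c.object_id
          AND ep.minor_id = c.column_id
          AND ep.name     = N'MS_Description'
    WHERE  ep.value IS NULL;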
Maintenance Plan fails but the query it generates Runs Posted: 11 May 2013 03:25 AM PDT I have a SQL Server 2005 Enterprise Edition instance whose maintenance plan fails constantly with the error: But the query that this maintenance plan generates: runs normally. Even stranger is the error message: "Cannot open backup device 'C:\Program Files(...)"; maybe there's a difference between how SSIS and SSMS/SQL Agent handle the backslash? The user for SQL Agent, and the user with which I ran this query successfully, was in all these cases a domain user "ABC.MyDomainUser" that has permission to access the network mapping \myNetworkDrive. I even used MSTSC to log in to the server that runs SQL Server and ran the query locally; it runs fine and only fails in the maintenance plan. Is this a bug? What am I missing here? What is the elegant way to back up to a network location? Thanks in advance, Lynx Kepler
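One thing worth ruling out: drive letters are mapped per logon session, so a mapping that exists in your interactive session generally does not exist for the service that executes the plan, even under the same domain user. Backing up straight to the UNC path sidesteps that (names below are placeholders; the service account needs write permission on the share):

    BACKUP DATABASE MyDb
    TO DISK = N'\\myNetworkDrive\Backups\MyDb_full.bak'
    WITH INIT, CHECKSUM;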