Saturday, May 11, 2013

[how to] mysql replication successful but slave not replicating

mysql replication successful but slave not replicating

Posted: 11 May 2013 06:07 PM PDT

I have created a MySQL master-slave configuration and things look fine. SHOW SLAVE STATUS on the slave doesn't show any error. This is the output:

  Slave_IO_State: Waiting for master to send event
  Master_Host: 109.123.100.58
  Master_User: replica
  Master_Port: 3306
  Connect_Retry: 60
  Master_Log_File: mysql-bin.000001
  Read_Master_Log_Pos: 106
  Relay_Log_File: relay-bin.000001
  Relay_Log_Pos: 4
  Relay_Master_Log_File: mysql-bin.000001
  Slave_IO_Running: Yes
  Slave_SQL_Running: Yes
  Replicate_Do_DB:
  Replicate_Ignore_DB:
  Replicate_Do_Table:
  Replicate_Ignore_Table:
  Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
  Last_Errno: 0
  Last_Error:
  Skip_Counter: 0
  Exec_Master_Log_Pos: 106
  Relay_Log_Space: 106
  Until_Condition: None
  Until_Log_File:
  Until_Log_Pos: 0
  Master_SSL_Allowed: No
  Master_SSL_CA_File:
  Master_SSL_CA_Path:
  Master_SSL_Cert:
  Master_SSL_Cipher:
  Master_SSL_Key:
  Seconds_Behind_Master: 0
  Master_SSL_Verify_Server_Cert: No
  Last_IO_Errno: 0
  Last_IO_Error:
  Last_SQL_Errno: 0
  Last_SQL_Error:

However, replication on the slave doesn't seem to be happening when the master undergoes changes.

On the master, SHOW MASTER STATUS gives:

  +------------------+----------+--------------+-------------------------+
  | File             | Position | Binlog_Do_DB | Binlog_Ignore_DB        |
  +------------------+----------+--------------+-------------------------+
  | mysql-bin.000001 |   639495 |              | mysql,informationschema |
  +------------------+----------+--------------+-------------------------+

On the slave, SHOW PROCESSLIST yields:

  +-----+-------------+-----------+------+---------+-------+-----------------------------------------------------------------------+------------------+
  | Id  | User        | Host      | db   | Command | Time  | State                                                                 | Info             |
  +-----+-------------+-----------+------+---------+-------+-----------------------------------------------------------------------+------------------+
  | 174 | system user |           | NULL | Connect | 25777 | Waiting for master to send event                                      | NULL             |
  | 175 | system user |           | NULL | Connect | 25777 | Has read all relay log; waiting for the slave I/O thread to update it | NULL             |
  | 199 | root        | localhost | NULL | Query   |     0 | NULL                                                                  | show processlist |
  +-----+-------------+-----------+------+---------+-------+-----------------------------------------------------------------------+------------------+
  3 rows in set (0.00 sec)

Am I missing something?
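One hedged thing to rule out first, since the I/O thread is connected yet Read_Master_Log_Pos never advances past 106 while the master is already at 639495: both servers running with the same server_id, a commonly cited cause of a slave that connects cleanly but never receives events. A quick check on each server:

  -- run on BOTH master and slave; the two values must differ,
  -- and log_bin must be ON on the master
  SHOW VARIABLES LIKE 'server_id';
  SHOW VARIABLES LIKE 'log_bin';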

Mistake during Oracle 11g PITR

Posted: 11 May 2013 01:11 PM PDT

I tried using SET UNTIL TIME and mistyped the date. Can anyone help me understand how to get my backups back into a manageable state?

After the accidental recovery, most of my backup sets disappeared.

I recovered them and used CATALOG RECOVERY AREA, and they're listed by LIST BACKUPSET. But something still isn't right.

When I do a PITR now, I get messages that my .dbf files aren't available.

Also, LIST BACKUPSET does show backup sets, but they are listed differently from the files that weren't involved in the 'bad' recovery.

Gists with the error and the list of backup sets are here: https://gist.github.com/akinsgre/5561254
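As a rough starting point (hedged, since the exact state of the RMAN repository isn't visible from here), getting the control file's view of the backups back in sync with what is actually on disk usually looks something like this:

  RMAN> CROSSCHECK BACKUP;                # mark backup pieces that no longer exist as EXPIRED
  RMAN> DELETE EXPIRED BACKUP;            # clear the stale records out of the repository
  RMAN> CATALOG RECOVERY AREA NOPROMPT;   # re-register anything copied back into the FRA
  RMAN> LIST BACKUP OF DATABASE SUMMARY;  # confirm what RMAN can actually use now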

how to find similar word with more similarities (length less than or equal)

Posted: 11 May 2013 12:42 PM PDT

How can I find words in the string that are similar to the search term and whose length is less than or equal to it?

  declare @inp nvarchar(max), @data nvarchar(max)
  set @inp = 'You can dance, you can jive, having .... jove... jve, ...'
  set @data = 'jeve'
  select @inp as results where @inp like '%' + @data + '%'

This does not return any data, but I want it to show:

  results
  =====
  jive
  jove
  jve
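A possible sketch, assuming SQL Server 2016+ for STRING_SPLIT and reading "similar" as "same SOUNDEX code and no longer than the search term" (both of these are assumptions, not part of the question):

  declare @inp nvarchar(max) = N'You can dance, you can jive, having .... jove... jve, ...'
  declare @data nvarchar(max) = N'jeve'

  select ltrim(rtrim(value)) as results
  from STRING_SPLIT(replace(replace(@inp, ',', ' '), '.', ' '), ' ')
  where value <> ''
    and SOUNDEX(value) = SOUNDEX(@data)   -- phonetic similarity
    and LEN(value) <= LEN(@data);         -- length less than or equal

On a pre-2016 server the split would need a numbers table or an XML trick, but the SOUNDEX/LEN filter stays the same.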

How to view the full, flattened query underlying a Postgresql view?

Posted: 11 May 2013 05:04 PM PDT

If I have a view on a view on a view (etc.), is there a way to show the full query that this expands/flattens to?
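As far as I know there is no single built-in call that returns the fully inlined text, but two things come close (outer_view is a placeholder name):

  -- one level of definition at a time
  SELECT pg_get_viewdef('outer_view'::regclass, true);

  -- the planner's view of the query after all nested views have been expanded
  EXPLAIN (VERBOSE) SELECT * FROM outer_view;

Applying pg_get_viewdef repeatedly down the chain of views is the manual way to reconstruct the flattened query.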

How to ensure that date-range queries involving multiple time zones are sargable

Posted: 11 May 2013 08:06 PM PDT

I am building a web analytics application for search engine traffic only. You can see some screenshots here: http://myhappyanalytics.com/

It works similarly to Google Analytics, but it only saves and shows data from search traffic: visitors, keywords, pages, and page views.

Since it's an application that will store a large number of rows, I want to make sure it won't overload the server in the first month after launch.

I am currently using MySQL with the InnoDB engine, and this is the database structure for the 4 main tables:

  CREATE TABLE IF NOT EXISTS `keyword` (
    `id_keyword` int(11) NOT NULL AUTO_INCREMENT,
    `id_website` int(11) NOT NULL,
    `keyword` varchar(255) NOT NULL,
    `position` int(11) DEFAULT NULL,
    `date_add` datetime NOT NULL,
    `date_upd` datetime NOT NULL,
    PRIMARY KEY (`id_keyword`),
    KEY `fk_keyword_website1_idx` (`id_website`)
  ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=33290 ;

  CREATE TABLE IF NOT EXISTS `page` (
    `id_page` int(11) NOT NULL AUTO_INCREMENT,
    `id_website` int(11) NOT NULL,
    `url` varchar(1000) NOT NULL,
    `path` varchar(1000) DEFAULT NULL,
    `date_add` datetime NOT NULL,
    `date_upd` datetime NOT NULL,
    PRIMARY KEY (`id_page`),
    KEY `fk_page_website1_idx` (`id_website`)
  ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=65167 ;

  CREATE TABLE IF NOT EXISTS `page_view` (
    `id_page_view` bigint(20) NOT NULL AUTO_INCREMENT,
    `id_visit` int(11) NOT NULL,
    `id_page` int(11) NOT NULL,
    `id_website` int(11) NOT NULL,
    `date_add` datetime DEFAULT NULL,
    PRIMARY KEY (`id_page_view`),
    KEY `fk_page_view_visit1_idx` (`id_visit`),
    KEY `fk_page_view_page1_idx` (`id_page`),
    KEY `id_website` (`id_website`)
  ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=180240 ;

  CREATE TABLE IF NOT EXISTS `visit` (
    `id_visit` int(11) NOT NULL AUTO_INCREMENT,
    `id_keyword` int(11) NOT NULL,
    `id_page` int(11) NOT NULL,
    `id_website` int(11) NOT NULL,
    `id_search_engine` int(11) DEFAULT NULL,
    `id_guest` int(11) DEFAULT NULL,
    `position` int(11) DEFAULT NULL,
    `ip` int(11) NOT NULL,
    `date_add` datetime NOT NULL,
    PRIMARY KEY (`id_visit`),
    KEY `fk_visit_keyword1_idx` (`id_keyword`),
    KEY `fk_visit_page1_idx` (`id_page`),
    KEY `fk_visit_website1_idx` (`id_website`),
    KEY `id_search_engine` (`id_search_engine`),
    KEY `id_guest` (`id_guest`,`timestamp`)
  ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=47335 ;

Right now, with a website that has 30,000 monthly visits, some queries are slow, because I need to select between dates and then, for the charts, group the data by day.

For the date I currently use the field "date_add" (a DATETIME column); I store the value in UTC and then convert it to the time zone of the website.

I think the main problem is that I am doing too many conversions on the date_add field (for selecting, comparing, and grouping), and I am also adding or subtracting the time zone offset.

I also don't know if I should index the date field.

Example query that I use to get the data for the visits chart:

  SELECT DATE_FORMAT(DATE_ADD(t.date_add, INTERVAL 7200 second), "%Y-%m-%d") AS chartDay,
    count(t.id_visit) AS chartVisitCount, `t`.`id_visit` AS `t0_c0`,
    `keyword`.`id_keyword` AS `t1_c0`, `keyword`.`id_website` AS `t1_c1`,
    `keyword`.`keyword` AS `t1_c2`, `keyword`.`position` AS `t1_c3`,
    `keyword`.`date_add` AS `t1_c4`, `keyword`.`date_upd` AS `t1_c5`,
    `page`.`id_page` AS `t2_c0`, `page`.`id_website` AS `t2_c1`, `page`.`url` AS `t2_c2`,
    `page`.`path` AS `t2_c3`, `page`.`date_add` AS `t2_c4`, `page`.`date_upd` AS `t2_c5`,
    `engine`.`id_search_engine` AS `t3_c0`, `engine`.`name` AS `t3_c1`,
    `engine`.`code` AS `t3_c2`, `engine`.`host` AS `t3_c3`,
    `engine`.`r_keyword` AS `t3_c4`, `engine`.`r_position` AS `t3_c5`,
    `engine`.`date_add` AS `t3_c6`, `engine`.`date_upd` AS `t3_c7`
  FROM `visit` `t`
  LEFT OUTER JOIN `keyword` `keyword` ON (`t`.`id_keyword`=`keyword`.`id_keyword`)
  LEFT OUTER JOIN `page` `page` ON (`t`.`id_page`=`page`.`id_page`)
  LEFT OUTER JOIN `search_engine` `engine` ON (`t`.`id_search_engine`=`engine`.`id_search_engine`)
  WHERE ((t.id_website=21) AND ((t.date_add >= '2013-04-10 22:00:00'
    AND t.date_add <= '2013-05-11 21:59:59')))
  GROUP BY DATE_FORMAT(DATE_ADD(t.date_add, INTERVAL 7200 second), "%Y-%m-%d")

One thing I had in mind is to:

  • change the date_add to a TIMESTAMP or INT and index that column
  • add another column to store just the DATE without the time, and use it when I need grouping, and also index this column
  • and finally, stop saving data in UTC that needs converting, and save it directly in the time zone of that website

So do you think these changes will improve performance? Or are there better ways to do it?
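For concreteness, a hedged sketch of one of those directions: keep date_add in UTC, add a composite index so the range scan stays narrow, and convert only the range boundaries and the displayed day (the +02:00 offset and the index name are placeholders):

  ALTER TABLE `visit` ADD INDEX `idx_visit_website_date` (`id_website`, `date_add`);

  SELECT DATE_FORMAT(CONVERT_TZ(t.date_add, '+00:00', '+02:00'), '%Y-%m-%d') AS chartDay,
         COUNT(*) AS chartVisitCount
  FROM   `visit` t
  WHERE  t.id_website = 21
    AND  t.date_add >= '2013-04-10 22:00:00'   -- local midnight, pre-converted to UTC
    AND  t.date_add <  '2013-05-11 22:00:00'
  GROUP BY chartDay;

Because the WHERE clause compares the raw column against constants, it stays sargable; the conversion work is only done for the rows that survive the filter.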

PS: For production I was thinking of starting with a dedicated server with 16-32 GB RAM, because I know that giving more memory to the MySQL buffers is also very important.

createdb command error in PostgreSQL

Posted: 11 May 2013 10:27 AM PDT

I am using the following command to create a database in PostgreSQL:

  createdb -D pg_default -E UTF8 -h localhost -p 5432 -U pramil -W pramil mydb  

But this command prompts for a password. When I enter the localhost server password, I get the following:

  createdb: could not connect to database postgres: fe_sendauth: no password supplied

Can anyone help me?
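A hedged guess at what is going wrong: createdb treats the trailing positional arguments as the database name and its description, so the stray "pramil" before "mydb" becomes the database name, and for whatever reason the password typed at the prompt does not reach the server (the fe_sendauth message means the connection attempt carried no password). Something along these lines avoids the prompt entirely (the password value is obviously a placeholder):

  # supply the password via PGPASSWORD (or a ~/.pgpass file) and
  # keep only the real database name as the positional argument
  export PGPASSWORD='secret'
  createdb -D pg_default -E UTF8 -h localhost -p 5432 -U pramil mydb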

passing --single-transaction in mysqldump command line tool

Posted: 11 May 2013 04:29 AM PDT

I'm using a MySQL GUI tool to back up my InnoDB database, and I use the option '--single-transaction' to get a consistent backup.

I have, however, noticed that if I back up using the command-line mysqldump tool, the backup process speeds up noticeably. I am wondering: is it possible to pass --single-transaction to the command-line mysqldump utility?
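It is an ordinary command-line option, so something along these lines should work (user, database, and file names here are placeholders); note that --single-transaction only gives a consistent snapshot for transactional engines such as InnoDB:

  mysqldump --single-transaction --quick -u backup_user -p mydatabase > mydatabase.sql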

Deadlock when updating

Posted: 11 May 2013 11:06 AM PDT

  UPDATE Operation.TrTable
  SET    RecordId = RecordTABLE.newRecordId
  FROM   Operation.TrTable tr
         INNER JOIN (
             SELECT r.Id AS newRecordId, r.KeyM AS MappingKey
             FROM   Operation.Record r
             WHERE  r.KeyM > 0
         ) RecordTABLE
         ON RecordTABLE.MappingKey = tr.KeyM
  WHERE  tr.KeyM > 0

  UPDATE Operation.Record SET KeyM = 0
  WHERE  KeyM > 0

  UPDATE Operation.TrTable SET KeyM = 0
  WHERE  KeyM > 0

The error is:

Transaction (Process ID 93) was deadlocked on lock | communication buffer resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
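Without the full deadlock graph this is only a hedged guess, but indexing KeyM and breaking the blanket UPDATEs into smaller batches often shrinks the lock footprint enough to stop the deadlocks. A hypothetical batching of one of the resets (5000 is an arbitrary batch size):

  WHILE 1 = 1
  BEGIN
      UPDATE TOP (5000) Operation.Record
      SET    KeyM = 0
      WHERE  KeyM > 0;

      IF @@ROWCOUNT = 0 BREAK;
  END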

grant execute on utl_file to user in Oracle 11g XE

Posted: 11 May 2013 02:06 AM PDT

  sqlplus
  connect <sys>/<pwd>
  grant execute on utl_file to <user>
  connect <user>/<pwd>
  DESC utl_file

Then I get: ORA-24372: invalid object for describe

How can I access UTL_FILE from other users in 11g XE? I have already tried granting to PUBLIC as well.
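ORA-24372 usually points at an invalid object rather than a missing grant, so a hedged first check (run as a DBA) is whether SYS.UTL_FILE is actually valid, followed by describing it with the owner prefix from the grantee's session:

  SELECT owner, object_name, object_type, status
  FROM   dba_objects
  WHERE  object_name = 'UTL_FILE';

  -- then, connected as the grantee:
  DESC sys.utl_file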

SQL Index order and performance based on cardinality and data

Posted: 11 May 2013 05:06 PM PDT

I've been working quite a bit on a query that's not operating very efficiently on a DB2 database. The query returns up to 30k rows or so and can take up to 10 seconds. I've been working on getting it to run faster, and I think I've finally tracked the problem down to an improper index. A new, more appropriate index should fix it, but I'm not entirely sure how its columns should be ordered, as the best order seems to vary with the data.

The query itself isn't terribly complicated. Here's a brief rundown:

  SELECT
    ...
    B.TYPE_CODE,
    B.FIELD,
    ...
  FROM A
  INNER JOIN B
    ON A.ID1 = B.A_ID1
   AND A.ID2 = B.A_ID2
  --A few inner joins and some left joins--
  WHERE
    B.TYPE_CODE IN (:parameter) --No more than four values within the IN clause

A few details about these tables:

  • A has about 900k rows and B has about 2.7 million.
  • B.TYPE_CODE has 15 possible values. Stats are accurate. The four parameters together are contained in about 400k rows (about 15% of all entries in the table). The A_ID1/2 columns are much more unique, occurring 15 times at most, of course.
  • B already has an index on (TYPE_CODE, A_ID1, A_ID2). This is being used, but it's not optimal at all.
  • There are 6 columns being selected from B by the DB2-optimized query. It grabs A_ID1 and A_ID2 along with TYPE_CODE and FIELD, of course, but it's also grabbing B.ID1 and B.ID2, which together form the primary key for B. These fields are not used in the original query, but the optimizer is including them, although I don't think they're used in any joins or filters. They're just there.
  • B will have far more reads than write operations on it

The join between A and B is taking a third of the query cost. The explain plan shows a FETCH operation that uses a particular index four times. I was able to get a VERY detailed explain plan report and found that this operation is returning all 400k rows, then performing the join, which only needs about 26k of them.

I found a short-circuit method that added this to the predicate:

AND COALESCE(B.TYPE_CODE, B.TYPE_CODE) = B.TYPE_CODE  

Logically this doesn't affect the result set, but it does tweak the optimizer. Adding this cut the query time in half. Checking the detailed explain showed that it added an additional filter factor that reduced the number of estimated rows in that FETCH operation to 26k or so, which is about right. This in turn reduced the estimate for the number of rows from A that would be returned overall and enabled index scans on A that hadn't been used prior due to the optimizer thinking it could be grabbing nearly half of A. The result was a much faster query. However, the COALESCE is a hacky bit and not suited for production, but it was handy for finding an issue.

So now it falls to creating a new index on B to fix this problem. The DB2 query analyzer suggested an index on all six fields, starting with B.TYPE_CODE and following with the rest in no particular logical order. This would cover all selected fields and the table itself would not need to be touched, which certainly has its advantages. It has also been suggested that an index on (B.A_ID1, B.A_ID2, B.TYPE_CODE) could be used as the more selective columns are first, which would narrow the results faster. I've seen different suggestions based on the cardinality of the data, so I'm wondering if anyone has some advice on how to construct the index here. I've been doing a lot of reading on indexes lately, but it can be difficult to find a good guide on certain aspects.

UPDATE: Well, DB2 refuses to use the new index and keeps using the existing index on (TYPE_CODE, B.ID1, B.ID2), where those latter two are the primary key of B, not to be confused with B.A_ID1 and B.A_ID2. I think it has to do with the optimized query grabbing B.ID1 and B.ID2, even though the original query doesn't touch those fields, and using them as a SORTKEY. It may be that any index will need to include those fields.
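For reference, a hedged sketch of the alternative ordering discussed above. As far as I recall, DB2 LUW only allows INCLUDE columns on unique indexes, so the selected column simply goes at the end of the key, and a statistics refresh afterwards gives the optimizer a chance to pick the index up (index and schema names are placeholders):

  CREATE INDEX B_JOIN_TYPE_IX
      ON B (A_ID1, A_ID2, TYPE_CODE, FIELD);

  CALL SYSPROC.ADMIN_CMD('RUNSTATS ON TABLE MYSCHEMA.B WITH DISTRIBUTION AND DETAILED INDEXES ALL');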

Multiple SQL Server data files on same SAN disk

Posted: 11 May 2013 08:06 AM PDT

I'm currently in the process of creating a new database, and I have previously only ever used a single data file and a single log file. I've done some research online regarding the benefits of multiple data files, but I've found mixed opinions about the benefits of multiple data files on a single disk. So my question is: if the only disk available to me is a SAN disk, would I see any performance benefit (such as multithreading/parallel processing) from splitting the data into 4 separate files, or would I be better off with a single file?

Also, would there be any performance benefit to separating the indexes (and possibly an archive table) into their own filegroups on separate files on the same disk, or would this only provide administrative benefits?

Thanks!

Oracle Undo tablespace users

Posted: 11 May 2013 03:06 AM PDT

Can anybody help me find out which users are using the undo tablespace, and how much of it, in Oracle Database 11g R2? Any pointers would be much appreciated.
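A hedged sketch of one way to see this for currently active transactions (undo usage is tracked per transaction, so it is attributed here to the owning session):

  SELECT s.username,
         s.sid,
         t.used_ublk                      AS undo_blocks,
         t.used_ublk * TO_NUMBER(p.value) AS undo_bytes
  FROM   v$transaction t
         JOIN v$session s ON s.saddr = t.ses_addr
         CROSS JOIN (SELECT value FROM v$parameter WHERE name = 'db_block_size') p
  ORDER  BY undo_bytes DESC;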

Creating the MySQL slow query log file

Posted: 11 May 2013 10:06 AM PDT

What do I need to do to generate the slow logs file in MySQL?

I did:

  log_slow_queries = C:\Program Files\MySQL\MySQL Server 5.1\mysql-slow.log
  long_query_time  = 1

What more do I need to do?
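A hedged sketch for MySQL 5.1 (assuming 5.1.29 or later, where the newer variable names exist and can be changed at runtime); the server account also needs write permission on the target file:

  SET GLOBAL slow_query_log      = 1;
  SET GLOBAL slow_query_log_file = 'C:/Program Files/MySQL/MySQL Server 5.1/mysql-slow.log';
  SET GLOBAL long_query_time     = 1;

  -- confirm the settings took effect
  SHOW VARIABLES LIKE 'slow_query_log%';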

Breaking Semisynchronous Replication in MySQL 5.5

Posted: 11 May 2013 04:06 PM PDT

I've set up Semisynchronous Replication between two MySQL 5.5 servers running on Windows 7.

My application is running and updating the database on the master server, and the same changes are being applied on the slave database server.

But sometimes, for unknown reasons, replication breaks.

On running the command:

SHOW STATUS LIKE 'Rpl_semi_sync%';  

It gives this status:

  'Rpl_semi_sync_master_no_times', '0'
  'Rpl_semi_sync_master_no_tx', '0'
  'Rpl_semi_sync_master_status', 'ON'     <<-------------
  'Rpl_semi_sync_master_timefunc_failures', '0'
  'Rpl_semi_sync_master_tx_avg_wait_time', '338846'
  'Rpl_semi_sync_master_tx_wait_time', '29479685'
  'Rpl_semi_sync_master_tx_waits', '87'
  'Rpl_semi_sync_master_wait_pos_backtraverse', '0'
  'Rpl_semi_sync_master_wait_sessions', '0'
  'Rpl_semi_sync_master_yes_tx', '3106'

Ideally, in semisynchronous replication, when the sync breaks the status should switch to OFF, since the master is no longer receiving acknowledgements from the slave. Please help us in this regard.
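A hedged place to start: the master only flips Rpl_semi_sync_master_status to OFF after it has waited out its timeout with no acknowledging slave, so these settings and counters on the master are worth checking around the time replication breaks:

  SHOW VARIABLES LIKE 'rpl_semi_sync_master_timeout';        -- ms to wait before falling back to async
  SHOW VARIABLES LIKE 'rpl_semi_sync_master_wait_no_slave';  -- keep waiting even when no slave is connected?
  SHOW STATUS    LIKE 'Rpl_semi_sync_master_clients';        -- semisync slaves currently connected
  SHOW STATUS    LIKE 'Rpl_semi_sync_master_no_tx';          -- commits that went unacknowledged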

how to add an attachment (text file) to Database Mail?

Posted: 11 May 2013 06:06 AM PDT

I have this scenario:

Daily I run a SQL job to apply new updates to one table. This job creates one text file per day; the text file contains all the new updates.

I can send a mail to the client saying that the job completed successfully; now I need to send the text file as an attachment.

Is there any way to send an attachment through the GUI (SQL Server job settings)?

I can't run the script

EXEC sp_send_dbmail  

I googled for this scenario but found no information about doing it from the GUI end; I could only find how to do it with scripts.
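For reference, a hedged sketch of the scripted route (profile name, recipient, and file path are placeholders); a final job step could simply run this after the file has been produced:

  EXEC msdb.dbo.sp_send_dbmail
       @profile_name     = 'MyMailProfile',
       @recipients       = 'client@example.com',
       @subject          = 'Daily update job completed',
       @body             = 'The daily update ran successfully; the update file is attached.',
       @file_attachments = 'C:\Jobs\daily_updates.txt';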

DB2 Server Table Space Locked

Posted: 11 May 2013 06:06 PM PDT

At work we keep receiving the following DataException seemingly at random when one of our processes tries to write/access a table for one of our clients:

  com.ibm.db.DataException: A database manager error occurred. :
  [IBM][CLI Driver][DB2/NT64] SQL0290N  Table space access is not allowed.  SQLSTATE=55039

Has anyone encountered this? I'm not the person who primarily does administrative tasks on our databases, but even they seem to be having difficulty finding the root of this problem. Any suggestions? This error comes up for only one of our clients at a time, and it generally seems to rotate. We have Rackspace service, but they won't be of much help unless we can provide screenshots, etc., at the exact moment this occurs.

Apologies if this post is too vague; please let me know what information to supply to clarify things. I'm one of the developers in my office, but I don't primarily handle the administrative tasks on our databases.

Edit: We spoke with IBM, and this could possibly be caused by some sort of virus scan run by IBM/Rackspace as part of maintenance. They said this rather dubiously, though, so I doubt it is the culprit, because the tables remained locked for varying amounts of time.
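A hedged check that might narrow things down the next time it happens (assuming DB2 9.7 or later for the administrative view): SQL0290N is raised while a table space is in a restricted state (backup pending, quiesced, being restored, and so on), so capturing the state at that moment should show which restriction is in play:

  SELECT TBSP_NAME, TBSP_STATE
  FROM   SYSIBMADM.SNAPTBSP;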

Efficient way to move rows across the tables?

Posted: 11 May 2013 04:06 AM PDT

This is somewhat long question as I would like to explain all details of the problem.

System Description

We have a queue of incoming messages from external system(s). Messages are immediately stored in, e.g., the INBOX table. A few worker threads fetch chunks of work from the table (first marking some messages with UPDATE, then SELECTing the marked messages). The workers do not process the messages; they dispatch them to different internal components (called 'processors'), depending on the message command.

Each message contains several text fields (the longest is around 200 characters), a few IDs, some timestamps, etc.; 10-15 columns in total.

Each internal component (i.e. processor) that processes messages works differently. Some process the message immediately; others trigger some long operation, even communicating via HTTP with other parts of the system. In other words, we cannot just process a message from the INBOX and then remove it. We must work with that message for a while (an async task).

Still, there are not too many processors in the system, up to 10.

Messages are all internal, i.e. it is not important for the user to browse them, paginate, etc. The user may require a list of processed relevant messages, but that's not a mission-critical feature, so it does not have to be fast. Invalid messages may occasionally be deleted.

It's important to emphasize that the expected traffic might be quite high, and we don't want bottlenecks because of bad database design. The database is MySQL.

Decision

One of the decisions is not to have one big table for all messages with a flags column that indicates the various message states. The idea is to have a table per processor and to move messages around. For example, received messages will be stored in INBOX, then moved by the dispatcher to, e.g., a PROCESSOR_1 table, and finally moved to an ARCHIVE table. There should not be more than 2 such movements.

While a message is in the processing state, we do allow flags for indicating processing-specific states, if any. In other words, a PROCESSOR_X table may track the state of its messages, since the number of messages currently being processed will be significantly smaller.

The reason for this is not to use one BIG table for everything.

Question

Since we are moving messages around, I wonder how expensive this is with high volumes. Which of the following scenarios is better:

(A) have all the separate, similar tables, as explained, and move complete message rows: read the complete row from INBOX, write it to a PROCESSOR table (with some additional columns), then delete it from INBOX.

or

(B) to avoid physically moving the content, have one big MESSAGES table that just stores the content (and still not the state). We would still have the other tables, as explained above, but they would contain just message IDs and additional columns. Now, when a message is about to move, we physically move much less data: just the IDs. The rest of the message remains in the MESSAGES table, unmodified, the whole time.

In other words, is there a penalty in a SQL join between one small and one huge table?
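For concreteness, a hedged sketch of what one movement in option (A) might look like; the column and flag names are hypothetical, and the insert and delete run in one transaction so a crash cannot leave a row in both tables or in neither:

  START TRANSACTION;

  INSERT INTO PROCESSOR_1 (id_message, command, payload, date_add)
  SELECT id_message, command, payload, date_add
  FROM   INBOX
  WHERE  claimed_by = 42;          -- whatever marks this worker's chunk

  DELETE FROM INBOX
  WHERE  claimed_by = 42;

  COMMIT;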

Thank you for your patience; I hope I was clear enough.

unable to login oracle as sysdba

Posted: 11 May 2013 07:06 PM PDT

I just got 11gR2 running and was able to connect as SYSDBA. I shut down and started up the database to use a new pfile. Now I cannot log in as SYSDBA. My parameter for the password file is:

 *.remote_login_passwordfile='EXCLUSIVE'  

I am using sqlplus within the server. This is not a remote connection.

  [oracle@oel56 ~]$ sqlplus /nolog

  SQL*Plus: Release 11.2.0.1.0 Production on Tue Feb 5 22:50:46 2013

  Copyright (c) 1982, 2009, Oracle.  All rights reserved.

  SQL> conn / as sysdba
  ERROR:
  ORA-01031: insufficient privileges

Here's some more information:

  [oracle@oel56 ~]$ grep -E "ine SS_DBA|ine SS_OPER" $ORACLE_HOME/rdbms/lib/config.c
  #define SS_DBA_GRP "oracle"
  #define SS_OPER_GRP "oracle"
  [oracle@oel56 ~]$ id oracle
  uid=500(oracle) gid=500(oracle) groups=500(oracle),54321(oinstall),54322(dba),54323(oper) context=user_u:system_r:unconfined_t

"connectivity libraries needed are not found" error in IBM Data Studio

Posted: 11 May 2013 03:06 PM PDT

UPDATE

I am getting the following error when I try to create a new database in IBM Data Studio v3.1.1.0.

The connectivity libraries that are needed for local or remote non-JDBC operations were not found. To provide these libraries, you can install IBM data server client or a local DB2 server instance.  

I have already started the instance using

db2start  

command.

After searching exhaustively, I am not able to find any help on the internet regarding this error.

Connecting to AWS RDS DBInstance with MySQL workbench

Posted: 11 May 2013 09:06 AM PDT

I am trying to evaluate AWS RDS as a possible DB for our application. I have created a DB instance (micro) and added an IP security group. I am able to connect to the DB instance from the XAMPP shell command line, and I am able to run queries from there.

But when I try to connect to the same DB instance with Workbench, it gives me the error 'MySQL server has gone away'. These are the steps I followed to connect with Workbench:

  1. Give the Endpoint address of DBInstance as Hostname.
  2. Port : 3306
  3. Set the username and password to master user and its password.

My database is completely new; I have just created a plain DB and am trying to connect with Workbench right away. I am using Workbench version 5.2.25.

Any ideas on what I am missing? And yes, I am just a beginner with MySQL and AWS.

Edit: As asked by RolandoMySQLDBA, I have run the query SELECT USER(), CURRENT_USER(); and the result is one row with two columns: USER() = ei@xxx.xx.xxx.xxx and CURRENT_USER() = ei@%. ei is the master username for my RDS DB instance, and the USER() column has my IP address after the @.

I tried using both ei@xxx.xx.xxx.xxx and ei@% with Workbench, but I get an access denied error.
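One hedged thing to compare from the command-line session that does work, since old Workbench 5.2.x builds were sensitive to packet and timeout settings during the handshake:

  SHOW VARIABLES LIKE 'max_allowed_packet';
  SHOW VARIABLES LIKE 'wait_timeout';
  SELECT VERSION();   -- worth noting alongside the Workbench version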

mysql replication delay very serious

Posted: 11 May 2013 01:19 PM PDT

We use an SSD disk for the master database and a SATA disk for the slave.

Under a high write load (300 writes/s), the slave delay is very serious. I have set:

  • innodb_flush_log_at_trx_commit = 0
  • increased innodb_log_file_size

on the slave, but the delay continues.

The slave is for backup only; it receives no read or write requests.

Seconds_Behind_Master is 92265 and continues to increase.

Some configuration:

  innodb_buffer_pool_size = 15G
  innodb_log_file_size    = 1G
  innodb_log_buffer_size  = 32M

  du -hs mysql
  338G    mysql

  iostat
  Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
  sde               0.00    73.00   34.50  185.00   552.00  7086.00    69.59     5.63   25.63   19.19   26.83   3.65  80.20

SQL developer: Setup debugger for plsql

Posted: 11 May 2013 05:06 AM PDT

I'm trying to debug PL/SQL remotely, but I can't; the database returns an error. What should I do to fix this and start debugging?

  ORA-30683: failure establishing connection to debugger
  ORA-12541: TNS:no listener
  ORA-06512: at "SYS.DBMS_DEBUG_JDWP", line 68
  ORA-06512: at line 1

UPD: I'm trying to debug a procedure on a remote server. On my local computer I have only SQL Developer installed.
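For reference, a hedged sketch of the pieces usually involved (user name, host, and port are placeholders): the ORA-12541 here is raised while the database tries to connect back to the JDWP debugger listening on the SQL Developer machine, so that machine must be reachable from the server on the debug port, in addition to the grants below:

  -- run as a DBA
  GRANT DEBUG CONNECT SESSION TO myuser;
  GRANT DEBUG ANY PROCEDURE   TO myuser;

  -- the connect-back target can also be set explicitly from the session being debugged
  EXEC DBMS_DEBUG_JDWP.CONNECT_TCP('192.0.2.10', '4000');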

Missing quotation - openquery to oracle

Posted: 11 May 2013 07:13 AM PDT

I am not able to successfully run code that uses OPENQUERY against an Oracle server.

Please do not take into account field names/data types, as I had to present only a part of the whole procedure. It's quite long. I believe the problem lies in quotation marks, etc...

The procedure compiles all right. Each time I execute the procedure, an error occurs:

  Msg 105, Level 15, State 1, Line 53
  Unclosed quotation mark after the character string 'SELECT TO_NUMBER(XYZ_1) XYZ_1,
    XYZ_2,
    cast (''''0'''' as number(5)) as B1,
    cast(''''1753-01-01'''' as date) NULL_DATE

I am lost: where on earth should I put the missing quotation mark?

  CREATE TABLE #tmpXYZ Header (
      [XYZ_1] [int] PRIMARY KEY,
      [XYZ_2] [varchar](20),
      [XYZ_3] [varchar](20),
      -- more fields
      [XYZ_N] [varchar](50)
  )

  declare @sqlInv nvarchar(3000)
  set @sqlInv =
      'insert into #tmpXYZ Header (
          [XYZ_1],
          [XYZ_2],
          [XYZ_3],
          -- more fields
          [XYZ_N]
       )
       select
         * FROM OPENQUERY(XYZ_ORACLE,
         ''SELECT TO_NUMBER(XYZ_1) XYZ_1,
           XYZ_2,
           cast (''''0'''' as number(5)) as B1,
           cast(''''1753-01-01'''' as date) NULL_DATE,
           -- more fields
           cast ('''' '''' as varchar(20)) as A19
         from XYZ.V_HEADER
         where
           (DATE >= ''''TO_DATE(''''' + @startDate + ''''', ''''YYYYMMDD'''')'''' AND
            DATE <= ''''TO_DATE(''''' + @endDate + ''''', ''''YYYYMMDD'''')'''' AND
            QWE = ''''0'''' AND
            ABC = ''''13'''' ) '' )
      '

  exec sp_executesql @sqlInv

Data dictionary best practices in SQL Server 2008 r2

Posted: 11 May 2013 08:06 PM PDT

We are interested in sharing the metadata and data dictionary among the team. I know that we can use extended properties for this purpose, but in my experience they get out of date easily, because team members tend to forget to update them or skip this step.

I'm wondering if there is a more convenient way to create a data dictionary that can be maintained with the least amount of effort and time.
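For reference, one low-effort angle (a sketch, not a full solution): generate the dictionary from the catalog views plus whatever MS_Description extended properties already exist, so the document is produced from the live schema instead of being maintained by hand:

  SELECT s.name   AS schema_name,
         t.name   AS table_name,
         c.name   AS column_name,
         ty.name  AS data_type,
         ep.value AS description
  FROM   sys.tables t
  JOIN   sys.schemas s ON s.schema_id = t.schema_id
  JOIN   sys.columns c ON c.object_id = t.object_id
  JOIN   sys.types  ty ON ty.user_type_id = c.user_type_id
  LEFT JOIN sys.extended_properties ep
         ON ep.major_id = t.object_id
        AND ep.minor_id = c.column_id
        AND ep.class    = 1
        AND ep.name     = 'MS_Description'
  ORDER BY s.name, t.name, c.column_id;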

Thank you.

Maintenance Plan fails but the query it generates Runs

Posted: 11 May 2013 03:25 AM PDT

I have a SQL Server 2005 Enterprise Edition instance whose maintenance plan constantly fails with the error:

  backup MYSERVER (MYSERVER)
  Backup Database on MYSERVER
  Databases that have a compatibility level of 70 (SQL Server version 7.0) will be skipped.
  Databases: All databases
  Type: Differential
  Append existing
  Task start: 2011-10-18T00:10:09.
  Task end: 2011-10-18T00:10:09.
  Failed:(-1073548784) Executing the query "BACKUP DATABASE [model] TO  DISK = N'\\myNetworkDrive\\opovo\\BackupSQL\\MYSERVER\\model\\model_backup_201110180010.bkp' WITH  DIFFERENTIAL ,  RETAINDAYS = 13, NOFORMAT, NOINIT,  NAME = N'model_backup_20111018001008', SKIP, REWIND, NOUNLOAD,  STATS = 10  "
  failed with the following error: "Cannot open backup device 'C:\\Program Files\\Microsoft SQL Server\\MSSQL.1\\MSSQL\\Backup\\Arca\\opovo\\BackupSQL\\MYSERVER\\model\\model_backup_201110180010.bkp'. Operating system error 3(The system cannot find the path specified.).  BACKUP DATABASE is terminating abnormally.".
  Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.

But the query that this maintenance plan generates:

  BACKUP DATABASE [model]
  TO DISK = N'\\myNetworkDrive\\opovo\\BackupSQL\\MYSERVER\\model\\model_backup_201110180010.bkp'
  WITH DIFFERENTIAL, RETAINDAYS = 13, NOFORMAT, NOINIT,
       NAME = N'model_backup_20111018001008', SKIP, REWIND, NOUNLOAD, STATS = 10

runs normally.

Even stranger is the error message "Cannot open backup device 'C:\Program Files(...)". Maybe there's a difference between how SSIS and SSMS/SQL Server Agent handle the backslash?

In all these cases, the SQL Server Agent user and the user with which I ran the query successfully was a domain user, "ABC.MyDomainUser", which has permission to access the network mapping \myNetworkDrive. I even used MSTSC to log in to the server that runs SQL Server and ran the query locally; it runs fine and only fails when run from the maintenance plan. Is this a bug? What am I missing here? What is the elegant way to back up to a network location? Thanks in advance, Lynx Kepler
