Friday, July 12, 2013

[how to] Altering the location of Oracle-Suggested Backup

Altering the location of Oracle-Suggested Backup

Posted: 12 Jul 2013 08:29 PM PDT

On one database, the Oracle-Suggested Backup scheduled from Enterprise Manager always ends up in the recovery area, despite RMAN configuration showing that device type disk format points elsewhere.

As far as I can see, the scheduled backup job is simply:

run {
  allocate channel oem_disk_backup device type disk;
  recover copy of database with tag 'ORA_OEM_LEVEL_0';
  backup incremental level 1 cumulative
    copies=1 for recover of copy with tag 'ORA_OEM_LEVEL_0' database;
}

Asking RMAN to show all reveals that device type disk is indeed configured to store elsewhere:

CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/s01/backup/PROD11/PROD11_%U';

If I run the script manually, the backupset is placed at the above location; when the script is run from the job scheduler, the backupset goes to the RECO disk group on ASM.

Why might Oracle still choose to dump the backupset to the db_recovery_file_dest?

Ultimately, how can I change the backup destination?
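
For reference, one possibility worth testing (a sketch, not a verified fix) is to put an explicit format on the channel inside the job script itself, since a channel-level format should override the default destination:

run {
  allocate channel oem_disk_backup device type disk
    format '/s01/backup/PROD11/PROD11_%U';
  recover copy of database with tag 'ORA_OEM_LEVEL_0';
  backup incremental level 1 cumulative
    copies=1 for recover of copy with tag 'ORA_OEM_LEVEL_0' database;
}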

How do I safely import MySQL data in a directory to a running MariaDB server?

Posted: 12 Jul 2013 08:15 PM PDT

We're integrating another office into ours. That includes some of their data.

I have a newly installed MariaDB 5.5 server. The data dir is /home/data/MariaDB; I've done a new/fresh install into that directory:

ls -1 /home/data/MariaDB
    ibdata1
    ib_logfile0
    ib_logfile1
    multi-master.info
    mysql/
    mysql_upgrade_info
    performance_schema/
    test/

The server is up & running.

I inherited some data from a now-defunct MySQL server. It's in:

ls -1 /home/data/MySQL-OLD
    auto.cnf
    ibdata1
    ib_logfile0
    ib_logfile1
    mysql/
    mysql_upgrade_info
    performance_schema/
    proj_DB_2136/
    proj_DB_4216/
    proj_DB_6217/
    test/

I want to move data for three DBs -- "proj_DB_2136/", "proj_DB_4216/", and "proj_DB_6217/" -- into my working MariaDB server. Same names, same data, everything.

I just need to do this one time, so I can start working on the old data on my new server.

I'd like to do it the right way, without losing anything!

mysqldump makes sense to me, but it only works for databases already IN a running server.

What's the correct way to get this data safely brought over?
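
One possible approach (a sketch only; the socket path is an assumption, and the old datadir must be readable by the mysql user) is to start a throwaway instance on the old data directory and dump just the three databases from it:

# start a temporary server on the old data directory (socket only, no TCP)
mysqld_safe --datadir=/home/data/MySQL-OLD --socket=/tmp/old.sock --skip-networking &

# dump only the three project databases
mysqldump --socket=/tmp/old.sock --databases \
  proj_DB_2136 proj_DB_4216 proj_DB_6217 > old_dbs.sql

# load them into the running MariaDB server, then stop the old instance
mysql < old_dbs.sql
mysqladmin --socket=/tmp/old.sock shutdown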

MySQL Replication - Used to work, now get error 2026

Posted: 12 Jul 2013 07:00 PM PDT

I've had replication running for about 2 years now, with 1 master and 2 slaves, both connecting through SSL. I'm using MySQL 5.5.15 on the master (CentOS), and MySQL 5.14.46 on the slaves (on a Windows machine - don't ask).

Last week both slaves stopped replicating off the master, giving me an error code of 2026 and saying that they can't connect to the master. Looking in the error logs, both stopped being able to connect to the server at the same time - 16:46 in the afternoon.

I was here, I'm the only system admin, and I wasn't fiddling with anything on the server. In fact, it had run smoothly for a long time. The SSL certificates appear to still be set up correctly on the master.

Has anybody else had a problem like this?
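
Both slaves failing at exactly the same minute points at something time-based, such as SSL certificate expiry, rather than a configuration change. A quick check worth running on the master (the file names are assumptions):

# show when the CA and server certificates stop being valid
openssl x509 -noout -enddate -in ca-cert.pem
openssl x509 -noout -enddate -in server-cert.pem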

Creating a database redundancy based on Mysql

Posted: 12 Jul 2013 06:13 PM PDT

I have this requirement where my web application running on a tomcat is behind a loadbalancer and I am planning to deploy this application on multiple servers.

This webapp needs a MySQL database for managing profiles and other similar data. I can only have a single master database, and I want all my front-end servers to talk to it. Obviously, doing that makes the database my single point of failure, hence the need for redundancy/dynamic failover.

Can you please guide me on this? My requirement is a little more complicated, and the thing I couldn't find in available articles is how to set up a connection to such a cluster. Below is how server.xml is configured in my environment when it talks to a single DB:

<Resource name="jdbc/master" auth="Container" type="javax.sql.DataSource"
          maxActive="10" maxIdle="10" validationQuery="SELECT 1"
          testOnBorrow="false" testWhileIdle="true" validationQueryTimeout="5"
          timeBetweenEvictionRunsMillis="10000" minEvictableIdleTimeMillis="60000"
          maxWait="10000" username="XXX" password="XXXXXX"
          driverClassName="com.mysql.jdbc.Driver" removeAbandoned="true"
          removeAbandonedTimeout="60" logAbandoned="true"
          url="jdbc:mysql://localhost:53306/master?autoReconnect=true" />

Now the only thing I can replace is the URL, but how? And what should localhost point to?

I'd really appreciate any responses/suggestions here. Please feel free to ask for any information you feel I haven't supplied; suggesting an alternative approach to my problem is equally welcome, but please note I can't change the webapp, as it is a 3rd-party application.
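
For what it's worth, Connector/J accepts a comma-separated host list in the JDBC URL for failover, so the url attribute could look something like the following (host names are placeholders; the ampersand must be escaped as &amp; inside server.xml):

url="jdbc:mysql://db-primary:3306,db-standby:3306/master?autoReconnect=true&amp;failOverReadOnly=false"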

Database update rather than cycle through?

Posted: 12 Jul 2013 06:32 PM PDT

I'm new when it comes to MySQLi and databases so I'm not sure how exactly to title this question.

I have a PHP script set up to insert data into the database. Pretty self-explanatory. However, I noticed that the database runs through each row instead of just inserting the data. Possibly to check for duplicates? Wouldn't it be much faster to just insert the data without querying each row? The data I'm inserting will never be duplicated anyway unless I force it.

How can I just get it to insert the data without it running through each value in the row? My apologies if this is off-topic.

$SQL = "INSERT INTO `read`(`title`,`url`) VALUES\n ".implode( "\n,",array_reverse( $sql_values ) );  

How do I get the aggregate of a window function in Postgres?

Posted: 12 Jul 2013 08:59 PM PDT

I have a table containing a two columns of permutations/combinations of integer arrays, and a third column containing a value, like so:

CREATE TABLE foo
(
  perm integer[] NOT NULL,
  combo integer[] NOT NULL,
  value numeric NOT NULL DEFAULT 0
);

INSERT INTO foo
VALUES
( '{3,1,2}', '{1,2,3}', '1.1400' ),
( '{3,1,2}', '{1,2,3}', '0' ),
( '{3,1,2}', '{1,2,3}', '1.2680' ),
( '{3,1,2}', '{1,2,3}', '0' ),
( '{3,1,2}', '{1,2,3}', '1.2680' ),
( '{3,1,2}', '{1,2,3}', '0' ),
( '{3,1,2}', '{1,2,3}', '0' ),
( '{3,1,2}', '{1,2,3}', '1.2680' ),
( '{3,1,2}', '{1,2,3}', '0.9280' ),
( '{3,1,2}', '{1,2,3}', '0' ),
( '{3,1,2}', '{1,2,3}', '1.2680' ),
( '{3,1,2}', '{1,2,3}', '0' ),
( '{3,1,2}', '{1,2,3}', '0' ),
( '{3,1,2}', '{1,2,3}', '1.2680' ),
( '{3,1,2}', '{1,2,3}', '0' ),
( '{3,2,1}', '{1,2,3}', '0' ),
( '{3,2,1}', '{1,2,3}', '0.8000' )

I want to find out the average and standard deviation for each permutation, as well as for each combination. I can do that with this query:

SELECT
  f1.perm,
  f2.combo,
  f1.perm_average_value,
  f2.combo_average_value,
  f1.perm_stddev,
  f2.combo_stddev,
  f1.perm_count,
  f2.combo_count
FROM
(
  SELECT
    perm,
    combo,
    avg( value ) AS perm_average_value,
    stddev_pop( value ) AS perm_stddev,
    count( * ) AS perm_count
  FROM foo
  GROUP BY perm, combo
) AS f1
JOIN
(
  SELECT
    combo,
    avg( value ) AS combo_average_value,
    stddev_pop( value ) AS combo_stddev,
    count( * ) AS combo_count
  FROM foo
  GROUP BY combo
) AS f2 ON ( f1.combo = f2.combo );

However, that query can get pretty slow when I have a lot of data, because the "foo" table (which in reality, consists of 14 partitions each with roughly 4 million rows) needs to be scanned twice.

Recently, I learned that Postgres supports "Window Functions", which is basically like a GROUP BY for a particular column. I modified my query to use these like so:

SELECT
  perm,
  combo,
  avg( value ) as perm_average_value,
  avg( avg( value ) ) over w_combo AS combo_average_value,
  stddev_pop( value ) as perm_stddev,
  stddev_pop( avg( value ) ) over w_combo as combo_stddev,
  count( * ) as perm_count,
  sum( count( * ) ) over w_combo AS combo_count
FROM foo
GROUP BY perm, combo
WINDOW w_combo AS ( PARTITION BY combo );

While this works for the "combo_count" column, the "combo_average_value" and "combo_stddev" columns are no longer accurate. It appears that the average is being taken for each permutation, and then being averaged a second time for each combination, which is incorrect.

How can I fix this? Can window functions even be used as an optimization here?
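
One way around the average-of-averages problem (a sketch, using the identity stddev_pop(x)^2 = avg(x^2) - avg(x)^2, so results may differ by rounding) is to window over per-group sums and counts instead of per-group averages:

SELECT
  perm,
  combo,
  avg( value ) AS perm_average_value,
  sum( sum( value ) ) OVER w_combo
    / sum( count( * ) ) OVER w_combo AS combo_average_value,
  stddev_pop( value ) AS perm_stddev,
  sqrt( sum( sum( value * value ) ) OVER w_combo
          / sum( count( * ) ) OVER w_combo
        - ( sum( sum( value ) ) OVER w_combo
            / sum( count( * ) ) OVER w_combo ) ^ 2 ) AS combo_stddev,
  count( * ) AS perm_count,
  sum( count( * ) ) OVER w_combo AS combo_count
FROM foo
GROUP BY perm, combo
WINDOW w_combo AS ( PARTITION BY combo );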

mysql works, mysqldump doesn't

Posted: 12 Jul 2013 04:23 PM PDT

I have mysql 5.5 on my ubuntu 12.04 server. This command:

mysql -u root -p  

works perfectly, but this gives me error:

mysqldump -u root -p mydb_name > a.sql
mysqldump: Got error: 2002: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) when trying to connect

In my.cnf file, the socket is set to:

socket = /home/mysql/mysql.sock  

so I have no idea where /var/run/mysqld/mysqld.sock is coming from. Thanks.
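
For reference, mysql and mysqldump read different option groups; if the socket is only set under [mysqld] (or [mysql]), mysqldump falls back to the compiled-in default. A sketch of the fix, either in my.cnf or on the command line:

# my.cnf: make the socket visible to all client programs
[client]
socket = /home/mysql/mysql.sock

# or pass it explicitly for one run
mysqldump --socket=/home/mysql/mysql.sock -u root -p mydb_name > a.sql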

Optimizing PostgreSQL for transient data

Posted: 12 Jul 2013 08:58 PM PDT

I have several tables with 100-300 columns of integer types each, that hold highly volatile data. The datasets are keyed by one or two primary keys, and when refresh occurs, the whole dataset is deleted and new data is inserted in one transaction. Dataset size is usually a few hundred rows, but can be up to several thousand rows in extreme cases. Refresh occurs once per second, and dataset updates for different keys are usually disjointed, so dropping and recreating the table is not feasible.

How do I tune Postgres to handle such load? I can use the latest and greatest version if that makes any difference.
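
One 9.1 feature that fits this workload (a sketch; the column names are invented) is an unlogged table, which skips the write-ahead log for a large write-throughput gain, at the price of the table being truncated after a crash:

-- unlogged tables bypass WAL, trading crash durability for write speed
CREATE UNLOGGED TABLE volatile_dataset (
    key_a   integer NOT NULL,
    key_b   integer NOT NULL,
    val_001 integer,
    val_002 integer,
    PRIMARY KEY (key_a, key_b)
);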

Which one is more efficient: select from linked server or insert into linked server?

Posted: 12 Jul 2013 08:17 PM PDT

Suppose I have to export data from one server to another (through linked servers). Which statement will be more efficient?

Executing in source server:

INSERT INTO [DestinationLinkedServer].[DestinationDB].[dbo].[Table]
SELECT a, b, c, ... FROM [dbo].Udf_GetExportData()

Or executing in target server:

INSERT INTO [dbo].[Table]
SELECT a, b, c, ...
FROM OPENQUERY([OriginLinkedServer],
    'SELECT a, b, c, ... FROM [OriginDB].[dbo].Udf_GetExportData()')

Which one will be faster and consume fewer resources in total (on both the source and target server)? Both servers are SQL Server 2005.

Count Most Recent Browser Used By All Users

Posted: 12 Jul 2013 03:43 PM PDT

I have a table with columns Type, Login, and Timestamp. It has been collecting information about the type of browsers users log into the system with for years. Obviously many of the users have since upgraded their browsers. I do not want to consider data for old browsers when users have upgraded. Here is the query I currently have to get a count of how many people use which browser:

select
    Type,
    count(Type) as Count
from
    (select
       distinct Login,
       Type
     from BROWSER) as q
group by Type

This appears to work fine, but I have a lot of old browsers that I know are no longer being used:

[screenshot of the results: "yucky", old browsers no one uses anymore]

How would I select only the most recent browser used by each user, based on the timestamp (or Browser_ID, which could work too since they are written sequentially)? I also intend to ignore data from users who have not logged in during the past year, but that is simple to add later (I only mention it so that no one asks why I would want old browser data for inactive users - I do plan on addressing that as well, but I know how to do that already).

If anyone is up for a challenge, there are some users who log in from a couple different computers - or perhaps from their phone and from their desk. It would be great if I could determine that they frequently log in from two different places and also collect that data, but this might be too complicated. If anyone figured out how to add this complicated step, I'll gladly award a 50 point bounty to the answer.
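
For reference, if the database supports window functions, one sketch (table and column names taken from the question) is to rank each login's rows by Timestamp and count only the newest one:

select
    Type,
    count(*) as Count
from
    (select
       Login,
       Type,
       row_number() over (partition by Login order by Timestamp desc) as rn
     from BROWSER) as q
where rn = 1
group by Type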

compilation error while referencing pl/sql variable in FOR loop of a procedure

Posted: 12 Jul 2013 02:32 PM PDT

I have written a PL/SQL procedure to get the output in a file for each sale_location, but I am getting errors as follows:

create or replace PROCEDURE Rekey_Report_allcntries(sale_loc number)
is
v_outfile UTL_FILE.FILE_TYPE;
v_outbuffer varchar2(1000);
v_outbuffer1 varchar2(200);
v_tempbuffer varchar2(1000);
Begin
v_outfile:=UTL_FILE.FOPEN('INOUT_DIR','Rekey_report.txt','w');
UTL_FILE.PUT_LINE(v_outfile,'Upc,Description,sale_location_id,Keycat,Keycat_name,Maintained_flag,Approval_code,Eff_date,Create_user_id,responsible_party_name,User_Status,Count_of_keycats,Count_of_actions,Original_Keycat_Suggestion');
for c_lang_id in(Select default_language_id from sale_location where sale_location_type_id=1 and sale_location_id=sale_loc)
LOOP
for i in (select e1.upc_id upc_id,upc_description,sale_location_id,keycat_id,xlong_name,maintained_flag,approval_code,eff_date,e1.create_user_id current_user_id,
responsible_party_name,user_status,i1.create_user_id Original_Keycat_Suggestion
from
(select ik.upc_id, replace(it.description,',','') upc_description,ik.sale_location_id,ik.keycat_id, k.xlong_name, k.maintained_flag, ik.approval_code, ik.eff_date,
ik.create_user_id, rp.responsible_party_name, decode(rp.exp_date,null,'Active','Inactive') USER_STATUS
FROM item_keycat ik,keycat k,responsible_party rp,item it
where ik.sale_location_id= sale_loc
and it.sale_location_id = sale_loc
and it.upc_id = ik.upc_id
and ik.keycat_id=k.keycat_id
and k.language_id=c_lang_id.default_language_id
and k.sale_location_id=sale_loc
and rp.default_sale_location_id = sale_loc
and ik.create_user_id = rp.user_id
and rp.exp_date is null
and ik.create_user_id like 'IS%'
and ik.create_user_id (+)= rp.user_id
and ik.eff_date>=trunc(sysdate-365)
and ik.upc_id <> 0
and not exists (select 'x' from item_state ist
           where ist.upc_id = it.upc_id
           and   ist.sale_location_id = sale_loc
           and   ist.state_id = 'COMPLETE'
           and   ist.eff_date <= trunc(sysdate) - 7
           and   ist.exp_date is null
           )
) e1,
(select k.upc_id,item_keycat_id,create_user_id,k.sale_location_id
from item_keycat k,
(select upc_id,sale_location_id, min(item_keycat_id) item_id from item_keycat
group by upc_id,sale_location_id) u
where k.upc_id = u.upc_id
and k.item_keycat_id = u.item_id
and k.sale_location_id=u.sale_location_id
and k.sale_location_id=sale_loc) i1
where e1.upc_id = i1.upc_id --added to get Original_Keycat_Suggestion
and e1.eff_date in(select max(eff_date) from item_keycat where sale_location_id = sale_loc and upc_id = e1.upc_id
and eff_date >= trunc(sysdate-365)))
Loop
begin
select count(distinct keycat_id) into v_outbuffer1
from item_keycat where sale_location_id = sale_loc and upc_id = i.upc_id; -- Get the keycat count
v_outbuffer := v_outbuffer1;
Exception
When No_data_found then
v_outbuffer := 'NULL';
End;
begin
select count(*) into v_outbuffer1
from item_keycat where sale_location_id = sale_loc and upc_id = i.upc_id; --Get the number of keycat actions
v_outbuffer := v_outbuffer ||','||v_outbuffer1;
Exception
When No_data_found then
v_outbuffer := v_outbuffer ||','||'NULL';
End;
v_tempbuffer:=i.upc_id||','||i.upc_description||','||i.sale_location_id||','||i.keycat_id||','||i.xlong_name||','||i.maintained_flag||','||i.approval_code||','||
i.eff_date||','||i.current_user_id||','||i.responsible_party_name||','||i.user_status||','||v_outbuffer||','||i.Original_Keycat_Suggestion;
UTL_FILE.PUT_LINE(v_outfile,v_tempbuffer);
End Loop;
End Loop;
UTL_FILE.FCLOSE(v_outfile);
--End;
END Rekey_Report_allcntries;
/

Errors in above script are as follows:

Error(16,11): PL/SQL: SQL Statement ignored
Error(16,51): PL/SQL: ORA-00918: column ambiguously defined
Error(58,1): PL/SQL: SQL Statement ignored
Error(59,65): PL/SQL: ORA-00904: "I"."UPC_ID": invalid identifier
Error(59,65): PLS-00364: loop index variable 'I' use is invalid
Error(71,1): PL/SQL: SQL Statement ignored
Error(72,65): PLS-00364: loop index variable 'I' use is invalid
Error(72,65): PL/SQL: ORA-00904: "I"."UPC_ID": invalid identifier
Error(83,1): PL/SQL: Statement ignored
Error(83,15): PLS-00364: loop index variable 'I' use is invalid

Can anyone please help? Thanks.

mongostat causing CPU 100%

Posted: 12 Jul 2013 01:39 PM PDT

On a 64Bit Debian 6 OS with 4 CPU cores and 8GB RAM, I can reproduce an issue with mongostat.

Whenever I create an index, reindex, or even build an index in the background, top shows CPU load at 40-60% and RAM filling up to 90%. While the index task is running, I try mongostat and watch the CPU load. Immediately, CPU load is nearly 100% and blocking.

mongostat

connected to: 127.0.0.1

Nothing after that, only CPU load at the limit until Ctrl+C stops mongostat. A few seconds later, the CPU load scales back down to ~50% and all is fine again.

Trying mongostat again with the same behaviour...

Is mongodb expecting more CPUs or what's wrong?

How do I get Pentaho Reporting/Kettle to produce PDF reports by business entity?

Posted: 12 Jul 2013 12:42 PM PDT

We use Kettle/PDI to handle usage records from legacy systems and feed them to an analysis database/DWH; we also send our customers a report of their activity and a backend commission report on a monthly basis from this legacy data.

Right now, I compile this into one enormous PDF using Pentaho Report Designer, then print it and hand it to the Finance gals who send checks out to the customer for the amount on the statement I've made. They hand-collate all this.

Obviously we would like to put these online, since we already have a file area for each customer. What I need is for Pentaho to produce, instead of a new page, a new PDF file for each business, and to name it with the business ID number and month, or something equally meaningful. Is this possible?

We have experimented with splitting up the PDF but it takes someone several hours to process, and it is not pleasant work at all. It seems this should be possible, but I do not know enough of the intricacies of Pentaho to make it work.

Relational Database. Data Input. SQL/PHP [on hold]

Posted: 12 Jul 2013 12:12 PM PDT

I am starting to explore database design using PHP and SQL; I'm just looking to build a high-level understanding as I read through the details of SQL, PHP, and RDBMSs.

Example

Two tables: CUSTOMER and ADDRESS; customerid and addressid are the respective auto-incrementing primary keys. The addressid is a foreign key in the customer table.

Goal: Customers use a web form to populate the database with their customer and address details.

I split addresses out into a separate table in order to avoid data duplication. If a customer filling out the web form has an address that is already stored, how would I go about creating a lookup that selects the existing address and links the new customer details to it? Otherwise, starting to enter the address will create a new address record, distinguished from the existing one only by the automatically created new addressid.
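
A minimal sketch of that lookup-then-link flow, assuming hypothetical address columns (street, postcode), MySQL's LAST_INSERT_ID(), and parameterized queries issued from PHP:

-- 1. look for an existing address first
SELECT addressid FROM address WHERE street = ? AND postcode = ?;

-- 2a. if a row came back, link the new customer to it
INSERT INTO customer (name, addressid) VALUES (?, ?);

-- 2b. otherwise insert the address, then link via its new id
INSERT INTO address (street, postcode) VALUES (?, ?);
INSERT INTO customer (name, addressid) VALUES (?, LAST_INSERT_ID());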

Appreciate this question is basic and probably similar to others somewhere on the web; any pointers towards useful articles/information sources would be much appreciated!

Kind regards,

Installing Full Text Search to SQL Server 2008 R2

Posted: 12 Jul 2013 01:03 PM PDT

I installed SQL Server 2008 R2 Express successfully, then realised I need Full Text Search. So I downloaded the "Advanced Services Installation Package", but when I run it there is no option for Full Text Search in the "Feature Selection" part. Please don't tell me I have to uninstall and reinstall?

multi master replication

Posted: 12 Jul 2013 02:22 PM PDT

Let us say I have a master(m) slave(s) replication. Now, in the scheme of things, I introduce another database/schema(foo) not associated with the master in any way and I want this schema to be replicated into the slave(s). Can I do this?

AFAIK, this cannot be done. What is the best way to pull this off? The reason I want to do this is I want to join tables from foo with s/m.

The data replication need not happen in real time, can be a daily cron job too. Is mysqldump the way to go? Is there some hack that I can pull off?
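
If a daily copy is acceptable, a plain mysqldump pipe in cron is probably the simplest hack (host names are placeholders; --single-transaction assumes InnoDB tables):

# nightly: copy schema foo from its own server straight into the slave
mysqldump --host=foo-server --single-transaction --databases foo \
  | mysql --host=slave-server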

SQL server windows authentication through network

Posted: 12 Jul 2013 12:44 PM PDT

I am using SQL Server 2008 R2. There are about 50 PCs in my office, connected in a network. On one SQL Server 2008 R2 Express instance (installed on Windows 8), I created a Windows account for every user, with the same name as on their own PC, and then created Windows-authentication logins in SQL. Now all the users are able to connect to SQL Server using Windows authentication.

But now I am trying to do the same for another SQL Server 2008 R2 Express instance, installed on Windows XP SP3, and it is not working. When I try to connect to SQL Server using Windows authentication from a networked PC, a message comes up like "Login failed for user 'PC91\Guest'." It is not recognizing the Windows account of my PC. Why does this happen? Please tell me a solution.

How to merge two rows in a table? [on hold]

Posted: 12 Jul 2013 11:12 AM PDT

I have a table called Person with columns Col1, Col2, Col3, etc.

In Col1 I have two values: ABC and XYZ.

What I want is to replicate all the values from the other columns (Col2, Col3, etc.) of the row ABC onto the row XYZ.

What is the easiest way to do this?
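
A sketch in MySQL's multi-table UPDATE syntax (adjust for your DBMS), using a self-join to copy the ABC row's values onto the XYZ row:

UPDATE Person AS xyz
JOIN Person AS abc ON abc.Col1 = 'ABC'
SET xyz.Col2 = abc.Col2,
    xyz.Col3 = abc.Col3
WHERE xyz.Col1 = 'XYZ';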

linked list in SQL

Posted: 12 Jul 2013 10:48 AM PDT

Though SQL is more affiliated with table-like operations and not so much with recursive ones, say we would like to implement the linked (or doubly-linked) list concept, like we have, for instance, in C.
Is there any way to do this efficiently, considering that we can have items moving around from any place to any place in the linked list?
Some solution using CLR?
Or is it really something that should never be brought to SQL Server?

Though I pinned SQL Server, this is an academic-style question, so a solution in any other system is also good, even if we just reach the conclusion that this is something that should never be brought to a database.
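
For the record, the direct translation of the C idea is a self-referencing table plus a recursive CTE to walk it; moving an item is the same pointer surgery as in C, just done as UPDATEs of next_id. A T-SQL sketch with invented names:

CREATE TABLE list_node (
    id      int PRIMARY KEY,
    next_id int NULL REFERENCES list_node(id),
    payload varchar(100)
);

-- walk the list from a known head node (id = 1 here)
WITH walk AS (
    SELECT id, next_id, payload, 1 AS pos
    FROM list_node WHERE id = 1
    UNION ALL
    SELECT n.id, n.next_id, n.payload, w.pos + 1
    FROM list_node AS n
    JOIN walk AS w ON n.id = w.next_id
)
SELECT id, payload FROM walk ORDER BY pos;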

Most efficient ordering post database design

Posted: 12 Jul 2013 12:06 PM PDT

I have a posts table with a post_order column in which I store the order of each post. When I change the order of a row from 25 to 15, I have to update every row from 15 to the end. That's fine for a few rows, but with thousands of rows it performs terribly.

Is there a better, more efficient design for ordering posts?
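
One common alternative (a MySQL-flavoured sketch; the id values are hypothetical) is to leave gaps in post_order, say multiples of 1000, so that a move only touches the row being moved, and to renumber everything only when a gap closes:

-- move post 42 between neighbours ordered at 14000 and 15000
UPDATE posts SET post_order = 14500 WHERE id = 42;

-- occasional maintenance once a gap runs out: respace all rows
SET @n := 0;
UPDATE posts SET post_order = (@n := @n + 1000) ORDER BY post_order;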

How to do incremental/differential backup every hour in Postgres 9.1?

Posted: 12 Jul 2013 09:05 PM PDT

Trying to do an hourly hot incremental backup of a single postgres server to s3.

I have the following setup in postgresql.conf:

max_wal_senders=2
wal_level=archive
archive_mode=on
archive_command='copy "%p" "c:\\postgres\\archive\\%f"'

I did a base backup with pg_basebackup -U postgres -D ..\archive -Ft -l 2013-07-07 -x

Which made a big base.tar file in the archive folder and added some long file name files, which I assume are the WALs.

pg_start_backup('label') and pg_stop_backup() seem to create the WAL files in pg_xlog, and then copy them to the archive folder.

  1. what command(s) do I run to do a new incremental backup (pg_basebackup does a new base backup which I don't want right now)?

  2. What does the label in pg_basebackup and pg_start_backup() do exactly?

  3. WAL Files don't seem to be removed from pg_xlog. What should I do about that? It seems to keep 5 or so WALs in pg_xlog. Is that to be expected?

  4. Do I need to backup the .backup files in the archive folder or just the 16,384KB WAL files?

  5. should I use the --xlog parameter and if so do I need to change wal_keep_segments from 0?

Deploying to SSIS catalog and get permissions error

Posted: 12 Jul 2013 12:58 PM PDT

When I attempt to deploy to the SSIS 2012 Catalog on a new server, I get an error. I have researched it on the web for several hours, and none of the information available online fixes my issue.
[screenshot of the deployment error]

InnoDB Failure of some kind

Posted: 12 Jul 2013 01:56 PM PDT

I have MySQL 5.5 installed. I tried to install Joomla, but it failed. I went into its SQL and replaced ENGINE=InnoDB with MyISAM, and the install worked.

InnoDB is listed under SHOW ENGINES;

Any idea what the cause is or how to fix this so other InnoDB sites can be used?

I had these errors:

MySQL ERROR No: 1030
MySQL Error: Got error -1 from storage engine

MySQL is running but not working

Posted: 12 Jul 2013 03:56 PM PDT

In an attempt to tune MySQL to make it work with a recent installation of Drupal, I had to modify the MySQL settings on my server. After modifying the configuration file (/etc/my.cnf), MySQL stopped working. After some attempts I made it start again, but now all my PHP/MySQL websites are unable to connect to their DBs.

Here is why it is so confusing:

  • If I check a phpinfo.php on any given site, the MySQL info is there
  • phpmyadmin runs just fine
  • I can run mysql from SSH as root, but I see that MySQL activity is reduced; look:

[root@server mysql]# mysqladmin processlist

+-----+-----------+-----------+-----------+----------------+------+--------------------+------------------+
| Id  | User      | Host      | db        | Command        | Time | State              | Info             |
+-----+-----------+-----------+-----------+----------------+------+--------------------+------------------+
| 7   | eximstats | localhost | eximstats | Sleep          | 30   |                    |                  |
| 103 | DELAYED   | localhost | eximstats | Delayed insert | 30   | Waiting for INSERT |                  |
| 104 | DELAYED   | localhost | eximstats | Delayed insert | 149  | Waiting for INSERT |                  |
| 105 | DELAYED   | localhost | eximstats | Delayed insert | 149  | Waiting for INSERT |                  |
| 119 | root      | localhost |           | Query          | 0    |                    | show processlist |
+-----+-----------+-----------+-----------+----------------+------+--------------------+------------------+

My websites using MySQL almost all say:

Error establishing a database connection  

Another says:

Warning: mysql_connect() [function.mysql-connect]: Access denied for user 'website_USER'@'localhost' (using password: NO)  

This is my current my.cnf:

[mysqld]
#datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
#pid-file=/var/lib/mysql/mysqld.pid
#skip-bdb
#skip-innodb
#err-log=/var/log/mysql.log
#bind-address = server.escalehost.com
log-bin = /var/lib/mysql/log-bin

#innodb_buffer_pool_size=2M
#innodb_additional_mem_pool_size=500K
#innodb_log_buffer_size=500K
#innodb_thread_concurrency=2
#max_connections=125
#table_cache=2500
#thread_cache_size=100
#thread_stack=250K
#wait_timeout=10
#join_buffer=5M
#myisam_sort_buffer_size=15M
#query_cache_size=15M
#read_rnd_buffer_size=5M
max_allowed_packet = 64M
#open_files_limit=8602

#[client]
#port           = 3306
#socket=/var/lib/mysql/mysql.sock

#[mysql.server]
#user=mysql
#basedir=/var/lib

[mysqld_safe]
#socket=/var/lib/mysql/mysql.sock
#err-log=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

I commented most of it out to return it to its simplest version... How can I get the web side to connect to MySQL?

Mistake during Oracle 11g PITR

Posted: 12 Jul 2013 10:56 AM PDT

I tried using SET UNTIL TIME ... and mistyped the date. Can anyone help me understand how to get my backups into a manageable state?

After the accidental recovery, most of my backupsets disappeared.

I recovered them and used 'catalog recovery area' .. and they're listed in 'list backupset'. But something still isn't right.

When I do a PITR now, I get messages that my dbf files aren't available

and... the 'list backupset' seems to show backupsets. But they are listed differently than the files which weren't included in the 'bad' recovery.

Gists with the error and the list of backupsets are here https://gist.github.com/akinsgre/5561254

How can I query data from a linked server, and pass it parameters to filter by?

Posted: 12 Jul 2013 04:00 PM PDT

I have a really big query that needs to be run on multiple databases, and the results appended to a temp table and returned.

The basic syntax looks something like this:

INSERT INTO #tmpTable (Id, ...)

SELECT T1.Id, ...
FROM Server.Database.dbo.Table1 as T1
INNER JOIN #tmpIds as T ON T1.Id = T.Id

INNER JOIN Server.Database.dbo.Table2 as T2 ON T1.Id = T2.Id
INNER JOIN Server.Database.dbo.Table3 as T3 ON T2.Id = T3.Id
LEFT OUTER JOIN Server.Database.dbo.Table4 as T4 ON T3.Id = T4.Id
LEFT OUTER JOIN Server.Database.dbo.Table5 as T5 ON T4.Id = T5.Id
LEFT OUTER JOIN Server.Database.dbo.Table6 as T6 ON T5.Id = T6.Id

The query runs quickly if run locally on the individual servers; however, it takes a long time to run from a linked server using the 4-part names like above.

The problem appears to be that it queries the linked server for the unfiltered result set first, then joins it to the #tmpIds table on the local server afterwards, which makes the query take a very long time to run.

If I hardcode the Ids to filter the result set on the linked server, such as

SELECT T1.Id, ...
FROM Server.Database.dbo.Table1 as T1
-- INNER JOIN #tmpIds as T ON T1.Id = T.Id
INNER JOIN Server.Database.dbo.Table2 as T2 ON T1.Id = T2.Id
INNER JOIN Server.Database.dbo.Table3 as T3 ON T2.Id = T3.Id
LEFT OUTER JOIN Server.Database.dbo.Table4 as T4 ON T3.Id = T4.Id
LEFT OUTER JOIN Server.Database.dbo.Table5 as T5 ON T4.Id = T5.Id
LEFT OUTER JOIN Server.Database.dbo.Table6 as T6 ON T5.Id = T6.Id

WHERE T1.Id IN (1, 2, 3)

it runs quickly in just a few seconds.

Is there a way to run this query so it filters the result set from the linked server by the #tmpIds table first, before returning the rows to the local server? (One idea is sketched after the notes below.)

Some things to note

  • The query is very large and complex, and Dynamic SQL is not a viable option for it due to the maintenance nightmare that causes.

    I would be open to suggestions about how to use Dynamic SQL for something else though, such as running a stored procedure or UDF if there's a way to do that over a linked server (tried a few different methods such as sp_executeSQL, OPENROWSET, and OPENQUERY, but those all failed).

  • Because it uses the 4-part naming convention, I cannot use a UDF on the remote server
  • Distributed Transactions are disabled, so the following does not work

    INSERT INTO #table
    EXEC Server.Database.dbo.StoredProcedure @ids
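
As mentioned above, one thing that may be worth experimenting with (an untested sketch, trimmed to a single join) is SQL Server's REMOTE join hint, which asks for an inner join to be evaluated on the remote side, so the #tmpIds rows are shipped there instead of the whole remote table being pulled back:

INSERT INTO #tmpTable (Id)
SELECT T1.Id
FROM #tmpIds AS T
INNER REMOTE JOIN Server.Database.dbo.Table1 AS T1
    ON T1.Id = T.Id;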

DBCC CHECKDB Notification

Posted: 12 Jul 2013 12:28 PM PDT

There are plenty of questions on DBA.SE regarding DBCC CHECKDB and how to resolve problems when errors are returned. My specific question is on actually getting notified that DBCC CHECKDB returned errors. Most all DBAs know that you can automate the command and should run it often.

I came across this article by Cindy Gross, which has some very good notes. In it she mentions that, with a SQL Server Agent job, errors from the execution of the CHECKDB command will fail that step (or the job, depending on configuration). She points to Paul Randal's blog post on the topic here.

Now I am curious whether the Check Database Integrity Task in a maintenance plan would do the same thing. MSDN does not mention that it will, and I have truthfully never been in an environment where it came across a corruption issue, so I can't say that it does. This would be versus simply setting up a SQL Agent job with multiple steps that runs the specific command against each database, as Cindy suggested.

Thoughts? Obviously proof is in the pudding so providing more than just a guess would be helpful...
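
For comparison, the Agent-job approach Cindy describes needs nothing more than a T-SQL step per database; any corruption errors fail the step, which is what drives the notification (the database name is a placeholder):

-- one Agent job step per database; errors fail the step and trigger the job's notification
DBCC CHECKDB (N'YourDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;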

Problem with order by in MySQL subquery

Posted: 12 Jul 2013 03:32 PM PDT

I'm running a query that runs in about 2 seconds when the subquery does not include the order by date_sold clause. However, when I add the order by clause, the query runs forever.

I'm confused, since it seems that I'm using an index in both cases, and as I understand it, the subquery should be able to use the index. Any ideas where I went wrong?

EXPLAIN Select buyer_id as bid,
               (SELECT seller_id
                from sale USE INDEX(tester)
                where buyer_id = bid
                order by date_sold LIMIT 1)
        from sale where seller_id = 35514335;

+----+--------------------+-------+-------+-----------------------------------+--------+---------+-------+-------+--------------------------+
| id | select_type        | table | type  | possible_keys                     | key    | key_len | ref   | rows  | Extra                    |
+----+--------------------+-------+-------+-----------------------------------+--------+---------+-------+-------+--------------------------+
|  1 | PRIMARY            | sale  | ref   | seller_id,seller_id_index_asc,sub | sub    | 8       | const | 14933 | Using index              |
|  2 | DEPENDENT SUBQUERY | sale  | index | NULL                              | tester | 24      | NULL  |     1 | Using where; Using index |
+----+--------------------+-------+-------+-----------------------------------+--------+---------+-------+-------+--------------------------+

The table structure

| id               | int(11)    | NO   | PRI | NULL    | auto_increment |
| seller_id        | bigint(20) | NO   | MUL | NULL    |                |
| buyer_id         | bigint(20) | NO   | MUL | NULL    |                |
| date_acquired    | bigint(20) | NO   |     | NULL    |                |
| date_sold        | bigint(20) | NO   | MUL | NULL    |                |
| brand            | text       | NO   | MUL | NULL    |                |
| muncipality_code | bigint(20) | NO   | MUL | NULL    |                |
| new              | int(11)    | YES  |     | NULL    |                |
| car_regnumber    | varchar(6) | YES  |     | NULL    |                |
| gender           | varchar(1) | YES  |     | NULL    |                |

Tester index

| sale  |          1 | tester              |            1 | date_sold        | A         |       14840 |     NULL | NULL   |      | BTREE      |         |
| sale  |          1 | tester              |            2 | buyer_id         | A         |    11768564 |     NULL | NULL   |      | BTREE      |         |
| sale  |          1 | tester              |            3 | seller_id        | A         |    11768564 |     NULL | NULL   |      | BTREE      |         |

SHOW CREATE TABLE SALE

CREATE TABLE `sale` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `seller_id` bigint(20) NOT NULL,
  `buyer_id` bigint(20) NOT NULL,
  `date_acquired` bigint(20) NOT NULL,
  `date_sold` bigint(20) NOT NULL,
  `brand` text NOT NULL,
  `muncipality_code` bigint(20) NOT NULL,
  `new` int(11) DEFAULT NULL,
  `car_regnumber` varchar(6) DEFAULT NULL,
  `gender` varchar(1) DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `muncipality_code` (`muncipality_code`),
  KEY `brand` (`brand`(10)),
  KEY `seller_id` (`seller_id`),
  KEY `seller_id_index_asc` (`seller_id`),
  KEY `date_sold` (`date_sold`),
  KEY `tester` (`date_sold`,`buyer_id`,`seller_id`)
) ENGINE=MyISAM AUTO_INCREMENT=66390336 DEFAULT CHARSET=latin1 ROW_FORMAT=FIXED
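
One thing that might help (an untested sketch): the subquery filters on buyer_id with equality and orders by date_sold, so an index leading with buyer_id would let MySQL read just the first matching entry instead of walking tester in date_sold order:

ALTER TABLE sale ADD INDEX buyer_date_seller (buyer_id, date_sold, seller_id);

With that index in place, the USE INDEX(tester) hint in the subquery should no longer be needed.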

How to check growth of database in mysql?

Posted: 12 Jul 2013 04:56 PM PDT

I want to know if there is any method to check the growth of a database on disk; a query sketch follows the examples below.

EXAMPLES

  • Database A contains all tables in INNODB storage engine
  • Database B contains all tables in MyISAM storage engine
  • Database C contains a mixture of InnoDB and MyISAM tables
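
A starting point for any of these (sizes are as MySQL reports them; for InnoDB tables in a shared tablespace they are estimates) is to log the output of an information_schema query on a schedule and compare over time:

SELECT table_schema,
       SUM(data_length + index_length) / 1024 / 1024 AS size_mb
FROM information_schema.tables
GROUP BY table_schema;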
