Monday, May 13, 2013

[how to] Reloading MySQL my.cnf changes



Reloading MySQL my.cnf changes

Posted: 13 May 2013 09:36 PM PDT

Do you have to restart MySQL for changes in my.cnf to take effect, or can you make the changes on the fly from inside MySQL without a restart?

This server runs hundreds of active websites, and I don't really want to bring MySQL down for a restart unless I have to.
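For what it's worth, my.cnf is only read at server startup, but many (not all) server variables are dynamic and can be changed at runtime with SET GLOBAL; a runtime change still has to be written back to my.cnf to survive the next restart. A minimal sketch, assuming max_connections is the variable being changed:

SET GLOBAL max_connections = 500;               -- applies immediately, no restart needed
SHOW GLOBAL VARIABLES LIKE 'max_connections';   -- confirm the running value

Variables not marked as dynamic in the manual still require a restart.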

How to clone a database from one environment to a database on another environment?

Posted: 13 May 2013 08:47 PM PDT

OS: RHEL

DB: Oracle 11g version 11.1.0.7.0

I am trying to clone a database on one environment (environment A) to another database on a separate environment (environment B). Environment B's database is an out-of-date clone of environment A's database. What would be the best method for exporting the data from the database on environment A and importing it into the database on environment B? I have tried using Data Pump export and import to do this, but the full-database import has been unsuccessful.

Also, we use Oracle Warehouse Builder to load data into the databases. When there are issues loading the data, I currently restore a snapshot of the virtual machine taken before the load. Is there also a method to export the data before a load and import it again if the load runs into issues? Would Data Pump export and import solve this as well?
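For context, a typical full-database Data Pump pair looks something like this; the connect strings, directory object, and file names below are placeholders, not the actual commands used:

expdp system/password@ENVA full=y directory=DATA_PUMP_DIR dumpfile=enva_full.dmp logfile=enva_exp.log
impdp system/password@ENVB full=y directory=DATA_PUMP_DIR dumpfile=enva_full.dmp logfile=envb_imp.log

A full import into a non-empty database usually also needs table_exists_action=replace (or a dropped and recreated target schema) to avoid a flood of ORA-31684 "object already exists" errors.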

Thanks!

How can I pass a parameter needed by a procedure that runs through a Scheduler program?

Posted: 13 May 2013 08:42 PM PDT

I have a Scheduler program that runs a stored procedure requiring an input parameter.

BEGIN
  DBMS_SCHEDULER.DROP_PROGRAM
    (program_name => 'MYSCHEMA.EXPORT_SCHEMA_STARTING');
END;
/

BEGIN
  SYS.DBMS_SCHEDULER.CREATE_PROGRAM
    (
      program_name         => 'MYSCHEMA.EXPORT_SCHEMA_STARTING'
     ,program_type         => 'STORED_PROCEDURE'
     ,program_action       => 'MYSCHEMA.EXPORT_STATUS'
     ,number_of_arguments  => 0
     ,enabled              => FALSE
     ,comments             => NULL
    );

  SYS.DBMS_SCHEDULER.ENABLE
    (name => 'MYSCHEMA.EXPORT_SCHEMA_STARTING');
END;
/

I wonder how I can pass the parameter into the EXPORT_STATUS procedure.

Best

So far, I have changed it into a program that runs a PL/SQL block:

BEGIN
  SYS.DBMS_SCHEDULER.CREATE_PROGRAM
    (
      program_name         => 'MYSCHEMA.EXPORT_SCHEMA_STARTING'
     ,program_type         => 'PLSQL_BLOCK'
     ,program_action       => 'MYSCHEMA.EXPORT_STATUS(''STARTING''); END;'
     ,number_of_arguments  => 0
     ,enabled              => FALSE
     ,comments             => NULL
    );

  SYS.DBMS_SCHEDULER.ENABLE
    (name => 'MYSCHEMA.EXPORT_SCHEMA_STARTING');
END;
/

It works as I expected, but if anyone knows how to keep it in the previous format, that would be appreciated.
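For reference, a sketch of keeping the STORED_PROCEDURE program type by declaring the argument explicitly; the argument name P_STATUS and the default value 'STARTING' are assumptions, and EXPORT_STATUS must accept one VARCHAR2 parameter:

BEGIN
  SYS.DBMS_SCHEDULER.CREATE_PROGRAM
    (
      program_name         => 'MYSCHEMA.EXPORT_SCHEMA_STARTING'
     ,program_type         => 'STORED_PROCEDURE'
     ,program_action       => 'MYSCHEMA.EXPORT_STATUS'
     ,number_of_arguments  => 1
     ,enabled              => FALSE
    );

  -- Declare the single argument and give it a default value:
  SYS.DBMS_SCHEDULER.DEFINE_PROGRAM_ARGUMENT
    (
      program_name         => 'MYSCHEMA.EXPORT_SCHEMA_STARTING'
     ,argument_position    => 1
     ,argument_name        => 'P_STATUS'
     ,argument_type        => 'VARCHAR2'
     ,default_value        => 'STARTING'
    );

  SYS.DBMS_SCHEDULER.ENABLE
    (name => 'MYSCHEMA.EXPORT_SCHEMA_STARTING');
END;
/

Individual jobs can then override the default with DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE.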

Restore of Replicated Database fails with “Cannot drop the table <table name> because it is being used for replication.”

Posted: 13 May 2013 08:21 PM PDT

Restoring a published database fails once transactional replication has been set up using the system stored procedures. One workaround is to restore WITH KEEP_REPLICATION and then drop the publication and subscription.

Is there anything we can do while setting up transactional replication using the system stored procedures that avoids this issue?
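For reference, a sketch of the workaround described above; the database name and backup path are placeholders:

RESTORE DATABASE MyPublishedDB
FROM DISK = N'D:\Backups\MyPublishedDB.bak'
WITH KEEP_REPLICATION, REPLACE;

-- Alternatively, clear the replication metadata after a plain restore:
EXEC sp_removedbreplication @dbname = N'MyPublishedDB';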

How to install and configure Postgres-XC on Windows?

Posted: 13 May 2013 07:23 PM PDT

Can anybody suggest resources covering the complete installation procedure for Postgres-XC on Windows? I've installed PostgreSQL 9.2 on my Windows machine and can use it. Now I need to know the installation procedure for Postgres-XC, and how to set up the clustering and the whole distribution and replication process with it. I've googled and found a few guides, but all of them are for Linux; I need this for Windows.

FYI:

http://manojadinesh.blogspot.com/2012/08/postgres-xc-setup.html
http://alexalexander.blogspot.com/2013/01/postgres-xc-explained.html

Thanks

If the users need INSERT/UPDATE/DELETE permissions, is Windows auth still more secure than SQL Server auth?

Posted: 13 May 2013 03:02 PM PDT

Some background first:

The problem described below wouldn't exist at all if the database in question had been built with a DBA's mindset:
data access only through views and stored procedures --> the user is able to use the application without having any permissions on the tables.

But the database in question was built with a "developer's mindset", so the app sends INSERT, UPDATE and DELETE queries to the server --> it needs an account with sufficient permissions on the tables.


Apparently everyone agrees that Windows authentication is more secure than SQL Server authentication, for example here:

My question:
Is this still valid when the database was built with a developer's mindset, as described above (i.e., when the users need INSERT/UPDATE/DELETE permissions on the tables)?

Why I'm asking this:
Our main app is an MS Access frontend with a SQL Server database. At the moment, we are using Windows authentication, but I'm considering switching to SQL authentication.

The problem I'm seeing:
Everyone connects to the database with their own Windows account, so for the application to work, the Windows accounts of everyone and their dog need INSERT/UPDATE/DELETE permissions on the tables.

This means that literally EVERYBODY here is able to just create a new Access database, link a few tables from our main database and edit (or delete) them.
And yes, we have a few power users who really know how to do this.

From that viewpoint, it's hard for me to understand why this security threat is apparently not considered when recommending Windows authentication over SQL authentication.

To me, the possibility that anyone can just edit or delete table data with his Windows account screams insecurity.
On the other hand, the hypothetical danger that someone outside the IT department could get a clear-text SQL Server auth password from a config file is quite unlikely
(given that there are no clear-text config files lying around on the users' machines, which is the case at our company).


As I said in the beginning - I know that this problem only exists because our app is written in a way so that the users need INSERT/UPDATE/DELETE permissions on the tables.
But then, there are a lot of databases like this out there... and I still only see recommendations that Windows auth is always more secure than SQL Server auth.
IMO, the answer should be (as always) "it depends", because you can't really tell without knowing more about the database in question.

So, does no one else see this problem or am I missing something?
Or is there a way to use Windows auth without all users being able to delete data?
(besides changing all data access to stored procs)
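One option sometimes suggested for this situation, short of moving all data access to stored procedures, is a SQL Server application role: users keep connecting with Windows authentication and hold no table permissions themselves; only the application, which knows the role password, activates the extra rights. A minimal sketch with placeholder names:

-- One-time setup: the role, not the users, holds the table permissions
CREATE APPLICATION ROLE MyAppRole WITH PASSWORD = 'str0ng_P@ssw0rd!';
GRANT INSERT, UPDATE, DELETE ON dbo.Orders TO MyAppRole;

-- At runtime, the application activates the role after connecting:
EXEC sp_setapprole @rolename = 'MyAppRole', @password = 'str0ng_P@ssw0rd!';

An ad-hoc Access session linked to the tables would then connect fine but fail on INSERT/UPDATE/DELETE, because it never activates the role.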

SQL Server Express - How to check if I am hitting the size limit?

Posted: 13 May 2013 07:18 PM PDT

I am confused. AFAIK, SQL Server 2005 Express has a 4 GB limit on database data size.

  1. How can I check if my DB is hitting the size limit?
  2. Is unallocated space the space left until I hit the limit?
  3. How much space do I have left?
  4. Does index_size count in the limit?

I am not allowed to post images yet, so here is a link: (screenshot: what is the actual size of the database)
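As far as I know, the 4 GB cap in SQL Server 2005 Express applies to the data files of each database (the log file is excluded), and index space does count toward it. A quick sketch for checking where you stand:

-- Run in the target database: reports data size, index size and unallocated space
EXEC sp_spaceused;

-- Size of the data file(s) that count toward the Express limit, in MB
SELECT name, size / 128.0 AS size_mb
FROM sys.database_files
WHERE type_desc = 'ROWS';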

DB Design - Which of the two is best: Normalized or Not? [closed]

Posted: 13 May 2013 08:04 PM PDT

Please see the analysis below and let me know which of the two DB designs (InnoDB) is better. Requirement: fast writes and reads, so users don't have to wait when many concurrent DB connections exist; connections are expected to increase exponentially. A disk space advantage is irrelevant if users have to wait.

Assumption: a single CPU (just for comparison).

Method 1 (M1): Table1 UserProfile -> UserID, City, State, Country

Method 2 (M2) (normalized): Table2a UserProfile -> UserID, LocationsID; Table2b Locations -> LocationsID, City, State, Country (see the sketch below)
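For concreteness, a sketch of the two schemas; the column types are assumptions, and the _M1/_M2 suffixes exist only to keep both variants in one script:

-- Method 1 (denormalized): location stored inline
CREATE TABLE UserProfile_M1 (
  UserID  INT PRIMARY KEY,
  City    VARCHAR(64),
  State   VARCHAR(64),
  Country VARCHAR(64)
) ENGINE=InnoDB;

-- Method 2 (normalized): location factored out and referenced
CREATE TABLE Locations (
  LocationsID INT AUTO_INCREMENT PRIMARY KEY,
  City    VARCHAR(64),
  State   VARCHAR(64),
  Country VARCHAR(64)
) ENGINE=InnoDB;

CREATE TABLE UserProfile_M2 (
  UserID      INT PRIMARY KEY,
  LocationsID INT NOT NULL,
  FOREIGN KEY (LocationsID) REFERENCES Locations (LocationsID)
) ENGINE=InnoDB;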

Write (Sequence is not in order)

a. Write to Table

M1: direct write = t1. M2: search Table2b to see whether the record exists = t2; insert if no match = t1; write UserID and LocationsID to Table2a = t3.
(t1 + t2 + t3) > t1

b. CPU interrupts

M1 = 1, M2 = 2

c. Disk I/O

M1 = 1, M2 = 2

d. Row locks & releases

M1 = 1, M2 = 2

e. Disk space

M1 = more, M2 = less (the only advantage of M2)

Read (Assuming record not in Cache)

a. Read from table

M1: direct read = t4. M2: join = t5, where t5 > t4.

b. CPU interrupts

M1 = 1, M2 = 2

c. Disk I/O

M1 = 1, M2 = 2

I believe the time spent in Method 2 can be improved if Table2b is pre-populated, or if the Country, State and City dropdowns are numerically tagged.
Even with load balancing, M1 seems to be the more attractive design. Increasing bandwidth may worsen the situation, as there will be more concurrent DB connections. Let me know your thoughts.

How to share resources among accounts?

Posted: 13 May 2013 01:51 PM PDT

Using Google Docs, I am able to share my documents with others. I wonder how this is implemented underneath, in terms of DB design?

The simplest way I can imagine is a junction table that keeps a many-to-many relationship between resources and the accounts they are shared with.

However, I wonder if there is a well-known pattern for this? If there are hundreds of resource types, then hundreds of junction tables do not seem to scale, as you have to write specific code for each resource. Is there anything like RBAC (role-based access control) for this sharing problem?
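One common sketch is a single generic sharing table keyed by resource type and ID, instead of one junction table per resource type; the names and types here are assumptions:

CREATE TABLE resource_share (
  resource_type VARCHAR(32) NOT NULL,  -- e.g. 'document', 'spreadsheet'
  resource_id   BIGINT      NOT NULL,  -- ID within that resource's own table
  account_id    BIGINT      NOT NULL,  -- the account the resource is shared with
  permission    VARCHAR(16) NOT NULL,  -- e.g. 'read', 'write', 'owner'
  PRIMARY KEY (resource_type, resource_id, account_id)
);

The trade-off is that the database can no longer enforce a foreign key from resource_id to the right table, which is why full RBAC systems usually add a role/permission layer on top instead of relying on the table alone.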

Restore a Single File from FileStream

Posted: 13 May 2013 02:56 PM PDT

We have a document management application that uses SQL Server 2012 to store document index information and a Windows file share to store the documents. We are working on migrating to storing the documents in a SQL Server FILESTREAM. However, we have encountered one issue.

According to this document: http://download.microsoft.com/download/d/9/4/d948f981-926e-40fa-a026-5bfcf076d9b9/FILESTREAM%20Design%20and%20Implementation%20Considerations.docx

It states that "It is not possible to back up a single file as a unit of the filegroup or to restore a single file from the filegroup backup. Keep in mind that you can export a single file when you restore the primary filegroup and then the FILESTREAM backup to a separate instance".

There are numerous times when a client asks us to restore a single file because they accidentally modified it. With our current system (a Windows file share that is backed up continuously), we can go to our backup system (Microsoft DPM) and select and restore just a single file to its original location. If we use FILESTREAM to store the documents within SQL Server, how can we restore just one document rather than the entire database?

InnoDB Failure of some kind

Posted: 13 May 2013 01:19 PM PDT

I have MySQL 5.5 installed. I tried to install Joomla, but it failed. I went into its SQL script, replaced ENGINE=InnoDB with MyISAM, and the install worked.

InnoDB is listed under SHOW ENGINES;

Any idea what the cause is or how to fix this so other InnoDB sites can be used?

I had these errors:

MySQL ERROR No: 1030
MySQL Error: Got error -1 from storage engine

When SQL Server uses Multi Page Allocation

Posted: 13 May 2013 12:59 PM PDT

In SQL 2008 R2

I always wonder what can be a consumer of the multi-page allocator (MPA) in SQL Server. I know a data page is always 8 KB, so is there a situation where a data/index page makes use of the MPA? It makes sense to me that an execution plan can use the MPA, as it can exceed 8 KB.

There is a blog post here that suggests one use of the MPA, but the stored procedure it refers to (one with 500 parameters) is rare. In the attached screenshot I see an execution plan of around 11 MB; will it use the MPA? Is there a way to confirm that the memory allocation for the execution plan is coming from the multi-page allocator?

(screenshot: execution plan memory usage)
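One way to look for multi-page allocations, including those made on behalf of the plan cache, is the memory-clerk DMV; in SQL Server 2008 R2 the relevant column is multi_pages_kb (merged into pages_kb in SQL Server 2012):

SELECT [type], name, SUM(multi_pages_kb) AS multi_pages_kb
FROM sys.dm_os_memory_clerks
GROUP BY [type], name
HAVING SUM(multi_pages_kb) > 0
ORDER BY SUM(multi_pages_kb) DESC;

Plan-cache clerks show up under types such as CACHESTORE_SQLCP and CACHESTORE_OBJCP.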

I have a trigger that is giving me the following error

Posted: 13 May 2013 01:05 PM PDT

CREATE OR REPLACE TRIGGER "TRANSACTION"
  BEFORE UPDATE OF totalunitsbought ON SIPHOLDER
  REFERENCING NEW AS NEW OLD AS OLD
  FOR EACH ROW
BEGIN
  DECLARE
    mportfolioname    varchar2(20);
    mportfoliono      number;
    moperationmode    varchar2(8);
    mtransactionworth number;
    munits            number;
    mnav              number;
  BEGIN
    select portfoliono into mportfoliono from sipholder where accno = :new.accno;
    select portfolioname into mportfolioname from sip_price where portfoliono = mportfoliono;

    munits := :new.totalunitsbought - :old.totalunitsbought;

    select nav
      into mnav
      from sip_price
     where portfoliono = mportfoliono;

    if :new.totalunitsbought > :old.totalunitsbought then
      moperationmode := 'B';
      mtransactionworth := munits * mnav;
    else
      moperationmode := 'S';
      mtransactionworth := munits * mnav;
    end if;

    insert into transaction
      (accno, ransdate, modeofoperation, units, transactionworth, portfoliono, portfolioname, oldunits, newunits)
    values
      (:new.accno, sysdate, moperationmode, munits, mtransactionworth, :new.portfoliono, mportfolioname, :old.totalunitsbought, :new.totalunitsbought);
  end;
end;
/

ERROR:

ERROR at line 1:
ORA-04091: table NETBANKING.SIPHOLDER is mutating, trigger/function may not see it
ORA-06512: at "NETBANKING.TRANSACTION", line 12
ORA-04088: error during execution of trigger 'NETBANKING.TRANSACTION'
ORA-06512: at "NETBANKING.BUYUNITS_FIFTEEN", line 23
ORA-06512: at "SYS.DBMS_ISCHED", line 150
ORA-06512: at "SYS.DBMS_SCHEDULER", line 441
ORA-06512: at line 1
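For what it's worth, ORA-04091 is raised here because the row-level trigger on SIPHOLDER selects from SIPHOLDER itself. A sketch of the same trigger with that read removed, assuming the row it looked up is the row being updated, so :NEW.portfoliono can be used directly (the column name ransdate is copied from the question and may be a typo for transdate):

CREATE OR REPLACE TRIGGER "TRANSACTION"
  BEFORE UPDATE OF totalunitsbought ON SIPHOLDER
  REFERENCING NEW AS NEW OLD AS OLD
  FOR EACH ROW
DECLARE
  mportfolioname    varchar2(20);
  moperationmode    varchar2(8);
  mtransactionworth number;
  munits            number;
  mnav              number;
BEGIN
  -- SIP_PRICE is a different table, so reading it does not touch the mutating SIPHOLDER:
  select portfolioname, nav
    into mportfolioname, mnav
    from sip_price
   where portfoliono = :new.portfoliono;

  munits := :new.totalunitsbought - :old.totalunitsbought;
  moperationmode := case when munits > 0 then 'B' else 'S' end;
  mtransactionworth := munits * mnav;

  insert into transaction
    (accno, ransdate, modeofoperation, units, transactionworth, portfoliono, portfolioname, oldunits, newunits)
  values
    (:new.accno, sysdate, moperationmode, munits, mtransactionworth, :new.portfoliono, mportfolioname, :old.totalunitsbought, :new.totalunitsbought);
END;
/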

Most efficient and practical way to store time values (HH:MM) in Oracle that makes it easily searchable

Posted: 13 May 2013 06:56 PM PDT

I have a set of starting and ending times that I need to store in an Oracle database. I need to store them in a manner that makes them easily searchable (i.e. a typical search is find all rows where a time value, such as 9:30AM, falls in between the start time and end time). In SQL Server I would just use a TIME datatype field, but Oracle does not appear to have an equivalent.

I have seen a number of blogs and forums that recommend just using a DATE field and comparing after converting with TO_CHAR, or storing the time values as VARCHAR2 fields of length 4 (HHMM) or 6 (HHMISS). Both of these seem needlessly inefficient.

Is there a better or more efficient way to accomplish this in Oracle?
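One alternative I've seen sketched is storing minutes since midnight as a NUMBER, which compares and indexes cheaply; the table and column names below are placeholders:

CREATE TABLE schedule (
  id        NUMBER PRIMARY KEY,
  start_min NUMBER(4) NOT NULL CHECK (start_min BETWEEN 0 AND 1439),
  end_min   NUMBER(4) NOT NULL CHECK (end_min   BETWEEN 0 AND 1439)
);

-- "Find all rows where 9:30 AM falls between the start and end times":
SELECT id
  FROM schedule
 WHERE 9 * 60 + 30 BETWEEN start_min AND end_min;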

Am I overengineering MySQL?

Posted: 13 May 2013 08:22 PM PDT

On my project, I have to distinguish between registered users and simple visitors. Both of them can set their own properties, specified in the accounts and visitors tables.

I identify them by cookie; if there is no cookie, a user can log in and create a session (and a "remember me" cookie if she wishes), and the simple one-time visitor likewise gets a cookie and a session.

I split the current session into either an accountSession (logged-in person) or a visitorSession (simple visitor).

I record the visited pages, user agents and IPs against the accountSession OR the visitorSession.

Am I overengineering?

Here is the diagram: (MySQL schema diagram image omitted)

SQL 2012 Simple Recovery Model with LOG_BACKUP log_reuse_wait_desc

Posted: 13 May 2013 10:58 AM PDT

While I'm doing my own investigation: does anyone know why a database in the SIMPLE recovery model would show LOG_BACKUP as its log_reuse_wait_desc?

SQL Server: SQL 2012 SP1. No replication, no mirroring, no log shipping.
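For reference, the value can be re-checked at any time from sys.databases; the database name is a placeholder:

SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'MyDatabase';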

Oracle: How to find all the data loaded on the previous day?

Posted: 13 May 2013 07:19 PM PDT

Issue: loading data into a development environment from the production database.

My production database is 40 GB, but I do not want all of those 40 GB in my dev environment, so my questions are:

  1. How can I get all the data that was loaded into the Oracle database yesterday?
  2. Is there a way in Oracle to get all data loaded into the DB on the previous day, manage all dependencies, and generate another dev snapshot with one day's minimal data? (See the sketch below.)
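If the tables carry a load/audit timestamp column, yesterday's rows are straightforward to isolate; the table and column names below are placeholders:

SELECT *
  FROM my_table
 WHERE load_date >= TRUNC(SYSDATE) - 1
   AND load_date <  TRUNC(SYSDATE);

Without such a column there is no general built-in "rows loaded yesterday" query; options like the ORA_ROWSCN pseudocolumn or flashback queries only reach back as far as their retention allows.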

MySQL is running but not working

Posted: 13 May 2013 01:54 PM PDT

In an attempt to tune MySQL to make it work with a recent installation of Drupal, I had to modify the MySQL settings on my server. After modifying the configuration file (/etc/my.cnf), MySQL stopped working. After some attempts I made it start again, but now all my PHP/MySQL websites are unable to connect to their DBs.

Here is why is so confusing:

  • If I check a phpinfo.php page on any given site, the MySQL info is there
  • phpMyAdmin runs just fine
  • I can run mysql from SSH as root, but I see that MySQL activity is reduced; look:

[root@server mysql]# mysqladmin processlist

+-----+-----------+-----------+-----------+----------------+------+--------------------+------------------+
| Id  | User      | Host      | db        | Command        | Time | State              | Info             |
+-----+-----------+-----------+-----------+----------------+------+--------------------+------------------+
| 7   | eximstats | localhost | eximstats | Sleep          | 30   |                    |                  |
| 103 | DELAYED   | localhost | eximstats | Delayed insert | 30   | Waiting for INSERT |                  |
| 104 | DELAYED   | localhost | eximstats | Delayed insert | 149  | Waiting for INSERT |                  |
| 105 | DELAYED   | localhost | eximstats | Delayed insert | 149  | Waiting for INSERT |                  |
| 119 | root      | localhost |           | Query          | 0    |                    | show processlist |
+-----+-----------+-----------+-----------+----------------+------+--------------------+------------------+

Almost all of my websites using MySQL say:

Error establishing a database connection  

Another says:

Warning: mysql_connect() [function.mysql-connect]: Access denied for user 'website_USER'@'localhost' (using password: NO)  

This is my current my.cnf:

[mysqld]
#datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
#pid-file=/var/lib/mysql/mysqld.pid
#skip-bdb
#skip-innodb
#err-log=/var/log/mysql.log
#bind-address = server.escalehost.com
log-bin = /var/lib/mysql/log-bin

#innodb_buffer_pool_size=2M
#innodb_additional_mem_pool_size=500K
#innodb_log_buffer_size=500K
#innodb_thread_concurrency=2
#max_connections=125
#table_cache=2500
#thread_cache_size=100
#thread_stack=250K
#wait_timeout=10
#join_buffer=5M
#myisam_sort_buffer_size=15M
#query_cache_size=15M
#read_rnd_buffer_size=5M
max_allowed_packet = 64M
#open_files_limit=8602

#[client]
#port           = 3306
#socket=/var/lib/mysql/mysql.sock

#[mysql.server]
#user=mysql
#basedir=/var/lib

[mysqld_safe]
#socket=/var/lib/mysql/mysql.sock
#err-log=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

I commented most of it out to return it to its simplest version. How can I get the web side to connect to MySQL?
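For what it's worth, "using password: NO" in the second error means that site connected without sending any password at all, which points at the sites' stored DB credentials rather than the server itself. A first sanity check on the server side, with the user and host taken from the error message:

SELECT user, host FROM mysql.user WHERE user = 'website_USER';
SHOW GRANTS FOR 'website_USER'@'localhost';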

Repair with 1 thread

Posted: 13 May 2013 08:24 PM PDT

Why would MySQL be using "Repair with 1 thread" to rebuild the indexes on a table (1 primary key, 2 billion rows) when I have myisam_repair_threads set to 4?

One database or multiple referencing one?

Posted: 13 May 2013 02:22 PM PDT

Database design: one database or multiple databases, which is best?

We have a database with about 100 tables, accessed by about five different applications. The five applications each have their own set of tables, but they also need access to about 20 master tables (used by all our systems: users, accounts, contacts, shops, etc.). Now we are going to add another 15 or so applications, each with their own set of tables, which again need access to the master tables. So, before we set this up: what do you think is the best schema and database layout? One database holding all applications including the master tables, or a database per application, with the master records staying in a master database?

Anyone's thoughts here would be much appreciated. I think I am leaning towards separate databases, so they can be managed better, and performance should be better (maybe not?).

If I go with separate databases, are there any implications? Setting up foreign-key references across databases won't be possible; there is the performance of cross-database joins for selects and updates; and ASP.NET needs two connection strings (is that even possible with, say, Entity Framework Database First or LINQ DBML?).
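On the cross-database point: within one SQL Server instance, queries can join across databases using three-part names, so separate databases do not rule out joined reads; the database and table names here are placeholders:

SELECT a.OrderID, m.UserName
FROM App1DB.dbo.Orders AS a
JOIN MasterDB.dbo.Users AS m
  ON m.UserID = a.UserID;

True foreign-key constraints across databases, however, are not supported.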

Can't Select a View on Informix

Posted: 13 May 2013 07:30 PM PDT

I'm using Informix IDS 11.50 Innovator-C Edition running on Slackware Linux 12.0. I've been running a very small database on it (fewer than 10,000 records in the biggest table).

I'm trying to select from a view through a web service, and every time there is high user load, queries are dropped with messages like:

ERROR [HY000] [Informix .NET provider][Informix]Could not write to a temporary file

Or

ERROR [HY000] [Informix .NET provider][Informix]Could not open or create a temporary file

The Web Service is using the .NET Informix Driver which relies on the Informix ODBC API.

Searching for these errors on the web and in the online documentation, I found that they relate to the DBTEMP environment variable and the DBSPACETEMP configuration parameter.

Both are set: DBTEMP points to a directory with full read/write permissions, and three dbspaces are listed in DBSPACETEMP, one regular dbspace and two flagged as temporary dbspaces.

Storage is not a problem; the directory and the dbspaces have plenty of free space left. The tables that the view targets use R-tree indexes, and those are stored in a dedicated dbspace. One more thing: I'm using a Geodetic DataBlade in the database.

Alternative to sequence and timestamp: uniquely ordering records in time

Posted: 13 May 2013 01:35 PM PDT

Oracle 11gR2 Exadata

I'm required to uniquely identify when records are created, in time order. Sequence caching means I cannot use a sequence-based ID, and batching inserts means that all records inserted in one batch will have the same timestamp value (even using TIMESTAMP(9)). This is akin to Twitter's since_id concept.

The best alternatives I've come up with so far:

  • creating an additional sequence used for each unique timestamp
  • not batching inserts of the records to force a unique timestamp for each record
  • not caching the sequence, although there has been some discussion that this won't solve the problem under Exadata

Here's my requirement: I have an API that allows users to supply a marker and request all records created since that point. For example, they request 1000 records with a marker of 7, and they'll get the next 1000 records from my table, i.e., records with an ID greater than 7. Say the numerically greatest ID of the returned 1000 records is 2045; we return 2045 as the new marker. Later, the clients request 1000 records with a marker of 2045, expecting to get the next batch of 1000 and a new marker.

It's a pretty straightforward way to let them fetch all of the records in whatever batch size works for them without missing any. However, due to sequence caching across multiple Exadata nodes, at the time the client requests 1000 records with the marker of 7, a record with an ID of 2020 may not have been created yet. Therefore, when they do the next request using the marker of 2045, they will have missed record 2020 forever. Using the ID to get the timestamp of the associated record solves this, but then I must make sure to always insert records into the table individually to guarantee unique timestamps.

Assumptions:

  • Not a way to get separate timestamps for individual records on a bulk/batch insert
  • Multiple nodes may cause insertion of records to be non-time-sequential even with NOCACHE on the sequence (e.g., a record with sequence value 180 could be written with a SYSTIMESTAMP less than that of a record with sequence value 179)

Hopefully I just haven't hit on the correct terms to search for existing answers. I feel this is a problem that some pattern should have solved years ago. I think Twitter has solved it...

Thank you for your time.

Object name 'Clean Up History' in use when trying to complete the maintenance plan wizard in SQL Server 2008 R2

Posted: 13 May 2013 04:07 PM PDT

I am trying to create a maintenance plan on an instance running SQL Server 2008 R2 SP1 (no CUs installed).

When completing the wizard I get the following error:

The object name cannot be changed from "{1E1746D3-B671-4799-8F61-7EE6117257C2}" to "Clean Up History" because another object in the collection already uses that name. Use a different name to resolve this error. ({1E1746D3-B671-4799-8F61-7EE6117257C2})

I've checked both [msdb].[dbo].[sysjobsteps] and [msdb].[dbo].[sysjobs] and neither contain the name "Clean Up History".

The server is being backed up by Microsoft DPM. Could it be that DPM inserts a cleanup job somewhere that I don't see?

Remote connection to PostgreSQL on Windows Server 2008 is prompting the error below

Posted: 13 May 2013 01:07 PM PDT

The error: "Unable to read data from the transport connection. an existing connection was forcibly
closed by the remote host."

The conditions:

1) I have the following connection string:
     server = "CIRMS01"
     password = "password"
     user id = "postgres"
     database = "egov_euc"
     port = "5432"

2) pg_hba.conf has:
     host all all 150.46.1.9/32 md5

3) postgresql.conf has:
     listen_addresses = *

4) client = Windows XP 32-bit
   server = Windows Server 2008 64-bit

I don't know what the problem is, why it prompts the error message above, or why it cannot connect. I googled it, but I cannot find a clear-cut solution.

How to speed up backup and restore of a 500 GB database using mysqldump?

Posted: 13 May 2013 05:07 PM PDT

There is a database, A, 500 GB in size. Database A contains both MyISAM and InnoDB tables: the MyISAM tables are master tables and the InnoDB tables are the main transaction tables.

Backup and restore using mysqldump is taking a long time, even days. I have already set the following (a command-line sketch follows the list):

  • max_allowed_packet=1G
  • foreign_key_checks=0
  • auto_commit = off
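For context, a typical mysqldump/restore pair for a database like this might look as follows; the paths and credentials are placeholders, and note that --single-transaction only gives a consistent snapshot for the InnoDB tables, not the MyISAM ones:

mysqldump -u root -p --single-transaction --quick --routines A > /backup/A.sql
mysql -u root -p A < /backup/A.sql

At 500 GB, a logical dump is inherently slow; physical-copy approaches (file-system snapshots, or tools such as Percona XtraBackup for the InnoDB side) are the usual way to cut this from days to hours.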

How to check the growth of a database in MySQL?

Posted: 13 May 2013 04:00 PM PDT

I want to know whether there is any method to check the growth of a database based on its files (see the sketch after the examples).

EXAMPLES

  • Database A contains all tables in INNODB storage engine
  • Database B contains all tables in MyISAM storage engine
  • Database C contains a mixture of InnoDB and MyISAM tables
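A common starting point, regardless of storage engine, is the data dictionary; note that for InnoDB these figures are estimates:

SELECT table_schema AS db_name,
       ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) AS size_mb
FROM information_schema.tables
GROUP BY table_schema;

Capturing this query's output on a schedule (e.g., into a log table) turns it into a growth history.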

Swap Columns on some rows?

Posted: 13 May 2013 04:03 PM PDT

I have a table of about 106k rows, where about 11k rows have the values in two columns swapped. I want to run a query to fix it, but I don't think I can do this:

UPDATE `game`
   SET `homescore` = `awayscore`,
       `awayscore` = `homescore`
 WHERE (`awayscore` > `homescore` AND `winner` = `hometeam`)
    OR (`awayscore` < `homescore` AND `winner` = `awayteam`);

Or can I? I'm worried the two scores will end up with the same value.

Also, please validate that my query does what I intend:

I need to swap the home and away scores on the rows where the recorded winner (which is correct) doesn't match what the recorded scores say the winner is (the team with more points); the scores were accidentally swapped by a coding mistake that has now been fixed.
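The worry is justified: MySQL applies SET assignments left to right, so the straight swap above can leave both columns equal to the old awayscore. A commonly used user-variable workaround, sketched here under the assumption that homescore is never NULL (test on a copy first, since reading and assigning a user variable in the same statement is formally undefined behaviour):

UPDATE `game`
   SET `homescore` = `awayscore`,
       `awayscore` = @tmp
 WHERE (@tmp := `homescore`) IS NOT NULL
   AND ((`awayscore` > `homescore` AND `winner` = `hometeam`)
    OR  (`awayscore` < `homescore` AND `winner` = `awayteam`));

The WHERE clause is evaluated for each row before its SET runs, so @tmp holds that row's original homescore when the assignments are applied.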

Can I add "Included Columns" to an index without affecting performance?

Posted: 13 May 2013 07:18 PM PDT

I have a non-clustered index with 63 million leaf-level rows. Currently it does not have any included columns. I would like to add one included column while the site is online. Will this significantly affect performance if I do it through the SSMS GUI?
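For reference, adding an included column is a full index rebuild under the covers; the script the GUI generates amounts to something like the following, where ONLINE = ON (Enterprise edition only) is what keeps the table usable during the rebuild. The index, table, and column names are placeholders:

CREATE NONCLUSTERED INDEX IX_MyIndex
ON dbo.MyTable (KeyColumn)
INCLUDE (NewColumn)
WITH (DROP_EXISTING = ON, ONLINE = ON);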

Does a DBA need to know how to program in a system language besides SQL?

Posted: 13 May 2013 11:53 AM PDT

To what extent does a Database Administrator need to know system or application level programming languages (for example .NET or PHP) besides "just SQL"?

For the purposes of this question, no specific version of the SQL standard (SQL ANSI 86, SQL ISO 87, SQL:2008) is assumed, as the question concerns desktop or server languages outside the realm of SQL.
