Monday, June 3, 2013

[how to] Why is the padding on months in Oracle 9 characters?

Why is the padding on months in Oracle 9 characters?

Posted: 03 Jun 2013 09:00 PM PDT

I ran two simple SQL statements:

select length(to_char(sysdate, 'fmMonth')) from dual;
select length(to_char(sysdate, 'Month')) from dual;

The length of the first statement was 4 (It's June) and the length of the second statement was 9. Why does it pad 5 extra characters unless you specifically ask it not to?

The only reason I could come up with was that the month with the most characters is September (with nls_date_language set to 'English'), which has 9 characters.

My second question is: if you changed your nls_date_language variable, would the amount of padding also change?
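A quick way to test both ideas is the optional nlsparam argument to TO_CHAR, which is standard Oracle; the exact lengths returned would depend on each language's longest month name, so treat the comments as a sketch rather than guaranteed output:

select length(to_char(sysdate, 'Month', 'nls_date_language = English')) as len_en,
       length(to_char(sysdate, 'Month', 'nls_date_language = French'))  as len_fr
from dual;
-- If the padding tracks the longest month name, the two lengths can differ per language.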

How to merge data sets without including redundant rows?

Posted: 03 Jun 2013 07:49 PM PDT

I think this must be a fairly common thing to do, but I haven't been able to find an example. It basically involves merging data from two data sets that involve dates/times (or more generally any sequential values) and removing redundant records. By redundant I mean records that can be implied by other records.

Here's the scenario I'm trying to solve. We have a table of historical prices for items. New prices are imported periodically into the table. The import file can contain dates from any time, we don't know that they are always "after" the data in the history table.

To avoid data bloat I only want to insert new records if they give new information. So if a new record can be inferred from an existing one I don't want to insert it. Conversely, if an existing record can be inferred from a new one I want to replace it. Record B can be inferred from record A if the price is the same and there are no other changes between the date of record A and the date of record B (if the price was $1 yesterday and is $1 today and there are no changes between yesterday and today, then we can infer that the price today is $1 using yesterday's price).

Some examples might help, here are the two problem scenarios:

  1. An incoming record can be inferred from an existing record, so the incoming record is redundant. E.g.

    old record: 2013-04-23 1.00

    new record: 2013-04-24 1.00 <-- this is implied by the existing record, don't insert it

  2. An existing record can be inferred from a new record, so the existing record is redundant. E.g.

    new record: 2013-04-23 1.00

    old record: 2013-04-24 1.00 <-- this is implied by the new record, delete it

  3. This just shows an example of when a value is not redundant. E.g.

    old record: 2013-04-23 1.00

    old record: 2013-04-24 1.20

    new record: 2013-04-25 1.00 <-- not redundant, the price changed since it was last 1.00

There is a more detailed example here http://sqlfiddle.com/#!3/c61a8/1

Currently I'm leaning towards a multi-step approach (sketched after this list) of:

  1. Delete from incoming where there is an existing record that has the same price with an earlier date (scenario 1 above).
  2. Delete from existing where there is an incoming record that has the same price with an earlier date (scenario 2 above).
  3. Insert the remaining incoming records.
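A rough sketch of those three steps, assuming SQL Server (per the SQL Fiddle) and hypothetical tables Prices (existing history) and Incoming (staged import), both keyed on (ItemId, PriceDate):

-- 1. Remove incoming rows implied by an earlier existing row with the same
--    price and no intervening change (scenario 1).
DELETE i
FROM Incoming i
WHERE EXISTS (SELECT 1 FROM Prices p
              WHERE p.ItemId = i.ItemId
                AND p.Price = i.Price
                AND p.PriceDate < i.PriceDate
                AND NOT EXISTS (SELECT 1 FROM Prices x
                                WHERE x.ItemId = i.ItemId
                                  AND x.PriceDate > p.PriceDate
                                  AND x.PriceDate < i.PriceDate));

-- 2. Remove existing rows implied by an earlier incoming row with the same
--    price and no intervening change (scenario 2).
DELETE p
FROM Prices p
WHERE EXISTS (SELECT 1 FROM Incoming i
              WHERE i.ItemId = p.ItemId
                AND i.Price = p.Price
                AND i.PriceDate < p.PriceDate
                AND NOT EXISTS (SELECT 1 FROM Prices x
                                WHERE x.ItemId = p.ItemId
                                  AND x.PriceDate > i.PriceDate
                                  AND x.PriceDate < p.PriceDate));

-- 3. Whatever is left in Incoming carries new information, so insert it.
INSERT INTO Prices (ItemId, PriceDate, Price)
SELECT ItemId, PriceDate, Price FROM Incoming;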

There must be a nicer way, maybe using MERGE, but it's doing my head in trying to work out how to do it.

How do I efficiently "merge" the existing and incoming records? Thanks

TempDB MDF file huge after update script ran

Posted: 03 Jun 2013 06:28 PM PDT

Over the weekend we had a huge deployment run that affected a large chunk of our system. It was all run in one massive transaction (probably the wrong thing to do). As a result our TempDB data file has grown from ~5 GB to ~30 GB.

This is causing some space issues on the drive that it is located on. The server is Windows 2003 and we are running SQL 2005.

This was a one-off event, so we don't want the TempDB file to stay that large. I have tried running the DBCC SHRINKDATABASE and DBCC SHRINKFILE commands, however it says:

Page 1:571344 could not be moved because it is a work table page.

I saw this question TempDB data files don't shrink well on SQL 2008 and was wondering if I need to free all system cache before I can shrink the database down. Obviously I don't want to do that on a live database if I can help it.

Is there any other way that I can shrink TempDB's MDF file?
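For reference, the sequence usually suggested for this is to clear the system caches first and then shrink; a hedged sketch (the cache-clearing step is exactly the part that is uncomfortable on a live instance, since it causes recompiles and cold caches for a while):

DBCC FREESYSTEMCACHE ('ALL');
DBCC FREEPROCCACHE;
USE tempdb;
DBCC SHRINKFILE (tempdev, 5000);   -- target size in MB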

MYSQL - Using simple inventory number convention as Primary Key

Posted: 03 Jun 2013 05:29 PM PDT

I have an inventory with three numbered groups like so:

Group 1: 0-999
Group 2: e0-e999
Group 3: v0-v999

Can I simply make a one-column table to store this information and make that column the primary index (is one even needed?), or should I make it two columns?

example:

code (primary)
000
016
760
433
e221
e011
v999
v003

vs.

code_id (primary)   code
1                   000
2                   016
3                   760
4                   433
5                   e221
6                   e011
7                   v999
8                   v003
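For illustration, a rough DDL sketch of the two layouts (the VARCHAR size is an assumption based on the samples):

-- Option A: the code itself is the primary key
CREATE TABLE inventory_a (
  code VARCHAR(5) NOT NULL PRIMARY KEY
);

-- Option B: surrogate key plus a unique constraint on the code
CREATE TABLE inventory_b (
  code_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  code    VARCHAR(5)   NOT NULL,
  UNIQUE KEY uq_code (code)
);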

Oracle 11g http listener configuration

Posted: 03 Jun 2013 07:06 PM PDT

Could someone please point me to how to configure the Oracle DB to be able to display PL/SQL Server Pages? I have successfully created and mapped a DAD with DBMS_EPG.

Here is the listener:

# listener.ora Network Configuration File: D:\app\Ja\product\11.2.0\dbhome_1\NETWORK\ADMIN\listener.ora
# Generated by Oracle configuration tools.

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
    )
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
    )
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 8080))
    )
  )

ADR_BASE_LISTENER = D:\app\Ja

When I want to enter

localhost:8080/my_dad/home

I get error: No data received.

How do I solve this?
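For what it's worth, the embedded PL/SQL gateway is normally exposed through the XDB HTTP listener rather than by adding a raw TCP address to listener.ora; a hedged sketch of that configuration step (run as a suitably privileged user such as SYS), not necessarily the whole fix here:

EXEC DBMS_XDB.SETHTTPPORT(8080);
SELECT DBMS_XDB.GETHTTPPORT FROM dual;   -- verify the port is registered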

What is a good way to Merge 2 range sets into a single view

Posted: 03 Jun 2013 03:31 PM PDT

The real purpose of this has to do with effectivity dates and contains additional keys and fields, but I simplified it for the sake of, well, simplicity.

Sample:

test_Widget Table
theFirst   theLast    Letter
1          5          A
7          10         (null)
11         15         C

test_Default Table
theFirst   theLast    Letter
1          8          T
9          12         U
13         20         V

Expected Output:
theFirst   theLast    Letter
1          5          A
7          8          T
9          10         U
11         15         C

Here's what I'm trying to do:

Use the Widget and Override tables to craft a query that, for every range of numbers, has the proper Letter chosen.

Use Widget.Letter for all Letters, but in the case of a NULL in Widget use Default.Letter. Only include ranges that are inclusive to the Widget table.

I have working code that does it, but I'm wondering if there's something a bit more elegant, built-in, etc to handle this.

My code:

select distinct
    case when w.letter is null then greatest(w.thefirst, d.thefirst) else w.thefirst end as tfirst,
    case when w.letter is null then least(w.thelast, d.thelast) else w.thelast end as tlast,
    case when w.letter is null then d.letter else w.letter end as letter
from test_widget w
inner join test_default d
    on  w.thefirst <= d.thelast
    and w.thelast  >= d.thefirst

Connection to local SQL Server 2012 can be established from SSMS 2008 but not from SSMS 2012

Posted: 03 Jun 2013 02:48 PM PDT

I have two local SQL Server instances running on my local machine. The first is SQL Server 2008 R2 Enterprise Edition (named MSSQLSERVER) and the 2nd is SQL Server 2012 Business Intelligence Edition.

My problem is with SSMS 2012, which can connect to remote servers but not the local 2012 instance; I can, however, connect to this instance from SSMS 2008.

The error message I get when trying to login is

Login Failed. The login is from an untrusted domain and cannot be used with Windows Authentication. (Microsoft SQL Server, Error: 18452)

I must point out that I don't have the necessary privileges to access SQL Server Configuration Manager (blocked by group policy).

Any help would be appreciated.

Problem with sorting matrix groups in SSRS

Posted: 03 Jun 2013 02:24 PM PDT

I've hit the following issue in SQL Server 2005 Reporting Services

This is a slightly odd and complicated situation: I have a table containing various groups and within each group is also embedded a matrix. The rows of each matrix are sorted by a different field to their group expression. So far, so good.

The problem arises in the table header: it needs to mirror exactly the format of each group to give a "Grand Total" type functionality. I've copied exactly the matrix from the table's groups, but it refuses to sort the data correctly. I've tried changing the sort direction - even removing the sort altogether - but it makes no difference to the order in which the data is displayed.

Since I originally hit this issue, I've found a workaround involving simply grouping by the "sort" column - this works since it was only a calculated field created to sort the matrix groups.

I'd still be interested to know why this is an issue for a matrix in the table header but not the group header.

Any ideas?

Create database role to allow read access to all tables; write access to some

Posted: 03 Jun 2013 02:07 PM PDT

We have a database with over 1000 tables. I need to create a role that allows read only on all tables, as well as write access to two specific tables.

I was messing around with database roles, but whenever I went to add the tables, I had to hand select all 1000... is there a better way to do this?
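Assuming this is SQL Server, a minimal sketch would be to piggyback on the built-in db_datareader role for the read side and grant DML only on the two tables; the role and table names below are placeholders:

CREATE ROLE read_all_write_some;
EXEC sp_addrolemember 'db_datareader', 'read_all_write_some';   -- read on every table
GRANT INSERT, UPDATE, DELETE ON dbo.TableA TO read_all_write_some;
GRANT INSERT, UPDATE, DELETE ON dbo.TableB TO read_all_write_some;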

Import 4 million rows

Posted: 03 Jun 2013 01:22 PM PDT

Hi, I need to import about 4 million records into my DB. The file is a CSV. I have tried phpMyAdmin and it times out. Then I tried RazorSQL and it's way too slow. Any suggestions for handling this many inserts?
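If the target is MySQL (which phpMyAdmin suggests), a hedged sketch using LOAD DATA, which is usually far faster than row-by-row inserts; the path, table name and CSV layout are assumptions:

LOAD DATA LOCAL INFILE '/path/to/data.csv'
INTO TABLE my_table
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;   -- drop this line if the file has no header row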

Getting Random Sample from large database MYSQL (No Auto Inc Field)

Posted: 03 Jun 2013 02:52 PM PDT

You were so helpful with my last question, I thought I'd throw another one at you that's stumping me.

Basically I've got a table; the requirement from the company was that it all be in one table, so I've got 1,000 columns and 100 million rows (... life sucks).

Anyway, they want me to generate a 500,000-row random sample of their data. The problem is that my PK is a VARCHAR(50) of their IDs and has no increment system to it.

My rig is not overly powerful, so an ORDER BY RAND() would be very ill-advised!

Currently I am working on a stored procedure to maybe make a temp table and then export it to CSV?

It's not currently done, but if I'm going in the wrong direction, let me know!

DELIMITER $$
CREATE PROCEDURE randomResults()
BEGIN
    -- Variables must be declared before cursors, and handlers after them;
    -- MySQL cursors are forward-only (no SCROLL).
    DECLARE done INT DEFAULT FALSE;
    DECLARE nth INT DEFAULT 0;
    DECLARE hits INT DEFAULT 0;
    DECLARE v_id VARCHAR(50);
    DECLARE results CURSOR FOR SELECT staffhashID FROM massiveTable;
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;

    CREATE TABLE tmp LIKE massiveTable;

    OPEN results;
    WHILE hits < 500000 AND NOT done DO
        FETCH results INTO v_id;
        IF (nth MOD 5 = 0) THEN            -- keep every 5th row
            INSERT INTO tmp
            SELECT * FROM massiveTable WHERE staffhashID = v_id;
            SET hits = hits + 1;
        END IF;
        SET nth = nth + 1;
    END WHILE;
    CLOSE results;
END$$
DELIMITER ;

I am sure you guys have a better way of doing this; I was just planning to get every 5th record, put it into a tmp table and then export it...

My Table:

TABLE massiveTable (
    staffhashID VARCHAR(50) PRIMARY KEY,
    email VARCHAR(100),
    marketCat1 TINYINT(1),
    ...   -- 1000 of them
) ENGINE = MYISAM;

an example of the PK: 433kfdgsd3gfsd232
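One direction that avoids both the cursor and ORDER BY RAND() is a probabilistic filter: it still scans the table once, but never sorts it, and returns roughly (not exactly) the requested number of rows. A hedged sketch:

CREATE TABLE sample_tmp LIKE massiveTable;

-- ~500,000 of ~100,000,000 rows, i.e. keep each row with probability 0.005.
INSERT INTO sample_tmp
SELECT *
FROM massiveTable
WHERE RAND() < 0.005;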

Is there a way to find the least recently used tables in a schema?

Posted: 03 Jun 2013 03:21 PM PDT

Is there a way to find the least recently used tables in a MySQL schema? Besides going into data directories? I was hoping there was a metadata or status trick-- but Update_Time in STATUS and INFORMATION_SCHEMA is always NULL.

MySql one time event never runs?

Posted: 03 Jun 2013 03:17 PM PDT

Please have a look at below events

1) CREATE EVENT Test1
   ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL 20 SECOND
   ON COMPLETION PRESERVE ENABLE
   DO ...

2) CREATE EVENT Test2
   ON SCHEDULE EVERY 20 SECOND STARTS CURRENT_TIMESTAMP
   ON COMPLETION PRESERVE ENABLE
   DO ...

I expect event Test1 to run one time after 20 seconds but it never runs. Event Test2 is working fine.

Any idea? Thanks.

Optimize command issuing "View is not base table" and "Corrupt" error messages

Posted: 03 Jun 2013 01:23 PM PDT

I have no experience with MySQL. My boss ran an optimize against a MySQL database. Against a bunch of views we got the message

"tablexyz" is not BASE TABLE

and the next message is

Corrupt

It seems to be only against views, we didn't get any of these error messages against the base tables.

Does this look like an incorrect error message, or do we have issues with our tables?

How to find Oracle home information on Unix?

Posted: 03 Jun 2013 08:16 PM PDT

Need help finding the Oracle home path corresponding to a database instance in a RAC environment. I am aware of a few ways to achieve this; I'm listing them below to avoid getting the same answers.

  1. /etc/oratab. This file is not mandatory and hence may not have all instance information.

  2. Parsing the contents of listener.ora. In a RAC environment, listener.ora can be located in a non-default location.

  3. Use TNS_ADMIN to find the listener.ora location and parse the file.

  4. The ORACLE_HOME environment variable. It may not always be set.

  5. ps -ef | grep tns to get the home path from the service name. This gives the path for the currently running listener.

  6. select "SYSMAN"."MGMT$TARGET_COMPONENTS"."HOME_LOCATION"
    from "SYSMAN"."MGMT$TARGET_COMPONENTS"
    where "SYSMAN"."MGMT$TARGET_COMPONENTS"."TARGET_NAME" = <Database SID>

    The schema sysman can be dropped after first time login to oracle.

  7. SELECT NVL(SUBSTR(FILE_SPEC, 1, INSTR(FILE_SPEC, '\', -1, 2) -1) , SUBSTR(FILE_SPEC, 1, INSTR(FILE_SPEC, '/', -1, 2) -1)) FOLDER
    FROM DBA_LIBRARIES
    WHERE LIBRARY_NAME = 'DBMS_SUMADV_LIB';

    So if a DBA changes Oracle Home (and hence the location of libqsmashr.so) after installation of Oracle, the path retrieved from above query would be invalid.

  8. . oraenv. Works only for 11g.

I am trying to find a generic way that will work for all Oracle versions, and it should not depend on anything that is not useful to a DBA.

Do you have any way other than listed above to do the same?

Many Thanks in advance.

Storing dynamic data in NoSQL

Posted: 03 Jun 2013 06:46 PM PDT

I have scenario where I need to store unstructured data but the rest of my data is structured and relational. An example of the type of unstructured data is as explained below:

User Type 1:
How do you blah blah: 5 fields

User Type 2:
How do you blah blah: 3 fields

User Type 3:
How do you blah blah: 7 fields

All 3 types are asked the same question "How do you blah blah", but each user type answers it using a different number of fields. And there can be a lot of different user types.

For the relational data I'm using MySQL, but I'm a little confused about how to store this unstructured data:

  1. Serialize as JSON and store in MySQL
  2. Use NoSQL
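As a point of reference, option 1 can be as simple as a serialized blob of answers per user and question; a minimal sketch, where all names are placeholders and a plain TEXT column is used since this predates native JSON columns in MySQL:

CREATE TABLE user_answers (
  user_id     INT NOT NULL,
  question_id INT NOT NULL,
  answers     TEXT NOT NULL,   -- e.g. '{"field1": "...", "field2": "..."}'
  PRIMARY KEY (user_id, question_id)
);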

My requirements are high reads, average updates, average inserts & no deletes. No JOINs needed. I need guaranteed writes & high availability. If I were to choose a NoSQL store, it'd be an AP type according to the CAP theorem. I won't be hitting millions of records anytime soon.

I also plan to provide a text search for this data in the future, but it doesn't need to be a real time search, so I can always index the data using Lucene periodically. But of course, document based NoSQL implementations do provide this out of the box. But I have read in a few places where people have advised against storing JSON data in MySQL. But adding a NoSQL layer could be overkill.

What do I do, and if you do advise me to go for a NoSQL DB, which one should I choose?

PostgreSQL 9.2.4 (Windows 7) - Service won't start, “could not load pg_hba.conf”

Posted: 03 Jun 2013 02:28 PM PDT

I am trying to get Postgres 9.2.4 to run as a service on Windows 7. After installing postgres, the service was running fine. However, after setting postgres up as a server for another program, the service stopped running. When I try to start the service now, I get a message saying :

"The postgresql-x64-9.2 - PostgreSQL Server 9.2 service on Local Computer started and then stopped. Some services stop automatically if they are not in use by other services or programs."

When I try running the program that should use the database server, I get this error :

"A problem was encountered while attempting to log into or create the production database. Details: Could not connect to the server; Could not connect to remote socket. The application must now close"

I have also encountered this error once while opening the same program :

"A problem was encountered while attempting to log into or create the production database. Details: FATAL: could not load pg_hba.conf The application must now close."

I have tried running the service logged on as a local system account as well as my own account (In the postgres service properties) to no avail. I also tried restarting my computer. After a lot of troubleshooting online, I learned that a good thing to check is the pg_log file. Here are the contents of the latest pg_log entry :

2013-05-29 14:59:45 MDT LOG:  database system was interrupted; last known up at 2013-05-29 14:58:01 MDT
2013-05-29 14:59:45 MDT LOG:  database system was not properly shut down; automatic recovery in progress
2013-05-29 14:59:45 MDT LOG:  record with zero length at 0/175BB98
2013-05-29 14:59:45 MDT LOG:  redo is not required
2013-05-29 14:59:45 MDT LOG:  database system is ready to accept connections
2013-05-29 14:59:45 MDT LOG:  autovacuum launcher started
2013-05-29 15:07:00 MDT LOG:  local connections are not supported by this build
2013-05-29 15:07:00 MDT CONTEXT:  line 1 of configuration file "C:/PostgreSQL/data/pg_hba.conf"
2013-05-29 15:07:00 MDT FATAL:  could not load pg_hba.conf
2013-05-29 15:07:00 MDT LOG:  local connections are not supported by this build
2013-05-29 15:07:00 MDT CONTEXT:  line 1 of configuration file "C:/PostgreSQL/data/pg_hba.conf"
2013-05-29 15:07:00 MDT FATAL:  could not load pg_hba.conf
2013-05-29 15:09:03 MDT LOG:  received fast shutdown request
2013-05-29 15:09:03 MDT LOG:  aborting any active transactions
2013-05-29 15:09:03 MDT LOG:  autovacuum launcher shutting down
2013-05-29 15:09:03 MDT LOG:  shutting down
2013-05-29 15:09:03 MDT LOG:  database system is shut down

It seems to be having issues with the pg_hba.conf file, which looks like this :

local all all trust
host all all 127.0.0.1 255.255.255.255 trust
host all all 0.0.0.0 0.0.0.0 trust

As per many suggestions online, I tried editing the top line to a number of different alternatives (host all all trust / host all 127.0.0.1/32 trust / host all 192.168.0.100/24 trust , etc.). This made sense to me, as the log file was saying that local connections are unsupported by postgres and was also pointing to that line. However, none of my changes had any effect. I tried restarting my computer after every change but nothing made any difference.

When I searched for examples of what a pg_hba.conf file normally looks like, the examples looked slightly different from my file. I noticed that in the PostgreSQL program file, in addition to pg_hba.conf, there was also a "20130529-150444-old-pg_hba.conf" file which looked a lot more like the examples I was finding online. This file has several lines of comments before these last few lines :

# TYPE  DATABASE        USER            ADDRESS                 METHOD

# IPv4 local connections:
host    all             all             127.0.0.1/32            md5
# IPv6 local connections:
host    all             all             ::1/128                 md5
# Allow replication connections from localhost, by a user with the
# replication privilege.
#host    replication     postgres        127.0.0.1/32            md5
#host    replication     postgres        ::1/128                 md5

I was hoping that this was the original pg_hba.conf file and that if I replaced the new file with the contents of the old one, postgres would start working again. No such luck. I have been hoping for more error files to be logged in pg_log to see if the previously stated error had disappeared or changed to something else, but no more files have been logged.

I have been troubleshooting online for a few days now and nothing I've found has worked. Sorry for having such a long question, but I wanted to be thorough and include all relevant information. I would appreciate it if anyone could shed some light on this problem or offer suggestions.

Relation to original tables or to existing linking table

Posted: 03 Jun 2013 05:22 PM PDT

In my database I have a table with different settings for my app. Each setting is in relation to a guest (table guests) and an event (table events). So basically each guest has specific settings for each event he is linked to.

Every guest which has settings is already linked to the events for other reasons, so there is an existing event_guest table with the necessary links.

So I'm not exactly sure about how I should link the settings table with the others.

Option 1

I link the settings with the table event_guest which links guests and events.

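A minimal sketch of what option 1 could look like, assuming event_guest gets (or already has) its own key; all names and the sample settings column are placeholders:

CREATE TABLE event_guest (
  event_guest_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  event_id       INT NOT NULL,
  guest_id       INT NOT NULL,
  UNIQUE KEY uq_event_guest (event_id, guest_id)
);

CREATE TABLE settings (
  setting_id     INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  event_guest_id INT NOT NULL,
  sort_order     INT NOT NULL,   -- placeholder for the real setting columns
  FOREIGN KEY (event_guest_id) REFERENCES event_guest (event_guest_id)
);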

Option 2

I link the settings with the "original" tables guests and events.


Spontaneously I would go with option 1, but I'm a little bit confused about it...

My concern with option 1 is that if I have a lot of deep relations, maybe even another table after settings, I need more complex SQL queries to get, for example, data from settings, guests and events. Could this become a performance issue?

Which is the better solution and what are its advantages and disadvantages?

PostgreSQL backup error

Posted: 03 Jun 2013 09:23 PM PDT

I am trying to back up my company's PostgreSQL database using pgAdmin III, so I selected our company DB from the tree, then right-clicked on it and selected 'Backup', selected the destination folder for the backup file, then clicked OK. Then I got this error message:

C:/Program Files/PostgreSQL/9.0/bin\pg_dump.exe --host localhost --port 5432 --username "pgres1" --format tar --blobs --encoding UTF8 --verbose --file "E:\DB_Backup\DB_Backup_TEST.backup" \"CompanyDB_TEST\"
pg_dump: [archiver (db)] connection to database ""CompanyDB_TEST"" failed: FATAL: database ""CompanyDB_TEST"" does not exist
pg_dump: *** aborted because of error

Process returned exit code 1.

So can someone please help me by telling me what I am doing wrong here?

I am 100% sure that CompanyDB_TEST does exist.

I am running PostgreSQL under Windows Server 2003.

Slow SSRS Report in production

Posted: 03 Jun 2013 03:23 PM PDT

I have an SSRS report which gets its data by firing a series of stored procedures.

Now the report is timing out big time when run in production, yet when I pull down the prod database and restore to development the report runs fine.

I was thinking of setting up a SQL Server Profiler trace in production, and hopefully that will tell me something... e.g. high disk I/O at the time it's being run.

What else should I be doing? Something with perfmon?

Create Oracle ASM Disks Without Installing Oracle

Posted: 03 Jun 2013 03:21 PM PDT

I have been asked to setup workstations (VMs) for an instructor's Oracle 11g R2 Workshop. I am finding the documentation that was downloaded from the Oracle Academy site extremely underwhelming. I am supposed to be configuring the environment with a set of ASM disks as LO devices that the students will then use to create an ASM disk group when they install Oracle Grid and as disks for the Fast Recovery Area.

My understanding is that the student will be installing Oracle Grid, Oracle Database, then creating a database, and that I do not need to have Oracle installed beforehand. However, the documentation makes use of the oracleasm command which comes with the Oracle installation.

Is there a way to set this up without installing Oracle? Has anyone ever setup/taught these workshops? I tried downloading the Oracle ASMLib tool here but it says it needs oracleasm as a dependency.

Error 1044 Access denied to user

Posted: 03 Jun 2013 04:23 PM PDT

This is driving me crazy.

When I try to create a few tables from my Workbench model I get this error 1044.

I've been trying to find a solution but nothing works for me.

Curiously, when I run SELECT USER(), CURRENT_USER(); I get two versions of the same user: one is techboy@(an IP address) and the other is techboy@%.

Access is denied to both at one point or another.

The MySQL server is a remotely hosted server with the user permissions correctly set.
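A quick way to see whether the two identities really carry the same privileges (the literal IP host below is a placeholder for the address shown by USER()):

SHOW GRANTS FOR CURRENT_USER();
SHOW GRANTS FOR 'techboy'@'%';
SHOW GRANTS FOR 'techboy'@'192.0.2.10';   -- substitute the actual IP reported by USER()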

Multiple database servers for performance vs failover

Posted: 03 Jun 2013 05:23 PM PDT

If I have two database servers, and I am looking for maximum performance vs high-availability, what configuration would be best?

Assuming the architecture is two load-balanced web/app servers in front of two db servers, will I be able to have both db servers active with synced data, with web1 to db1, web2 to db2 setup? Is this active/active?

I'm also aware that the two db servers can have their own schema to manually 'split' the db needs of the app. In this case daily backups would be fine. We don't have 'mission critical data.'

If it matters, we have traffic around 3,000-7,000 simultaneous users.

Cross Database transactions - Always on

Posted: 03 Jun 2013 08:23 PM PDT

Recently we have been working on a POC to get AlwaysOn working and happened to see this article in BOL:

http://technet.microsoft.com/en-us/library/ms366279.aspx

This article suggests that there would be logical inconsistency when we are dealing with Synchronous mode too, but will this actually be the case?

Consider, for example, databases A and B on which the transaction is running, where A is in high-safety mode and B is not mirrored. The log of A has to reach the mirrored database before the primary database commits, so eventually the two-phase commit (the transaction on B) succeeds; but the article suggests that the log will not be transferred in the first place, yet still results in a commit on B, which seems contradictory. Please help me understand whether the statement in the above article is true, and if it is, how that can be. :)

PS: Please let me know if I need to provide more information around this.

Maatkit shows the MySQL replication error in one table, but won't fix it

Posted: 03 Jun 2013 03:06 PM PDT

We use MySQL 5.1 with a primary/secondary replication setup. I use mk-table-checksum from Maatkit to generate checksums on the master and perform consistency checks on the replica.

mk-table-checksum is used to generate the checksums, and is pretty simple:

% mk-table-checksum localhost \
        --quiet \
        --replicate=test.checksum --empty-replicate-table --create-replicate-table \
        --chunk-size=500000 --sleep-coef=0.5 \
        --ignore-tables=mysql.general_log_backup,mysql.general_log,mysql.slow_log,mysql.help_topic,mysql.tables_priv

These checksums are then replicated to the Replica (slave) server, where we compare the checksums.

Around a month ago we had a server failure. We fixed the server, started up the database and resumed replication. Everything seems to be working fine-- Slave_IO_Running and Slave_SQL_Running are both set to "Yes", everything appears to be getting replicated successfully except for the error mentioned below, etc.

There is one problem, however. When I check the table using the following recommended method on the Replica, I see a consistency problem.

# mysql --execute "SELECT db, tbl, chunk, this_cnt-master_cnt AS cnt_diff, \
>         this_crc <> master_crc OR ISNULL(master_crc) <> ISNULL(this_crc) AS crc_diff \
>         FROM test.checksum \
>         WHERE master_cnt <> this_cnt OR master_crc <> this_crc OR ISNULL(master_crc) <> ISNULL(this_crc);"
+--------+------------------+-------+----------+----------+
| db     | tbl              | chunk | cnt_diff | crc_diff |
+--------+------------------+-------+----------+----------+
| plants | trees_properties |    40 |        0 |        1 |
+--------+------------------+-------+----------+----------+

This problem still occurs even after forcing another sync using mk-table-sync on the master. I will probably need to replace the corrupt data by hand, but I am unclear on what the actual problem is. What is a 'chunk'?

My guess is that only a handful of rows are different, and I can probably replace those rows by hand. But how can I list which rows are missing?

I realize that Maatkit has been deprecated in favor of Percona Tools. I plan to upgrade some day. But for now, "if it ain't broke, don't fix it."

MySQL generic trigger to save identification data for later accessing of the changed row

Posted: 03 Jun 2013 07:23 PM PDT

I am pretty inexperienced with this.

I need a generic trigger, able to create and save in a fixed table some sort of identification data for a changed row from generic (any) table. The identification data should be used later to SELECT the changed item in the given table.

Can this be done without previously knowing the table structure?

The only idea I had, which is way too inefficient in my opinion and also requires previous knowledge of the table column names, is to save a hash with:

MD5(concat(NEW.field1, NEW.field2, NEW.field3, ...))  

then

SELECT * FROM chaged_table WHERE hash = MD5(concat(field1, field2, field3, ...))  

to identify the changed row in the table which triggered the change.
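For what it's worth, a minimal sketch of the hash idea as an actual trigger; MySQL triggers are defined per table and cannot discover columns dynamically, so the column list still has to be spelled out for each table (all names below are placeholders):

CREATE TABLE change_log (
  changed_table VARCHAR(64) NOT NULL,
  row_hash      CHAR(32)    NOT NULL,
  changed_at    TIMESTAMP   NOT NULL DEFAULT CURRENT_TIMESTAMP
);

DELIMITER $$
CREATE TRIGGER some_table_au AFTER UPDATE ON some_table
FOR EACH ROW
BEGIN
  -- CONCAT_WS skips NULL values instead of nulling the whole hash like CONCAT would.
  INSERT INTO change_log (changed_table, row_hash)
  VALUES ('some_table', MD5(CONCAT_WS('|', NEW.field1, NEW.field2, NEW.field3)));
END$$
DELIMITER ;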

I would greatly appreciate any help or suggestions!

TempDB data files don't shrink well on SQL 2008

Posted: 03 Jun 2013 06:23 PM PDT

So ideally you want to pre-size your TempDB data and log files appropriately so that this isn't a concern, but sometimes a rogue developer runs a crazy huge query on a production server during work hours, causing the TempDB data files to blow up huge.

If TempDB was literally the only thing on the drive, then I could probably just leave it like that, but on some servers I have several SQL instances that all share the same TempDB drive.

So how can I shrink these TempDB data files without restarting the instance?

I normally try:

DBCC SHRINKFILE (name = 'tempdev', size = 5000)  

This worked fairly consistently in SQL 2000 and 2005 (assuming the actual tempdb activity had tapered off), but seems to not work very often in 2008 (on my current server, it only worked on 1 of 4 data files, the others continue to remain 2-3x larger).

How to do MySQL User Accounting

Posted: 03 Jun 2013 03:10 PM PDT

I can't seem to find any information on this anywhere. I found a question here called Monitor MySQL activity per database? but that's not quite the answer either.

I'd like to be able to log the following to a system log somewhere, preferably via syslog:

  • User (not master/slave) login
  • User logout
  • User connect time
  • User IP

Something similar to last and lastb for MySQL. Can this be done? Is anything already written and in the Ubuntu and Red Hat Enterprise Linux repositories? Can MaatKit do this?
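One low-tech sketch that covers the login, IP and connect-time parts of that list is the init_connect trick: every new connection runs a statement that writes an audit row. Caveats: it does not fire for accounts with the SUPER privilege, it cannot record logouts, and every non-SUPER account needs INSERT permission on the log table or its connections will fail. The schema and table names are placeholders:

CREATE TABLE audit.connection_log (
  connect_time  TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
  user_host     VARCHAR(255) NOT NULL,   -- USER() includes the client host/IP
  connection_id INT NOT NULL
);

SET GLOBAL init_connect =
  'INSERT INTO audit.connection_log (user_host, connection_id)
   VALUES (USER(), CONNECTION_ID());';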

Thanks!

Is there a way to measure replication lag in MySQL with a resolution less than 1 second?

Posted: 03 Jun 2013 03:07 PM PDT

Is there a way by which you can measure replication lag in MySQL with a resolution of less than 1 second?

That is, can replication lag be measured on the microsecond or millisecond level?
