Tuesday, July 9, 2013

[how to] Can I force mysql server to resolve the external ip to localhost?

Can I force mysql server to resolve the external ip to localhost?

Posted: 09 Jul 2013 08:07 PM PDT

I've finally succeeded with SSH tunneling. My goal is to open the MySQL server to local users only (user@localhost etc.) while providing remote access to my customers through SSH tunneling.

The problem is that when I connect through the mysql command-line tool (mysql.exe -u root -h 127.0.0.1 --port=8600) I get a permission denied error for user root@my.servers.ip.address (the tunnel redirects from 127.0.0.1:8600 to port 3603 on my remote server).

Well, it makes sense, since my root exists in mysql.user only with localhost as the host.

But then, is there a way for me to somehow tell mysql that this ip address is actually the server's address and that the root should be allowed to connect?

I've got no clue where to start. To me, if such a thing existed, it would be some kind of IP-resolving table...
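
For what it's worth, a common workaround is not to remap the IP but to create an account matching the host the tunnel presents. A minimal sketch, reusing the placeholder names from the question:

-- 'my.servers.ip.address' stands in for the address MySQL sees.
CREATE USER 'root'@'my.servers.ip.address' IDENTIFIED BY 'secret';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'my.servers.ip.address' WITH GRANT OPTION;
FLUSH PRIVILEGES;

Alternatively, if the tunnel's destination is set to 127.0.0.1:3306 on the remote host (rather than the server's external address), MySQL should see the connection as coming from localhost and the existing account would match.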

Multi-master quorum offsetting performance gain through distribution

Posted: 09 Jul 2013 01:33 PM PDT

Suppose we have a multi-master p2p setup (3 masters) that requires a quorum of 2 to write.

Then, in order to get full consistency, reads also require quorum of 2, because on write, the 3rd server may not be updated and can serve outdated data. Correct?

Then, doesn't the extra quorum I/O nearly offset the purpose of having a multi-master distribution model? Sure, there are three servers instead of one, but each request becomes two or three requests, so each server doesn't get a proportionally lighter load.
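
As a rough illustration (my own back-of-the-envelope, not from the question): with N nodes and quorum q, where every operation touches q nodes (writes are often sent to all N, but only q must acknowledge),

per-node load = (q / N) x (R + W)
              = (2 / 3) x (R + W)    for N = 3, q = 2

So a 3-node, quorum-2 cluster gains only about 1.5x capacity over a single node, not 3x; the distribution still helps, just far less than the node count suggests.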

Additionally, isn't this even worse for 2-phase-commit scenarios?

Thanks!

What is the best data modelling technique for a credit business organization? [on hold]

Posted: 09 Jul 2013 01:33 PM PDT

I would like to know: what is the difference between a notation and a technique? Which notation is the most intuitive? Which modelling technique is best when designing a database for a small business organization?

Mysqlbinlog statement reading

Posted: 09 Jul 2013 01:11 PM PDT

I am trying to perform a point-in-time recovery on a specific database using a snapshot and MySQL's transaction logs. I am using the following to pull statements from the time of the snapshot to the time of recovery:

mysqlbinlog --database=**database** --start-datetime="$start_datetime" --stop-datetime="$stop_datetime" list_of_binlog_file_names  

The resulting statements that mysqlbinlog produces include INSERT/UPDATE/DELETE statements for another database that has the form:

INSERT INTO **database**_reporting.tablename VALUES (data);  

So functionally I'm getting statements from two different databases: one is our production database, the other is our reporting database, differentiated in name by the '_reporting' suffix. It would appear that our application inserts into the secondary database while the primary one is still the default database, and the binlogs associate both statements with the primary database.

Am I correct that mysqlbinlog will read out statements for both databases as long as they are executed after a USE primary_database;? Is there a way to separate out the ones for the reporting database using mysqlbinlog? I have searched quite a bit; I may be barking up the wrong tree here. Let me know if I can clarify the question.
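
That is my reading too: with statement-based logging, the binlog records the default database set by USE, and mysqlbinlog --database filters on that, not on the schema a statement actually touches. A contrived illustration (schema names are placeholders for the ones masked in the question):

USE production_db;   -- the binlog tags the next event with production_db
-- mysqlbinlog --database=production_db still emits this statement,
-- even though it writes into the reporting schema:
INSERT INTO production_db_reporting.tablename VALUES (1, 'data');

Row-based logging, by contrast, filters on the database of the table actually modified, which is one way to get a clean separation going forward.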

How large are blocks of data in SQL Server?

Posted: 09 Jul 2013 01:05 PM PDT

I'm working in SQL Server 2008 R2 and have created a query that gathers and sums the total of the data files' and log files' sizes. However, I can't find how much actual disk space a single block of SQL data takes up on disk, so I can convert the totals into something more meaningful.

Here is the script:

DECLARE @DataSize INT
DECLARE @LogSize INT
SELECT @DataSize = SUM(size) FROM sys.database_files WHERE type_desc = 'Rows'
SELECT @LogSize = SUM(size) FROM sys.database_files WHERE type_desc = 'Log'
PRINT @DataSize
PRINT @LogSize

How large is one block of space? Would it be easy to convert those two integer variables into something more meaningful for a sysadmin?
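
For what it's worth, sys.database_files reports size in 8 KB pages, so the conversion is a multiply and a divide — a sketch building on the script above:

DECLARE @DataSize INT, @LogSize INT
SELECT @DataSize = SUM(size) FROM sys.database_files WHERE type_desc = 'ROWS'
SELECT @LogSize  = SUM(size) FROM sys.database_files WHERE type_desc = 'LOG'
-- size counts 8 KB pages: size * 8 = KB, size * 8 / 1024 = MB
PRINT CAST(@DataSize * 8 / 1024 AS VARCHAR(20)) + ' MB of data'
PRINT CAST(@LogSize  * 8 / 1024 AS VARCHAR(20)) + ' MB of log'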

Would adding indexes to my foreign keys improve performance on this MySQL query?

Posted: 09 Jul 2013 02:35 PM PDT

Consider the following query:

SELECT
  `locations`.`id` AS `location_id`,
  `locations`.`address`,
  `locations`.`lat`,
  `locations`.`lng`,
  `tickets`.`status_id`,
  `customers`.`name`,
  `tickets`.`id` AS `id`,
  `tickets`.`updated_at` AS `updated_at`,
  ( 3959 * acos( cos( radians('39.78222851322262') ) * cos( radians( `lat` ) ) * cos( radians( `lng` ) - radians('-86.16299560000004') ) + sin( radians('39.78222851322262') ) * sin( radians( `lat` ) ) ) ) AS `distance`
FROM `locations`
RIGHT JOIN `tickets`
  ON (`tickets`.`location_id` = `locations`.`id`)
LEFT JOIN `customers`
  ON (`tickets`.`customer_id` = `customers`.`id`)
WHERE `tickets`.`client_id` = '20'
AND
  (
    `customers`.`name` LIKE '%Mahoney%'
    OR `customers`.`email` LIKE '%Mahoney%'
    OR `locations`.`address` LIKE '%Mahoney%'
  )
HAVING `distance` < '5'
ORDER BY `distance`
LIMIT 200;

Using a profiling tool, I got this report:

Speed: 45.569 ms

Query analysis:

· Query: SIMPLE on tickets · Type: ALL · Rows: 160 (Using where; Using temporary; Using filesort)
· Query: SIMPLE on locations · Possible keys: PRIMARY · Key Used: PRIMARY · Type: eq_ref · Rows: 1
· Query: SIMPLE on customers · Possible keys: PRIMARY · Key Used: PRIMARY · Type: eq_ref · Rows: 1 (Using where)

This is a MySQL database. All the tables are InnoDB with utf8_unicode_ci. The primary keys on each table are called id and are int(11) with indexes on them.

  • Should adding an index on tickets.location_id and/or tickets.customer_id and/or tickets.client_id improve performance of this query at all?
  • Why or why not?
  • Are there any other fields I should consider indexing to improve the efficiency of this query?
  • Should I be explicitly defining my foreign keys in my database schema?

My thinking is that since I'm selecting from locations first, I would want an index on the foreign keys that are being referenced. I read here: indexes, foreign keys and optimization that MySQL requires indexes on all foreign keys. Does explicitly defining the foreign keys in an InnoDB schema improve performance? Sorry for being such a total n00b. Thanks for the help!
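
For reference, a hedged sketch of the indexes one might try, assuming the table and column names in the query (the index names are made up):

ALTER TABLE tickets
  ADD INDEX idx_tickets_client   (client_id),
  ADD INDEX idx_tickets_location (location_id),
  ADD INDEX idx_tickets_customer (customer_id);

InnoDB indexes primary keys automatically, but referencing columns only get an index when a FOREIGN KEY is declared (or when you add one yourself), which would explain the Type: ALL scan on tickets. Note, though, that the leading wildcard in LIKE '%Mahoney%' rules out index use on the name/email/address predicates regardless.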

Creating Indexed View GROUP BY Epoch Date

Posted: 09 Jul 2013 04:24 PM PDT

I have a few big tables with about 6 billion rows that I am looking to optimize. The clustered key is Epoch (a Unix timestamp: the number of seconds elapsed since 1970) plus customer ID. This table records usage data per customer per product type.

For example, if this were for a telco, TypeID 1 is a local call and the value is how many minutes that customer used. TypeID 2 is an international call and the value is how many minutes were used in that hour for that customer. Let's say TypeID 3 is a special discounted rate for domestic calling.

The data is stored in 1 hour intervals. I want the indexed view to store the aggregated 24 hour value so when we run a query for 1 day per customer, it has to only look up 1 row in the indexed view instead of 24 rows in the base table.

This is the base table:

ColRowID (bigint)
AggregateID (int)
Epoch (int)
CustomerID (int)
TypeID (tinyint)
ErrorID (smallint)
Value (int)

We don't care about Aggregate or RowID for our reporting purposes, so I figure the indexed view will look like this:

CREATE VIEW [ixvw_AggTbl]
WITH SCHEMABINDING
AS
SELECT Epoch, CustomerID, TypeID, ErrorID, SUM(Value)
FROM DBO.BaseTbl
-- GROUP BY Epoch  (what goes here?? Epoch/86400?  If I do that I have to
-- put Epoch/86400 in the SELECT list as well)

EDIT:

Sample base data (I left out the columns we don't need in this case; just assume the ID columns are there). Each TypeID will have a value assigned to it, which can be 0.

For example,

Epoch / Customer ID / TypeID / Value

/* Epoch 90,000 is day 2, 1am */
90000 (1am day 2) / 1 / 1 / 200
90000 (1am day 2) / 1 / 2 / 100
90000 (1am day 2) / 1 / 3 / 120

/* Customer ID 2 as well */
90000 (1am day 2) / 2 / 1 / 100
90000 (1am day 2) / 2 / 2 / 50
90000 (1am day 2) / 2 / 3 / 310

... (repeat for 30,000 customers)

/* Customer ID 1, 2am day 2 */
93600 (2am day 2) / 1 / 1 / 150
93600 (2am day 2) / 1 / 2 / 0
93600 (2am day 2) / 1 / 3 / 550

/* Customer ID 2, 2am day 2 */
93600 / 2 / 1 / 80
93600 / 2 / 2 / 150
93600 / 2 / 3 / 300

... (repeat for 30,000 customers)

Let's assume all the other VALUE columns are 0 for the remainder of the day since the system went down and no one could use their phones after 2am. I want my indexed view to record the value column aggregated per day, per customerID and TypeID.

Sample would be:

172800 (Day 3 midnight) / 1 / 1 / 350   -- Cust ID 1: all TypeID 1 aggregated over the past 24 hours
172800 (Day 3 midnight) / 1 / 2 / 100
172800 (Day 3 midnight) / 1 / 3 / 670
172800 (Day 3 midnight) / 2 / 1 / 180   -- Cust ID 2 now
172800 (Day 3 midnight) / 2 / 2 / 200
172800 (Day 3 midnight) / 2 / 3 / 610
-- Repeat by adding 86400 to the epoch to gather the summary data of the rows for the previous day.
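
For what it's worth, a hedged sketch of the view shape (my assumptions: Epoch is an integer, so Epoch / 86400 truncates to whole days, and SQL Server's indexed-view rules require COUNT_BIG(*) alongside GROUP BY):

CREATE VIEW [ixvw_AggTbl]
WITH SCHEMABINDING
AS
SELECT Epoch / 86400 AS EpochDay,   -- integer division: day number since 1970
       CustomerID, TypeID, ErrorID,
       SUM(Value) AS TotalValue,    -- if Value is nullable, SUM(ISNULL(Value, 0)) is required instead
       COUNT_BIG(*) AS RowCnt       -- mandatory in an indexed view that uses GROUP BY
FROM dbo.BaseTbl
GROUP BY Epoch / 86400, CustomerID, TypeID, ErrorID;
GO
CREATE UNIQUE CLUSTERED INDEX ix_AggTbl
    ON [ixvw_AggTbl] (EpochDay, CustomerID, TypeID, ErrorID);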

Audit Queries in SQL 2008 Including Start Stop Times

Posted: 09 Jul 2013 12:56 PM PDT

I have SQL Server 2008 Enterprise. I know how to set up auditing on the SQL server, but it looks like the audit logs don't include query start/stop times. We currently use a product called Splunk to do a lot of data analysis at our organization. I would like to be able to feed Splunk our machine performance data along with a SQL query audit log, to correlate high CPU usage with long-running queries and the like. However, I would need query start/stop times to be able to do that, and it doesn't look like the built-in auditing has that ability. Is there any other way to achieve this?
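
One stopgap, if true start/stop auditing proves unavailable: the plan cache already tracks per-query timing, which could be polled on a schedule and fed to Splunk. A hedged sketch using standard DMVs:

-- Not start/stop times, but cumulative timing per cached query plan.
SELECT TOP (20)
    qs.last_execution_time,
    qs.execution_count,
    qs.total_elapsed_time / 1000 AS total_elapsed_ms,  -- microseconds -> ms
    st.text AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_elapsed_time DESC;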

Why does Log Shipping .TRN file copy just stop

Posted: 09 Jul 2013 12:57 PM PDT

I apologize in advance for a long post, but I have had it up to here with this error: having to delete the LS configuration and start it over for any DB that gets this error.

I have LS set up on 3 Win2k8R2 servers (primary, secondary, monitor) with 100 databases' transaction logs backed up and shipped from the primary to the secondary and monitored by the monitor. Backups and copies run every 15 minutes, and the ones older than 24 hours are deleted. Some DBs are very active and some not so much, but all are shipped regardless, for uniformity's sake (basically to make the secondary server identical to the primary). Some DBs are for SP2010 and the majority for an in-house app.

The issue is that after all the LS configs are set up, all works well for about 3 to 4 days. Then I go to the Transaction LS Status report on the secondary and see that, at random, some LS jobs have an Alert Status because the time since the last copy is over 45 minutes, so no restore has occurred. This seems random, and the only errors I see are from an SP2010 DB (WebAnalyticsServiceApplication_ReportingDB_77a60938_##########), which I believe is a reports DB that gets created weekly; LS cannot figure out which is the last copy to back up or to restore. I posted here regarding that and I have yet to find a permanent solution. For my main error (time since last copy) I have not seen anything that could have caused it, and I don't get any messages (even though some alert statuses have been ignored for 3 days). Anyway, I would really appreciate any input on understanding what's causing this and how I could fix it. Thanks.

How to determine master in mysql master-slave

Posted: 09 Jul 2013 09:07 PM PDT

I am setting up MySQL master-slave replication and I am trying to figure out how to handle the failover situation where I promote the slave to master (in the event that the master goes down).

My application server needs to direct all writes to the current master, but I cannot use server level HA between the master and slave (heartbeat, keepalived) since the two db servers are on completely different subnets in different physical locations.

I think this is something that I need to handle at the application level. I can query the two servers and ask which one is a master, then perform all queries to that one.

Is there a query in MySQL to see if the current server is a master in a master-slave replica?
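
For reference, a hedged sketch of checks that are commonly used (as far as I know there is no single built-in "am I the master?" flag):

-- On a replica this returns a row; on a master it returns an empty set.
SHOW SLAVE STATUS;

-- A master lists connected replicas here (only those started with --report-host):
SHOW SLAVE HOSTS;

-- Many failover schemes also keep replicas read-only and test this flag:
SELECT @@global.read_only;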

MySQL 5.6 Delayed Replication - cancel delay questions

Posted: 09 Jul 2013 02:13 PM PDT

I'm using MySQL 5.6 master/slave replication, with the delay set to 2 hours.

My questions are:

Q1. What is the proper way to cancel the delay at the slave - i.e. "roll forward" all the changes executed at the master?

Q2. Let's say the master became totally unavailable. What is the proper way to cancel the delay and "roll forward" the slave so that it applies all changes from the master?

I've tried:

STOP SLAVE;
CHANGE MASTER TO MASTER_DELAY = 0;
START SLAVE;

(At this moment the master is still unavailable, and the slave IO thread status is Connecting.) After the steps listed above, the relay log bin files are deleted at the slave host and the delay value becomes 0, but the changes from the master are NOT applied...
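
One variant that may avoid the relay-log purge (my reading: CHANGE MASTER TO discards relay logs unless an explicit RELAY_LOG_FILE/RELAY_LOG_POS is given; the file/position values below are hypothetical and should come from SHOW SLAVE STATUS):

STOP SLAVE;
CHANGE MASTER TO
    MASTER_DELAY   = 0,
    RELAY_LOG_FILE = 'relay-bin.000123',  -- hypothetical: use Relay_Log_File
    RELAY_LOG_POS  = 4;                   -- hypothetical: use Relay_Log_Pos
START SLAVE SQL_THREAD;                   -- apply what is already fetched

With the master down, only the SQL thread can make progress anyway; it should drain whatever is already sitting in the relay log.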

If anybody could please provide some useful tips, I'd very much appreciate it.

Best regards, Avi Vainshtein

Oracle OEM Database Backup Failure

Posted: 09 Jul 2013 02:19 PM PDT

I am trying to back up an Oracle database from OEM, but upon completion the job report says the job failed with the following error:

RMAN-03002: failure of backup command at 07/09/2013 10:26:52
RMAN-06059: expected archived log not found, loss of archived log compromises recoverability
ORA-19625: error identifying file C:\APP\RM\FLASH_RECOVERY_AREA\RONNIE\ARCHIVELOG\2013_06_10\O1_MF_1_1508_8VDHNOLY_.ARC
ORA-27041: unable to open file
OSD-04002: unable to open file

A DBA set this database up but he is not currently available. I have tried running a crosscheck and a delete expired in RMAN but experienced the same problem when trying again.

The path it is looking in: C:\APP\RM\FLASH_RECOVERY_AREA\RONNIE\ARCHIVELOG\2013_06_10\O1_MF_1_1508_8VDHNOLY_.ARC

...does not exist on the server, but it did exist on an original machine from which the DBA copied the database. The path containing the archive log now is:

C:\APP\CS\FLASH_RECOVERY_AREA\RONNIE\ARCHIVELOG\2013_06_10\O1_MF_1_1508_8VDHNOLY_.ARC

Can anyone help?

Thanks
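
A hedged sketch of the RMAN cleanup that typically addresses RMAN-06059 when archived logs have moved (the CATALOG path below is the new location quoted above):

CROSSCHECK ARCHIVELOG ALL;
DELETE EXPIRED ARCHIVELOG ALL;
-- Re-register the logs at their new location:
CATALOG START WITH 'C:\APP\CS\FLASH_RECOVERY_AREA\RONNIE\ARCHIVELOG';

If the crosscheck alone didn't help before, the missing piece may be cataloging the relocated files so RMAN stops looking under the old path.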

ORA-40341: Access violation on model storage object in Oracle?

Posted: 09 Jul 2013 08:31 PM PDT

While I was trying to drop a table, it threw the following error in Oracle SQL Developer:

ORA-40341: Access violation on model storage object  

The tables are temporary tables created while pushing into the database using Oracle R Enterprise. The names of the tables are: DM$PRORE$21_473, DM$PGORE$21_473, ...

I need to drop all these tables, as they occupy a large amount of space in my database. While googling, I found this link, but it provides no clues toward a solution.
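
The DM$ prefix suggests these are the storage tables behind a data-mining model, which would explain the access violation on a plain DROP TABLE. A hedged sketch of the supported route (the model name is a placeholder):

-- List mining models owned by the current schema:
SELECT model_name FROM user_mining_models;

-- Dropping the model should remove its DM$ storage tables along with it:
BEGIN
  DBMS_DATA_MINING.DROP_MODEL('MY_MODEL');
END;
/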

Cumulative Game Score SQL

Posted: 09 Jul 2013 08:11 PM PDT

I have developed a game recently and the database is running on MSSQL.

Here is my database structure

Table : Player

PlayerID uniqueIdentifier (PK)
PlayerName nvarchar

Table : GameResult

ID bigint (PK - Auto Increment)
PlayerID uniqueIdentifier (FK)
DateCreated Datetime
Score int
TimeTaken bigint
PuzzleID int

I have written SQL listing the top 50 players, sorted by highest score (DESC) and time taken (ASC). The SQL below gets the result for each puzzle ID. I'm not sure if it is 100% right, but I believe it is correct.

;WITH ResultSet (PlayerID, maxScore, minTime, playedDate)
AS
(
  SELECT TOP 50 PlayerID, MAX(score) AS maxScore, MIN(timetaken) AS minTime, MIN(datecreated) AS playedDate
  FROM gameresult
  WHERE puzzleID = @PuzzleID
  GROUP BY PlayerID
  ORDER BY maxScore DESC, minTime ASC, playedDate ASC
)
SELECT RSP.[PlayerID], RSP.[PlayerName], RSA.maxScore, RSA.minTime, RSA.playedDate
FROM ResultSet RSA
INNER JOIN Player RSP WITH (NOLOCK)
    ON RSA.PlayerID = RSP.PlayerID
ORDER BY
    maxScore DESC,
    minTime ASC,
    playedDate ASC

Question

1) I need to modify the SQL to do a cumulative rank over 3 puzzle IDs. For example, puzzles 1, 2 and 3, sorted by the highest total score (DESC) and total time taken (ASC).

2) I also need an overall score population across all of the possible puzzles 1 to 7.

3) Each player is only allowed to appear on the list once. Whoever played first and reached the highest score first is ranked 1st.

I tried using a CTE with UNION but the SQL statement doesn't work.

I hope gurus here can help me out on this. Much appreciated.
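
A hedged sketch for question 1, assuming the tables above: aggregate per player across the chosen puzzles first, then rank once, which also keeps each player to a single row (question 3). Swapping the IN list for puzzles 1 to 7 would cover question 2.

;WITH PerPlayer AS (
    SELECT PlayerID,
           SUM(Score)       AS totalScore,
           SUM(TimeTaken)   AS totalTime,
           MIN(DateCreated) AS firstPlayed
    FROM gameresult
    WHERE PuzzleID IN (1, 2, 3)
    GROUP BY PlayerID          -- one row per player by construction
)
SELECT TOP 50 p.PlayerID, p.PlayerName,
       pp.totalScore, pp.totalTime, pp.firstPlayed
FROM PerPlayer pp
INNER JOIN Player p ON p.PlayerID = pp.PlayerID
ORDER BY pp.totalScore DESC, pp.totalTime ASC, pp.firstPlayed ASC;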

TOAST Table Growth Out of Control - FULLVAC Does Nothing

Posted: 09 Jul 2013 12:51 PM PDT

Recently, I've had a PostgreSQL 8.2.11 server upgraded to 8.4 in order to take advantage of autovacuum features and be in line with the 30-ish other PGSQL servers. This was done by a separate IT group that administers the hardware, so we don't have much choice on any other upgrades (we won't see 9+ for a while). The server exists in a very closed environment (isolated network, limited root privileges) and runs on RHEL5.5 (i686). After the upgrade, the database has constantly been growing to the tune of 5-6 GB a day. Normally, the database, as a whole, is ~20 GB; currently, it is ~89 GB. We have a couple of other servers which run equivalent databases and actually synchronize the records to each other via a 3rd-party application (one I do not have access to the inner workings of). The other databases are ~20 GB, as they should be.

Running the following SQL, it's fairly obvious there's an issue with a particular table, and, more specifically, its TOAST table.

SELECT nspname || '.' || relname AS "relation",
    pg_size_pretty(pg_relation_size(C.oid)) AS "size"
  FROM pg_class C
  LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
  WHERE nspname NOT IN ('pg_catalog', 'information_schema')
  ORDER BY pg_relation_size(C.oid) DESC
  LIMIT 20;

Which produces:

          relation          |  size
----------------------------+---------
 pg_toast.pg_toast_16874    | 89 GB
 fews00.warmstates          | 1095 MB
 ...
(20 rows)

This TOAST table is for a table called "timeseries" which saves large records of blobbed data. A SUM(LENGTH(blob)/1024./1024.) of all the records in timeseries yields ~16GB for that column. There should be no reason this table's TOAST table should be as large as it is.

I've performed a VACUUM FULL VERBOSE ANALYZE timeseries, and the vacuum runs to completion with no errors.

INFO: vacuuming "pg_toast.pg_toast_16874"
INFO: "pg_toast_16874": found 22483 removable, 10475318 nonremovable row versions in 10448587 pages
DETAIL: 0 dead row versions cannot be removed yet.
Nonremovable row versions range from 37 to 2036 bytes long.
There were 20121422 unused item pointers.
Total free space (including removable row versions) is 0 bytes. 4944885 pages are or will become empty, including 0 at the end of the table. 4944885 pages containing 0 free bytes are potential move destinations.
CPU 75.31s/29.59u sec elapsed 877.79 sec.
INFO: index "pg_toast_16874_index" now contains 10475318 row versions in 179931 pages
DETAIL: 23884 index row versions were removed.
101623 index pages have been deleted, 101623 are currently reusable.
CPU 1.35s/2.46u sec elapsed 21.07 sec.

I REINDEXed the table, which freed some space (~1 GB). I can't CLUSTER the table as there isn't enough space on disk for the process, and I'm waiting to rebuild the table entirely, as I'd like to find out why it is so much bigger than the equivalent databases we have.

I ran the "Show Database Bloat" query from the PostgreSQL wiki here, and this is what I get:

current_database | schemaname | tablename                  | tbloat | wastedbytes | iname                      | ibloat | wastedibytes
-----------------+------------+----------------------------+--------+-------------+----------------------------+--------+--------------
ptrdb04          | fews00     | timeseries                 |    1.0 |           0 | idx_timeseries_synchlevel  |    0.0 |            0
ptrdb04          | fews00     | timeseries                 |    1.0 |           0 | idx_timeseries_localavail  |    0.0 |            0
ptrdb04          | fews00     | timeseries                 |    1.0 |           0 | idx_timeseries_expirytime  |    0.0 |            0
ptrdb04          | fews00     | timeseries                 |    1.0 |           0 | idx_timeseries_expiry_null |    0.0 |            0
ptrdb04          | fews00     | timeseries                 |    1.0 |           0 | uniq_localintid            |    0.0 |            0
ptrdb04          | fews00     | timeseries                 |    1.0 |           0 | pk_timeseries              |    0.1 |            0
ptrdb04          | fews00     | idx_timeseries_expiry_null |    0.6 |           0 | ?                          |    0.0 |            0

It looks like the database doesn't consider this space "empty" at all, but I just don't see where all the disk space is coming from!

I suspect that this database server is deciding to use 4-5x as much disk space to save the same records pulled from the other data servers. My question is this: Is there a way I can verify the physical disk size of a row? I'd like to compare the size of one row on this database to another "healthy" database.
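
For the per-row check, the built-ins pg_column_size() and octet_length() report the stored (possibly compressed, TOASTed) size versus the raw length — a sketch against the table above, assuming the blob column name used earlier:

SELECT ctid,
       pg_column_size(blob) AS bytes_on_disk,   -- after compression/TOAST
       octet_length(blob)   AS bytes_raw
FROM fews00.timeseries
ORDER BY pg_column_size(blob) DESC
LIMIT 10;

Running the same query on a "healthy" server would show whether individual rows really take more space here.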

Thanks for any help you can provide!

UPDATE 1

I ended up rebuilding the table from a dumped schema due to its size (I couldn't leave it alone for another day). After synchronizing the data via the software sync process, the TOAST table was ~35 GB; however, I could only account for ~9 GB of it from the blob column, which should be the longest in terms of values. Not sure where the other 26 GB is coming from. CLUSTERed, VACUUM FULLed, and REINDEXed to no avail. The postgresql.conf files on the local and remote data servers are exactly the same. Is there any reason this database might be trying to store each record with a larger footprint on disk?

UPDATE 2 - Fixed

I finally decided to just completely rebuild the database from the ground up, even going as far as reinstalling the PostgreSQL84 packages on the system. The database path was reinitialized and the tablespaces wiped clean. The 3rd-party software synchronization process repopulated the tables, and the final size came out to ~12 GB! Unfortunately, this in no way helps to pinpoint the exact source of the issue. I'm going to watch it for a day or two, see if there are any major differences in how the revitalized database handles the TOAST table, and post those results here.

Relation Size

ptrdb04=> SELECT nspname || '.' || relname AS "relation",
ptrdb04->     pg_size_pretty(pg_relation_size(C.oid)) AS "size"
ptrdb04->   FROM pg_class C
ptrdb04->   LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
ptrdb04->   WHERE nspname NOT IN ('pg_catalog', 'information_schema')
ptrdb04->   ORDER BY pg_relation_size(C.oid) DESC
ptrdb04->   LIMIT 2;
        relation         |  size
-------------------------+---------
 pg_toast.pg_toast_17269 | 18 GB
 fews00.warmstates       | 1224 MB
(2 rows)

VACUUM VERBOSE ANALYZE timeseries;

INFO:  "timeseries": found 12699 removable, 681961 nonremovable row versions in 58130 out of 68382 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 105847 unused item pointers.
0 pages are entirely empty.
CPU 0.83s/2.08u sec elapsed 33.36 sec.
INFO:  vacuuming "pg_toast.pg_toast_17269"
INFO:  scanned index "pg_toast_17269_index" to remove 2055849 row versions
DETAIL:  CPU 0.37s/2.92u sec elapsed 13.29 sec.
INFO:  "pg_toast_17269": removed 2055849 row versions in 518543 pages
DETAIL:  CPU 8.60s/3.21u sec elapsed 358.42 sec.
INFO:  index "pg_toast_17269_index" now contains 7346902 row versions in 36786 pages
DETAIL:  2055849 index row versions were removed.
10410 index pages have been deleted, 5124 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.01 sec.
INFO:  "pg_toast_17269": found 1286128 removable, 2993389 nonremovable row versions in 1257871 out of 2328079 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 18847 unused item pointers.
0 pages are entirely empty.
CPU 26.56s/13.04u sec elapsed 714.97 sec.
INFO:  analyzing "fews00.timeseries"
INFO:  "timeseries": scanned 30000 of 68382 pages, containing 360192 live rows and 0 dead rows; 30000 rows in sample, 821022 estimated total rows

The only noticeable difference after the rebuild (other than disk usage) is

INFO:  "pg_toast_17269": found 1286128 removable, 2993389 nonremovable row versions
as @CraigRinger mentioned in a comment. The nonremovable row count is much smaller than before.

New question: Can other tables affect the size of another table (via foreign keys and such)? Rebuilding the table did nothing, yet rebuilding the whole database fixed the problem.

mysqldump freezing on a specific table

Posted: 09 Jul 2013 12:57 PM PDT

I dumped a database (sys_data) which is very big (800 GB, all data in one ibdata file) from a remote server. But the dump blocked at one table (tb_trade_376). My dump command:

mysqldump -uxx -pxx -h192.168.1.xxx --single-transaction sys_data > /home/sys_data.sql

When the dump was blocked:

show processlist;

5306612 | root | 192.168.1.161:57180 | sys_data | Query | 23955 | Sending data | SELECT /*!40001 SQL_NO_CACHE */ * FROM `tb_trade_376`

On the other hand, I can dump the table tb_trade_376 successfully if I dump that table alone:

mysqldump -uxx -pxx -h192.168.1.xxx \
  --single-transaction sys_data tb_trade_376 > /home/tb_trade_376.sql

This works well and quickly! The table tb_trade_376 has about 700,000-800,000 rows.

What is the next step in investigating why I can't dump the whole database? How can I make it work?

Longest prefix search in Oracle

Posted: 09 Jul 2013 08:38 PM PDT

I have a list of phone number prefixes defined for a large number of zones (in the query, defined by gvcode and cgi). I need to efficiently find the longest prefix that matches a given number PHONE_NR.

I use an inverted LIKE clause on the field digits (which contains prefixes in the form +48%, +49%, +1%, +1232% and so on).

Therefore I can't use a normal index on that field.

I managed to get a substantial improvement by using an IOT on the gvcode and cgi fields (the first two columns of the primary key). I also looked at some Oracle Text indexes but can't find one that will match a longer input against a shorter prefix in the table.

Is there any other way to perform such a search that is faster than this approach?

Here is the query, which gives a list of all matched prefixes (I sort it afterwards on digits length):

SELECT t.gvcode, t.digits
FROM NUMBERS t
WHERE t.gvcode = ZONE_SET_CODE
  AND t.cgi = cgi_f
  AND (PHONE_NR LIKE t.digits)
ORDER BY LENGTH(digits) DESC
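
One alternative worth sketching (my assumption: the stored digits are literal prefixes apart from the trailing %): instead of running LIKE against every stored prefix, generate the prefixes of PHONE_NR itself and probe with equality, which an ordinary or function-based index can serve:

SELECT *
FROM (
  SELECT t.gvcode, t.digits
  FROM numbers t
  WHERE t.gvcode = :zone_set_code
    AND t.cgi    = :cgi_f
    -- RTRIM strips the wildcard; a function-based index on
    -- RTRIM(digits, '%') would keep this probe indexable.
    AND RTRIM(t.digits, '%') IN (
      SELECT SUBSTR(:phone_nr, 1, LEVEL)
      FROM dual
      CONNECT BY LEVEL <= LENGTH(:phone_nr)
    )
  ORDER BY LENGTH(t.digits) DESC
)
WHERE ROWNUM = 1;

The inner subquery produces at most LENGTH(:phone_nr) candidate prefixes, so the work is bounded by the number's length rather than the size of the prefix table.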

In place upgrade from MySQL 5.5 to 5.6.11 removes all users from user table

Posted: 09 Jul 2013 01:52 PM PDT

On Windows, I upgraded from 5.1 to 5.5 no problem.

  1. Copied my 5.1 data folder into my 5.5 instance
  2. Started mysqld skipping grants
  3. Ran mysql_upgrade

All good, but going from 5.5 to 5.6:

  1. Copied 5.5 data folder to 5.6 instance
  2. Started mysqld skipping grants
  3. Ran mysql_upgrade

but I get:

C:\Users\QAdmin>mysql_upgrade
Looking for 'mysql.exe' as: C:\Program Files\MySQL\MySQL Server 5.6\bin\mysql.exe
Looking for 'mysqlcheck.exe' as: C:\Program Files\MySQL\MySQL Server 5.6\bin\mysqlcheck.exe
Running 'mysqlcheck' with connection arguments: "--port=3306"
Running 'mysqlcheck' with connection arguments: "--port=3306"
mysql.user_info                                    OK
Running 'mysql_fix_privilege_tables'...
Running 'mysqlcheck' with connection arguments: "--port=3306"
C:\Program Files\MySQL\MySQL Server 5.6\bin\mysqlcheck.exe: Got error: 1130: Host 'localhost' is not allowed to connect to this MySQL server when trying to connect
FATAL ERROR: Upgrade failed

If I look at the mysql.user table it is completely empty.

  • Has anyone seen this or know what is going on?
  • During the "upgrade" the user table gets erased and when it tries to connect it can't?

Thanks.

SSRS 2008 R2 setup issue

Posted: 09 Jul 2013 04:52 PM PDT

I have installed SSRS 2008 R2 on my desktop and on a server. When I hit the Reports link on my desktop,

http://mypc/Reports_mypc/Pages/Folder.aspx  

all I get to see is the home page of the desktop SSRS instance.

I can't create a new folder or data source or anything of the sort.

On the server where I am attempting to set up SSRS 2008 R2, all I get is a white screen showing the virtual folder name in large fonts, followed by the version of the Reporting Services server on the next line. This leaves me no clues as to what needs to be fixed. On both PCs I am using the credentials of the local admin. Any clues on what needs to be fixed?

Get all the databases in db2 through the web

Posted: 09 Jul 2013 02:17 PM PDT

I would like to get all the databases available on a DB2 instance from a C# application. The CLP command for this is LIST DB DIRECTORY.

How can I fire this command from C#? Is there a SELECT statement I can use instead to get all the databases on one DB2 instance?

Is there an effective way to create extended event data that includes stacktraces in SQL Server on a local db instance?

Posted: 09 Jul 2013 06:43 PM PDT

I read Paul Randal's article on getting the symbols for SQL Server. I have a theory that I could get Extended Events to read the symbols from the symbol server cache directory by setting the _NT_SYMBOL_PATH environment variable. The only problem is I can't create the right conditions for any data to show up when I create an Extended Events session with sqlos.spinlock_backoff. I tried the script in this PDF.

I tried HammerDB and I tried this script with AdventureWorks. I don't get any spinlocks. I tried setting max server memory at 256 MB (my default setting on my laptop) and I tried it at 2 GB. The Extended Events session was running, but no data showed up.
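
For comparison, this is the shape of session I would expect to capture backoffs with callstacks — a hedged sketch following the pattern in the spinlock whitepaper (the session name is mine; target parameters may vary by build):

CREATE EVENT SESSION spinlock_backoff_watch ON SERVER
ADD EVENT sqlos.spinlock_backoff
    (ACTION (package0.callstack))
ADD TARGET package0.asynchronous_bucketizer (
    SET filtering_event_name = 'sqlos.spinlock_backoff',
        source_type = 1,               -- bucket on an action
        source      = 'package0.callstack')
WITH (MAX_MEMORY = 50MB, MEMORY_PARTITION_MODE = PER_NODE);

ALTER EVENT SESSION spinlock_backoff_watch ON SERVER STATE = START;

Worth noting: spinlock_backoff fires only when a spinlock actually collides, so a mostly idle laptop may genuinely never produce an event, no matter how the session is configured.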

Database design for an E-commerce website

Posted: 09 Jul 2013 06:52 PM PDT

I am new to database design. I am designing a database for an e-commerce website. There are a lot of products to be updated, but while designing the product specification table I do not understand whether I need to specify all the specifications in one table or use different tables for different products.

For example, consider the products mobile and book, each having unique specifications (such as color, size, cost and model for a mobile; title, ISBN, author, cost, year_of_publication, etc. for a book). If there were only a few products then it would be possible for me to design this, but with thousands of products it takes a lot of time.

Can anyone tell me how to design my database to manage this situation?
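
One common pattern worth sketching (my assumption of the requirements): keep shared fields in a product table and push type-specific specifications into attribute/value rows, so new product types don't require new tables:

CREATE TABLE product (
  product_id INT AUTO_INCREMENT PRIMARY KEY,
  name       VARCHAR(255) NOT NULL,
  cost       DECIMAL(10,2)
);

CREATE TABLE product_attribute (
  product_id INT NOT NULL,
  attr_name  VARCHAR(64)  NOT NULL,   -- e.g. 'color', 'ISBN', 'author'
  attr_value VARCHAR(255),
  PRIMARY KEY (product_id, attr_name),
  FOREIGN KEY (product_id) REFERENCES product (product_id)
);

The trade-off: this entity-attribute-value shape weakens per-attribute typing and makes some queries clumsier, so many designs use it only for the long tail of specifications and keep the common fields as real columns.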

Shrink database operation in maintenance plan failed

Posted: 09 Jul 2013 11:17 AM PDT

See updates below

I have a nightly DB maintenance plan and it's causing some strange behaviour.

About 1 out of 10 runs, it causes the DB to stop responding to queries. It takes about 5 minutes for the DBM plan to complete, at which point the DB starts responding again.

The error logs don't seem to point to any problem at all:

2013-01-13 00:00:56.73 spid23s     This instance of SQL Server has been using a process ID of 1488 since 12/28/2012 10:06:36 AM (local) 12/28/2012 6:06:36 PM (UTC). This is an informational message only; no user action is required.
2013-01-13 22:00:06.07 spid60      Configuration option 'user options' changed from 0 to 0. Run the RECONFIGURE statement to install.
2013-01-13 22:00:06.07 spid60      FILESTREAM: effective level = 0, configured level = 0, file system access share name = 'MSSQLSERVER'.
2013-01-13 22:01:03.78 spid64      DBCC CHECKDB (RackAttackNorthAmerica) WITH no_infomsgs executed by sa found 0 errors and repaired 0 errors. Elapsed time: 0 hours 0 minutes 57 seconds.  Internal database snapshot has split point LSN = 0000942b:000000d9:0001 and first LSN = 0000942b:000000d8:0001.
2013-01-13 22:02:33.13 spid80      I/O is frozen on database model. No user action is required. However, if I/O is not resumed promptly, you could cancel the backup.
2013-01-13 22:02:33.13 spid81      I/O is frozen on database msdb. No user action is required. However, if I/O is not resumed promptly, you could cancel the backup.
2013-01-13 22:02:33.13 spid82      I/O is frozen on database RackAttackNorthAmerica. No user action is required. However, if I/O is not resumed promptly, you could cancel the backup.
2013-01-13 22:02:34.13 spid83      I/O is frozen on database master. No user action is required. However, if I/O is not resumed promptly, you could cancel the backup.
2013-01-13 22:02:34.48 spid83      I/O was resumed on database master. No user action is required.
2013-01-13 22:02:34.48 spid80      I/O was resumed on database model. No user action is required.
2013-01-13 22:02:34.48 spid81      I/O was resumed on database msdb. No user action is required.
2013-01-13 22:02:34.70 spid82      I/O was resumed on database RackAttackNorthAmerica. No user action is required.
2013-01-13 22:02:35.07 Backup      Database backed up. Database: master, creation date(time): 2012/12/28(10:05:56), pages dumped: 379, first LSN: 685:336:83, last LSN: 685:384:1, number of dump devices: 1, device information: (FILE=1, TYPE=VIRTUAL_DEVICE: {'{6E5AF82F-6BBD-4343-9DA3-286FD7EA5C0E}4'}). This is an informational message only. No user action is required.
2013-01-13 22:02:35.56 Backup      Database backed up. Database: model, creation date(time): 2003/04/08(09:13:36), pages dumped: 170, first LSN: 101:1440:37, last LSN: 101:1464:1, number of dump devices: 1, device information: (FILE=1, TYPE=VIRTUAL_DEVICE: {'{6E5AF82F-6BBD-4343-9DA3-286FD7EA5C0E}1'}). This is an informational message only. No user action is required.
2013-01-13 22:02:35.69 Backup      Database backed up. Database: msdb, creation date(time): 2008/07/09(16:46:27), pages dumped: 1948, first LSN: 1647:152:131, last LSN: 1647:216:1, number of dump devices: 1, device information: (FILE=1, TYPE=VIRTUAL_DEVICE: {'{6E5AF82F-6BBD-4343-9DA3-286FD7EA5C0E}2'}). This is an informational message only. No user action is required.
2013-01-13 22:02:35.77 Backup      Database backed up. Database: RackAttackNorthAmerica, creation date(time): 2011/01/21(10:37:38), pages dumped: 193713, first LSN: 37940:102:1, last LSN: 37946:493:1, number of dump devices: 1, device information: (FILE=1, TYPE=VIRTUAL_DEVICE: {'{6E5AF82F-6BBD-4343-9DA3-286FD7EA5C0E}3'}). This is an informational message only. No user action is required.
2013-01-13 22:06:48.94 Backup      Database backed up. Database: RackAttackNorthAmerica, creation date(time): 2011/01/21(10:37:38), pages dumped: 168202, first LSN: 37974:59:36, last LSN: 37974:83:1, number of dump devices: 1, device information: (FILE=1, TYPE=DISK: {'D:\DB_Backups\RackAttackNorthAmerica_backup_2013_01_13_220612_2311062.bak'}). This is an informational message only. No user action is required.

I'm not quite sure what info is needed to diagnose the problem.

Please let me know what you need and I'll do my best to get it.

I managed to find this error, not sure if it helps.

Executing the query "DBCC SHRINKDATABASE(N'RackAttackNorthAmerica', 10,..." failed with the following error: "Backup, file manipulation operations (such as ALTER DATABASE ADD FILE) and encryption changes on a database must be serialized. Reissue the statement after the current backup or file manipulation operation is completed.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.  

dm_os_waiting_tasks

waiting_task_address  session_id  exec_context_id  wait_duration_ms  wait_type                    resource_address    blocking_task_address  blocking_session_id  blocking_exec_context_id  resource_description
0x0000000003FAE508    3           0                4907732           XE_DISPATCHER_WAIT           NULL                NULL                   NULL                 NULL                      NULL
0x0000000003FAE988    15          0                11555964          BROKER_EVENTHANDLER          NULL                NULL                   NULL                 NULL                      NULL
0x00000000040E8748    5           0                33                LAZYWRITER_SLEEP             NULL                NULL                   NULL                 NULL                      NULL
0x0000000004102508    6           0                9962              LOGMGR_QUEUE                 0x0000000003413528  NULL                   NULL                 NULL                      NULL
0x0000000004102748    17          0                1480291018        BROKER_TRANSMITTER           NULL                NULL                   NULL                 NULL                      NULL
0x000000000411C508    4           0                199               REQUEST_FOR_DEADLOCK_SEARCH  0x00000000089A01F8  NULL                   NULL                 NULL                      NULL
0x000000000411C748    10          0                741               SLEEP_TASK                   NULL                NULL                   NULL                 NULL                      NULL
0x0000000004136508    7           0                1480284374        KSOURCE_WAKEUP               NULL                NULL                   NULL                 NULL                      NULL
0x0000000004150508    12          0                55058             CHECKPOINT_QUEUE             0x00000000034135A0  NULL                   NULL                 NULL                      NULL
0x0000000004150BC8    NULL        NULL             12814             FT_IFTS_SCHEDULER_IDLE_WAIT  NULL                NULL                   NULL                 NULL                      NULL
0x000000000416A508    11          0                1480314263        ONDEMAND_TASK_QUEUE          0x00000000033F3BE0  NULL                   NULL                 NULL                      NULL
0x000000000416A988    16          0                1480291026        BROKER_TRANSMITTER           NULL                NULL                   NULL                 NULL                      NULL
0x0000000004904748    2           0                17621             XE_TIMER_EVENT               NULL                NULL                   NULL                 NULL                      NULL
0x00000000804222C8    9           0                705               SQLTRACE_BUFFER_FLUSH        NULL                NULL                   NULL                 NULL                      NULL

Update:

DB shrink removed. Still hanging. I'm 99% sure it's the rebuild index task. Here are a few example lines from the generated SQL:

ALTER INDEX [childCategories] ON [dbo].[category] REBUILD PARTITION = ALL WITH ( PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = OFF, ONLINE = OFF, SORT_IN_TEMPDB = OFF )
ALTER INDEX [mfgID] ON [dbo].[category] REBUILD PARTITION = ALL WITH ( PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = OFF, ONLINE = OFF, SORT_IN_TEMPDB = OFF )

Update 2013-07-09: So I'm continuing to have problems with this maintenance plan. The shrink operation was removed a few months ago and the rebuild index task was modified. I'm continuing to have failures, and the web site driven by this DB continues to go down for a few minutes on each failure. I managed to dig up the job history, and it appears that the update statistics task is now failing. See log below:

Executed as user: WORKGROUP\WNN1106$. ...
Source: Update Statistics Task    Executing query "use [RackAttackNorthAmerica]".: 31% complete  End Progress
Progress: 2013-07-08 23:04:37.03    Source: Update Statistics Task    Executing query "UPDATE STATISTICS [dbo].[news]  WITH FULLSCAN".: 32% complete  End Progress
Progress: 2013-07-08 23:04:37.03    Source: Update Statistics Task    Executing query "use [RackAttackNorthAmerica]".: 33% complete  End Progress
Progress: 2013-07-08 23:04:42.65    Source: Update Statistics Task    Executing query "UPDATE STATISTICS [dbo].[orderItems]  WITH FULLSC...".: 33% complete  End Progress
Progress: 2013-07-08 23:04:42.65    Source: Update Statistics Task    Executing query "use [RackAttackNorthAmerica]".: 34% complete  End Progress
Progress: 2013-07-08 23:04:42.74    Source: Update Statistics Task    Executing query "UPDATE STATISTICS [dbo].[orderItemType]  WITH FUL...".: 35% complete  End Progress
Progress: 2013-07-08 23:04:42.74    Source: Update Statistics Task    Executing query "use [RackAttackNorthAmerica]".: 36% complete  End Progress
Progress: 2013-07-08 23:04:42.74    Source: Update Statistics Task    Executing query "UPDATE STATISTICS [dbo].[orderPaymentType]  WITH ...".: 37% complete  End Progress
Progress: 2013-07-08 23:04:42.74    Source: Update Statistics Task    Executing query "use [RackAttackNorthAmerica]".: 37% complete  End Progress
Progress: 2013-07-08 23:05:43.25    Source: Update Statistics Task    Executing query "UPDATE STATISTICS [dbo].[orders]  WITH FULLSCAN".: 38% complete  End Progress
Progress: 2013-07-08 23:05:43.25    Source: Update Statistics Task    Executing query "use [RackAttackNorthAmerica]".: 39% complete  End Progress
Progress: 2013-07-08 23:05:43.30    Source: Update Statistics Task    Executing query "UPDATE STATISTICS [dbo].[orders_deleted]  WITH FU...".: 40% complete  End Progress
Progress: 2013-07-08 23:05:43.30    Source: Update Statistics Task    Executing query "use [RackAttackNorthAmerica]".: 41% complete  End Progress
Progress: 2013-07-08 23:05:43.30    Source: Update Statistics Task    Executing query "UPDATE STATISTICS [dbo].[orderSource]  WITH FULLS...".: 41% complete  End Progress
Progress: 2013-07-08 23:05:43.30    Source: Update Statistics Task    Executing query "use [RackAttackNorthAmerica]".: 42% complete  End Progress
Progress: 2013-07-08 23:05:43.55    Source: Update Statistics Task    Executing query "UPDATE STATISTICS [dbo].[orderSplit]  WITH FULLSC...".: 43% complete  End Progress
Progress: 2013-07-08 23:05:43.55    Source: Update Statistics Task    Executing query "use [RackAttackNorthAmerica]".: 44% complete  End Progress
Progress: 2013-07-08 23:05:43.60    Source: Update Statistics Task    Executing query "UPDATE STATISTICS [dbo].[paymentProcessType]  WIT...".: 45% complete  End Progress
Progress: 2013-07-08 23:05:43.60    Source: Update Statistics Task    Executing query "use [RackAttackNorthAmerica]".: 45% complete  End Progress
Progress: 2013-07-08 23:05:43.64    Source: Update Statistics Task    Executing query "UPDATE STATISTICS [dbo].[priceHistory]  WITH FULL...".: 46% complete  End Progress
Progress: 2013-07-08 23:05:43.64    Source: Update Statistics Task    Executing query "use [RackAttackNorthAmerica]".: 47% complete  End Progress
Progress: 2013-07-08 23:05:43.71    Source: Update Statistics Task    Executing query "UPDATE STATISTICS [dbo].[priceUpdate]  WITH FULLS...".: 48% complete  End Progress
Progress: 2013-07-08 23:05:43.71    Source: Update Statistics Task    Executing query "use [RackAttackNorthAmerica]".: 49% complete  End Progress
Progress: 2013-07-08 23:05:43.75    Source: Update Statistics Task    Executing query "UPDATE STATISTICS [dbo].[priceUpdateCAD]  WITH FU...".: 50% complete  End Progress
Progress: 2013-07-08 23:0...  The package execution fa...  The step failed.

Should I not be doing an update statistics task in my nightly maintenance plan? Or is this pointing to a different issue?

Update 2013-07-09:

The maintenance plan logs to file.

Here is an entry from 2013-07-08:

Execute T-SQL Statement Task (WNN1106)
Execute TSQL on Local server connection
Execution time out: 120
Task start: 2013-07-08T23:01:10.
Task end: 2013-07-08T23:03:11.
Failed:(-1073548784) Executing the query "ALTER INDEX [email] ON [dbo].[customerFeedback] RE..." failed with the following error: "Timeout expired.  The timeout period elapsed prior to completion of the operation or the server is not responding.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.

Related SQL:

ALTER INDEX [email] ON [dbo].[customerFeedback] REBUILD PARTITION = ALL WITH ( PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = OFF, ONLINE = OFF, SORT_IN_TEMPDB = OFF )  

Here is an entry from 2013-07-04:

Failed:(-1073548784) Executing the query "ALTER INDEX [cEmail] ON [dbo].[orders] REBUILD PAR..." failed with the following error: "Timeout expired.  The timeout period elapsed prior to completion of the operation or the server is not responding.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.  

Related SQL:

ALTER INDEX [cEmail] ON [dbo].[orders] REBUILD PARTITION = ALL WITH ( PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = OFF, ONLINE = OFF, SORT_IN_TEMPDB = OFF )  

Here is an entry from 2013-07-03:

Failed:(-1073548784) Executing the query "ALTER INDEX [email] ON [dbo].[customerFeedback] RE..." failed with the following error: "Timeout expired.  The timeout period elapsed prior to completion of the operation or the server is not responding.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.  

Related SQL:

ALTER INDEX [email] ON [dbo].[customerFeedback] REBUILD PARTITION = ALL WITH ( PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = OFF, ONLINE = OFF, SORT_IN_TEMPDB = OFF )  

All of the maintenance plan tasks were generated by the GUI wizard tool. As you can see, they fail on a different step every time.

What should I do here?

Should I increase the timeout? Or will that just take longer to fail?

Should I modify the queries? ALLOW_ROW_LOCKS = OFF ? ONLINE = ON?

Should I remove the rebuild index task completely?

Should I remove the update statistics task completely?
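
For reference, a hedged rewrite of one of the failing statements (my assumptions: ONLINE = ON requires Enterprise edition, and allowing page locks generally helps rebuilds):

ALTER INDEX [cEmail] ON [dbo].[orders]
REBUILD PARTITION = ALL
WITH ( ONLINE = ON,            -- Enterprise only; keeps the table readable during the rebuild
       ALLOW_ROW_LOCKS = ON,
       ALLOW_PAGE_LOCKS = ON,
       SORT_IN_TEMPDB = ON );

Raising the task's 120-second execution timeout is probably needed either way; the entries above show the rebuilds timing out rather than erroring.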

Thanks for your input,

Tomas

Why would mysql "show global status" query be taking 15 minutes?

Posted: 09 Jul 2013 08:52 PM PDT

I'm reviewing the slow log, and on one of my slaves the average time for SHOW GLOBAL STATUS is 914s.

Any idea how to determine the cause of this?

Is there a way to export Oracle's UNDO?

Posted: 09 Jul 2013 07:52 PM PDT

I tried the exp utility to dump the whole database. It looks like this exports only the latest version of the data, skipping the undo log. Using flashback queries I see:

01466. 00000 -  "unable to read data - table definition has changed"
*Cause:    Query parsed after tbl (or index) change, and executed
           w/old snapshot

What I'm trying to do is capture DB changes and make a backup for later use, with the ability to flash back to a timestamp.

With an RMAN backup I have a similar situation:

ORA-01555: snapshot too old: rollback segment number 3 with name "_SYSSMU3_2472002983$" too small
01555. 00000 -  "snapshot too old: rollback segment number %s with name \"%s\" too small"
*Cause:    rollback records needed by a reader for consistent read are overwritten by other writers.

Update: I managed to do what I needed only by increasing undo retention and directly copying the data files, plus control file modification, on a cloned instance.
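
For context, the flashback window is bounded by undo retention, which is consistent with why increasing it worked — a hedged sketch of the two pieces (some_table is a placeholder):

-- Flashback query: reads past versions out of UNDO, so it only works
-- within the retention window.
SELECT * FROM some_table AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR);

-- Retention is a target in seconds (honored unconditionally only with
-- RETENTION GUARANTEE on the undo tablespace):
ALTER SYSTEM SET undo_retention = 86400;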

What is the difference between int(8) and int(5) in mysql?

Posted: 09 Jul 2013 11:52 AM PDT

I found out that if you have a field defined as INT(8) without ZEROFILL, it behaves exactly like INT(5).

In both cases the range is

−2,147,483,648 to 2,147,483,647, from −(2^31) to 2^31 − 1  

Or am I missing something?

I found this answer: http://dba.stackexchange.com/a/370/12923

The (5) represents the display width of the field. From the manual, it states:

The display width does not constrain the range of values that can be stored in the column. Nor does it prevent values wider than the column display width from being displayed correctly. For example, a column specified as SMALLINT(3) has the usual SMALLINT range of -32768 to 32767, and values outside the range permitted by three digits are displayed in full using more than three digits.

The display width, from what I can tell, can be used to left-pad numbers that are less than the defined width. So 00322, instead of 322. TBH, I've never used it.

But it doesn't affect the storage size of the column. An int will take up more space than a smallint.

So there seems to be no difference, then.
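
A quick demonstration of the only visible effect (my example, exercising ZEROFILL):

CREATE TABLE t (a INT(5) ZEROFILL, b INT(8) ZEROFILL, c INT(5));
INSERT INTO t VALUES (322, 322, 322);
SELECT * FROM t;
-- a: 00322    b: 00000322    c: 322
-- Storage is 4 bytes in every case; only the zero-padded display differs.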

MySQL table relations, inheritance or not?

Posted: 09 Jul 2013 05:52 PM PDT

I'm building a micro CMS, using MySQL as the RDBMS and the Doctrine ORM for mapping.

I would like to have two types of pages: Static Page and Blog Page. A static page would have page_url and page_content stored in the database. A blog page would have page_url, but no page_content; a blog would have posts, categories...

Let's say I have a route like this:

/{pageurl}  

This is a page, with a page URL that can be home, news, blog... That page can be either a Static page, in which case I would just print page_content, or a Blog page, in which case I would print the latest posts as content.

How should I relate these Static Page and Blog Page tables? Is this inheritance, since both are pages with their own URL but different content? Should I use inheritance, so that both Static and Blog page extend a Page that holds page_url? Or should I make another table, page_types, to store information about the available page types?
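
A hedged sketch of the class-table-inheritance option (table and column names are mine), which Doctrine can map directly as a JOINED inheritance hierarchy:

CREATE TABLE page (
  page_id   INT AUTO_INCREMENT PRIMARY KEY,
  page_url  VARCHAR(255) NOT NULL UNIQUE,
  page_type ENUM('static', 'blog') NOT NULL   -- discriminator column
);

CREATE TABLE static_page (
  page_id      INT PRIMARY KEY,
  page_content TEXT,
  FOREIGN KEY (page_id) REFERENCES page (page_id)
);

CREATE TABLE blog_page (
  page_id INT PRIMARY KEY,                    -- posts/categories hang off this
  FOREIGN KEY (page_id) REFERENCES page (page_id)
);

Routing /{pageurl} then means one lookup in page to learn the type, followed by a join to the matching subtype table.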

Temporarily Disable Sql Replication

Posted: 09 Jul 2013 03:37 PM PDT

Due to a business need, I may need to disable transactional replication in my environment (SQL 2000 -> SQL 2008). Functionally, I understand this to mean that I will need to drop subscriptions and articles. Is saving the creation scripts enough to restore replication to its original state once the conflicting need is addressed?

Thanks.
