Tuesday, July 23, 2013

[how to] table with item names as a composite key - joining with missing combinations



table with item names as a composite key - joining with missing combinations

Posted: 23 Jul 2013 08:59 PM PDT

I've got a table design with many levels of items, and each level has a name that lives in a separate table with a composite key linking to a language table.

i.e. Country, City, Street, Building, Floor, Suite, each with a 1-to-many relation to the next.

If a country has two languages (fr + en), the database will be polled for both, and a value is expected in both languages whether or not it exists.

I'd like to create a query (a view, ideally) that can combine all these rows. The problem I keep hitting is that I either get all variations of languages/names across each item, or I lose entries that do not have a value in that language, along with any subsequent (child) items.


language:     [id, name]
country:      [id]
countryName:  [languageid, countryid, name]
city:         [id, countryid]
cityName:     [languageid, cityid, name]
building:     [id, cityid]
buildingName: [languageid, buildingid, name]

etc...

What I have done is create a table that links all the levels (country, city, building, etc.) and only the cityName (which determines whether there is an alternate name).

Then I join the tables in a view and get all the names as well. I've added an expression that supplies the alternate value, which is something like this:

ISNULL(dbo.buildingNames.name,
       (SELECT TOP (1) name
        FROM dbo.buildingNames AS x
        WHERE (buildingId = dbo.buildings.id)))

It works OK, but it's getting a bit messy and slow. Using views, this approach requires each item to have its own separate name view, which relies on recreating the hierarchy to check which languages are needed.
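For illustration, a minimal sketch of the pattern I'm describing, following the table names used in the snippet above (per-country language filtering and the other levels are left out; the aliases are mine):

  SELECT b.id AS buildingId,
         l.id AS languageId,
         COALESCE(bn.name,
                  (SELECT TOP (1) name
                   FROM dbo.buildingNames
                   WHERE buildingId = b.id)) AS name    -- fall back to any existing name
  FROM dbo.buildings AS b
  CROSS JOIN dbo.languages AS l                         -- one row per requested language
  LEFT JOIN dbo.buildingNames AS bn
         ON bn.buildingId = b.id
        AND bn.languageId = l.id;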

The other problem related to this is that I have a location table that can use multiple items.

i.e. assume that the location is restricted to being on a floor or in a suite.

ballLocation: [ballid,floorid,suiteid]

If I have both filled out, I can use the suiteName view (suite with alternate names), which already has the floorName; but if I only provide the floorId, then I can't use the same view and have to use two separate ones.

Do I need to perform any tasks after Postgres goes into recovery mode?

Posted: 23 Jul 2013 06:51 PM PDT

For starters, I do not have any sort of automated failover in place.

In two scenarios, I'm unsure of the state of the database and what actions, if any, are required:

  1. My master server disappears off the face of the planet and I touch my_trigger_file, converting my standby into a master server.
  2. My master server crashes, then restarts in recovery mode.

My assumption is that the server in question in each of those scenarios will first be in recovery mode, will then finish recovering, and will finally be ready as a master server. In scenario 2 my assumption means things are back to normal, while in scenario 1 my assumption means the standby has no idea it was ever a standby and is now simply a master. Are these assumptions correct, or are there other actions that need to take place in either or both scenarios after recovery finishes (aside from creating a new slave in the case of scenario 1)?

Also, is there a notable technical difference between the state of a slave server after becoming a master vs a master server undergoing recovery after a crash, or as far as Postgres is concerned, are they both essentially master servers that just recovered from something?

Note: I'm using Postgres 9.2 with asynchronous streaming replication.
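Either way, a quick check for whether a server is still replaying WAL (as opposed to operating as a primary) is the built-in function below; a trivial sketch:

  SELECT pg_is_in_recovery();   -- true while in recovery (standby or crash recovery), false on a primary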

MSSQL Grouping with row value basis

Posted: 23 Jul 2013 06:00 PM PDT

I have a child table that is something like this:

[Mach Status Table]

| Mach Status | Start Time      | Duration |
+-------------+-----------------+----------+
| 1           | 2012-04-30 6:30 | 20.00    |
| 1           | 2012-04-30 6:50 | 50.00    |
| 2           | 2012-04-30 7:40 | 10.00    |
| 2           | 2012-04-30 7:50 | 10.00    |
| 1           | 2012-04-30 8:00 | 60.00    |
| 1           | 2012-04-30 9:00 | 10.00    |

I would like to get a result set like this, with one record for each consecutive status run and its end time. What is a good way to achieve this in MS SQL?

| Mach Status | Start Time      | End Time        |
+-------------+-----------------+-----------------+
| 1           | 2012-04-30 6:30 | 2012-04-30 7:40 |
| 2           | 2012-04-30 7:40 | 2012-04-30 8:00 |
| 1           | 2012-04-30 8:00 | 2012-04-30 9:10 |
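A hedged sketch of the usual "gaps and islands" approach, assuming the table is dbo.MachStatus with columns MachStatus, StartTime (datetime) and Duration in minutes:

  WITH numbered AS (
      SELECT MachStatus, StartTime, Duration,
             ROW_NUMBER() OVER (ORDER BY StartTime)
           - ROW_NUMBER() OVER (PARTITION BY MachStatus ORDER BY StartTime) AS grp
      FROM dbo.MachStatus
  )
  SELECT MachStatus,
         MIN(StartTime) AS StartTime,
         DATEADD(MINUTE, CAST(SUM(Duration) AS int), MIN(StartTime)) AS EndTime
  FROM numbered
  GROUP BY MachStatus, grp
  ORDER BY MIN(StartTime);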

Where in phpMyAdmin code can I change behavior of table statistics depending on database?

Posted: 23 Jul 2013 02:55 PM PDT

Specifically, I want to hack into the code and add some PHP code like this:

if ($database != 'information_schema') { // NEW if statement to exclude information_schema from table statistics
    // existing code that gets the table statistics for a db
}

And, of course, the reason for my question is obvious to anyone who has tried to work with the information_schema database in phpMyAdmin. Once the initial view is loaded, it's fine. Most queries on information_schema tables are just as fast as any. But the initial view can take minutes, or tens of minutes, to load, because SELECT COUNT(*) (and perhaps all aggregate functions) takes a very long time on those tables for some reason.

Note that I have already added this setting under the [mysqld] section of my.cnf:

innodb_stats_on_metadata = 0  

As suggested in this stack overflow answer.

It helped a lot, but it still takes at least a few minutes to load the database page in phpMyAdmin.
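For anyone trying the same thing: the variable is dynamic, so it can also be checked and applied at runtime without a restart (a small sketch):

  SHOW GLOBAL VARIABLES LIKE 'innodb_stats_on_metadata';
  SET GLOBAL innodb_stats_on_metadata = 0;   -- takes effect immediately, no restart needed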

Thanks for your help.

NOTE: I'm currently using command line to search through files but have not found the code snippet in question yet:

$ grep -lr "COUNT(*)" /usr/share/phpmyadmin

That's on an Ubuntu/php/mysql 5.5.28 testing server. phpMyAdmin version 3.4.10.1deb1

MySQL Memory Grows Over Time

Posted: 23 Jul 2013 05:58 PM PDT

I've got a server running a LAMP stack with 1GB RAM that is maxed out on RAM usage. What I'm observing is that directly after a reboot, usage drops to 25%, then slowly climbs over the day so that by the end of the day it's sitting at 100%. When it peaks, mysql is using 940MB according to top.

I've tried lowering many of the settings to reduce caching and buffer sizes (not ideal, I know, but I'm just trying to get the memory usage under control first; things can be dialed back in later).

Current /etc/my.cnf:

[mysqld]
# Basic settings
user = mysql
datadir = /var/lib/mysql
port = 3306
socket = /var/lib/mysql/mysql.sock

# Security settings
local-infile = 0
symbolic-links = 0

# Memory and cache settings
query_cache_type = 1
query_cache_size = 4M
thread_cache_size = 4
table_cache = 256
tmp_table_size = 8M
max_heap_table_size = 8M
join_buffer_size = 1M
key_buffer_size = 1M
max_connections = 10
wait_timeout = 300

# Innodb settings
innodb_buffer_pool_size = 16M
innodb_additional_mem_pool_size = 1M
innodb_log_buffer_size = 1M
innodb_thread_concurrency = 2

[mysqld_safe]
# Basic safe settings
log-error = /var/log/mysqld.log
pid-file = /var/run/mysqld/mysqld.pid

MySQLTuner.pl Output:

# ./mysqltuner.pl

 >>  MySQLTuner 1.2.0 - Major Hayden <major@mhtx.net>
 >>  Bug reports, feature requests, and downloads at http://mysqltuner.com/
 >>  Run with '--help' for additional options and output filtering

-------- General Statistics --------------------------------------------------
[--] Skipped version check for MySQLTuner script
[OK] Currently running supported MySQL version 5.5.30
[OK] Operating on 64-bit architecture

-------- Storage Engine Statistics -------------------------------------------
[--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
[--] Data in MyISAM tables: 51K (Tables: 109)
[--] Data in InnoDB tables: 15M (Tables: 387)
[--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
[!!] Total fragmented tables: 387

-------- Security Recommendations  -------------------------------------------
[OK] All database users have passwords assigned

-------- Performance Metrics -------------------------------------------------
[--] Up for: 2d 15h 14m 29s (1M q [4.411 qps], 71K conn, TX: 2B, RX: 78M)
[--] Reads / Writes: 62% / 38%
[--] Total buffers: 31.0M global + 3.6M per thread (10 max threads)
[OK] Maximum possible memory usage: 67.2M (6% of installed RAM)
[OK] Slow queries: 0% (0/1M)
[OK] Highest usage of available connections: 80% (8/10)
[OK] Key buffer size / total MyISAM indexes: 1.0M/450.0K
[!!] Key buffer hit rate: 90.7% (935 cached / 87 reads)
[OK] Query cache efficiency: 82.7% (201K cached / 243K selects)
[!!] Query cache prunes per day: 11934
[OK] Sorts requiring temporary tables: 0% (0 temp sorts / 15K sorts)
[OK] Temporary tables created on disk: 9% (17K on disk / 194K total)
[OK] Thread cache hit rate: 99% (20 created / 71K connections)
[!!] Table cache hit rate: 6% (256 open / 3K opened)
[OK] Open file limit used: 9% (102/1K)
[OK] Table locks acquired immediately: 100% (81K immediate / 81K locks)
[OK] InnoDB data size / buffer pool: 15.7M/16.0M

-------- Recommendations -----------------------------------------------------
General recommendations:
> Run OPTIMIZE TABLE to defragment tables for better performance
> Enable the slow query log to troubleshoot bad queries
> Increase table_cache gradually to avoid file descriptor limits
Variables to adjust:
> query_cache_size (> 4M)
> table_cache (> 256)

What I don't understand is that MySQLTuner says "Maximum possible memory usage: 67.2M (6% of installed RAM)", yet as indicated earlier, MySQL is using 940MB of RAM.

Any ideas on how to resolve this?

Use Arc Map 10.1

Posted: 23 Jul 2013 02:24 PM PDT

I have a dataset without zip codes. Can I geocode addresses without a zip code using ArcMap 10.1? While geocoding addresses I get the error "cannot connect to the server".

How to speed up a mysql restore from large dump file

Posted: 23 Jul 2013 02:16 PM PDT

I have a 350MB MySQL dump file that takes over 3 minutes to load. I'd like to speed it up. I've tried using --opt with mysqldump (shaved off 30 seconds). The best I can achieve is 42 seconds faster. Are there any other techniques to speed up a MySQL restore from a dump file?
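One thing I've been experimenting with (a sketch run from the mysql client; the dump path is a placeholder, and the checks must be re-enabled afterwards) is wrapping the import in settings that cut per-statement overhead:

  SET autocommit = 0;
  SET unique_checks = 0;
  SET foreign_key_checks = 0;
  SOURCE /path/to/dump.sql;
  COMMIT;
  SET unique_checks = 1;
  SET foreign_key_checks = 1;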

Deleting the MySQL 'root' user on purpose

Posted: 23 Jul 2013 07:19 PM PDT

I created a new MySQL user with all the same privileges as the current 'root' user. For security reasons I thought, why not create another user for this, so people at least won't know the username of my superuser.

I then dropped the root user.

Immediately my DB started throwing connection refused errors for all of my other users.

I quickly recreated the original 'root' user and everything magically started connecting again.

My question, then, is: is it possible to delete the root user in a MySQL database? And if so, how?
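For reference, the replacement account was created with statements along these lines (the account name and password here are placeholders):

  CREATE USER 'dba_admin'@'localhost' IDENTIFIED BY 'change_me';
  GRANT ALL PRIVILEGES ON *.* TO 'dba_admin'@'localhost' WITH GRANT OPTION;
  FLUSH PRIVILEGES;
  SHOW GRANTS FOR 'dba_admin'@'localhost';   -- sanity check before touching 'root'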

Thanks.

MongoDB performance limits of mongo

Posted: 23 Jul 2013 01:26 PM PDT

There are 100 data files, each with 60 fields and over 4 million records. There is a Perl program that inserts the records or updates them based on a user-defined _id field. There is also a History collection that stores all values ever written for three fields. A replica set with two servers and an arbiter has been set up. Initially the files were loading into MongoDB at 45 minutes per file. After around 20 files the speed dropped considerably; it is now 20 hours per file. The servers have started slowing down badly; even the logout command is no longer quick.

I have built a hashed index on the _id field, and for the History collection I have built indexes on the id and date fields. The collections currently hold about 4 million records for the actual data collection and around 100 million for the History collection. The two servers have 17 GB of RAM each, of which only around 3.5 GB is used according to mongostat's res column. However, since the data records have to be inserted date-wise, sequentially, I cannot exploit parallelism either.

Have the limits of MongoDB been reached for this specific scenario? Is this slowdown to be expected? I have fsynced manually every now and then to ensure files are being written to disk. Are there other diagnostics I can run to better explain the situation? Is there a solution to this?

Thanks

PHP MySQL library version 5.0.51a differs from your MySQL server version 5.1.61

Posted: 23 Jul 2013 07:19 PM PDT

After asking my last question about $id=$_GET['id']; and security issues, I tried a PDO connection to my server. The following message appeared: ERROR: could not find driver.

I connected to phpMyAdmin 2.11.1.2 and had this message appear: Your PHP MySQL library version 5.0.51a differs from your MySQL server version 5.1.61. This may cause unpredictable behavior.

Is there a connection? Are PDO drivers required on the server? Would MySqli work?

Bob

Insert Speeds for large batches

Posted: 23 Jul 2013 01:51 PM PDT

In my application, my INSERTs seem to be taking a major chunk of the time. I have a large number of objects in memory (~40-50,000) which I want to insert into a table.

Let's take a sample table:

CREATE TABLE bill (
  id BIGINT(20) PRIMARY KEY,
  amount INT(11) DEFAULT 0,
  bill_date DATETIME DEFAULT NOW(),
  INDEX (bill_date)
) ENGINE=InnoDB

Taking 3 rows as my batch size, the following are the approaches I could think of for inserting.

Approach 1 - construct and fire 3 raw inserts

INSERT INTO bill (amount, bill_date) VALUES (10, '2012-01-01 00:00:00');
INSERT INTO bill (amount, bill_date) VALUES (20, '2012-01-02 00:00:00');
INSERT INTO bill (amount, bill_date) VALUES (40, '2013-02-05 00:00:00');

Approach 2 - clubbing the values into 1 query

INSERT INTO bill (amount, bill_date) VALUES
  (10, '2012-01-01 00:00:00'),
  (20, '2012-01-02 00:00:00'),
  (40, '2013-02-05 00:00:00');

Approach 3 - fire this query 1 time passing 6 parameters

INSERT INTO bill (amount, bill_date) VALUES (?, ?), (?, ?), (?, ?);

Approach 4 - Fire this prepared query 3 times changing the 2 parameters each time

INSERT INTO bill (amount, bill_date) VALUES (?, ?);  

Any other approaches are welcome.

My question is

What is the quickest way to make multiple inserts in a table?

I have read this link on MySQL insert speed and this guide to JDBC programming, but I'm not able to come to a conclusion.

My case -

Currently my table has ~20 columns, most of which are numbers, with a couple of varchar(60) columns and one text column. MySQL version 5.5, running on InnoDB, with one index on the integer primary key. All the queries run in a transaction.

I construct my queries from Java and use Spring JDBC to run the queries.

I am currently following Approach 3. It is taking around 10 seconds for 20,000 inserts into an empty table, not including the time it takes to construct the query.

To keep things in perspective, it takes 100-200 ms to fetch the data from the table.

Is there something I am missing? How do I make the inserts faster?

I need help understanding Postgres's archive cleanup functionality

Posted: 23 Jul 2013 08:32 PM PDT

I'm reading http://www.postgresql.org/docs/9.2/static/pgarchivecleanup.html and to me it seems like it will remove any and all WAL segments from my slave server that aren't required to perform recovery if the slave server crashes (so basically everything except the last one or few segments, depending on how far behind my slave is running). I'd like to be able to perform emergency point-in-time recovery, in the unfortunate case of data corruption or accidental deletion, if possible. To do this, am I left with the following choices:

  1. Keeping all WAL segments since the last base backup was created (in my case, this is the one that the slave server was created from)
  2. Making base backups all the time or doing something with PGBarman, etc.?
  3. Am I misunderstanding something completely?

If option 1 is used, I presume the pg_archivecleanup command is rendered useless to me. If option 2 is used, is it true that I should use the cleanup command as intended (deleting all the extra WAL segments immediately after they're committed)? Is there something more that I'm missing (see option 3)?

Note: I'm using Postgres 9.2.4

Proper MySQL database maintenance

Posted: 23 Jul 2013 01:39 PM PDT

I hope this isn't too broad of a question. I'm certain it'll be able to help me and any future dba noobs that stumble upon it.

I'm a systems admin who got put into the role of DBA (because I helped the CEO with his Outlook, so I can obviously manage our databases!). It's not that big or busy a database server... a mysqldump is about 6GB in size and it's taken us 5 years to get it that big. Munin reports that we're averaging 40-60 queries a second at our peak hours.

My boss paid for me to take the Oracle University systems admin course, but having gone through it, it simply explains the different parts of the MySQL server, what things do and how they do it. It's not that in-depth, and you definitely don't come out of that course a DBA.

So as the current DBA, what should I do to make sure everything is running smoothly? Are there daily maintenance tasks I can perform? Are there certain metrics I should be checking? Or, to put it another way: as DBAs, what do YOU do on a daily basis to keep everything in good shape?

If it'll help tailor the answer a little bit, here are some specifics: we have 171 tables, all but 3 InnoDB, the others MyISAM. We have master/slave replication set up between our primary datacenter and our disaster recovery site, using RBR. The version is 5.5.28.

What can I do?

mysql: how to copy database and required dependencies

Posted: 23 Jul 2013 08:55 PM PDT

I need to set up an automated copy of a single MySQL database (not the whole DBMS). If I use mysqldump to dump and recreate a DB, the copy may be unusable because the user accounts also need to be copied. Triggers in the source DB, for instance, may be copied, but there's no guarantee that they will execute correctly in the new DB, because the original triggers were designed to run as some particular user.

A couple of questions:

  • how can I automate copying of the user accounts and permissions?

  • is there a way, when copying a db to check whether all of its dependencies are met? (E.g. are all of the databases referred to by the stored procedures present?)

I understand that I can replicate the whole DBMS, but this is overkill (and often not practical for hourly regressions).
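For the user accounts, the closest I've found so far is scripting out the grants and replaying them on the target; a minimal sketch (the account shown is hypothetical):

  SELECT user, host FROM mysql.user;         -- enumerate the accounts to copy
  SHOW GRANTS FOR 'app_user'@'localhost';    -- emits GRANT statements to replay on the target server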

Inline edit SQL Server database rows from Visual Studio

Posted: 23 Jul 2013 02:23 PM PDT

I'm pretty sure Microsoft has pulled one of the most useful features for performing quick edits on a SQL Server database from within the Visual Studio IDE. It seems to have only affected SQL 2012 instances, but from the Server Explorer I can no longer right-click on a table, choose "Show Table Data", pop open the SQL pane, query the data, and then perform inline edits on the results (as if I were modifying a spreadsheet).

Show Table Data

This means I now need to go into SSMS to make these kinds of quick updates. Does anybody know of a plugin I can use with VS 2012 to bring back this functionality? It seems odd to me that Microsoft has two different development trajectories with SSDT and SSMS. Are they designed to serve different purposes? Can SSMS be integrated into the Visual Studio IDE? I'd rather have a unified development environment if possible.

Any thoughts on a workaround for this problem would be much appreciated.

EDIT

I know some purists would quiver at the thought of treating a database table like a spreadsheet, but semantically they are not a world apart, plus this is supported in SSMS. I'm in favour of relying on GUI-based approaches where I can to speed up routine tasks; why some would balk at this I have no idea.

Issue with SQL Server Aliases on a multi-instance installation

Posted: 23 Jul 2013 04:33 PM PDT

I've a 64-bit installation of SQL 2008 R2 SP2 on Server 2008 R2, with five instances of SQL.

I can get the aliases to work fine on the server (i.e. if I remote onto the server, it works without problems), but from the client machine there is no joy.

Errors:

SQL Pipes Error 40, Error 53.

Or, if I specify port 1433, TCP Error 0, 11004.

If I specify port 63769 (which is mentioned as a port that SQL is listening on), TCP Error 0, 11004.

If I connect using the servername\instancename I have no problems.

I've set up the alias in 32-bit and 64-bit on the server. I've left the ports as dynamic in the TCP settings, because if I define the port as being 1433 for each instance, none of them will start. This is weird, as it's an approach I've used successfully on other servers.

There is no internal firewall, but some ports may, or may not, be shut off...

Any ideas about what to try next?

Thanks!

Do database engines update when the updated value is the same? [on hold]

Posted: 23 Jul 2013 11:53 AM PDT

Suppose I am running a multiple-column update on one row. Does the database engine physically write new data to the database if some of the columns contain the same values that already existed in the database? If so, how can it be avoided?

I'm about to implement a job which will run an update on a large table; most of the values will be the same, but it still has to recalculate them all. I am wondering whether the update will rewrite each column even if there is no need to, because the storage media will degrade faster.
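The only workaround I can think of so far is filtering unchanged rows out in the WHERE clause, so they are never written at all; a minimal sketch with hypothetical table, column, and parameter names (NULL handling omitted):

  UPDATE t
  SET    a = @new_a,
         b = @new_b
  WHERE  id = @id
    AND (a <> @new_a OR b <> @new_b);   -- skip rows whose values are already correct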

Dealing with data stored as arrays in a MySQL DB

Posted: 23 Jul 2013 07:15 PM PDT

So I know storing arrays in a DB field is wrong and would never do it myself; however, a 3rd-party plugin my company is using stores data in an array, and I was wondering if you could help me try to deal with it.

It basically seems to link 2 tables and add a view count. Here is an example of the data:

a:4:{i:4;i:196;i:26;i:27;i:5;i:155;i:34;i:4;}

So I think this means there are 4 entries in the array, each with 2 attributes. The first set (4, 26, 5, 34) are "store codes". The second set (196, 27, 155, 4) are numbers of plays. God knows why they are stored like this, as there is already another table that links the video with the stores, and they could've just stuck another column there for the view count.

Anywho, what I want to do is order by view count based on store id within that array. Do you think this is possible, and does anyone have any ideas how to do this? If storing data like this is a standard, do you know the name for it, as I could probably take it from there?

Thanks!

Security : SQL Server Object Access From One Schema to another schema

Posted: 23 Jul 2013 05:37 PM PDT

Scenario: I have denied permissions on table1 for USER1:

DENY SELECT, INSERT, UPDATE, DELETE ON OBJECT::dbo.table1 TO USER1

Instead, I have allowed access to a view (vw_table1) which selects from table1 and filters out some data using a WHERE clause. The idea is that users won't be able to see the entire table's data. To support insert, update, and delete, I have created INSTEAD OF triggers.

The above scenario works fine when table1 and vw_table1 are in the same schema.

Now I have a requirement to move only the view into a different schema (sec.vw_table1). After moving the view, when I run "SELECT * FROM sec.vw_table1" I get an access denied error when the view tries to access dbo.table1.

How can we achieve this in SQL Server? I don't want to give USER1 access to dbo.table1, but the access should still work when the table is reached through sec.vw_table1.
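One direction I'm considering, if it's valid: give both schemas the same owner so that ownership chaining applies, rather than granting USER1 anything on the table. A hedged sketch:

  -- assumes the sec schema may be owned by the same principal (dbo) as the table's schema;
  -- with matching owners, ownership chaining lets the view read dbo.table1 without
  -- granting USER1 any permission on the table itself
  ALTER AUTHORIZATION ON SCHEMA::sec TO dbo;
  GRANT SELECT ON OBJECT::sec.vw_table1 TO USER1;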

Any help would be highly appreciated!!

One to one mapping vs many to one mapping on data base design

Posted: 23 Jul 2013 07:00 PM PDT

I'm currently doing call detail record (CDR) data analysis as my project. In the course of the database design, I separated the date field (a table containing id, day, month, year) and the time field (a table containing id, second, minute, hour, etc.) from the call table (called number, duration, time_id, date_id, etc.), since millions of call transactions occur in a day. Now I'm confused about how to map date and time to the call table, i.e. whether to do a one-to-one mapping or a many-to-one mapping. One-to-one mapping can be done easily, but the data is redundant; I have less of an idea about many-to-one mapping. What I think for many-to-one mapping is that I first extract the date and check whether it exists; if it exists, I get its id, otherwise I insert the date and then get its id.

(Background: I have CDR data with the fields card_no, service_key, calling_no, called_no, answer_time, clear_time, duration in a tab-separated text file, and have to build a data mart like the figure below. [data mart schema diagram])
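For the get-or-create step on the date dimension, what I have in mind looks roughly like this (a hypothetical sketch, assuming MySQL-style INSERT IGNORE and a UNIQUE key on (year, month, day)):

  INSERT IGNORE INTO dim_date (day, month, year) VALUES (23, 7, 2013);   -- no-op if the date already exists
  SELECT id FROM dim_date WHERE day = 23 AND month = 7 AND year = 2013;  -- fetch the surrogate key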

I am not good at describing the problem, so if any more information is required, please comment below.

EDIT

On cdr table

  1. card_no is the subscriber's mobile number; if the subscriber dials another number it is equal to calling_no, and if the subscriber receives the call it is equal to called_no. In the call table it is repeated.
  2. sql_time_stamp means answer_time, and since duration is already in the database, why do I have to store clear_time as well? (If there is a reason, let me know.)

On Subscriber table

The subscriber table contains dob, which is static, so I created another table containing age (which may change), just to separate the static part from the dynamic part. Gender is also included to reduce multi-table joins, as the analysis only uses age (or age group) and gender.

Negative number of rows inserted

Posted: 23 Jul 2013 04:28 PM PDT

I just ran a rather simple insert statement (that I think processed a lot of rows), and I got the message: "-1,311,750,183 rows inserted". Should I be concerned about the fact that it's negative?

Analyse MySQL General Query Log in Real-time?

Posted: 23 Jul 2013 04:37 PM PDT

We want to use the MySQL general query log to do real-time monitoring and auditing.

Currently our approach is:

  • set general_log=on;
  • sleep 15m;
  • set general_log=off;
  • scp & rm xxx.log;
  • set general_log=on;...

But the main problem is that turning the general log on and off causes a spike of slow queries.

I also thought of another approach: turn on the general log, tail -f it and ship the output out, and periodically truncate the log file (with "> xxx.log" or "cat /dev/null > xxx.log").

I'm wondering whether it's practical.

If only mysql would provide some built-in general log message queue stuff...
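The closest built-in alternative I've found is sending the general log to a table instead of a file, so it can be read and purged with ordinary statements instead of toggling general_log; a hedged sketch:

  SET GLOBAL log_output  = 'TABLE';
  SET GLOBAL general_log = 'ON';

  SELECT event_time, user_host, argument
  FROM mysql.general_log
  ORDER BY event_time DESC
  LIMIT 10;

  TRUNCATE TABLE mysql.general_log;   -- periodic cleanup instead of "> xxx.log"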

Innodb table is taking randomly long time to execute the insert query and in show processlist showing as queryend/updating

Posted: 23 Jul 2013 05:37 PM PDT

An InnoDB table is randomly taking a long time to execute insert queries, and SHOW PROCESSLIST shows the state as "query end"/"updating". This happens for similar queries against the same table; the queries get stuck for five to fifteen minutes and then suddenly the whole queued-up list is processed.

I cannot understand what happens suddenly sometimes...

The same queries, on the same MySQL version and configuration on a different server, have no problem...

Here is the InnoDB configuration:

+---------------------------------+----------------------------------+
| Variable_name                   | Value                            |
+---------------------------------+----------------------------------+
| innodb_adaptive_flushing        | ON                               |
| innodb_adaptive_hash_index      | ON                               |
| innodb_additional_mem_pool_size | 1073741824                       |
| innodb_autoextend_increment     | 8                                |
| innodb_autoinc_lock_mode        | 1                                |
| innodb_buffer_pool_instances    | 1                                |
| innodb_buffer_pool_size         | 8589934592                       |
| innodb_change_buffering         | all                              |
| innodb_checksums                | ON                               |
| innodb_commit_concurrency       | 0                                |
| innodb_concurrency_tickets      | 500                              |
| innodb_data_file_path           | ibdata1:4G;ibdata2:4G:autoextend |
| innodb_data_home_dir            | /var/lib/mysql                   |
| innodb_doublewrite              | ON                               |
| innodb_fast_shutdown            | 1                                |
| innodb_file_format              | Antelope                         |
| innodb_file_format_check        | ON                               |
| innodb_file_format_max          | Antelope                         |
| innodb_file_per_table           | ON                               |
| innodb_flush_log_at_trx_commit  | 2                                |
| innodb_flush_method             |                                  |
| innodb_force_load_corrupted     | OFF                              |
| innodb_force_recovery           | 0                                |
| innodb_io_capacity              | 400                              |
| innodb_large_prefix             | OFF                              |
| innodb_lock_wait_timeout        | 50                               |
| innodb_locks_unsafe_for_binlog  | ON                               |
| innodb_log_buffer_size          | 33554432                         |
| innodb_log_file_size            | 536870912                        |
| innodb_log_files_in_group       | 2                                |
| innodb_log_group_home_dir       | ./                               |
| innodb_max_dirty_pages_pct      | 75                               |
| innodb_max_purge_lag            | 0                                |
| innodb_mirrored_log_groups      | 1                                |
| innodb_old_blocks_pct           | 37                               |
| innodb_old_blocks_time          | 0                                |
| innodb_open_files               | 300                              |
| innodb_print_all_deadlocks      | ON                               |
| innodb_purge_batch_size         | 20                               |
| innodb_purge_threads            | 0                                |
| innodb_random_read_ahead        | OFF                              |
| innodb_read_ahead_threshold     | 56                               |
| innodb_read_io_threads          | 10                               |
| innodb_replication_delay        | 0                                |
| innodb_rollback_on_timeout      | OFF                              |
| innodb_rollback_segments        | 128                              |
| innodb_spin_wait_delay          | 6                                |
| innodb_stats_method             | nulls_equal                      |
| innodb_stats_on_metadata        | ON                               |
| innodb_stats_sample_pages       | 8                                |
| innodb_strict_mode              | OFF                              |
| innodb_support_xa               | ON                               |
| innodb_sync_spin_loops          | 30                               |
| innodb_table_locks              | ON                               |
| innodb_thread_concurrency       | 0                                |
| innodb_thread_sleep_delay       | 10000                            |
| innodb_use_native_aio           | ON                               |
| innodb_use_sys_malloc           | ON                               |
| innodb_version                  | 5.5.30                           |
| innodb_write_io_threads         | 10                               |
+---------------------------------+----------------------------------+

Can someone suggest why this happens and what exactly is going on?

Merging two Access tables into one

Posted: 23 Jul 2013 03:37 PM PDT

I have mostly identical tables; however, there are slight differences, like missing columns in the older ones. I want to merge 5 tables into one. Is there an easy way to do this?
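The simplest route I can think of is a UNION ALL query that pads the missing columns with Null; a sketch with hypothetical table and column names:

  SELECT ColA, ColB, Null AS ColC FROM OldTable1
  UNION ALL
  SELECT ColA, ColB, ColC FROM NewTable1;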

Why disabling a clustered index makes the table inaccessible?

Posted: 23 Jul 2013 02:47 PM PDT

When an index is disabled, the definition remains in the system catalog but is no longer used.
SQL Server does not maintain the index (as data in the table changes), and the index cannot be used to satisfy queries.
If a Clustered Index is disabled, the entire table becomes inaccessible.

The question is:
Why isn't it possible to access the data directly from the table, discarding the B-tree (most likely by scanning the table row by row)?
Wouldn't that be more appropriate than making the data inaccessible at all?
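For context, a small illustration of the behaviour being asked about (the table and index names are hypothetical):

  ALTER INDEX PK_Orders ON dbo.Orders DISABLE;   -- PK_Orders is the clustered index
  SELECT * FROM dbo.Orders;                      -- fails: no plan can be produced while the clustered index is disabled
  ALTER INDEX PK_Orders ON dbo.Orders REBUILD;   -- re-enables access to the table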

DB2 db2fm proccess

Posted: 23 Jul 2013 06:37 PM PDT

The server has been up for 365 days; however, I see some weird repeated processes.

Are these normal?

ps -fea | grep db2fm

  db2inst1  643284  229516  29   May 25      - 212564:06 /home/db2inst1/sqllib/bin/db2fm -i db2inst1 -m /home/db2inst1/sqllib/lib/libdb2gcf.a -S
  db2inst1  671770  229516  56   May 14      - 227447:02 /home/db2inst1/sqllib/bin/db2fm -i db2inst1 -m /home/db2inst1/sqllib/lib/libdb2gcf.a -S
  db2inst1  757794 1237058   0   Apr 19  pts/7  0:00 /bin/sh /home/db2inst1/sqllib/bin/db2cc
  db2inst1  774232  229516  30   Sep 25      - 94218:54 /home/db2inst1/sqllib/bin/db2fm -i db2inst1 -m /home/db2inst1/sqllib/lib/libdb2gcf.a -S
  db2inst1  962750  229516  30   Jul 18      - 145256:01 /home/db2inst1/sqllib/bin/db2fm -i db2inst1 -m /home/db2inst1/sqllib/lib/libdb2gcf.a -S
  db2inst1  999450  229516  29   Aug 17      - 117710:27 /home/db2inst1/sqllib/bin/db2fm -i db2inst1 -m /home/db2inst1/sqllib/lib/libdb2gcf.a -S
  db2inst1 1179898  229516  58   Nov 02      - 75788:49 /home/db2inst1/sqllib/bin/db2fm -i db2inst1 -m /home/db2inst1/sqllib/lib/libdb2gcf.a -S

ps -fea | grep db2agent

  db2inst1  409770  680100   0   Apr 19      -  0:00 db2agent (DATABASEA) 0
  db2inst1  450750  778412   0   Apr 18      -  0:03 db2agent (idle) 0
  db2inst1  618688  680100   0   Apr 19      -  0:00 db2agent (idle) 0
  db2inst1  651440  680100   0   Nov 17      -  0:20 db2agent (DATABASEA) 0
  db2inst1  655508  491676   0   Apr 19      -  0:04 db2agent (idle) 0
  db2inst1  684038  680100   0   Mar 23      -  0:03 db2agent (DATABASEA) 0
  db2inst1  790706  491676   0   Apr 19      -  0:00 db2agent (idle) 0
  db2inst1  880672  680100   0   Apr 19      -  0:00 db2agent (DATABASEA) 0
  db2inst1  913438  778412   0   Nov 16      -  0:20 db2agent (idle) 0
  db2inst1  946182  491676   0   Apr 19      -  0:00 db2agent (DATABASEA) 0
  db2inst1  991312  778412   0   Apr 17      -  0:16 db2agent (idle) 0
  db2inst1 1077466  491676   0   Apr 19      -  0:00 db2agent (DATABASEA) 0
  db2inst1 1134726  680100   0   Apr 19      -  0:00 db2agent (DATABASEA) 0
  db2inst1 1142964  491676   0   Apr 19      -  0:00 db2agent (idle) 0
  db2inst1 1233112  491676   0   Apr 19      -  0:00 db2agent (idle) 0
  db2inst1 1261748  778412   0   Jun 15      -  0:18 db2agent (idle) 0
  db2inst1 1384678  778412   0   Mar 23      -  0:27 db2agent (idle) 0
  db2inst1 1404936  680100   0   Apr 19      -  0:00 db2agent (DATABASEA) 0
  db2inst1 1421368  778412   0   Mar 22      -  0:04 db2agent (idle) 0
  db2inst1 1445936  491676   0   Apr 19      -  0:00 db2agent (DATABASEA) 0
  db2inst1 1482864  491676   0   Jun 16      -  0:31 db2agent (idle) 0
  db2inst1 1503440  778412   0   Jun 15      -  0:56 db2agent (idle) 0
  db2inst1 1519842  778412   0   Mar 23      -  0:00 db2agent (DATABASEA) 0
  db2inst1 1531946  680100   0   Apr 19      -  0:00 db2agent (idle) 0
  db2inst1 1572884  680100   0   Apr 19      -  0:00 db2agent (idle) 0

Other info

  oslevel -g
  Fileset                                 Actual Level        Maintenance Level
  -----------------------------------------------------------------------------
  bos.rte                                 5.3.0.40            5.3.0.0

  db2fm -s -S
  Gcf module 'fault monitor' is NOT operable
  Gcf module '/home/db2inst1/sqllib/lib/libdb2gcf.a' state is AVAILABLE

  uptime
  02:14PM   up 365 days,  12:51,  6 users,  load average: 6.69, 6.89, 6.97

  db2level
  DB21085I  Instance "db2inst1" uses "64" bits and DB2 code release "SQL08020"
  with level identifier "03010106".
  Informational tokens are "DB2 v8.1.1.64", "s040812", "U498350", and FixPak "7"

How to snapshot or version a relational database when data changes?

Posted: 23 Jul 2013 02:37 PM PDT

My system receives data feeds. Each data feed will end up creating inserts and/or updates to most tables in the (relational) database.

I need to capture the snapshot of what the entire database looked like after each data feed is received. Basically I need a way to version the database each time a data feed is run through the system.

Note: by capturing a snapshot, I don't mean literally taking a snapshot of the database, but rather writing history records or some such mechanism, so that I can query the database across "versions" to see what changed between versions (among other use cases).

Do known data model designs exist that can capture a snapshot of a database version like this?
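The closest pattern I've sketched so far is a per-feed version table plus history tables keyed by that version; a hypothetical, MySQL-flavoured sketch:

  CREATE TABLE feed_version (
      version_id INT PRIMARY KEY AUTO_INCREMENT,
      loaded_at  DATETIME NOT NULL
  );

  CREATE TABLE customer_history (
      customer_id INT NOT NULL,
      version_id  INT NOT NULL,               -- the feed run that produced this state
      name        VARCHAR(100),
      balance     DECIMAL(10,2),
      PRIMARY KEY (customer_id, version_id),
      FOREIGN KEY (version_id) REFERENCES feed_version (version_id)
  );

  -- "what did customer 42 look like as of version 7?"
  SELECT *
  FROM customer_history
  WHERE customer_id = 42 AND version_id <= 7
  ORDER BY version_id DESC
  LIMIT 1;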

Rent weekly cost database design

Posted: 23 Jul 2013 01:37 PM PDT

I have a database which contains a table BUILDING with details about a building in each row, and another table BUILDING_UNIT whose rows contain details about a single building unit, with a foreign key referring to the owning BUILDING.ID. The BUILDING_UNIT table also refers to a table CATEGORY, which tells whether the BUILDING_UNIT is of category A, B, C, or D, again with a foreign key pointing to CATEGORY.ID.

Now the final cost of renting the building unit depends on its building, its category, the number of days it is rented, and the specific period of the year. We only rent weekly, so I might as well use weeks only; however, I'd like it to be as flexible as possible in the future.

I can't settle on a table design which can represent this situation.

Do I have to use a table with coefficients for each day of the year and then a table with coefficients for A,B,C,D and then a table with coefficients for each Building and then somehow calculate a result?

Is there some standard and recognized implementation for problems of this type?

Thank you

EDIT: Note that the solution should abstract away from the formula for calculating the cost, which might change in the future. However, I might be asked to make a specific week of the year cost $300 for building unit X inside building Y, and the week after $600. Generally, building units inside the same building in the same week cost the same, but that might change in the future, so I'd like to handle all the specific cases already.

How to add 'root' MySQL user back on MAMP?

Posted: 23 Jul 2013 12:37 PM PDT

In phpMyAdmin, I removed the 'root' user by mistake. I was also logged in as 'root'. How can I add the 'root' user back on MAMP?

Escaping T-SQL Keywords

Posted: 23 Jul 2013 01:22 PM PDT

$sql = "SELECT Kill FROM tbl_pvporderview";  

Problem is that I end up with: Incorrect syntax near the keyword 'Kill'.

Because kill is a T-SQL command... any way to bypass it?

I can't change the column name because it's used by the software a lot, and I can't change the software that's using the database.

So it simply fails if I use sqlserv to select data from that column. '' or "" won't help.

The complete statement would be:

$sql = "SELECT serial,Kill FROM tbl_pvporderview WHERE Kill > (?) ORDER BY Kill DESC ";  
