Saturday, September 21, 2013

[how to] How to reduce table duplication, while ensuring integrity of the database?

How to reduce table duplication, while ensuring integrity of the database?

Posted: 21 Sep 2013 08:57 PM PDT

For example, in my app I have some class called Contact:

case class Contact(name: String, email: String, phone: String)  

And several other classes have a one-to-many relation to that contact (for example, there are Companies and Houses).

Currently, I have database table for Contacts set up as something like this:

CREATE TABLE `Contacts` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `entityId` bigint(20) NOT NULL,
  `entityType` tinyint(4) NOT NULL,
  `name` varchar(300),
  `phone` varchar(300),
  `email` varchar(300),
  PRIMARY KEY (`id`)
)

where entityType is storing the "code" that determines the entity type this contact belongs to (1 for Company, 2 for House, etc).

Now, I understand that this is awful - at the very least, I can't create a foreign key on entityId, thus I can't ensure that my data is consistent.

The alternative would be to create a separate contacts table for each entity type (CompanyContacts, HouseContacts, etc.), but that would result in lots of duplication in both database and application code.
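To make the duplication concrete, here is a hedged sketch of what that per-entity alternative would look like (table and constraint names are mine, not from the original schema):

CREATE TABLE `CompanyContacts` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `companyId` bigint(20) NOT NULL,
  `name` varchar(300),
  `phone` varchar(300),
  `email` varchar(300),
  PRIMARY KEY (`id`),
  FOREIGN KEY (`companyId`) REFERENCES `Companies` (`id`)
)
-- HouseContacts would repeat the same columns, with houseId referencing Houses(id).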

Is there a better way to do this?

How can I find and clean up unused database files?

Posted: 21 Sep 2013 07:34 PM PDT

How can I find and clean up unused (i.e. not used by any attached database) SQL Server database files (.mdf, .ldf, .ndf) that take up extra space on my server?
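One hedged starting point: list every file that the attached databases are actually using, then compare that list against the files on disk; anything not in the list is a candidate for cleanup (verify before deleting anything):

SELECT DB_NAME(database_id) AS database_name, physical_name
FROM sys.master_files
ORDER BY database_name, physical_name;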

Help in choosing DBMS [on hold]

Posted: 21 Sep 2013 12:20 PM PDT

I would like to ask for some help. We need to choose a DBMS (on the back-end side) for the project. The problem is that I don't know the features of the different DBMSes really well, so after some research and time spent with Google, I decided to ask for a little help here : )

The project, in fact, is a kind of internet service; the entire back-end (and database) is on our side.

We have answered some questions (for ourselves) and our main criteria are:

  • Not Oracle DB;
  • Doesn't matter, SQL or NoSQL;
  • DBMS has to support C# (.NET);
  • Scalability is important, because we really don't know how many users there will be - 1 000 or 1 000 000;
  • We don't keep media content on the Back-End side;
  • Fast transactions and quick results are important (there is no way users can wait 10-20-40 seconds for a query result - users are so impatient :)
  • Approximate ratio of queries: Number of SELECT > Number of UPDATE > Number of INSERT

I'm new to DBA work, so I will be very grateful for detailed answers!

After spending time on research, I decided that we could use MySQL/PostgreSQL or MongoDB (the latter has great scalability via sharding!).

And one more question - what questions must we answer to make the right choice?

And thanks a lot!

Query cache hit value is not changing in my database

Posted: 21 Sep 2013 02:57 AM PDT

The query cache hit value is not changing in my database. It shows zero even when I execute the same query two or three times.
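A hedged first check is whether the query cache is enabled at all; if query_cache_type is OFF or query_cache_size is 0, the hit counter will stay at zero no matter how often a query repeats:

SHOW VARIABLES LIKE 'query_cache%';
SHOW GLOBAL STATUS LIKE 'Qcache_hits';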

Thanks.

Custom Auto-Generated Sequences with SQL Server

Posted: 21 Sep 2013 03:02 AM PDT

I have an Employee table with an employee ID column EmpID (nvarchar(5)), and I want EmpID's values to be

00001    00002    00003    00004    ...

How can I make a custom auto-generated sequence like this?
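One hedged sketch (certainly not the only approach): keep a plain numeric IDENTITY column internally and expose the zero-padded value as a computed column:

CREATE TABLE Employee (
  EmpNum INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
  EmpID AS RIGHT('00000' + CAST(EmpNum AS VARCHAR(5)), 5)  -- yields 00001, 00002, ...
);

The computed EmpID stays in sync automatically, and no trigger or application logic has to hand out numbers.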

How to recover a corrupt SQL Server database?

Posted: 21 Sep 2013 03:03 AM PDT

I am running the Windows 2000 OS, and while using my SQL application a message popped up. Can anyone figure it out? The message is:

Msg 5172, Level 16, State 15, Line 1
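Msg 5172 indicates an invalid file header, and the usual remedy is restoring from the most recent good backup; a hedged sketch, with database name and path as placeholders:

RESTORE DATABASE MyDatabase
FROM DISK = 'D:\Backups\MyDatabase.bak'
WITH REPLACE;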

How to UPDATE a table for the level of parents in a column?

Posted: 21 Sep 2013 12:03 AM PDT

I have a table with internal parents as

id    parent    parent_level
1     NULL      1
2     1         2
3     1         2
4     3         3

where parent has a FK to id.

How can I update the table to assign the parent_level values according to each row's parent, and its parent's parent?
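If the DBMS supports recursive CTEs, a hedged sketch along these lines computes the levels from the root down (PostgreSQL-style UPDATE ... FROM shown; mytree is a placeholder name, and an older MySQL would need repeated self-join UPDATEs instead):

WITH RECURSIVE levels AS (
  SELECT id, 1 AS lvl
  FROM mytree
  WHERE parent IS NULL
  UNION ALL
  SELECT t.id, l.lvl + 1
  FROM mytree t
  JOIN levels l ON t.parent = l.id
)
UPDATE mytree
SET parent_level = levels.lvl
FROM levels
WHERE mytree.id = levels.id;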

Import only selected columns from tsv into mongodb

Posted: 21 Sep 2013 12:09 AM PDT

Edit

I am uploading a very big tsv file, which is why I don't want to import unnecessary columns into mongodb. I know how to upload the file into mongodb (just as an attachment) as:

mongoimport --db xyztsv -c review1 --type tsv --file /Reviews1.tsv -f member_id,product_id,date1,no_helpfeedback,no_feedback,rating,title,body

where I just want the title and body columns to be imported into mongodb. Is there any command for this, or do I have to write a JSP function for it? I know it can be done through cygwin on Windows, but is it possible to do it directly while importing into MongoDB? Many thanks.

Can scheduled and continuous replication configurations exist side-by-side on the same master/slave servers?

Posted: 21 Sep 2013 09:22 AM PDT

Environment

We have a core SQL Server cluster. This cluster contains some databases that get replicated to a load-balanced SQL cluster of currently 3 servers. These databases are replicated every 12 hours but will eventually be replicated every 4 hours.

Requirement

On this cluster a new database has been created, and we need this database to be replicated ASAP to the load-balanced SQL cluster. A delay of seconds or minutes is allowed, and writes to this database are, now and for the foreseeable future, low (a few per hour).

Questions

Can two different replication plans coexist side-by-side in the same environment?

Is it possible to set up a second replication routine for this scenario (continuous transactional replication) besides the current replication schema for the existing databases?

Does this create a high risk for a large existing scheduled replication job?

Our DBA says that this replication scenario creates a high risk for the existing replication configuration (2x a day).

My brainwaves

I can't imagine that this minor write activity with continuous transactional replication could create issues for the large existing replication job. I can imagine the other way around: that our continuous replication will suffer twice a day due to the large replication job. We are perfectly fine with that, as replication is only required ASAP during regular operation.

MySQL MyISAM index causes query to match no rows; indexes disabled, rows match

Posted: 21 Sep 2013 05:48 AM PDT

I created a table and index as described in this SE post, but when I query the table for a particular ID, no matches are found. When I disable the index, matches are found.

Commands I ran:

CREATE TABLE mytable (id1 int, id2 int, score float) ENGINE=MyISAM;
LOAD DATA INFILE '50-billion-records.txt' INTO TABLE mytable (id1, id2, score);
ALTER TABLE mytable ADD INDEX id1_index (id1);

SELECT COUNT(*) FROM mytable;                    # returns 50 billion
SELECT COUNT(DISTINCT id1) FROM mytable;         # returns 50K, should be about 50M
SELECT COUNT(*) FROM mytable WHERE id1 = 49302;  # returns 0 results

ALTER TABLE mytable DISABLE KEYS;
SELECT * FROM mytable WHERE id1 = 49302 LIMIT 1; # returns 1 row

Is this a bug with MySQL, or does this behavior make sense for some reason?

Note: When I ran ALTER TABLE mytable ENABLE KEYS; just now, the command acted like it was building the index for the first time (it's still running after 30 minutes, and memory usage is at 80 GB, which matches my setting of myisam_sort_buffer_size=80G). I'll reply when this command finishes running (the original ALTER .. ADD INDEX .. took 7.5 hours, so it may be a while).

Update: Running SHOW PROCESSLIST indicates "Repair with keycache" is taking place with my ENABLE KEYS command.

Update 2: I killed the repair job on the original index because, after several hours, the memory and IO seemed pretty constant, and I hoped that if I started over it might just work. So on the second pass I rebuilt the table and index from scratch, and after doing so, the exact same result occurs.

As requested, here is the EXPLAIN output for the queries with the index enabled:

mysql> explain select * from mytable where id1 = 49302;
+----+-------------+---------+------+---------------+-----------+---------+-------+------+-------------+
| id | select_type | table   | type | possible_keys | key       | key_len | ref   | rows | Extra       |
+----+-------------+---------+------+---------------+-----------+---------+-------+------+-------------+
|  1 | SIMPLE      | mytable | ref  | id1_index     | id1_index | 5       | const |    1 | Using where |
+----+-------------+---------+------+---------------+-----------+---------+-------+------+-------------+
1 row in set (0.00 sec)

mysql> explain SELECT COUNT(DISTINCT id1) FROM mytable;
+----+-------------+---------+-------+---------------+-----------+---------+------+-----------+--------------------------+
| id | select_type | table   | type  | possible_keys | key       | key_len | ref  | rows      | Extra                    |
+----+-------------+---------+-------+---------------+-----------+---------+------+-----------+--------------------------+
|  1 | SIMPLE      | mytable | range | NULL          | id1_index | 5       | NULL | 170331743 | Using index for group-by |
+----+-------------+---------+-------+---------------+-----------+---------+------+-----------+--------------------------+
1 row in set (0.01 sec)

Here are the EXPLAINs after disabling the indexes (note: 25 billion is the correct number of records in the table, not 50 billion as mentioned above):

mysql> explain select * from mytable where id1 = 66047071;
+----+-------------+---------+------+---------------+------+---------+------+-------------+-------------+
| id | select_type | table   | type | possible_keys | key  | key_len | ref  | rows        | Extra       |
+----+-------------+---------+------+---------------+------+---------+------+-------------+-------------+
|  1 | SIMPLE      | mytable | ALL  | NULL          | NULL | NULL    | NULL | 25890424835 | Using where |
+----+-------------+---------+------+---------------+------+---------+------+-------------+-------------+
1 row in set (0.00 sec)

mysql> explain SELECT COUNT(DISTINCT id1) FROM mytable;
+----+-------------+---------+------+---------------+------+---------+------+-------------+-------+
| id | select_type | table   | type | possible_keys | key  | key_len | ref  | rows        | Extra |
+----+-------------+---------+------+---------------+------+---------+------+-------------+-------+
|  1 | SIMPLE      | mytable | ALL  | NULL          | NULL | NULL    | NULL | 25890424835 |       |
+----+-------------+---------+------+---------------+------+---------+------+-------------+-------+
1 row in set (0.00 sec)

Update 3: Still hoping to solve this oddity. Is there something I can do with myisamchk that might fix this? Since I completely repopulated the table and rebuilt the index (i.e. started from scratch) and got the same result, I assume this was not just some freak occurrence, and that it is due to some internal limit I'm unaware of. On a side note, I've tried switching to Postgres for this dataset, but ran into some other unrelated issues (which I'll leave to a different question), so fixing this index is still a top priority for me. Thanks!!

Update 4: Running CHECK TABLE currently. Will post back with updates as I have them.
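If CHECK TABLE reports corruption, the hedged in-server follow-up would be REPAIR TABLE, the SQL-level counterpart of myisamchk --recover (expect it to run for a long time on a table this size):

CHECK TABLE mytable;
REPAIR TABLE mytable;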

Inline edit SQL Server database rows from Visual Studio

Posted: 21 Sep 2013 05:22 PM PDT

I'm pretty sure Microsoft have pulled one of the most useful features for performing quick edits on a SQL Server database within the Visual Studio IDE. It seems to have only affected SQL 2012 instances, but from Server Explorer I can no longer right-click on a table, choose "Show Table Data", pop open the SQL pane, query the data, then perform inline edits on the results (as if I were modifying a spreadsheet).

Show Table Data

This means I now need to go into SSMS to make these kinds of quick updates. Does anybody know of a plugin I can use with VS 2012 to bring back this functionality? It seems odd to me that Microsoft have two different development trajectories with SSDT and SSMS. Are they designed to serve different purposes? Can SSMS be integrated into the Visual Studio IDE? I'd rather have a unified development environment if possible.

Any thoughts on a workaround for this problem would be much appreciated.

EDIT

I know some purists would quiver at the thought of treating a database table like a spreadsheet, but semantically they are not a world apart, plus this is supported in SSMS. I'm in favour of relying on GUI-based approaches where I can to speed up routine tasks; why some would balk at this I have no idea.

Dealing with data stored as arrays in a MySQL DB

Posted: 21 Sep 2013 08:22 PM PDT

So I know storing arrays in a DB field is wrong and would never do it myself; however, a 3rd-party plugin my company is using stores data in an array, and I was wondering if you could help me try to deal with it.

It basically seems to link 2 tables and add a view count. Here is an example of the data:

a:4:{i:4;i:196;i:26;i:27;i:5;i:155;i:34;i:4;}

So I think this means there are 4 entries in the array, each with 2 attributes. The first values - i.e. 4, 26, 5, 34 - are "store codes". The second lot (196, 27, 155, 4) are the numbers of plays. God knows why they are stored like this, as there is already another table that links the video with the stores, and they could've just stuck another column there for the view count.

Anywho, what I want to do is order by view count based on store id within that array. Do you think this is possible, and does anyone have any ideas how to do it? If storing data like this is a standard, do you know the name for it, as I could probably take it from there?

Thanks!

Where to install Oracle client software on SharePoint 2010 Server

Posted: 21 Sep 2013 06:22 AM PDT

We have our ASP.net web application hosted on SharePoint Server 2010. Our application uses an Oracle database to save and retrieve data. For our application to work on the SharePoint server, we need to install the Oracle client software.

Our SharePoint server architecture consists of the following:

  • 2 web front ends
  • Index Server
  • Crawl Server
  • Application server
  • Load Balancer.

I am not the admin on the SharePoint server, and I'm trying to figure out: if I have to install the Oracle client, where do I install it? Do I need to install it on all our servers or just the web front ends? Can someone please help me with this?

Unable to connect to Oracle as sysdba; tables have been dropped

Posted: 21 Sep 2013 03:22 PM PDT

I have a script which lists all tables belonging to the current user and executes DROP for each of them.

By mistake, I logged into Oracle as 'sys as sysdba' and ran the above script. As a result, all of the SYS tables were dropped.

Now I cannot start up the database instance. The alert log gives the following error:

Sat Jul 20 15:28:21 2013
Errors in file orcl_ora_4276.trc:
ORA-00942: table or view does not exist
Error 942 happened during db open, shutting down database
USER: terminating instance due to error 942

I tried to flashback one dropped table, but it is giving an error:

SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-01092: ORACLE instance terminated. Disconnection forced

SQL> FLASHBACK TABLE MAP_OBJECT TO BEFORE DROP;
ERROR:
ORA-03114: not connected to ORACLE

Please suggest whether there is any way to restore all these tables, or whether creating a new database is the only way.

Creating database redundancy based on MySQL

Posted: 21 Sep 2013 12:22 AM PDT

I have a requirement where my web application, running on Tomcat, is behind a load balancer, and I am planning to deploy this application on multiple servers.

This webapp needs a database (MySQL) for managing profiles and other similar stuff. Now, I can only have a single master database, and I want all my front-end servers to talk to this database. Obviously, if I do this it becomes my single point of failure, hence the need for redundancy/dynamic failover.

Can you please guide me on this? My requirement is a little more complicated; something I couldn't find in the available articles is how to set up the connection to such a cluster. Below is an example of how server.xml is configured in my environment when it talks to a single DB:


<Resource name="jdbc/master" auth="Container" type="javax.sql.DataSource"
          maxActive="10" maxIdle="10" validationQuery="SELECT 1"
          testOnBorrow="false" testWhileIdle="true" validationQueryTimeout="5"
          timeBetweenEvictionRunsMillis="10000" minEvictableIdleTimeMillis="60000"
          maxWait="10000" username="XXX" password="XXXXXX"
          driverClassName="com.mysql.jdbc.Driver" removeAbandoned="true"
          removeAbandonedTimeout="60" logAbandoned="true"
          url="jdbc:mysql://localhost:53306/master?autoReconnect=true" />

Now, the only thing I can replace is the URL, but how is the question - and what should localhost point to?
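For what it's worth, MySQL Connector/J accepts a multi-host failover URL, so one hedged option is to list the master first and a failover host second (host names below are illustrative):

url="jdbc:mysql://db-primary:3306,db-failover:3306/master?autoReconnect=true&amp;failOverReadOnly=false"

The driver connects to the first reachable host in the list; failOverReadOnly=false keeps the connection writable after a failover.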

I'd really appreciate any response/suggestions here. Please feel free to ask for any information you feel I haven't supplied; suggesting an alternative approach to my problem is equally welcome, but please note I can't change the webapp, as it is a 3rd-party application.

Analyse MySQL General Query Log in Real-time?

Posted: 21 Sep 2013 06:22 PM PDT

We want to use the MySQL general query log to do real-time monitoring and auditing.

Currently our approach is:

  • set general_log=on;
  • sleep 15m;
  • set general_log=off;
  • scp & rm xxx.log;
  • set general_log=on;...

But the main problem is that turning the general log on/off causes a spike of slow queries.

I also thought of another approach: turn on the general log, tail -f it and ship the output, and periodically truncate the logfile (with "> xxx.log" or "cat /dev/null > xxx.log").

I'm wondering whether it's practical.

If only mysql would provide some built-in general log message queue stuff...
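One hedged alternative (MySQL 5.1+): write the general log to a table instead of a file, so it can be polled and cleared with plain SQL and no on/off toggling; the overhead of table logging would need to be measured first:

SET GLOBAL log_output = 'TABLE';
SET GLOBAL general_log = 'ON';
SELECT event_time, user_host, argument FROM mysql.general_log ORDER BY event_time DESC LIMIT 100;
TRUNCATE TABLE mysql.general_log;  -- periodic cleanup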

Does Change Data Capture (CDC) work in a SQL Server 2012 AlwaysOn Failover Cluster Instances setup?

Posted: 20 Sep 2013 10:53 PM PDT

Does Change Data Capture (CDC) work in a SQL Server 2012 AlwaysOn Failover Cluster Instances setup?

There would be two nodes in the AlwaysOn Failover Cluster Instances setup and two Domain Controllers.

We are not using AlwaysOn Availability Groups.

Will CDC work, and will it fail over?

Mysql settings for query_cache_min_res_unit

Posted: 21 Sep 2013 12:22 PM PDT

What is the best setting for query_cache_min_res_unit for these results:

+-------------------------+-----------+
| Variable_name           | Value     |
+-------------------------+-----------+
| Qcache_free_blocks      | 35327     |
| Qcache_free_memory      | 295242976 |
| Qcache_hits             | 236913188 |
| Qcache_inserts          | 49557287  |
| Qcache_lowmem_prunes    | 0         |
| Qcache_not_cached       | 7128902   |
| Qcache_queries_in_cache | 195659    |
| Qcache_total_blocks     | 426870    |
+-------------------------+-----------+

Do I need to change any other settings?

My website produces very large results. This is the current setting:

query_cache_min_res_unit = 4096  

Info from the MySQL dev website:

If most of your queries have large results (check the Qcache_total_blocks and Qcache_queries_in_cache status variables), you can increase performance by increasing query_cache_min_res_unit. However, be careful to not make it too large (see the previous item).
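Applying that check to the numbers above, a rough back-of-the-envelope (arithmetic only, not a recommendation):

blocks in use  = Qcache_total_blocks - Qcache_free_blocks = 426870 - 35327 = 391543
blocks / query = 391543 / 195659 ≈ 2.0  (at query_cache_min_res_unit = 4096)

Roughly two 4 KB blocks per cached query does not by itself point to unusually large results, so any increase to query_cache_min_res_unit would be worth benchmarking before rolling it out.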

DB2 db2fm processes

Posted: 21 Sep 2013 07:22 PM PDT

The server has been up for 365 days; however, I see some weird repeated processes.

Are these normal?

ps -fea | grep db2fm

db2inst1  643284  229516  29   May 25      - 212564:06 /home/db2inst1/sqllib/bin/db2fm -i db2inst1 -m /home/db2inst1/sqllib/lib/libdb2gcf.a -S
db2inst1  671770  229516  56   May 14      - 227447:02 /home/db2inst1/sqllib/bin/db2fm -i db2inst1 -m /home/db2inst1/sqllib/lib/libdb2gcf.a -S
db2inst1  757794 1237058   0   Apr 19  pts/7  0:00 /bin/sh /home/db2inst1/sqllib/bin/db2cc
db2inst1  774232  229516  30   Sep 25      - 94218:54 /home/db2inst1/sqllib/bin/db2fm -i db2inst1 -m /home/db2inst1/sqllib/lib/libdb2gcf.a -S
db2inst1  962750  229516  30   Jul 18      - 145256:01 /home/db2inst1/sqllib/bin/db2fm -i db2inst1 -m /home/db2inst1/sqllib/lib/libdb2gcf.a -S
db2inst1  999450  229516  29   Aug 17      - 117710:27 /home/db2inst1/sqllib/bin/db2fm -i db2inst1 -m /home/db2inst1/sqllib/lib/libdb2gcf.a -S
db2inst1 1179898  229516  58   Nov 02      - 75788:49 /home/db2inst1/sqllib/bin/db2fm -i db2inst1 -m /home/db2inst1/sqllib/lib/libdb2gcf.a -S

ps -fea | grep db2agent

db2inst1  409770  680100   0   Apr 19      -  0:00 db2agent (DATABASEA) 0
db2inst1  450750  778412   0   Apr 18      -  0:03 db2agent (idle) 0
db2inst1  618688  680100   0   Apr 19      -  0:00 db2agent (idle) 0
db2inst1  651440  680100   0   Nov 17      -  0:20 db2agent (DATABASEA) 0
db2inst1  655508  491676   0   Apr 19      -  0:04 db2agent (idle) 0
db2inst1  684038  680100   0   Mar 23      -  0:03 db2agent (DATABASEA) 0
db2inst1  790706  491676   0   Apr 19      -  0:00 db2agent (idle) 0
db2inst1  880672  680100   0   Apr 19      -  0:00 db2agent (DATABASEA) 0
db2inst1  913438  778412   0   Nov 16      -  0:20 db2agent (idle) 0
db2inst1  946182  491676   0   Apr 19      -  0:00 db2agent (DATABASEA) 0
db2inst1  991312  778412   0   Apr 17      -  0:16 db2agent (idle) 0
db2inst1 1077466  491676   0   Apr 19      -  0:00 db2agent (DATABASEA) 0
db2inst1 1134726  680100   0   Apr 19      -  0:00 db2agent (DATABASEA) 0
db2inst1 1142964  491676   0   Apr 19      -  0:00 db2agent (idle) 0
db2inst1 1233112  491676   0   Apr 19      -  0:00 db2agent (idle) 0
db2inst1 1261748  778412   0   Jun 15      -  0:18 db2agent (idle) 0
db2inst1 1384678  778412   0   Mar 23      -  0:27 db2agent (idle) 0
db2inst1 1404936  680100   0   Apr 19      -  0:00 db2agent (DATABASEA) 0
db2inst1 1421368  778412   0   Mar 22      -  0:04 db2agent (idle) 0
db2inst1 1445936  491676   0   Apr 19      -  0:00 db2agent (DATABASEA) 0
db2inst1 1482864  491676   0   Jun 16      -  0:31 db2agent (idle) 0
db2inst1 1503440  778412   0   Jun 15      -  0:56 db2agent (idle) 0
db2inst1 1519842  778412   0   Mar 23      -  0:00 db2agent (DATABASEA) 0
db2inst1 1531946  680100   0   Apr 19      -  0:00 db2agent (idle) 0
db2inst1 1572884  680100   0   Apr 19      -  0:00 db2agent (idle) 0

Other info

oslevel -g
Fileset                                 Actual Level        Maintenance Level
-----------------------------------------------------------------------------
bos.rte                                 5.3.0.40            5.3.0.0

db2fm -s -S
Gcf module 'fault monitor' is NOT operable
Gcf module '/home/db2inst1/sqllib/lib/libdb2gcf.a' state is AVAILABLE

uptime
02:14PM   up 365 days,  12:51,  6 users,  load average: 6.69, 6.89, 6.97

db2level
DB21085I  Instance "db2inst1" uses "64" bits and DB2 code release "SQL08020" with level identifier "03010106".
Informational tokens are "DB2 v8.1.1.64", "s040812", "U498350", and FixPak "7"

Generic SQL Job Scheduler for multiple RDBMS's?

Posted: 21 Sep 2013 08:22 AM PDT

I have been searching for an answer to this but can't seem to find anything. My problem is this: we have an environment with MS SQL Server 2008, MySQL, and Redshift, and we have some complex dataflows between the databases. Right now, the scheduling is done through independent systems, but I want one scheduler that controls the dataflows from beginning to end and is able to script flows from MS SQL to Redshift, etc. Is there a system that can already accomplish this? I'm not a DBA, so I am guessing someone has had this problem before...

Thanks in advance!

EDIT: So one of our dataflows might look like this: file posted on SFTP --> run normal ETL routines --> compile final complete file --> send to customer/push to S3 --> run SQL commands on Redshift to load* --> nightly batch processing on Redshift* --> unload to S3* --> load into MySQL*

*These are manually run using a tool that just connects via jdbc (can't remember the program)

My DB-related experience is very light, so I was about to write some Python scripts and schedule them in cron, but that is custom and hard to expand - surely someone has had this problem before. We would like to be able to see the status of the jobs in one place and create new dataflows/ETLs between all three systems (like an SSIS job).

Login failed for user Error: 18456, Severity: 14, State: 11

Posted: 21 Sep 2013 11:22 AM PDT

I have an AD group XYZ that I have added to SQL Server security with data_reader permissions.

The XYZ group has around 10 users who are successfully able to access the SQL Server database. I recently added a new user to this group (at the AD level), but this person is not able to access SQL Server (through Management Studio), and he's getting the error below:

Login failed for user. Reason: Token-based server access validation failed with an infrastructure error. Check for previous errors.

Error: 18456, Severity: 14, State: 11.

I have already verified that the AD permissions are set up properly, the user has restarted his machine, he is not part of any group that has DENY access, the SQL Server XYZ group has been removed and re-added to the SQL Server instance in Management Studio, and the server has been restarted.
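One hedged diagnostic step: ask SQL Server to enumerate the login's group path and confirm it resolves the new user through XYZ ('DOMAIN\newuser' is a placeholder):

EXEC xp_logininfo 'DOMAIN\newuser', 'all';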

Any ideas on how to proceed further?

Thanks!

MySQL data too long error

Posted: 20 Sep 2013 10:22 PM PDT

One of the columns in my table was initially created as a varchar(1000). Now there is a need to increase the size of that column to hold around 5000 characters, so I used an ALTER statement to increase its size. DESC table_name as well as SHOW CREATE TABLE table_name clearly indicate the new size of the column to be 5000 characters. Interestingly, however, when I try to insert any data exceeding 1000 characters, the error ERROR 1406 (22001): Data too long for column shows up. Out of desperation I changed the datatype to text, and it's still limited to 1000 characters. Any suggestions?

I created another dummy table with a column size of varchar(5000), and that works just fine. The engine used is InnoDB and the default charset is UTF8.
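A couple of hedged diagnostics worth running, since a leftover trigger (or a view with the old definition) could still enforce the 1000-character limit even after the ALTER:

SHOW CREATE TABLE table_name;
SHOW TRIGGERS LIKE 'table_name';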

Rent weekly cost database design

Posted: 21 Sep 2013 04:22 PM PDT

I have a database which contains a table BUILDING with, in each row, details about a building. Another table BUILDING_UNIT contains rows with details about a single building unit, which refers with a foreign key to the owning BUILDING.ID. The BUILDING_UNIT table also refers to a table CATEGORY, which tells whether the BUILDING_UNIT is of category A, B, C, or D, again with a foreign key pointing to CATEGORY.ID.

Now, the final cost of renting a building unit depends on its building, its category, the number of days it is rented, and the specific period of the year. We only rent units weekly, so I might as well use weeks only; however, I'd like the design to be as flexible as possible for the future.

I cannot settle on a table design which can represent this situation.

Do I have to use a table with coefficients for each day of the year, then a table with coefficients for A, B, C, D, then a table with coefficients for each building, and then somehow calculate a result?

Is there some standard and recognized implementation for problems of this type?

Thank you

EDIT: Note that the solution should abstract from the formula for calculating the cost, which might change in the future. However, I might be asked to make a specific week of the year, for building unit X inside building Y, cost $300 while the week after costs $600. Generally, building units inside the same building cost the same in the same week, but that might change in the future, so I'd like to be able to handle all the specific cases already.
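Not a recognized standard, but one hedged sketch of a rate table that can express both general pricing and the per-unit-per-week overrides from the edit (all names are illustrative):

CREATE TABLE RENTAL_RATE (
  ID           BIGINT PRIMARY KEY,
  BUILDING_ID  BIGINT NULL,     -- NULL = rate applies to any building
  UNIT_ID      BIGINT NULL,     -- NULL = rate applies to any unit
  CATEGORY_ID  BIGINT NULL,     -- NULL = rate applies to any category
  START_DATE   DATE NOT NULL,   -- weekly periods today, daily later if needed
  END_DATE     DATE NOT NULL,
  WEEKLY_COST  DECIMAL(10,2) NOT NULL
);

The pricing lookup would pick the most specific row matching the unit and week (a unit match beats a category match, which beats a building-wide match), keeping the formula itself out of the schema.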

Proper procedure for migrating a MySQL database to another Debian machine?

Posted: 21 Sep 2013 02:22 PM PDT

I have one server running an older Debian version with MySQL 5.x and a newer Debian server, also running MySQL.

I've created a backup of all databases on the first server like so:

mysqldump -uuser -ppass --all-databases > dump.sql  

On the other server, I did a:

mysql -uuser -ppass < dump.sql  

At first, everything seemed great. I could browse my databases in phpMyAdmin, but as soon as I tried logging in again, it failed. It turns out my root password had been overwritten with the one from the older database.

I wanted to reset it, but in order to do so, I would have needed to start mysqld_safe, which I couldn't, because the password for the debian-sys-maint user had been overwritten in the database as well. Just when I thought all hell had broken loose, I somehow managed to reset both the root and debian-sys-maint passwords to the original values of the new server and revert to a clean state.

Since I obviously don't want to go down that road again, here are my questions:

  • Was I right with my approach of using a complete --all-databases dump?
  • Was there something I needed to do before reading in that dump to prevent this disaster from happening? Or even before creating the dump?

If I'm going about this the wrong way:

  • What is the proper procedure for migrating all databases and their users to another server?

Note that I'm not that experienced with MySQL and server administration, so I might be missing something obvious. All the tutorials and how-tos I've found never mention anything like this and just talk about importing the complete dump.
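For comparison, a hedged sketch of a narrower dump that leaves the target's mysql system schema (and therefore its users and passwords) untouched; database names are placeholders:

mysqldump -uuser -ppass --databases app_db1 app_db2 > dump.sql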

How to add 'root' MySQL user back on MAMP?

Posted: 21 Sep 2013 01:22 PM PDT

In phpMyAdmin, I removed the 'root' user by mistake while logged in as 'root'. How can I add the 'root' user back on MAMP?
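A hedged sketch of the usual recovery path: start MAMP's MySQL with the --skip-grant-tables option, then reload the grant tables and recreate root ('root' is MAMP's customary default password; adjust as needed):

FLUSH PRIVILEGES;  -- reload the grant tables so GRANT works again
GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' IDENTIFIED BY 'root' WITH GRANT OPTION;

Afterwards, restart MySQL normally (without --skip-grant-tables).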

Slow insert with MySQL full-text index

Posted: 21 Sep 2013 10:22 AM PDT

I use a full-text index in a MySQL table, and each INSERT into this table takes about 3 seconds. It seems that MySQL rebuilds (a part of) the full-text index after each insert/update. Is this right?

How can I get better performance from the INSERTs? Is there perhaps an option controlling when MySQL rebuilds the full-text index?
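If the inserts arrive in batches, one hedged workaround is to drop the full-text index before the bulk load and rebuild it once afterwards (index, table, and column names are illustrative):

ALTER TABLE articles DROP INDEX ft_body;
-- ... run the bulk INSERTs ...
ALTER TABLE articles ADD FULLTEXT ft_body (body);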

How to design a relational database table to store friendship relationships?

Posted: 21 Sep 2013 09:57 AM PDT

I want to design a table to store friendship relationships in my web project.

It should satisfy at least the following 4 conditions:

who sent the add-friend request (if A sends to B, this column will be A)

who received the add-friend request (if A sends to B, this column will be B)

the current status (0 denotes rejected, 1 denotes accepted, 2 denotes unprocessed)

our friend relationship is bilateral

If any of you are experienced with this, any suggestion is welcome.

My current design (which I now think is bad) is like this:

friendshipId, fromUserId, toUserId, status, requestTime - these are the columns
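A hedged sketch of that design as DDL (the types are guesses; the UNIQUE key stops duplicate requests in one direction, and since the relationship is bilateral, storing each pair with the lower user id first would stop duplicates in the reverse direction too):

CREATE TABLE friendship (
  friendshipId BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  fromUserId   BIGINT NOT NULL,             -- who sent the request
  toUserId     BIGINT NOT NULL,             -- who received the request
  status       TINYINT NOT NULL DEFAULT 2,  -- 0 rejected, 1 accepted, 2 unprocessed
  requestTime  DATETIME NOT NULL,
  UNIQUE KEY uq_pair (fromUserId, toUserId)
);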
