Saturday, July 6, 2013

[how to] Storing different countries' administrative divisions


Storing different countries' administrative divisions

Posted: 06 Jul 2013 08:37 PM PDT

I'm designing a database that should relate users to their administrative divisions for statistical purposes, considering that every country has its own type of divisions. Chile, for example, is divided into regions, which are divided into provinces, which are divided into communes, while Angola is divided into provinces, which are divided into municipalities, which are divided into communes. I thought a solution could be to have a column called first_administrative_division in the user table, referring to a first_administrative_division table, which would have a second_administrative_division column referring to a second_administrative_division table, and so on down to a column referring to the country table. But reading about this, I found not only that there are countries with just two levels of administrative divisions, and others with four; some cases are even more complex, like Argentina and Brazil, which have different kinds of internal divisions, divided in different ways. So, I'm crying right now.

I don't want to cover the entire world; countries will be added as they are needed. Is it possible to model such a thing?

Thanks beforehand.
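One design that sidesteps the fixed-depth problem is a single self-referencing table of divisions: one row per division at any level, each pointing to its parent (or to nothing at the top level). This is only a sketch under assumed table and column names:

```sql
CREATE TABLE country (
    id   INT PRIMARY KEY,
    name VARCHAR(100) NOT NULL
);

-- One row per division of any kind, at any depth.
CREATE TABLE administrative_division (
    id         INT PRIMARY KEY,
    country_id INT NOT NULL REFERENCES country(id),
    parent_id  INT NULL REFERENCES administrative_division(id), -- NULL = top level
    level_name VARCHAR(50)  NOT NULL, -- e.g. 'region', 'province', 'commune'
    name       VARCHAR(100) NOT NULL
);

-- The user table then references a single division; its full ancestry,
-- however deep, is recovered by walking parent_id.
```

Chile's region/province/commune and Angola's province/municipality/commune both fit with no schema change, as do countries with two or four levels, since depth is data rather than structure.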

index mysql concatenated columns

Posted: 06 Jul 2013 08:31 PM PDT

I have a table for author names with two fields, first_name and last_name, both varchar(55) and both nullable (for crazy business-logic reasons), although in reality last_name is unlikely ever to be null.

My where clause for the search screen contains:

WHERE CONCAT(a.first_name, " " , a.last_name) LIKE "%twain%"  

so that "Twain" or "Mark Twain" can be searched on. The table has about 15,000 rows & is expected to gradually grow, but won't ever be more than double that, and not for years.

I understand that there are many other parts of my query that will affect this, but given just that information, how might I best index this?

If it would make a great difference in performance, making last_name not nullable is an option, but not first_name.

TIA!
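For what it's worth, a LIKE with a leading wildcard over a CONCAT can never use a B-tree index, whatever is indexed. One hedged alternative (assuming the table, here called author, can be MyISAM, or MySQL 5.6+ for InnoDB full-text support) is a FULLTEXT index over both columns:

```sql
ALTER TABLE author ADD FULLTEXT INDEX ft_author_name (first_name, last_name);

-- Whole-word searches can then use the index:
SELECT *
FROM author
WHERE MATCH(first_name, last_name) AGAINST ('twain' IN BOOLEAN MODE);
```

At 15,000 to 30,000 rows, though, a full scan of two short varchar columns is often fast enough that the existing CONCAT/LIKE version may be acceptable as-is.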

10g to 11gr2 upgrade causes SYS.UTL_HTTP malfunction

Posted: 06 Jul 2013 08:19 PM PDT

After a 10g to 11gR2 upgrade, we are getting an ORA-29273 error while using the SYS.UTL_HTTP package; our upgrade checklist requires ACLs to be disabled. Is there a way to resolve this problem without enabling an ACL? Thanks,

Kayhan
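In 11g, network access from PL/SQL is controlled by ACLs, so if disabling them turns out not to be viable, the minimal grant looks roughly like this sketch (the ACL file name and schema name are assumptions):

```sql
BEGIN
  -- Create an ACL granting the connect privilege to the assumed app schema.
  DBMS_NETWORK_ACL_ADMIN.CREATE_ACL(
    acl         => 'utl_http_acl.xml',
    description => 'Allow UTL_HTTP for the application user',
    principal   => 'APP_USER',   -- assumed schema name
    is_grant    => TRUE,
    privilege   => 'connect');
  -- Bind the ACL to target hosts; '*' is for illustration only and
  -- should be tightened to the real host names in practice.
  DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL(
    acl  => 'utl_http_acl.xml',
    host => '*');
  COMMIT;
END;
/
```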

How do you link to MAMP with PHP

Posted: 06 Jul 2013 05:46 PM PDT

I am very, VERY inexperienced with PHP and databases, and I was wondering how to send info to and retrieve it from the MAMP database I created.

MySQL: Row size too large (> 8126)

Posted: 06 Jul 2013 03:58 PM PDT

Here's the issue. I have a table which has 300 columns in it, each up to 255 bytes (there IS a business justification for this). When I try to create it using VARCHAR(255) columns, I go past the limit for the maximum number of bytes per row. So I created it using 300 TEXT fields. When I then try to insert data, I get the error:

Row size too large (> 8126). Changing some columns to TEXT or BLOB or using ROW_FORMAT=DYNAMIC or ROW_FORMAT=COMPRESSED may help. In current row format, BLOB prefix of 768 bytes is stored inline.

After reading up on this, I tried to change the table to the Barracuda file format by specifying ROW_FORMAT=COMPRESSED. The issue now seems to be that when I try to create the table using that format, I get the same error.

CREATE TABLE T_ObjLarge__c (
    Id VARCHAR(18),
    Name VARCHAR(80),
    ObjLarge_Field_1__c TEXT,
    ObjLarge_Field_2__c TEXT,
    ...
    ObjLarge_Field_300__c TEXT
) ROW_FORMAT=COMPRESSED;

The error I get is:

Row size too large (> 8126). Changing some columns to TEXT or BLOB may help. In current row format, BLOB prefix of 0 bytes is stored inline.

Please help!
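One thing worth checking (a sketch, assuming a MySQL 5.5-era server): ROW_FORMAT=COMPRESSED or DYNAMIC only takes effect when the Barracuda file format is enabled before the table is created, which needs two server settings:

```sql
-- Barracuda rows live in per-table .ibd files, so both are required:
SET GLOBAL innodb_file_format = Barracuda;
SET GLOBAL innodb_file_per_table = 1;

-- Recreate the table afterwards. DYNAMIC (or COMPRESSED) stores long TEXT
-- values fully off-page with a 20-byte pointer instead of the 768-byte
-- inline prefix that overflows the ~8126-byte row limit.
CREATE TABLE T_ObjLarge__c (
    Id   VARCHAR(18),
    Name VARCHAR(80),
    ObjLarge_Field_1__c TEXT
    -- ... remaining TEXT columns ...
) ROW_FORMAT=DYNAMIC;
```

If the error persists even then, the row may genuinely exceed the limit at insert time, and splitting the table vertically into several narrower tables sharing the Id is the usual fallback.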

What information is contained in these objects? [on hold]

Posted: 06 Jul 2013 03:34 PM PDT

What information is contained in the following objects?

MSreplication_options
spt_fallback_db
spt_fallback_dev
spt_fallback_usg
spt_values
spt_monitor

Is there a name for this type of query, and what is an efficient example?

Posted: 06 Jul 2013 01:44 PM PDT

The purpose is to find a parent, given its children. For example, say you have a marketing package (aka a "combo") and want to match it based on the products in it. Example table/data:

create table marketing_package_product (
    package_id int not null references marketing_package(id),
    product_id int not null references product(id),
    primary key (package_id, product_id)
);

insert into marketing_package_product values
(1,1),
(1,2),
(1,3),
(2,1),
(2,5);

Given products 1,2,3, I want to get marketing_package 1. But given only products 1,2, I do not want marketing_package 1.

Is there a name for this type of query, and what is the most efficient way to go about it?
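This is usually called relational division; since extra or missing products both disqualify a package here, it is the exact-match variant. One way to express it against the table above, as a sketch:

```sql
-- Exact relational division for the input set (1,2,3):
-- a qualifying package must contain no products outside the list
-- (first condition) and every product in the list (second condition).
SELECT mpp.package_id
FROM marketing_package_product AS mpp
GROUP BY mpp.package_id
HAVING COUNT(*) = SUM(CASE WHEN mpp.product_id IN (1,2,3) THEN 1 ELSE 0 END)
   AND SUM(CASE WHEN mpp.product_id IN (1,2,3) THEN 1 ELSE 0 END) = 3;
```

With the sample data, package 1 (products 1,2,3) matches; package 2 (products 1,5) fails both conditions; and with the input list (1,2) instead, package 1 fails the first condition, as desired.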

Two primary keys to one foreign key

Posted: 06 Jul 2013 12:00 PM PDT

racing_couple
    ID (PRIMARY)

breeding_couple
    ID (PRIMARY)

egg
    IDpar (FOREIGN KEY)

I have 2 IDs from 2 different tables (racing_couple and breeding_couple), and they are primary keys. In a third table (egg) I have IDpar, which is a foreign key and references the ID in both racing_couple and breeding_couple. The tables represent racing and breeding couples of pigeons, and the egg table represents an egg of a racing or breeding couple. I'm using IDpar to identify which egg belongs to which couple.

Note: there are other fields in both tables, but they are mostly varchar and not so relevant to this problem.

If I have something like the data below in both tables, how do I know which IDpar holds a value from racing_couple and which holds a value from breeding_couple? I think I made a mistake by designing my database like this, but is there any way to make it work?

ID (racing_couple)
1
2

ID (breeding_couple)
1
2

IDpar (egg)
1 (ID racing_couple)
1 (ID breeding_couple)
2 (ID racing_couple)
2 (ID breeding_couple)
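One common fix, shown here as a sketch with assumed column names: replace the single ambiguous IDpar with two nullable foreign keys, one per parent table, and constrain the row so exactly one of them is set.

```sql
CREATE TABLE egg (
    id INT PRIMARY KEY,
    racing_couple_id   INT NULL REFERENCES racing_couple(ID),
    breeding_couple_id INT NULL REFERENCES breeding_couple(ID),
    -- Exactly one parent must be set: one column NULL, the other not.
    CHECK ((racing_couple_id IS NULL) <> (breeding_couple_id IS NULL))
);
```

Note that MySQL before 8.0.16 parses but ignores CHECK constraints, so on older versions the rule would have to be enforced in application code or triggers; the two real foreign keys still work either way.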

What if I want to keep using MyISAM

Posted: 06 Jul 2013 10:21 AM PDT

In my current use of MyISAM, the databases are extremely small and kept for a few weeks, after which they are deleted and completely rebuilt with a clean slate. The script uses MyISAM tables, and other than the lack of support in the latest MySQL versions, I have never had a problem with the way they work.
Changing to InnoDB is something that would be time-consuming and provide zero benefit. However, I am concerned that eventually there may come a time when either the older MySQL version is not available or some other process forces the move.

My question is: does anyone know of a specific reason to change from MyISAM to InnoDB just to keep up with MySQL?

If I can keep using what works for my own specific use, I see no reason to worry about testing and possible errors when there are more pressing issues that need attention and have problems now. MyISAM does not have a problem at this time.

Log Shipping for SQL Server 2005

Posted: 06 Jul 2013 12:25 PM PDT

When I establish log shipping on my failover database, the LS_Restore process runs for about an hour (it runs every 15 minutes) and then fails. I have found that it is looking for something that is in the backup transaction log file, which is compiled once an hour and copied to a different directory than the transaction logs. Since it is a backup file, it has a different naming string. Applying the backup transaction log through the restore GUI gets the process running for an hour, and then it fails again.

Any suggestions?

user permission cache in session or check from database?

Posted: 06 Jul 2013 12:23 PM PDT

On each page of my website, 2 or 3 permissions have to be checked, and I use a role-permission model. Which is better: checking each user's permissions directly against the database every time, or calculating them once and storing them in the user's session (caching permissions in the session)?

Which one is more common?

How to properly index hstore tags column to faster search for keys

Posted: 06 Jul 2013 10:47 AM PDT

I imported a large area of OpenStreetMap's planet.osm file into a PostgreSQL database. The database contains a table called nodes. Each node has a geometry column called geom and an hstore column called tags. I need to extract nodes along a line that have certain keys in the tags column. To do that I use the following query:

SELECT id, tags
FROM nodes
WHERE ST_DWithin(nodes.geom, ST_MakeLine('{$geom1}', '{$geom2}'), 0.001)
  AND tags ? '{$type}';

$geom1 and $geom2 are geometries for start and end points of my line.
The $type variable contains the key I want to search for. Now, it can have one of the following values: 'historic' or 'tourist'.

The query given above works, but it is too slow. I guess searching for a key in the tags column takes too much time. I read about GIN and GiST indexes and generated a GIN index using the following query:

CREATE INDEX nodes_tags_idx ON nodes USING GIN(tags);  

After creating the index I searched again for nodes using the same first query but there is no change in performance.

How can I properly use GIN and GiST to index the tags column so I can search faster for nodes that have a certain key in tags?

Thank you
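One hedged possibility, assuming only a handful of keys such as 'historic' and 'tourist' ever matter: since the query combines a spatial filter with a key-existence test, a partial spatial index per key lets the planner satisfy both at once, and each partial index is far smaller than a whole-table GIN index on tags.

```sql
-- Partial GiST indexes on geom, restricted to rows having the key.
-- The planner can use them when the query repeats the same tags ? '...'
-- predicate verbatim in its WHERE clause.
CREATE INDEX nodes_geom_historic_idx ON nodes USING GIST (geom)
    WHERE tags ? 'historic';
CREATE INDEX nodes_geom_tourist_idx  ON nodes USING GIST (geom)
    WHERE tags ? 'tourist';

-- Verify with: EXPLAIN ANALYZE <original query>;
```

This only helps for the fixed set of keys you index; for arbitrary keys, the whole-table GIN index remains the general tool, and EXPLAIN ANALYZE would show whether the planner is actually choosing it.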

If there exist multiple similar values then show the sum of another column, else show the simple value [on hold]

Posted: 06 Jul 2013 05:11 PM PDT

I am having a problem. I have an Invoice table with different invoices. Each invoice has a unique invoice number and some amount. Invoices are of two types: one is of Invoice type, and the other is of Settled type. When an invoice is created, the entry in the table is of Invoice type and the amount is positive. When the user pays that invoice, another entry with the same invoice number is added to the table, but with a negative amount, because the user has paid that amount.
Now I want to show the records so that if there exist two records with the same invoice number, the record is shown with the sum of the Amount (i.e., combining the Invoice-type record and the Settled-type record), while records with a unique invoice number (no settlement record) are displayed as they are (without a sum). Help? I guess the query will be something of this kind, but that's not exactly what I want!

SELECT InvoiceNumber,
       Amount = CASE WHEN COUNT(InvoiceNumber) > 1
                     THEN SUM(Amount)
                     ELSE Amount
                END
FROM Invoice
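A hedged sketch of a simpler formulation: since SUM over a group of one row just returns that row's amount, a plain GROUP BY covers both cases and the CASE expression is unnecessary.

```sql
-- One output row per invoice number; settled invoices net to the sum of
-- their positive and negative entries, unsettled ones keep their amount.
SELECT InvoiceNumber, SUM(Amount) AS Amount
FROM Invoice
GROUP BY InvoiceNumber;
```

If fully settled invoices (which net to 0) should be hidden, a `HAVING SUM(Amount) <> 0` clause could be appended.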

Upgrade path - Oracle Standard to Oracle Enterprise

Posted: 06 Jul 2013 07:03 AM PDT

Could I start on Oracle Standard Edition and easily upgrade to Oracle Enterprise Edition?

Or do I need to rebuild the entire server? How does the upgrade path work?

How to provide access to the database if the user does not have the privileges

Posted: 06 Jul 2013 02:48 PM PDT

MS SQL Server 2008 R2 was installed by user "A", who has a database named "DB1". He left the company. Now user "B" is trying to access the database. He is able to see the database, but he does not have the privilege to read the data.

We don't know the password for SQL authentication. Both users are in the admin group on the machine where SQL Server is installed.

Is there any way the user can now read the data?
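One commonly cited path, sketched here rather than as step-by-step instructions: members of the machine's local Administrators group can restart SQL Server in single-user mode (the -m startup flag), which allows them to connect as sysadmin and grant themselves access. The login name below is an assumption:

```sql
-- Connected via sqlcmd as a local administrator while the instance
-- runs in single-user mode:
CREATE LOGIN [MACHINE\UserB] FROM WINDOWS;
EXEC sp_addsrvrolemember @loginame = 'MACHINE\UserB', @rolename = 'sysadmin';
-- Restart the instance normally afterwards; UserB can then read DB1.
```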

MySQL Continue Handler Problems inside a Function

Posted: 06 Jul 2013 10:49 AM PDT

I am in the process of writing a simple MySQL function that looks up a value in a table and returns it. If no value is found, it returns NULL. However, even with a continue handler defined, I still end up with the warning "No data - zero rows fetched, selected, or processed". My code is below; what am I doing wrong? I really want to get rid of this warning :)

DROP FUNCTION IF EXISTS `getCompanySetting`;
DELIMITER |

CREATE FUNCTION `getCompanySetting`(setting_company_id INT, setting_name VARCHAR(255))
    RETURNS TEXT
BEGIN
    DECLARE setting_value TEXT DEFAULT NULL;
    DECLARE CONTINUE HANDLER FOR SQLWARNING BEGIN END;
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET setting_value = NULL;

    SELECT value
    FROM company_settings
    WHERE company_id = `setting_company_id`
      AND name       = `setting_name`
    INTO setting_value;

    RETURN setting_value;
END|

DELIMITER ;

mysql> SELECT getCompanySetting(24, 'observers_active');
+-------------------------------------------+
| getCompanySetting(24, 'observers_active') |
+-------------------------------------------+
| NULL                                      |
+-------------------------------------------+
1 row in set, 1 warning (0.00 sec)

mysql> show warnings;
+---------+------+-----------------------------------------------------+
| Level   | Code | Message                                             |
+---------+------+-----------------------------------------------------+
| Warning | 1329 | No data - zero rows fetched, selected, or processed |
+---------+------+-----------------------------------------------------+
1 row in set (0.00 sec)

Update: MySQL Version 5.5.25

Data sharing between client and server two ways

Posted: 06 Jul 2013 07:56 AM PDT

One of my Silverlight applications is running. This application uses one central database (SQL Server 2008) and three client databases (SQL Server 2008) on servers in other locations.

Our requirement is that, with minor changes or without changing the Silverlight application, when we make changes in the central database (inserts, updates and deletes in multiple tables), these changes automatically occur in all client databases. And when any change is made in any client database, all clients as well as the central database should be updated automatically.

For this I read a bit about Sync Framework and SQL Server 2008 Change Tracking, but I have no idea how to do this. Please help me: how can we achieve this?

How can I query data from a linked server, and pass it parameters to filter by?

Posted: 06 Jul 2013 06:49 AM PDT

I have a really big query that needs to be run on multiple databases, and the results appended to a temp table and returned.

The basic syntax looks something like this:

INSERT INTO #tmpTable (Id, ...)
SELECT T1.Id, ...
FROM Server.Database.dbo.Table1 as T1
INNER JOIN #tmpIds as T ON T1.Id = T.Id
INNER JOIN Server.Database.dbo.Table2 as T2 ON T1.Id = T2.Id
INNER JOIN Server.Database.dbo.Table3 as T3 ON T2.Id = T3.Id
LEFT OUTER JOIN Server.Database.dbo.Table4 as T4 ON T3.Id = T4.Id
LEFT OUTER JOIN Server.Database.dbo.Table5 as T5 ON T4.Id = T5.Id
LEFT OUTER JOIN Server.Database.dbo.Table6 as T6 ON T5.Id = T6.Id

The query runs quickly if run locally on the individual servers; however, it takes a long time to run from a linked server using four-part names as above.

The problem appears to be that it queries the linked server for the unfiltered result set first, then joins it to the #tmpIds table on the local server afterwards, which makes the query take a very long time to run.

If I hardcode the Ids to filter the result set on the linked server, such as

SELECT T1.Id, ...
FROM Server.Database.dbo.Table1 as T1
-- INNER JOIN #tmpIds as T ON T1.Id = T.Id
INNER JOIN Server.Database.dbo.Table2 as T2 ON T1.Id = T2.Id
INNER JOIN Server.Database.dbo.Table3 as T3 ON T2.Id = T3.Id
LEFT OUTER JOIN Server.Database.dbo.Table4 as T4 ON T3.Id = T4.Id
LEFT OUTER JOIN Server.Database.dbo.Table5 as T5 ON T4.Id = T5.Id
LEFT OUTER JOIN Server.Database.dbo.Table6 as T6 ON T5.Id = T6.Id
WHERE T1.Id IN (1, 2, 3)

it runs quickly in just a few seconds.

Is there a way to run this query so it filters the result set of the query from the linked server by the #tmpId table first, before returning the result set to the local server?

Some things to note

  • The query is very large and complex, and Dynamic SQL is not a viable option for it due to the maintenance nightmare that causes.

    I would be open to suggestions about how to use Dynamic SQL for something else though, such as running a stored procedure or UDF if there's a way to do that over a linked server (tried a few different methods such as sp_executeSQL, OPENROWSET, and OPENQUERY, but those all failed).

  • Because it uses the 4-part naming convention, I cannot use a UDF on the remote server
  • Distributed Transactions are disabled, so the following does not work

INSERT INTO #table
EXEC Server.Database.dbo.StoredProcedure @ids
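One workaround that sometimes helps in this situation, sketched with assumed server and table names: materialize the ids into a real (non-#) table on the remote server first, so the filtering join is evaluated entirely remotely instead of pulling the unfiltered set across.

```sql
-- Create a permanent staging table on the remote side
-- (EXEC ... AT requires RPC OUT enabled on the linked server):
EXEC('IF OBJECT_ID(''tempdb.dbo.FilterIds'') IS NOT NULL
          DROP TABLE tempdb.dbo.FilterIds;
      CREATE TABLE tempdb.dbo.FilterIds (Id INT PRIMARY KEY);') AT [Server];

-- Copy the small id set across once:
INSERT INTO [Server].tempdb.dbo.FilterIds (Id)
SELECT Id FROM #tmpIds;

-- The big query then joins the remote FilterIds instead of the local #tmpIds,
-- so all six tables and the filter live on the same server:
SELECT T1.Id -- , ...
FROM [Server].Database.dbo.Table1 AS T1
INNER JOIN [Server].tempdb.dbo.FilterIds AS T ON T1.Id = T.Id;
```

Whether the optimizer then remotes the whole join is worth confirming with the actual execution plan; the staging table should also be dropped (or truncated) after the batch.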

Handling expiration in a course registration or ticketing system

Posted: 06 Jul 2013 05:34 PM PDT

I want to restructure the database for a web application I inherited for handling summer camp registrations, but my question is relevant to any event registration or ticketing system.

My question is about how to best handle the need for registrations to expire if unpaid for X amount of time.

Currently the database design is something like this (some columns omitted for brevity):

PK = primary key
FK = foreign key

sale_header (
    PK id
    FK account_id
    payment_status
    total
    payment_method
    payment_time
)

sale_item (
    PK (sale_header_id, camper_id)
    price
)

class_registration (
    PK (camper_id, class_id)
    time_added
    registered (boolean)
)

When payment is completed, the registered flag is set to true. There is a dump script that runs regularly to clear out unpaid registrations by checking for registrations where `class_registration.registered = 0 AND sale_header.payment_status = 'not paid'` and for which the configured time allowed until expiration has elapsed.

There are currently some bugs related to this, and anyway it seems to me to be a somewhat strange design given that the registered flag is only set to true when payment is completed, so the payment_status column seems to be unnecessary; it seems like it would be much cleaner to have the sale_header table only contain transactions that were actually completed. Also, even if I kept this approach, it seems it would be better if there were a FK linking sale_item and class_registration.

I've been researching potential database designs for this scenario and one thought I had was to add price info to the class_registration table and get rid of the sale_item table, since the system is only for selling class registrations, nothing else:

sale (
    PK id
    FK account_id
    total
    payment_method
    payment_time
)

class_registration (
    PK (camper_id, class_id)
    FK sale_id
    price
    time_added
)

In this design, the FK sale_id would initially be null while the parent was still choosing classes for their camper(s), so finding expired, unpaid registrations would be done by querying for registrations where sale_id is NULL, and which had exceeded the expiration time.

Another option I thought of, inspired by this example (scroll down to "Data Modeling in the Sample Application"), would be to have a separate class_availability table, which might be good for querying performance when determining available capacities:

class_registration (
    FK camper_id
    FK class_id
    FK sale_id
    price
    time_added
)

class_availability (
    FK class_id
    FK camper_id
    FK registration_id
)

Finally, I considered what it would look like if I kept the sale_item table, although I'm not sure what would be the advantage of doing so:

class_registration (
    PK (camper_id, class_id)
    FK sale_item_id
    time_added
)

sale_item (
    PK sale_item_id
    FK sale_header_id
    price
)

So my specific concerns are:

  • What are the pros and cons of these approaches?
  • Should expired registrations stay in the database and just be excluded when querying, or actually be deleted?
  • A requirement to keep in mind is that it should be possible to change the configured expiration time in a live system (in case of bugs, which has happened before with this system), so having a column like expiration_time in the database isn't an option; it needs to be calculated based on time_added.
  • (Bonus :) Feel free to skip this one and focus on the above questions.) In my new proposed designs (or some other design), how could I handle the possible need for admins to be able to extend the expiration time for a particular camper in case there was some issue with payment that needed to be worked out, and their registration should be held in the meantime?
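For what it's worth, under the sale_id-is-NULL design the expiration check can stay a pure query with no stored expiry column, which directly satisfies the live-reconfiguration requirement. A sketch, where @expiration_minutes stands for the configured setting (an assumed name):

```sql
-- Registrations never attached to a sale that have outlived the currently
-- configured window. The window is read at query time, so changing the
-- setting takes effect immediately, even for rows already pending.
SELECT r.camper_id, r.class_id
FROM class_registration AS r
WHERE r.sale_id IS NULL
  AND r.time_added < NOW() - INTERVAL @expiration_minutes MINUTE;
```

Excluding these rows in capacity queries, rather than deleting them, keeps an audit trail; the admin "hold this registration" case could then be a per-row override such as a nullable hold_until column that this WHERE clause also respects.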

MySQL performance tuning + queries stuck on "Copying to tmp table"

Posted: 06 Jul 2013 09:49 AM PDT

The latter part of the question's title (queries stuck on "Copying to tmp table") has been addressed many times, and I have spent a fair amount of time researching this. I would appreciate it if you guys could help me come to a conclusion - especially with my particular server set up in mind.

Quick overview of server:
- Dedicated server with 2 cores and 64 GB RAM
- Only runs MySQL

The setup is in no way tweaked, so current config is to some extent way off. Hopefully your scolding could provide knowledge.

The web application running on the server is a Magento site with 25.000+ products. The query giving the most headache is the one generating sitemaps.

Currently, the following query has been stuck for a little over an hour on "Copying to tmp table":

Note: I do not really need input on how to increase performance by optimizing this query, I would rather see how much I can shave query time off the queries that already exist.

SELECT DISTINCT `e`.`entity_id`, `ur`.`request_path` AS `url`, `stk`.`is_in_stock`
FROM `catalog_product_entity` AS `e`
INNER JOIN `catalog_product_website` AS `w` ON e.entity_id=w.product_id
LEFT JOIN `core_url_rewrite` AS `ur` ON e.entity_id=ur.product_id AND ur.category_id IS NULL AND ur.store_id='1' AND ur.is_system=1
INNER JOIN `catalog_category_product_index` AS `cat_index` ON e.entity_id=cat_index.product_id AND cat_index.store_id='1' AND cat_index.category_id in ('2', '3', '68', '86', '145', '163', '182', '196', '198', '214', '249', '252', '285', '286', '288', '289', '290', '292', '549') AND cat_index.position!=0
INNER JOIN `cataloginventory_stock_item` AS `stk` ON e.entity_id=stk.product_id AND stk.is_in_stock=1
INNER JOIN `catalog_product_entity_int` AS `t1_visibility` ON e.entity_id=t1_visibility.entity_id AND t1_visibility.store_id=0
LEFT JOIN `catalog_product_entity_int` AS `t2_visibility` ON t1_visibility.entity_id = t2_visibility.entity_id AND t1_visibility.attribute_id = t2_visibility.attribute_id AND t2_visibility.store_id='1'
INNER JOIN `catalog_product_entity_int` AS `t1_status` ON e.entity_id=t1_status.entity_id AND t1_status.store_id=0
LEFT JOIN `catalog_product_entity_int` AS `t2_status` ON t1_status.entity_id = t2_status.entity_id AND t1_status.attribute_id = t2_status.attribute_id AND t2_status.store_id='1'
WHERE (w.website_id='1')
  AND (t1_visibility.attribute_id='102')
  AND ((IF(t2_visibility.value_id > 0, t2_visibility.value, t1_visibility.value)) IN (3, 2, 4))
  AND (t1_status.attribute_id='96')
  AND ((IF(t2_status.value_id > 0, t2_status.value, t1_status.value)) IN (1))

Relevant config:

Server buffers:

max_connections = 1500
key_buffer_size = 22G
innodb_buffer_pool_size = 16G
innodb_additional_mem_pool_size = 2G
innodb_log_buffer_size = 400M
query_cache_size = 64M

Per thread buffers:

read_buffer_size = 2M
read_rnd_buffer_size = 16M
sort_buffer_size = 128M
thread_stack = 192K
join_buffer_size = 8M

Question: Does any of these variables seem way off to any of you?

The above memory limits will allow me to use 130% more RAM than is physically installed on my system. Obviously, something must change.

According to the MySQLTuner Perl script, I am currently in danger of using:
Total buffers: 40.7G global + 28.2M per thread (1500 max threads)
Maximum possible memory usage: 82.0G (130% of installed RAM)

Questions: What would give the most performance increase of the above variables, or: will it be more useful to increase the total server buffer limit or the per thread buffer limits?

The two variables that seem to affect "Copying to tmp table" the most are:
- tmp_table_size
- max_heap_table_size

Mine are both set to 0.25G

Question: Any recommendations specifically for those two variables?

There are two suggested fixes that seem to get mentioned more that others:
- Kill process. Do flush table.
- Kill process. Do repair / optimize tables.

Question: Which of the two suggested solutions above do you think is the most viable?

How to find when last reindexing happened in sql server?

Posted: 06 Jul 2013 04:49 AM PDT

I have set up a job for re-indexing; I want to know when the last re-indexing happened, as a datetime value.
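Assuming the re-indexing runs as a SQL Server Agent job, msdb's job history records the run times; a sketch (the job name is an assumption):

```sql
-- Last completed run of the re-indexing job.
-- step_id = 0 is the job-outcome row; agent_datetime converts the
-- integer run_date/run_time pair into a datetime.
SELECT TOP (1)
       msdb.dbo.agent_datetime(h.run_date, h.run_time) AS last_run
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.sysjobhistory AS h ON h.job_id = j.job_id
WHERE j.name = 'Reindex Job'   -- assumed job name
  AND h.step_id = 0
ORDER BY h.instance_id DESC;
```

An alternative cross-check is STATS_DATE(object_id, index_id), since rebuilding an index refreshes its statistics timestamp.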

Deriving formulas for input/output

Posted: 06 Jul 2013 08:49 PM PDT

I'm currently enrolled in a DBS class and am having a problem with an assignment. I've searched around and have been unable to understand what I'm meant to be doing with this derivation formula.

A plant file with TREE-GENUS as the key field includes records with the following TREE-GENUS values: Tsuga, Ficus , Arbutus, Quercus, Melaleuca, Tristaniopsis, Cornus, Sequoiadendron, Lithocarpus, Liriodendron, Pittosporum.
Suppose that records with these search field values are inserted into a random (heap) file with a maximum of 3 records per block. Derive a formula for the expected number of disk I/O to scan these records and to search for a particular record

I've been using some software that was given with the assignment, and it also asks what the maximum number of blocks allowed is, which is not given in the brief above. I'm not really sure how to derive a formula for this. I've assumed that because there are 3 records per block, 4 blocks are required, and that a random heap file uses 1 disk I/O per write/read.

If this is a larger topic than is worth explaining a link to a reliable few pages is also helpful.
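For what it's worth, the standard textbook reasoning (which may or may not be exactly what the assignment expects) goes like this: with n records and a blocking factor of bfr records per block, the file occupies

```latex
b \;=\; \left\lceil \frac{n}{bfr} \right\rceil
  \;=\; \left\lceil \frac{11}{3} \right\rceil \;=\; 4 \text{ blocks.}
```

A full scan reads every block, so it costs b = 4 block accesses. Searching a heap file for a particular record is a linear search: on average b/2 = 2 block accesses when the record exists, and all b = 4 when it does not (every block must be checked before concluding the record is absent).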

innobackupex is failing while exporting backup

Posted: 06 Jul 2013 03:49 AM PDT

I am using a Perl script, which works fine on another of my boxes, that takes individual schema backups and finally backs up all schemas together using Percona innobackupex.

I have the following information in the log (posting one day's log details):

--slave-info is used with --no-lock but without --safe-slave-backup. The
binlog position cannot be consistent with the backup data.
Died at /apps/mysql/scripts/mysql_backup.pl line 214.

[Fri Oct 26 04:20:01 2012] [14670]
[Fri Oct 26 04:20:01 2012] [14670] Starting individual backups
[Fri Oct 26 04:20:01 2012] [14670]
[Fri Oct 26 04:20:01 2012] [14670] Backing up proddb
[Fri Oct 26 04:20:01 2012] [14670] Creating /tmp/mysql_backup-14670.fifo
[Fri Oct 26 04:20:01 2012] [14670] Starting /usr/local/bin/zipmt -s -t 4 -c - < /tmp/mysql_backup-14670.fifo > /apps/dbbackup/proddb02.backup-2012-10-26-042001-proddb.dmp.bz2
[Fri Oct 26 04:20:01 2012] [14670] Starting /usr/bin/innobackupex --slave-info --no-lock --stream=tar --databases proddb ./ > /tmp/mysql_backup-14670.fifo
[Fri Oct 26 04:20:02 2012] [14670] FAIL: /usr/bin/innobackupex --slave-info --no-lock --stream=tar --databases proddb ./ > /tmp/mysql_backup-14670.fifo failed 256

Full details are in /apps/dbbackup/backup.log.

Cause:
    /usr/bin/innobackupex --slave-info --no-lock --stream=tar --databases proddb ./ > /tmp/mysql_backup-14670.fifo failed 256

Named pipes still exist in /tmp even after the backup fails:

ls -lrth /tmp
=============
prw-r--r--. 1 root root 0 Oct 25 04:20 mysql_backup-18215.fifo
prw-r--r--. 1 root root 0 Oct 26 04:20 mysql_backup-14670.fifo
prw-r--r--. 1 root root 0 Oct 27 04:20 mysql_backup-11278.fifo
prw-r--r--. 1 root root 0 Oct 28 04:20 mysql_backup-7163.fifo
prw-r--r--. 1 root root 0 Oct 29 04:20 mysql_backup-4191.fifo
prw-r--r--. 1 root root 0 Oct 30 04:20 mysql_backup-595.fifo

Any idea, why the backup is failing?

"Could not find stored procedure' even though the stored procedure have been created in MS SQL Server Management Studio

Posted: 06 Jul 2013 06:59 AM PDT

I have created a table testtable inside the database testbase with the following structure:

product_no (int, not null)
product_name (varchar(30), not null)
price (money, null)
expire_date (date, null)
expire_time (time(7), null)

I did this using Microsoft SQL Server 2008 Management Studio.

I created a stored procedure testtable_pricesmaller as follows

use testbase
go
create procedure testtable_pricesmaller
    @pricelimit money
as
select * from testtable where price = @pricelimit;
go

and I am able to view the stored procedure in the Object Explorer of Microsoft SQL Server Management Studio. (It is listed in the following tree structure of the Object Explorer.)

Databases
+ testbase
    + Tables
        + dbo.testtable
    + Programmability
        + Stored Procedures
            + dbo.testtable_pricesmaller

I find it very strange that I receive the following error:

Could not find the stored procedure 'dbo.testtable_pricesmaller'.  

when I execute the following SQL statement:

execute dbo.testtable_pricesmaller 50  

What could it be missing?
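A common cause of this error (an assumption here, though it fits the symptoms) is that the query window is connected to a different database, such as master, where dbo.testtable_pricesmaller really does not exist. Two sketches:

```sql
-- Either switch the connection's database context first...
USE testbase;
GO
EXECUTE dbo.testtable_pricesmaller 50;

-- ...or fully qualify the three-part name so the context doesn't matter:
EXECUTE testbase.dbo.testtable_pricesmaller 50;
```

The database a query window is using is shown in the dropdown on SSMS's toolbar and in the status bar.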

How to search a MySQL database with encrypted fields

Posted: 06 Jul 2013 10:52 AM PDT

Suppose I need to encrypt certain table-fields of a MySQL database. Additionally, I need to search some of those fields I did encrypt.

How would one search those fields anyway?

Decrypting each record one by one is not an option: suppose I have many thousands of records. It would take too much time and space to decrypt each record and check whether it matches the search.

UPDATE 2012-09-07

Adding further details to the database schema would be OK, since I'm about to implement a new application. Furthermore, I need to extend applications currently running in production. But even for those application, adding further details would be OK.

UPDATE 2012-09-08

Encryption is the kernel of this question.

Access restrictions, as proposed by some answers, already apply, but they do not fulfill the formal requirement to encrypt data.

This formal requirement is not the Payment Card Industry Data Security Standard (PCI DSS).
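One widely used pattern that keeps data encrypted at rest while leaving exact-match search indexable is a blind index: alongside the ciphertext, store a deterministic keyed hash (e.g. HMAC-SHA256) of the normalized plaintext, and search on that column. A sketch with assumed table and column names:

```sql
ALTER TABLE customers
    ADD COLUMN email_enc  VARBINARY(512) NOT NULL,  -- ciphertext (random IV)
    ADD COLUMN email_bidx CHAR(64)       NOT NULL;  -- hex HMAC of LOWER(email)

CREATE INDEX idx_customers_email_bidx ON customers (email_bidx);

-- The application computes HMAC(key, LOWER('mark@example.com')) and runs:
SELECT email_enc
FROM customers
WHERE email_bidx = :computed_hmac;
```

The trade-offs: only exact matches work (no LIKE '%...%' over ciphertext), and equal plaintexts produce equal index values, which leaks equality; whether that is acceptable depends on the formal requirement being satisfied.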

SQL Server 2008: How do I change the default schema of the dbo?

Posted: 06 Jul 2013 07:43 AM PDT

When I attempt to change the default schema of dbo in SQL Server Management Studio, the ALTER fails and states that the dbo account's schema cannot be changed. Are there any tricks that can make it possible to change the dbo default schema from "dbo" to "xxx"?

Thank you!

What are the main differences between InnoDB and MyISAM?

Posted: 06 Jul 2013 09:05 AM PDT

What are the main differences between InnoDB and MyISAM?
