Saturday, July 6, 2013

[how to] Storing different countries' administrative divisions

Storing different countries' administrative divisions

Posted: 06 Jul 2013 08:37 PM PDT

I'm designing a database which should relate users to their administrative divisions for statistical purposes, considering that every country has its own type of divisions. Chile, for example, is divided into regions, which are divided into provinces, which are divided into communes, while Angola is divided into provinces, which are divided into municipalities, which are divided into communes. I thought a solution could be to have a column called first_administrative_division in the user table, referring to a first_administrative_division table, which would have a column referring to a second_administrative_division table, which would have a column referring to a third_administrative_division table, which would have a column referring to the country table. But reading about this, I found that there are countries which have only two administrative divisions, and in other cases four; some cases are even more complex, like Argentina and Brazil, which have different kinds of internal divisions, divided in different ways. So, I'm crying right now.

I don't want to cover the entire world; countries will be added as they are needed. Is it possible to model such a thing?

Thanks beforehand.
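A minimal sketch of one common way to model a hierarchy of variable depth (an adjacency list with a self-referencing parent key); the table and column names here are hypothetical, not from the question:

CREATE TABLE administrative_division (
    id         INT PRIMARY KEY,
    country_id INT NOT NULL,          -- FK to country(id)
    parent_id  INT NULL,              -- NULL for a country's top-level divisions
    level_name VARCHAR(50) NOT NULL,  -- e.g. 'region', 'province', 'commune'
    name       VARCHAR(100) NOT NULL,
    FOREIGN KEY (parent_id) REFERENCES administrative_division(id)
);

A user row then references only the most specific division it belongs to, and the chain of parent_id links recovers the full path regardless of how many levels a given country uses.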

index mysql concatenated columns

Posted: 06 Jul 2013 08:31 PM PDT

I have a table for author names with two fields, first_name & last_name, both varchar(55), both nullable (for crazy business logic reasons), although in reality last_name is unlikely to ever be null.

My where clause for the search screen contains:

WHERE CONCAT(a.first_name, " " , a.last_name) LIKE "%twain%"  

so that "Twain" or "Mark Twain" can be searched on. The table has about 15,000 rows & is expected to gradually grow, but won't ever be more than double that, and not for years.

I understand that there are many other parts of my query that will affect this, but given just that information, how might I best index this?

If it would make a great difference in performance, making last_name not nullable is an option, but not first_name.

TIA!
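For reference, a leading-wildcard LIKE over a CONCAT expression cannot use a B-tree index at all, so a minimal sketch of one common alternative is a FULLTEXT index over both columns (assuming MyISAM, or InnoDB on MySQL 5.6+; the table and index names are illustrative). Note also that CONCAT returns NULL if any argument is NULL, so a null first_name silently hides rows; CONCAT_WS(' ', first_name, last_name) avoids that.

ALTER TABLE author ADD FULLTEXT INDEX ft_author_name (first_name, last_name);

-- search with MATCH ... AGAINST instead of LIKE '%...%':
SELECT * FROM author WHERE MATCH(first_name, last_name) AGAINST('twain');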

10g to 11gr2 upgrade causes SYS.UTL_HTTP malfunction

Posted: 06 Jul 2013 08:19 PM PDT

After a 10g to 11gR2 upgrade, we are getting an ORA-29273 error while using the SYS.UTL_HTTP package; our upgrade checklist requires ACLs to stay disabled. What is a possible way to resolve this problem without enabling ACLs? Thanks,

Kayhan
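For context, 11g blocks UTL_HTTP callouts until a network ACL grants the connect privilege, so ORA-29273 (often wrapping ORA-24247) is expected after this upgrade. If enabling ACLs ever becomes an option, a minimal sketch of the standard grant looks like this (the ACL file name, principal, and host are placeholders):

BEGIN
  DBMS_NETWORK_ACL_ADMIN.CREATE_ACL(
    acl         => 'http_access.xml',
    description => 'Allow HTTP callouts',
    principal   => 'MYUSER',     -- database user making the calls
    is_grant    => TRUE,
    privilege   => 'connect');
  DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL(
    acl  => 'http_access.xml',
    host => '*');                -- restrict to specific hosts in production
  COMMIT;
END;
/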

How do you link to MAMP with PHP

Posted: 06 Jul 2013 05:46 PM PDT

I am very VERY inexperienced with PHP and databases, and I was wondering how to send and retrieve info from the MAMP database I created.

MySQL: Row size too large (> 8126)

Posted: 06 Jul 2013 03:58 PM PDT

Here's the issue. I have a table which has 300 columns in it, each up to 255 bytes (there IS a business justification for this). When I try to create it using VARCHAR(255) columns, I go past the limit for the maximum number of bytes per row. So I create it using 300 TEXT fields. When I then try to insert data, I get the error:

Row size too large (> 8126). Changing some columns to TEXT or BLOB or using ROW_FORMAT=DYNAMIC or ROW_FORMAT=COMPRESSED may help. In current row format, BLOB prefix of 768 bytes is stored inline.

After reading up on this, I tried to change the table to use the Barracuda format by specifying ROW_FORMAT=COMPRESSED. The issue now seems to be that when I try to create the table using that format, I get the same error.

CREATE TABLE T_ObjLarge__c (
  Id VARCHAR(18),
  Name VARCHAR(80),
  ObjLarge_Field_1__c TEXT,
  ObjLarge_Field_2__c TEXT,
  ...
  ObjLarge_Field_300__c TEXT
) ROW_FORMAT=COMPRESSED;

The error I get is:

Row size too large (> 8126). Changing some columns to TEXT or BLOB may help. In current row format, BLOB prefix of 0 bytes is stored inline.

Please help!
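One thing worth verifying first, as a sketch rather than a confirmed diagnosis: unless the server is configured for Barracuda, ROW_FORMAT=COMPRESSED is silently downgraded to the older Antelope format with only a warning, and Antelope keeps a 768-byte inline prefix for every one of the 300 TEXT columns. Assuming MySQL 5.5/5.6 with InnoDB:

SET GLOBAL innodb_file_format = 'Barracuda';
SET GLOBAL innodb_file_per_table = 1;

-- then recreate the table; ROW_FORMAT=DYNAMIC also stores long columns
-- fully off-page (20-byte pointers inline) without compression overhead
SHOW WARNINGS;  -- after CREATE TABLE, to confirm the row format actually applied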

What information do these objects contain? [on hold]

Posted: 06 Jul 2013 03:34 PM PDT

What information do the following objects contain?

MSreplication_options
spt_fallback_db
spt_fallback_dev
spt_fallback_usg
spt_values
spt_monitor
spt_values

Is there a name for this type of query, and what is an efficient example?

Posted: 06 Jul 2013 01:44 PM PDT

The purpose is to find a parent, given its children. For example, say you have a marketing package (aka a "combo") and want to match it based on the products in it. Example table/data:

create table marketing_package_product (
  package_id int not null references marketing_package(id),
  product_id int not null references product(id),
  primary key (package_id, product_id)
);

insert into marketing_package_product values
(1,1),
(1,2),
(1,3),
(2,1),
(2,5);

Given products 1,2,3, I want to get marketing_package 1. But given only products 1 and 2, I do not want marketing_package 1.

Is there a name for this type of query, and what is the most efficient way to go about it?
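For what it's worth, this pattern is usually called relational division (here, exact division, since extra children must disqualify the parent). A minimal sketch of one common formulation, assuming the input set is 1, 2, 3:

SELECT package_id
FROM marketing_package_product
GROUP BY package_id
HAVING COUNT(*) = 3  -- the package has exactly three products...
   AND SUM(CASE WHEN product_id IN (1, 2, 3) THEN 1 ELSE 0 END) = 3;  -- ...and all are in the input set

With inputs 1 and 2 only, the HAVING conditions become COUNT(*) = 2 and a sum of 2, which package 1 (three products) fails, as desired.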

Two primary keys to one foreign key

Posted: 06 Jul 2013 12:00 PM PDT

racing_couple
  ID (PRIMARY)

breeding_couple
  ID (PRIMARY)

egg
  IDpar (FOREIGN KEY)

I have 2 IDs from 2 different tables (racing_couple and breeding_couple), and they are primary keys. In the third table (egg) I have IDpar, which is a foreign key referencing ID in racing_couple and breeding_couple. The tables represent racing and breeding couples of pigeons; the egg table represents an egg of a racing or breeding couple. And I'm using IDpar to identify which egg belongs to which couple.

Note: There are other fields in both tables, but they are mostly varchar and not so relevant to this problem.

If I have something like this in both tables, how do I know which IDpar holds a value from racing_couple and which holds a value from breeding_couple? I think I made a mistake by designing my database like this, but is there any way to make it work?

ID (racing_couple)
1
2

ID (breeding_couple)
1
2

IDpar (egg)
1 (ID racing_couple)
1 (ID breeding_couple)
2 (ID racing_couple)
2 (ID breeding_couple)
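One common way out of this ambiguity is to replace the single overloaded IDpar with two nullable foreign keys, one per parent table. A minimal sketch (the new column names are hypothetical; note MySQL before 8.0.16 parses but does not enforce CHECK constraints, so the one-of-two rule may need a trigger or an application-side check):

CREATE TABLE egg (
    id                 INT PRIMARY KEY,
    racing_couple_id   INT NULL REFERENCES racing_couple(ID),
    breeding_couple_id INT NULL REFERENCES breeding_couple(ID)
    -- exactly one of the two columns should be non-NULL for each egg
);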

What if I want to keep using MyISAM

Posted: 06 Jul 2013 10:21 AM PDT

In my current use of MyISAM, the databases are extremely small and kept for a few weeks, after which they are deleted and completely rebuilt with a clean slate. The script uses MyISAM tables, and other than the lack of support in the latest MySQL versions, I have never had a problem with the way they work.

Changing to InnoDB is something that would be time consuming and provide zero benefit. However, I am concerned that eventually there may come a time when either the older MySQL version is not available or some other process forces the move.

My question is: does anyone know of a specific reason to change from MyISAM to InnoDB just to keep up with MySQL?

If I can keep using what works for my own specific use, I see no reason to spend time on testing and possible errors when there are more pressing issues that need attention and have problems now. MyISAM does not have a problem at this time.

Log Shipping for SQL Server 2005

Posted: 06 Jul 2013 12:25 PM PDT

When I establish log shipping on my failover database, the LS_Restore process runs for about an hour (it runs every 15 minutes) and then fails. I have found that it is looking for something that is in the backup transaction log file, which is compiled once an hour and copied to a different directory than the transaction logs. Since it is a backup file, it has a different naming string. Applying the backup transaction log through the restore GUI gets the process running for an hour, and then it fails again.

Any suggestions?

user permission cache in session or check from database?

Posted: 06 Jul 2013 12:23 PM PDT

On each page of my websites, 2 or 3 permissions have to be checked, and I use a role-permission model. Which is better: checking each user's permissions directly against the database every time, or calculating them once and storing them in the user's session (caching permissions in the session)?

which one is more common?

How to properly index an hstore tags column to search faster for keys

Posted: 06 Jul 2013 10:47 AM PDT

I imported a large area of OpenStreetMap's planet.osm file into a PostgreSQL database. The database contains a table called nodes. Each node has a geometry column called geom and an hstore column called tags. I need to extract nodes along a line that have certain keys in the tags column. To do that I use the following query:

SELECT id, tags
FROM nodes
WHERE ST_DWithin(nodes.geom, ST_MakeLine('{$geom1}', '{$geom2}'), 0.001)
AND tags ? '{$type}';

$geom1 and $geom2 are geometries for start and end points of my line.
The $type variable contains the key I want to search for. Now, it can have one of the following values: 'historic' or 'tourist'.

The query given above works, but it is too slow. I guess searching for a key in the tags column takes too much time. I read about GIN and GiST indexes, and I generated a GIN index using the following statement:

CREATE INDEX nodes_tags_idx ON nodes USING GIN(tags);  

After creating the index, I searched again for nodes using the same first query, but there is no change in performance.

How can I properly use GIN and GiST indexes on the tags column so I can search faster for nodes that have a certain key in tags?

Thank you
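Since only two keys are ever searched, one technique worth trying (a sketch, assuming PostGIS) is a partial spatial index per key, so the planner can satisfy both the ST_DWithin filter and the key test from one small index:

CREATE INDEX nodes_geom_historic_idx ON nodes USING GIST (geom) WHERE tags ? 'historic';
CREATE INDEX nodes_geom_tourist_idx  ON nodes USING GIST (geom) WHERE tags ? 'tourist';

-- the query's WHERE clause must contain the same predicate (tags ? 'historic')
-- verbatim for the planner to consider the partial index; verify with EXPLAIN ANALYZE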

If multiple records with the same value exist, show the sum of another column; otherwise show the simple value [on hold]

Posted: 06 Jul 2013 05:11 PM PDT

I am having a problem. I have a table named Invoice with different invoices. Each invoice has a unique invoice number and some amount. Invoices are of two types: one is the Invoice type, and the other is the Settled type. When an invoice is created, the entry in the table is of the Invoice type, with a positive amount. If the user paid that invoice, another entry with the same invoice number is entered in the table, but with a negative amount, because the user has paid that amount.

Now I want to show the records so that if there exist two records with the same invoice number, the record is shown with the sum of the Amount (that is, combining the Invoice-type record and the Settled-type record). And those records which have a unique invoice number (with no settlement record) should be displayed as they are (without a sum). Help? I guess the query will be something of this kind, but that's not exactly what I want:

Select InvoiceNumber,
       Amount = case when COUNT(InvoiceNumber) > 1
                     then SUM(Amount)
                     else Amount
                end
from Invoice
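For what it's worth, the conditional may be unnecessary: summing per invoice number handles both cases, because an invoice with no settlement row simply sums to its own amount. A minimal sketch:

SELECT InvoiceNumber, SUM(Amount) AS Amount
FROM Invoice
GROUP BY InvoiceNumber;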

Upgrade path - Oracle Standard to Oracle Enterprise

Posted: 06 Jul 2013 07:03 AM PDT

Could I start on Oracle Standard edition and easily upgrade to Oracle Enterprise?

Or do I need to rebuild the entire server? How does the upgrade path work?

How to provide access to the database if the user does not have the privileges

Posted: 06 Jul 2013 02:48 PM PDT

MS SQL Server 2008 R2 was installed by user "A", and he has a database named "DB1". He left the company. Now user "B" is trying to access the database. He is able to see the database, but he does not have the privilege to read the data.

We don't know the password for SQL authentication. Both users are in the admin group on the machine where SQL Server is installed.

Is there any way by which the user can read the data?
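Since user B is a local administrator, the documented recovery path is to start the instance in single-user mode (local administrators can then connect as sysadmin) and grant B a real server role. A sketch, assuming a default instance and that B's Windows login is DOMAIN\B (a placeholder):

net stop MSSQLSERVER
net start MSSQLSERVER /m"SQLCMD"
sqlcmd -E -S .

-- inside the sqlcmd session:
CREATE LOGIN [DOMAIN\B] FROM WINDOWS;
EXEC sp_addsrvrolemember 'DOMAIN\B', 'sysadmin';
GO

-- then restart the service normally, without the /m switch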

MySQL Continue Handler Problems inside a Function

Posted: 06 Jul 2013 10:49 AM PDT

I am in the process of writing a simple MySQL function that looks up a value in a table and returns it. In case no value is found, it returns NULL. However, even with a continue handler defined, I still end up with the warning "No data - zero rows fetched, selected, or processed". My code is below; what am I doing wrong? I really want to get rid of this warning :)

DROP FUNCTION IF EXISTS `getCompanySetting`;
DELIMITER |

CREATE FUNCTION `getCompanySetting`(setting_company_id INT, setting_name VARCHAR(255))
    RETURNS TEXT
BEGIN
    DECLARE setting_value TEXT DEFAULT NULL;
    DECLARE CONTINUE HANDLER FOR SQLWARNING BEGIN END;
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET setting_value = NULL;

    SELECT
        value
    FROM company_settings
    WHERE
        company_id = `setting_company_id`
        AND
        name       = `setting_name`
    INTO setting_value;

    RETURN setting_value;
END|

DELIMITER ;

mysql> SELECT getCompanySetting(24, 'observers_active');
+-------------------------------------------+
| getCompanySetting(24, 'observers_active') |
+-------------------------------------------+
| NULL                                      |
+-------------------------------------------+
1 row in set, 1 warning (0.00 sec)

mysql> show warnings;
+---------+------+-----------------------------------------------------+
| Level   | Code | Message                                             |
+---------+------+-----------------------------------------------------+
| Warning | 1329 | No data - zero rows fetched, selected, or processed |
+---------+------+-----------------------------------------------------+
1 row in set (0.00 sec)

Update: MySQL Version 5.5.25
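One workaround that sidesteps the handler entirely is assigning from a scalar subquery: a subquery that matches no rows evaluates to NULL without raising warning 1329. A sketch of the changed function body:

SET setting_value = (SELECT value
                     FROM company_settings
                     WHERE company_id = setting_company_id
                       AND name = setting_name
                     LIMIT 1);
RETURN setting_value;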

Data sharing between client and server two ways

Posted: 06 Jul 2013 07:56 AM PDT

One of my Silverlight applications is running. This application uses one central database (SQL Server 2008) and three client databases (SQL Server 2008) at other locations.

Our requirement is that, with minor or no changes to the Silverlight application, when we make changes in the central database (insert, update and delete in multiple tables), these changes automatically occur in all client databases. And when any change is made in any client database, all clients as well as the central database should be updated automatically.

For this I have read a bit about Sync Framework and SQL Server 2008 Change Tracking, but I have no idea how to do this. Please help me. How can we achieve this?

How can I query data from a linked server, and pass it parameters to filter by?

Posted: 06 Jul 2013 06:49 AM PDT

I have a really big query that needs to be run against multiple databases, with the results appended to a temp table and returned.

The basic syntax looks something like this:

INSERT INTO #tmpTable (Id, ...)
SELECT T1.Id, ...
FROM Server.Database.dbo.Table1 as T1
INNER JOIN #tmpIds as T ON T1.Id = T.Id
INNER JOIN Server.Database.dbo.Table2 as T2 ON T1.Id = T2.Id
INNER JOIN Server.Database.dbo.Table3 as T3 ON T2.Id = T3.Id
LEFT OUTER JOIN Server.Database.dbo.Table4 as T4 ON T3.Id = T4.Id
LEFT OUTER JOIN Server.Database.dbo.Table5 as T5 ON T4.Id = T5.Id
LEFT OUTER JOIN Server.Database.dbo.Table6 as T6 ON T5.Id = T6.Id

The query runs quickly if run locally on the individual servers; however, it takes a long time to run from a linked server using the four-part names as above.

The problem appears to be that it queries the linked server for the unfiltered result set first, then joins it to the #tmpIds table on the local server afterwards, which makes the query take a very long time to run.

If I hardcode the Ids to filter the result set on the linked server, such as

SELECT T1.Id, ...
FROM Server.Database.dbo.Table1 as T1
-- INNER JOIN #tmpIds as T ON T1.Id = T.Id
INNER JOIN Server.Database.dbo.Table2 as T2 ON T1.Id = T2.Id
INNER JOIN Server.Database.dbo.Table3 as T3 ON T2.Id = T3.Id
LEFT OUTER JOIN Server.Database.dbo.Table4 as T4 ON T3.Id = T4.Id
LEFT OUTER JOIN Server.Database.dbo.Table5 as T5 ON T4.Id = T5.Id
LEFT OUTER JOIN Server.Database.dbo.Table6 as T6 ON T5.Id = T6.Id
WHERE T1.Id IN (1, 2, 3)

it runs quickly in just a few seconds.

Is there a way to run this query so it filters the result set from the linked server by the #tmpIds table first, before returning the result set to the local server?

Some things to note

  • The query is very large and complex, and Dynamic SQL is not a viable option for it due to the maintenance nightmare it causes.

    I would be open to suggestions about how to use Dynamic SQL for something else though, such as running a stored procedure or UDF if there's a way to do that over a linked server (tried a few different methods such as sp_executeSQL, OPENROWSET, and OPENQUERY, but those all failed).

  • Because it uses the 4-part naming convention, I cannot use a UDF on the remote server
  • Distributed Transactions are disabled, so the following does not work

    INSERT INTO #table
    EXEC Server.Database.dbo.StoredProcedure @ids
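One option that sometimes helps here, offered as a sketch rather than a guaranteed fix: the REMOTE join hint tells the optimizer to perform a join on the side of the right (remote) table, shipping the local #tmpIds rows over instead of pulling the whole remote table back. It applies only to INNER joins, so the outer joins would be left alone:

SELECT T1.Id, ...
FROM #tmpIds AS T
INNER REMOTE JOIN Server.Database.dbo.Table1 AS T1 ON T1.Id = T.Id
-- remaining joins as in the original query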

Handling expiration in a course registration or ticketing system

Posted: 06 Jul 2013 05:34 PM PDT

I want to restructure the database for a web application I inherited for handling summer camp registrations, but my question is relevant to any event registration or ticketing system.

My question is about how to best handle the need for registrations to expire if unpaid for X amount of time.

Currently the database design is something like this (some columns omitted for brevity):

PK = primary key
FK = foreign key

sale_header (
    PK id
    FK account_id
    payment_status
    total
    payment_method
    payment_time
)

sale_item (
    PK (sale_header_id, camper_id)
    price
)

class_registration (
    PK (camper_id, class_id)
    time_added
    registered (boolean)
)

When payment is completed, the registered flag is set to true. There is a cleanup script that runs regularly to clear out unpaid registrations by checking for registrations where class_registration.registered = 0 AND sale_header.payment_status = 'not paid', and for which the configured time allowed until expiration has elapsed.

There are currently some bugs related to this, and anyway it seems to me a somewhat strange design: given that the registered flag is only set to true when payment is completed, the payment_status column seems unnecessary, and it would be much cleaner to have the sale_header table contain only transactions that were actually completed. Also, even if I kept this approach, it seems it would be better if there were a FK linking sale_item and class_registration.

I've been researching potential database designs for this scenario and one thought I had was to add price info to the class_registration table and get rid of the sale_item table, since the system is only for selling class registrations, nothing else:

sale (
    PK id
    FK account_id
    total
    payment_method
    payment_time
)

class_registration (
    PK (camper_id, class_id)
    FK sale_id
    price
    time_added
)

In this design, the FK sale_id would initially be null while the parent was still choosing classes for their camper(s), so finding expired, unpaid registrations would be done by querying for registrations where sale_id is NULL, and which had exceeded the expiration time.

Another option I thought of, inspired by this example (scroll down to "Data Modeling in the Sample Application"), would be to have a separate class_availability table, which might be good for querying performance when determining available capacities:

class_registration (
    FK camper_id
    FK class_id
    FK sale_id
    price
    time_added
)

class_availability (
    FK class_id
    FK camper_id
    FK registration_id
)

Finally, I considered what it would look like if I kept the sale_item table, although I'm not sure what would be the advantage of doing so:

class_registration (
    PK (camper_id, class_id)
    FK sale_item_id
    time_added
)

sale_item (
    PK sale_item_id
    FK sale_header_id
    price
)

So my specific concerns are:

  • What are the pros and cons of these approaches?
  • Should expired registrations stay in the database and just be excluded when querying, or actually be deleted?
  • A requirement to keep in mind is that it should be possible to change the configured expiration time in a live system (in case of bugs, which has happened before with this system), so having a column like expiration_time in the database isn't an option; it needs to be calculated based on time_added.
  • (Bonus :) Feel free to skip this one and focus on the above questions.) In my new proposed designs (or some other design), how could I handle the possible need for admins to be able to extend the expiration time for a particular camper in case there was some issue with payment that needed to be worked out, and their registration should be held in the meantime?
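To make the second proposed design concrete, a minimal sketch of the expiration check it implies (MySQL syntax and a 30-minute window are assumed for illustration; in practice the window would be read from the live configuration, keeping the requirement that expiration is calculated from time_added):

SELECT camper_id, class_id
FROM class_registration
WHERE sale_id IS NULL
  AND time_added < NOW() - INTERVAL 30 MINUTE;

Whether such rows are then deleted or merely excluded from capacity counts is the retention question above; an admin hold could be handled by exempting specific (camper_id, class_id) pairs from this query until the payment issue is resolved.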

MySQL performance tuning + queries stuck on "Copying to tmp table"

Posted: 06 Jul 2013 09:49 AM PDT

The latter part of the question's title (queries stuck on "Copying to tmp table") has been addressed many times, and I have spent a fair amount of time researching this. I would appreciate it if you guys could help me come to a conclusion - especially with my particular server set up in mind.

Quick overview of server:
- Dedicated server with 2 cores and 64 GB RAM
- Only runs MySQL

The setup is in no way tweaked, so the current config is to some extent way off. Hopefully your scolding will provide knowledge.

The web application running on the server is a Magento site with 25,000+ products. The query giving the most headache is the one generating sitemaps.

Currently, the following query has been stuck for a little over an hour on "Copying to tmp table":

Note: I do not really need input on how to increase performance by optimizing this query; I would rather see how much time I can shave off the queries that already exist.

SELECT DISTINCT `e`.`entity_id`, `ur`.`request_path` AS `url`, `stk`.`is_in_stock`
FROM `catalog_product_entity` AS `e`
INNER JOIN `catalog_product_website` AS `w` ON e.entity_id=w.product_id
LEFT JOIN `core_url_rewrite` AS `ur` ON e.entity_id=ur.product_id AND ur.category_id IS NULL AND ur.store_id='1' AND ur.is_system=1
INNER JOIN `catalog_category_product_index` AS `cat_index` ON e.entity_id=cat_index.product_id AND cat_index.store_id='1' AND cat_index.category_id in ('2', '3', '68', '86', '145', '163', '182', '196', '198', '214', '249', '252', '285', '286', '288', '289', '290', '292', '549') AND cat_index.position!=0
INNER JOIN `cataloginventory_stock_item` AS `stk` ON e.entity_id=stk.product_id AND stk.is_in_stock=1
INNER JOIN `catalog_product_entity_int` AS `t1_visibility` ON e.entity_id=t1_visibility.entity_id AND t1_visibility.store_id=0
LEFT JOIN `catalog_product_entity_int` AS `t2_visibility` ON t1_visibility.entity_id = t2_visibility.entity_id AND t1_visibility.attribute_id = t2_visibility.attribute_id AND t2_visibility.store_id='1'
INNER JOIN `catalog_product_entity_int` AS `t1_status` ON e.entity_id=t1_status.entity_id AND t1_status.store_id=0
LEFT JOIN `catalog_product_entity_int` AS `t2_status` ON t1_status.entity_id = t2_status.entity_id AND t1_status.attribute_id = t2_status.attribute_id AND t2_status.store_id='1'
WHERE (w.website_id='1')
  AND (t1_visibility.attribute_id='102')
  AND ((IF(t2_visibility.value_id > 0, t2_visibility.value, t1_visibility.value)) IN (3, 2, 4))
  AND (t1_status.attribute_id='96')
  AND ((IF(t2_status.value_id > 0, t2_status.value, t1_status.value)) IN (1))

Relevant config:

Server buffers:

max_connections = 1500
key_buffer_size = 22G
innodb_buffer_pool_size = 16G
innodb_additional_mem_pool_size = 2G
innodb_log_buffer_size = 400M
query_cache_size = 64M

Per thread buffers:

read_buffer_size = 2M
read_rnd_buffer_size = 16M
sort_buffer_size = 128M
thread_stack = 192K
join_buffer_size = 8M

Question: Do any of these variables seem way off to any of you?

The above memory limits will allow MySQL to use 130% of the RAM physically installed on my system. Obviously, something must change.

According to the MySQLTuner Perl script, I am currently in danger of using:
Total buffers: 40.7G global + 28.2M per thread (1500 max threads)
Maximum possible memory usage: 82.0G (130% of installed RAM)

Questions: Which of the above variables would give the most performance increase? Or: is it more useful to increase the global server buffers or the per-thread buffers?

The two variables that seem to affect "Copying to tmp table" the most are:
- tmp_table_size
- max_heap_table_size

Mine are both set to 0.25G

Question: Any recommendations specifically for those two variables?
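For context on those two variables: an in-memory implicit temporary table is bounded by the smaller of the two, so they are usually raised together, and any intermediate result containing TEXT/BLOB columns goes to an on-disk temporary table no matter how large they are set. A sketch (256M is an illustrative value, not a recommendation):

SET GLOBAL tmp_table_size      = 268435456;  -- 256M
SET GLOBAL max_heap_table_size = 268435456;  -- keep equal to tmp_table_size

-- check how often implicit temporary tables spill to disk:
SHOW GLOBAL STATUS LIKE 'Created_tmp%tables';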

There are two suggested fixes that seem to get mentioned more than others:
- Kill process. Do flush table.
- Kill process. Do repair / optimize tables.

Question: Which of the two above suggested solutions do you think is the most viable?

How to find when the last reindexing happened in SQL Server?

Posted: 06 Jul 2013 04:49 AM PDT

I have set up a job for re-indexing; I want to know when the last re-indexing happened, as a datetime.
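SQL Server does not record rebuild times directly, but since an index rebuild updates that index's statistics, STATS_DATE is a common approximation (alongside the Agent job history in msdb). A sketch, with a hypothetical table name:

SELECT OBJECT_NAME(i.object_id) AS table_name,
       i.name AS index_name,
       STATS_DATE(i.object_id, i.index_id) AS last_stats_update  -- ~ last rebuild or stats update
FROM sys.indexes AS i
WHERE i.object_id = OBJECT_ID('dbo.YourTable');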

Deriving formulas for input/output

Posted: 06 Jul 2013 08:49 PM PDT

I'm currently enrolled in a DBS class and am having a problem with an assignment. I've searched around and have been unable to understand what I'm meant to be doing with this derivation formula.

A plant file with TREE-GENUS as the key field includes records with the following TREE-GENUS values: Tsuga, Ficus , Arbutus, Quercus, Melaleuca, Tristaniopsis, Cornus, Sequoiadendron, Lithocarpus, Liriodendron, Pittosporum.
Suppose that records with these search field values are inserted into a random (heap) file with a maximum of 3 records per block. Derive a formula for the expected number of disk I/O to scan these records and to search for a particular record

I've been using some software that was given with the assignment, and it also asks what the maximum number of blocks allowed is, which is not given by the brief above. I'm not really sure how to derive a formula for this. I've assumed that because there are 3 records per block, 4 blocks are required, and that a random heap file uses 1 disk I/O per read/write.

If this is a larger topic than is worth explaining a link to a reliable few pages is also helpful.
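As a sanity check on those assumptions (a sketch, not the official solution): with r = 11 records and a blocking factor of bfr = 3 records per block, the file occupies b = ceil(r / bfr) = ceil(11 / 3) = 4 blocks, so scanning all records costs b = 4 block I/Os. Searching a heap for a particular record has no ordering to exploit: a linear scan reads about b / 2 = 2 blocks on average when the record exists, and all b blocks to conclude that it does not.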

innobackupex is failing while exporting backup

Posted: 06 Jul 2013 03:49 AM PDT

I am using a Perl script, which works fine on another of my boxes, that takes individual schema backups and finally backs up all schemas together using Percona innobackupex.

I have the following information in the log (posting one day's log details):

--slave-info is used with --no-lock but without --safe-slave-backup. The
binlog position cannot be consistent with the backup data.
Died at /apps/mysql/scripts/mysql_backup.pl line 214.

[Fri Oct 26 04:20:01 2012] [14670]
[Fri Oct 26 04:20:01 2012] [14670] Starting individual backups
[Fri Oct 26 04:20:01 2012] [14670]
[Fri Oct 26 04:20:01 2012] [14670] Backing up proddb
[Fri Oct 26 04:20:01 2012] [14670] Creating /tmp/mysql_backup-14670.fifo
[Fri Oct 26 04:20:01 2012] [14670] Starting /usr/local/bin/zipmt -s -t 4 -c - < /tmp/mysql_backup-14670.fifo > /apps/dbbackup/proddb02.backup-2012-10-26-042001-proddb.dmp.bz2
[Fri Oct 26 04:20:01 2012] [14670] Starting /usr/bin/innobackupex --slave-info --no-lock --stream=tar --databases proddb ./ > /tmp/mysql_backup-14670.fifo
[Fri Oct 26 04:20:02 2012] [14670] FAIL: /usr/bin/innobackupex --slave-info --no-lock --stream=tar --databases proddb ./ > /tmp/mysql_backup-14670.fifo failed 256

Full details are in /apps/dbbackup/backup.log.

Cause:
/usr/bin/innobackupex --slave-info --no-lock --stream=tar --databases proddb ./ > /tmp/mysql_backup-14670.fifo failed 256

The named pipes still exist in /tmp even after the backup fails:

ls -lrth /tmp
=============
prw-r--r--. 1 root root 0 Oct 25 04:20 mysql_backup-18215.fifo
prw-r--r--. 1 root root 0 Oct 26 04:20 mysql_backup-14670.fifo
prw-r--r--. 1 root root 0 Oct 27 04:20 mysql_backup-11278.fifo
prw-r--r--. 1 root root 0 Oct 28 04:20 mysql_backup-7163.fifo
prw-r--r--. 1 root root 0 Oct 29 04:20 mysql_backup-4191.fifo
prw-r--r--. 1 root root 0 Oct 30 04:20 mysql_backup-595.fifo

Any idea, why the backup is failing?
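The first line of the log looks like the actual cause: recent innobackupex versions abort when --slave-info is combined with --no-lock unless --safe-slave-backup is also given, because the binlog position cannot otherwise be made consistent. A sketch of the adjusted invocation (the same arguments as in the log, plus that one flag):

/usr/bin/innobackupex --slave-info --safe-slave-backup --no-lock --stream=tar --databases proddb ./ > /tmp/mysql_backup-14670.fifo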

"Could not find stored procedure' even though the stored procedure have been created in MS SQL Server Management Studio

Posted: 06 Jul 2013 06:59 AM PDT

I have created a table testtable inside the database testbase, with the following structure:

product_no (int, not null)  product_name (varchar(30), not null)  price (money, null)  expire_date (date, null)  expire_time (time(7), null)  

for which I used Microsoft SQL Server 2008 Management Studio.

I created a stored procedure testtable_pricesmaller as follows:

use testbase
go
create procedure testtable_pricesmaller
    @pricelimit money
as
select * from testtable where price = @pricelimit;
go

and am able to view the stored procedure in the Object Explorer of Microsoft SQL Server Management Studio. (It is listed in the following tree structure of the Object Explorer.)

Databases
  + testbase
      + Tables
          + dbo.testtable
      + Programmability
          + Stored Procedures
              + dbo.testtable_pricesmaller

I find it very strange that I receive the following error:

Could not find the stored procedure 'dbo.testtable_pricesmaller'.  

when I execute the following SQL statement:

execute dbo.testtable_pricesmaller 50  

What could be missing?
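One thing worth ruling out (an assumption, since the error message alone cannot confirm it): the query window may be connected to a different database, such as master, where the procedure does not exist. Setting the context explicitly would tell:

USE testbase;
GO
EXECUTE dbo.testtable_pricesmaller 50;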

How to search a MySQL database with encrypted fields

Posted: 06 Jul 2013 10:52 AM PDT

Suppose I need to encrypt certain table fields of a MySQL database. Additionally, I need to search some of the fields I encrypted.

How would one search those fields anyway?

Decrypting records one by one is not an option: suppose I have many thousands of records. It would take too much time and space to decrypt each record and check whether it matches the search.

UPDATE 2012-09-07

Adding further details to the database schema would be OK, since I'm about to implement a new application. Furthermore, I need to extend applications currently running in production, but even for those applications, adding further details would be OK.

UPDATE 2012-09-08

Encryption is the kernel of this question.

Access restrictions, as proposed by some answers, already apply - but do not fit the formal requirement to encrypt data.

This formal requirement is not Payment Card Industry Data Security Standard [PCI].
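One technique often used for exact-match search over encrypted fields is a "blind index": store a keyed hash of the plaintext next to the ciphertext and search on the hash. A sketch, assuming MySQL 5.5+ (column and key names are illustrative; this supports only exact matches, not LIKE, and in practice the hash would be computed in the application so the key and plaintext never reach the server):

ALTER TABLE customers ADD COLUMN email_hash CHAR(64);

-- at write time, alongside storing the encrypted value:
UPDATE customers
SET email_hash = SHA2(CONCAT('server-side-secret-key', LOWER('alice@example.com')), 256)
WHERE id = 42;

-- at search time, hash the needle the same way:
SELECT * FROM customers
WHERE email_hash = SHA2(CONCAT('server-side-secret-key', LOWER('alice@example.com')), 256);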

SQL Server 2008: How do I change the default schema of the dbo?

Posted: 06 Jul 2013 07:43 AM PDT

When I attempt to change the default schema of the dbo user in SQL Server Management Studio, the ALTER fails and states that the dbo account's schema cannot be changed. Are there any tricks that can make it possible to change the dbo default schema from "dbo" to "xxx"?

Thank you!

What are the main differences between InnoDB and MyISAM?

Posted: 06 Jul 2013 09:05 AM PDT

What are the main differences between InnoDB and MyISAM?

[MS SQL Server] issue with tempdb

issue with tempdb

Posted: 05 Jul 2013 05:36 AM PDT

I needed to move tempdb, which was in e:\mssql\data, to c:\. This was temporary, because I was replacing the E drive. The SQL application is installed on the C drive in the default location, but tempdb was on the E drive. I ran the code below and restarted the SQL services, and now SQL Management Studio won't work for me; I can't connect to the server. Is reinstalling SQL Server 2008 the only way to fix this?

USE master;
GO
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'c:\tempdb.mdf');
GO
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'c:\templog.ldf');
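Reinstalling should not be necessary. One likely cause (an assumption): the service account cannot create files in the root of C:\, so tempdb fails to come up and the instance refuses connections. The commonly documented recovery is to start the instance with minimal configuration and re-point tempdb at a folder the service account can write to. A sketch, assuming a default instance:

net start MSSQLSERVER /f /m"SQLCMD"
sqlcmd -E -S .

-- inside the sqlcmd session (the target folder must already exist and be
-- writable by the SQL Server service account):
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'c:\sqldata\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'c:\sqldata\templog.ldf');
GO

-- then restart the service normally, without /f and /m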

[SQL 2012] The transaction log for database 'tempdb' is full due to 'NOTHING' (during cluster install)

The transaction log for database 'tempdb' is full due to 'NOTHING' (during cluster install)

Posted: 13 Jun 2013 06:10 AM PDT

I'm wondering if anyone has seen this rather peculiar error while installing SQL 2012 on a cluster before, specifically if putting tempdb on the local disks during setup.

There's no serious urgency on my end; I'm literally messing around with the cluster installation on a test cluster exactly for this reason - to learn about any new good-to-know gotchas before I do the real thing. Already found a couple minor things (like the cluster machine account needs "Create Computer Object" on the pre-staged computer account's OU ... not just full control on the account itself, which used to be good enough in 2008).

This one has me a tad stumped though; during the "Advanced cluster completion" phase (both nodes were already prepared successfully, with no issues):

The transaction log for database 'tempdb' is full due to 'NOTHING'

Never mind it being at least a candidate for a "best errors" list... Again this is during installation, there is no actively running instance involved. I'm installing a clustered SQL, default instance, with tempdb on the local disks. It's not the usual "log full" for a good reason; the instance isn't even fully installed yet. It's not disk space; both nodes have 20+ GB free on the path I'm putting it in. It's not a mistyped path or permissions; SQL setup clearly found it and was able to work with it, at least at some point, because the MSSQLSERVER service account (NT SERVICE\MSSQLSERVER) was added to the folder's ACL - with Full Control no less - and I didn't do it. There are no tempdb files (or anything else) in it though. There's nothing special about the folder at all; it's just an ordinary subfolder I created, C:\TEMPDB. It's not even a mounted volume (on the real cluster the intent is to put tempdb on lettered, local SSD drives).

I'll keep poking at it, but meanwhile if anyone has already encountered this and knows what particular weirdness I've run into here... please let me know. I know in principle it works - I've already tested installing on a cluster a couple times before, including with tempdb on the C:\ drives of the nodes. But something strange happened here, and I'd really like to know what, in case I run into it again.

User Defined Server Roles in SQL2012

Posted: 05 Jul 2013 05:51 AM PDT

When using UD Server Roles, do you still have to map logins to databases at the "Security > Login" level? I was hoping to just add logins to the server role without mapping db users to each login.

[T-SQL] Barcode font 128b -SQL function

Barcode font 128b -SQL function

Posted: 05 Jul 2013 11:06 PM PDT

Hi,

I am looking for an SQL function that will generate the correct characters to make the barcode font code128 work when used in Microsoft Word. The data is passed to Word with the data tag using the barcode font.

This is a web link to the rules and where the font can be downloaded:
http://grandzebu.net/informatique/codbar-en/code128.htm

A bit more info on the font here:
http://en.wikipedia.org/wiki/Code_128

And I tried:
http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=70901
http://www.sqlservercentral.com/Forums/Topic1065554-149-1.aspx
http://www.sqlservercentral.com/Forums/Topic1062489-1291-1.aspx

But none of these solutions appear to work when tested with a barcode reader. The barcode reader does work, because we tested it against web sites generating the barcodes, and it is code128b we are working with.

Anyone any ideas/SQL functions that do work?

Many thanks
Eliza
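For reference, the checksum rule itself is fixed by the Code 128 specification: in Code 128B each character's symbol value is its ASCII code minus 32, and the check value is (104 + sum over positions i of i x value_i) mod 103. How values map back to font glyphs is font-specific; the sketch below assumes a mapping commonly used with the Grandzebu font (values 0-94 as CHAR(value + 32), values 95 and up as CHAR(value + 100), start code B = CHAR(204), stop = CHAR(206)), which must be verified against the font's own documentation:

CREATE FUNCTION dbo.fn_Code128B (@data VARCHAR(100))
RETURNS VARCHAR(110)
AS
BEGIN
    -- Code 128B check value: (104 + SUM(position * (ASCII - 32))) % 103
    DECLARE @i INT = 1, @sum INT = 104, @check INT;
    WHILE @i <= LEN(@data)  -- note: LEN ignores trailing spaces; use DATALENGTH if they matter
    BEGIN
        SET @sum = @sum + @i * (ASCII(SUBSTRING(@data, @i, 1)) - 32);
        SET @i += 1;
    END;
    SET @check = @sum % 103;
    -- the glyph mapping below is the assumed, font-specific part
    RETURN CHAR(204) + @data
         + CASE WHEN @check < 95 THEN CHAR(@check + 32) ELSE CHAR(@check + 100) END
         + CHAR(206);
END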

blocking remove

Posted: 05 Jul 2013 06:53 AM PDT

Hi,

I am adding a new column with a default value. This table has millions of rows, so while my query is executing, it blocks other users. How do I solve this?

Example:

alter table emp
add senti_v int default 0

Thanks
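One common way to shrink the blocking window, sketched under the assumption this is SQL Server 2005/2008: add the column as NULL first (a metadata-only change needing only a brief schema lock), backfill in small batches, then attach the default for future inserts:

-- 1. metadata-only: very brief Sch-M lock
ALTER TABLE emp ADD senti_v INT NULL;

-- 2. backfill in batches so each transaction stays short
WHILE EXISTS (SELECT 1 FROM emp WHERE senti_v IS NULL)
    UPDATE TOP (5000) emp
    SET senti_v = 0
    WHERE senti_v IS NULL;

-- 3. default for rows inserted from now on
ALTER TABLE emp ADD CONSTRAINT DF_emp_senti_v DEFAULT 0 FOR senti_v;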

Sequential numeric

Posted: 05 Jul 2013 06:52 AM PDT

I have a need to update numeric values in a table column so that they are unique and sequential, the order of which is determined by another column in the same table, and by data in a 2nd table. I am running into a problem when there is duplicate data, and I can't figure out how to get the numeric values to be unique.

For the data below, I want the seq_nbr column in TABLE1 to be in order 1,2,3,4 based on the order the codes are listed horizontally in TABLE2.

TABLE1
seq_no                               seq_nbr dx_code
E3CD8342-1294-4CBA-9201-D51C07E9FB0C 1       366.16
997312BA-8C90-4773-B0FC-1838C46A4728 3       370.03
5DC781A2-71BC-4148-9D56-DA95D3F8F081 4       362.52
E65354B3-F404-430B-8153-EDD7D1921431 4       362.52

TABLE2
dx_code1 dx_code2 dx_code3 dx_code4
366.16   362.52   370.03   362.52

[code="sql"]
CREATE TABLE Table1 (
seq_no UNIQUEIDENTIFIER,
seq_nbr INT,
dx_code VARCHAR(6))

CREATE TABLE Table2 (
dx_code1 VARCHAR(6),
dx_code2 VARCHAR(6),
dx_code3 VARCHAR(6),
dx_code4 VARCHAR(6))

INSERT INTO Table1 (seq_no,seq_nbr,dx_code) SELECT NEWID(),'1','366.16'
GO
INSERT INTO Table1 (seq_no,seq_nbr,dx_code) SELECT NEWID(),'3','370.03'
GO
INSERT INTO Table1 (seq_no,seq_nbr,dx_code) SELECT NEWID(),'4','362.52'
GO
INSERT INTO Table1 (seq_no,seq_nbr,dx_code) SELECT NEWID(),'4','362.52'
GO

INSERT INTO Table2 (dx_code1,dx_code2,dx_code3,dx_code4) VALUES ('366.16','362.52','370.03','362.52')
GO
[/code]

Pivot TABLE2:

[code="sql"]
CREATE TABLE #diag_codes(
dx INT IDENTITY,
sequence char(8),
dx_code VARCHAR(6))

INSERT INTO #diag_codes (sequence,dx_code)
SELECT sequence,dx_code
FROM
(SELECT dx_code1,dx_code2,dx_code3,dx_code4 FROM Table2) d
UNPIVOT
(dx_code FOR sequence IN (dx_code1,dx_code2,dx_code3,dx_code4)) AS unpvt_assess

select * from #diag_codes
DROP TABLE #diag_codes
[/code]

//Edit
Hit post too soon. I have the syntax above for creating the 2 tables and adding data for the test scenario. I tried some queries with the pivot of TABLE2 to get my numeric values updated, but was running into difficulty due to the duplicate dx_code values.
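A sketch of one way to make the numbers unique despite the duplicate dx_code values (SQL Server 2005+ assumed): number the duplicates on both sides with ROW_NUMBER, so the two 362.52 rows in Table1 pair with the two distinct positions in the unpivoted #diag_codes:

[code="sql"]
WITH t1 AS (
    SELECT seq_no, seq_nbr, dx_code,
           ROW_NUMBER() OVER (PARTITION BY dx_code ORDER BY seq_no) AS dup_n
    FROM Table1
), pos AS (
    SELECT dx, dx_code,
           ROW_NUMBER() OVER (PARTITION BY dx_code ORDER BY dx) AS dup_n
    FROM #diag_codes
)
UPDATE t1
SET seq_nbr = pos.dx
FROM t1
INNER JOIN pos ON pos.dx_code = t1.dx_code AND pos.dup_n = t1.dup_n;
[/code]

With the sample data this assigns 1 and 3 to the unique codes and 2 and 4 to the two 362.52 rows, so every seq_nbr is unique and follows the horizontal order in TABLE2.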

Running Total Query

Posted: 05 Jul 2013 01:28 AM PDT

Hi there people,

I'm writing a T-SQL procedure to get a running total over a certain period; the data may have lapses in time between data rows. Here is an example of the schema:

[code="sql"]
CREATE TABLE #TranType1(
value int,
TranDate datetime)

CREATE TABLE #TranType2(
value int,
TranDate datetime)

CREATE TABLE #TranType3(
value int,
TranDate datetime)

INSERT INTO #TranType1
SELECT 10,'2013-06-14'
UNION
SELECT 0,'2013-06-19'
UNION
SELECT 0,'2013-06-24'

INSERT INTO #TranType2
SELECT 0,'2013-06-26'
UNION
SELECT 5208,'2013-06-28'
UNION
SELECT 10,'2013-06-29'
UNION
SELECT 16005,'2013-07-01'

INSERT INTO #TranType3
SELECT 23135,'2013-07-02'
UNION
SELECT 35070,'2013-07-03'
UNION
SELECT 26509,'2013-07-04'
UNION
SELECT 7000,'2013-07-05'
[/code]

And here is the procedure I wrote:

[code="sql"]
DECLARE @DtBegin datetime, @DtEnd date, @dtDif int
set @DtBegin = '5/1/2013'
SET @DtEnd = GETDATE()
set @dtDif = DATEDIFF(day,@DtBegin,@DtEnd)

;WITH ctePingInfo(sumValue,TranDate) AS
(
  SELECT SUM(SumValue),Trandate
  FROM(
    SELECT COALESCE(value,0),CAST(Trandate AS date) FROM #TranType1
    UNION ALL
    SELECT COALESCE(value,0),CAST(Trandate AS date) FROM #TranType2
    UNION ALL
    SELECT COALESCE(value,0),CAST(Trandate AS date) FROM #TranType3) ValInfo(SumValue,TranDate)
  WHERE Trandate BETWEEN @DtBegin AND @DtEnd
  GROUP BY TranDate
),
E1(N) AS
(
  --=== 1x10^1
  SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
  SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 --10
),
E2(N) AS (SELECT 1 FROM E1 a, E1 b), --1x10^2
E4(N) AS (SELECT 1 FROM E2 a, E2 b), --1x10^4
cteTally(N) AS (SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) -1 FROM E4),
cteDateRange(dateRange) AS (SELECT TOP (@dtDif+1) dateadd(day,n,@DtBegin) FROM cteTally),
cteResult(totValue,TranDate) AS
(
  SELECT COALESCE(cpi.Sumvalue,0),cdr.daterange
  FROM cteDateRange cdr
  LEFT JOIN ctePingInfo cpi ON cdr.daterange = cpi.TranDate
)
SELECT DISTINCT SUM(res2.totvalue) OVER (PARTITION BY res.TranDate),res.TranDate
FROM cteResult res
INNER JOIN cteResult res2 ON res.TranDate >= res2.TranDate
ORDER BY res.TranDate

DROP TABLE #TranType1
DROP TABLE #TranType2
DROP TABLE #TranType3
[/code]

It does return exactly what I want, but given that I believe this kind of request is very common, I was wondering if there are any ways to speed it up; creating a physical tally table might not be an option. I believe there must be other topics on this very subject, but I'm not aware of what terms to look up. Any help is appreciated, thanks!

//Edit
Please people, never mind; I just found this link: http://www.sqlservercentral.com/articles/T-SQL/68467/
Don't know how that didn't come up on Google; I just had to make some minor changes.

Fun task for the afternoon - grouping data

Posted: 05 Jul 2013 01:53 AM PDT

Hi,

I am trying to create a table with a grouping based on a merged column. Best to explain with SQL:

[code="sql"]
CREATE TABLE [dbo].[#Table_1](
[ID] [int] IDENTITY(1,1) NOT NULL,
[Region] [varchar](50) NULL,
[Prop1] [varchar](50) NULL,
[Prop2] [varchar](50) NULL,
[Prop3] [varchar](50) NULL,
CONSTRAINT [PK_Table_1] PRIMARY KEY CLUSTERED
(
[ID] ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO

insert into #Table_1 values
('ANZ', 'A', 'B', 'C'),
('CAM', 'F', 'G', 'C'),
('CAM', 'A', 'B', 'C'),
('ME', 'H', 'V', 'N'),
('SAM', 'A', 'B', 'C')

select * from #Table_1

--SOME QUERY HERE to turn the data into something like this:
CREATE TABLE [dbo].[#Table_2](
[Region] [varchar](50) NULL,
[Prop1] [varchar](50) NULL,
[Prop2] [varchar](50) NULL,
[Prop3] [varchar](50) NULL
)

insert into #Table_2 values
('ANZ,CAM,SAM', 'A', 'B', 'C'),
('CAM', 'F', 'G', 'C'),
('ME', 'H', 'V', 'N')

select * from #Table_2

--drop Table #Table_1
--drop Table #Table_2
[/code]

I am thinking that at some point I need to use the XML PATH trick to concatenate the fields, but I am a bit lost as to how to do this in the above scenario. Any help you can pass on would be very useful.

Many Thanks,
Oliver
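A sketch of the FOR XML PATH trick applied to this shape: group on the three Prop columns and concatenate the matching regions with a correlated subquery:

[code="sql"]
SELECT STUFF((SELECT ',' + t2.Region
              FROM #Table_1 AS t2
              WHERE t2.Prop1 = t1.Prop1 AND t2.Prop2 = t1.Prop2 AND t2.Prop3 = t1.Prop3
              ORDER BY t2.Region
              FOR XML PATH('')), 1, 1, '') AS Region,
       t1.Prop1, t1.Prop2, t1.Prop3
FROM #Table_1 AS t1
GROUP BY t1.Prop1, t1.Prop2, t1.Prop3;
[/code]

With the sample data this yields 'ANZ,CAM,SAM' for the A/B/C group and single regions for the rest.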

[SQL Server 2008 issues] Incorrect totals when summing counts for past 12 month

Incorrect totals when summing counts for past 12 month

Posted: 05 Jul 2013 07:08 PM PDT

Hello everyone,

I have run into an issue where the query below returns monthly totals that include the counts with "lastupdatedate" on the 1st day of the next month as well. In other words, counts in April 2013 also include records with a "lastupdatedate" value of 05/01/2013. I can't seem to find a solution to this one. Any help is appreciated! The data type of the "lastupdatedate" column is datetime.

[code="sql"]
SELECT DATEDIFF(SECOND,{d '1970-01-01'}, MIN(ins.lastUpdateDate)) AS Ids,
CASE
WHEN MONTH(ins.lastUpdateDate) = 1 THEN 'Jan, ' + CAST(YEAR(ins.lastUpdateDate) as nvarchar(4))
WHEN MONTH(ins.lastUpdateDate) = 2 THEN 'Feb, ' + CAST(YEAR(ins.lastUpdateDate) as nvarchar(4))
WHEN MONTH(ins.lastUpdateDate) = 3 THEN 'Mar, ' + CAST(YEAR(ins.lastUpdateDate) as nvarchar(4))
WHEN MONTH(ins.lastUpdateDate) = 4 THEN 'Apr, ' + CAST(YEAR(ins.lastUpdateDate) as nvarchar(4))
WHEN MONTH(ins.lastUpdateDate) = 5 THEN 'May, ' + CAST(YEAR(ins.lastUpdateDate) as nvarchar(4))
WHEN MONTH(ins.lastUpdateDate) = 6 THEN 'Jun, ' + CAST(YEAR(ins.lastUpdateDate) as nvarchar(4))
WHEN MONTH(ins.lastUpdateDate) = 7 THEN 'Jul, ' + CAST(YEAR(ins.lastUpdateDate) as nvarchar(4))
WHEN MONTH(ins.lastUpdateDate) = 8 THEN 'Aug, ' + CAST(YEAR(ins.lastUpdateDate) as nvarchar(4))
WHEN MONTH(ins.lastUpdateDate) = 9 THEN 'Sep, ' + CAST(YEAR(ins.lastUpdateDate) as nvarchar(4))
WHEN MONTH(ins.lastUpdateDate) = 10 THEN 'Oct, ' + CAST(YEAR(ins.lastUpdateDate) as nvarchar(4))
WHEN MONTH(ins.lastUpdateDate) = 11 THEN 'Nov, ' + CAST(YEAR(ins.lastUpdateDate) as nvarchar(4))
WHEN MONTH(ins.lastUpdateDate) = 12 THEN 'Dec, ' + CAST(YEAR(ins.lastUpdateDate) as nvarchar(4))
END AS Groups,
COUNT(ins.Id) AS Counts
FROM tblInsp ins
WHERE ins.custID = 1234
and ins.lastUpdateDate > Dateadd(year,-1,getdate())
GROUP BY MONTH(ins.lastUpdateDate), YEAR(ins.lastUpdateDate)
[/code]

Thanks a bunch!
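One pattern that avoids month-boundary surprises (a sketch, not a drop-in rewrite of the above) is to group and label by the computed first day of each month, and to bound the range with explicit >= and < comparisons on the column itself:

[code="sql"]
SELECT DATEADD(MONTH, DATEDIFF(MONTH, 0, ins.lastUpdateDate), 0) AS MonthStart,
       COUNT(ins.Id) AS Counts
FROM tblInsp AS ins
WHERE ins.custID = 1234
  AND ins.lastUpdateDate >= DATEADD(YEAR, -1, GETDATE())
  AND ins.lastUpdateDate <  GETDATE()  -- illustrative upper bound
GROUP BY DATEADD(MONTH, DATEDIFF(MONTH, 0, ins.lastUpdateDate), 0);
[/code]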

How to Store Data in Database using Data table

Posted: 01 Jul 2013 05:10 PM PDT

c# - Insert data from DataTable to database table

Effect of Clustered Index on Non-Clustered Index

Posted: 02 Jul 2013 01:50 AM PDT

Not remotely a DBA, so forgive me if this is a novice question, but will having a clustered index in a database improve the performance of non-clustered indexes?

Sean

collation with OPENROWSET BULK, file in different collation than server default

Posted: 05 Jul 2013 10:55 AM PDT

I am having a difficulty with SQL Server collations (2008 R2) that is driving me up the wall. I've learned everything I can about MSSQL and Windows collations and character sets (lots of crazy details!), but I still can't tell what's going wrong. Hoping somebody here can save me...

My goal is simple:
-- I have a program that takes text files (all in Windows Code Page 1252) and bulk inserts them into database tables.
-- Every CHAR and VARCHAR field in every table has collation SQL_Latin1_General_CP1_CI_AS.
-- The format files mark these fields as SQLCHAR, likewise with collation SQL_Latin1_General_CP1_CI_AS.
-- The database and table default collations are Latin1_General_CI_AS.
-- The server default collation is Japanese_CI_AS
-- The result of sp_helpsort is Japanese, case-insensitive, accent-sensitive, kanatype-insensitive, width-insensitive

I'm using MERGE commands, with the file as a source table. The relevant part:

[CODE]
SELECT mycol1, mycol2
FROM OPENROWSET (BULK 'C:\my\file.txt', FORMATFILE='C:\my\format.fmt', FIRSTROW = 2) AS Z
[/CODE]

I receive the error:

[CODE]
Msg 7339, Level 16, State 1, Line 2
OLE DB provider 'BULK' for linked server '(null)' returned invalid data for column '[BULK].title'.
[/CODE]

I've narrowed the problem down to accented "é" characters. When I remove those from the file, it loads properly. Since the file itself, the format file, and the table all understand the single-byte é, I don't see why it should cause problems.

The only idea I have -- and I hope this isn't right -- is if MSSQL is trying to convert the data to Unicode even though it's coming from and going to Windows-1252, due to the server default of Japanese_CI_AS. Unfortunately, this page suggests that might be the case:

http://msdn.microsoft.com/en-us/library/ms191145(v=sql.90).aspx

[QUOTE]
"The use remote collation option specifies whether the collation of a remote column or of a local server will be used... When false, distributed queries always use the default collation of the local server instance, and collation name and the collation of remote columns are ignored...

"Using the linked server options is the only way to enable using remote collations. Therefore, queries that are constructed that use ad hoc names provided by OPENROWSET and OPENDATASOURCE cannot use the collation information of remote character data. Additionally, all linked servers in SQL Server version 7.0 that are upgraded to SQL Server 2000 or later are set to use remote collation = false."
[/QUOTE]

I interpret this as meaning that it will read my Windows-1252 file as Unicode instead (Japanese_CI_AS). That would explain the error, since "é" is a two-byte character in Unicode and the one-byte equivalent wouldn't be understood. But why are my settings in the format file and table structure being ignored? More importantly, is there a way around this? This is an automated process and never processes the same file twice, so setting up every new file as a "remote database" isn't an option unless that can also be automated.

Is there anything I can change to fix this? I can change the format file, the table settings, or the database settings. I can change the query, but only somewhat, since I have to use MERGE as these are UPSERT operations. I can't change the server settings or the data files themselves.
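One knob worth trying (whether it resolves this particular case is an assumption, but CODEPAGE is a documented OPENROWSET BULK option): declare the file's code page explicitly so the BULK provider interprets the single-byte é correctly:

[CODE]
SELECT mycol1, mycol2
FROM OPENROWSET (BULK 'C:\my\file.txt',
                 FORMATFILE = 'C:\my\format.fmt',
                 CODEPAGE = '1252',   -- tell the BULK provider the source encoding
                 FIRSTROW = 2) AS Z
[/CODE]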

SELECTING A CASE VALUE ???

Posted: 05 Jul 2013 07:44 AM PDT

Hi

I am using the following to retrieve a client's "secure" name if an alias type of 004 exists. The problem I have is that if the client has more than one alias type (001, 002, 003), they all show up. What I need to do is: if a client has an alias code of 004, then use that, no matter how many they have; if they don't have an 004 and have multiple, say, 003's, then use the client name.

Thanks

SELECT CASE
         WHEN at.CODE = '004'
         THEN a.FName + ' ' + isnull(a.MName, '') + ' ' + a.lname
         ELSE c.FName + ' ' + ISNULL(c.MName, '') + ' ' + c.LName
       END AS Tag_Data
FROM Client AS C
LEFT JOIN
    -- Note the order of the clausing here. It causes an
    -- inner join to the rest of the tables BEFORE the
    -- outer join is included.
    Client_To_Alias_Collection AS AC
    INNER JOIN Alias AS A ON AC.OID_Link = A.OID
    INNER JOIN Alias_Type AS AT ON AT.OID = A.Alias_Type_Moniker
    ON c.oid = ac.oid AND A.EXPDate IS NULL
WHERE c.OID = '12345'
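A sketch of one way to collapse this to a single row per client (assuming SQL Server 2005+): pick at most one alias with OUTER APPLY, so a 004 alias wins when present and the client name is used otherwise:

SELECT COALESCE(best.FName + ' ' + ISNULL(best.MName, '') + ' ' + best.LName,
                c.FName + ' ' + ISNULL(c.MName, '') + ' ' + c.LName) AS Tag_Data
FROM Client AS c
OUTER APPLY (
    SELECT TOP (1) a.FName, a.MName, a.LName
    FROM Client_To_Alias_Collection AS ac
    INNER JOIN Alias AS a ON ac.OID_Link = a.OID
    INNER JOIN Alias_Type AS at ON at.OID = a.Alias_Type_Moniker
    WHERE ac.oid = c.oid
      AND a.EXPDate IS NULL
      AND at.CODE = '004'   -- only the "secure" alias qualifies
) AS best
WHERE c.OID = '12345';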

Implicit conversion using COALESCE or CASE

Posted: 03 Jul 2013 06:07 AM PDT

Hi,

I've discovered that I always seem to get an implicit conversion to INT when using this syntax:

SELECT ... ORDER BY COALESCE(SectionOrder, 99);

(Column [SectionOrder] is defined as SMALLINT.)

The execution plan COMPUTE SCALAR operation shows this syntax as being changed to a CASE statement:

CASE WHEN SectionOrder IS NOT NULL THEN convert_implicit(int,SectionOrder,0) ELSE (99) END

Is this conversion a "feature" of SQL Server? Any info is appreciated.

~ Jeff
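For what it's worth, this follows from data type precedence: the literal 99 is typed INT, and COALESCE returns the highest-precedence type among its arguments, so the SMALLINT column is widened. Two sketches that keep the expression SMALLINT:

-- type the literal explicitly
SELECT ... ORDER BY COALESCE(SectionOrder, CAST(99 AS smallint));

-- or use ISNULL, which returns the type of its first argument
SELECT ... ORDER BY ISNULL(SectionOrder, 99);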

Curious question about Transaction Replication from a backup

Posted: 05 Jul 2013 06:31 AM PDT

I've read various articles on how to do this but tried a different approach, and while it appears to work for the most part, there are still some issues that I'm hoping someone can point out the obvious cause of. We have replication already set up and going on a reporting server. There is a separate publisher, distributor, and subscriber server. Our attempt is to recreate ("clone") the existing replication setup on the existing reporting server to a new reporting server.

Our process:
- We stopped the distributor agent
- Waited for replicated transactions to complete at the subscriber
- Took a full backup of the subscribed databases
- Restored these databases on the new reporting server (at this point both subscribers should be 100% the same)
- Created a new subscription to the existing publications on the new server
- Re-enabled the distribution agent and waited for replicated commands to hit both subscribers

This seems to work, but for some tables we're off by a few hundred to a few thousand rows... The only error we can find is "The row was not found at the Subscriber when applying the replicated command" - which, to keep things flowing, we addressed by adding -SKIPERRORS 20598.

It appears very random, but is keeping things in sync for the most part (minus the original missing rows).

Any suggestions or ideas as to why we are getting these errors? Shouldn't both database servers be exactly the same with the method we took?

credential/proxy

Posted: 30 Jun 2013 01:05 PM PDT

I have a proc as below:

ALTER PROCEDURE [dbo].[sp_proc_test]
WITH EXECUTE AS 'ABCD\svcaccount'
AS
EXEC msdb.dbo.sp_start_job @job_name = 'JB_ABC_Run'
GO

I set up a proxy account for the Windows login and mapped a SQL login to it. When I try to execute through a SQL login, I get the following error:

The EXECUTE permission was denied on the object 'sp_start_job', database 'msdb', schema 'dbo'.

I have made sure the Windows (ABCD\svcaccount) login has full access (owner) to the msdb db. What am I missing?
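One likely culprit, offered as an assumption: an EXECUTE AS context is trusted only inside the proc's own database, so the impersonated user arrives in msdb without its permissions unless the calling database is marked TRUSTWORTHY (or the module is signed). A sketch (MyAppDb is a placeholder for the database containing sp_proc_test):

ALTER DATABASE MyAppDb SET TRUSTWORTHY ON;

-- and make sure the account can start jobs in msdb:
USE msdb;
CREATE USER [ABCD\svcaccount] FOR LOGIN [ABCD\svcaccount];  -- if not already a user there
EXEC sp_addrolemember 'SQLAgentOperatorRole', 'ABCD\svcaccount';

Module signing is the safer alternative to TRUSTWORTHY if the database holds untrusted code.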

Issue while creating indexed view after joining two non related tables

Posted: 04 Jul 2013 09:00 PM PDT

I am trying to create an indexed view after joining two non-related tables. Please correct me if my approach is wrong, and add your thoughts on the issue I have mentioned below.

Syntax used for creating the view:

CREATE VIEW MyViewName
WITH SCHEMABINDING
AS
select field1, field2 from tableOne (not the exact query)
FULL OUTER JOIN
select field1, field2 from tableTwo (not the exact query)

Syntax used for creating the index on the view:

CREATE UNIQUE CLUSTERED INDEX ClusteredIndexTest
ON dbo.MyViewName (field1, field2)

Issue occurred while trying to create the index:

Msg 10109
Cannot create index on view "db.dbo.MyViewName" because it references derived table "tableOne" (defined by SELECT statement in FROM clause). Consider removing the reference to the derived table or not indexing the view.

How to Check Change in the Status

Posted: 02 Jul 2013 06:34 AM PDT

I have a table for instructors which contains subjects, date and time, and status information about their classes. Once they register on the website they get the Pending status, and somebody from administration either approves, denies, or cancels their class, so the status changes from Pending to Approve, Deny, or Cancel.

I want to write a query to push the information to instructors when their status gets changed from Pending to Approve, Deny, or Cancel. How do I do that?

Thanks,
Blyzzard
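A sketch of one way to capture the transition with an AFTER UPDATE trigger (the table, key, and notification-queue names here are assumptions based on the description):

CREATE TRIGGER trg_instructor_status
ON dbo.InstructorClasses
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- queue a row for each class whose status just left Pending
    INSERT INTO dbo.StatusNotifications (class_id, new_status)
    SELECT i.class_id, i.Status
    FROM inserted AS i
    INNER JOIN deleted AS d ON d.class_id = i.class_id
    WHERE d.Status = 'Pending'
      AND i.Status IN ('Approve', 'Deny', 'Cancel');
END;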

Error descriptions

Posted: 05 Jul 2013 04:54 AM PDT

Is the quality of error descriptions better when trapped at the SSIS package level rather than at the SQL Agent job level?

We have SSIS packages executed from SQL Agent jobs, but if the job fails, we get: 'The job failed. The job was invoked by user xxxx. The last step to run was step 1.'

Instead of this, is there any value in logging the error earlier, from the SSIS package level, by using a script in the error event handler? Do the SSIS errors give a better, more detailed explanation of what might have gone wrong?

It turned out that a stored procedure the SSIS package depended on had been changed, but that was not obvious from the error message. So I was wondering whether SSIS error collection would have told us that, instead of having to figure it out starting from the SQL job level... In other words, I think what happened is the package error bubbled up to the job, but the detail arising from SSIS was not trapped or available from there.

Thanks a lot

Practical Use of Filegroups, Multi-data files, and other IO Tweaks

Posted: 05 Jul 2013 04:24 AM PDT

I'm curious to hear from folks that have decided to deploy (or I guess also decided not to deploy) various IO tweaks in their environment on a properly sized, multi-use (many databases, mixed load OLTP) SQL server.

Here's why. For lots of reasons I won't go into... our DB servers are not in our virtual racks... nor are they on our SAN. They are still dedicated chassis with cherry-picked IO systems. They sport a single RAID for all things data (meaning, not: OS, program files, swap files, etc. Just data). That RAID is usually around 30 x 15K RPM SAS drives with max battery-backed RAID cache (1.5GB currently), and I jump around between RAID 5 and 6... with additional hot spares... point here is I can usually lose up to 3 drives before I'm in trouble.

So, my IO approach has always been to over-provision. Buy more space than needed, buy even more IO throughput than needed, buy a big cache card, and let the RAID & IO subsystem work out the details. One major reason for this is my DB servers are usually sporting 100+ databases with a mix of vendor apps, SharePoint, custom DBs and apps, all serving a combination of OLTP and big reporting loads. Breaking out log files per the typical definition of IO isolation seems senseless in that I would be separating out 100+ log files (thereby turning 100+ synchronous workloads into something inherently chaotic and therefore back to something non-sequential).

mdf, ldf, and index isolation (by physical disks) always seems heavily laced with pro vs con. If I broke up my 30 drives into, say, 3 sets of 10-drive arrays, trying to do my own IO isolation seems like it could just as often introduce as many problems as I solve (the thought being that I would occasionally experience a bottleneck on the 10-drive array I would not have experienced on my original 30-drive array).

So as to stop rambling, I'll open the floor to other folks that may have a test environment and better opportunities for testing different IO isolation... Am I wrong-headed here? Are there folks out there with a similar environment that are breaking apart big arrays, engaging in custom IO isolation (using any SQL methods available) and finding it worth the trouble?

Thanks.

Mixed Mode Authentication not consistent between Windows Users using Mgmt Studio

Posted: 05 Jul 2013 12:28 AM PDT

Currently experiencing a strange issue I have not seen before on a SQL Server 2008 R2 instance running on Windows Server 2008 R2. Access to SQL Server via Mgmt Studio under the 'sa' user does not work regardless of which Windows user is logged in. The only way to access SQL Server via Mgmt Studio is to log into Windows with the administrator user and use Windows Authentication to authenticate against SQL Server.

SQL Server configuration:
Mixed Mode Authentication
Shared Memory protocol access only (other protocols have been tried)
SQL Native Client 10 being used to access SQL Server

All of the above is only experienced when using Management Studio. When accessing via ODBC programmatically, the configured SQL Server user works just fine regardless of which Windows user is logged in.

Deadlocks after migration from SQL server 2005 to 2008R2

Posted: 05 Jul 2013 12:46 AM PDT

We have migrated from SQL Server 2005 to 2008 R2 and are now facing lots of deadlock issues. I have enabled trace flag 1222 and can see numerous deadlocks logged in the error log. One of them is below. Can somebody please help with what may be the issue?

------------------------------------------------------------------------------
07/05/2013 15:38:08,spid12s,Unknown,ResType:LockOwner Stype:'OR'Xdes:0x00000005B5989950 Mode: Sch-M SPID:80 BatchID:0 ECID:0 TaskProxy:(0x0000000D5FF36540) Value:0xd297600 Cost:(0/0)
07/05/2013 15:38:08,spid12s,Unknown,Victim Resource Owner:
07/05/2013 15:38:08,spid12s,Unknown,Log Viewer could not read information for this log entry. Cause: Data is Null. This method or property cannot be called on Null values.. Content:
07/05/2013 15:38:08,spid12s,Unknown,ResType:LockOwner Stype:'OR'Xdes:0x00000000800845F0 Mode: Sch-S SPID:139 BatchID:0 ECID:0 TaskProxy:(0x0000000F5CE68518) Value:0xd1c3640 Cost:(0/0)
07/05/2013 15:38:08,spid12s,Unknown,Requested by:
07/05/2013 15:38:08,spid12s,Unknown,Input Buf: RPC Event: Proc [Database Id = 12 Object Id = 1508968502]
07/05/2013 15:38:08,spid12s,Unknown,SPID: 80 ECID: 0 Statement Type: ALTER TABLE Line #: 25
07/05/2013 15:38:08,spid12s,Unknown,Owner:0x000000000CFCDC80 Mode: Sch-M Flg:0x40 Ref:0 Life:20000000 SPID:80 ECID:0 XactLockInfo: 0x00000005B5989990
07/05/2013 15:38:08,spid12s,Unknown,Grant List 0:
07/05/2013 15:38:08,spid12s,Unknown,OBJECT: 2:1250113027:4 CleanCnt:2 Mode:Sch-M Flags: 0x1
07/05/2013 15:38:08,spid12s,Unknown,Node:2
07/05/2013 15:38:08,spid12s,Unknown,Log Viewer could not read information for this log entry. Cause: Data is Null. This method or property cannot be called on Null values.. Content:
07/05/2013 15:38:08,spid12s,Unknown,ResType:LockOwner Stype:'OR'Xdes:0x00000005B5989950 Mode: Sch-M SPID:80 BatchID:0 ECID:0 TaskProxy:(0x0000000D5FF36540) Value:0xd297600 Cost:(0/0)
07/05/2013 15:38:08,spid12s,Unknown,Requested by:
07/05/2013 15:38:08,spid12s,Unknown,Input Buf: RPC Event: Proc [Database Id = 12 Object Id = 1508968502]
07/05/2013 15:38:08,spid12s,Unknown,SPID: 139 ECID: 0 Statement Type: UPDATE Line #: 371
07/05/2013 15:38:08,spid12s,Unknown,Owner:0x000000000C5DC5C0 Mode: Sch-S Flg:0x40 Ref:1 Life:00000000 SPID:139 ECID:0 XactLockInfo: 0x0000000E570E1BA0
07/05/2013 15:38:08,spid12s,Unknown,Grant List 0:
07/05/2013 15:38:08,spid12s,Unknown,OBJECT: 2:1250113027:6 CleanCnt:2 Mode:Sch-S Flags: 0x1
07/05/2013 15:38:08,spid12s,Unknown,Node:1
07/05/2013 15:38:08,spid12s,Unknown,Log Viewer could not read information for this log entry. Cause: Data is Null. This method or property cannot be called on Null values.. Content:
07/05/2013 15:38:08,spid12s,Unknown,Wait-for graph
07/05/2013 15:38:08,spid12s,Unknown,Deadlock encountered .... Printing deadlock information
07/05/2013 15:37:51,spid12s,Unknown,ResType:LockOwner Stype:'OR'Xdes:0x0000000D38AE59D0 Mode: Sch-S SPID:179 BatchID:0 ECID:0 TaskProxy:(0x0000000EB2060518) Value:0xcdfc380 Cost:(0/0)
07/05/2013 15:37:51,spid12s,Unknown,Victim Resource Owner:
07/05/2013 15:37:51,spid12s,Unknown,Log Viewer could not read information for this log entry. Cause: Data is Null. This method or property cannot be called on Null values.. Content:
07/05/2013 15:37:51,spid12s,Unknown,ResType:LockOwner Stype:'OR'Xdes:0x00000000B5490E80 Mode: Sch-M SPID:96 BatchID:0 ECID:0 TaskProxy:(0x0000000CE0EAE540) Value:0xdc44280 Cost:(0/0)
07/05/2013 15:37:51,spid12s,Unknown,Requested by:
07/05/2013 15:37:51,spid12s,Unknown,Input Buf: RPC Event: Proc [Database Id = 12 Object Id = 1508968502]
07/05/2013 15:37:51,spid12s,Unknown,SPID: 179 ECID: 0 Statement Type: UPDATE Line #: 116
07/05/2013 15:37:51,spid12s,Unknown,Owner:0x000000000C6B0C40 Mode: Sch-S Flg:0x40 Ref:1 Life:00000000 SPID:179 ECID:0 XactLockInfo: 0x000000011F019BA0
07/05/2013 15:37:51,spid12s,Unknown,Grant List 0:
07/05/2013 15:37:51,spid12s,Unknown,OBJECT: 2:1401065560:10 CleanCnt:2 Mode:Sch-S Flags: 0x1
07/05/2013 15:37:51,spid12s,Unknown,Node:2
07/05/2013 15:37:51,spid12s,Unknown,Log Viewer could not read information for this log entry. Cause: Data is Null. This method or property cannot be called on Null values.. Content:
07/05/2013 15:37:51,spid12s,Unknown,ResType:LockOwner Stype:'OR'Xdes:0x0000000D38AE59D0 Mode: Sch-S SPID:179 BatchID:0 ECID:0 TaskProxy:(0x0000000EB2060518) Value:0xcdfc380 Cost:(0/0)
07/05/2013 15:37:51,spid12s,Unknown,Requested by:
07/05/2013 15:37:51,spid12s,Unknown,Input Buf: RPC Event: Proc [Database Id = 12 Object Id = 1508968502]
07/05/2013 15:37:51,spid12s,Unknown,SPID: 96 ECID: 0 Statement Type: ALTER TABLE Line #: 24
07/05/2013 15:37:51,spid12s,Unknown,Owner:0x000000000EAE1D00 Mode: Sch-M Flg:0x40 Ref:0 Life:20000000 SPID:96 ECID:0 XactLockInfo: 0x00000000B5490EC0
07/05/2013 15:37:51,spid12s,Unknown,Grant List 1:
07/05/2013 15:37:51,spid12s,Unknown,OBJECT: 2:1401065560:6 CleanCnt:2 Mode:Sch-M Flags: 0x1
07/05/2013 15:37:51,spid12s,Unknown,Node:1
07/05/2013 15:37:51,spid12s,Unknown,Log Viewer could not read information for this log entry. Cause: Data is Null. This method or property cannot be called on Null values.. Content:
07/05/2013 15:37:51,spid12s,Unknown,Wait-for graph
07/05/2013 15:37:51,spid12s,Unknown,Deadlock encountered .... Printing deadlock information
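On 2008 R2 you can also pull the full deadlock graphs from the default system_health Extended Events session instead of parsing the 1222 output. A sketch (this is the stock ring-buffer query; no custom session is required):

    SELECT XEvent.query('(data/value/deadlock)[1]') AS DeadlockGraph
    FROM (
        SELECT CAST(st.target_data AS XML) AS TargetData
        FROM sys.dm_xe_session_targets st
        JOIN sys.dm_xe_sessions s ON s.address = st.event_session_address
        WHERE s.name = 'system_health' AND st.target_name = 'ring_buffer'
    ) AS src
    CROSS APPLY TargetData.nodes('RingBufferTarget/event[@name="xml_deadlock_report"]') AS x(XEvent);

For what it's worth, the trace above shows Sch-M requests from ALTER TABLE statements deadlocking with Sch-S readers, which usually points at DDL (or something issuing DDL, such as a maintenance or deployment job) running concurrently with the normal workload.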

Massive slowdown in SELECT statement in Cursor from SQL 2000 to 2008 -- need help!

Posted: 28 Jun 2013 04:46 AM PDT

I have a SELECT statement in a cursor in a stored procedure that ran extremely fast on SQL Server 2000. I just migrated our database to SQL Server 2008 R2 (SP2) for a customer, and the same SELECT now takes fifty times longer to execute. I have logging that measures this performance, which is how I found out. The database on the new 2008 R2 server is set to 2008 compatibility; the slow performance occurs in both 2000 and 2008 compatibility modes.

Some notes: I need to keep the SELECT statement inside the cursor. It calculates the number of cross sales for an employee in a given day, which is then used to figure out what incentive earnings they get for the cross sales. The SVAccountsHistory table has three million rows; SVCrossSales has 16,000; SVSales has 74,000; SVAccounts has 90,000. The joins between tables use the primary keys in these tables. I have a number of other SELECT statements that run just fine for the incentive calculations elsewhere in the stored procedure, so it's just this SELECT in particular. My guess is that SQL 2008 doesn't like the DATEDIFF comparison or the COUNT(DISTINCT xxx) calculation. It's extremely maddening, I have to say. I'm at my wit's end, and desperately need to figure out what SQL 2008 is choking on in comparison to SQL 2000. Thank you for your help!

Here's the SELECT statement:

[quote]
SELECT @tmpCalcValue1 = COUNT(DISTINCT SVCrossSalesHistory.SalesNum) * @EmpRuleValue1
FROM SVCrossSalesHistory
INNER JOIN SVSales ON (
    SVCrossSalesHistory.INum = SVSales.INum AND
    SVCrossSalesHistory.SalesNum = SVSales.SalesNum AND
    SVSales.SaleDate = @tmpDate AND          -- Go back to the day of consideration
    SVSales.OrgNum = @OrgNum AND             -- Selected emp
    SVSales.DeleteFlg = 'N')
INNER JOIN SVGoalProdXref ON (
    SVSales.INum = SVGoalProdXref.INum AND
    SVSales.ProdNum = SVGoalProdXref.ProdNum AND
    SVGoalProdXref.GoalNum = @GoalNum AND    -- Go against all the prods for the selected goal
    SVGoalProdXref.DeleteFlg = 'N')
INNER JOIN SVAccounts ON (
    SVSales.INum = SVAccounts.INum AND
    SVSales.FullAcctNum = SVAccounts.FullAcctNum AND
    (   -- If the account was closed, determine if the minimum # active days was met during the life of the account
        DATEDIFF(DD, OpenDate, CASE WHEN CloseDate = '1/1/1900' THEN NULL ELSE CloseDate END) > @EmpRuleValue2 OR  -- @EmpRuleValue2 = 'x # days active'
        DATEDIFF(DD, OpenDate, CASE WHEN CloseDate = '1/1/1900' THEN NULL ELSE CloseDate END) IS NULL))
INNER JOIN SVAccountsHistory ON (
    SVAccounts.INum = SVAccountsHistory.INum AND
    SVAccounts.FullAcctNum = SVAccountsHistory.FullAcctNum AND
    SVAccountsHistory.HistoryDate = @StartTime AND    -- Today
    SVAccountsHistory.Balance > ISNULL(@EmpRuleValue5, 0))  -- '$' value in 'balance > $'
WHERE
    SVCrossSalesHistory.INum = @INum AND
    SVCrossSalesHistory.CSFlg = 'Y' AND      -- Must be a cross sale
    SVCrossSalesHistory.IsNewCustFlg = 'Y'   -- Consider new customers only
[/quote]
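One thing worth testing, offered as a sketch rather than a definitive fix: the DATEDIFF/CASE expressions wrap OpenDate and CloseDate in computations, which keeps the optimizer from using column statistics on them, and 2008's cardinality estimates for such predicates can differ badly from 2000's. Assuming both columns hold midnight dates and '1/1/1900' marks an open account, the SVAccounts condition can be rewritten without DATEDIFF:

    -- Logically equivalent filter: keep open accounts, plus closed accounts
    -- that were active for more than @EmpRuleValue2 days
    DECLARE @EmpRuleValue2 INT = 30;  -- hypothetical value for illustration

    SELECT COUNT(*)
    FROM SVAccounts
    WHERE CloseDate = '19000101'
       OR CloseDate > DATEADD(DAY, @EmpRuleValue2, OpenDate);

Also worth doing: compare the actual execution plans from both servers, and update statistics with FULLSCAN on the four tables, since stale statistics after a migration are the most common cause of this exact symptom.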

Getting all the days of the week even if data doesn't exist

Posted: 05 Jul 2013 01:39 AM PDT

Hi, I've tried this before but am just not getting it... sorry.

I have an aux date table with date info: date, dayofweek, kindofday, etc. I want to have a start and end date (using SSRS) to find recorded services. So let's say they put in 7/1/13 through 7/5/13, and 7/4 is a holiday with no recorded services. I want output similar to:

[b]date     record serv      KINDOFDAY[/b]
7/1/13   recservdata1...  Weekday
7/1/13   recservdata2...  Weekday
7/2/13   recservdata3...  Weekday
7/3/13   recservdata4...  Weekday
7/4/13   NULL             Holiday
7/5/13   recservdata5...  Weekday
etc...

I have a view which grabs the days of the week (rpt_Cal_Select). Then I attached to this, but I only get records with data:

SELECT dbo.RECORDED_SERVICE.SERVICE_ITEM_MONIKER, dbo.rpt_Cal_Select.Date AS Expr1, dbo.SERVICE_ITEM.Code, dbo.rpt_Cal_Select.*, dbo.RECORDED_SERVICE.OID
FROM dbo.rpt_Cal_Select
INNER JOIN dbo.RECORDED_SERVICE ON dbo.rpt_Cal_Select.Date = DATEADD(dd, 0, DATEDIFF(dd, 0, dbo.RECORDED_SERVICE.STARTTIME))
LEFT OUTER JOIN dbo.SERVICE_ITEM ON dbo.RECORDED_SERVICE.SERVICE_ITEM_MONIKER = dbo.SERVICE_ITEM.OID

Thanks
Joe
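The INNER JOIN is what drops the holiday row: it only keeps calendar dates that have a matching recorded service. Starting from the calendar view and LEFT JOINing everything else preserves every date in the range. A sketch using the names from the post, where @StartDate and @EndDate stand in for the SSRS parameters:

    SELECT c.[Date], rs.SERVICE_ITEM_MONIKER, si.Code, c.KINDOFDAY, rs.OID
    FROM dbo.rpt_Cal_Select AS c
    LEFT OUTER JOIN dbo.RECORDED_SERVICE AS rs
        ON c.[Date] = DATEADD(dd, 0, DATEDIFF(dd, 0, rs.STARTTIME))
    LEFT OUTER JOIN dbo.SERVICE_ITEM AS si
        ON rs.SERVICE_ITEM_MONIKER = si.OID
    WHERE c.[Date] BETWEEN @StartDate AND @EndDate;

Note that any filters on RECORDED_SERVICE columns must go into the join condition, not the WHERE clause, or the outer join silently turns back into an inner one.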

I can't get my fulltext catalog to populate automatically

Posted: 05 Jul 2013 01:01 AM PDT

I have one fulltext index I created three days ago in my default fulltext catalog. It's set for automatic change tracking, but it doesn't seem to be updating automatically. Not sure where I'm going wrong. I had assumed that, after creation, a fulltext index would automatically update itself. Any ideas?
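A couple of checks that narrow this down; these are standard catalog views, and dbo.MyTable in the ALTER is a placeholder for the real table:

    -- Is change tracking really AUTO, and has the initial crawl finished?
    SELECT OBJECT_NAME(object_id) AS table_name,
           change_tracking_state_desc,
           crawl_type_desc,
           crawl_end_date
    FROM sys.fulltext_indexes;

    -- Re-assert automatic change tracking
    ALTER FULLTEXT INDEX ON dbo.MyTable SET CHANGE_TRACKING AUTO;

Automatic updates only flow once the initial full population has completed, so a NULL crawl_end_date (crawl still running) would explain changes not showing up yet.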

help with joining to table with non unique index and duplicate rows

Posted: 04 Jul 2013 09:43 PM PDT

Hi, I have the below query to join several tables and retrieve customer data:

[code="sql"]
select sv.SVSEQ, sc.SCACO, ac.f1 as AccountNo, ne.NEAN, ne.NEEAN,
       bg.BGCFN1, bg.BGCFN2, bg.BGCFN3, bg.BGCUS,
       sv.SVNA1, sv.SVNA2, sv.SVNA3, sv.SVNA4, sv.SVNA5, sv.SVPZIP, sv.SVCSA,
       sc.SCACO, sx.SXDLM
from NEPF ne
    inner join tmpAccountList ac on NEEAN = ac.f1
    inner join BGPF bg on bg.BGCUS = ne.NEAN
    inner join SCPF sc on sc.SCAN = bg.BGCUS and bg.BGCUS = ne.NEAN and sc.SCAN = LEFT(ac.f1, 6)
    inner join sxpf sx on sx.SXCUS = LEFT(ac.f1, 6)
    inner join SVPF sv on sv.SVDLM = sx.SXDLM
[/code]

My issue is with the table [b]SVPF[/b], which has thousands of duplicate records. There is a column [i]SVSEQ[/i] which is numbered, but not uniquely. I want to join to that table, but only get one row for each individual sequence in my result set. Without that table, I have 150 records; with that table, I have several hundred thousand. How can I add that table using the [i]SVSEQ[/i] column without getting duplicates?
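One common pattern is to collapse SVPF to one row per SVSEQ before joining, using ROW_NUMBER(). A sketch against the query above; the ORDER BY sv0.SVDLM tie-breaker is an assumption, so substitute whichever column defines the row you want to keep:

[code="sql"]
;WITH sv AS (
    SELECT sv0.*,
           ROW_NUMBER() OVER (PARTITION BY sv0.SVSEQ ORDER BY sv0.SVDLM) AS rn
    FROM SVPF sv0
)
SELECT sv.SVSEQ, sc.SCACO, ac.f1 AS AccountNo, ne.NEAN, ne.NEEAN,
       bg.BGCFN1, bg.BGCFN2, bg.BGCFN3, bg.BGCUS,
       sv.SVNA1, sv.SVNA2, sv.SVNA3, sv.SVNA4, sv.SVNA5, sv.SVPZIP, sv.SVCSA,
       sx.SXDLM
FROM NEPF ne
    INNER JOIN tmpAccountList ac ON ne.NEEAN = ac.f1
    INNER JOIN BGPF bg ON bg.BGCUS = ne.NEAN
    INNER JOIN SCPF sc ON sc.SCAN = bg.BGCUS AND sc.SCAN = LEFT(ac.f1, 6)
    INNER JOIN sxpf sx ON sx.SXCUS = LEFT(ac.f1, 6)
    INNER JOIN sv ON sv.SVDLM = sx.SXDLM AND sv.rn = 1;
[/code]

The CTE numbers the rows within each SVSEQ group, and the rn = 1 filter keeps exactly one of them.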

problem select insert

Posted: 04 Jul 2013 11:32 PM PDT

Hi, in my database I have many lookup tables used for selection. For example:

table1
phone   kod1   kod2   adress    kod3   email         kod4   kod5
-----   ----   ----   -------   ----   -----------   ----   ----

table2
name1      kod1
--------   ----
christie   1
james      2

table3
name2      kod2
--------   ----
jhony      1
paul       2

table3 (the kod3 lookup)
name3      kod3
--------   ----
hasan      1
rashid     2

table4
name4      kod4
--------   ----
asif       1
natiq      2

table5
name5      kod5
--------   ----
arnold     1
vandame    1

I want to insert into table1 the values:

11111, james, jhony, chicago, hasan, www.mail.ru, asif, arnold

After the insert, table1 should contain:

phone   kod1   kod2   adress    kod3   email         kod4   kod5
-----   ----   ----   -------   ----   -----------   ----   ----
11111   2      1      chicago   1      www.mail.ru   1      1
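If the goal is to store the lookup codes rather than the names, the usual shape is INSERT ... SELECT with one subquery per lookup. A sketch using the names from the post; the second kod3 lookup is labelled table3 above, so I've assumed it is a separate table and called it table3b here:

    INSERT INTO table1 (phone, kod1, kod2, adress, kod3, email, kod4, kod5)
    SELECT '11111',
           (SELECT kod1 FROM table2  WHERE name1 = 'james'),
           (SELECT kod2 FROM table3  WHERE name2 = 'jhony'),
           'chicago',
           (SELECT kod3 FROM table3b WHERE name3 = 'hasan'),
           'www.mail.ru',
           (SELECT kod4 FROM table4  WHERE name4 = 'asif'),
           (SELECT kod5 FROM table5  WHERE name5 = 'arnold');

Each subquery must return exactly one row, so the name columns need to be unique within their lookup tables.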

SQL Varchar(max)

Posted: 04 Jul 2013 11:36 PM PDT

Hi Team, in SQL Server 2008 R2, what is the maximum number of characters for VARCHAR(MAX)? I've declared a variable in a stored proc as @Temp VARCHAR(MAX), but I am passing a parameter with 95,000 characters. Will it work? Please suggest...
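VARCHAR(MAX) holds up to 2^31-1 bytes (about 2 GB), so 95,000 characters is no problem, as long as every variable and parameter along the way is declared MAX rather than VARCHAR(8000). A quick sanity check:

    DECLARE @Temp VARCHAR(MAX);
    -- CAST the input to VARCHAR(MAX); otherwise REPLICATE truncates the result at 8000
    SET @Temp = REPLICATE(CAST('x' AS VARCHAR(MAX)), 95000);
    SELECT LEN(@Temp) AS CharCount;   -- returns 95000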

The system has rebooted from a Automatic Server Recovery (ASR) event.

Posted: 01 Jul 2013 11:44 PM PDT

We are facing a frequent restart of our production server, about once a month. The event log is as below:

[i]
User Action
Determine the nature of the Automatic Server Recovery (ASR) event, and take corrective action.

WBEM Indication Properties
AlertingElementFormat: 1 0x1 (Other)
AlertType: 5 0x5 (Device Alert)
BladeBay: "11"
BladeName: "DBServer.histogenetics.com"
Description: "The system has rebooted from a Automatic Server Recovery (ASR) event."
EnclosureName: "OB-005DD323242"
EventCategory: 16 0x10 (System Power)
EventID: "1"
ImpactedDomain: 3 0x3 (Enclosure)
IndicationIdentifier: "{B211AC7F-9D6F-435C-825E-560CB382D0CF}"
IndicationTime: "20130701224253.923000-240"
NetworkAddresses[0]: "fe80::a8d7:c0f3:ac5f:b15d%10"
NetworkAddresses[1]: "xxx.xxx.xxx.xx"
OSType: 103 0x67 (Microsoft Windows Server 2008 R2)
OSVersion: "6.1.7601"
PerceivedSeverity: 5 0x5 (Major)
ProbableCause: 111 0x6f (Timeout)
ProbableCauseDescription: "ASR Reboot Occurred"
ProviderName: "HP Recovery"
ProviderVersion: "2.7.0.0"
RackName: ""
RackUID: ""
RecommendedActions[0]: "Determine the nature of the Automatic Server Recovery (ASR) event, and take corrective action."
Summary: "ASR reboot occurred"
SystemCreationClassName: "HP_WinComputerSystem"
SystemFirmwareVersion[0]: "2010.03.30"
SystemFirmwareVersion[1]: "2010.03.30"
SystemGUID: "33424235-3532-584D-51343-425235235"
SystemModel: "ProLiant BL460c G6"
SystemName: "DBServer.histogenetics.com"
SystemProductID: "785725-K22"
SystemSerialNumber: "XDfs898sd"
TIME_CREATED: 130172065740131089 0x1ce76cdda1e8b11
[/i]

Has anybody faced a similar problem and gotten it solved?

Dynamically make datamarts for the business

Posted: 04 Jul 2013 10:39 PM PDT

Hello, I am trying to make an SSIS load package with which I can produce datamarts/exports for the business. I want to do the setup of all the exports in one table, like:

ID
Name
Description
Frequency
Source SQL statement
Target Database
Target Schema
Target Table

I have built an SSIS package with an Execute SQL Task to get the setup variables, which feeds a Foreach Loop Container containing a Data Flow Task whose source uses the source SQL statement variable. But then I have problems setting up a destination and dynamically dropping and creating the destination table. I am so close but can't find a solution. I hope you can help me with how to do the rest. If you need more information, please ask.
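A sketch of the setup table, with the column names from the post cleaned up (all types are assumptions to adjust):

    CREATE TABLE dbo.ExportSetup (
        ID                 INT IDENTITY(1,1) PRIMARY KEY,
        Name               VARCHAR(100)  NOT NULL,
        Description        VARCHAR(500)  NULL,
        Frequency          VARCHAR(20)   NOT NULL,
        SourceSqlStatement NVARCHAR(MAX) NOT NULL,
        TargetDatabase     SYSNAME       NOT NULL,
        TargetSchema       SYSNAME       NOT NULL,
        TargetTable        SYSNAME       NOT NULL
    );

One way around the dynamic-destination problem is to skip the Data Flow destination entirely: have an Execute SQL Task build a single dynamic statement per row, such as IF OBJECT_ID('target') IS NOT NULL DROP TABLE target; followed by SELECT ... INTO target FROM (source query), so the table is dropped and recreated with whatever shape the source query produces.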

Days Since Last Backup

Posted: 04 Jul 2013 10:20 PM PDT

Hi, how do I calculate the number of days since the last backup?
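Assuming full database backups are what matters, msdb records every backup in dbo.backupset; a sketch:

    SELECT d.name AS database_name,
           MAX(b.backup_finish_date) AS last_full_backup,
           DATEDIFF(DAY, MAX(b.backup_finish_date), GETDATE()) AS days_since_backup
    FROM sys.databases d
    LEFT JOIN msdb.dbo.backupset b
           ON b.database_name = d.name
          AND b.type = 'D'              -- 'D' = full database backup
    GROUP BY d.name
    ORDER BY days_since_backup DESC;

A NULL in days_since_backup means the database has never been backed up, at least not since msdb history was last purged.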

How to measure tempdb utilization?

Posted: 04 Jul 2013 08:15 PM PDT

How do I measure tempdb utilization in MS SQL Server?
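A reasonable starting point is sys.dm_db_file_space_usage, which on 2008-era versions reports tempdb only and breaks usage into user objects, internal objects, and the version store:

    SELECT SUM(user_object_reserved_page_count)     * 8 / 1024.0 AS user_objects_mb,
           SUM(internal_object_reserved_page_count) * 8 / 1024.0 AS internal_objects_mb,
           SUM(version_store_reserved_page_count)   * 8 / 1024.0 AS version_store_mb,
           SUM(unallocated_extent_page_count)       * 8 / 1024.0 AS free_mb
    FROM tempdb.sys.dm_db_file_space_usage;

sys.dm_db_session_space_usage and sys.dm_db_task_space_usage then break the same numbers down per session and per task when a culprit needs to be found.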

SQL Server replication service Status

Posted: 04 Jul 2013 09:48 PM PDT

Hi, how can I get the status of the SQL Server replication service with a T-SQL statement?
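Strictly speaking, replication is not its own Windows service; the replication agents run as SQL Server Agent jobs. On SQL Server 2008 R2 SP1 and later, the service states are exposed through sys.dm_server_services (an assumption about your version; older builds would need the undocumented xp_servicecontrol or WMI instead):

    SELECT servicename, status_desc, startup_type_desc, last_startup_time
    FROM sys.dm_server_services;

For the replication agents themselves, the job activity tables in msdb (for example msdb.dbo.sysjobactivity joined to msdb.dbo.sysjobs) show whether each agent job is currently running.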
