Monday, July 1, 2013

[how to] Any solution to fix circular replication fail-over?

Any solution to fix circular replication fail-over?

Posted: 01 Jul 2013 08:45 PM PDT

Here's my current situation ...

I've developed a POS system for our corporation, which has 25+ branches across the country; each branch has its own POS database on a WAMP server.

I want every event that occurs at any branch to be propagated to the other branches.

Some people have advised me to use a ring topology (circular replication), but it has a big problem: when one server goes down, replication stops and events end up in an infinite loop.

Please advise.

SUM 1 column with same ID and SELECT multiple columns from multiple tables

Posted: 01 Jul 2013 07:00 PM PDT

I would like to SUM the values of the TimeTook column when the MPRO column is the same. I also need to grab data from multiple tables and select multiple columns. I have two queries with the information and need to know how to combine them into one.

    SQL1 = "SELECT MPRO.MPRO, SUM(MPRO.TimeTook) AS TimeTook, MPRO.Carrier, MPRO.Product, MPRO.Province, MPRO.Record, MASTER.DateIn, JOBS.JobTypeCode "
    SQL2 = "FROM Worklog.Worklog.MPRO MPRO INNER JOIN WorkLog.WorkLog.MASTER "
    SQL3 = "MASTER ON MPRO.Record=MASTER.Record JOIN WorkLog.WorkLog.JOBS "
    SQL4 = "JOBS ON MPRO.MPro=JOBS.MPro WHERE (MASTER.DateIn>={ts '" + Str(StartYear) + "-" + StartMonthStr + "-" + StartDayStr + " 00:00:00'} "
    SQL5 = "AND MASTER.DateIn<={ts '" + Str(EndYear) + "-" + EndMonthStr + "-" + EndDayStr + " 00:00:01'}) "
    SQL7 = "GROUP BY MPRO.MPRO, MPRO.Carrier, MPRO.Product, MPRO.Province, MPRO.Record, MASTER.DateIn, JOBS.JobTypeCode"

    SQLStatement = "SELECT MPRO, Carrier, Product, Province, SUM(TimeTook) AS TimeTook FROM MPRO GROUP BY MPRO, Carrier, Product, Province"
    SQLStatement = SQL1 + SQL2 + SQL3 + SQL4 + SQL5 + SQL7

Duplicate GROUP_CONCAT values

Posted: 01 Jul 2013 02:13 PM PDT

I'm writing a plugin for WordPress using a custom post type and an additional alias table to hold the repetitions of the post's events. The problem is duplicate post meta values.

wp_postmeta table:

meta_id | post_id | meta_key           | meta_value
--------+---------+--------------------+---------------------
2748    | 5456    | calendar_venue_id  | 5
2749    | 5456    | calendar_start_date| 2013-06-28 21:13:00
2750    | 5456    | calendar_end_date  | 2013-06-28 22:13:00
2751    | 5456    | calendar_link      | null
2752    | 5456    | calendar_recur     | D
2753    | 5456    | _edit_last         | 1
2754    | 5456    | _edit_lock         | 1372460794:1
2755    | 5456    | calendar_repeat_on | 16
2756    | 5456    | calendar_repeats   | 1
2757    | 5456    | calendar_recur_end | 1

wp_posts table:

ID   | post_author | post_date           | post_date_gmt       | post_content  | post_title           | post_excerpt | post_status | post_type
-----+-------------+---------------------+---------------------+---------------+----------------------+--------------+-------------+----------
5456 | 1           | 2013-06-28 21:48:18 | 2013-06-28 21:48:18 | werk w/ me!!! | come one alias table | null         | publish     | event

wp_calendar_alias

alias_event | alias_venue | alias_start_date    | alias_end_date
------------+-------------+---------------------+---------------------
5456        | 5           | 2013-06-28 21:06:00 | 2013-06-28 22:06:00
5456        | 5           | 2013-06-28 21:13:00 | 2013-06-28 22:13:00
5456        | 5           | 2013-06-29 21:06:00 | 2013-06-29 22:06:00
5456        | 5           | 2013-06-29 21:13:00 | 2013-06-29 22:13:00
5456        | 5           | 2013-06-30 21:06:00 | 2013-06-30 22:06:00
5456        | 5           | 2013-06-30 21:13:00 | 2013-06-30 22:13:00
5456        | 5           | 2013-07-01 21:07:00 | 2013-07-01 22:07:00
5456        | 5           | 2013-07-01 21:13:00 | 2013-07-01 22:13:00
5456        | 5           | 2013-07-02 21:07:00 | 2013-07-02 22:07:00
5456        | 5           | 2013-07-02 21:13:00 | 2013-07-02 22:13:00
5456        | 5           | 2013-07-03 21:07:00 | 2013-07-03 22:07:00
5456        | 5           | 2013-07-03 21:13:00 | 2013-07-03 22:13:00
5456        | 5           | 2013-07-04 21:07:00 | 2013-07-04 22:07:00
5456        | 5           | 2013-07-04 21:13:00 | 2013-07-04 22:13:00
5456        | 5           | 2013-07-05 21:07:00 | 2013-07-05 22:07:00
5456        | 5           | 2013-07-05 21:13:00 | 2013-07-05 22:13:00
5456        | 5           | 2013-07-06 21:07:00 | 2013-07-06 22:07:00
5456        | 5           | 2013-07-06 21:13:00 | 2013-07-06 22:13:00
5456        | 5           | 2013-07-07 21:13:00 | 2013-07-07 22:13:00
5456        | 5           | 2013-07-08 21:13:00 | 2013-07-08 22:13:00
5456        | 5           | 2013-07-09 21:13:00 | 2013-07-09 22:13:00
5456        | 5           | 2013-07-10 21:13:00 | 2013-07-10 22:13:00

MySQL query:

SELECT GROUP_CONCAT(DISTINCT postmeta.meta_key ORDER BY postmeta.meta_key ASC) AS post_meta_keys,
  GROUP_CONCAT(postmeta.meta_value ORDER BY postmeta.meta_key ASC) AS post_meta_values,
  GROUP_CONCAT(DISTINCT alias.alias_start_date ORDER BY alias.alias_start_date ASC) AS alias_start_dates,
  GROUP_CONCAT(DISTINCT alias.alias_end_date ORDER BY alias.alias_start_date ASC) AS alias_end_dates,
  posts.*,
  venues.*
FROM wp_posts AS posts
LEFT JOIN wp_postmeta AS postmeta ON posts.ID = postmeta.post_id
LEFT JOIN wp_calendar_alias AS alias ON posts.ID = alias.alias_event
LEFT JOIN wp_calendar_venues AS venues ON alias.alias_venue = venues.venue_id
WHERE posts.ID = 5456 AND posts.post_type = 'event'
GROUP BY postmeta.post_id

The query gives me a single-row result, but post_meta_values repeats the same values. post_meta_keys would as well, except that I applied DISTINCT to it. I can't apply DISTINCT to post_meta_values, because different meta_keys may legitimately have the same meta_value.

I am still a MySQL novice, so I am not sure what to do. It looks like it applies the alias results to every postmeta result, and I'm not sure how to prevent that.
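For reference, this kind of repetition comes from join fan-out: every postmeta row is paired with every alias row before GROUP_CONCAT runs, so each value repeats once per alias row. One common fix is to aggregate each side in its own derived table first and join the one-row results. A minimal sketch of the pattern with Python's sqlite3 (tables simplified, not the real WordPress schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE postmeta (post_id INTEGER, meta_key TEXT, meta_value TEXT);
CREATE TABLE alias (alias_event INTEGER, alias_start_date TEXT);
INSERT INTO postmeta VALUES (5456, 'venue', '5'), (5456, 'recur', 'D');
INSERT INTO alias VALUES (5456, '2013-06-28'), (5456, '2013-06-29');
""")

# Naive join: 2 meta rows x 2 alias rows = 4 rows feed the aggregate,
# so every meta value is concatenated twice.
naive = cur.execute("""
    SELECT GROUP_CONCAT(pm.meta_value)
    FROM postmeta pm JOIN alias a ON a.alias_event = pm.post_id
    WHERE pm.post_id = 5456
""").fetchone()[0]

# Fix: aggregate each side separately, then join the one-row results,
# so no fan-out happens before the GROUP_CONCATs run.
fixed = cur.execute("""
    SELECT m.meta_values, al.start_dates
    FROM (SELECT post_id, GROUP_CONCAT(meta_value) AS meta_values
          FROM postmeta GROUP BY post_id) m
    JOIN (SELECT alias_event, GROUP_CONCAT(alias_start_date) AS start_dates
          FROM alias GROUP BY alias_event) al
      ON al.alias_event = m.post_id
""").fetchone()
print(naive)  # four items (duplicated)
print(fixed)  # two items per column, no duplication
```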

Thanks, Anthony

Why monitor bytes sent and received on mysql

Posted: 01 Jul 2013 03:02 PM PDT

I want to know why we should monitor bytes sent and received on MySQL. I hope I haven't asked a vague question.

Thanks.

SQL Server freezes (because of application), need logging

Posted: 01 Jul 2013 04:58 PM PDT

We have an application running on top of our SQL Server 2005 instance, and a couple of times a week this application (without warning) will cause the SQL Server to freeze. I can't even restart the SQL Server service; I have to restart the entire machine.

Needless to say, I can't open a query window to run sp_who2 to find the cause. It could be days before the issue shows up again. Is there any type of logging I can put in place to better track down what is causing the SQL Server to freeze up?

exec xp_readerrorlog only shows me what has happened after the restart so it isn't much help.

At the time it freezes, CPU is pegged at 90-97% and memory is maxed at 8 GB. The server has 12 GB, but max server memory is set to 8192 MB for SQL Server.

Grouping results based on link table

Posted: 01 Jul 2013 01:52 PM PDT

I have three tables:

users

+----+-------+
| id | name  |
+----+-------+
| 1  | dave  |
| 2  | cher  |
| 3  | meg   |
| 4  | sarah |
+----+-------+

groups

+----+-------+
| id | name  |
+----+-------+
| 1  | admin |
| 2  | super |
+----+-------+

user_groups

+----+---------+----------+
| id | user_id | group_id |
+----+---------+----------+
| 1  | 1       | 1        |
| 1  | 2       | 1        |
+----+---------+----------+

I want to create a query so that I return an array of group names which contains an array of users. I don't want to return groups which have no users, but I would like to return users which have no group.

admin
  - dave
  - cher
- meg
- sarah

At the moment I am getting the groups which have items first.

SELECT name FROM groups
INNER JOIN user_groups ON groups.id = user_groups.group_id

Then iterating through the groups and adding users.

SELECT id, name FROM users
INNER JOIN user_groups ON users.id = user_groups.user_id
WHERE user_groups.group_id = $group_id

Is there a way to combine these queries?
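For reference, the two queries can usually be combined into one LEFT JOIN starting from users (keeping users with no group, while groups with no members simply never match) and folded into the desired structure in application code. A minimal sketch with Python's sqlite3, using the question's sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE users (id INTEGER, name TEXT);
CREATE TABLE groups (id INTEGER, name TEXT);
CREATE TABLE user_groups (user_id INTEGER, group_id INTEGER);
INSERT INTO users VALUES (1,'dave'),(2,'cher'),(3,'meg'),(4,'sarah');
INSERT INTO groups VALUES (1,'admin'),(2,'super');
INSERT INTO user_groups VALUES (1,1),(2,1);
""")

# LEFT JOIN from users keeps users with no group; groups with no
# members never appear because they match no user row.
rows = cur.execute("""
    SELECT g.name AS group_name, u.name AS user_name
    FROM users u
    LEFT JOIN user_groups ug ON ug.user_id = u.id
    LEFT JOIN groups g ON g.id = ug.group_id
    ORDER BY g.name, u.name
""").fetchall()

# Fold the flat rows into {group: [users]} in application code;
# ungrouped users land under the None key.
result = {}
for group_name, user_name in rows:
    result.setdefault(group_name, []).append(user_name)
print(result)
```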

How to optimize table_cache?

Posted: 01 Jul 2013 12:22 PM PDT

I searched Google but couldn't find an exact answer: some set this value to 10000 or even 20000, while others say that even 1000 is too much for 1 GB of RAM. Confusing.

My Opened_tables counter keeps growing. MySQLTuner says I should raise the table_cache value. I tried raising it to 2K, then 3K, then 4K, then set it to 512 to see what happens. Here is the report:

Currently running supported MySQL version 5.0.67-community-nt-log
Operating on 32-bit architecture with less than 2GB RAM
Archive Engine Installed
Berkeley DB Engine Not Installed
Federated Engine Installed
InnoDB Engine Installed
ISAM Engine Not Installed
NDBCLUSTER Engine Not Installed
Data in InnoDB tables: 12M (Tables: 3)
Data in MEMORY tables: 380K (Tables: 2)
Data in MyISAM tables: 83M (Tables: 84)
Total fragmented tables: 16
All database users have passwords assigned
Up for: 16h 12m 55s (161K q [2.000 qps], 6K conn, TX: 281M, RX: 22M)
Reads / Writes: 69% / 31%
Total buffers: 128.0M global + 2.1M per thread (100 max threads)
Maximum possible memory usage: 334.3M (16% of installed RAM)
Slow queries: 1% (11/161K)
Highest usage of available connections: 10% (10/100)
Key buffer size / total MyISAM indexes: 32.0M/4.1M
Key buffer hit rate: 97% (849K cached / 20K reads)
Query cache efficiency: 28% (26K cached / 93K selects)
Query cache prunes per day: 0
Sorts requiring temporary tables: 1% (1 temp sorts / 21K sorts)
Temporary tables created on disk: 1% (2 on disk / 15K total)
Thread cache hit rate: 99% (10 created / 6K connections)
Table cache hit rate: 5% (139 open / 2K opened)
Open file limit used: 20% (232/1K)
Table locks acquired immediately: 99% (116K immediate / 116K locks)
InnoDB data size / buffer pool: 12.5M/32.0M
Run OPTIMIZE TABLE to defragment tables for better performance
MySQL started within last 24 hours - recommendations may be inaccurate
Increase table_cache gradually to avoid file descriptor limits
table_cache (> 512)
Scan Complete

mysql> show global STATUS LIKE 'open%';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Open_files    | 222   |
| Open_streams  | 0     |
| Open_tables   | 127   |
| Opened_tables | 2335  |
+---------------+-------+
4 rows in set (0.00 sec)

But Opened_tables is still growing. Should I raise it again, or is it not worth the attention?

Also, some people say the problem is usually the creation of a lot of temp tables; as we can see from the stats, that's not my case. What is the right ratio between table_cache and RAM?

Thanks.

Will table partitioning speed up my queries?

Posted: 01 Jul 2013 04:56 PM PDT

My database is set up with what we call our staging tables and our live tables. Currently we have about 2 million rows in a table called products. Our customers, which we call manufacturers, load their data into the staging tables, and when they are done updating, adding, and deleting data we push it into our live tables.

These live tables feed a web service for a mobile app and websites. The pushing of data from staging to live consists of deleting all the data in the live tables and inserting all of the data from the staging tables. All of this is segregated by a column called ManufacturerID that exists in every table.

Some manufacturers have 500 products, some have 75,000. Because of this, we sometimes have really slow web service responses from paging through all 2 million records. Deleting the data from the live tables also seems to be getting ridiculously slow.

Would partitioning my products table by ManufacturerID help this situation? From what I have read this would basically mean when I am querying my products I would only be querying a small subset of the database on that ManufacturerID and therefore see a huge improvement in overall response time.

Where should a default setting field be?

Posted: 01 Jul 2013 10:40 AM PDT

I am creating a database to track occurrences of incidents on buses, based on an existing system. Currently the drivers can set a default bus number, which is used to pre-populate the bus number on the incident form; they can change it if they are driving another bus. This is a selection made by the driver, not a bus assigned to them, per se. I was going to put this field in the staff table, not the bus table. However, someone said I should put it in the bus table instead. I guess it would technically work either way, but I don't feel the field belongs to the bus - it belongs to the staff member. Am I overthinking this?

Create multiple columns from a single column filtering criteria?

Posted: 01 Jul 2013 11:25 AM PDT

Thanks in advance for your time. What I need to achieve is a multi-column result from a single query. The situation is this:

I have a table storing the following:

id | user_id | revision
1  | 1       | Approved
2  | 1       | Rejected
3  | 1       | Pending
4  | 1       | Pending
5  | 1       | Pending

The result I need to get is:

Total | User | Pending | Rejected | Approved
5     | 1    | 3       | 1        | 1

I have tried using Group by like this:

SELECT count(id) Total, user_id User, revision FROM table1 GROUP BY user_id, revision  

But the output is far from what I am expecting to get:

Total | User | revision
3     | 1    | Pending
1     | 1    | Rejected
1     | 1    | Approved

The idea is to get a properly shaped result back, to avoid burdening the server-side code with unnecessary processing; if the database can do the work in one call, that would be great.

I tried Googling for hours, but all I could find were results for 'single column from multiple rows', which is not what I need. Thanks for your help in advance.

The final result set with many users should look like this:

Total | User | Pending | Rejected | Approved
5     | 1    | 3       | 1        | 1
5     | 2    | 4       | 0        | 1
5     | 7    | 2       | 3        | 0

Update: After more research I found this: http://stackoverflow.com/questions/14172282/sql-query-needed-to-get-result-in-two-columns-grouped-by-type-and-percentage-in

This is almost what I need. After some editing I still can't get it to return solid numbers instead of percentages; I lack the knowledge of how grouped CASE expressions work. I will keep trying; if any of you can help, it is greatly appreciated.

Solution: I managed to do this. It was easy, but my lack of knowledge got in the way. I read the chapter "Generating Summaries" in the MySQL Cookbook, which explains that you can put COUNT, SUM, and other aggregates over conditional expressions inside your query to produce the values for the columns you define. This is the query I ended up using:

SELECT user_id AS userid, COUNT(id) AS Total,
  COUNT(IF(revision = 'pending', 1, NULL)) AS pending,
  COUNT(IF(revision = 'approved', 1, NULL)) AS approved,
  COUNT(IF(revision = 'rejected', 1, NULL)) AS rejected
FROM table1
GROUP BY user_id

This fills in each column by counting only the rows that meet the criteria.
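The trick can be verified quickly with Python's sqlite3 (SQLite has no IF(), so this sketch uses the equivalent CASE WHEN, and matches the revision values case-sensitively; the data is the question's sample):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE table1 (id INTEGER, user_id INTEGER, revision TEXT);
INSERT INTO table1 VALUES
  (1,1,'Approved'),(2,1,'Rejected'),(3,1,'Pending'),
  (4,1,'Pending'),(5,1,'Pending');
""")

# COUNT ignores NULLs, so counting a conditional expression pivots
# each revision value into its own column.
row = cur.execute("""
    SELECT user_id, COUNT(id) AS total,
           COUNT(CASE WHEN revision = 'Pending'  THEN 1 END) AS pending,
           COUNT(CASE WHEN revision = 'Rejected' THEN 1 END) AS rejected,
           COUNT(CASE WHEN revision = 'Approved' THEN 1 END) AS approved
    FROM table1
    GROUP BY user_id
""").fetchone()
print(row)  # (1, 5, 3, 1, 1)
```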

How would you track DB storage use and then extrapolate it?

Posted: 01 Jul 2013 09:26 AM PDT

I need to track the size of the various DBs we have and their growth rate, so that we can project where they will be at certain points in time. I'm starting with Oracle 11g, but will eventually apply this to other DBMSs like DB2 and Sybase.

I'm not even sure where to begin or what commands/queries I'll need to run.
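One low-tech way to extrapolate, once size samples are being collected on a schedule (e.g. by summing segment bytes from DBA_SEGMENTS on Oracle), is a least-squares line over (day, size) pairs. A minimal sketch with made-up sample data:

```python
# Fit a straight line through (day_number, size_gb) samples and
# project the size at a future day.  The sample data is made up.
samples = [(0, 100.0), (7, 104.5), (14, 109.0), (21, 113.5)]

n = len(samples)
sum_x = sum(x for x, _ in samples)
sum_y = sum(y for _, y in samples)
sum_xy = sum(x * y for x, y in samples)
sum_xx = sum(x * x for x, _ in samples)

# Least-squares slope and intercept.
slope = (n * sum_xy - sum_x * sum_y) / (n * sum_xx - sum_x ** 2)
intercept = (sum_y - slope * sum_x) / n

def projected_size(day):
    """Projected size (GB) at the given day number."""
    return intercept + slope * day

print(projected_size(90))  # roughly 157.9 GB at day 90
```

A real rollout would store the samples in a table and refit periodically; growth is rarely perfectly linear, so treat the projection as a rough planning number.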

What does the wait type PREEMPTIVE_XE_CALLBACKEXECUTE mean?

Posted: 01 Jul 2013 10:15 AM PDT

I have two instances in the last week where a DROP command for an Extended Events session has taken over a day to complete, sitting with the wait: PREEMPTIVE_XE_CALLBACKEXECUTE.

Can anyone explain what this wait type means?

More Background Info: In the first instance, I ran a T-SQL command to drop the session and it completed a day later. Using sp_whoisactive, I saw that the query was waiting on PREEMPTIVE_XE_CALLBACKEXECUTE.

During this time, Object Explorer's queries to gather metadata were being blocked and receiving lock timeouts, but there were no other complaints (or so I thought at the time).

I tried to drop another session on Friday, and the same behavior occurred, except it didn't just go away after a day as with the first event. Instead, I found out this morning that the client application could not connect. It was being blocked by the DROP EVENT SESSION query.

A restart of the SQL Service cleared out the blocking query.

I can't find anything that helps to diagnose what this wait type is... or why my event sessions won't drop like they should. Can you help?

Server Info: SQL 2008 R2 Enterprise with SP2

bad ELF interpreter while installing Oracle 9.2.0.4 in Red Hat 6.2

Posted: 01 Jul 2013 08:56 AM PDT

Fellow DBAs

When trying to run runInstaller for Oracle 9.2.0.4 in a Red Hat server I get the following error:

./runInstaller: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory

Both the installer disks and the OS are 64-bit.

Since we haven't installed this version of Oracle on this version of Red Hat before, we wonder whether the OS is incorrectly installed (i.e., missing libraries) or whether Oracle 9.2.0.4 simply won't run on Red Hat 6.2.

Currently, we don't have Oracle support.

Please shed some light on this regard.

Transactional Replication - Snapshot metrics

Posted: 01 Jul 2013 08:00 PM PDT

I have an application that is using Transactional Replication. We are using snapshots to initialize the subscriptions. All publishers and subscribers are using SQL Server 2008.

Due to the way our deployment works, I end up having to reinitialize the subscribers every time we do a build. This is time consuming and annoys the business, so I have to make sure that our developers are aware of the impact their changes will have on the size of the snapshots and the time it will take to apply them. I also want to measure the impact of settings changes that I make on the publications and agents (bcp threads, batch size, etc).

The metrics I want to capture are:

  1. The size of the snapshot generated for each publication.
  2. The time it took to generate the snapshot.
  3. The time it took to apply the snapshot at the subscriber.

Is it possible to get this information by querying the distribution database? I realize that I could get the snapshot size by looking at the filesystem after it is generated, but I would prefer to avoid that if possible.

How to determine Oracle LOB storage footprint?

Posted: 01 Jul 2013 08:41 AM PDT

With SECUREFILE storage I can specify whether I want compression (and the level of it) and deduplication, and it's all a trade-off between time and space.

Timing is fairly easy to profile but what's the easiest way to get a reasonably accurate measurement of how much space a specific LOB column takes up?

Where to store versioned/historized records?

Posted: 01 Jul 2013 11:38 AM PDT

In our application we version records using triggers. The application is expected to handle millions of records in the future, and the history/versioning tables are there to ensure an audit trail for various transactions.

Currently everything is stored into one database. My concern is that these tables will bloat the transactional database, and I really want to move all record history into another database mainly because:

  1. The transactional database can be backed up more frequently, since it is smaller.
  2. Indexes can be scheduled to be rebuilt more frequently on the transactional DB.
  3. The history can be archived more easily if needed.

Are there any performance advantages to choosing this strategy or is the extra database another moving part which is not needed? What is the prescribed solution for this sort of thing?

Update column value in entire database

Posted: 01 Jul 2013 11:01 AM PDT

In my database, I have around 30 tables, each with an emp_number (INT) column.

How can I update the value of emp_number in all 30 tables?
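There is no single statement that updates a column across all tables; the usual pattern is to generate one UPDATE per table from information_schema.COLUMNS and run them. A minimal sketch of the generation step (the table list, schema name, and old/new values are placeholders):

```python
# Generate one UPDATE statement per table that has an emp_number
# column.  In MySQL the table list would come from something like:
#   SELECT TABLE_NAME FROM information_schema.COLUMNS
#   WHERE TABLE_SCHEMA = 'mydb' AND COLUMN_NAME = 'emp_number';
# Here it is hardcoded so the sketch runs standalone.
tables = ["employees", "payroll", "timesheets"]

old_value, new_value = 1001, 2001  # placeholder employee numbers

statements = [
    f"UPDATE `{t}` SET emp_number = {new_value} "
    f"WHERE emp_number = {old_value};"
    for t in tables
]
for stmt in statements:
    print(stmt)
```

Running the generated statements inside one transaction keeps the 30 tables consistent if any single UPDATE fails.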

Filtering results by date [migrated]

Posted: 01 Jul 2013 08:32 AM PDT

I have the following query:

SELECT STOCK.ITEM_DATE, STOCK.AMOUNT, STOCK.OPERATIONTYPE, ITEM.ITEM_NAME
FROM ITEM
INNER JOIN STOCK ON STOCK.ITEM_ID = ITEM.ITEM_ID
WHERE STOCK.ITEM_DATE > to_date('06/26/2013','mm/dd,yyyy')
  AND ITEM.CATEGORY_ID = 1

I want to select all items before '06/26/2013' INCLUDING '06/26/2013', but the result is before '06/26/2013' NOT including '06/26/2013'.

Any ideas how to do that? I tried >=, but nothing changed.

MySQL Master-Slave setup: master is not indexed and slave is indexed

Posted: 01 Jul 2013 12:56 PM PDT

Is it possible to have a MySQL master-slave setup where the master has no indexes (for faster insertion) and the slave will be indexed?

Thanks in advance!

"Mysql2::Error: Table doesn't exist" after power outage

Posted: 01 Jul 2013 09:31 AM PDT

The power went out while my site was running, and now when I try to access the site running locally on my Mac I get this error:

ActiveRecord::StatementInvalid: Mysql2::Error: Table 'mysite.users' doesn't exist:  

Is there anything I can do other than dropping the database and recreating it?

EDIT:

I'm getting this when trying to drop the database:

rake db:drop
Couldn't drop MySite : #<Mysql2::Error: Unknown database 'mysite'>

InnoDB tables inaccessible after reboot

Posted: 01 Jul 2013 11:32 AM PDT

After each reboot I can't see my InnoDB tables. From the command line I can list the tables, but I cannot access them.

mysql> show tables;
+----------------------+
| Tables_in_xpl_ticket |
+----------------------+
| active_list          |
| bill_block           |
| bill_block_list      |
| block                |
| block_list           |
| box                  |
| cisco_switch         |
| mac_add              |
| mac_change           |
| month                |
| new_connect          |
| new_user             |
| open                 |
| package_change       |
| paid_list            |
| pay                  |
| problem              |
| re_open              |
| refund               |
| ticket               |
| user                 |
| user_log             |
+----------------------+
22 rows in set (0.00 sec)

But when I try to access a table, it says the table doesn't exist:

mysql> select * from active_list;
ERROR 1146 (42S02): Table 'xpl_ticket.active_list' doesn't exist

Edit by RolandoMySQLDBA

Please run the following in MySQL:

SELECT VERSION();
SELECT CONCAT('[',table_name,']') tbl_name
FROM information_schema.tables WHERE table_schema='xpl_ticket';

Please run the following in the OS:

cd /var/lib/mysql/xpl_ticket
ls -l

Hi RolandoMySQLDBA,

Thank you for your quick reply. Here is all those outputs.

mysql> SELECT VERSION();
+-------------------------+
| VERSION()               |
+-------------------------+
| 5.5.31-0ubuntu0.13.04.1 |
+-------------------------+
1 row in set (0.00 sec)

mysql> SELECT CONCAT('[',table_name,']') tbl_name
    -> FROM information_schema.tables WHERE table_schema='xpl_ticket';
+-------------------+
| tbl_name          |
+-------------------+
| [active_list]     |
| [bill_block]      |
| [bill_block_list] |
| [block]           |
| [block_list]      |
| [box]             |
| [cisco_switch]    |
| [mac_add]         |
| [mac_change]      |
| [month]           |
| [new_connect]     |
| [new_user]        |
| [open]            |
| [package_change]  |
| [paid_list]       |
| [pay]             |
| [problem]         |
| [re_open]         |
| [refund]          |
| [ticket]          |
| [user]            |
| [user_log]        |
+-------------------+
22 rows in set (0.03 sec)

My data directory is not in the /var/lib/mysql path; I have changed it to /var/www/xampp/mysql/data. So I executed the following there:

Lurid / # cd /var/www/xampp/mysql/data/xpl_ticket/
Lurid xpl_ticket # ls -l
total 265
-rwxrwxrwx 1 root root 9272 Jun  6 12:48 active_list.frm
-rwxrwxrwx 1 root root 8654 Jun  6 12:48 bill_block.frm
-rwxrwxrwx 1 root root 9272 Jun  6 12:48 bill_block_list.frm
-rwxrwxrwx 1 root root 8654 Jun  6 12:48 block.frm
-rwxrwxrwx 1 root root 9272 Jun  6 12:48 block_list.frm
-rwxrwxrwx 1 root root 8802 Jun  6 12:48 box.frm
-rwxrwxrwx 1 root root 8648 Jun  6 12:48 cisco_switch.frm
-rwxrwxrwx 1 root root   65 Jun  6 12:48 db.opt
-rwxrwxrwx 1 root root 8690 Jun  6 12:48 mac_add.frm
-rwxrwxrwx 1 root root 8698 Jun  6 12:48 mac_change.frm
-rwxrwxrwx 1 root root 8618 Jun  6 12:48 month.frm
-rwxrwxrwx 1 root root 8716 Jun  6 12:48 new_connect.frm
-rwxrwxrwx 1 root root 9002 Jun  6 12:48 new_user.frm
-rwxrwxrwx 1 root root 8680 Jun  6 12:48 open.frm
-rwxrwxrwx 1 root root 8724 Jun  6 12:48 package_change.frm
-rwxrwxrwx 1 root root 8692 Jun  6 12:48 paid_list.frm
-rwxrwxrwx 1 root root 8592 Jun  6 12:48 pay.frm
-rwxrwxrwx 1 root root 8802 Jun  6 12:48 problem.frm
-rwxrwxrwx 1 root root 8670 Jun  6 12:48 refund.frm
-rwxrwxrwx 1 root root 8714 Jun  6 12:48 re_open.frm
-rwxrwxrwx 1 root root 8900 Jun  6 12:48 ticket.frm
-rwxrwxrwx 1 root root 8704 Jun  6 12:48 user.frm
-rwxrwxrwx 1 root root 8808 Jun  6 12:48 user_log.frm

Hi Mannoj

These tables were created in this same database.

I have some errors in my /var/log/mysql/error.log. I don't understand what they mean. Here they are:

130611 12:41:28 [Note] /usr/sbin/mysqld: Normal shutdown
130611 12:41:29 [Note] Event Scheduler: Purging the queue. 0 events
130611 12:41:30  InnoDB: Starting shutdown...
130611 12:41:31  InnoDB: Shutdown completed; log sequence number 1595675
130611 12:41:31 [Note] /usr/sbin/mysqld: Shutdown complete

130611 12:43:15 [Note] Plugin 'FEDERATED' is disabled.
130611 12:43:15 InnoDB: The InnoDB memory heap is disabled
130611 12:43:15 InnoDB: Mutexes and rw_locks use GCC atomic builtins
130611 12:43:15 InnoDB: Compressed tables use zlib 1.2.7
130611 12:43:15 InnoDB: Using Linux native AIO
130611 12:43:15 InnoDB: Initializing buffer pool, size = 128.0M
130611 12:43:15 InnoDB: Completed initialization of buffer pool
InnoDB: The first specified data file ./ibdata1 did not exist:
InnoDB: a new database to be created!
130611 12:43:15  InnoDB: Setting file ./ibdata1 size to 10 MB
InnoDB: Database physically writes the file full: wait...
130611 12:43:15  InnoDB: Log file ./ib_logfile0 did not exist: new to be created
InnoDB: Setting log file ./ib_logfile0 size to 5 MB
InnoDB: Database physically writes the file full: wait...
130611 12:43:16  InnoDB: Log file ./ib_logfile1 did not exist: new to be created
InnoDB: Setting log file ./ib_logfile1 size to 5 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Doublewrite buffer not found: creating new

Thanks to both of you.

Fulltext stoplist replication

Posted: 01 Jul 2013 07:45 PM PDT

In MS SQL Server 2008 R2, I am replicating a table with a fulltext index on it, but the stoplist associated with the replicated fulltext index doesn't replicate.

Is there any possibility to replicate stoplist also?

Process attempted to unlock a resource it does not own

Posted: 01 Jul 2013 01:45 PM PDT

SQL Server 2005 SP4 32-Bit

I have a DBCC CHECKDB job running nightly. Last night, soon after the job started, I got the errors below. The database is NOT in suspect mode, and CHECKDB comes back clean when I run it now. The database is fully accessible. Should I be concerned? I'd hate to go back to a backup at this point.

2013-04-02 02:10:55.53 spid56  Error: 1203, Severity: 20, State: 1.
2013-04-02 02:10:55.53 spid56  Process ID 56 attempted to unlock a resource it does not own: PAGE: 34:1:388664. Retry the transaction, because this error may be caused by a timing condition. If the problem persists, contact the database administrator.
2013-04-02 02:10:55.58 spid56  Error: 3314, Severity: 17, State: 3.
2013-04-02 02:10:55.58 spid56  During undoing of a logged operation in database 'MY_DATABASE_NAME', an error occurred at log record ID (1342973:12519:37). Typically, the specific failure is logged previously as an error in the Windows Event Log service. Restore the database or file from a backup, or repair the database.

compare the same table

Posted: 01 Jul 2013 11:45 AM PDT

I am facing an issue with the following query. When I execute it, it takes a very long time. I broke the query into two parts and compared the results with a shell script, but is there any way to do it in one query?

Any suggestion welcome.

select distinct substring(mobile_num,3,12)
from mobile
where status ='INACTIVE'
  and date(unsub_date) >= DATE(CURDATE() - INTERVAL 90 DAY)
  and mobile_num not in (select distinct mobile_num from mobile where status='ACTIVE')
order by updtm;
| mobile_num  | varchar(12) |
| keyword     | varchar(45) |
| sub_date    | datetime    |
| unsub_date  | datetime    |
| circle_name | varchar(45) |
| type        | varchar(45) |
| status      | varchar(45) |
| operator    | varchar(45) |
| act_mode    | varchar(45) |
| deact_mode  | varchar(45) |
| id          | bigint(20)  |
| updtm       | timestamp   |
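As an aside, the NOT IN subquery is often the slow part on older MySQL versions; a common rewrite is a self anti-join via LEFT JOIN ... IS NULL. A minimal sketch of the pattern with Python's sqlite3 (columns simplified to the two that matter):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE mobile (mobile_num TEXT, status TEXT);
INSERT INTO mobile VALUES
  ('111', 'INACTIVE'), ('222', 'INACTIVE'),
  ('222', 'ACTIVE'),   ('333', 'ACTIVE');
""")

# Anti-join: keep INACTIVE numbers that have no matching ACTIVE row,
# instead of NOT IN (SELECT ...), which MySQL 5.x may re-evaluate
# once per outer row.
rows = cur.execute("""
    SELECT DISTINCT i.mobile_num
    FROM mobile i
    LEFT JOIN mobile a
      ON a.mobile_num = i.mobile_num AND a.status = 'ACTIVE'
    WHERE i.status = 'INACTIVE' AND a.mobile_num IS NULL
""").fetchall()
print(rows)  # [('111',)]
```

An index on (mobile_num, status) would let the anti-join probe be an index lookup on the real table.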

sql server database sharding - what to do with common data / non sharded data

Posted: 01 Jul 2013 06:45 PM PDT

We have a very large enterprise-level database. As part of our business model, all web users hit our web servers at the same time each month, which in turn hammers our SQL box. The traffic is very heavy and keeps growing as the company grows. SQL proc optimization has been performed, and the hardware has already been scaled up to a very high level.

We are looking to shard the database now to ensure that we can handle company growth and future loads.

We have decided what particular data should be sharded. It is a subset of our database which is highly utilized.

However, my question is about the non-sharded data which is common/universal. Examples of this kind of data might be an Inventory table, or possibly an Employee table, user table, etc.

I see two options to handle this common/universal data:

1) Design 1 - Place the common/universal data in an external database. All writes will occur there. This data will then be replicated down to each shard, allowing each shard to read it and inner join to it in T-SQL procs.

2) Design 2 - Give each shard its own copy of all common/universal data. Let each shard write locally to these tables and use SQL merge replication to update/sync this data on all the other shards.

Concerns about design #1:

1) Transactional issues: If you have a situation in which you must write or update data in a shard and then write/update a common/universal table in one stored proc, you can no longer do this easily. The data now lives on separate SQL instances and databases. You may need to involve MS DTC to see if you can wrap these writes into a distributed transaction, since they span databases. Performance is a concern here, and rewrites may be needed for procs that write to both sharded and common data.

2) A loss of referential integrity: it is not possible to enforce referential integrity across databases.

3) Recoding large areas of the system so that it knows to write common data to the new universal database but read common data from the shards.

4) Increased database trips: as in #1 above, when you must update sharded data and common data, you will make multiple round trips, since the data is now in separate databases. There is some network latency here, but I am not as worried about this as the three issues above.
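Concern #1 above can be sketched as follows. This is only an illustration, assuming MS DTC is enabled and a linked server named [CommonServer] plus the table and variable names are all hypothetical:

```sql
-- Sketch: one logical operation touching a shard and the common database.
-- Assumes @OrderId / @ProductId are declared and MS DTC is running;
-- [CommonServer], CommonDb, and the table names are placeholders.
BEGIN DISTRIBUTED TRANSACTION;

    UPDATE dbo.ShardedOrders                       -- local shard write
    SET    Status = 'Shipped'
    WHERE  OrderId = @OrderId;

    UPDATE [CommonServer].CommonDb.dbo.Inventory   -- remote common-data write
    SET    QtyOnHand = QtyOnHand - 1
    WHERE  ProductId = @ProductId;

COMMIT TRANSACTION;
```

The promotion to a distributed transaction is exactly where the performance cost and the proc rewrites mentioned above come in.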

Concerns about design #2

In design #2 each shard gets its own instance of all common/universal data. This means that all code that joins to or updates common data continues to work/run just as it does today. There is very little recoding/rewriting needed from the development team. However, this design depends entirely on merge replication to keep data in sync across all shards. The DBAs are highly skilled and are very concerned that merge replication may not be able to handle this, and that should merge replication fail, recovery is not great and could impact us very negatively.

I am curious to know if anyone has gone with design option #2. I am also curious to know if I am overlooking a third or fourth design option that I do not see.

Thank you in advance.

ProcessID Owner with third party app

Posted: 01 Jul 2013 11:35 AM PDT

I am having problems trying to audit a third-party app with a SQL Server backend.

The app manages users and user access via a SQL table, using a single SQL login to access the databases.

I am trying to audit this by using the Host PID much like Windows Task Manager associates a PID and Owner for every process.

Here is what I have attempted so far:

  • I was able to figure out how to extract the app.exe PID.
  • I cannot figure out how to get the Windows owner associated with that PID.
  • I have tried using xp_cmdshell to query the Windows tasklist, and even wrote a .NET console app (called from SQL) to collect the information, but every time I try to extract the owner it is blank.

Any thoughts on how I can get the Owner?
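One hedged approach, assuming xp_cmdshell is enabled and the PID is already known, is to ask WMI for the owner instead of parsing tasklist (the PID 1234 below is a placeholder):

```sql
-- Sketch: resolve the Windows owner of a known PID via WMI.
-- Assumes xp_cmdshell is enabled; 1234 is a placeholder PID.
DECLARE @cmd varchar(200)
SET @cmd = 'wmic process where processid=1234 call getowner'
EXEC master..xp_cmdshell @cmd
-- The output contains Domain and User values when the SQL Server service
-- account has sufficient Windows privileges; it is blank otherwise, which
-- may explain the empty Owner seen with tasklist.
```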

Merge two packages into one package in Oracle 10g

Posted: 01 Jul 2013 11:39 AM PDT

I want to combine two different packages into one single new package in Oracle 10g. Is there some way to accomplish this?
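There is no built-in "merge packages" command; one hedged approach is to pull both package sources from the data dictionary and combine them into a new package by hand. PKG_A and PKG_B below are placeholder names:

```sql
-- Sketch: extract the source of two packages so their procedures can be
-- pasted into a single new package manually.
SELECT text
FROM   all_source
WHERE  name IN ('PKG_A', 'PKG_B')
AND    type IN ('PACKAGE', 'PACKAGE BODY')
ORDER  BY name, type, line;
```

Duplicate procedure names between the two packages would need to be resolved before the combined package will compile.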

Sys.WebForms.PageRequestManager Error in SSRS

Posted: 01 Jul 2013 11:45 AM PDT

When I execute an SSRS report, I get the error below:

Sys.WebForms.PageRequestManagerServerErrorException: An unknown error occurred while processing the request on the server. The status code returned from the server was: 500  Line: 5  Char: 62099  Code: 0  

I get this error at the bottom left, marked with a yellow exclamation icon.

What could be the cause of this error in SSRS?

Please help!

Could not open File Control Bank (FCB) for invalid file ID X when restoring from snapshot

Posted: 01 Jul 2013 11:49 AM PDT

We have a scenario in which we, during integration tests, make a backup of an existing database and restore it under a new name. We then create a snapshot of our "new" database. After each test we restore the database from the snapshot so that different tests don't affect each other. This works well except for in a specific scenario where the restore fails with the following error message:

System.Data.SqlClient.SqlException : Unable to open the physical file "\server\path\MyDb_IntegrationTestsSnapshot.ss". Operating system error 58: "58(failed to retrieve text for this error. Reason: 15105)". Could not open File Control Bank (FCB) for invalid file ID 2 in database 'MyDb_IntegrationTestsSnapshot'. Verify the file location. Execute DBCC CHECKDB. RESTORE DATABASE is terminating abnormally.

The code that produces that looks like this:

using (var con = new SqlConnection(ConfigurationManager.ConnectionStrings["MyDb"].ConnectionString))
{
    con.Open();
    var databaseName = GetDatabaseName();
    var sql =
        string.Format(
            @"use master
              alter database {0} set single_user with rollback immediate;
              RESTORE DATABASE {0} FROM DATABASE_SNAPSHOT = '" + GetSnapshotName() + @"';
              ALTER DATABASE {0} SET MULTI_USER WITH NO_WAIT",
            databaseName);
    var command = new SqlCommand(sql, con);

    command.ExecuteNonQuery();

    con.Close();

    SqlConnection.ClearAllPools();
}

In almost all cases the above code works just fine. However, when the test executes a few specific SQL commands that use temporary tables AND perform deletes, the above error occurs. Note, however, that in other cases commands that use temporary tables work just fine. Likewise, other deletes work as well.

Furthermore, this ONLY happens when we run the tests on our build server against a separate database machine. Running the tests locally against a local SQL Server instance works fine, as does running them locally against the same database server that the build server uses.

Finally, note that other tests that use this rollback setup work fine on the build server. Even the rollback after the now-failing tests ran fine at a previous point in time, but has now stopped working. The only real difference between then and now seems to be that some additional data has been added.

We're using SQL Server 2008 R2 and .NET 4.

UPDATE: Based on the answer by Pondlife we attempted to run DBCC, but without success, as it reported the snapshot marked as suspect. We then checked the logs and found "Error: 17053, Severity: 16, State: 1.". Searching for information on that, we found that one cause could be running on VMware with a PVSCSI adapter configured. The build server does run on VMware (the DB server is physical), but it is configured with an LSI Logic SAS adapter, not a PVSCSI adapter.

UPDATE 2: We've narrowed this down to tests that cause relatively many (330) deletes. If we change a test whose teardown usually causes this error so that the actual test code triggers only a single delete instead of hundreds, it works fine.

[SQL Server] Performance Tuning on Very Small Databases - When is it worth it

[SQL Server] Performance Tuning on Very Small Databases - When is it worth it


Performance Tuning on Very Small Databases - When is it worth it

Posted: 01 Jul 2013 03:06 AM PDT

When does rebuilding an index make sense? When does Shrink Database make sense on a very small database? Some say that on a very small database the indexing overhead can be larger than a disk cache, so on a table of 10,000 records less indexing might be better.

The user forms and user look-ups seem to be very fast; the difference in indexing doesn't seem to make any difference so far. The front-end application is MS Access using DSN-less linked tables with SQL Server Native Client 11.0. The Citrix server resides in a rack and hosts the MS Access application in the same rack as the SQL Server 2008.

The SQL Server 2008 instance has 6 small databases; the largest is 150 MB. All are new servers with plenty of RAM and processor, typically well under 20% system resource usage (processor, RAM, ...). The number of concurrent users is 5 to 25. There are no large imports and no batch jobs to transact; in a compliance database, most activity updates fields on existing records. When rebuilding indexes on the largest tables (20,000 to 300,000 rows in a big table), the fragmentation is 5% to 35%. The database(s) will not grow 100% per year in size.
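One hedged way to answer "is it worth it" on databases this small is to measure fragmentation first and only act on indexes large enough to matter. A sketch against the current database:

```sql
-- Sketch: report fragmentation per index. Common (rule-of-thumb) practice
-- is to ignore indexes under ~100 pages, reorganize between roughly 5% and
-- 30% fragmentation, and rebuild above that.
SELECT OBJECT_NAME(ips.object_id)       AS table_name,
       i.name                           AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM   sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN   sys.indexes AS i
       ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE  ips.page_count > 100              -- tiny indexes rarely benefit
ORDER  BY ips.avg_fragmentation_in_percent DESC;
```

On a 150 MB database, most indexes will fall under the page-count threshold, which matches the observation that rebuilds make no visible difference.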

Replicating tables (temp) on another server

Posted: 30 Jun 2013 11:43 PM PDT

Hello all. I am in a situation where I need to take data from a few tables on our internal SQL Server (secured), create a few temp tables, and push them out to a public web server every morning. Not having done this before, I am asking for advice from anyone who has. Generally speaking, what would be the best method of doing this?

[Articles] Give

[Articles] Give


Give

Posted: 30 Jun 2013 11:00 PM PDT

The world is bigger than you, and when you help others, you realize that. Making that change might also make you happier, and more successful, in your career.

[MS SQL Server] Rebuilding / creating indexes as a bulk-logged operation and point-in-time recovery

[MS SQL Server] Rebuilding / creating indexes as a bulk-logged operation and point-in-time recovery


Rebuilding / creating indexes as a bulk-logged operation and point-in-time recovery

Posted: 01 Jul 2013 04:58 AM PDT

So, skimming blogs this fine day, I came across a suggestion to switch a DB from Full Recovery to Bulk-Logged when doing index rebuilds (especially LARGE indexes). The stated benefit is that you don't lose the ability to recover to a point in time (well, presumably except for the period when you're in Bulk-Logged).

[url=http://msdn.microsoft.com/en-us/library/ms191484(v=sql.105).aspx]MSDN certainly backs up the recommendation[/url] to go to Bulk-Logged for such things, but doesn't (including in the "[url=http://msdn.microsoft.com/en-us/library/ms175987(v=sql.105).aspx]Choosing a recovery model[/url]" topic) explicitly state that you won't lose PiTR.

So, is this true? Can you, say when you're planning an index rebuild, switch the DB in question from Full Recovery to Bulk-Logged, run your rebuild, then kick back to Full and still have a valid log chain? Hypothetically, say taking a log backup before you start (and before you change the recovery model), then one when you're done, kick it back to Full, and the next log backup will still be OK?

Thanks,
Jason
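The sequence described above can be sketched as follows (database, index, and backup paths are placeholders):

```sql
-- Sketch: minimally logged rebuild while preserving the log chain.
BACKUP LOG MyDb TO DISK = 'D:\backup\MyDb_pre.trn';   -- last Full-recovery backup

ALTER DATABASE MyDb SET RECOVERY BULK_LOGGED;
ALTER INDEX IX_BigIndex ON dbo.BigTable REBUILD;      -- minimally logged here
ALTER DATABASE MyDb SET RECOVERY FULL;

BACKUP LOG MyDb TO DISK = 'D:\backup\MyDb_post.trn';
-- The log chain stays valid. The caveat: a log backup that contains
-- bulk-logged operations (MyDb_post.trn here) can only be restored in its
-- entirety, so point-in-time recovery *inside* that window is lost.
```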

Refresh Development from Production Backup Disk Space Error

Posted: 01 Jul 2013 04:18 AM PDT

Post removed.

stack dumps

Posted: 01 Jul 2013 01:43 AM PDT

I'm running integrity checks on my databases, and a couple of days ago this job started failing. It's basically:

USE [database]
GO
DBCC CHECKDB(N'database') WITH NO_INFOMSGS

When I run the script above manually I get:

Msg 0, Level 11, State 0, Line 0
A severe error occurred on the current command. The results, if any, should be discarded.
Msg 0, Level 20, State 0, Line 0
A severe error occurred on the current command. The results, if any, should be discarded.

I'm also getting files created under the logs directory with stack dumps. I'm not experienced with this problem; I haven't actually gone through something like this before. What does it mean? How do I start troubleshooting it? How do I fix it? Thank you.

Snapshot agent failing: Cannot promote the transaction to a distributed transaction

Posted: 30 Jun 2013 11:07 PM PDT

Hi, I'm receiving the following error when I create my publication snapshot (my publication only has one table/article):

"Error messages:
Message: Cannot promote the transaction to a distributed transaction because there is an active save point in this transaction.
Command Text: sp_MSactivate_auto_sub
Parameters: @publication = Publication_Tables @article = % @status = initiated
Stack:
at Microsoft.SqlServer.Replication.AgentCore.ReMapSqlException(SqlException e, SqlCommand command)
at Microsoft.SqlServer.Replication.AgentCore.ExecuteWithResults(CommandSetupDelegate commandSetupDelegate, ProcessResultsDelegate processResultsDelegate, Int32 queryTimeout)
at Microsoft.SqlServer.Replication.Snapshot.TransSnapshotProvider.SetPublisherTranSequenceNumViaAutoSub(PublicationActivationState publicationActivationState, SqlConnection connection)
at Microsoft.SqlServer.Replication.Snapshot.TransSnapshotProvider.ActivateCSSPublicationAndSetTranSequenceNums(PublicationActivationState publicationActivationState, SqlConnection connection)
at Microsoft.SqlServer.Replication.Snapshot.TransSnapshotProvider.ConcurrentPreArticleFilesGenerationTransaction(SqlConnection connection)
at Microsoft.SqlServer.Replication.RetryableSqlServerTransactionManager.ExecuteTransaction(Boolean bLeaveTransactionOpen)
at Microsoft.SqlServer.Replication.Snapshot.TransSnapshotProvider.DoConcurrentPreArticleFilesGenerationProcessing()
at Microsoft.SqlServer.Replication.Snapshot.TransSnapshotProvider.DoPreArticleFilesGenerationProcessing()
at Microsoft.SqlServer.Replication.Snapshot.SqlServerSnapshotProvider.GenerateSnapshot()
at Microsoft.SqlServer.Replication.SnapshotGenerationAgent.InternalRun()
at Microsoft.SqlServer.Replication.AgentCore.Run()
(Source: MSSQLServer, Error number: 3933) Get help: http://help/3933

Server MYSERVER, Level 16, State 1, Procedure sp_MSrepl_changesubstatus, Line 1249
Cannot promote the transaction to a distributed transaction because there is an active save point in this transaction.
(Source: MSSQLServer, Error number: 3933) Get help: http://help/3933"

I checked and there isn't any open transaction. I don't know what the problem is. Could anyone tell me how to solve this issue? Thanks.

[SQL 2012] ssis - deployed package/project

[SQL 2012] ssis - deployed package/project


ssis - deployed package/project

Posted: 01 Jul 2013 01:45 AM PDT

Hi there, I have deployed my first project to an SSIS Catalog. In Management Studio - Integration Services Catalogs - SSISDB, when I right-click my project and select Configure, it asks me for my parameters (which I have set up via my SSIS package). In the "Set Parameter Value" screen I'm having to enter the parameter again, and "Use default value from package" is greyed out. Why is this, when I had these parameters in SSIS before I deployed? What is the point of having to type them in again once deployed?

[T-SQL] Move Log file

[T-SQL] Move Log file


Move Log file

Posted: 01 Jul 2013 12:28 AM PDT

Hi, I have a data file and log file in different locations.

Data file: D:\Folder1\test.mdf
Log file: D:\folder2\test_log.ldf

Now I want to move the log file to D:\Folder1\test_log.ldf. How can I achieve this?
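One common approach (assuming the logical file name is test_log; check it with sp_helpfile first) is to take the database offline, move the file in the OS, and repoint the catalog:

```sql
-- Sketch: relocate the log file. Logical name and paths are taken from
-- the question; verify the logical name with: EXEC sp_helpfile.
USE master;
ALTER DATABASE test SET OFFLINE WITH ROLLBACK IMMEDIATE;

-- Copy/move D:\folder2\test_log.ldf to D:\Folder1\test_log.ldf in Windows,
-- then point the database at the new path:
ALTER DATABASE test
MODIFY FILE (NAME = test_log, FILENAME = 'D:\Folder1\test_log.ldf');

ALTER DATABASE test SET ONLINE;
```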

Top One by one

Posted: 30 Jun 2013 09:04 PM PDT

Hi All, I have a list of 800,000+ records. If someone selects 1 he should get the top 10,000 records; if he selects 2 he should see the next 10,000 records (10,001 to 20,000), and so on. How would I write this query?
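On SQL Server 2005/2008 the usual pattern for this kind of paging is ROW_NUMBER() (table, ordering column, and variable names below are placeholders):

```sql
-- Sketch: return page @PageNo, 10,000 rows per page.
DECLARE @PageNo int, @PageSize int;
SET @PageNo = 2;          -- "selects 2" -> rows 10,001 to 20,000
SET @PageSize = 10000;

WITH numbered AS (
    SELECT *,
           ROW_NUMBER() OVER (ORDER BY Id) AS rn   -- a deterministic order is required
    FROM   dbo.BigList
)
SELECT *
FROM   numbered
WHERE  rn BETWEEN (@PageNo - 1) * @PageSize + 1
              AND  @PageNo * @PageSize;
```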

Query suggestion

Posted: 30 Jun 2013 05:09 PM PDT

I have a table A (VoucherNo) and a table B (VoucherNo, ModifiedDate). Whenever any VoucherNo gets inserted into table A, I have to set the ModifiedDate column of the matching row in table B to the current date, as table B contains the full voucher list in its VoucherNo column. Please suggest a way to achieve this.
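One hedged way to do this is an AFTER INSERT trigger on table A; the table and column names below are assumed from the question:

```sql
-- Sketch: stamp the matching rows in table B whenever vouchers arrive in A.
CREATE TRIGGER trg_TableA_Insert ON dbo.TableA
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE b
    SET    b.ModifiedDate = GETDATE()
    FROM   dbo.TableB AS b
    JOIN   inserted   AS i          -- only the vouchers just inserted
           ON i.VoucherNo = b.VoucherNo;
END;
```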

compare SQL syntax but need the total and %???

Posted: 30 Jun 2013 06:32 PM PDT

Hello SQL gurus, a few weeks ago I asked the following (http://www.sqlservercentral.com/Forums/Topic1459631-150-1.aspx?Update=1). I received a clear answer and a good query that works nicely. Now I would also like it to take the total from item_keya, compare it against the network names, and report what percentage of devices are not seen in the database. Here is my query again:

SELECT snetworkname, REPLACE(sNetworkName,'.noc','')
FROM WhatsUp.dbo.NetworkInterface
RIGHT OUTER JOIN WhatsUp.dbo.device
    ON WhatsUp.dbo.NetworkInterface.nNetworkInterfaceID = whatsup.dbo.device.nDefaultNetworkInterfaceID
WHERE REPLACE(sNetworkName,'.noc','') NOT IN
    (SELECT REPLACE(item_keya,'.noc','') AS snetwork
     FROM dbo.item
     WHERE item_keya IS NOT NULL AND item_keya <> '')

So: take the total of item_keya, find which devices are not in the database, and give that as a percentage.

Running Dynamic SQL

Posted: 30 Jun 2013 04:38 AM PDT

Hi All, I have in the past created dynamic SQL in a tally table and looped through it to execute each statement. This is probably not best practice, but what alternatives are there, apart from cursors and/or while loops, that could be used in this instance? Thanks for your thoughts.
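One common alternative to the loop is to concatenate all the statements into a single batch and execute it once with sp_executesql. A sketch, where dbo.DynamicStatements is a hypothetical table holding the statement text (note that this variable-concatenation trick is widely used on 2005/2008 but its ordering is not formally guaranteed):

```sql
-- Sketch: build and run all dynamic statements in one round trip.
DECLARE @sql nvarchar(max);
SET @sql = N'';

SELECT @sql = @sql + SqlText + N';' + CHAR(10)
FROM   dbo.DynamicStatements;      -- hypothetical source of the statements

EXEC sp_executesql @sql;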

Help on triggers

Posted: 30 Jun 2013 02:52 AM PDT

Dear friend, am I able to capture, inside a trigger, the query that fired it? For example, I have an INSTEAD OF DELETE trigger on a table, say SampleTable:

[code="sql"]
create table sampletable (Id int, name varchar(800))

create trigger sampletriger on sampletable
instead of delete
as
    Print 'Desired Query'

insert into sampletable values(1, 'heavenguy')
insert into sampletable values(2, 'tristan')
insert into sampletable values(3, 'Jack')
[/code]

Now I try to delete from sampletable:

[code="sql"]delete from sampletable where id = 1[/code]

I want the trigger to return the query that I executed. Output:

delete from sampletable where id = 1
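DML triggers cannot read the statement text directly (EVENTDATA() is available only in DDL triggers), but one hedged workaround on SQL 2005/2008 is to capture the session's input buffer from inside the trigger:

```sql
-- Sketch: log the last batch sent on this session. Note the buffer holds
-- the whole batch, which may contain more than the single DELETE.
DECLARE @buffer TABLE (
    EventType  nvarchar(30),
    Parameters int,
    EventInfo  nvarchar(4000));

INSERT INTO @buffer
EXEC ('DBCC INPUTBUFFER(@@SPID) WITH NO_INFOMSGS');

SELECT EventInfo FROM @buffer;   -- the text of the triggering batch
```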

[SQL Server 2008 issues] Export using bcp problem

[SQL Server 2008 issues] Export using bcp problem


Export using bcp problem

Posted: 30 Jun 2013 02:48 PM PDT

Hi All, I have a stored procedure which extracts data and creates a file in the cleansed directory. All works fine except one thing: when the file name is longer than 16 characters the procedure runs, but the file is not created in the folder. Any ideas why?

[code]
USE [TestData]
GO
/****** Object: StoredProcedure [dbo].[exporttocsv] Script Date: 07/01/2013 13:42:26 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER procedure [dbo].[exporttocsv]
    @importedquery nvarchar(1000),   /* The query to be executed */
    @importedcsvname nvarchar(150)   /* To name the exported file back to the original name */
as
BEGIN
    DECLARE @path varchar(50)
    DECLARE @filename varchar(30)
    DECLARE @dbname varchar(30)
    DECLARE @sql varchar(2000)
    DECLARE @bcpcommand varchar(1000)

    SELECT @path = 'C:\inetpub\wwwroot\cleansed\'
    SELECT @filename = @importedcsvname + ' -c -t, -d '
    SELECT @dbname = 'TestData -U sa -P sqldba'
    SELECT @bcpcommand = 'bcp "' + @importedquery + '" queryout '
    SELECT @sql = @bcpcommand + @path + @filename + @dbname
    --print @sql
    EXEC master..xp_cmdshell @sql
end
[/code]
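A possible cause worth checking: @filename is declared varchar(30), and the file name plus the literal ' -c -t, -d ' (11 characters) overflows that as soon as the name gets long, silently truncating the bcp command. A hedged sketch of the fix is simply to widen the working variables:

```sql
-- Sketch: widen the variables so long file names are not truncated
-- when the bcp command string is assembled.
DECLARE @path     varchar(255)
DECLARE @filename varchar(255)
DECLARE @dbname   varchar(255)
SELECT  @path     = 'C:\inetpub\wwwroot\cleansed\'
SELECT  @filename = @importedcsvname + ' -c -t, -d '
SELECT  @dbname   = 'TestData -U sa -P sqldba'
```

Uncommenting the `print @sql` line in the procedure would confirm whether the assembled command is being cut short.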

Missing Index

Posted: 30 Jun 2013 06:28 PM PDT

Good day, I ran a report on one of the SQL Server 2008 R2 databases which identified an index as missing. I looked in the database and the index exists. The statistics are being kept up to date automatically. Is there any reason why SQL Server would keep identifying this index as missing? Thanks in advance.

Get the LOG ( Text ) file name in SSIS Script task

Posted: 30 Dec 2010 01:55 AM PST

Hello, I have configured logging to generate log files (file name example below) using expressions to create a logging text file. Since this log file's name is generated dynamically (based on time, to the second), I have problems accessing the name of the file in the SSIS Script Task. I would like to mail this log file (or at least the log file name with full path) to an administrator in the last step, in case the SSIS package FAILS. Is there a system variable representing the name of the log file that I can access in a Script Task?

[center]D:\SSIS_ExecutionLogs\Hummer_AutomatedCommissioning\Hummer_AutomatedCommissioning_2010-12-13_173342_Log[/center]

Expression used to create the logging text file:

[code="other"]
"D:\\SSIS_ExecutionLogs\\" + @[System::PackageName] + "\\" + @[System::PackageName] + "_" +
(DT_STR,4,1252)DATEPART( "yyyy" , @[System::StartTime] ) + "-" +
RIGHT("0" + (DT_STR,4,1252)DATEPART( "mm" , @[System::StartTime] ), 2) + "-" +
RIGHT("0" + (DT_STR,4,1252)DATEPART( "dd" , @[System::StartTime] ), 2) + "_" +
RIGHT("0" + (DT_STR,4,1252)DATEPART( "hh" , @[System::StartTime] ), 2) +
RIGHT("0" + (DT_STR,4,1252)DATEPART( "mi" , @[System::StartTime] ), 2) +
RIGHT("0" + (DT_STR,4,1252)DATEPART( "ss" , @[System::StartTime] ), 2) + "_Log" + ".TXT"
[/code]

Please help.

Transfer The Logins and The Passwords Between Instances of SQL Server

Posted: 30 Jun 2013 11:09 AM PDT

We want to transfer the logins and the passwords between instances of SQL Server 2008. We have 78 logins and 78 database users. What are the clear, step-by-step instructions for moving all the logins and users and fixing orphaned users, in a concise manner? Thanks.
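The commonly recommended route is Microsoft's sp_help_revlogin script (from the Microsoft support article on transferring logins between instances), which scripts logins with their original password hashes and SIDs so database users do not become orphaned. A hedged sketch of its use (the user name below is a placeholder):

```sql
-- On the source server, after installing sp_help_revlogin from the
-- Microsoft support article, generate CREATE LOGIN statements to replay
-- on the target server:
EXEC sp_help_revlogin;

-- On the target, after restoring the databases, check and repair any
-- remaining orphaned users in each database:
EXEC sp_change_users_login 'Report';                    -- list orphaned users
EXEC sp_change_users_login 'Auto_Fix', 'SomeUserName';  -- remap one user
```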

Linked Server Creation Stalling

Posted: 30 Jun 2013 07:00 AM PDT

I am trying to create a linked server to a MySQL server. My SQL box has the MySQL 5.2 ODBC drivers installed. This SQL Server installation has two instances on it, FlyingAce2008 and Morrow2008, both running SQL Server 2008. I can create my linked server just fine on the FlyingAce2008 instance, but when I set it up on the Morrow2008 instance with the exact same settings, I click OK and the green circle animation just sits there and spins; I cannot stop or close the Linked Server Properties dialog without opening Task Manager and killing Management Studio. I don't understand why I can set it up just fine on one instance, but when I set up the linked server on the other instance using the exact same properties, Management Studio hangs. When I look under my Linked Servers it shows up, but when I test the connection Management Studio hangs. Any ideas?

credential/proxy

Posted: 30 Jun 2013 01:05 PM PDT

I have a proc as below:

ALTER PROCEDURE [dbo].[sp_proc_test]
WITH EXECUTE AS 'ABCD\svcaccount'
AS
EXEC msdb.dbo.sp_start_job @job_name = 'JB_ABC_Run'
GO

I set up a proxy account for the Windows login and mapped a SQL login to it. When I try to execute through the SQL login, I get the following error:

The EXECUTE permission was denied on the object 'sp_start_job', database 'msdb', schema 'dbo'.

I have made sure the Windows login (ABCD\svcaccount) has full access (owner) to the msdb database. What am I missing?
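Two things commonly checked for this error, sketched below with hedged, placeholder names. First, the impersonated account needs its own msdb user and an Agent role; second, an EXECUTE AS context is trusted only inside its own database unless the calling database is marked trustworthy (or the procedure is signed):

```sql
-- Sketch 1: map the impersonated account into msdb with a job role.
USE msdb;
CREATE USER [ABCD\svcaccount] FOR LOGIN [ABCD\svcaccount];
EXEC sp_addrolemember 'SQLAgentOperatorRole', 'ABCD\svcaccount';

-- Sketch 2: allow the EXECUTE AS context to cross into msdb.
-- CallingDb is a placeholder for the database holding sp_proc_test;
-- TRUSTWORTHY has security implications, so module signing is the
-- stricter alternative.
ALTER DATABASE CallingDb SET TRUSTWORTHY ON;
```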

Bulk insert with data field encryption

Posted: 30 Jun 2013 03:03 AM PDT

We get a daily file of employee ids and their associated social security numbers. Right now, I have a stored procedure

[code="sql"]
create procedure put_secret(@id int, @ssn int)
as begin
    open symmetric key ... decryption by certificate ...
    insert into secret(id,ssn) values (@id, EncryptByKey(Key_GUID(...), @ssn))
    close symmetric key ...
end
[/code]

and a program that reads one record from the file, calls the stored procedure [i]put_secret[/i] passing the record, and does this in a while (not EOF). Everything is working well. But it is s-l-o-w.

Is there a way to either create the table so that it knows that a column is, by default, encrypted,

[code="sql"]
create table secret(
    id int not null,
    ssn varbinary(MAX) default encryptByKey(Key_GUID(...)),
    primary key(employee_number))
[/code]

or a way to have a bcp XML format file do this

[code="xml"]
<RECORD>
    <FIELD ID="ID" xsi:type="CharFixed" LENGTH="9" />
    <FIELD ID="SSN" xsi:type="CharFixed" LENGTH="9" />
</RECORD>
<ROW>
    <COLUMN SOURCE="ID" NAME="id" xsi:type="SQLINT"/>
    <COLUMN SOURCE="SSN" NAME="ssn" xsi:type="SQLVARBIN" ENCRYPTION="..."/>
</ROW>
[/code]

or a way to have [b]BULK INSERT[/b] or [b]OPENROWSET(BULK...)[/b] do this? Again, the while loop works and no one is complaining (so I guess it's academic), but I can foresee some shop somewhere in the world with a huge number of records where this might be useful.
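Neither bcp format files nor BULK INSERT encrypt on the way in, but a common pattern for speeding this up is a set-based two-step: bulk load into a plain staging table, then encrypt the whole set in one INSERT...SELECT. A sketch, where the staging table, file path, and key/certificate names are all placeholders:

```sql
-- Sketch: bulk load then encrypt in a single set-based statement.
BULK INSERT dbo.secret_staging          -- plain (id int, ssn char(9)) table
FROM 'D:\feeds\daily_ssn.txt'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

OPEN SYMMETRIC KEY SSN_Key DECRYPTION BY CERTIFICATE SSN_Cert;

INSERT INTO dbo.secret (id, ssn)
SELECT id, EncryptByKey(Key_GUID('SSN_Key'), ssn)
FROM   dbo.secret_staging;

CLOSE SYMMETRIC KEY SSN_Key;
TRUNCATE TABLE dbo.secret_staging;      -- clear the plaintext promptly
```

This keeps the per-row procedure call (and its per-row key open/close) out of the hot path, which is usually where the slowness comes from.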

Database Mail not sending mail if number of recipients are more

Posted: 26 Jun 2013 12:21 PM PDT

Hi All, on one of the servers, Database Mail is not sending mail when the To address has more than 4 recipients and CC has more than 2. I am getting the error below:

[b]"The mail could not be sent to the recipients because of the mail server failure. (Sending Mail using Account 4 (2013-06-26T18:11:38). Exception Message: Cannot send mails to mail server. (The operation has timed out.)."[/b]

There are no issues with the SMTP server. It was working last month but stopped working this month. Running out of clues. Thanks for your help!

SQLSERVER INTEGRATION SERVICE

Posted: 30 Jun 2013 02:58 AM PDT

1. How to provide security for the configuration file (XML package configuration file)?
2. What are the different approaches to deploying a package in SSIS?
3. Three tasks are running in your package; 2 tasks execute successfully and the third task fails. In this situation I need to roll back the 2nd and 3rd tasks. What can you do in the SSIS package and SQL Server?
4. There are 10 records in a flat file source; 9 execute successfully and the 10th record fails. In this scenario I need to get all 10 records from source to target, and at target level the 10th record's failure error should be displayed. What do I need to do?
5. Explain the dynamic behavior of your project.
6. Explain the validations of a package at runtime.
7. What are isolations in SSIS, and where can you use them?
8. What is optimizing a package?
9. Tell me about one complex package in your project (which tasks are mostly used)?
10. What is a linked server?
11. I created one package with some file. Different servers have that package with different configuration files. Is it possible to execute that package on the servers simultaneously?
15. What is incremental loading and decremental loading?
16. Is a Microsoft Office 2007 Excel sheet supported by SQL Server 2005 or not?
17. What is the difference between file system and SQL Server deployment? I have one package in a d: drive folder and I want to move that package into an e: drive folder. How do I move the package?
18. I have a table like this:
Sno Sname
1,2 Sreenivas
3,4 Reddy
5,6 Raja
7,8 Reddy
I want it like the following table:
Sno Sname
1 Sreenivas
2 Sreenivas
3 Reddy
4 Reddy
5 Raja
6 Raja
7 Reddy
8 reddy
19. I have one package scheduled daily at 6 am, but the job failed on Saturday. What do I need to do (where do we go, how do we resolve it)?
21. What is parallel execution in SSIS?
22. What types of errors occur commonly in your project, and what are their names?
24. I have one package and that package is already scheduled. Is it possible to apply a transaction to that package?
25. Suppose I have one folder with 5 text files. Using a Foreach File enumerator we store the files into one folder, but tomorrow one file is added to that folder. How do we store the new file into the same destination?
26. The source table has data like this:
Eno Ename Esloc Deptno
1 Sreenu Hyd 10,20,30,40,50
I want it like this:
Eno Ename Esloc Deptno
1 Sreenu Hyd 10
1 Sreenu Hyd 20
1 Sreenu Hyd 30
1 Sreenu Hyd 40
1 Sreenu Hyd 50
How to do this?
27. Suppose I have a destination table with some data, and data from 2 Excel files must be inserted into it. How do I know which data is already inserted, so that only new data goes into the destination?
28. Why do we use an XML configuration file?
29. How do clients access and execute the packages?
30. In an SSIS package I created a data warehouse using a slowly changing dimension:
Cname Cadd Status
Sreenu Bangalore True
Sreenu Hyderabad False
Sreenu Kadapa False
Sreenu Badvel False
Sreenu Pml false
From the above, how do I identify the second row?
31. What is an SMTP server and what is its main purpose?
32. My source table has 1000 records; I want to move rows 10 to 990. What do I need to do?
33. I have two sources with two tables; one table has data and the other table has conditions. How do I use the conditions from the table?
34. I have one SSIS package. How do I know how much time it takes to execute, and how do I see the time after improving performance?
35. I developed one package. How do I know whether the package has data or not?
36. I have parent and child packages. If errors are found in the child package, how do I handle those errors?
36. One package is on my SQL Server. How do I move that package to another server?
37. How to concatenate row data through SSIS?
Source:
Ename EmpNo
Stev 100
Methew 100
John 101
Tom 101
Target:
Ename EmpNo
Stev methew 100
John tom 101
38. How to send unique (distinct) records into one target and duplicates into another target?
Source:
Ename EmpNo
Stev 100
Stev 100
John 101
Mathew 102
Output:
Target_1:
Ename EmpNo
Stev 100
John 101
Mathew 102
Target_2:
Ename EmpNo
Stev 100
38. How do you populate the 1st record to the 1st target, 2nd record to the 2nd target, 3rd record to the 3rd target, and 4th record back to the 1st target through SSIS?
39. We have a source table containing 3 columns: Col1, Col2 and Col3. There is only 1 row in the table, as follows:
Col1 Col2 Col3
a b c
There is a target table containing only 1 column, Col. Design a mapping so that the target table contains 3 rows, as follows:
Col
a
b
c
40. There is a source table that contains duplicate rows. Design a mapping to load all the unique rows into one target and all the duplicate rows (only 1 occurrence) into another target.
41. There is a source table containing 2 columns, Col1 and Col2, with data as follows:
Col1 Col2
a l
b p
a m
a n
b q
x y
Design a mapping to load a target table with the following values from the above source:
Col1 Col2
a l,m,n
b p,q
x y
Design an SSIS package to load the first half of the records to one target and the other half to a separate target.

upgrade 2005 SP3 to 2008 R2

Posted: 29 Jun 2013 09:08 PM PDT

Hi, maybe already posted somewhere, but I can't find it: we want to upgrade "in place" from 2005 SP3 to 2008 R2 RTM, which fails (upgrading from SP3 is not supported). Because deinstallation of SP3 is not possible, what is the solution? Can we update to 2008 R2 SP1? If yes, where can I find an installation of 2008 R2 with SP1 included, or do we have to create a slipstream?

DB Size Growing after moving fields

Posted: 10 Jun 2013 12:30 AM PDT

Hi All, I am new here, but have read the site a lot. I have been doing some SQL maintenance on various servers that I administer. One of the scripts takes 20-30 fields or so and moves the data from the old column to a new column, and once complete it drops the old column. After doing this, the DB file (MDF) grew from about 3.5 GB to about 5 GB. Why? I did a shrink on both the file and the database. This particular box is SQL 2008. Let me know any questions. Thanks for the help!

triggers not found in replicated table

Posted: 29 Jun 2013 07:19 PM PDT

Dear all, I have set up snapshot replication on my database. I noticed that the triggers corresponding to a specific table are not replicated. Please give me a solution so that I can find my triggers in the replicated table. Thanks, Akbar
