Wednesday, July 31, 2013

[how to] How do I determine when scaling is necessary or helpful?



How do I determine when scaling is necessary or helpful?

Posted: 31 Jul 2013 05:43 PM PDT

I have a MySQL server that I use as the back-end storage for an online game. The game writes to the database to persist character information such as world location, health status, ability stats, etc. This data is written to the database every few seconds for each character and vehicle. I have been noticing that as more and more users join the game, the queries I run against the database seem slower. Also, as I make the online world more complex by adding content via the database, the game tends to slow.

I am looking for the bottleneck, and I am wondering how to determine whether scaling my database (vertically or horizontally) will make a difference. Knowing the database is MySQL, how do I go about seeing if my single database server is the bottleneck and whether scaling it would be helpful? Thank you.
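A reasonable first step (a minimal sketch; which counters matter most is an assumption about write-heavy game workloads, not a diagnosis) is to watch MySQL's own counters for signs of saturation before buying hardware:

-- Connection pressure: compare against max_connections.
SHOW GLOBAL STATUS LIKE 'Threads_connected';
SHOW GLOBAL VARIABLES LIKE 'max_connections';

-- Buffer pool pressure: a rising ratio of Innodb_buffer_pool_reads
-- (reads that had to go to disk) to read requests suggests the
-- working set no longer fits in memory.
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';

-- Queries doing full joins or spilling temp tables to disk.
SHOW GLOBAL STATUS LIKE 'Select_full_join';
SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables';

If none of these move while the game slows down, the bottleneck is likely elsewhere (application or network), and scaling the database would not help.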

How to allow each user to access his own db, and superuser to all dbs in PostgreSQL?

Posted: 31 Jul 2013 03:57 PM PDT

I want to set up some pretty traditional permissions: each user can access his own db, and superuser can access all the dbs.

So far I got the first part right (I think). I edited pg_hba.conf, and changed this:

local all all
host all all
host all all

for this:

local sameuser all
host sameuser all
host sameuser all

But now I can't access any db with my superuser. I tried adding "localhost all myuser", but it doesn't seem to work.

Any ideas?
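For reference, a minimal sketch of a pg_hba.conf that does this (an assumption: the superuser is named postgres; rules are matched top to bottom, so the superuser line must come before the sameuser lines):

# TYPE   DATABASE   USER       ADDRESS         METHOD
local    all        postgres                   peer
local    sameuser   all                        md5
host     sameuser   all        127.0.0.1/32    md5

Remember to reload the server (pg_ctl reload) after editing the file.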

PostgreSQL COPY: is it always faster to delete and recreate indexes?

Posted: 31 Jul 2013 06:46 PM PDT

In this answer, Erwin Brandstetter recommends

It is also substantially faster to delete indexes before a huge bulk INSERT / COPY and recreate them afterwards as it is much more costly to incrementally adjust indexes for every inserted row than to create them at once.

1) Is this true even when the table is already large? If not, is there an easy way to estimate the point of diminishing returns? For example, I'm doing a COPY of 30,000 records at a time into a table which already has, say, 100 million rows. Each batch is relatively small, but on the other hand the batches waiting to be imported will, in total, more than double the size of the table.

2) If I were to delete the index before COPY, would recreating it be faster if the COPY is done with records already in index order?

There is only one index on the table, which is a 5-column primary key.

I'm doing this on a non-production machine, and I've already disabled fsync, synchronous_commit, autovacuum, etc. I'm executing the COPY operation from four separate processes concurrently on an 8-core machine. I've got 12GB of memory, so I could set maintenance_work_mem to 1GB or more if it would help.
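For context, the pattern under discussion looks roughly like this (a sketch; the table, constraint and file names are placeholders):

SET maintenance_work_mem = '1GB';
ALTER TABLE big_table DROP CONSTRAINT big_table_pkey;
COPY big_table FROM '/path/batch_0001.csv' WITH (FORMAT csv);
-- ... remaining batches ...
ALTER TABLE big_table ADD PRIMARY KEY (c1, c2, c3, c4, c5);

The rebuild at the end is a single sort, which is where a large maintenance_work_mem pays off.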

Changing autogrow settings with database mirroring

Posted: 31 Jul 2013 02:15 PM PDT

Recently I changed the Autogrow settings for some of the databases on our SQL Server 2008 R2 server. These are involved in a database mirroring configuration, with the principal on SERVER1 and the mirror on SERVER2.

This week I failed over three databases; now SERVER1 is the mirror and SERVER2 is the principal for these databases. It appears that the autogrow settings did not carry over to SERVER2 properly, as the databases now show that they grow by a percentage (two are set to 32768%, the other to 131072%).

This is different from the settings that used to be there (I believe it was the default, 10%), and also different from the 256MB I set on SERVER1.

To make things more confusing, this is only happening on the primary file; the secondary files and the log file have retained the settings I set on SERVER1.

My suspicion is that this is a bug; I did patch SQL Server after changing the autogrow settings. My question is: has anyone seen this scenario before? Is there a method to make sure all of the settings are correct on both servers without failing all the databases over?

UPDATE: Using a 4th database that I'll call DB1, I set the autogrow on SERVER1 to 512MB (after failing the database over, witnessing the same issue, and failing it back). When I failed it over to SERVER2 after that, it shows growth of 65536%. The takeaway is that the is_percent_growth value in sys.master_files is not moving to SERVER2, but the growth value is.

UPDATE2: With DB1 on SERVER1, I changed the autogrow to 10% and failed it to SERVER2. The 10% value remained. I then repeated this, setting autogrow back to 256MB this time. Again, the growth value changed but is_percent_growth did not.
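To compare the raw values on both servers without another failover, a query along these lines (run on each server) shows what the engine actually has stored:

SELECT DB_NAME(database_id) AS db, name AS file_name,
       growth, is_percent_growth
FROM sys.master_files
WHERE DB_NAME(database_id) = 'DB1';

Note that growth is a page count (8KB pages) when is_percent_growth = 0 and a percentage when it is 1; 256MB is exactly 32768 pages and 512MB is 65536, which is consistent with the odd numbers above being page counts misread as percentages.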

Transfer logins to another server

Posted: 31 Jul 2013 01:30 PM PDT

I moved a database from SQL Server 2005 to SQL Server 2012. Is there any way of restoring all the logins on the new server?

SQL Server logs in Gmail [on hold]

Posted: 31 Jul 2013 02:09 PM PDT

First off, I'm using SQL Server 2008 R2.

So we all know Gmail uses the conversation view for emails. I like this...except when my SQL Server backup logs get emailed to me from the same email address. So Gmail collapses them into a single conversation even though the logs are for different databases.

Is there a way to trick Gmail into separating them out based on the database? Do I have to create a new distribution list for each database I have, or do I have to create a new Operator (SQL Server Agent -> Operators) for every database?

Does anyone else have to deal with this issue?

What is a good name for a column representing the display order of the rows?

Posted: 31 Jul 2013 02:11 PM PDT

For example, a junction table associating a product and its pictures.

create table product_picture (
    product_id bigint not null references product(id),
    picture_id bigint not null references picture(id),
    some_column int not null
);

What is a good, short, general, name for "some_column" if it represents the display order of the photos?

"order", "sort", and "sequence" are out, as they are keywords.

Calculating employee's work time

Posted: 31 Jul 2013 11:44 AM PDT

I need to store and analyze employee's work time.

Currently I've developed this table structure

CREATE TABLE `hirefire` (
    `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
    `employeeid` int(10) unsigned NOT NULL,
    `hired` date NOT NULL,
    `fired` date DEFAULT NULL,
    `firereason` enum('fired','left','parental','vacation','death') COLLATE utf8_bin DEFAULT NULL,
    PRIMARY KEY (`id`),
    KEY `hired` (`hired`),
    KEY `fired` (`fired`),
    KEY `employee` (`employeeid`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin

The fields employeeid, hired, and fired are self-explanatory (fired may be NULL when a person was hired but not yet fired).

firereason:

  • 'fired' - forced to be fired by the boss
  • 'left' - left job by his own will
  • 'parental' - parental leave
  • 'vacation' - his non-paid vacation
  • 'death' - he or she died

This table schema may (I think) be good for checking whether a given employee is listed as working at least a day in a given month (BTW, SQL code for this would be appreciated).
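A minimal sketch of that check (assuming the hirefire table above, that an open employment period has fired IS NULL, and with the month boundaries and employee id hard-coded for illustration):

-- Did employee 42 work at least one day in July 2013?
SELECT EXISTS (
    SELECT 1
    FROM hirefire
    WHERE employeeid = 42
      AND hired <= '2013-07-31'
      AND (fired IS NULL OR fired >= '2013-07-01')
) AS worked_in_month;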

But now a complexity comes out:

We will also need to be able to count an employee's total working years. Note that parental leave and non-paid vacations should not be subtracted from the working years. Here my database structure becomes inconvenient for this kind of calculation.

Which DB structure would you suggest? (Is my variant above the best structure for these tasks, or can it be improved?)

What about allocating a whole DB row for every hire/fire event (that is, having a separate row for the hire date and the fire date, rather than both on the same row)? For which tasks would this be better, and for which worse?

MySQL 5.

Just how secure is the data I put on SQL Azure?

Posted: 31 Jul 2013 01:20 PM PDT

My wife has been asking me to create a web application for her business and from what I've read using SQL Azure would be perfect. Except for a couple of small (really really big) gotchas.

  • She works in the financial industry and is HEAVILY regulated.
  • The data she works with is very sensitive and has to have a minimum risk of exposure.

So my question is: if I put her company's information on SQL Azure, how secure will it be? I'm not asking about general SQL Server/Azure security. I'm talking about the fact that this is a shared space. How possible is it to hack into the data, bypassing the security that I put into place?

Reporting Services TempDB keeps going into restoring mode

Posted: 31 Jul 2013 10:53 AM PDT

I am looking into an issue with SQL Server 2008 Reporting Services. It has been working fine until yesterday morning. Now I notice that ReportServerTempDB keeps on going into Restoring... mode. I restored it to online mode with RESTORE DATABASE [ReportServerQ4TempDB] WITH RECOVERY.

The reports work fine after this, but then after 10-15 minutes ReportServerTempDB goes back into Restoring mode. Report execution gives the following error:

An error occurred within the report server database. This may be due to a connection failure, timeout or low disk condition within the database. (rsReportServerDatabaseError) For more information about this error navigate to the report server on the local server machine, or enable remote errors...

Any suggestions why it is doing that?

I have checked the following

  1. Memory - 50% available
  2. CPU - only 10% utilized
  3. Storage - 30% space remaining

There are multiple instances of SSRS running, and the SQL instance hosting the databases for SSRS is on the same server.

I have checked that AutoClose is not enabled for this database, and I have discovered that the database goes into Restoring... mode every hour, when our log backup job runs.
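Given that discovery, it is worth checking exactly what that job does to this database; one known way a log backup can leave a database in Restoring is BACKUP LOG ... WITH NORECOVERY (a tail-log backup). Assuming msdb history is intact, something like this shows the recent backups against the database:

SELECT database_name, type, backup_start_date, backup_finish_date
FROM msdb.dbo.backupset
WHERE database_name = 'ReportServerQ4TempDB'
ORDER BY backup_start_date DESC;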


MySQL table architecture

Posted: 31 Jul 2013 09:58 AM PDT

Background information:

I have a table containing upload information. Every day I get more and more uploads, but most of the queries I run center on information gathered within the last 12 months; frequently the time scale is even tighter, and queries are restricted to information gathered in the last 30 days.

This system has been in existence since 2004, when there were 400 uploads; today there are 2+ million.

Table structure:

CREATE TABLE `data_mediagallery` (
    `id` int(11) NOT NULL AUTO_INCREMENT,
    `status` tinyint(3) unsigned NOT NULL DEFAULT '0',
    `contenttype` char(40) NOT NULL DEFAULT '',
    `filename` varchar(100) NOT NULL DEFAULT '',
    `injector` char(40) NOT NULL DEFAULT '',
    `hits` int(11) DEFAULT '0',
    `message` longtext,
    `date` datetime DEFAULT NULL,
    `title` varchar(80) NOT NULL DEFAULT '',
    `tags` varchar(255) NOT NULL DEFAULT '',
    `metadata` blob,
    `location` char(8) NOT NULL DEFAULT '',
    `uid` int(11) unsigned NOT NULL DEFAULT '0',
    `filesize` bigint(20) NOT NULL DEFAULT '0',
    `upload` datetime DEFAULT NULL,
    `privacy` tinyint(3) unsigned NOT NULL DEFAULT '0',
    `width` int(10) unsigned NOT NULL DEFAULT '0',
    `height` int(10) unsigned NOT NULL DEFAULT '0',
    `offensive` int(10) unsigned NOT NULL DEFAULT '0',
    `sourcelocation` char(8) NOT NULL DEFAULT '',
    `autoblog` tinyint(1) NOT NULL DEFAULT '0',
    `extension` char(10) NOT NULL DEFAULT '',
    `filetype` tinyint(3) unsigned NOT NULL DEFAULT '0',
    `conversiontime` float NOT NULL DEFAULT '0',
    `converttime` datetime DEFAULT NULL,
    `sender` varchar(100) NOT NULL DEFAULT '',
    `vhost` int(10) unsigned NOT NULL DEFAULT '0',
    `channel` int(10) unsigned NOT NULL DEFAULT '0',
    `rotation` tinyint(3) unsigned NOT NULL DEFAULT '0',
    `ofilesize` bigint(20) NOT NULL DEFAULT '0',
    `moderationstatus` tinyint(3) unsigned NOT NULL DEFAULT '0',
    `rating` decimal(8,6) DEFAULT NULL,
    `votecount` int(10) unsigned NOT NULL DEFAULT '0',
    `url` varchar(150) NOT NULL DEFAULT '',
    `geo_latitude` double DEFAULT NULL,
    `geo_longitude` double DEFAULT NULL,
    `length` decimal(8,2) DEFAULT '0.00',
    `parentid` int(11) NOT NULL DEFAULT '0',
    `language` char(2) NOT NULL DEFAULT '',
    `author` varchar(100) NOT NULL DEFAULT '',
    `context` tinyint(3) unsigned NOT NULL DEFAULT '0',
    `externalid` varchar(255) DEFAULT NULL,
    `originalsaved` bit(1) NOT NULL DEFAULT b'1',
    `hidden` tinyint(4) NOT NULL DEFAULT '0',
    `commentcount` int(11) NOT NULL DEFAULT '0',
    `approvedcomments` int(11) NOT NULL DEFAULT '0',
    `notdeniedcomments` int(11) NOT NULL DEFAULT '0',
    `lastupdatetime` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
    `channelleft` int(10) unsigned NOT NULL DEFAULT '0',
    `originalLocation` char(8) NOT NULL DEFAULT '',
    PRIMARY KEY (`id`),
    KEY `upload` (`upload`),
    KEY `vhostupload` (`vhost`,`upload`),
    KEY `vhostmodstatus` (`vhost`,`status`,`moderationstatus`,`uid`),
    KEY `complexfiletype` (`vhost`,`status`,`moderationstatus`,`filetype`,`channel`),
    KEY `vhostcontext` (`vhost`,`moderationstatus`,`context`,`parentid`,`status`,`filetype`),
    KEY `externalid` (`externalid`),
    KEY `externalcomments`.
    KEY `vhostchannel` (`vhost`,`status`,`moderationstatus`,`context`,`channelleft`)
) ENGINE=InnoDB;

Questions

Is there a way to partition the table that would make the most sense? Does partitioning even make sense? How do I deal with new data if I do partition?
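For what it's worth, a sketch of monthly RANGE partitioning on upload (an illustration with assumptions: MySQL requires the partitioning column to be part of every unique key, so the primary key would first have to become (id, upload)):

ALTER TABLE data_mediagallery
  PARTITION BY RANGE COLUMNS (upload) (
    PARTITION p201306 VALUES LESS THAN ('2013-07-01'),
    PARTITION p201307 VALUES LESS THAN ('2013-08-01'),
    PARTITION pmax    VALUES LESS THAN (MAXVALUE)
  );

-- New months are carved out of pmax as they arrive:
ALTER TABLE data_mediagallery
  REORGANIZE PARTITION pmax INTO (
    PARTITION p201308 VALUES LESS THAN ('2013-09-01'),
    PARTITION pmax    VALUES LESS THAN (MAXVALUE)
  );

This fits the access pattern described: most queries touch only recent months, so partition pruning can skip the old ones.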

Can I force a user to use WITH NOLOCK?

Posted: 31 Jul 2013 02:10 PM PDT

Can I force a user's queries to always run with the hint NOLOCK? e.g. they type:

select * from customer  

But what is executed on the server is:

select * from customer with (nolock)  

THIS QUESTION IS NOT about the various pros and cons of NOLOCK.
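For context, the closest session-wide equivalent (a sketch, not an endorsement; it still has to be issued inside the user's session somehow, e.g. by the connecting application):

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT * FROM customer;  -- now behaves like WITH (NOLOCK) on every table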

How to determine Windows Server start time using T-SQL

Posted: 31 Jul 2013 09:22 AM PDT

I am looking for a way to find out when the Windows Server was last started using only t-sql commands.

I am trying to stay within the default security configurations (i.e. I don't want to enable xp_cmdshell)
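One approach that stays within the defaults (a sketch: ms_ticks in sys.dm_os_sys_info is the number of milliseconds since Windows started, so subtracting it from the current time approximates the boot time):

SELECT DATEADD(SECOND, -ms_ticks / 1000, GETDATE()) AS windows_start_time
FROM sys.dm_os_sys_info;

Dividing down to whole seconds avoids overflowing DATEADD's int increment on servers with long uptimes.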

TokuDB database size unknown in phpmyadmin

Posted: 31 Jul 2013 01:51 PM PDT

On my MySQL server, I installed the TokuDB storage engine and converted tables from InnoDB to TokuDB, but my table sizes now show as unknown in phpMyAdmin.

select query not displaying column names with records

Posted: 31 Jul 2013 03:47 PM PDT

If I execute a select query, it displays only the records, not the column names, as follows:

mysql> select movie_desc from movies limit 1\G;  

************* 1. row *************
If the story-writer of 'Alias Janaki' had gone to a Rajamouli or a Shekar Kammula to narrate the story, the listener could have said this after the narration: "Flashback over?

What is the problem?

Best practice: create multiple databases or multiple users?

Posted: 31 Jul 2013 06:45 PM PDT

I have several applications currently running on my local machine. I have installed Oracle 11g and created a global database named oracle11. Now, should I create a different database for each application using Database Configuration Assistant, or should I create different users for each application under that global database? What would be the best practice? If I create different users, does that behave the same as creating different databases?

What about a production environment where one server houses a couple of applications? Should I do the same as on my local machine, creating one database and different users for each application?
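For what it's worth, in Oracle a schema is simply the set of objects owned by a user, so the one-database/many-users approach looks roughly like this (a sketch; the user name, password and tablespace are placeholders):

CREATE USER app1 IDENTIFIED BY "changeme" QUOTA UNLIMITED ON users;
GRANT CREATE SESSION, CREATE TABLE, CREATE VIEW, CREATE SEQUENCE TO app1;
-- repeat per application; each user sees only its own schema by default

A separate database per application, by contrast, typically means a separate instance with its own memory and background processes on the same machine, which is a much heavier footprint.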

DBCC CHECKDB unfixable corruption: Indexed view contains rows that were not produced by the view definition

Posted: 31 Jul 2013 08:34 AM PDT

TL;DR: I've got an unfixable corruption in an indexed view. Here are the details:


Running

DBCC CHECKDB([DbName]) WITH EXTENDED_LOGICAL_CHECKS, DATA_PURITY, NO_INFOMSGS, ALL_ERRORMSGS

on one of my databases produces the following error:

Msg 8907, Level 16, State 1, Line 1 The spatial index, XML index or indexed view 'ViewName' (object ID 784109934) contains rows that were not produced by the view definition. This does not necessarily represent an integrity issue with the data in this database. (...)

CHECKDB found 0 allocation errors and 1 consistency errors in table 'ViewName'.

repair_rebuild is the minimum repair level (...).

I do understand that this message indicates that the materialized data of the indexed view 'ViewName' is not identical with what the underlying query produces. However, manually verifying the data does not turn up any discrepancies:

SELECT * FROM ViewName WITH (NOEXPAND)
EXCEPT
SELECT ...
from T1 WITH (FORCESCAN)
join T2 on ...

SELECT ...
from T1 WITH (FORCESCAN)
join T2 on ...
EXCEPT
SELECT * FROM ViewName WITH (NOEXPAND)

NOEXPAND was used to force use of the (only) index on ViewName. FORCESCAN was used to prevent indexed view matching from happening. The execution plan confirms both measures to be working.

No rows are returned by either query, meaning that the two row sets are identical. (There are only integer and guid columns; collations do not come into play.)

The error cannot be fixed by recreating the index on the view, nor by running DBCC CHECKDB with REPAIR_ALLOW_DATA_LOSS. Repeating the fixes also did not help. Why does DBCC CHECKDB report this error? How do I get rid of it?

(Even if rebuilding fixed it my question would still stand - why is an error reported although my data checking queries run successfully?)

PostgreSQL CREATE TABLE creates with incorrect owner

Posted: 31 Jul 2013 10:59 AM PDT

I'm using PostgreSQL 9.2.4. When I create a table as a non-superuser in a database owned by that non-superuser, it is owned by the postgres user, so I can't put any data into it unless I explicitly grant myself permission.

I created the non-superuser like this:

admin_user=# create role "test1" NOINHERIT LOGIN ENCRYPTED PASSWORD 'wibble' CONNECTION LIMIT 10;  

Then I created a database owned by test1 like this:

admin_user=# create database "test1$db1" with owner "test1";  

Then I started a new psql as user test1, and created a table:

test1$db1=> create table test_table (column_name varchar(50));  

But I couldn't insert into it:

test1$db1=> insert into test_table values ('some data');
ERROR:  permission denied for relation test_table

Checking the permissions shows that the table is owned by the postgres user:

test1$db1=> \dt
           List of relations
 Schema |    Name    | Type  |  Owner
--------+------------+-------+----------
 public | test_table | table | postgres

However, I can grant myself permissions and do stuff:

test1$db1=> grant insert, select on test_table to test1;
GRANT
test1$db1=> insert into test_table values ('some data');
INSERT 0 1
test1$db1=> select * from test_table;
 column_name
-------------
 some data
(1 row)

What's going on? I'm pretty sure this used to work. And the PostgreSQL docs for CREATE TABLE say:

CREATE TABLE will create a new, initially empty table in the current database. The table will be owned by the user issuing the command.

Having to grant myself permissions on my own tables doesn't sound like something I should have to do.

Any help much appreciated!
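One quick sanity check worth running in the failing session (a sketch; it only confirms which role the server actually sees, ruling out a surprise SET ROLE somewhere):

SELECT current_user, session_user;
SHOW role;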

[UPDATE]

It looks like this might be caused by some kind of change to the PostgreSQL package at http://apt.postgresql.org/pub/repos/apt/

We have three machine images, all of which installed PostgreSQL from that source:

  • Install at 23 July 2013, 18:47 UTC -- correct table ownership
  • Install at 23 July 2013, 19:39 UTC -- correct table ownership
  • Install at 24 July 2013, 13:16 UTC -- error

So perhaps something has been changed in the packaging. I'll drop a line to the mailing list for that.

Proper MySQL database maintenance

Posted: 31 Jul 2013 01:31 PM PDT

I hope this isn't too broad of a question. I'm certain it'll be able to help me and any future dba noobs that stumble upon it.

I'm a systems admin that got put into the role of DBA (because I helped the CEO with his Outlook, so I can obviously manage our databases!). It's not that big or busy of a database server... a mysqldump is about 6GB in size, and it's taken us 5 years to get it that big. Munin reports that we're averaging 40-60 queries a second at our peak hours.

My boss paid for me to take the Oracle University systems admin course, but having gone through it, it simply explains the different parts of the MySQL server, what things do and how they do it. It's not that in-depth, and you definitely don't come out of that course a DBA.

So as the current DBA, what should I do to make sure everything is running smoothly? Are there daily maintenance tasks I can perform? Are there certain metrics I should be checking? Or to put it another way: as DBAs, what do YOU do on a daily basis to keep everything in good shape?

If it'll help tailor the answer a little bit, here are some specifics. We have 171 tables, all but 3 are innodb, the others are myisam. We have Master/Slave replication set up between our primary datacenter and our disaster recovery site, using RBR. Version is 5.5.28.
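A few daily checks that map to that setup (a sketch; the replication check matters most given the master/slave pair):

-- Replication health (run on the slave): look at Slave_IO_Running,
-- Slave_SQL_Running and Seconds_Behind_Master in the output.
SHOW SLAVE STATUS\G

-- Rough load and error indicators.
SHOW GLOBAL STATUS LIKE 'Threads_connected';
SHOW GLOBAL STATUS LIKE 'Aborted_connects';

-- Are slow queries accumulating? (Enable the slow query log to see them.)
SHOW GLOBAL STATUS LIKE 'Slow_queries';

Beyond that: verify that backups actually restore, and trend these numbers over time (Munin is already in place) rather than eyeballing them daily.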

What can I do?

Teradata : Disadvantage of Collect stats

Posted: 31 Jul 2013 01:13 PM PDT

I want to know whether any disadvantage can occur from collecting stats on a column that is widely used in join conditions across many procedures (e.g., a column accessed 300+ times in 60 days).

While searching Google, the only issue I found that can cause performance degradation is out-of-date stats. In this case that is not going to happen, as we run COLLECT STATS whenever the table's data changes.

Is there any other disadvantage to collecting stats, besides space consumption?

Can't change root password: "The system cannot find the file specified."

Posted: 31 Jul 2013 08:46 AM PDT

I'm trying to change the root password in MySQL on my development machine (I've just installed MySQL, so it currently doesn't have a password), but it keeps failing with the following error message:

The system cannot find the file specified.

I'm using MySQL 5.1.70 (x86, 32-bit) on Windows 7 SP1 (64 bits). I've added MySQL's "bin" directory to my "Path" environment variable.

In the comments of the MySQL documentation, I read that I should have installed the service using the absolute path, so I stopped MySQL and uninstalled the service:

C:\Windows\system32>mysqld --remove
Service successfully removed.

Then I installed it again, using the absolute path this time:

C:\Windows\system32>C:\web\mysql-5.1.70\bin\mysqld.exe --install
Service successfully installed.

I started MySQL, and tried to change the password again:

C:\Windows\system32>mysqladmin -u root password Pe%8XiduwOqdZ<ZFE5!
The system cannot find the file specified.

I also tried with quotes:

C:\Windows\system32>mysqladmin -u root password 'Pe%8XiduwOqdZ<ZFE5!'
The system cannot find the file specified.

I also tried to change the current directory to MySQL's "bin" directory:

C:\Windows\system32>cd C:\web\mysql-5.1.70\bin

C:\web\mysql-5.1.70\bin>mysqladmin -u root password Pe%8XiduwOqdZ<ZFE5!
The system cannot find the file specified.

C:\web\mysql-5.1.70\bin>mysqladmin -u root password 'Pe%8XiduwOqdZ<ZFE5!'
The system cannot find the file specified.

What's wrong?
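One observation (a guess, not a confirmed fix): < is a redirection operator in cmd.exe, so with the password unquoted the shell tries to read input from a file named ZFE5!, which would produce exactly "The system cannot find the file specified". Windows quoting uses double quotes rather than single quotes:

C:\Windows\system32>mysqladmin -u root password "Pe%8XiduwOqdZ<ZFE5!"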

InnoDB tables inaccessible after reboot

Posted: 31 Jul 2013 11:46 AM PDT

After each reboot, I can't see my InnoDB tables. From the command line I can list the tables, but I cannot access them.

mysql> show tables;
+----------------------+
| Tables_in_xpl_ticket |
+----------------------+
| active_list          |
| bill_block           |
| bill_block_list      |
| block                |
| block_list           |
| box                  |
| cisco_switch         |
| mac_add              |
| mac_change           |
| month                |
| new_connect          |
| new_user             |
| open                 |
| package_change       |
| paid_list            |
| pay                  |
| problem              |
| re_open              |
| refund               |
| ticket               |
| user                 |
| user_log             |
+----------------------+
22 rows in set (0.00 sec)

But when I want to access a table, it says the table doesn't exist:

mysql> select * from active_list;
ERROR 1146 (42S02): Table 'xpl_ticket.active_list' doesn't exist

Edit by RolandoMySQLDBA

Please run the following in MySQL:

SELECT VERSION();

SELECT CONCAT('[',table_name,']') tbl_name
FROM information_schema.tables WHERE table_schema='xpl_ticket';

Please run the following in the OS:

cd /var/lib/mysql/xpl_ticket
ls -l

Hi RolandoMySQLDBA,

Thank you for your quick reply. Here are all of those outputs.

mysql> SELECT VERSION();
+-------------------------+
| VERSION()               |
+-------------------------+
| 5.5.31-0ubuntu0.13.04.1 |
+-------------------------+
1 row in set (0.00 sec)

mysql> SELECT CONCAT('[',table_name,']') tbl_name
    -> FROM information_schema.tables WHERE table_schema='xpl_ticket';
+-------------------+
| tbl_name          |
+-------------------+
| [active_list]     |
| [bill_block]      |
| [bill_block_list] |
| [block]           |
| [block_list]      |
| [box]             |
| [cisco_switch]    |
| [mac_add]         |
| [mac_change]      |
| [month]           |
| [new_connect]     |
| [new_user]        |
| [open]            |
| [package_change]  |
| [paid_list]       |
| [pay]             |
| [problem]         |
| [re_open]         |
| [refund]          |
| [ticket]          |
| [user]            |
| [user_log]        |
+-------------------+
22 rows in set (0.03 sec)

My data directory is not in the /var/lib/mysql path; I have changed it to /var/www/xampp/mysql/data. So I executed the following instead:

Lurid / # cd /var/www/xampp/mysql/data/xpl_ticket/
Lurid xpl_ticket # ls -l
total 265
-rwxrwxrwx 1 root root 9272 Jun  6 12:48 active_list.frm
-rwxrwxrwx 1 root root 8654 Jun  6 12:48 bill_block.frm
-rwxrwxrwx 1 root root 9272 Jun  6 12:48 bill_block_list.frm
-rwxrwxrwx 1 root root 8654 Jun  6 12:48 block.frm
-rwxrwxrwx 1 root root 9272 Jun  6 12:48 block_list.frm
-rwxrwxrwx 1 root root 8802 Jun  6 12:48 box.frm
-rwxrwxrwx 1 root root 8648 Jun  6 12:48 cisco_switch.frm
-rwxrwxrwx 1 root root   65 Jun  6 12:48 db.opt
-rwxrwxrwx 1 root root 8690 Jun  6 12:48 mac_add.frm
-rwxrwxrwx 1 root root 8698 Jun  6 12:48 mac_change.frm
-rwxrwxrwx 1 root root 8618 Jun  6 12:48 month.frm
-rwxrwxrwx 1 root root 8716 Jun  6 12:48 new_connect.frm
-rwxrwxrwx 1 root root 9002 Jun  6 12:48 new_user.frm
-rwxrwxrwx 1 root root 8680 Jun  6 12:48 open.frm
-rwxrwxrwx 1 root root 8724 Jun  6 12:48 package_change.frm
-rwxrwxrwx 1 root root 8692 Jun  6 12:48 paid_list.frm
-rwxrwxrwx 1 root root 8592 Jun  6 12:48 pay.frm
-rwxrwxrwx 1 root root 8802 Jun  6 12:48 problem.frm
-rwxrwxrwx 1 root root 8670 Jun  6 12:48 refund.frm
-rwxrwxrwx 1 root root 8714 Jun  6 12:48 re_open.frm
-rwxrwxrwx 1 root root 8900 Jun  6 12:48 ticket.frm
-rwxrwxrwx 1 root root 8704 Jun  6 12:48 user.frm
-rwxrwxrwx 1 root root 8808 Jun  6 12:48 user_log.frm

Hi Mannoj, these tables were created in this same database.

I have some errors in my /var/log/mysql/error.log. I don't understand what they are; here they are:

130611 12:41:28 [Note] /usr/sbin/mysqld: Normal shutdown
130611 12:41:29 [Note] Event Scheduler: Purging the queue. 0 events
130611 12:41:30  InnoDB: Starting shutdown...
130611 12:41:31  InnoDB: Shutdown completed; log sequence number 1595675
130611 12:41:31 [Note] /usr/sbin/mysqld: Shutdown complete

130611 12:43:15 [Note] Plugin 'FEDERATED' is disabled.
130611 12:43:15 InnoDB: The InnoDB memory heap is disabled
130611 12:43:15 InnoDB: Mutexes and rw_locks use GCC atomic builtins
130611 12:43:15 InnoDB: Compressed tables use zlib 1.2.7
130611 12:43:15 InnoDB: Using Linux native AIO
130611 12:43:15 InnoDB: Initializing buffer pool, size = 128.0M
130611 12:43:15 InnoDB: Completed initialization of buffer pool
InnoDB: The first specified data file ./ibdata1 did not exist:
InnoDB: a new database to be created!
130611 12:43:15  InnoDB: Setting file ./ibdata1 size to 10 MB
InnoDB: Database physically writes the file full: wait...
130611 12:43:15  InnoDB: Log file ./ib_logfile0 did not exist: new to be created
InnoDB: Setting log file ./ib_logfile0 size to 5 MB
InnoDB: Database physically writes the file full: wait...
130611 12:43:16  InnoDB: Log file ./ib_logfile1 did not exist: new to be created
InnoDB: Setting log file ./ib_logfile1 size to 5 MB
InnoDB: Database physically writes the file full: wait...
InnoDB: Doublewrite buffer not found: creating new

Thanks to both of you.

Fulltext stoplist replication

Posted: 31 Jul 2013 07:47 PM PDT

In MS SQL Server 2008 R2 there is replication of a table with a fulltext index on it, but the stoplist that is associated with the replicated fulltext index doesn't replicate.

Is there any way to replicate the stoplist as well?

sql server database sharding - what to do with common data / non sharded data

Posted: 31 Jul 2013 06:47 PM PDT

We have a very large scale, enterprise-level database. As part of our business model, all web users hit our web servers at the same time each month, which in turn hammers our SQL box. The traffic is very heavy and continues to grow heavier as the company grows. SQL proc optimization has been performed, and hardware has already been scaled up to a very high level.

We are looking to shard the database now to ensure that we can handle company growth and future loads.

We have decided what particular data should be sharded. It is a subset of our database which is highly utilized.

However, my question is regarding the non-sharded data which is common/universal. An example of data like this may be an Inventory table, or possibly an Employee table, user table, etc.

I see two options to handle this common/universal data:

1) Design 1 - Place the common/universal data in an external database. All writes will occur here. This data will then be replicated down to each shard, allowing each shard to read it and inner join to it in T-SQL procs.

2) Design 2 - Give each shard its own copy of all common/universal data. Let each shard write locally to these tables and use SQL merge replication to update/sync this data on all other shards.

concerns about design #1

1) Transactional issues: if you have a situation in which you must write or update data in a shard and then write/update a common/universal table in one stored proc, you will no longer be able to do this easily. The data now exists on separate SQL instances and databases. You may need to involve MS DTC (the Distributed Transaction Coordinator) to see if you can wrap these writes into a transaction, since they are in a separate database. Performance is a concern here, and possible rewrites may be involved for procs that write to sharded and common data.

2) A loss of referential integrity: it is not possible to enforce cross-database referential integrity.

3) Recoding large areas of the system so that it knows to write common data to the new universal database but read common data from the shards.

4) Increased database trips: as in #1 above, when you run into a situation in which you must update sharded data and common data, you are going to make multiple round trips to accomplish this, since the data is now in separate databases. There is some network latency here, but I am not as worried about this issue as the above three.

concerns about design #2

In design #2, each shard gets its own instance of all common/universal data. This means that all code that joins to or updates common data continues to work/run just as it does today. There is very little recoding/rewriting needed from the development team. However, this design depends completely on merge replication to keep data in sync across all shards. The DBAs are highly skilled and are very concerned that merge replication may not be able to handle this, and that should merge replication fail, recovery from that failure is not great and could impact us very negatively.

I am curious to know if anyone has gone with design option #2. I am also curious to know if I am overlooking a 3rd or 4th design option that I do not see.

Thank you in advance.

Defragmenting SQL Server data and log files live with MoveFile API

Posted: 31 Jul 2013 02:26 PM PDT

My employer is looking at deploying NTFS defragmentation software that uses the Windows MoveFile API to defragment open files. This would get deployed to thousands of SQL Server servers running SQL versions from 2005-2012 and Windows versions from 2003 and 2008 R2. If it matters, the product I'm talking about is PerfectDisk, but I believe there are similar programs that work the same way.

Testing so far hasn't turned up many problems aside from occasional I/O performance issues, which is no surprise and can be worked around by rescheduling & tuning the defragmentation. However, I'm more concerned about the risk of data corruption.

Does anyone here have experience with running this type of software in production on database servers? Did you experience any data corruption?

I'm rather uneasy about this despite being unable to find any solid evidence that it poses a problem.

Thanks for any replies.

Edited to add: This horrible idea was thankfully just forgotten about, possibly in part due to some of the warnings I gave.

What's the difference between a temp table and table variable in SQL Server?

Posted: 31 Jul 2013 08:43 AM PDT

This seems to be an area with quite a few myths and conflicting views.

So what is the difference between a table variable and a local temporary table in SQL Server?

Logging/profiling an unpredictably slow stored procedure

Posted: 31 Jul 2013 10:40 AM PDT

I have a stored procedure that sometimes runs significantly slower than others. I have two different timers measuring the execution time at different scopes. One is in the client code, measuring only the OCI execute call. The second is inside the PL/SQL procedure itself, timing the inside of the procedure's main block.

Normally, if the procedure's timer takes, say, 5 seconds to run, the client's timer is only a few milliseconds greater. This is completely reasonable. The problem is that sometimes the client's timer will be much greater than the procedure's, even double.

I've eliminated any issue on the client or in the transport layer. I enabled client and server logging in sqlnet.ora at support level. On the client, the gap between request and response packets is essentially equal to the client code's timer. On the server, a similar gap is seen between the incoming request and the outgoing response.

The question remains, if the overall call takes 10 seconds, and inside the procedure takes only 5 seconds, where did the other 5 seconds go?

I don't expect the exact answer to the above question, but I would like to hear ideas on how to go about finding the answer. What logs can I turn on to give me insight into what Oracle is doing both after the request arrives but before the actual call to the procedure, and after the procedure until the response is sent back to the client?

Edit: Perhaps there's something wrong with my timing methodology? I'm using dbms_utility.get_cpu_time() (which provides time in hundredths of a second, hence the division by 100.)

PROCEDURE foo (elapsed_time OUT NUMBER) IS
    start_time NUMBER;
BEGIN
    start_time := dbms_utility.get_cpu_time();
    -- ... PL/SQL stuffs
    elapsed_time := (dbms_utility.get_cpu_time() - start_time) / 100;
END;
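A caveat worth noting (this may itself explain part of the gap, though that is speculation): get_cpu_time counts only CPU consumed by the session, not time spent waiting on I/O or locks. The wall-clock counterpart is dbms_utility.get_time, also in hundredths of a second:

    start_time := dbms_utility.get_time();
    -- ... PL/SQL stuffs
    elapsed_time := (dbms_utility.get_time() - start_time) / 100;

If the two timers diverge, the difference is wait time inside the procedure rather than time lost between client and server.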

Any Open Source / free ETL out there?

Posted: 31 Jul 2013 12:35 PM PDT

I was using Pentaho Data Integration even before Pentaho bought it and called it that. I have the last free version.

I went on their website recently to see if they had released another version, only to find out my favorite open source ETL is not so open anymore, and not quite free.

Do any of you know of alternatives in affordable, easy-to-use ETL tools?

[SQL Server] Query hangs on table variable



Query hangs on table variable

Posted: 06 Mar 2013 02:07 PM PST

I have a query that consistently runs fast and returns the expected output, but when I try to insert the output into a table variable it just runs indefinitely, and I can't find the problem in the code. I can't really post the query or the output (this is a data mining task with healthcare data and there are strict privacy rules), but I will describe the situation as best as I can.

The query that runs fine takes these steps:

  • Declares datetime variables for StartDate and EndDate
  • Declares a table variable (an ID to ID crosswalk) with 2 fields and inserts 691,969 records with a select query
  • Declares another table variable (simple list of codes) with 1 field and inserts 465 records directly
  • Finally, there is a union select that pulls 3 fields each from 3 different tables, each inner joined to the crosswalk table variable, where Date is between StartDate and EndDate and the code is in the code list table variable.

This query returns 53,463 records in about 50 seconds.

When I try to insert this output into another table variable, it doesn't throw an error, it just runs - I have let it go over 26 hours before just to see what would happen - it just keeps executing... Am I pushing my luck with all the table variables? I'm stumped.

tempdb and permanent tables

Posted: 30 Jul 2013 03:35 PM PDT

Hi all, I have created some permanent tables (a mistake) in tempdb, but cannot view them in Management Studio. How do I view and delete them all?
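A sketch for finding them (they are ordinary user tables in tempdb, unlike real temp tables whose names start with #):

USE tempdb;
SELECT name, create_date FROM sys.tables;
-- then, for each table you recognise as a mistake:
-- DROP TABLE dbo.<name>;

Note that they will also disappear on their own at the next service restart, since tempdb is rebuilt then.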

[MS SQL Server] How to give users access only to certain columns of certain tables in a database?



How to give users access only to certain columns of certain tables in a database?

Posted: 30 Jul 2013 07:19 AM PDT

Hello, I am new to SQL Server 2008 administration and I have been asked to give a user access to only 1 table of a database. In addition, I have been asked to give them access to only 2 columns within that table. How can I do this? I was told there is more than one way; if you know multiple ways, please share. If I have to type a command, where do I type it? Please start from scratch with me. Thank you very much, I appreciate your input.
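Two common approaches (a sketch; the table, column and user names are placeholders, and the commands run in a new query window in Management Studio):

-- 1) Column-level GRANT directly on the table:
GRANT SELECT (Col1, Col2) ON dbo.TheTable TO SomeUser;

-- 2) Or expose only those columns through a view and grant on that:
CREATE VIEW dbo.TheTable_Limited AS
    SELECT Col1, Col2 FROM dbo.TheTable;
GO
GRANT SELECT ON dbo.TheTable_Limited TO SomeUser;

Either way, the user also needs a database user mapped to their login, and no explicit DENY overriding the grant.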

Connect to SQL Server

Posted: 30 Jul 2013 09:08 PM PDT

Hi Experts, we lost the SQL Server sa password, and our Windows authentication is not allowing us to log in. How can I connect to the server now?
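For reference, the documented escape hatch (a sketch; it assumes a default instance, membership in the server's local Administrators group, and tolerates a brief outage): start the instance in single-user mode, which lets a local administrator connect as sysadmin and reset the password.

net stop MSSQLSERVER
net start MSSQLSERVER /m
sqlcmd -E -S .
1> ALTER LOGIN sa WITH PASSWORD = 'NewStrongPassword1!';
2> GO

Afterwards, restart the service normally (without /m).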

Monitoring performance after index deployment.

Posted: 30 Jul 2013 09:26 AM PDT

I am trying to figure out how to monitor performance of the database before and after deploying an index. Are there any simple ways of doing this?
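One lightweight option (a sketch; these counters reset when the instance restarts) is to snapshot index usage before and after deployment:

SELECT OBJECT_NAME(s.object_id) AS table_name, i.name AS index_name,
       s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.database_id = DB_ID();

High user_seeks with low user_updates is the pattern you hope to see for a new index; the reverse suggests it costs more to maintain than it helps.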

Error while installing

Posted: 30 Jul 2013 11:30 PM PDT

Hi all, here I am trying to install SQL Server on our server, but it shows an error message like "external component has thrown an exception". The C drive has little space, so we are installing the data files on the E drive. The firewall is disabled, and still the same problem occurs. Can you please help me out?

SQl2k8 R2 Ent stops installation after installing setup files

Posted: 14 Jul 2012 10:31 AM PDT

Hi all. I'm having trouble installing SQL 2k8 R2 Ent on a Windows Server 2008 R2 box. This box had 2k8 express, 2k8 web edition and 2012 R0. Since an upgrade failed, I removed all 3 and attempted to do a fresh install, but ran into a new problem.

The initial error mentioned something about reporting services files and database still being on the server. A google search mentioned removing these manually, which I did. But as soon as I did that, rebooted and attempted another install, I ran into a different problem.

When I run setup.exe, it launches to the initial screen and I click "New installation or add features to an existing installation" and the installation starts. As soon as the installation finishes installing the set-up files, the install window closes and I'm back to the installation center window that originally opens when you run setup.exe. No error, it just appears to close the current attempt at installation.

I checked my setup bootstrap/log files and there are numerous text files in this folder, but nothing jumps out as a straight "error" or "cancelling setup". Below is the only thing that may relate to an error; it's the end of the file named "Detail_ComponentUpdate.txt".

Copying the files from the DVD to the hard drive yielded the same results. I uninstalled everything and manually cleared the install directory. I also tried the upgrade option. I deleted what registry keys I could find relating to SQL Server. Still no luck, although these files installed fine on another server.

Any ideas?
Thanks,
Mark

2012-07-15 19:02:17 Slp: C:\Users\Administrator\Desktop\SQL Server 2008 R2\x64\FixSqlRegistryKey_x86.exe exit code: 0
2012-07-15 19:02:17 Slp: Completed Action: FixSqlRegistryKey, returned True
2012-07-15 19:02:17 Slp: ----------------------------------------------------------------------
2012-07-15 19:02:17 Slp: Running Action: PostMsiTimingConfigAction
2012-07-15 19:02:17 Slp: The object state property 'workflow' is null/empty.
2012-07-15 19:02:17 Slp: Error: Action "PostMsiTimingConfigAction" failed during execution.
2012-07-15 19:02:17 Slp: Completed Action: PostMsiTimingConfigAction, returned False
2012-07-15 19:02:17 Slp: Completed Action: ExecuteStandardTimingsWorkflow, returned True
2012-07-15 19:02:17 Slp: ----------------------------------------------------------------------
2012-07-15 19:02:17 Slp: Running Action: ExecuteCloseWorkflow
2012-07-15 19:02:17 Slp: Workflow to execute: 'CLOSE'
2012-07-15 19:02:17 Slp: ----------------------------------------------------------------------
2012-07-15 19:02:17 Slp: Skipping Action: ProduceStatusLogs
2012-07-15 19:02:17 Slp: Action is being skipped due to the following restrictions:
2012-07-15 19:02:17 Slp: Condition "Is the datastore flag /Scenario/GlobalFlags/AreProductFeaturesSupported set" did not pass as it returned false and true was expected.
2012-07-15 19:02:17 Slp: ----------------------------------------------------------------------
2012-07-15 19:02:17 Slp: Running Action: FinalizeProgressStatus
2012-07-15 19:02:17 Slp: Completed Action: FinalizeProgressStatus, returned True
2012-07-15 19:02:17 Slp: ----------------------------------------------------------------------
2012-07-15 19:02:17 Slp: Running Action: RebootMessageAction
2012-07-15 19:02:17 Slp: Completed Action: RebootMessageAction, returned True
2012-07-15 19:02:17 Slp: ----------------------------------------------------------------------
2012-07-15 19:02:17 Slp: Skipping Action: FinishPage
2012-07-15 19:02:17 Slp: Action is being skipped due to the following restrictions:
2012-07-15 19:02:17 Slp: Condition "Is the user's scenario set to EditionUpgrade" did not pass as it returned false and true was expected.
2012-07-15 19:02:17 Slp: ----------------------------------------------------------------------
2012-07-15 19:02:17 Slp: Skipping property RequireMode of class: StopAction
2012-07-15 19:02:17 Slp: Input of property will not be set due to the following failed restriction(s):
2012-07-15 19:02:17 Slp: Condition "Is the user's scenario set to ComponentUpdate" did not pass as it returned true and false was expected.
2012-07-15 19:02:17 Slp: ----------------------------------------------------------------------
2012-07-15 19:02:17 Slp: Running Action: CloseUI
2012-07-15 19:02:17 Slp: Stop Action
2012-07-15 19:02:17 Slp: Completed Action: CloseUI, returned True
2012-07-15 19:02:17 Slp: Completed Action: ExecuteCloseWorkflow, returned True
2012-07-15 19:02:17 Slp: ----------------------------------------------------------------------
2012-07-15 19:02:17 Slp: Running Action: PassPidBackFromComponentUpdate
2012-07-15 19:02:17 Slp: Pass the flag CuPidDisplayed to the main setup
2012-07-15 19:02:17 Slp: ENU value specified, trying to pass the value to main setup
2012-07-15 19:02:17 Slp: PID value specified, trying to pass the value to main setup
2012-07-15 19:02:17 Slp: SQMREPORTING value is specified in CU workflow, trying to pass the value to main setup
2012-07-15 19:02:17 Slp: Completed Action: PassPidBackFromComponentUpdate, returned True
2012-07-15 19:02:19 Slp:
2012-07-15 19:02:19 Slp:
2012-07-15 19:02:19 Slp: ----------------------------------------------------------------------
2012-07-15 19:02:19 Slp:
2012-07-15 19:02:19 Slp: Setup result: 0

[Articles] Less QA?



Less QA?

Posted: 30 Jul 2013 11:00 PM PDT

It seems that QA is being cut more and more as software development advances. Is that a good thing? Steve Jones has some thoughts.

[SQL 2012] SQL Server TempDB ballooning to more than 75 GB



SQL Server TempDB ballooning to more than 75 GB

Posted: 30 Jul 2013 09:43 AM PDT

Hi, in one of our databases, while executing a procedure serving important business logic - it has 11 joins, then filtering on a specific column and sorting with ORDER BY on another column - the joins, when I examine them separately, fetch millions of rows. Now I see that TempDB is ballooning to 75 GB, which is the available disk space, and the query fails.

What could be the possible reason?

  • Is TempDB behaving incorrectly, or can I really expect that kind of size (75 GB)?
  • If the TempDB behavior is correct, is there something I can do to mitigate the situation? Will faster execution of the query, more statistics, relevant indexes, and more seeking than scanning of indexes/tables help solve the situation?

I know a possible solution would be to relocate the db to a disk with much more space, but I want to figure out the tuning options first, since I do not know what maximum size TempDB will bloat up to. Any relevant suggestion would be great.

thanks, Mrinal
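To see where that space is going while the procedure runs, the tempdb DMVs are a reasonable first stop (a sketch; "internal objects" are the sorts, hash tables and spools a plan spills to tempdb):

SELECT session_id,
       SUM(user_objects_alloc_page_count)     AS user_pages,
       SUM(internal_objects_alloc_page_count) AS internal_pages
FROM sys.dm_db_task_space_usage
GROUP BY session_id
ORDER BY internal_pages DESC;

If internal pages dominate, better indexing (to avoid the big sort for the ORDER BY) and fresher statistics are the usual levers; 75 GB of spill from multi-million-row joins is, unfortunately, plausible.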

SQL WHERE clause in Excel? HELP!!!

Posted: 31 Jul 2013 02:34 AM PDT

I have variables on an Excel form (version 2010) that connects to a Teradata database. The first two are dates (DateworkedF and DateworkedT); the other two are text fields (StatusX and ErrorTypeX). I can return a record set from Teradata using either "Status" or "ErrorType", but cannot get the date from/to to work. Also, if I leave any of the fields blank, I get no records returned. I want to be able to search on any or all of these fields (if a field is blank, return all values). I can't figure out the syntax:

Query = "SEL SRN_ACCT_NUM, QUEUE_NAME, ERROR_TYPE, SUB_ERROR_TYPE, DATE_WORKED, MONTH_WORKED, DATE_APPLICATION_RECEIVED, ASSOC_WORKED, ACCT_ID, STATUS, COMMENTS, REVIEWED_IND, REVIEWED_AGENT, LOAD_DT " & _
        "FROM UD402.JD_MCP_MASTER WHERE " & _
        "(DATE_WORKED >= #" & DateworkedF & "# Or #" & DateworkedF & "# IS NULL)" & _
        "AND (DATE_WORKED <= #" & DateworkedT & "# Or #" & DateworkedT & "# IS NULL)" & _
        "AND (STATUS = '" & StatusX & "' OR '" & StatusX & "' IS NULL)" & _
        "AND (ERROR_TYPE = '" & ErrorTypeX & "' or '" & ErrorTypeX & "' IS NULL);"
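Two things stand out (observations, not verified against this data): #...# date delimiters are Access/Jet syntax, not Teradata, where date literals look like DATE '2013-07-31'; and a blank Excel field arrives as an empty string, not NULL, so the IS NULL tests never fire. A sketch that builds the WHERE clause only from non-blank fields (assuming the date cells are formatted yyyy-mm-dd):

Dim W As String
W = "1=1"
If DateworkedF <> "" Then W = W & " AND DATE_WORKED >= DATE '" & DateworkedF & "'"
If DateworkedT <> "" Then W = W & " AND DATE_WORKED <= DATE '" & DateworkedT & "'"
If StatusX <> "" Then W = W & " AND STATUS = '" & StatusX & "'"
If ErrorTypeX <> "" Then W = W & " AND ERROR_TYPE = '" & ErrorTypeX & "'"
Query = "SEL SRN_ACCT_NUM, QUEUE_NAME, ERROR_TYPE, SUB_ERROR_TYPE, DATE_WORKED, " & _
        "MONTH_WORKED, DATE_APPLICATION_RECEIVED, ASSOC_WORKED, ACCT_ID, STATUS, " & _
        "COMMENTS, REVIEWED_IND, REVIEWED_AGENT, LOAD_DT " & _
        "FROM UD402.JD_MCP_MASTER WHERE " & W & ";"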

Annual Count

Posted: 31 Jul 2013 12:26 AM PDT

I am working on an HR project and I have one final component that I am stuck on.

I have an Excel file that is loaded into a folder every month. I have built a package that captures the data from the Excel file and loads it into a staging table (transforming a few bits of data). I then combine it with another table in a view. I have another package that loads that view into a Master table, and I have added a Slowly Changing Dimension so that it only updates what has been changed (it's a table of all employees, positions, hire dates, term dates, etc.).

Our HR wants to have this data in a report (with charts and tables), and they wanted it in a familiar format, so I made a data connection with Excel loading the data into a series of pivot tables.

I have one final component that I can't seem to figure out. At the end of every year I need to capture a count of all active employees and all termed employees for that year. Just a count. So the data will look like this:

|Year|HistoricalHC|NumbTermedEmp|
|2010|447         |57           |
|2011|419         |67           |
|2012|420         |51           |

The data is in one table labeled [EEMaster]. To test the count I have the following:

SELECT COUNT([PersNo]) AS HistoricalHC
FROM [dbo].[EEMaster]
WHERE [ChangeStatus] = 'Current' AND [EmpStatusName] = 'Active'

This returns the HistoricalHC for 2013 as 418.

SELECT COUNT([PersNo]) AS NumbOfTermEE
FROM [dbo].[EEMaster]
WHERE [ChangeStatus] = 'Current' AND [EmpStatusName] = 'Withdrawn' AND [TermYear] = '2013'

This returns the number of termed employees for 2013 as 42.

I have created a table to report from, called [dbo].[TORateFY], into which I have manually entered previous years' data (the same layout as above). I need a script (or possibly a couple of scripts) that will add the numbers every year with the year that the data came from, so on Dec 31st this package will run and add |2013|418|42| as the next row, and so on. Thank you in advance.
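A minimal sketch of such a script, assembled from the two test queries above (assuming the year can be taken from the run date and the columns are as described):

INSERT INTO [dbo].[TORateFY] ([Year], [HistoricalHC], [NumbTermedEmp])
SELECT YEAR(GETDATE()),
       (SELECT COUNT([PersNo]) FROM [dbo].[EEMaster]
        WHERE [ChangeStatus] = 'Current' AND [EmpStatusName] = 'Active'),
       (SELECT COUNT([PersNo]) FROM [dbo].[EEMaster]
        WHERE [ChangeStatus] = 'Current' AND [EmpStatusName] = 'Withdrawn'
          AND [TermYear] = CAST(YEAR(GETDATE()) AS varchar(4)));

Scheduled as a SQL Agent job (or an Execute SQL task in the existing package) on Dec 31st, this appends one row per year.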

SQL Instance Name not appearing in drop down list on Cluster Node Configuration Window

Posted: 29 May 2013 03:50 PM PDT

I am trying to install the MS SQL Server 2012 failover cluster on Windows Server 2012. I successfully installed the failover cluster instance on my primary node, but when I run the installation process on the passive node to add the node to the failover cluster, I am stuck with a very unique kind of issue.

I am following the standard installation process and get the same windows for each step, but after the License Agreement window, on the Cluster Node Configuration window, the "SQL Server Instance Name" drop-down box does not list the name of the instance which is already installed on the primary node. The instance's complete information does appear in the box below.

The only issue is that the instance name is not appearing in the drop-down list; that's why I am not able to select it, and when I click Next it throws an error and does not proceed.

Please can you tell me the possible cause behind this issue? This is my first time doing a cluster installation in SQL Server 2012. I truly appreciate your help.

What is the performance point at which it makes sense to get Enterprise instead of Standard

Posted: 22 Jul 2013 02:09 AM PDT

We are looking to build a SQL Server machine. We are looking to get SQL 2012, but we are not sure which edition. At this point, I think Standard is all we need, but to make sure we cover our bases, I need to know at what point it makes sense to get Enterprise. Is there a number of users, a database size, an amount of usable memory, or something else that would be a reasonable indicator that we should get Enterprise?

CREATE Login doesnt recognise a service account

Posted: 30 Jul 2013 05:38 AM PDT

I am unable to use the T-SQL statement "CREATE LOGIN xxx\yyy FROM Windows" to create a login in SQL Server. I can manually go through SSMS and create the login by doing 'New Login', but when I try to do it with T-SQL, it says that xxx\yyy is not a Windows user or group and errors out. When I do it manually through SSMS, it does recognise that account, but not with T-SQL. What am I missing?
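One detail worth checking (a guess based on the statement as quoted): a Windows login containing a backslash must be bracket-quoted in T-SQL, which SSMS does for you behind the scenes:

CREATE LOGIN [xxx\yyy] FROM WINDOWS;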

AlwaysOn Synchronous Replica in Synchronizing State

Posted: 30 Jul 2013 10:23 PM PDT

I have an AlwaysOn secondary node that is set to synchronous commit and is stuck in the Synchronizing state. This had previously been working fine, but I needed to restart the secondary node because we added another processor to the virtual machine. The server started fine, but it will not synchronize. How can I troubleshoot this issue? There is very limited information from Microsoft on how to determine what the problem is. I have checked the AlwaysOn Health Events, but there is nothing that suggests a problem. I can't believe this is due to network latency, because these servers are beside each other on the same subnet and had previously been synchronizing fine. Could it be due to the addition of the CPU? Any suggestions appreciated.

Error Configuring Master Data Services SQL Server 2012

Posted: 29 May 2012 12:10 PM PDT

Hi All, I would appreciate a bit of help with this known issue. Jose Chinchilla (http://sqljoe.wordpress.com/2011/11/29/sql-server-2012-master-data-services-error-the-required-svc-handler-mappings-are-not-installed-in-iis/) pointed out that this was a known issue with RC0.

When one tries to configure Master Data Services, the first screen indicates that IIS is not configured correctly and that a .svc file is not configured correctly. This is the message:

"Before creating an MDS web application, you must correct the following errors: Internet Information Services (IIS) is not configured on this server. The required .svc handler mappings are not installed in IIS. For more information, see http://go.microsoft.com/fwlink/?LinkId=226284. For web application requirements, see http://go.microsoft.com/fwlink/?LinkId=215355."

This issue has been communicated to Microsoft via Connect ID 701993 and is documented in the Technet article "Troubleshoot Installation and Configuration Issues (Master Data Services in SQL Server 2012)".

Now, my operating system is Windows 7 Ultimate. Further, I believe that I have removed all of the Enterprise edition RTM trial from my PC, and I have installed the DEVELOPER edition copy of 2012 that I purchased. The plot thickens... I configured IIS to the settings as recommended on page 16 of Tyler Graham and Suzanne Selhorn's book. Should anyone have run into this issue and have any ideas, I would be most grateful.

regards Steve Simon

P.S. MDS 2008 works perfectly on my machine (no issue).

How to loop through parameters to reuse the same SQL queries

Posted: 30 Jul 2013 04:36 AM PDT

create table #temp (ID int, Type nvarchar(10), Value int)

insert into #temp values (1, 'A', '25')
insert into #temp values (2, 'A', '47')
insert into #temp values (3, 'B', '3')
insert into #temp values (4, 'C', '3')
insert into #temp values (5, 'D', '7')

-- use the same sql with type A and B, first only with A and then only with B
-- parameter/loop?

-- do something with type A -- START
SELECT ID, Type, Value + 1 from #temp where type = N'A'
-- more than one sql query
-- cleanup
-- END

-- use the same sql with type B -- START
SELECT ID, Type, Value + 1 from #temp where type = N'B'
-- more than one sql query
-- cleanup
-- END

-- Do something else with type C and D

/Clas
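One way to express that (a sketch using a simple loop over the wanted types; a cursor would work equally well):

DECLARE @types TABLE (Type nvarchar(10));
INSERT INTO @types VALUES (N'A'), (N'B');

DECLARE @t nvarchar(10);
WHILE EXISTS (SELECT 1 FROM @types)
BEGIN
    SELECT TOP (1) @t = Type FROM @types ORDER BY Type;

    -- START: the shared queries, parameterized by @t
    SELECT ID, Type, Value + 1 FROM #temp WHERE type = @t;
    -- more queries / cleanup here
    -- END

    DELETE FROM @types WHERE Type = @t;
END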

Replication - Adding article to a publication causing the snapshot of all other articles

Posted: 30 Jul 2013 12:44 PM PDT

Hi guys, adding an article (table) to a publication and then running the snapshot agent should create a snapshot only for the new article. However, sometimes replication decides to re-snapshot every article in the publication. So I have 2 questions:

1. Why are all articles sometimes re-snapshotted when only one article was added?
2. Is it possible to know beforehand which articles are marked to be snapshotted the next time the snapshot agent runs (such as in a DMV or system table)?

Thanks a lot.

Always On database as transactional subscriber

Posted: 30 Jul 2013 12:04 PM PDT

What's the background of this question? I'm on my first project using AlwaysOn High Availability Groups. We want to fill our databases using transactional replication, and we are dealing with some special behaviour that is causing us trouble. It seems that there is nearly no information about some of these things, so I'm wondering if we are the first and only company on this planet using this constellation?

Partition Different Connection Strings

Posted: 23 Jul 2013 08:09 PM PDT

I have 3 partitions and I want to move 2 of them to a different connection; is that possible?

Need a help in MCSA SQL Server exam 2012 70-461, 462, 463

Posted: 30 Jul 2013 10:03 AM PDT

Hello friends, after getting good experience at my current workplace, I am planning to get certified in MS SQL Server 2008/2012. I went through the webpage that provides the exam information. I am looking for any preparation material or website that can help me prepare, because I cannot afford the books mentioned on the website for exam preparation. Please let me know where I can start reading or get some practice materials. Thanks in advance, Pratik

Management Studio and Multi-Subnet Availability Group Listener

Posted: 30 Jul 2013 06:03 AM PDT

Hello, does anyone know if SSMS is able to use the MultiSubnetFailover parameter to connect to an AG listener? I tried listing it in the advanced connection properties and it didn't work (Management Studio version 11.0.2100.60). We have a multi-subnet availability group with a listener, and after much effort we are able to connect via SQLCMD with the -M and -l30 flags, but I can't connect to the server via the listener name through Management Studio. I can connect to the individual servers, but the listener name times out even if I set the connection timeout to 60 seconds. It seems odd that SSMS wouldn't support the new connection parameters. Thanks! Jason

What happened to retail purchase of SQL Server?

Posted: 30 Jul 2013 03:46 AM PDT

Once upon a time, armed with the appropriate MS part number, you could approach any number of software retailers and buy SQL Server, and they would ship you a shrink-wrapped box. Now it appears the only way to purchase SQL Server is to belong to one of the MS licensing programs. Is my understanding correct? I work for a small company and just need to get SQL Server 2012 Standard edition installed on a 4-core box. I'd rather not have to understand and enter into a licensing program that may not be in my best interest next month or next year. What am I missing? Am I making this too difficult?

Thanks,
Alan

[T-SQL] how to get members of date dimension from current month

[T-SQL] how to get members of date dimension from current month


how to get members of date dimension from current month

Posted: 30 Jul 2013 07:25 PM PDT

Hi,

My query is:

[code="sql"]select distinct calendarmonthkey, calendarmonth
from dim.date
order by calendarmonthkey[/code]

[b]Result:[/b]
205801 jan 2058
205802 feb 2058
........
198001 jan 1980

[b]Expected result:[/b]
201307 jul 2013
201306 jun 2013
201305 may 2013
...........
198001 jan 1980

Alternatively, it could start with the previous month (jun 2013). Please suggest.

Regards,
Niharika
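A minimal sketch, assuming calendarmonthkey is an integer in yyyyMM form as the sample output suggests: filter out future months and sort descending.

[code="sql"]-- Hedged sketch: keep only months up to the current month, newest first
SELECT DISTINCT calendarmonthkey, calendarmonth
FROM dim.date
WHERE calendarmonthkey <= YEAR(GETDATE()) * 100 + MONTH(GETDATE())
ORDER BY calendarmonthkey DESC;

-- To start with the previous month instead, compare against
-- YEAR(DATEADD(MONTH, -1, GETDATE())) * 100 + MONTH(DATEADD(MONTH, -1, GETDATE()))[/code]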

SSIS: cannot change a column's data type using Data Conversion

Posted: 30 Jul 2013 09:42 PM PDT

Hi,

Source: multiple text files.

Requirements:
1. Define the age column as smallint.
2. Convert the gross column from '$ 667 00' to 667.00 and change its data type from string to money.
3. Convert the date format from 12122012 to 12/12/2012 and change its data type from string to date.
4. Finally, load all the flat files into one SQL Server table using a single destination.

I converted the gross column using a Derived Column transformation and the date column using SUBSTRING. Now I cannot change the gross and date columns' data types from string to money and date.

Please help me solve this issue. Thanks in advance.
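For reference, a hedged T-SQL sketch of the same two conversions, assuming the date strings are MMDDYYYY; in a Derived Column expression the equivalent casts would be (DT_CY) and (DT_DATE).

[code="sql"]-- Hedged sketch: string-to-money and string-to-date conversions
DECLARE @gross varchar(20) = '$ 667 00',
        @dt    varchar(8)  = '12122012';

-- '$ 667 00' -> '667 00' -> '667.00' -> money
SELECT CONVERT(money, REPLACE(LTRIM(REPLACE(@gross, '$', '')), ' ', '.'));

-- 'MMDDYYYY' -> 'YYYYMMDD' (unambiguous) -> date
SELECT CONVERT(date, SUBSTRING(@dt, 5, 4) + SUBSTRING(@dt, 1, 2) + SUBSTRING(@dt, 3, 2));[/code]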

T-SQL subquery

Posted: 30 Jul 2013 05:37 PM PDT

Hello guys, I have a performance question: which approach is better for the server? I have this select:

[quote]SELECT TOP 5 CONVERT(DATETIME, WOT.ORDER_DATE) ORDER_DATE, WOT.COMMIT_NO, WOT.BODY_NO, WOT.CAR_CODE, WOT.PROD_FLAG
FROM (
    SELECT TOP 2 WO.ORDER_DATE, WO.COMMIT_NO, WO.BODY_NO,
           WO.CAR_CODE + ' ' + WO.DRIVE_TYPE CAR_CODE, 'T' PROD_FLAG
    FROM TB_WORK_ORDER_TRSB1 WO
    LEFT JOIN TB_TRACKING_TRSB1 TR
           ON WO.ORDER_DATE = TR.ORDER_DATE AND WO.COMMIT_NO = TR.COMMIT_NO
    WHERE WO.ORDER_DATE <= @vLAST_ORDER_DATE
      AND WO.ORDER_DATE + WO.COMMIT_NO <= @vLAST_ORDERCOMMIT
      AND DATA_TYPE <> 'SD'
    ORDER BY WO.ORDER_DATE DESC, WO.COMMIT_NO DESC
    UNION ALL
    SELECT WO.ORDER_DATE, WO.COMMIT_NO, WO.BODY_NO,
           WO.CAR_CODE + ' ' + WO.DRIVE_TYPE CAR_CODE,
           CASE ISNULL(TR.WORK_COMPLETE, '')
                WHEN ''  THEN 'F'
                WHEN 'F' THEN 'Y'
                WHEN 'T' THEN 'G'
           END PROD_FLAG
    FROM TB_WORK_ORDER_TRSB1 WO
    LEFT JOIN TB_TRACKING_TRSB1 TR
           ON WO.ORDER_DATE = TR.ORDER_DATE AND WO.COMMIT_NO = TR.COMMIT_NO
    WHERE WO.ORDER_DATE >= @vLAST_ORDER_DATE
      AND WO.ORDER_DATE + WO.COMMIT_NO > @vLAST_ORDERCOMMIT
      AND DATA_TYPE <> 'SD'
) WOT[/quote]

[b]Which is better: adding the query below to each WHERE clause inside the union, or adding it once to a WHERE on the outer *TEMP* table WOT at the end of the query?[/b]

[quote]AND WO.ORDER_DATE + WO.COMMIT_NO IN (
    SELECT DISTINCT ORDER_DATE + COMMIT_NO
    FROM TB_MASTER_TRSB1
    WHERE ORDER_DATE >= @vLAST_ORDER_DATE
      AND STATION_ID = @rSTATION_ID
)[/quote]

Here are the results from statistics:

[quote]--- NEW with MASTER in each union
CPU time = 0 ms, elapsed time = 0 ms.
Table 'TB_TRACKING_TRSB1'. Scan count 2, logical reads 66, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 1, logical reads 1153, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'TB_WORK_ORDER_TRSB1'. Scan count 2, logical reads 34, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'TB_MASTER_TRSB1'. Scan count 2, logical reads 364, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times: CPU time = 31 ms, elapsed time = 26 ms.
SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times: CPU time = 31 ms, elapsed time = 26 ms.

--- NEW with MASTER in TEMP WOT
Table 'Worktable'. Scan count 1, logical reads 1678, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'TB_MASTER_TRSB1'. Scan count 2, logical reads 364, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'TB_WORK_ORDER_TRSB1'. Scan count 2, logical reads 31, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'TB_TRACKING_TRSB1'. Scan count 2, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times: CPU time = 31 ms, elapsed time = 25 ms.
SQL Server Execution Times: CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times: CPU time = 31 ms, elapsed time = 26 ms.[/quote]

What do you think about it?

Need Help with T-SQL Code

Posted: 30 Jul 2013 07:54 AM PDT

I have a source file with a column Full Name:

Jim k Smith
Robert K Lin
Chris Tim

My destination has FirstName and LastName, and I want to parse Full Name into them. I know how to parse it with SSIS expressions; here they are:

FirstName = LTRIM(RTRIM(TRIM(REVERSE(SUBSTRING(REVERSE([Full Name]),1,FINDSTRING(REVERSE([Full Name])," ",1))))))
LastName = LTRIM(RTRIM(TRIM(SUBSTRING([Full Name],1,FINDSTRING([Full Name]," ",1)))))

Can anyone help me translate this into T-SQL?

Thank you in advance.
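A minimal T-SQL sketch of the same idea: take the first token as one name part and everything after the last space as the other. The sample names all contain a space; add a CHARINDEX guard if that's not guaranteed.

[code="sql"]-- Hedged sketch: split on the first and last space, mirroring the expressions above
DECLARE @FullName varchar(100) = 'Jim k Smith';

SELECT LEFT(@FullName, CHARINDEX(' ', @FullName) - 1)           AS FirstName, -- 'Jim'
       RIGHT(@FullName, CHARINDEX(' ', REVERSE(@FullName)) - 1) AS LastName;  -- 'Smith'[/code]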

Universal CSV-Importer

Posted: 30 Jul 2013 07:17 AM PDT

Is there a way to import CSV files with different numbers of fields (all nvarchar(max)) into one fixed table (with an Import-ID) using a single stored procedure, without having to make many adjustments? The files are flat staging CSV files. Maybe with the PIVOT/UNPIVOT functions etc.? The field and row delimiters are always the same.

Thank you.

Kind regards,
Nicole ;-)
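One hedged sketch of a common workaround: load each line whole into a single nvarchar(max) column (so the field count doesn't matter), tag it with the Import-ID, and split the fields later in T-SQL. The table name, file path, and Import-ID value here are hypothetical.

[code="sql"]-- Fixed landing table: one row per CSV line, tagged with an import id
CREATE TABLE dbo.ImportRaw (ImportID int NOT NULL, RawLine nvarchar(max) NULL);

-- Stage the file with the whole line as a single field
CREATE TABLE #raw (RawLine nvarchar(max));
BULK INSERT #raw
FROM 'C:\staging\file1.csv'
WITH (FIELDTERMINATOR = '\0',  -- a character that never occurs, so the line stays whole
      ROWTERMINATOR  = '\n');

INSERT dbo.ImportRaw (ImportID, RawLine)
SELECT 42, RawLine FROM #raw;  -- 42 = this file's Import-ID[/code]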

[SQL Server 2008 issues] Blocking Query

[SQL Server 2008 issues] Blocking Query


Blocking Query

Posted: 30 Jul 2013 07:02 PM PDT

Hi team,

I usually run sp_who2 to check for locks, and I found that some SPIDs show a value in the BlkBy column, with 'SELECT' in the Command column. How can I find the full text of the query that is being blocked?
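A minimal sketch using the execution DMVs: list each blocked session together with the statement it is running.

[code="sql"]-- Hedged sketch: statement text for sessions that are currently blocked
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       t.text AS sql_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;[/code]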

Convert Varchar to Date??

Posted: 30 Jul 2013 02:00 AM PDT

Hi,

I have a date stored in a varchar as "Jul 24 2013 8:05AM". I would like to convert it to a date field to do calculations, like "2013-07-24 00:00:00.000". The time is not important.

Thanks in advance,
Joe
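A minimal sketch: that string is in SQL Server's default 'mon dd yyyy hh:miAM' output format, so CONVERT parses it directly, and a round trip through the date type strips the time (date is available from SQL Server 2008 on).

[code="sql"]DECLARE @s varchar(30) = 'Jul 24 2013 8:05AM';

SELECT CAST(CONVERT(date, @s) AS datetime);  -- 2013-07-24 00:00:00.000[/code]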

Tool for User Administration for Level 1

Posted: 30 Jul 2013 05:16 PM PDT

Can you suggest a tool for user administration in SQL Server, such as creating users and assigning roles and permissions? My goal is to hand these activities to my level 1 team, who know nothing about SQL Server. I do not want to give them SQL Server Management Studio.

How to add Analysis Services to SQL Server 2008 R2?

Posted: 30 Jul 2013 08:49 AM PDT

I am very new to data warehousing. I just installed SQL Server 2008 R2, but when I accessed it there was no Analysis Services instance, which is needed for the data warehouse. I also downloaded the AdventureWorks DW 2008 sample database, but I did not know where to attach it. Does anyone know about this?

Thanks in advance.

Identifying SQL Agent Job Name based on the job id.

Posted: 30 Jul 2013 11:33 AM PDT

I'm trying to figure out which job is running based on the following name. Where can I find this information?

SQLAgent - TSQL JobStep ([b]Job 0xFB668E27919DA3489E3DD97061F25B31[/b] : Step 1)
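A minimal sketch: the hex string is the job_id as binary, so it can be converted to a uniqueidentifier and looked up in msdb.

[code="sql"]-- Hedged sketch: resolve the hex job id from the program name to a job name
SELECT name
FROM msdb.dbo.sysjobs
WHERE job_id = CONVERT(uniqueidentifier, 0xFB668E27919DA3489E3DD97061F25B31);[/code]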

Need help with the MCSA SQL Server 2012 exams 70-461, 462, 463

Posted: 30 Jul 2013 09:32 AM PDT

Hello friends,

After getting good experience at my current workplace, I am planning to get certified in MS SQL Server 2008/2012. I went through the webpage that provides the exam information. I am looking for any preparation material or website that can help me prepare, because I cannot afford the books mentioned on the website. Please let me know where I can start reading or get some practice materials.

Thanks in advance,
Pratik

Getting Back Previous Database

Posted: 30 Jul 2013 02:55 PM PDT

Dear all,

I restored a database over an existing database, but there was some important information in the previous database, and I forgot to back it up first. Now I want to get the previous database back. How can I do that? I am using SQL Server 2008 R2. Please help me.

Regards,
Akbar

SSIS Lookup Transform Bulk insert error - stumped!

Posted: 30 Jul 2013 01:36 PM PDT

OK, I'm stumped. I have an SSIS package that reads our CDR data, and I need to do an incremental load nightly to add only new records. I can successfully run the package and insert directly into the table without the Lookup transform, but as soon as I try to check whether any of the records are already in the destination table, the Lookup transform fails with the following error:

[Error Rows [169]] Error: SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80040E14.
An OLE DB record is available. Source: "Microsoft SQL Server Native Client 10.0" Hresult: 0x80040E14 Description: "Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".".
An OLE DB record is available. Source: "Microsoft SQL Server Native Client 10.0" Hresult: 0x80040E14 Description: "The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.".
An OLE DB record is available. Source: "Microsoft SQL Server Native Client 10.0" Hresult: 0x80040E14 Description: "The Bulk Insert operation of SQL Server Destination has timed out. Please consider increasing the value of Timeout property on the SQL Server Destination in the dataflow.".

I have sysadmin rights on the server. I am running in admin mode with 32-bit enabled. Bulk insert is allowed on the server, and the package is run locally on the server using the bulk insert operation.

SSIS data flow task: flat file source -> derive some columns -> conditional split to find all non-null values for insert -> lookup transform to see if the record exists -> insert new records into OLE DB destination table.

I'm stumped on how to get the Lookup transform to work. Is there another way to insert only new records via SSIS efficiently?

Thanks in advance for your help!
Katie

Deploy multiple SSIS Packages in one batch file to SQL Server

Posted: 27 Mar 2011 07:08 PM PDT

Hi,

I need to write a batch file that can deploy multiple SSIS packages kept at one location. The batch file should iterate over all the SSIS packages and deploy them one by one. The server to deploy to should be passed as a parameter.
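A hedged sketch using dtutil, the command-line deployment utility that ships with SSIS; the folder path is hypothetical, and %1 is the target server passed as the batch file's first argument.

[code]REM deploy.cmd - deploy every .dtsx in the folder to the MSDB store on server %1
for %%f in ("C:\Packages\*.dtsx") do dtutil /FILE "%%f" /COPY SQL;"%%~nf" /DESTSERVER %1[/code]

Run it as: deploy.cmd MYSERVER. The %%~nf token expands to the file name without its extension, so each package keeps its own name on the server.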

Output query results to an Excel sheet

Posted: 30 Jul 2013 12:29 AM PDT

Hi,

I want to put the results of a SQL script into Excel. The script returns three different result sets, like this:

[code="sql"]Select * from table 1
Select * from table 2
Select * from table 3[/code]
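One hedged option is to push each result set into a separate worksheet with OPENROWSET, assuming the ACE OLE DB provider is installed, 'Ad Hoc Distributed Queries' is enabled, and the workbook already exists with a header row on each sheet; the path and sheet names here are hypothetical.

[code="sql"]-- Hedged sketch: one INSERT per result set / worksheet; the column list must
-- match the headers already present in row 1 of each sheet
INSERT INTO OPENROWSET('Microsoft.ACE.OLEDB.12.0',
    'Excel 12.0;Database=C:\out\results.xlsx;',
    'SELECT * FROM [Sheet1$]')
SELECT * FROM table1;
-- repeat with [Sheet2$] / table2 and [Sheet3$] / table3[/code]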

Find out if index rebuilding/reorganizing is happening on a certain table

Posted: 30 Jul 2013 11:11 AM PDT

Dear experts,

Is there a query to find out whether an index rebuild/reorganize is currently running against table X, so that I can make my program wait for it to complete before issuing certain commands against the table (for example SqlBulkCopy, which fails consistently if a re-index is happening at the same time)? I know dropping and recreating the indexes would be a workaround, but with the way my application works I can't afford to drop the indexes or to stop the re-indexing process.

Thanks for your help!
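A minimal sketch: scan the currently executing requests for an ALTER INDEX statement that mentions the table ('TableX' below is a stand-in for the real name).

[code="sql"]-- Hedged sketch: is any active request running ALTER INDEX against TableX?
SELECT r.session_id, r.status, r.command, t.text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE t.text LIKE '%ALTER INDEX%'
  AND t.text LIKE '%TableX%'
  AND r.session_id <> @@SPID;[/code]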

openquery single quote

Posted: 30 Jul 2013 09:44 AM PDT

Hi all,

I have an application which automatically generates SQL. To connect my application to my SaaS database I have to use a linked server. The problem I have is with single quotes, because the SQL passed through the linked server has to be enclosed in single quotes. The string my application adds to the SQL is:

a.acctnum like 'ME4%' and e.entityid='100AAA'

I need a way of taking this WHERE clause and adding an additional set of single quotes.
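A minimal sketch of the escaping rule: inside the OPENQUERY string, every embedded single quote is doubled. 'MYLINKEDSERVER' and the FROM clause are stand-ins; only the WHERE clause comes from the post.

[code="sql"]-- Hedged sketch: the generated WHERE clause with each quote doubled
SELECT *
FROM OPENQUERY(MYLINKEDSERVER,
    'SELECT * FROM accounts a JOIN entities e ON a.entityid = e.entityid
     WHERE a.acctnum LIKE ''ME4%'' AND e.entityid = ''100AAA''');[/code]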

Opening .dts package from SQL Server 2000 in Visual Studio 2008

Posted: 30 Jul 2013 07:39 AM PDT

I can't believe I'm having this much trouble with this, but I am stuck. I have a .dts package that was saved to a SQL Server 2000 database. Unlike what I'm used to, I cannot seem to access this file in Visual Studio 2008. I know there will be a compatibility issue, and the main thing I am trying to do is to test how to do this so that I have a solution ready when I need to make these changes in the near future. Can anyone offer any advice? Can you tell me if what I'm seeing sounds correct (i.e. I cannot track down an actual .dts file; instead I only see the package created on the server — I thought it was SQL Server 2005 that first allowed that)?

Thank you,
Jarid

CheckDB inconsistency error

Posted: 29 Jul 2013 11:36 PM PDT

Hello there,

I have an issue with CHECKDB giving the error below:

If DBCC printed error messages, contact your system administrator. [SQLSTATE 01000]
checking xxxx [SQLSTATE 01000]
Msg 8914, Sev 16, State 1, Line 1: Incorrect PFS free space information for page (1:208) in object ID 60, index ID 1, partition ID 281474980642816

Is there anything I can do about this? I have read that it is a bug.

Thank you in advance.

Catch text before symbol

Posted: 30 Jul 2013 03:15 AM PDT

Hi geniuses,

Consider:

[code="sql"]create table #mytable (Value varchar(50))

INSERT INTO #mytable (Value)
SELECT 'First.Second'[/code]

[b]I need to isolate what's before the symbol '.'[/b]

I was able to catch what's after the symbol:

[code="sql"]SELECT Value,
       (substring(Value, charindex('.', Value) + 1, LEN(Value))) AS 'beforedot'
FROM #mytable[/code]

Thanks in advance!
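A minimal sketch of the other half: take the characters up to (but not including) the first dot.

[code="sql"]-- Hedged sketch: everything before the first '.'
SELECT Value,
       LEFT(Value, CHARINDEX('.', Value) - 1) AS beforedot  -- 'First'
FROM #mytable;[/code]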

SQL Native Client - How can you tell it's being utilised as a connection?

Posted: 29 Jul 2013 07:53 PM PDT

Hi all,

Apart from turning it off along with all of the associated protocols, how can you tell whether SQL Native Client is being used as a connection provider?

Thanks in advance.
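A minimal sketch: sys.dm_exec_sessions exposes a client_interface_name column that reports the API each connected session is using, which narrows down who is connecting through which provider.

[code="sql"]-- Hedged sketch: which interface/driver is each user session connecting with?
SELECT session_id, host_name, program_name, client_interface_name
FROM sys.dm_exec_sessions
WHERE is_user_process = 1;[/code]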

ID and Exclude Top & Bottom 2.5% by Contract

Posted: 30 Jul 2013 12:04 AM PDT

Hi all,

I'm trying to filter out of my results the top and bottom 2.5 percent of records by meeting length, per contract. I currently have this query, which identifies the top and bottom 2.5% as a whole, rather than the top 2.5% for contract X, the top 2.5% for contract Y, and so on:

[code="sql"], x2 AS
(
    SELECT caseref, [contract]
    FROM (
        SELECT TOP 2.5 PERCENT caseref, [contract]
        FROM @avgep
        ORDER BY DATEDIFF(SECOND, casereceived, CaseEnd)
    ) AS lowest
    UNION ALL
    SELECT caseref, [contract]
    FROM (
        SELECT TOP 2.5 PERCENT caseref, [contract]
        FROM @avgep
        ORDER BY DATEDIFF(SECOND, casereceived, CaseEnd) DESC
    ) AS highest
)[/code]

I imagine a CTE is involved, but I can't see the wood for the trees at the moment. Any help would be great.
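A hedged sketch of one per-contract approach: rank rows within each contract and keep those in the first or last 2.5% of that contract's row count (window aggregates like COUNT(*) OVER work from SQL Server 2005 on).

[code="sql"]-- Hedged sketch: the shortest/longest 2.5% of meetings within each contract
;WITH ranked AS
(
    SELECT caseref, [contract],
           ROW_NUMBER() OVER (PARTITION BY [contract]
                              ORDER BY DATEDIFF(SECOND, casereceived, CaseEnd)) AS rn,
           COUNT(*)     OVER (PARTITION BY [contract]) AS cnt
    FROM @avgep
)
SELECT caseref, [contract]
FROM ranked
WHERE rn <= CEILING(cnt * 0.025)         -- bottom 2.5% per contract
   OR rn >  cnt - CEILING(cnt * 0.025);  -- top 2.5% per contract[/code]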

multiple insert from one table depending on the number of rows in another table

Posted: 30 Jul 2013 01:17 AM PDT

Hi,

I have two tables, shown below, which I import into SQL Server from supplied text files. I would like to repeatedly select/insert all the rows from Table1 into a new table, once for each row in Table2, adding the ID from Table2 to each copy, as in the result table below. Both Table1 and Table2 can have a variable number of rows. I'm not sure how to accomplish this at the moment; maybe with a foreach loop or a cursor? Any help would be appreciated.

Table 1
[b]chr start end[/b]
chr1 14501 18250
chr1 102501 103750

Table 2
[b]ID[/b]
100001
100007

Result
[b]chr start end ID[/b]
chr1 14501 18250 100001
chr1 102501 103750 100001
chr1 14501 18250 100007
chr1 102501 103750 100007
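A minimal sketch: no loop or cursor is needed, because a CROSS JOIN pairs every Table1 row with every Table2 ID in one set-based statement (the destination table name is hypothetical).

[code="sql"]-- Hedged sketch: repeat all Table1 rows once per Table2 ID
INSERT INTO ResultTable (chr, start, [end], ID)
SELECT t1.chr, t1.start, t1.[end], t2.ID
FROM Table1 AS t1
CROSS JOIN Table2 AS t2
ORDER BY t2.ID, t1.start;[/code]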

configure replication

Posted: 29 Jul 2013 09:13 PM PDT

How do I configure replication between servers in different domains?

Non-Pageable Memory

Posted: 29 Jul 2013 10:17 PM PDT

I suddenly got stuck on a point of confusion (I might be wrong, too), and I will really appreciate any light you can shed. AWE memory is non-pageable, right? That means it cannot incur a hard page fault. Then why do we need the Lock Pages in Memory privilege?

Migrating System Databases Between Clusters

Posted: 29 Jul 2013 10:26 PM PDT

Hi there,

I need to migrate a 2008 R2 SQL cluster to a new site (unfortunately, physically moving the kit is not an option). We will prepare a new cluster to be identical to the existing one in terms of SQL version, file structure, etc. Once that is ready, I would ideally like to back up and restore the system databases to the new cluster to make the process easier. Does anyone know if there are any special considerations when transferring system databases from one cluster to another, or is it exactly the same process as for stand-alone servers? I'm aware moving system databases can sometimes be a bit of a minefield, but it would save a lot of work!

Thanks,
Matt

Find and apply the schema changes of a table?

Posted: 26 Jul 2013 12:02 AM PDT

Hi friends,

I have around 25 tables that I frequently use to pull data from the live server to my local machine. The schemas of the live database change frequently; we add one or more columns every time. When I use my SSIS package to pull the data from the live server to my local machine, it gets messed up: I have to drop the database, recreate it with the new schemas, and update the package as well. So what I am looking for is:

1. How do we create a script to update the schema? I know the Red Gate tool does this schema comparison, but is there a way to generate SQL statements to update the schema? (I can't buy the Red Gate software; the price is not affordable. :-))
2. How can we stop the SSIS package from getting this error? If fields have been added to a table, I have to update the package as well before using it.

Any suggestions would be really appreciated. Sorry if my questions are meaningless...
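For part 1, a hedged starting point: compare the metadata catalogs on the two servers to list columns that exist on the live side but not locally. 'LIVESRV' (a linked server), 'LiveDb', and 'LocalDb' are hypothetical names, and the output is a to-do list for ALTER TABLE ... ADD statements rather than a ready-made script.

[code="sql"]-- Hedged sketch: columns on the live server that are missing locally
SELECT live.TABLE_NAME, live.COLUMN_NAME, live.DATA_TYPE
FROM LIVESRV.LiveDb.INFORMATION_SCHEMA.COLUMNS AS live
LEFT JOIN LocalDb.INFORMATION_SCHEMA.COLUMNS AS loc
       ON  loc.TABLE_NAME  = live.TABLE_NAME
       AND loc.COLUMN_NAME = live.COLUMN_NAME
WHERE loc.COLUMN_NAME IS NULL;[/code]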

Table Partition

Posted: 29 Jul 2013 09:52 PM PDT

Hi all,

I have partitioned the VOICE_CALL table, size = 130 GB (split into 10 NDF files). Before partitioning, the MDF file size was 350 GB; after partitioning, the MDF file size is 351 GB. The 10 NDF files total 99 GB, so the total database size is 449 GB.

[b]Why did the MDF file size not decrease, even though I moved the data to the NDF files?[/b]

Thanks,
Ramana
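A hedged note on the likely cause: moving data out of a file frees space inside the file, but SQL Server does not shrink the file itself; the MDF keeps its allocated size until it is shrunk explicitly. A sketch, with the logical file name as a placeholder (and the usual caveat that shrinking fragments indexes, so rebuild them afterwards):

[code="sql"]-- Find the logical name and size of each data file
SELECT name, size * 8 / 1024 AS size_mb FROM sys.database_files;

-- Hedged sketch: shrink the MDF toward its used size (target in MB)
DBCC SHRINKFILE (N'VOICE_CALL_Data', 250000);[/code]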

SAN disk size expansion

Posted: 25 Jul 2013 04:04 PM PDT

Hi,

We were trying to expand a SAN disk beyond 2 TB as part of our requirements, but it failed due to a limitation that NTFS can support a maximum of 2 TB. Is that so? I was unaware of such Windows limitations! If so, what is the workaround, and how can a SAN disk be expanded beyond 2 TB? Any suggestion would be highly appreciated.

My environment is:
Windows Server 2008 R2, Enterprise edition
SQL Server 2008 R2, Enterprise edition
6 SAN disks configured at >1 TB to <1.5 TB each. We want to expand those SAN disks to more than 3 TB each.

Thanks!
