Friday, April 26, 2013

[how to] Should I keep pg_xlog on the same disk as data if using disk snapshotting?


Posted: 26 Apr 2013 06:46 PM PDT

We're running on EBS volumes on EC2. We're interested in leveraging EBS snapshotting for backups. However, does this mean we'd need to ensure our pg_xlog is on the same EBS volume as our data?

(I believe) the usual reasoning for separating pg_xlog onto a separate volume is for performance. However, if they are on different volumes, the snapshots may be out of sync.
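If the volumes must stay separate, one hedged option (a sketch only; the backup label is made up) is PostgreSQL's filesystem-level backup mode, which makes a data-volume snapshot usable for recovery as long as the WAL generated during the backup is archived somewhere durable:

```sql
-- Sketch: bracket the EBS snapshot with backup mode so the data-volume
-- snapshot is recoverable even though pg_xlog lives on another volume.
-- Assumes WAL archiving is configured so the needed xlog survives.
SELECT pg_start_backup('ebs_snapshot');  -- 'ebs_snapshot' is an arbitrary label
-- ... trigger the EBS snapshot of the data volume here ...
SELECT pg_stop_backup();
```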

Defining system scope and boundary

Posted: 26 Apr 2013 05:29 PM PDT

I have been given a homework task where I am to create an ER model of a database from a set of client requirements. Another part of this task is to create a diagram that shows the scope and boundary of the database system I create. I am having problems creating a scope/boundary diagram as I have been unable to find any examples in my textbooks or on-line. Is anyone able to suggest or provide an example of how this could be done?

Optimizing bulk update performance in Postgresql

Posted: 26 Apr 2013 05:20 PM PDT

It currently takes up to 24h for us to run a large set of UPDATE statements on a database, which are of the form:

UPDATE table
SET    field1 = constant1, field2 = constant2, ...
WHERE  id = constid

(We're just overwriting fields of objects identified by ID.)

The tables have handfuls of indices each and no foreign key constraints. No COMMIT is made till the end.

It takes 2h to import a pg_dump of the entire DB. This seems like a baseline we should reasonably target.

Short of producing a custom program that somehow reconstructs a dataset for Postgresql to re-import, is there anything we can do to bring the bulk UPDATE performance closer to that of the import? (This is an area that we believe log-structured merge trees handle well, but we're wondering if there's anything we can do within Postgresql.)

Some ideas:

Basically there's a bunch of things to try and we're not sure what the most effective are or if we're overlooking other things. We'll be spending the next few days experimenting, but we thought we'd ask here as well.
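One of the candidate approaches can be sketched like this (table and column names are made up to mirror the question; UNLOGGED requires 9.1+): stage the new values with COPY, then apply a single set-based UPDATE instead of millions of single-row statements:

```sql
-- Sketch with hypothetical names: stage the constants, then one UPDATE.
CREATE UNLOGGED TABLE staging (id int PRIMARY KEY, field1 int, field2 int);
COPY staging FROM '/tmp/new_values.csv' WITH (FORMAT csv);

UPDATE mytable AS t
SET    field1 = s.field1,
       field2 = s.field2
FROM   staging s
WHERE  t.id = s.id;
```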

Thanks.

SQL nested select (query rows after variable), running slow

Posted: 26 Apr 2013 05:32 PM PDT

I have a large table (Sybase) with 6,276,225 rows. I'm pulling call status and my query is very slow.

In effect what I'm trying to do is pull data for any event after my variable. While querying for just one acct, the query takes around 15 sec, anything larger and my query runs forever.

How can I speed up my query?

declare @RSP varchar(2)
SET @RSP = 'WN'

select a.contact_list_name,
       a.agent_login_name,
       a.account_number,
       a.ov_phone_number,
       a.time_of_contact,
       a.response_status
from   table a
where  a.time_of_contact >=
       (SELECT b.time_of_contact
        FROM   table b
        WHERE  a.account_number = b.account_number
               and a.ov_phone_number = b.ov_phone_number
               and a.time_of_contact >= b.time_of_contact
               and b.contact_list_name = 'Prime_list'
               and b.response_status = @RSP)
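One way to attack this (a sketch only; whether it helps depends on indexing, and the real table name is not shown, so `calls` is a stand-in) is to compute the threshold time once per account/phone in a derived table rather than re-running the correlated subquery for every row:

```sql
-- Sketch: derive each account's earliest qualifying contact once, then join.
select a.contact_list_name, a.agent_login_name, a.account_number,
       a.ov_phone_number, a.time_of_contact, a.response_status
from   calls a
join  (select account_number, ov_phone_number,
              min(time_of_contact) as first_contact
       from   calls
       where  contact_list_name = 'Prime_list'
         and  response_status = @RSP
       group by account_number, ov_phone_number) b
  on   a.account_number  = b.account_number
 and   a.ov_phone_number = b.ov_phone_number
where  a.time_of_contact >= b.first_contact
```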

Network Databases - 2

Posted: 26 Apr 2013 03:39 PM PDT

I fail to see why my previous question was closed. I merely asked whether or not there were any free/open source network (IDMS-type) databases, with the implication that, if there were, an indication of what they were would be made.

You say "As it currently stands, this question is not a good fit for our Q&A format". Why not? It's a straight question.

"We expect answers to be supported by facts, references, or specific expertise, but this question will likely solicit debate, arguments, polling, or extended discussion.". A simple reply would not "solicit debate, arguments, polling, or extended discussion." - it's a straight factual answer.

Total Memory capped far below available memory

Posted: 26 Apr 2013 03:58 PM PDT

We are testing out a new install of SQL Server 2012 on a virtual server and the memory allocated to SQL Server seems to be capped around 16 GB even though much more is available. What could cause this?

I've tried bumping up the SQL minimum server memory setting, but the change had no effect. I've listed some of the server details below.

SQL Server 2012 Enterprise
Windows Server 2008 R2 Enterprise x64
29.93 GHz (8 processors)
77 GB RAM
Min Server Memory: 48 GB
Max Server Memory: 56 GB
Database size: 44 GB

PerfMon info:

Working Set: ~17 GB
Page life expectancy: ~7,500
Target Server Memory: ~73 GB
Total Server Memory: ~15.7 GB

adding fields to 2LIS_12_VCITM datasource in ECC

Posted: 26 Apr 2013 01:57 PM PDT

From what I have read the datasource 2LIS_12_VCITM is built off of the ECC tables LIKP, LIPS, VBUK, and VBUP. I would like to add a couple of fields that are in the LIPS table to this datasource and am wondering on the correct way to do it. Do I simply add the fields to the extract structure of the datasource under the include section for the LIPS table?

Is it safe to delete mysql-bin files?

Posted: 26 Apr 2013 02:57 PM PDT

I have MM replication in MySQL, and I want to squeeze some free space out of the box by deleting unnecessary files. I came across these mysql-bin files inside /var/db/mysql/. There are hundreds of those files, like mysql-bin.000123, mysql-bin.000223, etc. I have checked the replication by doing show master status and show slave status; they are using some mysql-bin files at certain positions, but I guess all the other bin files are leftovers which will not be used anymore. In this case, is it safe to delete all those mysql-bin files except the ones that replication is currently pointing at?

If it is safe to delete, then is there anything I could do to automatically delete those files once they are not in use ?
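For reference, both tasks can be done from SQL rather than by deleting files by hand; a sketch (the log name and retention period are examples, to be checked against SHOW SLAVE STATUS on every replica first):

```sql
-- Sketch: let the server delete binlogs older than the oldest one any
-- replica still needs (never rm the files directly).
PURGE BINARY LOGS TO 'mysql-bin.000223';

-- And expire old binlogs automatically (also settable in my.cnf so the
-- value survives restarts):
SET GLOBAL expire_logs_days = 7;
```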

Sharded key-value store using MongoDB

Posted: 26 Apr 2013 12:41 PM PDT

We would like to set up a key-value store that is sharded across multiple machines.

We are currently using MongoDB; is there a reason why we shouldn't use it for this purpose?

We also use Redis, however for this use case, we would like to use the hard drive and Redis is in-RAM only.

using row data for column headers

Posted: 26 Apr 2013 03:30 PM PDT

I have a table

CREATE TABLE "AUTOMATION"."TESTRESULTS"
(
  "BUILDNUMBER" VARCHAR2(30 BYTE),
  "TESTNAME"    VARCHAR2(100 BYTE),
  "STATUS"      VARCHAR2(100 BYTE)
)

sample data would be

buildnumber  testname   status
1            testname1  pass
1            testname2  fail
1            testname3  warning
2            testname1  fail
2            testname2  fail
2            testname3  fail
3            testname1  fail
3            testname2  pass
3            testname3  fail

I would like to get a query to output:

             3         2         1
testname1    fail      fail      pass
testname2    pass      fail      fail
testname3    fail      fail      warning

What is the best approach to getting this output (buildnumbers as columns)? I am using Oracle 11g and the test names are not known at the time of building the report.
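For a fixed set of build numbers this is a standard pivot; a sketch with Oracle 11g's PIVOT (since the build numbers aren't known in advance, the IN list would have to be assembled dynamically, e.g. built into a string and run with EXECUTE IMMEDIATE):

```sql
-- Sketch: static pivot over three known build numbers.
SELECT *
FROM   (SELECT buildnumber, testname, status
        FROM   automation.testresults)
PIVOT  (MAX(status) FOR buildnumber IN ('3' AS "3", '2' AS "2", '1' AS "1"))
ORDER  BY testname;
```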

Usage of COPY FROM STDIN in postgres

Posted: 26 Apr 2013 01:27 PM PDT

I just started using Postgres, and I'm trying to create a sample db to understand its functions. Looking around, I found some scripts on pgfoundry.org. I understand the commands, since I previously used both Oracle and MS-SQL, but all the scripts I'm running return errors when they reach the "COPY FROM" instruction. More precisely, the error is thrown at the first element that should be inserted in the given table.

I've tried running the scripts both as queries and as pgScripts, but in both ways I'm getting an error at the first row after the COPY FROM.

I'm using pgAdminIII and I used StackBuilder to install PostgreSQL 9.2.4.1 as a DB Driver. May I be missing some basic configuration that's preventing me from running this command, or I just did not understand they way it works?

EDIT:
The error is:

ERROR:  syntax error at or near "7"
LINE 5600: 7 4 13 37 2012-03-10 16:41:43.797787 2012-03-10 16:41:43.797...
           ^

********** Error **********

ERROR: syntax error at or near "7"
SQL status: 42601
Char: 140891

where the text is:

COPY action_abilitations (id, group_action_id, partecipation_role_id, group_id, created_at, updated_at) FROM stdin;
7   4   13  37  2012-03-10 16:41:43.797787  2012-03-10 16:41:43.797787
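For what it's worth, a syntax error on the first data row is the typical symptom when the inline data after FROM stdin is fed to a tool that parses it as SQL; psql understands the inline format, and the same rows can also be rewritten as plain INSERTs that any client can execute, e.g.:

```sql
-- Sketch: the first data row of the COPY block above as an ordinary INSERT.
INSERT INTO action_abilitations
  (id, group_action_id, partecipation_role_id, group_id, created_at, updated_at)
VALUES
  (7, 4, 13, 37, '2012-03-10 16:41:43.797787', '2012-03-10 16:41:43.797787');
```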

In Excel 2007-2010, how does one "attach" or "glue" or "stick" information in one cell to information in another cell?

Posted: 26 Apr 2013 10:40 AM PDT

Column A = Column Z (a name). Column B (a number) is information specific to each name in Column A. The information in Columns Z thru ZZ is deleted every night and repopulated the next day, but the names are shuffled.

If Joe is in Cell Z4 today, his name also appears in Cell A4 and a number (5.55) is next to his name in Cell B4. Tomorrow, Joe's name appears in Cell Z10 when Column Z is populated via copy & paste.

I need the number 5.55 to follow Joe no matter where he appears in Column Z.

The numbers in Column B are manually entered and not a part of what is pasted in Column Z.

(Columns C thru J contain Joe-specific information. All information in Columns K thru Y are formulas that are calculated based on what is pasted in Columns Z thru ZZ).

I usually analyze the information, but this time I have to create it. Thanks. -Jon

What are the advantages and disadvantages of Horizontal Scaling and Vertical Scaling? [closed]

Posted: 26 Apr 2013 10:29 AM PDT

One of the strengths of NoSQL is vertical scaling, but I want to know: what are the advantages and disadvantages of horizontal scaling and vertical scaling?

Creating a partial unique constraint for MySQL

Posted: 26 Apr 2013 02:08 PM PDT

I have the same question as asked in a previous post: PostgreSQL multi-column unique constraint and NULL values. But the solution there is not applicable as I am using MySQL instead of PostgreSQL.

My table looks like this:

CREATE TABLE `my_table` (
  `id`   bigint(20) NOT NULL,
  `col1` bigint(20) NOT NULL,
  `col2` bigint(20) NOT NULL,
  `col3` bigint(20) DEFAULT NULL,
  PRIMARY KEY (`id`)
)

I want the three columns col1, col2 and col3 to be unique. But a normal unique constraint

UNIQUE KEY `unique_constraint` (`col1`,`col2`,`col3`)  

still allows me to insert identical rows whose col3 is NULL. I want to forbid entries like these:

1  | 1 | 1 | NULL
2  | 1 | 1 | NULL

I was searching for a partial unique constraint in MySQL, as proposed in the post above, but apparently it is not supported in MySQL 5.5.

Can somebody give me a hint how best to solve this in MySQL? I am also hoping there will be a performant solution!

Thanks from a newbie in mysql (coming from PostgreSQL)!
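One common workaround, sketched under the assumption that real col3 values are always positive so 0 can serve as a sentinel: make col3 NOT NULL with a default, after which the ordinary composite UNIQUE key covers every row:

```sql
-- Sketch: replace NULL with a sentinel so the UNIQUE key applies uniformly.
ALTER TABLE `my_table`
  MODIFY `col3` bigint(20) NOT NULL DEFAULT 0,
  ADD UNIQUE KEY `unique_constraint` (`col1`, `col2`, `col3`);
```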

maintaining full text index on large table

Posted: 26 Apr 2013 02:31 PM PDT

I'm on SQL Server 2008 and have a table, for reporting purposes, with 500,000 records that will easily reach the millions. The table will employ a full text index for rapid searching on a handful of columns.

As this is a reporting table and not the source transactional table, sometimes new records will be added, and other times existing records will have to be removed due to changes going on in the source table.

My question is in regards to the best way to build (ongoing) the reporting table and maintain the full text index for this table.

Once the full text index is added, should I:

(1) leave the index alone, and delete/add records as appropriate
(2) leave the index alone, truncate the reporting table, and then insert all appropriate records
(3) other?

I've come across these articles so far while researching, but best practice for this scenario is not readily apparent.
1. http://technet.microsoft.com/en-us/library/ms142497.aspx
2. http://sqlcat.com/sqlcat/b/msdnmirror/archive/2008/11/05/best-practices-for-integrated-full-text-search-ifts-in-sql-2008.aspx
3. http://social.msdn.microsoft.com/Forums/en-US/sqldatabaseengine/thread/c743f6e4-88c6-478b-954b-ba8207e391c5

limit the number of rows returned when a condition is met?

Posted: 26 Apr 2013 12:52 PM PDT

Is it possible to limit the number of rows returned when a condition is met? I am working on a query to check if a student is ready to graduate, and they need to meet a certain number of credits per subject. I don't want all classes, because any class past the number of credits needed can be used for electives.

EDIT: I forgot to mention that this is SQL 2008 R2

I was hoping to be able to do something like this (which I know doesn't work)

select top(sum(credits) > 3.0) c.name, c.subject, c.credits, c.pass
from classes as c
where c.Subject = 'Math' and c.passed = 1

Any help would be great


Data

Subject        | Name              | Grade | Credit | Pass
Social Studies | DL Psychology     | 83    | 0.50   | 1
Social Studies | Global Studies 10 | 82    | 1.00   | 1
Social Studies | Global Studies 9  | 83    | 1.00   | 1
Social Studies | Part. In Govern   | 84    | 0.50   | 1
Social Studies | US History 11     | 87    | 1.00   | 1

Query

select c.Subject,
       c.Name,
       c.credits,
       c.pass,
       c.score
from   @classes as c
where  (c.Subject = 'Social Studies' and c.pass = 1 and c.Name like '%Econ%')
or     (c.Subject = 'Social Studies' and c.pass = 1 and c.Name like '%Gov%')
or     (c.Subject = 'Social Studies' and c.pass = 1)
group by c.Subject, c.Name, c.credits, c.pass, c.score
having Sum(credits) <= 2.0

I'm expecting to see these rows returned:

Expected Results

Subject        | Name              | Grade | Credit | Pass
Social Studies | Part. In Govern   | 84    | 0.50   | 1
Social Studies | DL Psychology     | 83    | 0.50   | 1
Social Studies | Global Studies 10 | 82    | 1.00   | 1
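Since SQL Server 2008 R2 lacks SUM() OVER (ORDER BY ...), one sketch of a running-total cutoff uses a correlated subquery; the ordering column (Name here) is an assumption — any deterministic order works, and the priority ordering in the expected results would need its own sort key:

```sql
-- Sketch: keep each row while the credits accumulated before it (in Name
-- order) are still below the requirement (2.0 here).
SELECT c.Subject, c.Name, c.Grade, c.Credit, c.Pass
FROM   @classes AS c
WHERE  c.Subject = 'Social Studies'
  AND  c.Pass = 1
  AND  (SELECT ISNULL(SUM(c2.Credit), 0)
        FROM   @classes AS c2
        WHERE  c2.Subject = c.Subject
          AND  c2.Pass = 1
          AND  c2.Name < c.Name) < 2.0
ORDER BY c.Name;
```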

Easier way to handle so many isnull() situation

Posted: 26 Apr 2013 01:06 PM PDT

Is there any good way to avoid writing the isnull() function so many times inside a sproc?
I have a stored procedure that uses the isnull() function almost 30 times. I think I am missing a concept, but until I find a better way to get my data I'd love to clean my code of so many isnull() calls.
Can I, for a while, set the MSSQL 2008 R2 server to treat null values as a float 0?
Most of my isnull()-s just substitute a zero value when there is no data, so I can do math operations.

EDIT: I am not lazy; I'm just trying to clean my code and avoid select parts that look like this:

select
    vrsta.NAZIV
   ,isnull(sum(prod.prod_mpci),0) as prod_MPCI
   ,isnull(sum(isnull(mal_MPCI,0) + (vel_kol * isnull(mal_MPC,vel_VPC))),0) as lager_mpci
   ,isnull(sum(vel_NCI),0)+isnull(sum(mal_NCI),0) as lager_nci
   ,sum(case
          when isnull(vel_KOL,0)+isnull(mal_KOL,0) > isnull(prod.prod_kol,0) and isnull(dzn.dzn,'2010-01-01') < @dzu
            then ((isnull(vel_KOL,0)+isnull(mal_KOL,0))-isnull(prod.prod_kol,0)) * isnull(mal_MPC,vel_VPC)
          else 0 end
       ) as visak_MPC
   ,sum(case
          when isnull(vel_KOL,0)+isnull(mal_KOL,0) > isnull(prod.prod_kol,0) and isnull(dzn.dzn,'2010-01-01') < @dzu
            then ((isnull(vel_KOL,0)+isnull(mal_KOL,0))-isnull(prod.prod_kol,0)) * isnull(mal_NC,vel_NC)
          else 0 end
       ) as visak_NC
   ,sum(case
          when isnull(vel_KOL,0)+isnull(mal_KOL,0) <= isnull(prod.prod_kol,0)
            then ((isnull(vel_KOL,0)+isnull(mal_KOL,0))-isnull(prod.prod_kol,0)) * isnull(mal_MPC,vel_VPC)
          else 0 end
       ) as manjak_MPC
   ,sum(case
          when isnull(vel_KOL,0)+isnull(mal_KOL,0) <= isnull(prod.prod_kol,0)
            then ((isnull(vel_KOL,0)+isnull(mal_KOL,0))-isnull(prod.prod_kol,0)) * isnull(mal_NC,vel_NC)
          else 0 end
       ) as manjak_NC
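One pattern that cuts the repetition without changing results is CROSS APPLY (SQL Server 2005+): compute each ISNULL'd building block once, then write every expression against the clean alias. A sketch with made-up source names:

```sql
-- Sketch: x.kol and x.prod_kol replace the repeated ISNULL() expressions.
SELECT SUM(x.kol) AS lager_kol,
       SUM(CASE WHEN x.kol > x.prod_kol
                THEN x.kol - x.prod_kol ELSE 0 END) AS visak_kol
FROM   some_table t
CROSS APPLY (SELECT ISNULL(t.vel_KOL, 0) + ISNULL(t.mal_KOL, 0) AS kol,
                    ISNULL(t.prod_kol, 0)                       AS prod_kol) AS x;
```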

Summation query with complex grouping criteria

Posted: 26 Apr 2013 11:45 AM PDT

I have a database of coded actions with this structure.

{User, Date, ActionCode}  

I need to generate reports based on dates and action codes, and the action codes are not clean because we have to import them from multiple sources.

I can handle most of them (like the average number of times users perform ActionCode 13).

But, here's what I'm having trouble with: The average number of times users perform an action that can be defined by any one of the action codes 61, 62, 700, or 701. This action can also be defined by using both action codes 84 and 85 or by using the three action codes 10, 11, and 12.

Here is what I have that handles the first set of action codes:

select avg(cnt)
from (
  select count(distinct date) as cnt
  from   codes
  where  actioncode in (61, 62, 700, 701)
  group by user
) t

I use distinct date because some users code to multiple places and we get back more than one code for the action. How do I add "or they had both 84 and 85 on the same date"?
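The "both 84 and 85 on the same date" rule can be folded in as a second source of qualifying (user, date) pairs; a sketch (the three-code 10/11/12 rule would follow the same HAVING pattern with = 3):

```sql
select avg(cnt)
from (
  select count(distinct date) as cnt
  from (
    -- any single code from the first list qualifies the date
    select user, date from codes
    where  actioncode in (61, 62, 700, 701)
    union
    -- a date qualifies via 84/85 only when both codes are present
    select user, date from codes
    where  actioncode in (84, 85)
    group by user, date
    having count(distinct actioncode) = 2
  ) q
  group by user
) t
```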

Creating a PostgreSQL SERIAL column using pgAdmin3

Posted: 26 Apr 2013 01:36 PM PDT

When I use pgAdmin3 to create a column of type serial, why does it instead create nextval('prices_id_seq'::regclass)?

Should we keep this type defined by pgAdmin? Manually creating the table with SQL that defines the id column as type serial causes the pgAdmin SQL pane to again display it as nextval('prices_id_seq'::regclass).

pgAdmin SQL Pane

id integer NOT NULL DEFAULT nextval('prices_id_seq'::regclass)  
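This is expected: serial is shorthand that pgAdmin shows in its expanded form. Roughly (a sketch of the documented expansion):

```sql
-- Writing this:
CREATE TABLE prices (id serial PRIMARY KEY);

-- is roughly equivalent to:
CREATE SEQUENCE prices_id_seq;
CREATE TABLE prices (
    id integer NOT NULL DEFAULT nextval('prices_id_seq') PRIMARY KEY
);
ALTER SEQUENCE prices_id_seq OWNED BY prices.id;
```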

Why does PostgreSQL 9.2 not use an index scan when it can?

Posted: 26 Apr 2013 11:23 AM PDT

My PostgreSQL planner (9.2) does not pick the index, choosing to do a seq scan instead.

Here is my table. The index I am talking about is name_left_prefix

cwu=# \d web_city;
                                  Table "public.web_city"
   Column   |         Type          |                       Modifiers
------------+-----------------------+-------------------------------------------------------
 id         | integer               | not null default nextval('web_city_id_seq'::regclass)
 name       | character varying(64) | not null
 latitude   | double precision      |
 longitude  | double precision      |
 time_zone  | character varying(64) | not null
 population | bigint                |
 country_id | integer               |
 location   | geometry(Point,4326)  |
Indexes:
    "web_city_pkey" PRIMARY KEY, btree (id)
    "name_left_prefix" btree ("left"(name::text, 5))
    "web_city_country_id" btree (country_id)
    "web_city_location_id" gist (location)
Foreign-key constraints:
    "country_id_refs_id_55d3b49fd200671" FOREIGN KEY (country_id) REFERENCES web_country(id) DEFERRABLE INITIALLY DEFERRED

Now I run the query that is supposed to use that index.

cwu=# explain analyze select count(*) from web_city GROUP BY (left(name,5));
                                                       QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------
 GroupAggregate  (cost=19016.35..21188.28 rows=94677 width=10) (actual time=502.308..691.288 rows=64459 loops=1)
   ->  Sort  (cost=19016.35..19345.84 rows=131796 width=10) (actual time=502.300..660.121 rows=131796 loops=1)
         Sort Key: ("left"((name)::text, 5))
         Sort Method: external merge  Disk: 2048kB
         ->  Seq Scan on web_city  (cost=0.00..5554.45 rows=131796 width=10) (actual time=0.077..46.672 rows=131796 loops=1)
 Total runtime: 694.977 ms
(6 rows)

It does not use it though. But it does use it when I disable seq_scan and the resulting time is considerably better!

cwu=# set enable_seqscan=false;
SET
cwu=# explain analyze select count(*) from web_city GROUP BY (left(name,5));
                                                                 QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------
 GroupAggregate  (cost=0.00..21237.14 rows=94677 width=10) (actual time=0.187..114.887 rows=64459 loops=1)
   ->  Index Scan using name_left_prefix on web_city  (cost=0.00..19394.69 rows=131796 width=10) (actual time=0.163..79.829 rows=131796 loops=1)
 Total runtime: 118.095 ms
(3 rows)

Why is this happening? I'd rather not set enable_seqscan=false; from my application before making that query.

EDIT: I have just found out that this does not happen on my server, which has PostgreSQL 9.2.3. But it does happen on my laptop with PostgreSQL 9.2.4. Maybe I'll downgrade just to test it out.

EDIT2: After setting work_mem = 4MB to avoid Sort Method: external merge Disk: 2048kB (thank you, Igor!), the time got considerably better for the Seq Scan. Now I approve of the decision not to use the index. :)

cwu=# explain analyze select count(*) from web_city GROUP BY (left(name,5));
                                                    QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------
 HashAggregate  (cost=6213.43..6671.27 rows=36627 width=10) (actual time=95.879..111.447 rows=64459 loops=1)
   ->  Seq Scan on web_city  (cost=0.00..5554.45 rows=131796 width=10) (actual time=0.424..45.285 rows=131796 loops=1)
 Total runtime: 115.838 ms
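For reference, the setting from EDIT2 can be applied per session (or in postgresql.conf for all sessions):

```sql
-- Sketch: give sorts and hash aggregates enough memory to avoid spilling
-- to disk, which is what made the seq-scan plan look slow.
SET work_mem = '4MB';
EXPLAIN ANALYZE SELECT count(*) FROM web_city GROUP BY (left(name, 5));
```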

T-SQL Issues With Defining 'AS'

Posted: 26 Apr 2013 12:36 PM PDT

I am creating a fully dynamic application but have run into a bit of a hiccup. There are multiple 'undefined' fields that can be defined by the users. The only problem is redisplaying them. Currently they are showing as [UDF_0] to [UDF_7], i.e.:

SELECT [TABLE].[UDF_0],
       [TABLE].[UDF_1],
       [TABLE].[UDF_2],
       [TABLE].[UDF_3],
       [TABLE].[UDF_4],
       [TABLE].[UDF_5],
       [TABLE].[UDF_6],
       [TABLE].[UDF_7]
FROM [TABLE]

Would obviously display as:

UDF_0 || UDF_1 || etc...  

What I would like to be able to do is display them something to this effect, but I can't for the life of me figure it out.

EXECUTE PROCEDURE [dbo].[GetProjectInfo] @Project varchar(100)

AS
BEGIN

SELECT [TABLE].[UDF_0] AS (SELECT [TBL_PROJECT_DESC].[UDF_0]
                           FROM [TBL_PROJECT_DESC]
                           WHERE [TBL_PROJECT_DESC].[PROJECT_NAME]=@Project),
       --etc....
FROM [TABLE]

Desired display would be :

Field Name 0 || Field Name 1 || etc...  
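A column alias can't be a subquery, so this usually ends up as dynamic SQL; a sketch under the assumption that TBL_PROJECT_DESC holds one row of labels per project:

```sql
-- Sketch: look the label up first, then build and run the SELECT.
DECLARE @sql nvarchar(max), @label0 sysname;

SELECT @label0 = UDF_0
FROM   TBL_PROJECT_DESC
WHERE  PROJECT_NAME = @Project;

SET @sql = N'SELECT [TABLE].[UDF_0] AS ' + QUOTENAME(@label0) +
           N' FROM [TABLE]';          -- repeat the pattern for UDF_1..UDF_7
EXEC sp_executesql @sql;
```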

SQL Server 2012 AlwaysOn: need to automatically add databases by script- T-sql or PowerShell

Posted: 26 Apr 2013 11:12 AM PDT

I've installed and successfully configured our SQL Server 2012 AlwaysOn 2-node servers for our new "Intranet" that is coming out. I've gotten AlwaysOn working great, and our Front End servers for the Intranet will be using SharePoint 2013. The glitch is that SharePoint 2013 is configured to add databases automatically to our SQL Server 2012 back end, but NOT to AlwaysOn. In reading about this and in contacting Microsoft MSDN support, the default answer is "you must manually find, select, back-up and then add those new databases individually to get them into AlwaysOn."

But wait; that can be quite a task, constantly checking the SQL Server back-end servers to see what databases were created, then having to add them into AlwaysOn, 24/7! I'm looking for a script or process that will check for new databases, back those new databases up in FULL mode (as required for adding them to AlwaysOn, of course), then add those databases to AlwaysOn, all automatically. Or have this run every 1-2 hours, without user intervention.

What I've come up with so far is this script that actually identifies the newly-added databases, (not yet in AlwaysOn), and then backs them up to a shared location. My next task is to find those newly-added databases and through the various processes needed, get them added to AlwaysOn. This will involve some sort of looping action, I imagine. I'm not a T-SQL/scripting guru; is there any solution or script that I might access that would do this? (add databases to AlwaysOn automatically)?

Please advise, I'm sure I'm not the first person to have this issue. I have seen previous posts on various Internet Sites (including this one!) , and the solution is either incorrect, or states something like "sure, go ahead and just script that!". Thanks, but I need just a little more detail there.

Thanks again,

-Allen

DECLARE @name VARCHAR(50)     -- database name
DECLARE @path VARCHAR(256)    -- path for backup files
DECLARE @fileName VARCHAR(256) -- filename for backup

-- specify database backup directory
SET @path = '\\atel-web-be2\backups\'

DECLARE db_cursor CURSOR FOR
select name from sys.databases
where group_database_id is null and replica_id is null
  and name not in ('master','model','msdb','tempdb')

OPEN db_cursor
FETCH NEXT FROM db_cursor INTO @name

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @fileName = @path + @name + '.BAK'
    BACKUP DATABASE @name TO DISK = @fileName
    FETCH NEXT FROM db_cursor INTO @name
END

CLOSE db_cursor
DEALLOCATE db_cursor
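For the missing piece, a hedged sketch of the add-to-AlwaysOn step (the availability group name [AG1] is made up): ALTER AVAILABILITY GROUP won't take a variable, so it has to be built dynamically inside the same cursor loop, after the BACKUP DATABASE call (a restore WITH NORECOVERY and a matching join on the secondary replica are still needed as well):

```sql
-- Sketch: add the database found by the cursor to an assumed AG named AG1.
DECLARE @sql nvarchar(max);
SET @sql = N'ALTER AVAILABILITY GROUP [AG1] ADD DATABASE ' + QUOTENAME(@name);
EXEC sp_executesql @sql;
```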

Immediate refresh materialized views (MQT) with OUTER JOIN under DB2

Posted: 26 Apr 2013 11:48 AM PDT

DB2 accepts a materialized view based on an "OUTER JOIN" query, as long as it doesn't refresh automatically (REFRESH IMMEDIATE).

I cannot find a simple solution to get around this problem. Perhaps with (a lot of) triggers, but that isn't really nice...

As example, my database is

CREATE TABLE PHONEUSER (USERID INT NOT NULL PRIMARY KEY, NAME VARCHAR(30));
CREATE TABLE PHONECALL (CALLID INT NOT NULL PRIMARY KEY, USERID INT, HOUR TIMESTAMP);

INSERT INTO PHONEUSER VALUES (1, 'Olivier');
INSERT INTO PHONEUSER VALUES (2, 'Arthur');

INSERT INTO PHONECALL VALUES (1, 1, CURRENT_TIMESTAMP);

and I want each call of each user. If a user has never called, I want a null value for him (it makes sense in my application):

SELECT PU.USERID, CALLID, HOUR
FROM PHONEUSER PU
LEFT JOIN PHONECALL PC ON PC.USERID = PU.USERID;

USERID  CALLID  HOUR
------  ------  -----------------------
     2    null  null
     1       1  2013-04-09 16:36:08.952

This query runs well, but cannot be used for an MQT with refresh immediate:

CREATE TABLE CALLLIST AS
  (SELECT PU.USERID, CALLID, HOUR
   FROM PHONEUSER PU
   LEFT JOIN PHONECALL PC ON PC.USERID = PU.USERID)
  DATA INITIALLY DEFERRED REFRESH IMMEDIATE;

The fullselect specified for the materialized query table CALLLIST is not valid.
Reason code = "10".. SQLCODE=-20058, SQLSTATE=428EC, DRIVER=4.7.85

Newly discovered SQL Server Express has all databases inaccessible/offline

Posted: 26 Apr 2013 04:04 PM PDT

We have several remotes sites at my company and today one of the employees came to me and said their SQL reporting tool stopped working. Up until this point I did not even know this server or database or reporting service existed!

I have RDP access to the server and can open SQL Server Management Studio 2008 R2 and the databases appear to be SQL Server Express. Currently I can login to the instance with my domain admin account but every database gives me the following message when I try to expand it:

The webpage they go to for the reports gives them the following error:

•   An error has occurred during report processing. (rsProcessingAborted)
    o   Cannot create a connection to data source 'DNPDataSource'. (rsErrorOpeningConnection)
        ?   For more information about this error navigate to the report server on the local server machine, or enable remote errors

When I try to view the SQL Server error log, I get an error there as well.

I do not have the sa password. I'm not a DBA but need to try and figure this one out, can anyone point me in a direction to start troubleshooting this? I'm completely lost.
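As a first step once connected with sysadmin rights, a quick query like this (a sketch) shows the state every database is in, which usually explains the inaccessible icons:

```sql
-- Sketch: state_desc values such as RECOVERY_PENDING, SUSPECT or OFFLINE
-- point at why a database can't be expanded in Management Studio.
SELECT name, state_desc, user_access_desc
FROM   sys.databases;
```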


Here is the ERROR.LOG:

2013-03-27 13:14:24.34 Server      Microsoft SQL Server 2008 R2 (SP1) - 10.50.2550.0 (X64)
    Jun 11 2012 16:41:53
    Copyright (c) Microsoft Corporation
    Express Edition with Advanced Services (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1) (Hypervisor)
2013-03-27 13:14:24.34 Server      (c) Microsoft Corporation.
2013-03-27 13:14:24.34 Server      All rights reserved.
2013-03-27 13:14:24.34 Server      Server process ID is 9040.
2013-03-27 13:14:24.34 Server      System Manufacturer: 'Intel Corporation', System Model: 'S5520UR'.
2013-03-27 13:14:24.34 Server      Authentication mode is MIXED.
2013-03-27 13:14:24.34 Server      Logging SQL Server messages in file 'c:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\Log\ERRORLOG'.
2013-03-27 13:14:24.34 Server      This instance of SQL Server last reported using a process ID of 2428 at 3/27/2013 1:14:02 PM (local) 3/27/2013 7:14:02 PM (UTC). This is an informational message only; no user action is required.
2013-03-27 13:14:24.34 Server      Registry startup parameters:
     -d c:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\DATA\master.mdf
     -e c:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\Log\ERRORLOG
     -l c:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\DATA\mastlog.ldf
2013-03-27 13:14:24.37 Server      SQL Server is starting at normal priority base (=7). This is an informational message only. No user action is required.
2013-03-27 13:14:24.37 Server      Detected 16 CPUs. This is an informational message; no user action is required.
2013-03-27 13:14:24.51 Server      Using dynamic lock allocation.  Initial allocation of 2500 Lock blocks and 5000 Lock Owner blocks per node.  This is an informational message only.  No user action is required.
2013-03-27 13:14:24.51 Server      Lock partitioning is enabled.  This is an informational message only. No user action is required.
2013-03-27 13:14:24.56 Server      Node configuration: node 0: CPU mask: 0x00000000000000ff:0 Active CPU mask: 0x00000000000000ff:0. This message provides a description of the NUMA configuration for this computer. This is an informational message only. No user action is required.
2013-03-27 13:14:24.62 spid7s      Starting up database 'master'.
2013-03-27 13:14:24.69 spid7s      2 transactions rolled forward in database 'master' (1). This is an informational message only. No user action is required.
2013-03-27 13:14:24.69 spid7s      0 transactions rolled back in database 'master' (1). This is an informational message only. No user action is required.
2013-03-27 13:14:24.69 spid7s      Recovery is writing a checkpoint in database 'master' (1). This is an informational message only. No user action is required.
2013-03-27 13:14:24.78 spid7s      FILESTREAM: effective level = 0, configured level = 0, file system access share name = 'SQLEXPRESS'.
2013-03-27 13:14:24.83 spid7s      SQL Trace ID 1 was started by login "sa".
2013-03-27 13:14:24.85 spid7s      Starting up database 'mssqlsystemresource'.
2013-03-27 13:14:24.87 spid7s      The resource database build version is 10.50.2500. This is an informational message only. No user action is required.
2013-03-27 13:14:25.09 spid10s     Starting up database 'model'.
2013-03-27 13:14:25.09 spid7s      Server name is 'WCCKEMAPP\SQLEXPRESS'. This is an informational message only. No user action is required.
2013-03-27 13:14:25.21 spid10s     The tail of the log for database model is being rewritten to match the new sector size of 4096 bytes.  2560 bytes at offset 99840 in file c:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\DATA\modellog.ldf will be written.
2013-03-27 13:14:25.31 spid10s     Clearing tempdb database.
2013-03-27 13:14:25.32 spid13s     A new instance of the full-text filter daemon host process has been successfully started.
2013-03-27 13:14:25.37 spid7s      Starting up database 'msdb'.
2013-03-27 13:14:25.40 Server      A self-generated certificate was successfully loaded for encryption.
2013-03-27 13:14:25.40 Server      Server is listening on [ 'any' <ipv6> 54547].
2013-03-27 13:14:25.40 Server      Server is listening on [ 'any' <ipv4> 54547].
2013-03-27 13:14:25.40 Server      Server local connection provider is ready to accept connection on [ \\.\pipe\SQLLocal\SQLEXPRESS ].
2013-03-27 13:14:25.40 Server      Server named pipe provider is ready to accept connection on [ \\.\pipe\MSSQL$SQLEXPRESS\sql\query ].
2013-03-27 13:14:25.40 Server      Dedicated administrator connection support was not started because it is disabled on this edition of SQL Server. If you want to use a dedicated administrator connection, restart SQL Server using the trace flag 7806. This is an informational message only. No user action is required.
2013-03-27 13:14:25.56 spid10s     Starting up database 'tempdb'.
2013-03-27 13:14:25.60 spid13s     The Service Broker protocol transport is disabled or not configured.
2013-03-27 13:14:25.60 spid13s     The Database Mirroring protocol transport is disabled or not configured.
2013-03-27 13:14:25.61 spid13s     Service Broker manager has started.
2013-03-27 13:14:25.77 spid7s      The tail of the log for database msdb is being rewritten to match the new sector size of 4096 bytes.  2048 bytes at offset 12007424 in file c:\Program Files\Microsoft SQL Server\MSSQL10_50.SQLEXPRESS\MSSQL\DATA\MSDBLog.ldf will be written.
2013-03-27 13:14:25.84 spid7s      Recovery is complete. This is an informational message only. No user action is required.
2013-03-27 13:14:25.90 Logon       Error: 17187, Severity: 16, State: 1.
2013-03-27 13:14:25.90 Logon       SQL Server is not ready to accept new client connections. Wait a few minutes before trying again. If you have access to the error log, look for the informational message that indicates that SQL Server is ready before trying to connect again.  [CLIENT: ::1]
2013-03-27 13:14:25.90 Logon       Error: 17187, Severity: 16, State: 1.
2013-03-27 13:14:25.90 Logon       SQL Server is not ready to accept new client connections. Wait a few minutes before trying again. If you have access to the error log, look for the informational message that indicates that SQL Server is ready before trying to connect again.  [CLIENT: 172.17.0.210]
2013-03-27 13:14:26.48 Server      The SQL Server Network Interface library successfully registered the Service Principal Name (SPN) [ MSSQLSvc/WCCKemAPP.WCC.LOCAL:SQLEXPRESS ] for the SQL Server service.
2013-03-27 13:14:26.48 Server      The SQL Server Network Interface library successfully registered the Service Principal Name (SPN) [ MSSQLSvc/WCCKemAPP.WCC.LOCAL:54547 ] for the SQL Server service.
2013-03-27 13:14:26.48 Server      SQL Server is now ready for client connections. This is an informational message; no user action is required.
2013-03-27 13:14:31.04 spid51      Starting up database 'ReportServer'.
2013-03-27 13:14:31.37 spid51      Starting up database 'ReportServerTempDB'.
2013-03-27 13:14:31.76 spid51      Starting up database 'ReportServerTempDB'.
2013-03-27 13:14:32.07 spid51      Starting up database 'ReportServerTempDB'.
2013-03-27 13:14:32.24 Logon       Error: 18456, Severity: 14, State: 5.
2013-03-27 13:14:32.24 Logon       Login failed for user 'drily'. Reason: Could not find a login matching the name provided. [CLIENT: 172.17.0.210]
2013-03-27 13:15:12.28 spid55      Starting up database 'DNP'.
2013-03-27 13:15:13.75 spid55      Starting up database 'DSS'.
2013-03-27 13:19:36.62 spid57      Starting up database 'ReportServerTempDB'.
2013-03-27 13:25:31.18 spid53      Starting up database 'ReportServer$SQLExpress'.
2013-03-27 13:25:36.38 spid53      Starting up database 'DSSDL'.
2013-03-27 13:25:38.89 spid53      Starting up database 'DSSUSERDIR'.  2013-03-27 13:25:41.26 spid53      Starting up database 'permissionsAudit'.  2013-03-27 13:25:45.00 spid53      Starting up database 'PMKemmererProduction'.  2013-03-27 13:25:48.05 spid53      Starting up database 'PMKemmererProductionTEST'.  2013-03-27 13:26:01.57 spid54      Attempting to load library 'xpstar.dll' into memory. This is an informational message only. No user action is required.  2013-03-27 13:26:01.58 spid54      Using 'xpstar.dll' version '2009.100.1600' to execute extended stored procedure 'xp_instance_regread'. This is an informational message only; no user action is required.  2013-03-27 13:26:52.10 Logon       Error: 18456, Severity: 14, State: 38.  2013-03-27 13:26:52.10 Logon       Login failed for user 'WCC\baadmin'. Reason: Failed to open the explicitly specified database. [CLIENT: <local machine>]  2013-03-27 13:26:53.37 spid59      Starting up database 'DSSDL'.  2013-03-27 13:26:53.60 spid59      Starting up database 'DSSUSERDIR'.  2013-03-27 13:26:53.92 spid59      Starting up database 'QuietDose'.  2013-03-27 13:26:54.16 spid59      Starting up database 'ReportServer$SQLExpress'.  2013-03-27 13:26:54.36 spid59      Starting up database 'ReportServer$SQLEXPRESSTempDB'.  2013-03-27 13:26:54.66 spid59      Starting up database 'ReportServerTempDB'.  2013-03-27 13:26:54.89 spid59      Starting up database 'STX'.  2013-03-27 13:26:55.57 spid59      Starting up database 'Test'.  2013-03-27 13:26:55.76 spid59      Starting up database 'DSSDL'.  2013-03-27 13:26:55.91 spid59      Starting up database 'DSSUSERDIR'.  2013-03-27 13:26:56.08 spid59      Starting up database 'ReportServer$SQLExpress'.  2013-03-27 13:26:56.31 spid59      Starting up database 'ReportServer$SQLEXPRESSTempDB'.  2013-03-27 13:26:56.52 spid59      Starting up database 'ReportServerTempDB'.  2013-03-27 13:26:56.68 spid59      Starting up database 'STX'.  
2013-03-27 13:26:57.24 spid59      Starting up database 'DSSDL'.  2013-03-27 13:26:57.28 spid59      Starting up database 'DSSUSERDIR'.  2013-03-27 13:26:57.45 spid59      Starting up database 'ReportServer$SQLExpress'.  2013-03-27 13:26:57.55 spid59      Starting up database 'ReportServer$SQLEXPRESSTempDB'.  2013-03-27 13:26:57.74 spid59      Starting up database 'ReportServerTempDB'.  2013-03-27 13:26:57.83 spid59      Starting up database 'STX'.  2013-03-27 13:29:36.55 spid54      Starting up database 'ReportServerTempDB'.  2013-03-27 13:39:36.57 spid56      Starting up database 'ReportServerTempDB'.  2013-03-27 13:41:59.55 Logon       Error: 18456, Severity: 14, State: 8.  2013-03-27 13:41:59.55 Logon       Login failed for user 'sa'. Reason: Password did not match that for the login provided. [CLIENT: <local machine>]  2013-03-27 13:44:07.70 Logon       Error: 18456, Severity: 14, State: 8.  2013-03-27 13:44:07.70 Logon       Login failed for user 'sa'. Reason: Password did not match that for the login provided. [CLIENT: <local machine>]  2013-03-27 13:49:36.57 spid53      Starting up database 'ReportServerTempDB'.  2013-03-27 13:59:36.57 spid54      Starting up database 'ReportServerTempDB'.  2013-03-27 14:09:36.56 spid53      Starting up database 'ReportServerTempDB'.  2013-03-27 14:15:50.50 spid54      Starting up database 'DSSDL'.  2013-03-27 14:15:50.75 spid54      Starting up database 'DSSUSERDIR'.  2013-03-27 14:15:51.92 spid54      Starting up database 'ReportServer$SQLExpress'.  2013-03-27 14:15:52.25 spid54      Starting up database 'ReportServer$SQLEXPRESSTempDB'.  2013-03-27 14:15:52.51 spid54      Starting up database 'ReportServerTempDB'.  2013-03-27 14:15:52.70 spid54      Starting up database 'STX'.  2013-03-27 14:18:02.83 spid51      Starting up database 'ReportServer$SQLExpress'.  2013-03-27 14:18:46.58 Logon       Error: 18456, Severity: 14, State: 38.  2013-03-27 14:18:46.58 Logon       Login failed for user 'WCC\baadmin'. 
Reason: Failed to open the explicitly specified database. [CLIENT: <local machine>]  2013-03-27 14:18:47.49 spid59      Starting up database 'DSSDL'.  2013-03-27 14:18:47.70 spid59      Starting up database 'DSSUSERDIR'.  2013-03-27 14:18:47.92 spid59      Starting up database 'ReportServer$SQLExpress'.  2013-03-27 14:18:48.04 spid59      Starting up database 'ReportServer$SQLEXPRESSTempDB'.  2013-03-27 14:18:48.33 spid59      Starting up database 'ReportServerTempDB'.  2013-03-27 14:18:48.53 spid59      Starting up database 'STX'.  2013-03-27 14:18:49.12 spid59      Starting up database 'DSSDL'.  2013-03-27 14:18:49.33 spid59      Starting up database 'DSSUSERDIR'.  2013-03-27 14:18:49.44 spid59      Starting up database 'ReportServer$SQLExpress'.  2013-03-27 14:18:49.60 spid59      Starting up database 'ReportServer$SQLEXPRESSTempDB'.  2013-03-27 14:18:49.84 spid59      Starting up database 'ReportServerTempDB'.  2013-03-27 14:18:49.98 spid59      Starting up database 'STX'.  2013-03-27 14:18:50.28 spid59      Starting up database 'DSSDL'.  2013-03-27 14:18:50.39 spid59      Starting up database 'DSSUSERDIR'.  2013-03-27 14:18:50.48 spid59      Starting up database 'ReportServer$SQLExpress'.  2013-03-27 14:18:50.53 spid59      Starting up database 'ReportServer$SQLEXPRESSTempDB'.  2013-03-27 14:18:50.66 spid59      Starting up database 'ReportServerTempDB'.  2013-03-27 14:18:50.73 spid59      Starting up database 'STX'.  2013-03-27 14:19:36.54 spid59      Starting up database 'ReportServerTempDB'.  2013-03-27 14:19:36.93 spid59      Starting up database 'ReportServerTempDB'.  2013-03-27 14:29:36.55 spid53      Starting up database 'ReportServerTempDB'.  2013-03-27 14:39:36.57 spid54      Starting up database 'ReportServerTempDB'.  2013-03-27 14:49:36.57 spid53      Starting up database 'ReportServerTempDB'.  2013-03-27 14:59:36.58 spid54      Starting up database 'ReportServerTempDB'.  2013-03-27 15:09:36.60 spid53      Starting up database 'ReportServerTempDB'.  

I can provide older logs too if it will be beneficial. I'm reading those now from here.

Results from query select name, state_desc, is_in_standby, is_cleanly_shutdown from sys.databases;

(screenshot: query results)


Image for Kin's request

(screenshot)

SQL Server 2008 R2 SP1 - Tempdb "Available free space" showing negative value

Posted: 26 Apr 2013 06:04 PM PDT

Please help as this issue is driving me nuts...

Tempdb has four data files & one log file. They are all located on the same dedicated drive.

When I right-click on tempdb in SQL Server Management Studio (SSMS) to view Properties --> General, I see "Size" as ~65000 MB with "Space Available" as ~64000 MB. However, when I try to shrink the files via the GUI, I see that the "currently allocated space" hasn't changed and is still showing the initial size.

This is also where the "Available free space" is shown as a negative number. The only workaround I have found so far is to restart SQL Server to flush out tempdb, but I can't keep doing this.

Has anyone come across this before and found the root cause or a permanent solution?
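One way to cross-check what tempdb actually has free, independently of the possibly stale GUI figures, is to query the file space usage DMV directly. This is a sketch; pages are 8 KB, hence the * 8 / 1024 conversion to MB:

```sql
-- Report tempdb space usage straight from the DMV
SELECT SUM(unallocated_extent_page_count)       * 8 / 1024 AS free_space_mb,
       SUM(user_object_reserved_page_count)     * 8 / 1024 AS user_objects_mb,
       SUM(internal_object_reserved_page_count) * 8 / 1024 AS internal_objects_mb,
       SUM(version_store_reserved_page_count)   * 8 / 1024 AS version_store_mb
FROM tempdb.sys.dm_db_file_space_usage;
```

If internal_objects_mb or version_store_mb is large, long-running sorts/hashes or open transactions may be holding the space, which can also make the shrink dialog report odd numbers.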

Thanks in advance!

MySQL subqueries that use range based on values of main queries don't use indices properly

Posted: 26 Apr 2013 07:04 PM PDT

I think I've isolated a problem that has been affecting many of my queries lately, and I'd like some help figuring out a solution.

My finding is that a query which runs very fast on its own, touching only a couple of rows, can use indexes poorly when it appears as a subquery whose conditions are based on values from the main query.

Let's take an example:

DROP TEMPORARY TABLE IF EXISTS Person;
DROP TEMPORARY TABLE IF EXISTS CofeeBreaks;

CREATE TEMPORARY TABLE IF NOT EXISTS Person
(
    `person_id` INT(11) AUTO_INCREMENT,
    `age` INT,
    `lastCofee` DATETIME,
    KEY `idkey` (`person_id`) USING BTREE,
    KEY `datekey` (`lastCofee`) USING BTREE
) ENGINE = MEMORY;

CREATE TEMPORARY TABLE IF NOT EXISTS CofeeBreaks
(
    `id` INT(11) AUTO_INCREMENT,
    `cofeeBreakStart` DATETIME,
    `cofeeBreakEnd` DATETIME,
    KEY `brekIdKey` (`id`) USING BTREE
) ENGINE = MEMORY;

INSERT INTO Person (age, lastCofee) VALUES (24, '2013-03-27 14:45:34');
INSERT INTO Person (age, lastCofee) VALUES (34, '2013-03-27 14:46:38');
INSERT INTO Person (age, lastCofee) VALUES (26, '2013-03-27 15:25:24');
INSERT INTO Person (age, lastCofee) VALUES (28, '2013-03-27 16:33:54');
INSERT INTO Person (age, lastCofee) VALUES (46, '2013-03-27 17:11:03');

INSERT INTO CofeeBreaks (cofeeBreakStart, cofeeBreakEnd) VALUES ('2013-03-27 15:11:03', '2013-03-27 17:25:24');
INSERT INTO CofeeBreaks (cofeeBreakStart, cofeeBreakEnd) VALUES ('2013-03-27 14:45:34', '2013-03-27 15:25:24');
INSERT INTO CofeeBreaks (cofeeBreakStart, cofeeBreakEnd) VALUES ('2013-03-27 17:11:03', '2013-03-27 17:11:03');

SELECT * FROM Person
WHERE lastCofee BETWEEN '2013-03-27 15:11:03' AND '2013-03-27 17:11:03';

SELECT
    *,
    (SELECT AVG(Person.age)
     FROM Person
     WHERE Person.lastCofee BETWEEN CofeeBreaks.cofeeBreakStart AND CofeeBreaks.cofeeBreakEnd) AS averageAge
FROM CofeeBreaks;

So the explain results for the first select are as follow:

id  select_type  table   type   possible_keys  key      key_len  ref  rows  Extra
1   SIMPLE       Person  range  datekey        datekey  9             1     Using where

But the second query doesn't use the index properly in the subquery and analyses more rows than necessary:

id  select_type         table        type  possible_keys  key  key_len  ref  rows
1   PRIMARY             CofeeBreaks  ALL                                     3
2   DEPENDENT SUBQUERY  Person       ALL   datekey                           5

As we can see, the subquery scans all rows in the Person table, even though none of the CofeeBreaks ranges covers all 5 persons.

The way I've been fixing the performance issues in a very busy database is by doing a select into a temporary table and then looping over the rows of that table, updating it with the aggregates that I need. This is obviously ugly code and shouldn't be built that way, but I really haven't found a way to optimize this kind of subquery, and I've tried a lot of possible approaches without success.
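One alternative worth trying before the temp-table workaround is to express the dependent subquery as a join plus GROUP BY. Depending on the MySQL version and data distribution, the optimizer may then choose a range access on the datekey index per CofeeBreaks row. This is a sketch against the example tables above, not a guaranteed fix:

```sql
-- Same average-age-per-break, written as a join instead of a
-- dependent subquery; LEFT JOIN keeps breaks with no matching person.
SELECT cb.id,
       cb.cofeeBreakStart,
       cb.cofeeBreakEnd,
       AVG(p.age) AS averageAge
FROM CofeeBreaks AS cb
LEFT JOIN Person AS p
       ON p.lastCofee BETWEEN cb.cofeeBreakStart AND cb.cofeeBreakEnd
GROUP BY cb.id, cb.cofeeBreakStart, cb.cofeeBreakEnd;
```

Comparing EXPLAIN output for both forms on the real tables is the quickest way to see whether the optimizer actually switches to the index.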

Thanks in advance for any inputs.

MySQL, Installation process, my.ini

Posted: 26 Apr 2013 05:04 PM PDT

Recently I decided to install Apache + PHP + MySQL separately instead of using one of the bundles such as WAMP, XAMPP, Zend Server, etc. So far I have installed the Apache server with the PHP engine. Now, for the MySQL part, I would be happy to get some answers to the questions below:

  • After unzipping the archive downloaded from the MySQL website, I have my-default.ini. If I am not wrong, it's basically a template: I should copy the file, rename the copy to my.ini, and put it back in the folder where my-default.ini is.
  • Just for general knowledge: what tells the MySQL server which configuration file to read? For example, could I rename my.ini to anotherName.ini so that the server looks for a file named anotherName.ini?
  • I am also looking for a good guide about my.ini configuration.
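On the renaming question: the server reads whatever file you pass on the command line with --defaults-file (e.g. mysqld --defaults-file="C:\mysql\anotherName.ini"), so my.ini is only the default search name, not a requirement. For reference, a minimal my.ini sketch; the section names are standard, but the paths below are placeholders you must adjust to your install:

```ini
; Minimal my.ini sketch - paths are examples, not real defaults
[mysqld]
basedir = "C:/mysql"
datadir = "C:/mysql/data"
port    = 3306

[client]
port    = 3306
```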

Thank you all and have a nice day.

Import batches of rows from Excel in parallel

Posted: 26 Apr 2013 08:04 PM PDT

We receive an Excel file weekly currently containing about 250k rows. The file grows by about 300 rows each week.

In SSIS I've hooked up an Excel source to an OLE DB destination to import the file but it takes a long time.

I've looked at using the SQL Command to specify how to select part of the sheet, but I can't see how to turn this into a parallel operation.

Is it possible in SSIS to split the Excel file (by rows) and import it in parallel?
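SSIS won't split the file for you, but one commonly used workaround (sketched below; the sheet name, column span, and row boundaries are assumptions about your workbook) is to create two Excel sources whose SQL commands each read a different worksheet range, and let the two data-flow paths run concurrently into the same destination:

```sql
-- Source 1: first half of the sheet (header assumed in row 1)
SELECT * FROM [Sheet1$A2:K125000]

-- Source 2: second half of the sheet
SELECT * FROM [Sheet1$A125001:K250000]
```

Whether this is actually faster depends on the Excel/ACE provider, which reads the file single-threaded in some versions, so measure before committing to the design.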

Moving tables to another SQL2008 database (including indexes, triggers, etc.)

Posted: 26 Apr 2013 12:45 PM PDT

I need to move a whole bunch (100+) of large (millions of rows) tables from one SQL2008 database to another.

I originally just used the Import/Export Wizard, but all the destination tables were missing primary and foreign keys, indexes, constraints, triggers, etc. (Identity columns were also converted to plain INTs, but I think I just missed a checkbox in the wizard.)

What's the right way to do this?

If this were just a couple of tables, I would go back to the source, script out the table definition (with all indexes, etc), then run the index creation portions of the script on the destination. But with so many tables, this seems impractical.

If there wasn't quite so much data, I could use the "Create Scripts..." wizard to script out the source, including data, but a 72m row script just doesn't seem like a good idea!

[MS SQL Server] dbo user in the database has a domain login account

[MS SQL Server] dbo user in the database has a domain login account


dbo user in the database has a domain login account

Posted: 26 Apr 2013 03:13 AM PDT

I'm going over my server security to remove our previous DBA. One of the odd things I found on our security server, among others, is that when I check the properties of the database user dbo, the login name mapped to it is our previous DBA's Windows domain account. So I have a couple of questions:

1. Is this right? Shouldn't the dbo database user be mapped to sa?
2. How did it get mapped like that? Any ideas?
3. If it's wrong, how do I change it, and to what?

Security seems to be running out of whack here and we need to keep it under control. I'm trying to make the best of it and to make sense of it as well. Any ideas would be very appreciated. Thanks.
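For reference, a sketch of how this is usually inspected and corrected (the database name is a placeholder): the dbo user maps to whatever login owns the database, so remapping it means changing the database owner.

```sql
-- See which login owns each database
SELECT name, SUSER_SNAME(owner_sid) AS owner_login
FROM sys.databases;

-- Change the owner of one database to sa (or another suitable login)
ALTER AUTHORIZATION ON DATABASE::YourDatabase TO sa;
```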

Suggestions on ways to stay abreast of SQL updates?

Posted: 25 Apr 2013 11:23 PM PDT

So my new job requires that SQL instances be kept at current and supported versions, including security updates/fixes. I'm sure MS has a page which lists the current updates available for SQL Server and when support for them ends. Ideally, this would also let me look over the updates/fixes/improvements and decide whether I want to load them now or later.

Thanks,
Jason A.

huge log file

Posted: 25 Apr 2013 02:23 PM PDT

Dear Experts,

I have a 3 GB mdf file and a 355 GB ldf file. What should I do? I have taken a backup of the transaction log; should I delete the log file?

Thanks
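Don't delete the .ldf file; if the database is in the full recovery model, the usual sequence (names and path below are placeholders) is to back up the log so its space becomes reusable, then shrink the file:

```sql
BACKUP LOG YourDatabase TO DISK = N'D:\Backup\YourDatabase_log.trn';

-- Shrink the log file to a sensible target size (here 1024 MB)
DBCC SHRINKFILE (YourDatabase_log, 1024);
```

If the log will never be used for point-in-time restores, switching the database to the simple recovery model avoids the problem recurring.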

Need to maintain the databases

Posted: 25 Apr 2013 11:43 PM PDT

I have a database in simple recovery mode, as we do not use transaction logs for restore purposes. I ran some queries to delete some data and unused tables to free up space. To give an example: I have a database 70 GB in size; I deleted the data and the database still shows the same size. Can I shrink the data files, and what is the risk, please? What's the best way to go about freeing up the space?

Thanks in advance,
Bubby

Installing SP AND CU on Cluster

Posted: 25 Apr 2013 07:26 AM PDT

What is the best practice for installing a SQL Server Service Pack and a Cumulative Update on a cluster during the same downtime? We have a production SQL 2008 R2 RTM 2-node cluster and want to upgrade to SP2, CU3. Is it:

Option A: 1) Install SP & CU on the passive node. 2) Fail over. 3) Install SP & CU on the other node.

Option B: 1) Install SP on the passive node. 2) Fail over. 3) Install SP & CU on the other node. 4) Fail over. 5) Install CU on the 1st (now passive) node.

Option C: 1) Install SP on the passive node. 2) Fail over. 3) Install SP on the other node. 4) Fail over. 5) Install CU on the 1st (now passive) node. 6) Fail over. 7) Install CU on the (now passive) node.

My concern: is there a significant risk in installing a CU before the instance is brought online (and completes the script upgrade) for the previous Service Pack?

How can we avoid before changes inserts into CDC Tables in SQL Server

Posted: 25 Apr 2013 10:04 AM PDT

Hi All,

As you all know, we get 1 insert in the CDC table for each insert operation in the base table, and 2 inserts in the CDC table for each update operation in the base table. The operation codes are: 1 = delete, 2 = insert, 3 = update (old values), 4 = update (new values).

Can we avoid the 3 = update (old values) rows in CDC tables? Is there a way? I was just curious.

[Articles] The Vacation Dilemma

[Articles] The Vacation Dilemma


The Vacation Dilemma

Posted: 25 Apr 2013 11:00 PM PDT

Does vacation cause extra stress at your job before or after you leave? Steve Jones asks a poll question this Friday to see how you feel. This editorial was originally published on June 12, 2008. It is being re-run as Steve is traveling.

[SQL 2012] multiple instance failover solution

[SQL 2012] multiple instance failover solution


multiple instance failover solution

Posted: 26 Apr 2013 02:15 AM PDT

I need to create a multi-instance (needed by a 3rd-party app) SQL Server 2012 failover solution. I will have 2 physical high-end servers (128 GB of RAM) with plenty of CPU power. It will be implemented on Standard edition (could possibly change that). My first thought was a simple active/passive cluster (I am assuming you can do this with multiple instances?). Any other ideas on a novel approach?

SSIS Project Dynamic OLEDB question

Posted: 26 Apr 2013 12:54 AM PDT

I have a project which I will need to deploy to servers not in our network, and thus will not know certain attributes such as the server and database names. So I set up a parameter in the "Project.params" area. In this string parameter I placed:

"Data Source=" + @[$Project::ServerName] + ";User ID=" + @[$Project::UserName] + ";Initial Catalog=" + @[$Project::InitialCatalog]; + "Provider=SQLNCLI11.1;Persist Security Info=True;"

I copied the connection from the original one I set up before attempting this. When I set the project-level OLEDB connection manager to this parameter in an expression (for the connection string), I get the string just as I typed it and not the values of the other parameters. In other words, when evaluated, the expression appears just as above. Doing so invalidates all of the components in the package which use the connection. Any ideas on what I am doing wrong? Thank you in advance.
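A likely cause, sketched here as a guess from the symptoms: the expression text was stored as the parameter's *value*, and SSIS only evaluates expression syntax in an Expressions editor, so the text stays a literal string. Putting the concatenation (note the stray semicolon before "Provider" in the original, which would also break evaluation) directly into the connection manager's ConnectionString property expression should make it evaluate:

```
"Data Source=" + @[$Project::ServerName]
+ ";Initial Catalog=" + @[$Project::InitialCatalog]
+ ";User ID=" + @[$Project::UserName]
+ ";Provider=SQLNCLI11.1;Persist Security Info=True;"
```

The Project.params parameters then hold only the plain values (server name, catalog, user name), which is what can be overridden at deployment time.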

Report No Longer Working

Posted: 25 Apr 2013 08:13 AM PDT

We just recently upgraded from SSRS 2005 to SSRS 2012. I have a report with two tables using two different datasets. The first table displays data correctly, but the second displays no data found (custom). If I run the report and do a SQL Server Profiler trace, I see the stored procedure and the parameters passed hitting the database server. If I copy the stored procedure execute statement from profiler and paste it into SSMS, data is returned. Any ideas why my table is not being populated with the data?

[T-SQL] Search data.

[T-SQL] Search data.


Search data.

Posted: 26 Apr 2013 12:23 AM PDT

Hi,

I want to search student information by passing values for firstname, lastname, or email. My requirement is that even if I enter only the first 3 characters, the student's information should display. I have written the SP below. It's working fine, but if I don't pass any search condition it shows all records, and I don't want to display any records if I pass a blank value.

CREATE PROCEDURE [dbo].[ProSearchStudent]
    @Search varchar(50)
AS
    SET NOCOUNT ON
    DECLARE @SQL varchar(max)

    SELECT @SQL = 'select StudentId, LastName, FirstName, Email
                   from StuDentInfo
                   WHERE LTRIM(RTRIM(LastName)) like ''' + @Search + '%''
                      or LTRIM(RTRIM(FirstName)) like ''' + @Search + '%''
                      or LTRIM(RTRIM(Email)) like ''' + @Search + '%'' '
    exec(@SQL)
    print @SQL
    --exec Usp_SearchStudents ''

Thanks, Abhas.
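A sketch of the same procedure without dynamic SQL: it returns nothing for a blank search string, and because the input is used as a real parameter rather than concatenated text, it is no longer open to SQL injection (table and column names copied from the post):

```sql
CREATE PROCEDURE dbo.ProSearchStudent
    @Search varchar(50)
AS
BEGIN
    SET NOCOUNT ON;

    -- Blank or NULL input: return no rows at all
    IF LTRIM(RTRIM(ISNULL(@Search, ''))) = ''
        RETURN;

    SELECT StudentId, LastName, FirstName, Email
    FROM dbo.StuDentInfo
    WHERE LTRIM(RTRIM(LastName))  LIKE @Search + '%'
       OR LTRIM(RTRIM(FirstName)) LIKE @Search + '%'
       OR LTRIM(RTRIM(Email))     LIKE @Search + '%';
END
```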

Recursion 100 has been exhausted

Posted: 25 Apr 2013 11:47 PM PDT

Hi,

I have the query below and I get "The maximum recursion 100 has been exhausted". How do I fix it or work around it? Would a WHILE loop be a better escape?

-- create PR_LINK_INV_HST temp table
CREATE TABLE #PR_LINK_INV_HST (CLIENT_ID varchar(15), NAME varchar(15), LINK_CLIENT_ID varchar(30))

-- insert into PR_LINK_INV_HST temp table
INSERT INTO #PR_LINK_INV_HST (CLIENT_ID, NAME, LINK_CLIENT_ID) VALUES ('1', 'John', '2')
INSERT INTO #PR_LINK_INV_HST (CLIENT_ID, NAME, LINK_CLIENT_ID) VALUES ('2', 'John', '3')
INSERT INTO #PR_LINK_INV_HST (CLIENT_ID, NAME, LINK_CLIENT_ID) VALUES ('3', 'Diana', '1')

DECLARE @CLIENT_ID VARCHAR(15)
SET @CLIENT_ID = '1'

-- This CTE searches for the linked clients
;WITH pr_linked (CLIENT_ID, NAME, LINK_CLIENT_ID, [LEVEL])
AS
(
    /* Anchor member - the selected client */
    SELECT CLIENT_ID, Name, LINK_CLIENT_ID, 0
    FROM #PR_LINK_INV_HST
    WHERE CLIENT_ID = @CLIENT_ID

    /* Recursive member to search for the child clients. */
    UNION ALL
    SELECT HST.CLIENT_ID, HST.Name, HST.LINK_CLIENT_ID, LNK.[LEVEL] + 1
    FROM #PR_LINK_INV_HST HST
    JOIN pr_linked LNK ON LNK.LINK_CLIENT_ID = HST.CLIENT_ID
    WHERE LNK.[LEVEL] >= 0

    /* Recursive member to search for the parent clients. */
    UNION ALL
    SELECT HST.CLIENT_ID, HST.Name, HST.LINK_CLIENT_ID, LNK.[LEVEL] - 1
    FROM #PR_LINK_INV_HST HST
    JOIN pr_linked LNK ON LNK.CLIENT_ID = HST.LINK_CLIENT_ID
    WHERE LNK.[LEVEL] <= 0
)
SELECT DISTINCT *
INTO #RESULTS
FROM pr_linked

-- display result
SELECT *
FROM #RESULTS
ORDER BY [LEVEL], NAME

DROP TABLE #RESULTS
DROP TABLE #PR_LINK_INV_HST
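If the recursion is legitimate and terminates, the limit can be raised with a query hint on the outer statement that uses the CTE; a sketch:

```sql
SELECT DISTINCT *
INTO #RESULTS
FROM pr_linked
OPTION (MAXRECURSION 1000);   -- default is 100; 0 means unlimited
```

Note, though, that the sample data forms a cycle (1 -> 2 -> 3 -> 1), so raising the limit would only delay the error (and MAXRECURSION 0 would loop until memory runs out). For cyclic data the CTE needs a cycle guard, e.g. carrying a path string of visited CLIENT_IDs in the recursive members and excluding rows already in the path.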

Urgent Help: Validating Updation of a column

Posted: 25 Apr 2013 10:14 PM PDT

IF OBJECT_ID('..test') > 0 DROP TABLE test
CREATE TABLE test (id int, name varchar(2));
GO
INSERT INTO test
SELECT 1, 'a'
UNION ALL
SELECT 2, 'b'
UNION ALL
SELECT 3, 'c'
GO

SELECT * FROM test gives the result:

id  name
1   a
2   b
3   c

Now, what I want is to create a trigger or some constraint so that we cannot update column 'name' of the table, but are still allowed to update column 'id'. Need urgent help.
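A sketch of one way to do this with an AFTER UPDATE trigger. Note that UPDATE(name) is true whenever the column appears in the SET list, even if the value is unchanged; a stricter trigger would compare inserted and deleted values:

```sql
CREATE TRIGGER trg_test_block_name_update
ON dbo.test
AFTER UPDATE
AS
BEGIN
    IF UPDATE(name)
    BEGIN
        RAISERROR('Updates to the name column are not allowed.', 16, 1);
        ROLLBACK TRANSACTION;
    END
END
```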

Which Systems are using MasterAccess User

Posted: 25 Apr 2013 05:20 PM PDT

Hi All,

I have a user named MasterAccess. As a DBA I want to see who is logged in with this user, and I want to get the client_net_address (the IP address of that system). I am getting the addresses of everyone using my 1.2 server with:

SELECT * FROM sys.dm_exec_connections

But I want to check only those users who are logged in with the MasterAccess user. Please help.
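A sketch that joins the sessions and connections DMVs and filters on the login (login name as given in the post):

```sql
SELECT s.session_id,
       s.login_name,
       s.host_name,
       s.program_name,
       c.client_net_address
FROM sys.dm_exec_sessions    AS s
JOIN sys.dm_exec_connections AS c
  ON c.session_id = s.session_id
WHERE s.login_name = 'MasterAccess';
```

sys.dm_exec_sessions carries the login and host details, while sys.dm_exec_connections carries the network address, which is why the join is needed.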

Cannot shrink log

Posted: 25 Apr 2013 11:14 AM PDT

My transaction log is about 3 GB and I can't shrink it. The database is using the FULL recovery model, and I try over and over again to back it up and shrink it, and it won't do it.

DECLARE @DatabaseName VARCHAR(50);
SET @DatabaseName = 'myDatabase'
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = @DatabaseName

This gives me LOG_BACKUP in the log_reuse_wait_desc column. I have been all over the web looking for an answer, but anything I try still leaves it the same size. I tried to change the initial size to 600 MB, but that didn't work. Any ideas what I am missing?

Thanks,
Tom

Pivot and grouping question

Posted: 25 Apr 2013 04:52 AM PDT

I posted this on the general thread but this looks like the better forum. I have a table with structure Education(edu_id, emp_id, school, year, degree), where an employee can have multiple entries if they have multiple degrees. I am trying to do a data extract with the following columns: emp_id, school1, degree1, school2, degree2, ..., school5, degree5. So this looks like a pivot query. My idea is to create a CTE on the original table that adds schooln and degreen based on the top 5 schools of an employee, and then do a pivot. Any elegant ideas on how to implement this? Thanks, Erick.

My solution so far involves 4 CTEs: the 1st uses RANK ... OVER (PARTITION BY ...) to append the pivot column to the table, plus 3 CTEs for pivots on school, year and degree, and a SELECT that joins all 4 CTEs. Any ideas for a more elegant solution?
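An alternative that needs only one CTE: number each employee's rows with ROW_NUMBER and use conditional aggregation instead of chained PIVOTs. This is a sketch; the ordering for "top 5 schools" is assumed here to be most recent year first:

```sql
WITH ranked AS (
    SELECT emp_id, school, degree,
           ROW_NUMBER() OVER (PARTITION BY emp_id ORDER BY year DESC) AS rn
    FROM dbo.Education
)
SELECT emp_id,
       MAX(CASE WHEN rn = 1 THEN school END) AS school1,
       MAX(CASE WHEN rn = 1 THEN degree END) AS degree1,
       MAX(CASE WHEN rn = 2 THEN school END) AS school2,
       MAX(CASE WHEN rn = 2 THEN degree END) AS degree2,
       MAX(CASE WHEN rn = 3 THEN school END) AS school3,
       MAX(CASE WHEN rn = 3 THEN degree END) AS degree3,
       MAX(CASE WHEN rn = 4 THEN school END) AS school4,
       MAX(CASE WHEN rn = 4 THEN degree END) AS degree4,
       MAX(CASE WHEN rn = 5 THEN school END) AS school5,
       MAX(CASE WHEN rn = 5 THEN degree END) AS degree5
FROM ranked
WHERE rn <= 5
GROUP BY emp_id;
```

Because school and degree come from the same ranked row, this keeps each pair aligned, which is awkward to guarantee with separate PIVOT operations.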

[SQL Server Data Warehousing] Recovery model for DWH databases


Thanks Kieran and Mike.


We have ETLs which run throughout the night and populate data in the data warehouse and data mart DBs, and the ETLs do not run during the daytime. The database size will be around 300GB for each DB. As Mike said, we can take full backups of the DBs, but in our case we are in the process of designing a DR strategy for the DWH databases, and transferring the full backup over the network is not feasible (even with compression enabled); hence we are planning for weekly fulls and daily differentials with the simple recovery model...


What is your opinion on this solution, or can you suggest some alternate plans?


Thanks for the help




social.technet.microsoft.com/Forums

[SQL Server Data Warehousing] Optimum DR strategy for datawarehouse DBs


We have SQL Server 2008 R2 servers with data warehouse DBs which get populated by SSIS packages for ETLs. The database sizes are around 380GB - 400GB. The ETLs run throughout the night and load around 1 million rows of data, and no activity happens during the daytime. We can afford a data loss of 1 day at any point. The business wants to design a DR strategy for our DWH databases. I want to take into account the performance factor and network bandwidth for this environment.


I do not want to rely on full backups alone, as the size of each full backup for the DWH DBs is around 80GB (after compression is enabled) and we have 5 DBs; it would choke the network bandwidth if I copied the backups to the DR site over the network.


i have proposed the following solution


1. All the DBs in simple recovery model  


2. Take full backups (compression enabled) every weekend


3. Take differential backups (compression enabled) during the weekdays


Can you guys suggest whether this is a correct DR strategy for data warehouse DBs, considering performance, network and the affordable data loss? Please also help out with any other strategy you can think of...


-sqluser




social.technet.microsoft.com/Forums
