Thursday, July 4, 2013

[how to] Can SQL Server Access (Grant/Revoke/Restore) be categorized under DB Backup and Recovery?

Can SQL Server Access (Grant/Revoke/Restore) be categorized under DB Backup and Recovery?

Posted: 04 Jul 2013 07:30 PM PDT

Lately I've been performing a lot of backup/recovery for databases from one server to another.

Aside from that, more than half the time I perform a restore I also have to restore access for sanctioned logins/accounts. There are times when I need to restore a database to a development box as well as grant approved access to a new developer.

So when I need to categorize the things I do, is it okay to put access granting/revoking/recovering under "Database Backup/Restore" ?

Or should I only place it there when it is done as part of a database restoration?

Stored function checking Booking availability

Posted: 04 Jul 2013 06:05 PM PDT

Trying to create a boolean function that checks booking availability.

Lets say BOOKING table has attribute BOOK_Start and BOOK_End that are of date type.

I want to create a function that receives two date parameters and tells whether that time frame is available.
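A minimal sketch of one common approach, assuming MySQL and the BOOKING table as described (the function and parameter names are my own): two ranges overlap exactly when each one starts before the other ends, so the frame is free when no such overlap exists.

    DELIMITER //
    CREATE FUNCTION booking_available(p_start DATE, p_end DATE)
    RETURNS BOOLEAN
    READS SQL DATA
    BEGIN
        DECLARE clash INT;
        -- Count bookings whose range overlaps the requested one
        SELECT COUNT(*) INTO clash
        FROM BOOKING
        WHERE BOOK_Start < p_end
          AND BOOK_End   > p_start;
        RETURN clash = 0;
    END //
    DELIMITER ;

    -- Usage: SELECT booking_available('2013-07-10', '2013-07-12');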

SELECT lock only for inserts

Posted: 04 Jul 2013 06:29 PM PDT

I need to get the MAX aggregate of a certain field within a transaction and use this value further inside the transaction. While the transaction runs, I want to prevent inserts into the table from which I'm getting the MAX.

I researched the isolation levels and lock hints available. The closest seems to be TABLOCKX, which does prevent inserts on the entire table, but it also blocks updates and selects.

Since I won't be affected by updates or selects on the table, I want to block only inserts for the duration of the transaction. Is this possible?
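One narrower option than TABLOCKX, sketched here with hypothetical table and column names (SQL Server): take key-range locks on the range you read with UPDLOCK, HOLDLOCK. Inserts that would land in the locked range (in particular a new maximum, if the column is indexed) block until the transaction ends, while plain shared readers are not blocked.

    BEGIN TRANSACTION;

    DECLARE @max_val int;

    -- UPDLOCK + HOLDLOCK (serializable behaviour) takes key-range locks on
    -- the range scanned for the MAX, so conflicting inserts wait for COMMIT.
    SELECT @max_val = MAX(SomeValue)
    FROM dbo.SomeTable WITH (UPDLOCK, HOLDLOCK);

    -- ... use @max_val further inside the transaction ...

    COMMIT TRANSACTION;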

Movie Database, storing multiple genres

Posted: 04 Jul 2013 04:58 PM PDT

I'm trying to build a database that will store information on movies.

    Title      Plot      Genre      Rating      Director  

The only thing that is bothering me is that most films don't have just one genre, and I'm struggling to figure out how to store this in a MySQL database. At first I was thinking that I'd just have one table and store all the genres in one column, separated by commas, and split them with PHP when I retrieve them. But I'm not sure this is the best way, as I think I would have trouble sorting and searching for a specific genre, e.g. Horror, when the column contains 'Horror, Thriller, Action'.
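The usual alternative to a comma-separated column is a genre table plus a movie-to-genre junction table. A sketch, assuming an existing movie table with a movie_id primary key (all names here are illustrative):

    CREATE TABLE genre (
        genre_id INT UNSIGNED NOT NULL AUTO_INCREMENT,
        name     VARCHAR(50) NOT NULL,
        PRIMARY KEY (genre_id),
        UNIQUE KEY (name)
    ) ENGINE=InnoDB;

    CREATE TABLE movie_genre (
        movie_id INT UNSIGNED NOT NULL,
        genre_id INT UNSIGNED NOT NULL,
        PRIMARY KEY (movie_id, genre_id),
        FOREIGN KEY (movie_id) REFERENCES movie (movie_id),
        FOREIGN KEY (genre_id) REFERENCES genre (genre_id)
    ) ENGINE=InnoDB;

    -- All horror movies:
    SELECT m.*
    FROM movie m
    JOIN movie_genre mg ON mg.movie_id = m.movie_id
    JOIN genre g        ON g.genre_id  = mg.genre_id
    WHERE g.name = 'Horror';

Searching and sorting by genre then becomes an indexed join instead of string parsing in PHP.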

Website is working very slowly

Posted: 04 Jul 2013 09:01 PM PDT

We have replication from Oracle to MySQL via Oracle Golden Gate.

Yesterday because of some issues we have rebuilt the replication from Oracle to MySQL (this is called opal replication).

Now our website is very slow.

"Website load time is really slow today: 10.72 sec vs 6.4 sec last week."

We didn't find any replication issues and there is no lagging. We also couldn't find the actual issue causing the website load time to be so slow.

Could anyone please help me?

how to configure MySQL for an application that does all read result buffering? [on hold]

Posted: 04 Jul 2013 08:41 PM PDT

The database and css mystify me. I'm more comfortable with the middle, server and client side code, so please bear with me. Also, I'm still a server management novice. My understanding of "buffer" and "cache" is probably different from MySQL's usage, so I think I'll probably misuse them.

I'm writing a websocket++ server on ubuntu and am using the application's persistent memory to buffer read data such as custom std::vectors of PRIMARY keys for rapid dynamic single-webpage pagination over entire tables, like stack's question ranking criteria such as votes, newest, frequent, etc, and the most frequently viewed content by connected users.

I'd like to limit the database from storing client results in memory while allowing it to use whatever memory's necessary for high performance reading from & writing to disk. All the flags I've found restrict both without distinction.

I'd also like to restrict the amount of memory allocated to the amount it actually uses, to prevent the websocket++ server from hitting the page file while RAM sits unused yet inaccessible. All the flags I've found restrict total memory usage and preallocation together, without distinction.

What settings are necessary to achieve this?

Greyed out menu options, broken activity monitor

Posted: 04 Jul 2013 12:31 PM PDT

I recently installed SP1 on my 2012 SQL Server. Ever since then SSMS has behaved a little weirdly when I connect from my pc rather than from the local server itself.

I no longer see the green "running" icons on the server and on the agent, and several menu options are greyed out (Start/Stop/Restart on server menu and agent menu).


Also, when I try to open the Activity Monitor, I get an error (screenshot and error detail not reproduced here).

I found the following article about the Activity Monitor. I followed the steps there (although I am a local admin on the sql server and should have had all of those permissions already) to no avail. The server is running Windows 2008 R2 Datacenter SP1. When I remote desktop to the server itself and use SSMS directly I have none of these issues, and when a domain admin connects to the server through SSMS remotely everything also works for him.

Any ideas?

How to represent it with UNIQUE indexes and foreign keys?

Posted: 04 Jul 2013 01:42 PM PDT

"Group" is a group of pupils. "Topic" is a topic of lessons.

Every group corresponds to exactly one topic. (So there may be multiple groups per topic.)

A pupil can belong to multiple groups, but to at most one group per topic.

How to implement this in terms of UNIQUE indexes and foreign keys in MySQL?
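One way to express those rules, as a sketch only (table and column names are mine): store the group's topic redundantly in the membership table so that "at most one group per topic per pupil" becomes a UNIQUE key, and use a composite foreign key so the redundant topic cannot drift out of sync with the group's topic.

    CREATE TABLE topic (
        topic_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY
    ) ENGINE=InnoDB;

    CREATE TABLE pupil (
        pupil_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY
    ) ENGINE=InnoDB;

    CREATE TABLE `group` (
        group_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        topic_id INT UNSIGNED NOT NULL,
        FOREIGN KEY (topic_id) REFERENCES topic (topic_id),
        UNIQUE KEY uq_group_topic (group_id, topic_id)   -- target for the composite FK below
    ) ENGINE=InnoDB;

    CREATE TABLE pupil_group (
        pupil_id INT UNSIGNED NOT NULL,
        group_id INT UNSIGNED NOT NULL,
        topic_id INT UNSIGNED NOT NULL,
        PRIMARY KEY (pupil_id, group_id),
        UNIQUE KEY uq_pupil_topic (pupil_id, topic_id),  -- max one group per topic per pupil
        FOREIGN KEY (pupil_id) REFERENCES pupil (pupil_id),
        FOREIGN KEY (group_id, topic_id) REFERENCES `group` (group_id, topic_id)
    ) ENGINE=InnoDB;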

Data not showing in English, unless you click on it - In sqldeveloper

Posted: 04 Jul 2013 09:22 AM PDT

I have installed Oracle XE and imported data into it. From SQL*Plus, I can query the data and it all shows up in English. However, in SQL Developer, when I look at the data it shows up as gibberish, unless I double-click on it, at which point it turns into English. If I query the data through the SQL Worksheet, it also displays as gibberish, but again, clicking on a data item turns it into English.

I am utterly confused.

Any ideas?

Using inner join to get data from multiple tables [on hold]

Posted: 04 Jul 2013 09:22 AM PDT

I have 5 tables:

A(a_id, a1, a2);

B(b_id, b1, b2, a_id);

C(c_id, c1, c2, b_id);

D(d_id, d1, b_id);

E(e_id, e1, e2, d_id);

Table B's a_id references table A (a_id).

Table C's b_id references table B (b_id).

Table D's b_id references table B (b_id).

Table E's d_id references table D (d_id).

Now I want to use inner joins to get data from all of these tables. Please help me!
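A straightforward chain of inner joins following those keys would look like the sketch below (pick the columns you actually need). Note that because C, D and E all hang off B, each B row is repeated once per combination of matching C and D/E rows; if that multiplication is not wanted, query the branches separately.

    SELECT a.a1, a.a2,
           b.b1, b.b2,
           c.c1, c.c2,
           d.d1,
           e.e1, e.e2
    FROM A AS a
    INNER JOIN B AS b ON b.a_id = a.a_id
    INNER JOIN C AS c ON c.b_id = b.b_id
    INNER JOIN D AS d ON d.b_id = b.b_id
    INNER JOIN E AS e ON e.d_id = d.d_id;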

Unable to call function [on hold]

Posted: 04 Jul 2013 08:58 AM PDT

I've written this function:

    CREATE FUNCTION check_if_if_exist(
        id INTEGER,
        table_name character(50),
        table_column character(20)
    ) RETURNS BOOLEAN AS
    $$
    BEGIN
        RETURN EXECUTE 'SELECT EXISTS(SELECT * FROM table_name WHERE table_column = id)';
    END;
    $$ LANGUAGE plpgsql;

but I'm not able to call it (I always get an error). How can I call it correctly?
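For reference, a corrected sketch of such a function: in PL/pgSQL, EXECUTE cannot be used as an expression after RETURN, and identifiers passed as parameters have to be spliced into the dynamic SQL (here via quote_ident) rather than referenced inside the string literal, with the value supplied through USING.

    CREATE FUNCTION check_if_if_exist(
        id INTEGER,
        table_name character(50),
        table_column character(20)
    ) RETURNS BOOLEAN AS
    $$
    DECLARE
        result BOOLEAN;
    BEGIN
        EXECUTE 'SELECT EXISTS(SELECT 1 FROM ' || quote_ident(trim(table_name))
             || ' WHERE ' || quote_ident(trim(table_column)) || ' = $1)'
        INTO result
        USING id;
        RETURN result;
    END;
    $$ LANGUAGE plpgsql;

    -- Example call (assuming a "users" table with an integer "user_id" column):
    -- SELECT check_if_if_exist(42, 'users', 'user_id');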

Streaming replication setup, sane setup?

Posted: 04 Jul 2013 08:51 AM PDT

Not being a DBA by profession, I would like to know if the following assumptions are true. I have been searching the net for days and I think I have understood it correctly, but I would like to hear from people who have faced my exact configuration.

I want to utilize streaming replication, which I have a working setup for.

I have turned off WAL archiving, since I believe that wal_keep_segments will retain enough backlog to get a slave up to date if necessary (it is set to 700; you might think this is high, I just calculated how much "buffer" I could spare for the replication backlog, so please tell me if this is the wrong approach).

I have the same configuration on master and slave, which includes the following:

    wal_level = hot_standby
    hot_standby = on  # (this is ignored on the master, as I understand it?)

Now, the main question is about the first entry (wal_level): I could theoretically set this to minimal, correct? However, I'm thinking that for ease of failover I should keep it as it is?

Finally, I never specify restore_command in my recovery.conf, because as I have understood it, this is for log shipping (when rsyncing the WAL files between the two nodes or similar), or if I were to bring back a slave node from a long disconnect with archive_mode on. 12 GB of WAL segments should be good for at least a week (unless I do some big restore of a DB), and I assume that all segments that are in pg_xlog will be streamed to the slave upon reconnect if it is behind?

And finally, what would be a good way to monitor that the replication works as it should? I know about repmgr, but it would be nice to understand how such monitoring decides that the slave is out of date.
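For basic monitoring without extra tooling, a sketch of the usual built-in views and functions (PostgreSQL 9.1 and later): on the master, pg_stat_replication shows each connected standby and how far it has received/replayed; on the slave, pg_last_xact_replay_timestamp() gives an approximate replay delay.

    -- On the master: one row per connected standby
    SELECT client_addr, state, sent_location, write_location,
           flush_location, replay_location
    FROM pg_stat_replication;

    -- On the slave: approximate replication delay
    SELECT now() - pg_last_xact_replay_timestamp() AS replay_delay;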

any documentation on tpcc-mysql benchmark tool

Posted: 04 Jul 2013 11:05 AM PDT

Where can I find the tpcc-mysql documentation? I googled it but I can't find any documentation about it.

Splitting a table based on enum value: One, Two or Three Tables?

Posted: 04 Jul 2013 07:05 PM PDT

I am re-designing our company database, and am wondering whether or not to split this table up into two (or even three) different tables based on the type field. Prospects can become clients, and in rare cases, clients can revert to being prospects again (usually correction of a user error). Currently there are no suppliers who are clients, but theoretically there's nothing to stop that from happening in the future. As you can see, several fields are only needed in one case or another.

    CREATE TABLE `companies` (
      /* book-keeping fields every data table should have */
      `id` INT UNSIGNED NOT NULL AUTO_INCREMENT,
      `created` TIMESTAMP NULL DEFAULT NULL, /* would love to specify DEFAULT CURRENT_TIMESTAMP here but MySQL won't let me */
      `modified` TIMESTAMP NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
      /* fields for all company types */
      `type` ENUM('prospect','client','supplier') CHARACTER SET ascii COLLATE ascii_bin NOT NULL DEFAULT 'prospect',
      `name` VARCHAR(25) DEFAULT NULL,
      `abbr_name` VARCHAR(8) DEFAULT NULL,
      `formal_name` VARCHAR(255) DEFAULT NULL,
      `web_site` VARCHAR(255) DEFAULT NULL,
      `email` VARCHAR(320) DEFAULT NULL,
      `logo` VARCHAR(255) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
      `key` ENUM('false','true') CHARACTER SET ascii COLLATE ascii_bin NOT NULL DEFAULT 'false',
      `bust` ENUM('false','true') CHARACTER SET ascii COLLATE ascii_bin NOT NULL DEFAULT 'false',
      `head_office` INT UNSIGNED DEFAULT NULL,
      /* only used for clients */
      `accounting_code` VARCHAR(5) CHARACTER SET ascii DEFAULT NULL,
      /* used for prospects, and clients obtained by marketing */
      `marketing_user` INT UNSIGNED DEFAULT NULL,
      `marketing_datasource` CHAR(2) CHARACTER SET ascii DEFAULT NULL,
      /* only used for prospects */
      `marketing_dead` ENUM('false','true') CHARACTER SET ascii COLLATE ascii_bin NOT NULL DEFAULT 'false',
      /* only used for suppliers */
      `supplies` SET('materials','plant') CHARACTER SET ascii COLLATE ascii_bin NOT NULL,
      `account_holding_branch` INT UNSIGNED DEFAULT NULL,
      PRIMARY KEY (`id`),
      FOREIGN KEY (`head_office`) REFERENCES sites (`id`),
      FOREIGN KEY (`marketing_user`) REFERENCES users (`id`),
      FOREIGN KEY (`account_holding_branch`) REFERENCES sites (`id`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

What do people recommend, and importantly, why?

A similar question, Splitting a table based on business logic provides one curt answer with two relevant reasons to help decide whether splitting is worth it:

  • Partition hot and cold data for cache efficiency
  • Many columns almost always null

As to data 'temperature', key clients and key suppliers are hotter than other clients or suppliers, and hotter than prospects. Any company which has gone bust, or for which the marketing trail has died, is pretty frigid.

As to NULL, it varies, but prospects almost always do not have a logo uploaded, and suppliers presently do not have data in any of the four accounting_code and marketing_* fields, though as I say there's no technical reason why a parts supplier could not also become a client of our business.

Mongo authentication user can't drop database

Posted: 04 Jul 2013 12:52 PM PDT

My question is: is it possible to define a user in a Mongo database who will be able to drop that database? The maximum user privilege is "dbAdmin", but that privilege doesn't allow dropping the database, only dropping collections.

I know that "clusterAdmin" has the right to drop databases, but that role can't be defined in a regular database, only in the admin database...

SQL Server 2012 Management Studio Books Online - Help

Posted: 04 Jul 2013 08:05 AM PDT

When I press F1 in SQL Server 2012 Management Studio, it opens Microsoft Help Viewer 1.0, but there is no help content. Adding content is not possible because both "install online" and "install offline" are grayed out. When I download Books Online, the file is > 200 MB. When I open VS2012, the help section contains a chapter "SQL Server 2012", which is 40 MB. This brings me to these questions:

  1. Why does SQL Server 2012 still use Help Viewer 1.0 by default (and not the latest version installed on the machine, which would be 2.0)?
  2. Why am I not able to add content to this Help Viewer 1.0?
  3. Why is the SQL Server 2012 help content in Help Viewer 2.0 only 40 MB while the downloaded Books Online is 200 MB?

This sql select must be optimized...how?

Posted: 04 Jul 2013 03:29 PM PDT

Here is my query

SELECT DISTINCT      SM.sales_date,      (SELECT               SUM(gross_sale)          FROM              sales_main AS INTERNAL          WHERE              INTERNAL.sales_date = SM.sales_date AND INTERNAL.vehicle_name=SM.vehicle_name) AS TGROSS,      (SELECT               SUM(net_sale)          FROM              sales_main AS INTERNAL          WHERE              INTERNAL.sales_date = SM.sales_date AND INTERNAL.vehicle_name=SM.vehicle_name) AS TNET,      (SELECT               SUM(cash)          FROM              sales_main AS INTERNAL          WHERE              INTERNAL.sales_date = SM.sales_date AND INTERNAL.vehicle_name=SM.vehicle_name) AS TCASH,      (SELECT               SUM(cheque)          FROM              sales_main AS INTERNAL          WHERE              INTERNAL.sales_date = SM.sales_date AND INTERNAL.vehicle_name=SM.vehicle_name) AS TCHEQUE,      (SELECT               SUM(credit)          FROM              sales_main AS INTERNAL          WHERE              INTERNAL.sales_date = SM.sales_date AND INTERNAL.vehicle_name=SM.vehicle_name) AS TCREDIT,      (SELECT               SUM((SELECT                           amount                      FROM                          sales_main_deduction AS INTERNAL                      WHERE                          INTERNAL.instance = '4%' AND INTERNAL.idsales_main = SMinternal.idsales_main))          FROM              sales_main AS SMinternal          WHERE              SMinternal.sales_date = SM.sales_date AND SMinternal.vehicle_name=SM.vehicle_name) AS '4%'              ,      (SELECT               SUM((SELECT                           amount                      FROM                          sales_main_deduction AS INTERNAL                      WHERE                          INTERNAL.instance = '6%' AND INTERNAL.idsales_main = SMinternal.idsales_main))          FROM              sales_main AS SMinternal          WHERE              SMinternal.sales_date = SM.sales_date AND SMinternal.vehicle_name=SM.vehicle_name) AS '6%',      (SELECT               SUM((SELECT                           amount                      FROM                          sales_main_deduction AS INTERNAL                      WHERE                          INTERNAL.instance = '10%' AND INTERNAL.idsales_main = SMinternal.idsales_main))          FROM              sales_main AS SMinternal          WHERE              SMinternal.sales_date = SM.sales_date AND SMinternal.vehicle_name=SM.vehicle_name) AS '10%',      (SELECT               SUM((SELECT                           amount                      FROM                          sales_main_deduction AS INTERNAL                      WHERE                          INTERNAL.instance = 'M/R' AND INTERNAL.idsales_main = SMinternal.idsales_main))          FROM              sales_main AS SMinternal          WHERE              SMinternal.sales_date = SM.sales_date AND SMinternal.vehicle_name=SM.vehicle_name) AS 'M/R',      (SELECT               SUM((SELECT                           amount                      FROM                          sales_main_deduction AS INTERNAL                      WHERE                          INTERNAL.instance = 'S/R' AND INTERNAL.idsales_main = SMinternal.idsales_main))          FROM              sales_main AS SMinternal          WHERE              SMinternal.sales_date = SM.sales_date AND SMinternal.vehicle_name=SM.vehicle_name) AS 'S/R'  FROM      sales_main AS SM  WHERE      (SM.sales_date >= '2013-06-01' AND SM.sales_date <= '2013-06-05') AND SM.vehicle_name='Lorry1'  

and this is the result..

2013-06-01  59582.03  58336.59  32096.96  11141.02  15098.59  1245.44                  2013-06-03  132334.45  126461.84  33138.93  11046.63  82276.28  1774.46 557.02  3541.13  2013-06-04  72184.69  71882.93  27840.74  12410.16  31632.03   152.16    149.6       2013-06-05  93478.20  86721.32  34461.47  24971.24  27288.6  934.48  3553.32  2269.08  

But if this query is extended to 31 days of the month, it takes 10 minutes to return results from a 2000-row table. Please help.
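One common rewrite, shown here only as a sketch (assuming MySQL, and that summing every matching sales_main_deduction row per sale is acceptable): aggregate the deductions once per sale in a derived table, then do a single GROUP BY over sales_main, instead of re-scanning sales_main in eleven correlated subqueries per output row.

    SELECT
        SM.sales_date,
        SUM(SM.gross_sale) AS TGROSS,
        SUM(SM.net_sale)   AS TNET,
        SUM(SM.cash)       AS TCASH,
        SUM(SM.cheque)     AS TCHEQUE,
        SUM(SM.credit)     AS TCREDIT,
        SUM(D.ded_4)       AS '4%',
        SUM(D.ded_6)       AS '6%',
        SUM(D.ded_10)      AS '10%',
        SUM(D.ded_mr)      AS 'M/R',
        SUM(D.ded_sr)      AS 'S/R'
    FROM sales_main AS SM
    LEFT JOIN (
        SELECT idsales_main,
               SUM(CASE WHEN instance = '4%'  THEN amount END) AS ded_4,
               SUM(CASE WHEN instance = '6%'  THEN amount END) AS ded_6,
               SUM(CASE WHEN instance = '10%' THEN amount END) AS ded_10,
               SUM(CASE WHEN instance = 'M/R' THEN amount END) AS ded_mr,
               SUM(CASE WHEN instance = 'S/R' THEN amount END) AS ded_sr
        FROM sales_main_deduction
        GROUP BY idsales_main
    ) AS D ON D.idsales_main = SM.idsales_main
    WHERE SM.sales_date BETWEEN '2013-06-01' AND '2013-06-05'
      AND SM.vehicle_name = 'Lorry1'
    GROUP BY SM.sales_date;

Indexes on sales_main (vehicle_name, sales_date) and on sales_main_deduction (idsales_main, instance) would likely help as well.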

How to run a SELECT query within while loop in PHP?

Posted: 04 Jul 2013 04:47 PM PDT

Within a PHP while loop over the results of one SELECT query, I need to run another MySQL SELECT query, like this:

    $result = $mysqli->query("SELECT * FROM table1");

    while ( $row = $result->fetch_assoc() ) {
        if ( $row['X'] == 'X' ) {
            $result2 = $mysqli->query("SELECT * FROM table2");
            .....
        }
    }

but this does not work. I cannot JOIN the tables, as the if statement is complicated to perform SELECT from different tables.

Delete from table where multiple fields match select subquery from other table

Posted: 04 Jul 2013 09:47 AM PDT

I want to delete an entry in a table where multiple fields match the results of another select subquery which takes data from another table.

This is what I have so far, though it doesn't work:

    DELETE FROM table1
    WHERE table1.id IN
        (SELECT id
         FROM table1 a
         JOIN table2 b
           ON a.field1 = b.field1
          AND a.field2 = b.field2
          AND a.field3 = b.field3
          AND b.id = ?
        )
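If this is MySQL, one thing that commonly breaks this pattern is selecting from table1 inside a subquery while deleting from it. A sketch of the multi-table DELETE form that avoids the self-reference (same parameter placeholder, assuming the subquery's id meant a.id):

    DELETE t1
    FROM table1 AS t1
    JOIN table2 AS b
      ON t1.field1 = b.field1
     AND t1.field2 = b.field2
     AND t1.field3 = b.field3
    WHERE b.id = ?;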

MySql one time event never runs?

Posted: 04 Jul 2013 12:47 PM PDT

Please have a look at below events

    1) create EVENT Test1 ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL 20 second ON COMPLETION PRESERVE ENABLE DO ...

    2) create EVENT Test2 ON SCHEDULE EVERY 20 SECOND STARTS CURRENT_TIMESTAMP ON COMPLETION PRESERVE ENABLE DO ...

I expect event Test1 to run one time after 20 seconds but it never runs. Event Test2 is working fine.

Any idea? Thanks.

OK, sorry, it is the ALTER that is not working.

At first I did: create EVENT Test1 ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL 20 second ON COMPLETION PRESERVE ENABLE DO ...

Then shortly after I did: alter EVENT Test1 ON SCHEDULE AT CURRENT_TIMESTAMP + INTERVAL 20 second ON COMPLETION PRESERVE ENABLE DO ...

I expected event Test1 to run again in another 20 seconds, but it didn't.

-E startup option and SSDs

Posted: 04 Jul 2013 04:23 PM PDT

Has anybody seen evidence of the effect of -E when using SSDs?

The effect on 'spinning rust' drives is not disputed - but SSDs aren't really flustered by random I/O. I wonder if the -E option might even hurt performance.

On servers with a mix of drives (SSD SAN, PCI SSDs and traditional SAN) SQL Server has to decide on startup whether to use -E or not. I have some empirical evidence that the option might be detrimental to performance, but I'd like other people's feedback before I consider taking it off.

My setup uses standard 64K RAID stripes, and the NTFS cluster size is also 64K.

ORA-16000 when trying to perform select on read only Oracle database

Posted: 04 Jul 2013 08:46 PM PDT

My application's SQL encounters an ORA-16000 error when trying to access a read only Oracle database.

    ORA-16000: database open for read-only access
    ORA-06512: at "SYS.XMLTYPE", line 272
    ORA-06512: at line 1
    ### The error may exist in com/xyz/report/dao/Transaction001Mapper.xml

This is the query that involves the XMLTYPE, the INTERFACE_CONTENT is a CLOB column:

    SELECT CONTENTS FROM ERRCODES
    WHERE CODE = (SELECT xmltype(INTERFACE_CONTENT).extract(
                             '/Message/PaymentStatus/ps:FIToFIPmtStsRpt/ps:TxInfAndSts/ps:StsRsnInf/ps:Rsn/ps:Prtry/text()',
                             'xmlns="urn:abcde" xmlns:head="urn:iso:std:iso:20022:tech:xsd:head.001.001.01" xmlns:ps="urn:iso:std:iso:20022:tech:xsd:pacs.002.001.03"'
                         ).getstringval() APP_CODE
                  FROM MESSAGE_EXTERNAL_INTERACTION MEI
                  WHERE MEI.MSG_TYPE = 'Pacs_002'
                    AND MEI.MID = 'MY_ID')

I also use the EXTRACTVALUE() method on an XML-typed field in a number of places.

The SQL is working perfectly if the database is not read only (read write).

What is the issue here? Is this related to some missing privileges/grants?

Many to Many and Weak Entities

Posted: 04 Jul 2013 02:00 PM PDT

I have an entity that cannot exist without being defined by another, and I want this entity to participate in a many to many relationship.

Example: An artist has an album (the album cannot exist without an artist), the album also has many tracks, but the same track can exist in many albums.

So we have a many to many relationship between the album and the tracks.

If the album is also a weak entity, then its primary key includes a foreign key referencing the artist, so it seems it cannot simply be used as a single-column foreign key in another table representing the m-to-n relationship.

The question is: is it possible to have this kind of relationship in SQL, and if so, how do I express it?
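It is possible: the junction table just carries the album's whole composite key as its foreign key. A sketch in generic SQL (all names are mine):

    CREATE TABLE artist (
        artist_id INT PRIMARY KEY,
        name      VARCHAR(100) NOT NULL
    );

    -- Weak entity: identified by its owning artist plus a partial key
    CREATE TABLE album (
        artist_id INT NOT NULL,
        album_no  INT NOT NULL,
        title     VARCHAR(200) NOT NULL,
        PRIMARY KEY (artist_id, album_no),
        FOREIGN KEY (artist_id) REFERENCES artist (artist_id)
    );

    CREATE TABLE track (
        track_id INT PRIMARY KEY,
        title    VARCHAR(200) NOT NULL
    );

    -- The m:n link carries the album's full composite key
    CREATE TABLE album_track (
        artist_id INT NOT NULL,
        album_no  INT NOT NULL,
        track_id  INT NOT NULL,
        PRIMARY KEY (artist_id, album_no, track_id),
        FOREIGN KEY (artist_id, album_no) REFERENCES album (artist_id, album_no),
        FOREIGN KEY (track_id)            REFERENCES track (track_id)
    );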

How to run a cold backup with Linux/tar without shutting down MySQL slave?

Posted: 04 Jul 2013 03:47 PM PDT

I run the following before tar-ing up the data directory:

    STOP SLAVE;
    FLUSH TABLES WITH READ LOCK;
    FLUSH LOGS;

However, tar will sometimes complain that the ibdata* and ib_logfiles* files are updated during the process. What am I missing?

The slave is a cold standby machine, so there are no client processes running while tar is running.

CentOS release 5.6 64bits, MySQL 5.1.49-log source distribution.

Can I use SQL Server Management Studio 2012 on a 2008 server?

Posted: 04 Jul 2013 03:21 PM PDT

My school is using SQL Server 2008. Last time I used it, I only installed the SQL Server Management Studio 2008 part at home and used a VPN to connect to the school's server.

Now I want to install SQL Server Management Studio again, but this time the 2012 version is available.

Will SQL Server Management Studio 2012 work on the school's SQL Server 2008? Or do I need the 2008 version?

Run Unix command using PL/SQL

Posted: 04 Jul 2013 08:47 AM PDT

Is it possible to run a Unix command using a query in Oracle?

I want to run simple commands (like df -h) using a query.

Is this at all possible, or am I wasting my time? I don't want to use a different language like Java or C to call a procedure; it needs to be purely PL/SQL.
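Strictly speaking PL/SQL has no built-in shell access, but staying inside PL/SQL you can ask the database's scheduler to run an external executable. A rough sketch using DBMS_SCHEDULER (the job name is made up; this needs the CREATE EXTERNAL JOB privilege and a working external-job agent, and the command's output goes to the scheduler's job log rather than back to your session):

    BEGIN
        DBMS_SCHEDULER.CREATE_JOB(
            job_name            => 'RUN_DF',        -- hypothetical job name
            job_type            => 'EXECUTABLE',
            job_action          => '/bin/df',
            number_of_arguments => 1,
            enabled             => FALSE);
        DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE('RUN_DF', 1, '-h');
        DBMS_SCHEDULER.RUN_JOB('RUN_DF');
    END;
    /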

[SQL Server] Struggling to change an Access Connection string to MS SQL Server

Struggling to change an Access Connection string to MS SQL Server

Posted: 03 Jul 2013 05:53 PM PDT

Hi there,

I have an evaluation application that I am trying to configure to use SQL Server instead of the built-in Access database. I have created and set up all the tables and fields on the SQL Server, but cannot get the database functionality to connect and work correctly.

The function used for the Access DB is as follows:

    Function OpenDB
        Dim Conn
        ' Connect the database
        Set Conn = Server.CreateObject("ADODB.Connection")
        Conn.Provider = "Microsoft.Jet.OLEDB.4.0"
        Conn.Open Server.MapPath("dbname.mdb")
        Set OpenDB = Conn
    End Function

For SQL Server, the connection string is usually very different, like:

    Conn = "Provider=SQLOLEDB.1;Persist Security Info=False;User ID=username;pwd=password;Initial Catalog=MyDatabse;Data Source=ServerName"

How and where can I make changes to this string, so that I can get things to work with SQL Server, rather than with Access?

Many thanks
Hans

How to sum in a bigint instead of an int

Posted: 04 Jul 2013 08:25 AM PDT

I have a query where I SELECT SUM(A.MINUTES) from ... The column A.MINUTES is defined as an INT. The problem is that now and again I get an arithmetic overflow, because SUM(A.MINUTES) can be pretty huge. So, how do I get the sum into a BIGINT instead? Changing A.MINUTES to BIGINT is unfortunately not an option.

Best regards
Edvard Korsbæk
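A minimal sketch (the table name here is only a stand-in for the real FROM clause): widening the argument, rather than the column, makes the accumulator BIGINT.

    SELECT SUM(CAST(A.MINUTES AS BIGINT)) AS TotalMinutes
    FROM dbo.ActivityLog AS A;   -- hypothetical table, replace with the real FROM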

[Articles] Independence Day 2013

Independence Day 2013

Posted: 03 Jul 2013 11:00 PM PDT

Happy 4th of July in the United States

[MS SQL Server] Run a SQL Server Agent job under another account

Run a SQL Server Agent job under another account

Posted: 04 Jul 2013 04:35 AM PDT

On my server, SQL Server Agent runs under the NT AUTHORITY\NETWORK SERVICE account. That's usually just fine and I don't want to change it. However, it means that jobs running under the agent cannot access LAN shares, which is what I would like to do in a new job. I figured I could just use an account that I know has access to the share; what I can't figure out is how to do that. I read some MS documentation on setting up proxies, but I can't figure out how to make that work.

Has anyone done what I'm trying to do? If so, can you give a step-by-step to do it?
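For reference, the usual proxy setup looks roughly like the sketch below (all names are made up): a credential stores the Windows account that can reach the share, a proxy wraps that credential, and the job step's "Run as" is then set to the proxy.

    -- 1) Credential = the Windows account that has access to the share
    CREATE CREDENTIAL ShareAccessCred
        WITH IDENTITY = N'DOMAIN\ShareUser',
             SECRET   = N'StrongPasswordHere';

    -- 2) Proxy wrapping that credential
    EXEC msdb.dbo.sp_add_proxy
         @proxy_name      = N'ShareAccessProxy',
         @credential_name = N'ShareAccessCred',
         @enabled         = 1;

    -- 3) Allow the proxy for the subsystem the job step uses (CmdExec here)
    EXEC msdb.dbo.sp_grant_proxy_to_subsystem
         @proxy_name     = N'ShareAccessProxy',
         @subsystem_name = N'CmdExec';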

Monitoring Blocking Issues

Posted: 04 Jul 2013 02:27 AM PDT

Hi Everyone, firstly I'd like to wish you all a very happy 4th of July :-D I hope you're all having a good day.

I was wondering if anyone could help me with a problem I'm having with blocking on my servers. We have a constant stream of data coming in per minute from multiple gateways, and at random times the data flow is blocked. The problem is that it resolves itself by the time I can get onto the system to run sp_who2 and see what's causing it. It's been driving me crazy. The only reason I notice it has happened is that I see no current data on the reports I have running on the screen in the office.

Is there any type of monitoring approach someone could recommend that would find the root cause, be it a bad trigger or a query? Would something like DBWarden help?

Thanks everyone. Kind regards, Craig
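One lightweight approach, sketched here (where and how you store the output is up to you): have an Agent job poll the request DMVs every 30 seconds or so and log anything that is blocked, so the blocker is on record even after it clears. The 'blocked process threshold' sp_configure option plus a trace on the Blocked Process Report event is the other common route.

    SELECT  GETDATE()                          AS capture_time,
            r.session_id,
            r.blocking_session_id,
            r.wait_type,
            r.wait_time,
            DB_NAME(r.database_id)             AS database_name,
            t.text                             AS blocked_sql
    FROM sys.dm_exec_requests AS r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE r.blocking_session_id <> 0;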

Restoring just a single filegroup for support purposes

Posted: 03 Jul 2013 09:17 AM PDT

We have a large (ish) production database in our data centre which causes a problem when we need to restore a copy for support purposes. Currently we have to take a backup, restore it in the data centre, drop some large tables, back it up again and then copy it locally.

As this approach isn't ideal (!) I was wondering if we could improve things with the use of filegroups. I could add an additional filegroup to the database and then move the large tables onto the new filegroup. Our support team could then just back up the original filegroup and restore it locally. The only problem I can see is that we might need to create the missing tables in order for our application to function (i.e. we need the tables but not the data).

I've done some tests and I can't seem to recreate the table / drop the old table / drop the old file / drop the old filegroup, because the filegroup is offline. For example: "Cannot drop the table 'dbo.LargeTable' because at least part of the table resides on an offline filegroup."

I tried restoring the database with and without the PARTIAL keyword. I've also tried attaching and detaching the restored database without success. I've attached the script used for my tests.

Any advice?

Thanks in advance
David

[SQL 2012] Licensing question for services

Licensing question for services

Posted: 03 Jul 2013 09:47 PM PDT

We're looking at upgrading our server with a new version of SQL Server and I'm trying to work out how many user CALs would be needed. We've usually used processor licenses but are looking at users this time. Some programs which access the database run as services on the database server, some under the system userid and some under another userid. Do services count differently or is it just by the user they will be running under?

Run stored procedure using catalog views SSIS with SQL USER or Windows authentication user with minimum rigths

Posted: 03 Jul 2013 05:06 PM PDT

I need to run dtsx packages deployed in the SSIS catalog, with parameters, through a stored procedure, because an application fires the dtsx by calling a stored procedure with parameter values.

I have created the stored procedure using SSISDB.Catalog.Create_Execution, Set_Execution_Parameter_Value, and Start_Execution. This works, but to run it the user needs db_owner privileges on principal_db and ssis_admin on SSISDB, and it will only run if the executing user is a Windows account. That in itself is no problem, but I need the Windows account to have minimal rights on principal_db; I know I can't do it with a SQL user because a Windows account is necessary.

So I granted a Windows user permission to execute the stored procedure in the principal database, created that Windows user in SSISDB, granted it permission on the catalog procedures (SSISDB.Catalog.Create_Execution, Set_Execution_Parameter_Value, and Start_Execution) and added the clause "execute as owner" to my stored procedure, and it shows me an error. I also tried using Execute As and specifying a Windows account, but I get the error "The current security context cannot be reverted. Please switch to the original database where 'Execute As' was called and try it again." I tried using "execute as self" as well, and when I run the procedure as the other Windows user (the one I gave permission to), it tells me a Windows account is necessary.

My own user has db_owner, sysadmin and ssis_admin. I think I need to work out which permissions to give a user to execute this, considering that my procedure works with two databases: principal_db (where my procedure lives) and SSISDB (where the catalog procedures for creating the execution and setting parameters live).

Can somebody help me?

SQL Server 2012 Merge Replication Issue - Inserted Data on Subscriber get deleted after successfully synced back to the Publisher

Posted: 03 Jul 2013 03:33 PM PDT

Hi all, I have just encountered this weird issue with our merge replication setup.

We have a primary SQL Server 2012 instance replicating data out to a secondary SQL Server 2012 database (pushing data continuously). Then we have subscribers (users) who synchronize data with the secondary database server (pull subscription, bi-directional sync). We use a parameterized filter (suser_sname) and join filters on a few tables. We have the setup shown in the attachment.

At a subscriber (a user's machine), I created a new customer record, which also created customer_contact, contact, pwc_header, pwc_header_extra, pwc_line and site records (one row in each table). Then I synced the subscriber with the publisher. All the inserts were applied to the publishers (secondary and primary database servers) correctly. However, the inserts were deleted from the subscriber straight away in the same sync session, and we have no idea why it would do that.

Has anyone encountered a similar issue before? Please help...

[T-SQL] How to Add a Date randomly

How to Add a Date randomly

Posted: 03 Jul 2013 08:25 PM PDT

Hi All, in a table (sale.person) I have more than 9 lakh (900,000) rows and only three columns. I needed to add a column Orderdate (which I have done with ALTER TABLE). Next I want to fill the rows with dates in descending order: for example, if the date of the first row is 2013-07-04, then the date of the next row should be 2013-07-03, and so on. Is there any way I can do this easily?
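One set-based sketch (SQL Server syntax; SomeKeyColumn is a placeholder for whatever column defines the row order, and the starting date is taken from the example): number the rows and subtract that many days.

    ;WITH numbered AS (
        SELECT Orderdate,
               ROW_NUMBER() OVER (ORDER BY SomeKeyColumn) AS rn
        FROM sale.person
    )
    UPDATE numbered
    SET Orderdate = DATEADD(DAY, -(rn - 1), '2013-07-04');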

Query help

Posted: 03 Jul 2013 02:10 AM PDT

Hello - I was handed a script to modify. I am an accidental DBA and am confused about how to modify a query to return certain results. For example, when the script returns a 5 I want it to display a 1, when it returns a 4 I want it to display a 2, etc. Any help would be great. Thanks!!
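If the mapping is just a handful of fixed values, wrapping the returned column in a CASE expression is usually enough. A sketch (SomeColumn and dbo.SomeTable are placeholders for whatever the script actually selects):

    SELECT CASE SomeColumn
               WHEN 5 THEN 1
               WHEN 4 THEN 2
               -- add further pairs as needed
               ELSE SomeColumn
           END AS MappedValue
    FROM dbo.SomeTable;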

How to wrote procedure for these?

Posted: 03 Jul 2013 06:58 PM PDT

Hi friends, I have two tables:

    create table travel_request(
        request_id int identity primary key,
        user_id varchar(100) foreign key references users(user_id),
        purpose_travel varchar(100),
        total_amount varchar(10)
    )

    create table onward_journey(
        onward_journey_id int identity,
        request_id int foreign key references travel_request(request_id),
        departuredate datetime,
        [from] varchar(100),
        [to] varchar(100),
        travel_mode varchar(100)
    )

How do I insert into these tables? And one more thing: when I enter a travel purpose, request_id will be generated automatically, and the onward_journey row depends on that request_id. How do I do that?
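A sketch of the usual pattern (the sample values are made up): insert the parent row, capture the generated request_id with SCOPE_IDENTITY(), then use it in the child insert.

    DECLARE @new_request_id int;

    INSERT INTO travel_request (user_id, purpose_travel, total_amount)
    VALUES ('jsmith', 'Client visit', '500');

    SET @new_request_id = SCOPE_IDENTITY();   -- the request_id just generated

    INSERT INTO onward_journey (request_id, departuredate, [from], [to], travel_mode)
    VALUES (@new_request_id, '2013-07-10', 'Chennai', 'Mumbai', 'Flight');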

split a string

Posted: 03 Jul 2013 06:19 AM PDT

If I have a street address, how could I split the street from the apartment number? For example, now I have fullstreet = '100 NW 25 St APT# 303'. I would like to split fullstreet into two columns: Street = '100 NW 25 St' and Apartment = '#303'. What is the syntax? Thanks much.
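A sketch for this exact pattern (it assumes the text ' APT#' is always present; real address data would need something more defensive):

    DECLARE @fullstreet varchar(100) = '100 NW 25 St APT# 303';

    SELECT RTRIM(LEFT(@fullstreet, CHARINDEX(' APT#', @fullstreet) - 1))              AS Street,
           REPLACE(SUBSTRING(@fullstreet, CHARINDEX('#', @fullstreet), 100), ' ', '') AS Apartment;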

Can I avoid using cursors

Posted: 03 Jul 2013 06:06 AM PDT

I need to update/create records in a Notes table based upon changes to data in a separate table.I wrote a cursor to solve this problem it follows the following basic outlineIt loops through a table that list the type of 'test code' we need to look for to create notes as well as a template of the note to be created.Next it loops through all patient records that have this 'test code' the notes are patient specific and not order specific so I added this loop to cut down on the number of function calls to set a particular variable.Next it loops through all orders for the patient that has this 'test code' and calls the appropriate stored procedure to update or create the note.Is there a better way to do this without using nested cursors? It takes about 1hr for this query to run.code follows[code="sql"]DECLARE @OrderCode as varchar(50)DECLARE @DaysBetweenTest as intDECLARE @NoteText as varchar(max)DECLARE @TemplateNote as varchar(max)DECLARE @PatientId as varchar(50)DECLARE @LastPerformedDate as DATEDECLARE @OrderEventId as INTDECLARE @FirstPayableDate as DATEDECLARE @NoteId as INTDECLARE @FrequencyCheckNoteId as INT-- Loop through order codes in FrequencyCheck tableDECLARE FreqOCLoop CURSOR FOR SELECT OrderCode, DaysBetweenTest, NoteText from dbo.FrequencyCheckOPEN FreqOCLoopFETCH NEXT FROM FreqOCLoopINTO @OrderCode, @DaysBetweenTest, @TemplateNoteWHILE @@FETCH_STATUS = 0BEGIN-- Loop through patients with future events for order code DECLARE PatientLoop CURSOR FOR select distinct t3.PatientId from dbo.rvw_OrderEventOrderCodes as t1 inner join dbo.[Order] as t2 on t1.OrderId = t2.id inner join dbo.Patient as t3 on t2.PatientId = t3.id where t1.ScheduledDueDate > getdate() and t1.OrderCode=@OrderCode and t1.[Status]=2 OPEN PatientLoop FETCH NEXT FROM PatientLoop INTO @PatientId WHILE @@FETCH_STATUS = 0 BEGIN -- Initialize values SET @LastPerformedDate = NULL SET @FirstPayableDate = NULL SET @NoteText = NULL -- Find last performed date for this patient SET @LastPerformedDate = (SELECT dbo.TestLastPerformedDate(@OrderCode, @PatientId)) -- Calculate next valid date SET @FirstPayableDate = DATEADD(dd,@DaysBetweenTest,@LastPerformedDate) -- Set note value IF @LastPerformedDate IS NOT NULL BEGIN SET @NoteText = REPLACE(@TemplateNote,'<FirstPayableDate>',@FirstPayableDate) END ELSE SET @NoteText = 'No previous results for test code ' + @OrderCode -- Find all future OrderEvents for this patient and loop through them to add or update notes DECLARE EventLoop CURSOR FOR select distinct t1.OrderEventId from dbo.rvw_OrderEventOrderCodes as t1 inner join dbo.[Order] as t2 on t1.OrderId = t2.id inner join dbo.Patient as t3 on t2.PatientId = t3.id where t1.ScheduledDueDate > getdate() and t1.OrderCode=@OrderCode and t1.[Status]=2 and t3.PatientId = @PatientId OPEN EventLoop FETCH NEXT FROM EventLoop INTO @OrderEventId WHILE @@FETCH_STATUS = 0 BEGIN -- Check to see if we have already added a note for this order if (select count(id) from dbo.FrequencyCheckNotes where NoteOrderEventId=@OrderEventId)>0 BEGIN -- Check to see if the currently existing note is the same if @LastPerformedDate <> (select LastPerformedDate from dbo.FrequencyCheckNotes where NoteOrderEventId = @OrderEventId) BEGIN -- Set values SET @NoteId = (SELECT NoteId FROM dbo.FrequencyCheckNotes WHERE NoteOrderEventId = @OrderEventId) SET @FrequencyCheckNoteId = (SELECT id FROM dbo.FrequencyCheckNotes WHERE NoteOrderEventId = @OrderEventId) -- Update OrderEvent note EXEC [dbo].[UpdateFrequencyCheckNote] @NoteText = @NoteText, @NoteId = @NoteId, 
@LastPerformedDate = @LastPerformedDate, @FrequencyCheckNoteId = @FrequencyCheckNoteId END END ELSE -- Create new note EXEC [dbo].[CreateFrequencyCheckNote] @NoteText = @NoteText, @OrderEventId = @OrderEventId, @LastPerformedDate = @LastPerformedDate FETCH NEXT FROM EventLoop INTO @OrderEventId END CLOSE EventLoop DEALLOCATE EventLoop FETCH NEXT FROM PatientLoop INTO @PatientId END CLOSE PatientLoop DEALLOCATE PatientLoop FETCH NEXT FROM FreqOCLoop INTO @OrderCode, @DaysBetweenTest, @TemplateNoteENDCLOSE FreqOCLoopDEALLOCATE FreqOCLoop[/code]

query

Posted: 03 Jul 2013 05:21 AM PDT

What is the query to select the nth highest salary from the emp table?
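One common sketch using a ranking function (n = 3 here as an example; DENSE_RANK keeps tied salaries on the same rank):

    SELECT salary
    FROM (
        SELECT salary,
               DENSE_RANK() OVER (ORDER BY salary DESC) AS rnk
        FROM emp
    ) AS ranked
    WHERE rnk = 3;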

Partial match & Update

Posted: 03 Jul 2013 04:55 AM PDT

I need to update the AppID in Table1 by comparing it with Table2. The data looks like below. The problem is that the AppNames in the two tables are not exactly the same, but they are technically the same, with a partial match.

    Table1
    AppID  Appname
    NULL   CSO Services Operations
    NULL   CSP
    NULL   AAC Claims Systems

    Table2
    AppID  Appname
    1      Corporate Services Operations(CSO)
    2      Credit Servicing Portal(CSP)
    3      American Assuarance Company (AAC) Claims Systems

Please let me know how to approach this task. Thanks.
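One hedged sketch (SQL Server syntax): if every Table2 name carries its abbreviation in parentheses, and that abbreviation also appears somewhere in the corresponding Table1 name, you can join on that token. Ambiguous or non-matching names would still need manual review.

    UPDATE t1
    SET t1.AppID = t2.AppID
    FROM Table1 AS t1
    JOIN Table2 AS t2
      ON t1.Appname LIKE '%' + SUBSTRING(t2.Appname,
                                         CHARINDEX('(', t2.Appname) + 1,
                                         CHARINDEX(')', t2.Appname)
                                           - CHARINDEX('(', t2.Appname) - 1) + '%';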

[SQL Server 2008 issues] Find out Percentage Free Virtual Memory

Find out Percentage Free Virtual Memory

Posted: 03 Jul 2013 06:23 PM PDT

Hi, how do I find out free virtual memory, process CPU utilization, replication availability and the lock timeout rate with a SQL query?
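For the memory part, sys.dm_os_sys_memory (SQL Server 2008+) is the usual starting point; CPU history is exposed through sys.dm_os_ring_buffers, and most counters (replication, Lock Timeouts/sec and so on) through sys.dm_os_performance_counters. A sketch for free/percentage memory:

    SELECT  total_physical_memory_kb,
            available_physical_memory_kb,
            CAST(100.0 * available_physical_memory_kb
                       / total_physical_memory_kb AS decimal(5,2)) AS pct_physical_free,
            available_page_file_kb,
            system_memory_state_desc
    FROM sys.dm_os_sys_memory;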

Turn rows into columns

Posted: 03 Jul 2013 06:12 PM PDT

Hi, I have the following table and (sample) associated data set:

    CREATE TABLE TableHeaders
    (
        Period_Year int,
        Period_Month varchar(255),
        ColumnID int,
        ColumnLabel varchar(255)
    );

    /* sample data set
    Period_Year  Period_Month  ColumnID  ColumnLabel
    2012         January       1         Jan-12
    2012         January       2         RAF
    2012         January       3         STP
    2012         January       4         PY
    2012         January       5         Jan-12
    2012         January       6         RAF
    2012         January       7         STP
    */

I want to select from the table:

    SELECT ColumnLabel
    FROM TableHeaders
    WHERE Period_Year = @pv_SelectedYear
      AND Period_Month = @pv_SelectedMonth
    ORDER BY ColumnID

but instead of getting a data set with a number of rows (from the example it would be the list of ColumnLabels), I would like to get back one row with as many columns as there are rows.

From the example, I would like my data set to be:

    Col1    col2  col3  col4  col5    col6  col7
    Jan-12  RAF   STP   PY    Jan-12  RAF   STP

(Column names will not be used, so they can be anything.)

I hope the above makes sense - annoyingly, I can't test things out as I am "sqlserver-less" just now.

Cheers,
bleroy

Find out free physical memory

Posted: 03 Jul 2013 04:58 PM PDT

Hi, how do I find out free physical memory using a SQL Server query?

Queries that cause full scan table

Posted: 13 Jun 2013 05:35 PM PDT

Hi everybody, I would like to find the queries that cause full table scans in SQL Server. Can someone help me please?
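One rough way to find them, as a sketch (searching the cached plan XML as text is crude, but it works as a starting point): look through the plan cache for scan operators and order by reads.

    SELECT TOP (20)
           qs.total_logical_reads,
           qs.execution_count,
           st.text        AS query_text,
           qp.query_plan
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle)    AS st
    CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
    WHERE CAST(qp.query_plan AS nvarchar(max)) LIKE '%Table Scan%'
       OR CAST(qp.query_plan AS nvarchar(max)) LIKE '%Clustered Index Scan%'
    ORDER BY qs.total_logical_reads DESC;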

Can this be coded better?

Posted: 03 Jul 2013 02:45 PM PDT

My goal is to get a dataset of orders with their overall statuses.The order overall status is a computed column based of their detail line statuses.I have 2 options:1. Create a view of distinct orders and their overall statuses via a function call2. Create a computed column on the order table ( non persisted) because the order status changes over time.This computed column will still be the result of a function call.I am choosing option 1. Attached is sample data and function code.I am looking for suggestions and ideas on how to make the function code better. So far I only code a portion of the function. There are other rules to implement in the function so I am trying to find ways to make it as readable, short and simple. Eventually my function will return an INT. I just return varchar(50) for now to facilitate verification.Thank you![code="sql"]CREATE TABLE dbo.OrderDetail( OrderID INT NOT NULL ,OrderDetailID INT NOT NULL, ProductID INT NOT NULL, Qty INT NOT NULL, STATUS varchar(50) NOT NULL ) -- OrderID=1 overall status = 'SHIPPED' INSERT INTO OrderDetail values ( 1, 1,100, 10, 'Shipped') INSERT INTO OrderDetail values ( 1, 2, 200, 10, 'Shipped') -- OrderID=2 overall status = 'BACKORDERED' INSERT INTO OrderDetail values ( 2, 3,100, 10, 'Shipped') INSERT INTO OrderDetail values ( 2, 4, 200, 10, 'Backordered') -- OrderID=3 overall status = 'CANCELLED' INSERT INTO OrderDetail values (3, 5,100, 10, 'Cancelled') INSERT INTO OrderDetail values ( 3, 6, 200, 10, 'Cancelled') -- OrderID=4 overall status = 'SHIPPED' INSERT INTO OrderDetail values (4, 7,100, 10, 'Cancelled') INSERT INTO OrderDetail values ( 4, 8, 200, 10, 'Shipped') INSERT INTO OrderDetail values ( 4, 9, 300, 10, 'Shipped') -- OrderID=5 overall status = 'NO STATUS' INSERT INTO OrderDetail values (5, 10,100, 10, 'No status') INSERT INTO OrderDetail values ( 5, 11, 200, 10, 'No status') INSERT INTO OrderDetail values ( 5,12, 300, 10, 'No status')USE [TrainingDB]GOCREATE FUNCTION [dbo].[udf_CalculateOrderStatus]( @OrderID int)RETURNS varchar(50)ASBEGINDECLARE @Status varchar(50)DECLARE @TotalDetails intSET @STATUS=NULLSELECT @TotalDetails = Count(Status) FROM [dbo].[OrderDetail] where Orderid=@OrderID-- SHIPPED?IF @Status is nullBEGIN SELECT @Status= 'SHIPPED' FROM [dbo].[OrderDetail] Where OrderID=@OrderID GROUP BY OrderID HAVING MAX(Status) = 'Shipped' AND MIN(Status) = 'Shipped' SELECT @Status= 'SHIPPED' FROM [dbo].[OrderDetail] Where OrderID=@OrderID GROUP BY OrderID HAVING (COUNT(CASE Status WHEN 'Shipped' THEN 1 END)) >= 1 AND (COUNT(CASE Status WHEN 'Shipped' THEN 1 WHEN 'Cancelled' THEN 1 END )) = @TotalDetails END-- CANCELLED?IF @Status is nullBEGIN SELECT @Status= 'CANCELLED' FROM [dbo].[OrderDetail] Where OrderID=@OrderID GROUP BY OrderID HAVING MAX(Status) = 'Cancelled' AND MIN(Status) = 'Cancelled' END--BACKORDERED?IF @Status is nullBEGIN SELECT @Status= 'BACKORDERED' FROM [dbo].[OrderDetail] Where OrderID=@OrderID GROUP BY OrderID HAVING (COUNT(CASE Status WHEN 'BackOrdered' THEN 1 END )) >= 1 END--NO STATUS?IF @Status is nullBEGIN SELECT @Status= 'NO STATUS' FROM [dbo].[OrderDetail] Where OrderID=@OrderID GROUP BY OrderID HAVING MAX(Status) = 'No Status' AND MIN(Status) = 'No Status' ENDRETURN @StatusENDGO-- Goal: Set of Orders with their overall statusesselect distinct orderid, dbo.udf_CalculateOrderStatus(orderid) as OrderShipmentStatus from dbo.orderdetail[/code]

How to import non-delimited text file

Posted: 03 Jul 2013 05:07 AM PDT

Hi, all. I (will) have a non-delimited text file containing 35 lines for each of over 5000 records. Below is an example of lines 1-5 and 30-35 of one record. I need two of the 35 values (SubnetMask and SubnetDescription) for each of the 5000+ records in this long, continuous file. I'm just looking for high-level ideas on the best way to handle this. All my other processes are in SSIS, so I will ultimately integrate this import/parse solution into that flow. Any thoughts appreciated.

    SubnetAddress=10.16.224.128
    SubnetName=FOS-Salzburg Kasernenstraase #823 VPN
    SubnetMask=255.255.255.128
    NetworkAddress=10.0.0.0
    LocationID=895
    ...
    SubnetDescription=CHINA-BEIJING-C025 F2 XIANGJIANG CNVASX1706
    DHCPOptionTemplate=
    DHCPPolicyTemplate=
    PrimaryInterface=No
    AllowDHCPClientsModifyDynamicObjectResourceRecords=Same As in Global Policies

Login failed. The login is from an untrusted domain and cannot be used with Windows authentication.

Posted: 20 Apr 2012 03:24 AM PDT

Windows 2003 R2 Enterprise SP2 x64, SQL Server 2008 SP2.

At first, I thought it was an AD issue. The server guys checked and said there were no errors in the domain controllers. I ran some reports for network errors/issues and didn't see any. The error lasted for 5-10 minutes before correcting itself.

I saw this error message before the authentication one: "The client was unable to reuse a session with SPID ---, which had been reset for connection pooling. The failure ID is 29. This error may have been caused by an earlier operation failing. Check the error logs for failed operations immediately before this error message."

Any ideas? I keep googling and not finding definitive causes and resolutions.

Implicit conversion using COALESCE or CASE

Posted: 03 Jul 2013 06:07 AM PDT

Hi, I've discovered that I always seem to get an implicit conversion to INT when using this syntax:

    SELECT ... ORDER BY COALESCE(SectionOrder, 99);

(Column [SectionOrder] is defined as SMALLINT.) The execution plan's COMPUTE SCALAR operation shows this syntax being changed to a CASE statement:

    CASE WHEN SectionOrder IS NOT NULL THEN convert_implicit(int,SectionOrder,0) ELSE (99) END

Is this conversion a "feature" of SQL Server? Any info is appreciated.

~ Jeff

Find out which value in table is in error

Posted: 03 Jul 2013 07:11 AM PDT

I know which column is erroring out. It is varchar and I am trying to convert it to decimal. But how do I find out which value is erroring out? It is a fairly big table. The error: "Error converting data type varchar to numeric."
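A quick sketch for locating the offending rows (table and column names are placeholders; note that ISNUMERIC also accepts things like '$' and 'e', so treat the result as a shortlist to inspect rather than a definitive answer):

    SELECT TheColumn, COUNT(*) AS occurrences
    FROM dbo.BigTable
    WHERE ISNUMERIC(TheColumn) = 0
      AND TheColumn IS NOT NULL
      AND TheColumn <> ''
    GROUP BY TheColumn;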

Limit in date picker control in SSRS

Posted: 13 Jul 2010 11:11 PM PDT

I want to limit the dates selectable in a parameter to the results of a SQL query/SSRS dataset. I mean that I supply the limits, and the calendar control then only shows dates between those two dates.

system sp permission

Posted: 03 Jul 2013 01:18 AM PDT

Quick question here: where can I find out the privileges needed to access the different system views/procedures, for example sys.sql_modules? My code works on my laptop but fails on the server; I am guessing the application pool doesn't have enough privilege to access them. Thanks.

Query Help

Posted: 03 Jul 2013 01:24 AM PDT

Hi All, I need help with one piece of logic to implement in my main logic, so please help me:

    create table #x1(SchoolID int, SchoolName Varchar(50))
    INSERT INTO #x1 VALUES (101,'CITY CONNECTIONS-WEST')
    INSERT INTO #x1 VALUES (102,'City Connections - South')

I need spaces around the hyphen where SchoolName has no space around it.

    SchoolID  SchoolName
    101       CITY CONNECTIONS-WEST
    102       City Connections - South

The expected output is as below:

    SchoolID  SchoolName
    101       CITY CONNECTIONS - WEST
    102       City Connections - South

Please help me to do this. Thanks for your help.
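A double-REPLACE sketch handles both cases: first collapse any existing ' - ' back to '-', then expand every '-' to ' - ', so rows that were already correct come out unchanged.

    UPDATE #x1
    SET SchoolName = REPLACE(REPLACE(SchoolName, ' - ', '-'), '-', ' - ');

    SELECT SchoolID, SchoolName FROM #x1;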

monitoring rant

Posted: 03 Jul 2013 01:37 AM PDT

I have spent the last month with a monitoring product, which shall remain nameless, that advertises its applicability to network monitoring, application monitoring and database monitoring. I am not qualified to speak about network or application monitors, but want to ask the list whether my database expectations are out of line.

From what I have learned from using the product, its main feature is the ability to poll servers throughout the network and collect WMI bits from each server. What surprised me was that if you want any DB server specifics, you must write your own queries to get to them! I had been using Glenn Berry's scripts, which are a godsend, dumping them to a table to collect and persist monitoring results, but my expectation is that for this kind of money the DB and resource specific information would be built into the product, not listed as a 'feature' as in "yes, you can establish your own metrics and counters through the product". Being cast into a community of other customers and community templates is not my idea of support either.

Further, there is zero in the way of AI, or remedies or suggestions or indeed details pertaining to counters that have crossed thresholds. All I have been able to get out of it is "Locks are critical" or "Scanning is critical", but not the specifics of what is being locked and how, or what resource is consuming CPU, memory or pipe. Drilling down to the critical message gets you to a page with a giant gauge, almost militating against getting related details in one place... it almost makes me want to attach sparklines to Mr. Berry's suite of queries.

Shouldn't a for-profit monitoring tool do this stuff for you, besides just collecting WMI and centralizing it?

Thanks for bearing with me.

PS... paradoxically, I just noticed the Redgate message at the top of the forum and feel compelled to say my rant is NOT directed at Redgate.

SQL 2012 SE downgrade to SQL 2008 R2 EE

Posted: 03 Jul 2013 01:22 AM PDT

Hello everyone, is it possible to downgrade the SQL Server 2012 Standard Edition software to SQL Server 2008 R2 EE if I purchase a SQL Server 2012 Standard Edition license? I know this option is available for Windows Server. Thanks

SSIS Solution File Issue: Generate a new .database file and ask to overwrite?

Posted: 29 Mar 2013 02:18 AM PDT

Hello All, I have a problem with my SSIS project solution file. I would appreciate it if someone could help me out.

I'm working on our Team Foundation Server. I mapped the latest project to my local path, but every time I open the project solution file it asks: "[local mapping path]\projectname11.database already exists. Are you sure you want to overwrite it?" YES/NO?

I have [projectname].database in my mapping folder; projectname11.database gets generated automatically when I open the .sln file in VS. If I choose to overwrite it, it shows: "The item [local mapping path]\projectname11.database could not be found in your workspace" and "This file is under source code control. It must be checked out before it can be modified." If I choose not to overwrite it: "The '[local mapping path]\projectname11.database' file already exists."

This is really irritating. Please let me know if you have any thoughts. Thank you in advance.

Best,
Jing

Managing Registered Servers

Posted: 02 Jul 2013 08:51 PM PDT

Hi, I connect to SQL servers which are all hosted in a virtual environment. This virtual environment is not on our local network, so we have to connect via IP address and port number. I've set these servers up in my Registered Servers list in Management Studio.

Once a month, this virtual environment is refreshed with the latest updates, but during the process the IP addresses of these SQL servers change. I then have to go and update every registered server I created to point to the new IP address. There are 17 of them in total. The IP addresses come from a view in another SQL database.

The question is... is there a quick way I can update these IP addresses from that view? Maybe a process I can use to create a registered servers file and import it again?

It's not a serious issue, but it's a tedious process to go through every server and fix it up manually. If there's any way I can automate this, it would make my life much easier. Any help would be appreciated. Thanks!

Online Indexing Being Blocked

Posted: 02 Jul 2013 08:26 PM PDT

Hi All, we're running SQL Server 2008 R2 Enterprise Edition and the environment runs 24/7.

We had a normal indexing job, but it was obviously being blocked by user processes. I changed to online indexing, but I'm still having this issue, which I don't understand. Any assistance is greatly appreciated.

Thanks
Denesh

Page Splits exceeds a predefined value

Posted: 02 Jul 2013 11:14 PM PDT

Hi, how can I get the page splits value? Please provide a SQL query to find the page splits value.
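The counter is exposed through sys.dm_os_performance_counters; note that it is cumulative since the instance started, so to get a rate you sample it twice and take the difference.

    SELECT object_name, counter_name, cntr_value
    FROM sys.dm_os_performance_counters
    WHERE counter_name = 'Page Splits/sec'
      AND object_name LIKE '%Access Methods%';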

Multiple Primary key in table?

Posted: 02 Jul 2013 08:55 PM PDT

Hi, I have a doubt about multiple primary keys on a single table. Can a table support multiple primary keys? In that case, one clustered index is created by default; for the other primary key, what index name would be created?

Thanks
ananda

SQL SERVER 2008 64 bit installation on Windows 2008 - Error code 32

Posted: 26 Jun 2013 07:50 AM PDT

Any advice or ideas on how to fix this? I am in the middle of installing it and it is stuck on error code 32 with sqlEngineConfigAction_install_confignonrc_Cpu64. I have admin rights. It is a brand new box.

Datafile percentages

Posted: 02 Jul 2013 08:03 PM PDT

Does anyone have any handy T-SQL for datafile percentages? What I am looking for is the percentage of each datafile's size in relation to the others, for example (total size 4 GB): Primary 10%, Log 15%, Data 1 20%, Data 2 20%, Index 35%. Many thanks, J.
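A sketch per database (size in sys.database_files is expressed in 8 KB pages, hence the * 8 / 1024 to get MB):

    SELECT  name                                                    AS file_name,
            size * 8 / 1024                                         AS size_mb,
            CAST(100.0 * size / SUM(size) OVER () AS decimal(5,2))  AS pct_of_total
    FROM sys.database_files
    ORDER BY pct_of_total DESC;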
