Friday, June 7, 2013

[SQL Server 2008 issues] How to copy new records and updated records from a database table in local server to database table of remote server



How to copy new records and updated records from a database table in local server to database table of remote server

Posted: 06 Jun 2013 04:25 PM PDT

Hi, I have a local server and a remote server. On the local server I have a database table that is inserted into and updated regularly. There is no primary key on the table. Once a week I want to copy the differential data into the corresponding table on the remote server. Both table structures are the same. Please help me find a solution. Thanks, Chandran.
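A minimal sketch of one approach, assuming a linked server named RemoteSrv and placeholder database/table names: push only the rows that do not already exist on the remote side. Without a primary key or business key, updated rows cannot be matched, so they arrive as additional rows rather than in-place updates, and EXCEPT also collapses genuine duplicate rows.

INSERT INTO RemoteSrv.RemoteDb.dbo.MyTable
SELECT *
FROM   LocalDb.dbo.MyTable
EXCEPT
SELECT *
FROM   RemoteSrv.RemoteDb.dbo.MyTable;   -- RemoteSrv, RemoteDb, MyTable are placeholders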

Calculate Page number

Posted: 06 Jun 2013 08:30 AM PDT

I have a table for which I need to return a dynamically calculated page number. The number of items on a page is not fixed, but I do know where each page break is, based on a column value being 0. The data looks something like this:

ID  SomeTextValue  SomeOtherId  SortOrder
1   Hello          98           1
9   World          4            2
10  GoodBye        5            3
11  ''             0            6   (this row is the page break)
12  It's a         98           1
13  great day      4            2

I then want a column with a page number of 1 for the first three rows and a page number of 2 for the last two rows. I don't actually need the page-break record returned in the dataset; I just need it to mark where the page breaks are. Right now I am using a loop to figure out the page numbers, but I would like to speed this up if possible. Thanks for any suggestions.
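A minimal sketch that avoids the loop, assuming ID increases in page order and that the break rows are the ones where SomeOtherId = 0 (the table name is a placeholder): count the break rows that precede each row.

SELECT t.ID, t.SomeTextValue, t.SomeOtherId, t.SortOrder,
       1 + (SELECT COUNT(*)
            FROM dbo.MyTable AS b
            WHERE b.SomeOtherId = 0
              AND b.ID < t.ID) AS PageNumber    -- page breaks seen so far
FROM dbo.MyTable AS t
WHERE t.SomeOtherId <> 0                        -- drop the break rows themselves
ORDER BY t.ID;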

read registry w/o sysadmin, is it possible?

Posted: 05 Jan 2012 11:34 PM PST

Hi, in my SQL Server 2008 EE SP1 environment, one of the applications needs to use xp_instance_regread to read registry values (values like TCPPort). I couldn't find documentation, but by experimenting I saw that only the sysadmin role can use this xp. To your knowledge, is there any way to grant "read registry" without making the login sysadmin (something like using a certificate or credential)? I just want to grant "read registry"; is this possible? Thanks.

How can we find NULL values in indexed columns across all tables in SQL Server?

Posted: 06 Jun 2013 07:16 AM PDT

Hi, please help me find the NULL values in indexed columns across all tables in SQL Server. I need a SQL query to find all the indexed columns that contain NULL values. Thanks in advance.
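A minimal sketch of the first half of the problem: listing every indexed column that allows NULLs. Checking which of those columns actually contain NULL values would need dynamic SQL built from this result set.

SELECT s.name AS schema_name, t.name AS table_name,
       i.name AS index_name, c.name AS column_name
FROM sys.indexes AS i
JOIN sys.index_columns AS ic
     ON ic.object_id = i.object_id AND ic.index_id = i.index_id
JOIN sys.columns AS c
     ON c.object_id = ic.object_id AND c.column_id = ic.column_id
JOIN sys.tables AS t ON t.object_id = i.object_id
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE c.is_nullable = 1
ORDER BY schema_name, table_name, index_name;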

SSIS Derived Columns

Posted: 06 Jun 2013 06:49 AM PDT

I am new to SSIS. There is a data flow task with a derived column. I want the derived column to look only at the characters after a hyphen and then remove any leading or trailing spaces. For example, from: Some Descriptive Text - characters, I would like it to return just: characters (with no leading or trailing spaces). I'm not sure how to accomplish this in an SSIS derived column expression. Any help will be greatly appreciated. Thank you.
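The same logic sketched in T-SQL for clarity; in an SSIS derived column expression the equivalent functions are FINDSTRING, SUBSTRING, LEN, LTRIM and RTRIM.

DECLARE @s VARCHAR(100) = 'Some Descriptive Text - characters';
SELECT LTRIM(RTRIM(SUBSTRING(@s, CHARINDEX('-', @s) + 1, LEN(@s)))) AS AfterHyphen;
-- returns 'characters'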

Looking for a good article with examples of encryption at the database, table and field level in SQL Server 2008 R2, and also during the transmission of data between two different servers (e.g. Oracle and SQL Server)

Posted: 06 Jun 2013 07:30 AM PDT

Hi friends, I found some examples on the MSDN website regarding this, but I think I need more practice. If you know of any good articles, please let me know. Thanks.

Transaction replication has many undistributed commands, but tracer token arrives immediately.

Posted: 06 Jun 2013 06:36 AM PDT

Hi, I have SQL Server 2008 R2 with transactional replication. This replicated database normally performs very well, but today, after reinitializing, I saw a large command build-up in the distributor. However, when I insert a tracer token to measure latency, it arrives at the subscriber within 3 seconds. When I look inside the subscriber database and compare the data to the publisher, even the most frequently updated tables are up to date. Has anyone had a similar experience?

reporting services and analysis services

Posted: 06 Jun 2013 12:14 AM PDT

How do I know whether Reporting Services and Analysis Services are installed on my SQL Server?

NEWSEQUENTIALID() with EC2

Posted: 06 Jun 2013 02:14 AM PDT

Hello, I have a database running on an EC2 (Amazon AWS) instance. I have a table with a uniqueidentifier column as the primary key, and the clustered index is on that column. To create the uniqueidentifier I use NEWSEQUENTIALID(). This has been working great. However, I just noticed that new rows are no longer being added at the end; they're getting inserted in the middle, which is not good for a clustered index. After looking into it I found the reason: the underlying network interface has changed. When an EC2 instance gets rebooted it can get a new network interface, which affects NEWSEQUENTIALID(). The reason the primary key is a uniqueidentifier rather than an int with IDENTITY is that there are multiple databases with the same structure that will need to be copied into a master database. So I could create my own version of NEWSEQUENTIALID(), or I could take the clustered index off the primary key, make it nonclustered, and put the clustered index on something else; there's a column with the timestamp of when the row was created, and that may be where I'd put the clustered index. Has anyone run into something similar? Thoughts? Suggestions? Thanks!
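A minimal sketch of the second option, with placeholder names: keep the uniqueidentifier primary key but make it nonclustered, and cluster on the creation timestamp instead.

ALTER TABLE dbo.MyTable DROP CONSTRAINT PK_MyTable;             -- the existing clustered PK
ALTER TABLE dbo.MyTable
    ADD CONSTRAINT PK_MyTable PRIMARY KEY NONCLUSTERED (Id);
CREATE CLUSTERED INDEX IX_MyTable_CreatedAt
    ON dbo.MyTable (CreatedAt);                                  -- need not be unique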

sql server transactional Replication

Posted: 06 Jun 2013 03:02 AM PDT

We are running SQL Server transactional replication and selected a few tables to be replicated. One of the tables has 3 triggers defined on it, and when we run the snapshot this table is not created, even though it is selected for replication and has a primary key. Any ideas? I am a bit desperate.

Migrating to Virtual Deployment - Long query times

Posted: 06 Jun 2013 02:25 AM PDT

We have a production physical server running SQL 2005 and Reporting Services. The server is about 6 years old (yikes) with dual dual-core procs and 4 GB RAM. It's been doing the job perfectly fine, but we need to move on from EOL hardware. The database is 6 GB in size. I've created a virtual machine in Hyper-V with Windows 2008 R2, 4 procs and 8 GB RAM, moved up to SQL 2008 R2 Standard (had some CALs from a previous project), and migrated the data over. As far as functionality, I'm not really seeing any issues (nor is the testing user group complaining of anything); HOWEVER, the reports in Reporting Services are extremely slow. Just as an example, one of the reports in the old environment is instantaneous. In the virtual environment it takes 20 seconds to render, and that time is very consistent. In the virtual environment I have the data on one LUN, the system databases on another LUN and SQL/OS on a third LUN. Current production is actually all on a single LUN attached to the same SAN. I know it's something related to the VM, as I created the same environment on my local machine and everything is instant. I'm confused, as I've read over the years about the advances in running SQL in a virtual environment. I'm pulling my hair out on this. I even isolated the virtual machine on a host (32 procs, 256 GB RAM) with nothing else on it. Same results. Other information I gathered on the VM while running the report:
  • Disk Queue Length (on data LUN) - 0.2
  • Disk Reads/Sec (on data LUN) - 48 avg (136 max)
  • Disk Writes/Sec (on data LUN) - 48 avg (148 max)
  • Processor Queue Length - 2 (CPU max during the report is 20%, and I'm questioning why it's even that high)
When I run Profiler I see Reads of 691 on the production box and Reads of 1,572,147 on the VM, for the same query. What else can I check to pinpoint the bottleneck?

Issue while joining two tables with no unique columns

Posted: 05 Jun 2013 11:11 PM PDT

I have two tables with no common columns:

Graph_Range
Start_Range  End_Range
0.10         0.20
0.20         0.30
0.30         0.40

Graph_data
Asset_Value  factor  Case
500          0.12    1
270          0.13    2
300          0.27    3
240          0.23    4
200          0.13    5
100          0.12    6

I need results like this:

Lower Limit  Upper Limit  Sum_Asset
0.10         0.20         1,070   (i.e. 500 + 270 + 200 + 100)
0.20         0.30         540     (i.e. 300 + 240)
0.30         0.40         0

i.e. Sum_Asset should give the sum of all asset values whose factor falls between the start and end range in the other table's two columns. I am not sure how to get this result set, as UNION, CROSS JOIN and INNER JOIN are not working; I am either getting repeated entries or error messages. Please help. I have used this query:

SELECT A.Start_Range, A.End_Range, SUM(B.asset) AS Sum_Asset
FROM dbo.Graph_Range AS A, dbo.Graph_Data AS B
GROUP BY A.Start_Range, A.End_Range, B.Factor
HAVING (B.Factor < A.End_Range) AND (B.Factor > A.Start_Range)
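A minimal sketch of a range join that matches the expected output above; whether a factor that sits exactly on a boundary (e.g. 0.20) belongs to the lower or upper bucket may need adjusting.

SELECT r.Start_Range AS [Lower Limit],
       r.End_Range   AS [Upper Limit],
       ISNULL(SUM(d.Asset_Value), 0) AS Sum_Asset
FROM dbo.Graph_Range AS r
LEFT JOIN dbo.Graph_Data AS d
       ON d.factor >= r.Start_Range
      AND d.factor <  r.End_Range
GROUP BY r.Start_Range, r.End_Range
ORDER BY r.Start_Range;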

Query help

Posted: 06 Jun 2013 12:58 AM PDT

Hi guys, I need help with a query please. I have a table that looks like this:

id  name  aim      startdate
1   kev   maths    31/10/2010
1   kev   english  01/11/2010
2   trev  science  01/09/2010
2   hev   maths    31/10/2010
3   nev   history  20/3/2010
4   mev   maths    12/12/2010
5   brad  english  31/5/2010
5   brad  maths    31/10/2010

I need to find out which people have an aim with a start date of 31/10/2010 as the earliest start date across all their aims. Result:

id  name  aim    startdate
1   kev   maths  31/10/2010
2   hev   maths  31/10/2010

Note how brad is not in the result, as he has english on 31/5/2010 as the earliest start date for one of his aims. Please help!
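A minimal sketch, assuming the table is called dbo.Aims and that a person is identified by name: find each person's earliest start date and keep only those whose earliest date is 31/10/2010.

SELECT t.id, t.name, t.aim, t.startdate
FROM dbo.Aims AS t
JOIN (SELECT name, MIN(startdate) AS first_start
      FROM dbo.Aims
      GROUP BY name
      HAVING MIN(startdate) = '20101031') AS m    -- 31/10/2010
  ON m.name = t.name
 AND m.first_start = t.startdate;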

Please help, I have a big problem: my database is empty and I need to go back!

Posted: 05 Jun 2013 09:11 PM PDT

Hi everyone, this morning I started my database and everything inside is gone; all my tables are empty and I have no backup. Is there any way I can restore it as it was the day before? Please help.

Thursday, June 6, 2013

[SQL Server] Inline Table Valued Function



Inline Table Valued Function

Posted: 05 Jun 2013 10:43 PM PDT

Hi, I would like some help with my first inline table-valued function. I have to create one that builds a table containing two columns: one with an integer value derived through a loop (e.g. a list of integers from one to four), and the other with some text based on the integer in the first column, e.g. 'odd' next to the odd integers and 'even' next to the even ones. Would anyone be able to point me in the right direction? I have already come across a problem whereby declaring a variable within the function is not allowed. My initial approach was to generate the numbers and perform some check to generate the text based on each number. Would you be able to point me in the right direction and guide my approach to the problem? Much appreciated, Soulchyld
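A minimal sketch of an inline table-valued function; since an inline function is a single RETURN SELECT with no loops or variable declarations, the numbers come from a row constructor and the odd/even label from a CASE expression.

CREATE FUNCTION dbo.fn_NumberLabels (@n INT)
RETURNS TABLE
AS
RETURN
    SELECT v.n,
           CASE WHEN v.n % 2 = 0 THEN 'even' ELSE 'odd' END AS label
    FROM (VALUES (1), (2), (3), (4)) AS v(n)   -- extend, or swap in a numbers table, for larger ranges
    WHERE v.n <= @n;
GO
-- usage: SELECT * FROM dbo.fn_NumberLabels(4);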

Importing/formatting MYSQL time date into SQL as char

Posted: 06 Jun 2013 02:37 AM PDT

I have created a view in MySQL which is imported into SQL Server. The originating MySQL table column definition I have an issue with is: `duration` int(11) NOT NULL default '0', and I cannot change this. The data field looks like this: '27'. One view definition I have created uses SELECT sec_to_time(`asteriskcdrdb`.`cdr`.`duration`) AS `duration`, which converts the value to the time '00:00:27', which is the format I need. But when I go to import this into SQL Server I get: Error converting data type DBTYPE_DBTIME to time. So instead I use cast(`asteriskcdrdb`.`cdr`.`duration` as char(8) charset utf8) AS `duration`, which works but does not properly format the field. The query I use to bring the data over is: SELECT * FROM OPENQUERY(PHONEDB,'SELECT calls_key, calldate, time, duration, duration_in_decimal, ext, trunk, calltype, io, areacode, phonenum FROM asteriskcdrdb.Conshy_Calls'). I need to get DURATION over to SQL Server in a char(8) format as 00:00:00. I have played with all sorts of MySQL functions but I can't seem to convert the time format to char. Thanks
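A minimal sketch on the MySQL side of the view: format the seconds as an hh:mm:ss string so the linked server sees plain text instead of a TIME value.

SELECT TIME_FORMAT(SEC_TO_TIME(`duration`), '%H:%i:%s') AS `duration`
FROM `asteriskcdrdb`.`cdr`;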

Partition Function SQL Server?

Posted: 06 Jun 2013 04:24 AM PDT

Hi all experts, my question is: why do we need a partition function? What would have happened if Microsoft had not included the partition function feature in SQL Server?
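For reference, a minimal sketch of what the feature provides (names and boundary values are placeholders): the partition function maps values to partitions, the scheme maps those partitions to filegroups, and a large table built on the scheme can then be managed (switched, rebuilt, archived) one slice at a time instead of as a single object.

CREATE PARTITION FUNCTION pf_OrderYear (DATE)
    AS RANGE RIGHT FOR VALUES ('2012-01-01', '2013-01-01');    -- three partitions
CREATE PARTITION SCHEME ps_OrderYear
    AS PARTITION pf_OrderYear ALL TO ([PRIMARY]);
-- CREATE TABLE dbo.Orders (OrderID INT, OrderDate DATE, ...) ON ps_OrderYear (OrderDate);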

Theoretical size of an index

Posted: 06 Jun 2013 04:09 AM PDT

I have an index occupying almost 77 GB of space on the drive, and it is 70 percent fragmented. I am trying to calculate the theoretical size of this index. The index is on a bigint field, with a datetime field in the included column list. The table has 3.3 billion rows.
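A minimal sketch of one way to estimate it from the server itself (the table name and index_id are placeholders): multiply the leaf-level row count by the average record size reported by the DMV. On a table this size, prefer 'SAMPLED' over 'DETAILED', and note the figure excludes non-leaf levels and page overhead.

SELECT index_level, record_count, avg_record_size_in_bytes,
       (record_count * avg_record_size_in_bytes) / 1024.0 / 1024 / 1024 AS est_leaf_gb
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.MyTable'), 2, NULL, 'SAMPLED')
WHERE index_level = 0;   -- leaf level only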

To find unique constraint in a table

Posted: 05 Jun 2013 10:12 PM PDT

We have a table for which we want to know whether any of the columns has a unique constraint defined. Alternatively, if we could get the CREATE TABLE statement for that table that would work too, but not from SQL Server Management Studio - it has to come from another application, so it has to be done through queries. Regards...
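A minimal sketch (the table name is a placeholder): unique constraints are exposed as unique indexes, so the catalog views can answer the question directly.

SELECT i.name AS index_name, i.is_unique_constraint, i.is_primary_key,
       c.name AS column_name
FROM sys.indexes AS i
JOIN sys.index_columns AS ic
     ON ic.object_id = i.object_id AND ic.index_id = i.index_id
JOIN sys.columns AS c
     ON c.object_id = ic.object_id AND c.column_id = ic.column_id
WHERE i.object_id = OBJECT_ID('dbo.MyTable')
  AND i.is_unique = 1;     -- covers primary keys, unique constraints and unique indexes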

[how to] MySQL - Export two columns from large table without causing a lock



MySQL - Export two columns from large table without causing a lock

Posted: 06 Jun 2013 07:34 PM PDT

I am a sysadmin who doesn't do a ton of dba stuff. For a project, I have access to a production server--which I am not the sysadmin for--with an enormous 40,000,000 row >10 GB MySQL InnoDB table. I want to export two small columns, one of which is an INT(11), and the other of which is a VARCHAR(20) from that table to a CSV or .SQL file (either is fine, I'll write a crosswalk for the next step no problem).

We can call the columns ColumnA and ColumnB and the table SampleTable.

mysqldump is not the right tool for this because I can't specify the columns, and I don't need to export a massive table just to get two tiny columns.

I know I can do a SELECT INTO statement (either to create a new table with just the columns or to do a SELECT INTO OUTFILE to skip the intermediate step), but I am concerned that this will cause a table level lock on a production server. The table is InnoDB.

What's my best bet to avoid inconveniencing any live traffic on the server or locking anything?

Thanks
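A minimal sketch, with a placeholder file path: an ordinary SELECT on an InnoDB table runs as a consistent (MVCC) read and does not take a table lock, so writing it straight to a file mainly adds I/O load rather than blocking live traffic.

SELECT ColumnA, ColumnB
INTO OUTFILE '/tmp/sampletable_two_columns.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM SampleTable;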

Resources on database design and data integrity enforcement best practices?

Posted: 06 Jun 2013 07:13 PM PDT

I am a developer at a very small software company who has historically taken a very loose approach to database design and data integrity techniques. I'd like to turn that around and start using more in the way of keys, constraints, transactions, etc to prevent software defects and inconsistencies from causing data problems and also bring those causes to light more quickly. Are there any great/classic resources or books on best practices for database design and these sort of data integrity features, discussion of trade offs, etc?

How to "analyze" a SQL query?

Posted: 06 Jun 2013 08:45 PM PDT

I'm not sure if I used the right term in my question.

I am creating a prototype which allows users to type in a raw SQL query, which will query the backend DB. I also want to make sure that users only apply SQL to the tables that they are allowed to.

So when a user types in something like SELECT * FROM t1 WHERE id > 10, I feel like I need to

  • Analyze the query to make sure that the tables are ok - in this case "t1"
  • They cannot access any other admin-type tables. For example, I don't want them to access "information_schema (i.e. for Postgres)"

What do you think the best strategy would be for this?

FYI, I'm using NodeJS, so the primary programming language would be Javascript.

Would I need to "parse" the query to make sure that the tables being accessed by the query are ok?

FTP folder disappears after a while [migrated]

Posted: 06 Jun 2013 04:38 PM PDT

I'm deploying a C# MVC application with a couple of content folders to save my images in. I created them all under ~Content/Images/ and they work fine, but after a while, sometimes just 15 minutes, a couple of the folders disappear. Only ~Content/Images/Advert and ~Content/Images/Flags always remain; the third folder under ../Images/ always disappears, along with all of its subfolders and content.

I've tried adding the folders with FileZilla, giving them full rights, and adding images or files to them just to keep them, but nothing helps. Does anyone know how to fix this annoying problem?

Regards.

Running a DELETE Stored Proc but finding degradation in Performance in WHILE loop as iterations increase

Posted: 06 Jun 2013 05:37 PM PDT

Have a quick general question. I have a table I'm trying to purge. I'm deleting using a WHILE loop, with a WAITFOR DELAY of 50 ms between batches and 2,000 records per batch. The real issue is that as time progresses the number of records deleted per minute drops. Refer to the following:

Minute Number | Number of Records Deleted
            1 | 162,000
            2 | 116,000
            3 |  80,000
            4 |  72,000
            5 |  62,000
            6 |  38,000
            7 |  38,000
            8 |  34,000
            9 |  20,000

I'm wondering if there is something fundamental about using WHILE-loop batches for deleting records that causes the performance to degrade with each iteration of the loop. We have been manually watching the performance, stopping the proc as it begins to drop drastically around the 5th or 6th minute, and then restarting the purge again. We don't think it's a locking issue directly, because as we play with the batch size the performance always drops at around the 5 to 6 minute mark.
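One common cause is a DELETE TOP (n) pattern that has to scan past an ever-growing number of already-processed rows on every iteration. A minimal sketch of a keyset-driven alternative, with placeholder table, key and predicate: walk the clustered key in fixed ranges so each pass touches only its own slice.

DECLARE @LastId INT = 0, @MaxId INT, @BatchSize INT = 2000;
SELECT @MaxId = MAX(Id) FROM dbo.BigTable;

WHILE @LastId < @MaxId
BEGIN
    DELETE t
    FROM dbo.BigTable AS t
    WHERE t.Id >  @LastId
      AND t.Id <= @LastId + @BatchSize
      AND t.ExpiryDate < GETDATE();       -- whatever the purge predicate is

    SET @LastId = @LastId + @BatchSize;
    WAITFOR DELAY '00:00:00.050';
END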

New to mySQL Relationships

Posted: 06 Jun 2013 03:47 PM PDT

I'm new to MySQL relationships and I'm wondering if you can help me out.

This is what I want to do:

Users Table

user_id

user_name

pass_word

permission_id

Permissions Table

p_id

permission_name

permission_type

I want to create a relationship between p_id (permissions table) & permission_id (user table) so when I query the user table it also brings through the corresponding permission name & type?

Is this possible or am I getting it all wrong?

Should I just use joins?

Thanks,

  • WebDevB
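A minimal sketch of the relationship and the query described in the post: a foreign key (on InnoDB tables) enforces that permission_id always points at a real permission, and a JOIN brings the permission name and type back with each user.

ALTER TABLE Users
  ADD CONSTRAINT fk_users_permission
  FOREIGN KEY (permission_id) REFERENCES Permissions (p_id);

SELECT u.user_id, u.user_name, p.permission_name, p.permission_type
FROM Users AS u
JOIN Permissions AS p ON p.p_id = u.permission_id;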

MySQL dynamically optimize innodb tables without "file per table" setting

Posted: 06 Jun 2013 03:05 PM PDT

We are getting a "too many connections" error once a week, at the same time a MySQL stored procedure runs. The procedure runs "optimize table" on hundreds of tables and takes nearly ten hours to finish, taking websites offline. I am thinking that because tables are locked, connections get backed up, or something like that. So I believe the procedure is the cause of this error, and I'm looking at ways to refactor it to be smarter about which tables it optimizes and/or to chop the task up so it runs over many days.

This procedure filters which tables and databases to run the optimize command on only by using LIKE against the table or database name. So, for every matching table, "optimize table {tablename}" is run. These tables are of both the MyISAM and InnoDB engine types.

One refactoring approach would be to optimize a table only if it really needs it. If a fragmented table is the signal that it needs optimizing, then finding out whether it is fragmented is not too difficult, provided the table is MyISAM, or InnoDB with innodb_file_per_table, where you can do the math on the information_schema fields data_free, data_length and index_length.

Unfortunately the MySQL server is not using the "file per table" setting. This makes the data_free field seemingly useless for this task, because every InnoDB table will have the same value, i.e. the free space in the shared ibdata file.

Maybe if I can understand answers to these questions I will better understand the solution i need.

  • Q 1: How much fragmentation is allowed before it affects performance?
  • Q 2: Should InnoDB tables even be optimized (some say yes others say no)?
  • Q 3: How do you test for InnoDB fragmentation if the server does not use the "file per table" option?
  • Q 4: Is fragmentation the only reason to run "optimize table"?
  • Q 5: If I do need to run "optimize table" on an InnoDB table should I run ALTER TABLE mydb.mytable ENGINE=InnoDB; and not ANALYZE TABLE
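Coming back to the data_free idea above, a minimal sketch of the filter for the tables where that field is still meaningful, i.e. the MyISAM ones; the thresholds are assumptions to tune.

SELECT CONCAT('OPTIMIZE TABLE `', table_schema, '`.`', table_name, '`;') AS stmt
FROM information_schema.TABLES
WHERE engine = 'MyISAM'
  AND data_free > 50 * 1024 * 1024                       -- more than 50 MB reclaimable
  AND data_free > 0.1 * (data_length + index_length);    -- and more than 10% of the table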

Best high availability solution to avoid maintenance downtime?

Posted: 06 Jun 2013 12:10 PM PDT

We currently only have one production SQL Server 2008 server. Anytime we need to perform maintenance or install patches, it causes downtime as the server reboots. In addition, if the server ever fails, that will obviously cause a great deal of downtime.

What is the best solution to avoid this maintenance downtime? We are looking for a solution that will automatically switch to a failover server so we can apply rolling updates.

Running pt-table-checksum on a table in different databases

Posted: 06 Jun 2013 11:28 AM PDT

I'm using filters for replication:

replicate-wild-do-table = test_slave.%  replicate-rewrite-db = test->test_slave  

Is there an option to specify different DB name on slave for "pt-table-checksum"

pt-table-checksum --host=localhost --databases=test --max-load=Threads_connected:25 --no-check-replication-filters  

EDIT 1:

mysql> SELECT COUNT(*) FROM information_schema.TABLES WHERE TABLE_SCHEMA = 'test_slave';
+----------+
| COUNT(*) |
+----------+
|      145 |
+----------+
1 row in set (0.00 sec)

mysql> SELECT COUNT(*) FROM percona.checksums;
+----------+
| COUNT(*) |
+----------+
|       40 |
+----------+
1 row in set (0.00 sec)

EDIT 2:

# ls -l *.frm | wc -l
145
# ls -l | awk '{print $3,$4}' | uniq
mysql mysql

It's also complaining that two tables have no indexes and are oversized. It looks like it's just skipping some tables?

Regards

Query should yield result but doesn't

Posted: 06 Jun 2013 11:24 AM PDT

I need to get data from a remote SQL 2000 server and store it in a new MySQL server. I am using the FreeTDS module for PHP 5.3 to connect to the SQL 2000 server. I can connect without problems.

Here's my issue; one particular table isn't yielding any result to this query:

SELECT * FROM Item  

But this query is successful when I test it SQL Query Analyzer.

I do get results (also remote) when I change it to:

SELECT * FROM Brands  

The Item table holds over 500K records and I'm thinking that might have something to do with it. Could that be the issue, and how can I work around it? I have tried this without result:

SELECT TOP 10 * FROM Item  

This is the php code to connect:

try {
    $db = new PDO('odbc:Driver=FreeTDS; Server=xxx.xxx.xxx.xxx; Port=1433; Database=xxxxx; UID=xxxxx; PWD=xxxxxx;');
} catch(PDOException $exception) {
    die("Unable to open database.<br>Error message:<br><br>$exception.");
}

And this is the select code:

$query = "SELECT * FROM Item";
$statement = $db->prepare($query);
$statement->execute();
$data = $statement->fetchAll(PDO::FETCH_ASSOC);

echo '<pre>';
var_dump($data);
echo '</pre>';

Which returns an empty array.

looking for a postgresql load emulation client

Posted: 06 Jun 2013 08:48 PM PDT

Is there a load emulation client for PostgreSQL like mysqlslap? And is there a good monitoring tool for a PostgreSQL database server?

Thanks

pg_dump 9.2.x command does not work with pg_dump 9.2.3

Posted: 06 Jun 2013 09:03 PM PDT

This is how I have always created backups of my databases with PostgreSQL 9.2.x:

pg_dump -Fc -Z 6 -U postgres mydatabase > 2013-xx-xx_xxxxx.db  

However, on a machine running PostgreSQL 9.2.3 I always get the error "too many command line arguments", starting with "-Z". If I remove -Z it complains about "-U". What is wrong?


Okay, there is definitely something wrong. I removed argument after argument until I was just running pg_dump. Then I added the arguments back one by one and it worked. So I opened another command line window and tried the same: first "too many arguments", then after calling plain "pg_dump" and adding further arguments it worked...

Why InnoDB ALTER TABLE is so slow?

Posted: 06 Jun 2013 04:11 PM PDT

The machine has a 4-core hyper-threaded Intel i7 670 @ 2.8 GHz and 16 GB RAM, 8 GB of which is the InnoDB buffer pool. The database resides on an Intel SSD. The DBMS is MariaDB 5.5.30 x64 running under Windows Server 2008 R2 x64. The problem is that ALTER TABLE still runs too damn slow, i.e. converting a 100M InnoDB table to the COMPRESSED row format takes about two minutes. During the operation the CPU isn't even running at full speed, showing 10-20% load. More than half of the InnoDB buffer pool is free. SSD active time is roughly 5% or lower. So what could be the bottleneck this time? The InnoDB setup is available here.

Backup not creating a new file

Posted: 06 Jun 2013 03:35 PM PDT

I run a daily backup of my database through command prompt:

c:\sqlcmd -H localhost -Q "BACKUP DATABASE test TO DISK='c:\test.bak'"  

But each new backup replaces the previous backup. Please tell me what to change so that all backups are kept on disk.
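A minimal sketch (the path is a placeholder): build the file name from the current date inside the script, so each run writes to a new file; the script can then be run from sqlcmd with -i instead of -Q.

DECLARE @file NVARCHAR(260);
SET @file = N'c:\backups\test_' + CONVERT(CHAR(8), GETDATE(), 112) + N'.bak';
BACKUP DATABASE test TO DISK = @file;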

Does SQL Server have a reference partitioning equivalent?

Posted: 06 Jun 2013 05:40 PM PDT

Oracle 11g has introduced a new feature that is called Reference Partition.

Using reference partitioning, a child table can inherit the partitioning characteristics from a parent table.

http://www.orafaq.com/wiki/Reference_partitioning

Is there an equivalent feature in SQL Server?

Database Mail sending functionality not working on local system

Posted: 06 Jun 2013 03:28 PM PDT

I am using the Database Mail functionality to send mail from a SQL Server 2008 database via the following stored procedure call:

EXEC sp_send_dbmail @profile_name='MyProfile', @recipients='abc@companyname.com', @subject='Test message',@body='Congrats Database Mail Received By you Successfully.'   

I have tried it with my Gmail account profile on my local system and it works properly, but not with my company or Outlook profile.

Error Message:

The mail could not be sent to the recipients because of the mail server failure. (Sending Mail using Account 5 . Exception Message: Could not connect to mail server. (A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond abc.j.i.ooo:pp). )  

What could be the problem?

Thanks

SQL Server Many-to-One replication

Posted: 06 Jun 2013 01:58 PM PDT

I have 8 individual SQL Server 2008 R2 machines, each hosting 1 database. Each database has an identical table structure and schema, and entirely unique data.

I would like to establish a reporting server (may be 2008 or 2012), that consolidates the rows from selected tables across the 8 source servers into a single instance of those tables on the reporting server. This is one-way replication (no changes will be made to the reporting server). I will need to replicate changes from the source databases with relatively low latency (say 20-30 seconds).

Furthermore, I'd like to find methods to achieve this with as little impact to the source servers as is practical. 3rd-party agents, triggers, or schema mods to those servers are difficult in my environment.

My questions:

  • What are promising architectures and technologies for achieving this goal?
  • I've looked at SQL Server Merge Replication, but I am concerned about latency. Is this an appropriate technology for this goal?
  • Are there many-to-one architectures for transactional replication?
  • Should I be looking at 1-to-1 replication into 8 databases on my reporting server, followed by some custom merge function (a 2-step replication)?

Thanks, John

Cannot Delete FileStream Group from Sql Server 2008

Posted: 06 Jun 2013 11:46 AM PDT

We have migrated all our varbinary(max) column data to Azure Blob Storage, so we want to remove the old filestream columns that remain in our SQL 2008 database, along with the filestream filegroup. But when we try to, we get the error:

Msg 5042, Level 16, State 11, Line 2  The filegroup 'FileStreamGroup' cannot be removed because it is not empty.  

However when we run this:

exec sp_helpfilegroup 'FileStreamGroup'  

It is returning this:

groupname           groupid    filecount  FileStreamGroup     2          0  

So the file count is 0, but it won't let us remove the filegroup. Has anybody else had this problem, and how on earth do you completely remove the filestream from the database?

In addition this query:

select * from sys.tables t   join sys.data_spaces ds on t.filestream_data_space_id = ds.data_space_id   

Returns 0 rows so no tables are using any filestream data if I understand this correctly.

In Memory Database for high transaction and volume spikes on a website

Posted: 06 Jun 2013 02:13 PM PDT

I am looking at the IA for a hypothetical application scenario, for example:

  • A high demand application such as an online ticketing website where there can be demand for 100,000 tickets to be purchased per minute. At the same time the number of tickets needs to be tracked accurately so that they are not oversold (so I'm thinking to maintain the ticket count in memory).

I am thinking the less time critical tasks can be queued or deferred, ie email confirmation, and then write everything to the physical DB after the ticket purchase transaction is complete.

The only risk with in memory I believe is if the software/hardware fails, but I believe there may be some clustered options with some IMDB providers so that they are failsafe.

I have considered horizontal scaling/sharding with regular RDBMS but I am worried about the costs. I'd love to hear some thoughts if anybody has dealt with similar scenarios and if they used IMDB or something else?

Can I add a unique constraint that ignores existing violations?

Posted: 06 Jun 2013 02:37 PM PDT

I have a table which currently has duplicate values in a column.

I cannot remove these erroneous duplicates but I would like to prevent additional non-unique values from being added.

Can I create a UNIQUE that doesn't check for existing compliance?

I have tried using NOCHECK but was unsuccessful.

In this case I have a table which ties licensing information to "CompanyName"

EDIT: Having multiple rows with the same "CompanyName" is bad data, but we can't remove or update those duplicates at this time. One approach is to have the INSERTs use a stored procedure which will fail for duplicates... If it was possible to have SQL check the uniqueness on its own, that would be preferable.

This data is queried by company name. For the few existing duplicates this will mean that multiple rows are returned and displayed... While this is wrong, it's acceptable in our use case. The goal is to prevent it in the future. It seems to me from the comments that I have to do this logic in the stored procedures.
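One partial workaround on SQL Server 2008+ is a filtered unique index, sketched below with placeholder names and an assumed IDENTITY cutoff. Note the limitation: it enforces uniqueness only among the rows the filter covers, so a new row that duplicates one of the old, pre-cutoff company names would still be accepted; catching that case needs the stored-procedure check (or a trigger).

CREATE UNIQUE NONCLUSTERED INDEX UQ_Licensing_CompanyName_New
ON dbo.Licensing (CompanyName)
WHERE ID > 12345;    -- the table's MAX(ID) at the time the index is created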

Why don't databases create their own indexes automatically?

Posted: 06 Jun 2013 03:42 PM PDT

I would have thought that databases would know enough about what they encounter often and be able to respond to the demands they're placed under that they could decide to add indexes to highly requested data.

Shell: How to time a script running in SQLPlus and kill it after x amount of minutes?

Posted: 06 Jun 2013 05:05 PM PDT

I have a little tool which automatically runs a series of SQL scripts and outputs the results to .XLS when an appropriate request is made. However, some SQL scripts need to be run against a live database, and I don't want them to run for over 5 minutes.

Is there a way to - within my shell script - time the script's run time and kill it after 5 minutes?

Thanks in advance.

Would Mongo's ObjectID work well in InnoDB's clustered index?

Posted: 06 Jun 2013 05:50 PM PDT

Mongo's ObjectID is defined like this:

ObjectId is a 12-byte BSON type, constructed using:

a 4-byte value representing the seconds since the Unix epoch,

a 3-byte machine identifier,

a 2-byte process id,

and a 3-byte counter, starting with a random value.

Assuming the PRIMARY key was BINARY(12), will this work as well as an auto incremented INT? Does it count as sequential for InnoDB's clustered index? Would gaps in the timestamp be a problem?

How database administrators can see my requests to SQL Server?

Posted: 06 Jun 2013 02:39 PM PDT

I'm a SQL Server 2008 user. I have access to some tables. I need to request a few columns from a table as I usually do, but I need to do it repeatedly (for example once every 5 seconds), and the system administrators shouldn't notice my activity.

The result of the request is a table with approximately 100 rows. My query contains only a SELECT with a WHERE clause on an indexed column (it is light and executes very fast).

As far as I know, SELECT operations don't write to the transaction log. So if I only read the database, where does SQL Server keep a log of my SELECT actions? Can an administrator see my SELECT queries?

C2 audit, as I can see in properties, is disabled.

Are there any other ways to see my activity?

Thanks.

Deriving formulas for input/output

Posted: 06 Jun 2013 08:24 PM PDT

I'm currently enrolled in a DBS class and am having problem with an assignment. I've searched around and have been unable to understand what it is I'm meant to be doing with this derivation formula.

A plant file with TREE-GENUS as the key field includes records with the following TREE-GENUS values: Tsuga, Ficus , Arbutus, Quercus, Melaleuca, Tristaniopsis, Cornus, Sequoiadendron, Lithocarpus, Liriodendron, Pittosporum.
Suppose that records with these search field values are inserted into a random (heap) file with a maximum of 3 records per block. Derive a formula for the expected number of disk I/O to scan these records and to search for a particular record

I've been using some software that was given with the assignment, and it also asks what the maximum number of blocks allowed is, which is not given by the brief above. I'm not really sure how to derive a formula for this. I've assumed that because there are 3 records per block, 4 blocks are required, and that a random heap file uses 1 disk I/O per read or write.
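For what it's worth, a hedged worked version under those same assumptions (n records, r records per block):

blocks                         b = ceiling(n / r) = ceiling(11 / 3) = 4
scan all records               = b                 = 4 block reads
search for a particular record ~ b / 2             = 2 block reads on average (b reads if the record is absent)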

If this is a larger topic than is worth explaining a link to a reliable few pages is also helpful.

Is SQL Server 2012 Management Studio "Express" any different?

Posted: 06 Jun 2013 03:33 PM PDT

Are there any differences from the version of SSMS that comes as part of the SQL Server 2012 Enterprise Installer and the SQL Server 2012 Express Management Studio?

Proper Use of Lookup Tables

Posted: 06 Jun 2013 12:19 PM PDT

I'm having trouble figuring out exactly how to place good boundaries for when and where to use lookup tables in a database. Most sources I've looked at say that I can never have too many but, at some point, it seems like the database would be broken down into so many pieces that, while it may be efficient, it is no longer manageable. Here's a thrown together example of what I'm working with:

Let's say I have a table called Employees:

ID  LName   FName   Gender  Position
1   Doe     John    Male    Manager
2   Doe     Jane    Female  Sales
3   Smith   John    Male    Sales

Pretend for a moment that the data is more complex and contains hundreds of rows. The most obvious thing I see that could be moved to a lookup table would be Position. I could create a table called Positions and stick the foreign keys from the Positions table into the Employees table in the Position column.

ID  Position
1   Manager
2   Sales

But how far can I continue to break the information down into smaller lookup tables before it becomes unmanageable? I could create a Gender table and have a 1 correspond to Male and a 2 correspond to Female in a separate lookup table. I could even put LNames and FNames into tables. All "John" entries are replaced with a foreign key of 1 that points to the FName table that says an ID of 1 corresponds to John. If you go down this rabbit hole too far like this, though, your Employees table is then reduced to a mess of foreign keys:

ID  LName   FName   Gender  Position
1   1       1       1       1
2   1       2       2       2
3   2       1       1       2

While this might or might not be more efficient for a server to process, this is certainly unreadable to a normal person who may be trying to maintain it and makes it more difficult for an application developer trying to access it. So, my real question is how far is too far? Are there "best practices" for this sort of thing or a good set of guidelines somewhere? I can't find any information online that really nails down a good, useable set of guidelines for this particular issue I'm having. Database design is old hat to me but GOOD database design is very new so overly technical answers may be over my head. Any help would be appreciated!
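For reference, a minimal sketch of just the Position lookup described above, with assumed data types:

CREATE TABLE dbo.Positions
(
    PositionID   INT IDENTITY(1,1) PRIMARY KEY,
    PositionName VARCHAR(50) NOT NULL UNIQUE
);

CREATE TABLE dbo.Employees
(
    ID         INT IDENTITY(1,1) PRIMARY KEY,
    LName      VARCHAR(50) NOT NULL,
    FName      VARCHAR(50) NOT NULL,
    Gender     VARCHAR(10) NOT NULL,
    PositionID INT NOT NULL REFERENCES dbo.Positions (PositionID)
);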

Is normalization process needed?

Posted: 06 Jun 2013 03:10 PM PDT

Well, after learning DBMS as a subject I have many questions in mind, and normalization is one of them. As I learnt it there was a lot of confusion, and I found that whatever we do in the normalization process we can also do with general common sense. Even when building projects, people are not used to following it. So is it really needed? Is it followed in companies? I am asking because it might consume more time to normalize a database; if we can normalize it using just common sense, I don't see the need to follow the standard normalization procedure. Correct me if I am wrong.

[Articles] Quickly Copy Data



Quickly Copy Data

Posted: 05 Jun 2013 11:00 PM PDT

Moving data around can be a challenge as the sizes of our databases grow. Steve Jones talks about some of the challenges and some creative ways you might consider moving data.


[MS SQL Server] SSRS report Memory limit



SSRS report Memory limit

Posted: 21 May 2013 08:15 PM PDT

I want to limit the report server's CPU and memory usage. Thanks in advance.

Access to Snapshot only

Posted: 06 Jun 2013 03:42 AM PDT

I want to give a group of users access to run SELECT queries against a database snapshot but not against the live database. However, snapshots are read-only, and that includes the security configuration. One solution might be to run a script that:
1) Adds the users to the live database's db_datareader role.
2) Creates the database snapshot.
3) Drops the users from the live database's db_datareader role.
Is there a cleaner way to do this?
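A minimal sketch of that scripted approach, with placeholder database, group and file names:

USE LiveDb;
EXEC sp_addrolemember N'db_datareader', N'DOMAIN\ReportingGroup';
GO
CREATE DATABASE LiveDb_Snap
ON (NAME = LiveDb_Data, FILENAME = N'D:\Snapshots\LiveDb_Snap.ss')
AS SNAPSHOT OF LiveDb;     -- the snapshot keeps the security as of this moment
GO
USE LiveDb;
EXEC sp_droprolemember N'db_datareader', N'DOMAIN\ReportingGroup';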

Create Job to run indefinitely

Posted: 05 Jun 2013 11:59 PM PDT

Hi experts, can anyone please help me create a job that runs indefinitely, i.e. the script should keep on running? Thanks in advance.

OpenXML Query !

Posted: 06 Jun 2013 01:17 AM PDT

Folks:I need help with XML query. I have this XML data and I'm using OPENXML to shred the data into table. For each ModelValue (/DataOutputEntries/ARun/SecXml/Sec/SRateArr/SRateArrValues/ModelValue) I'm writing 12 insert statements (in this example I have only 3 insert statements) and each insert takes around like 2 to 3 seconds because of the huge XML data which has 46K lines (attached example has less data). Can this be done in a better way? Will cross apply be faster? I have never used cross apply before so any help on the query will be appreciated.Thanks ![code="sql"]DECLARE @ADataOutputImportEntries XMLSET @ADataOutputImportEntries='<DataOutputEntries> <ARun> <JobId>9390678</JobId> <JobEntry>1088801</JobEntry> <SKey>Tester=1.000/EquationTesting=Zero</SKey> <Label>TESTING</Label> <SecXml> <Sec> <ADate>2012-03-30</ADate> <SId>10508</SId> <Name>320John</Name> <Ticker /> <Cip>328T7</Cip> <SDate>2012-01-30</SDate> <SRateArr> <SRateArrValues> <CollType>All</CollType> <GroupNumber>0</GroupNumber> <GroupName>TOTAL</GroupName> <ModelValue> <Date>3/1/2012</Date> <CurrentToThirtyTo59>0.0442587393797255</CurrentToThirtyTo59> <CurrentToVoluntaryPrepay>0.00112852461455603</CurrentToVoluntaryPrepay> <ThirtyTo59ToCurrent>0.239157153759315</ThirtyTo59ToCurrent> <ThirtyTo59ToSixtyTo89>0.241455128769983</ThirtyTo59ToSixtyTo89> <NinetyPlusToInvoluntaryPrepay>0.0179558248028735</NinetyPlusToInvoluntaryPrepay> <CurrentToCurrent>0.954612736005718</CurrentToCurrent> <ThirtyTo59ToThirtyTo59>0.518541993321672</ThirtyTo59ToThirtyTo59> <SixtyTo89ToSixtyTo89>0.340082560007853</SixtyTo89ToSixtyTo89> <NinetyPlusToNinetyPlus>0.83600521461896</NinetyPlusToNinetyPlus> <SixtyTo89ToCurrent>0.103180895929781</SixtyTo89ToCurrent> <NinetyPlusToCurrent>0.0441401509179685</NinetyPlusToCurrent> <SixtyTo89ToNinetyPlus>0.298778375525492</SixtyTo89ToNinetyPlus> </ModelValue> <ModelValue> <Date>4/1/2012</Date> <CurrentToThirtyTo59>0.0499544306845915</CurrentToThirtyTo59> <CurrentToVoluntaryPrepay>0.00109431768188259</CurrentToVoluntaryPrepay> <ThirtyTo59ToCurrent>0.199535150055279</ThirtyTo59ToCurrent> <ThirtyTo59ToSixtyTo89>0.273414973899818</ThirtyTo59ToSixtyTo89> <NinetyPlusToInvoluntaryPrepay>0.0176055701572801</NinetyPlusToInvoluntaryPrepay> <CurrentToCurrent>0.948678229151541</CurrentToCurrent> <ThirtyTo59ToThirtyTo59>0.525995695441001</ThirtyTo59ToThirtyTo59> <SixtyTo89ToSixtyTo89>0.337274044725917</SixtyTo89ToSixtyTo89> <NinetyPlusToNinetyPlus>0.836278799696028</NinetyPlusToNinetyPlus> <SixtyTo89ToCurrent>0.0814687575120575</SixtyTo89ToCurrent> <NinetyPlusToCurrent>0.0441429483508975</NinetyPlusToCurrent> <SixtyTo89ToNinetyPlus>0.337497252914493</SixtyTo89ToNinetyPlus> </ModelValue> </SRateArrValues> <SRateArrValues> <CollType>FloatingRate</CollType> <GroupNumber>6</GroupNumber> <GroupName>2A</GroupName> <ModelValue> <Date>6/1/2012</Date> <CurrentToThirtyTo59>0.0611408854165271</CurrentToThirtyTo59> <CurrentToVoluntaryPrepay>0.000678007870239213</CurrentToVoluntaryPrepay> <ThirtyTo59ToCurrent>0.163265030897998</ThirtyTo59ToCurrent> <ThirtyTo59ToSixtyTo89>0.327657418802434</ThirtyTo59ToSixtyTo89> <NinetyPlusToInvoluntaryPrepay>0.015565453655582</NinetyPlusToInvoluntaryPrepay> <CurrentToCurrent>0.937548253411662</CurrentToCurrent> <ThirtyTo59ToThirtyTo59>0.50798882336555</ThirtyTo59ToThirtyTo59> <SixtyTo89ToSixtyTo89>0.341381810295919</SixtyTo89ToSixtyTo89> <NinetyPlusToNinetyPlus>0.845436300707964</NinetyPlusToNinetyPlus> <SixtyTo89ToCurrent>0.0623208735561162</SixtyTo89ToCurrent> 
<NinetyPlusToCurrent>0.0408626114855406</NinetyPlusToCurrent> <SixtyTo89ToNinetyPlus>0.387547418575595</SixtyTo89ToNinetyPlus> </ModelValue> <ModelValue> <Date>7/1/2012</Date> <CurrentToThirtyTo59>0.0620749359821378</CurrentToThirtyTo59> <CurrentToVoluntaryPrepay>0.000688643874234245</CurrentToVoluntaryPrepay> <ThirtyTo59ToCurrent>0.163887436671472</ThirtyTo59ToCurrent> <ThirtyTo59ToSixtyTo89>0.326071061950598</ThirtyTo59ToSixtyTo89> <NinetyPlusToInvoluntaryPrepay>0.0152817269357896</NinetyPlusToInvoluntaryPrepay> <CurrentToCurrent>0.936225626828455</CurrentToCurrent> <ThirtyTo59ToThirtyTo59>0.508565470087825</ThirtyTo59ToThirtyTo59> <SixtyTo89ToSixtyTo89>0.342158892295477</SixtyTo89ToSixtyTo89> <NinetyPlusToNinetyPlus>0.845490061979147</NinetyPlusToNinetyPlus> <SixtyTo89ToCurrent>0.0622535698764131</SixtyTo89ToCurrent> <NinetyPlusToCurrent>0.0408551960013352</NinetyPlusToCurrent> <SixtyTo89ToNinetyPlus>0.38723632596392</SixtyTo89ToNinetyPlus> </ModelValue> </SRateArrValues> </SRateArr> </Sec> </SecXml> </ARun></DataOutputEntries>' CREATE TABLE #tempTblTransRateArr( Label VARCHAR(50), ADate DATETIME, GroupId INT, CollType VARCHAR(50), SKey VARCHAR(500), BeginState VARCHAR(50), EndState VARCHAR(50), [Date] DATETIME, TransRate FLOAT) DECLARE @hdoc INT EXEC sp_xml_preparedocument @hdoc OUTPUT, @ADataOutputImportEntries INSERT INTO #tempTblTransRateArr SELECT Label, ADate, GroupId, CollType, SKey, 'Current' as BeginState, 'ThirtyTo59' as EndState, [Date], TransRate FROM OPENXML(@hdoc, '/DataOutputEntries/ARun/SecXml/Sec/SRateArr/SRateArrValues/ModelValue', 2) WITH ( Label VARCHAR(50) '../../../../../Label', ADate DATETIME '../../../ADate', GroupId INT '../GroupNumber', CollType VARCHAR(50) '../CollType', SKey VARCHAR(500) '../../../../../SKey', [Date] DATETIME 'Date', TransRate FLOAT 'CurrentToThirtyTo59') INSERT INTO #tempTblTransRateArr SELECT Label, ADate, GroupId, CollType, SKey, 'Current' as BeginState, 'VoluntaryPrepay' as EndState, [Date], TransRate FROM OPENXML(@hdoc, '/DataOutputEntries/ARun/SecXml/Sec/SRateArr/SRateArrValues/ModelValue', 2) WITH ( Label VARCHAR(50) '../../../../../Label', ADate DATETIME '../../../ADate', GroupId INT '../GroupNumber', CollType VARCHAR(50) '../CollType', SKey VARCHAR(500) '../../../../../SKey', [Date] DATETIME 'Date', TransRate FLOAT 'CurrentToVoluntaryPrepay') INSERT INTO #tempTblTransRateArr SELECT Label, ADate, GroupId, CollType, SKey, 'ThirtyTo59' as BeginState, 'Current' as EndState, [Date], TransRate FROM OPENXML(@hdoc, '/DataOutputEntries/ARun/SecXml/Sec/SRateArr/SRateArrValues/ModelValue', 2) WITH ( Label VARCHAR(50) '../../../../../Label', ADate DATETIME '../../../ADate', GroupId INT '../GroupNumber', CollType VARCHAR(50) '../CollType', SKey VARCHAR(500) '../../../../../SKey', [Date] DATETIME 'Date', TransRate FLOAT 'ThirtyTo59ToCurrent') EXEC sp_xml_removedocument @hdoc SELECT * FROM #tempTblTransRateArr DROP TABLE #tempTblTransRateArr[/code]
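On the CROSS APPLY question: the typed-XML nodes()/value() approach usually shreds a document like this faster than repeated OPENXML passes, because the XML is walked once and each rate becomes a row via a VALUES unpivot instead of a separate INSERT. A minimal sketch against the same temp table, showing three of the twelve rates:

INSERT INTO #tempTblTransRateArr
    (Label, ADate, GroupId, CollType, SKey, BeginState, EndState, [Date], TransRate)
SELECT
    ar.c.value('(Label/text())[1]', 'VARCHAR(50)'),
    s.c.value('(ADate/text())[1]', 'DATETIME'),
    g.c.value('(GroupNumber/text())[1]', 'INT'),
    g.c.value('(CollType/text())[1]', 'VARCHAR(50)'),
    ar.c.value('(SKey/text())[1]', 'VARCHAR(500)'),
    r.BeginState, r.EndState,
    mv.c.value('(Date/text())[1]', 'DATETIME'),
    r.TransRate
FROM @ADataOutputImportEntries.nodes('/DataOutputEntries/ARun') AS ar(c)
CROSS APPLY ar.c.nodes('SecXml/Sec') AS s(c)
CROSS APPLY s.c.nodes('SRateArr/SRateArrValues') AS g(c)
CROSS APPLY g.c.nodes('ModelValue') AS mv(c)
CROSS APPLY (VALUES
    ('Current',    'ThirtyTo59',      mv.c.value('(CurrentToThirtyTo59/text())[1]', 'FLOAT')),
    ('Current',    'VoluntaryPrepay', mv.c.value('(CurrentToVoluntaryPrepay/text())[1]', 'FLOAT')),
    ('ThirtyTo59', 'Current',         mv.c.value('(ThirtyTo59ToCurrent/text())[1]', 'FLOAT'))
    -- ...one row per remaining rate column
) AS r (BeginState, EndState, TransRate);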

Centralised Backup Reporting

Posted: 05 Jun 2013 07:49 PM PDT

Hi All, I work in an environment where we have hundreds of servers, many with common user databases, others with uncommon databases, and checking that database maintenance, in particular database backups, is taking place is becoming quite a challenge. We have our own in-house alerting system that alerts us to jobs that have failed, which we can view on a centralised monitoring system. But what we're really after is something centralised that will report which databases have not been backed up for x number of days, with the aim of highlighting backup jobs that may have been disabled or simply don't exist. I've looked at Idera's Backup Status, which is contained within its SQL Admin Toolset, and it's close to the mark, but unfortunately you can't filter out specific user databases that aren't backed up on purpose. In our case we have a number of databases that are the same on all servers, so we don't back them up because we can always get a copy of the database from another server. Hopefully I've explained the situation well; any suggestions would be appreciated. Thanks, Col
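A minimal sketch of the per-instance check, which could be pushed to every server from a Central Management Server registered-servers query; the exclusion list for the deliberately-unprotected databases is an assumption.

SELECT d.name,
       MAX(b.backup_finish_date) AS last_full_backup
FROM master.sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS b
       ON b.database_name = d.name
      AND b.type = 'D'                                 -- full backups only
WHERE d.name NOT IN ('tempdb', 'CommonUtilityDb')      -- databases skipped on purpose
GROUP BY d.name
HAVING MAX(b.backup_finish_date) IS NULL
    OR MAX(b.backup_finish_date) < DATEADD(DAY, -7, GETDATE());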

[SQL 2012] Why the difference in query execution times?



Why the difference in query execution times?

Posted: 06 Jun 2013 02:50 AM PDT

I have this query:

UPDATE invoice
SET actual_membernumber = members.mbrnbr, vin = members.vin
FROM Network_Transaction invoice (nolock)
CROSS APPLY (
    SELECT clientindex, lname, fname, mbrnbr, vin
    FROM vw_ActiveMembers members (nolock)
    WHERE members.clientindex = invoice.client_id
) AS members
INNER JOIN erm_claimsdata c (nolock) ON c.clientid = members.clientindex
WHERE invoice.transaction_type = 'I'
  AND (@filename is null OR invoice.filename = @filename)
  AND (@client_id is NULL OR client_id = @client_id)
  AND invoice.deletedflag = 0
  AND invoice.exportedflag = 0
  AND invoice.last_eight_vin is not null
  AND invoice.actual_membernumber is null
  AND members.vin LIKE '%' + invoice.last_eight_vin

If I run it in SSMS it takes about 8 minutes, but when the same query is executed in a stored proc it takes 5 hours, sometimes longer, before I kill the session. What could possibly cause this? This is on SQL Server 2005. Thanks
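The classic cause of this SSMS-fast/proc-slow pattern with catch-all predicates like (@filename IS NULL OR ...) is parameter sniffing. A minimal, hypothetical sketch of the usual first test: recompile the statement so the plan is built for the actual parameter values, much as happens when the query is pasted into SSMS with the variables declared locally.

CREATE PROCEDURE dbo.usp_UpdateInvoiceMembers
    @filename  VARCHAR(260) = NULL,
    @client_id INT          = NULL
AS
BEGIN
    UPDATE inv
    SET    actual_membernumber = inv.actual_membernumber   -- placeholder for the real SET/CROSS APPLY logic
    FROM   dbo.Network_Transaction AS inv
    WHERE  inv.transaction_type = 'I'
      AND  (@filename  IS NULL OR inv.[filename] = @filename)
      AND  (@client_id IS NULL OR inv.client_id  = @client_id)
    OPTION (RECOMPILE);                                     -- plan compiled per call for these values
END;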

Calling SSIS package remotely fails on Proxy works on Local

Posted: 05 Jun 2013 08:17 PM PDT

Hi all, I'm new to SSIS and trying to set up a new set of ETLs. We have a database server and an ETL server. I use a proxy account on the DB server to call the ETLs via PowerShell, as described in the link below:

http://blogs.msdn.com/b/dbrowne/archive/2010/10/11/remote-ssis-package-execution-with-powershell-2-0.aspx

If I run the package locally via the proxy account it runs with no errors. If I run it from the DB server Agent job it fails with the error:

Message: Executed as user: DOMAIN\SSIS_Proxy. The job script encountered the following errors. These errors did not stop the script: Unhandled Exception: System.FormatException: Index (zero based) must be greater than or equal to zero and less than the size of the argument list. at System.Text.StringBuilder.AppendFormat(IFormatProvider provider, String format, Object[] args) at System.String.Format(IFormatProvider provider, String format, Object[] args) at Microsoft.SqlServer.Management.PowerShell.Res.GetString(String key, Object arg0) at Microsoft.SqlServer.Management.PowerShell.SqlPowerShell.GetErrorRecordMessage(ErrorRecord errorRecord) at Microsoft.SqlServer.Management.PowerShell.SqlPowerShell.HandleAgentJob(RunspaceConfiguration config) at Microsoft.SqlServer.Management.PowerShell.SqlPowerShell.Main(String[] args). Process Exit Code -532459699. The step failed.

If I run the Agent job via a proxy that uses my own account it works, so it must be security related? The proxy account has the following security, which I thought was enough; as I said, it works when I run the package locally under the same account. Could anyone help?

ETL server: DCOM permissions based on this article: http://msdn.microsoft.com/en-us/library/aa337083.aspx - Launch and Activation Permissions, Local and Remote (I tried with just remote permissions and I couldn't get packages to run).

ETL DB engine: public login; user in msdb with db_ssisoperator.

Thanks for any help!

moving tempdb - recovery plan

Posted: 05 Jun 2013 11:28 PM PDT

So this is one of those areas where I feel least comfortable... messing around with system files. Our new production SQL Server was set up with all of the system databases on the same VMDK, including tempdb, and it turns out that is bad for replication to our DR environment. So I have been tasked with moving tempdb to its own separate drive. Now, moving tempdb itself isn't the issue - I've done it before, and it's a relatively simple process: ALTER DATABASE with the new location for each file, restart SQL, and you're done.

The problem is, I need to develop a plan for when things don't go right - that is, we get to the "restart SQL" phase and it won't start. So I was hoping to put my plan here and see what people thought.

1. Back up master - we already have nightly backups of master, but back it up anyway.
2. Stop SQL.
3. Make copies of master.mdf and master.ldf in another folder.
4. Start SQL.
5. Run the script to move tempdb.
6. Restart SQL.
7. If SQL won't start, move the copies I made of master.mdf and master.ldf back to the original folder; start SQL.
8. If SQL still won't start (any reason it wouldn't?), start in single-user mode and restore master.

The reason I am looking at steps 2, 3 and 7 is that, in my view, it's a lot simpler. With step 8 I need to make sure that no services try to connect to SQL when I'm starting it in single-user mode, which means either mucking with the firewall to block them or turning off all of the services I can think of on other servers. I figure that if I just put the original master files back, SQL should start okay (assuming no corruption during the copy, but that's unlikely). Does this make sense? Anything I'm missing?
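For reference, a minimal sketch of step 5; the logical names below are the installation defaults and should be confirmed against sys.master_files first, and the new path only takes effect at the next service restart.

ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = N'T:\TempDB\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = N'T:\TempDB\templog.ldf');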

Problems to remove Oracle ODBC aliases from ODBC GUI

Posted: 05 Jun 2013 05:12 PM PDT

Hi, (I know this isn't a SQL 2012 matter, but I'm in a hurry and didn't spot a better place for it.) The new SQL Server 2012 64-bit server was installed with the Oracle 11g client, but later it was found that they still had a source of version 8.x - a no-go - and another source that did not function. They uninstalled 11g and installed 10g, both the 64-bit and 32-bit clients, because I need 32-bit for SSIS. BUT after the 10g install I was not able to remove the 11g ODBC entries from the 32-bit ODBC administrator; it was still calling the 11g drivers - a no-go. It did not help even though I deleted the entries from ODBC.INI and from the registry and rebooted. So I tried creating another alias with the 10g 32-bit home, but it gave error messages: the connection was established and the DB metadata was readable, but when trying to run with SSIS (32-bit) I get "The specified DSN contains an architecture mismatch between the Driver and Application", which relates to the configuration situation in the topic below: http://msdn.microsoft.com/en-us/library/windows/desktop/ms712362%28v=vs.85%29.aspx What should I try? I think I should reinstall the 10g 32-bit client (the 64-bit ODBC seemed to work with SQL Agent), but how can I force the 32-bit ODBC settings clean?? Ville (The reason I'm using this source over ODBC is that it just does not function over OLE DB, while another 10g source functions OK over OLE DB... I don't understand these Oracle features.)

Grant Control On Schema verses granting individual permissions on schema?

Posted: 05 Jun 2013 07:39 AM PDT

Would granting CONTROL at the schema level take the place of granting individual permissions on the schema? For example:

GRANT CONTROL ON SCHEMA::[Developer_Schema] TO [Developer_Role]

versus:

--apply permissions to schemas
GRANT ALTER ON SCHEMA::[Developer_Schema] TO [Developer_Role]
GO
GRANT CONTROL ON SCHEMA::[Developer_Schema] TO [Developer_Role]
GO
GRANT SELECT ON SCHEMA::[Developer_Schema] TO [Developer_Role]
GO
GRANT INSERT ON SCHEMA::[Developer_Schema] TO [Developer_Role]
GO
GRANT DELETE ON SCHEMA::[Developer_Schema] TO [Developer_Role]
GO
GRANT UPDATE ON SCHEMA::[Developer_Schema] TO [Developer_Role]
GO
--ensure role membership is correct
EXEC sp_addrolemember N'Developer_Role', N'User1'
GO
--allow users to create tables in Developer_Schema
GRANT CREATE TABLE TO [Developer_Role]
GO
--Allow user to connect to database
GRANT CONNECT TO [User1]

Thanks, Kevin

SQL 2012 Cluster failure during Add node

Posted: 05 Jun 2013 06:48 AM PDT

I installed a new SQL 2012 Enterprise failover cluster on a 4-node cluster. When I try to run Add Node on any of the other 3 nodes, the installation fails while running 'Update HADRResource' with the following error: Status: Failed: see logs for details. Reason for failure: An error occurred for a dependency of the feature causing the setup process for the feature to fail. Next Step: Use the following information to resolve the error, uninstall this feature, and then run the setup process again. Component name: SQL Server Database Engine Services Instance Features. Component error code: 0x86D80018. Error description: There was an error to lookup cluster resource types. Error: There was a failure to call cluster code from a provider. Exception message: Generic failure. Status code: 4104. Please help in resolving this error.

SQL Server 2012 64-bit Enterprise running on a VMware virtual server with 256 GB of RAM

Posted: 05 Jun 2013 04:33 AM PDT

Not sure if this is the correct forum, but it appears that a single-instance SQL Server 2012 64-bit Enterprise with 225 GB allocated to SQL, running on VMware, is not allocating memory correctly. We have minimum memory set to 200 GB and maximum to 225 GB, yet SQL Server is only using roughly 20 GB as viewed through SQL Sentry. The page life expectancy is only around 5 to 8 and should be up around 300, so I'm wondering if anyone else has had this problem? Any assistance would be appreciated, or I can provide additional information as needed. Thanks
