- Merge Replication is Slow for Uploads
- Changed server name, now maintenance plan fails
- Sybase SQL - Truncation error occurred. Command has been aborted
- Deadlocks in Small Table
- Preferred Methods Of Indexing MS SQL Server Audit Data To Splunk
- What will happen if I change the compatibility level of the Distribution Database
- Can I deactivate log shipping jobs without raising errors?
- update column based on the sort order of another query
- Write performance of Postgresql 9.1 with read-only slave
- Convert SQL Server UPDATE Statement for Use in Oracle
- Gaps and islands: client solution vs T-SQL query
- PostgreSQL backup error
- How to check which tables in DB (MYSQL) updated in last 1 hour / last 1 minute?
- RID vs INCLUDE on a large field
- SSIS keeps force changing excel source string to float
- Stored Procedure Create Table from Variable with Variable Constraint
- The DELETE statement conflicted with the REFERENCE constraint
- Stress test MySQL with queries captured with general log in MySQL
- SQL Server 2008 DB Performance on single disk
- Slow SSRS Report in production
- Postgresql querying trends
- Error 1044 Access denied to user
- Multiple database servers for performance vs failover
- SQL Server distribution database log file grows uncontrollably after full database backup
- Restore SQL Server database using Windows Powershell 3.0
- mysqlbackup mysql enterprise utility issue
- Cross Database transactions - Always on
- MySQL generic trigger to save identification data for later accessing of the changed row
- When should I use a unique constraint instead of a unique index?
- SQL Server 2005 Express in VMware causing very high CPU load
Posted: 04 Apr 2013 05:23 PM PDT
I have implemented merge replication with pull subscriptions in a production environment. Initially it was working fine, but nowadays it is too slow to upload any changes to the publisher, while subscribers download changes in minimal time. The maximum percentage of total synchronization time is consumed by uploading, while downloading takes very little. The subscribers are on a WAN. I need a solution to overcome this uploading problem.
Posted: 04 Apr 2013 06:17 PM PDT
I am using SQL Server 2008 R2. When I try to execute a maintenance plan I get the following error:
I ran the following queries:
I then restarted the MSSQLSERVER service and the agent service.
When I run
Is there another step that I haven't performed yet?
and noticed that the only plan is called Backups. So I instead ran the query:
I ran the following two selects to make sure that they are the same:
And they both return the same value.
When I run the plan, I get the same error message as before.
(I have created a new backup plan, but I'd still like to figure out why this one isn't working, just for knowledge's sake.)
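For reference, the usual cause of maintenance plans failing after a server rename is that the instance still reports the old machine name. A hedged sketch of the standard check and correction (the old/new names are placeholders, not taken from the question):

```sql
-- Check whether the instance still reports the old machine name
SELECT @@SERVERNAME            AS reported_name,
       SERVERPROPERTY('ServerName') AS actual_name;

-- If they differ, re-register the local server under the new name.
-- A restart of the SQL Server service is required afterwards.
EXEC sp_dropserver 'OLD_SERVER_NAME';
EXEC sp_addserver  'NEW_SERVER_NAME', 'local';
```

Existing maintenance plans may still reference the old server name internally and can need to be recreated even after this fix.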
Posted: 04 Apr 2013 03:40 PM PDT
I need help with the code below, as it fails with a truncation error:
Truncation error occurred. Command has been aborted.
Posted: 04 Apr 2013 02:56 PM PDT
Our application is seeing deadlocks while inserting records into a table shortly after installation, while the table is small. These deadlocks eventually go away as the table fills up.
The application (.NET 4.0) spins up a number of threads for document processing. After processing each document, a thread inserts a new record into the database with a .NET-generated GUID as the clustered primary key. The insert happens entirely within a transaction, and we specify ROWLOCK on the insert operation to try to minimize its impact (the deadlocks also occurred without ROWLOCK).
Here is an example of the error we are seeing:
The application has a retry mechanism, and once the document table has a few hundred rows, we stop seeing this.
Our inquiries into these situations haven't been able to find any definitive cause. Our only leading theory right now is that there is some unintended lock escalation because of the non-sequential nature of GUIDs, but we're not sure. We're considering trying sequential GUIDs as a way to increase insert/indexing performance and maybe fix the problem indirectly, but switching GUID algorithms would be problematic for existing installations.
Why might we be seeing this very distinctive behavior, and how might we fix it?
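If sequential GUIDs are tested, one server-side variant (which leaves the .NET code untouched) is to let SQL Server generate the key. This is only a sketch; the table and column names are illustrative, not from the question:

```sql
-- Generate the clustered key on the server with NEWSEQUENTIALID(),
-- which produces mostly-ascending GUIDs and so reduces page splits
-- and contention near the "hot" insert point of the index
ALTER TABLE dbo.Document
    ADD CONSTRAINT DF_Document_Id
    DEFAULT NEWSEQUENTIALID() FOR DocumentId;
```

NEWSEQUENTIALID() can only be used as a column default, so the application would have to stop supplying its own GUIDs on insert for this to take effect.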
Posted: 04 Apr 2013 12:56 PM PDT
What is your preferred method for indexing MS SQL Server for Splunk? I am collecting audit data in various environments by various means (profiler, app logs, and extended events) but I hope to consolidate them all into extended events moving forward.
However, one or two environments have Splunk connected and we want to ensure all security logs are collected and sent to the Splunk server. I have seen a few ways of doing this but I'm not sure which would be preferred or the 'best practice'. I haven't seen much in the Splunk community on this, since really it's up to us to get it into Splunk however we can, so I figured I'd ask here.
Make sure Splunk can index the darn thing. Prefer minimal user scripting to accomplish it, if possible. Basically make it a text or csv file.
Minimal risk of 'fudging' the audit logs before Splunk gets them. I want them written to Splunk, or to the indexed text file, as fast as possible. This is to reduce 'man in the middle' attacks where audit logs are fudged in between pulls and syncs. We might also be exposed to duplicate entries in certain scenarios.
Minimize DBA access. I understand that DBAs with sa rights for the enterprise will always be able to get around restrictions, but we could vastly minimize our access to it. I am thinking of a solution where security really manages this, because sometimes even DBAs go bad.
1- Collect profiler data and use the .NET class to write a mini-app that exports it to a text file for Splunk. I haven't done this before, but it seems like it would deliver 'having data read immediately'. It requires a server-side trace, but in 2012 you can start one without enabling server-startup stored procs, which would go against the security standard.
Profiler is easy and everyone knows how to use it in and out, won't be dependent on me.
Easy .NET integration
Profiler sucks compared to extended events.
Will have to ensure it is always collecting data. In 2012 it's easy but in 2008 you will have issues if you don't use start up stored procs, and will have to get creative with jobs. Even then, you might miss some logging.
It is being deprecated.
2- Replace profiler with Extended Events.
Will always start up in 2008 and up without any special parameters or start up stored procs.
Very light weight.
Preferred new method
How the heck do you get the file saved in a text or csv format so Splunk can easily access it? I haven't seen any way to do that and my pluralsight sub ran out :/
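For context on option 2, an extended events session can at least write to a file target, which can then be read back with T-SQL and exported. A minimal sketch using SQL Server 2012 syntax (session name, event choice, and path are illustrative assumptions, not from the question):

```sql
-- Minimal sketch: capture logins to a file target
CREATE EVENT SESSION audit_logins ON SERVER
ADD EVENT sqlserver.login
    (ACTION (sqlserver.client_hostname, sqlserver.username))
ADD TARGET package0.event_file
    (SET filename = N'C:\Audit\audit_logins.xel')
WITH (STARTUP_STATE = ON);  -- start automatically with the instance

ALTER EVENT SESSION audit_logins ON SERVER STATE = START;

-- The .xel files can later be read with sys.fn_xe_file_target_read_file
-- and exported to CSV for Splunk to index.
```

The .xel format itself is binary, so a small scheduled export step (or a Splunk add-on that understands the format) would still be needed to get plain text.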
3- Log the data to a locked down table and have splunk read that table. Put proper permissions in place where a dba/sec admin can modify the trigger/service broker app that logs it and alert secops/dbas if someone changes anything in that table. Problem is I have only found Splunk to be able to query MySQL and not MS SQL Server. Perhaps I could run a powershell script to continually query that table or get creative with a trigger that starts a powershell session (have never done that before) to reduce man in the middle editing the files.
Positives: Meets the requirements
Reduces man in the middle attack if we can have it update the file immediately.
Negatives: It's perhaps more complicated than it needs to be.
4- Use this beta product.
Security team will manage it which is preferred since this is a security concern, and part of it IMO is protecting the data from DBA's as well.
Minimal overhead on the DBA team.
Negatives It's beta.
No real documentation.
Posted: 04 Apr 2013 01:56 PM PDT
I have implemented merge replication in a production environment. Initially it was on SQL Server 2005; it was then upgraded in place to SQL Server 2008 R2. Later on, all databases' compatibility level was changed to 100, except for the distribution database.
What impact will happen if I change the distribution database compatibility to 100?
Posted: 04 Apr 2013 01:06 PM PDT
I've set up log shipping from Server A to Server B. I then set it up from B to A when B was the primary.
I've reverted back to A being primary and disabled the backup, copy and restore jobs associated with B to A shipping. However, I have a failure of the LSAlert job on both A and B.
Is there a way to suppress these errors, or must I remove log shipping from B to A while A is primary?
My goal is to leave the log shipping configuration in place for DR, but have it not raise errors stating that databases have not been sync'd.
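One option, sketched here with an illustrative job name (log shipping alert jobs are typically named along the lines of 'LSAlert_<servername>'), is to disable the alert jobs themselves rather than remove the configuration:

```sql
-- Disable the log shipping alert job so it stops raising errors,
-- while leaving the backup/copy/restore configuration in place for DR
EXEC msdb.dbo.sp_update_job
    @job_name = N'LSAlert_ServerB',
    @enabled  = 0;
```

Re-enabling the job (@enabled = 1) when failing back restores the alerting without reconfiguring log shipping.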
Posted: 04 Apr 2013 12:51 PM PDT
I'm trying to add arbitrarily ordered records to a database so that they can be sorted by the UI (or when I query the database). My problem is that I already have the list, and I need to add a default sorting based on alphabetical order. I'm thinking I should be able to do this with a subquery or coalesce, but I can't get it quite right. I'm doing this on MySQL, so I'm hoping it's possible to do it at the database level.
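A minimal MySQL sketch of that idea, assuming a hypothetical table items(id, name, sort_order); this uses the classic pre-8.0 user-variable ranking technique (in MySQL 8+ a ROW_NUMBER() window function would be cleaner):

```sql
SET @rank := 0;

-- Number the rows in alphabetical order in a derived table,
-- then join back to write that number into sort_order
UPDATE items AS i
JOIN ( SELECT id, (@rank := @rank + 1) AS rn
       FROM items
       ORDER BY name ) AS ranked
  ON i.id = ranked.id
SET i.sort_order = ranked.rn;
```

The derived table is materialized first, which is what makes it legal to update the same table the subquery reads from.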
Posted: 04 Apr 2013 11:12 AM PDT
I have a PostgreSQL 9.1 database that is being hosted by Heroku. It currently has a read-only "follower". I need to truncate several large tables (over 100 GB of data) and reload them. Is the read-only follower going to add overhead to this operation? Should I drop the follower and recreate it after the process is complete?
Posted: 04 Apr 2013 12:36 PM PDT
I cannot get this UPDATE statement to work in an Oracle environment. It was written for SQL Server.
I am looking for some guidance on how to convert it.
Posted: 04 Apr 2013 06:27 PM PDT
Can a T-SQL solution for gaps and islands run faster than a C# solution running on the client?
To be specific, let us provide some test data:
This first set of test data has exactly one gap:
The second set of test data has 2M -1 gaps, a gap between each two adjacent intervals:
Currently I am running 2008 R2, but 2012 solutions are very welcome. I have posted my C# solution as an answer.
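For readers comparing approaches, the canonical T-SQL islands query over a simple integer sequence looks like the following. This is a generic sketch (the table and column are illustrative), not the poster's interval schema:

```sql
WITH grouped AS (
    SELECT n,
           -- n minus its row number is constant within each island
           n - ROW_NUMBER() OVER (ORDER BY n) AS grp
    FROM dbo.Numbers
)
SELECT MIN(n) AS island_start,
       MAX(n) AS island_end
FROM grouped
GROUP BY grp
ORDER BY island_start;
```

Gaps then fall out as the ranges between one island's end and the next island's start.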
Posted: 04 Apr 2013 01:51 PM PDT
I am trying to back up my company's PostgreSQL database using pgAdmin III, so I selected our company DB from the tree, right-clicked it and selected 'Backup', selected the destination folder for the backup file, then clicked OK. Then I got this error message:
So can someone please help me by telling me what I am doing wrong here?
I am 100% sure that CompanyDB_TEST does exist.
I am running the PostgreSQL under Windows Server 2003
Posted: 04 Apr 2013 11:04 AM PDT
I have to create an xls datafeed for a website, and I would like to know which tables get affected when I do a manual entry from the CMS.
If I have a freshly installed database and I'm doing the first entry in it using the CMS, I would like to know which tables were updated/appended in the last minute in that DB.
It is somewhat similar to this question http://stackoverflow.com/questions/307438/how-can-i-tell-when-a-mysql-table-was-last-updated
But in my case I don't know which tables to check. I could check each and every table in the DB using the solution posted in that question, but I have a gut feeling that there is a better solution for this.
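One place to look is the data dictionary; a sketch of the idea (schema name is a placeholder):

```sql
-- MySQL: list tables in a schema touched in the last hour.
-- Caveat: UPDATE_TIME is reliably maintained for MyISAM tables;
-- for InnoDB it is often NULL before MySQL 5.7, so this is only
-- a sketch of the approach, not a universal answer.
SELECT TABLE_NAME, UPDATE_TIME
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'your_db'
  AND UPDATE_TIME >= NOW() - INTERVAL 1 HOUR
ORDER BY UPDATE_TIME DESC;
```

Changing the interval to `INTERVAL 1 MINUTE` covers the "last 1 minute" case from the title.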
Posted: 04 Apr 2013 11:20 AM PDT
I have a table that stores notes
I have done a lot of reading recently about how MS SQL Server handles indexes (2005 and forward).
I have a clustered index on ID
[ I have considered changing the clustered index to parentId, parentType since that is reasonably narrow and it is static. ]
The overwhelming percentage of queries against this table are going to be along the lines of
The question I want to ask today (though any feedback is welcome) is this:
The NC index I could add is:
This would be useful in creating little lists of the notes where we might include who and when type info.
I am hesitant to include a
Assuming I don't include the
While I have read quite a bit about how expensive RID lookups are, it still has to be better to have this index as opposed to doing a table scan, RIGHT?
[Apologies for the code block; I have added the 4 spaces, but maybe I did it wrong?]
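For concreteness, the kind of nonclustered index under discussion, with illustrative column names (the actual note columns were not included in the digest), might be:

```sql
-- Nonclustered index covering the common "notes for a parent" lookup.
-- INCLUDE adds leaf-level-only columns (the "who and when" info)
-- without widening the index key itself.
CREATE NONCLUSTERED INDEX IX_Notes_Parent
ON dbo.Notes (ParentId, ParentType)
INCLUDE (CreatedBy, CreatedDate);
```

With the query's predicate columns in the key and the displayed columns in INCLUDE, the little note lists can be served from the index alone, avoiding the RID (or key) lookups entirely.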
Posted: 04 Apr 2013 02:51 PM PDT
There is a column in Excel that is supposed to be text, but SSIS sees numeric text there and automatically makes it double-precision float [DT_R8].
I can change it manually in the Output branch's External and Output columns, but the Error Output simply refuses to let me change the respective column.
Error at Extract Stations [Excel Source ]: The data type for "output "Excel Source Error Output" (10)" cannot be modified in the error "output column "Group No" (29)". Error at Extract Stations [Excel Source ]: Failed to set property "DataType" on "output column "Group No" (29)".
I have tried modifying the package XML. I tried IMEX=1 and TypeGuessRows=0, but none of that has fixed my problem. Is there any fix for this at all?
The excel field to be imported into the SQL nvarchar field reads for example
but they are being written to the SQL table as
I put in dataviewers and the fields show
all the way thru execution which is correct but I guess when it hits the OLE DB Destination source it somehow converts it to the
which is wrong.
Posted: 04 Apr 2013 12:04 PM PDT
I have managed to use a stored procedure to create a copy of a table with a variable name, but I am struggling to understand how to incorporate a constraint into the stored procedure.
The constraint name must be a variable, since when the procedure makes a copy of a table it cannot give the PK the same name as one used before. I am getting syntax errors in these areas.
I am very new to SQL server... literally just started learning today!! So please explain in dummy terms.
Code so far below:
Posted: 04 Apr 2013 12:56 PM PDT
I'm trying to delete all users but getting the error:
Seems I need to use
Posted: 04 Apr 2013 12:45 PM PDT
Is there any tool available which can do stress testing using the log file created by the MySQL general log? After a lot of searching on Google, I found a few stress testing tools, but they only use built-in benchmarks for the stress test. One solution is Apache JMeter, but it does not create test plans from a MySQL log file, and creating a custom test plan for all the queries I have is too time consuming.
Posted: 04 Apr 2013 01:55 PM PDT
I have a database in SQL Server 2008, around 20 GB in size, and it is increasing rapidly.
For various reasons I cannot add multiple independent hard disks to increase IO performance.
If I add a large table in a separate filegroup, will it help improve performance on a single disk?
Or does anyone have tips to improve performance?
Posted: 04 Apr 2013 01:54 PM PDT
I have an SSRS report which gets its data by firing a series of stored procedures.
Now the report is timing out big time when run in production, yet when I pull down the production database and restore it to development, the report runs fine.
I was thinking to set up a sql server profiler trace in production and hopefully that will tell me something... eg high Disk I/O at the time it's being run.
What else should I be doing? Something with perfmon?
Posted: 04 Apr 2013 04:03 PM PDT
Firstly, apologies if this is a duplicate; I am fairly new to SQL, so I'm not sure of the correct terminology to use in my searches.
So I have a database which records motor races, with the following simplified schema
If each driver has ~1000 races spread over 2/3 years
How would I go about querying the overall % change (positive or negative) in their average race speed for a given date range? For example:
% Change in first 6 months
% Change in first 12 months
UPDATE: More detail on % Change
The % change could be calculated using linear regression (a least-squares fit would be suitable); the average change is effectively the y-difference on a line of best fit, where each point is a race, x is the race_date and y is the average_speed for that race.
Postgres's regr_slope will give the gradient of the line, which is effectively the same as the % change.
This gives the figure I want, but I now need to apply it against all drivers, sorted by 'slope'.
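A hedged sketch of that per-driver query, assuming a races table with driver_id, race_date, and average_speed columns (names inferred from the question, and the date window is illustrative):

```sql
-- Slope of average_speed over time, per driver, within a date window.
-- race_date is converted to epoch seconds so regr_slope gets a numeric x.
SELECT r.driver_id,
       regr_slope(r.average_speed,
                  EXTRACT(EPOCH FROM r.race_date)) AS slope
FROM races r
WHERE r.race_date >= DATE '2012-01-01'
  AND r.race_date <  DATE '2012-07-01'
GROUP BY r.driver_id
ORDER BY slope DESC;
```

Changing the WHERE window to 12 months gives the second figure; a positive slope means the driver's average speed is trending up over the period.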
Posted: 04 Apr 2013 02:59 PM PDT
This is driving me crazy.
When I try to create a few tables from my Workbench model I get this
I've been trying to find a solution but nothing works for me.
Curiously when I run
Access is denied to both at one point or another.
The MySQL server is a remote hosted server with the user permissions correctly set.
Posted: 04 Apr 2013 12:59 PM PDT
If I have two database servers, and I am looking for maximum performance vs high-availability, what configuration would be best?
Assuming the architecture is two load-balanced web/app servers in front of two db servers, will I be able to have both db servers active with synced data, with web1 to db1, web2 to db2 setup? Is this active/active?
I'm also aware that the two db servers can have their own schema to manually 'split' the db needs of the app. In this case daily backups would be fine. We don't have 'mission critical data.'
If it matters, we have traffic around 3,000-7,000 simultaneous users.
Posted: 04 Apr 2013 10:59 AM PDT
We have a merge replication environment that is pushing to 8 subscribers. This is working fine. Our distribution database is set up in simple recovery mode. We have a maintenance plan that backs up all databases every day at 00:30. Once this process completes, the distribution log file grows over the next 30 minutes and absorbs all the remaining space on the hard drive (about 90 GB).
What then happens is that the distribution database shows as "Recovery Pending" and we cannot do anything till we restart the machine. After this I can shrink the log file down to 2MB.
I have no idea why this is happening. The log file is running at about 10MB during the day. The database size is sitting at 15GB.
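As a first diagnostic step, it may help to catch what is blocking log truncation while the file is growing; a minimal sketch:

```sql
-- Shows why the log cannot be truncated (e.g. ACTIVE_TRANSACTION,
-- REPLICATION, BACKUP_OR_RESTORE) at the moment the query runs
SELECT name,
       recovery_model_desc,
       log_reuse_wait_desc
FROM sys.databases
WHERE name = N'distribution';
```

Running this during the 30-minute growth window, rather than after the fact, is what makes the log_reuse_wait_desc value informative.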
Posted: 04 Apr 2013 07:59 PM PDT
I'm trying to restore a SQL Server database with a PowerShell script, but I'm having problems.
Here is the error I'm getting:
Here is my code:
Posted: 04 Apr 2013 11:59 AM PDT
I recently took a backup using mysqlbackup.
While restoring it, I noticed that the files that were copied into datadir are with
Is anything wrong with how the backup was taken?
Posted: 04 Apr 2013 01:59 PM PDT
Recently we have been working on a POC to get Always On working, and happened to see this article in BOL.
This article suggests that there would be logical inconsistency when we are dealing with Synchronous mode too, but will this actually be the case?
Consider, for example, databases A and B on which a transaction is running, where A is in high-safety mode and B is not mirrored. The log of A has to reach the mirror database before the principal commits, and eventually the two-phase commit (including the transaction on B) succeeds. But the article suggests the log will not be transferred in the first place, while the commit on B still happens, which seems contradictory. Please help me understand whether the statement in the article is true, and if so, how it can be. :)
PS: Please let me know if I need to provide more information about this.
Posted: 04 Apr 2013 05:59 PM PDT
I am pretty inexperienced with this.
I need a generic trigger, able to create and save in a fixed table some sort of identification data for a changed row from generic (any) table. The identification data should be used later to SELECT the changed item in the given table.
Can this be done without previously knowing the table structure?
The only idea I had (though it's way too inefficient in my opinion, and it also requires prior knowledge of the table's column names) is to save a hash by:
to identify the changed row in the table which triggered the change.
I would greatly appreciate any help or suggestions!
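MySQL triggers cannot be attached generically to all tables, so a per-table sketch is roughly the best available. All names here are illustrative, assuming each audited table has a single-column primary key:

```sql
-- Central log of which row changed in which table
CREATE TABLE change_log (
    id         INT AUTO_INCREMENT PRIMARY KEY,
    table_name VARCHAR(64)  NOT NULL,
    row_pk     VARCHAR(255) NOT NULL,  -- PK of the changed row, as text
    changed_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- One trigger per audited table (here: a hypothetical orders table)
DELIMITER //
CREATE TRIGGER orders_after_update
AFTER UPDATE ON orders
FOR EACH ROW
BEGIN
    INSERT INTO change_log (table_name, row_pk)
    VALUES ('orders', NEW.id);
END//
DELIMITER ;
```

The changed row can later be fetched with a lookup on the logged table name and primary key, without hashing the whole row. The per-table triggers themselves could be generated by a script reading information_schema.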
Posted: 04 Apr 2013 01:58 PM PDT
When I want a column to have distinct values, I can either use a constraint
or I can use a unique index
Columns with unique constraints seem to be good candidates for unique indexes.
Are there any known reasons to use unique constraints and not to use unique indexes instead?
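The two alternatives from the question, with illustrative table and column names (only one of the two would normally be created for a given column):

```sql
-- Option 1: a declarative unique constraint
ALTER TABLE dbo.Users
    ADD CONSTRAINT UQ_Users_Email UNIQUE (Email);

-- Option 2: a unique index created directly
CREATE UNIQUE INDEX IX_Users_Email ON dbo.Users (Email);
```

In SQL Server the constraint is enforced by an index under the covers anyway; the index syntax simply exposes physical options the constraint syntax does not.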
Posted: 04 Apr 2013 02:17 PM PDT
I'm having the problems described in KB937745 - very high CPU usage and the Application Log is reporting something like this:
I've downloaded the hotfixes, but I can't run them; I suspect it is because SQL Server 2005 Express Edition is not in the "Applies to" section of the KB:
The machine is a VM on an ESX 3.5 host, running Windows XP (patched).
Any ideas? I'm stumped as to why the CPU is so slammed. This is a product from a vendor that hasn't seen this kind of problem across several other installations.