Tuesday, July 16, 2013

[how to] InnoDB, Hash Table, Reference Table and Fragmentation

InnoDB, Hash Table, Reference Table and Fragmentation

Posted: 16 Jul 2013 08:30 PM PDT

I have a big list of domains and URLs that I store in a database; there are around 150M domains and 300M URLs. I'm using InnoDB to store each in this format:

CREATE TABLE IF NOT EXISTS `domains_list` (
  `id` int(10) unsigned NOT NULL DEFAULT '0',
  `domain` varchar(350) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

Which is fine. New records are assigned what is effectively an auto-increment, so there's no fragmentation and inserts are in ascending order.

When new data comes in, however (typically between 250K and 2M rows), I use two separate 'hash' tables to see whether the domain or URL already exists in the database. Strictly speaking it's not a 'hash' table, just a bunch of MD5s which I use to ensure values are unique, with the added benefit of the table being fixed length. The table is also partitioned.

CREATE TABLE IF NOT EXISTS `domains_hashes` (
  `id` int(10) unsigned NOT NULL,
  `segment` tinyint(3) unsigned NOT NULL,
  `hash` binary(15) NOT NULL,
  PRIMARY KEY (`segment`,`hash`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
/*!50100 PARTITION BY RANGE (segment)
(PARTITION p0 VALUES LESS THAN (1) ENGINE = InnoDB,
 PARTITION p1 VALUES LESS THAN (2) ENGINE = InnoDB,
 ...
 PARTITION p254 VALUES LESS THAN (255) ENGINE = InnoDB,
 PARTITION p255 VALUES LESS THAN (256) ENGINE = InnoDB) */;

segment is basically the first byte of the hash, which is used for the partition, with the remaining 15 bytes going into hash.
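The segment/hash split described above can be sketched in Python (the lowercase normalization is an assumption, not from the question; use whatever canonical form the pipeline already applies):

```python
import hashlib

def domain_hash_key(domain: str) -> tuple:
    """Split a domain's MD5 into a 1-byte partition segment and the
    remaining 15 bytes that go into the BINARY(15) `hash` column."""
    digest = hashlib.md5(domain.lower().encode("utf-8")).digest()  # 16 bytes
    segment = digest[0]      # 0-255, selects the partition
    remainder = digest[1:]   # 15 bytes stored in `hash`
    return segment, remainder

segment, remainder = domain_hash_key("example.com")
print(segment, len(remainder))
```

The split is deterministic, so lookups and inserts always land on the same partition for a given domain.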

For seeing whether a bunch of domains already exist in the database, this works relatively well; however, the table gets fragmented due to the random nature of the insertions.

The hash table is basically only used for insertions and quickly looking up whether a domain exists in the DB or not. During insertions, a script walks from 0-255 and performs the necessary check.

My question is, do you know of a better way to handle these inserts/selects? I believe when I started out with this database I simply had a key on domains_list.domain, which was slow.

I find that when the partitions are re-organised the lookups are very quick, but after a number of batch insertions, the same lookups slow down somewhat. Server has 32GB of RAM and I use 16GB for the buffer pool, while the table itself takes up 5.4GB on disk.

SQL Server 2008: Is there a graphical reporting tool that shows tps (or select statements) per second for a specific table?

Posted: 16 Jul 2013 07:00 PM PDT

In "SQL Server 2008: Measure tps / select statements per second for a specific table?", the question was how to do it.

OK, so I can do it... but I want automated, pretty charts, so I can easily watch TPS by table over days (to guide us in optimization work, etc.).

Yes, we could "roll our own" and write all kinds of reports, etc. but that looks to be 1-n days of work.

Is there a tool to automate all or some of this?

E.g.

  • chart: TPS by table for database N (with timeline over 1 hr, 24 hrs, 7 days, etc.)
  • chart: top five tables with hottest TPS
  • etc.

MySQL Best Practices for a log table to display website statistics

Posted: 16 Jul 2013 05:25 PM PDT

Google Analytics is a bit too large for what we want to do on our website and we would like to display custom statistics for info logged on each page load. Inserted data would be like: [ip, geo_data, page_url, referer_url, timestamp, etc], statistics would be like: [visits today, visits of a page in particular, New-York visitors for today, etc].

I looked around on Google but only found some partial information. What are the best practices? For instance, I'm not sure which engine to choose between MyISAM, InnoDB or Archive (I know about the table/row locking features, just not sure which is best for this). I'm not sure about indexes either, as I know that indexes can increase the table size, and I'm not sure whether I should put a single index on each column or composite indexes covering each possible search combination. There are probably important things I didn't even think about.
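As a sketch of the indexing trade-off the question raises — one composite index per frequent query shape, rather than an index per column — here is a minimal, hypothetical log table (SQLite standing in for MySQL; all names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE page_log (
    id        INTEGER PRIMARY KEY,
    ip        TEXT NOT NULL,
    city      TEXT,
    page_url  TEXT NOT NULL,
    referer   TEXT,
    logged_at TEXT NOT NULL            -- ISO-8601 timestamp
);
-- One composite index per frequent query shape, not one per column.
CREATE INDEX idx_log_time_page ON page_log (logged_at, page_url);
""")
conn.executemany(
    "INSERT INTO page_log (ip, city, page_url, referer, logged_at) "
    "VALUES (?,?,?,?,?)",
    [("1.2.3.4", "New York", "/home",  None, "2013-07-16 10:00:00"),
     ("5.6.7.8", "Boston",   "/about", None, "2013-07-16 11:00:00"),
     ("1.2.3.4", "New York", "/home",  None, "2013-07-15 09:00:00")])

# A "visits today" style query served by the composite index.
today = conn.execute(
    "SELECT COUNT(*) FROM page_log WHERE logged_at >= '2013-07-16'"
).fetchone()[0]
print(today)  # 2
```

Queries like "New York visitors today" would want their own index shaped (city, logged_at); the point is to index the query, not the column.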

function hangs with null case operation

Posted: 16 Jul 2013 05:46 PM PDT

I created a function that accepts a start and end date, with the end date being optional. I then wrote a CASE in the filter to use the start date if no end date is passed.

CASE WHEN @dateEnd IS NULL
     THEN @dateStart
     ELSE @dateEnd
END

When I call the function for the most recent month of the data:

SELECT * FROM theFunction ('2013-06-01', NULL)  

... the query hangs. If I specify the end date:

SELECT * FROM theFunction ('2013-06-01', '2013-06-01')  

... the result is returned normally. I took the code out of the function and ran it fine inside a query window. I can't duplicate the issue the fiddle either. A query like:

SELECT * FROM theFunction ('2013-04-01', '2013-06-01')  

... also works fine.

Is there anything in the query (below) that could be causing the function to hang when a NULL is passed for the end date?

SQL Fiddle
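Whatever the cause of the hang, the CASE expression in the question is logically equivalent to COALESCE(@dateEnd, @dateStart). A quick sketch of that equivalence, checked against SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

def effective_end(date_start, date_end):
    # COALESCE returns its first non-NULL argument -- the same result
    # as the CASE expression in the question.
    return conn.execute(
        "SELECT COALESCE(?, ?)", (date_end, date_start)
    ).fetchone()[0]

print(effective_end("2013-06-01", None))          # 2013-06-01
print(effective_end("2013-04-01", "2013-06-01"))  # 2013-06-01
```

The rewrite doesn't change semantics, but it is sometimes easier for an optimizer to reason about than a CASE over a parameter.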

Learning to draw entity relational diagram

Posted: 16 Jul 2013 08:14 PM PDT

I am trying to learn entity relationship diagram, following question is the objective.

An information consulting firm has a number of offices with employees. Employees can move between offices depending on the project, but their location is always recorded. The firm manages a number of projects and has a vast base of clients. The company's success depends on assigning employees with the relevant skills and location. The time an employee spends on a project is recorded each day, including travel time, break time, etc. To reduce travel time, they try to allocate employees from the nearest possible office. Draw an entity relationship diagram and an entity listing with attributes.

I created entity relationship diagram by using Visio. Please comment on my attempt.

Office: Address, telephone, office manager

Employees: name, address, job, hourly rate, start_date, location, etc.

Projects: Name, Manager, Start Date, End Date, Estimated Cost, Quote, Actual Cost

Client: Name, address, telephone, e-mail, contact

Assignment: name, Employee name, Date Assigned, Hours spent

Here is my attempt in this image:

[image: ER diagram attempt]

SSIS hangs during For Each File enumeration

Posted: 16 Jul 2013 02:24 PM PDT

While debugging an SSIS package that loops over thousands of files, execution hangs with no errors and no failed tasks. Slightly simplified, the package does this:

For Each File (a directory full of PDF files, each ranging from 800 - 2000 pages and 3 - 8 MB filesize)
    * Script Task (get file information)
    * File System Task (clear out temp folder)
    * Execute Process Task (use pdftk to break file into individual pages)
    * For Each File (each page in temp folder, 40 - 100 KB each)
        * Script Task (parse PDF file)
        * Execute Process Task (pdftk to concatenate like pages)
        * Execute SQL Task (insert one row into staging table using OPENROWSET)
        * File System Task (delete inserted file)
        * Script Task (set variables for next iteration)
    * Execute SQL Task (execute proc to merge staging into prod)
    * File System Task (rename completed file)

When I execute the package (debug mode in BIDS), it runs as expected through the first file and well into the second, but eventually hangs between iterations of the inner loop. When it hangs, all tasks are green and the two loop containers are yellow. The output window does not generate any messages, and going through the Progress tab, each task has a matching number of starts and stops except for the two loops and the package itself.

My first thought was that I'd exceeded an iteration limit for the For Each File enumerator, but I cannot find any documentation for such a limit. What else can I check to determine why SSIS stops running?

MongoDB Can't Start

Posted: 16 Jul 2013 04:46 PM PDT

Evening. We have had a MongoDB instance running fine for a few weeks, and all of a sudden it has gone down.

We are unable to start it back up, as it is complaining that the "dbpath (/data/db/) does not exist". I've tried reading through the documentation and have seen others suggest using the dbpath flag, but still no luck.

We are starting it with "sudo service mongodb start", and the dbpath setting in the config file points to the directory containing our db.

Any advice would be greatly appreciated.

Thanks in advance. Gary

UPDATE statement with and without SELECT keyword

Posted: 16 Jul 2013 03:57 PM PDT

I have been doing some testing on removing the SELECT keyword from this update:

Update  #tmpDriverTime
Set     WorkDay2 = (Select IsNull(Sum(PyblHrs),0)
From    spr_DriverTimeEntry
Where   spr_DriverTimeEntry.DrvrID = #tmpDriverTime.EmployeeNo
And     Convert(Varchar(12),dtwrkd,112) = Convert(Varchar(12),@StartDate_1,112))

From the testing I have done, it looks like the same data is returned whether or not I have the SELECT keyword. I ran the code by our DBA to get his approval of removing the SELECT keyword, and he said I cannot, as removing it would cause the update to work differently, but he cannot articulate how. I have done some searching on the web and found a few MSDN pages where they run the UPDATE FROM command without a SELECT keyword. So now I am confused: my testing and the examples I found on MSDN say I can remove the SELECT keyword, but my DBA says no. Does the removal of the SELECT keyword affect how UPDATE FROM works, and if so, how?
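One way to see the distinction: in the statement above, the parenthesized part is a scalar subquery, and the SELECT keyword is required inside it by standard SQL; the MSDN examples without SELECT use T-SQL's separate UPDATE ... FROM join syntax, a different statement shape rather than the same statement minus a keyword. A sketch of the scalar-subquery form, using SQLite with illustrative names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE driver_time (employee_no INTEGER PRIMARY KEY, workday2 REAL);
CREATE TABLE time_entry  (driver_id INTEGER, payable_hours REAL);
INSERT INTO driver_time VALUES (1, 0), (2, 0);
INSERT INTO time_entry  VALUES (1, 4.0), (1, 3.5);  -- employee 2 has no entries
""")

# Correlated scalar subquery: SELECT is mandatory inside the parentheses.
conn.execute("""
UPDATE driver_time
SET workday2 = (SELECT IFNULL(SUM(payable_hours), 0)
                FROM time_entry
                WHERE time_entry.driver_id = driver_time.employee_no)
""")
result = conn.execute(
    "SELECT employee_no, workday2 FROM driver_time ORDER BY 1").fetchall()
print(result)  # [(1, 7.5), (2, 0)]
```

The scalar subquery guarantees exactly one value per updated row (here, 0 via IFNULL when no entries match), which is why the SELECT keyword cannot simply be dropped from this form.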

Phonetic search with Solr, for Brazilian Portuguese

Posted: 16 Jul 2013 01:14 PM PDT

We are implementing Solr as the new internal search engine for our website.

Most features are running just fine, others are in the adjusting and calibration phase.

But there is one feature that I'm not finding any good documentation over the web. So here it goes:

  • How can I implement phonetic search and suggest with Solr, for the Brazilian Portuguese language?

I was already able to create an index using the official stemming tokenizer: http://docs.lucidworks.com/display/solr/Language+Analysis#LanguageAnalysis-BrazilianPortuguese

But the matching uses parsers adapted to treat everything as English. That is where the problem lies.

Tutorials, documentation, how-tos, or replies are all welcome.

Will Ola's default IndexOptimizer job update my stats?

Posted: 16 Jul 2013 01:20 PM PDT

I just installed Ola Hallengren's MaintenanceSolution.sql maintenance scripts. After looking at the output of the log generated by the IndexOptimize - USER_DATABASES job, I'm concerned that it's not going to update the stats. The reason for my concern is that I see @UpdateStatistics = NULL in the parameters listed in the log output:

Date and time: 2013-07-16 13:58:24  Server: XXXXX  Version: 10.0.5512.0  Edition: Standard Edition (64-bit)  Procedure: [PROD-PH].[dbo].[IndexOptimize]  Parameters: @Databases = 'USER_DATABASES', @FragmentationLow = NULL, @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE', @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE', @FragmentationLevel1 = 5, @FragmentationLevel2 = 30, @PageCountLevel = 1000, @SortInTempdb = 'N', @MaxDOP = NULL, @FillFactor = NULL, @PadIndex = NULL, @LOBCompaction = 'Y', @UpdateStatistics = NULL, @OnlyModifiedStatistics = 'N', @StatisticsSample = NULL, @StatisticsResample = 'N', @PartitionLevel = 'N', @MSShippedObjects = 'N', @Indexes = NULL, @TimeLimit = NULL, @Delay = NULL, @LockTimeout = NULL, @LogToTable = 'Y', @Execute = 'Y'  Source: http://ola.hallengren.com  

The motivation behind this is a query that runs once per accounting period (every four weeks) for the date range of the period. The query takes over 20 minutes if I do not manually run sp_updatestats, but less than 5 seconds if the stats are current. The table in question has ~1.6M rows but is only growing by ~20K a week, so it is nowhere near the 20% auto-update threshold within a given four-week period. Several other tables have a similar growth pattern.

My plan is to schedule this job to run once a week. Will it update the stats as required?

Modeling a database for easy counting / reporting

Posted: 16 Jul 2013 02:04 PM PDT

I have an app where user is known (user_id) and he can do several actions (action_id). Every time he makes an action I need to save the fact that he made it for reports/analytics. I guess it is similar to other analytic solutions and their db design.

Once I have the data, provided with a time window (minutes resolution) I need to count for each user (all or some) the number of times he did actions and which actions he did. (sum all data grouped by action_id).

Some assumptions:

  • The number of users are ~1000.
  • Action types are ~100.
  • Actions can happen 24/7.
  • The time windows can span from minutes to days and are random.
  • A time window can't go back more than 30 days.

I'm considering SQL, NoSQL and RRD to save the data.

I put RRD here because inserting the data via statsd + graphite is easy to implement. My concern with this approach is that the querying (although provided by graphite) is not indexed and will probably have to scan all the data whenever I ask for a window/user. Another problem is that querying all the data requires all users' info, which means reading all the files concurrently, and I'm not sure that is a good thing.

SQL - Very easy implementation for inserting and querying the data. Easy to index, order and group by. However, I'm not sure it holds up if I'm anticipating high traffic. Also, I'm not sure how efficient SQL's COUNT() is after a GROUP BY (I haven't used SQL in the last few years). Can it offer parallel computation?

NoSQL - Is there a solution out there that is the right fit for this type of scenario (perhaps a Map/Reduce algorithm to fast generation of counts in a time window?)
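For scale, the SQL variant of the counting query is straightforward; a sketch with SQLite and hypothetical names, where a composite index leading with the timestamp serves the window filter:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE user_actions (
    user_id   INTEGER NOT NULL,
    action_id INTEGER NOT NULL,
    ts        TEXT    NOT NULL       -- ISO-8601, minute resolution
);
-- Matches the window query below: filter on ts, then group.
CREATE INDEX idx_actions_ts ON user_actions (ts, user_id, action_id);
""")
conn.executemany("INSERT INTO user_actions VALUES (?,?,?)", [
    (1, 10, "2013-07-16 14:00"),
    (1, 10, "2013-07-16 14:05"),
    (1, 11, "2013-07-16 14:07"),
    (2, 10, "2013-07-16 15:00"),   # outside the window below
])
counts = conn.execute("""
    SELECT user_id, action_id, COUNT(*)
    FROM user_actions
    WHERE ts BETWEEN '2013-07-16 14:00' AND '2013-07-16 14:59'
    GROUP BY user_id, action_id
    ORDER BY user_id, action_id
""").fetchall()
print(counts)  # [(1, 10, 2), (1, 11, 1)]
```

With ~1000 users, ~100 action types and a 30-day horizon, the raw table stays small enough that an indexed GROUP BY like this is usually fast without any pre-aggregation.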

Thanks for helping me model this.

Trouble configuring Toad for oracle 9.7.2 commercial with instant client [on hold]

Posted: 16 Jul 2013 11:54 AM PDT

I have successfully set up the instant client, with SQL*Plus connecting to the database correctly. Now, on starting Toad 9.7.2 Commercial, my service name as specified in tnsnames.ora does show up under Database, but there is no instant client showing there. Earlier I had run Toad 12 Trial and it worked with the same instant client nicely.

What is wrong with this previous version of Toad?

PLATFORM -MS WIN7 64 bit

INSTANT CLIENT basic and sqlplus Version 12.1.0.1.0 for windows 32 bit

Does existing msrepl_tran_version column need to be removed when re-creating replication?

Posted: 16 Jul 2013 11:40 AM PDT

What is the process for removing a replication and then re-creating it? I know the process to set up replication. However, if I wanted to take a publisher database, copy it, restore it, and create another publication and replication, how would I do that?

Would I need to remove the previous msrepl_tran_version column from all the tables?

The type of replication that I will be re-creating is Transactional Replication with updateable subscribers.

Additionally, after I restore the database and create the publication, do I need to let the snapshot complete and then back up that database, restore it on the subscribers, and create the subscription,

or

when I restore the database for the publisher, can I also restore it on the subscriber?

Help will be greatly appreciated!

Run job once a minute, every day, except during a certain time

Posted: 16 Jul 2013 11:19 AM PDT

I have a SQL Server 2012 Agent job which currently runs every minute, every day.

I'd like to modify it so that it will run every minute, every day except on Saturdays between 7:00 pm and 11:30 pm. Is this possible with the job scheduler?

I tried using a daily frequency and I can change the time on a per day basis that way, but not for a specific day.

Current schedule:

[image: current schedule]
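A common workaround, since a single Agent schedule can't express "every minute except one weekday window", is either two complementary schedules or a first job step that exits early inside the blackout window. The guard logic, sketched in Python with illustrative dates:

```python
from datetime import datetime, time

def should_run(now: datetime) -> bool:
    """Return False during the Saturday 7:00 pm - 11:30 pm blackout
    window; keep the every-minute schedule and have the job's first
    step bail out when this returns False."""
    if now.weekday() == 5:  # Saturday (Monday == 0)
        if time(19, 0) <= now.time() <= time(23, 30):
            return False
    return True

print(should_run(datetime(2013, 7, 20, 20, 0)))   # Saturday 8:00 pm -> False
print(should_run(datetime(2013, 7, 20, 18, 59)))  # Saturday 6:59 pm -> True
print(should_run(datetime(2013, 7, 21, 20, 0)))   # Sunday 8:00 pm -> True
```

In an actual Agent job the same check would be a T-SQL IF against GETDATE() in step one, with "quit the job reporting success" as the step's on-success action.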

Putting a Select statement in a transaction

Posted: 16 Jul 2013 11:10 AM PDT

What is the difference between these 2 queries:

start transaction;
select * From orders Where id=1;
UPDATE orders SET username="John" Where id=1;
commit;

And without transaction:

select * From orders Where id=1;
UPDATE orders SET username="John" Where id=1;

What is the effect of having a SELECT inside a transaction?

If DELETE FROM orders WHERE id=1 were called from another session right after the SELECT in both cases, when would it be processed?

INT or TIMESTAMP for the change time column in a logging table?

Posted: 16 Jul 2013 12:54 PM PDT

We're having an argument over what to use for storing change date in a new log table for our site.

One side says: use INT, storing the UNIX timestamp. We can convert the datetimes in PHP to UNIX integers and query based on those (WHERE log_date BETWEEN [convert 12/25/2012 to timestamp] AND [convert 12/31/2012 to timestamp]).

The other side says, use TIMESTAMP for this; we can then use the native tools to query. (WHERE log_date BETWEEN "2012-12-25" AND "2012-12-31").

Since this is a logging table, we're not concerned about the 1970-2038 range of TIMESTAMP.

Which makes more sense?
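Both camps can express the same window; the difference is where the conversion happens. A sketch (SQLite standing in for MySQL, schema names illustrative):

```python
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE log_int (log_date INTEGER);  -- UNIX timestamp
CREATE TABLE log_ts  (log_date TEXT);     -- native datetime value
""")
for day in ("2012-12-20", "2012-12-26", "2013-01-02"):
    dt = datetime.fromisoformat(day).replace(tzinfo=timezone.utc)
    conn.execute("INSERT INTO log_int VALUES (?)", (int(dt.timestamp()),))
    conn.execute("INSERT INTO log_ts  VALUES (?)", (day,))

# INT column: the application converts the window bounds first.
lo = int(datetime(2012, 12, 25, tzinfo=timezone.utc).timestamp())
hi = int(datetime(2012, 12, 31, tzinfo=timezone.utc).timestamp())
n_int = conn.execute(
    "SELECT COUNT(*) FROM log_int WHERE log_date BETWEEN ? AND ?",
    (lo, hi)).fetchone()[0]

# Native column: the literal window is usable directly in SQL.
n_ts = conn.execute("""SELECT COUNT(*) FROM log_ts
    WHERE log_date BETWEEN '2012-12-25' AND '2012-12-31'""").fetchone()[0]
print(n_int, n_ts)  # 1 1
```

The results agree either way; the INT side pushes timezone and conversion responsibility into the application, while the native side keeps queries readable and lets the database's date functions work directly on the column.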

How do I insert a row which contains a foreign key?

Posted: 16 Jul 2013 03:20 PM PDT

Using PostgreSQL v9.1. I have the following tables:

CREATE TABLE foo
(
    id BIGSERIAL     NOT NULL UNIQUE PRIMARY KEY,
    type VARCHAR(60) NOT NULL UNIQUE
);

CREATE TABLE bar
(
    id BIGSERIAL NOT NULL UNIQUE PRIMARY KEY,
    description VARCHAR(40) NOT NULL UNIQUE,
    foo_id BIGINT NOT NULL REFERENCES foo ON DELETE RESTRICT
);

Say the first table foo is populated like this:

INSERT INTO foo (type) VALUES
    ( 'red' ),
    ( 'green' ),
    ( 'blue' );

Is there any way to insert rows into bar easily by referencing the foo table? Or must I do it in two steps, first by looking up the foo type I want, and then inserting a new row into bar?

Here is an example of pseudo-code showing what I was hoping could be done:

INSERT INTO bar (description, foo_id) VALUES
    ( 'testing',     SELECT id from foo WHERE type='blue' ),
    ( 'another row', SELECT id from foo WHERE type='red'  );
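The pseudo-code is nearly valid: wrapping each subselect in its own parentheses turns it into a scalar subquery, which is accepted inside a VALUES list (PostgreSQL supports this; the sketch below demonstrates the same idea in SQLite):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE foo (id INTEGER PRIMARY KEY, type TEXT NOT NULL UNIQUE);
CREATE TABLE bar (id INTEGER PRIMARY KEY,
                  description TEXT NOT NULL UNIQUE,
                  foo_id INTEGER NOT NULL REFERENCES foo);
INSERT INTO foo (type) VALUES ('red'), ('green'), ('blue');
""")

# Each scalar subquery just needs its own set of parentheses.
conn.execute("""
INSERT INTO bar (description, foo_id) VALUES
    ('testing',     (SELECT id FROM foo WHERE type = 'blue')),
    ('another row', (SELECT id FROM foo WHERE type = 'red'))
""")
rows = conn.execute("""SELECT bar.description, foo.type
                       FROM bar JOIN foo ON foo.id = bar.foo_id
                       ORDER BY bar.id""").fetchall()
print(rows)  # [('testing', 'blue'), ('another row', 'red')]
```

If a subquery matches no row it yields NULL, which the NOT NULL constraint on foo_id would then reject — a useful safety net against typos in the type values.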

representation in ms-access

Posted: 16 Jul 2013 03:01 PM PDT

I have a database in Microsoft Access. I want to know how to look up a single datum from a reference table, given a dynamic set of values. Here is a representation of what I mean:

I have the following tables:

Points for Pushups(m):

Reps    Age 17-21    Age 22-26    Age 27-31
1       6            7            8
2       7            9            9
3       9            11           12

Fitness Tests:

Name    Reps    Test Date
Bob     2       1 Jan 2009
Jill    1       5 May 2010

People:

Name    DOB
Bob     1 Jan 1987
Jill    2 Feb 1985
Sal     3 Mar 1991

I want the query to use People.DOB and the test date to find the age the person was at the time of the test. I then want the query to use this value to determine which column to look in, and the value from Reps to determine which row, returning the single value and naming it Points.

For example, I want Bob's row to show:

Query:

Name    DOB           Age At Test    Reps    Points
Bob     1 Jan 1987    22             2       9

Does anyone know how to do the dynamic reference part?

I know how to make the query, and I know how to get the age; I just don't know how to use the values as column selectors in the reference table. I've seen it done, but long ago, and never looked into it.
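One common alternative to the dynamic column lookup is to unpivot the score chart into rows of (Reps, AgeMin, AgeMax, Points), so an ordinary lookup or join replaces the column choice. A language-neutral sketch in Python, with the values taken from the tables above:

```python
# The score chart unpivoted into (reps, age_min, age_max, points) rows,
# so the lookup becomes a plain search instead of a dynamic column pick.
SCORES = [
    (1, 17, 21, 6),  (1, 22, 26, 7),  (1, 27, 31, 8),
    (2, 17, 21, 7),  (2, 22, 26, 9),  (2, 27, 31, 9),
    (3, 17, 21, 9),  (3, 22, 26, 11), (3, 27, 31, 12),
]

def points_for(reps, age_at_test):
    for r, lo, hi, pts in SCORES:
        if r == reps and lo <= age_at_test <= hi:
            return pts
    return None  # outside the chart

# Bob: born 1 Jan 1987, tested 1 Jan 2009 -> age 22, 2 reps
print(points_for(2, 22))  # 9
```

In Access this would be a Points table with Reps, AgeMin, AgeMax and Points columns, joined with a BETWEEN criterion on the computed age — no dynamic column reference needed.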

Is this good/bad database design and why?

Posted: 16 Jul 2013 12:27 PM PDT

I am building a database for a small project. I don't have much experience so I am unsure, but I get the feeling I am not approaching this correctly.

I have a system which allows users to create courses, which contain sections of questions which they answer. I have separated out courses, sections and questions as separate entities, but the questions table requires foreign keys from:

  • The Users table to know who created it.
  • The Courses table to know what course it is part of.
  • The Sections table to know what section of the course it is in.

[image: database diagram]

Question: In terms of the Questions table requiring foreign keys from everywhere, is this layout acceptable? Are there any improvements that could be made?

Thanks for any help.

What is the best way to transport database fields from one database to another?

Posted: 16 Jul 2013 02:15 PM PDT

I have two databases. The table names and field names are different, and the numbers of fields are unequal. I need to transport all fields from one database to the other. I can import the database in CSV format.

I can use a PHP script which will accomplish this.

But is there any other way to do this easily, without any script?

Several PC Cannot Connect To MySQL Server on Certain Time

Posted: 16 Jul 2013 12:15 PM PDT

I have a network of 10 PCs and 1 Server. I installed MySQL Community Server v5.6.10 64-bit on the Server, and all the PCs are Windows XP, using ODBC Connection 5.2(a).

Some of the PCs (previously 3, now 4) cannot connect to MySQL at a certain time, which is 2:00 pm. Before that, the clients can connect to the server normally, but at 2:00 pm those PCs cannot connect anymore, with this error:

Your connection attempt failed for user xxx@xxx from your host to server at xxx.xxx.xxx.xxx:xxxx. Cannot connect to MySQL Server on xxx.xxx.xxx.xxx (10055)
Please:
1. Check that mysql is running on server xxx.xxx.xxx.xxx
2. Check that mysql is running on port xxxx
3. Check the xxxxxx has rights to connect to xxx.xxx.xxx.xxx from your address (mysql rights define what clients can connect to the server and from which machines)
4. Make sure you are both providing a password if needed and using the correct password for xxx.xxx.xxx.xxx connection from the host address you're connecting from

From the above error message:

  1. I already check that mysql is running (other PC can access it normally)
  2. I also check MySQL to run in port xxxx
  3. I also check the user xxxxxx has rights to connect (because it can connect to the server before 2 pm)
  4. I also check the password has been provided correctly
  5. I also check the maximum number of connection in my.ini, which I set to 0 (unlimited connection)
  6. I also check the network connection, but find no problem since I can still ping the server and other PC

The only action that makes everything work again is restarting the client PC; however, I don't want that kind of solution. I want to know the cause: is it the ODBC connector, the PC, or something else? Could anyone give me some hints on what I should check?

Thank you

Start service error code 3417 with SQL Server 2012

Posted: 16 Jul 2013 11:47 AM PDT

I cannot start the SQL Server 2012 service. The event log says the service-specific error code is 3417.

In the Event Viewer:

The SQL Server (MSSQLSERVER) service terminated with the following service-specific error:
WARNING: You have until SQL Server (MSSQLSERVER) to logoff. If you have not logged off at this time, your session will be disconnected, and any open files or devices you have open may lose data.

I've changed the service logon account in the service properties but this error happened again.

I think the problem is in the master database, but I do not know what to do.

EDIT:

I've tried rebuilding the Resource database, but this error happened again!

Time series data for ad platform

Posted: 16 Jul 2013 12:04 PM PDT

I am trying to figure out how to store time series data for an ad platform I am working on.

Basically I want to know some strategies/solutions for storing billions of rows of data so that I can easily search it (about 6-8 indexes on the table) and get fast counts based on queries.

I tried MySQL with the TokuDB engine, and this seems to be very fast, but it becomes extremely slow when I try to do a COUNT query once the result reaches about 5-8 million rows.

I was looking at some NoSQL alternatives, but since I want to be able to search this data, they are probably not the best solution. I tried DynamoDB; I would have had to store the data in many places in order to account for all the searching on it.

What I am storing is a row in the database for each click on an ad. This table will grow very fast, especially when the site gets large.

Another solution would be to separate this data per advertiser. This means each advertiser will have their own table where all their data goes into. This means it will be much smaller and the COUNT queries will be much faster. I can even split it up by advertiser and month.

My goal is to give an advertiser the ability to search and display in a paginated way all their clicks. They should be able to get data between a time period and filter by about 5-8 other indexes if they want to.
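Whichever storage engine wins, the per-advertiser access pattern maps naturally onto a composite index whose leading column is the mandatory equality filter. A small sketch (SQLite as a stand-in; all names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ad_clicks (
    id            INTEGER PRIMARY KEY,
    advertiser_id INTEGER NOT NULL,
    clicked_at    TEXT    NOT NULL,
    country       TEXT
);
-- Leading column = the mandatory filter, then the time range.
CREATE INDEX idx_clicks_adv_time ON ad_clicks (advertiser_id, clicked_at);
""")
conn.executemany(
    "INSERT INTO ad_clicks (advertiser_id, clicked_at, country) VALUES (?,?,?)",
    [(7, "2013-07-01", "US"), (7, "2013-07-10", "FR"),
     (7, "2013-08-01", "US"), (9, "2013-07-05", "US")])

total = conn.execute("""
    SELECT COUNT(*) FROM ad_clicks
    WHERE advertiser_id = 7
      AND clicked_at BETWEEN '2013-07-01' AND '2013-07-31'
""").fetchone()[0]
page = conn.execute("""
    SELECT id FROM ad_clicks
    WHERE advertiser_id = 7
      AND clicked_at BETWEEN '2013-07-01' AND '2013-07-31'
    ORDER BY clicked_at LIMIT 10 OFFSET 0
""").fetchall()
print(total, [r[0] for r in page])  # 2 [1, 2]
```

Both the COUNT and the paginated page walk the same contiguous index range, which is the effect partitioning per advertiser would otherwise be buying; partitioning by advertiser and month on top of such an index mainly helps with pruning old data.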

If an account has REQUIRE SUBJECT, does it still need a password?

Posted: 16 Jul 2013 12:03 PM PDT

I'm in the process of setting up SSL-secured replication between two servers. Each server has its own public/private keypair, and the CA cert is just the concatenation of the two public certs, like this answer.

Now I'm updating the replication account with REQUIRE SUBJECT "exact subject of the client"

Is there any practical value to also having a password on the replication account (IDENTIFIED BY "secret")?

MySQL users corrupt

Posted: 16 Jul 2013 03:15 PM PDT

I have a strange situation here:

From time to time I cannot log in with any of my MySQL users. I cannot even make a mysqldump.

So I started searching through the MySQL files, and I found that users.MYD and users.MYI are modified at the time when the login problem occurs. The only way to get everything working again is to restore the users.* files from a time when the system was running okay.

I searched about the problem and I found that there was some bug in the MySQL, but it was in the older versions (4.X). I'm running 5.5.

Any ideas? Thanks!

I have multiple sources INSERTing into a MySQL innodb table. My periodic aggregation script never makes accurate aggregates. Why?

Posted: 16 Jul 2013 01:15 PM PDT

I apologize in advance if this is a repeat. I'm not really sure how to properly ask for what I'm running into.

I have a large InnoDB table set up. I have 3 sources that all INSERT concurrently at a moderately high volume (300-500 INSERT/s).

I have a PERL script running on a cron job every 5 minutes that aggregates data from the last time it ran. It keeps track of the last record it processed by storing the auto_increment value of the last row in a metadata table.

The aggregates are always off, but not by much. This has been so frustrating because it's just plain simple math (SELECT account_id,sum(sold) GROUP BY account_id). I have a suspicion that it has something to do with the transaction isolation level (REPEATABLE READ).

I recently found FlexViews which looks very cool and might address my problem. But I was wondering if anyone could:

  • Tell me what is going on here. Why is this happening?
  • What can be done to produce 100% accurate aggregates

I'm like 95% sure the auto_increment thing along with transaction commit ordering is hosing me up, but I don't understand enough about DBs to really ask the right question.

Oh, one thing to note: I've already checked over the field types. This issue isn't the result of rounding.
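The 95% hunch is likely right: auto_increment ids are assigned at INSERT time, but transactions can commit out of id order, so a "process all ids greater than the last one seen" watermark can permanently skip a row. A tiny simulation of the failure mode (plain Python, purely illustrative):

```python
# Two sessions insert concurrently. Session A is assigned id=100 first
# but commits AFTER session B, which was assigned id=101.
committed = []
last_seen_id = 0
totals = 0

# Aggregator run 1: only B's row (id=101) is committed and visible.
committed.append({"id": 101, "sold": 5})
new_rows = [r for r in committed if r["id"] > last_seen_id]
totals += sum(r["sold"] for r in new_rows)
last_seen_id = max(r["id"] for r in committed)  # watermark jumps to 101

# Session A finally commits; its id=100 is now BELOW the watermark.
committed.append({"id": 100, "sold": 3})

# Aggregator run 2: id 100 is silently skipped forever.
new_rows = [r for r in committed if r["id"] > last_seen_id]
totals += sum(r["sold"] for r in new_rows)

print(totals)  # 5, not the true total of 8
```

Common mitigations include aggregating only rows older than some grace period, or tracking gaps in the id sequence and re-checking them on later runs, rather than trusting a single high-watermark.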

Are there any tables that I can get all Oracle supported system privileges and object privileges?

Posted: 16 Jul 2013 11:24 AM PDT

Are there any tables that I can get all Oracle supported system privileges and object privileges? I know there is a view called all_sys_privs in Oracle. But I can't find more information about it.

Thanks.

SQL Server 2008 R2 (Suspect) mode - how to repair?

Posted: 16 Jul 2013 09:30 AM PDT

I have SQL Server 2008 R2 database in Suspect mode. I tried to fix it running this query:

EXEC sp_resetstatus 'yourDBname';
ALTER DATABASE yourDBname SET EMERGENCY
DBCC checkdb('yourDBname')
ALTER DATABASE yourDBname SET SINGLE_USER WITH ROLLBACK IMMEDIATE
DBCC CheckDB ('yourDBname', REPAIR_ALLOW_DATA_LOSS)
ALTER DATABASE yourDBname SET MULTI_USER

But the repair output was this message:

Warning: You must recover this database prior to access.
Msg 8921, Level 16, State 1, Line 5
Check terminated. A failure was detected while collecting facts. Possibly tempdb out of space or a system table is inconsistent. Check previous errors.
Warning: The log for database 'ServeDB' has been rebuilt. Transactional consistency has been lost. The RESTORE chain was broken, and the server no longer has context on the previous log files, so you will need to know what they were. You should run DBCC CHECKDB to validate physical consistency. The database has been put in dbo-only mode. When you are ready to make the database available for use, you will need to reset database options and delete any extra log files.
Msg 8921, Level 16, State 1, Line 9
Check terminated. A failure was detected while collecting facts. Possibly tempdb out of space or a system table is inconsistent. Check previous errors.

How do I repair the database fully? The repair I made works only for a few days; then the database goes into Suspect mode again...

[SQL Server] Prev and Next Row Without RowNumber


Prev and Next Row Without RowNumber

Posted: 16 Jul 2013 12:29 AM PDT

Hello all, can anyone help me with this one? Here is the table sample.

CREATE TABLE [dbo].[Invoice_t](
    [Cost_Center_code] [int] NOT NULL,
    [Payment_code] [int] NOT NULL,
    [INV_No] [int] NOT NULL,
    CONSTRAINT [PK_Invoice_t] PRIMARY KEY CLUSTERED
    (
        [Cost_Center_code] ASC,
        [Payment_code] ASC,
        [INV_No] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
INSERT [dbo].[Invoice_t] ([Cost_Center_code], [Payment_code], [INV_No]) VALUES (1, 1, 1)
INSERT [dbo].[Invoice_t] ([Cost_Center_code], [Payment_code], [INV_No]) VALUES (1, 1, 2)
INSERT [dbo].[Invoice_t] ([Cost_Center_code], [Payment_code], [INV_No]) VALUES (1, 1, 3)
INSERT [dbo].[Invoice_t] ([Cost_Center_code], [Payment_code], [INV_No]) VALUES (2, 1, 1)
INSERT [dbo].[Invoice_t] ([Cost_Center_code], [Payment_code], [INV_No]) VALUES (2, 1, 2)
INSERT [dbo].[Invoice_t] ([Cost_Center_code], [Payment_code], [INV_No]) VALUES (2, 2, 1)
INSERT [dbo].[Invoice_t] ([Cost_Center_code], [Payment_code], [INV_No]) VALUES (2, 2, 2)
INSERT [dbo].[Invoice_t] ([Cost_Center_code], [Payment_code], [INV_No]) VALUES (1, 2, 1)
INSERT [dbo].[Invoice_t] ([Cost_Center_code], [Payment_code], [INV_No]) VALUES (3, 2, 1)
INSERT [dbo].[Invoice_t] ([Cost_Center_code], [Payment_code], [INV_No]) VALUES (3, 1, 1)

How can I get the previous and next row from this type of data, without using row numbers? Thanks.

[Articles] Hack Resistant?


Hack Resistant?

Posted: 15 Jul 2013 11:00 PM PDT

Can we make a hack-resistant database? A vendor claims this, but Steve Jones thinks it's not really a good claim to make, and that we ought to make it our job to secure databases.

[MS SQL Server] resource error 0x800706B9 on installing a sql2008 r2 instance


resource error 0x800706B9 on installing a sql2008 r2 instance

Posted: 16 Jul 2013 12:48 AM PDT

Afternoon all. I was in the process of installing a SQL 2008 R2 64-bit SE instance on a two-node cluster running Windows 2003 R2 64-bit Datacenter Server, which currently hosts 3x SQL 2005 SP4 CU3 instances. Before the install got to the final page of the install wizard, a dialogue box came up with "Not enough resources are available to complete this operation. (Exception from HRESULT: 0x800706B9)". I clicked OK and the install went away (no failure or attempt to start the install process).

Exception type: System.Runtime.InteropServices.COMException
Message: Not enough resources are available to complete this operation. (Exception from HRESULT: 0x800706B9)
Data: DisableWatson = true
Stack:
Server stack trace:
   at Microsoft.SqlServer.Interop.MSClusterLib.ClusResourcesClass.Refresh()
   at Microsoft.SqlServer.Configuration.Cluster.WindowsCluster.get_Resources()
   at Microsoft.SqlServer.Configuration.ClusterConfiguration.ClusterDiskPublicConfigObject.get_DiskResources()
   at Microsoft.SqlServer.Configuration.ClusterConfiguration.ClusterDiskPublicConfigObject.VerifyPathInClusterGroup(String path)
   at Microsoft.SqlServer.Configuration.SqlEngine.InputValidator.IsInvalidDrive(DirectoryInformationData did, String fullPathName, String bindingArg, String uiBindingArg)
   at Microsoft.SqlServer.Configuration.SqlEngine.InputValidator.ValidateDataDirectory(String directoryType, String directory, String bindingArg, String uiBindingArg, DirectoryValidationFlags validationFlags)
   at Microsoft.SqlServer.Configuration.SqlEngine.InputValidator.ValidateDataDirectories()
   at Microsoft.SqlServer.Configuration.SqlEngine.InputValidator.ValidateInstallInput()
   at Microsoft.SqlServer.Configuration.SqlEngine.SqlEngineSetupPublic.ValidateSettings()
   at Microsoft.SqlServer.Configuration.PublishedProperties.Validate()
   at Microsoft.SqlServer.Configuration.DatabaseEngineSecuritySettings.ValidateSettings()
   at Microsoft.SqlServer.Configuration.PublishedProperties.Validate()
   at Microsoft.SqlServer.Configuration.InstallWizardFramework.ConfigurationController.ValidateSettings()
   at Microsoft.SqlServer.Configuration.InstallWizardFramework.InstallWizardPageHost.CalculateValidationState()
   at Microsoft.SqlServer.Configuration.WizardFramework.PageHost.<>c__DisplayClass1.<PageLeaving>b__0()
   at Microsoft.SqlServer.Configuration.WizardFramework.BlockingWorker.ThreadCallBack(MethodInvoker method)
   at System.Runtime.Remoting.Messaging.StackBuilderSink._PrivateProcessMessage(IntPtr md, Object[] args, Object server, Int32 methodPtr, Boolean fExecuteInContext, Object[]& outArgs)
   at System.Runtime.Remoting.Messaging.StackBuilderSink.AsyncProcessMessage(IMessage msg, IMessageSink replySink)
Exception rethrown at [0]:
   at System.Runtime.Remoting.Proxies.RealProxy.EndInvokeHelper(Message reqMsg, Boolean bProxyCase)
   at System.Runtime.Remoting.Proxies.RemotingProxy.Invoke(Object NotUsed, MessageData& msgData)
   at Microsoft.SqlServer.Configuration.WizardFramework.BlockingWorker.WorkerCallback.EndInvoke(IAsyncResult result)
   at Microsoft.SqlServer.Configuration.WizardFramework.BlockingWorker.PerformWork(MethodInvoker method)
   at Microsoft.SqlServer.Configuration.WizardFramework.PageHost.PageLeaving(PageChangeReason reason)
   at Microsoft.SqlServer.Configuration.InstallWizardFramework.InstallWizardPageHost.PageLeaving(PageChangeReason reason)
   at Microsoft.SqlServer.Configuration.InstallWizardFramework.InstallWizardTabbedPageHost.PageLeaving(PageChangeReason reason)
   at Microsoft.SqlServer.Configuration.WizardFramework.UIHost.set_SelectedPageIndex(Int32 value)
   at Microsoft.SqlServer.Configuration.WizardFramework.NavigationButtons.nextButton_Click(Object sender, EventArgs e)
   at System.Windows.Forms.Control.OnClick(EventArgs e)
   at System.Windows.Forms.Button.OnMouseUp(MouseEventArgs mevent)
   at System.Windows.Forms.Control.WmMouseUp(Message& m, MouseButtons button, Int32 clicks)
   at System.Windows.Forms.Control.WndProc(Message& m)
   at System.Windows.Forms.ButtonBase.WndProc(Message& m)
   at System.Windows.Forms.Button.WndProc(Message& m)
   at System.Windows.Forms.Control.ControlNativewindow.WndProc(Message& m)
   at System.Windows.Forms.Nativewindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)

I've done mixed installs like this before without incident. Rather than try again, I thought I might request some suggestions as to what might have been the cause. Thanks

Transaction Log Backup fails Exclusive access could not be obtained because the database is in use

Posted: 15 Jul 2013 06:42 AM PDT

I have an AM and a PM transaction log backup. The AM runs fine; the PM fails with the following error.

Date: 07/15/2013 03:00:00 PM
Log: Job History (Backup_DataWarehouse_TransactionLog_PM_Test.Subplan_1)
Step ID: 1
Server: MyServer
Job Name: Backup_DataWarehouse_TransactionLog_PM_Test.Subplan_1
Step Name: Subplan_1
Duration: 00:00:22
Sql Severity: 0
Sql Message ID: 0
Operator Emailed:
Operator Net sent:
Operator Paged:
Retries Attempted: 0

Message:
Executed as user: MyDomain\SQLServerAgent. Microsoft (R) SQL Server Execute Package Utility Version 10.50.4000.0 for 64-bit Copyright (C) Microsoft Corporation 2010. All rights reserved.
Started: 03:00:00 PM
Progress: 2013-07-15 15:00:00.77 Source: {247A3977-1976-4E4B-8B35-B14F86B49B5F} Executing query "DECLARE @Guid UNIQUEIDENTIFIER EXECUTE msdb..sp...".: 100% complete End Progress
Error: 2013-07-15 15:00:21.01 Code: 0xC002F210 Source: Back Up Database (Transaction Log) Execute SQL Task Description: Executing the query "BACKUP LOG [DataWarehouse] TO DISK = N'D:\Backup\Tran..." failed with the following error: "Exclusive access could not be obtained because the database is in use. BACKUP LOG is terminating abnormally.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly. End Error
DTExec: The package execution returned DTSER_FAILURE (1).
Started: 03:00:00 PM Finished: 03:00:21 PM Elapsed: 20.811 seconds.
The package execution failed. The step failed.

Any ideas?

SQL Server Crash Dump

Posted: 15 Jul 2013 09:39 PM PDT

Hi all, I have a strange problem with my live cluster environment. I have a dev environment in which a procedure completes in approx 4 mins. In my live environment, if the same procedure is executed, it runs: 1st time = 18 sec; 2nd time = 18-20 sec; 3rd time it hangs the server. At that point I cannot open a new window from Management Studio, and I cannot run any query in any other window on that server. This keeps on for several hours or more than 1 day, and then the server crashes. We have disabled hyperthreading and have left max worker threads at the default. My SQL version: Microsoft SQL Server 2008 R2 (SP1) - 10.50.2500.0 (X64) Jun 17 2011 00:54:03 Copyright (c) Microsoft Corporation Enterprise Edition (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1). Thanks in advance. Harsha

dear all,

Posted: 15 Jul 2013 04:08 PM PDT

Can anyone please let me know: in SQL 2012, is REPLICATION deprecated as a high availability solution, or what? I got this doubt when I tried to look up high availability solutions in SQL 2012; nowhere do I see REPLICATION listed for 2012. Can anyone please let me know? Thanks in advance.

Removing article from existing Peer to Peer replication

Posted: 15 Jul 2013 11:32 AM PDT

Hi everyone, we have a peer-to-peer replication in SQL Server 2008 R2. We want to remove an article from the replication. What is the safest way to do that (through T-SQL), and will it cause any problem for the replication operation? Thanks

List of when commands were introduced

Posted: 15 Jul 2013 08:34 AM PDT

Has anyone compiled a list of SQL commands and which version of MSSQL introduced them? We're working on some maintenance & admin scripts that will be used on multiple instances of different versions, and I'm not certain which commands will work on SQL 2005 or 2000. Dan

SQL Backup File Size

Posted: 15 Jul 2013 05:57 AM PDT

Hello all, I need to monitor the size of the most recent SQL .BAK file in each folder (each database has its own folder) and insert the name and size into a SQL table. This can be a job which would run every day. Can someone please suggest the best way to achieve this? Thanks,
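Not from the thread, but one possible sketch: rather than scanning the file system, the backup history in msdb already records the path and size of each .BAK, so the most recent full backup per database can be read from there and inserted into a monitoring table (the table name in the INSERT is an assumption):

```sql
-- Sketch: latest full backup per database from msdb history.
-- backup_size is in bytes; physical_device_name is the .BAK path.
INSERT INTO dbo.BackupSizeLog (database_name, bak_file, size_mb, finished)  -- hypothetical table
SELECT bs.database_name,
       bmf.physical_device_name,
       bs.backup_size / 1048576.0 AS size_mb,
       bs.backup_finish_date
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf
  ON bs.media_set_id = bmf.media_set_id
WHERE bs.type = 'D'   -- full database backups
  AND bs.backup_finish_date =
      (SELECT MAX(b2.backup_finish_date)
         FROM msdb.dbo.backupset AS b2
        WHERE b2.database_name = bs.database_name
          AND b2.type = 'D');
```

Scheduled as a daily Agent job step, this avoids any xp_cmdshell or file-system access; note it reports the size SQL Server wrote, which for uncompressed backups matches the file size on disk.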

SQL Server 2008 R2 to SQL Server 2008 R2 Linked Server Issues

Posted: 15 Jul 2013 06:33 AM PDT

Greetings, I'm trying to establish a linked-server connection from our Regional SQL Server 2008 R2 (R1) to the Division SQL Server 2008 R2 (D1).

Both our server (R1) and the Division server (D1) reside on the same NT domain. All users who would be accessing the linked server (D1) have permissions to do so - this has been verified by connecting directly to D1 via SSMS. At this point, we're certain it's not a permissions issue, per se.

That said, we're having some issues, and I'm hoping someone here has solved these before and can help me out - Googling and searching the forums hasn't turned up precisely what we're dealing with, so that's why I'm posting here.

Scenario One: I connect to R1 via SSMS on my laptop. I then execute a simple SELECT statement against D1 using the linked server connection:

[code="sql"]SELECT * FROM [D1].[USERS].[DATA].[USER_DATA][/code]

At which point, I get the following error message:

[code="other"]Msg 18456, Level 14, State 1, Line 1
Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'.[/code]

After reading up on linked server connections and permissions, I execute the following, though technically I shouldn't have had to:

[code="sql"]EXEC sp_addlinkedsrvlogin 'D1', 'true';[/code]

Again, I try the simple SELECT statement above, and get the same error message.

Scenario Two: I use Microsoft's Remote Desktop (RDP) program to connect to R1. I then open SSMS on R1's remote desktop, and attempt the simple SELECT:

[code="sql"]SELECT * FROM [D1].[USERS].[DATA].[USER_DATA][/code]

And I get valid data from D1.

The question is this: why is R1 not passing the NT credentials - despite being configured to do so in the D1 linked server definition - when a user connects to R1 via SSMS, and yet it does pass the NT credentials of the RDP user when the query is executed through an RDP session with SSMS? Fundamentally: is there a way to change this behavior? We need to be able to execute queries against the linked server within our local SSMS sessions on our local workstations without having to RDP every time; furthermore, we need to replicate data from this linked server.

Find all users mapped to a credential

Posted: 15 Jul 2013 05:57 AM PDT

I just stumbled upon a credential in SSMS under Security-Credentials. I can see the credential in sys.credentials. I can examine the properties of an individual login to see if they are mapped to this credential. Is there any way to see all logins this credential is mapped to?
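Not part of the original post, but one possible way to get this list: sys.server_principals carries a credential_id column, so a join against sys.credentials shows every login mapped to each credential:

```sql
-- Sketch: all logins mapped to any credential on the server.
SELECT p.name AS login_name,
       c.name AS credential_name
FROM sys.server_principals AS p
JOIN sys.credentials       AS c
  ON p.credential_id = c.credential_id
ORDER BY c.name, p.name;
```

Add a WHERE clause on c.name to narrow it to the one credential in question.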

[SQL 2012] Localisation and SQL 2012 deployment


Localisation and SQL 2012 deployment

Posted: 16 Jul 2013 02:45 AM PDT

Hi guys, we are deploying an application to several locations from the UK, one of which is Moscow. The application is dependent on SQL Express 2012, so as part of the installer we have a silent install of SQL 2012 using something like the following:

SQLEXPR32_x86_ENU.exe /qs /IACCEPTSQLSERVERLICENSETERMS /INSTANCENAME=xxxxxxx /SQLSYSADMINACCOUNTS=\"BUILTIN\\ADMINISTRATORS\" /ACTION=Install /FEATURES=SQLEngine /SECURITYMODE=SQL /SAPWD=xxxxxxxx";

When we install the app locally, the SQL install works perfectly, but when we install it in Moscow, it complains about BUILTIN\\ADMINISTRATORS, as the group is not named that in Russian. Has anyone dealt with language variants for security groups on the install? Many thanks for any guidance! Peter

USE Database

Posted: 15 Jul 2013 10:50 PM PDT

Hi team, I have two databases in one instance. Whenever I execute any scripts, I use "USE DATABASE" and then execute the scripts. Is there any way to mention the instance name instead of the database name? E.g.: USE INSTANCE GO USE DATABASE GO EXEC Store_spoc.. Please suggest...

Report URLs may use only http://, https://, ftp://, file:// error

Posted: 15 Jul 2013 08:12 PM PDT

Friend, I am trying to generate a report in BIDS. The report is generated from SQL Server 2012 through a Stored Procedure. We are in the process of defining Indexes for the table used to generated the report. There are thousands of records in the table. The report is getting generated after 2 mins in BIDS. The report has 3 pages. I am getting an error while trying to navigate to the second page. Has anybody come across this error. Is this error due to some timeout issue. I am attaching the screenshot. [img]C:\Users\154302\Desktop\Error.JPEG[/img]I executed the Stored Procedure in SSMS and it is executing successfully in 2 mins. I cannot find the reason for this issue. Any help would be appreciated.

Best Practice Network Adapter Configuration for AlwaysOn

Posted: 15 Jul 2013 03:29 PM PDT

Hi, just wondering if anyone would be able to give any advice here. We are currently setting up AlwaysOn in our new SQL 2012 environment, and our sysadmin team want to have 2 network adapters in each node of a 2-node cluster - one adapter for regular OLTP traffic and the other for AlwaysOn replication traffic. I have been asked to find out the best practice for network adapter configuration with AlwaysOn, but haven't really been able to find anything online. Thanks in advance, JK

[T-SQL] CALCULATE 2 years back data


CALCULATE 2 years back data

Posted: 15 Jul 2013 07:58 AM PDT

How do I calculate a date 2 years back? Suppose today is 07/15/2013; 2 years back is 07/15/2011. How do I calculate the exact date?
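DATEADD with a negative year offset does exactly this:

```sql
-- 2 years before the current date/time:
SELECT DATEADD(year, -2, GETDATE()) AS two_years_ago;

-- 2 years before a specific date (yyyymmdd is a language-safe literal):
SELECT DATEADD(year, -2, '20130715') AS result;  -- 2011-07-15
```

One edge case worth knowing: DATEADD(year, -2, '20120229') returns Feb 28, since the target year has no Feb 29.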

The conversion of a nvarchar data type to a datetime data type resulted in an out-of-range value

Posted: 15 Jul 2013 11:43 PM PDT

I'm using this syntax to get a value, but it gives me an error in Visual Studio: "The conversion of a nvarchar data type to a datetime data type resulted in an out-of-range value". The syntax I'm using in Visual Studio:

select h.branch_n, type_n, count(incident_id) as x
from incident as a
join sectn_dept as c on a.sectn_dept_id = c.sectn_dept_id
join inc_cat on a.inc_cat_id = inc_cat.inc_cat_id
join cat_proces_mngt on a.inc_cat_id = cat_proces_mngt.inc_cat_id
join time on a.date_logged1 = time.pk_date
join sectn as g on c.sectn_id = g.sectn_id
join branch as h on g.branch_id = h.branch_id
where exists (
    select f.branch_n, count(*) as y
    from incident as b
    join sectn_dept as d on b.sectn_dept_id = d.sectn_dept_id
    join inc_cat on b.inc_cat_id = inc_cat.inc_cat_id
    join cat_proces_mngt on b.inc_cat_id = cat_proces_mngt.inc_cat_id
    join sectn as e on d.sectn_id = e.sectn_id
    join branch as f on e.branch_id = f.branch_id
    where c.sectn_n = d.sectn_n
      and date_logged1 >= @week
      and date_logged1 < dateadd(week, 1, @week1)
      and location_id = 2
      and type_id in (1,2,3)
      and week >= @week
    group by f.branch_n
    having count(*) > 10 and count(*) <= 30
)
and date_logged1 >= @week
and date_logged1 < dateadd(week, 1, @week1)
and location_id = 2
and type_id in (1,2,3)
and week >= @week
group by h.branch_n, type_n
order by h.branch_n, type_n desc

When I run it in Microsoft SQL Server Management Studio, it works fine and I get results like this:

Branch_n -- type_n -- x
Cook  ST Change  6
Cook  Question   7
Cook  Incident   5
BV    ST Change  7
BV    Question   5
BV    Incident   12
G     ST Change  7
G     Question   3

insert or update according the field value

Posted: 15 Jul 2013 06:59 PM PDT

Hi. I'm trying to write a stored procedure. First, if the record exists, it should look at recordtime. If recordtime is null, it updates recordtime; otherwise it updates updatedtime. I wrote code like this:

set @isexist = (select count(*)
                from GNL_OgretmenDevamsizlikGirisLog
                where OgretmenID = @ogretmenID
                  and DersYiliID = @dersyiliID
                  and SinifID = @sinifID
                  and DersID = @dersID
                  and Convert(date, GirisTarihi) = Convert(date, getdate()))
if (@isexist > 0)
begin
end

I want to write the update code inside the IF block, but I could not work it out. How can I check recordtime and updatedtime and update the right one? Thanks.
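One way to sketch the update is a single UPDATE with CASE expressions, so only one of the two columns changes depending on whether recordtime is null. The column names RecordTime and UpdatedTime below are assumptions based on the description, not the poster's actual schema:

```sql
-- Sketch: update RecordTime if it is NULL, otherwise UpdatedTime,
-- using the same filters as the existence check.
IF @isexist > 0
BEGIN
    UPDATE GNL_OgretmenDevamsizlikGirisLog
       SET RecordTime  = CASE WHEN RecordTime IS NULL     THEN GETDATE() ELSE RecordTime  END,
           UpdatedTime = CASE WHEN RecordTime IS NOT NULL THEN GETDATE() ELSE UpdatedTime END
     WHERE OgretmenID = @ogretmenID
       AND DersYiliID = @dersyiliID
       AND SinifID    = @sinifID
       AND DersID     = @dersID
       AND CONVERT(date, GirisTarihi) = CONVERT(date, GETDATE());
END
```

Because both CASE branches test the same condition, exactly one column is modified per row, and the separate COUNT(*) check could even be dropped (an UPDATE matching zero rows is harmless).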

Performance Tuning Large update statement--HELP

Posted: 15 Jul 2013 02:50 AM PDT

I have a 1.2 TB database in which I am updating a couple of columns across 11 tables:

T1 (F_name, L_Name, Address)
T2 (User_Id, Email_Address)
...etc.

All tables have different columns. I have written a WHILE loop that produces the UPDATE statement for each table and then executes it. The UPDATE statement looks like this:

USE DBNAME;
UPDATE T1
SET T1.F_Name = fData(F_Name),
    T1.L_Name = fData(L_Name),
    T1.Address = fData(Address)

fData is a function that obfuscates the field it is passed, so the data can be used in our DEV environment without putting sensitive production data into developers' hands. I am not concerned about table locks in this instance, as the process runs on my personal test box where the database sits.

The issue: running this statement on T1 (~1.2 million rows) takes about 2-3 minutes. Running it on T6 (~250 million rows), with similar column types, is taking over 12 hours. I have tried extrapolating times, and this should take closer to 2.5 hours. I'm wondering what performance enhancements I can make to this process.

Things I'm considering: setting the database to simple recovery for logging purposes; completing the updates in batches (unneeded for locking purposes, but will SQL retrieving and updating smaller sets be faster than the whole table at once?); updating based on table indexes; disabling table indexes; setting the transaction isolation level... any other thoughts? Or can you expound on how each of these would help a LARGE table update, disregarding any blocking needs?

Test box specs: dual quad-core processors, 96 GB RAM. SQL specs: SQL 2008 R2.
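The batching idea can be sketched as follows. This is not the poster's code; the batch size, the table, and the Obfuscated flag column used to track progress are all assumptions for illustration (on a real table you would batch on key ranges or a similar marker):

```sql
-- Sketch: update a huge table in batches so each transaction stays small.
-- In SIMPLE recovery, small transactions let log space be reused instead
-- of growing one giant log for a single 250M-row update.
DECLARE @rows int = 1;

WHILE @rows > 0
BEGIN
    UPDATE TOP (50000) t           -- hypothetical batch size
       SET Email_Address = dbo.fData(Email_Address),
           Obfuscated    = 1       -- hypothetical progress flag
      FROM dbo.T6 AS t
     WHERE t.Obfuscated = 0;

    SET @rows = @@ROWCOUNT;
    CHECKPOINT;                    -- in SIMPLE recovery, allows log truncation
END
```

Each iteration commits, so the log never has to hold the whole table's worth of changes; disabling nonclustered indexes on the updated columns first, then rebuilding afterwards, is the other lever usually worth testing.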

Forecasting Year Numbers

Posted: 15 Jul 2013 01:40 AM PDT

Hi all, apologies if this is posted in the wrong section, but I didn't really know where it belonged. I have a set of data (snippet of matrix below):

Period 1
Asset Category   No Of Jobs   Sum Of Spend   Av Of Spend
                 36           £13,972.49     £388.12
A                1044         £266,798.16    £255.55
B                911          £187,905.17    £206.26
C                185          £44,863.45     £242.51
D                195          £106,987.62    £548.65
New Equipment    25           £61,087.37     £2,443.49
Planned Only     41           £24,576.29     £599.42
Total            2437         £706,190.55

I actually have 8 periods' worth of info. What I want to be able to do is forecast the remaining 4 periods based on the 8 periods I do have. So ideally I would take asset cat "A" and say, for argument's sake, I have 1000 jobs and £10,000 spend, with an average of £10 per job. I want to apply that average to the average number of jobs: the average would be 1000 jobs divided by 8 periods = 125 jobs a period, so I want periods 9-12 to show as 125 * £10. Does that make any sense??? I could do it easily in Excel, but how can I do it in SSRS? Thanks, Carl.

[SQL Server 2008 issues] quick edit data


quick edit data

Posted: 15 Jul 2013 05:49 AM PDT

I recently found a quick way to do some data editing for a table with a filter added: the design view window in Management Studio. When creating a new view, I can edit the data in the result panel. Is that a valid way of doing data modification? Thanks,

Role Permission

Posted: 15 Jul 2013 05:52 PM PDT

Hi all, simple question: if I grant a user membership in the "dbcreator" role, does that user have access to create databases and database objects? Thanks!

create linked server using windows authentication

Posted: 15 Jul 2013 05:16 PM PDT

Hi, I have a peculiar requirement but was not able to get very far. I set up a linked server on InstA (mixed mode authentication) to fetch data from InstB (Windows only). The user accessing the linked server is a SQL-authenticated user on InstA. I use the script below (from another post on sqlservercentral.com):

[code="sql"]DECLARE @strLinkedServer NVARCHAR(100)
SELECT @strLinkedServer = 'SRV2\InstB'
EXECUTE master.dbo.sp_addlinkedserver @strLinkedServer, 'SQL Server'
EXECUTE master.dbo.sp_addlinkedsrvlogin
    @rmtsrvname = @strLinkedServer,
    @useself = N'False',
    @locallogin = N'sqluserA',          -- add local login
    @rmtuser = N'Domain1\user',         -- add the remote login
    @rmtpassword = 'StrongPassword'     -- add the remote login password[/code]

This seems to work in terms of creating the linked server. While trying to use it, I get the error:

[font="Courier New"]Msg 18456, Level 14, State 1, Server SRV2\InstB, Line 1
Login failed for user 'Domain1\user'.[/font]

And I get the below error in the SQL Server error log on InstB:

[b][font="Courier New"]Login failed for user 'Domain1\user'. Reason: An attempt to login using SQL authentication failed. Server is configured for Windows authentication only. [CLIENT: 10.1.1.11][/font][/b]

It looks like I cannot create a linked server mapping a local SQL-authenticated user to a domain user account for the remote login. Is there a workaround for this? Cheers

DATABASE BACK UP PROBLEM

Posted: 15 Jul 2013 04:46 PM PDT

Correct me if I am posting my problem in the wrong forum. Here in my company I am using the full recovery model for the database backup plan. The process is like this:

Sunday: full backup
Monday-Saturday: differential backup each day
Plus a transaction log backup every hour.

I.e. on Sunday at 12:00 AM a full backup is taken, then hourly transaction log backups, and on Monday at 12:00 AM a differential backup, with hourly log backups as before. (This gives us a week's set of the database.)

Now, the problem I am facing: suppose that between the Sunday full backup and the Monday differential backup, some developer takes a full backup of the database in an emergency or some other situation. In that case, the Monday differential backup will be based on the latest full backup (i.e. the developer's one), which breaks my process. So, can anyone suggest a script that gives the developer a full backup but does not break my process? (I believe it's all about the LSN; I am very new to SQL.) Request to all SQL Server masters: please help! :-)
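A copy-only full backup exists for exactly this scenario: it does not reset the differential base, so the scheduled differentials keep pointing at the Sunday full. A sketch of what the developer could run instead (path and database name are placeholders):

```sql
-- A copy-only full backup does not become the new differential base
-- (available in SQL Server 2005 and later).
BACKUP DATABASE [YourDatabase]
TO DISK = N'D:\Backup\YourDatabase_copyonly.bak'
WITH COPY_ONLY, INIT;
```

The developer gets a restorable full backup, and the Monday differential still chains off the scheduled Sunday full.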

Handling error message

Posted: 14 Jul 2013 10:10 PM PDT

Hi, while writing a script I have to check whether an input value is correct and then display an error message. What I did is use RAISERROR:

[code="sql"]RAISERROR('Invalid Amount', 20, 1) WITH LOG[/code]

Here is part of my code:

[code="sql"]IF NOT EXISTS (SELECT (a.VoucherNo) FROM GV_Voucher a
               WHERE a.TransactionID = @TransactionID AND a.VoucherNo = @FirstVoucherNo)
BEGIN
    SET @ErrorMessage = @ErrorMessage + ' ' + 'First ' + @message2
    RAISERROR('User Define: Invalid First Voucher No |', 20, 1) WITH LOG
END
IF NOT EXISTS (SELECT (a.VoucherNo) FROM GV_Voucher a
               WHERE a.TransactionID = @TransactionID AND a.VoucherNo = @LastVoucherNo)
BEGIN
    RAISERROR('User Define: Invalid Last Voucher No |', 20, 1) WITH LOG
    SET @ErrorMessage = @ErrorMessage + ' ' + 'Last' + @message2
END[/code]

If I handle errors in this manner, will it have any performance effect on my procedure?

Tempdb files - Load test

Posted: 15 Jul 2013 06:09 AM PDT

How do I arrive at the number of tempdb files needed for a SQL 2008 two-processor server? Everyone mentions having 1/4, 1/2, or a number equal to the number of CPUs, but ultimately it varies depending on the setup and the load. So how can I perform a load test and arrive at an optimal number of tempdb files?
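Not an answer from the thread, but one measurable signal during a load test is allocation-page contention in tempdb: if sessions pile up on PAGELATCH waits for pages 2:1:1 (PFS), 2:1:2 (GAM), or 2:1:3 (SGAM), more data files may help; if they never appear under load, adding files likely won't. A sketch of the check to run while the load test is going:

```sql
-- Sketch: look for latch contention on tempdb allocation pages while
-- the load test runs. Rows here with resource 2:1:1 / 2:1:2 / 2:1:3
-- indicate PFS/GAM/SGAM contention.
SELECT session_id,
       wait_type,
       wait_duration_ms,
       resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGELATCH%'
  AND resource_description LIKE '2:%';
```

Sampling this every few seconds during the test, then re-running the test with a different file count, gives an empirical basis for the number rather than a rule of thumb.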

Parsing a string...between a comma and space

Posted: 16 Aug 2011 03:53 AM PDT

I have a concatenated name field which I'm trying to parse out. The names are entered as "Lastname, Firstname" or sometimes "Lastname, Firstname Middleinit" (notice the space between the first name and the middle initial). To parse out the name, I have the following:

[code="sql"]SELECT LTRIM(RIGHT(@Name, CHARINDEX(',', REVERSE(@Name)) - 1)) -- TO FIND FIRSTNAME
SELECT LEFT(@Name, CHARINDEX(',', @Name) - 1)                  -- TO FIND LASTNAME[/code]

But the code for the first name only picks up the correct first name in cases where no initial was entered. How do I select out the substring between the comma and the space after the first name (so it doesn't matter whether a middle initial was entered or not)? For example: "MOUSE, MICKEY X" or "MOUSE, MICKEY" = MICKEY MOUSE. Help? Thank you!!
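One possible sketch (not from the thread): take everything after the comma, trim it, then cut at the next space if one exists. The CASE handles the no-middle-initial form where there is no second space:

```sql
-- Sketch: extract the first name from 'Last, First' or 'Last, First M'.
DECLARE @Name varchar(100) = 'MOUSE, MICKEY X';

DECLARE @AfterComma varchar(100) =
    LTRIM(SUBSTRING(@Name, CHARINDEX(',', @Name) + 1, LEN(@Name)));

SELECT CASE WHEN CHARINDEX(' ', @AfterComma) > 0
            THEN LEFT(@AfterComma, CHARINDEX(' ', @AfterComma) - 1)
            ELSE @AfterComma
       END AS FirstName;   -- 'MICKEY' for both sample inputs
```

With @Name = 'MOUSE, MICKEY' the CASE falls through to the ELSE branch and still returns 'MICKEY'.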

Truncating data on a remote Server

Posted: 15 Jul 2013 04:29 AM PDT

Hello, I am trying to truncate data on a remote server.

It works from Query Analyzer like this:

exec RemoteServerName.DatabaseName.DBO.sp_executesql "truncate table DBO.tablename"

It does NOT work from Query Analyzer like this:

Declare @cmd varchar(2000)
set @cmd = @servername + '.' + @databasename + '_Archive' + '.' + @schema + '.' + 'sp_executesql'
         + ' ' + '"' + 'truncate table' + ' ' + @schema + '.' + @tablename + '"'
exec xp_cmdshell @cmd

which produced this error:

'RemoteServerName.DatabaseName.DBO.sp_executesql' is not recognized as an internal or external command, operable program or batch file.

Please help!
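The error arises because xp_cmdshell hands the string to the OS shell, not to SQL Server. One possible sketch instead (not from the thread) is to build the statement and run it on the linked server with EXEC ... AT; the variable values below are placeholders:

```sql
-- Sketch: run a TRUNCATE on a linked server. 'RPC Out' must be enabled
-- on the linked server, and the server name after AT must be a literal
-- identifier, not a variable.
DECLARE @schema sysname = N'DBO',          -- placeholder
        @tablename sysname = N'tablename'; -- placeholder

DECLARE @sql nvarchar(2000) =
    N'TRUNCATE TABLE ' + QUOTENAME(@schema) + N'.' + QUOTENAME(@tablename);

EXEC (@sql) AT [RemoteServerName];
```

QUOTENAME also guards against odd characters in the object names when the statement is assembled dynamically.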

User datafeed overwrite with ssis package

Posted: 14 Jul 2013 10:56 PM PDT

Hi, I have tables user and userprofiles. I get a data feed every night in a CSV file, and I need to create an SSIS package to overwrite the data every day: update the existing user data, add the new user data, and disable any user who is not in the data feed. In the past I imported this data into a new table called userdatafeed and overwrote the data using a stored procedure, but now the requirement is to implement this as an SSIS package. What data flow items can I use to achieve this?

Selecting records in table only when ALL related records in other table satisfy multiple conditions

Posted: 15 Jul 2013 03:23 AM PDT

Hello and thank you for your time. I feel like this should be simple but I can't seem to work it out... maybe I have been working on it so long that I need another perspective.

I have a Lab_result_obr_p table that contains labs that have been resulted. The PK is LabNum (the lab number) and the FK is OrderNum (the order number), so in this table one order # can have multiple lab #s.

Table: Lab_result_obr_p
LabNum     OrderNum
A29E02FE   D2C6DDA8
4A563D24   D2C6DDA8
0F53BC60   DC01E4EB
GS43A689   DC01E4EB
F943C7034  DF2654G7

The other table is Lab_rsults_obx, and it contains the lab components and individual component results. In this table, each lab # from above will have multiple lab components and results.

Table: Lab_rsults_obx
LabNum     LabC      Flag  Value
A29E02FE   WBC       N     3.5
A29E02FE   RBC       N     260
4A563D24   LMP:      N     504
4A563D24   RH TYPE   N     DNR
0F53BC60   BACTERIA  N     TNP
GS43A689   MCV       N     30
GS43A689   MCH       N     40
F943C7034  RH TYPE   Y     Negative

I need to select all order numbers from Lab_result_obr_p where all components of all of the order's labs have an Abnormal_Flag of N and do not have a value of TNP. So if an order has two labs, all the components for both labs must have an N for Abnormal_Flag and must not have a value of TNP for the order number to be selected. So for the data above my result would look like:

OrderNum
D2C6DDA8
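This is a relational-division pattern, and one possible sketch (not from the thread) is NOT EXISTS over any disqualifying component. Column names follow the sample tables above:

```sql
-- Sketch: orders where no component of any of the order's labs
-- has a flag other than 'N' or a value of 'TNP'.
SELECT DISTINCT r.OrderNum
FROM Lab_result_obr_p AS r
WHERE NOT EXISTS
      (SELECT 1
         FROM Lab_result_obr_p AS r2
         JOIN Lab_rsults_obx   AS x
           ON x.LabNum = r2.LabNum
        WHERE r2.OrderNum = r.OrderNum
          AND (x.Flag <> 'N' OR x.Value = 'TNP'));
```

For the sample data this returns only D2C6DDA8: order DC01E4EB is excluded by the TNP value and DF2654G7 by the Y flag.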

Get date only and order by in correct way

Posted: 15 Jul 2013 02:43 AM PDT

I converted orderdate to get the date only using the code below:

convert(varchar(20), orderdate, 101)

The problem is that orderdate can no longer be sorted correctly (because it is a string now). For example, 05/20/2010 will list first compared to 10/20/2005 with ORDER BY orderdate ASC. How do I solve this problem?
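The fix is to format in the SELECT list but keep the ORDER BY on the original datetime column. A sketch (the table name is a placeholder):

```sql
-- Sort on the underlying datetime, display the formatted string.
SELECT CONVERT(varchar(20), orderdate, 101) AS order_date_text
FROM   dbo.orders            -- hypothetical table name
ORDER BY orderdate ASC;      -- chronological, not alphabetical

-- Alternatively, on SQL Server 2008+, cast to the date type,
-- which drops the time portion but still sorts chronologically:
-- SELECT CAST(orderdate AS date) AS order_date ... ORDER BY order_date;
```

ORDER BY may reference a column that is not in the formatted form shown, so both the display and the sort come out right.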

Login Frequency for a Login on a database

Posted: 15 Jul 2013 04:06 AM PDT

Hello SQL experts, I have a SQL Server 2008 Standard Edition server with 3 databases using both SQL logins and Windows logins. Is there any way to find out how frequently a login (Windows or SQL) is accessing a database per week? (For example: say there is a database named "sql2k8", a SQL login "login_sql", and a Windows login "login_windows" that are part of that DB. I would like to find out how many times the logins "login_sql" and "login_windows" access the database per week.) Hope I'm clear in explaining. Thanks for all your help. Regards

Permissions - Domain Question

Posted: 15 Jul 2013 02:48 AM PDT

Bit of a strange one. We have a user who has the db_owner role but still wasn't able to perform an insert. We found that a group had denydatawriter and that if we changed this, everything was fine. Here is the strange thing: the user is not in this specific group. He is in a group of the same name but coming from a different domain, so is SQL only looking at the latter part of the name?

CMS to store data

Posted: 15 Jul 2013 02:58 AM PDT

Can we store data about the registered servers in a CMS locally, to query further? For example, to store failed jobs, database status, etc. If yes, let me know how. Thanks

Performance issue on DB

Posted: 15 Jul 2013 12:52 AM PDT

Hi, we have a SQL 2005 prod server where one of the DBs has become very slow over the past 5 days. I have observed blocking happening at regular intervals over the past 3 days, and I have also checked fragmentation: some of the non-clustered indexes and heaps have avg_fragmentation_in_percent of more than 40%. Can you please help us with how to troubleshoot this issue? Many thanks.

Replication DB - Dead lock issue

Posted: 29 Jun 2013 03:32 PM PDT

Hi, we have one database which is a replication DB. Sometimes, at the beginning of the month, there is a lot of user activity going on and we have a deadlock issue. Does anyone have an idea how to prevent deadlocks in a replication DB? We currently have to manually kill the user sessions one by one. Has someone had the same kind of issue, and do you have any ideas? Thanks,

Alerts for Database Mirroring

Posted: 15 Jul 2013 01:30 AM PDT

Hi, I'm trying to get SQL Server to alert me via email when the primary instance loses connection with the mirrored instance (i.e. mirroring becomes suspended/inactive). So far, the methods I've found require Service Broker to be enabled on the mirrored DB; unfortunately, this cannot be enabled in this scenario. Is there any other simple way to get MSSQL to notify me (via email) when mirroring isn't active? Thanks.
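Not from the thread, but one possible route that avoids Service Broker on the mirrored database: a WMI event alert on the DATABASE_MIRRORING_STATE_CHANGE server event, wired to an operator through Database Mail. The instance name in the namespace and the operator name are assumptions:

```sql
-- Sketch: alert on any mirroring state change via a WMI event alert.
-- Requires SQL Server Agent running, Database Mail configured, and an
-- operator already defined. 'MSSQLSERVER' = default instance namespace.
EXEC msdb.dbo.sp_add_alert
     @name = N'DBM state change',
     @wmi_namespace = N'\\.\root\Microsoft\SqlServer\ServerEvents\MSSQLSERVER',
     @wmi_query = N'SELECT * FROM DATABASE_MIRRORING_STATE_CHANGE';

EXEC msdb.dbo.sp_add_notification
     @alert_name = N'DBM state change',
     @operator_name = N'DBA team',   -- hypothetical operator
     @notification_method = 1;       -- 1 = e-mail
```

The WHERE clause of the WMI query can be narrowed to specific State values (e.g. suspended) if alerting on every transition is too noisy.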

ERROR on SELECT * FROM OPENDATASOURCE using ('Microsoft.ACE.OLEDB.12.0)

Posted: 15 Jul 2013 01:21 AM PDT

Hello, can someone point me in the proper direction here, or is it just a stupid question (not sure what to think)?

I have automated file processing set up on a SQL2010 server. What it does is read a number of Excel 2010 (.xlsx) files and convert (extract) specific worksheets from those files into CSV-delimited files for uploading to the website. My issue is that I have 6 Excel files, and at least 2 of them error out when I do the SELECT on them.

As an example, the first SELECT below works just fine, but the second errors with:

Msg 7314, Level 16, State 1, Line 2
The OLE DB provider "Microsoft.ACE.OLEDB.12.0" for linked server "(null)" does not contain the table "'Centers$'". The table either does not exist or the current user does not have permissions on that table.

All files are in the same folder and are created and updated by the same user. The folder is a public share for all internal users, so all users have all permissions on the files. The path is a local path for the server, as in I could easily provide a path like 'D:\USERSHARE\WebsiteLiveFiles\BAndSConfig.xlsx' and it still works, when it works (in fact all paths are local in the actual job script). Yes, I know it is not a good idea to have the SQL server and file server on one machine, but we do not have another chassis at this moment. I cannot provide a sample of the files, as the data is proprietary and confidential, and since I am not sure what causes the issue I cannot replicate it either. I have tried copying a worksheet from files that work into one that errors out; the copied worksheet works just fine when selecting from the same file, so it is not file permissions.

#1 (WORKS)
SELECT * FROM OPENDATASOURCE ('Microsoft.ACE.OLEDB.12.0',
    'Data Source="\\MYSERVER\USERSHARE\WebsiteLiveFiles\BAndSConfig.xlsx";Extended Properties="Excel 12.0;HDR=NO;IMEX=1"')...['SPC Data$']

#2 (ERRORS OUT)
SELECT * FROM OPENDATASOURCE ('Microsoft.ACE.OLEDB.12.0',
    'Data Source="\\MYSERVER\USERSHARE\WebsiteLiveFiles\DPC.xlsx";Extended Properties="Excel 12.0;HDR=NO;IMEX=1"')...['Centers$']

SQLCMD seems to use a different code page

Posted: 15 Jul 2013 12:50 AM PDT

Odd one, this. I use the SQLCMD mode of SSMS extensively when managing major projects. This way I can get developers to code each procedure into a separate file, create rollback scripts, manage it all under source control, and most of all, use the :r and :setvar commands to build install scripts that can run everything.

This has worked perfectly for a long time, until one of the developers noticed a peculiarity with a stored procedure. The procedure returns, as one of its fields, a piece of text with a number prefixed with the £ symbol (e.g. "Remit by return £19.55"). For example:

CREATE PROCEDURE [stp_testsymbol]
AS
SELECT 'This is the £ symbol';
GO

If I execute this and use sp_helptext to query it, there is no problem. If I switch to SQLCMD mode and execute the script, there is no problem. But, if I then save the script and execute it from another SQLCMD script using :r "C:\testscript.sql", what I end up with is this:

SELECT 'This is the � symbol';

Does anyone know of any way to stop this?
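The symptom is consistent with the included file being read under a different code page than the one it was saved with: £ (0xA3 in Windows-1252) has no valid mapping in the code page assumed when the :r file is read. Two commonly suggested workarounds, sketched below; the file paths are from the question, and the 65001 code page is an assumption that the file was saved as UTF-8:

```sql
-- Option 1 (command-line sqlcmd): declare the input code page explicitly
-- with the -f switch, e.g. if the included file was saved as UTF-8:
--     sqlcmd -i "C:\install.sql" -f i:65001

-- Option 2 (SSMS SQLCMD mode): re-save the included file as Unicode
-- (File > Save As > Save with Encoding > Unicode) so the byte-order mark
-- lets the reader pick the right encoding, then include it as before:
:r "C:\testscript.sql"
```

Either way, the goal is the same: make the encoding the file was written in and the encoding it is read back in agree.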

How can I identify the local Windows service that corresponds to a SQL SPID?

Posted: 11 Jul 2013 01:29 AM PDT

Good morning everyone. Our environment: SQL Server 2008 R2 SP2 on Windows 2008 R2 Enterprise SP1, a 2-node active/passive cluster, with 200 databases of roughly 500 MB each.

Approximately every 10-15 minutes I see multiple connections (per database) being established; they run for a minute and disconnect. It's not really causing any performance issues, but 600-800 connections to the server that come and go are driving me crazy. I cannot identify WHERE they are coming from. I do see that some local process is initiating them, but I cannot figure out which one. All I see is:

SQL user: NT AUTHORITY\SYSTEM
Host name: local physical node name
Program: .Net SqlClient Data Provider
Net library: TCP
Win domain: NT AUTHORITY
Win user: SYSTEM

I see that these look like SQL Server-related information collections, and we do have 3rd-party monitoring tools, which I have currently disabled. Can these be just native SQL Server status-information collections? These are the queries the connections run:

Connection 1 (truncated as captured):

SELECT [dbfiles].[physical_name] AS DBFilesPhysicalName,[mediafamily].[physical_device_name] AS BackupFilesPhysicalName FROM msdb.dbo.backupmediafamily [mediafamily] JOIN (SELECT [backupsets].media_set_id, [backupsets].database_name FROM msdb.dbo.backupset [backupsets] JOIN (SELECT TOP 1 [backupsets].media_set_id, [backupsets].database_name, [backupsets].backup_finish_date FROM msdb.dbo.backupset [backupsets] JOIN sys.databases [databases] ON [backupsets].[database_name] = [databases].[name] and [databases].[is_in_standby] = 0 AND [databases].[source_database_id] IS NULL AND ([databases].name = 'master' OR DATEDIFF ( ss, [databases].[create_date] , [backupsets].[database_creation_date] ) = 0 ) WHERE [databases].[database_id] = DB_ID() AND [backupsets].type = 'D' ORDER BY [backupsets].backup_finish_date DESC) AS [latest_backupset] ON [backupsets].[database_name] = [latest_backupset].[database_name] AND [backupsets].[backup_finish_date] >= [latest_backupset].[backup_finish_date]) AS latest_backups ON [latest_ba

Connection 2:

select * from sys.master_files where database_id = '20' and type in (0,1)

Connection 3:

(@DatabaseName nvarchar(10))
IF (Exists(SELECT * FROM(SELECT name as 'DatabaseName' FROM sys.databases WHERE name NOT IN ('master','model','tempdb','msdb' ) and source_database_id IS NULL) AS UserDatabase WHERE UserDatabase.DatabaseName = @DatabaseName))
BEGIN SELECT distinct obj.name FROM sys.objects AS obj JOIN sys.stats stat ON stat.object_id = obj.object_id CROSS apply sys.dm_db_stats_properties(stat.object_id, stat.stats_id) AS sp WHERE obj.type = 'U' AND modification_counter > 1000 END
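Since the connections come from a local process, one way to identify the owning service is to map the sessions to a Windows process ID: sys.dm_exec_sessions exposes host_process_id, which is the client's PID on the connecting host. A sketch; the login filter matches the NT AUTHORITY\SYSTEM sessions described above:

```sql
-- Map the suspect sessions to the client OS process ID
SELECT s.session_id,
       s.host_name,
       s.host_process_id,   -- Windows PID of the connecting process
       s.program_name,
       s.login_name,
       s.login_time
FROM sys.dm_exec_sessions AS s
WHERE s.login_name = N'NT AUTHORITY\SYSTEM'
  AND s.is_user_process = 1;
```

Running tasklist /svc on the node and looking up the returned PIDs then shows which executable, and which Windows service, owns each connection.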

Timeout exception

Posted: 14 Jul 2013 11:15 PM PDT

Hi, I am facing a problem connecting to SQL Server 2008 R2 using a Java application/tool: it throws a timeout exception. But it works fine with SSMS and a .NET application. Can anyone help? Thanks in advance.

Audit connections via extended events

Posted: 14 Jul 2013 09:53 PM PDT

Hi, I am looking for a little advice on how to set up an extended event in SQL 2008 R2 which will audit each and every connection to the SQL Server. I have managed to do so in SQL 2012, but so far the same code will not work in SQL 2008 R2, as the event sqlserver.login does not exist. This is the event I had defined:

CREATE EVENT SESSION [RT_Test3] ON SERVER
ADD EVENT sqlserver.login(SET collect_database_name=(1),collect_options_text=(0)
    ACTION(package0.collect_system_time,sqlserver.client_hostname,sqlserver.nt_username,sqlserver.username)
    WHERE ([package0].[equal_boolean]([is_cached],(0))))
ADD TARGET package0.ring_buffer(SET max_events_limit=(0),max_memory=(204800))
WITH (MAX_MEMORY=4096 KB,EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,MAX_DISPATCH_LATENCY=30 SECONDS,MAX_EVENT_SIZE=0 KB,MEMORY_PARTITION_MODE=NONE,TRACK_CAUSALITY=ON,STARTUP_STATE=ON)
GO

Basically I want to capture the following for each and every connection made to the server:

1. Time
2. Hostname
3. NT user (if applicable)
4. SQL login (if applicable)

I may add further fields in the future, as at the moment this is merely testing. I have tested using SQL Audit and a server-side trace, but both generate very large files on disk due to the volume of connections being made. I want to use extended events to utilise the buffer storage, as I will summarise the data stored in the buffer and store the results in a separate database. I understand that the buffers are written to in an asynchronous manner, so they can be slightly behind current activity, and also that they get overwritten as the buffer fills to its set capacity. Any help would be appreciated.
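On 2008 R2, where no login event is available to Extended Events, one alternative that captures the same four fields is a server-level logon trigger writing to a table. A sketch under the assumption that an AuditDB database exists to hold the rows; the table, trigger, and database names are placeholders, and a logon trigger that errors can block all logins, so it needs careful testing before deployment:

```sql
-- Placeholder audit table (in a hypothetical AuditDB database)
CREATE TABLE AuditDB.dbo.connection_audit (
    event_time  datetime      NOT NULL DEFAULT GETDATE(),  -- 1. Time
    host_name   nvarchar(128) NULL,                        -- 2. Hostname
    login_name  nvarchar(128) NULL,                        -- 3/4. NT user or SQL login
    login_type  nvarchar(60)  NULL                         -- WINDOWS_LOGIN / SQL_LOGIN
);
GO
CREATE TRIGGER trg_connection_audit
ON ALL SERVER WITH EXECUTE AS 'sa'   -- needs INSERT rights on the audit table
FOR LOGON
AS
BEGIN
    INSERT INTO AuditDB.dbo.connection_audit (host_name, login_name, login_type)
    SELECT HOST_NAME(), ORIGINAL_LOGIN(), sp.type_desc
    FROM sys.server_principals AS sp
    WHERE sp.name = ORIGINAL_LOGIN();
END;
GO
```

Unlike the XE ring buffer, this writes synchronously on every login, so the insert must stay cheap; the table can then be summarised and trimmed on a schedule, as described above.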

Is this a crazy way to migrate from one OS version to another?

Posted: 11 Jul 2013 12:06 AM PDT

I need to migrate all the servers I support from Windows Server 2008 to 2008 R2 by the end of the year (plenty of time, really). All the servers are virtuals, and all of them are running SQL 2008 R2. Seeing as they're configured with an OS drive and multiple data / SQL Server drives, I had a thought on a possible way to migrate quickly. Of course, this method does have the downside of being one-way, with no quick way to go back if it doesn't work...

My thought is: set up a new VM with just an OS drive and the OS. Once this is ready, on the current server, detach all the DBs and stop the SQL service (the reason for the detach will become apparent). Have the VMware admins then remove the data drives from the "old" server and attach them to the new one. Once the drives are attached, delete EVERYTHING except where the DBs / TLogs live, install SQL 2008 R2 as normal and update as needed, then proceed to attach all the DBs I detached.

My thinking (if this idea has merit) is that this gives me a few advantages:

1. I don't need to worry about the VMware / storage admins complaining about space; I'm just switching already-used space from one VM to another.
2. I think this might go a bit quicker than backing up each DB on the old server, copying the backup to the new one, then restoring.

Of course, possible problems:

1. If this doesn't work well, or a DB doesn't come up, my only fallback is a backup of the DB.
2. If it takes longer than expected, I may have users hitting me with "why isn't it done yet, what's taking so long..."

I'm mostly interested in whether people think this would be a way to do this. If you ignore the VM part, it'd be no different than buying a new server, installing the OS, then pulling the drives with the DBs on them out of your old server and installing them in the new one...

Of course, a better solution would be if I could get the VMware admin to "clone" the data / log drives and attach the clones to the new VM. But that potentially brings in a fight with the storage guys...

Thanks,
Jason
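The detach/attach step in the plan above can be sketched in T-SQL; the database name and file paths are placeholders for whatever lives on the moved drives:

```sql
-- On the old server: detach each database (after ensuring no active connections)
EXEC sp_detach_db @dbname = N'MyDatabase', @skipchecks = N'true';

-- On the new server, once the VMware admins have re-attached the drives:
CREATE DATABASE MyDatabase
ON (FILENAME = N'E:\SQLData\MyDatabase.mdf'),
   (FILENAME = N'F:\SQLLogs\MyDatabase_log.ldf')
FOR ATTACH;
```

Because the files are never copied, the attach is near-instant regardless of database size, which is the main appeal of this approach over backup/copy/restore.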
