Thursday, August 1, 2013

[how to] Better to create a table for each user account, or one huge table for all user data in MYSQL


Better to create a table for each user account, or one huge table for all user data in MYSQL

Posted: 01 Aug 2013 08:58 PM PDT

I'm creating a web directory that will allow individual users to sign up for an account and store what are essentially text documents in a MySQL database.

Initially there will likely be only a few hundred users, but we hope to have 10,000 to 100,000 at some point, and each user would be able to upload 100-200 "documents".

Would it be more efficient to create one massive table indexed by a user number, which could theoretically grow to 20,000,000 rows? Or to go ahead and create a table for each user with their individual documents?

I'm assuming it's not healthy to have thousands of tables in a database, but I really can't find any concrete data on this.
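
For reference, a minimal sketch of the single-table design being considered (names are hypothetical; the index on user_id is what keeps per-user lookups fast as the table grows to millions of rows):

CREATE TABLE documents (
    doc_id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    user_id  INT UNSIGNED NOT NULL,
    body     MEDIUMTEXT,
    INDEX idx_documents_user (user_id)
) ENGINE=InnoDB;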

Table Structure: 2 Related Entities Share the Same Entity

Posted: 01 Aug 2013 08:55 PM PDT

I was hoping I could get some advice on my table structure. I'm writing a simple workout tracker for me and my buddies. These are the requirements:

  • Multiple users
  • A user can have multiple workouts
  • A workout has multiple sets
  • A set can have multiple users

I have a way to accomplish what I need, but I feel like there's a much simpler way that I'm missing.

tbl_user

| UserId |  Name  |
-------------------
|      1 |  John  |
|      2 |  Greg  |

tbl_workout

| WorkoutId |
-------------
|     1     |
|     2     |

tbl_user_workout

| WorkoutId | UserId |
----------------------
|     1     |    1   |
|     2     |    1   |

tbl_set

| SetId | WorkoutId |
---------------------
|   1   |      1    |
|   2   |      1    |

tbl_user_set

| UserSetId | SetId | UserId | Reps | Weight |
----------------------------------------------
|     1     |    1  |    1   |  20  |   50   |
|     2     |    1  |    2   |  15  |   60   |

I figure this way I could get:

  • A user's sets with user->userset
  • A user's workouts with user->userworkout
  • A workout's sets with workout->set

Is this structure logical?
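
As a sketch of how the junction tables above might be declared (assuming MySQL/InnoDB; table and column names taken from the question):

CREATE TABLE tbl_user_set (
    UserSetId INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    SetId     INT NOT NULL,
    UserId    INT NOT NULL,
    Reps      INT,
    Weight    INT,
    FOREIGN KEY (SetId)  REFERENCES tbl_set (SetId),
    FOREIGN KEY (UserId) REFERENCES tbl_user (UserId)
) ENGINE=InnoDB;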

Remove duplicates from two inner joins

Posted: 01 Aug 2013 08:13 PM PDT

Hi everyone. The following query returns duplicate values; if I use DISTINCT it returns only one row.

select a.name, b.id as order_id, b.mode, c.id as job_id
from customers a
inner join orders b on a.id = b.customer_id
inner join jobs c on a.id = b.customer_id
where a.id = 1;
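
For what it's worth, the third join condition above never references the jobs table, so every jobs row matches every customer row; assuming jobs has a customer_id column, the intended join is presumably:

select a.name, b.id as order_id, b.mode, c.id as job_id
from customers a
inner join orders b on a.id = b.customer_id
inner join jobs c on a.id = c.customer_id
where a.id = 1;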

general memory requirements for sql server 2008 r2

Posted: 01 Aug 2013 09:12 PM PDT

I'm not experienced with DBA work, but I'm trying to make a case for requesting additional resources for our SQL Server, and I was hoping some smart folks could provide a rough estimate of what we should be running. I suspect that the allocation of resources IT has given to our production SQL Server is low.

Hardware & Software:

Database: sql server 2008 r2 enterprise database

Windows: Windows 2008 r2 Enterprise 64 bit, pretty sure running on VMware.

Processor: Intel(R) Xeon(R) CPU E7-4860 @ 2.27GHz 2.26 GHz (2 processors)

Installed memory: 4GB

Hard drive for Database files: 300GB

Hard drive for backups: 150GB

Hard drive for logs: 100GB

Application:

We have 3 main databases that add up to about 170 GB of data, plus a Reporting Services (SSRS) database on the same server that houses maybe 10 different reports (comprising an average of 700k records each) that get generated daily. Our user base is about 20 simultaneous users, maybe 5 of whom could be considered "resource intensive", generating large data-crunching reports. The majority of users interact with the database through an ASP.NET website and the Report Server website. Additionally, our developers use SSIS in BIDS extensively by remoting directly onto the server (2 remote connections max). Finally, we have a fairly involved data warehousing operation that brings in about 3 million records per day by way of SSIS packages that also run on the server.

Current Problems:

We have chronic severe server timeouts, and the response time of the website is pretty bad. I suspect the amount of memory we have (4 GB) is probably a large bottleneck. Our previous requests for additional memory have been denied with the common response that we need to perform more query optimization. While we aren't SQL pros or (as I'm sure you can tell by our setup) DB admin pros, I want to make sure I'm not expending all my time trying to squeeze out little potential performance gains if the hardware is the bottleneck.

Thanks all for the tl;dr avoidance!

Targeting MSX with job on Multi-Server Administration

Posted: 01 Aug 2013 02:03 PM PDT

I am using Multi-Server Administration to execute jobs on multiple targets. This works well, but I am not able to enlist the master as a target. I receive the following error when attempting to enlist the MSX as a TSX:

Server 'XXXXX' is an MSX. Cannot enlist one MSX into another MSX. (Microsoft SQL Server, Error: 14299)

I would think that running these same jobs on the master would not require managing a local job as well as the multi-server jobs.

Any help is appreciated.

How to solve ORA-011033: ORACLE initialization or shutdown in progress

Posted: 01 Aug 2013 08:57 PM PDT

I have followed How to resolve ORA-011033: ORACLE initialization or shutdown in progress link to solve the problem.

When it gives ORA-00600: internal error code, arguments: [kcratr1_lostwrt], I run:

SQL> startup mount

The above line should return "ORACLE instance started", but instead it shows "cannot start already-running Oracle - shut it down first".

Need Help.

Why is SQL Server consuming more server memory?

Posted: 01 Aug 2013 11:45 AM PDT

SQL Server is consuming 87.5% of my server's RAM. This recently caused a lot of performance bottlenecks, such as slowness. I researched this issue; one common solution I found on the internet is to set a maximum memory limit for SQL Server. This was done and much improvement was gained. I want to know why, if the maximum memory value is not set, SQL Server keeps consuming the resources.
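
For reference, the cap is set with sp_configure; a minimal sketch that limits SQL Server to 3 GB (the right value depends on what else runs on the box):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 3072;
RECONFIGURE;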

most effective architecture to apply multiple functions

Posted: 01 Aug 2013 01:14 PM PDT

I have a table that is the only source for a second table. Currently the second table is created using a CTAS (create table as select) statement that pulls in all the columns from the first table and includes a dozen more columns that are functions that we have written. These functions take a column or two from the source table and transform them into new data elements.

In some cases, the columns that are used as parameters for the functions are sparsely populated. The functions are compiled natively and utilize the result_cache. The first thing that happens in each function is a null check on the input parameters and a return of NULL.

I have a few questions:

  • I have full control over the architecture of the source table (as I create it in an earlier process). Is there anything I can do with the source tables (index, partitions, etc.) that will help the CTAS run more efficiently?
  • Should I be using a CTAS at all, or would an UPDATE be more efficient? (My gut here says no way, but I'd like some validation.)
  • Is there something else I can do to the functions to make them more efficient?
  • Is there anything else I should be thinking about?
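
One lever worth testing alongside the source-table questions above is a parallel, nologging CTAS (a sketch only; the names are placeholders, and NOLOGGING has recovery implications):

CREATE TABLE target_table NOLOGGING PARALLEL 8 AS
SELECT /*+ PARALLEL(s, 8) */
       s.*,
       my_func(s.source_col) AS derived_col
FROM source_table s;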

MS Access Restricting table/form access [migrated]

Posted: 01 Aug 2013 09:33 AM PDT

Is there a simple way to allow users to only edit form entries they create and only allow an administrator to edit the table/all forms? Ex: I need my entire staff to be able to add and edit their own meetings while not having the ability to edit other staff's meetings.

oracle tablespace extending

Posted: 01 Aug 2013 08:48 PM PDT

While under massive write load (insert and create table as select), when Oracle RAC needs to extend a tablespace, all write activity gets blocked for however long it takes to finish extending (tens of minutes), with all sessions blocked on "data file init write". Is there a way to configure Oracle to extend the file proactively, so that things can keep moving while it is doing so?
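
For reference, datafiles can be configured to grow on their own; a sketch (the path is a placeholder, and pre-sizing files ahead of a known load avoids the stall entirely):

ALTER DATABASE DATAFILE '/u01/oradata/mydb/users01.dbf'
    AUTOEXTEND ON NEXT 1G MAXSIZE UNLIMITED;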

how to begin building a data warehouse [on hold]

Posted: 01 Aug 2013 09:33 AM PDT

We have an ERP system that uses a PostgreSQL database; each table has many rows, and we want to build a data warehouse. We also want to add BI using ETL.

Where would we begin? Would we use schemas or two separate databases?

How can I write a routine to check, optimize and analyze all tables in databases in MySQL

Posted: 01 Aug 2013 09:21 AM PDT

How can I create a routine in MySQL to check, optimize and then analyze the tables?

Also how can I schedule this routine to run every night at set time?
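
A sketch of the scheduling half, using MySQL's event scheduler (requires event_scheduler=ON and the EVENT privilege; maintain_tables() is a hypothetical stored procedure that cursors over information_schema.TABLES issuing CHECK/OPTIMIZE/ANALYZE TABLE via prepared statements):

CREATE EVENT nightly_table_maintenance
    ON SCHEDULE EVERY 1 DAY STARTS '2013-08-02 02:00:00'
    DO CALL maintain_tables();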

Thanks

changing TEMPORARY TABLE default collation

Posted: 01 Aug 2013 01:35 PM PDT

When I use CREATE TEMPORARY TABLE to create a temporary table, the collation of the table is utf8_unicode_ci by default. Why is the default utf8_unicode_ci, and how can I change it?
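
For a single table, the default can simply be overridden in the DDL (a minimal sketch):

CREATE TEMPORARY TABLE t (
    name VARCHAR(50)
) CHARACTER SET utf8 COLLATE utf8_general_ci;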

Does disconnecting network stop a query?

Posted: 01 Aug 2013 07:50 PM PDT

I was running an update statement on 100,000 records. I realized a mistake while it was running, and quickly unplugged the network cable.

Did this stop the query, did it continue processing on the server, or did it do a partial update?

Connect to SQL Server 2012 from an Windows XP client

Posted: 01 Aug 2013 10:44 AM PDT

We have a SQL Server 2012 Standard database; to connect to it we use SQL Server 2008 Management Studio (since the 2012 SSMS is not supported on Windows XP). The problem is that SQL Server 2008 Management Studio constantly throws an exception (Index outside the bounds of the array), and it is not possible to edit a table in the visual editor or open a table. The client can currently work with database objects only through queries.
So the question is: is there any solution on Windows XP to work properly with a 2012 database?

Select rows after a specific ID

Posted: 01 Aug 2013 10:16 AM PDT

I have the following query, which selects from a news articles table. Due to multiple sorting factors, I cannot locate the next page of data.

The parameter I have is the last ID in the query result set, so how can I fetch the next page? Again, I don't have a page index; all I have is the last ID from the previous query result set.

SELECT TOP (20)
    id,
    DATEDIFF(SECOND, CONVERT(DATETIME, '1970-01-01', 102), release_date) AS date,
    DATEDIFF(SECOND, CONVERT(DATETIME, '1970-01-01', 102), edit_date) AS lastchanged,
    important
FROM articles AS news WITH (NOLOCK)
WHERE (status = 1) AND (release_date <= GETUTCDATE())
ORDER BY important DESC, date DESC, lastchanged DESC

This is an example of my data

id      date        lastchanged important
187346  1366070400  1345212540  1
187306  1365984000  1345290300  1
187330  1365984000  1342457880  1
187344  1363392000  1342461540  1
187343  1363392000  1342461300  1
187342  1363392000  1342459980  1
187339  1363392000  1342459800  1
187337  1363392000  1342458960  1
187335  1363392000  1342458720  1
187334  1363392000  1342458600  1
187332  1363392000  1342458060  1
187331  1363392000  1342457940  1
187327  1363392000  1342457340  1
187328  1363392000  1342457340  1
187326  1363392000  1342456860  1
187323  1363392000  1342456020  1
187322  1363392000  1342455480  1
187321  1363392000  1342454700  1
187316  1363392000  1342454580  1
187320  1363392000  1342454520  1

How to do aggregate functions in GROUP_CONCAT

Posted: 01 Aug 2013 01:11 PM PDT

I am generating a report to show outstanding fees: which fee type, how much is fixed for that fee type, how much has been paid for that fee type, and finally the balance amount to be paid for that fee type.

Here I am pulling data from 5 tables: class names from the Classes table, roll no & student name from the admissions table, fee types from the Feevars table, the amount fixed for each fee type from the studentfees table, and finally the amounts paid for each fee type from the fee collection table.

I am able to generate partial results by mentioning the fee type names in the select statement with summing and subtracting operations.

Here is the full database and my query producing the result. Please look at the MySQL query: in the select statement I have mentioned the fee types manually, but I want to generate the result without mentioning the fee types as column names.

For this I did one thing: I collected all the fee types into a SQL variable like this:

set @sqlList = null;
SELECT GROUP_CONCAT(concat('tsf.', Replace(FeeName,' ',''))) INTO @sqlList FROM tbl_feevars;

This results in all the fee types on a single line, as column names. Finally I wrote code to produce the output I am expecting, but I am getting an error: Error Code 1064: You have an error in your SQL syntax.

This is My final code

Expected Output Code

Can anyone please tell me what the error in my SQL query is, and suggest any other way to do this report?
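
For what it's worth, the usual pattern for running a GROUP_CONCAT-built column list is to splice it into a prepared statement, after raising group_concat_max_len so the list isn't silently truncated (a sketch; the FROM clause and the tsf alias are assumptions based on the snippet above):

SET SESSION group_concat_max_len = 8192;
SET @sql = CONCAT('SELECT ', @sqlList, ' FROM studentfees tsf');
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;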

How can I determine how many IOPs I need for my AWS RDS database?

Posted: 01 Aug 2013 09:04 AM PDT

I am migrating part of a MySQL database into AWS. The data in question is 99% writes, and each row has about 1k of varchar fields, a datetime and 4 ints.

I estimate that we will need between 20-25k records inserted/hour during peak times.

I ran iostat -h on the current database, and it reported around 40 tps.

How do I figure out what type of IOPS I'll need?
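
As a rough sanity check of the write rate those numbers imply (averages only; peaks, index maintenance and log writes push the real requirement higher):

25,000 inserts/hour ÷ 3,600 s/hour ≈ 7 inserts/second

which is in the same range as the ~40 tps iostat already reports, so sizing for peak bursts rather than the hourly average is likely the real question.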

How to select from a table without including repeated column values?

Posted: 01 Aug 2013 07:11 PM PDT

In a previous question How to merge data sets without including redundant rows? I asked about filtering redundant historical data during import, but @DavidSpillett correctly replied that I couldn't do what I was trying to do.

Instead of filtering the table during import, I now want to create a view on the table that returns only records where the price has changed.

Here's the original scenario rephrased to suite this question:

We have a table of historical prices for items. The table contains rows where the same price is recorded for multiple dates. I want to create a view on this data which only shows price changes over time, so if a price changes from A to B I want to see it, but if it "changes" from B to B then I don't want to see it.

Example: if the price yesterday was $1, and the price today is $1, and there were no other price changes, then the price today can be inferred from the price yesterday so I only need the record from yesterday.

Example (http://sqlfiddle.com/#!3/c95ff/1):

Table data:

Effective            Product  Kind  Price
2013-04-23T00:23:00  1234     1     1.00
2013-04-24T00:24:00  1234     1     1.00 -- redundant, implied by record 1
2013-04-25T00:25:00  1234     1     1.50
2013-04-26T00:26:00  1234     1     2.00
2013-04-27T00:27:00  1234     1     2.00 -- redundant, implied by record 4
2013-04-28T00:28:00  1234     1     1.00 -- not redundant, price changed back to 1.00

Expected view data:

Effective            Product  Kind  Price
2013-04-23T00:23:00  1234     1     1.00
2013-04-25T00:25:00  1234     1     1.50
2013-04-26T00:26:00  1234     1     2.00
2013-04-28T00:28:00  1234     1     1.00

My initial attempt used ROW_NUMBER:

SELECT
    Effective,
    Product,
    Kind,
    Price
FROM
(
    SELECT
        History.*,
        ROW_NUMBER() OVER
        (
            PARTITION BY
                Product,
                Kind,
                Price
            ORDER BY
                Effective ASC
        ) AS RowNumber
    FROM History
) H
WHERE RowNumber = 1
ORDER BY Effective

Which returned:

Effective               Product  Kind  Price
2013-04-23T00:23:00     1234     1     1.00
                                             -- not 2013-04-24, good
2013-04-25T00:25:00     1234     1     1.50
2013-04-26T00:26:00     1234     1     2.00
                                             -- not 2013-04-27, good
                                             -- not 2013-04-28, bad

I tried searching for a similar question/answer, but it's hard to work out how to phrase the search; an example is worth a lot of words.

Any suggestions appreciated. Thanks
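
If the server is SQL Server 2012 or later, LAG makes the comparison to the previous row direct; a sketch of the view body under that assumption:

SELECT Effective, Product, Kind, Price
FROM
(
    SELECT
        History.*,
        LAG(Price) OVER
        (
            PARTITION BY Product, Kind
            ORDER BY Effective
        ) AS PrevPrice
    FROM History
) H
WHERE PrevPrice IS NULL OR PrevPrice <> Price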

FileWatcher Troubleshooting Options

Posted: 01 Aug 2013 10:40 AM PDT

I have setup a Credential/Procedure/Program/File Watcher/Job to monitor a folder for new files, but it is not working. I am trying to figure out what I can check to troubleshoot why this isn't working. After scaling back the code to a bare minimum, here are somethings I have already done.

  • Verify that files are being created in the folder monitored - They are.
  • Verify that the procedure can be run by itself - It can.
  • Verify that the File Watcher shows up in DBA_Scheduler_File_Watchers and is enabled. - It does and is.
  • Verify that DBA_Scheduler_Job_Run_Details shows a successful execution - It does NOT show any entries for this job.
  • Check the alert log - Nothing interesting.
  • Check for other trace files - I am getting trace files matching *_j000_*.trc, but they just have this: FILE_TRANSFER error is: with nothing after it.

Could someone explain what I am doing wrong or give me a way to troubleshoot this further? Here is my test code:

--Create Credential.
BEGIN
    sys.dbms_scheduler.create_credential(
        username => 'oracle',
        password => 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
        database_role => NULL,
        windows_domain => NULL,
        comments => NULL,
        credential_name => 'TestCredential'
    );
END;
/

CREATE TABLE FilesFound(FoundDate Date);

CREATE OR REPLACE PROCEDURE TestProcedure(iFileInfo In sys.scheduler_filewatcher_result) AS
BEGIN
   INSERT INTO FilesFound VALUES (sysdate);
   COMMIT;
END;
/

BEGIN
   DBMS_SCHEDULER.create_program(
      program_name => 'TestProgram',
      program_type => 'stored_procedure',
      program_action => 'TestProcedure',
      number_of_arguments => 1,
      enabled => False);
   DBMS_SCHEDULER.define_metadata_argument(
      program_name => 'TestProgram',
      metadata_attribute => 'event_message',
      argument_position => 1);
END;
/

BEGIN
   dbms_scheduler.create_file_watcher(
      file_watcher_name => 'TestFileWatcher',
      directory_path => '/u01/test',
      file_name => '*.*',
      credential_name => 'TestCredential',
      destination => NULL,
      enabled => False);
END;
/

BEGIN
   dbms_scheduler.create_job(
      job_name => 'TestJob',
      program_name => 'TestProgram',
      event_condition => NULL,
      queue_spec => 'TestFileWatcher',
      auto_drop => False,
      enabled => False);

   --Enable Everything.
   dbms_scheduler.enable('TestProgram, TestFileWatcher, TestJob');
end;
/

--Set a one minute check interval.
BEGIN
  DBMS_SCHEDULER.set_attribute(
    'file_watcher_schedule',
    'repeat_interval',
    'freq=minutely; interval=1');
END;
/

--Create a file.
DECLARE
  vFile utl_file.file_type;
BEGIN
  EXECUTE IMMEDIATE 'create or replace directory TESTDIRECTORY as ''/u01/test''';
  vFile := utl_file.fopen('TESTDIRECTORY', 'TestFileWatcher.txt', 'w', NULL);
  utl_file.put_line(vFile, 'File has arrived '||SYSTIMESTAMP, TRUE);
  utl_file.fclose(vFile);
END;
/

--Delay to give time for file to appear.
BEGIN
   DBMS_LOCK.SLEEP(120);
END;
/

--Check the table.
BEGIN
   FOR vx IN (select count(*) ct from sys.FilesFound) LOOP
      If (vx.ct = 0) Then
         DBMS_Output.Put_Line('Failed - No File Watcher Activity.');
      Else
         DBMS_Output.Put_Line('Success - File Watcher Activity Found.');
      End If;
   END Loop;
END;
/

--Cleanup.
EXECUTE dbms_scheduler.drop_job(job_name => 'TestJob');
EXECUTE dbms_scheduler.drop_program(program_name => 'TestProgram');
EXECUTE dbms_scheduler.drop_file_watcher(file_watcher_name => 'TestFileWatcher');
EXECUTE DBMS_SCHEDULER.drop_credential(credential_name => 'TestCredential');
drop table FilesFound;
drop procedure TestProcedure;
drop directory TestDirectory;

restrict user host settings to socket connection only

Posted: 01 Aug 2013 12:48 PM PDT

Is there a way to restrict a user's host setting so that (s)he can only connect by socket and not TCP/IP? I'd like to have this setting for the root account.

edit: As Abdul Manaf pointed out, skip-networking can be used to turn off TCP/IP connectivity altogether. But can it be done on a per-user-account basis?

Unable to create a new listener in oracle 11g

Posted: 01 Aug 2013 01:48 PM PDT

In Net Manager I get an error message, "A valid logfile is required", when I click on Listeners and choose the '+' button. It then keeps giving me that error when I click on any other tab or button in Net Manager.

Grant access to a table to all users

Posted: 01 Aug 2013 10:09 AM PDT

Is it possible to assign a grant to a table for all users, or a default permission so that when new users are created they will have the specific grants for that table to SELECT, UPDATE, INSERT and DELETE?
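
The answer is engine-specific; as one illustration, in PostgreSQL a grant to PUBLIC covers every role, including roles created later (a sketch, assuming PostgreSQL and a hypothetical table name):

GRANT SELECT, INSERT, UPDATE, DELETE ON mytable TO PUBLIC;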

What is Transaction Path Analysis?

Posted: 01 Aug 2013 08:41 PM PDT

I came across the term Transaction Path Analysis which I am not familiar with. A Google search resulted in only a few results, most of which seem to assume everyone knows what it means. From these my understanding is that it is about tracking what columns are used by queries to access data, in preparation for deciding indexes. This seems like common sense, but I'm surprised I can't find more specifics on it.

Is there a concise definition for transaction path analysis, a process for completing one, or any alternative more commonly used phrases that mean the same thing?

event scheduler not called

Posted: 01 Aug 2013 02:48 PM PDT

I created an event scheduler event which looks like this:

mysql> show create event event1;

Event:                event1
sql_mode:
time_zone:            SYSTEM
Create Event:         CREATE DEFINER=`root`@`localhost` EVENT `event1` ON SCHEDULE EVERY 1 MONTH STARTS '2013-02-02 00:00:00' ON COMPLETION NOT PRESERVE ENABLE DO BEGIN update samp set col1 = col1 + 1; END
character_set_client: utf8
collation_connection: utf8_general_ci
Database Collation:   latin1_swedish_ci

1 row in set (0.00 sec)

This event was not executed on the 1st of the month. So I ran show processlist\g to check whether it is running or not; it showed the following output:

mysql> show processlist\g

Id:      136
User:    event_scheduler
Host:    localhost
db:      NULL
Command: Daemon
Time:    1855
State:   Waiting for next activation
Info:    NULL

So does NULL in the db column mean that no database is assigned to it?

Please help me solve this.

What is a good Diff tool when comparing on-premise to SQL Azure?

Posted: 01 Aug 2013 04:59 PM PDT

We're working on migrating our database (~400 tables) from SQL 2008 R2 to SQL Azure. We're currently in the proof-of-concept stage, figuring out the process we'll be following. We have a process that we think is solid, but I'd like to perform some validation diffs on both schema and data to confirm that we have a successful migration. I've never done this before with SQL Azure and am looking for a good tool to do it. I'd love a free tool, but we can pay for one if necessary. Ultimately, this is a one-time migration (we'll do it a few times in testing, but the real migration will only be done once).

Whatcha got?

Shrink a database below its initial size

Posted: 01 Aug 2013 06:56 AM PDT

I've got a dev database that is a copy of live (30 GB); we have deleted some data that is not needed in dev, which brings the used data file space down to 20 GB.

It's SQL Server 2005.

So we have about 33% unused.

I need to reclaim the space, which will allow us to have a second dev DB on the server (based on the cut-down version).

However, I cannot reclaim the space. I have done the following:

The initial size of the file SMS2_Data is 30gb.

DBCC SHRINKFILE (N'SMS2_Data' , 0, TRUNCATEONLY)  

followed by

DBCC SHRINKFILE (N'SMS2_Data' , 19500)  

No joy.

I have tried making a backup, then creating a new DB with a low initial size and restoring into it; no joy, as the initial size gets overwritten.

Have tried

  ALTER DATABASE SMS2HazSub MODIFY FILE     (NAME = 'SMS2_Data', SIZE = 20000)   

This errored saying: MODIFY FILE failed. Specified size is less than current size.

I have done the shrink, then changed the recovery mode from FULL to simple and back again. No joy.

Any ideas?

ADDITION: So I thought it was to do with some TEXT fields. We have about 6 across the system. So as a test I dropped them all and then did a shrink of the file, and still no change.

ADDITION 2: Cheers everyone for your answers; no single answer has sorted the problem. The only option left is to reimport the data into another DB. This is not practical, as it would have to be done on the live DB, which carries too much risk. I've no idea what will happen to the bonus points. I can't pick an answer, as nothing has given me a solution to my problem.

ADDITION 3: I've tried lots of things and nothing has worked. Thanks all for your help, but I've just got to accept that it's going to be 30 gig at least.

[SQL Server] Easy (I hope) join question


Easy (I hope) join question

Posted: 01 Aug 2013 07:53 AM PDT

Hi,

I have two tables with many fields in them. I am only interested in a few of the fields, and both of the tables have records that are duplicates on the fields I am interested in. I think if I do a left outer join A on B I will get all the records in A, but some of them will become duplicated because they exist in B more than once. How do I do a join so that I keep all the records from table A and fill in the missing blanks, but do not create additional rows? Here's a small example:

table A
Name | Address
Tai  | #

table B
Name | Address
Tai  | 123 Main St
Tai  | 123 Main St
Tai  | 123 Main St
Mike | 456 Broadway

desired output:
Tai | 123 Main St

I do not expect there will ever be more than one distinct address for each name. However, if there is more than one distinct address then I would take one randomly and don't care if there are others that I did not get. I've done something similar to this before, but I had to use ROW_NUMBER() OVER (PARTITION BY..., which given my newbie status seems complicated, and it seemed like there should be an easier way.

Thanks for the help!
Tai
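
One simpler-looking alternative (SQL Server 2005+) is OUTER APPLY with TOP 1, which picks one arbitrary match per name; a sketch against the example tables:

SELECT a.Name, b.Address
FROM tableA AS a
OUTER APPLY (
    SELECT TOP (1) Address
    FROM tableB
    WHERE tableB.Name = a.Name
) AS b;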

TempDB Full...!!

Posted: 01 Aug 2013 02:13 AM PDT

Hello Masters, my TempDB is full..!! It is utilising almost 400% more disk space than it was assigned..! How can I free disk space? If I take a backup of the system databases, will it free the disk space? Please advise me.

Can someone tell me what is wrong in this query: I get error Msg 102, Level 15, State 1, Line 1 Incorrect syntax near

Posted: 31 Jul 2013 10:03 PM PDT

insert [User_Id],[Customer_Id],[User_Name],[Preferred_Name],[Email],[Mobile_Phone_1],[Mobile_Phone_2],[Mobile_Phone_3],[Phone_1_Status],[Phone_2_Status],[Phone_3_Status],[Password],[REMINDER_QUESTION],[REMINDER_ANSWER],[Registration_Date],[Registered_By],[Approval_date],[Approved_by],[Last_UpdateD],[Last_Updated_By],[Last_Sign_On_Tel],[Last_Sign_On_SMS],[Last_Sign_On_IB],[Status],[approved],[block],[TryCount],[user_flg],[ex_flg]
into [e_onenew].[dbo].[Users]
SELECT [User_Id],[Customer_Id],[User_Name],[Preferred_Name],[Email],[Mobile_Phone_1],[Mobile_Phone_2],[Mobile_Phone_3],[Phone_1_Status],[Phone_2_Status],[Phone_3_Status],[Password],[REMINDER_QUESTION],[REMINDER_ANSWER],[Registration_Date],[Registered_By],[Approval_date],[Approved_by],[Last_UpdateD],[Last_Updated_By],[Last_Sign_On_Tel],[Last_Sign_On_SMS],[Last_Sign_On_IB],[Status],[approved],[block],[TryCount],[user_flg],[ex_flg]
from [e_one].[dbo].[Users]
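
For reference, T-SQL has no "INSERT column-list INTO table" form: inserting into an existing table takes INSERT INTO table (column list) SELECT ..., while SELECT ... INTO creates a new table. A sketch of the likely intended statement (column list abbreviated here):

INSERT INTO [e_onenew].[dbo].[Users]
    ([User_Id], [Customer_Id], [User_Name], /* ...remaining columns... */ [ex_flg])
SELECT [User_Id], [Customer_Id], [User_Name], /* ...remaining columns... */ [ex_flg]
FROM [e_one].[dbo].[Users]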

[Articles] The Human Touch


The Human Touch

Posted: 31 Jul 2013 11:00 PM PDT

Humans are often the weakest links in any computer process, but we do need human input. Steve Jones talks about a few issues, and how we should be changing our workflow to reduce issues.


[MS SQL Server] unable to connect with default instance


unable to connect with default instance

Posted: 01 Aug 2013 04:10 AM PDT

Hi, I have installed SQL Server 2008 R2 Enterprise Edition on my system. I got the following errors:

1) Unable to connect with an instance name... some network failure.

When I repaired the software, the master database was missing and there were database engine connection problems, so I uninstalled and reinstalled it properly. Still I am unable to connect to the default instance, but I can connect through the named instance and local. Could anyone explain the reason? When I searched on Google, someone mentioned that we don't connect through the default instance. As far as I know, I did connect through that instance before on my friend's system. Please explain the problem to me. Thank you..!!!

changing disk arrays for db's but keeping same drive letters!

Posted: 31 Jul 2013 10:08 AM PDT

We run in Amazon EC2, but I am sure the principle is the same. Currently the system DBs are on C:\, logs on D:\, user data on E:\ and TEMPDB on F:\. SQL is installed in the default C:\ location. We need to move logs and data onto new RAID arrays but keep the same drive letters (so SQL can restart!!).

So the plan is:

  • Two new RAID arrays are going to be built and named X: and Q:
  • SQL Service is stopped via SQL Configuration Manager
  • All data on D: is copied to X:
  • All data on E: is copied to Q:
  • The original D: will be changed to S:
  • The original E: will be changed to T:
  • X: will be changed to D:
  • Q: will be changed to E:
  • Restart SQL from SQL Configuration Manager

We will be trying to do a test if we can, but I thought I would see if anybody sees any issues. I can't see anything that needs changing as long as the right files are copied to the right drives and the letters are changed before SQL is restarted.

[i]The only thing I can think of is if SQL takes some ID from the underlying disks / volumes etc and recognises the new D:\ is different and spits the dummy!!!![/i]

thanks

[SQL 2012] cannot remove FILESTREAM file


cannot remove FILESTREAM file

Posted: 01 Aug 2013 12:52 AM PDT

Hello, we just recently migrated to SharePoint 2013 and I'm trying to move around FILESTREAM files on all of our content databases (we're running SQL Server 2012), and I'm getting an error when deleting a FILESTREAM file even if it's empty. The content database is in simple recovery mode and I've emptied the file and confirmed that it is empty, but dropping the file brings an error saying it is not empty:

[code="sql"]USE WSS_Content_20122013_Classes;
GO
CHECKPOINT;
EXEC sp_filestream_force_garbage_collection @dbname = N'WSS_Content_20122013_Classes';
DBCC SHRINKFILE (RBSFilestreamFile, EMPTYFILE);
ALTER DATABASE WSS_Content_20122013_Classes REMOVE FILE RBSFilestreamFile[/code]

Here's the error I get:

File ID 1 of database ID 7 cannot be shrunk as it is either being shrunk by another process or is empty.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
Msg 5042, Level 16, State 13, Line 4
The file 'RBSFilestreamFile' cannot be removed because it is not empty.

Any idea on how to resolve this?

Scaling SSRS

Posted: 31 Jul 2013 11:37 PM PDT

This question has been asked in some form or another before, but not for SQL Server 2012 (that I have found) and without an adequate answer, so I am going to take a shot at explaining what we are doing and hopefully someone will have some experience they can share.

We are currently using SQL Server 2012 Reporting Services for some internal business reporting. The current setup has been relatively simple: a single server with SQL Server 2012, along with all of the SQL Server Reporting Services components (tools, web server services, etc) installed locally. All reports run locally on the box, though they do reach out via linked server to some external data sources.

My main question focuses on scaling this setup, should we decide to open up access to SSRS reports to either more users internally, or possibly even externally via a .NET web application. My two concerns with scaling are cost and performance.

In terms of cost, has anyone created a core reporting server with the SQL Server Reporting Services components/databases and then installed the SSRS web services on multiple, separate web servers, pointing them back to the core SSRS server? If so, how has your licensing worked? I have been unable to find a clear answer as to whether you can run such a setup with SQL Server licensing only on the core reporting server... or do you have to license each web server with the SSRS web services installed as well? If so, that makes this an extraordinarily expensive reporting solution to scale out on.

If you have successfully built and licensed this type of environment, what has been your experience in terms of performance? Any major gotchas or caveats that you could share? And have you found a solid formula you use for user concurrency vs. number of web servers being used?

Thank you ahead of time for any wisdom you can share!

Send Email Alerts immediately depending upon data/text in a column

Posted: 31 Jul 2013 10:04 AM PDT

Hi Experts, we do custom logging into a table. The requirement is that the developers should receive an email immediately whenever there is [b]specific data[/b] in that column. Any input is appreciated. Thanks a ton in advance!
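
One common shape for this is an AFTER INSERT trigger that calls Database Mail (a sketch only; assumes Database Mail is configured, and the table, column, profile and address names are hypothetical). Note that many prefer to have the trigger queue a row and let a frequent Agent job send the mail, keeping the insert path light:

[code="sql"]CREATE TRIGGER trg_AppLog_Alert ON dbo.AppLog
AFTER INSERT
AS
BEGIN
    IF EXISTS (SELECT 1 FROM inserted WHERE LogText LIKE '%ERROR%')
        EXEC msdb.dbo.sp_send_dbmail
            @profile_name = 'AlertProfile',
            @recipients   = 'devteam@example.com',
            @subject      = 'Log table alert',
            @body         = 'A monitored value was just logged.';
END;[/code]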

Service Pack 1 on SQL Server 2012

Posted: 31 Jul 2013 07:12 AM PDT

On a single VM, I have the following:

1) A default instance of SQL Server 2012 (SP1, build 11.0.3128.0) with the database engine, SSAS, SSIS, DQS and MDS etc.
2) A named instance of SSAS in tabular mode.
3) A named instance of SSAS in SharePoint integrated mode for PowerPivot.

1) After the initial installation of SQL Server 2012, I installed SP1 on the default instance. So all is good.
2) Now I want to install SP1 on the SSAS tabular mode instance (I have a couple of databases installed on it currently) and on the SSAS instance in SharePoint integrated mode (nothing on this instance).

Can somebody please provide me a guide, steps or link as to how I can approach this? When I apply the service pack, obviously I don't want to disturb my default instance, because everything is good there. When I apply the service pack on the tabular mode instance, will it affect my existing databases? If yes, how should I approach it? I would greatly appreciate any pointers, and I am under the gun to get this done.

List All Jobs & Their Steps

Posted: 31 Jul 2013 05:21 AM PDT

This is another of my quick scripts that I want to store where I can find it easily. If you want to take it, please help yourself. This script will return all SQL Agent jobs and give a step-by-step breakdown of what each job does. I included a column called Date_List_Generated to ensure I was looking at the latest version.

[code="sql"]Select sJob.Name As Job_Name
      ,sJob.Description
      ,sJob.Originating_Server
      ,sJob.Start_Step_ID As Start_At_Step
      ,Case When sJob.Enabled = 1 Then 'Enabled'
            When sJob.Enabled = 0 Then 'Not Enabled'
            Else 'Unknown Status'
       End As Job_Status
      ,Replace(Replace(sCat.Name,'[',''),']','') As Category
      ,sJStp.Step_ID As Step_No
      ,sJStp.step_name AS StepName
      ,Case sJStp.SubSystem
            When 'ActiveScripting' Then 'ActiveX Script'
            When 'CmdExec' Then 'Operating system (CmdExec)'
            When 'PowerShell' Then 'PowerShell'
            When 'Distribution' Then 'Replication Distributor'
            When 'Merge' Then 'Replication Merge'
            When 'QueueReader' Then 'Replication Queue Reader'
            When 'Snapshot' Then 'Replication Snapshot'
            When 'LogReader' Then 'Replication Transaction-Log Reader'
            When 'ANALYSISCOMMAND' Then 'SQL Server Analysis Services Command'
            When 'ANALYSISQUERY' Then 'SQL Server Analysis Services Query'
            When 'SSIS' Then 'SQL Server Integration Services Package'
            When 'TSQL' Then 'Transact-SQL script (T-SQL)'
            Else sJStp.SubSystem
       End As Step_Type
      ,sJStp.database_name AS Database_Name
      ,sJStp.command AS ExecutableCommand
      ,Case sJStp.on_success_action
            When 1 Then 'Quit the job reporting success'
            When 2 Then 'Quit the job reporting failure'
            When 3 Then 'Go to the next step'
            When 4 Then 'Go to Step: ' + QuoteName(Cast(sJStp.On_Success_Step_ID As Varchar(3))) + ' ' + sOSSTP.Step_Name
       End As On_Success_Action
      ,sJStp.retry_attempts AS RetryAttempts
      ,sJStp.retry_interval AS RetryInterval_Minutes
      ,Case sJStp.on_fail_action
            When 1 Then 'Quit the job reporting success'
            When 2 Then 'Quit the job reporting failure'
            When 3 Then 'Go to the next step'
            When 4 Then 'Go to Step: ' + QuoteName(Cast(sJStp.On_Fail_Step_ID As Varchar(3))) + ' ' + sOFSTP.step_name
       End As On_Failure_Action
      ,GetDate() As Date_List_Generated
From MSDB.dbo.SysJobSteps As sJStp
     Inner Join MSDB.dbo.SysJobs As sJob
        On sJStp.Job_ID = sJob.Job_ID
     Left Join MSDB.dbo.SysJobSteps As sOSSTP
        On sJStp.Job_ID = sOSSTP.Job_ID
       And sJStp.On_Success_Step_ID = sOSSTP.Step_ID
     Left Join MSDB.dbo.SysJobSteps As sOFSTP
        On sJStp.Job_ID = sOFSTP.Job_ID
       And sJStp.On_Fail_Step_ID = sOFSTP.Step_ID
     Inner Join MSDB..SysCategories sCat
        On sJob.Category_ID = sCat.Category_ID
Order By Job_Status
        ,Job_Name;[/code]

[T-SQL] query to get the size of all indexes in a database


query to get the size of all indexes in a database

Posted: 01 Aug 2013 12:49 AM PDT

Hi Friends, is there any query to get the size of all indexes in a database? Let us assume a database's size is 400 GB; I need to know how much space the indexes are occupying. Thanks in advance.
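
A common starting point (SQL Server 2005+) is to sum the pages each index uses from sys.dm_db_partition_stats; a sketch:

[code="sql"]SELECT i.name AS index_name,
       SUM(ps.used_page_count) * 8 / 1024.0 AS size_mb
FROM sys.dm_db_partition_stats AS ps
JOIN sys.indexes AS i
    ON ps.[object_id] = i.[object_id]
   AND ps.index_id = i.index_id
GROUP BY i.name
ORDER BY size_mb DESC;[/code]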

How to convert my cursor tsql to set based

Posted: 31 Jul 2013 11:31 AM PDT

Hello All.... I was just thinking about a query I need to write; I know how to do it with cursors, but would like to try to make it happen using a set-based approach. Not sure if it can be done. Here are my requirements. I need to group the following into a pool:

step 1 - Get all house IDs
step 2 - For each house ID, get all people linked directly to that house
step 3 - For each of those people, get any additional house IDs linked to them
step 4 - For each of those additional house IDs, get any additional people linked to them

e.g. result:

Pool 1 =
House Id | People Id
    1    |     1
    1    |     2
    2    |     2
    3    |     2
    3    |     4

I hope I have made sense. I can see how easy it would be to code using cursors, but am not sure of the set-based approach. Thoughts??

Cheers

select distinct rows

Posted: 31 Jul 2013 07:50 AM PDT

Hi,

My stored proc returns values like this:

rule  date        no
abc1  03/17/2003  23
abc1  04/15/2004  45
abc2  04/43/2009  120

I need to display only 1 rule on my front end. If I put DISTINCT or GROUP BY, it's still not working.

how to store result at variable?

Posted: 31 Jul 2013 10:03 PM PDT

Hi,

create proc sp_temp
--@temp varchar(10)
as
begin
declare @Result varchar(5)
select COUNT(*) from #temptable
end

This simple SP works fine. I want to store the result in one variable.

thanks
ananda
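
For reference, T-SQL assigns a query result to a variable inside the SELECT itself, e.g. (an int would suit the count better than varchar(5)):

[code="sql"]SELECT @Result = COUNT(*) FROM #temptable;[/code]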

how to import excel file into sql server using identity column

Posted: 31 Jul 2013 07:18 PM PDT

I have an Excel file with email IDs as one column. I want those email IDs in the database along with a new column, autoid, which must be an identity column. I am able to import only the email IDs, but I want them along with autoid generation. So can you please help me? For example, I have:

emailds
abc@gmail.com
xyz@gmail.com

but what I want is:

autoid  emaild
1       abc@gmail.com
2       xyz@gmail.com

in which autoid should be generated automatically.
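
The usual approach is to create the target table with an IDENTITY column first and map only the email column during import; a sketch with hypothetical names:

[code="sql"]CREATE TABLE dbo.Emails (
    autoid  INT IDENTITY(1,1) PRIMARY KEY,
    emailid VARCHAR(255) NOT NULL
);

-- the import (wizard, SSIS, or plain INSERT) then supplies only emailid:
INSERT INTO dbo.Emails (emailid)
VALUES ('abc@gmail.com'), ('xyz@gmail.com');[/code]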

sp_executesql MAXDOP

Posted: 31 Jul 2013 08:53 PM PDT

Just trying to test the effect of MAXDOP = 1 inside a query executed by sp_executesql within CRM. The query itself has lots of variables, so it takes a form similar to the below (excuse the copying and pasting):

exec sp_executesql N'select top 51 when 6 then @StatusCode1 when 1 then @StatusCode2 ORDER BY incident0.TicketNumber desc', @StatusCode1=N'Canceled', @StatusCode2=N'In Progress'

My question really is this: where do I put the OPTION (MAXDOP 1) command? I put it right at the end, but it's seemingly ignored.

Many thanks!
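
For reference, a query hint has to appear at the end of the statement text inside the string that sp_executesql runs, not after the sp_executesql call itself; a minimal sketch (table and parameter are placeholders):

[code="sql"]EXEC sp_executesql
    N'SELECT TOP (51) * FROM dbo.SomeTable WHERE StatusCode = @p1 OPTION (MAXDOP 1);',
    N'@p1 NVARCHAR(20)',
    @p1 = N'Canceled';[/code]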

Nulls in Dynamic Pivot - how to remove

Posted: 31 Jul 2013 05:13 AM PDT

Here is my code. I'm creating a dynamic pivot of sales for the last 6 days. It works great, except if I have a customer that does not have a sale on a particular day, I have NULLs. I can't figure out how to take the NULLs out of the pivot. I assume I have to use ISNULL, but nowhere seems to work... any suggestions or ideas on how to get rid of the pesky NULLs? :crazy:

SET @sql = N'SELECT *
FROM (SELECT custid, sorderdate, qty FROM dbo.dailysales) AS D
PIVOT (SUM(qty) FOR sorderdate IN (' + @cols + N')) AS P;';

EXEC sp_executesql @sql;

custid     6/19/2013  6/26/2013  7/3/2013
customer1  130        138        144
customer2  11         20         17
customer3  2          2          NULL
customer4  42         37         41
customer5  NULL       NULL       1
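
The usual trick is to build a second column list that wraps each pivoted column in ISNULL for the outer SELECT, since PIVOT itself has no default value; a sketch (@colsIsNull is a hypothetical variable built the same way as @cols):

[code="sql"]-- e.g. @colsIsNull = N'ISNULL([6/19/2013],0) AS [6/19/2013], ISNULL([6/26/2013],0) AS [6/26/2013]'
SET @sql = N'SELECT custid, ' + @colsIsNull + N'
FROM (SELECT custid, sorderdate, qty FROM dbo.dailysales) AS D
PIVOT (SUM(qty) FOR sorderdate IN (' + @cols + N')) AS P;';
EXEC sp_executesql @sql;[/code]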

Sum Negative Numbers

Posted: 14 Jul 2013 03:53 PM PDT

Hi All,

I have a column GNLBalance with negative and positive balances. I want to add these numbers together when another column, GNLFSMID, in the same table is equal to a series of numbers. I need the total closing/opening balance for each client with their account, account group & division.

Balance Business Rules:

2013 Opening balance = Closing balance 2012 (opening 2012 + Jan 12 to Dec 12 balance)
Closing balance 2013 = Opening + Jan 13 to current month balance

So:
Opening: FSMID 113 to 125
Closing: FSMID (113 to 125) + 127, 128, 129, 130

DDL

Table GeneralLedger
GNLID GNLCLIID GNLCOAID GNLFSMID GNLBalanceBase
1   1385  576  113   -4845.0000
2   1385  713  114    -395.8500
3   3139   39  115    8703.3400
4    727   39  116   -1321.6500
5    727   39  117   -8811.0000
6   3139  713  118  -15416.5200
7   1480  713  119   18429.9200
8   3144   39  120    1321.6500
9   1480  713  121  -10799.0000
10  3144  576  122    4371.0000
11  3139  713  123    1619.8500
12   727   45  124    1786.1400
13  3139   39  125      46.4200
14   727  576  127  -12802.4000
15  1480  713  128    1198.2600
16  1480  713  129   -1785.0000
17  3139  713  130     800.0000

Table Clients
CLIID CLICode CLIName
727   HARK    HARK CONSULTING
1385  3HPARC  3HARCHITECTS
1480  GUYCAR  GUY CARPENTER
3139  ABNAP   ABB PTE LTD
3144  SYSACC  SYSTEM ACCESS

Table ChartOfAccount
COAID COANAME1        COACode2Code COACode2Name2           COAAGPID
39    Total Billings  RV10         RV10 Billings           1
45    Prod Billing    RV40         RV40 Revenue            2
576   Loan            FD100G       FD100G Financial Debts  3
713   Receivables     FA301G       FA301G Loans Advances   4

Table AccountGroup
AGPID AGPNAME1
1     Assets
2     Goodwill
3     Deferred
4     Trade
5     Cash

Table FiscalMonth
FSMID FSMNAME1
113   Opening Balance 12
114   Jan 12
115   Feb 12
116   Mar 12
117   Apr 12
118   May 12
119   Jun 12
120   Jul 12
121   Aug 12
122   Sep 12
123   Oct 12
124   Nov 12
125   Dec 12
127   Opening Balance 13
128   Jan 13
129   Feb 13
130   Mar 13

Table ClientOwner
CLOID CLOCLIID CLODIVID
1     727      1
2     1385     2
3     1480     3
4     3139     3
5     3144     4
6     727      5

Table Division
DIVID DIVName1
1     Digital
2     Zenith
3     Stars
4     MSL
5     Leo

Expected Results

Client code (CLICODE), GL Account Group (AGPNAME1), GL Account (COANAME1),
3HPARC, Deferred, Loan,
3HPARC, Trade, Receivables,
HARK, Assets, Total Billings,
GUYCAR, Trade, Receivables,

-- continued on the same lines:
Closing balance (total of GNLBalanceBase when FSMID = 113 to 130), Division (DIVName1), GLChartCode (COACode2Code),
-4845, Zenith, FD100G
-395.85, Zenith, FA301G
-10132.65, Digital, RV10
7044.18, Stars, FA301G

[code="sql"]select distinct c.CLIName1 as 'Client',
       c.CLICode as 'Client Code',
       g.AGPName1 as 'GL Account Group',
       a.COAName1 as 'GL Account',
       CASE WHEN l.GNLFSMID IN (113,114,115,116,117,118,119,120,121,122,123,124,125,127,128,129,130)
            THEN SUM(l.GNLBalanceBase) ELSE '0' END AS [Closing Balance],
       d.DIVName1 as 'Division',
       d.DIVName2 as 'Division (2)',
       t.CLTName1 as 'Client Type',
       a.[COACode2Code] as 'GLChart Code 2',
       a.COACode2Name1 as 'GLChart Code 2 Name',
       CASE WHEN l.GNLFSMID IN (113,114,115,116,117,118,119,120,121,122,123,124,125)
            THEN SUM(l.GNLBalanceBase) ELSE '0' END AS [Opening Balance]
from vwClient c
inner join [vwChartOfAccount] a
    on a.[COASourceID] = c.[CLISourceID]
inner join [vwChartOfAccountGroup] g
    on g.[AGPID] = a.COAAGPID
inner join [vwGeneralLedger] l
    on l.GNLCLIID = c.[CLIID]
inner join [vwclientOwner] o
    on o.CLOID = c.CLIID
inner join [vwDivision] d
    on d.DIVID = o.CLODIVID
inner join [vwClientType] t
    on t.CLTID = c.[CLICLTID]
inner join [DW].[vwFiscalMonth] m
    on l.GNLFSMID = m.FSMID
group by c.CLIName1, c.CLICode, g.AGPName1, a.COAName1, l.GNLFSMID,
         d.DIVName1, d.DIVName2, t.CLTName1, a.COACode2Code, a.COACode2Name1[/code]
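
For what it's worth, the usual way to get per-range totals like these in one row is conditional aggregation, putting the CASE inside the SUM so that GNLFSMID does not have to appear in the GROUP BY; a sketch of just the two expressions:

[code="sql"]SUM(CASE WHEN l.GNLFSMID BETWEEN 113 AND 125
         THEN l.GNLBalanceBase ELSE 0 END) AS [Opening Balance],
SUM(CASE WHEN l.GNLFSMID BETWEEN 113 AND 125
           OR l.GNLFSMID BETWEEN 127 AND 130
         THEN l.GNLBalanceBase ELSE 0 END) AS [Closing Balance][/code]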

multiple records into single record

Posted: 31 Jul 2013 01:33 AM PDT

I have a task. In my table I have multiple records for a single icn, meaning each record carries different detail-line-level information about that icn. As this is health care data, my task is now to keep all the detail-level information for a particular icn in a single record. For example: icn = 1 may have 10 records, icn = 2 may have 20 records; not all icns will have the same number of records. So how can I create a single chunk, or record, for a particular icn (where ten records come together in a single record)?
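
If "single record" means one row per icn with the detail lines concatenated together, the pre-2017 T-SQL idiom is FOR XML PATH string aggregation; a sketch with hypothetical table and column names:

[code="sql"]SELECT c1.icn,
       STUFF((SELECT ',' + c2.detail_info
              FROM claims AS c2
              WHERE c2.icn = c1.icn
              FOR XML PATH('')), 1, 1, '') AS all_details
FROM claims AS c1
GROUP BY c1.icn;[/code]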

[SQL Server 2008 issues] Login Creation


Login Creation

Posted: 31 Jul 2013 07:06 PM PDT

Hi Team, while creating a new login, the Effective tab is missing in the Securables window. Please suggest.

Generate a day date according to existing month+year+daydiff

Posted: 31 Jul 2013 06:46 PM PDT

I'm using SQL 2008. I've got these 5 columns: StartMonth, StartYear, EndMonth, EndYear, DaysBetween. I don't have the day of these dates, and that's what I'm trying to generate. For example:

12 2008 1 2009 8

Now, I want to create a random date (start date and end date, formatted as dd/mm/yyyy) which will include the day and will make sense given the data I have under DaysBetween.

FOR INSTANCE, if I know that I've got 8 days (DaysBetween) and the start month is 12, the date must be from 24/12/2008 onwards, because if I add 8 days I get the EndMonth (1/2009). If I chose the date 2/12/2008 I would get 10/12/2008, and that's no good because the month is still 12... and I need 1 (2009).

How can I generate valid dates? Thank you.

How To DISTINCT COUNT with Windows Functions (i.e. OVER and PARTITION BY)

Posted: 21 May 2013 04:09 AM PDT

SQL 2012 Reference: http://msdn.microsoft.com/en-us/library/ms175997(v=sql.110).aspx

I'm trying to get a unique count of rows in a child table in a multi-table join, just like the example shown below from the above Microsoft article on COUNT(). I'm using the window functions (as opposed to old-fashioned aggregation with GROUP BY) because I need both aggregated and non-aggregated data to be included in the results.

The below example (example C in the above Microsoft document on T-SQL) shows that you can do the following:

[code="sql"]USE AdventureWorks2012;
GO
SELECT DISTINCT Name,
       COUNT(edh.BusinessEntityID) OVER (PARTITION BY edh.DepartmentID) AS EmployeesPerDept
FROM HumanResources.EmployeePayHistory AS eph
JOIN HumanResources.EmployeeDepartmentHistory AS edh
    ON eph.BusinessEntityID = edh.BusinessEntityID
JOIN HumanResources.Department AS d
    ON d.DepartmentID = edh.DepartmentID
WHERE edh.EndDate IS NULL
ORDER BY Name;[/code]

This uses the DISTINCT keyword to get a distinct count, whereas without it the COUNT() could be overstated by the number of rows returned by the other tables in the join.

Is there a way to do the equivalent in T-SQL for SQL Server 2008 R2? I've tried the above and it errors out under SQL 2008 R2, and example C above is NOT in the same Microsoft article when you change the platform to SQL 2008 R2, so I'm assuming this is a feature enhancement in SQL 2012. How then can you do this under SQL 2008 R2?

Thanks
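
One 2008 R2-compatible workaround is the double DENSE_RANK trick: within a partition, the ascending and descending dense ranks of a column sum to (distinct count + 1) on every row, provided the column has no NULLs. A sketch against the same query:

[code="sql"]SELECT DISTINCT Name,
       DENSE_RANK() OVER (PARTITION BY edh.DepartmentID ORDER BY eph.BusinessEntityID ASC)
     + DENSE_RANK() OVER (PARTITION BY edh.DepartmentID ORDER BY eph.BusinessEntityID DESC)
     - 1 AS EmployeesPerDept
FROM HumanResources.EmployeePayHistory AS eph
JOIN HumanResources.EmployeeDepartmentHistory AS edh
    ON eph.BusinessEntityID = edh.BusinessEntityID
JOIN HumanResources.Department AS d
    ON d.DepartmentID = edh.DepartmentID
WHERE edh.EndDate IS NULL
ORDER BY Name;[/code]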

Query execution time in Milisecond

Posted: 31 Jul 2013 06:02 PM PDT

Hi,

I have one problem statement to create a SQL query for. For the solution I have created three different queries: one using a CTE expression and two using subqueries. When I execute all three queries in SSMS on a local database, each takes zero seconds to execute.

I want to check the performance of each query and find the exact execution time in milliseconds. [b]How can I check it? Is there any other way to find the best query among all?[/b]

One way is to assign getdate() to a datetime variable before the query execution starts and after the execution completes, and find the difference between the two.
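
For reference, SSMS can report the timings directly: SET STATISTICS TIME prints CPU and elapsed milliseconds per statement to the Messages tab (a minimal sketch):

[code="sql"]SET STATISTICS TIME ON;
-- run each candidate query here; the Messages tab then shows
-- "SQL Server Execution Times: CPU time = X ms, elapsed time = Y ms."
SET STATISTICS TIME OFF;[/code]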

ADO NET Source works perfectly within BIDS but getting error in a SQL Agent Job

Posted: 30 Jul 2013 08:05 PM PDT

Hi, I have a SSIS package with an ADO NET Source for capturing data from an MS Access DB which is on a different server. So I have set up the System DSN in the 32-bit ODBC Data Source Administrator and used a .NET Provider\ODBC Data Provider connection manager in SSIS. This works perfectly within BIDS. However, when I put this SSIS package in a SQL Agent job, it does not work. I am getting the below error:

Code: 0xC0047062 Source: DFT_Customer_DLT ADO NET Source [16]
Description: System.Data.Odbc.OdbcException: ERROR [IM014] [Microsoft][ODBC Driver Manager] The specified DSN contains an architecture mismatch between the Driver and Application
   at System.Data.Odbc.OdbcConnection.HandleError(OdbcHandle hrHandle, RetCode retcode)
   at System.Data.Odbc.OdbcConnectionHandle..ctor(OdbcConnection connection, OdbcConnectionString constr, OdbcEnvironmentHandle environmentHandle)
   at System.Data.Odbc.OdbcConnectionOpen..ctor(OdbcConnection outerConnection, OdbcConnectionString connectionOptions)
   at System.Data.Odbc.OdbcConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningObject)
   at System.Data.ProviderBase.DbConnectionFactory.CreateNonPooledConnection(DbConnection owningConnection, DbConnectionPoolGroup poolGroup)
   at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
   at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
   at System.Data.Odbc.OdbcConnection.Open()
   at Microsoft.SqlServer.Dts.Runtime.ManagedHelper.GetManagedConnection(String assemblyQualifiedName, String connStr, Object transaction)
   at Microsoft.SqlServer.Dts.Runtime.Wrapper.IDTSConnectionManager100.AcquireConnection(Object pTransaction)
   at Microsoft.SqlServer.Dts.Pipeline.DataReaderSourceAdapter.AcquireConnections(Object transaction)
   at Microsoft.SqlServer.Dts.Pipeline.ManagedComponentHost.HostAcquireConnections(IDTSManagedComponentWrapper100 wrapper, Object transaction)
End Error
Error: 2013-07-31 03:29:50.19 Code: 0xC0047017 Source: DFT_Customer_DLT SSIS.Pipeline Description: component "ADO NET Source" (16) failed validation and returned error code 0x80131937. End Error
Error: 2013-07-31 03:29:50.19 Code: 0xC004700C Source: DFT_Customer_DLT SSIS.Pipeline Description: One or more component failed validation. End Error
Error: 2013-07-31 03:29:50.19 Code: 0xC0024107 Source: DFT_Customer_DLT Description: There were errors during task validation.

As per my analysis, there might be a problem between 32-bit and 64-bit. Details of my project environment and package:

SERVER --> Windows Server 2008 R2, 64-bit
PACKAGE --> 32-bit
ODBC data source --> 32-bit
SQL Server Agent --> 64-bit (I have also tried changing the property to 32-bit in Execution Options in the SQL Agent job.)

What needs to be done to resolve this error when executing with a SQL Agent job? Any and all help will be greatly appreciated... Thanks

checking the setup of your discs

Posted: 30 Jul 2013 10:00 PM PDT

I need to check what disk configuration we use to store our log files and data files. We use a third-party cloud company to host our servers, and I want to check that things are set up for optimal performance.

When I log onto the server and go into Computer Management, I just see 'basic disc 100gb' for the disk that holds my log files. If it was using RAID, would it show up here? When I go to 'My Computer', click on the properties of the same disk and then click the Hardware tab, I see something different:

Floppy disc drive
NECVMWare IDE CDR10 ATA....

Then under that, 3 virtual disks which say:

VMware Virtual disk SCSI Disk Device
VMware Virtual disk SCSI Disk Device
VMware Virtual disk SCSI Disk Device

If I then click on each of these and click Properties, I see:

device type: Disk drives
Manufacturer: (standard disk drives)
Location: Location 160 (Bus Number 0, Target Id 0, Lun 0)

Each one reads the same, except Target Id changes with each disk (0 - 2). Can I tell from this what RAID (if any?) is being used?

Autoshrink enabled on a database

Posted: 31 Jul 2013 07:04 AM PDT

I understand that it is highly undesirable to have autoshrink on a database, since this causes severe fragmentation. I have now inherited databases with autoshrink turned ON.

Once I turn OFF autoshrink, I imagine that the index fragmentation could be reduced by running reorganize/rebuild on the indexes. My question here would be: what about the fragmentation in the data files that was caused by autoshrink being ON? Could this be removed somehow?

SSRS fillfactor on indexes in reportServerTempdb

Posted: 31 Jul 2013 11:43 AM PDT

Does anyone have any recommendations on what the fill factor should be for indexes on the Segment and ChunkSegmentMapping tables in SSRS's ReportServerTempDB?

Baseline Configuration Analyzer Error

Posted: 08 Oct 2010 01:07 AM PDT

Hi Folks, I have another question today. I was trying to get an analysis from Microsoft Baseline Configuration Analyzer 2.0 for our SQL Server 2008 installation using [b]SQL Server 2008 R2 BPA[/b].

Now, I am running this BCA from my workstation, so I am selecting the "connect to another computer" option. I am sure that I am an administrator of that remote machine and this local machine, but to be sure I fed my ID and password into the required fields too. After this much configuration, when I try to run BPA it gives me the error: [b][i]The specified module "baselineconfigurationanalyzer" was not loaded because no valid module file was found in any module directory[/i][/b].

I have checked that everything is 64-bit (my workstation, the remote server, BPA). I have also set up WinRM to accept remote connections to the remote server. I tried to google this error, but it seems that nobody has had this before (or it displays lots of wrong results because of my long string). Is there any suggestion for me?

Thanks,
-Jack

Send Email Alerts immediately depending upon specific data in a column

Posted: 31 Jul 2013 10:05 AM PDT

Hi Experts, we do custom logging into a table. The requirement is that the developers should receive an email immediately whenever there is [b]specific data[/b] in that column. Any input is appreciated. Thanks a ton in advance!

Bizarre Database Mail problem

Posted: 31 Jul 2013 09:41 AM PDT

I'm having a very strange problem related to Database Mail. I wonder if anyone can help me with it.

I have a SQL Server 2008 database that supports our help desk. Users create trouble tickets, which send out various emails. To send an email, my app inserts a record into a ticket email queue table. The table contains a createdate field autopopulated with getdate(), and a senddate field that's initially null. Meanwhile, I have a SQL job that runs once a minute, its job being to execute a stored procedure. The SP queries for all queue records having a null send date, then loops through the result set, sending each email in turn. Our help desk is pretty busy, so typically multiple records will be inserted into the queue in a minute. Normally, you will see that each went out within a minute or two of being put in the queue.

This morning, however, the email job history suddenly began reporting that the job failed. The reason given was that I needed to specify @recipient or @copy_recipents or @blind_copy_recipients. My SP specifies @recipients.

Then I noticed that even though the job was failing, some emails were going out. Then I noticed that the time between the email queue record's createdate and senddate was growing. The difference grew from the usual 1-2 minutes to, at this moment, three and a half hours. I could see too from the senddate field that exactly 1 email is being sent per minute. So I'm thinking that the SP starts working through its result set, sends the first email OK, then dies, causing the job to report failure.

I have made no changes to the SP, job, or anything else in ages. I have no idea how it happened or how to fix it. Restarting SQL Server did not help. Can anyone shed some light?

The txt file is not being created from the SQL table.

Posted: 31 Jul 2013 04:19 AM PDT

Hi, I am trying to copy the table MAMI.EngravingsForVendor into the flat file C:\JohnTest\EngravedProducts.txt, and the script below fails. I am including the script, the error, and the content of the variable @sql, which I captured while debugging.

-- SCRIPT THAT IS NOT WORKING...

    DECLARE @table NVARCHAR(MAX)
    DECLARE @filename VARCHAR(MAX)
    SET @table = 'MAMI.EngravingsForVendor'
    SET @filename = 'C:\JohnTest\EngravedProducts.txt'

    IF OBJECT_ID(@table) IS NOT NULL
    BEGIN
        DECLARE @sql NVARCHAR(MAX), @cols NVARCHAR(MAX) = N'';

        SELECT @cols += ',' + name
        FROM sys.columns
        WHERE [object_id] = OBJECT_ID(@table)
        ORDER BY column_id;

        SELECT @cols = STUFF(@cols, 1, 1, '');

        SET @sql = N'EXEC master..xp_cmdshell ''bcp "SELECT '''''
            + REPLACE(@cols, ',', ''''',''''')
            + ''''' UNION ALL SELECT ' + 'RTRIM('
            + REPLACE(@cols, ',', '),RTRIM(')
            + ') FROM ' + DB_NAME() + '.' + @table
            + '" queryout "' + @filename + '" -c -T''';

        EXEC sp_executesql @sql;
    END
    ELSE
    BEGIN
        SELECT 'The table ' + @table + ' does not exist in the database';
    END

-- THE ERROR THAT IS BEING PRODUCED...

    SQLState = S1000, NativeError = 0
    Error = [Microsoft][SQL Server Native Client 10.0]Unable to open BCP host data-file
    NULL

-- THE CONTENT OF VARIABLE @sql...

    EXEC master..xp_cmdshell 'bcp "SELECT ''VendorID'',''VendorEMailAddress'',''DocumentNumber'',''ProductCode'',''Quantity'',''VendorProductCode'',''Description'',''EngravedText'',''EngravedText2'',''ShippingAddressLine1'',''ShippingAddressLine2'',''ShippingAddressLine3'',''ShippingAddressLine4'',''ShippingAddressLine5'',''ShippingAddressLine6'',''ShippingAddressLine7'',''Country'',''ShipToTelephoneNumber'',''ShipToEMailAddress'',''Status'' UNION ALL SELECT RTRIM(VendorID),RTRIM(VendorEMailAddress),RTRIM(DocumentNumber),RTRIM(ProductCode),RTRIM(Quantity),RTRIM(VendorProductCode),RTRIM(Description),RTRIM(EngravedText),RTRIM(EngravedText2),RTRIM(ShippingAddressLine1),RTRIM(ShippingAddressLine2),RTRIM(ShippingAddressLine3),RTRIM(ShippingAddressLine4),RTRIM(ShippingAddressLine5),RTRIM(ShippingAddressLine6),RTRIM(ShippingAddressLine7),RTRIM(Country),RTRIM(ShipToTelephoneNumber),RTRIM(ShipToEMailAddress),RTRIM(Status) FROM DS_DEV.MAMI.EngravingsForVendor" queryout "C:\JohnTest\EngravedProducts.txt" -c -T'
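
For what it's worth, "Unable to open BCP host data-file" usually means the output folder does not exist or is not writable where bcp actually runs: since it is launched through xp_cmdshell, the path C:\JohnTest must exist on the database server itself (not on your client machine) and be writable by the SQL Server service account. A quick server-side check:

    -- Both commands run on the server, under the service account.
    EXEC master..xp_cmdshell 'dir C:\JohnTest';
    EXEC master..xp_cmdshell 'echo test > C:\JohnTest\write_test.txt';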

"Failed to retrieve data for this request" error when trying to add feature

Posted: 08 Dec 2010 12:02 PM PST

Hi everyone, I want to check whether anyone has encountered a similar issue. I am on Windows Server 2003 SP2, using SQL Server 2008 R2. When trying to add a feature using the installer, the following error was encountered: SQL Server Setup has encountered the following error: "Failed to retrieve data for this request." When I tried applying the cumulative update, this exact same error appeared. Any suggestions will be appreciated! Thanks! Regards, Al

SSIS Import - Delimited File with Fields > 8k Characters

Posted: 29 Jul 2013 05:46 AM PDT

Hello all, I'm attempting to import a text-delimited file with a few fields that exceed 8k characters. I'm running into this error on import: "Text was truncated or one or more characters had no match in the target code page." How do I get around this? I've attempted to import those columns as text, varchar(8000), and varchar(max) with no luck so far. I'm hoping I don't have to split those fields into multiple rows when they are larger than 8k characters.
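
One thing worth checking besides the destination column type: the Flat File Connection Manager defaults every column to a 50-character DT_STR, so the truncation can happen before the data ever reaches the destination. On the connection manager's Advanced page, the long columns need a text-stream type (DT_TEXT, or DT_NTEXT for Unicode). A hedged sketch of a destination table wide enough not to truncate, with placeholder names:

    -- Placeholder staging table: nvarchar(max) columns cannot be the
    -- source of the truncation, so any remaining error is on the SSIS side.
    CREATE TABLE dbo.WideImportStaging (
        ShortCol nvarchar(255) NULL,
        LongCol1 nvarchar(max) NULL,
        LongCol2 nvarchar(max) NULL
    );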

How to separate the address?

Posted: 30 Jul 2013 11:57 PM PDT

Hi friends,

    Create Table Adrtbl (Address varchar(80))
    Create table SpltAdr (Prefix char(2), StreetName varchar(50), StreetType varchar(10))
    GO
    insert into Adrtbl
    select 'WESTLAKE DR'
    union select 'W BLK BROWN DEER RD'
    union select 'JOHNSON ST'
    union select 'FREDONIA AVE'
    union select 'BANK ONE'
    union select 'N MARTIN LUTHER KIND DRIVE'
    GO
    insert into SpltAdr
    select '', 'WESTLAKE', 'DR'
    union select 'W', 'BLK BROWN DEER', 'RD'
    union select '', 'JOHNSON', 'ST'
    union select '', 'FREDONIA', 'AVE'
    union select '', 'BANK ONE', ''
    union select 'N', 'MARTIN LUTHER KIND DRIVE', ''
    GO
    select * from Adrtbl  -- Source format
    select * from SpltAdr -- Output format

I am looking to split the address column, and I have provided a sample script here. I tried my own approach but failed. I have lists of valid prefixes and street types. For example: StreetType: Aly, Ave, Bch, Blf, Blvd, Rd, Dr, St. Prefix: E, W, N, S, NE, NW, SE, SW. The prefix and street type must come from these lists. Please give me your suggestions, friends. Can any function or T-SQL statement do this?
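
One workable pattern is token-based: if the first word of the address is in the prefix list, peel it off; if the last word is in the street-type list, peel that off; whatever remains is the street name. A sketch against the sample tables above; the lookup tables are placeholders built from the lists in the question.

    -- Placeholder lookups from the lists in the question.
    DECLARE @Prefix TABLE (p varchar(2));
    INSERT @Prefix VALUES ('E'),('W'),('N'),('S'),('NE'),('NW'),('SE'),('SW');

    DECLARE @StreetType TABLE (t varchar(10));
    INSERT @StreetType VALUES ('ALY'),('AVE'),('BCH'),('BLF'),('BLVD'),('RD'),('DR'),('ST');

    SELECT a.Address,
           ISNULL(pre.p, '') AS Prefix,
           -- Strip a matched prefix from the front and a matched type
           -- from the back; the remainder is the street name.
           LTRIM(RTRIM(SUBSTRING(a.Address,
                   1 + ISNULL(LEN(pre.p) + 1, 0),
                   LEN(a.Address) - ISNULL(LEN(pre.p) + 1, 0)
                                  - ISNULL(LEN(st.t) + 1, 0)))) AS StreetName,
           ISNULL(st.t, '') AS StreetType
    FROM Adrtbl AS a
    OUTER APPLY (SELECT TOP (1) p FROM @Prefix
                 WHERE a.Address LIKE p + ' %'
                 ORDER BY LEN(p) DESC) AS pre
    OUTER APPLY (SELECT TOP (1) t FROM @StreetType
                 WHERE a.Address LIKE '% ' + t
                 ORDER BY LEN(t) DESC) AS st;

This reproduces the expected output for the sample rows ('N MARTIN LUTHER KIND DRIVE' keeps 'DRIVE' because only 'DR' is in the type list, and 'BANK ONE' passes through untouched); it would need more care for an address that is nothing but a prefix and a type.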

Transactional Replication issue

Posted: 31 Jul 2013 06:35 AM PDT

Hi, I am having issues with transactional replication. Yesterday I reinitialized the subscription with a new snapshot. As I understand it, once the bulk copy of data into the tables and the creation of the primary keys complete, the snapshot should be reported as delivered and the delivery of transactions should begin. Instead, it keeps reinitializing the tables: it reinitialized them three times in one day. Please advise. Thanks

Snapshot Agent will not start

Posted: 17 Aug 2011 04:34 AM PDT

I'm working on a SQL Server 2008 R2 Enterprise server, and there are other transactional replication publications working on this server. The distributor is also a SQL Server 2008 R2 Enterprise instance.

Today I created a new publication and added a subscriber (going to a different instance), and the snapshot agent will not start. Here are the errors it gives me:

Error messages: The replication agent encountered a failure. See the previous job step history message or Replication Monitor for more information. The step failed.

Here are the details from the job: Executed as user: (bob). Agent shutdown. For more information, see the SQL Server Agent job history for job (bob's job). [SQLSTATE 01000] (Message 20557) Replication-Replication Snapshot Subsystem: agent (bob) failed. The replication agent encountered a failure. See the previous job step history message or Replication Monitor for more information. The step failed. [SQLSTATE 42000] (Error 14151). The step failed.

I have verified that the security is the same as for the rest of the replication, and the SQL account is a db_owner of both the destination and source databases. Thoughts?
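
In cases like this the job-step message is usually just a wrapper; the snapshot agent's own history on the distributor tends to carry the real error text. A hedged sketch, assuming the distribution database has its default name:

    -- Run on the distributor; 'distribution' is the default database name.
    USE distribution;
    SELECT TOP (20) h.time, h.comments, h.error_id
    FROM dbo.MSsnapshot_history AS h
    ORDER BY h.time DESC;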

Create User

Posted: 31 Jul 2013 02:34 AM PDT

Hi team, how do I create a new user with a password? User name: STLOK; password: STLOK. It should be a read-only user.
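
A minimal sketch for SQL authentication; the database name is a placeholder, and read-only access comes from the db_datareader fixed database role. CHECK_POLICY is relaxed only because the requested password would fail most Windows password policies.

    CREATE LOGIN STLOK WITH PASSWORD = 'STLOK', CHECK_POLICY = OFF;

    USE YourDb;   -- placeholder database name
    CREATE USER STLOK FOR LOGIN STLOK;
    EXEC sp_addrolemember 'db_datareader', 'STLOK';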

Data import

Posted: 31 Jul 2013 12:51 AM PDT

Greetings friends, I have a .dta file that's formatted in an unusual way, and I would like to discuss ways of importing it into a staging database.

The first field of each record is a record type. For each entity there will be one record of type 01. For each type 01 record there will be one or more type 02 records, and there may be zero or more records of types 03, 04, 05 and 06. For example:

    01*9517469000*117844165*0335*The Occupier* CRICKET CLUB***
    02*1*Ground*Tea Room*35.40*9.50*336
    02*2*Ground*Changing*28.00*9.50*266
    02*3*Ground*Kitchen*16.40*9.50*156
    02*4*Ground*Scorer*3.80*9.50*36
    02*5*Ground*Mower Shed*15.00*2.38*36
    03*Cricket Square*2.0*100.00*+200
    03*Cricket Ground*2.0*850.00*+1726
    01*9517470000*200027165*0335*The Occupier* CRICKET CLUB***
    02*1*Ground*Club Room*32.60*19.00*619
    02*2*Ground*Changing Room*21.60*12.67*274
    02*3*Ground*Internal Storage*2.80*9.50*27
    02*4*Ground*Bar*13.90*19.00*264
    03*Cricket Square*1.0*100.00*+100
    03*Cricket Pitch*2.8*500.00*+1420
    01*9517471000*203509165*0335*The Occupier*CRICKET & FOOTBALL CLUB***
    02*1*Ground*Pavilion*86.20*20.00*1724
    02*2*Ground*Garage*23.40*10.00*234
    02*3*Ground*Changing Room*25.00*13.33*333
    03*Cricket Square*1.0*100.00*+100
    03*Playing Field*1.8*500.00*+910
    01*9517472000*200736165*0335*The Occupier*PT GNDOUSE****ROWOSE*
    02*1*Ground*Kitchen*37.14*16.67*619

I would like to create six staging tables to hold the different record types, but I'm not sure whether it's best to load all the data into SQL first and then split it out, or to go down the SSIS route for this kind of thing. Obviously, once the data is split there must be a unique ID linking the records together. Any suggestions? Thanks in advance.
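
A load-then-split sketch, assuming each whole line can land in a single column (table names and the file path are placeholders): bulk load the raw lines with an IDENTITY to preserve file order, stamp every row with the RowId of the most recent type 01 record as the linking ID, then route rows by record type.

    CREATE TABLE dbo.RawStaging (
        RowId int IDENTITY(1,1) PRIMARY KEY,  -- preserves file order
        Line  varchar(max) NOT NULL
    );
    GO
    -- BULK INSERT maps file fields to every table column, so load
    -- through a view that exposes only the data column.
    CREATE VIEW dbo.vRawStagingLoad AS SELECT Line FROM dbo.RawStaging;
    GO
    BULK INSERT dbo.vRawStagingLoad
    FROM 'C:\import\file.dta'         -- placeholder path
    WITH (ROWTERMINATOR = '\n',
          FIELDTERMINATOR = '|');     -- any character absent from the file

    -- Tag each row with its governing 01 record, then split by type.
    WITH Tagged AS (
        SELECT r.RowId,
               r.Line,
               LEFT(r.Line, 2) AS RecType,
               (SELECT MAX(p.RowId)
                  FROM dbo.RawStaging AS p
                 WHERE p.RowId <= r.RowId
                   AND LEFT(p.Line, 2) = '01') AS EntityRowId
        FROM dbo.RawStaging AS r
    )
    SELECT RowId, EntityRowId, Line
    INTO dbo.Stage02                  -- repeat for types 01 and 03-06
    FROM Tagged
    WHERE RecType = '02';

From there, each per-type staging table can be parsed into real columns with a string splitter on '*', with EntityRowId as the key linking children back to their 01 record. SSIS can do the same with a conditional split on the first two characters, so the choice mostly comes down to which toolset your team maintains more comfortably.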

Change query from sql 2000 to 2008

Posted: 31 Jul 2013 02:18 AM PDT

Hi, I want to modify the query below, which runs on SQL Server 2000, so it works on SQL Server 2008. Can you please help me?

    select *
    FROM company_role_match crm, company_role_match crm2,
         company_role_match crm3, access_control_doc acd
    WHERE crm.id =* crm3.id
      AND crm2.id =* crm3.id
      AND crm.user_id = 547               -- @query_user_id
      AND crm2.user_id = 4397             -- @profile_id
      AND acd.ac_type = 'User Roles'      -- @access_control_type
      AND acd.ac_id =* crm3.id
      AND ( crm.granted = 1 OR crm2.granted = 1 )
      AND crm3.id = 32                    -- @access_id
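
The =* operator is old-style right outer join syntax, which SQL Server 2008 rejects at compatibility level 90 and above. A hedged rewrite with explicit joins: crm3 is the preserved table, and because old-style filters on the nullable side behave (ambiguously) as join predicates, the per-alias filters are moved into the ON clauses. Please compare row counts against the original under 80 compatibility before adopting it.

    SELECT *
    FROM company_role_match AS crm3
    LEFT JOIN company_role_match AS crm
           ON crm.id = crm3.id
          AND crm.user_id = 547                    -- @query_user_id
    LEFT JOIN company_role_match AS crm2
           ON crm2.id = crm3.id
          AND crm2.user_id = 4397                  -- @profile_id
    LEFT JOIN access_control_doc AS acd
           ON acd.ac_id = crm3.id
          AND acd.ac_type = 'User Roles'           -- @access_control_type
    WHERE crm3.id = 32                             -- @access_id
      AND (crm.granted = 1 OR crm2.granted = 1);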

Disk space used last year

Posted: 30 Jul 2013 09:01 PM PDT

Hi, can anyone please provide a query to find how much disk space was used in the last year? I need to estimate the space required for one more year, as the disk for a particular drive is about to fill up. Any immediate assistance would be a great help.
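
If full backups have been running all along, msdb's backup history is a reasonable proxy for growth. A sketch comparing each database's earliest and latest full-backup sizes over the past year (backup_size is in bytes):

    WITH b AS (
        SELECT database_name, backup_size,
               ROW_NUMBER() OVER (PARTITION BY database_name
                                  ORDER BY backup_start_date ASC)  AS rn_first,
               ROW_NUMBER() OVER (PARTITION BY database_name
                                  ORDER BY backup_start_date DESC) AS rn_last
        FROM msdb.dbo.backupset
        WHERE type = 'D'   -- full backups only
          AND backup_start_date >= DATEADD(YEAR, -1, GETDATE())
    )
    SELECT database_name,
           MAX(CASE WHEN rn_first = 1 THEN backup_size END) / 1048576.0 AS size_year_ago_mb,
           MAX(CASE WHEN rn_last  = 1 THEN backup_size END) / 1048576.0 AS size_now_mb,
           (MAX(CASE WHEN rn_last  = 1 THEN backup_size END)
          - MAX(CASE WHEN rn_first = 1 THEN backup_size END)) / 1048576.0 AS growth_mb
    FROM b
    GROUP BY database_name
    ORDER BY database_name;

One caveat: backup_size reflects the data actually backed up, not the physical file sizes on disk (which include unused space), so treat the year-over-year delta as a lower bound when extrapolating.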

Query from SSRS to SSMS

Posted: 30 Jul 2013 09:29 PM PDT

Hi team, we have an SSRS report with a select query; when we generate the report, the database hangs. How can we find the query that is executed inside the report? Is there a way, or a query, to find the statement executed through the SSRS report? Please suggest.
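
One way to catch it in the act is to look at active requests while the report runs; SSRS fires its dataset queries under the report data source's login, so filtering on that login (a placeholder below) narrows the list quickly:

    SELECT r.session_id,
           s.login_name,
           r.status,
           r.total_elapsed_time AS elapsed_ms,
           t.text AS running_sql
    FROM sys.dm_exec_requests AS r
    JOIN sys.dm_exec_sessions AS s
      ON s.session_id = r.session_id
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE s.login_name = 'ReportDataSourceLogin';  -- placeholder login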

Find executed scripts

Posted: 30 Jul 2013 09:50 PM PDT

Hi team, I have 10 select statements and 10 insert statements, and I executed all 20 statements individually. Now I want a report or result that displays all 20 executed statements. To list the latest executed queries I am using the code below, but it is not correct:

    SELECT TOP 10 DMExQryStats.last_execution_time AS [Executed At],
           DMExSQLTxt.text AS [Query]
    FROM sys.dm_exec_query_stats AS DMExQryStats
    CROSS APPLY sys.dm_exec_sql_text(DMExQryStats.sql_handle) AS DMExSQLTxt
    ORDER BY DMExQryStats.last_execution_time DESC

Please suggest.
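
For what it's worth, one visible issue is TOP 10: it can never return all twenty statements. A corrected sketch, with the caveat that sys.dm_exec_query_stats only covers plans still in the cache, so statements whose plans were evicted (or never cached) will be missing regardless:

    SELECT TOP 20 qs.last_execution_time AS [Executed At],
           st.text AS [Query]
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.last_execution_time DESC;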

Connect to sever by name rather than IP address

Posted: 30 Jul 2013 08:05 PM PDT

Hi, when I first open SSMS, the 'Connect to Server' logon box appears. Its server-name drop-down contains a whole bunch of IP addresses for all the servers we use in our production environment. If I click 'Cancel', the 'Connect to Server' box goes away and I am left with SSMS open; at the bottom left I can see my 'Registered Servers' tab and my 'Object Explorer' tab.

The Registered Servers tab lists servers by name, which is far easier when I need to connect to a particular server. The problem is that when I want to change connection by clicking 'Connect Object Explorer', it opens the 'Connect to Server' box again with all the IP addresses. I have tried typing server names in place of the IP addresses, but it fails to connect.

How do I get this list of IP addresses to display server names so it's easier to switch when I want to connect? This is probably really basic, but it's one of those things you probably only set up once and then it's done! Any tips?

Export Calendar items from Exchange 2010 Public folder to MS SQL 2008

Posted: 30 Jul 2013 09:45 PM PDT

Overall situation: we are migrating from a Windows Server 2003 domain to a Windows Server 2008 R2 domain. At the same time, we are migrating mailboxes from Exchange 2007 to Exchange 2010.

Current situation: we use an Access 2003 API (a front-end MS Access application) to read the calendar items of a public folder and place them in a SQL Server 2008 database. This public folder resides on Exchange 2007. The MS Access "application" sits on the D: drive of the PCs of the people who have permission to put items into this public calendar, and it uses the Outlook 2003 client to get the calendar data.

Problem: we cannot find an appropriate Access 2010 API to use with the SQL Server 2008 database.

Desired situation: a solution (via PowerShell or other scripting) that reads the calendar items directly from the public folder on Exchange 2010 (without needing an Outlook client) and copies/syncs them to the SQL Server 2008 database (on another server) at several points in time (e.g. every 6 hours, or every day at 18:00).

How to set publication compatibility level for transactionnal replication ?

Posted: 30 Jul 2013 09:18 PM PDT

Hello, I'm trying to set up transactional replication from one SQL Server 2008 R2 instance to another. The publication created using the wizard causes the snapshot agent to fail with the error:

[...] Column [col name] in object FeaturedAgent contains type [col type], which is not supported in the target server version, SQL Server 2000 [...]

I'm hitting the same problem as in this post: replication failed on one table (http://social.msdn.microsoft.com/Forums/sqlserver/en-US/dd88985a-5e47-4904-8eda-d41d1e276bdd/replication-failed-on-one-table). The solution given there was: "Changing the publication database compatibility level to 100." But how do you change the publication compatibility level for transactional publications? The publication wizard only offers to set the subscriber type for merge publications, and the publication_compatibility_level property seems to be available only for sp_changemergepublication, not for sp_changepublication. Thanks for any help or information.
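
For what it's worth, the fix quoted from that thread refers to the compatibility level of the publication database itself, which is plain DDL to inspect and raise (the database name is a placeholder; whether this alone clears the snapshot error for a transactional publication would need testing on your setup):

    -- Check the current level.
    SELECT name, compatibility_level
    FROM sys.databases
    WHERE name = 'YourPublicationDb';   -- placeholder

    -- Raise it to 100 (SQL Server 2008).
    ALTER DATABASE YourPublicationDb SET COMPATIBILITY_LEVEL = 100;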
