[SQL Server 2008 issues] SQL date script help
- SQL date script help
- Indexing question
- SQL to divide one row by another row based on formula or rule.
- Should I rely on the table definition to implement logic in a stored proc?
- Join yields different results if used with CTE or a real table
- Oracle to 2008 R2 SSIS
- script
- Query hangs inside sproc but runs in seconds when run in Query Analyzer!
- UPDATE HELP
- Average hourly rowcounts
- How do we know whether or not a column is involved in an index?
- sp_updatestats and the default sampling rate
- Find difference between two data sets
- BCP
- Not able to find historical data using SQL Server Profiler in SQL Server 2008
- Index rebuild before table update
- Replication error while creating a new publication
- Transactional replication problem
- Primary key or not to primary key, that is the question
- Database backup and restoration
- HEAP index ????
- Snap Mgr for SQL breaking log chains
- Impersonate possible for a security group?
- Insert duplicate rows based on number in the column
- Issue in Linked Server
- Truncate table
- Function in where clause
- BCP to import tab delimited file with header
- ROWLOCK
- Giving permission for each table within a database
- Distributed Query
- SSMS User Interface Tip to Freeze Panes.
- Why is RPC duration dependent on which server is making the call?
- Decrypting values using code other than sql command
SQL date script help Posted: 21 Mar 2013 12:19 PM PDT

I need to check whether the day is Sat, Sun or Mon and the time is between Sat 6am and Monday 6am. Can you please help? I am trying something like this but I am stuck:

IF datepart(dw, getdate()) IN (1,7,2) AND (datepart(hour,getdate()) Between 6 AND ....)
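One way to express that window is to test each day separately rather than with a single BETWEEN. A minimal sketch, assuming the default DATEFIRST setting of 7 (US English), where DATEPART(dw, ...) returns 1 for Sunday and 7 for Saturday:

[code="sql"]
DECLARE @now datetime = GETDATE();
DECLARE @dw  int = DATEPART(dw, @now);    -- 1 = Sunday ... 7 = Saturday (assumes @@DATEFIRST = 7)
DECLARE @hr  int = DATEPART(hour, @now);

IF (@dw = 7 AND @hr >= 6)                 -- Saturday from 06:00 onwards
   OR @dw = 1                             -- all of Sunday
   OR (@dw = 2 AND @hr < 6)               -- Monday before 06:00
    PRINT 'Inside the Sat 6am - Mon 6am window';
ELSE
    PRINT 'Outside the window';
[/code]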
Indexing question Posted: 21 Mar 2013 10:17 AM PDT

Hello,

I'm trying to create indexes on a table that will hold all data changes to any column of any table, so the number of rows will be pretty big. Here's the table structure:

ID int identity
chg_type char(1) not null
tbl_nm varchar(40) not null
p_key_nm varchar(40) not null
p_key_id int not null
col_nm varchar(40) not null
old_txt varchar(1000) null
new_txt varchar(1000) null
mod_dt datetime not null
mod_by varchar(50) not null

Common query #1 is something like:

select old_txt, new_txt, mod_dt, mod_by from table1 where tbl_nm = 'tbl_name' and p_key_id = 111 and p_key_nm = 'ttt'

Common query #2 is something like:

select tbl_nm, old_txt, new_txt, mod_dt, mod_by from table1 where mod_dt > '1/1/2013' and mod_by = 'uuu'

I created indexes like this:
1. PK on the ID column
2. Non-unique NC index on p_key_id, tbl_nm, including old_txt and new_txt
3. NC index on mod_dt, mod_by

I wonder if these indexes are good enough (as a starting point) to cover both queries without adding much overhead?

Thanks much for your help!!
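For comparison, here is a sketch of index definitions shaped around those two query patterns. The table name table1 comes from the post, the index names are made up, and this is only an illustration of matching key columns to predicates, not a tested recommendation:

[code="sql"]
-- Query #1: equality predicates on tbl_nm, p_key_nm, p_key_id; output columns carried as INCLUDEs
CREATE NONCLUSTERED INDEX IX_audit_key
    ON dbo.table1 (tbl_nm, p_key_nm, p_key_id)
    INCLUDE (old_txt, new_txt, mod_dt, mod_by);

-- Query #2: equality on mod_by, range on mod_dt
CREATE NONCLUSTERED INDEX IX_audit_mod
    ON dbo.table1 (mod_by, mod_dt)
    INCLUDE (tbl_nm, old_txt, new_txt);
[/code]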
SQL to divide one row by another row based on formula or rule. Posted: 19 Mar 2013 03:35 AM PDT

Hi All,

I have the table below and need to divide one row by another row based on the formula.

Date       Name  ID  Result
3/18/2013  A     1   5
3/18/2013  B     2   16
3/18/2013  C     3   21
3/18/2013  D     4   11
3/18/2013  E     5   45
3/18/2013  F     6   22
3/18/2013  G     7   5
3/19/2013  A     1   2
3/19/2013  B     2   7
3/19/2013  C     3   15
3/19/2013  D     4   9
3/19/2013  E     5   19
3/19/2013  F     6   12
3/19/2013  G     7   3

Please suggest how to proceed to get the result below. The Formula column is used to get the Result(%) column.

Formula   Date       Result(%)
B/A       3/19/2013  0.285714286
D/C       3/19/2013  0.6
F/(E-G)   3/19/2013  0.75
B/A       3/18/2013  0.3125
D/C       3/19/2013  0.523809524
F/(E-G)   3/19/2013  0.55

Thanks in advance.
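A minimal sketch of one approach: pivot each date's rows into columns with conditional aggregation, then apply the formulas with CROSS APPLY. The table name SourceData is hypothetical, and the ratios are written exactly as the formulas read (B/A, D/C, F/(E-G)), so adjust if the intended calculation is the other way round:

[code="sql"]
WITH pivoted AS
(
    SELECT [Date],
           MAX(CASE WHEN [Name] = 'A' THEN Result END) AS A,
           MAX(CASE WHEN [Name] = 'B' THEN Result END) AS B,
           MAX(CASE WHEN [Name] = 'C' THEN Result END) AS C,
           MAX(CASE WHEN [Name] = 'D' THEN Result END) AS D,
           MAX(CASE WHEN [Name] = 'E' THEN Result END) AS E,
           MAX(CASE WHEN [Name] = 'F' THEN Result END) AS F,
           MAX(CASE WHEN [Name] = 'G' THEN Result END) AS G
    FROM dbo.SourceData                   -- hypothetical table name
    GROUP BY [Date]
)
SELECT x.Formula, p.[Date], x.[Result(%)]
FROM pivoted AS p
CROSS APPLY ( VALUES ('B/A',     1.0 * p.B / NULLIF(p.A, 0)),
                     ('D/C',     1.0 * p.D / NULLIF(p.C, 0)),
                     ('F/(E-G)', 1.0 * p.F / NULLIF(p.E - p.G, 0))
            ) AS x (Formula, [Result(%)]);
[/code]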
Should I rely on the table definition to implement logic in a stored proc? Posted: 21 Mar 2013 07:09 AM PDT

I want to hear your thoughts on this. Is there a correct way to do this or an accepted best practice? Here is the scenario.

[code="sql"]
CREATE TABLE dbo.parent
(
    some_id    TINYINT IDENTITY(1, 1),
    string_val VARCHAR(10),
    CONSTRAINT pk PRIMARY KEY (some_id)
)

CREATE TABLE dbo.child
(
    some_blah           SMALLINT,
    some_id_from_parent TINYINT NOT NULL,
    CONSTRAINT f FOREIGN KEY (some_id_from_parent) REFERENCES dbo.parent(some_id)
)

CREATE PROC dbo.insert_into_child (@list_of_string_vals) -- @list_of_string_vals converted to a table var
  BEGIN TRY
    BEGIN TRAN
    INSERT INTO dbo.child (some_id_from_parent)
    SELECT P.some_id
    FROM dbo.parent P
    LEFT JOIN @list_of_string_vals C ON P.string_val = C.string_val
    -- left join, so if there are nulls then the insert will fail cos of the column definition
    COMMIT TRAN
  END TRY
  BEGIN CATCH
    -- roll back
  END CATCH
[/code]

Let's say I have a table called dbo.child with an integer column (some_id_from_parent) that is defined as NOT NULL. The values in this column are foreign key children from a parent table, and each primary key in the parent table has a string value. The app passes in a list of these string values, and I do a (left) join on the parent table to gather the id values and insert them into the child table. Now, in the list of string values that are passed there might be a value that does not belong in the parent table; in such a case, none of the string values that were passed should be recorded in the child table.

So in my proc I put the insert in a transaction and rely on the table definition where the column is declared NOT NULL. When a string value that is not in the parent table comes in as part of a list, the id value for it is null (since I am doing a left join) and the insert fails because the column is declared NOT NULL, so I roll back the transaction.

However, I am wondering if this is a good way of implementing this. My main concern is that if someone changes the table definition so that the column is nullable, this proc no longer functions as intended. Should I explicitly check for null values in the proc and then decide to roll back (which seems a more robust way of implementing this, but looks like more explicit work)?

In general, how should I treat table definitions? Do they strictly represent business rules that say what kind of data goes into the table, or can I use that information to implement other logic?
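If you decide not to lean on the NOT NULL definition, the check can be made explicit inside the procedure. A rough sketch, assuming @list_of_string_vals is the table variable built from the passed-in strings and has a string_val column (all names are taken from the pseudo code above):

[code="sql"]
-- Reject the whole batch up front if any passed-in value has no parent row.
IF EXISTS ( SELECT 1
            FROM @list_of_string_vals L
            LEFT JOIN dbo.parent P ON P.string_val = L.string_val
            WHERE P.some_id IS NULL )
BEGIN
    RAISERROR('One or more values do not exist in dbo.parent; nothing was inserted.', 16, 1);
    RETURN;
END;

INSERT INTO dbo.child (some_id_from_parent)
SELECT P.some_id
FROM @list_of_string_vals L
JOIN dbo.parent P ON P.string_val = L.string_val;
[/code]

This keeps the business rule visible in the procedure itself, so a later change to the column's nullability does not silently change the behaviour.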
Join yields different results if used with CTE or a real table Posted: 21 Mar 2013 10:22 AM PDT

Hi,

Here is the SQL I used as an example:

[code="sql"]
--CREATE AND POPULATE TABLE AAA ------------------------------------------
CREATE TABLE [dbo].[AAA](
    [Col1] [nchar](10) NOT NULL,
    [RowNumber] [int] NOT NULL
) ON [PRIMARY]
GO

INSERT INTO [dbo].[AAA] ([Col1], [RowNumber])
VALUES ('a', 1), ('s', 2), ('d', 3), ('e', 4), ('f', 5), ('g', 6)
GO

--QUERY No 1 ----------------------------------------------------------------
;with BBB as
(
    select top 3 ABS(checksum(newid())) % 6 + 1 as RandomRow, RowNumber, Col1
    from AAA
    order by RowNumber
)
select A.Col1, A.RowNumber
from AAA as A
join BBB On A.RowNumber = BBB.RandomRow;

--QUERY No 2 ----------------------------------------------------------------
select top 3 ABS(checksum(newid())) % 6 + 1 as RandomRow, RowNumber, AAA.Col1
into BBB
from AAA
order by RowNumber;

select A.Col1, A.RowNumber
from AAA as A
join BBB On A.RowNumber = BBB.RandomRow;

--Cleaning ---------------------------------------------------------------------
drop table BBB;
--drop table AAA;
[/code]

I have a table AAA with just two columns, one of which is a row number. I use a CTE to create another table "BBB", which is based on AAA. The BBB table contains the same columns as the AAA table, plus a new column RandomRow.

RandomRow is populated by random numbers generated from the range 1 to @N, where @N is the number of rows in table AAA – in this case 6. The table BBB contains 3 rows only: the row numbers will be 1, 2, 3.

The SELECT statement should return rows from AAA which are identified by the number in [BBB].[RandomRow]. The SELECT contains the join, which is expected to return exactly 3 rows, because BBB is a subset of AAA. However, each time I run the query the SELECT returns a different number of rows. To see the effect, please try to re-run query No 1 several times, and you should get a different number of returned rows each time.

If I replace the CTE with a real table (see query No 2) I get the expected results, exactly 3 rows each time I run the query.

My questions are:

A/ Why does the join yield different results for the CTE and for the real table?
B/ If I replace the join in query No 1 with a RIGHT OUTER JOIN, the query works fine and returns just 3 rows. However, I think it should work with "join" also, because BBB is a subset of AAA, and the number in the RandomRow column will always identify a valid row in AAA.

Thank you for help.
Oracle to 2008 R2 SSIS Posted: 21 Mar 2013 12:24 PM PDT

Hi All,

Scenario: migrating Oracle to SQL Server.

Issue: I had successfully connected to Oracle from SQL Server 2008 R2 on a different box and imported data. Now I am on a Windows 2012 server where there is already a named instance of SQL Server 2012. I installed another SQL Server 2008 R2 instance on the same box and am trying to connect to Oracle from the 2008 R2 instance. It is failing with the following error, yet when I try to connect to Oracle using the same OLE DB provider from the import/export wizard it connects successfully :(

TITLE: Microsoft Visual Studio
------------------------------
Error at Tets [Connection manager "SourceConnectionOLEDB"]: SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x8000FFFF.
Error at Data Flow Task 1 [Source - XXX_XXX_XCC_XXX[1]]: SSIS Error Code DTS_E_CANNOTACQUIRECONNECTIONFROMCONNECTIONMANAGER. The AcquireConnection method call to the connection manager "SourceConnectionOLEDB" failed with error code 0xC0202009. There may be error messages posted before this with more information on why the AcquireConnection method call failed.
------------------------------
ADDITIONAL INFORMATION:
Exception from HRESULT: 0xC020801C (Microsoft.SqlServer.DTSPipelineWrap)
------------------------------
BUTTONS:
OK
------------------------------

Please help me.
script Posted: 20 Mar 2013 10:23 PM PDT

Can anyone help me with a script to move some data within the same table? The data looks like this:

id  field1  field2  field3  field4
1   a       null    b       null
2   null    9       null    6
3   2       null    null    8

I want the data to look like this after running the script:

id  field1  field2  field3  field4
1   a       b       null    null
2   9       6       null    null
3   2       8       null    null

Hope I have posted in the right place.

Thanks
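In other words, the non-NULL values in each row are shifted left. A sketch of one way to do that (the table name MyTable is hypothetical; field2/field4 hold numbers in the sample, so everything is cast to varchar here, and the casts should be adjusted to the real column types):

[code="sql"]
UPDATE T
SET field1 = x.v1,
    field2 = x.v2,
    field3 = x.v3,
    field4 = x.v4
FROM dbo.MyTable AS T                      -- hypothetical table name
CROSS APPLY
(
    -- number the non-NULL values 1..n in column order, then pivot them back out
    SELECT MAX(CASE WHEN rn = 1 THEN v END) AS v1,
           MAX(CASE WHEN rn = 2 THEN v END) AS v2,
           MAX(CASE WHEN rn = 3 THEN v END) AS v3,
           MAX(CASE WHEN rn = 4 THEN v END) AS v4
    FROM ( SELECT v, ROW_NUMBER() OVER (ORDER BY ord) AS rn
           FROM ( VALUES (1, CAST(T.field1 AS varchar(20))),
                         (2, CAST(T.field2 AS varchar(20))),
                         (3, CAST(T.field3 AS varchar(20))),
                         (4, CAST(T.field4 AS varchar(20))) ) AS u (ord, v)
           WHERE v IS NOT NULL ) AS s
) AS x;
[/code]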
Query hangs inside sproc but runs in seconds when run in Query Analyzer! Posted: 20 Mar 2013 08:52 PM PDT

I have a table of just over 200k records which contains several filtered hash indexes. There are two issues I'm trying to solve at the moment.

The first is that the INSERT statement in the stored procedure hangs, but when I run the same code in a new Query Analyzer window it runs in a couple of seconds. I've tried running my sproc WITH RECOMPILE but that has made no difference.

The other problem is the table scan in the actual execution plan, which I'm trying to eliminate.

My query looks like this:

[img]http://i46.tinypic.com/2cdfk9k.jpg[/img]

but I have the following index on the mkMatchKeyType1 column:

[code="sql"]
CREATE NONCLUSTERED INDEX [idx_mkMatchKeyType1] ON [dbo].[MergeTest1_keys_]
(
    [mkMatchKeyType1] ASC,
    [ID] ASC
)
INCLUDE ( [GUID] )
WHERE ([mkMatchKeyType1] IS NOT NULL)
[/code]
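The usual suspect for "fast in a query window, slow inside the procedure" is parameter sniffing, often combined with different SET options (ARITHABORT in particular) between the application connection and SSMS. A hedged sketch of one thing to try, recompiling the problem statement against the actual runtime values; every object and parameter name below is made up, since the real query is only visible in the screenshot:

[code="sql"]
CREATE PROCEDURE dbo.usp_LoadMatchKeys      -- hypothetical procedure name
    @matchKey varchar(100)                  -- hypothetical parameter
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO dbo.MergeTest1_keys_ (mkMatchKeyType1, [GUID])   -- columns guessed from the index definition
    SELECT s.mkMatchKeyType1, s.[GUID]
    FROM dbo.MergeTest1_staging AS s        -- hypothetical source table
    WHERE s.mkMatchKeyType1 = @matchKey
    OPTION (RECOMPILE);                     -- compile this statement for the current values on every run
END;
[/code]

If the recompile helps, comparing the SET options of the application connection with those of the SSMS session is the other check worth making.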
UPDATE HELP Posted: 21 Mar 2013 09:36 AM PDT

I've been struggling with this and was hoping to get some help here. (Looking to automate lottery results.)

I'm looking to update the LOTTERY_MY_NUM values that match in LOTTERY_WINNING_NUM based on a row (RID column) and not the whole set. Any column can match any column; it just needs to be within the same row(s) (25 possible combinations). When a match is found, I would like to UPDATE the NUMx column with a value in LOTTERY_MY_NUM.

LOTTERY_WINNING_NUM_RAW - data received in an Excel file, ingested via SSIS into a raw SQL table.
LOTTERY_WINNING_NUM - cleaned-up data from raw, parsed out into columns.
LOTTERY_MY_NUM - my lottery numbers that I want to match against the winning numbers.

--DROP TABLE LOTTERY_WINNING_NUM_RAW
CREATE TABLE LOTTERY_WINNING_NUM_RAW
(
    DATE VARCHAR(10),
    NUMBERS VARCHAR(20)
)

--DROP TABLE LOTTERY_WINNING_NUM
CREATE TABLE LOTTERY_WINNING_NUM
(
    DATE VARCHAR(10),
    RID INT IDENTITY(1,1),
    NUM1 VARCHAR(20),
    NUM2 VARCHAR(20),
    NUM3 VARCHAR(20),
    NUM4 VARCHAR(20),
    NUM5 VARCHAR(20)
)

--DROP TABLE LOTTERY_MY_NUM
CREATE TABLE LOTTERY_MY_NUM
(
    DATE VARCHAR(30),
    RID INT IDENTITY(1,1),
    NUM1 VARCHAR(20),
    NUM2 VARCHAR(20),
    NUM3 VARCHAR(20),
    NUM4 VARCHAR(20),
    NUM5 VARCHAR(20)
)

INSERT INTO LOTTERY_WINNING_NUM_RAW SELECT '3 19 2013','15 16 23 26 32'
INSERT INTO LOTTERY_WINNING_NUM_RAW SELECT '3 18 2013','09 22 30 38 40'
INSERT INTO LOTTERY_WINNING_NUM_RAW SELECT '3 17 2013','27 28 32 37 39'

INSERT INTO LOTTERY_WINNING_NUM
SELECT REPLACE(DATE, ' ', '/'),
       SUBSTRING (NUMBERS, 1, 2) AS NUM1,
       SUBSTRING (NUMBERS, 4, 2) AS NUM2,
       SUBSTRING (NUMBERS, 7, 2) AS NUM3,
       SUBSTRING (NUMBERS, 10, 2) AS NUM4,
       SUBSTRING (NUMBERS, 13, 2) AS NUM5
FROM LOTTERY_WINNING_NUM_RAW

DROP TABLE LOTTERY_MY_NUM
CREATE TABLE LOTTERY_MY_NUM
(
    DATE VARCHAR(30),
    RID INT IDENTITY(1,1),
    NUM1 VARCHAR(20),
    NUM2 VARCHAR(20),
    NUM3 VARCHAR(20),
    NUM4 VARCHAR(20),
    NUM5 VARCHAR(20)
)

INSERT INTO LOTTERY_MY_NUM SELECT '03/18 - 03/22', '17', '20', '28', '39', '40'
INSERT INTO LOTTERY_MY_NUM SELECT '03/18 - 03/22', '03', '10', '16', '23', '40'
INSERT INTO LOTTERY_MY_NUM SELECT '03/18 - 03/22', '07', '21', '26', '31', '38'
INSERT INTO LOTTERY_MY_NUM SELECT '03/18 - 03/22', '02', '05', '27', '34', '38'
INSERT INTO LOTTERY_MY_NUM SELECT '03/18 - 03/22', '05', '18', '22', '31', '37'

SELECT * FROM LOTTERY_WINNING_NUM_RAW
SELECT * FROM LOTTERY_WINNING_NUM
SELECT * FROM LOTTERY_MY_NUM
Average hourly rowcounts Posted: 20 Mar 2013 09:16 PM PDT

Hi,

I have a table with RunId, RunDateTime, RowCount. RunDateTime will be every 15 minutes.

1 2013-03-21 10:00:00 20
2 2013-03-21 10:15:00 30
3 2013-03-21 10:30:00 20
4 2013-03-21 10:45:00 25
5 2013-03-21 11:00:00 15

I want to retrieve hourly average rowcounts, like:

AvgHourlyDateTime                          AvgRowCount
-------------------------------------------------------------------
2013-03-21 10:00:00 to 2013-03-21 11:00    (20+30+20+25+15)/5

How can I do this?
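A minimal sketch of grouping to the hour (the table name RunLog is hypothetical). Note that a strict GROUP BY hour puts the 11:00:00 row into the 11:00 bucket; if the intent really is "five readings from 10:00 through 11:00 inclusive", the boundary handling needs adjusting:

[code="sql"]
SELECT DATEADD(hour, DATEDIFF(hour, 0, RunDateTime), 0) AS AvgHourlyDateTime,
       AVG(1.0 * [RowCount])                            AS AvgRowCount
FROM dbo.RunLog                 -- hypothetical table name
GROUP BY DATEADD(hour, DATEDIFF(hour, 0, RunDateTime), 0)
ORDER BY AvgHourlyDateTime;
[/code]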
How do we know whether or not a column is involved in an index? Posted: 21 Mar 2013 08:20 AM PDT

We have so many indices in a table. Is there any script to easily find whether or not a column is involved in any existing index?

Many thanks in advance for any input.
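A sketch using the catalog views: this lists every index that references a given column, either as a key column or as an included column (replace the table and column names):

[code="sql"]
SELECT i.name             AS index_name,
       c.name             AS column_name,
       ic.key_ordinal,                        -- 0 means the column is an INCLUDE column
       ic.is_included_column
FROM sys.indexes AS i
JOIN sys.index_columns AS ic
     ON ic.object_id = i.object_id AND ic.index_id = i.index_id
JOIN sys.columns AS c
     ON c.object_id = ic.object_id AND c.column_id = ic.column_id
WHERE i.object_id = OBJECT_ID('dbo.YourTable')   -- placeholder table name
  AND c.name = 'YourColumn';                     -- placeholder column name
[/code]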
sp_updatestats and the default sampling rate Posted: 04 Mar 2013 08:51 AM PST

Using SQL Server 2008 R2.

As my production database is approaching 500GB and can potentially grow to 1TB within 2 years, the issue of sp_updatestats (run after the re-indexing part of the maintenance step) using the default sampling rate, which can potentially skew performance, bothers me. It was discussed earlier in this thread: http://www.sqlservercentral.com/Forums/Topic1310877-146-2.aspx, but I am still confused about using sp_updatestats.

Currently I am performing the following steps during weekend maintenance:

1. ALTER DATABASE MyDB SET RECOVERY SIMPLE, AUTO_CREATE_STATISTICS OFF, AUTO_UPDATE_STATISTICS OFF

2. My index maintenance routine, based on the following criteria: I reindex clustered and non-clustered indexes when avg_page_space_used_in_percent < 75 and avg_fragmentation_in_percent > 10 and page_count > 500. Of those selected, if fragmentation <= 30 I reorganize the indexes; if fragmentation > 30 I rebuild them. So at the end of the reindex maintenance I have non-clustered and clustered indexes either rebuilt or reorganized.

3. Currently I run update statistics on the whole database after the previous reindex step: [b]sp_updatestats[/b]. Since sp_updatestats updates statistics using the default sampling rate, it can possibly deteriorate all my indexes after running the reindex routine.

[b]3A.[/b] So I was thinking about "... update statistics for all tables which are defragged and don't run update statistics for all tables which are REBUILD" (http://sqlserverpedia.com/wiki/Updating_Statistics). Here is my logic for performing routine [b]3A[/b]: select the indexes which were REORGANIZEd (no statistics update) during reindex maintenance, along with other indexes where statistics have not been updated for, say, the last few weeks, and run the following against the indexes selected above:

UPDATE STATISTICS Table_Name (IndexName) WITH FULLSCAN, NORECOMPUTE

By running the above I will be able to update statistics without running sp_updatestats.

4. ALTER DATABASE MyDB SET RECOVERY FULL, AUTO_CREATE_STATISTICS ON, AUTO_UPDATE_STATISTICS ON

Please let me know if you have any comments, suggestions, or recommendations on [b]step 3A[/b].

There was an earlier suggestion to run:

USE dbname;
EXEC sys.sp_MSforeachtable @command1 = N'UPDATE STATISTICS ? WITH FULLSCAN, COLUMNS;';

But unfortunately that takes way too long on my production database, given the time frame I have for the weekend maintenance.
Find difference between two data sets Posted: 21 Mar 2013 08:16 AM PDT

Hello everyone.

I have a query that I ran against the database, the output of which is, say, for example:

Id
1
2
3
4
5

I then made changes to the database and ran the same query. This time I get:

Id
1
3
5

I would like to know if there is a way of finding the missing Ids in the second query. My query outputs more than 11000 rows.

Thanks
L&P
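If both result sets can be captured, for example into temp tables before and after the change, EXCEPT returns the rows that disappeared. A minimal sketch with hypothetical temp-table names:

[code="sql"]
-- #before holds the Ids from the first run, #after the Ids from the second run.
SELECT Id FROM #before
EXCEPT
SELECT Id FROM #after;     -- returns 2 and 4 for the example above
[/code]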
BCP Posted: 21 Mar 2013 07:16 AM PDT

I have a stored procedure that exports data from a table into a .csv file. There are several different record sets that get exported. They are all comma delimited with double quotes as a text qualifier. All of my files export, and you can double-click to open the .csv file in Excel and everything is fine, except for one. The interesting thing about this record set is that some of the data has a trademark symbol.

BCP:

DECLARE @bcp varchar(1000) = 'bcp "SELECT * from dbo.tempExportData" queryout "' + @filePath + @FileOutputName + '" -T -w';
EXEC @Result = xp_cmdshell @bcp;

In the process I am getting the data:

DECLARE @ColumnList varchar(5000)

select @ColumnList = coalesce(@ColumnList + ',','') + quotename(cast(Name as varchar(50)), '"')
from tempdb.sys.columns
where object_id = object_id('tempdb..#tempData')

insert into dbo.tempExportData (outputData)
values (@ColumnList)

insert into dbo.tempExportData (outputData)
select quotename(isnull(FieldA, ''), '"') + ',' + quotename(isnull(FieldB, ''), '"')
from #tempData

CSV:

FieldA,"FieldB"
ACME®,"some more data","even more data"
Big Company,"still more data"

All of the data ends up in column A of the spreadsheet and FieldA is not text qualified. If you open the .csv in Notepad, TextPad, etc., all of the columns are text qualified. I know that I can open Excel and use the import wizard to successfully import the data, but I am wondering why Excel is not handling it correctly when you just open the .csv?

Let me know if you need more information.
Not able to find historical data using SQL Server Profiler in SQL Server 2008 Posted: 19 Mar 2013 05:10 PM PDT

Hello All,

I would like to capture the script that was run by a user yesterday between particular times, but somehow I am not able to get the desired information. Can anyone please help me if I am missing anything?

Template: Standard (no option with Default)
LoginName: name of user
StartTime: greater than > 2013-03-18 06:00:00.000
EndTime: less than < 2013-03-18 06:30:00.000

Please let me know if more information is required.
Index rebuild before table update Posted: 20 Mar 2013 10:26 PM PDT

There is some front-end code that inserts around 40,000+ rows into a table and then runs an update to all columns on all the rows. Would it be okay to do an index rebuild after the insert but before the update? So I would create a sproc which first rebuilds the indexes and then runs the update command?
Replication error while creating a new publication Posted: 21 Mar 2013 06:09 AM PDT

I am trying to create a publication (transactional replication) on a publisher, and I get the following error on the last screen of the new publication wizard, where it shows the progress. The account used is Domain\SVC_SQLREPL, which is a domain account.

"The current transaction cannot be committed and cannot support operations that write to the log file."

It looks like some permission issue, but I am not sure which. Thanks
Transactional replication problem Posted: 19 Mar 2013 07:49 AM PDT

Hi everyone,

We encountered a strange error when we were trying to add a subscriber. The following steps were performed:

1. Publisher and distributor (2 different machines) were created
2. Publication was created
3. A few (12) subscribers were successfully added to the publication

Some time later, with everything running successfully, we tried to add 2 or more subscribers and got an error:

Message
The replication agent encountered a failure. See the previous job step history message or Replication Monitor for more information. The step failed.

Checking Replication Monitor sends us to the job, and the job sends us back to Replication Monitor. Removing the newly added subscribers makes everything look good again.

Any ideas on which settings we need to check?

Thanks.
Primary key or not to primary key, that is the question Posted: 04 Mar 2013 02:58 AM PST

Yeah, bad taste on the subject but nonetheless, you're reading my post :). I'd like to pose a scenario and see what other developers are doing out there (or would recommend at least).

We have most of our tables defined with a PRIMARY KEY constraint over a single column that is defined as UNIQUEIDENTIFIER with a default of NEWID(). This is never clustered unless the table is small. We use this in our child tables as the foreign key reference, like so (excuse the pseudo table definitions):

Table: Parent
ID UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID() PRIMARY KEY,
Field VARCHAR(30) NOT NULL,
Code INT NOT NULL,
Status VARCHAR(3) NOT NULL

Table: Child
SurrogateID INT IDENTITY(1, 1) NOT NULL, -- has a unique clustered index defined
ID UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID() PRIMARY KEY,
Field1 VARCHAR(30) NOT NULL,
ParentID UNIQUEIDENTIFIER NOT NULL (FK defined to Parent.ID)

Obviously the primary key constraint maintains an index behind the scenes. When we query between the two tables it will look something like this:

SELECT P.ID, P.Field, P.Code, P.Status, C.ID, C.Field1
FROM dbo.Parent AS P
INNER JOIN dbo.Child AS C ON P.ID = C.ParentID

Looks simple, right? This query should (and does for me) do an index seek using the primary key constraint. But it also generates an expensive key lookup (clustered) over the clustered index. The only way to resolve that is to add included columns to the primary key's index.

Yes... primary keys indeed *DO NOT* allow included columns to be defined. At this point I have one of two options:

- Drop the PRIMARY KEY and simply make a unique index with the necessary included columns
- Create a unique index over Parent.ID with the necessary included columns. However, this results in doubling the index storage required for the Parent.ID column.

So the question is this, do you:

- drop the primary key and create the unique index with included columns, or
- duplicate the primary key and create the unique index with included columns?

I'm leaning towards dropping the PK and creating the unique index myself. Thoughts?
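For reference, a sketch of the "drop the PK, keep uniqueness via an index with INCLUDE" option against the pseudo tables above. The constraint and index names are made up, and any foreign keys that reference the primary key would have to be dropped and re-pointed as part of the change:

[code="sql"]
ALTER TABLE dbo.Parent DROP CONSTRAINT PK_Parent;   -- made-up constraint name

-- Enforces uniqueness of Parent.ID and covers the query's output columns in one index.
CREATE UNIQUE NONCLUSTERED INDEX UX_Parent_ID
    ON dbo.Parent (ID)
    INCLUDE (Field, Code, Status);
[/code]

SQL Server does allow a foreign key to reference a unique index, so the Child.ParentID relationship can still be enforced after the switch.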
Database backup and restoration Posted: 20 Mar 2013 08:40 PM PDT

Hello All,

We currently have a backup and restore job which runs on a weekly basis, and the database size is more than 10TB. A split backup method is followed for taking the database backup, and the same is restored on another server.

During restoration we are seeing very high network utilization, which is affecting the ETL load performance. Is there any best practice for restoring the database on another server? How can the network utilization be minimized? Please let me know if there is any other better way to do this. Loads run throughout the week and we do not have a big window either.

Your help would be highly appreciated.
HEAP index ???? Posted: 21 Mar 2013 04:25 AM PDT

Can you please help me figure this one out?? In SSMS, I am looking at the table and see no indexes there... However, running this select gives the following result:

SELECT * FROM SYS.INDEXES I WHERE I.object_id = object_id('TABLENAME')

object_id            927342368
name                 NULL
index_id             0
type                 0
type_desc            HEAP
is_unique            0
data_space_id        1
ignore_dup_key       0
is_primary_key       0
is_unique_constraint 0
Snap Mgr for SQL breaking log chains Posted: 21 Mar 2013 04:14 AM PDT

Hello, I have inherited a dozen or so servers which have been configured with Snap Manager for SQL backups in addition to traditional SQL backups. Neither backup set is restorable since they are breaking each other's log chains.

One server has 6 daily SMSQL backup jobs, but only one appears in the scheduler. I'm assuming the others were configured using PowerShell or some other non-GUI means. I would like to disable the jobs I can't see and manage, and let the one job I can actually manage go ahead and run. (We use the product to mount clones for test and dev.) I'll then schedule our "traditional" SQL backup job to run immediately after the SMSQL job, thus giving us a clean set of T-logs.

Does anyone have a PowerShell script to a) discover the invisible jobs and b) delete the invisible jobs? The SMSQL console shows the log reports for the 6 daily backup runs, but only displays one scheduled job.

I'd appreciate any input or assistance you can give me on this.
Impersonate possible for a security group? Posted: 21 Mar 2013 04:10 AM PDT

Hi everyone,

I was thinking of publishing some Table Reports with an online connection to my MSSQL server. All of that works fine...

Now I was hoping to use my security groups to manage access to the database. When trying to use the security group as shown below it does not work, probably because that is only for users; however, I was not able to find the syntax for a group.

GRANT IMPERSONATE ON USER::[my_domain\gordon] to [my_domain\tableau];

Any advice is much appreciated.

Cheers,
Gordon
Insert duplicate rows based on number in the column Posted: 21 Mar 2013 02:42 AM PDT

The first table (@OrigTab) has parts and quantities; now we need to copy each part record based on the quantity, as shown in @DesireTable.

Declare @OrigTab Table (Part varchar(30), qty int)

insert into @OrigTab
select 'X51','3' union
select 'A5','1' union
select '123','2' union
select '054','5'

select * from @OrigTab ORDER BY 1 DESC

/* Based on the quantity, duplicate rows need to be created with a new id column (tag) in the below format */

Declare @DesireTable TABLE (TAG VARCHAR(13), Part varchar(30), qty int)

insert into @DesireTable
select 'PNM-01','X51','3' union
select 'PNM-02','X51','3' union
select 'PNM-03','X51','3' union
select 'PNM-04','A5','1' union
select 'PNM-05','123','2' union
select 'PNM-06','123','2' union
select 'PNM-07','054','4' union
select 'PNM-08','054','4' union
select 'PNM-09','054','4' union
select 'PNM-10','054','4' union
select 'PNM-11','054','4'

select * from @DesireTable
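A sketch of one way to expand each part by its quantity and generate the PNM-nn tag, using master.dbo.spt_values purely as a convenient numbers source (a dedicated tally table would work the same way). Note the sample @DesireTable shows qty 4 for part 054 even though @OrigTab holds 5; this sketch simply carries the original qty through:

[code="sql"]
SELECT 'PNM-' + RIGHT('0' + CAST(ROW_NUMBER() OVER (ORDER BY o.Part DESC, n.number) AS varchar(10)), 2) AS TAG,
       o.Part,
       o.qty
FROM @OrigTab AS o
JOIN master.dbo.spt_values AS n
     ON n.type = 'P'
    AND n.number BETWEEN 1 AND o.qty      -- one output row per unit of quantity
ORDER BY TAG;
[/code]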
Issue in Linked Server Posted: 20 Mar 2013 10:55 PM PDT

I have done the linked server configuration on Server 1.

Server1 - ABCD
Server2 - XYZZ

Configured XYZZ on ABCD and ABCD on XYZZ. After this, when I execute the query below on the XYZZ server:

select foo.agent_id
from [ABCD].db_Test.dbo.Students foo

it executes fine. When I try to run the query on ABCD:

select foo.agent_id
from [XYZZ].db_Test.dbo.Students foo

this error comes:

Msg 7314, Level 16, State 1, Line 2
The OLE DB provider "SQLNCLI" for linked server "XYZZ" does not contain the table ""db_Test"."dbo"."Students"". The table either does not exist or the current user does not have permissions on that table.

:w00t:
Truncate table Posted: 21 Mar 2013 01:35 AM PDT

When we truncate a table, do the indexes made on it still exist? Can we check whether they exist or not?
Function in where clause Posted: 20 Mar 2013 09:47 PM PDT

Hi all,

Generally speaking I know that using functions in a WHERE clause is bad practice, as it can confuse the query optimiser; however, I have a situation here which I am wondering about.

I have a stored procedure used in an SSRS report that uses multi-valued parameters. As such, the function I am talking about splits the string and returns a table data set, which is then queried using the IN clause.

My question is: in this case, would that be the best way to code this, or would splitting the string first and placing it in a temp table before it is queried be better coding practice?

Thanks for any tips in advance.
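For comparison, a sketch of the "split once into a temp table, then join" pattern. dbo.SplitString and its Item column are stand-ins for whatever splitter function the report already uses, and the other names are hypothetical:

[code="sql"]
-- Split the multi-valued parameter once, up front.
CREATE TABLE #params (Value varchar(100) NOT NULL PRIMARY KEY);

INSERT INTO #params (Value)
SELECT Item
FROM dbo.SplitString(@ParameterList, ',');   -- stand-in for the existing splitter function

-- Then join to the temp table instead of calling the function inside IN (...).
SELECT t.*
FROM dbo.SomeReportTable AS t                -- hypothetical reporting table
JOIN #params AS p ON p.Value = t.SomeColumn;
[/code]

One practical advantage of the temp table is that it carries statistics, which tends to give the optimiser better row estimates than an opaque function call inside IN.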
BCP to import tab delimited file with header Posted: 18 Mar 2013 11:35 PM PDT

Hi, I am trying to load a tab delimited file which has a header row in it, so I am using -F 2 to start at the second row, and creating an error file. I am getting this error message:

Msg 102, Level 15, State 1, Line 1
Incorrect syntax near '.'.

Please let me know the correct one. Code used:

Declare @sql varchar(8000), @IncomingPath varchar(500), @FileName varchar(500)

set @FileName = '12272012_114537_AB123.txt'
set @IncomingPath = 'e:\feeds\HH_feeds\procen\incoming\'

SET @sql = 'bcp [dbo].ABt_file_load_2012 in '''+@IncomingPath+ @FileName+''' -t''|'' -r'''' -F 2 -e'+@IncomingPath+'ErrorLog\'+ @FileName+'.ERR -T -S ' + @@SERVERNAME

print @sql
EXEC (@sql)
Posted: 18 Mar 2013 05:03 PM PDT CREATE TABLE [dbo].[TESTROWLOCK]( [ID] [int] NULL, [NAME] [varchar](100) NULL, [SURNAME] [varchar](200) NULL) ON [PRIMARY]GO[b]TAB -1 [/b]I m trying BEGIN TRANSELECT ID FROM TESTROWLOCK WITH(ROWLOCK) WHERE ID=2[b]TAB-2 [/b]SELECT ID FROM TESTROWLOCK WHERE ID=2I want to ask that tab-2 select is give result but tab-1 is not commited why rowlock hint does not lock that rows |
Giving permission for each table within a database Posted: 21 Mar 2013 01:37 AM PDT

I want to give a user different permissions on different tables within a single database. For example, the user has full permission on one table, while on another table only read permission. How can it be done within a database?
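Permissions can be granted at the individual table (object) level, so a minimal sketch looks like this (the user and table names are placeholders):

[code="sql"]
GRANT SELECT, INSERT, UPDATE, DELETE ON dbo.TableA TO [AppUser];   -- full DML on this table
GRANT SELECT ON dbo.TableB TO [AppUser];                           -- read-only on this one
[/code]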
Distributed Query Posted: 20 Mar 2013 09:08 PM PDT

Hi Team,

I have two tables. I want a query to retrieve records available in (Instance 1) Table A and missing in (Instance 2) Table B.

[b]Table_1:[/b]
ID | CHARACTER
1  | A
2  | B
3  | C
4  | D
5  | E

[b]Table_2:[/b]
ID | CHARACTER
1  | B
2  | D
3  | C

Output should be:
1 A
2 B
4 D
5 E

select * from Table_1 T1
LEFT join [SHH05\SVR1].Students.dbo.Stud_info T2 ON T1.ID=T2.ID

but I am not getting the exact result. Can you please help me?
SSMS User Interface Tip to Freeze Panes. Posted: 18 Mar 2013 08:39 PM PDT

Accidentally discovered this, and thought I should share it, as I don't think I've seen it mentioned before.

In Management Studio, in a SQL window, if you click on the little rectangular block at the top right of the window (below the X to close) and pull down, the current window splits into panes, allowing you to look at 2 parts of the same procedure simultaneously.

I don't expect it will change your life, but you never know!
Why is RPC duration dependent on which server is making the call? Posted: 20 Mar 2013 09:45 PM PDT

I have a problem which I've been trying to solve for several days now.

The customer was complaining that the system is slow. I ran SQL Server Profiler and saw that, for example, stored procedures (RPC:Completed event class) run 10 to 50 times slower than normal. Having ruled out a lot of different hypotheses about the cause of this problem, I landed on the fact that:

[center]Depending on which web server is making the call to the db server, the stored procedure performs differently![/center]

I have the same binaries installed on each web server, so that should not be the problem. All remote calls from the different servers run fast, except when the production server makes the call. Then the duration of the stored procedure is 10 to 50 times longer than for calls from the other servers.

The problem started with no connection to any known changes to the system, so that adds to the mystery, but the main mystery I want solved is the question above, namely: why is the execution time for the stored procedures at the db server dependent on which web server is calling??

Any ideas?
Decrypting values using code other than sql command Posted: 20 Mar 2013 09:30 PM PDT

I am using ENCRYPTBYPASSPHRASE to encrypt some particular columns. I can use the DECRYPTBYPASSPHRASE command to decrypt them and read the data or place the value in a grid, etc. But I want to know whether I can decrypt the column using any other method, such as some VB.NET code or anything other than a SQL statement?