Thursday, April 18, 2013

[SQL Server 2008 issues] job fails with linked server



job fails with linked server

Posted: 17 Apr 2013 07:22 PM PDT

Hello everybody. I'm getting an error in a job that I can't explain. I have two servers: ERPSERVER and STAGINGSERVER. On STAGINGSERVER I run scheduled jobs that pull data from ERPSERVER, process it and provide it to other databases. On ERPSERVER I have defined a view that prepares the data to be delivered to STAGINGSERVER. On STAGINGSERVER a scheduled job queries ERPSERVER (defined as a linked server on STAGINGSERVER) and uses that data to feed my ETL. The job fails intermittently.

This is part of the view's code:

SELECT Partite.CreditNote,
       1 AS exchangerate,
       CONVERT(NVARCHAR(100), Partite.Salesperson) AS owneridExt,
       ...
FROM dbo.MA_PyblsRcvbls AS Partite
INNER JOIN dbo.MA_PyblsRcvblsDetails AS PartiteRighe
...

The code inside my job is the following:

SELECT [CreditNote], [exchangerate], [owneridExt], ...
INTO #_OPENITEMS
FROM [ERPSERVER].DBNAME.dbo.VW_CRM_OpenItem

This is the error I get:

Step ID 1
Server STAGINGSERVER
Job Name Import_1D_19
Step Name ETL OpenItem
Duration 00:00:10
Sql Severity 16
Sql Message ID 3621
Operator Emailed
Operator Net sent
Operator Paged
Retries Attempted 0
Message
Executed as user: domain\crmsqlserver. Cannot insert the value NULL into column 'exchangerate', table 'tempdb.dbo.#_OPENITEMS_______________________________________________________________________________________________0000000045DB'; column does not allow nulls. UPDATE fails. [SQLSTATE 23000] (Error 515) The statement has been terminated. [SQLSTATE 01000] (Error 3621). The step failed.

I'm out of ideas, because I can't understand how the column 'exchangerate' can be NULL: if I execute the ETL query from SSMS, everything runs without any problem, and if I run the query manually and then run the job, the job also succeeds. Could it be a network/authentication problem? Thank you in advance.

Update statement really slow but only on the first execution

Posted: 17 Apr 2013 12:32 AM PDT

Hi, I have a standard update statement that updates one row at a time. The table has 340,000 rows, so it's large but not huge. The first time I run the query it takes a very long time for a simple update, around 40 seconds. The second time I run the same update statement it takes less than a second. Can anyone suggest where I should start looking to find out why the first execution of the update statement is so slow? Any help would be appreciated. Thanks, Lewis
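A hedged starting point for narrowing down where the first-run time goes (compilation vs. reading pages from disk): capture time and I/O statistics for a cold and a warm run and compare parse/compile time and physical reads against elapsed time. The statement below is a hypothetical stand-in for the real update.

SET STATISTICS TIME ON;   -- reports parse/compile time separately from execution time
SET STATISTICS IO ON;     -- physical reads on the first run point to a cold buffer cache

UPDATE dbo.MyTable        -- hypothetical table/columns standing in for the real update
SET    SomeColumn = 1
WHERE  KeyColumn = 42;

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;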

Weekly report through whole year

Posted: 10 Apr 2013 10:51 PM PDT

Hi there. I have 2 tables and sample data:

CREATE TABLE [dbo].[tblProduct](
    [ProductID] [int] IDENTITY(1,1) NOT NULL,
    [Name] [varchar](50) NOT NULL,
    CONSTRAINT [PK_tblProduct] PRIMARY KEY CLUSTERED ([ProductID] ASC)
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

CREATE TABLE [dbo].[tblOrder](
    [OrderID] [int] IDENTITY(1,1) NOT NULL,
    [ProductID] [int] NOT NULL,
    [OrderDate] [datetime] NOT NULL,
    CONSTRAINT [PK_tblOrder] PRIMARY KEY CLUSTERED ([OrderID] ASC)
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

INSERT INTO tblProduct (Name) VALUES ('Coffee')
INSERT INTO tblProduct (Name) VALUES ('Tea')
INSERT INTO tblProduct (Name) VALUES ('Lager')

INSERT INTO tblOrder (ProductID, OrderDate) VALUES (1, '2013-01-01 09:00:00')
INSERT INTO tblOrder (ProductID, OrderDate) VALUES (1, '2013-01-08 09:00:00')
INSERT INTO tblOrder (ProductID, OrderDate) VALUES (2, '2013-01-08 09:00:00')
INSERT INTO tblOrder (ProductID, OrderDate) VALUES (3, '2013-01-09 09:00:00')
INSERT INTO tblOrder (ProductID, OrderDate) VALUES (1, '2013-01-10 09:00:00')
INSERT INTO tblOrder (ProductID, OrderDate) VALUES (2, '2013-01-10 09:00:00')
INSERT INTO tblOrder (ProductID, OrderDate) VALUES (3, '2013-01-11 09:00:00')
INSERT INTO tblOrder (ProductID, OrderDate) VALUES (1, '2013-01-11 09:00:00')
INSERT INTO tblOrder (ProductID, OrderDate) VALUES (2, '2013-01-15 09:00:00')
INSERT INTO tblOrder (ProductID, OrderDate) VALUES (1, '2013-01-18 09:00:00')
INSERT INTO tblOrder (ProductID, OrderDate) VALUES (1, '2013-01-30 09:00:00')
INSERT INTO tblOrder (ProductID, OrderDate) VALUES (2, '2013-05-01 09:00:00')
INSERT INTO tblOrder (ProductID, OrderDate) VALUES (3, '2013-07-15 09:00:00')
INSERT INTO tblOrder (ProductID, OrderDate) VALUES (1, '2013-09-23 09:00:00')
INSERT INTO tblOrder (ProductID, OrderDate) VALUES (1, '2013-10-01 09:00:00')
INSERT INTO tblOrder (ProductID, OrderDate) VALUES (1, '2013-10-08 09:00:00')
INSERT INTO tblOrder (ProductID, OrderDate) VALUES (2, '2013-10-11 09:00:00')
INSERT INTO tblOrder (ProductID, OrderDate) VALUES (1, '2013-11-01 09:00:00')
INSERT INTO tblOrder (ProductID, OrderDate) VALUES (3, '2013-11-06 09:00:00')
INSERT INTO tblOrder (ProductID, OrderDate) VALUES (1, '2013-11-08 09:00:00')
INSERT INTO tblOrder (ProductID, OrderDate) VALUES (3, '2013-12-10 09:00:00')
INSERT INTO tblOrder (ProductID, OrderDate) VALUES (1, '2013-12-25 09:00:00')
INSERT INTO tblOrder (ProductID, OrderDate) VALUES (2, '2013-12-30 09:00:00')
INSERT INTO tblOrder (ProductID, OrderDate) VALUES (1, '2013-12-30 09:00:00')

I need to generate a report which shows the number of orders for each ProductID for each week in year 2013. Ideally it would be generated as a pivot table covering all 54 weeks, so week1, week2, ... would be columns, with 3 rows for the 3 products and the order count for each product in each week (see the sketch below).
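A minimal sketch of one way to get the weekly counts, assuming SQL Server 2008's DATEPART(ISO_WEEK, ...) is acceptable as the week definition; the week numbers could then be turned into columns with PIVOT (or in the reporting layer):

SELECT p.ProductID,
       p.Name,
       DATEPART(ISO_WEEK, o.OrderDate) AS WeekNo,
       COUNT(*)                        AS OrderCount
FROM dbo.tblProduct AS p
JOIN dbo.tblOrder   AS o ON o.ProductID = p.ProductID
WHERE o.OrderDate >= '20130101'
  AND o.OrderDate <  '20140101'
GROUP BY p.ProductID, p.Name, DATEPART(ISO_WEEK, o.OrderDate)
ORDER BY p.ProductID, WeekNo;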

Performance Tuning

Posted: 17 Apr 2013 05:46 PM PDT

I have a table called Employees with the following columns:

EmployeeID INT
EmployeeName VARCHAR(256)
Salary MONEY
DeptID SMALLINT

The table has the following indexes created on it:
1. Clustered index on EmployeeID
2. Nonclustered index on EmployeeName
3. Nonclustered index on DeptID

The Employees table has over 10 million rows. You want to get the count of employees, for which you write the following query:

SELECT COUNT(*) FROM Employees

What will be the likely query plan for the above query? Which index will SQL Server use for it? Please give an explanation for your answer.

In my laptop some times sqlserver getting problem while connecting in to the instance

Posted: 17 Apr 2013 06:07 PM PDT

Hi all, on my laptop SQL Server sometimes has a problem connecting to the instance.
1. It is unable to connect to the instance; I check the TCP/IP settings and all the services but still cannot connect to SQL Server.
2. Every time this happens I end up uninstalling SQL Server and its services and doing a fresh install. It then works for a few days or months until it happens again, and I have not been able to solve it any other way.
3. I check the protocols: Shared Memory enabled, Named Pipes enabled, TCP/IP enabled, VIA disabled, and that the Windows Firewall exception for port 1433 is added. Everything looks fine, but it still cannot connect to SQL Server (the services are not running during this period), so I end up performing a fresh installation of SQL Server.
How can I avoid this, in which cases does it happen, and if it happens on any server what should I do?

HOW TO MAKE DYNAMIC PIVOT FOR THIS TABLES WHEN I PASS ONLY ONE PARAMETER @DID INT?

Posted: 16 Apr 2013 11:48 PM PDT

Here I have two tables, @DETAILS and @CONTACTDETAILS. @DETAILS is a static table in which CONTACTID is the primary key, and CONTACTID is the foreign key in @CONTACTDETAILS. Here are the tables with data:

DECLARE @DETAILS TABLE (CONTACTID INT IDENTITY(1,1), VALUE VARCHAR(200))
INSERT INTO @DETAILS
SELECT 'PHONENUMBER' UNION ALL
SELECT 'MOBILENUMBER' UNION ALL
SELECT 'ADDRESS1' UNION ALL
SELECT 'ADDRESS2' UNION ALL
SELECT 'EMPLOYEENAME' UNION ALL
SELECT 'EMPLOYEEID'

DECLARE @CONTACTDETAILS TABLE (DETAILID INT IDENTITY(1,1), DID INT, CONTACTID INT, DETAILSVALUE VARCHAR(200))
INSERT INTO @CONTACTDETAILS
SELECT 1,5,'SIVA' UNION ALL
SELECT 1,6,'SIA001' UNION ALL
SELECT 1,2,'9344139991' UNION ALL
SELECT 2,1,'245678' UNION ALL
SELECT 2,2,'1686597' UNION ALL
SELECT 2,5,'GANESH' UNION ALL
SELECT 1,3,'ST-ROAD'

SELECT * FROM @DETAILS

The output for this:

CONTACTID  VALUE
1          PHONENUMBER
2          MOBILENUMBER
3          ADDRESS1
4          ADDRESS2
5          EMPLOYEENAME
6          EMPLOYEEID

SELECT * FROM @CONTACTDETAILS

The output for this:

DETAILID  DID  CONTACTID  DETAILSVALUE
1         1    6          SIVA
2         1    5          SIA001
3         1    2          9344139991
4         2    1          245678
5         2    2          1686597
6         2    5          GANESH
7         1    3          ST-ROAD

In @CONTACTDETAILS the DID column identifies one particular employee. In my stored procedure I use the parameter @DID INT. I tried this query:

DECLARE @DID INT = 1
SELECT (SELECT VALUE FROM @DETAILS WHERE CONTACTID = A.CONTACTID) AS CONTACTID,
       A.DETAILSVALUE
FROM @CONTACTDETAILS A
WHERE A.DID = @DID

which shows the result like this:

CONTACTID     DETAILSVALUE
EMPLOYEEID    SIVA
EMPLOYEENAME  SIA001
MOBILENUMBER  9344139991
ADDRESS1      ST-ROAD

and now I am trying to get output like this:

EMPLOYEEID  EMPLOYEENAME  MOBILENUMBER  ADDRESS1
SIA001      SIVA          9344139991    ST-ROAD

Can anyone please help me?
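A minimal dynamic-pivot sketch along the lines being asked about, assuming the two table variables are replaced by real tables named DETAILS and CONTACTDETAILS (table variables declared in the outer batch are not visible inside sp_executesql). The column list is built from DETAILS and the pivot is executed dynamically; columns with no value for the given DID come back as NULL.

DECLARE @DID INT = 1, @cols NVARCHAR(MAX), @sql NVARCHAR(MAX);

-- build the column list, e.g. [PHONENUMBER],[MOBILENUMBER],...,[EMPLOYEEID]
SELECT @cols = STUFF((SELECT ',' + QUOTENAME(VALUE)
                      FROM DETAILS
                      ORDER BY CONTACTID
                      FOR XML PATH('')), 1, 1, '');

SET @sql = N'
SELECT ' + @cols + N'
FROM (SELECT d.VALUE AS ColName, cd.DETAILSVALUE
      FROM CONTACTDETAILS cd
      JOIN DETAILS d ON d.CONTACTID = cd.CONTACTID
      WHERE cd.DID = @DID) AS src
PIVOT (MAX(DETAILSVALUE) FOR ColName IN (' + @cols + N')) AS p;';

EXEC sp_executesql @sql, N'@DID INT', @DID = @DID;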

Error on SQL job - SQL 2008

Posted: 17 Apr 2013 10:54 AM PDT

Hi, can someone help us determine the resolution for the error encountered below?

Message
Executed as user: HO\sqlibg. The instance of the SQL Server Database Engine cannot obtain a LOCK resource at this time. Rerun your statement when there are fewer active users. Ask the database administrator to check the lock and memory configuration for this instance, or to check for long-running transactions. [SQLSTATE HY000] (Error 1204). The step failed.

Actions taken:
1. Enabled AWE.
2. Set the number of locks to 10,000.
3. Checked for blocking sessions - none.
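A hedged starting point (not a confirmed fix): check whether the 'locks' option has been capped at a fixed value and how many locks are actually being held; leaving 'locks' at 0 lets SQL Server manage lock memory dynamically, which is usually preferable to a fixed cap such as 10,000.

-- 'locks' is an advanced option, so expose it first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'locks';          -- 0 = dynamic lock management

-- count locks currently held, by session
SELECT request_session_id, COUNT(*) AS lock_count
FROM sys.dm_tran_locks
GROUP BY request_session_id
ORDER BY lock_count DESC;

-- revert to dynamic lock management if it was fixed at a low value
EXEC sp_configure 'locks', 0;
RECONFIGURE;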

PBM(Policy Based Management) Implementation issues

Posted: 17 Apr 2013 05:12 PM PDT

Hi. My policies just flagged that public has the four default CONNECT permissions on endpoints. To get rid of them I would use the commands below:

REVOKE VIEW ANY DATABASE FROM public;
REVOKE CONNECT ON ENDPOINT::[TSQL Local Machine] FROM public;
REVOKE CONNECT ON ENDPOINT::[TSQL Named Pipes] FROM public;
REVOKE CONNECT ON ENDPOINT::[TSQL Default TCP] FROM public;
REVOKE CONNECT ON ENDPOINT::[TSQL Default VIA] FROM public;

However, before I do that I need to grant CONNECT on the TCP endpoint to the other logins so that they are not locked out. Here I am facing issues granting to the following logins:

--GRANT CONNECT ON ENDPOINT::[TSQL Default TCP] TO [##MS_SQLAuthenticatorCertificate##]
--GRANT CONNECT ON ENDPOINT::[TSQL Default TCP] TO [##MS_SQLReplicationSigningCertificate##]
--GRANT CONNECT ON ENDPOINT::[TSQL Default TCP] TO [##MS_SQLResourceSigningCertificate##]

The error for them says:
Cannot find the login '##MS_SQLAuthenticatorCertificate##', because it does not exist or you do not have permission.

My questions are:
1. If I am unable to grant CONNECT to these logins, will that adversely affect any operations?
2. Is there any other impact that is usually reported after public has had the default access revoked?

Will someone please assist me with these clarifications? Thanks in advance.

xp_cmdshell slow response

Posted: 17 Apr 2013 04:31 PM PDT

I am using xp_cmdshell to bcp out a small table to a file and then copy it on the local disk.
1. The system (memory, CPU, disk queue) is not loaded with work.
2. The table is very small and the bcp results in a 1 KB file.
3. The bcp operation under xp_cmdshell varies between 700 ms and 2200 ms.
4. Using a command prompt in Windows, the same bcp operation takes 31 ms (with the same login as the SQL service).
5. xp_cmdshell runs fast on another server.
6. The same operation ran fast on this server two weeks ago.
Something has changed, but what should I look for? :hehe:

How to know the original user name

Posted: 17 Apr 2013 10:42 AM PDT

Hi all, I'm having a hard time figuring out a way to create a DDL trigger that captures the original user name of whoever initiated the connection. An ASP.NET application runs on an IIS server and the AppPool is configured with a shared account to connect to SQL, so from the SQL Server side we only see this shared account. My problem is how to know who actually initiated the connection originally. ORIGINAL_LOGIN() won't work since IIS uses a shared account, so I wonder if there's a way from the SQL side. Does anyone know if this is even possible? I don't think it is, but I hope there's a way. Thanks~
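One hedged workaround sketch, assuming the application can be changed: have the app stamp the real end-user into the session (for example via SET CONTEXT_INFO) right after opening the pooled connection, and read it back inside the trigger. This is only as reliable as the application code that sets it; the user name 'jane.doe' below is hypothetical.

-- run by the application immediately after opening the connection
DECLARE @user VARBINARY(128) = CAST(N'jane.doe' AS VARBINARY(128));
SET CONTEXT_INFO @user;

-- inside the DDL trigger: recover the value (it may be zero-padded to 128 bytes)
DECLARE @who NVARCHAR(64) = CAST(CONTEXT_INFO() AS NVARCHAR(64));
PRINT COALESCE(@who, ORIGINAL_LOGIN());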

DBCC on a detached MDF file

Posted: 10 Apr 2013 07:05 AM PDT

Hello, I am by no means more than a SQL novice, but I am trying to figure out whether there is a way to run DBCC commands (DBCC FILEHEADER in particular) on an offline/detached MDF. Is this possible? If so, can anyone point me in the right direction? Thank you!

Creating array in sql server

Posted: 16 Apr 2013 09:05 PM PDT

A string contains '3,4'. Suppose I want to store 3 in one parameter and 4 in another parameter, similar to arr[0] and arr[1] in C#. How can I achieve this?
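A minimal sketch for the two-element case using CHARINDEX and SUBSTRING; for longer or variable-length lists a string-splitting function would be the usual approach:

DECLARE @s VARCHAR(50) = '3,4';
DECLARE @p1 INT, @p2 INT;

SELECT @p1 = CAST(LEFT(@s, CHARINDEX(',', @s) - 1) AS INT),            -- part before the comma
       @p2 = CAST(SUBSTRING(@s, CHARINDEX(',', @s) + 1, LEN(@s)) AS INT); -- part after the comma

SELECT @p1 AS FirstValue, @p2 AS SecondValue;  -- returns 3 and 4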

read values in column from xlsx files in SSIS

Posted: 17 Apr 2013 02:31 AM PDT

Is it possible to write code that will read the values in a specific column from xlsx files in SSIS? I'm comfortable with Visual Basic 2008. In a_square.xlsx there's a column named "country", and in b_a_circle.xlsx there's a column named "country_region". I want to read the values in country in a_square.xlsx and also read the values in country_region in b_a_circle.xlsx, and create a new file if there's a match (for example, if country has a US value in it and country_region has a US value in it as well). Sorry, I know this sounds confusing, but I don't know how to explain it better.

[Microsoft][ODBC SQL Server Driver][SQL Server]1

Posted: 24 Jun 2010 05:05 AM PDT

I am using a basic script that calls an ODBC connection on the computer to write records to our SQL 2008 database. It will write three records, return the message [Microsoft][ODBC SQL Server Driver][SQL Server]1, and then continue to write the next three records. This is a problem because I only get six of the seven records. I know this message is returned because I write the information into a text file. Could this message be a connection problem between the computer and the server? I haven't seen anything like it online yet. Is there a place on the server that might give me more information so I can better diagnose this problem?

Large Object in buffer cache

Posted: 17 Apr 2013 12:59 AM PDT

Hello. We are using Redgate's SQL Monitor tool and I am getting a 'Large object in buffer cache' alert. I have read that I need to find out what the object is, and that this object may be suitable for compression or a schema revision (data type changes or sparse columns). I have run the following query:

USE [tempdb]
GO
SELECT COUNT(*) AS cached_pages_count,
       (COUNT(*) * 8.0) / 1024 AS MB,
       CASE database_id
            WHEN 32767 THEN 'ResourceDb'
            ELSE DB_NAME(database_id)
       END AS Database_name
FROM sys.dm_os_buffer_descriptors
GROUP BY DB_NAME(database_id), database_id
ORDER BY cached_pages_count DESC;

and the results are:

cached_pages_count  MB           Database_name
344695              2692.929687  a
336681              2630.320312  b
320527              2504.117187  c
154540              1207.343750  d

Can anybody advise on how I should deal with the alert?
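A hedged sketch for drilling from the database level down to the object level: run it inside the database that tops the list above, joining the buffer descriptors to allocation units and partitions to see which tables and indexes hold the most cached pages.

SELECT TOP (20)
       OBJECT_NAME(p.object_id) AS object_name,
       i.name                   AS index_name,
       COUNT(*)                 AS cached_pages,
       COUNT(*) * 8 / 1024      AS cached_mb
FROM sys.dm_os_buffer_descriptors AS bd
JOIN sys.allocation_units AS au
     ON au.allocation_unit_id = bd.allocation_unit_id
JOIN sys.partitions AS p
     ON (au.type IN (1, 3) AND au.container_id = p.hobt_id)
     OR (au.type = 2       AND au.container_id = p.partition_id)   -- LOB allocation units
LEFT JOIN sys.indexes AS i
     ON i.object_id = p.object_id AND i.index_id = p.index_id
WHERE bd.database_id = DB_ID()
GROUP BY p.object_id, i.name
ORDER BY cached_pages DESC;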

Trigger issue, help with trigger

Posted: 17 Apr 2013 04:30 AM PDT

I have a SQL 2008 R2 database that has a regular table and an audit table. I have a trigger set up so it will keep track of changes via the audit table. When an update statement is run against the regular table it fails with:

Msg 2627, Level 14, State 1, Procedure TI_JOB_POSTING_TRIGGER, Line 15
Violation of PRIMARY KEY constraint 'PK_JOB_POSTING_AUDIT_1'. Cannot insert duplicate key in object 'dbo.JOB_POSTING_AUDIT'. The duplicate key value is (548, 0589054 , Apr 17 2013 12:37PM).

This is the statement run against the table:

update job_posting set site_num=8800

Here is the regular table:

[dbo].[JOB_POSTING](
    [JOB_POSTING_NUM] [int] IDENTITY(1,1) NOT NULL,
    [SITE_NUM] [int] NULL,
    [JOB_CAT_COD] [varchar](7) NOT NULL,
    [DEFAULT_TXT_IND] [bit] NOT NULL,
    [ADDRESS1_TXT] [varchar](100) NULL,
    [ADDRESS2_TXT] [varchar](100) NULL,
    [CITY_NAM] [varchar](50) NULL,
    [ST_PROV_COD] [char](2) NULL,
    [POSTAL_COD] [varchar](10) NULL,
    [START_DAT] [date] NULL,
    [END_DAT] [date] NULL,
    [MAINT_EMP_ID] [char](11) NOT NULL,
    [MAINT_TMS] [datetime] NOT NULL,
    [UPLOAD_TMS] [datetime] NULL,
    [POST_IND] [bit] NULL,
    [COMBINATION_NUM] [int] NOT NULL,
    [CREATE_EMP_ID] [char](11) NOT NULL,
    [CREATE_TMS] [datetime] NOT NULL,
    CONSTRAINT [PK_JOB_POSTING] PRIMARY KEY CLUSTERED ( [JOB_POSTING_NUM] ASC

Here is the trigger:

ALTER TRIGGER [dbo].[TI_JOB_POSTING_TRIGGER] on [dbo].[JOB_POSTING]
for INSERT, UPDATE
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    Insert into job_posting_audit
        (JOB_POSTING_NUM, SITE_NUM, JOB_CAT_COD, DEFAULT_TXT_IND, ADDRESS1_TXT,
         ADDRESS2_TXT, CITY_NAM, ST_PROV_COD, POSTAL_COD, START_DAT, END_DAT,
         MAINT_EMP_ID, MAINT_TMS, UPLOAD_TMS, POST_IND, COMBINATION_NUM)
    SELECT JOB_POSTING_NUM, SITE_NUM, JOB_CAT_COD, DEFAULT_TXT_IND, ADDRESS1_TXT,
           ADDRESS2_TXT, CITY_NAM, ST_PROV_COD, POSTAL_COD, START_DAT, END_DAT,
           MAINT_EMP_ID, MAINT_TMS, UPLOAD_TMS, POST_IND, COMBINATION_NUM
    FROM inserted
END

This is my first go-around with triggers... what am I doing wrong?

Number of Active Users

Posted: 17 Apr 2013 06:11 AM PDT

1) Find the number of users on a database in SQL Server 2012.
2) Find the number of users accessing SQL Server 2012.
3) Find the users using a specific object in a database in SQL Server 2012.
4) Find the active processes in SQL Server.
Can someone show me how to find the above, both for SA on the server and for a user who has the db_datareader role? Thanks a lot in advance.
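A hedged sketch of queries that cover most of these, using the session, request and lock DMVs; object-level usage is approximated here by who currently holds locks on the object, and 'MyDatabase'/'dbo.MyTable' are placeholder names.

-- sessions connected to a specific database (database_id on dm_exec_sessions is SQL Server 2012+)
SELECT login_name, COUNT(*) AS session_count
FROM sys.dm_exec_sessions
WHERE database_id = DB_ID('MyDatabase')
GROUP BY login_name;

-- currently active requests across the instance
SELECT s.session_id, s.login_name, r.status, r.command, DB_NAME(r.database_id) AS db
FROM sys.dm_exec_sessions AS s
JOIN sys.dm_exec_requests AS r ON r.session_id = s.session_id
WHERE s.is_user_process = 1;

-- sessions currently holding locks on a specific object
SELECT DISTINCT l.request_session_id, s.login_name
FROM sys.dm_tran_locks AS l
JOIN sys.dm_exec_sessions AS s ON s.session_id = l.request_session_id
WHERE l.resource_type = 'OBJECT'
  AND l.resource_associated_entity_id = OBJECT_ID('dbo.MyTable');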

join aggregated quantity to the breakdown quantity

Posted: 17 Apr 2013 01:58 AM PDT

I have two tables: one with (potentially) multiple records per item with different quantities, and the other with only one record per item holding the aggregated sum of the quantities, i.e.:

Table1
Item  Qty  Memo
ABC   24   pending

Table2
Item  Qty  Memo
ABC   14   test
ABC   10   test
XYZ   5    test

I need to update the memo in Table1 with the memo from Table2, and would like to find an efficient way to do so. Any advice would be greatly appreciated.
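A minimal sketch of one common approach, assuming the memo is the same on every Table2 row for a given item (if it is not, a rule such as MAX() or "latest row" has to be chosen, as MAX is used here purely to collapse the duplicates):

UPDATE t1
SET    t1.Memo = t2.Memo
FROM   Table1 AS t1
JOIN  (SELECT Item, MAX(Memo) AS Memo
       FROM   Table2
       GROUP BY Item) AS t2
       ON t2.Item = t1.Item;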

CMDExec Proxy account not working with Domain Service Account...

Posted: 17 Apr 2013 12:17 AM PDT

I'm working on migrating an existing Server 2003 / SQL 2005 system to Server 2008 R2 / SQL 2008 R2, and am having some problems getting a proxy account to work. The account is needed to execute some OS operations, NOT to affect DBs on the server - things like unzipping files that are received in a particular folder and deleting the files when done. The SQL Agent is running under a domain account for this purpose. There's also a domain-level account for the proxy account.

I've added the proxy account to the SQL logins, created a Credential for it, then created the Proxy in Agent. I've granted the appropriate users access to the proxy (the "Principals" section), and assigned the proxy to execute a step in a test job (dir e:\).

Now, the problem: it fails. If I run the step using the Agent account, it works. If I set the Agent to use the "Network Service" account and the proxy, it works. If I set the Agent to use the domain account and the job to use the proxy account, it fails with:

"Executed as user: domain\svcaccount. The process could not be created for step 1 of job 0x2E0030DE6B7C444E8C0E4759A405B8E5 (reason: A required privilege is not held by the client). The step failed"

I have verified that the proxy account does have access to the E:\ drive by running a command prompt as that account. I also "cloned" the local group membership for the proxy from the Server 2003 system, so it belongs to the local Admins group. I've looked through the Windows Security log, and it shows the account logging in OK: I see a logon event with a subject account name of the Agent service account and a new logon security ID of the proxy account, so it seems the impersonation is working... Any thoughts?

Ghost Cleanup and CHECKPOINT after Migration to SQL 2008 R2

Posted: 17 Apr 2013 05:48 AM PDT

I am migrating a database from SQL 2005 to SQL 2008 R2. As soon as the database is restored to the new server running SQL Server 2008 R2 Enterprise, it starts the Ghost Cleanup and CHECKPOINT processes. This takes about an hour, and disk utilization is very high during that time. It doesn't even let me open the database properties, and simple queries against this database take forever or time out. The new server has much better hardware - the best possible. The database size is 600 GB, and a lot of data was deleted on the old server a few months back. The LDF file is 1 GB when the restore completes, but it then grows to as much as 60 GB. When I run sys.dm_db_index_physical_stats on the database, it doesn't show a huge ghost_record_count either: out of 500 tables in the database, only one table has a ghost record count of 300 and a few other tables have 1 or 2. Is it deleting those records after the migration? Is it because Ghost Cleanup doesn't currently run on my SQL 2005 box? Any suggestions?

Substract End Dates

Posted: 16 Apr 2013 09:11 PM PDT

Hi professionals. I need to subtract the end dates as below - row 2 minus row 1, row 3 minus row 2, row 4 minus row 3 - and show the result in a new column, datediff_days. If the end date is NULL it should be treated as GETDATE().

CREATE TABLE abc
(
    gg_key        int,
    fg_sta_key    int,
    fg_eff_dt_key int,
    fg_end_dt_key int,
    fg_seq_nbr    int
)

insert into abc values (19925, 2, 20130128, 20130128, 1)
insert into abc values (19925, 8, 20130128, 20130128, 2)
insert into abc values (19925, 1, 20130129, 20130416, 3)
insert into abc values (19925, 5, 20130417, NULL, 4)

Regards,
KiranR
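A hedged sketch assuming the int keys hold yyyymmdd dates and that "previous row" means the row with fg_seq_nbr one lower within the same gg_key; on SQL Server 2008 (no LAG) a self-join does the pairing:

SELECT cur.gg_key,
       cur.fg_seq_nbr,
       DATEDIFF(DAY,
                ISNULL(CONVERT(date, CONVERT(char(8), prev.fg_end_dt_key), 112), GETDATE()),
                ISNULL(CONVERT(date, CONVERT(char(8), cur.fg_end_dt_key),  112), GETDATE())
       ) AS datediff_days
FROM abc AS cur
JOIN abc AS prev
     ON prev.gg_key     = cur.gg_key
    AND prev.fg_seq_nbr = cur.fg_seq_nbr - 1;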

Pros & Cons- Table Value Function vs Summary Table

Posted: 17 Apr 2013 01:07 AM PDT

For discussion... I need to provide summary information on records in a table, such as total amounts of varying types and total amounts by status - for example, the total amount of transactions pending, the total amount of transactions accepted, etc. This would be across all accounts, but also at the account level, so toss in there the total amount of transactions pending by account. In the past I would go with a summary table, however I'm not in a position to chase that rabbit, so my thought was to use a table-valued function that queries the table once and then generates the totals using a table variable. The number of users on the front-end website is minimal (3), so the added cost of performing the calculations in memory shouldn't have that great an impact.

AD name change Old name still persists

Posted: 16 Apr 2013 11:36 PM PDT

After a user's name was changed in AD, SQL still shows the user under the previous credentials, i.e. Domain\Usernameold rather than Domain\Usernamenew. Although the user's name has changed to the new account, the old one still persists in SQL. Domain\Usernameold no longer exists, as it cannot be added to a test server as a domain login. SQL must be caching the credentials and marrying them up with the SIDs. How do I remove this from the cache?

Transactional Replication with updatable subscribers over the internet

Posted: 16 Apr 2013 11:44 PM PDT

I have created a publisher on server A. I need to create a subscriber on server B over the internet; for this I'm using a VPN. While creating the subscriber I get the following message: "the initial snapshot for publication is not yet available". I suppose this is because of some permission issue with the snapshot folder which I have created on server A. Do I need to have the same Windows authentication on both servers? As of now they are different, since my servers are on different domains. Please help.

Compare scripts B4 and After a Performance Fix !

Posted: 16 Apr 2013 11:09 PM PDT

How can I (and what are all the ways I can) compare a query or stored procedure before and after a performance fix, to know whether I have really improved it - in terms of speed, memory, resource usage and anything else? See the sketch below. Appreciating your help always, Rafi

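A minimal sketch of the usual first comparison: capture elapsed time, CPU and logical reads for the old and new versions against the same data and compare the numbers in the messages tab. The procedure names are hypothetical, and the DBCC cache-clearing commands should only be run on a test server.

SET STATISTICS IO ON;
SET STATISTICS TIME ON;

DBCC DROPCLEANBUFFERS;    -- optional, test server only: compare cold-cache behaviour
DBCC FREEPROCCACHE;       -- optional, test server only: force recompilation

EXEC dbo.usp_OldVersion;  -- hypothetical "before" procedure
EXEC dbo.usp_NewVersion;  -- hypothetical "after" procedure

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
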
Third party backup tool breaking the backupset LSN chain

Posted: 23 Mar 2011 09:48 AM PDT

I am attempting to develop a workaround to this issue and was wondering whether it is possible to modify the BACKUPSET LSN numbering of a backup, so as to skip the piece or pieces of the backup completed by the third-party tool, so that the next transaction log backup starts at the LSN numbering of the backup taken before the third-party tool was run.

configuring alerts

Posted: 16 Apr 2013 08:46 PM PDT

Can anyone help with how to configure alerts in SQL Server - for things like log growth, backup failure, disk space, login failures etc. - as part of daily server monitoring?
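A hedged sketch of one alert set up through SQL Server Agent (error 9002 = transaction log full), assuming an Agent operator named 'DBA Team' already exists; backup failures are usually caught with job-failure notifications and disk space with a scheduled check rather than alerts.

USE msdb;
GO
EXEC dbo.sp_add_alert
     @name = N'Transaction log full',
     @message_id = 9002,
     @severity = 0,
     @delay_between_responses = 300;

EXEC dbo.sp_add_notification
     @alert_name = N'Transaction log full',
     @operator_name = N'DBA Team',
     @notification_method = 1;   -- 1 = e-mail
GO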

how to do pivot

Posted: 16 Apr 2013 07:42 PM PDT

Hi guys, my table data looks like this:

Key        value
firstname  surya
firstname  rakhi
firstname  venkat
firstname  shankar
firstname  dany

I want to display it like this:

firstname
surya
rakhi
venkat
shankar
dany

I wrote the query below:

SELECT firstname
FROM test pvt
PIVOT (MAX(pvt.value) FOR pvt.key1 IN (firstname)) AS P

but it only shows:

firstname
venkat

Please help me solve this.
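The MAX aggregate collapses every row with the same key into one, which is why only a single name comes back. A hedged sketch that keeps one output row per underlying row by adding a ROW_NUMBER before the PIVOT (assuming the table is test with columns key1 and value):

SELECT p.firstname
FROM (SELECT key1,
             value,
             ROW_NUMBER() OVER (PARTITION BY key1 ORDER BY value) AS rn
      FROM test) AS src
PIVOT (MAX(value) FOR key1 IN ([firstname])) AS p;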

Wednesday, April 17, 2013

[SQL Server Data Warehousing] SSIS Auditing Mechanism?‏


Hi All,


I have a SSIS pkg which inserts/updates tables in my data warehouse. ( SQL Server 2008 )


I need to develop a mechanism to capture how many records added/updated against tables on data warehosuse ( daily basis) and store data onto a table as below.


Can you guys provide some advice on this please?

Table Name     Date          New Rows added    Rows Updated
factSales      2013-04-07    21                115
dimProducts    2013-04-07    4                 2
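One hedged pattern, assuming the loads are T-SQL statements (Execute SQL Tasks or stored procedures): create a small audit table and write one row per table per run from the rowcounts the load itself produces, for example via MERGE ... OUTPUT $action. The table and column names below (staging.Products, ProductKey, ProductName) are illustrative, not taken from the original post. In a pure Data Flow the same numbers can be captured with Row Count transformations into package variables and written out with an Execute SQL Task.

CREATE TABLE dbo.LoadAudit
(
    TableName    sysname NOT NULL,
    LoadDate     date    NOT NULL,
    RowsInserted int     NOT NULL,
    RowsUpdated  int     NOT NULL
);
GO

-- illustrative load for one dimension, counting inserts and updates as it runs
DECLARE @actions TABLE (act nvarchar(10));

MERGE dbo.dimProducts AS tgt
USING staging.Products AS src
      ON src.ProductKey = tgt.ProductKey
WHEN MATCHED THEN
    UPDATE SET ProductName = src.ProductName
WHEN NOT MATCHED THEN
    INSERT (ProductKey, ProductName) VALUES (src.ProductKey, src.ProductName)
OUTPUT $action INTO @actions;

INSERT dbo.LoadAudit (TableName, LoadDate, RowsInserted, RowsUpdated)
SELECT 'dimProducts',
       CAST(GETDATE() AS date),
       SUM(CASE WHEN act = 'INSERT' THEN 1 ELSE 0 END),
       SUM(CASE WHEN act = 'UPDATE' THEN 1 ELSE 0 END)
FROM @actions;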



Regards


Mira



social.technet.microsoft.com/Forums

[SQL server issues] Commented Issue: AdventureWorksDW2008R2 DimDate table has missing dates [13383]


Hello,
Has this been taken care of? If so, please share the latest data for DimDate. I'm working on a POC which requires dates between 2008 and 2010 as well. If it has not already been done, can someone let me know whether there is any way I can generate the data for those missing dates in that table?
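A hedged sketch of generating the missing calendar rows with a recursive CTE; the real DimDate has many more derived columns (fiscal periods, month names, etc.), so this shows only the date-generation part and the output column names are illustrative.

;WITH d AS
(
    SELECT CAST('20080101' AS date) AS CalendarDate
    UNION ALL
    SELECT DATEADD(DAY, 1, CalendarDate)
    FROM d
    WHERE CalendarDate < '20101231'
)
SELECT CONVERT(int, CONVERT(char(8), CalendarDate, 112)) AS DateKey,  -- yyyymmdd surrogate key
       CalendarDate,
       DATEPART(YEAR, CalendarDate)    AS CalendarYear,
       DATEPART(QUARTER, CalendarDate) AS CalendarQuarter,
       DATEPART(MONTH, CalendarDate)   AS MonthNumberOfYear,
       DATEPART(DAY, CalendarDate)     AS DayNumberOfMonth
FROM d
OPTION (MAXRECURSION 0);   -- allow more than the default 100 recursions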




sqlserversamples.codeplex.com

[SQL Server] Not all Databases getting backed up



Not all Databases getting backed up

Posted: 17 Apr 2013 06:03 AM PDT

I have a maintenance plan that makes a full backup of 45 databases every night - at least it is supposed to. The job runs without error, but when it's done and I look at the backup destination, the backups are not always there. I ran it manually just now, for example: only 30 databases got backed up, and the other 15 databases have yesterday's date or older.

Another thing worth mentioning is that these databases are backed up to a Linux system which does not recognize Windows domain credentials. Simply putting in the path will not work in the Back Up Database Task because it will fail authentication. To resolve this, I put an Execute SQL Statement task before the backup starts. Here is the task:

xp_cmdshell 'net use z: /delete'
GO
xp_cmdshell 'net use z: \\winvtblacka02\dbateam /USER:dbateam dbateam'
GO

This allows me to pick drive Z:\ for the location, as seen in the picture. I have looked through the Windows and SQL logs, and the databases that did not get backed up are not even mentioned, so I have no errors to look up. Sometimes the job completes in 15 minutes when it should take several hours.

I am wondering: is there a maximum number of databases that can be backed up per maintenance plan? Is there a hidden setting that tells a job to stop running once it has run for a certain amount of time? Does the logon to the mapped Z:\ drive go stale (I would think only idle connections would time out)? I have it set to "All Databases" with "ignore databases where the state is not online" checked as well. Many of these databases are mirrored and I have the identical setup on the mirror, so if there is a failover the new principal will begin backing them up.

How to find values in a comma delimeted string with len>3

Posted: 17 Apr 2013 12:58 AM PDT

Hi everyone, can you please help me with the following? I have an nvarchar column with data like this: ', 55,85,1,4,9888,6587,'. How can I find the rows whose comma-delimited string includes values with LEN() > 3? I.e. if I had the following rows:

', 55,85,1,4,9888,6587,'
', 55,85,1,4,98,65,'

I would like a statement that returns only the first row (because it has the values 9888 and 6587), or only the values 9888 and 6587. Thank you in advance.
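A hedged sketch: to merely find the rows, a pattern test for four digits in a row is enough for numeric lists like these; extracting the long values themselves would normally be done with a string-splitting function, which is not shown here. The table and column names are placeholders.

-- rows whose delimited string contains at least one value of 4 or more digits (i.e. LEN() > 3)
SELECT *
FROM dbo.MyTable
WHERE MyColumn LIKE '%[0-9][0-9][0-9][0-9]%';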

[how to] PostgreSQL, How to keep only one schema?



PostgreSQL, How to keep only one schema?

Posted: 17 Apr 2013 09:05 PM PDT

I don't want some users to be able to create schemas; they should only use the public schema.

I'd like to remove the permission to create schemas, but I can't find anything in the PostgreSQL 9.1 documentation. How can I do this?
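A hedged sketch: in PostgreSQL, schema creation is governed by the CREATE privilege on the database (granted to PUBLIC by default for the database owner's databases), so revoking it is the core of the fix. The database and role names below are placeholders.

-- remove the right to create schemas in this database from every role
REVOKE CREATE ON DATABASE mydb FROM PUBLIC;

-- grant it back only to the roles that should keep it
GRANT CREATE ON DATABASE mydb TO admin_role;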

SQL Server Transaction Backups vs Logs

Posted: 17 Apr 2013 09:18 PM PDT

I've inherited a moderately-sized business-critical SQL Server 2008 database and am trying to wrap my head around backup planning. (I'm a developer, not a DBA.)

The way our system is set up right now there are two backup systems:

  1. Weekly full backups (.bak) and hourly transaction log (.trn) backups. We keep several sets of these backups and they are regularly shipped offsite.
  2. SQL Server logs (.ldf), with recovery model set to Full. This file is on a separate drive from the main .mdf file, but is otherwise not backed up.

In the event of an emergency restore (or when restoring backups to a dev machine), my procedure is to use the .bak files and then apply the .trn files. We have a script that makes this procedure relatively straightforward.

My questions:

  1. Is it possible to restore the database from the .ldf file? What is it even for?
  2. Is it needlessly redundant to have both of these transaction logs?
  3. Is it important to back up the .ldf file?

Moving one TempdB on 3 instance server

Posted: 17 Apr 2013 09:07 PM PDT

I have a large virtualized SQL Server (Full 2008R2). I run 3 SQL instances and would like to relocate the TempdB database file to another location, splitting off from the TempdB log file.

The trouble I am having is that even though the ALTER DATABASE statement executes successfully, restarting that particular instance's SQL Server service does not relocate the database file.

Do I have to restart the whole server to move this database?
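For reference, a hedged sketch of the usual sequence; only the affected instance's service needs restarting, and tempdb is simply recreated at the new location on the next start (no file copy is needed). The logical names assume the default tempdev/templog and the paths are placeholders.

USE master;
GO
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'T:\SQLData\tempdb.mdf');
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'L:\SQLLogs\templog.ldf');
GO
-- verify what will be used at the next service start for this instance
SELECT name, physical_name
FROM sys.master_files
WHERE database_id = DB_ID('tempdb');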

Oracle SQL, how to filter on a set by a rank?

Posted: 17 Apr 2013 04:24 PM PDT

I am struggling with the following SQL query right now:

http://sqlfiddle.com/#!4/736a7/12

select uc.*
from usercredential uc
inner join users u on u.userid = uc.userid
inner join credential c on c.credentialid = uc.credentialid
inner join healthsystemcredential hsc on hsc.credentialid = c.credentialid
inner join accesslevel ac on hsc.hscredentialid in (
    -- Trying to figure out how to filter here?
    select 1 from dual
)
where c.fulfillmentmethod = 'Attach'
and c.display = 1
and uc.auditstatus <> 0
and ( (uc.auditstatus is null) or (uc.auditstatus = 1 and uc.lastmodified > uc.auditdate) )

Basically I am trying to get UserCredential that are Many To One with Credential. Credential is One To Many with HealthSystemCredential.

The HealthSystemCredential has a foreign key to AccessLevel, however this AccessLevel column in HealthSystemCredential is actually the Lowest Allowable Access Level. AccessLevel records have a Type Rank between 1 at the lowest and 4 at the highest.

Example: A HealthSystemCredential with an AccessLevel record that has a Type Rank of 2 should be fetched when I filter my query by AccessLevels with Types of 2, 3, or 4.

The problem is I can't figure out how to do this in one query. Is there some way I can get a DENSE_RANK column in a subquery of AccessLevels, apply my filter in the subquery and join it to the table somehow to give me all the applicable HealthSystemCredentials I want?

MySQL shared server regular lockup

Posted: 17 Apr 2013 04:20 PM PDT

I have a web/ database server with 5 websites on it, all of which execute fairly complex queries regularly.

Some of these queries appear in the slow log and take around 0.6-0.8 seconds. However I have a regular occurrence of such queries coming from 2 specific websites with 2 separate database that seem to lock each other up. What I am finding in this case in top is a MySQL process occupied with about 20-30% of a CPU and about 30 Apache processes taking up about 30% CPU each. This situation goes away after about 15-20 minutes or if I restart the MySQL server. The slow query log shows more slow queries at this time.

The problem never seems to involve only queries to a single database, but always to the same 2 databases (and websites)

The server is on a VPS with 4GB of RAM, 5 CPU cores on Ubuntu 12.04

Any idea how I could diagnose what is happening?

What's the proper term for the diverse systems to which a DW connects?

Posted: 17 Apr 2013 03:39 PM PDT

The data warehouse I've built gets data from transactional databases, web services, flat files, and a few other bits and pieces. It also writes back to some of these, and to analysis tools. I'm building a unified system for passing changes between these different systems, and it occurs to me that I don't have a single simple term that covers them all.

  • "Database" and "service" are correct for some, but not all.
  • "Data source" implies that these only supply data, which is not correct.
  • "System" and "data system" are correct, but very vague.
  • "End point" fits my mental model of spokes connected to the data warehouse hub, though it's not quite accurate in that some of these systems talk to each other directly.

What term do you use for all the systems you integrate, as part of a data warehouse or just in general? Ideally I'd like a single word which will make intuitive sense to both DBAs and non-technical parties.

problem running xp_cmdshell when Application Name set in connection string

Posted: 17 Apr 2013 02:58 PM PDT

I have an app on 10 or so servers that runs a few xp_cmdshell statements against MSSQL 2008. It works fine on all the servers except for one. To make things worse, I can run all the commands in SQL Management Studio, but in the app, they don't work. I even made a temporary app to test and it works fine! But in the deployed app, I get a simple SQL error "Access Denied". I have narrowed it down to the connection string, if I include the application name

Data Source={0};Initial Catalog={1};Persist Security Info=True;User ID={2};Password={3};Application Name=TheGroovyApp  

It throws access denied only when calling xp_cmdshell, normal SQL statements works fine. But if I remove the application name

Data Source={0};Initial Catalog={1};Persist Security Info=True;User ID={2};Password={3}  

It works fine for both normal SQL statements and calls to xp_cmdshell. The strange thing is, it's only happening on one of the ten servers. The only difference is that the server has SP1 and the others do not.

@@VERSION returns

Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (X64)   Apr  2 2010 15:48:46   Copyright (c) Microsoft Corporation  Standard Edition (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1) (VM)   

I am thinking that there is some sort of authentication that can be granted to applications, but I can't seem to find anything. I can replicate it in SQL Management Studio by adding

Application Name=TheGroovyApp  

to the Additional Connection Parameters tab on the Connect to Database Engine dialog when you create a new query or change its connection.

The simple test statement I use is

EXEC master..xp_cmdshell 'DIR F:\SomeDirectory'  

If anyone could shed some light on what is happening it would be much appreciated.

How should I set up these tables in my relational database?

Posted: 17 Apr 2013 06:31 PM PDT

I'm working on a project where a person can create a survey with custom questions and display the questions in whatever order they choose. They can break the questions into sections. The problem I'm having is that there are 2 or more groups of people. Each group can have the same question in different sections and/or in a different order. My tables right now are:

Group:    key(gId), gName
Question: key(qId), qName
Section:  key(sId), sName
QOrder:   key(gId, qId), sId, qOrder

Is this okay or is there a better/more efficient way of doing this? Any help would be appreciated.

How does SQL Server generate a query execution plan that adds up to 6,000%?

Posted: 17 Apr 2013 01:55 PM PDT

weird execution plan

Today I was on The Heap and was looking at a query plan for something I thought could be improved. However, it created something that shook my belief in the SQL Server query optimizer. Can I still trust it if it cannot even count to 100%?

Characteristics of the table:

  • clustered on a non-identity column
  • 12 indexes, one of them being the date_entered column in question
  • 60,000 records
  • 26 columns of various types and length
  • PAGE compressed table

Has anyone seen this before and what causes the plan to look so skewed?


And below from SQL Sentry Plan Explorer

SQL Sentry Plan Explorer - image Top Operations

exec xp_cmdshell 'sc \\<servername/serverip> start <service_name>'

Posted: 17 Apr 2013 03:37 PM PDT

When I execute this in SQL Server against another machine (in the same subnet) I get "Access is denied". But when I run it from a command prompt (sc \ start ) it works.

Please advise why I am getting access denied when I run it from SQL Server using exec xp_cmdshell.

Best practice for copying tables from server to server

Posted: 17 Apr 2013 07:12 PM PDT

Not being a DBA and not having sysadmin privileges to do a backup/restore, would the following be a good solution for copying a set of tables? (I have a linked server from serverA to serverB)

copyTables

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[copyTables]
AS
-- NOCOUNT ON added to prevent extra result sets from interfering
-- with SELECT statements. XACT_ABORT ON to terminate the transaction
-- and rollback if a run-time error occurs.
SET NOCOUNT, XACT_ABORT ON

BEGIN
    DECLARE @serverName varchar(50), @dbName varchar(50), @schemaName varchar(50)

    SET @serverName = QUOTENAME('serverB')
    SET @dbName = QUOTENAME('db')
    SET @schemaName = QUOTENAME('dbo')

    BEGIN TRY
        BEGIN TRANSACTION

        EXEC [dbo].[copyTable] @serverName, @dbName, @schemaName, 'tbl1', 'copyLog'
        EXEC [dbo].[copyTable] @serverName, @dbName, @schemaName, 'tbl2', 'copyLog'
        EXEC [dbo].[copyTable] @serverName, @dbName, @schemaName, 'tbl3', 'copyLog'
        ...

        COMMIT TRANSACTION
    END TRY
    BEGIN CATCH
        -- Insert error into log table

        ROLLBACK
    END CATCH
END
GO

copyTable

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[copyTable]
    @serverName varchar(50),
    @dbName varchar(50),
    @schemaName varchar(50),
    @tableName varchar(50),
    @logTableName varchar(50)
AS
-- NOCOUNT ON added to prevent extra result sets from interfering
-- with SELECT statements. XACT_ABORT ON to terminate the transaction
-- and rollback if a run-time error occurs.
SET NOCOUNT, XACT_ABORT ON

BEGIN
    DECLARE @localFullName varchar(200), @localShortName varchar(150),
        @foreignFullName varchar(200), @logShortName varchar(150);

    SET @localFullName = QUOTENAME(@dbName) + '.' + QUOTENAME(@schemaName)
        + '.' + QUOTENAME(@tableName);
    SET @localShortName = QUOTENAME(@schemaName) + '.' + QUOTENAME(@tableName);
    SET @foreignFullName = QUOTENAME(@serverName) + '.' + QUOTENAME(@dbName)
        + '.' + QUOTENAME(@schemaName) + '.' + QUOTENAME(@tableName);
    SET @logShortName = QUOTENAME(@logTableName) + '.' + QUOTENAME(@schemaName)
        + '.' + QUOTENAME(@tableName);

    IF EXISTS
    (
        SELECT *
        FROM sys.objects
        WHERE object_id = OBJECT_ID(@localShortName)
              AND type in (N'U')
    )
    BEGIN
        DROP TABLE @localShortName;
        SELECT *
            INTO @localFullName
            FROM @foreignFullName;

        INSERT INTO @logShortName (stamp, [message])
        VALUES
            (
                GETDATE(),
                'Table ' + @foreignFullName + ' was copied'
            );
    END
END

SQL Server 2012 AlwaysOn: need to automatically add databases by script- T-sql or PowerShell

Posted: 17 Apr 2013 02:35 PM PDT

I've installed and successfully configured our SQL Server 2012 AlwaysOn 2-node servers for our new "Intranet" that is coming out. I've gotten AlwaysOn working great, and our Front End servers for the Intranet will be using SharePoint 2013. The glitch is that SharePoint 2013 is configured to add databases automatically to our SQL Server 2012 back end, but NOT to AlwaysOn. In reading about this and in contacting Microsoft MSDN support, the default answer is "you must manually find, select, back-up and then add those new databases individually to get them into AlwaysOn."

But wait; that can be quite a task, constantly checking the SQL Server back-end servers to see what databases were created, then having to add them into AlwaysOn, 7/24! I'm looking for a script or process that will check for new databases, back those new databases up in FULL mode, (for being added to AlwaysOn, of course) then add those databases to AlwaysOn, all automatically. Or have this run every...1-2 hours? (without user intervention)

What I've come up with so far is this script that actually identifies the newly-added databases, (not yet in AlwaysOn), and then backs them up to a shared location. My next task is to find those newly-added databases and through the various processes needed, get them added to AlwaysOn. This will involve some sort of looping action, I imagine. I'm not a T-SQL/scripting guru; is there any solution or script that I might access that would do this? (add databases to AlwaysOn automatically)?

Please advise, I'm sure I'm not the first person to have this issue. I have seen previous posts on various Internet Sites (including this one!) , and the solution is either incorrect, or states something like "sure, go ahead and just script that!". Thanks, but I need just a little more detail there.

Thanks again,

-Allen

DECLARE @name VARCHAR(50)     -- database name
DECLARE @path VARCHAR(256)    -- path for backup files
DECLARE @fileName VARCHAR(256) -- filename for backup

-- specify database backup directory
SET @path = '\\atel-web-be2\backups\'

DECLARE db_cursor CURSOR FOR
select name from sys.databases
where group_database_id is null and replica_id is null
    and name not in ('master','model','msdb','tempdb')

OPEN db_cursor
FETCH NEXT FROM db_cursor INTO @name

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @fileName = @path + @name + '.BAK'
    BACKUP DATABASE @name TO DISK = @fileName

    FETCH NEXT FROM db_cursor INTO @name
END

CLOSE db_cursor
DEALLOCATE db_cursor

How can I join one to one to (many to one) without using group_concat

Posted: 17 Apr 2013 11:36 AM PDT

I have a table structure as follows:

    --  -- Table structure for table `emails`  --    CREATE TABLE IF NOT EXISTS `emails` (    `ID` int(5) NOT NULL AUTO_INCREMENT,    `email` varchar(255) COLLATE utf8_unicode_ci NOT NULL,    `sent` smallint(1) NOT NULL DEFAULT '0',    PRIMARY KEY (`ID`)  ) ENGINE=MyISAM  DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci AUTO_INCREMENT=5062 ;    -- --------------------------------------------------------    --  -- Table structure for table `ips`  --    CREATE TABLE IF NOT EXISTS `ips` (    `ID` int(11) NOT NULL AUTO_INCREMENT,    `ip` varchar(255) NOT NULL,    PRIMARY KEY (`ID`)  ) ENGINE=InnoDB  DEFAULT CHARSET=latin1 AUTO_INCREMENT=7534 ;    -- --------------------------------------------------------    --  -- Table structure for table `user_items`  --    CREATE TABLE IF NOT EXISTS `user_items` (    `ID` int(10) NOT NULL AUTO_INCREMENT COMMENT 'Allows sorting by last added..',    `name` varchar(255) NOT NULL,    `owner` varchar(255) NOT NULL,    `folder` int(10) NOT NULL,    `version` int(5) NOT NULL,    PRIMARY KEY (`ID`),    KEY `name` (`name`),    KEY `folder` (`folder`)  ) ENGINE=InnoDB  DEFAULT CHARSET=latin1 AUTO_INCREMENT=10431 ;    -- --------------------------------------------------------    --  -- Table structure for table `data`  --    CREATE TABLE IF NOT EXISTS `data` (    `ID` int(10) NOT NULL AUTO_INCREMENT,    `name` varchar(255) NOT NULL,    `info` varchar(255) DEFAULT NULL,    `inserted` varchar(255) NOT NULL,    `version` int(5) NOT NULL,    PRIMARY KEY (`ID`),    KEY `inserted` (`inserted`),    KEY `version` (`version`),    KEY `name_version` (`name`,`version`)  ) ENGINE=InnoDB  DEFAULT CHARSET=latin1 AUTO_INCREMENT=7207 ;    -- --------------------------------------------------------    --  -- Table structure for table `data_emails`  --    CREATE TABLE IF NOT EXISTS `data_emails` (    `ID` int(11) NOT NULL AUTO_INCREMENT,    `email_id` int(5) NOT NULL,    `name` varchar(255) NOT NULL,    `version` int(5) NOT NULL,    `time` int(255) NOT NULL,    PRIMARY KEY (`ID`),    KEY `version` (`version`),    KEY `name_version` (`name`,`version`)  ) ENGINE=InnoDB  DEFAULT CHARSET=latin1 AUTO_INCREMENT=9849 ;    -- --------------------------------------------------------    --  -- Table structure for table `data_ips`  --    CREATE TABLE IF NOT EXISTS `data_ips` (    `ID` int(5) NOT NULL AUTO_INCREMENT,    `ns_id` int(5) NOT NULL,    `name` varchar(255) NOT NULL,    `version` int(5) NOT NULL,    `time` int(255) NOT NULL,    PRIMARY KEY (`ID`),    KEY `version` (`version`),    KEY `name_version` (`name`,`version`)  ) ENGINE=InnoDB  DEFAULT CHARSET=latin1 AUTO_INCREMENT=17988 ;  

What i need to achieve is as follows. I need to get each user_item and get the data, emails, and ips associated with it. user_items links to data, data_emails, and data_ips by the columns 'name' and version.

data_emails and data_nameservers link to emails and ips respectively with email_id/ip_id = ID

Because multiple joins result in row multiplication I have had to use nested subqueries. Because a user can have multiple emails and ips associated with an item I have used group_concat to group all that particular row.. I then explode this column in my PHP - this in itself seems inefficient, but I see no other way of doing it?

Now I am appreciative that I could speed this up with proper indexes (which I am getting my head fully around atm), but I am doubtful that there would be decent speed improvements. Others have likewise been doubtful.

Without nested subqueries and group_concat, no one on the Internet seems to be able to propose another way of doing this.

I ideally want to be able to do this in a SQL query as I want to integrate search functionality whereby a user for example can search for items he has which have a given email associated with them. This would be incredibly complex/resource intensive if the tables are not joined.

Does ANYONE have any proposals as to how a SQL join such as that described could be implemented without huge load times?

Thanks

How to optimize a query that's running slow on Nested Loops (Inner Join)

Posted: 17 Apr 2013 02:54 PM PDT

Consider the following simple query (only 3 tables involved)

    SELECT
        l.sku_id AS ProductId,
        l.is_primary AS IsPrimary,
        v1.category_name AS Category1,
        v2.category_name AS Category2,
        v3.category_name AS Category3,
        v4.category_name AS Category4,
        v5.category_name AS Category5

    FROM category c4
    JOIN category_voc v4 ON v4.category_id = c4.category_id and v4.language_code = 'en'

    JOIN category c3 ON c3.category_id = c4.parent_category_id
    JOIN category_voc v3 ON v3.category_id = c3.category_id and v3.language_code = 'en'

    JOIN category c2 ON c2.category_id = c3.category_id
    JOIN category_voc v2 ON v2.category_id = c2.category_id and v2.language_code = 'en'

    JOIN category c1 ON c1.category_id = c2.parent_category_id
    JOIN category_voc v1 ON v1.category_id = c1.category_id and v1.language_code = 'en'

    LEFT OUTER JOIN category c5 ON c5.parent_category_id = c4.category_id
    LEFT OUTER JOIN category_voc v5 ON v5.category_id = c5.category_id and v5.language_code = @lang

    JOIN category_link l on l.sku_id IN (SELECT value FROM #Ids) AND
    (
        l.category_id = c4.category_id OR
        l.category_id = c5.category_id
    )

    WHERE c4.[level] = 4 AND c4.version_id = 5

This is a pretty simple query; the only confusing part is the last category join, which is written this way because category level 5 might or might not exist. At the end of the query I am looking for category info per product ID (SKU ID), and that's where the very large table category_link comes in. Finally, the table #Ids is just a temp table containing 10,000 IDs.

When executed, I get the following actual execution plan:

Actual Execution Plan

As you can see, almost 90% of the time is spent in the Nested Loops (Inner Join). Here's extra information on those Nested Loops:

Nested Loops (Inner Join)

Note that the table names don't match exactly because I edited the query table names for readability, but it's pretty easy to match (ads_alt_category = category). Is there any way to optimize this query? Also note that in production, the temp table #Ids doesn't exist, it's a Table Valued Parameter of the same 10'000 Ids passed on to the Stored Procedure.

Additional info:

  • category indices on category_id and parent_category_id
  • category_voc index on category_id, language_code
  • category_link index on sku_id, category_id

Buffer usage for index with no reads

Posted: 17 Apr 2013 02:14 PM PDT

Using SQL Server 2008 R2, the main transactional table in a vendor database is massive compared to all others and has about 14 indexes. Some of these indexes don't get used in our environment, but we can't remove them. That's not a problem, it's just something we have to live with.

The question I have is about why some of these low-or-no-read indexes seem to be using memory - more than other indexes on the same large table that get used often. I would have thought that most of the buffer cache would go towards objects that are read from frequently. The only thing happening on these indexes is write overhead.

For example, one of these low-read indexes is being allocated about 2 GB of memory (58% of the index's total size) and another has 1.7 GB of memory (27% of its size). Meanwhile, the monster-sized and well-used clustered index itself only has 4 gigs (2% of its size). A different NC index with lots of reads only has 100 GB in the buffer cache (5% of its size).

Looking at the physical stats, I can see the fragmentation is pretty bad. That's understandable from all the writes on this table and the non-sequential inserts. I'm not sure if it could be related to memory usage, though.

Looking at the operational stats for these indexes is also interesting.

  • The leaf_ghost_count reminds me that there are just about as many deletes taking place on this table as there are inserts (from a regular cleaning process).
  • One of these low-read indexes has some of the highest page_lock_wait values in the database. Perhaps that's only because of the writes?
  • 2 others have some of the highest page_io_latch_wait values. I understand that latch usage would be related to memory usage, so that makes sense.

I realize this is an abstract question and that I'm not providing many actual stats. I'm just curious about how SQL Server comes to these buffer cache usage decisions and wonder if anyone out there understands it.

How can I verify that a full database restore reflects the exact source database in SQL Server?

Posted: 17 Apr 2013 09:02 PM PDT

We are decommissioning our old SQL Server 2000 Ent. instance in favor of SQL Server 2008 R2 Ent. My planned migration path is:

  1. Terminate client connections (2000)
  2. Full Backup (2000)
  3. Restore (2008 R2)

I am being asked to provide conclusive proof that every single transaction "made it" and that the data is an exact replication of what existed on the 2000 instance.

I hope that I can use the following documentation as evidence:

However, if this is not sufficient, the only thing I can think of is to rip through every row of every table of every database and calculate a checksum (on both instances), as well as get row counts for every table in every database.

Is there any better way to satisfy the "exact replica" verification criteria? I'm also open to better documentation.

How to recover data from corrupted innoDB table from a crashed mySQL server?

Posted: 17 Apr 2013 08:25 PM PDT

Some background:

I had a MySQL server running on Ubuntu Server. I don't remember which versions they were running but it was likely a Ubuntu 12.04 with MySQL 5.5. During the course of a day, connections from the client application to the server kept dropping. On the next day, the server crashed and mysql wouldn't start again. I was also getting lots of disk errors so I decided to attempt to recover the data on a new machine.

What I tried:

I tried to recover the data with a Ubuntu 12.04 running MySQL 5.5 but I couldn't get the service running. Before troubleshooting it, I tried the same steps on a Ubuntu Desktop running mySQL 5.1 that I had available, and they worked. Here's what I did:

  1. I copied /var/lib/mysql from the crashed server, to the new one
  2. I ran chown -R mysql:mysql /var/lib/mysql
  3. I ran service mysql start

The service started and everything seemed to work, except on one table. This is a table with BLOB data.

After some trial and error, I narrowed the problematic queries to record ids above a certain number (roughly, the last 100 records of a total of 7000 records).

Sample of some queries and outputs:

> select field1 from table1 where field1 = 6900  > (completes successfully)  > select field1 from table1 where field1 <= 6900  > (completes successfully)  > select field1 from table1 where field1 = 6901  > Error 2013 (HY000) at line 1: Lost connection to mySQL server during query  

Then, I started trying to dump the data in an attempt to recover it:

> mysqldump -u root -p database1 table1 > table1.sql  > Lost connection to mySQL server during query when dumping table 'table1' at row 6642  

I looked into mysqldump's options and decided to try to add -f and the output was different:

> Couldn't execute 'UNLOCK TABLES': MySQL server has gone away (2006)  

To make sure the problem's in this table, I tried:

> mysqldump -u root -p --ignore-table=database1.table1 database1 > database1.sql  

... and the command completed successfully.

I tried to delete one of the offending records and the delete command completed successfully. I guess I could delete them all, from the first problematic one and minimize losses but is there any hope of recovering them?

What would you suggest I do from here?

Sidenote: I was using Navicat to manage the server and make backups, and strangely enough, restoring that particular table with a recent backup also fails with the connection lost error, which leads me to believe that this table has been having problems for a while now, maybe due to disk errors and not specifically due to the crash.

How to permanently delete a row in oracle forever

Posted: 17 Apr 2013 01:13 PM PDT

I want to delete a record from a table, and no one should ever be able to recover the deleted data. I have run the delete operation, but the record is still in the undo segments, and one can do a flashback and retrieve that row.

Executing Child packages without using master package

Posted: 17 Apr 2013 08:30 PM PDT

I have many Fact and Dimension packages which are executed using vb script . Today one of my seniors proposed that these packages should be executed individually instead of creating a master package to execute the child pkgs using execute package task .His reasoning was based on the fact that execute package task will create several threads which will be performance intensive when compared to individually executing the packages.

Leaving aside the sharing of config files and other issues, how can executing them individually be more performant than creating a master package for all the child packages?

By the way, the child packages are executed out of process, which is configured in the Execute Package Task.

Can SQL Server 2008 / 2012 have execute permission in a function for a user, but not for a table used in the function?

Posted: 17 Apr 2013 01:54 PM PDT

I have a function that returns a table. This function SELECTs from DB1.dbo.table1.

user1 has SELECT permission on this function only, but when selecting from the function an error comes up stating that the security context does not allow access to DB1.

Can a user have permission to read only functions, but not the underlying tables?
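
Within a single database this normally just works through ownership chaining: if the function and the table have the same owner, granting SELECT on the function is enough and the user needs nothing on the table. The chain breaks across databases (DB1 here) unless cross-database ownership chaining is enabled or the function is signed with a certificate. A minimal sketch with hypothetical object and database names:

    -- same-database case: user1 needs no permission on the underlying table
    GRANT SELECT ON dbo.fnOpenItems TO user1;

    -- cross-database case: one option (with security implications) is to enable
    -- chaining on both databases; certificate signing is the tighter alternative
    ALTER DATABASE DB1      SET DB_CHAINING ON;
    ALTER DATABASE CallerDB SET DB_CHAINING ON;   -- CallerDB is a placeholder name

Even with chaining on, the login must still be mapped to a user in DB1 (no object permissions required) for the access to succeed.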

Migrating from SQL Server 2005 to SQL Server 2008 R2 on a different box

Posted: 17 Apr 2013 12:26 PM PDT

We currently have a SQL Server 2005 instance hosting a lot of application specific databases (ArchiveManager, SolarWinds, Kaspersky etc). We are planning to deploy a new SQL Server 2008 R2 instance on a new server and slowly migrate across to this instance. All of the applications have fairly straightforward migration plans for changing the database location normally revolving around stopping services, detaching, attaching and rerunning installers.

All looks good so far. My only concern is with the SQL Server Agent jobs and logins. Initially I was under the impression that these were all stored in the system databases and that I could just move those. I understand now that this is not the case. Microsoft KB 918992 suggests running a script that creates a stored procedure, calling it, and then running the generated script on the new instance. There seem to be a lot of caveats to it, though. Another option would be to use SSIS to generate a "Transfer Logins" and "Transfer Jobs" package and run that (both editions are Standard), but I am unversed in SSIS so am a touch scared to use it.

I suppose my question boils down to this:

  • Is the Microsoft KB the best way to migrate logins to the new SQL instance?
  • Is SSIS the only way to migrate the Agent jobs to the new instance?

I am trying to get it as clear in my head as possible for my planning documentation so am very appreciative of any replies/guidance.
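
For the logins, the KB 918992 route is the usual one: the script it provides creates sp_help_revlogin (plus a helper procedure), which scripts the logins out with their original SIDs and password hashes so users don't arrive orphaned on the new instance. For the Agent jobs you don't strictly need SSIS; SSMS can script them. A rough outline, assuming the KB script has already been run on the 2005 instance:

    -- on the SQL Server 2005 instance, after installing the KB 918992 procedures:
    EXEC master.dbo.sp_help_revlogin;
    -- copy the generated CREATE LOGIN statements and run them on the 2008 R2 instance

    -- Agent jobs: in SSMS Object Explorer, expand SQL Server Agent > Jobs,
    -- right-click the jobs > Script Job as > CREATE To > New Query Editor Window,
    -- then run that script against the new instance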

MSDTC on SQL Server 2012 with mount point for data disk

Posted: 17 Apr 2013 11:29 AM PDT

I have installed a new SQL Server 2012 instance on a cluster. The installation went fine, but when I added the MSDTC as a resource of the instance, things went wrong. Right after I added it, the MSDTC was online, but when I tried my first failover it refused to come back online. The error message was "MSDTC failed".

My configuration is a physical disk (let's say J:) with two mount points, "DATA" and "LOG".

I have read about people who experienced this kind of error with mount points and SQL Server 2008, but nothing about 2012.

I tried to create an "MSDTC" folder on the physical drive (the J: one), but that didn't help.

Does anyone know something about it or know how to configure the MSDTC and force it to use the J: drive?

Thx a lot!

Backing up MySQL database

Posted: 17 Apr 2013 11:22 AM PDT

I have a quite large and rather important DB to back up.

First off, what's the best way to do it?

Secondly, the easiest way I know is to export it through phpMyAdmin, but I'm worried about the options. Although I'm quite sure that "Add DROP DATABASE" will not drop the actual database, I'm interested in what these options are for (see pic).

Thanks.

[screenshot: phpMyAdmin export options]
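
For a database you care about, a scheduled mysqldump (ideally combined with binary logs or a replica) is usually a safer habit than manual phpMyAdmin exports. As for the options in the screenshot: "Add DROP DATABASE" does not drop anything when you export; it only writes a DROP DATABASE statement into the dump file, so the database is dropped and recreated when the dump is later re-imported. Roughly, the top of such an export looks like this (illustrative database name):

    -- statements placed at the top of the export when the "Add DROP DATABASE"
    -- and "Add CREATE DATABASE" options are ticked; they only run on re-import
    DROP DATABASE IF EXISTS `mydb`;
    CREATE DATABASE `mydb`;
    USE `mydb`;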

SQL 2008: Why is *any* user, with minimal permissions able to restart/stop the SQL instance through Management Studio

Posted: 17 Apr 2013 01:26 PM PDT

So I'm on a project where I need to help administer SQL Server 2008. I'm coming from a MySQL background.

Why is it that a basic user, with the most basic/default permissions, is able to restart or stop the entire SQL Server instance through Management Studio, but not view a simple table?
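
The short version, as far as I understand it: stopping or restarting the instance from Object Explorer is a Windows service-control action, so what matters is the user's rights on the Windows service (for example, local Administrators membership on the server), not their SQL Server permissions. Viewing a table, on the other hand, is governed entirely by SQL Server permissions. To see what a login can actually do inside SQL Server, something like this helps:

    -- server-level permissions held by the current login
    SELECT * FROM sys.fn_my_permissions(NULL, 'SERVER');

    -- the separate T-SQL SHUTDOWN command requires the SHUTDOWN server permission,
    -- which ordinary logins do not have by default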

Optimizing queries on a range of timestamps (two columns)

Posted: 17 Apr 2013 03:45 PM PDT

I use PostgreSQL 9.1 on Ubuntu 12.04.

I need to select records inside a range of time: my table time_limits has two timestamp fields and one integer property. There are other informational columns, but they are not used in any queries, only for storing information, so I won't mention them further.

create table time_limits (
    start_date_time timestamp,
    end_date_time   timestamp,
    id_phi          integer,
    primary key (start_date_time, end_date_time, id_phi)
);

This table now contains something like 2,000,000 records.

But the time spent was so enormous for a query like this:

select * from time_limits as t
where t.id_phi = 0
and   t.start_date_time <= timestamp '2010-08-08 00:00:00'
and   t.end_date_time   >= timestamp '2010-08-08 00:05:00'

that I tried creating this inverse index:

create index idx_inversed on time_limits(id_phi,start_date_time,end_date_time);  

I have the impression that the times are better now: the time needed to access records in the middle of the table seems more reasonable (somewhere between 40 and 90 seconds).

But it's still several tens of seconds for values in the middle of the time range, and twice as long when targeting the end of the table (chronologically speaking).

So I tried EXPLAIN ANALYZE for the first time. You can see the result posted on depesz.com.

    QUERY PLAN
    Bitmap Heap Scan on time_limits  (cost=4730.38..22465.32 rows=62682 width=36) (actual time=44.446..44.446 rows=0 loops=1)
      Recheck Cond: ((id_phi = 0) AND (start_date_time <= '2011-08-08 00:00:00'::timestamp without time zone) AND (end_date_time >= '2011-08-08 00:05:00'::timestamp without time zone))
      ->  Bitmap Index Scan on idx_time_limits_phi_start_end  (cost=0.00..4714.71 rows=62682 width=0) (actual time=44.437..44.437 rows=0 loops=1)
            Index Cond: ((id_phi = 0) AND (start_date_time <= '2011-08-08 00:00:00'::timestamp without time zone) AND (end_date_time >= '2011-08-08 00:05:00'::timestamp without time zone))
    Total runtime: 44.507 ms
    (5 rows)

What could I do to optimize the search? You can see that all the time is spent scanning the two timestamp columns once id_phi is set to 0. And I don't understand the big scan (60K rows!) on the timestamps; aren't they indexed? (Both by the primary key and by the idx_inversed index I created.)

Should I change from timestamp types to something else ?

I have read (really just a little) about GiST and GIN indexes; I only know that they can be more efficient under certain conditions and for custom types. Should I go this way to get an efficient time index?
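
One well-known approach for this kind of "interval contains interval" predicate is to index the two timestamps as a single range value with GiST, which serves the combined <= / >= condition much better than a b-tree. It does require PostgreSQL 9.2 or later for range types, and the btree_gist extension to mix the integer column into the index. A sketch, assuming an upgrade is possible:

    CREATE EXTENSION btree_gist;

    CREATE INDEX idx_time_limits_range ON time_limits
        USING gist (id_phi, tsrange(start_date_time, end_date_time, '[]'));

    -- "the stored interval contains the queried interval"
    SELECT *
    FROM   time_limits
    WHERE  id_phi = 0
    AND    tsrange(start_date_time, end_date_time, '[]')
               @> tsrange('2010-08-08 00:00:00', '2010-08-08 00:05:00', '[]');

The query has to repeat the exact tsrange(...) expression used in the index definition so the planner can match it to the expression index.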

Payment methods conceptual and logical model

Posted: 17 Apr 2013 09:02 PM PDT

I need to create conceptual and logical (normalized) models of a parking house according to the requirements below. It looks to me like a very simple concept in which not all tables need relationships, but then they could not be modelled as entities. I tried asking this on Stack Overflow but have had no feedback for a couple of days now.

  1. Three possible methods of payment:

    • a ticket paid on leaving,
    • prepaid card with cash credit,
    • prepaid card with "time credit",
  2. Price of ticket depends on time:

    1. 1-2hrs = $0,
    2. 3hrs = $2,
    3. 4hrs = $4,
    4. afterwards, x hrs = $(x+1), but max. $20 for 24 hrs (easiest to put these into 24 rows, right?).
  3. A ticket (a) may be allowed a 20% discount (e.g. for shopping in the mall).

  4. The cash credit card uses the same prices as tickets, but with a 40% discount.
  5. Cash credit card can be reloaded.
  6. Time card is paid once and allows parking while valid.

The problem is I don't know how to put those highlighted relations into the logical DB model, or whether to put them there at all. Is it OK practice to have isolated tables in the design?
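
Isolated tables are usually a hint that a unifying entity is missing rather than a design to aim for; one common way to tie the three payment methods together is a supertype/subtype split, with the 24-row tariff kept as plain data. A rough sketch with hypothetical table and column names:

    CREATE TABLE payment_medium (           -- supertype: one row per ticket or card
        medium_id   INT PRIMARY KEY,
        medium_type CHAR(1) NOT NULL        -- 'T' ticket, 'C' cash card, 'H' time card
    );

    CREATE TABLE ticket (
        medium_id   INT PRIMARY KEY REFERENCES payment_medium (medium_id),
        discounted  BIT NOT NULL            -- the optional 20% mall discount
    );

    CREATE TABLE cash_card (
        medium_id   INT PRIMARY KEY REFERENCES payment_medium (medium_id),
        balance     DECIMAL(10,2) NOT NULL  -- reloadable credit; 40% discount applies
    );

    CREATE TABLE time_card (
        medium_id   INT PRIMARY KEY REFERENCES payment_medium (medium_id),
        valid_from  DATE NOT NULL,
        valid_to    DATE NOT NULL
    );

    CREATE TABLE tariff (                   -- 24 rows: hours parked -> price
        hours       INT PRIMARY KEY,
        price       DECIMAL(10,2) NOT NULL
    );

The tariff table does not need a foreign key to anything; it is simply looked up by the elapsed hours at payment time, which is a perfectly normal role for a reference table.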

How to check the status of SQL Server?

Posted: 17 Apr 2013 08:21 PM PDT

I need to check the status of SQL Server. When I say status, I don't mean the status of each database, but rather of the whole SQL Server instance.

I need to know which queries are running on the server, the number of connections, and which queries are taking too long.

Can anyone please help me with this?
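
The dynamic management views are the usual starting point for "what is running right now and what is slow" (sp_who2 gives a quicker but cruder view of connections). A sketch:

    -- currently executing requests, how long they have run, and their statement text
    SELECT  r.session_id,
            s.login_name,
            r.status,
            r.wait_type,
            r.total_elapsed_time / 1000.0 AS elapsed_seconds,
            t.text AS query_text
    FROM    sys.dm_exec_requests r
    JOIN    sys.dm_exec_sessions s ON s.session_id = r.session_id
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
    WHERE   s.is_user_process = 1
    ORDER BY r.total_elapsed_time DESC;

    -- number of connections per login
    SELECT  login_name, COUNT(*) AS connections
    FROM    sys.dm_exec_sessions
    WHERE   is_user_process = 1
    GROUP BY login_name;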

SQLite: A problem of tags and products

Posted: 17 Apr 2013 07:42 PM PDT

I'm searching for a way to create a query to do the following:

Let's consider 3 tables:

  • products: List of products
  • tags: list of tags
  • tag_ties: table used to associate a tag to a product

Let's consider this structure for each table:

Products:

  • id (int, autoincrement)
  • name (varchar, name of the product)

Tags:

  • id (int autoincrement)
  • label (varchar, label of the tag)

Tag_ties:

  • id (int, autoincrement)
  • tag_id (int, reference to a tag id)
  • ref_id (int, reference to a product id)

What I want:

Obtain all the products that are tagged with tag ids 10, 11 and 12, for example.

This query does not work, as it returns the products having at least one of the tags:

select
    p.name as name,
    p.id as id
from
    products p inner join tag_ties ties
    on p.id = ties.ref_id
where
    ties.ref_id = p.id and
    ties.tag_id in (10, 11, 12)
group by
    p.id
order by
    p.name asc
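
The usual fix is relational division: keep the IN list, but only keep products for which all three tags matched, by counting distinct tags per product. A sketch (the DISTINCT guards against duplicate rows in tag_ties):

    select
        p.id, p.name
    from
        products p
        inner join tag_ties ties on ties.ref_id = p.id
    where
        ties.tag_id in (10, 11, 12)
    group by
        p.id, p.name
    having
        count(distinct ties.tag_id) = 3
    order by
        p.name asc;

If a generic version is needed, the literal 3 becomes the number of ids in the IN list.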

[SQL server issues] Created Issue: New to datawarehousing...and want to know how the Adventurework and AdventureworkDW has been developed [17881]

I'm new to data warehousing and want to know how the AdventureWorks and AdventureWorksDW databases were developed: how the OLTP data was extracted into the AdventureWorksDW data warehouse, and how the DIM tables were created. I also need some design documents.

sqlserversamples.codeplex.com

[SQL server issues] Created Issue: SQL Server 2008 Developer edition sample database install error [18174]

I'm trying to install the AdventureWorks sample database and I get an error.
I successfully downloaded the mdf file; however, when I attach it, add the file, and click OK in SSMS, I get the error:
 
Msg 5172, Level 16, State 15, Line 1
The header for file 'C:\Program Files\Microsoft SQL Server\AdventureWorks2012_Data.mdf' is not a valid database file header. The FILE SIZE property is incorrect.
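
For what it's worth, two things commonly produce this error: the AdventureWorks2012 data file cannot be attached to a SQL Server 2008 instance at all (the file format is newer than that engine understands), and a partially downloaded .mdf gives the same header complaint. On a SQL Server 2012 instance, a data-file-only download is usually attached like this (a sketch; the path is the one from the error message):

    -- attach the .mdf without a log file and let SQL Server rebuild the log
    CREATE DATABASE AdventureWorks2012
        ON (FILENAME = 'C:\Program Files\Microsoft SQL Server\AdventureWorks2012_Data.mdf')
        FOR ATTACH_REBUILD_LOG;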








sqlserversamples.codeplex.com

Search This Blog