Tuesday, April 9, 2013

[SQL Server 2008 issues] When to update statistics manually, if at all?


When to update statistics manually, if at all?

Posted: 08 Apr 2013 06:59 AM PDT

So, I'm using my Google-fu to try to find some articles about why we should not be updating statistics and recompiling on a daily basis. It is pretty clear that with auto_update_statistics enabled, there is no reason whatsoever to recompile after the statistics update. However, I cannot find any information about how to tell when we should update the statistics. I would like to believe that we shouldn't have to at all, but I have to convince myself of that as well. Instead of just saying "well, statistics are created and updated automatically, so we don't need to update them daily," I need some references. Google is only showing me how to do it and what it does... not how to determine when I should do it. Any help appreciated.
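For reference, one way to ground that decision in data is to look at how stale each statistics object actually is. A minimal sketch (the rowmodctr join against sys.sysindexes is a common approximation on 2005/2008, not an exact counter):

[code="sql"]
-- List statistics with their last update date and the approximate number
-- of row modifications since then; old date + high counter = candidate.
SELECT OBJECT_NAME(s.object_id)            AS table_name,
       s.name                              AS stats_name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated,
       si.rowmodctr                        AS rows_modified
FROM sys.stats AS s
JOIN sys.sysindexes AS si
  ON si.id = s.object_id
 AND si.indid = s.stats_id
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
ORDER BY last_updated;
[/code]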

Sum 'LastPeriod' short version

Posted: 08 Apr 2013 01:26 PM PDT

Maybe my last post was too long... Does anyone know if the results of a 'LastPeriod' MDX query can be added together and, if so, how? Thanks

Write query for sound-like search

Posted: 08 Apr 2013 06:33 PM PDT

Hello everyone. I am working on a database where I need to make a search based on sound-alike matching. E.g., in my search the values 'Poonam' and 'Punam', or 'Veenita' and 'Vinita', are the same, but 'Ramita' and 'Amita', or 'Amita' and 'Ankita', are different. Can anyone suggest how to write such a query?
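For reference, SQL Server ships phonetic matching built in: SOUNDEX maps similar-sounding strings to the same four-character code, and DIFFERENCE scores two strings from 0 to 4 (4 = best match). A minimal sketch (the table and column names are hypothetical, and SOUNDEX is tuned for English, so test it against your data):

[code="sql"]
SELECT SOUNDEX('Poonam'), SOUNDEX('Punam');  -- both return the same code

SELECT Name
FROM dbo.People                              -- hypothetical table
WHERE DIFFERENCE(Name, 'Veenita') = 4;       -- strongest phonetic match
[/code]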

Add date range by record line

Posted: 08 Apr 2013 12:37 PM PDT

I'm trying to add a date range to this query, where the min date is the last 10 days and the max is TheDate. The min/max are calculated for records that fall within those dates. I tried all kinds of combinations and came up with nothing. Hope someone can solve this puzzle. Thanks!

[code="sql"]
CREATE TABLE #Sample (TheDate datetime, Client varchar(10), Calls int, Answers int, Sales int)

INSERT INTO #Sample
SELECT '2013-04-03', 'A', 50, 20, 60 UNION ALL
SELECT '2013-04-04', 'A', 60, 30, 50 UNION ALL
SELECT '2013-04-05', 'A', 70, 40, 40 UNION ALL
SELECT '2013-04-06', 'A', 80, 50, 30 UNION ALL
SELECT '2013-04-07', 'A', 90, 60, 20 UNION ALL
SELECT '2013-04-08', 'A', 100, 70, 10 UNION ALL
SELECT '2013-04-03', 'B', 150, 120, 160 UNION ALL
SELECT '2013-04-04', 'B', 160, 130, 150 UNION ALL
SELECT '2013-04-05', 'B', 170, 140, 140 UNION ALL
SELECT '2013-04-06', 'B', 180, 150, 130 UNION ALL
SELECT '2013-04-07', 'B', 190, 160, 120 UNION ALL
SELECT '2013-04-08', 'B', 200, 170, 110

SELECT Client, TheDate,
       COUNT(1) as Records, --number of days with records
       MAX(Calls) as MaxCalls, MIN(Calls) as MinCalls,
       MAX(Answers) as MaxAnswers, MIN(Answers) as MinAnswers,
       MAX(Sales) as MaxSales, MIN(Sales) as MinSales
FROM #Sample
WHERE TheDate BETWEEN DATEADD(D, -10, TheDate) AND TheDate
GROUP BY Client, TheDate
ORDER BY Client, TheDate
[/code]

[code="plain"]
Result:
Client TheDate   Records MaxCalls MinCalls MaxAnswers MinAnswers MaxSales MinSales
A      4/3/2013  1       50       50       20         20         60       60
A      4/4/2013  2       60       50       30         20         60       50
A      4/5/2013  3       70       50       40         20         60       40
A      4/6/2013  4       80       50       50         50         60       30
A      4/7/2013  5       90       50       60         60         60       20
A      4/8/2013  6       100      100      70         70         60       10
B      4/3/2013  1       150      150      120        120        160      160
B      4/4/2013  2       160      150      130        120        160      150
B      4/5/2013  3       170      150      140        120        160      140
B      4/6/2013  4       180      150      150        120        160      130
B      4/7/2013  5       190      150      160        120        160      120
B      4/8/2013  6       200      150      170        120        160      110
[/code]
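One way to get a true rolling 10-day window, sketched for illustration: correlate an aggregate per row with CROSS APPLY, instead of putting TheDate on both sides of the BETWEEN (which compares each row only with itself):

[code="sql"]
SELECT s.Client, s.TheDate,
       x.Records, x.MaxCalls, x.MinCalls,
       x.MaxAnswers, x.MinAnswers, x.MaxSales, x.MinSales
FROM #Sample AS s
CROSS APPLY (
    SELECT COUNT(*)     AS Records,
           MAX(Calls)   AS MaxCalls,   MIN(Calls)   AS MinCalls,
           MAX(Answers) AS MaxAnswers, MIN(Answers) AS MinAnswers,
           MAX(Sales)   AS MaxSales,   MIN(Sales)   AS MinSales
    FROM #Sample AS s2
    WHERE s2.Client = s.Client
      AND s2.TheDate >  DATEADD(DAY, -10, s.TheDate)
      AND s2.TheDate <= s.TheDate
) AS x
ORDER BY s.Client, s.TheDate;
[/code]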

Transaction Log Growth

Posted: 08 Apr 2013 07:35 AM PDT

How can I find the transaction/process/job that caused the transaction log to grow? Thanks
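For reference, the default trace (enabled out of the box on 2005+) records every autogrow event, which usually pinpoints the responsible time window and application; correlate that with your job schedule. A sketch, assuming the default trace is still enabled:

[code="sql"]
-- EventClass 93 = Log File Auto Grow, 92 = Data File Auto Grow
DECLARE @path nvarchar(260);
SELECT @path = path FROM sys.traces WHERE is_default = 1;

SELECT t.StartTime, t.DatabaseName, t.FileName,
       t.ApplicationName, t.LoginName
FROM fn_trace_gettable(@path, DEFAULT) AS t
WHERE t.EventClass IN (92, 93)
ORDER BY t.StartTime DESC;
[/code]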

CASE statement

Posted: 08 Apr 2013 12:28 PM PDT

[code="sql"] WHERE SS.[Key]='A' AND SO.[type]='X_P' AND vu.[Status]=1 AND 1= CASE WHEN @CreateDateDatTime > vu.LastRunDate THEN 1 ELSE ---Secondary filter ( L.create_date>=ISNULL(vu.LastRunDate,'01/01/1900') OR L.modify_date>=ISNULL(vu.LastRunDate,'01/01/1900') )[/code]What i am trying to do here is to use the case in the filter, if @CreateDateDatTime>vu.LastRunDate THEN don't apply the secondary filter (select everything that satisfies other filters) or else apply the secondary filter as well. Can't put it together. any help?

Trace flag 8295?

Posted: 08 Apr 2013 06:16 PM PDT

Hi, does anyone know what trace flag 8295 is used for? I found it active on a customer server, SQL Server 2008 R2 SP2 Enterprise Edition, but I can't figure out its purpose. When I search the internet, nobody seems to know this trace flag. Can anyone help me? Regards, Pieter

Parallel Data Warehouse for OLTP

Posted: 08 Apr 2013 05:10 PM PDT

Hi, is the Parallel Data Warehouse feature of SQL Server 2008 R2 targeted only at OLAP/BI implementations? Can we use it for hosting OLTP databases as well? Thanks.

Can I SUM the 'LastPeriod' query results?

Posted: 08 Apr 2013 12:57 PM PDT

Hi, I want to create 'rolling' month-on-month and quarter-on-quarter queries with revenue data. For example, if I want to create a quarterly revenue variance (which I would calculate as the last month's revenue minus the first month's revenue, divided by the last month's revenue), my quarterly query needs to include the 'end month', let's say June 2012, and contain the 'Billed_Amount' field to find the revenue for a given month.

I thought I'd use the 'LastPeriods' function to collect 3 months back from June 2012 and SUM those amounts to use in a variance calculation with the next quarter, and so on. Here's the MDX query that I'm checking in SSMS to ensure it's correct before I add it to the calculations in BIDS:

[code="plain"]
SELECT LastPeriods(3,[MTH_QTR_HLF_YR].[Month].&[201206]) ON 0
FROM [RM RADAR ACCT CUBE]
[/code]

This gives me the months I need and counts the records. OK so far... Then I add the 'Total_Billed' measure to get the revenue figures:

[code="plain"]
SELECT LastPeriods(3,[MTH_QTR_HLF_YR].[Month].&[201206].[Measures].[Total Billed]) ON 0
FROM [RM RADAR ACCT CUBE]
[/code]

I don't get any results for this query, not sure why, and I want to SUM these 3 'Total_Billed' amounts once this query works. So I'm thinking I could then use this to SUM the 'Total_Billed':

[code="plain"]
SELECT SUM(LastPeriods(3,[MTH_QTR_HLF_YR].[Month].&[201206].[Measures].[Total Billed])) ON 0
FROM [RM RADAR ACCT CUBE]
[/code]

Can anyone suggest how I best use 'LastPeriods', or should I use another function altogether? Thanks

Are Queries Actually Using the Index

Posted: 08 Apr 2013 12:19 PM PDT

Hello everyone. I have a SQL database that is the backend to a Java application. There are indexes on the columns in the tables. The Java app does not use any stored procedures; it passes in all the queries from the front-end. Some are simple and some are dynamically built on the front-end. How can I see whether a query that is being passed in is actually using an index or performing a full table scan? It seems that some of the queries are taking a long time to return any data to the front-end, and I just need to verify what is actually happening. Thank you in advance for all your assistance, comments and suggestions. Andrew SQLDBA
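For reference, two common starting points: capture the actual execution plans of the incoming statements (for example with a server-side trace, or via sys.dm_exec_query_stats joined to sys.dm_exec_query_plan), and check cumulative index usage. A sketch of the usage check:

[code="sql"]
-- Seeks/scans/lookups per index since the last service restart; a big
-- table whose indexes show only scans suggests queries aren't seeking.
SELECT OBJECT_NAME(us.object_id) AS table_name,
       i.name                    AS index_name,
       us.user_seeks, us.user_scans, us.user_lookups, us.user_updates
FROM sys.dm_db_index_usage_stats AS us
JOIN sys.indexes AS i
  ON i.object_id = us.object_id AND i.index_id = us.index_id
WHERE us.database_id = DB_ID()
ORDER BY us.user_scans DESC;
[/code]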

SSIS script task

Posted: 08 Apr 2013 06:50 AM PDT

I need to set the unicode property of a connection manager (which is a variable) to true in a Script Task. Can you please help?

Active Directory Connection

Posted: 08 Apr 2013 11:26 AM PDT

At work we have an application that can use 3 Active Directory groups to set up security within the application (Admin, User, Read Only). The application accesses a SQL database (typically 2008 R2), and I am being told that I don't need to have these AD groups set up within the SQL environment (such as a new login) for this authentication to work. However, we have been having problems with this setup for some time now, and the engineers who created it insist it works. We do use an AD settings file to direct it to the AD server and location; however, the specific groups are not part of this file. The group names are stored within the SQL database, so a successful authentication to the database should be needed to obtain this information. Thus, I am told that once everything is set up, I should be able to connect using Windows Authentication, and our software will be able to access several databases within a single instance without any user information being listed in Management Studio. When I have tried to do this with the last few customers I have worked with, unless we set up the AD groups themselves within the SQL environment, we have not been able to get this to work. Any suggestions on what I could be doing wrong? Any assistance on how this could be working, or whether it should be working at all, would be greatly appreciated. Thanks, LD

New Linked Server Uses Local Catalogs - Not Those on the Remote Server

Posted: 08 Apr 2013 09:07 AM PDT

I have tried every way I can think of to define a linked server so that I can query catalogs on a remote production server. Problem is, when I expand the catalogs on the new linked server, it shows the catalogs on the local server where it's defined, and not the catalogs on the remote server! Could it be that the login on the remote server is invalid, so it defaults to the catalogs on the local server? Here's the code:

[code="sql"]
EXEC master.dbo.sp_addlinkedserver
    @server = N'ProdServer', @srvproduct = N'SQL Server'

EXEC master.dbo.sp_addlinkedsrvlogin
    @rmtsrvname = N'ProdServer', @useself = N'False', @locallogin = NULL,
    @rmtuser = N'RPTSUSER', @rmtpassword = '########'
[/code]

This definition seems to work fine (no errors). But when I look at the catalogs, hoping to see those on the remote server, I see the ones on the current server. What am I missing here? Thanks!

WHERE > CASE > IN

Posted: 07 Apr 2013 11:57 PM PDT

Hey, I'm using the MONTH and YEAR functions to match on the current month and current year. If the month is 12 (December), I want to include January of the next year. This is the current code:

[code="sql"]
WHERE MONTH(abr.ABR_Shift_Date) IN (MONTH(GETDATE()), MONTH(DATEADD(m, 1, GETDATE())))
  AND YEAR(abr.ABR_Shift_Date) IN (YEAR(GETDATE()), YEAR(DATEADD(yy, 1, GETDATE())))
[/code]

Can I do anything like this (not working)?

[code="sql"]
AND CASE WHEN YEAR(abr.ABR_Shift_Date) < 12
         THEN YEAR(abr.ABR_Shift_Date) IN (YEAR(GETDATE()), YEAR(DATEADD(yy, 1, GETDATE())))
         ELSE YEAR(abr.ABR_Shift_Date) IN (YEAR(GETDATE()))
    END
[/code]

Thanks
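A different angle, sketched for illustration: compare the column against a computed date range, which avoids both the CASE problem and the non-sargable MONTH()/YEAR() wrappers. The range below runs from the first day of the current month up to (but excluding) the first day of the month after next, so a December run naturally spills into January (the table name dbo.ABR is a guess standing in for the abr alias):

[code="sql"]
DECLARE @from datetime;
SET @from = DATEADD(MONTH, DATEDIFF(MONTH, 0, GETDATE()), 0);  -- 1st of this month

SELECT *
FROM dbo.ABR AS abr                       -- hypothetical table name
WHERE abr.ABR_Shift_Date >= @from
  AND abr.ABR_Shift_Date <  DATEADD(MONTH, 2, @from);
[/code]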

OmniBackup.exe backup fails when run through this setup.

Posted: 22 Mar 2013 03:36 AM PDT

In SQL Server, to take backups of various databases from various servers, we are using the OmniBackup setup to execute SQL database backups, not SQL Agent jobs. This is the script we are using (if there is any error in this script, please let me know):

[code="plain"]
OmniBackup.exe -b D:\GenetecBackups -s (local)\OMI -d VideoSQL -MAX_BACKUPS 12
[/code]

We are getting an error while executing this setup: BACKUP DATABASE failed with an application error. The error number in the log is: Error: 17836, Severity: 20, State: 17. We get the above error when trying to back up the Security SQL databases (all servers). I have read about this and am not sure what size we would make this; any recommendations would be great. Kindly revert with a good solution. Waiting for valuable replies. Thanks in advance, friends.

Publishing and accessing SQL Server 2008 Reports

Posted: 31 Mar 2013 12:20 PM PDT

I'm new to SQL Server databases and I need help. I'm trying to publish SQL reports for users to access from the web. We have a SQL Server 2008 database with data, and I have installed a separate SQL Server 2008 with Reporting Services on my Windows XP machine. I have created a data source from the remote SQL Server 2008 instance installed on a Windows Server, and published the reports. I can access Report Manager through the URL (//localhost:8080/ReportServer) in the browser on my XP machine. However, when I type the same URL on another computer, I get a "page cannot be displayed" message. What is going on? Does the Report Manager URL need to be configured on the SQL Server that has the database instance? Please help. Thanks. To make myself clear, what I'm trying to accomplish is to create and publish reports for users to access from their workstations. Any help will be appreciated.

tempdb

Posted: 08 Apr 2013 06:28 AM PDT

Hi, our monitoring tool has reported blocking on production. I checked the details: one user is running an ad hoc query on a standalone database (not the production database), and one more query is being run by the same user against tempdb:

[code="sql"]
SELECT SCHEMA_NAME(tbl.schema_id) AS [Schema], tbl.name AS [Name]
FROM sys.tables AS tbl
WHERE (tbl.name NOT LIKE '#%')
ORDER BY [Schema] ASC, [Name] ASC
[/code]

Is this the IntelliSense feature (which helps the user write correct SQL queries)? One of the production processes (running against the production DB) is blocked by this user's tempdb query. So my question is: the user is running an ad hoc query against a standalone database, yet a production DB process is blocked. I'm just trying to understand how SQL works here, because nothing in the application client or our server processes accesses this standalone database. Please clarify.

How to NOT match on an exception list

Posted: 08 Apr 2013 05:10 AM PDT

If I have a table of Products that have a ProductName, and I have a table of ProductNames that I don't want the first table matched to, what is the best process to do this? The names will be LIKE each other, but will not match exactly. When I pull data, I only want the ones from dbo.Products that don't look like the products in dbo.Exceptions. In my example, I want to pull back products that don't look like WidgetD or WidgetE. Normally I would write:

[code="sql"]
SELECT ProductDesc FROM dbo.Products
WHERE ProductDesc NOT LIKE '%WidgetD%'
  AND ProductDesc NOT LIKE '%WidgetE%'
[/code]

The query should return WidgetA, WidgetB, and WidgetC (3 records). However, I don't want to jack with the code every time a new exception is added. Is there a better method? (Example code below.)

[code="sql"]
CREATE TABLE [dbo].[Products] ([ProductDesc] [varchar](200) NULL)

INSERT INTO Products(ProductDesc) SELECT 'WidgetA'
INSERT INTO Products(ProductDesc) SELECT 'WidgetB'
INSERT INTO Products(ProductDesc) SELECT 'WidgetC'
INSERT INTO Products(ProductDesc) SELECT 'WidgetD'
INSERT INTO Products(ProductDesc) SELECT 'WidgetE'

CREATE TABLE [dbo].[Exceptions] ([ProductDesc] [varchar](200) NULL)

INSERT INTO Exceptions(ProductDesc) SELECT 'Big WidgetD'
INSERT INTO Exceptions(ProductDesc) SELECT 'Small WidgetE'
[/code]
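One table-driven sketch: let the Exceptions rows supply the patterns, so new exceptions need no code change. The LIKE runs in both directions because, in the sample data, the exception text ('Big WidgetD') contains the product text ('WidgetD') rather than the other way round:

[code="sql"]
SELECT p.ProductDesc
FROM dbo.Products AS p
WHERE NOT EXISTS (
    SELECT 1
    FROM dbo.Exceptions AS e
    WHERE p.ProductDesc LIKE '%' + e.ProductDesc + '%'
       OR e.ProductDesc LIKE '%' + p.ProductDesc + '%'
);
[/code]

Against the sample data this returns WidgetA, WidgetB, and WidgetC.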

User interface

Posted: 07 Apr 2013 11:17 PM PDT

I am new to SQL Server and am currently using Access 2007. I am just wondering whether someone could attach a screenshot of what the user interface looks like for SQL Server 2008?

Tracking a change to the notifications on an Agent job

Posted: 08 Apr 2013 01:34 AM PDT

Hi everyone, someone has changed the notifications, specifically the Page notification, on all Agent jobs on our production server. It's easy enough to change this back to its correct value, but I'm wondering who did it. Do these changes get logged anywhere by default, or do I have to enable logging in some way to catch it in the future (which does me no good on this instance)? Thanks

Joining with large table

Posted: 08 Apr 2013 01:29 AM PDT

Good morning. I have been working on a faster solution for some time, and thought I had it. I have a very large table with all people who work in the state. I need to join that table to a much smaller one of college graduates for outcomes. Since the graduates table will continue to grow, I can't remove anyone from the wage table. So the wage table remains huge, and will only get bigger. Now, joining the tables is not a big deal, but for the fact that people often work in more than one job. So I created a table-valued function:

[code="sql"]
CREATE FUNCTION dbo.GetWages (@SSN char(9))
RETURNS TABLE
AS
RETURN
    SELECT year, quarter, ssn,
           SUM(hours) hours, SUM(wages) wages,
           SUM(wages)/NULLIF(SUM(hours),'0') rate
    FROM dbo.Wage_data
    WHERE ssn IN (@SSN)
    GROUP BY year, quarter, ssn;
GO
[/code]

And it works great:

[code="sql"]
SELECT student_id, LName, rate
FROM dbo.Degrees_Data
CROSS APPLY dbo.GetWages(SSN)
[/code]

However, we also want to know the industry of employment. When there are two jobs, we want the industry of the main job, defined as the one with the most wages. To get this, I have been using a CTE in a view, which is very slow since I cannot write an index to it. So again I wrote a TVF:

[code="sql"]
CREATE FUNCTION dbo.GetWagesAndIndustry (@SSN char(9))
RETURNS TABLE
AS
RETURN
WITH cte AS (
    SELECT year, quarter, ssn, hours, industry,
           ROW_NUMBER() OVER (PARTITION BY year, quarter, ssn ORDER BY wages ASC) rn,
           SUM(wages) OVER (PARTITION BY year, quarter, ssn) SumWage,
           SUM(hours) OVER (PARTITION BY year, quarter, ssn) SumHours
    FROM dbo.Wage_data
    WHERE ssn IN (@SSN)
)
SELECT year, quarter, ssn, industry,
       SumHours hours, SumWage wages,
       SumWage/NULLIF(SumHours, '0') rate
FROM cte
WHERE rn = 1
[/code]

But this is also painfully slow. Eventually we want to be able to create reports on the aggregated data, but who's going to wait 5-10 minutes for a report to generate? Definitely not my boss. There must be a creative way to group the wage data to one record per person, yet have the industry information still available. What am I not seeing? Thank you for any help.
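One direction worth testing, sketched under assumptions: do the whole thing in a single set-based pass restricted to graduate SSNs, rather than invoking a TVF once per row. Note ORDER BY wages DESC, so rn = 1 is the highest-wage job, matching the stated definition of "main job" (the original's ASC would pick the lowest). An index on Wage_data (ssn, year, quarter) INCLUDE (wages, hours, industry) would support it:

[code="sql"]
WITH cte AS (
    SELECT w.year, w.quarter, w.ssn, w.industry,
           ROW_NUMBER() OVER (PARTITION BY w.year, w.quarter, w.ssn
                              ORDER BY w.wages DESC) AS rn,
           SUM(w.wages) OVER (PARTITION BY w.year, w.quarter, w.ssn) AS SumWage,
           SUM(w.hours) OVER (PARTITION BY w.year, w.quarter, w.ssn) AS SumHours
    FROM dbo.Wage_data AS w
    WHERE EXISTS (SELECT 1 FROM dbo.Degrees_Data AS d WHERE d.SSN = w.ssn)
)
SELECT year, quarter, ssn, industry,
       SumHours AS hours, SumWage AS wages,
       SumWage / NULLIF(SumHours, 0) AS rate
FROM cte
WHERE rn = 1;
[/code]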

Prepared vs AdHoc - Plan Cache ObjType

Posted: 08 Apr 2013 03:45 AM PDT

I would think that this would be an easy find with google... However, I have not found anything useful. So, gurus... What is the definition of a Prepared ObjType for a plan and what is the definition of an AdHoc ObjType for a plan; what distinguishes one from another and why is it important to me?

Link Server Connection

Posted: 07 Apr 2013 10:18 PM PDT

Sir, I am working on a medical college campus. We have 5 servers on campus. I have opened port 1433 for SQL Server on both servers involved; Named Pipes and TCP/IP are enabled on both. In Configuration Manager, SQL Browser is running and SQL Server (MSSQLSERVER) is also running. When connecting the linked server, the following error occurred:

[code="plain"]
TITLE: Microsoft SQL Server Management Studio
------------------------------
The linked server has been created but failed a connection test. Do you want to keep the linked server?
------------------------------
ADDITIONAL INFORMATION:
An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
------------------------------
Named Pipes Provider: Could not open a connection to SQL Server [53].
OLE DB provider "SQLNCLI10" for linked server "172.17.15.100" returned message "Login timeout expired".
OLE DB provider "SQLNCLI10" for linked server "172.17.15.100" returned message "A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online.". (Microsoft SQL Server, Error: 53)
[/code]

Linked server

Posted: 08 Apr 2013 01:23 AM PDT

I created a linked server and use it to select data from one server to another. Is there another way, besides a linked server, to do the same job?

Searching stored procedure for table name but ignoring comments

Posted: 27 Nov 2010 08:43 PM PST

Hi All, thanks to others who have posted helpful tips, I have a way of searching through stored procedures to list those that reference a specific table (one way listed below), but ideally I would like to exclude table references that appear in comments or are hashed out and no longer part of the active body of the procedure. Currently I can't see a way to do this - has anyone got an idea on how to approach this, or has anyone actioned this themselves? It may be that this is just not possible when searching through the sys tables, but I thought I would ask. Any feedback/tips much appreciated. Thanks, Ralph

[code="sql"]
SELECT DISTINCT SO.NAME
FROM SYSOBJECTS SO (NOLOCK)
WHERE SO.TYPE = 'P'
  AND SO.NAME LIKE @StringToSearch
ORDER BY SO.Name
[/code]

SQL to transpose data using the Formula from a Different table.

Posted: 07 Apr 2013 08:34 PM PDT

Hi Team, I have a complex requirement.

Source:
[code="plain"]
NAME    PAYMENT_TYPE  PAYMENT
SUDHIR  SAL           30.3
SUDHIR  ADV           10.3
SUDHIR  ALL_1         10
SUDHIR  ALL_2         10
SUDHIR  ALL_3         10
MADHAV  SAL           34
MADHAV  ALL_1         24
MADHAV  BONUS         10
[/code]

Formula:
[code="plain"]
TGT_PAYMENT_TYPE  PAYMENT_FORMULA
SAL               SAL-ADV
COMM              ALL_1+ALL_2-ALL_3
BONUS             SAL * 1.1
[/code]

Target table:
[code="plain"]
NAME    SAL  BONUS  COMM
SUDHIR  20   0      10
MADHAV  34   10     24
[/code]

Please help me code a SQL query / package that will take the formula from the Formula table and apply it to the Source before pushing it to the target. Thanks & Regards, Sudhir Nune.

Error while moving model database

Posted: 07 Apr 2013 11:32 PM PDT

I have moved the model database by using the following method:

1. For each file to be moved, run the following statement:
[code="sql"]
ALTER DATABASE model MODIFY FILE ( NAME = modeldev, FILENAME = 'G:\model\model.mdf' )
ALTER DATABASE model MODIFY FILE ( NAME = modellog, FILENAME = 'G:\model\modellog.ldf' )
[/code]
2. Stop the instance of SQL Server to perform maintenance.
3. Move the file or files to the new location.
4. Restart the instance of SQL Server or the server.

While restarting the instance I am getting the following error: "The request failed or the service did not respond in a timely fashion. Consult the event log or other applicable error logs for details." How do I solve this error?

Corruption - Renaming of system data databases to ldf :(

Posted: 02 Apr 2013 10:34 AM PDT

Hi, this morning I made a proper boo boo while building a new SQL Server 2008 R2 install. After the installation had completed I wanted to move the system databases to a new drive. However, when moving the log files I executed the following T-SQL in error:

[code="sql"]
ALTER DATABASE [master] MODIFY FILE ( NAME = master, FILENAME = 'J:\Log\SystemLog\mastlog.ldf' );
ALTER DATABASE model MODIFY FILE ( NAME = modeldev, FILENAME = 'J:\Log\SystemLog\modellog.ldf' );
[/code]

i.e. I renamed the data files to ldf's... As a result I cannot start the SQL engine, and I get the following error in the ERRORLOG on startup:

[code="plain"]
2013-04-03 10:12:40.74 spid9s Error: 5171, Severity: 16, State: 1.
2013-04-03 10:12:40.74 spid9s J:\Log\SystemLog\modellog.ldf is not a primary database file.
2013-04-03 10:12:40.77 spid9s Error: 945, Severity: 14, State: 2.
2013-04-03 10:12:40.77 spid9s Database 'model' cannot be opened due to inaccessible files or insufficient memory or disk space. See the SQL Server errorlog for details.
2013-04-03 10:12:40.77 spid9s Could not create tempdb. You may not have enough disk space available. Free additional disk space by deleting other files on the tempdb drive and then restart SQL Server. Check for additional errors in the event log that may indicate why the tempdb files could not be initialized.
[/code]

I would rather not have to re-install again. Does anyone have any advice on how to get around this one? Thanks in advance, JK
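One recovery path that is often suggested for this situation, sketched here as an assumption to verify against the documentation: start the instance with trace flag 3608 (sqlservr.exe -c -T3608), which recovers only master and so never tries to open model's mis-pointed files, then repoint model at the real files and restart normally. The .mdf path below is hypothetical; use wherever the physical file actually lives:

[code="sql"]
-- Run while started with -T3608; then remove the flag and restart.
ALTER DATABASE model MODIFY FILE
    (NAME = modeldev, FILENAME = 'C:\SQLData\model.mdf');  -- actual .mdf location
ALTER DATABASE model MODIFY FILE
    (NAME = modellog, FILENAME = 'J:\Log\SystemLog\modellog.ldf');
[/code]

The master entry matters less for startup (master's location comes from the service startup parameters -d and -l), but it should be corrected the same way.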

check condition

Posted: 07 Apr 2013 09:05 PM PDT

Hi all, I have to check a condition: only if I entered correct first and last voucher numbers should it perform some operation, like an insert. I used IN, but that does not solve my purpose: if I enter a correct @FirstVoucherID and an incorrect @lastvoucherID, it will still insert. If I use AND instead of IN, it does not return anything from my table. Example:

[code="sql"]
SELECT * FROM GV_Booklet gv WHERE gv.BookletID = 'B00001' AND gv.BookletID = 'B00003'

Declare @FirstVoucherID = 'B00001', @lastvoucherID = 'B00004'
[/code]

Here is part of my script:

[code="sql"]
IF EXISTS (SELECT * FROM GV_Voucher gv
           JOIN GV_VoucherStatus gvs ON gv.VoucherStatusId = gvs.VoucherStatusId
           WHERE gv.VoucherNo IN (@FirstVoucherID, @lastvoucherID)
             AND gvs.VoucherStatus = 'Dispatched')
BEGIN
    RAISERROR('Voucher No already in use', 16, 1)
END
ELSE
IF EXISTS (SELECT gv.VoucherNo FROM GV_Voucher gv
           JOIN GV_VoucherStatus gvs ON gv.VoucherStatusId = gvs.VoucherStatusId
           WHERE gv.VoucherNo IN (@FirstVoucherID, @lastvoucherID)
             AND gvs.VoucherStatus = 'Active at HO')
BEGIN
    INSERT INTO GV_StoreAllocation (StoreId, STNNo, FirstVoucherNo, LastVoucherNo, Quantity)
    VALUES (@storecode, @STNNo, @FirstVoucherID, @lastvoucherID, @quantity)
END
[/code]
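A sketch of one way to require both vouchers (IN is satisfied by either one alone, and AND on the same column can never match a single row): count the distinct matches and demand exactly two:

[code="sql"]
IF (SELECT COUNT(DISTINCT gv.VoucherNo)
    FROM GV_Voucher gv
    JOIN GV_VoucherStatus gvs ON gv.VoucherStatusId = gvs.VoucherStatusId
    WHERE gv.VoucherNo IN (@FirstVoucherID, @lastvoucherID)
      AND gvs.VoucherStatus = 'Active at HO') = 2
BEGIN
    INSERT INTO GV_StoreAllocation (StoreId, STNNo, FirstVoucherNo, LastVoucherNo, Quantity)
    VALUES (@storecode, @STNNo, @FirstVoucherID, @lastvoucherID, @quantity)
END
ELSE
    RAISERROR('Both voucher numbers must be Active at HO', 16, 1)
[/code]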

Migration detail for SQL SERVER 2000 TO SQL SERVER 2012

Posted: 02 Apr 2013 09:12 PM PDT

Hi, I need help with migrating code from SQL Server 2000 to SQL Server 2012. What steps should be taken, from the initial or basic configuration that needs to be set up before migration, through each step, to the final stage? Please advise or guide me. With regards, sankaripr

*MAY* need to switch DBs from Full Recovery to Simple, due to backup space issues...

Posted: 03 Apr 2013 02:52 AM PDT

First and foremost, I'm working with our backup admin to find a better solution for the problem, one that won't require doing this. Second, adding capacity to the backup system would be nice, but by the time the request made it through the bureaucracy here, we'd be migrating to SQL3010... :hehe:

So the scoop is, the backup system here ran out of room for the SharePoint SQL DBs, and now the backup admin and I are trying to get it working again. She's limited in what she can do by the retention policies; I'm limited by the fact that it's SharePoint, so no going in and dropping rows from big tables... (The SharePoint admin is on vacation, to boot.)

I think what may have set the problem off: the SP admin, after realizing that the previous admin had enabled auditing of EVERYTHING and had not set up the "trim" option to keep only a month or so of entries, tried to delete the old records. With the DBs in full recovery, the T-logs got rather full. Then the backup ran out of space because of this (I think the retention is 30 days, then it goes to tape).

So my worst-case solution to reduce the size of data in the T-logs is to do the following:

1. After-hours, switch all DBs from full recovery to simple, sacrificing point-in-time recovery
2. After SQL has "truncated" the logs, switch back to full recovery
3. Let the early AM full DB backup run, and get back on our bi-hourly T-log backups

Yes, we lose PiT recovery, but right now we have NO recovery since yesterday... Yes, I should have caught this sooner.

So, a question: about how long might it be before SQL auto-truncates the logs? Can I "force" it to happen sooner by issuing a CHECKPOINT on the DBs, and could it take more than one checkpoint? I'm not currently planning to shrink the physical files, so that SQL doesn't need to "grow" the files later.

Thanks, Jason

(As an example, one of the log files is showing 18655.19MB used in the log... I had been monitoring the sizes of the DBs and T-logs for a couple weeks, looking to free up some disk, and this DB averaged 402MB...)
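For reference, in simple recovery it is a checkpoint that truncates the inactive portion of the log, so issuing CHECKPOINT right after the switch generally does it immediately (a long-running open transaction or replication can hold truncation back). A sketch of the per-database worst-case plan, with a hypothetical database name:

[code="sql"]
USE [WSS_Content];   -- hypothetical SharePoint content DB
GO
ALTER DATABASE [WSS_Content] SET RECOVERY SIMPLE;
CHECKPOINT;          -- truncates the inactive log in SIMPLE mode
ALTER DATABASE [WSS_Content] SET RECOVERY FULL;
BACKUP DATABASE [WSS_Content]
    TO DISK = N'X:\Backups\WSS_Content.bak';  -- restarts the log backup chain
GO
[/code]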

How to find a value in all columns of all tables in a DB.

Posted: 08 Jan 2012 04:58 PM PST

Hi all, I have a task to find a 9-digit value in all columns of all tables in a DB (or all DBs) in SQL Server 2005. Does anyone have a T-SQL statement to find it? All help is appreciated.
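A sketch of the usual brute-force approach: build one LIKE probe per character column with dynamic SQL and run them all. It only covers string columns (cast numeric columns if the value may be stored as a number), and it will be slow on a big database:

[code="sql"]
DECLARE @search varchar(50); SET @search = '%123456789%';  -- value to find
DECLARE @sql nvarchar(max);  SET @sql = N'';

SELECT @sql = @sql
    + N'SELECT ''' + s.name + '.' + t.name + '.' + c.name + N''' AS hit '
    + N'WHERE EXISTS (SELECT 1 FROM ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name)
    + N' WHERE ' + QUOTENAME(c.name) + N' LIKE ''' + @search + N''');' + CHAR(10)
FROM sys.tables t
JOIN sys.schemas s ON s.schema_id = t.schema_id
JOIN sys.columns c ON c.object_id = t.object_id
JOIN sys.types  ty ON ty.user_type_id = c.user_type_id
WHERE ty.name IN ('char', 'varchar', 'nchar', 'nvarchar');

EXEC sp_executesql @sql;
[/code]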

Monday, April 8, 2013

[SQL Server] timeline in cross-tab??


timeline in cross-tab??

Posted: 08 Apr 2013 05:33 AM PDT

Hello experts, my problem is as follows. On one hand I have the beginning and end date of a tournament; on the other hand I have the beginning and end dates of the several events within that tournament, and the names of those events.

The end result should be a sort of time schedule disguised as a cross-tab. On the y-axis are the names of the several events; on the x-axis are the tournament dates (e.g. from 2012-07-27 to 2012-08-11). In the middle of the cross-tab we can see the timelines of the several events. In the database we have a table Editions and a table Events (details below). Is this an impossible question, or the contrary?

Reference data:

[code="sql"]
-- first table #Editions
If Object_Id('TempDB..#Editions', 'U') Is Not Null Drop Table #Editions
Create Table #Editions
(Edition_ID Int Primary Key Clustered,
 Edition_Year int,
 Edition_Start datetime,
 Edition_Finish datetime)

Set Dateformat DMY
Insert into #Editions (Edition_ID, Edition_Year, Edition_Start, Edition_Finish)
Select '1', '2012', '27-07-2012', '11-08-2012' UNION ALL
Select '2', '2008', '07-08-2008', '23-08-2008'

-- second table #Events
If Object_Id('TempDB..#Events', 'U') Is Not Null Drop Table #Events
Create Table #Events
(Event_ID Int Primary Key Clustered,
 Event_Name nvarchar(10),
 Event_Year int,
 Event_Start datetime,
 Event_Finish datetime)

Set Dateformat DMY
Insert into #Events (Event_ID, Event_Name, Event_Year, Event_Start, Event_Finish)
Select '1', 'AA', '2012', '04-08-2012', '07-08-2012' UNION ALL
Select '2', 'BB', '2012', '07-08-2012', '10-08-2012' UNION ALL
Select '3', 'CC', '2012', '04-08-2012', '06-08-2012' UNION ALL
Select '4', 'DD', '2012', '31-07-2012', '01-08-2012' UNION ALL
Select '5', 'EE', '2012', '27-07-2012', '03-08-2012' UNION ALL
Select '6', 'AA', '2008', '17-08-2012', '20-08-2012' UNION ALL
Select '7', 'BB', '2008', '12-08-2012', '13-08-2012' UNION ALL
Select '8', 'CC', '2008', '20-08-2012', '20-08-2012'
[/code]

To give you some idea of the intended end result (unfortunately in Excel), the following table for the year 2012:

[code="plain"]
    27-7 28-7 29-7 30-7 31-7 1-8  2-8  3-8  4-8  5-8  6-8  7-8  8-8  9-8  10-8 11-8
AA                                          X    X    X    X
BB                                                         X    X    X    X
CC                                          X    X    X
DD                      X    X
EE  X    X    X    X    X    X    X    X
[/code]

I left the year out of the header, and the 'X' can be anything else, e.g. a black or colourful square. When the above is possible, I want to end up with an in-database solution, e.g. a stored procedure or function; I don't know which form is best. Is there anyone out there who can help me out? Thanks in advance, Robert.
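As a possible starting point, a sketch that produces the cross-tab in "long" form: expand each edition into one row per day using an inline tally, then mark the days each event covers. A dynamic PIVOT (or an SSRS matrix) can then rotate TheDay into actual columns:

[code="sql"]
;WITH Days AS (
    SELECT e.Edition_Year,
           DATEADD(DAY, n.n, e.Edition_Start) AS TheDay
    FROM #Editions e
    JOIN (SELECT TOP (100) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1 AS n
          FROM sys.all_objects) n
      ON n.n <= DATEDIFF(DAY, e.Edition_Start, e.Edition_Finish)
)
SELECT ev.Event_Name, d.TheDay,
       CASE WHEN d.TheDay BETWEEN ev.Event_Start AND ev.Event_Finish
            THEN 'X' ELSE '' END AS Mark
FROM Days d
JOIN #Events ev ON ev.Event_Year = d.Edition_Year
ORDER BY ev.Event_Name, d.TheDay;
[/code]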

Temp Table Column name or number of supplied values does not match table definition

Posted: 07 Apr 2013 08:02 PM PDT

Hi all, I'm starting to play around with PBM and found an example of using DBCC LOGINFO. When I copy the code, I get the error message

[code="sql"]
Msg 213, Level 16, State 7, Line 1
Column name or number of supplied values does not match table definition.
[/code]

meaning I'm using the wrong column types. I found another example of putting DBCC LOGINFO into a temp table, and it had different values for the columns. I tried that and again it failed with the same error. I'm guessing it's because I'm in the UK, so I'm using a different collation to the online examples? Is there a way to find out what the column types are for DBCC results, or is it a case of trying different values until it works?
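For reference, the column list for DBCC LOGINFO differs by SQL Server version rather than by collation, which is the usual cause of this error. A sketch of a definition commonly used for SQL Server 2008/2008 R2 (SQL Server 2012 adds a leading RecoveryUnitId int column; the exact types here are assumptions, so adjust if the insert still fails):

[code="sql"]
CREATE TABLE #LogInfo (
    FileId      int,
    FileSize    bigint,
    StartOffset bigint,
    FSeqNo      int,
    [Status]    int,
    Parity      tinyint,
    CreateLSN   numeric(25, 0)
);
INSERT INTO #LogInfo
EXEC ('DBCC LOGINFO');

SELECT COUNT(*) AS vlf_count FROM #LogInfo;
[/code]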

synonyms for table column

Posted: 07 Apr 2013 08:32 PM PDT

Dear all, we hide table names from users by using synonyms. Is there a way to hide a column name as well? Regards, Krishna1

Two basic MySQL questions

Posted: 07 Apr 2013 08:30 PM PDT

Due to my lack of expertise with MySQL I have two rather basic questions.

I use this query to give me the results I need, but I have to adjust the highlighted figures manually:

[code="sql"]
SELECT computer_ID,
       100 - (nr - 1) * (100/157),
       100 - (nr_rpi - 1) * (100/157),
       ...
FROM xyz
WHERE Version LIKE 6.0
ORDER BY nr
[/code]

The problem is the number 157. Ideally this should read 'COUNT(*)-1' or 'MAX(nr)-1', but due to the aggregate nature this only gives me a single row, not the complete list. What I need to do is change absolute rankings on different variables to a scale of 0 to 100. I have, in this example, rankings from 1 to 158 (1 being top) and I need to adjust them to a scale of 100 (100 being top) to 0. What do I need to change in this query?

I also need to generate the rankings used above, and currently I use manual labor to create these, but obviously that is far from ideal. This is what I use to manually generate these ranking figures.

First empty Benchmark_results. Then get the rank for total time:

[code="sql"]
INSERT INTO Benchmark_results (Rank, RankCS6, nr_rpi, nr_disk, nr_dvd, nr_h264, Ref_ID)
SELECT @rank:=@rank+1 AS Rank,
       @nr:=CASE WHEN Version LIKE 6.0 THEN @nr+1 ELSE @nr END AS RankCS6,
       @nr_rpi:=CASE WHEN Version LIKE 6.0 THEN @nr_rpi+1 ELSE @nr_rpi END AS nr_rpi,
       @nr_disk:=CASE WHEN Version LIKE 6.0 THEN @nr_disk+1 ELSE @nr_disk END AS nr_disk,
       @nr_mpeg:=CASE WHEN Version LIKE 6.0 THEN @nr_mpeg+1 ELSE @nr_mpeg END AS nr_dvd,
       @nr_h264:=CASE WHEN Version LIKE 6.0 THEN @nr_h264+1 ELSE @nr_h264 END AS nr_h264,
       Ref_ID
FROM Personal_data,
     (SELECT @rank:=0, @nr:=0, @nr_rpi:=0, @nr_disk:=0, @nr_mpeg:=0, @nr_h264:=0 AS i) AS i
WHERE 1
ORDER BY total_time, crpi, cgain ASC LIMIT 0, 5000
[/code]

Now update the rank for total time:

[code="sql"]
UPDATE Personal_data, Benchmark_results
SET Personal_data.nr = Benchmark_results.RankCS6
WHERE Benchmark_results.Ref_ID = Personal_data.Ref_ID AND Personal_data.Version LIKE 6.0
[/code]

Now empty Benchmark_results again and repeat for crpi, using the same INSERT but with ORDER BY crpi, total_time, cgain ASC, then update the rank for crpi only:

[code="sql"]
UPDATE Personal_data, Benchmark_results
SET Personal_data.nr_rpi = Benchmark_results.nr_rpi
WHERE Benchmark_results.Ref_ID = Personal_data.Ref_ID AND Personal_data.Version LIKE 6.0
[/code]

Now repeat for Diskio (same INSERT, ORDER BY Diskio, crpi, total_time, cgain ASC) and update the rank for Diskio:

[code="sql"]
UPDATE Personal_data, Benchmark_results
SET Personal_data.nr_disk = Benchmark_results.nr_disk
WHERE Benchmark_results.Ref_ID = Personal_data.Ref_ID AND Personal_data.Version LIKE 6.0
[/code]

Repeat for dvd (same INSERT, ORDER BY dvd, crpi, total_time, cgain ASC) and update the rank for dvd:

[code="sql"]
UPDATE Personal_data, Benchmark_results
SET Personal_data.nr_mpeg = Benchmark_results.nr_dvd
WHERE Benchmark_results.Ref_ID = Personal_data.Ref_ID AND Personal_data.Version LIKE 6.0
[/code]

Repeat for h264 (same INSERT, ORDER BY h264, crpi, total_time, cgain ASC) and update the rank for h264:

[code="sql"]
UPDATE Personal_data, Benchmark_results
SET Personal_data.nr_h264 = Benchmark_results.nr_h264
WHERE Benchmark_results.Ref_ID = Personal_data.Ref_ID AND Personal_data.Version LIKE 6.0
[/code]

Now clean up again:

[code="sql"]
INSERT INTO Benchmark_results (Rank, RankCS6, nr_rpi, nr_disk, nr_dvd, nr_h264, Ref_ID, Total, RPI, Gain)
SELECT @rank:=@rank+1 AS Rank,
       @nr:=CASE WHEN Version LIKE 6.0 THEN @nr+1 ELSE @nr END AS RankCS6,
       @nr_rpi:=CASE WHEN Version LIKE 6.0 THEN @nr_rpi+1 ELSE @nr_rpi END AS nr_rpi,
       @nr_disk:=CASE WHEN Version LIKE 6.0 THEN @nr_disk+1 ELSE @nr_disk END AS nr_disk,
       @nr_mpeg:=CASE WHEN Version LIKE 6.0 THEN @nr_mpeg+1 ELSE @nr_mpeg END AS nr_dvd,
       @nr_h264:=CASE WHEN Version LIKE 6.0 THEN @nr_h264+1 ELSE @nr_h264 END AS nr_h264,
       Ref_ID,
       Diskio+dvd+h264+IFNULL(MPE_On, MPE_Off) AS Total,
       FORMAT((50.893*Diskio+158.333*dvd+77.027*h264+950*IFNULL(MPE_On, MPE_Off))/12138.0, 4) AS RPI,
       FORMAT(MPE_Off/IFNULL(MPE_On, MPE_Off), 3) AS Gain
FROM Personal_data,
     (SELECT @rank:=0, @nr:=0, @nr_rpi:=0, @nr_disk:=0, @nr_mpeg:=0, @nr_h264:=0 AS i) AS i
WHERE 1
ORDER BY total_time, crpi, cgain ASC LIMIT 0, 5000
[/code]

This of course is far from ideal, but it is the only way I figured out that works. Among you geniuses there must be someone who can help me optimize this workflow, and I'm open to all suggestions.
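For the first question, a sketch: derive the divisor once in a derived table instead of hard-coding 157, and cross join it onto every row:

[code="sql"]
SELECT x.computer_ID,
       100 - (x.nr - 1)     * (100 / d.cnt),
       100 - (x.nr_rpi - 1) * (100 / d.cnt)
FROM xyz x
CROSS JOIN (SELECT COUNT(*) - 1 AS cnt
            FROM xyz
            WHERE Version LIKE 6.0) d
WHERE x.Version LIKE 6.0
ORDER BY x.nr;
[/code]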

[how to] Attached image on form with query


Attached image on form with query

Posted: 08 Apr 2013 08:44 PM PDT

I have a table called Products with codProduto, ProductName, Price and Image (in this case an attachment). I created a budget details form, which uses codProduto to enter multiple products into the budget. I based the Budget Details form on a query, consDetOrçamento, because I need a field called Total, which calculates Quant * Preço. I want this query to return the Image (attachment) field, and it does. But when I create a new record and choose a product, I get an error message. I know it's because of the attachment in the query. Question: how do I display each product's image while working this way?

I already tried

    Forms!Budgets!Attachment = DLookup("Image", "Products", "ProductID = " & Me.CodProduto)

but without success.

Multiple principals in a single kerberos keytab file?

Posted: 08 Apr 2013 07:55 PM PDT

I have a situation where I have multiple keytab files (different principal accounts), and my application is going to use these different service principal accounts to connect to one or more Oracle databases (all Kerberos-enabled). Can I maintain only one keytab (merging all into one) in my application environment? If I merge all keytabs into one using ktutil and issue kinit (or okinit) using the keytab and a service principal, the command runs successfully and I can see the cached credentials getting updated. But I am not sure whether the single cache file is actually storing tickets for all the principals; when I issue klist (or oklist), I can only see the last issued service principal's ticket.

Do we ever put more than one principal in a single keytab file and maintain it in an application environment? If not, why is there an option to merge keytab files - only to be used on the KDC, maybe? The reason I want to maintain one keytab is that my applications rely on the Oracle OCI thick driver (sqlnet.ora), and I can't maintain multiple keytab files and multiple sqlnet.ora files, as sqlnet.ora cannot be switched or changed at runtime.

I know I am missing something here; perhaps there is a flaw in my application design in using more than one service account in the first place?

Please give me some directions; I can't find the right forum to get my queries answered. Thanks in advance.

-Srivatsan

Delete entry from related tables

Posted: 08 Apr 2013 06:21 PM PDT

I cannot use the InnoDB engine because of some dependency troubles in LuaSQL and PtokaX.

My current query is something like:

    DELETE e.*, m.*
    FROM `entries` e
    INNER JOIN `magnets` m
            ON m.`eid` = e.`id`
    WHERE e.`id` = %d

where %d is an integer passed by the user. This query works if there is at least one entry in the magnets table related to the entries table.

But it fails when the entry has no dependent row in the magnets table. Can I modify this query to perform the deletion from both tables, independent of child rows in magnets? Or do I need to first check whether a child exists and delete accordingly?
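A sketch of the usual fix: switch the INNER JOIN to a LEFT JOIN, so the parent row qualifies even when no magnet rows exist, while any matching magnet rows are still deleted in the same statement:

    DELETE e.*, m.*
    FROM `entries` e
    LEFT JOIN `magnets` m
           ON m.`eid` = e.`id`
    WHERE e.`id` = %d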

POSTGRES DB Server Blunder

Posted: 08 Apr 2013 08:01 PM PDT

This weekend I had to rebuild my laptop which had Postgres running. For some reason, I forgot to dump the data but I did make a complete copy of the original hard drive.

Is it possible to recover data from the original data files to transfer to another instance of Postgres?

I will very much appreciate your help.

Ray

Restoring a database back to a point in time before full backup was made

Posted: 08 Apr 2013 04:00 PM PDT

A problem with one of our database tables arose at the end of last week, and I wanted to restore that database to a server in order to retrieve the table. We only have the last backup, plus a full set of transaction logs from before the error occurred, up to and after the full backup was taken.

However, when I restore, I can only use the transaction logs to roll forward, not back. Is what I'm trying to do possible?

Sequence of events:

  • Error occurred: Friday April 5 at 12:00
  • Full backup taken: Sunday April 7 at 02:00

Many thanks.
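For reference, rolling forward with STOPAT only works on top of a full (or differential) backup taken before the target point in time; a later full backup cannot be rolled backwards with logs. A sketch, assuming an older full backup exists (names and paths are hypothetical):

    RESTORE DATABASE MyDB FROM DISK = N'X:\MyDB_full_Mar31.bak'
        WITH NORECOVERY, REPLACE;
    RESTORE LOG MyDB FROM DISK = N'X:\MyDB_log_Apr05.trn'
        WITH STOPAT = '2013-04-05T11:55:00', RECOVERY;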

I want to create a mirror in SQL Server 2008, but the database is in the status "restoring ..."

Posted: 08 Apr 2013 07:55 PM PDT

I'm trying to create a mirror in SQL Server 2008, and did the following:

On the principal server

    USE master;
    GO
    ALTER DATABASE Test
    SET RECOVERY FULL;
    GO;

    BACKUP DATABASE Test
    TO DISK = N'D:\Test.Bak' WITH FORMAT
    GO
    RESTORE VERIFYONLY
    FROM DISK = N'D:\Test.Bak'
    GO

    BACKUP LOG Banco
    TO DISK = N'D:\TestLog.Bak' WITH FORMAT
    GO
    RESTORE VERIFYONLY
    FROM DISK = N'D:\TestLog.Bak'
    GO

On the mirror server:

    USE master;
    GO;
    RESTORE DATABASE Teste
    FROM DISK = N'D:\Test.bak'
    WITH NORECOVERY,
    MOVE 'Test_Data' TO N'D:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\Test_Data.mdf',
    MOVE 'Test_Log'  TO N'D:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\Test_Data.ldf'
    GO

    USE master
    GO
    RESTORE LOG Test
    FROM DISK = N'D:\TestLog.bak'
    WITH NORECOVERY
    GO

I cannot run a SELECT, and users cannot connect or change anything; the database stays in the "Restoring..." state. How do I fix this?
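For reference, a mirror database is supposed to stay in the RESTORING state; it only becomes part of a working mirror once the partnership is established over pre-created database mirroring endpoints, and it is never directly readable (except through a database snapshot). A sketch of the pairing step, with hypothetical host names:

    -- On the mirror server first:
    ALTER DATABASE Test SET PARTNER = 'TCP://principal_host:5022';
    -- Then on the principal server:
    ALTER DATABASE Test SET PARTNER = 'TCP://mirror_host:5022';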

Update one table from another table while sorting that table based on one column

Posted: 08 Apr 2013 02:13 PM PDT

This is the problem I'm trying to figure out in MySQL. We have an old table contains some forms submitted by our users. Somehow, the previous decision was each time a user comes to this survey, a new form will be submitted. So in the old table we easily have several rows with the same Firstname, Lastname, but different values in the other columns, and there's a timestamp column Date_Submission as well.

Now we are trying to move everything to a new table, but this time, for each person we only keep one row. And we want to keep some of the the latest old data of that user (like email, phone number, etc)

I could do the following:

    update newtable, oldtable
    set newtable.email = oldtable.email
    where newtable.firstname = oldtable.firstname
      and newtable.lastname = oldtable.lastname;

Apparently this won't give me the "latest" old date for each person.

So I tried this one:

    update newtable, oldtable
    set newtable.email = oldtable.email
    where newtable.firstname = oldtable.firstname
      and newtable.lastname = oldtable.lastname
    order by oldtable.Date_Submission;

But MySQL complains with:

    ERROR 1221 (HY000): Incorrect usage of UPDATE and ORDER BY

So I'm wondering, what's the correct way to achieve this?
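A sketch of one way: pick each person's newest old row in a derived table first, then join that to the new table (derived tables are allowed where ORDER BY in a multi-table UPDATE is not):

    UPDATE newtable n
    JOIN (
        SELECT o.firstname, o.lastname, o.email
        FROM oldtable o
        JOIN (
            SELECT firstname, lastname, MAX(Date_Submission) AS last_sub
            FROM oldtable
            GROUP BY firstname, lastname
        ) last ON last.firstname = o.firstname
              AND last.lastname  = o.lastname
              AND last.last_sub  = o.Date_Submission
    ) latest ON latest.firstname = n.firstname
            AND latest.lastname  = n.lastname
    SET n.email = latest.email;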

How can I alter the fill factor of an index outside of an ALTER INDEX command?

Posted: 08 Apr 2013 02:49 PM PDT

I am using a COTS package to perform index rebuilds across my SQL Server portfolio. The package unfortunately doesn't have an option to set index fill factors globally for an individual instance or DB (I can set them when scheduling rebuilds on an index-by-index basis, but with 1,600+ indexes, I'd like to set them in advance in bulk). Outside of an ALTER INDEX command, is there a way I can redefine existing index fill factors?
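For reference, a fill factor only takes effect when an index is (re)built, so "setting them in advance" amounts to stamping each index once via a rebuild; later rebuilds that don't specify a fill factor reuse the persisted value. A sketch that generates the statements in bulk for review (the 90 is an arbitrary example value):

    SELECT 'ALTER INDEX ' + QUOTENAME(i.name) + ' ON '
         + QUOTENAME(s.name) + '.' + QUOTENAME(t.name)
         + ' REBUILD WITH (FILLFACTOR = 90);'
    FROM sys.indexes i
    JOIN sys.tables  t ON t.object_id = i.object_id
    JOIN sys.schemas s ON s.schema_id = t.schema_id
    WHERE i.index_id > 0          -- skip heaps
      AND i.fill_factor <> 90;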

increasing mysql table open cache?

Posted: 08 Apr 2013 01:16 PM PDT

I often read that it is best to increase this variable slowly. Can someone explain why? My status indicates that I should increase it... What is best practice / "slowly"?

Thanks!

Execution of a java program by a trigger

Posted: 08 Apr 2013 01:18 PM PDT

Can an update trigger execute a java program?

I have a table T1 with a column named Flag. Whenever Flag changes, I want to run a trigger that results in an execution of a java program.
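For reference, stock MySQL triggers cannot launch external programs (that requires a UDF such as the unofficial sys_exec). A common workaround, sketched with hypothetical names, is to have the trigger queue the change and let the Java program poll the queue table:

    CREATE TABLE t1_flag_queue (
        id         INT AUTO_INCREMENT PRIMARY KEY,
        t1_id      INT NOT NULL,
        changed_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    );

    DELIMITER //
    CREATE TRIGGER trg_t1_flag AFTER UPDATE ON T1
    FOR EACH ROW
    BEGIN
        IF NEW.Flag <> OLD.Flag THEN
            INSERT INTO t1_flag_queue (t1_id) VALUES (NEW.id);  -- assumes T1 has an id column
        END IF;
    END//
    DELIMITER ;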

If MySQL isn't caching, then what is?

Posted: 08 Apr 2013 12:49 PM PDT

I ran a few commands to see whether MySQL is caching queries/result sets. It looks like this is not the case.

    mysql> show variables like 'query_cache_size';
    +------------------+-------+
    | Variable_name    | Value |
    +------------------+-------+
    | query_cache_size | 0     |
    +------------------+-------+
    1 row in set (0.01 sec)

    mysql> show status like 'Qcache%';
    +-------------------------+-------+
    | Variable_name           | Value |
    +-------------------------+-------+
    | Qcache_free_blocks      | 0     |
    | Qcache_free_memory      | 0     |
    | Qcache_hits             | 0     |
    | Qcache_inserts          | 0     |
    | Qcache_lowmem_prunes    | 0     |
    | Qcache_not_cached       | 0     |
    | Qcache_queries_in_cache | 0     |
    | Qcache_total_blocks     | 0     |
    +-------------------------+-------+
    8 rows in set (0.00 sec)

At the same time, I can run the same query at one given time, and it will take 42 seconds, and at another given time, and it will take 1.10 seconds.

If the MySQL server is not doing the caching, what is? Is it the mysql client? Is it my operating system (OS X)? Is it some service on the server that's not MySQL? Are all of these possibilities, or only one?

Should I disable triggers, timestamps in a slave in mysql?

Posted: 08 Apr 2013 08:41 PM PDT

We're using MySQL. We have a master that eventually will have 2 slaves. There are triggers in the db that execute when data changes in certain tables. I am wondering whether to disable the triggers in the slaves. It seems to me that I should. It also seems like things like timestamps should be disabled in the slaves because otherwise the data will be different on the slaves than the masters.

I'm not a DBA, I'm a developer. We don't have a DBA at my company so the ops admin and I are figuring it out as we go along. We already have the master and one slave set up and replicating and we are getting duplicate entry error on the slave.

We were thinking of not halting replication for duplicate replication errors, as in this post: http://www.ducea.com/2008/02/13/mysql-skip-duplicate-replication-errors/. We don't know whether this is a good idea. I do feel that this masks the problem, but neither of us knows how to fix the problem.

If we should disable the triggers and/or set the slave not to insert timestamps, how do we go about doing that? Or does establishing a db as a slave automagically do these things anyway?

MySQL PDO Cannot assign requested address

Posted: 08 Apr 2013 12:40 PM PDT

Can someone help me with this error?

    [08-Apr-2013 17:44:08 Europe/Berlin] PHP Warning:  PDO::__construct(): [2002]
        Cannot assign requested address (trying to connect via tcp://****:3306) in /var/www/***
    [08-Apr-2013 17:44:08 Europe/Berlin] PHP Fatal error:  Uncaught exception 'PDOException'
        with message 'SQLSTATE[HY000] [2002] Cannot assign requested address' in /var/www/***

I have a server with a lot of connections per second; out of about 100 connections, a single one gets this error.

I've tried this recommendation from Stack Overflow; however, it does not solve my problem.

Odd syntax error when creating view with SCHEMABINDING [closed]

Posted: 08 Apr 2013 08:52 AM PDT

I've been working with SQL Views for a while now, and I recently read that something that I want to do requires that one of my views be created with schemabinding. That sounds fine, but I get a syntax error when I try to follow along with an example:

    CREATE VIEW dbo.MyView WTIH SCHEMABINDING AS
    SELECT t1.a, t2.b
        FROM dbo.Table1 t1
        INNER JOIN dbo.Table2 t2 ON t1.x = t2.x;

I get the following error in SQL Server Management Studio 2012:

    Msg 102, Level 15, State 1, Procedure DHRA_Appointment, Line 2
    Incorrect syntax near 'WTIH'.

This looks like some simple error in my SQL. What am I doing wrong?

SQL Server BULK INSERT

Posted: 08 Apr 2013 02:27 PM PDT

I have a file saved using BCP, and I want to know the specification of the table that it needs to be materialized into on the database. Is that possible?

Thanks

Incremental backup in postgresql

Posted: 08 Apr 2013 04:59 PM PDT

I want to migrate a production database from one Linux server to another. I am not concerned about upgrading my database version; I just want to move the database from one server to the other with minimum downtime.

The version of my database is 8.3. It is 160 GB in size.

It takes 12 hours to take the dump and 8 hours to restore it.

That makes the downtime 20 hours, which I cannot afford. I can afford a downtime of 4 hours at most.

So how can I refresh, into the database on the other server, the delta changes that happen on the production database during those 20 hours?

Please help.

Delete the N oldest entries grouped my multiple columns

Posted: 08 Apr 2013 12:29 PM PDT

I'd like to delete the oldest entries from my database, but at least N words per char should remain in each category.

Structure of TABLE words

id | word | char | category  

For example I have the following entries:

id | word | char | category
 1   abc     a       1
 2   abc     a       1
 3   abc     a       1
 4   bcs     b       1
 5   bcs     b       1
 6   bcs     b       1
 7   csd     c       2
 8   csd     c       2
 9   asd     a       2
10   asc     a       2

For N=2 the following entries should be removed, as their ids are the lowest:

1     abc     a       1
4     bcs     b       1

Can somebody help? I tried various things (GROUP BY, HAVING, INNER and OUTER JOINs) but could not get it to work.
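A sketch for MySQL (which lacks window functions in 5.x): delete any row that has at least N newer rows (higher id) in the same (char, category) group; the derived table sidesteps error 1093 about reusing the target table in a subquery:

    -- N = 2: keep the two newest rows per (char, category)
    DELETE w
    FROM words w
    JOIN (
        SELECT w1.id
        FROM words w1
        JOIN words w2
          ON  w2.`char`   = w1.`char`
          AND w2.category = w1.category
          AND w2.id       > w1.id
        GROUP BY w1.id
        HAVING COUNT(*) >= 2
    ) old ON old.id = w.id;

Against the sample data this removes exactly ids 1 and 4.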

select count(*) in mysql 5.5 innodb-- rewrite advice?

Posted: 08 Apr 2013 09:43 AM PDT

I need advice on how to rewrite a SELECT COUNT(*) query for InnoDB tables on MySQL 5.5; in the new environment it's very slow...

    select count(*)
    from mails3
    join questions using (question_id)
    where mails3.from_user_id = '86696'
      and mails3.is_community_star = 1
      and mails3.rating = 3
      and questions.is_rated_community = 1;

the query execution plan looks simple enough but very slow

    *************************** 1. row ***************************
               id: 1
      select_type: SIMPLE
            table: mails3
             type: ref
    possible_keys: question_id_index,emails_doublestars,favorite_emails,user_responses,owner_responses,sentmail,watchlist,read_responses,rating_date,read_and_unrated
              key: emails_doublestars
          key_len: 8
              ref: const,const,const
             rows: 2784
            Extra: Using where
    *************************** 2. row ***************************
               id: 1
      select_type: SIMPLE
            table: questions
             type: eq_ref
    possible_keys: PRIMARY,is_rated_community_senddate_idx
              key: PRIMARY
          key_len: 5
              ref: answerology.mails3.QUESTION_ID
             rows: 1
            Extra: Using where
    2 rows in set (0.00 sec)
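One thing worth testing, sketched as an assumption: a composite index that covers the entire mails3 predicate plus the join column, so the ref lookup narrows to exactly the qualifying rows (verify the plan change with EXPLAIN):

    ALTER TABLE mails3
      ADD INDEX ix_user_star_rating_q
          (from_user_id, is_community_star, rating, question_id);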

Trying to create a Data Model

Posted: 08 Apr 2013 10:47 AM PDT

It may sound stupid but it's my first experience ever with databases.

It's a simple database for a car accessories store. I was hoping that making a sale of a certain item would affect the quantity of that item; I'm not sure whether this is possible with this design. Also, as shown in the image, an item could be for one car or more, so I thought it should be many-to-many.

Diagram

Get Hierarchial Data in Self-Referencing Table (Parents , Childs) [closed]

Posted: 08 Apr 2013 01:47 PM PDT

In the following Fiddle,

sqlfiddle

I will pass an sno, and I need all the parents of that sno in one table, and all the children of that sno in another table.

Please refer to this question. Since both are related, they use the same table.

I need to display top 10 levels in a table.
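A sketch with recursive CTEs (T-SQL; the table name and the columns sno and parent_sno are assumptions based on the question): the anchor row is the passed sno, walking up gives ancestors, and flipping the join direction gives descendants:

    DECLARE @sno int; SET @sno = 1;   -- hypothetical input
    WITH parents AS (
        SELECT t.sno, t.parent_sno, 0 AS lvl
        FROM dbo.MyTable t WHERE t.sno = @sno
        UNION ALL
        SELECT t.sno, t.parent_sno, p.lvl + 1
        FROM dbo.MyTable t
        JOIN parents p ON t.sno = p.parent_sno   -- walk upwards
    )
    SELECT * FROM parents
    WHERE lvl BETWEEN 1 AND 10;       -- top 10 levels

    -- For children, change the recursive join to: ON t.parent_sno = p.sno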

Adding slave to existing master-master config in mysql throws Foreign key constraint failed error

Posted: 08 Apr 2013 09:08 AM PDT

We have two mysql servers running in master - master configuration.

Now we have to add a slave to the existing configuration.

But upon adding the third DB server and starting the slave on it, it throws a "Cannot add or update a child row: a foreign key constraint fails" error.

We have tried taking a mysqldump from the master with --master-data and restored it. After that we started the slave on the slave DB. But even then, it says the same error. We have also tried XtraBackup. But that also throws the same error. Are we missing something?

EDIT 1

mysqldump command: mysqldump --master-data -uroot -p dbname > dbname.sql

We have also tried it this way: percona xtrabackup

In both cases, upon getting the foreign key error, we tried to dump and restore individual tables referenced by the foreign keys manually from the master to the slave. Upon doing this, the replication starts and seems to work normally with 0 seconds behind master for a few minutes, after which another foreign key error shows up, stopping the replication.

EDIT 2

auto_increment_increment is set to 2 in both masters. auto_increment_offset is set to 1 in Master 1 and 2 in Master 2. We have mostly innoDB tables and some MyISAM tables.

MySQL version 5.5.30.

EDIT 3

There is a follow up question.

Transform XPath map into XML document using relational data

Posted: 08 Apr 2013 09:57 AM PDT

Background

Most modern databases have XML functions that can be used to extract data in an XML format. I want to avoid the task of manually calling XML functions to extract the data.

This problem involves devising a generic solution to create XML documents based on mapping database tables (and JOIN conditions) to XPath expressions.

Problem

An XPath map codifies associations between an XML document and relational data as follows:

    root               > people              -- "root" is a special keyword
    person             > person              -- > is a node to table relation
    person.first_name -> name/first          -- -> is a node/attribute to column relation
    person.last_name  -> name/last
    person.age        -> [@age]
    account.person_id => person.person_id    -- => is a join
    account            > person/account
    account.number    -> [@id]

Where a PERSON table might resemble:

    person_id | first_name | last_name | age
          123 | Peter      | Parker    | 18
          456 | James      | Jameson   | 42

Where an ACCOUNT table might resemble:

    account_id | person_id | number
             1 |       123 | 123456789

Calling a function using the XPath map would produce the following XML document:

    <people>
      <person age="18">
        <name>
          <first>Peter</first>
          <last>Parker</last>
        </name>
        <account id="123456789" />
      </person>
      <person age="42">
        <name>
          <first>James</first>
          <last>Jameson</last>
        </name>
      </person>
    </people>

In this case, James Jameson does not have an account and so the corresponding XML element (account) is not included in the final document.

This is a difficult problem and a complete solution is not necessary. A solution that handles 80% of simple tables mapped to simple XML elements and attributes would suffice.

Question

What technologies, or open source implementations, already perform such a task?

Barring that, what algorithm would return an XML document based on such a generic XPath map? The algorithm must transform the structure defined by the XPath map into an XML document with the content from the relations defined in the XPath map.
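For what it's worth, the target document in the example can be produced directly in T-SQL with nested FOR XML PATH queries; a generic engine for the 80% case could generate queries of this shape from the XPath map. The missing account element falls out naturally, because the subquery returns NULL when no row joins:

    SELECT p.age        AS [@age],
           p.first_name AS [name/first],
           p.last_name  AS [name/last],
           (SELECT a.number AS [@id]
            FROM account AS a
            WHERE a.person_id = p.person_id
            FOR XML PATH('account'), TYPE)
    FROM person AS p
    FOR XML PATH('person'), ROOT('people');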

Related Links

Links that are somewhat related to this idea.

Articles and White Papers

Articles:

Commercial Software

Similar solutions:

Concatenation Physical Operation: Does it guarantee order of execution?

Posted: 08 Apr 2013 09:55 AM PDT

In standard SQL, the result of a union all is not guaranteed to be in any order. So, something like:

select 'A' as c union all select 'B'  

Could return two rows in any order (although, in practice on any database I know of, 'A' will come before 'B').

In SQL Server, this turns into an execution plan using a "concatenation" physical operation.

I could easily imagine that the concatenation operation would scan its inputs, returning whatever input has records available. However, I found the following statement on the web (here):

The Query Processor will execute this plan in the order that the operators appear in the plan, the first is the top one and the last is the end one.

Question: Is this true in practice? Is this guaranteed to be true?

I haven't found any reference in Microsoft documentation that the inputs are scanned in order, from the first to the last. On the other hand, whenever I try running it, the results suggest that the inputs are, indeed, processed in order.

Is there a way to have the engine process more than one input at a time? My tests (using much more complicated expressions than constants) are on a parallel-enabled 8-core machine, and most queries do take advantage of the parallelism.
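
One hedged way to experiment is to compare the actual plans of a serial and a parallel run; OPTION (MAXDOP 1) is a documented query hint, while t1, t2, and expensive_expr below are placeholders for the real tables and expressions:

[code="sql"]-- Force a serial plan and compare the Concatenation operator's behavior
-- against the default (possibly parallel) plan.
SELECT expensive_expr FROM t1
UNION ALL
SELECT expensive_expr FROM t2
OPTION (MAXDOP 1);[/code]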

Search every column in every table in Sybase Database

Posted: 08 Apr 2013 08:01 PM PDT

I've been tasked with creating an application that pulls data from our Micros Point of Sale system. The POS uses a Sybase database running on one of our servers, and the database schema is very convoluted. I've been able to figure out most of the schema and gain access to the data I need; however, there are a few things still left to find. I know what they are called in the actual POS, but I can't seem to find them anywhere in the database (although I haven't actually looked through all 200+ tables).

I'm wondering if there's any way to run a query to search for a specific string in all the columns in all the tables. BTW, I'm using the "Interactive SQL" application that comes with the Sybase Database Client software to connect to the database.
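
A hedged sketch of the usual catalog-driven approach: generate one search query per character column, then execute the generated statements. This assumes Sybase ASE catalogs (sysobjects/syscolumns/systypes); if the POS ships SQL Anywhere instead, the catalog views (e.g. SYSTABLE/SYSCOLUMN) differ:

[code="sql"]-- Generates a SELECT per char/varchar column of every user table;
-- replace search_string, then run the generated statements.
SELECT 'SELECT ''' + o.name + '.' + c.name + ''' AS hit, COUNT(*) AS cnt '
     + 'FROM ' + o.name
     + ' WHERE ' + c.name + ' LIKE ''%search_string%'''
FROM sysobjects o
JOIN syscolumns c ON c.id = o.id
JOIN systypes   t ON t.usertype = c.usertype
WHERE o.type = 'U'
  AND t.name IN ('char', 'varchar')[/code]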

Column precision exceeded: ORA-01438 is so uninformative

Posted: 08 Apr 2013 07:01 PM PDT

I am not sure if this is a question for the StackExchange DB Admin group; please let me know if it's not.

Let's suppose you have a Java batched prepared statement writing data to the database. If one of the columns has a value larger than its specified precision, Oracle throws an ORA-01438 exception. Is there any way to find out which column it is?
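
A hedged workaround, since the error message doesn't name the column: list the NUMBER columns of the target table with their declared precision and compare them against the values in the failing batch (MY_TABLE is a placeholder):

[code="sql"]-- Shows each NUMBER column's precision/scale so oversized values can be
-- matched to a candidate column by hand.
SELECT column_name, data_precision, data_scale
FROM   user_tab_columns
WHERE  table_name = 'MY_TABLE'
  AND  data_type  = 'NUMBER'
ORDER  BY column_id;[/code]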

Applying user-defined fields to arbitrary entities in a SQL Server database

Posted: 08 Apr 2013 09:01 PM PDT

Currently we have an old (rather crude) system that has user defined fields, which are mapped against rows in arbitrary tables. This was an after-the-fact modification based on a customer request, and it wasn't really designed to scale well. Our system has around 60 different types of entities, which makes things even more complicated. Essentially the implementation looks like this:

USER_DEFINED_FIELDS:

[code="plain"]UDF_ID         int
ENTITY_TYPE    int
UDF_NAME       nvarchar(64)
UDF_DATA_TYPE  int
UDF_FORMAT     nvarchar(16)
UDF_MASK       nvarchar(24)
UDF_FLAGS      int[/code]

UDF_VALUES_NUMBER:

[code="plain"]UDF_ID         int
ENTITY_ID      int
VALUE          int
MODIFIED       datetime[/code]

UDF_VALUES_TEXT:

[code="plain"]UDF_ID         int
ENTITY_ID      int
VALUE          nvarchar(255)
MODIFIED       datetime[/code]

etc...

This gets nice and fun when we generate our own ways to index compound primary keys, but that's another DailyWTF-worthy story.

Obviously this is pretty hideous, and leads to some spectacularly horrific queries being generated, but it's worked alright for now because we limit each entity to a maximum of 5 user defined fields. As a quick disclaimer, I wasn't with the company when this design decision was made! ;)

Anyway, we're about to start a shiny new project and will inevitably need a better way of doing this, with no restrictions on the number of UDFs we can apply to entities, increased performance, and less horror in the generated query department. If we could run a NoSQL solution like Mongo or Redis I'd be happy and wouldn't need to bother you all, but sadly that's not an option. Instead, I need a way to do this from within SQL Server 2008 R2.

So far, I've come up with the following options:

  • Individual UDF table for each entity type, with identical structures.
    • Benefits: Queries are faster and simpler, solves the compound index problem, can search by UDF.
    • Downsides: Table versioning is more cumbersome, lots of extra tables, difficult to index across similar entity types (e.g. purchase orders and sales orders)
  • Binary blob UDF column in every entity's table.
    • Benefits: Really fast: no fetch overhead, no extra JOINs.
    • Downsides: Can't search by UDF, extra proprietary code required.
  • XML column in every entity's table.
    • Benefits: Fast like binary blobs, seems to be indexable.
    • Downsides: No idea how they work - looks complicated!

So, do any of these ideas have merit? Is there a better alternative?
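
For what it's worth, a hedged sketch of the third option on SQL Server 2008 R2; the table, column, and UDF names below are placeholders, but XML columns, primary XML indexes, and the value() method are standard features:

[code="sql"]-- An XML column holds arbitrary UDFs per row; a primary XML index
-- (which requires a clustered primary key on the table) makes them
-- searchable without extra JOINs.
ALTER TABLE dbo.SalesOrder ADD Udf xml NULL;
GO
CREATE PRIMARY XML INDEX IX_SalesOrder_Udf ON dbo.SalesOrder (Udf);
GO
-- Search by a UDF stored as an attribute:
SELECT SalesOrderId
FROM   dbo.SalesOrder
WHERE  Udf.value('(/udf/@approver)[1]', 'nvarchar(64)') = N'Smith';[/code]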

How to install SQL Server 2008 R2 Profiler

Posted: 08 Apr 2013 02:31 PM PDT

SQL Server Management Studio doesn't have it on the Tools menu in my case, although some blogs and forums report that it should be there. Apparently you either have it or you don't, and whether you got it seems to be the luck of the draw. I am an MSDN Subscriber.

Is it possible to install the Profiler using some well-defined steps? A "download X, press menu item Y, and click button Z" sort of prescription?

[Edit: additional info] Upgrading from SQL Server 2005 Enterprise Edition to SQL Server 2008 R2 Enterprise Edition didn't help (previously, of the 2008 R2 editions, I had only Express Edition). I am trying the Setup program's various options but am unable to upgrade from Basic Tools (originally installed with 2008 R2 Express Edition) to Complete Tools, which I hope would contain the Profiler. The Complete Tools option can't be checked in any variation of the Setup options that I try.

Simplified/automated datafile free disk space reclaim

Posted: 08 Apr 2013 09:10 AM PDT

On Oracle version 11g:

After Googling, I can't find a simple way to reclaim free space after deleting a table.

I've found a lot of explanations of how the datafile becomes fragmented, along with the big stack of tedious queries you have to run to move the "empty space" to the end of the datafile (table by table... even when you have 200 tables!?).

Then you have to reduce the datafile size by "guessing" how much you can reduce it, or you must know exactly what your "block size" is... And finally, you should not forget to "rebuild the indexes".

See, for example: http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:54178027703899

and http://www.oracle-base.com/articles/misc/ReclaimingUnusedSpace.php

Is there a simple PL/SQL procedure which, given a tablespace name or datafile name, would do that job? Or any similar Oracle tool?
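
For comparison, a hedged sketch of the manual steps those articles describe, which any automated procedure would essentially have to generate (the segment and file names are placeholders):

[code="sql"]-- Shrink each segment in place (requires an ASSM tablespace), then try
-- to shrink the datafile itself.
ALTER TABLE my_table ENABLE ROW MOVEMENT;    -- prerequisite for SHRINK
ALTER TABLE my_table SHRINK SPACE CASCADE;   -- also shrinks dependent indexes
ALTER DATABASE DATAFILE '/u01/oradata/users01.dbf' RESIZE 500M;[/code]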

How do I create an index to speed up an aggregate LIKE query on an expression in postgres 8.4?

Posted: 08 Apr 2013 11:47 AM PDT

I may be asking the wrong question in the title. Here are the facts:

My customer service folk have been complaining about slow response times when doing customer lookups on the administration interface of our Django-based site.

We're using Postgres 8.4.6. I started logging slow queries, and discovered this culprit:

[code="sql"]SELECT COUNT(*) FROM "auth_user" WHERE UPPER("auth_user"."email"::text) LIKE UPPER(E'%deyk%')[/code]

This query is taking upwards of 32 seconds to run. Here's the query plan provided by EXPLAIN:

[code="plain"]QUERY PLAN
Aggregate  (cost=205171.71..205171.72 rows=1 width=0)
  ->  Seq Scan on auth_user  (cost=0.00..205166.46 rows=2096 width=0)
        Filter: (upper((email)::text) ~~ '%DEYK%'::text)[/code]

Because this is a query generated by the Django ORM from a Django QuerySet generated by the Django Admin application, I don't have any control over the query itself. An index seems like the logical solution. I tried creating an index to speed this up, but it hasn't made a difference:

[code="sql"]CREATE INDEX auth_user_email_upper ON auth_user USING btree (upper(email::text))[/code]

What am I doing wrong? How can I speed up this query?
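
One hedged observation: a btree index on upper(email) cannot serve a leading-wildcard LIKE ('%deyk%'), which is why the index above is ignored. The pg_trgm contrib module (whose GIN support arrived in 8.4) indexes substrings, although the planner only uses trigram indexes for LIKE itself from PostgreSQL 9.1 onward; on 8.4 it helps via the trigram similarity operator:

[code="sql"]-- After installing the pg_trgm contrib module:
CREATE INDEX auth_user_email_trgm
    ON auth_user USING gin (upper(email) gin_trgm_ops);

-- Usable on 8.4 through trigram similarity (%), not plain LIKE:
SELECT COUNT(*) FROM auth_user WHERE upper(email) % 'DEYK';[/code]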
