Wednesday, April 24, 2013

[SQL Server 2008 issues] Function To Add [n] Days to A Date and Adjust to the Nearest Business Date

Function To Add [n] Days to A Date and Adjust to the Nearest Business Date

Posted: 21 Apr 2011 12:49 PM PDT

Anyone have a ready-made function that will add [n] number of days to a date and then adjust it to the nearest business day (Mon-Fri)? The code below is a VB.NET version of what I'm looking for. It allows [n] days to be added (or subtracted, if negative) to a date, with the result optionally adjusted if it falls on a Saturday or Sunday. It's late, I'm tired and yes, I'll probably play around myself. But I thought I'd ask anyway.

[code]
Public Shared Function getCalculatedDate(ByVal CalculationBaseDate As Date, ByVal Interval As Integer, Optional ByVal AdjustToNearestBusinessDate As Boolean = False) As Date
    Dim NewDate As Date
    NewDate = DateAdd(DateInterval.Day, Interval, CalculationBaseDate)
    If AdjustToNearestBusinessDate Then
        If Interval < 0 Then
            ' Sunday (1) -> back two days to Friday; Saturday (7) -> back one day to Friday
            If DatePart(DateInterval.Weekday, NewDate) = 1 Then
                NewDate = DateAdd(DateInterval.Day, -2, NewDate)
            End If
            If DatePart(DateInterval.Weekday, NewDate) = 7 Then
                NewDate = DateAdd(DateInterval.Day, -1, NewDate)
            End If
        End If
        If Interval > 0 Then
            ' Sunday (1) -> forward one day to Monday; Saturday (7) -> forward two days to Monday
            If DatePart(DateInterval.Weekday, NewDate) = 1 Then
                NewDate = DateAdd(DateInterval.Day, 1, NewDate)
            End If
            If DatePart(DateInterval.Weekday, NewDate) = 7 Then
                NewDate = DateAdd(DateInterval.Day, 2, NewDate)
            End If
        End If
    End If
    getCalculatedDate = NewDate
End Function
[/code]
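For anyone wanting this server-side, here is a minimal T-SQL sketch of the same logic. It is my own translation, not from the post; the weekend test uses a @@DATEFIRST-independent modulo where Saturday maps to 0 and Sunday to 1.

[code="sql"]
CREATE FUNCTION dbo.GetCalculatedDate
(
    @BaseDate datetime,
    @Interval int,
    @AdjustToNearestBusinessDate bit
)
RETURNS datetime
AS
BEGIN
    DECLARE @NewDate datetime = DATEADD(DAY, @Interval, @BaseDate);

    IF @AdjustToNearestBusinessDate = 1
    BEGIN
        -- Saturday -> 0, Sunday -> 1, Monday -> 2 ... regardless of DATEFIRST
        DECLARE @dw int = (DATEPART(WEEKDAY, @NewDate) + @@DATEFIRST) % 7;

        IF @Interval < 0   -- move back to Friday
            SET @NewDate = DATEADD(DAY, CASE @dw WHEN 0 THEN -1 WHEN 1 THEN -2 ELSE 0 END, @NewDate);
        ELSE               -- move forward to Monday
            SET @NewDate = DATEADD(DAY, CASE @dw WHEN 0 THEN 2 WHEN 1 THEN 1 ELSE 0 END, @NewDate);
    END

    RETURN @NewDate;
END
[/code]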

Returning datasets in groups of 3 in SQL

Posted: 23 Apr 2013 06:47 PM PDT

I have a Reporting Services report that needs to generate multiple forms for the same report. Each report generated needs to always have 3 rows in it, regardless of the number of rows returned by the underlying query. That is, if my query returns 7 rows, I need to generate 3 reports, with the first 2 having 3 rows each and the third having only 1 row. I have tried using NTILE with the parameter 3, but this only splits my set into 3 groups, with the first having 3 rows and the remaining 2 groups having 2 rows each when my dataset has 7 rows. Is there any other way to split the records without using NTILE? Please help.
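One alternative, sketched under the assumption that the rows have some deterministic ordering column (OrderCol below is a placeholder): NTILE fixes the number of groups, but integer-dividing a ROW_NUMBER by 3 fixes the group size instead, so every group except possibly the last has exactly 3 rows.

[code="sql"]
SELECT *,
       (ROW_NUMBER() OVER (ORDER BY OrderCol) - 1) / 3 AS GroupNo  -- 7 rows -> 0,0,0,1,1,1,2
FROM   dbo.MyQueryResult;
[/code]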

Building and shipping applications with SQL Server2008 R2 Express

Posted: 23 Apr 2013 05:23 PM PDT

If I am including SQL Server 2008 R2 Express in my application, then I have to sign up for free distribution rights according to the download page of SQL Server 2008 R2 Express (http://www.microsoft.com/en-in/download/details.aspx?id=23650). I have downloaded the licence terms from [url=http://www.microsoft.com/en-us/download/details.aspx?id=29693]here[/url]. But do I need to inform Microsoft, or do anything else, to include SQL Server Express in my application?

Help with "There is insufficient system memory in resource pool 'internal' to run this query"

Posted: 23 Apr 2013 11:01 AM PDT

Hi Experts, I am getting this error when running an SSIS job. Please help.

We are trying to run an SSIS job by pointing to "APPLICATION". It is failing because of insufficient system memory (please see below for additional details). This appears to be a problem on DATABASE.

Time: Tue, Apr 23, 2013 01:44:35 PM
Job: jbSpecCapMMCalcEarnings
Step: FACETS - Generate MM
Status: failed
Server: CDDEVDBV001
Message: Executed as user: Microsoft (R) SQL Server Execute Package Utility Version 10.50.1600.1 for 64-bit Copyright (C) Microsoft Corporation 2010. All rights reserved. Started: 1:44:35 PM Error: 2013-04-23 13:45:43.38 Code: 0xC0202009 Source: Load ods_PCPNeighborhood FACETS_SUPP_vwPRK_PROVIDER_NEIGHBORHOOD_MAPPING [5886] Description: SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80040E14. An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80040E14 Description: "There is insufficient system memory in resource pool 'internal' to run this query.". End Error Error: 2013-04-23 13:45:43.39 Code: 0xC004701A Source: Load ods_PCPNeighborhood SSIS.Pipeline Description: component "FACETS_SUPP_vwPRK_PROVIDER_NEIGHBORHOOD_MAPPING" (5886) failed the pre-execute phase and returned error code 0xC0202009. End Error DTExec: The package execution returned DTSER_FAILURE (1). Started: 1:44:35 PM Finished: 1:45:43 PM Elapsed: 68.344 seconds. The package execution failed. The step failed.

Time: Tue, Apr 23, 2013 01:44:34 PM
Job: jbSpecCapMMCalcEarnings
Step: (Job outcome)
Status: failed
Server: CDDEVDBV001
Message: The job failed. The Job was invoked by User TIDALUSER. The last step to run was step 1 (FACETS - Generate MM).
Completed at 04/23/2013 01:45 PM
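The 'internal' resource pool error usually points at overall server memory pressure rather than a single query. One place to start (a diagnostic sketch, not a diagnosis) is the memory-clerk DMV, using the SQL Server 2008 R2 column names:

[code="sql"]
-- Top memory consumers by clerk type
SELECT TOP (10)
       type,
       SUM(single_pages_kb + multi_pages_kb) AS total_kb
FROM   sys.dm_os_memory_clerks
GROUP  BY type
ORDER  BY total_kb DESC;
[/code]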

Best Solution?

Posted: 23 Apr 2013 05:05 AM PDT

I have the following issue:
1. I work for a company that has boutique stores in various locations around the world. We also sell our products to other retailers, our trading partners. In some cases, we have small kiosks within our trading partners' stores.
2. I have built a data warehouse in SQL Server 2008 R2 for the sales of our products, both wholesale and retail. Actually, version 1 of the data warehouse was already underway when I was hired.
3. In the staging database of our system, we have a table that contains information about both our boutiques and our kiosks inside our trading partners' stores. This table ("Call Cycle") contains the store's status (open/closed), trading partner's name and ID if applicable, store name within the retail chain, store opening date, store closing date, etc.
4. The Call Cycle table also includes our sales organization's hierarchy that covers the particular store (VP, regional manager, etc.).
5. Currently, a business user creates an Excel spreadsheet (.xlsx) that contains the information described above. Then an SSIS package loads the .xlsx into the table in our staging database. The spreadsheet is updated intermittently, so the SSIS package is launched manually. Once the .xlsx file is loaded, the business user who modified the file can't look at the table himself to see what was loaded.
Needless to say, this process is inefficient. The question is: what is a good way to modify or replace this process? I was considering Master Data Services since we are using SQL Server 2008 R2. But is MDS overkill for one table? Is there another way that requires less overhead and can be up and running quickly? I am the only developer working on this and I have project manager duties along with my SQL Server developer duties (I think we are woefully understaffed). May I please have some advice on this?

Installing a named Instance of SQL Server 2012, on a server where it already had SQL Server 2008 R2 Instance.

Posted: 23 Apr 2013 07:56 AM PDT

Hi Experts, I got a requirement to install a new SQL Server 2012 instance on a server that already has SQL Server 2008 R2 installed. My question is: what care do I have to take before and after installing the new instance? The first instance is installed on C:. Can I install the second instance on the same drive, leaving the default location as it is?

Database Data File Physical Fragmentation From Small Auto-Growth Setting

Posted: 13 Jan 2013 04:42 AM PST

To All, I need some advice. I've just started a new job, and one of the first problems I've encountered is that a 365 GB database has auto-grown 1 MB at a time since the day it was created.
Three years ago, the database was on direct attached storage. Then it was moved to a Dell Compellent SAN, a system for which I am a certified administrator. The database migration involved a database backup and a restore to the SAN, which eliminated the physical fragmentation. But after that, it kept auto-growing in 1 MB increments. So it probably has at least 100,000-200,000+ 1 MB file fragments scattered all over multiple disk drives. I am going to review the performance implications of this condition with Dell Compellent.
In the meantime, I'd like to hear from the community on this forum:
1. If you've had this situation, how did you respond to it?
2. Is the potential for server performance degradation reduced if the database data file is on a SAN?
3. If I must take action to rectify this condition, what are my best options?
On the last question, I'm considering 3 options:
1. A full backup and restoration of the database:
1a. Backup the database.
1b. Take the database out of production.
1c. Delete the database.
1d. Recreate the database with appropriate initial and auto-growth settings.
1e. Restore the database.
1f. Reconnect the login and user SIDs.
1g. Resume production.
2. Take a SAN-based snapshot of the database and use it to replace the existing database. (I don't think this will produce the desired results, and I'm going to review it with Dell Compellent.)
3. File defragmenting software. There is a file defragmenting program available from Microsoft that is specifically designed to turn physically fragmented files into contiguous files: http://technet.microsoft.com/en-us/sysinternals/bb897428.aspx (Contig.exe version 1.7). It sounds promising.
I would be very interested in your responses, especially if you've actually had to solve a similar problem.
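Whichever defragmentation route is taken, the 1 MB auto-growth setting itself can be corrected immediately so the problem stops compounding. A sketch (logical file name and growth increment are placeholders):

[code="sql"]
-- Stop the 1 MB growth first, then deal with the existing fragments
ALTER DATABASE MyBigDB
MODIFY FILE (NAME = MyBigDB_Data, FILEGROWTH = 1024MB);
[/code]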

concatenate

Posted: 23 Apr 2013 01:51 AM PDT

Cannot figure out what I am doing wrong. In table one I have FirstName and PhoneNumber, and in the second table I have Carrier with @txt.***.com etc. So when I write a proc to send a text, I want to be able to concatenate PhoneNumber and @txt.***.com together. Example:

SELECT FirstName, PhoneNumber + SMS
FROM Names AS N
INNER JOIN Carrier AS C ON N.CarrierID = C.CarrierID
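If PhoneNumber is stored as a numeric type, + becomes arithmetic addition instead of string concatenation. A hedged sketch (column names as in the post; the CAST is the assumption to verify against the actual schema):

[code="sql"]
SELECT N.FirstName,
       CAST(N.PhoneNumber AS varchar(20)) + C.SMS AS TextAddress
FROM   Names AS N
INNER JOIN Carrier AS C ON N.CarrierID = C.CarrierID;
[/code]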

Need Computed Column / Trigger / Function to insert data from another table

Posted: 23 Apr 2013 06:10 AM PDT

Hi, I've got the following problem, which I'd like to solve in SQL Server 2008:
I've got a table 'Products'; each product has a range of up to 3 different areas, which contain information about the total amount of items for this specific product. So the first product consists of a total of 10 different parts, whereas the 2nd product consists of 5, and so on ...

PROD_ID : ACA : CW : DIVE : BR_ID
1 : 5 : 1 : 4 : 10
2 : 0 : 0 : 5 : 11
3 : 1 : 0 : 2 : 12
4 : 2 : 1 : 2 : 10

I've got a second table named 'Breakdown' which defines the commissions paid for each different product part ...

BR_ID : ACA : CW : DIVE
10 : 85 : 425 : 213
11 : 0 : 0 : 290
12 : 850 : 0 : 325

This table is fixed and used to calculate the comms for each individual - it's NOT editable by the user.
My third table 'Orders' contains all the orders placed by the customers ...

ORD_ID : PROD_ID : Date
100 : 1 : 01-Apr
101 : 1 : 01-Apr
102 : 2 : 02-Apr
103 : 3 : 02-Apr
104 : 1 : 03-Apr

In each of the 3 tables the first column is IDENTITY and as such definitely not entered twice.
Here is my problem: in my fourth table 'Comms' I need to get all the commissions correctly entered, and it's done by the receivers themselves AFTER the product has been sold and PAID. Obviously it's unlikely that comms are not entered at all, seeing that people want to collect those - but it's more likely that people enter them multiple times, or multiple people enter the comms twice ...
What I try to achieve is a calculated field to validate the entries in that table on the SAVE action, with multiple criteria. I want to only allow users to enter the maximum allowed amount of product parts from the first table (ACA/CW/DIVE) for a single ORD_ID from table 3. So, for example, if ORD_ID 100 refers to PROD_ID 1 (which contains 5 parts ACA), then the total entries of ACA could not exceed 5 across ALL rows entered for this ORD_ID, or an error message is thrown ...
My table 4 'Comms' looks like this so far:

CREATE TABLE [dbo].[Comms](
    [Comm_ID] [int] IDENTITY(65000,1) NOT NULL,
    [Cert_Date] [datetime] NOT NULL,
    [Ins_ID] [int] NOT NULL,
    [ORD_ID] [int] NOT NULL,
    [Amount_Aca] [int] NOT NULL,
    [Amount_Confined] [int] NOT NULL,
    [Amount_Dives] [int] NOT NULL,

What I would like now is a computed column of the style "Compare the records for this ORD_ID, find the product and breakdown values for it from tables 1 & 2, and then search for existing records with that ORD_ID in the 'Comms' table itself. If no entry has been made, return the maximum parts as defined in the 'Products' table. If someone has already entered, for example, 2 ACA for PROD_ID 1 under ORD_ID 101, then return the remaining value as 5 (MAX ITEMS ACA) - 2 = 3" ...
A simple view can do this, but unfortunately I couldn't find a way to create it with computed columns (as they can't refer to other tables) or functions/triggers, and it has to be done more or less on record creation.
The ID fields on all tables are set IDENTITY, and FKs are set in between them.
On a positive side: this application is not for millions of transactions at the same time, so it's a chance of about one million to one that two users will enter their comms at the same time, so double entries/engine performance shouldn't be a problem ...
Any help with this would be really appreciated. I'm new to this stuff, so I'd love to provide any extra information that could help. Thanks a lot already! :)
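A computed column can't reference other tables, but a trigger (or a UDF wrapped in a CHECK constraint) can. A sketch of the trigger route, assuming the table and column names from the post; it covers the ACA column only, and the same pattern would repeat for CW and DIVE:

[code="sql"]
CREATE TRIGGER trg_Comms_ValidateParts ON dbo.Comms
AFTER INSERT, UPDATE
AS
BEGIN
    -- Reject the change if any affected order's total entered ACA exceeds the product limit
    IF EXISTS (
        SELECT o.ORD_ID
        FROM   inserted i
        JOIN   dbo.Orders   o ON o.ORD_ID  = i.ORD_ID
        JOIN   dbo.Products p ON p.PROD_ID = o.PROD_ID
        JOIN   dbo.Comms    c ON c.ORD_ID  = o.ORD_ID
        GROUP  BY o.ORD_ID, p.ACA
        HAVING SUM(c.Amount_Aca) > p.ACA)
    BEGIN
        RAISERROR('ACA parts exceed the maximum allowed for this order.', 16, 1);
        ROLLBACK TRANSACTION;
    END
END
[/code]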

Script

Posted: 22 Apr 2013 09:20 PM PDT

Script to find all the DB roles of all the users in all the databases in SQL 2005/2008.
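A sketch of one common approach, using the undocumented sp_MSforeachdb procedure to run a role-membership query per database:

[code="sql"]
EXEC sp_MSforeachdb '
USE [?];
SELECT DB_NAME() AS database_name,
       u.name    AS user_name,
       r.name    AS role_name
FROM   sys.database_role_members rm
JOIN   sys.database_principals r ON r.principal_id = rm.role_principal_id
JOIN   sys.database_principals u ON u.principal_id = rm.member_principal_id;';
[/code]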

Backup taking too long after migration to SQL 2008 R2

Posted: 23 Apr 2013 06:40 AM PDT

Hello, we recently migrated databases from SQL 2005 to SQL 2008 R2. It's nice shiny hardware, much better than the old server. It used to take 40 minutes to back up a 550 GB database on the old crap hardware with SQL 2005, and now it takes 7+ hours. I also tried with Red Gate SQL Backup Pro, and it also takes 7 or more hours for the same backup. The compatibility level has been changed to 100 as well. Thoughts? Thanks for your help in advance.
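One thing worth ruling out while chasing the hardware angle: backup tuning knobs that the new build may not be using. A sketch (path and buffer values are placeholders to test, not recommendations):

[code="sql"]
BACKUP DATABASE MyDB
TO DISK = 'X:\Backups\MyDB.bak'
WITH COMPRESSION,                -- native compression, available in 2008 R2
     BUFFERCOUNT = 64,
     MAXTRANSFERSIZE = 4194304;  -- 4 MB transfer size
[/code]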

Inserting records with a TIMESTAMP field for Archiving Table Data

Posted: 23 Apr 2013 06:14 AM PDT

Because I am paranoid (well, I call it 'realistic'), I like to back up my data before I delete anything, and so I have quite a few examples of:

[code="sql"]
INSERT INTO _Archive_Table
SELECT * FROM Data_Table WITH (NOLOCK)
WHERE Last_Modified_Date < @Archive_Date
[/code]

Then, when I do the actual delete from Data_Table, I only delete the records that have a unique identifier already in the _Archive_Table. However, since a recent upgrade, this does not work on some tables because there is a field with the TIMESTAMP data type. I like using the asterisk, especially when there are 50 fields, so specifying the fields is not acceptable. I have found the following solution:
1. Get Query Analyser to generate the SQL script to create the original table (Data_Table in my example above) and then edit the script as follows:
a) Use _Archive_Table as the table name (use _Archive_Table_New as the name if an old _Archive_Table already exists)
b) Replace the TIMESTAMP data type with BINARY(8)
c) Add a field to the end called Archive_Date with the type DATETIME
2. Run the CREATE TABLE _Archive_Data SQL script.
3. If you have an existing archive table that you want to keep the data from, run the following SQL scripts in order:

[code="sql"]
INSERT INTO _Archive_Table_New
SELECT *, GETDATE()
FROM _Archive_Table WITH (NOLOCK)

TRUNCATE TABLE _Archive_Table
DROP TABLE _Archive_Table

SELECT *
INTO _Archive_Table
FROM _Archive_Table_New WITH (NOLOCK)

TRUNCATE TABLE _Archive_Table_New
DROP TABLE _Archive_Table_New
[/code]

4. Edit the archiving script to include a GETDATE() field after the asterisk:

[code="sql"]
INSERT INTO _Archive_Table
SELECT *, GETDATE() FROM Data_Table WITH (NOLOCK)
WHERE Last_Modified_Date < @Archive_Date
[/code]

The TIMESTAMP field will implicitly convert to BINARY(8). The Archive_Date field is a useful way to find when a record was deleted, and potentially allows you to delete the data from the _Archive_Table a year or two later once you are sure that no one is going to want to recover it. I tend to use an initial underscore for backup tables in order to easily find them. If the backup table is a complete copy of an existing table, I usually just add the underscore as the prefix and the current date _YYYYMMDD as the suffix to the original table name. This makes it easy to know how old the backup table is for later pruning. A similar approach will work if the original Data_Table structure changes and you want to keep the old _Archive_Table active.

Subquery

Posted: 23 Apr 2013 05:10 AM PDT

I have gathered employee data:

SELECT EmpID, LName, FName, StationID
FROM emp.employees

This provides me with my employee list of 2475 rows. It works fine until I do:

SELECT a.EmpID, a.LName, a.FName, a.StationID, b.Region
FROM emp.employees a
JOIN emp.division b ON a.StationID = b.StationID

I am getting multiple records back for the same person. I even tried a left join. Thoughts on how to make sure I can add the Region and only get the 2475 records back?
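Row multiplication on a join almost always means the join key is not unique on the other side. A quick sketch to confirm whether StationID repeats in emp.division:

[code="sql"]
-- Stations that appear more than once in division cause the duplicate rows
SELECT StationID, COUNT(*) AS cnt
FROM   emp.division
GROUP  BY StationID
HAVING COUNT(*) > 1;
[/code]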

Is there a way to find who added data and log file to database

Posted: 23 Apr 2013 05:33 AM PDT

Hi SQL Gurus, is there any way to find out who has added a new data or log file to the database, without enabling DDL triggers? Regards, Md. S. Hassan
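If the default trace is still enabled (it is out of the box), recent schema changes are captured as Object:Altered events. A sketch of reading it; whether an ALTER DATABASE ... ADD FILE surfaces under this EventClass is the assumption to verify:

[code="sql"]
DECLARE @path nvarchar(260);
SELECT @path = path FROM sys.traces WHERE is_default = 1;

SELECT t.StartTime, t.LoginName, t.DatabaseName, t.ObjectName
FROM   fn_trace_gettable(@path, DEFAULT) AS t
WHERE  t.EventClass = 164   -- Object:Altered
ORDER  BY t.StartTime DESC;
[/code]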

Is this data lost?

Posted: 23 Apr 2013 01:14 AM PDT

Hi, I have a database that is set to recovery model = full. Some data was inserted and then immediately deleted. Is it possible to get the inserted data back? A full backup has been done since this happened, so I'm assuming that the tail of the transaction log has now been "flushed". It was a SQL 2005 Express installation.
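For what it's worth, a full backup does not truncate the log in the full recovery model, so the log chain is likely intact. If log backups exist (or a tail-log backup can still be taken), a point-in-time restore to just after the insert is possible. A sketch with placeholder names and time:

[code="sql"]
RESTORE DATABASE MyDB_Copy FROM DISK = 'X:\MyDB_full.bak'
    WITH MOVE 'MyDB'     TO 'X:\MyDB_Copy.mdf',
         MOVE 'MyDB_log' TO 'X:\MyDB_Copy.ldf',
         NORECOVERY;
RESTORE LOG MyDB_Copy FROM DISK = 'X:\MyDB_log.trn'
    WITH STOPAT = '2013-04-23 09:15:00', RECOVERY;  -- just after the insert
[/code]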

Aggressive Indexes

Posted: 23 Apr 2013 05:54 AM PDT

Hi everyone. I used Brent Ozar's excellent sp_BlitzIndex script to discover the issue below. There is a link in the script to explain the output, but I'm still a bit confused. Is it telling me that the index had to wait 11,480 times to get a page lock, totaling 86 minutes, since the last time SQL Server restarted? It has been almost a month since the restart, so does that mean that 86 minutes of processing time have been lost? With an avg duration of 0, should I be concerned? Thanks for reading. Howard

Aggressive Indexes: Total lock wait time > 5 minutes (row + page)
dbo.OAD.JJ_032913_DTA1 (12): Page lock waits: 11,480; total duration: 86 minutes; avg duration: 0 seconds; Lock escalation attempts: 10,601; Actual Escalations: 7.

Copy db from server a to server b nightly

Posted: 23 Apr 2013 02:50 AM PDT

Hello - I have a DB on one server, and I'd like to copy it over to a different server every night to use as a backup/reporting instance. These are both SQL Server 2008 R2. (Not sure what else is relevant information.) Anyway, if there is something that details the steps to do this, I would be happy to go there and read it - does anyone have any recommendations? I did search the forums, but what I saw wasn't SS2008, so... I thought I'd better ask. Thank you in advance for any guidance/advice/knowledge-sharing -- Donna
P.S. You can take it as given that my "DBA skills" are somewhat short of novice. :( I get the security stuff (somewhat), but the manipulation/creation, etc. ... pretty much zip. Hence the cluelessness.
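The simplest pattern is usually a nightly backup job on server A plus a scripted restore job on server B (log shipping is the built-in, more automated version of the same idea). A minimal sketch of the restore side, with placeholder names and paths:

[code="sql"]
RESTORE DATABASE ReportingDB
FROM DISK = '\\ServerA\Backups\MyDB.bak'
WITH MOVE 'MyDB'     TO 'D:\Data\ReportingDB.mdf',
     MOVE 'MyDB_log' TO 'L:\Logs\ReportingDB.ldf',
     REPLACE;   -- overwrite last night's copy
[/code]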

Trouble grabbing values for multiple IDs

Posted: 23 Apr 2013 04:28 AM PDT

Trying to get a total of instances of events and assigning it an alias: [b](SELECT COUNT(id) FROM Finish f1 WHERE PerfDate = '2013-04-18' AND NoEvent = 'yes') AS numbercancelled[/b] ... but if there is more than one ID that had NoEvent = 'yes', I don't want the grand total of both IDs, but individual totals. GROUP BY here doesn't work because I just need one value for numbercancelled. I'm wondering if I need a SQL function to return a value for each ID, where the result would be something like:

ID numbercancelled
100 3
115 7
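That per-ID shape is exactly what GROUP BY produces when it runs as its own query (or as a derived table joined back to the main one). A sketch using the names from the post:

[code="sql"]
SELECT ID, COUNT(*) AS numbercancelled
FROM   Finish
WHERE  PerfDate = '2013-04-18'
  AND  NoEvent  = 'yes'
GROUP  BY ID;
[/code]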

Optimizer and stale statistics

Posted: 23 Apr 2013 02:53 AM PDT

The question is: what happens when statistics on tables with NORECOMPUTE set go stale? I.e., they pass rowmodctr > (unfiltered rows * 0.2) + 500 (assuming unfiltered rows > 500), i.e. go past the point where they would be automatically recomputed if NORECOMPUTE wasn't set. Normally, automatic recomputation of the stats would force a recompile. Does the optimizer even check the staleness of the stats if NORECOMPUTE is set? If it does and notices they are out of date, but it can't recompute them, does it recompile the plan anyway? When, for any reason, a plan is recompiled and the optimizer finds the stats it had previously used are out of date, does it use them anyway? Or does it ignore them and act as if no stats existed or could be automatically created? And yes, before anyone says: I'm well aware that NORECOMPUTE is not advised. That's not the question.

Urgent - Reporting services migration - Subscription jobs did not get created

Posted: 23 Apr 2013 04:15 AM PDT

We are moving to a new data center. We have RS installed on an app/web server that points to a different SQL Server for ReportServer and ReportServerTempDB. So, we created a new virtual app/web server and a new physical SQL Server 2008 R2 database server at the new DC. I did a fresh install of Reporting Services 2008 R2 on the app/web server. I backed up and restored the ReportServer and ReportServerTempDB databases from the old 2005 SQL Server to the new 2008 R2 database server.
During the new RS configuration, I pointed the databases to the newly restored databases and restored the encryption keys. When launching the Reporting Services URL, it works fine. I stopped and restarted Reporting Services after configuration.
However, I am not seeing a single job created on the new database server. The old server had 20 jobs. How do I get these subscriptions back into the new install? I thought these should be created by the migration and that I would not have to do anything.
One odd thing I had to do during config: in the scale-out deployment tab in RS Configuration, it was showing both the new and old app/web server names with status 'joined'. I was unable to remove the old server name by clicking 'remove server' there. So, I deleted the old server name from the Keys table in ReportServer, as suggested by some forums, to have a clean installation. Does it have anything to do with the jobs not being created?
What do I need to do now to get the subscription jobs created? There are only 20 jobs here, but another RS server has over 800 subscriptions that I cannot mess up. So, I need to find the issue and correct it before I can move on to the other RS instance. We are under a severe time crunch. I appreciate your help. Thanks,

Need an update statement

Posted: 22 Apr 2013 10:36 PM PDT

[code="sql"]/****** Object: Table [dbo].[PhoneTable] Script Date: 4/23/2013 4:55:46 PM ******/SET ANSI_NULLS ONGOSET QUOTED_IDENTIFIER ONGOSET ANSI_PADDING ONGOCREATE TABLE [dbo].[PhoneTable]( [PhoneNumber] [varchar](10) NULL, [PhoneType] [nvarchar](4) NULL, [NameID] [int] NOT NULL, [AssocID] [int] NOT NULL, [SortOrder] [int] NULL) ON [PRIMARY]GOSET ANSI_PADDING OFFGOINSERT [dbo].[PhoneTable] ([PhoneNumber], [PhoneType], [NameID], [AssocID], [SortOrder]) VALUES (N'2626770315', N'HOME', 1000, 20, 0)GOINSERT [dbo].[PhoneTable] ([PhoneNumber], [PhoneType], [NameID], [AssocID], [SortOrder]) VALUES (N'2626270104', N'CELL', 1000, 20, 1)GOINSERT [dbo].[PhoneTable] ([PhoneNumber], [PhoneType], [NameID], [AssocID], [SortOrder]) VALUES (N'0003343553', N'HOME', 1000, 10, 0)GOINSERT [dbo].[PhoneTable] ([PhoneNumber], [PhoneType], [NameID], [AssocID], [SortOrder]) VALUES (N'0003354427', N'WORK', 1000, 10, 2)GOINSERT [dbo].[PhoneTable] ([PhoneNumber], [PhoneType], [NameID], [AssocID], [SortOrder]) VALUES (N'2628479870', N'CELL', 1000, 30, 1)GOINSERT [dbo].[PhoneTable] ([PhoneNumber], [PhoneType], [NameID], [AssocID], [SortOrder]) VALUES (N'3458790', N'WORK', 1000, 30, 2)GO/****** Object: Table [dbo].[NamePhone] Script Date: 4/23/2013 4:56:19 PM ******/SET ANSI_NULLS ONGOSET QUOTED_IDENTIFIER ONGOSET ANSI_PADDING ONGOCREATE TABLE [dbo].[NamePhone]( [PhoneNbr1] [varchar](20) NULL, [PhoneType1] [char](15) NULL, [PhoneNbr2] [varchar](20) NULL, [PhoneType2] [char](1) NULL, [NameID] [int] NOT NULL, [AssocID] [int] NOT NULL) ON [PRIMARY]GOSET ANSI_PADDING OFFGOINSERT [dbo].[NamePhone] ([PhoneNbr1], [PhoneType1], [PhoneNbr2], [PhoneType2], [NameID], [AssocID]) VALUES (NULL, NULL, NULL, NULL, 1000, 20)GOINSERT [dbo].[NamePhone] ([PhoneNbr1], [PhoneType1], [PhoneNbr2], [PhoneType2], [NameID], [AssocID]) VALUES (NULL, NULL, NULL, NULL, 1000, 10)GOINSERT [dbo].[NamePhone] ([PhoneNbr1], [PhoneType1], [PhoneNbr2], [PhoneType2], [NameID], [AssocID]) VALUES (NULL, NULL, NULL, NULL, 1000, 30)GO[/code]Hi Friends, I want to update Phone number 1 and phone number 2 in NamePhone table from Phone Table....Requirements: 1. If a AssocID has a Home number , it has to update to phone number1 else cell number else work number.......as you can see the sortorder field in PhoneTable.... 2. If Phone number 1 field is having home number then update phone number 2 field with cell number else work number....if phone number 1 has cell number then update phone number 2 with work number....I tried to do this with case statement, but some how I failed....So i need your suggesstions friends.....

Need to change NDF initial size to reclaim disk space

Posted: 23 Apr 2013 02:16 AM PDT

I've inherited a server with several databases, one of which has 7 NDF files. The initial size for most of the NDF files is 500 MB, with one of them having an initial size of 104 GB! I cannot shrink the database or file to recover the space that this NDF is using, and it's up to 55 GB. So my question is this: can I change the INITIAL SIZE in SSMS for that NDF data file without taking the DB down and losing data? How can I get that NDF to move data out into the other NDF files? Perhaps I need to increase all the NDF files' initial/growth sizes to something more manageable?
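If the goal is to redistribute data out of one NDF into the other files of the same filegroup, DBCC SHRINKFILE with EMPTYFILE does exactly that (online, though I/O-heavy). A sketch with a placeholder logical file name:

[code="sql"]
-- Moves all data from this file into the other files in its filegroup
DBCC SHRINKFILE (N'MyDB_Data7', EMPTYFILE);
-- Afterwards the file can be removed or resized, e.g.:
-- ALTER DATABASE MyDB REMOVE FILE MyDB_Data7;
[/code]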

snapshot agent failing

Posted: 23 Apr 2013 02:12 AM PDT

We are currently having a problem with snapshot replication, which we use to load data into our reporting servers. Would the fact that the main data file of the database we are snapshotting is 98.93% full cause the snapshot agent to fail? We are having issues with setting up the replication. Any advice would be helpful. Thanks :-)

Language British

Posted: 22 Apr 2013 10:36 PM PDT

I have one of a dozen users who is getting the date in US format, this despite running a script to change her language. Checking Security > Logins, she is showing the language as British. She is running Windows 7, and if she uses another machine the date shows (and edits) correctly.
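Since the login default looks right, it is worth checking what her session actually negotiates when she connects, as some drivers and client settings override the login default. A sketch (the login name is a placeholder):

[code="sql"]
-- From her connection:
SELECT @@LANGUAGE AS session_language, @@DATEFIRST AS date_first;

-- Or inspect her live session from another connection:
SELECT session_id, login_name, language, date_format
FROM   sys.dm_exec_sessions
WHERE  login_name = 'DOMAIN\HerLogin';
[/code]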

Help with Second highest date

Posted: 22 Apr 2013 10:23 PM PDT

I have a stored procedure, and I am selecting the Renewal Date and the Last Renewal Date (second-last greater date):

[code="sql"]
ALTER PROCEDURE [rept].[spRenewalDates]
    @startDate DATETIME,
    @endDate DATETIME
AS
BEGIN
    SET NOCOUNT ON;
    --DECLARE @system INT
    SELECT tblRenewals.SystemNum,
        (CASE WHEN YEAR([Date]) = 1900 OR YEAR([Date]) = 2999 THEN ''
              ELSE ISNULL(CONVERT(VARCHAR, DATE, 103) + ' ' + LEFT(CONVERT(VARCHAR, DATE, 108), 5), '')
         END) AS [Renewal Date],
        (SELECT MAX(date)
         FROM tblRenewals
         WHERE date != (SELECT MAX(date) FROM tblRenewals WHERE SystemNum = tblSystem.SystemNum)
           AND SystemNum = tblSystem.SystemNum) AS [Last Renewal Date],
        ClientName,
        Name AS [Ot's]
    FROM tblRenewals
    INNER JOIN dbo.tblSystem ON dbo.tblRenewals.SystemNum = dbo.tblSystem.SystemNum
    INNER JOIN dbo.tblClient ON dbo.tblSystem.ClientNumber = dbo.tblClient.ClientNumber
    INNER JOIN dbo.tblReps ON dbo.tblClient.RepType1 = dbo.tblReps.Id
    WHERE (DATE BETWEEN @startDate AND @endDate)
    ORDER BY Date DESC
END
[/code]

To get the correct format for the renewal date I am using:

[code="sql"]
ISNULL(CONVERT(VARCHAR, DATE, 103) + ' ' + LEFT(CONVERT(VARCHAR, DATE, 108), 5), '')
[/code]

Can someone please edit the above stored procedure so I get the Last Renewal Date in the same format as the Renewal Date? Thank you.
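A sketch of the likely fix: compute the raw second-last date once (CROSS APPLY avoids repeating the correlated subquery), then format it with the same CONVERT expression already used for [Renewal Date]. This is a fragment to splice into the procedure, not a full rewrite:

[code="sql"]
-- In the FROM clause, after the joins:
CROSS APPLY (SELECT (SELECT MAX(r2.date)
                     FROM tblRenewals r2
                     WHERE r2.SystemNum = tblSystem.SystemNum
                       AND r2.date != (SELECT MAX(r3.date)
                                       FROM tblRenewals r3
                                       WHERE r3.SystemNum = tblSystem.SystemNum)
                    ) AS LastRenewal) lr

-- In the SELECT list, replacing the current [Last Renewal Date] column:
ISNULL(CONVERT(VARCHAR, lr.LastRenewal, 103) + ' '
       + LEFT(CONVERT(VARCHAR, lr.LastRenewal, 108), 5), '') AS [Last Renewal Date]
[/code]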

double hopping

Posted: 22 Apr 2013 10:15 PM PDT

Can anyone explain to me what double hopping is?

Query to know the number of instances

Posted: 22 Apr 2013 09:24 PM PDT

How do I find the number of SQL Server instances on my server using a query?
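A common sketch uses the undocumented xp_regread to list the instances registered on the local machine:

[code="sql"]
EXEC master.dbo.xp_regread
     @rootkey    = N'HKEY_LOCAL_MACHINE',
     @key        = N'SOFTWARE\Microsoft\Microsoft SQL Server',
     @value_name = N'InstalledInstances';
[/code]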

Storing the output of a stored procedure in a variable

Posted: 23 Apr 2013 12:34 AM PDT

Hi, I am inserting a value into a table GV_StoreAllocation. For column STNNo I have created a stored procedure. Can anyone tell me how to store the output of that stored procedure in a variable @STNNo?

INSERT INTO GV_StoreAllocation (StoreId, STNNo, FirstVoucherNo, LastVoucherNo, Quantity)
SELECT DISTINCT @storecode,
EXEC @STNNo = dbo.GetSTNValue @storecode, 0, '', @FirstVoucherID, @lastvoucherID, @quantity
FROM GV_Voucher gv
JOIN GV_VoucherStatus gvs ON gv.VoucherStatusId = gvs.VoucherStatusId
WHERE gvs.VoucherStatus = 'Active at HO
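EXEC cannot appear inside a SELECT list; the usual pattern is to capture the value first and then use the variable. A sketch reusing the names from the post (it assumes the procedure RETURNs the STN number; if it SELECTs it instead, an OUTPUT parameter or INSERT ... EXEC would be needed):

[code="sql"]
DECLARE @STNNo int;  -- assumed type
EXEC @STNNo = dbo.GetSTNValue @storecode, 0, '', @FirstVoucherID, @lastvoucherID, @quantity;

INSERT INTO GV_StoreAllocation (StoreId, STNNo, FirstVoucherNo, LastVoucherNo, Quantity)
SELECT DISTINCT @storecode, @STNNo, @FirstVoucherID, @lastvoucherID, @quantity
FROM GV_Voucher gv
JOIN GV_VoucherStatus gvs ON gv.VoucherStatusId = gvs.VoucherStatusId
WHERE gvs.VoucherStatus = 'Active at HO';
[/code]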

Error="8952" Severity="16" State="1 what is error number 8952 stands and solution?

Posted: 22 Apr 2013 10:52 PM PDT

What is error number 8952, and what is the remedy to fix it?

<DbccResults>
<Dbcc ID="0" Error="8952" Severity="16" State="1">Table error: table 'backupmediaset' (ID 1858105660). Index row in index 'backupmediasetuuid' (ID 2) does not match any data row. Possible extra or invalid keys for:</Dbcc>
<Dbcc ID="1" Error="8956" Severity="16" State="1">Index row (1:952:151) with values (media_uuid = 'F039C1AB-8103-4D9D-B0F6-86D4968938C3' and media_set_id = 22433) pointing to the data row identified by (media_set_id = 22433).</Dbcc>
<Dbcc ID="2" Error="8952" Severity="16" State="1">Table error: table 'backupmediaset' (ID 1858105660). Index row in index 'backupmediasetuuid' (ID 2) does not match any data row. Possible extra or invalid keys for:</Dbcc>
<Dbcc ID="3" Error="8956" Severity="16" State="1">Index row (1:1127:36) with values (media_uuid = '9F1D7A44-16E9-417B-B58C-DF4CEB8ACFBD' and media_set_id = 22431) pointing to the data row identified by (media_set_id = 22431).</Dbcc>
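Errors 8952/8956 here describe a corrupt nonclustered index ('backupmediasetuuid') on msdb's backupmediaset table; since the index rather than the data appears damaged, rebuilding it is the usual remedy. A sketch:

[code="sql"]
USE msdb;
ALTER INDEX backupmediasetuuid ON dbo.backupmediaset REBUILD;
-- Then re-check:
DBCC CHECKTABLE ('dbo.backupmediaset');
[/code]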

Add x amount off days and don't count Weekends

Posted: 22 Apr 2013 09:39 PM PDT

Good day. I am struggling to figure this one out. I have a column with StartDate, and I want to add x amount of days to that date, but I don't want to count weekends. If the date is 2012/04/23 and I add 7 days straight, it comes to 2012/04/30; without the weekends it is 2012/05/02 (the correct way). I did look at [code="plain"]WHERE WEEKDAY(date) > 0 AND WEEKDAY(date) < 5[/code] but don't know how to implement it. Thanks
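One T-SQL sketch: walk a numbers table, keep only weekdays, and take the n-th one. It assumes the start date is itself a weekday, and uses a @@DATEFIRST-independent modulo (Saturday maps to 0, Sunday to 1):

[code="sql"]
DECLARE @start date = '2012-04-23', @n int = 7;

SELECT MAX(d) AS EndDate   -- the @n-th business day after @start
FROM (SELECT TOP (@n)
             DATEADD(DAY, number, @start) AS d
      FROM master.dbo.spt_values
      WHERE type = 'P' AND number > 0
        AND (DATEPART(WEEKDAY, DATEADD(DAY, number, @start)) + @@DATEFIRST) % 7 NOT IN (0, 1)
      ORDER BY number) x;
-- With the example values this returns 2012-05-02.
[/code]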

Excel 64 bit driver

Posted: 14 Jan 2013 03:51 PM PST

Hi all, I am running the following query in SQL Server 2008 R2 (64-bit) against an MS Office 2007 Excel file:

SELECT * FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0', 'Excel 8.0;Database=<loc>', 'SELECT * FROM [Sheet1$]')

It's showing the error:

Msg 7308, Level 16, State 1, Line 1
OLE DB provider 'Microsoft.Jet.OLEDB.4.0' cannot be used for distributed queries because the provider is configured to run in single-threaded apartment mode.

I ran the following code:

sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'Ad Hoc Distributed Queries', 1;
GO
RECONFIGURE;
GO

but it's still showing the same error ... any help? Thanks and regards, Sathiyan R
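The Jet provider is 32-bit only, so it cannot load inside a 64-bit SQL Server; the usual route is the 64-bit ACE provider (from the Microsoft Access Database Engine redistributable). A sketch, with a placeholder file path:

[code="sql"]
SELECT *
FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                'Excel 12.0;Database=C:\Files\Book1.xlsx;HDR=YES',
                'SELECT * FROM [Sheet1$]');
[/code]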

Similar strings.

Posted: 22 Apr 2013 10:33 PM PDT

In the past week, I saw somewhere a string similarity function. (I think I saw it in my 'spare' time, when I am not focused on my work but am still reading about SQL Server to keep up, so I did not note the link, the book, or whatever.) Depending on the longest string shared between two strings, a value between 0 and 1 was given. This depended on the length of the string(s) and the similarity. A threshold was set to get a good starting point. One example where it was used was to compare chemical formulas, and find the same formula written differently. Does anybody recognise where I could have seen this? Other (or the same) techniques to find similar strings? We want to use this technique to find questions which are similar but formulated differently. It does not have to be perfect; it is to be used as a help. (I know the SOUNDEX and DIFFERENCE functions.) Thanks in advance, ben brugman

[SQL Server] Testing with Profiler Custom Events and Database Snapshots




Tuesday, April 23, 2013

[SQL Server] Efficiently Reuse Gaps in an Identity Column




[SQL Server] Using REPLACE in an UPDATE statement




[how to] Build career as DBA

Build career as DBA

Posted: 23 Apr 2013 09:11 PM PDT

I wanted to know: if someone is interested in building her/his career as a DBA, is a programming language required? I mean, without knowing a programming language, is it possible to become a DBA? Kindly guide me, because I want to build my career in this field.

one vs two column index difference when doing JOIN query?

Posted: 23 Apr 2013 08:41 PM PDT

Let's suppose that alfa, beta and gamma contain millions of rows, so we obviously need to create indexes to get optimal performance for this query:

SELECT * FROM alfa
JOIN beta  ON beta.id  = alfa.beta_id
JOIN gamma ON gamma.id = alfa.gamma_id
WHERE beta.someattribute = 'X'
  AND gamma.anotherone   = 'Y'
  AND alfa.something     = 'Z'

Now, what is the optimal way to create indexes on the alfa table? Is it better to create one index across two columns (beta_id, gamma_id) or two separate indexes, one per column? (I am not sure how a JOIN query is internally interpreted.)

DB : Oracle 11g 11.1.0.7.0
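For reference, a sketch of the two options being compared (Oracle syntax, index names are placeholders):

CREATE INDEX alfa_beta_gamma_ix ON alfa (beta_id, gamma_id);   -- one composite index
-- versus two single-column indexes:
CREATE INDEX alfa_beta_ix  ON alfa (beta_id);
CREATE INDEX alfa_gamma_ix ON alfa (gamma_id);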

update statement with self join

Posted: 23 Apr 2013 08:21 PM PDT

I needed to copy information from one row of a table to another. I noticed I could do the following:

update address
set col1 = a2.col1,
    col2 = a2.col2,
    .....etc
from address a1, address a2
where a1.id = @to and a2.id = @from

However, the above SQL seems kind of ambiguous.
How do I know it's going to update row @to with data from @from, and not the other way around?

Edit: Note: while I have not done extensive testing, this query does seem to work as I try it against our development database. It simply updates the one @to row with results from the @from row.
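The ambiguity disappears if the UPDATE targets the alias directly, which T-SQL allows; a sketch:

update a1
set a1.col1 = a2.col1,
    a1.col2 = a2.col2
from address a1
join address a2 on a2.id = @from   -- source row
where a1.id = @to                  -- target row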

Storing data as rows as opposed to columns

Posted: 23 Apr 2013 03:11 PM PDT

Here's a typical way I would store data (obviously not storing the password in plain text)

USER TABLE

| UserID | UserName | FullName | EMail     | Password |
|--------|----------|----------|-----------|----------|
| 1      | userAAA  | User Aaa | aa@aa.com | aAaaA    |
| 2      | userBBB  | User Bbb | bb@bb.com | bBbbB    |
| 3      | userCCC  | User Ccc | cc@cc.com | cCccC    |

Is there anything wrong with storing it in the following manner?

USER TABLE

| UserID | UserName |
|--------|----------|
| 1      | userAAA  |
| 2      | userBBB  |
| 3      | userCCC  |

ATTRIBUTES TABLE

| AttributeID | Attribute |
|-------------|-----------|
| 1           | Full Name |
| 2           | E-Mail    |
| 3           | Password  |

ATTRIBUTE_VALUES TABLE

| UserID | AttributeID | AttributeValue |
|--------|-------------|----------------|
| 1      | 1           | User Aaa       |
| 1      | 2           | aa@aa.com      |
| 1      | 3           | aAaaA          |
| 2      | 1           | User Bbb       |
| 2      | 2           | bb@bb.com      |
| 2      | 3           | bBbbB          |
| 3      | 1           | User Ccc       |
| 3      | 2           | cc@cc.com      |
| 3      | 3           | cCccC          |

The huge benefit I see here is the ability to easily add additional attributes to the user. But I really wanted to get another opinion on this.
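The second layout is the entity-attribute-value (EAV) pattern; the flexibility is real, but so is the cost: every read becomes a pivot, and per-column types and constraints are lost. A sketch of what fetching users looks like under EAV (table names are placeholders for the ones above):

SELECT u.UserID,
       u.UserName,
       MAX(CASE WHEN a.Attribute = 'Full Name' THEN v.AttributeValue END) AS FullName,
       MAX(CASE WHEN a.Attribute = 'E-Mail'    THEN v.AttributeValue END) AS EMail
FROM   Users u
JOIN   Attribute_Values v ON v.UserID      = u.UserID
JOIN   Attributes a       ON a.AttributeID = v.AttributeID
GROUP  BY u.UserID, u.UserName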

Using a table swap for a data load but I have triggers

Posted: 23 Apr 2013 07:01 PM PDT

I'm working on re-writing a table load. The current method is a daily job that deletes from the table, then re-loads it. The complaint is that if the load fails, the users have no data. The users have said they are OK if the load fails and they have to use yesterday's data.

My solution is to do a table swap.

  1. Create a staging table that has exactly the same structure, indexes etc
  2. Then the daily job does the following
    1. Delete from the staging table
    2. Load the staging table
    3. Begin a transaction to keep everyone out of the tables until I'm done
    4. Rename Live to Backup
    5. Rename Staging to Live
    6. Rename Backup to Staging
    7. Commit the transaction

This works fine on 3 of the 5 tables I'm working with. The other 2 have triggers. Because the trigger has code that references the table name (of course) it can't handle the rename of the object it is attached to.

I start with this:

Table1_Live
  -- Trigger for Table1_Live

Table1_Stage
  -- Trigger for Table1_Stage

After I do my renames I get the following

Table1_Stage
  -- Trigger for Table1_Live

Table1_Live
  -- Trigger for Table1_Stage

The benefits of the method are that the downtime for the users is minimal, I have yesterday's data, and if anything goes wrong I can roll back to yesterday's data.

My question is does anyone know a better method without the trigger problem, or a way that I can swap my triggers between the two tables so that the code remains correct after the rename?

Oh, and I'm somewhat embarrassed to admit but even though the database is 2005 (we are upgrading soon) the requirement is that the load be done using a DTS package. My plan is for the rest of the process to be done in a stored procedure that can be called from the DTS package (or SSIS if I can ever talk them into it).
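One workaround sketch: drop and re-create both triggers inside the same transaction as the renames, so each table ends up with a trigger whose body matches its new name (trigger names below are placeholders; the re-create would come from stored definitions):

BEGIN TRANSACTION;
EXEC sp_rename 'Table1_Live',   'Table1_Backup';
EXEC sp_rename 'Table1_Stage',  'Table1_Live';
EXEC sp_rename 'Table1_Backup', 'Table1_Stage';

DROP TRIGGER trg_Table1_Live;
DROP TRIGGER trg_Table1_Stage;
-- re-create each trigger from a stored definition that references
-- the table it is now attached to
COMMIT TRANSACTION;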

ALLOW_SNAPSHOT_ISOLATION and READ_COMMITTED_SNAPSHOT

Posted: 23 Apr 2013 06:30 PM PDT

Most of the forums and examples online always suggest having both ALLOW_SNAPSHOT_ISOLATION and READ_COMMITTED_SNAPSHOT set to ON whenever someone asks a question about snapshots, row versioning, or similar.

I guess the word SNAPSHOT in both settings gets a little confusing. My understanding is that, in order for the database engine to use row versioning instead of locks for the default READ COMMITTED behavior, the database's READ_COMMITTED_SNAPSHOT is set to ON, regardless of the ALLOW_SNAPSHOT_ISOLATION setting.

The ALLOW_SNAPSHOT_ISOLATION setting is set to ON only to allow snapshot isolation when starting a transaction (e.g. SET TRANSACTION ISOLATION LEVEL SNAPSHOT) regardless of READ_COMMITTED_SNAPSHOT setting.

The only reason to have both settings set to ON is when both READ COMMITTED row versioning AND snapshot isolation are needed.

My question is: is my understanding incorrect in some way? And do these two settings always have to be set to ON together (especially for READ COMMITTED row versioning)?
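For reference, the two settings are indeed independent switches; a sketch of setting each one on its own:

ALTER DATABASE MyDB SET READ_COMMITTED_SNAPSHOT ON;   -- versioned READ COMMITTED
ALTER DATABASE MyDB SET ALLOW_SNAPSHOT_ISOLATION ON;  -- enables SNAPSHOT isolation level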

Switch MySQL lag slave to new master and keep lagging (5.1)

Posted: 23 Apr 2013 02:03 PM PDT

We have the following scenario:

(diagram: replication setup — slaves and a standby master replicating from the master, plus a pt-slave-delay lag slave)

Set of slaves replicate from the master, separate standby master (a.k.a slave with binlogs) replicates from the same master; lag slave, powered by pt-slave-delay, replicates from the same source.

We also utilise mha4mysql to switch slaves from one master to another in case of need.

All would be dandy, if not for lag slaves.

Problem

Let's say lag slave is running 12 hours behind master. Switch lag slave from Master to Standby Master in such a way that failover occurs immediately (i.e. CHANGE MASTER is executed on lag slave the same time as on regular slaves) and that lag slave keeps lagging 12 hours.

Issues

While I know that the slave keeps track of the master binlog position in its own relay log, and I can calculate the master position 12 hours in the past, as far as I understand it, the Standby Master (aka slave with binlogs) has no relation between its own relay logs and its own binlogs.

Question

Is there a reliable way to calculate position of the Standby Master we need to switch to (discarding the rest of relay logs on lag slave, obviously) and restart replication from 12 hours behind, but on the new master?

NOTE we are running MySQL 5.1, so 5.5 (or 5.6 with GTIDs) aren't a solution for now unfortunately.

Thank you.

Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'

Posted: 23 Apr 2013 04:17 PM PDT

I have 4 SQL Server 2008 instances from which I need to pull data using OPENROWSET. I am adding this data to a table on server 1. Server 1 and 2 are in a cluster, same as 3 and 4. I am remoted into server 1, connected to server 1 and running OPENROWSET for servers 3 and 4 works just fine.

But for 1 and 2 I get this error:

Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'

All servers have the same users (including the one I am logged in as)

Any ideas? Thank you.
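This is the classic Kerberos double-hop symptom: when the connection falls back to NTLM, the server cannot forward your credentials to a second server, which usually comes down to missing SPNs or delegation settings. A sketch to check what each connection actually negotiated:

SELECT auth_scheme          -- KERBEROS is needed for the credential to hop
FROM   sys.dm_exec_connections
WHERE  session_id = @@SPID;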

Optimize UNION query in MYSQL

Posted: 23 Apr 2013 01:24 PM PDT

I have a problem with a UNION query in MySQL. We have 10 million players on our website, and we would like to select players with a multi-criteria system. For example, selecting US people, men, > 35 y.o.

We are using "vertical partionning" : 1 table per criter. For exemple :

* user_country
    - id_user
    - id_country

We would like to do this kind of query :

SELECT id_inscri FROM userdata_langue WHERE id_langue = '43'
UNION
SELECT id_inscri FROM userdata_sexe WHERE sexe = '2'
UNION
SELECT id_inscri FROM userdata_nb_jour WHERE nb_jour >= '31'
UNION
SELECT id_inscri FROM userdata_last WHERE last <= '2013-04-07' AND last >= '2013-04-03';

Do you have any idea on how to optimize that ?

Thanks !

François

----- More details Explain of the query :

id   | select_type  | table            | type  | possible_keys | key       | key_len | ref   | rows    | Extra
1    | PRIMARY      | userdata_langue  | ref   | id_langue     | id_langue | 1       | const | 398846  | Using index
2    | UNION        | userdata_sexe    | ref   | sexe          | sexe      | 1       | const | 1667137 | Using index
3    | UNION        | userdata_nb_jour | range | nb_jour       | nb_jour   | 2       | NULL  | 5830    | Using where; Using index
4    | UNION        | userdata_last    | range | last          | last      | 3       | NULL  | 371614  | Using where; Using index
NULL | UNION RESULT | <union1,2,3,4>   | ALL   | NULL          | NULL      | NULL    | NULL  | NULL    |

SHOW CREATE TABLE

Table: userdata_langue

CREATE TABLE `userdata_langue` (
  `id_inscri` bigint(20) NOT NULL,
  `id_langue` tinyint(3) unsigned NOT NULL,
  PRIMARY KEY (`id_inscri`),
  KEY `id_langue` (`id_langue`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
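One thing to double-check before tuning: UNION returns players matching ANY criterion, while "US people, men, > 35" sounds like ALL criteria at once. If so, a join over id_inscri (a sketch using the table names shown) both fixes the logic and usually optimizes far better than the UNION:

SELECT l.id_inscri
FROM   userdata_langue  l
JOIN   userdata_sexe    s  ON s.id_inscri  = l.id_inscri AND s.sexe = '2'
JOIN   userdata_nb_jour nj ON nj.id_inscri = l.id_inscri AND nj.nb_jour >= 31
JOIN   userdata_last    la ON la.id_inscri = l.id_inscri
                          AND la.last BETWEEN '2013-04-03' AND '2013-04-07'
WHERE  l.id_langue = '43';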

How can Innodb ibdata1 file grows by 5X even with innodb_file_per_table set?

Posted: 23 Apr 2013 01:23 PM PDT

I have innodb_file_per_table set, and just today my ibdata1 file jumped from 59 MB to 323 MB after I made several changes to an 800 MB table to reduce it to about 600 MB. That particular table's .ibd file was reduced, but the server's ibdata1 file went crazy. Any ideas?

Change SQL Server 2012 Version from registry?

Posted: 23 Apr 2013 01:17 PM PDT

I am creating an automated build of SQL Server which is run from a batch file. Is there any way, via the registry or another method, of deciding which SQL Server edition (either Enterprise or Standard) will be installed, or are the two editions still separate builds?

If they are separate builds, I guess I could get around it by having a "master" batch file call an appropriate "child" batch file, but this would mean having a folder which contains both sets of binaries, which I'd rather avoid.

When converting a table valued function to inline, why do I get a lazy spool?

Posted: 23 Apr 2013 12:18 PM PDT

I have a table-valued function that I want to convert to inline to improve performance. It sets the values of variables to the four bit columns in one row of a table (looking up the row using a function parameter as the key), then runs one or more of four different insert statements (differing only by the WHERE clause) to populate a temp table, based on which of the bit variables have a value of 1, and then returns the content of the temp table.

I rewrote it to be one SELECT (with a CTE to get the four bit column values) using four OR clauses in the WHERE to get the same result.

However, the plan cost jumped from .003 to over 2.5. On the other hand, the actual performance is better, judging by the output from SET STATISTICS TIME ON. 75% of the cost of the inline version is a lazy spool. Is there a good way to avoid the lazy spool and improve performance even more? The output could be anywhere from one row to thousands.

Best way to synchronize several databases in different computers

Posted: 23 Apr 2013 01:39 PM PDT

I have to do a POS program for college in Java. The problem is that I don't know how to link the different computers to a master computer that handles the changes to the database and sends them to the other PCs.

I've thought the best way is for every computer to have its own copy of the database; the changes are made directly on this local database and later sent to the master PC, which returns the changes made by the other computers. Alternatively, I could use only one database on the master, but I don't know if this is going to respond well when all the computers are working at the same time.

Do you have any alternatives? Or will it work fine with either of the ways I've thought of?

Is there slowdown inserting into an InnoDB table that has no index set?

Posted: 23 Apr 2013 12:26 PM PDT

I have an old application with lots of InnoDB tables, that have no indexes at all, not even a primary ID or such.

Those tables only contain a few thousand rows.

Would it be faster to INSERT data into these tables if I set a primary index (that I don't otherwise need)?
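Background that may help frame the test: when no primary key exists, InnoDB creates a hidden 6-byte row ID and clusters on it, so inserts already append in insertion order. A sketch of the explicit alternative to benchmark against (table name is a placeholder):

ALTER TABLE my_table
    ADD COLUMN id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY;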

How can I set Timeout by View, User or Role?

Posted: 23 Apr 2013 07:30 PM PDT

I also posted this question at http://superuser.com/questions/586148/how-can-i-set-timeout-by-view-user-or-role I have searched at ServerFault for an answer and not found the question or an answer.

For SQL 2008 R2, data views.

I am looking for a timeout control, using Microsoft SQL Server Management Studio (SSMS), that is NOT at the server level and/or is NOT dependent on the query timeout set by the application initiating the query.

I have been unable to find timeout controls by View, User or Role using SSMS.

There are server level timeouts (remote query timeout http://technet.microsoft.com/en-us/library/ms189040.aspx ), but as I understand it would also impact the main applications use of the database, which lives on a different server (main application gets to define its own limits).

I found DBPROP_COMMANDTIMEOUT (http://msdn.microsoft.com/en-us/library/windows/desktop/ms712980(v=vs.85).aspx) but I am not seeing any way to control it by view. And this, "Set Command Timeout from SQL Server 2005 rather than through the code?", says "Command timeouts are always set by the client".

Considerations: These are connections for reporting from the production database of a major application, where the archived datasets (midnight last night) are not sufficiently current. We have a requirement to allow some access; we have a responsibility to not let that access adversely impact the application.

How can I identify the number of times a View is called in SQL Server?

Posted: 23 Apr 2013 12:23 PM PDT

How can I identify the number of times a View was called in SQL Server?

I am looking for an answer that is similar to this below which shows how it can be done for stored procedures.

How can I monitor the call count per stored procedure per timespan?
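There is no per-view execution counter, but cached-plan usage counts can approximate one. A sketch that searches cached statement text for the view name (the name is a placeholder; plans age out of the cache, so this undercounts):

SELECT qs.execution_count, st.text
FROM   sys.dm_exec_query_stats qs
CROSS  APPLY sys.dm_exec_sql_text(qs.sql_handle) st
WHERE  st.text LIKE '%vMyView%'
ORDER  BY qs.execution_count DESC;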

mysqldump: Got error: 1017: Can't find file: 'drupal_install_test' (errno: 2) when using LOCK TABLES

Posted: 23 Apr 2013 04:01 PM PDT

I'm trying to back up a drupal site database, but I'm having some issues. When I ran the following command:

mysqldump -uroot -p drupaSite > drupaSite.sql

I get the following error:

mysqldump: Got error: 1017: Can't find file: 'drupal_install_test' (errno: 2) when using LOCK TABLES

If I try to query the table, I get the same error:

mysql> select * from drupal_install_test;    ERROR 1017 (HY000): Can't find file: 'drupal_install_test' (errno: 2)  

I checked the status of the tables in the database:

show table status from drupaSite;

I get the following output:

| drupal_install_test     | NULL   |    NULL | NULL       |   NULL |           NULL |        NULL |               NULL |         NULL |      NULL |           NULL | NULL                | NULL                | NULL                | NULL            |     NULL | NULL           | Can't find file: 'drupal_install_test' (errno: 2) |   

I ran the following query:

SELECT * FROM information_schema.tables WHERE table_name='drupal_install_test'\G

I get the following output:

*************************** 1. row ***************************    TABLE_CATALOG: NULL     TABLE_SCHEMA: drupaSite       TABLE_NAME: drupal_install_test       TABLE_TYPE: BASE TABLE           ENGINE: NULL          VERSION: NULL       ROW_FORMAT: NULL       TABLE_ROWS: NULL   AVG_ROW_LENGTH: NULL      DATA_LENGTH: NULL  MAX_DATA_LENGTH: NULL     INDEX_LENGTH: NULL        DATA_FREE: NULL   AUTO_INCREMENT: NULL      CREATE_TIME: NULL      UPDATE_TIME: NULL       CHECK_TIME: NULL  TABLE_COLLATION: NULL         CHECKSUM: NULL   CREATE_OPTIONS: NULL    TABLE_COMMENT: Can't find file: 'drupal_install_test' (errno: 2)  

I ran the following query:

CHECKSUM TABLE drupal_install_test;  

I got the following output:

+-------------------------------+----------+  | Table                         | Checksum |  +-------------------------------+----------+  | drupaSite.drupal_install_test |     NULL |   +-------------------------------+----------+  1 row in set, 1 warning (4.34 sec)  

I ran the following query:

CHECK TABLE drupal_install_test;

and I get the following output:

   +-------------------------------+-------+----------+---------------------------------------------------+      | Table                         | Op    | Msg_type | Msg_text                                          |      +-------------------------------+-------+----------+---------------------------------------------------+      | drupaSite.drupal_install_test | check | Error    | Can't find file: 'drupal_install_test' (errno: 2) |       | drupaSite.drupal_install_test | check | error    | Corrupt                                           |       +-------------------------------+-------+----------+---------------------------------------------------+    2 rows in set (0.02 sec)  

My question for you guys is: how can I fix this in a way that lets me back up the database and restore it on another server? The site is working just fine; I need to migrate the server. I would really appreciate your help.

Cannot Utilize Maximum CPU and Memory Usage for MySQL

Posted: 23 Apr 2013 12:37 PM PDT

Good day.

I know this may be a duplicate of other questions however I have applied all the suggestions in many of the threads, but I remain with the same problem.

I have a single stored procedure working with at most 3 tables. When I run the procedure, only 30% of my CPU is used and about 25% of my RAM.

I am sitting with a CPU with 4 cores and 16GB RAM.

my.ini looks as follows:

[client]  port        = 3306  socket      = /tmp/mysql.sock    [mysqld]  port        = 3306  socket      = /tmp/mysql.sock  skip-external-locking  key_buffer_size = 512M  max_allowed_packet = 32M  table_open_cache = 512  sort_buffer_size = 2M  read_buffer_size = 2M  read_rnd_buffer_size = 8M  myisam_sort_buffer_size = 128M  thread_cache_size = 16  query_cache_size= 32M  thread_concurrency = 0    log-bin=mysql-bin    binlog_format=mixed    server-id   = 1    innodb_buffer_pool_size = 12G  innodb_log_buffer_size = 256M  innodb_flush_log_at_trx_commit = 2  innodb_read_io_threads = 64  innodb_write_io_threads = 64    [mysqldump]  quick  max_allowed_packet = 16M    [mysql]  no-auto-rehash    [myisamchk]  key_buffer_size = 128M  sort_buffer_size = 128M  read_buffer = 2M  write_buffer = 2M    [mysqlhotcopy]  interactive-timeout  

Is it the nature of the procedure being called that is causing MySQL to underutilize the hardware, or is it my configuration?

I was running XAMPP but then realised it was 32-bit, so I switched to the 64-bit version of WAMP. I use a 32-bit MySQL Workbench to run queries.

I am using the InnoDB engine.

Using MySQL Ver 14.14 Distrib 5.5.24 Win64 (x86).
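One framing note (not a diagnosis): in MySQL, a single query or stored procedure call runs on a single thread, so a 4-core box can show roughly 25-30% CPU even when that one core is saturated. A quick way to watch whether one connection is doing all the work:

SHOW PROCESSLIST;  -- look for a single long-running connection in an executing state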

Getting sql statement to refer to more than 2 tables?

Posted: 23 Apr 2013 04:04 PM PDT

I have the following tables:

StaffName

============================================================================  |  Name    |   Income_Group   |    Educational_Level  |   Country          |  ============================================================================  |  Jack    |   5              |    5                  |   1                |  |  Jill    |   3              |    3                  |   4                |  |  Jane    |   1              |    4                  |   6                |  |  June    |   4              |    2                  |   7                |  ============================================================================  

Country

==================================================  | ID   |  Country   |  Country_Description       |  ==================================================  | 1    | America    |                            |  | 7    | Japan      |                            |  ==================================================  

IncomeGroup

=======================================================  | ID   |  Income_Range        |  Description          |  =======================================================  | 1    | Below US$2500        |                       |  | 5    | US$9000 to US$12000  |                       |  =======================================================  

EducationalLevel

============================================================  | ID   |  Educational_Level   |  Country_Description       |  ============================================================  | 1    | PhD                  |                            |  | 7    | Master               |                            |  ============================================================  

My intention is to get all the values from the other tables into the main table and display it something like the following:

Intended Result

=======================================================================================  |  Name    |   Income_Group              |    Educational_Level  |   Country          |  =======================================================================================  |  John    |   US$9000 to US$12000       |    PhD                |   America          |  |  KoKo    |   US$5000 to US$7000        |    Master             |   Japan            |  |  Kane    |   US$1000 to US$2000        |    B.Degree           |   Thailand         |  |  Ali     |   US$8200 to US$9200        |    College            |   Malaysia         |  =======================================================================================  

I tried using the following sql:

select s.name, s.Income_Group, s.Educational_Level, s.Country
from StaffName s, Country c, IncomeGroup ig, EducationalLevel el
where s.Income_Group = c.ID
  AND s.Educational_Level = ig.id
  AND s.Country = el.id

But it returns no results.

There is no foreign or primary keys for all tables.

What could I be missing?
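A sketch of the likely fix: each foreign key in StaffName must be compared to the matching table's ID (in the query above, Income_Group is compared to Country's ID, and so on), and the display columns should come from the joined tables rather than the main one:

SELECT s.name,
       ig.Income_Range       AS Income_Group,
       el.Educational_Level,
       c.Country
FROM   StaffName s
JOIN   IncomeGroup      ig ON s.Income_Group      = ig.ID
JOIN   EducationalLevel el ON s.Educational_Level = el.ID
JOIN   Country          c  ON s.Country           = c.ID;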

DPM and AlwaysOn - Can they work

Posted: 23 Apr 2013 01:11 PM PDT

How does AlwaysOn tie into DPM for SQL Server 2012? Can we use one or the other for high availability? Can they work together, or must they be used separately? Thanks

PostgreSQL and default Schemas

Posted: 23 Apr 2013 05:23 PM PDT

Whenever I create a brand new database in PostgreSQL Maestro it creates the following list of default schemas:

(screenshot: the list of schemas created by default in the new database)

Now, from my understanding, schemas are like folders for organization, etc. So the question I have is: are all these schemas needed when I create a new DB? If so, what are they used for on the PG side, as I won't ever use them myself?

I can understand information_schema, as this is the default for an install of MySQL on a server, though I don't get why a database would need its own as opposed to the entire server; but to each DB type his own, I guess.

LATCH_EX Waits on Resource METADATA_SEQUENCE_GENERATOR

Posted: 23 Apr 2013 12:57 PM PDT

We have a process that generates an inventory report. On the client side, the process splits of a configurable number of worker threads to build a chunk of data for the report that corresponds to one store out of many (potentially thousands, typically dozens). Each worker thread calls a web service that executes a stored procedure.

The database process for processing each chunk gathers a bunch of data into a #Temporary table. At the end of each processing chunk, the data is written to a permanent table in tempdb. Finally, at the end of the process, one thread on the client side requests all the data from the permanent tempdb table.

The more users that run this report, the slower it gets. I analyzed the activity in the database. At one point, I saw 35 separate requests all blocked at the same point in the process. All these SPIDs had waits on the order of 50 ms of type LATCH_EX on resource METADATA_SEQUENCE_GENERATOR (00000010E13CA1A8). One SPID holds this resource, and all the others are blocked. I did not find anything about this wait resource in a web search.

The table in tempdb that we are using does have an IDENTITY(1,1) column. Are these SPIDs waiting for the IDENTITY column? What methods could we use to reduce or eliminate the blocking?

The server is part of a cluster. The server is running 64-bit SQL Server 2012 Standard Edition SP1 on 64-bit Windows 2008 R2 Enterprise. The server has 64 GB RAM and 48 processors, but the database can only use 16 because it is the standard edition.

(Note that I'm not thrilled by the design of using a permanent table in tempdb to hold all this data. Changing that would be an interesting technical and political challenge, but I'm open to suggestions.)
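
In that spirit, if the IDENTITY allocation does turn out to be the bottleneck (unconfirmed), one option to experiment with is an explicit SEQUENCE with a large cache, so value allocation only touches the metadata generator once per cache refill. A sketch under that assumption; the names and columns are illustrative, since the real table isn't shown:

-- Hypothetical sketch: replace IDENTITY with a SEQUENCE that pre-allocates
-- a large block of values in memory (SQL Server 2012 syntax).
CREATE SEQUENCE dbo.InventoryRowSeq
    AS bigint
    START WITH 1
    INCREMENT BY 1
    CACHE 10000;

CREATE TABLE dbo.InventoryReportStaging
(
    RowID   bigint NOT NULL DEFAULT (NEXT VALUE FOR dbo.InventoryRowSeq),
    StoreID int NOT NULL,
    Payload varchar(max) NULL
);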

UPDATE 4/23/2013

We've opened a support case with Microsoft. I'll keep this question updated as we learn more.

Normalization/normal forms - May a field describe an other field?

Posted: 23 Apr 2013 06:03 PM PDT

Like this:

CREATE TABLE persons(
    id serial8 NOT NULL PRIMARY KEY,
    name varchar,
    -- A lot of other fields
    date_of_birth timestamp with time zone,
    date_of_birth_precision varchar(16),
    CHECK (date_of_birth_precision IN ('Years','Months','Days','Hours','Minutes'))
);

date_of_birth_precision describes the precision of date_of_birth.

I wonder if it violates this rule (because I don't fully understand the rule):

Every non-prime attribute of R is non-transitively dependent (i.e. directly dependent) on every superkey of R.

MySQL data too long error

Posted: 23 Apr 2013 05:03 PM PDT

One of the columns in my table was initially created as a varchar(1000). Now there is a need to increase the size of that column to hold around 5000 characters. I used an ALTER statement to increase the size of that column. DESC table_name as well as SHOW CREATE TABLE table_name clearly indicate the new size of that column to be 5000 characters. However, interestingly, when I try to insert any data exceeding 1000 characters, the error ERROR 1406 (22001): Data too long for column shows up. Out of desperation I changed the datatype to text, and it's still limited to 1000 characters. Any suggestions?

I created another dummy table with a column size of varchar(5000), and that works just fine. The engine used is InnoDB and the default charset is UTF8.
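
One possibility worth ruling out (purely an assumption; nothing in the question confirms it): a trigger or a view built against the old definition could still be enforcing the 1000-character limit. Standard MySQL commands to check:

-- Confirm what the server actually holds for the live column definition
SHOW FULL COLUMNS FROM table_name;

-- A BEFORE INSERT trigger copying into a shorter column would also raise 1406
SHOW TRIGGERS LIKE 'table_name';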

Why is SQL running the same query longer from another connection?

Posted: 23 Apr 2013 02:06 PM PDT

Here is the issue overview: why does my stored procedure run faster when executed locally vs. remotely?

Don't jump to any conclusions just yet; let me explain what I mean...

Here is the setup:
A Windows 2008R2 (correction: 2003) application server executes a stored procedure that performs some action (what it's doing isn't really important at this point in time). This stored procedure is executed on the SQL Server over a TCP/IP connection to the database server. The DB server is physically located right next to the application server, and they are connected to each other via 1GB NICs through a 1GB switch. The DB server is running SQL 2005 SP2 Enterprise Edition, and has 16GB of memory and several vLUNs striped across 48 15k drives in an HP EVA FC-connected SAN. From all indicators thus far, there are no I/O, memory, or CPU constraints or limits being hit. Trace flag 1118 is on, and tempdb is split across 8 files on their own vLUN. Data and TLogs also have their own vLUNs.

So, here is what I am seeing:
Using SQLCMD on the database server, with SQL Profiler running on the same DB server, I can execute the stored procedure, and I see that the execution starts immediately and completes with a duration of about 2,100ms with an IO of about 1200.

Using SQLCMD on the application server, with SQL Profiler running on the DB server, I can execute the same exact stored procedure, with the SAME exact parameters, and I see that the execution starts immediately and completes with a duration of about 110,000ms with an IO of about 1200.

The query results in 1 row, with 4 columns [INT, INT, VARCHAR(50), VARCHAR(100)]

ASIDE: (I know the query is a train wreck; this is a regulated system and I cannot change it on a live production server, so please don't make any suggestions about doing so. The next version has been rewritten to be better.)

From everything we can see, there is no reason we should be seeing differences like this, but what is happening is that the .NET application that calls this query from the application server is timing out waiting for the response.

We have checked locking and blocking, wait states, query plans, IO contention, CPU contention, memory contention, and network saturation/utilization; performed rebuilds on all indexes; updated all stats; and checked a handful of other items, but haven't come up with anything that points to why this is happening.

Please ask more questions if you have any, and make recommendations as you come up with them; depending on the impact (remember this is a production environment) we will try them and respond back.
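
One recommendation, offered as a hedged guess rather than a diagnosis: "fast locally, slow from the application" is the classic signature of mismatched session SET options (commonly ARITHABORT, which SQLCMD/SSMS and the .NET SqlClient default differently), which gives each connection its own cached plan and can expose parameter sniffing. A sketch to compare the two sessions' options:

-- Compare SET options between the fast (SQLCMD) and slow (.NET) sessions;
-- a mismatch means they compile and cache separate plans for the same procedure.
SELECT session_id, program_name, arithabort, ansi_nulls, quoted_identifier
FROM sys.dm_exec_sessions
WHERE is_user_process = 1;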

-Thanks! -Chris

consequences of using "innodb_flush_method = O_DIRECT" without having a battery backed write cache? or on a KVM guest?

Posted: 23 Apr 2013 01:26 PM PDT

MySQL 5.5.29, InnoDB - 128GB RAM - 32 cores - RAID 10 SSD.

Our server, a dedicated KVM guest on a 'bare metal' host, is our heavy read-write DB server. Everything is file-per-table. The InnoDB buffer pool is 96GB, with 2 x 1GB log files and about 20 minutes of writes to fill up those logs at peak time.

How bad a situation would it be if O_DIRECT (we are currently running on the default) were enabled during a high workload without a battery-backed write cache, and a total crash were to occur on the OS or the parent host, or the power were cut?

Does a battery-backed write cache make a difference if the server is a VM guest of the parent anyway?
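
For reference, data-file flushing and redo-log durability are governed by two separate settings, which can be confirmed on the running server with standard commands; O_DIRECT only bypasses the OS page cache for data files, while the durability of committed transactions hinges on the log setting:

-- O_DIRECT affects data files; redo-log durability is controlled
-- independently by innodb_flush_log_at_trx_commit.
SHOW VARIABLES LIKE 'innodb_flush_method';
SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';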


MySQL Dump configuration file

Posted: 23 Apr 2013 04:03 PM PDT

I modified the my.cnf file so that innodb_data_file_path points somewhere other than the default path. But the mysqldump command seems to be trying to dump from the default path.

I found that if I give the option --defaults-file=xxxx on the command line I can change this, but is there a way to configure mysqldump to use this option without specifying it on the command line?
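
A hedged sketch, assuming the goal is simply to avoid retyping options: mysqldump reads the [client] and [mysqldump] groups from the standard option files (such as ~/.my.cnf) automatically, so options placed there need no command-line flag. Note that --defaults-file itself cannot be set from within an option file; only the options it would point to can be relocated this way. The values below are placeholders:

# ~/.my.cnf -- read by mysqldump without any --defaults-file flag
[mysqldump]
user = backup_user
password = your_password
host = localhost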

How can I set a default session sql_mode for a given user?

Posted: 23 Apr 2013 12:03 PM PDT

I want to have a default session sql_mode that is different from the global sql_mode.

I can set it manually, but I'd like to do it automatically.

I thought I could add it to .my.cnf, but that doesn't work. I tried adding it to the [mysql] group in .my.cnf, and it does not throw an error.

Yet when I connect, the session still inherits the global sql_mode.

I'm using MySQL 5.5.16.
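
One workaround I'm aware of, sketched here with a placeholder mode: the server's init_connect variable runs a statement at connect time. Note it is not per-user; it applies to every account without the SUPER privilege, and SUPER accounts skip it entirely:

-- Runs for each new non-SUPER connection, overriding the inherited mode
-- ('STRICT_ALL_TABLES' is a placeholder for the desired sql_mode).
SET GLOBAL init_connect = 'SET SESSION sql_mode = ''STRICT_ALL_TABLES''';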

Users cannot view tables in non-default schema in SSMS

Posted: 23 Apr 2013 07:03 PM PDT

I'm having an issue setting the VIEW DEFINITION permission appropriately at the schema level for one of my users. I've created the schema TestSchema and added some tables. The user currently has permissions set to access & modify the tables (SELECT, UPDATE, DELETE, etc.) through the db_datareader and db_datawriter roles. However, they cannot see any of the tables in the SSMS Object Explorer.

I've tried granting permissions to view definitions:

grant view definition on SCHEMA :: [TestSchema] to [User]  

That didn't work. I tried setting the table-level permission:

grant view definition on [TestSchema].[NewTable] to [User]  

That also didn't work. Then I tried just a blanket grant:

grant view definition to [User]  

And that did work; they can now see TestSchema, as well as other schemas that they shouldn't have access to.

My goal here is to allow the user to view all tables within a given schema. How do I accomplish that? If I should be able to do this by default, what permissions should I be looking at to find why I can't?
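
A hedged diagnostic that may narrow this down: impersonate the user and list the permissions they actually hold on the schema and on one of its tables (object names taken from the question):

-- List the effective permissions [User] actually holds
EXECUTE AS USER = 'User';
SELECT * FROM sys.fn_my_permissions('TestSchema', 'SCHEMA');
SELECT * FROM sys.fn_my_permissions('TestSchema.NewTable', 'OBJECT');
REVERT;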

SSRS calculated field strange behavior

Posted: 23 Apr 2013 01:09 PM PDT

I have a dataset with a calculated field:

= IIF(String.IsNullOrEmpty(Fields!Event.Value), "\t", Fields!Event.Value) & ", " &
  IIF(String.IsNullOrEmpty(Fields!Action.Value), "\t", Fields!Action.Value) & ", " &
  IIF(String.IsNullOrEmpty(Fields!RequestedBy.Value), "\t", Fields!RequestedBy.Value)

This field is then accessed inside a Table within the report using a lookup expression:

=Join(LookupSet(Fields!contactid.Value, Fields!ContactParticipantValue.Value, Fields!RowSummary.Value, "EventRequests"), Environment.NewLine)  

When I run this, initially everything works fine and the correct data is shown in the cell. However, there is a warning displayed:

Warning 1   [rsRuntimeErrorInExpression] The Value expression for the field 'RowSummary' contains an error: (processing): (field.ExprHost != null)  0     

Then if I go to the next page, all of the cells now contain #Error instead of the 'RowSummary' info. If I then go back to the previous page, which used to have all the correct data in it, all of the cells are now replaced with #Error.

I have found a workaround whereby I add a hidden list container to the END of the report and set its DataSet property to the 'EventRequests' dataset with the calculated field in it. I don't have to add any fields or make the list visible, but somehow just having the list on the report prevents the #Error from replacing all of my data. This does not solve the problem completely, because when I export to PDF I still receive errors.

Anyone know why this is happening?
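
One avenue to try, offered as an assumption-laden sketch rather than a known fix: move the concatenation out of the SSRS calculated field and into the dataset query itself, so the LookupSet no longer depends on an expression-evaluated field. The table and column names are guessed from the expressions above, and CHAR(9) assumes the original "\t" was meant to be a tab (in an SSRS/VB.NET expression, "\t" is actually a literal backslash-t, not an escape):

-- Build RowSummary in T-SQL instead of an SSRS calculated field
SELECT  contactid,
        ContactParticipantValue,
        COALESCE(NULLIF(Event, ''), CHAR(9)) + ', ' +
        COALESCE(NULLIF([Action], ''), CHAR(9)) + ', ' +
        COALESCE(NULLIF(RequestedBy, ''), CHAR(9)) AS RowSummary
FROM    dbo.EventRequests;  -- table name assumed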
