Wednesday, July 10, 2013

[SQL Server 2008 issues] Procedure or function not working correctly

Procedure or function not working correctly

Posted: 09 Jul 2013 04:22 PM PDT

Hi Professionals,

I have a procedure which calls a function. The function checks whether three columns exist; if they do, it runs one query, and if they don't, it runs a different query. The problem I am having is that no matter what I change the three passed-in values to, it always runs the first branch of my procedure (the = 'Y' part). Am I missing something? Code below, and thanks in advance.

[code]
ALTER PROCEDURE [dbo].[checkcolumnsexists]
AS
BEGIN
    IF dbo.ColumnExists('SOFTWARE_MANUFACTURER', 'PRODUCT_NAME', 'PRODUCT_VERSION') = 'Y'
    BEGIN
        SELECT software_manufacturer, product_name, product_version FROM dbo.newtable;
    END
    ELSE
        SELECT * FROM dbo.newtable;
END
[/code]

[code]
ALTER FUNCTION [dbo].[ColumnExists]
(@SMcol varchar(100), @PNcol varchar(100), @PVcol varchar(100))
RETURNS varchar(1)
AS
BEGIN
    DECLARE @Result varchar(1);
    IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.Columns
               WHERE UPPER(TABLE_NAME) = 'NEWTABLE'
                 AND UPPER(COLUMN_NAME) IN (@SMcol, @PNcol, @PVcol))
    BEGIN
        SET @Result = 'Y'
    END
    ELSE
    BEGIN
        SET @Result = 'N'
    END
    RETURN @Result;
END
[/code]
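A likely culprit, reading the function as posted: EXISTS with COLUMN_NAME IN (...) is true as soon as any one of the three columns exists, not all three, so with this table the function always returns 'Y'. A minimal sketch of a stricter check that counts distinct matches (it assumes the three parameters are distinct column names):

[code="sql"]
ALTER FUNCTION [dbo].[ColumnExists]
(@SMcol varchar(100), @PNcol varchar(100), @PVcol varchar(100))
RETURNS varchar(1)
AS
BEGIN
    -- 'Y' only when all three columns are present, not just any one of them
    IF (SELECT COUNT(DISTINCT COLUMN_NAME)
        FROM INFORMATION_SCHEMA.COLUMNS
        WHERE UPPER(TABLE_NAME) = 'NEWTABLE'
          AND UPPER(COLUMN_NAME) IN (UPPER(@SMcol), UPPER(@PNcol), UPPER(@PVcol))) = 3
        RETURN 'Y';
    RETURN 'N';
END
[/code]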

Need a Subquery to delete the previous dates

Posted: 09 Jul 2013 06:16 PM PDT

I have created a table like below:

create table Verification (id int, date datetime)

and inserted values like this:

insert into Verification values (1, '2013-07-10 03:12:05.397')
insert into Verification values (1, '2013-07-09 03:12:05.397')
insert into Verification values (1, '2013-07-08 03:12:05.397')
insert into Verification values (2, '2013-07-10 03:12:05.397')
insert into Verification values (2, '2013-07-09 03:12:05.397')
insert into Verification values (2, '2013-07-08 03:12:05.397')
insert into Verification values (3, '2013-07-10 03:12:05.397')
insert into Verification values (3, '2013-07-09 03:12:05.397')
insert into Verification values (3, '2013-07-08 03:12:05.397')

I want to delete all the records except, for each id, the one with the max date. The output should look like this:

ID   date
1    2013-07-10 03:12:05.397
2    2013-07-10 03:12:05.397
3    2013-07-10 03:12:05.397
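A hedged sketch of one way to do this, keeping only the newest row per id with a correlated subquery (it assumes at most one row per id carries the max date):

[code="sql"]
DELETE v
FROM Verification AS v
WHERE v.date < (SELECT MAX(v2.date)
                FROM Verification AS v2
                WHERE v2.id = v.id);
[/code]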

List filegroups in partition scheme

Posted: 09 Jul 2013 05:25 AM PDT

Hi all,

I am getting the following error when I try to split a partition function, but I can't find any way to get the list of filegroups in the scheme. Does anyone have any suggestions, or better yet a script? Thanks in advance.

The associated partition function 'MyPartitionFunction' generates more partitions than there are file groups mentioned in the scheme 'MyPartitionScheme'
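For reference, a hedged catalog query that lists the filegroups behind a given scheme (scheme name taken from the error above; sys.destination_data_spaces maps a partition scheme to its destination filegroups):

[code="sql"]
SELECT ps.name AS scheme_name,
       dds.destination_id,
       fg.name AS filegroup_name
FROM sys.partition_schemes AS ps
JOIN sys.destination_data_spaces AS dds
    ON dds.partition_scheme_id = ps.data_space_id
JOIN sys.filegroups AS fg
    ON fg.data_space_id = dds.data_space_id
WHERE ps.name = 'MyPartitionScheme'
ORDER BY dds.destination_id;
[/code]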

Log Shipping and the very last Tran Log backup

Posted: 09 Jul 2013 07:36 AM PDT

I hope this is not a silly question; I will first give some background. I am heading a project to migrate databases from one data center to another that are geographically far apart. Instead of copying a large database file across a WAN on the day of the migration, I would like to get the work done ahead of time with log shipping, and on cutover day back up my last tran log, restore it to the new server, and then recover the databases. On the primary server, I plan to make sure that no one is connected and there are no running transactions using sp_who2, DBCC OPENTRAN, or any other method someone would like to suggest.

So my question is: how can I be 100% sure that I do in fact have the last tran log backup, so that I can restore it and recover the databases on the new server?

Like I mentioned previously, I hope this is not a silly question, and I appreciate any and all feedback. Thank you.
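One well-known way to guarantee that nothing can follow the final log backup is a tail-log backup taken WITH NORECOVERY, which leaves the source database in the RESTORING state so no further transactions are possible on it. A minimal sketch (database and path names are placeholders):

[code="sql"]
BACKUP LOG [MyDb]
TO DISK = N'\\someshare\MyDb_tail.trn'
WITH NORECOVERY;  -- source DB goes to RESTORING; nothing can change it afterwards
[/code]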

Hostname is a dot in sp_who2

Posted: 09 Jul 2013 06:52 AM PDT

Hi,

Does anyone know what it means when one executes sp_who2 and some of the processes show a dot (.) for the host name, but there is a login listed as well as other processing data (database name, command, CPU time, etc.)? I'm trying to figure out where that connection is coming from.

Thanks for any help!

- webrunner
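A hedged way to dig further than sp_who2 is to join the session and connection DMVs, which expose the client's network address alongside the host and program names:

[code="sql"]
SELECT s.session_id, s.login_name, s.host_name, s.program_name,
       c.client_net_address
FROM sys.dm_exec_sessions AS s
LEFT JOIN sys.dm_exec_connections AS c
    ON c.session_id = s.session_id
WHERE s.is_user_process = 1;
[/code]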

Error when trying to compress the backup

Posted: 09 Jul 2013 05:16 AM PDT

Hi all,

I am getting an error when trying to back up with compression:

'There is insufficient system memory in resource pool 'internal' to run this query. (Microsoft SQL Server, Error: 701)'

There is no problem if I back up without compression. SQL Server 2008 R2 SP1, Standard Edition.

total_physical_memory_kb      4193368
available_physical_memory_kb  1042748
total_page_file_kb            8619144
available_page_file_kb        4950160
system_memory_state_desc      Available physical memory is high

Any ideas? Thank you
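Compressed backups use a larger buffer working set, which can matter on a 4 GB box. A hedged sketch that caps the backup's memory footprint by setting the buffer knobs explicitly (database and path are placeholders; values are illustrative, and buffer memory is roughly BUFFERCOUNT x MAXTRANSFERSIZE):

[code="sql"]
BACKUP DATABASE [MyDb]
TO DISK = N'D:\Backups\MyDb.bak'
WITH COMPRESSION,
     BUFFERCOUNT = 8,            -- fewer buffers than the default
     MAXTRANSFERSIZE = 1048576;  -- 1 MB per buffer
[/code]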

Cannot scan a remote server with SQL Server BPA 1.0

Posted: 20 Mar 2013 05:34 PM PDT

Hello gents,

I am using Microsoft SQL Server 2008 R2 BPA 1.0 wrapped in Microsoft Baseline Configuration Analyzer 2.0 on my workstation and can perform a normal scan on my local SQL Server instance. However, when I try to connect to a remote server, it keeps reporting the following error:

Category: Prerequisite
Source: <servername>
Issue: 1. User is not a member of Administrators group on remote machine, OR 2. Powershell Remoting is not Enabled on Remote Server
Impact: Analysis cannot be performed
Resolution: 1. Add user as a member of Administrators group, OR 2. Run command Enable-PSRemoting -f through a PowerShell command prompt using elevated privileges, AND 3. Run command winrm set winrm/config/winrs `@`{MaxShellsPerUser=`"10`"`} through a PowerShell command prompt using elevated privileges

I've verified all three prerequisites listed above (my own AD account is a local admin, and I executed the PS commands with elevated privileges) and also turned off Windows Firewall on the target server, but still have no luck at all. Do you have any other directions to point out for me?

Thanks in anticipation!

P.S. The target server is not clustered, just a standalone physical box; both my workstation and the server are in the same domain; my AD account has been explicitly added to the local Windows admin group and the sysadmin role on the server and its hosted SQL instance.

install SQL cluster

Posted: 09 Jul 2013 07:19 AM PDT

Hi folks,

I need assistance with installing a SQL cluster; a comprehensive step-by-step guide would be appreciated.

Partitioning a table with primary key

Posted: 09 Jul 2013 05:20 AM PDT

We have a table that we are going to recreate on a partition scheme. Its clustered index is a primary key.

1. Can I just drop the index without dropping the primary key?
2. If that is impossible, what is the best way to deal with the foreign keys on other tables that reference our primary key? Should I drop them and re-create them? I tried to disable them, but that does not work.

In short, what is the best practice and sequence of steps in partitioning a table? Note: we already have the partition function, scheme, and filegroups, and some other tables are already partitioned. Thanks
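On question 1: for a clustered primary key, the index is the constraint, so the index cannot be dropped without dropping the constraint. One hedged pattern that sidesteps the foreign-key problem is to keep the primary key (and the FKs pointing at it) as a nonclustered constraint and let a separate clustered index carry the partitioning; names below are made up, and an aligned clustered index generally includes the partitioning column:

[code="sql"]
ALTER TABLE dbo.Parent DROP CONSTRAINT PK_Parent; -- referencing FKs must be dropped first, then recreated
ALTER TABLE dbo.Parent
    ADD CONSTRAINT PK_Parent PRIMARY KEY NONCLUSTERED (Id);
CREATE CLUSTERED INDEX CIX_Parent
    ON dbo.Parent (Id, PartCol)
    ON MyPartitionScheme (PartCol);  -- the clustered index carries the partitioning
[/code]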

Sorting based on user input..

Posted: 09 Jul 2013 03:29 AM PDT

Hi, I have a requirement and need help. I have a table:

CREATE TABLE MiscItems (ID INT, Description VARCHAR(50), DisplayOrder SMALLINT)

INSERT INTO MiscItems (ID, Description, DisplayOrder)
SELECT 100, 'A1', 1 UNION ALL
SELECT 101, 'A2', 2 UNION ALL
SELECT 102, 'A3', 3 UNION ALL
SELECT 104, 'A4', 4 UNION ALL
SELECT 105, 'A5', 5 UNION ALL
SELECT 106, 'A6', 6

SELECT * FROM MiscItems

DROP TABLE MiscItems

I display Description and DisplayOrder in the UI, and both are editable there. My requirement is that the user can modify the DisplayOrder values as they like and click save, and the new display order should be saved to the table accordingly. The user can change multiple items at once, and the data should be rearranged and saved accordingly. DisplayOrder will never be zero or negative.

1) Say A1, A2, A3, A4, A5, A6 are given order 1, 2, 3, 4, 5, 6.
2) Now change the existing order of A3-3, A4-4, A5-5, A6-6 to the new order A3-1, A4-2, A5-3, A6-4 and save. Expected: A3-1, A4-2, A5-3, A6-4, A5-1, A6-2.
3) Now change back the other way: from the existing order A3-1, A4-2, A5-3, A6-4, A5-1, A6-2, change A3 to 3, A4 to 4, A5 to 5, A6 to 6. Then the expected order is A1-1, A2-2, A3-3, A4-4, A5-5, A6-6.

I have the below proc for updating the display order:

CREATE PROCEDURE proc_Disporder
    @ID INT,
    @OldDisplayOrder INT,
    @NewDisplayOrder INT
AS
BEGIN
    IF (@OldDisplayOrder >= @NewDisplayOrder) OR (@OldDisplayOrder = 0)
    BEGIN
        UPDATE miscitem
        SET DisplayOrder = DisplayOrder + (CASE WHEN @OldDisplayOrder = @NewDisplayOrder THEN 1 ELSE 2 END)
        FROM MiscItems miscitem
        WHERE ID <> @ID
          AND miscitem.DisplayOrder >= @NewDisplayOrder
          AND miscitem.DisplayOrder > 0
    END

    IF (@OldDisplayOrder < @NewDisplayOrder) AND (@OldDisplayOrder <> 0)
    BEGIN
        UPDATE miscitem
        SET DisplayOrder = (CASE (DisplayOrder - 1) WHEN 0 THEN DisplayOrder ELSE (DisplayOrder - 1) END)
        FROM MiscItems miscitem
        WHERE ID <> @ID
          AND miscitem.DisplayOrder <= @NewDisplayOrder
          AND miscitem.DisplayOrder >= @NewDisplayOrder - 1
          AND miscitem.DisplayOrder > 0
    END

    DECLARE @MiscItems TABLE (DisplayOrder INT IDENTITY(1,1), ID INT)

    INSERT @MiscItems
    SELECT ID FROM MiscItems WHERE DisplayOrder > 0 ORDER BY DisplayOrder

    UPDATE src
    SET src.DisplayOrder = mi.DisplayOrder
    FROM MiscItems src
    INNER JOIN @MiscItems mi ON src.ID = mi.ID
END
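If the UI posts the whole edited list at once, one hedged alternative to per-row shuffling is to apply the user's new values first and then resequence everything in a single statement with ROW_NUMBER(), which also removes the need for the identity-based table variable:

[code="sql"]
;WITH Ordered AS
(
    SELECT DisplayOrder,
           ROW_NUMBER() OVER (ORDER BY DisplayOrder, ID) AS NewOrder
    FROM MiscItems
    WHERE DisplayOrder > 0
)
UPDATE Ordered
SET DisplayOrder = NewOrder;  -- compacts to a gap-free 1..N sequence
[/code]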

SqlServer 2008 Processor Affinity

Posted: 02 Jul 2013 12:51 AM PDT

Hello all ... I am looking for some advice on CPU affinity settings in SQL Server 2008. I have a 2-node, 3-instance cluster with 16 CPUs and 104 GB of RAM. The OS is Windows Server 2008 Enterprise. The current setup is:

Instance 1: Processor 0-7, I/O 8-15
Instance 2: Processor 8-15, I/O 0-7
Instance 3: Processor 0-3 and 8-11, I/O 4-7 and 12-15

Currently Instance 1 and Instance 3 share a cluster node. I do see some spikes in CPU activity, but those are generally focused during maintenance tasks, so that can be expected. Does anyone know of any best practices or other recommendations?
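For reference, masks like these map to sp_configure settings along the following lines (a sketch only: 255 = 0x00FF covers CPUs 0-7, 65280 = 0xFF00 covers CPUs 8-15):

[code="sql"]
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'affinity mask', 255;        -- processor affinity: CPUs 0-7
EXEC sp_configure 'affinity I/O mask', 65280;  -- I/O affinity: CPUs 8-15
RECONFIGURE;
[/code]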

CDC Cleanup Process Question....

Posted: 09 Jul 2013 03:14 AM PDT

I recently inherited a database that has CDC enabled. There is a custom cleanup process that was written to remove only the records where the operation = 1 or 2, since it appears they wanted to keep only the before and after update records for reporting.

Here is one of the many problems I face with this. If I turn on the cleanup agent (sys.sp_MScdc_cleanup_job) with a 10-year retention, it removes records from within that timespan. The table starts out with 5 million records dating back to Nov 2011; the cleanup job then removed 2 million of them, including Dec 2011 and various times in 2012. I did find out that the database was placed into simple recovery mode on various occasions to shrink the transaction log, since there was no DBA in house at the time. :w00t:

Could this be a matter of something being out of sorts within the database when it tries to go back and determine the high and low watermarks for the records it thinks it needs to remove? I guess I don't have a clear understanding of how it determines which records to purge from these tables; with a 10-year retention set, I would have assumed that the records would have remained untouched. But I know what happens when you assume something... :-)
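CDC cleanup works off LSN watermarks rather than the rows' own dates, and the LSN-to-time mapping lives in cdc.lsn_time_mapping, so a hedged first diagnostic is to see what that mapping still covers (the capture instance name below is a placeholder):

[code="sql"]
SELECT MIN(tran_begin_time) AS oldest_mapped,
       MAX(tran_end_time)   AS newest_mapped
FROM cdc.lsn_time_mapping;

SELECT sys.fn_cdc_map_lsn_to_time(sys.fn_cdc_get_min_lsn('dbo_MyTable')) AS low_watermark_time;
[/code]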

Minimum rights required to add users to a DB

Posted: 09 Jul 2013 02:22 AM PDT

Hi all,

We run both SQL 2008 and 2012, but this question stands for both versions. I would like to delegate permissions to our helpdesk so that they can add new (SQL-authenticated) user accounts to a particular DB and then assign the db_datareader and db_datawriter roles to the account. So, in Management Studio I have created a SQL login for the Helpdesk AD group and assigned the securityadmin server role, then gone to the relevant DB and assigned the db_securityadmin and db_accessadmin roles to this group.

The problem is that when the helpdesk try to create a new user for this DB, they can create a server login but get the following error when assigning DB roles to that user:

TITLE: Microsoft SQL Server Management Studio
Add member failed for DatabaseRole 'db_datareader'. (Microsoft.SqlServer.Smo)
For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=11.0.2100.60+((SQL11_RTM).120210-1917+)&EvtSrc=Microsoft.SqlServer.Management.Smo.ExceptionTemplates.FailedOperationExceptionText&EvtID=Add+member+DatabaseRole&LinkId=20476
ADDITIONAL INFORMATION:
An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)

I cancel out of the message only to find that the server login has been created for the new user but not the DB assignments. The only way I have found to get around this is to add the Helpdesk group to the dbo DB role, but this is just too permissive for them. Surely there is a way to do this without assigning dbo to the Helpdesk?

Thanks in advance
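One hedged workaround, since adding members to the fixed db_datareader/db_datawriter roles effectively requires db_owner on SQL 2008: grant the access through a user-defined role instead, whose membership db_securityadmin can manage with sp_addrolemember (database and role names below are placeholders):

[code="sql"]
USE MyAppDb;
CREATE ROLE helpdesk_managed_access;
GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA::dbo TO helpdesk_managed_access;
-- the helpdesk can then run, per new user:
-- EXEC sp_addrolemember 'helpdesk_managed_access', 'NewSqlUser';
[/code]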

Query sql server agent jobs

Posted: 09 Jul 2013 02:22 AM PDT

Hi,

I am dropping some legacy stored procedures. There are many jobs scheduled on the production server, and I don't know whether the stored procedures I am deleting are still used by any of the jobs. How can I query whether these procedures are used by any of the jobs? At the very least, I have SSIS packages which use the dropped procedures; can I query the jobs to know whether these packages are called or not? I can't manually open each job and check, and sometimes the packages are called from a master package. How can I search?
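A hedged starting point is to search the job step commands stored in msdb (the procedure and package names are placeholders; SSIS package steps show the package path in the same command column):

[code="sql"]
SELECT j.name AS job_name, s.step_id, s.step_name, s.command
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.sysjobsteps AS s ON s.job_id = j.job_id
WHERE s.command LIKE '%usp_MyLegacyProc%'
   OR s.command LIKE '%MyPackage.dtsx%';
[/code]

This cannot see procedures called only from inside a package or a master package; for that, the .dtsx files themselves would have to be searched.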

Cannot open New SSIS Project in SQL Server 2008 R2

Posted: 02 Nov 2012 12:03 PM PDT

I installed SQL Server 2008 R2 and Visual Studio 2008. However, after several installations and un-installations, I could no longer use SSIS to create new projects. I cleaned out the registry with a registry cleaner from Wise and deleted the SQL Server and Visual Studio folders as well. When I create a new SSIS project, I immediately get an error as follows:

"Microsoft Visual Studio"
"An error prevented the view from loading."
"Additional information:"
"Specified Module cannot be Found. (Exception from HRESULT: 0x8007007E)"
"(System.Windows.Forms)"

How do I resolve this problem? What this means is that I cannot create any new SSIS projects.

Cecil

PushD commands from MS

Posted: 08 Jul 2013 11:27 PM PDT

I have a DOS command to delete backup files older than 2 days:

[quote]PushD "\\myuncloc\BackupDumps\" &&( forfiles -s -m *.bak -d -2 -c "cmd /c del /q @path" ) & PopD[/quote]

The code works fine in a cmd prompt. How can I make this work in Management Studio? I tried the code below. I'm not getting an error, only output with value NULL, but when I check the backup location (\\myuncloc\BackupDumps\), the files are not deleted.

[quote]declare @CMDSQL as varchar(5000)
select @CMDSQL = 'PushD "\\myuncloc\BackupDumps\" &&( forfiles -s -m *.bak -d -2 -c "cmd /c del /q @path" ) & PopD'
print @CMDSQL
EXEC master..xp_CMDShell @CMDSQL[/quote]
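One hedged debugging step: xp_cmdshell runs under the SQL Server service account, which may lack rights to that UNC share, and its output can be captured into a table so the actual error text becomes visible instead of NULL:

[code="sql"]
DECLARE @CMDSQL varchar(5000);
SET @CMDSQL = 'PushD "\\myuncloc\BackupDumps\" &&( forfiles -s -m *.bak -d -2 -c "cmd /c del /q @path" ) & PopD';

CREATE TABLE #cmd_out (line nvarchar(4000));
INSERT #cmd_out EXEC master..xp_cmdshell @CMDSQL;
SELECT * FROM #cmd_out;  -- inspect any PushD/forfiles error messages here
[/code]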

FOR XML PATH with Namespace issues

Posted: 09 Jul 2013 12:55 AM PDT

Good day,

I have a SQL query which has to produce FOR XML PATH output for a financial interface between systems. My query fetches data from two views and produces the correct XML output structure as needed by the financial system's importer. The query is as follows:

[code="sql"]
WITH XMLNAMESPACES('eExact-Schema.xsd' AS noNamespaceSchemaLocation,
                   'http://www.w3.org/2001/XMLSchema-instance' AS xsi)
SELECT
    (SELECT TOP (1)
            'E' AS "@status",
            DIVISION AS "Division/@code",
            DESCRIPTION AS "Description",
            INVOICEDATE AS "Date",
            REPORTDATE AS "DocumentDate",
            JOURNAL AS "Journal/@code",
            'I' AS "Journal/@type",
            CC_HEADER AS "Costcenter/@code",
            'TZS' AS "Amount/Currency/@code",
            CASE WHEN CURRENCY <> 'TZS' THEN INV_TOTAL / INV_EXCH ELSE INV_TOTAL END AS "Amount/Value",
            CURRENCY AS "ForeignAmount/Currency/@code",
            INV_TOTAL AS "ForeignAmount/Value",
            (SELECT CASE WHEN ROWNUMBER IS NULL THEN 1
                         WHEN ROWNUMBER = 0 THEN 1
                         WHEN ROWNUMBER > 0 THEN ROWNUMBER + 1 END AS "@number",
                    TRSTYPE AS "@type",
                    TRSSUBTYPE AS "@subtype",
                    REPORTDATE AS "Date",
                    GL AS "GLAccount/@code",
                    DESCRIPTION AS "Description",
                    CC AS "Costcenter/@code",
                    CREDITOR AS "Creditor/@code",
                    RESOURCE AS "Resource/@number",
                    QUANTITY AS "Quantity",
                    'TZS' AS "Amount/Currency/@code",
                    CASE WHEN CURRENCY <> 'TZS' THEN DEBET / INV_EXCH ELSE DEBET END AS "Amount/Debit",
                    0 AS "Amount/Credit",
                    VAT AS "Amount/VAT/@code",
                    VAT AS "VATTransaction/@code",
                    CASE WHEN CURRENCY <> 'TZS' THEN DEBETTAX / INV_EXCH ELSE DEBETTAX END AS "VATTransaction/VATAmount",
                    CASE WHEN CURRENCY <> 'TZS' THEN DEBET / INV_EXCH ELSE DEBET END AS "VATTransaction/VATBaseAmount",
                    'B' AS "Payment/PaymentMethod/@code",
                    '30' AS "Payment/PaymentCondition/@code",
                    YOURREF AS "Payment/Reference",
                    REPORTDATE AS "Delivery/Date",
                    TRSTYPE AS "FinReferences/@TransactionOrigin",
                    YOURREF AS "FinReferences/YourRef"
             FROM CY_EXACTEXPORTINV_FINENTRYLINES
             WHERE CY_EXACTEXPORTINV_HEADER.YOURREF = CY_EXACTEXPORTINV_FINENTRYLINES.YOURREF
             FOR XML PATH('FinEntryLine'), TYPE)
     FOR XML PATH('GLEntry'), TYPE) --AS GLEntry
FROM CY_EXACTEXPORTINV_HEADER
ORDER BY CY_EXACTEXPORTINV_HEADER.YOURREF ASC
FOR XML PATH('GLEntries'), ROOT('eExact')
[/code]

A snippet of the results of the query is as follows:

[code="xml"]
<eExact xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:noNamespaceSchemaLocation="eExact-Schema.xsd">
  <GLEntries>
    <GLEntry xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:noNamespaceSchemaLocation="eExact-Schema.xsd" status="E">
      <Division code="002" />
      <Description>DETAILLED TOPOGRAPHICAL SURVEY OF THE QUARRY</Description>
      <Date>18/12/2012</Date>
      <DocumentDate>18/12/2012</DocumentDate>
      <Journal code="675" type="I" />
      <Costcenter code="311" />
      <Amount>
        <Currency code="TZS" />
        <Value>18086096.00000000</Value>
      </Amount>
      <ForeignAmount>
        <Currency code="TZS" />
        <Value>18086096.000000</Value>
      </ForeignAmount>
      <FinEntryLine xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:noNamespaceSchemaLocation="eExact-Schema.xsd" number="1" type="N" subtype="T">
        <Date>18/12/2012</Date>
        <GLAccount code="513003" />
        <Description>DETAILLED TOPOGRAPHICAL SURVEY OF THE QUARRY</Description>
        <Costcenter code="311" />
        <Creditor code="30850" />
        <Resource number="9244" />
        <Quantity>0</Quantity>
        <Amount>
          <Currency code="TZS" />
          <Debit>15327200.000000</Debit>
          <Credit>0</Credit>
          <VAT code="18" />
        </Amount>
        <VATTransaction code="18">
          <VATAmount>2758896.000000</VATAmount>
          <VATBaseAmount>15327200.000000</VATBaseAmount>
        </VATTransaction>
        <Payment>
          <PaymentMethod code="B" />
          <PaymentCondition code="30" />
          <Reference>000003</Reference>
        </Payment>
        <Delivery>
          <Date>18/12/2012</Date>
        </Delivery>
        <FinReferences TransactionOrigin="N">
          <YourRef>000003</YourRef>
        </FinReferences>
      </FinEntryLine>
    </GLEntry>
  </GLEntries>
  <GLEntries>
    <GLEntry xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:noNamespaceSchemaLocation="eExact-Schema.xsd" status="E">
      <Division code="002" />
      <Description>GYPSUM EX KILWA - HIGH TECHNOLOGY INSTITUTE</Description>
      <Date>14/12/2012</Date>
      <DocumentDate>29/12/2012</DocumentDate>
      <Journal code="675" type="I" />
      <Amount>
        <Currency code="TZS" />
        <Value>5712144.00000000</Value>
      </Amount>
      <ForeignAmount>
        <Currency code="TZS" />
        <Value>5712144.000000</Value>
      </ForeignAmount>
    </GLEntry>
  </GLEntries>
[/code]

The big issue I have is that the namespace declaration at root level is fine, but I cannot find a way to stop it from being repeated in its entirety at the node level (i.e. at the GLEntry and FinEntryLine level). Is this at all possible? I'm using SQL Server 2008 R2. Thanks in advance
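This repetition is a long-standing quirk: WITH XMLNAMESPACES re-declares its namespaces on every element produced by a nested FOR XML ... TYPE subquery. A hedged sketch of the usual workaround: build the inner XML without the XMLNAMESPACES clause, then declare the namespaces only when assembling the root (the inner SELECT below is a stand-in for the real GLEntry/FinEntryLine query):

[code="sql"]
DECLARE @body xml =
    (SELECT 'E' AS "@status"  -- stand-in for the full GLEntry query, run WITHOUT XMLNAMESPACES
     FOR XML PATH('GLEntry'), TYPE);

;WITH XMLNAMESPACES('eExact-Schema.xsd' AS noNamespaceSchemaLocation,
                    'http://www.w3.org/2001/XMLSchema-instance' AS xsi)
SELECT @body
FOR XML PATH('GLEntries'), ROOT('eExact');
[/code]

Since the fragments in @body carry no declarations of their own, the xmlns attributes appear only where the final SELECT introduces them.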

NULL Columns QUery

Posted: 08 Jul 2013 10:00 PM PDT

Hi,

I have a column which contains XML data. If I were to set the values to NULL, would this use less space than the XML data? (No NC index exists on this column, only a clustered index on the PK.)

Thanks
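For what it's worth, a hedged way to measure what the XML column currently occupies before deciding (table and column names are placeholders; DATALENGTH on xml reports the stored binary representation):

[code="sql"]
SELECT COUNT(*) AS xml_rows,
       SUM(DATALENGTH(XmlCol)) / 1048576 AS xml_mb
FROM dbo.MyTable
WHERE XmlCol IS NOT NULL;
[/code]

Setting the values to NULL removes that payload from the rows, but the space inside the database files is not necessarily released without a reorganize/shrink afterwards.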

Mapping a Login to a Database at the Instance level

Posted: 08 Jul 2013 09:05 PM PDT

Friends,

I am trying to map a login to one of the databases in the server instance, but I am unable to do so through SSMS. I have login xxxxxxxx at the instance level and am trying to map it to one of the DBs in this instance, say A. I go to the properties page of the login through the Security folder, go to the User Mapping options, click the check box of the DB that I want, and click OK. The window shows as if it is executing; however, when I check the properties again, the DB remains unchecked. Is there a script with which we can map a DB to a login at the instance level?

Thanks, Murali
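The scripted equivalent of that User Mapping checkbox is CREATE USER run in the target database; a minimal sketch using the poster's placeholder names:

[code="sql"]
USE A;
CREATE USER [xxxxxxxx] FOR LOGIN [xxxxxxxx];
EXEC sp_addrolemember 'db_datareader', 'xxxxxxxx';  -- optional role assignment
[/code]

If the GUI is silently failing, running this directly usually surfaces the underlying error message.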

Failing agent job

Posted: 27 Jun 2013 03:26 AM PDT

Hi,

A strange one is bugging me; I have almost identical code running in another database on the server without issue. I have an agent job which runs:

[code="sql"]DELETE FROM [biostratweb].[dbo].[xx_FreeTextSearch_ABS][/code]

However, I always get an error:

Date 27/06/2013 17:23:50
Log Job History (BiblioWeb Merge and FreeTextSearch Update)
Step ID 2
Server SQLVM
Job Name BiblioWeb Merge and FreeTextSearch Update
Step Name Free Text update
Duration 00:00:00
Sql Severity 16
Sql Message ID 208
Operator Emailed
Operator Net sent
Operator Paged
Retries Attempted 0

Message
Executed as user: NEFTEX\spsqlagent. Invalid object name 'biostratweb.dbo.xx_FreeTextSearch_ABS'. [SQLSTATE 42S02] (Error 208). The step failed.

When I run it directly it works fine. Any ideas? I am running SQL 2008.

Many thanks as always,
Oliver
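A hedged first check is to resolve the three-part name directly on that server, since Error 208 means the name could not be resolved in the context the job ran under (object name taken from the error text above):

[code="sql"]
SELECT OBJECT_ID(N'biostratweb.dbo.xx_FreeTextSearch_ABS') AS resolved_object_id;
-- NULL here would reproduce the job's Error 208; worth checking the step's
-- target server, database context, and the exact database name
[/code]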

datatype convertion

Posted: 08 Jul 2013 08:21 PM PDT

I have a table with a column holding string data in it, and I want to convert it into an integer. How can I do this?

Example: a region column containing the values 'south' and 'north' should be converted into integer form, e.g. region 2 and 3 respectively.
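A hedged sketch based on the example (the table name and the value-to-number mapping are assumptions; a join to a proper lookup table would scale better than a hard-coded CASE):

[code="sql"]
SELECT region,
       CASE region
           WHEN 'south' THEN 2
           WHEN 'north' THEN 3
       END AS region_id
FROM dbo.MyTable;
[/code]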

Tuesday, July 9, 2013

[how to] Can I force mysql server to resolve the external ip to localhost?

Can I force mysql server to resolve the external ip to localhost?

Posted: 09 Jul 2013 08:07 PM PDT

I've finally succeeded with SSH tunneling. My point is to keep the MySQL server open to local users only (user@localhost etc.) while providing remote access to my customers through SSH tunneling.

The problem is that when I connect through the mysql command-line tool (mysql.exe -u root -h 127.0.0.1 --port=8600) I get a permission-denied error for user root@my.servers.ip.address (the tunnel redirects from 127.0.0.1:8600 to 3603 on my remote server).

Well, it makes sense, since my root exists in mysql.user only with the localhost host.

But then, is there a way for me to somehow tell MySQL that this IP address is actually the server's own address and that root should be allowed to connect?

I've got no clue where to start. To me, if such a thing existed, it would be some kind of IP-resolving table...
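Two hedged options, depending on what can change: point the tunnel's remote end at 127.0.0.1 so the connection reaches MySQL as localhost, or create an account for the address the tunnel currently presents (host string and password below are placeholders):

CREATE USER 'root'@'my.servers.ip.address' IDENTIFIED BY '********';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'my.servers.ip.address' WITH GRANT OPTION;
FLUSH PRIVILEGES;

The first option keeps the "local users only" policy intact, so it is probably the cleaner fix.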

Multi-master quorum offsetting performance gain through distribution

Posted: 09 Jul 2013 01:33 PM PDT

Suppose we have a multi-master p2p setup (3 masters) that requires a quorum of 2 to write.

Then, in order to get full consistency, reads also require quorum of 2, because on write, the 3rd server may not be updated and can serve outdated data. Correct?

Then, doesn't the extra quorum IO traffic almost completely offset the purpose of having a multi-master distribution model? Sure, there are three servers instead of one, but each request becomes three requests, so no individual server gets a lighter load.

Additionally, isn't this even worse for 2-phase-commit scenarios?

Thanks!

What is the best data modelling technique for a credit business organization? [on hold]

Posted: 09 Jul 2013 01:33 PM PDT

I would like to know the difference between a notation and a technique. Which notation is most intuitive? Which modelling technique is best when designing a database for a small business organization?

Mysqlbinlog statement reading

Posted: 09 Jul 2013 01:11 PM PDT

I am trying to perform a point in time on a specific database using a snapshot and mysql's transaction logs. I am using the following to pull statements from the time of the snapshot to the time of recovery:

mysqlbinlog --database=**database** --start-datetime="$start_datetime" --stop-datetime="$stop_datetime" list_of_binlog_file_names  

The resulting statements that mysqlbinlog produces include INSERT/UPDATE/DELETE statements for another database that has the form:

INSERT INTO **database**_reporting.tablename VALUES (data);  

So functionally I'm getting statements from two different databases: one is our production database, the other is our reporting database, differentiated in name by the '_reporting' suffix. It would appear that our application is inserting into the secondary database while still using the primary one, and the binlogs associate both statements with the primary database.

Am I correct that mysqlbinlog is going to read out statements for both databases as long as they are executed after a USE primary_database;? Is there a way to separate out the ones for the reporting database using mysqlbinlog? I have searched quite a bit; I may be barking up the wrong tree here. Let me know if I can clarify the question.

How large are blocks of data in SQL Server?

Posted: 09 Jul 2013 01:05 PM PDT

I'm working in SQL Server 2008 R2 and have created a query that gathers and sums the total of data files and log files' sizes. However, I can't find how much actual disk space a single block of SQL data takes up on the disk so I can convert it into something more meaningful.

Here is the script:

DECLARE @DataSize INT
DECLARE @LogSize INT
SELECT @DataSize = SUM(size) from sys.database_files where type_desc = 'Rows'
SELECT @LogSize = SUM(size) from sys.database_files where type_desc = 'Log'
PRINT @DataSize
PRINT @LogSize

How large is one block of space? Would it be easy to convert those two integer variables into something more meaningful for a sysadmin?
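For reference: in sys.database_files, size counts 8 KB pages, so a hedged extension of the script above that converts pages to megabytes looks like this:

DECLARE @DataSize INT
DECLARE @LogSize INT
SELECT @DataSize = SUM(size) FROM sys.database_files WHERE type_desc = 'ROWS'
SELECT @LogSize = SUM(size) FROM sys.database_files WHERE type_desc = 'LOG'
-- pages * 8 KB per page / 1024 = MB
PRINT CAST(@DataSize * 8 / 1024.0 AS varchar(32)) + ' MB data'
PRINT CAST(@LogSize * 8 / 1024.0 AS varchar(32)) + ' MB log'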

Would adding indexes to my foreign keys improve performance on this MySQL query?

Posted: 09 Jul 2013 02:35 PM PDT

Consider the following query:

SELECT
  `locations`.`id` AS `location_id`,
  `locations`.`address`,
  `locations`.`lat`,
  `locations`.`lng`,
  `tickets`.`status_id`,
  `customers`.`name`,
  `tickets`.`id` AS `id`,
  `tickets`.`updated_at` AS `updated_at`,
  ( 3959 * acos( cos( radians('39.78222851322262') ) * cos( radians( `lat` ) ) * cos( radians( `lng` ) - radians('-86.16299560000004') ) + sin( radians('39.78222851322262') ) * sin( radians( `lat` ) ) ) ) AS `distance`
FROM `locations`
RIGHT JOIN `tickets`
  ON (`tickets`.`location_id` = `locations`.`id`)
LEFT JOIN `customers`
  ON (`tickets`.`customer_id` = `customers`.`id`)
WHERE `tickets`.`client_id` = '20'
AND
  (
    `customers`.`name` LIKE '%Mahoney%'
    OR `customers`.`email` LIKE '%Mahoney%'
    OR `locations`.`address` LIKE '%Mahoney%'
  )
HAVING `distance` < '5'
ORDER BY `distance`
LIMIT 200;

Using a profiling tool, I got this report:

Speed: 45.569 ms

Query analysis:

· Query: SIMPLE on tickets · Type: ALL · Rows: 160 (Using where; Using temporary; Using filesort)
· Query: SIMPLE on locations · Possible keys: PRIMARY · Key Used: PRIMARY · Type: eq_ref · Rows: 1
· Query: SIMPLE on customers · Possible keys: PRIMARY · Key Used: PRIMARY · Type: eq_ref · Rows: 1 (Using where)

This is a MySQL database. All the tables are InnoDB with utf8_unicode_ci. The primary keys on each table are called id and are int(11) with indexes on them.

  • Should adding an index on tickets.location_id and/or tickets.customer_id and/or tickets.client_id improve performance of this query at all?
  • Why or why not?
  • Are there any other fields I should consider indexing to improve the efficiency of this query?
  • Should I be explicitly defining my foreign keys in my database schema?

My thinking is that since I'm selecting from locations first, I would want an index on the foreign keys that are being referenced. I read here: indexes, foreign keys and optimization that MySQL requires indexes on all foreign keys. Does explicitly defining the foreign keys in an InnoDB schema improve performance? Sorry for being such a total n00b. Thanks for the help!
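On the first question, a hedged sketch of the indexes in play (index names are illustrative). The biggest likely win is tickets.client_id, since the plan above shows a full scan (Type: ALL) on tickets; note that InnoDB only creates indexes for foreign keys automatically when the FK constraints are actually declared:

ALTER TABLE tickets
  ADD INDEX idx_tickets_client   (client_id),
  ADD INDEX idx_tickets_location (location_id),
  ADD INDEX idx_tickets_customer (customer_id);

The leading-wildcard LIKE '%Mahoney%' predicates cannot use B-tree indexes regardless, so those filters will stay as row scans of the joined result.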

Creating Indexed View GROUP BY Epoch Date

Posted: 09 Jul 2013 04:24 PM PDT

I have a few big tables with about 6 billion rows that I was looking to optimize. The clustered key is Epoch (a Unix date-time, i.e. the number of seconds elapsed since 1970) and customer ID. This table records usage data per customer per product type.

For example, if this were for a telco, TypeID 1 is a local call and the value is how many minutes that customer used. TypeID 2 is an international call, and the value is how many minutes were used in that hour for that customer. Let's say TypeID 3 is a special discounted rate for domestic calling.

The data is stored in 1 hour intervals. I want the indexed view to store the aggregated 24 hour value so when we run a query for 1 day per customer, it has to only look up 1 row in the indexed view instead of 24 rows in the base table.

This is the base table:

ColRowID (bigint)
AggregateID (int)
Epoch (int)
CustomerID (int)
TypeID (tinyint)
ErrorID (smallint)
Value (int)

We don't care about Aggregate or RowID for our reporting purposes, so I figure the indexed view will look like this:

CREATE VIEW [ixvw_AggTbl]
WITH SCHEMABINDING
AS
SELECT Epoch, CustomerID, TypeID, ErrorID, SUM(Value)
FROM DBO.BaseTbl
-- GROUP BY Epoch  (what goes here?? Epoch/86400?  If I do that I have to
-- put Epoch/86400 in the SELECT list as well)

EDIT:

Sample base data ( i left out the columns we don't need in this case, just assume the ID columns are there). Each "TypeID" will have a value assigned to it, which the value can be 0.

For example,

Epoch / Customer ID / TypeID / Value

/* Epoch 90,000 is day 2 1am */
90000 (1am day 2) / 1 / 1 / 200
90000 (1am day 2) / 1 / 2 / 100
90000 (1am day 2) / 1 / 3 / 120

/* Customer ID 2 as well */
90000 (1am day 2) / 2 / 1 / 100
90000 (1am day 2) / 2 / 2 / 50
90000 (1am day 2) / 2 / 3 / 310

... (repeat for 30,000 customers)

/* Customer ID 1, 2am day 2 */
93600 (2am day 2) / 1 / 1 / 150
93600 (2am day 2) / 1 / 2 / 0
93600 (2am day 2) / 1 / 3 / 550

/* Customer ID 2, 2am day 2 */
93600 / 2 / 1 / 80
93600 / 2 / 2 / 150
93600 / 2 / 3 / 300

... (repeat for 30,000 customers)

Let's assume all the other VALUE columns are 0 for the remainder of the day since the system went down and no one could use their phones after 2am. I want my indexed view to record the value column aggregated per day, per customerID and TypeID.

Sample would be:

172800 (Day 3 midnight) / 1 / 1 / 350  -- Cust ID 1: all TypeID 1 aggregated over the past 24 hours
172800 (Day 3 midnight) / 1 / 2 / 100
172800 (Day 3 midnight) / 1 / 3 / 670
172800 (Day 3 midnight) / 2 / 1 / 180  -- Cust ID 2 now
172800 (Day 3 midnight) / 2 / 2 / 200
172800 (Day 3 midnight) / 2 / 3 / 610
-- Repeat by adding 86400 to the epoch to gather the summary data of the rows for the previous day.
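A hedged sketch of the view, grouping on Epoch / 86400 (integer division, which is deterministic) and adding the COUNT_BIG(*) that an indexed view's GROUP BY requires; it assumes Value is declared NOT NULL, since indexed views cannot SUM a nullable expression:

CREATE VIEW dbo.ixvw_AggTbl
WITH SCHEMABINDING
AS
SELECT Epoch / 86400 AS DayNumber,  -- whole days since the epoch
       CustomerID, TypeID, ErrorID,
       SUM(Value)   AS DayValue,
       COUNT_BIG(*) AS RowCnt       -- required by indexed views with GROUP BY
FROM dbo.BaseTbl
GROUP BY Epoch / 86400, CustomerID, TypeID, ErrorID;
GO
CREATE UNIQUE CLUSTERED INDEX IX_ixvw_AggTbl
    ON dbo.ixvw_AggTbl (DayNumber, CustomerID, TypeID, ErrorID);

Queries would then filter on DayNumber (epoch / 86400) rather than the raw Epoch.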

Audit Queries in SQL 2008 Including Start Stop Times

Posted: 09 Jul 2013 12:56 PM PDT

I have SQL Server 2008 Enterprise. I know how to set up auditing on the SQL server, but it looks like the audit logs don't include query start/stop times. We currently use a product called Splunk to do a lot of data analysis at our organization. I would like to be able to feed Splunk our machine performance data along with a SQL query audit log to be able to correlate high CPU usage to long running queries and things like that. However, I would need query start/stop times to be able to do that. It doesn't look like the built in auditing has that ability. Is there any other way to achieve this?
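One hedged possibility on 2008 is an Extended Events session on the statement-completed events, which carry a duration and are timestamped at completion (so the start time can be derived). The session below is only a sketch, with a placeholder file path, using the 2008-era file target name:

CREATE EVENT SESSION [query_durations] ON SERVER
ADD EVENT sqlserver.sql_statement_completed
    (ACTION (sqlserver.session_id, sqlserver.sql_text))
ADD TARGET package0.asynchronous_file_target
    (SET filename = N'C:\XE\query_durations.xel',
         metadatafile = N'C:\XE\query_durations.xem');
GO
ALTER EVENT SESSION [query_durations] ON SERVER STATE = START;

The .xel output could then be shredded and fed to Splunk alongside the machine performance data.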

Why does Log Shipping .TRN file copy just stop

Posted: 09 Jul 2013 12:57 PM PDT

I apologize in advance for a long post, but I have had it up to here with this error and with having to delete the LS configuration and start it over for any DB that gets it.

I have LS set up on three Win2k8R2 servers (primary, secondary, monitor), with the transaction logs of 100 databases backed up and shipped from the primary to the secondary and monitored by the monitor. Backups and copies run every 15 minutes, and the ones older than 24 hours are deleted. Some DBs are very active and some not so much, but all are shipped regardless for uniformity's sake (basically to make the secondary server identical to the primary). Some DBs are for SP2010 and the majority are for an in-house app.

The issue is that after all the LS configs are set up, everything works well for about 3 to 4 days; then I go to the Transaction LS Status report on the secondary and see that, seemingly at random, some LS jobs have an Alert Status because the time since the last copy is over 45 minutes, so no restore has occurred. The only errors I see are from an SP2010 DB (WebAnalyticsServiceApplication_ReportingDB_77a60938_##########), which I believe is a reports DB that gets created weekly, where LS cannot figure out which is the last copy to back up or restore; I posted here about that and have yet to find a permanent solution. For my main error (time since last copy) I have not seen anything that could have caused it, and I don't get any messages (even though some alert statuses have been ignored for 3 days). Anyway, I would really appreciate any input on understanding what's causing this and how I could fix it. Thanks.

How to determine master in mysql master-slave

Posted: 09 Jul 2013 09:07 PM PDT

I am setting up MySQL Master-slave replication and I am trying to figure out how to handle the failover situation where I promote the slave to master (in the event that the master goes down).

My application server needs to direct all writes to the current master, but I cannot use server level HA between the master and slave (heartbeat, keepalived) since the two db servers are on completely different subnets in different physical locations.

I think this is something that I need to handle at the application level. I can query the two servers and ask which one is a master, then perform all queries to that one.

Is there a query in MySQL to see if the current server is a master in a master-slave replica?
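For reference, a hedged pair of checks: on a replica, SHOW SLAVE STATUS returns a row, while on a server that is not replicating from anyone it returns an empty set.

SHOW SLAVE STATUS;   -- non-empty result: this server is a slave
SHOW MASTER STATUS;  -- shows the binlog position being written (requires log-bin)

Note this only reports the replication roles as configured on each server; it cannot by itself arbitrate a failover.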

MySQL 5.6 Delayed Replication - cancel delay questions

Posted: 09 Jul 2013 02:13 PM PDT

I'm using the MySQL 5.6 Master/Slave replication, with Delay set for 2 hours.

My questions are :

Q1. What is the proper way to cancel the Delay at the slave - i.e. "roll forward" all the changes executed at the Master ?

Q2. Let's say that the Master became totally unavailable. What is the proper way to cancel the Delay and "roll forward" the Slave so, that it will apply all changes from Master ?

I've tried to:

stop slave;
change master to master_delay = 0;
start slave;

(At this moment the Master is still unavailable, and the Slave IO thread status is Connecting.) After the steps listed above, the relay log bin files are deleted at the Slave host and the Delay value becomes 0, but the changes from the Master are NOT applied...

If anybody could please provide some useful tips, I would very much appreciate it.

Best regards, Avi Vainshtein

Oracle OEM Database Backup Failure

Posted: 09 Jul 2013 02:19 PM PDT

I am trying to back up an oracle database from OEM, but upon completion the job report says the job failed with the following error:

RMAN-03002: failure of backup command at 07/09/2013 10:26:52  RMAN-06059: expected archived log not found, loss of archived log compromises recoverability  ORA-19625: error identifying file C:\APP\RM\FLASH_RECOVERY_AREA\RONNIE\ARCHIVELOG\2013_06_10\O1_MF_1_1508_8VDHNOLY_.ARC  ORA-27041: unable to open file  OSD-04002: unable to open file  

A DBA set this database up but he is not currently available. I have tried running a crosscheck and a delete expired in RMAN but experienced the same problem when trying again.

The path it is looking in: C:\APP\RM\FLASH_RECOVERY_AREA\RONNIE\ARCHIVELOG\2013_06_10\O1_MF_1_1508_8VDHNOLY_.ARC

...does not exist on the server, but it did exist on an original machine from which the DBA copied the database. The path containing the archive log now is:

C:\APP\CS\FLASH_RECOVERY_AREA\RONNIE\ARCHIVELOG\2013_06_10\O1_MF_1_1508_8VDHNOLY_.ARC

Can anyone help?

Thanks
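A hedged next step, given that the logs now live under the new path quoted above: register that directory with RMAN so it stops looking for the old copies, then re-run the crosscheck so the stale records can be expired (standard RMAN commands; the path is the one from the question):

CATALOG START WITH 'C:\APP\CS\FLASH_RECOVERY_AREA\RONNIE\ARCHIVELOG';
CROSSCHECK ARCHIVELOG ALL;
DELETE EXPIRED ARCHIVELOG ALL;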

ORA-40341: Access violation on model storage object in Oracle?

Posted: 09 Jul 2013 08:31 PM PDT

While I was trying to drop a table, it throws following error in Oracle SQL Developer:

ORA-40341: Access violation on model storage object  

The tables are temporary tables created while pushing into the database using Oracle R Enterprise. The names of the tables are: DM$PRORE$21_473, DM$PGORE$21_473, ...

I need to drop all these tables as these tables have occupied large space of my database. While googling, I found this link but it provides no solution clues.
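DM$-prefixed tables are the storage objects behind data mining models (here created via Oracle R Enterprise), which is why a plain DROP TABLE raises ORA-40341; the usual route is to drop the owning model rather than the table. A heavily hedged sketch, assuming the model name is the part after the DM$Px prefix of the tables listed above:

BEGIN
  DBMS_DATA_MINING.DROP_MODEL('ORE$21_473');  -- assumed model name, derived from DM$PRORE$21_473
END;
/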

Cumulative Game Score SQL

Posted: 09 Jul 2013 08:11 PM PDT

I have developed a game recently and the database is running on MSSQL.

Here is my database structure

Table : Player

PlayerID uniqueIdentifier (PK)
PlayerName nvarchar

Table : GameResult

ID bigint (PK - Auto Increment)
PlayerID uniqueIdentifier (FK)
DateCreated Datetime
Score int
TimeTaken bigint
PuzzleID int

I have written SQL listing the Top 50 players, sorted by highest score (DESC) and time taken (ASC). The SQL below gets the result for each puzzle ID; I'm not sure if it is 100% right, but I believe it is correct.

;WITH ResultSet (PlayerID, maxScore, minTime, playedDate)
AS
(
    SELECT TOP 50 PlayerID,
           MAX(score) AS maxScore,
           MIN(timetaken) AS minTime,
           MIN(datecreated) AS playedDate
    FROM gameresult
    WHERE puzzleID = @PuzzleID
    GROUP BY PlayerID
    ORDER BY maxScore DESC, minTime ASC, playedDate ASC
)
SELECT RSP.[PlayerID], RSP.[PlayerName], RSA.maxScore, RSA.minTime, RSA.playedDate
FROM ResultSet RSA
INNER JOIN Player RSP WITH (NOLOCK)
    ON RSA.PlayerID = RSP.PlayerID
ORDER BY
    maxScore DESC,
    minTime ASC,
    playedDate ASC

Question

1) I need to modify the SQL to do a cumulative rank of 3 puzzle ID. For example, Puzzle 1, 2, 3 and it should be sort by highest sum score (DESC), and sum timetaken (ASC)

2) I also need an overall score population for all the possible 1 to 7 puzzle.

3) Each player only allowed to appear on the list once. First played and first to get highest score will be rank 1st.

I tried using CTE with UNION but the SQL statement doesn't work.

I hope gurus here can help me out on this. Much appreciated.
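A hedged sketch for question 1, aggregating each player's best per-puzzle result across puzzles 1-3 before ranking (same tables as above; for question 2 the IN list would simply cover puzzles 1-7):

;WITH PerPuzzle AS
(
    SELECT PlayerID, PuzzleID,
           MAX(Score)       AS maxScore,
           MIN(TimeTaken)   AS minTime,
           MIN(DateCreated) AS playedDate
    FROM GameResult
    WHERE PuzzleID IN (1, 2, 3)
    GROUP BY PlayerID, PuzzleID
)
SELECT TOP 50
       p.PlayerID, p.PlayerName,
       SUM(pp.maxScore)   AS totalScore,
       SUM(pp.minTime)    AS totalTime,
       MIN(pp.playedDate) AS firstPlayed
FROM PerPuzzle pp
INNER JOIN Player p ON p.PlayerID = pp.PlayerID
GROUP BY p.PlayerID, p.PlayerName
ORDER BY totalScore DESC, totalTime ASC, firstPlayed ASC;

Grouping by PlayerID also guarantees each player appears on the list only once (question 3).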

TOAST Table Growth Out of Control - FULLVAC Does Nothing

Posted: 09 Jul 2013 12:51 PM PDT

Recently, I've had a PostgreSQL 8.2.11 server upgraded to 8.4 in order to take advantage of autovacuum features and be in line with 30ish other PGSQL servers. This was done by a separate IT group who administrates the hardware, so we don't have much choice on any other upgrades (won't see 9+ for a while). The server exists in a very closed environment (isolated network, limited root privileges) and runs on RHEL5.5 (i686). After the upgrade, the database has constantly been growing to the tune of 5-6 GB a day. Normally, the database, as a whole, is ~20GB; currently, it is ~89GB. We have a couple other servers which run equivalent databases and actually synchronize the records to each other via a 3rd party application (one I do not have access to the inner workings). The other databases are ~20GB as they should be.

Running the following SQL, it's fairly obvious there's an issue with a particular table, and, more specifically, its TOAST table.

SELECT nspname || '.' || relname AS "relation",
       pg_size_pretty(pg_relation_size(C.oid)) AS "size"
FROM pg_class C
LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE nspname NOT IN ('pg_catalog', 'information_schema')
ORDER BY pg_relation_size(C.oid) DESC
LIMIT 20;

Which produces:

         relation          |  size
---------------------------+---------
 pg_toast.pg_toast_16874   | 89 GB
 fews00.warmstates         | 1095 MB
 ...
(20 rows)

This TOAST table is for a table called "timeseries" which saves large records of blobbed data. A SUM(LENGTH(blob)/1024./1024.) of all the records in timeseries yields ~16GB for that column. There should be no reason this table's TOAST table should be as large as it is.

I've performed a VACUUM FULL VERBOSE ANALYZE timeseries, and the vacuum runs to completion with no errors.

INFO: vacuuming "pg_toast.pg_toast_16874"
INFO: "pg_toast_16874": found 22483 removable, 10475318 nonremovable row versions in 10448587 pages
DETAIL: 0 dead row versions cannot be removed yet.
Nonremovable row versions range from 37 to 2036 bytes long.
There were 20121422 unused item pointers.
Total free space (including removable row versions) is 0 bytes. 4944885 pages are or will become empty, including 0 at the end of the table. 4944885 pages containing 0 free bytes are potential move destinations.
CPU 75.31s/29.59u sec elapsed 877.79 sec.
INFO: index "pg_toast_16874_index" now contains 10475318 row versions in 179931 pages
DETAIL: 23884 index row versions were removed.
101623 index pages have been deleted, 101623 are currently reusable.
CPU 1.35s/2.46u sec elapsed 21.07 sec.

REINDEXed the table which freed some space (~1GB). I can't CLUSTER the table as there isn't enough space on disk for the process, and I'm waiting to rebuild the table entirely as I'd like to find out why it is so much bigger than equivalent databases we have.

Ran a query from the PostgreSQL wiki here - "Show Database Bloat", and this is what I get:

 current_database | schemaname |         tablename          | tbloat | wastedbytes |           iname            | ibloat | wastedibytes
------------------+------------+----------------------------+--------+-------------+----------------------------+--------+--------------
 ptrdb04          | fews00     | timeseries                 |    1.0 |           0 | idx_timeseries_synchlevel  |    0.0 |            0
 ptrdb04          | fews00     | timeseries                 |    1.0 |           0 | idx_timeseries_localavail  |    0.0 |            0
 ptrdb04          | fews00     | timeseries                 |    1.0 |           0 | idx_timeseries_expirytime  |    0.0 |            0
 ptrdb04          | fews00     | timeseries                 |    1.0 |           0 | idx_timeseries_expiry_null |    0.0 |            0
 ptrdb04          | fews00     | timeseries                 |    1.0 |           0 | uniq_localintid            |    0.0 |            0
 ptrdb04          | fews00     | timeseries                 |    1.0 |           0 | pk_timeseries              |    0.1 |            0
 ptrdb04          | fews00     | idx_timeseries_expiry_null |    0.6 |           0 | ?                          |    0.0 |            0

It looks like the database doesn't consider this space as "empty," at all, but I just don't see where all the disk space is coming from!

I suspect that this database server is deciding to use 4-5x as much disk space to save the same records pulled from the other data servers. My question is this: Is there a way I can verify the physical disk size of a row? I'd like to compare the size of one row on this database to another "healthy" database.

Thanks for any help you can provide!
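On the question of verifying the physical size of a row: a hedged approach is pg_column_size(), which reports the stored (post-compression) size of a value, so the same rows can be compared between this server and a "healthy" one (the blob column name follows the description above):

SELECT pg_column_size(t.*)    AS row_bytes,
       pg_column_size(t.blob) AS blob_bytes
FROM   fews00.timeseries AS t
LIMIT  10;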

UPDATE 1

I ended up rebuilding the table from a dumped schema due to its size (couldn't leave it alone for another day). After synchronizing the data, via the software synch process, the TOAST table was ~35GB; however, I could only account for ~9GB of it from that blob column which should be the longest in terms of values. Not sure where the other 26GB is coming from. CLUSTERed, VACUUM FULLed, and REINDEXed to no avail. The postgresql.conf files between the local and remote data servers are exactly the same. Is there any reason this database might be trying to store each record with a larger space on disk?

UPDATE 2 - Fixed

I finally decided to just completely rebuild the database from the ground up- even going as far as to reinstall the PostgreSQL84 packages on the system. The database path was reinitialized and tablespaces wiped clean. The 3rd party software synchronization process repopulated the tables, and the final size came out to be ~12GB! Unfortunately, this, in no way, helps to solve what the exact source of the issue was here. I'm going to watch it for a day or two and see if there are any major differences with how the revitalized database is handling the TOAST table and post those results here.

Relation Size

ptrdb04=> SELECT nspname || '.' || relname AS "relation",
ptrdb04->     pg_size_pretty(pg_relation_size(C.oid)) AS "size"
ptrdb04->   FROM pg_class C
ptrdb04->   LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
ptrdb04->   WHERE nspname NOT IN ('pg_catalog', 'information_schema')
ptrdb04->   ORDER BY pg_relation_size(C.oid) DESC
ptrdb04->   LIMIT 2;

        relation         |  size
-------------------------+---------
 pg_toast.pg_toast_17269 | 18 GB
 fews00.warmstates       | 1224 MB
(2 rows)

VACUUM VERBOSE ANALYZE timeseries;

INFO:  "timeseries": found 12699 removable, 681961 nonremovable row versions in 58130 out of 68382 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 105847 unused item pointers.
0 pages are entirely empty.
CPU 0.83s/2.08u sec elapsed 33.36 sec.
INFO:  vacuuming "pg_toast.pg_toast_17269"
INFO:  scanned index "pg_toast_17269_index" to remove 2055849 row versions
DETAIL:  CPU 0.37s/2.92u sec elapsed 13.29 sec.
INFO:  "pg_toast_17269": removed 2055849 row versions in 518543 pages
DETAIL:  CPU 8.60s/3.21u sec elapsed 358.42 sec.
INFO:  index "pg_toast_17269_index" now contains 7346902 row versions in 36786 pages
DETAIL:  2055849 index row versions were removed.
10410 index pages have been deleted, 5124 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.01 sec.
INFO:  "pg_toast_17269": found 1286128 removable, 2993389 nonremovable row versions in 1257871 out of 2328079 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 18847 unused item pointers.
0 pages are entirely empty.
CPU 26.56s/13.04u sec elapsed 714.97 sec.
INFO:  analyzing "fews00.timeseries"
INFO:  "timeseries": scanned 30000 of 68382 pages, containing 360192 live rows and 0 dead rows; 30000 rows in sample, 821022 estimated total rows

The only noticeable difference after the rebuild (other than disk usage) is

INFO:  "pg_toast_17269": found 1286128 removable, 2993389 nonremovable row versions
as @CraigRinger mentioned in a comment. The nonremovable row count is much smaller than before.

New question: Can other tables affect the size of another table? (via foreign keys and such) Rebuilding the table did nothing, yet rebuilding the whole database proved to fix the problem.

mysqldump freezing on a specific table

Posted: 09 Jul 2013 12:57 PM PDT

I dumped a database (sys_data) which is very big (800GB, all data in one ibdata file) from a remote server. But the dump was blocked at a table (tb_trade_376). My dump command:

mysqldump -uxx -pxx -h192.168.1.xxx --single-transaction sys_data > /home/sys_data.sql

When the dump was blocked:

show processlist;

5306612 | root | 192.168.1.161:57180 | sys_data | Query | 23955 | Sending data | SELECT /*!40001 SQL_NO_CACHE */ * FROM `tb_trade_376`

On the other hand I can dump the table tb_trade_376 successfully if I just dump the table only.

mysqldump -uxx -pxx -h192.168.1.xxx \
  --single-transaction sys_data tb_trade_376 > /home/tb_trade_376.sql

This works well and quickly! The table tb_trade_376 has about 700,000-800,000 rows.

What is the next step in investigating why I can't dump the whole database? How can I make it work?

Longest prefix search in Oracle

Posted: 09 Jul 2013 08:38 PM PDT

I have a list of phone number prefixes defined for a large number of zones (in the query, a zone is identified by gvcode and cgi). I need to efficiently find the longest prefix that matches a given number PHONE_NR.

I use an inverted LIKE clause on the field digits (which contains prefixes in the form +48%, +49%, +1%, +1232% and so on).

Therefore I can't use a normal index on that field.

I managed to get a substantial improvement by using an IOT on the gvcode and cgi fields (which are the first two columns of the primary key). I also looked at some Oracle Text indexes but can't find one that will match longer input against a shorter prefix in the table.

Is there any other way to perform such a search that is faster than this approach?

Here is the query, which gives a list of all matched prefixes (I sort it afterwards on digits length):

select t.gvcode, t.digits
from NUMBERS t
where t.gvcode = ZONE_SET_CODE
  and t.cgi = cgi_f
  and (PHONE_NR like t.digits)
order by length(digits) desc
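A hedged alternative that can use a normal index on digits: expand the dialled number into all of its own prefixes and join with equality instead of LIKE. This assumes the prefixes are stored without the trailing %, e.g. '+48' rather than '+48%':

SELECT *
FROM  (SELECT t.gvcode, t.digits
       FROM   NUMBERS t
       JOIN  (SELECT SUBSTR(:phone_nr, 1, LEVEL) AS pfx
              FROM   dual
              CONNECT BY LEVEL <= LENGTH(:phone_nr)) p
         ON   t.digits = p.pfx
       WHERE  t.gvcode = :zone_set_code
       AND    t.cgi    = :cgi
       ORDER  BY LENGTH(t.digits) DESC)
WHERE ROWNUM = 1;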

In place upgrade from MySQL 5.5 to 5.6.11 removes all users from user table

Posted: 09 Jul 2013 01:52 PM PDT

On Windows, I upgraded from 5.1 to 5.5 no problem.

  1. Copied my 5.1 data folder into my 5.5 instance
  2. Started mysqld skipping grants
  3. Ran mysql_upgrade

All good, but going from 5.5 to 5.6:

  1. Copied 5.5 data folder to 5.6 instance
  2. Started mysqld skipping grants
  3. Ran mysql_upgrade

but I get:

C:\Users\QAdmin>mysql_upgrade
Looking for 'mysql.exe' as: C:\Program Files\MySQL\MySQL Server 5.6\bin\mysql.exe
Looking for 'mysqlcheck.exe' as: C:\Program Files\MySQL\MySQL Server 5.6\bin\mysqlcheck.exe
Running 'mysqlcheck' with connection arguments: "--port=3306"
Running 'mysqlcheck' with connection arguments: "--port=3306"
mysql.user_info                                    OK
Running 'mysql_fix_privilege_tables'...
Running 'mysqlcheck' with connection arguments: "--port=3306"
C:\Program Files\MySQL\MySQL Server 5.6\bin\mysqlcheck.exe: Got error: 1130: Host 'localhost' is not allowed to connect to this MySQL server when trying to connect
FATAL ERROR: Upgrade failed

If I look at the mysql.user table it is completely empty.

  • Has anyone seen this or know what is going on?
  • During the "upgrade" the user table gets erased and when it tries to connect it can't?

Thanks.

SSRS 2008 R2 setup issue

Posted: 09 Jul 2013 04:52 PM PDT

I have installed SSRS 2008 R2 on my desktop and server. When I hit the reports link on my desktop

http://mypc/Reports_mypc/Pages/Folder.aspx  

all I get to see is this home page of the desktop ssrs instance

I can't create a new folder or data source or anything of the sort.

On the server where I am attempting to set up SSRS 2008 R2, all I get is a white screen that shows the virtual folder name in a large font, followed by the version of the Reporting Services server on the next line. This leaves me no clues as to what needs to be fixed. On both PCs I am using the credentials of the local admin. Any clues on what needs to be fixed?

Get all the database in db2 through web

Posted: 09 Jul 2013 02:17 PM PDT

I would like to get all the databases available on a DB2 instance from a C# application. The CLP command for this is LIST DB DIRECTORY.

How can I fire this command from C#? Is there another select statement I can use to get all the databases on one db2 instance?

Is there an effective way to create extended event data that includes stacktraces in SQL Server on a local db instance?

Posted: 09 Jul 2013 06:43 PM PDT

I read Paul Randal's article on getting the symbols for SQL Server. I have a theory that I could get Extended Events to read the symbols from the symbol server cache directory by setting the _NT_SYMBOL_PATH environment variable. The only problem is I can't create the right conditions for any data to show up when I create an Extended Events session with sqlos.spinlock_backoff. I tried the script in this PDF.

I tried HammerDB and I tried this script with AdventureWorks. I don't get any spinlocks. I tried setting max server memory at 256 MB (my default setting on my laptop) and I tried it at 2 GB. The Extended Events session was running, but no data showed up.

Database design for an E-commerce website

Posted: 09 Jul 2013 06:52 PM PDT

I am new to database design. I am designing a database for an e-commerce website with a lot of products to maintain, but when designing the product specification table I do not understand whether I should put all the specifications in one table or use different tables for different products.

For example, consider the products mobile and book, each having unique specifications (such as color, size, cost, and model for a mobile, and title, ISBN, author, cost, year_of_publication, etc. for a book). With only a few products this is easy to design, but with thousands of products it takes a lot of time. Can anyone tell me how to design my database to manage this situation?
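One common pattern for this situation is entity-attribute-value, where per-product specifications become rows instead of columns; a hedged sketch (all names are illustrative, and the usual trade-off is this generic model vs. one attribute table per category):

CREATE TABLE product (
    product_id   INT PRIMARY KEY,
    name         VARCHAR(200) NOT NULL,
    category_id  INT NOT NULL,
    cost         DECIMAL(10,2) NOT NULL
);

CREATE TABLE attribute (
    attribute_id INT PRIMARY KEY,
    name         VARCHAR(100) NOT NULL  -- e.g. 'color', 'ISBN', 'author'
);

CREATE TABLE product_attribute (
    product_id   INT NOT NULL,
    attribute_id INT NOT NULL,
    value        VARCHAR(400) NOT NULL,
    PRIMARY KEY (product_id, attribute_id),
    FOREIGN KEY (product_id) REFERENCES product (product_id),
    FOREIGN KEY (attribute_id) REFERENCES attribute (attribute_id)
);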

Can anyone tell me how to design my database to manage this situation?

Shrink database operation in maintenance plan failed

Posted: 09 Jul 2013 11:17 AM PDT

See updates below

I have a nightly DB maintenance plan and it's causing some strange behaviour.

About 1 out of 10 runs, it causes the DB to stop responding to queries. It takes about 5 minutes for the maintenance plan to complete, at which point the DB starts responding again.

The error logs don't seem to point to any problem at all:

2013-01-13 00:00:56.73 spid23s     This instance of SQL Server has been using a process ID of 1488 since 12/28/2012 10:06:36 AM (local) 12/28/2012 6:06:36 PM (UTC). This is an informational message only; no user action is required.
2013-01-13 22:00:06.07 spid60      Configuration option 'user options' changed from 0 to 0. Run the RECONFIGURE statement to install.
2013-01-13 22:00:06.07 spid60      FILESTREAM: effective level = 0, configured level = 0, file system access share name = 'MSSQLSERVER'.
2013-01-13 22:01:03.78 spid64      DBCC CHECKDB (RackAttackNorthAmerica) WITH no_infomsgs executed by sa found 0 errors and repaired 0 errors. Elapsed time: 0 hours 0 minutes 57 seconds.  Internal database snapshot has split point LSN = 0000942b:000000d9:0001 and first LSN = 0000942b:000000d8:0001.
2013-01-13 22:02:33.13 spid80      I/O is frozen on database model. No user action is required. However, if I/O is not resumed promptly, you could cancel the backup.
2013-01-13 22:02:33.13 spid81      I/O is frozen on database msdb. No user action is required. However, if I/O is not resumed promptly, you could cancel the backup.
2013-01-13 22:02:33.13 spid82      I/O is frozen on database RackAttackNorthAmerica. No user action is required. However, if I/O is not resumed promptly, you could cancel the backup.
2013-01-13 22:02:34.13 spid83      I/O is frozen on database master. No user action is required. However, if I/O is not resumed promptly, you could cancel the backup.
2013-01-13 22:02:34.48 spid83      I/O was resumed on database master. No user action is required.
2013-01-13 22:02:34.48 spid80      I/O was resumed on database model. No user action is required.
2013-01-13 22:02:34.48 spid81      I/O was resumed on database msdb. No user action is required.
2013-01-13 22:02:34.70 spid82      I/O was resumed on database RackAttackNorthAmerica. No user action is required.
2013-01-13 22:02:35.07 Backup      Database backed up. Database: master, creation date(time): 2012/12/28(10:05:56), pages dumped: 379, first LSN: 685:336:83, last LSN: 685:384:1, number of dump devices: 1, device information: (FILE=1, TYPE=VIRTUAL_DEVICE: {'{6E5AF82F-6BBD-4343-9DA3-286FD7EA5C0E}4'}). This is an informational message only. No user action is required.
2013-01-13 22:02:35.56 Backup      Database backed up. Database: model, creation date(time): 2003/04/08(09:13:36), pages dumped: 170, first LSN: 101:1440:37, last LSN: 101:1464:1, number of dump devices: 1, device information: (FILE=1, TYPE=VIRTUAL_DEVICE: {'{6E5AF82F-6BBD-4343-9DA3-286FD7EA5C0E}1'}). This is an informational message only. No user action is required.
2013-01-13 22:02:35.69 Backup      Database backed up. Database: msdb, creation date(time): 2008/07/09(16:46:27), pages dumped: 1948, first LSN: 1647:152:131, last LSN: 1647:216:1, number of dump devices: 1, device information: (FILE=1, TYPE=VIRTUAL_DEVICE: {'{6E5AF82F-6BBD-4343-9DA3-286FD7EA5C0E}2'}). This is an informational message only. No user action is required.
2013-01-13 22:02:35.77 Backup      Database backed up. Database: RackAttackNorthAmerica, creation date(time): 2011/01/21(10:37:38), pages dumped: 193713, first LSN: 37940:102:1, last LSN: 37946:493:1, number of dump devices: 1, device information: (FILE=1, TYPE=VIRTUAL_DEVICE: {'{6E5AF82F-6BBD-4343-9DA3-286FD7EA5C0E}3'}). This is an informational message only. No user action is required.
2013-01-13 22:06:48.94 Backup      Database backed up. Database: RackAttackNorthAmerica, creation date(time): 2011/01/21(10:37:38), pages dumped: 168202, first LSN: 37974:59:36, last LSN: 37974:83:1, number of dump devices: 1, device information: (FILE=1, TYPE=DISK: {'D:\DB_Backups\RackAttackNorthAmerica_backup_2013_01_13_220612_2311062.bak'}). This is an informational message only. No user action is required.

I'm not quite sure what info is needed to diagnose the problem.

Please let me know what you need and I'll do my best to get it.

I managed to find this error; not sure if it helps:

Executing the query "DBCC SHRINKDATABASE(N'RackAttackNorthAmerica', 10,..." failed with the following error: "Backup, file manipulation operations (such as ALTER DATABASE ADD FILE) and encryption changes on a database must be serialized. Reissue the statement after the current backup or file manipulation operation is completed.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.  

Output of sys.dm_os_waiting_tasks:

waiting_task_address  session_id  exec_context_id  wait_duration_ms  wait_type                    resource_address    blocking_task_address  blocking_session_id  blocking_exec_context_id  resource_description
0x0000000003FAE508    3           0                4907732           XE_DISPATCHER_WAIT           NULL                NULL                   NULL                 NULL                      NULL
0x0000000003FAE988    15          0                11555964          BROKER_EVENTHANDLER          NULL                NULL                   NULL                 NULL                      NULL
0x00000000040E8748    5           0                33                LAZYWRITER_SLEEP             NULL                NULL                   NULL                 NULL                      NULL
0x0000000004102508    6           0                9962              LOGMGR_QUEUE                 0x0000000003413528  NULL                   NULL                 NULL                      NULL
0x0000000004102748    17          0                1480291018        BROKER_TRANSMITTER           NULL                NULL                   NULL                 NULL                      NULL
0x000000000411C508    4           0                199               REQUEST_FOR_DEADLOCK_SEARCH  0x00000000089A01F8  NULL                   NULL                 NULL                      NULL
0x000000000411C748    10          0                741               SLEEP_TASK                   NULL                NULL                   NULL                 NULL                      NULL
0x0000000004136508    7           0                1480284374        KSOURCE_WAKEUP               NULL                NULL                   NULL                 NULL                      NULL
0x0000000004150508    12          0                55058             CHECKPOINT_QUEUE             0x00000000034135A0  NULL                   NULL                 NULL                      NULL
0x0000000004150BC8    NULL        NULL             12814             FT_IFTS_SCHEDULER_IDLE_WAIT  NULL                NULL                   NULL                 NULL                      NULL
0x000000000416A508    11          0                1480314263        ONDEMAND_TASK_QUEUE          0x00000000033F3BE0  NULL                   NULL                 NULL                      NULL
0x000000000416A988    16          0                1480291026        BROKER_TRANSMITTER           NULL                NULL                   NULL                 NULL                      NULL
0x0000000004904748    2           0                17621             XE_TIMER_EVENT               NULL                NULL                   NULL                 NULL                      NULL
0x00000000804222C8    9           0                705               SQLTRACE_BUFFER_FLUSH        NULL                NULL                   NULL                 NULL                      NULL
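
All of the waits above are benign system idle waits, for what that's worth. A hedged sketch of a more targeted query that surfaces only tasks actually blocked by another session:

[code="sql"]-- Sketch: show only tasks blocked by another session,
-- filtering out the idle system waits seen above.
SELECT session_id,
       wait_duration_ms,
       wait_type,
       blocking_session_id,
       resource_description
FROM sys.dm_os_waiting_tasks
WHERE blocking_session_id IS NOT NULL;[/code]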

Update:

DB shrink removed. Still hanging. I'm 99% sure it's the rebuild index task. Here are a few example lines from the generated SQL:

ALTER INDEX [childCategories] ON [dbo].[category] REBUILD PARTITION = ALL WITH ( PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = OFF, ONLINE = OFF, SORT_IN_TEMPDB = OFF )
ALTER INDEX [mfgID] ON [dbo].[category] REBUILD PARTITION = ALL WITH ( PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = OFF, ONLINE = OFF, SORT_IN_TEMPDB = OFF )

Update 2013-07-09: I'm still having problems with this maintenance plan. The shrink operation was removed a few months ago and the rebuild index task was modified, but the failures continue, and the web site driven by this DB still goes down for a few minutes on each failure. I managed to dig up the job history, and it appears that the update statistics task is now failing. See the log below:

Executed as user: WORKGROUP\WNN1106$. ...
Source: Update Statistics Task  Executing query "use [RackAttackNorthAmerica]".: 31% complete  End Progress
Progress: 2013-07-08 23:04:37.03  Source: Update Statistics Task  Executing query "UPDATE STATISTICS [dbo].[news] WITH FULLSCAN".: 32% complete  End Progress
Progress: 2013-07-08 23:04:37.03  Source: Update Statistics Task  Executing query "use [RackAttackNorthAmerica]".: 33% complete  End Progress
Progress: 2013-07-08 23:04:42.65  Source: Update Statistics Task  Executing query "UPDATE STATISTICS [dbo].[orderItems] WITH FULLSC...".: 33% complete  End Progress
Progress: 2013-07-08 23:04:42.65  Source: Update Statistics Task  Executing query "use [RackAttackNorthAmerica]".: 34% complete  End Progress
Progress: 2013-07-08 23:04:42.74  Source: Update Statistics Task  Executing query "UPDATE STATISTICS [dbo].[orderItemType] WITH FUL...".: 35% complete  End Progress
Progress: 2013-07-08 23:04:42.74  Source: Update Statistics Task  Executing query "use [RackAttackNorthAmerica]".: 36% complete  End Progress
Progress: 2013-07-08 23:04:42.74  Source: Update Statistics Task  Executing query "UPDATE STATISTICS [dbo].[orderPaymentType] WITH ...".: 37% complete  End Progress
Progress: 2013-07-08 23:04:42.74  Source: Update Statistics Task  Executing query "use [RackAttackNorthAmerica]".: 37% complete  End Progress
Progress: 2013-07-08 23:05:43.25  Source: Update Statistics Task  Executing query "UPDATE STATISTICS [dbo].[orders] WITH FULLSCAN".: 38% complete  End Progress
Progress: 2013-07-08 23:05:43.25  Source: Update Statistics Task  Executing query "use [RackAttackNorthAmerica]".: 39% complete  End Progress
Progress: 2013-07-08 23:05:43.30  Source: Update Statistics Task  Executing query "UPDATE STATISTICS [dbo].[orders_deleted] WITH FU...".: 40% complete  End Progress
Progress: 2013-07-08 23:05:43.30  Source: Update Statistics Task  Executing query "use [RackAttackNorthAmerica]".: 41% complete  End Progress
Progress: 2013-07-08 23:05:43.30  Source: Update Statistics Task  Executing query "UPDATE STATISTICS [dbo].[orderSource] WITH FULLS...".: 41% complete  End Progress
Progress: 2013-07-08 23:05:43.30  Source: Update Statistics Task  Executing query "use [RackAttackNorthAmerica]".: 42% complete  End Progress
Progress: 2013-07-08 23:05:43.55  Source: Update Statistics Task  Executing query "UPDATE STATISTICS [dbo].[orderSplit] WITH FULLSC...".: 43% complete  End Progress
Progress: 2013-07-08 23:05:43.55  Source: Update Statistics Task  Executing query "use [RackAttackNorthAmerica]".: 44% complete  End Progress
Progress: 2013-07-08 23:05:43.60  Source: Update Statistics Task  Executing query "UPDATE STATISTICS [dbo].[paymentProcessType] WIT...".: 45% complete  End Progress
Progress: 2013-07-08 23:05:43.60  Source: Update Statistics Task  Executing query "use [RackAttackNorthAmerica]".: 45% complete  End Progress
Progress: 2013-07-08 23:05:43.64  Source: Update Statistics Task  Executing query "UPDATE STATISTICS [dbo].[priceHistory] WITH FULL...".: 46% complete  End Progress
Progress: 2013-07-08 23:05:43.64  Source: Update Statistics Task  Executing query "use [RackAttackNorthAmerica]".: 47% complete  End Progress
Progress: 2013-07-08 23:05:43.71  Source: Update Statistics Task  Executing query "UPDATE STATISTICS [dbo].[priceUpdate] WITH FULLS...".: 48% complete  End Progress
Progress: 2013-07-08 23:05:43.71  Source: Update Statistics Task  Executing query "use [RackAttackNorthAmerica]".: 49% complete  End Progress
Progress: 2013-07-08 23:05:43.75  Source: Update Statistics Task  Executing query "UPDATE STATISTICS [dbo].[priceUpdateCAD] WITH FU...".: 50% complete  End Progress
Progress: 2013-07-08 23:0...  The package execution fa...  The step failed.

Should I not be running an update statistics task in my nightly maintenance plan? Or is this pointing to a different issue?

Update 2013-07-09:

The maintenance plan logs to file.

Here is an entry from 2013-07-08:

Execute T-SQL Statement Task (WNN1106)
Execute TSQL on Local server connection
Execution time out: 120
Task start: 2013-07-08T23:01:10.
Task end: 2013-07-08T23:03:11.
Failed:(-1073548784) Executing the query "ALTER INDEX [email] ON [dbo].[customerFeedback] RE..." failed with the following error: "Timeout expired.  The timeout period elapsed prior to completion of the operation or the server is not responding.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.

Related SQL:

ALTER INDEX [email] ON [dbo].[customerFeedback] REBUILD PARTITION = ALL WITH ( PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = OFF, ONLINE = OFF, SORT_IN_TEMPDB = OFF )  

Here is an entry from 2013-07-04:

Failed:(-1073548784) Executing the query "ALTER INDEX [cEmail] ON [dbo].[orders] REBUILD PAR..." failed with the following error: "Timeout expired.  The timeout period elapsed prior to completion of the operation or the server is not responding.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.  

Related SQL:

ALTER INDEX [cEmail] ON [dbo].[orders] REBUILD PARTITION = ALL WITH ( PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = OFF, ONLINE = OFF, SORT_IN_TEMPDB = OFF )  

Here is an entry from 2013-07-03:

Failed:(-1073548784) Executing the query "ALTER INDEX [email] ON [dbo].[customerFeedback] RE..." failed with the following error: "Timeout expired.  The timeout period elapsed prior to completion of the operation or the server is not responding.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.  

Related SQL:

ALTER INDEX [email] ON [dbo].[customerFeedback] REBUILD PARTITION = ALL WITH ( PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = OFF, ONLINE = OFF, SORT_IN_TEMPDB = OFF )  

All of the maintenance plan tasks are generated by the GUI wizard tool. As you can see, they fail on a different step every time.

What should I do here?

Should I increase the timeout? Or will that just take longer to fail?

Should I modify the queries? ALLOW_ROW_LOCKS = OFF ? ONLINE = ON?

Should I remove the rebuild index task completely?

Should I remove the update statistics task completely?
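
For what it's worth, one alternative worth sketching here (only a sketch: the 30% threshold is an assumption, and ONLINE = ON requires Enterprise edition and does not work on indexes containing LOB columns) is to rebuild only the indexes that are actually fragmented, rather than everything the wizard emits:

[code="sql"]-- Sketch: rebuild only indexes above an assumed 30% fragmentation threshold.
DECLARE @sql nvarchar(max) = N'';

SELECT @sql = @sql + N'ALTER INDEX ' + QUOTENAME(i.name)
            + N' ON ' + QUOTENAME(s.name) + N'.' + QUOTENAME(o.name)
            + N' REBUILD;' + CHAR(13)
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ps
JOIN sys.indexes i ON i.object_id = ps.object_id AND i.index_id = ps.index_id
JOIN sys.objects o ON o.object_id = ps.object_id
JOIN sys.schemas s ON s.schema_id = o.schema_id
WHERE ps.avg_fragmentation_in_percent > 30
  AND i.name IS NOT NULL;

EXEC sp_executesql @sql;[/code]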

Thanks for your input,

Tomas

Why would MySQL's "show global status" query be taking 15 minutes?

Posted: 09 Jul 2013 08:52 PM PDT

I'm reviewing the slow log, and on one of my slaves the average time for SHOW GLOBAL STATUS is 914s.

Any idea how to determine the cause of this?
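
Hard to say from the slow log alone, but as a hedged first step you could profile a single execution to see which stage the time goes to (SHOW PROFILE exists in the MySQL versions of this era; this is a sketch, not a known fix):

[code="sql"]-- Sketch: profile one SHOW GLOBAL STATUS to see where the time is spent.
SET profiling = 1;
SHOW GLOBAL STATUS;
SHOW PROFILES;              -- recent statements with total durations
SHOW PROFILE FOR QUERY 1;   -- stage-by-stage breakdown of the first one
SET profiling = 0;[/code]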

Is there a way to export Oracle's UNDO?

Posted: 09 Jul 2013 07:52 PM PDT

I tried the exp utility to dump the whole database. It looks like this exports only the latest version of the data, skipping the undo log. Using flashback queries I see:

01466. 00000 -  "unable to read data - table definition has changed"  *Cause:    Query parsed after tbl (or index) change, and executed             w/old snapshot  

What I'm trying to do is capture DB changes and make a backup for later use, with the ability to flash back to a timestamp.

With an RMAN backup I have a similar situation:

ORA-01555: snapshot too old: rollback segment number 3 with name "_SYSSMU3_2472002983$" too small   01555. 00000 -  "snapshot too old: rollback segment number %s with name \"%s\" too small"  *Cause:    rollback records needed by a reader for consistent read are overwritten by other writers.

Update: I managed to do what I needed only by increasing the undo retention and directly copying the data files, with control file modification on the cloned instance.
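
For reference, the undo-retention half of that workaround looks roughly like this (a sketch; the retention value and the table name in the flashback query are illustrative):

[code="sql"]-- Sketch: keep undo for 24 hours so flashback queries can reach that far back.
ALTER SYSTEM SET UNDO_RETENTION = 86400;

-- A flashback query against a hypothetical table:
SELECT *
FROM orders AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR);[/code]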

What is the difference between int(8) and int(5) in mysql?

Posted: 09 Jul 2013 11:52 AM PDT

I found out that if you have a field defined as INT(8) without ZEROFILL, it behaves exactly like INT(5):

in both cases the range is

−2,147,483,648 to 2,147,483,647, i.e. from −(2^31) to 2^31 − 1

Or am I missing something?

I found this answer: http://dba.stackexchange.com/a/370/12923

The (5) represents the display width of the field. From the manual, it states:

The display width does not constrain the range of values that can be stored in the column. Nor does it prevent values wider than the column display width from being displayed correctly. For example, a column specified as SMALLINT(3) has the usual SMALLINT range of -32768 to 32767, and values outside the range permitted by three digits are displayed in full using more than three digits.

The display width, from what I can tell, can be used to left-pad numbers that are less than the defined width. So 00322, instead of 322. TBH, I've never used it.

But it doesn't affect the storage size of the column. An int will take up more space than a smallint.

So there seems to be no difference then, apart from the display width.
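
A quick way to see the display width in action (a sketch; the table name is made up). Note that ZEROFILL implicitly makes the column UNSIGNED, so the signed range quoted above no longer applies to these demo columns:

[code="sql"]-- Sketch: display width only affects left-padding when ZEROFILL is used.
CREATE TABLE width_demo (
    a INT(5) ZEROFILL,
    b INT(8) ZEROFILL
);

INSERT INTO width_demo VALUES (322, 322);

SELECT * FROM width_demo;
-- a: 00322      (padded to 5 digits)
-- b: 00000322   (padded to 8 digits)[/code]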

MySQL table relations, inheritance or not?

Posted: 09 Jul 2013 05:52 PM PDT

I'm building a micro CMS, using MySQL as the RDBMS and the Doctrine ORM for mapping.

I would like to have two types of pages: a static page and a blog page. A static page would have page_url and page_content stored in the database. A blog page would have page_url but no page_content; the blog would have posts, categories, and so on.

Let's say I have a route like this:

/{pageurl}  

This is a page whose URL can be home, news, blog, etc. That page can be a static page, in which case I would just print page_content, but it can also be a blog page, in which case I would print the latest posts as the content.

How should I relate the Static Page and Blog Page tables? Is this inheritance, since both are pages with a URL but different content? Should I use inheritance, so that both Static and Blog page extend a Page that holds page_url? Or should I make another table, page_types, and store the available page types there?
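
One hedged sketch of the shared-parent approach (often called class table inheritance; every name below is invented for illustration): a page table owns the URL and a type discriminator, and type-specific data hangs off it:

[code="sql"]-- Sketch: shared parent table with per-type child tables.
CREATE TABLE page (
    id        INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    page_url  VARCHAR(255) NOT NULL UNIQUE,
    page_type ENUM('static', 'blog') NOT NULL
);

CREATE TABLE static_page (
    page_id      INT UNSIGNED PRIMARY KEY,
    page_content TEXT NOT NULL,
    FOREIGN KEY (page_id) REFERENCES page (id)
);

CREATE TABLE blog_post (
    id      INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    page_id INT UNSIGNED NOT NULL,
    title   VARCHAR(255) NOT NULL,
    body    TEXT NOT NULL,
    FOREIGN KEY (page_id) REFERENCES page (id)
);[/code]

Resolving /{pageurl} is then one lookup in page followed by a branch on page_type; Doctrine can map this style as JOINED inheritance.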

Temporarily Disable Sql Replication

Posted: 09 Jul 2013 03:37 PM PDT

Due to a business need, I may need to temporarily disable transactional replication in my environment (SQL 2000 -> SQL 2008). Functionally, I understand this to mean that I will need to drop subscriptions and articles. Is capturing the create scripts beforehand enough to restore replication to its original state once the conflicting need is addressed?
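
For reference, and only as a hedged sketch with placeholder names (test on a non-production copy first), the teardown side usually looks something like this, with the saved creation scripts used to put everything back afterwards:

[code="sql"]-- Sketch: drop the subscription and an article for one publication.
-- 'MyPublication' and 'MyArticle' are placeholders.
EXEC sp_dropsubscription
     @publication = N'MyPublication',
     @article     = N'all',
     @subscriber  = N'all';

EXEC sp_droparticle
     @publication = N'MyPublication',
     @article     = N'MyArticle';

-- Optionally remove the publication itself:
EXEC sp_droppublication @publication = N'MyPublication';[/code]

The scripted definitions generally restore the topology, but the subscriber will typically need a new snapshot (or a reinitialization) when replication is re-enabled.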

Thanks.

[SQL Server] Need max date from duplicate records.

[SQL Server] Need max date from duplicate records.


Need max date from duplicate records.

Posted: 09 Jul 2013 02:07 AM PDT

Hello, I have one table with five columns:

dt1         dt2         dt3         dt4         dt5
3/31/2011   3/30/2011   4/4/2011    3/30/2011   4/4/2011
10/24/2012  10/23/2012  10/27/2012  10/23/2012  10/27/2012
12/10/2012  12/8/2012   12/19/2012  12/8/2012   12/19/2012
12/20/2012  12/8/2012   12/19/2012  12/8/2012   12/19/2012
1/16/2013   1/14/2013   1/24/2013   1/14/2013   1/24/2013

When duplicate records occur in dt2, dt3, dt4 and dt5, I just need the max dt from dt1; otherwise NULL. Can you please help me?
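
If I'm reading the requirement right, a grouping query is one way to express it. A sketch only: the table name is made up, and the NULL-for-non-duplicates rule follows my reading of the question:

[code="sql"]-- Sketch: max dt1 for each dt2-dt5 combination that repeats, NULL otherwise.
SELECT dt2, dt3, dt4, dt5,
       CASE WHEN COUNT(*) > 1 THEN MAX(dt1) END AS max_dt1
FROM dbo.MyDates   -- hypothetical table name
GROUP BY dt2, dt3, dt4, dt5;[/code]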

SQL Server Agent won't start - The system cannot find the file specified

Posted: 08 Jul 2013 11:50 PM PDT

Hi,

For a while now we have been running SQL Server for our ERP system. Recently I noticed that a rebuild indexes task has not been running on the SQL database. On closer inspection, I noticed that the SQL Server Agent is not running; when I try to start the service it tells me 'Windows could not start the service - error 2: the system cannot find the file specified'.

I have already taken a look at some posts which refer to checking the location of the SQLAGENT.OUT log file path in the registry. That all appears OK, but when I try to start the service the file is not updated. I have also tried renaming the file to see if a new one is created, but nothing happens.

Any advice or pointers would be greatly appreciated.

Cheers, Alan
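
Error 2 is Windows' generic "file not found", so a hedged place to look is the paths the Agent actually uses: the service binary path in the Services applet, and the SQLAGENT.OUT path in the registry. The sketch below reads the latter with xp_regread, which is undocumented but widely used; the registry key shown assumes a default SQL Server 2008 instance and may differ on your machine:

[code="sql"]-- Sketch: read the Agent error-log path from the registry.
-- xp_regread is undocumented; the key below assumes a default 2008 instance.
EXEC master.dbo.xp_regread
     @rootkey    = N'HKEY_LOCAL_MACHINE',
     @key        = N'SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL10.MSSQLSERVER\SQLServerAgent',
     @value_name = N'ErrorLogFile';[/code]

If that path points at a folder that no longer exists, the service can fail to start with exactly this error.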

Variable as a location

Posted: 09 Jul 2013 03:41 AM PDT

I was wondering if it's possible to have a variable hold the location of a database and table, so that instead of writing

[code="sql"]Select * from database1.dbo.table1[/code]

I could write it like

[code="sql"]Declare @Location varchar(30)
set @Location = 'database1.dbo.table1'
Select * from @Location[/code]

The end result is to use this in a stored proc that accesses two different databases on the same server.
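
T-SQL won't accept a variable as a table reference directly, so the usual pattern is dynamic SQL. A hedged sketch (the object name should come from a trusted source or be validated, since string concatenation like this is open to injection):

[code="sql"]-- Sketch: build and run the query with dynamic SQL
-- instead of referencing the variable directly.
DECLARE @Location sysname = N'database1.dbo.table1';
DECLARE @sql nvarchar(max);

SET @sql = N'SELECT * FROM ' + @Location + N';';

EXEC sp_executesql @sql;[/code]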

Query Reporting Services subscription start time?

Posted: 09 Jul 2013 05:13 AM PDT

Hi All - I'm trying to gather information regarding the schedules for my subscriptions. Some of the data I need is stored as XML in the database. Below, I'm simply trying to retrieve the value of StartDateTime from the MatchData column in Subscriptions. However, the results return NULL for that column, even though the data appears to be present. I'm sure something is wrong with my query, so I thought I would check; since I've never queried XML before, I'm quite certain I'm not using XMLNAMESPACES right. Here is what I tried. This should run against any stock installation of SSRS; just change the namespace to reflect your environment:

[code="sql"];WITH XMLNAMESPACES('http://schemas.microsoft.com/sqlserver/[removed]/[removed]/[removed]/[removed]' AS ns)
SELECT s.EventType
      ,CAST(s.[MatchData] AS XML).value('(/ns:ScheduleDefinition/ns:StartDateTime)[1]', 'date') AS SubStrt
FROM Subscriptions s[/code]

Any help is appreciated!
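
One hedged thing to try: if the ScheduleDefinition document declares its namespace as a default namespace rather than a prefixed one, the prefixed XPath above will match nothing and return NULL, which fits the symptom. A sketch of the DEFAULT-namespace variant (the namespace URI is still elided here, exactly as in the original):

[code="sql"]-- Sketch: bind the schema as the DEFAULT namespace instead of the 'ns' prefix.
;WITH XMLNAMESPACES(DEFAULT 'http://schemas.microsoft.com/sqlserver/[removed]/[removed]/[removed]/[removed]')
SELECT s.EventType
      ,CAST(s.[MatchData] AS XML).value('(/ScheduleDefinition/StartDateTime)[1]', 'datetime') AS SubStrt
FROM dbo.Subscriptions s;[/code]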
