Wednesday, June 26, 2013

[SQL Server 2008 issues] configuring Mirroring problem


configuring Mirroring problem

Posted: 25 Jun 2013 05:44 PM PDT

Hi... when I am configuring mirroring I am getting a TCP/IP port problem. My first server's default instance [MSSQLSERVER] runs under the Local System service account; the second server's named instance [pcname\shiva] runs under the .\shiva service account. The backup on the first server and the restore on the second server completed, but when I configure mirroring I get errors about the TCP/IP port and about database space. What is the solution for this problem? Thanks...
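
A first step is usually to confirm that a database mirroring endpoint exists, is started, and is reachable on both instances, and that each instance's service account may connect to the other's endpoint (with Local System accounts, cross-machine Windows authentication generally won't work, so certificate authentication or granting CONNECT to the machine accounts is needed). A sketch of the checks; the endpoint name and port are placeholders, not taken from the post:

[code]
-- Check existing mirroring endpoints and their state on each instance
SELECT e.name, e.state_desc, e.role_desc, t.port
FROM sys.database_mirroring_endpoints e
JOIN sys.tcp_endpoints t ON t.endpoint_id = e.endpoint_id;

-- If none exists, create one (name and port are placeholders)
CREATE ENDPOINT Mirroring
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022)
    FOR DATABASE_MIRRORING (ROLE = PARTNER);
[/code]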

Transactional replication fails with no meaningful error message

Posted: 20 Jun 2013 11:58 AM PDT

Hi,

I am trying to set up replication between two clustered instances belonging to the same 3-node cluster. The distributor is a separate instance from the publisher and subscriber. When I re-initialise a publication I get this error message:

The replication agent encountered a failure. See the previous job step history message or Replication Monitor for more information. The step failed.

Neither the job history nor the SQL Server logs provide any other meaningful messages. When I try to run the snapshot from a command prompt it works fine:

[code]
"C:\Program Files\Microsoft SQL Server\100\COM\SNAPSHOT.EXE" -Publisher [sqlcluster1\publisher] -PublisherDB [MyDB] -Distributor [sqlcluster2\distributor] -Publication [MyPublication] -DistributorSecurityMode 1
[/code]

The log reader and distribution agents fail with the same generic message. The SQL Server Agent account is a local admin on each cluster node and a member of the sysadmin role on each instance. I am running SQL Server 2008 R2 SP2. Any ideas? Thanks.
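
When the job history is this unhelpful, the replication agents can be made to write a detailed log of their own by adding output parameters to the agent job step's command line (the file path below is a placeholder):

[code]
-Output "C:\Temp\agent_verbose.log" -OutputVerboseLevel 2
[/code]

Re-running the job then leaves a verbose log that usually names the real failure.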

What to do with test Data from PROD?

Posted: 25 Jun 2013 05:13 PM PDT

Dear All,

After every release into production, we do some rough testing in production and then delete all those test transactions from it. Is this a good practice, or is there a better approach? Appreciating your help.

Data export where am i going wrong

Posted: 25 Jun 2013 04:43 PM PDT

Hi Experts

I have a procedure as follows:

[code]
USE [TestData]
GO
/****** Object: StoredProcedure [dbo].[exporttocsv] Script Date: 06/26/2013 15:31:34 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER procedure [dbo].[exporttocsv]
as
BEGIN
    DECLARE @sql varchar(8000)
    SELECT @sql = 'bcp "select * from dbo.newtable" '
                + 'queryout "C:\inetpub\wwwroot\uploads\cleansed.csv" -c -t, -T -S'
    EXEC master..xp_cmdshell @sql
END
[/code]

When I run this using "exec exporttocsv" it produces the following output, even though the table does exist:

[code]
SQLState = S0002, NativeError = 208
Error = [Microsoft][SQL Server Native Client 10.0][SQL Server]Invalid object name 'dbo.newtable'.
SQLState = 37000, NativeError = 8180
Error = [Microsoft][SQL Server Native Client 10.0][SQL Server]Statement(s) could not be prepared.
NULL
[/code]

Any ideas?
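
One likely cause, assuming dbo.newtable lives in TestData: xp_cmdshell launches bcp in a new connection whose default database is that of the login (often master), so 'dbo.newtable' resolves in the wrong database. A sketch of the fix using a three-part name (note the -S switch is also left without a server name in the original and may need one):

[code]
SELECT @sql = 'bcp "select * from TestData.dbo.newtable" '
            + 'queryout "C:\inetpub\wwwroot\uploads\cleansed.csv" -c -t, -T -S'
[/code]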

Is it possible to make Outer join with old style code

Posted: 25 Jun 2013 04:08 PM PDT

Hi,

I'm dealing with old-style code that does joins without the JOIN keyword, like the piece below. In such a case, is it possible somehow to implement OUTER functionality?

[code]
select t1.c1, t1.c2, T2.COL3
from t1, T2
where t1.c1 = T2.Col1
[/code]
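
The pre-ANSI outer-join operator for this style (*=) is deprecated and rejected at compatibility level 90 and above, so the usual answer is to rewrite with the explicit JOIN keyword; a sketch of the same query as a left outer join:

[code]
select t1.c1, t1.c2, T2.COL3
from t1
left outer join T2 on t1.c1 = T2.Col1
[/code]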

Problem

Posted: 24 Jun 2013 07:16 PM PDT

[code]
SELECT eid, ename, remarks,
       CAST(((DATEDIFF(dd, 0, AggTimedate) * 24) + DATEPART(hh, AggTimedate)) AS varchar(5))
       + ':' + DATENAME(mi, AggTimedate) AS Spendtime
FROM (
    SELECT g.eid, e.ename, remarks,
           DATEADD(mi, SUM(DATEDIFF(mi, 0, spendtime)), 0) AS AggTimedate
    FROM attend_log g
    LEFT JOIN employee e ON g.eid = e.eid
    WHERE g.eid = 17090 AND date > '20130101' AND date < '20130131'
    GROUP BY g.eid, e.ename, g.remarks
) t
[/code]

I want this type of result:

[code]
eid      ename   late   spendtime
17090    abc     2      163:54
[/code]

If the remarks show 2 lates, then the late column should show 2.
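
A sketch of one way to get the late count next to the total time, assuming the remarks column holds the literal value 'late' for a late day (that value is an assumption):

[code]
SELECT g.eid, e.ename,
       SUM(CASE WHEN g.remarks = 'late' THEN 1 ELSE 0 END) AS late,
       DATEADD(mi, SUM(DATEDIFF(mi, 0, g.spendtime)), 0)   AS AggTimedate
FROM attend_log g
LEFT JOIN employee e ON g.eid = e.eid
WHERE g.eid = 17090 AND date > '20130101' AND date < '20130131'
GROUP BY g.eid, e.ename;
[/code]

Dropping remarks from the GROUP BY collapses the result to one row per employee, so the outer time-formatting query can stay as it is.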

Script all stored procedures modified after a given date? ...

Posted: 25 Jun 2013 02:38 PM PDT

Hello,

I can get a list of all the stored procedures modified since a given date -- but now I want to script all of those procedures (so I can port the changes over to another server). Does such functionality exist in SQL Server Management Studio? Thanks in advance.
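
One approach that stays in T-SQL is to pull the definitions straight from the catalog views; a minimal sketch (the cutoff date is a placeholder):

[code]
SELECT OBJECT_SCHEMA_NAME(p.object_id) AS schema_name,
       p.name,
       m.definition
FROM sys.procedures p
JOIN sys.sql_modules m ON m.object_id = p.object_id
WHERE p.modify_date >= '20130601'   -- placeholder cutoff date
ORDER BY p.modify_date;
[/code]

With Results to Text (and a raised maximum text column width), the definition column can be saved as a script; alternatively, right-click the database > Tasks > Generate Scripts and select just those procedures.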

Why is my TLOG growing

Posted: 24 Jun 2013 07:25 PM PDT

SQL Server 2008 R2.

We have an application (on a DEV box) that I am told [b]they are not using yet[/b], but the transaction log for this database grows in excess of 18 GB every day. Since this is a DEV box with limited resources, I am forced to shrink the log on an almost daily basis because of drive space issues:

[code]
USE MYDB
GO
-- Perform a full DB backup
-- Perform a transaction log backup
-- Shrink the transaction log file
DBCC SHRINKFILE(MYDB_LOG, 1)
-- Backup and truncate log
BACKUP LOG MYDB TO DISK = 'NUL:'
-- Shrink the transaction log file
DBCC SHRINKFILE(MYDB_LOG, 1)
-- Perform a full DB backup
[/code]

Right now I have changed the recovery model to simple, because even with TLOG backups every 15 minutes in full recovery the log can grow as much as 4 GB in just 15 minutes. I will see how the DB and TLOG files handle being in simple recovery mode (how much growth).

One thing that I need some help analyzing is the Profiler trace that I ran while the transaction log was in a 'growth spurt'. There are 'RPC:Completed' events in the trace, one after the other, like this:

[code]
exec sp_execute 988,511,35230,990,N'myapplication.POL_SUPRES_ISS_IND',14,N''
exec sp_execute 988,511,35230,576,N'myapplication.FREE_LK_INCR_CD',14,N''
exec sp_execute 988,511,35230,662,N'myapplication.INIT_PAC_REDRW_IND',14,N''
[/code]

These RPC calls are being made from the application (the hostname is the app server name; the login is the SQL login that was set up for the app to access the database). Hope that's not too vague. Any help or guidance would be appreciated.
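
Before shrinking again, it is worth confirming what is preventing log truncation and how full the log really is; two standard diagnostics:

[code]
-- What is the log waiting on before it can be reused?
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'MYDB';

-- How full is each database's log?
DBCC SQLPERF(LOGSPACE);
[/code]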

query

Posted: 25 Jun 2013 06:06 AM PDT

Dear all,

I have the following table:

[code]
REGION   date_apprvd   sal
a        20130101      1000
a        20130201      2000
a        20130301      3000
b        20140101      4000
b        20140201      5000
b        20140301      6000
c        20130101      7000
c        20130201      8000
c        20130301      9000
d        20130101      1100
d        20130201      2100
d        20130301      3100
e        20140101      4100
e        20140201      5100
e        20140301      6100
[/code]

Please help me to create a query. The input is 20130101, and the expected output is:

[code]
region   sal    count
a        1000   1
b        0      0
c        7000   1
d        1100   1
e        0      0
[/code]

Thanks
nick
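
Conditional aggregation gives one row per region, with zeros for regions that have no row on the input date; a sketch (the table name is a placeholder):

[code]
SELECT region,
       SUM(CASE WHEN date_apprvd = '20130101' THEN sal ELSE 0 END) AS sal,
       COUNT(CASE WHEN date_apprvd = '20130101' THEN 1 END)        AS [count]
FROM mytable
GROUP BY region
ORDER BY region;
[/code]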

Partitioning Existing table with Non clustered index on Date column

Posted: 25 Jun 2013 05:40 PM PDT

Hi All,

I have one table with 10 lakh (1,000,000) records. I partitioned that table on the CreatedDate column with a non-clustered index (I am not removing the clustered index on the ID column; it is part of the primary key). It is inserting data into the relevant partition only. But I am verifying whether the table is partitioned using the steps below: in Object Explorer, Database --> TestDB --> Tables --> select the partitioned table, right-click, Properties --> Storage:

[b]File Group = Primary
Table Partitioned = False[/b]

If I create the partition with a clustered index, it shows correctly: [b]Table Partitioned = True[/b]. But I am creating it with a non-clustered index. Can anyone explain: is the table partitioned or not? And how can I tell which partition data is coming from (without using $partition)?

Below is the example table partition script which I followed for the original table:

[code]
CREATE TABLE tblPartition
(
    ID int primary key identity(1,1),
    Name varchar(30),
    CreatedDate Datetime
)

insert into tblPartition(Name, CreatedDate)
SELECT 'Name1','2013-05-26 13:53:47.650' union all
select 'Name2','2013-05-26 13:53:47.650' union all
SELECT 'Name1','2013-06-26 13:53:47.650' union all
select 'Name2','2013-06-26 13:53:47.650' union all
SELECT 'Name1','2013-07-26 13:53:47.650' union all
select 'Name2','2013-07-26 13:53:47.650'
go

CREATE PARTITION FUNCTION [PartitionFunction](datetime)
AS RANGE RIGHT FOR VALUES (N'2013-05-31 23:59:59', N'2013-06-30 23:59:59', N'2013-07-31 23:59:59')

CREATE PARTITION SCHEME [PartitionScheme]
AS PARTITION [PartitionFunction] TO ([FGNdf10], [FGNdf11], [FGNdf12], [PRIMARY])

CREATE NONCLUSTERED INDEX [IX_PartitionScheme_CreatedDate]
ON [dbo].[tblPartition] ([CreatedDate])
WITH (SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF)
ON [PartitionScheme]([CreatedDate])
go

select * from tblPartition
where $partition.PartitionFunction(CreatedDate) = 1
go
[/code]

Thanks,
PRR
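
Note that partitioning only the non-clustered index leaves the base table (the clustered index on ID) unpartitioned, which is why the Storage page reports Table Partitioned = False: the table itself really isn't partitioned, only that one index is. To see how rows are distributed across the partitions of each index, the catalog views can be queried directly; a sketch:

[code]
SELECT i.name AS index_name, p.partition_number, p.rows
FROM sys.partitions p
JOIN sys.indexes i
  ON i.object_id = p.object_id AND i.index_id = p.index_id
WHERE p.object_id = OBJECT_ID('dbo.tblPartition')
ORDER BY i.index_id, p.partition_number;
[/code]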

Replicating db

Posted: 25 Jun 2013 11:47 AM PDT

Hi,

I have a database of 200 GB. I need to replicate this DB to 5 different servers and have the copies online. Is transactional replication my only option? With such a huge DB, and replicating all the tables, I am not sure how the performance would be. Do I have any other option? Would log shipping or DB mirroring work?

unexpected Error occured in the database server Time out expired

Posted: 25 Jun 2013 06:14 AM PDT

Hi all,

The application team mailed me this error. I checked: there is no timeout error in the SQL Server error log, no I/O-related error, and even Event Viewer is not showing anything. But this is the second time the application team has reported this error. Please suggest how to monitor for it. I am also watching the disk counters and SQL buffer counters, and they look normal.
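
A client-side command timeout never appears in the SQL Server error log; it reaches the server as an attention event, so tracing attention events shows which queries the application is abandoning. A sketch with Extended Events (SQL Server 2008+; on older builds a Profiler trace of the Attention event does the same; the session name is a placeholder):

[code]
CREATE EVENT SESSION [trace_timeouts] ON SERVER
ADD EVENT sqlserver.attention
ADD TARGET package0.ring_buffer;

ALTER EVENT SESSION [trace_timeouts] ON SERVER STATE = START;
[/code]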

SSIS package in msdb & sql server agent

Posted: 25 Jun 2013 09:06 AM PDT

I have a SQL Agent job that calls an SSIS package stored in the msdb database on the same SQL Server. Sometimes, after I import a newer version of the SSIS package into msdb and run the SQL Agent job ad hoc, the job still picks up the older version of the SSIS package. Has anyone seen this issue? If so, how do I make it pick up the right version every time I import an SSIS package? Thanks in advance.
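
One thing worth checking is what version msdb actually holds after the import; the package store keeps version metadata that can be queried directly (the package name is a placeholder):

[code]
SELECT name, vermajor, verminor, verbuild, vercomments, createdate
FROM msdb.dbo.sysssispackages
WHERE name = 'MyPackage';
[/code]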

How to prevent SQL Injection Attack from SQL server side

Posted: 20 Jun 2013 05:05 AM PDT

Hi Everyone,

Is it possible to stop a SQL injection attack at the SQL Server level? I have gone through some posts and articles, and they all suggest doing the checks at the application level so that only authentic data can be entered into the database. My client has a travel portal that is facing SQL injection attacks. My knowledge is limited on this topic; please can anyone let me know whether we can do something at the SQL Server level so that it can be stopped. An early response would be highly appreciated. Thanks in advance.
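
At the server level the main mitigations are least-privilege logins (the web app's login should not be db_owner or sysadmin) and making sure any dynamic SQL is parameterized rather than concatenated; a minimal sketch of the parameterized pattern (table and parameter names are illustrative):

[code]
-- Vulnerable: user input concatenated into the statement text
-- EXEC ('SELECT * FROM dbo.Bookings WHERE CustomerName = ''' + @name + '''');

-- Safer: the input travels as a typed parameter and is never parsed as SQL
EXEC sp_executesql
     N'SELECT * FROM dbo.Bookings WHERE CustomerName = @name',
     N'@name nvarchar(100)',
     @name = @name;
[/code]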

how to restore a stored procedure from a backup

Posted: 25 Jun 2013 02:30 AM PDT

The DB is too big to restore fully; we can't even restore just the primary filegroup anywhere. How can I get a single stored procedure back?

Replication

Posted: 25 Jun 2013 12:21 AM PDT

Can we get any FAQs on SQL replication? I have not configured replication before.

How much Ram required.

Posted: 25 Jun 2013 02:33 AM PDT

Hi Team,

I've installed SQL Server 2008 R2 Standard Edition. My database size is 30 GB and the hard disk is 500 GB. How much RAM is required to overcome a high physical memory utilisation issue? Please suggest!
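
Note that SQL Server deliberately uses as much memory as it is allowed and releases it under OS pressure, so high physical memory utilisation is usually by design rather than a problem; the standard control is to cap the buffer pool with max server memory. A sketch (the 4 GB cap is an arbitrary example to tune for the host):

[code]
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 4096;  -- example value
RECONFIGURE;
[/code]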

display specic char in sql server

Posted: 25 Jun 2013 02:31 AM PDT

Hi friends, I have a small doubt in SQL Server; please tell me how to solve this issue. The table data contains values like:

name
sas programing
ssis programing

Based on this table data I want to display only 'gr' in the output, like:

name
gr
gr

I tried this:

select substring(name, charindex('gr', name), len(name))

but it does not display just 'gr'. Please tell me a query to solve this issue in SQL Server.
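
The third argument of SUBSTRING is the number of characters to return, so passing LEN(name) returns everything from the match position to the end of the string. Since the target is the two characters 'gr', a sketch of the fix (the table name is a placeholder):

[code]
select substring(name, charindex('gr', name), 2) as name
from mytable
where charindex('gr', name) > 0;
[/code]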

Snapshot Replication

Posted: 24 Jun 2013 11:08 PM PDT

Hi guys.

I wanted some clarification about snapshot replication. What happens when snapshot replication is in the middle of transferring data to the subscriber and the distribution agent job suddenly stops? What happens when I try to restart the synchronization? Will it continue to transfer the data from where it stopped, or will it retransfer all of the data to the subscriber again?

I'm asking this since we have a snapshot replication that runs for a long time, and during that window another job runs but gets suspended, maybe due to resource availability. I thought we could try to stop the replication process, let the other job complete, and then rerun the distribution agent to continue with the replication.

Thank you

?? on creating a temp table for a date range

Posted: 24 Jun 2013 11:58 PM PDT

Hi,

Can someone point me in the direction of some examples of creating a temp table for a date range? What I need to do is use SSRS with parameters for a month and year to create a report. I then need to choose data within that month and create a matrix report. The problem is that some dates don't have data, but I still want to show the date (with nulls) so that all dates in the month appear. I assume I have to create a temp table using a DimDate table?

Thanks
Joe
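
If a DimDate table isn't available, a recursive CTE can generate every date of the parameter month into a temp table, which the data is then left-joined to; a sketch (parameter, table, and column names are assumptions):

[code]
DECLARE @month int = 6, @year int = 2013;   -- report parameters
DECLARE @first datetime = CAST(CAST(@year AS varchar(4))
        + RIGHT('0' + CAST(@month AS varchar(2)), 2) + '01' AS datetime);

WITH dates AS (
    SELECT @first AS d
    UNION ALL
    SELECT DATEADD(dd, 1, d) FROM dates
    WHERE DATEADD(dd, 1, d) < DATEADD(mm, 1, @first)
)
SELECT d.d AS report_date, f.amount        -- f.amount is a placeholder measure
INTO #dates
FROM dates d
LEFT JOIN dbo.FactSales f ON f.sale_date = d.d
OPTION (MAXRECURSION 31);                  -- at most 31 days in a month
[/code]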

Matrix Report in SQL

Posted: 25 Jun 2013 12:28 AM PDT

Hi, I have a master table called "pending_list":

ID   Pending Name
1    Reason1
2    Reason2
3    Reason3

On the page I select one of the entries from pending_list and save it to my history table. Now my requirement is to show the pending-list counts for the past 9 days in a report; see the sample below:

-----------------------------------------------------
Pending Name   25/06   24/06   23/06   22/06
-----------------------------------------------------
Reason1        1       3       NULL    2
Reason2        NULL    10      5       1
Reason3        4       NULL    3       1

Please find the attachment for more clarification, and help me with this.
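
A sketch using PIVOT, assuming the history table has a pending-name column and a datetime column (all names here are assumptions, and the date list would be generated dynamically for a rolling 9 days):

[code]
SELECT pending_name, [2013-06-25], [2013-06-24], [2013-06-23], [2013-06-22]
FROM (
    SELECT pending_name,
           CAST(created_date AS date) AS d,
           1 AS one
    FROM dbo.history
) AS src
PIVOT (SUM(one) FOR d IN ([2013-06-25], [2013-06-24], [2013-06-23], [2013-06-22])) AS p;
[/code]

SUM (rather than COUNT) leaves NULL, as in the sample, for days with no rows.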

Mirroring Issue

Posted: 24 Jun 2013 07:21 PM PDT

Hi all,

Today I started to set up mirroring between two instances which are in the same domain. The mirroring setup failed in the final step; I got this error message from the error log. Can anyone help me out?

[code]
2013-06-25 11:48:16.070,spid38s,Error: 1474, Severity: 16, State: 1.
2013-06-25 11:48:16.070,spid38s,Database mirroring connection error 4 'An error occurred while receiving data: '10054(An existing connection was forcibly closed by the remote host.)'.' for 'TCP://QBSRECPU04.QBSTRUST.COM:5023'.
2013-06-25 11:48:35.980,spid36s,Error: 1443, Severity: 16, State: 2.
2013-06-25 11:48:35.980,spid36s,Database mirroring has been terminated for database 'qbs'. This is an informational message only. No user action is required.
[/code]
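
Error 1474 with 10054 is frequently an endpoint permissions problem: each instance's service account must be allowed to connect to the other instance's mirroring endpoint. A sketch of the usual check and grant (the endpoint and login names are placeholders):

[code]
-- On each partner, verify the mirroring endpoint exists and is started
SELECT name, state_desc, role_desc FROM sys.database_mirroring_endpoints;

-- Grant the partner's service account permission to connect to it
GRANT CONNECT ON ENDPOINT::Mirroring TO [QBSTRUST\SqlServiceAccount];
[/code]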

help with indexing a table used for reports

Posted: 24 Jun 2013 07:48 PM PDT

Hello

I have this table in SQL Server 2008 R2 which is populated periodically by a job. This table is used in a stored proc called by about 40 reports in SSRS. I am in the process of creating indexes on this table. Any suggestions / help would be valuable.

Table structure (just including the bare minimum columns out of about 40):

Deptid, productid, customerid, territoryid, salespersonid, areaid, areamanagerid, netsales, netsalesqty, yearmonth

1. Salespersons, area managers, dept heads and above log in to view reports.
2. Reports have filters like dept, territory, customer, area, product, month and year.
3. Reports are grouped by dept / area / product.
4. Users can see only data relevant to them, further narrowed by the filters they select.

Thanks
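
As a starting point, a composite non-clustered index that leads with the columns most reports filter on, with the measures as included columns so the reports can be served from the index alone; a sketch (the table name and column order are assumptions to validate against the real workload):

[code]
CREATE NONCLUSTERED INDEX IX_SalesSummary_YearMonth
ON dbo.SalesSummary (yearmonth, Deptid, territoryid, productid)
INCLUDE (areaid, customerid, salespersonid, netsales, netsalesqty);
[/code]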

Logshipping Issue

Posted: 24 Jun 2013 07:25 PM PDT

Hi all,

I set up log shipping between databases across two instances. The setup completed, but when I check the error log I get this message:

[code]
Progress,1,QBSRECPU04,LSBackup_IPE_QUOTE_OCT_17,Log shipping backup log job step.,,2013-06-25 10:00:13.19
*** Error: Backup failed for Server 'QBSRECPU04'. (Microsoft.SqlServer.SmoExtended) ***
2013-06-25 10:00:13.22 *** Error: An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo) ***
2013-06-25 10:00:13.22 *** Error: Cannot open backup device 'd:\Log_shipping_shared\IPE_QUOTE_OCT_17_20130625043012.trn'. Operating system error 3 (The system cannot find the path specified.).
BACKUP LOG is terminating abnormally. (.Net SqlClient Data Provider) ***
2013-06-25 10:00:13.26 ----- END OF TRANSACTION LOG BACKUP -----
[/code]

Tuesday, June 25, 2013

[SQL Server] Query improvement


Query improvement

Posted: 25 Jun 2013 03:02 PM PDT

Dear All

I have the following query:

[code]
select sum(yy) sumyy, 'summary line' xx
from tablea
where somedate_column < @startdate_parameter
union
select yy, xx
from tablea
where somedate_column between @startdate_parameter and @enddate_parameter
[/code]

Is there a better way to achieve this, or is this OK?

Regards
Krishana
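
One small improvement: because the two branches cover disjoint date ranges, the duplicate-elimination (sort) step of UNION buys nothing, so UNION ALL avoids that cost; a sketch (assuming the detail rows themselves should not be de-duplicated):

[code]
select sum(yy) as sumyy, 'summary line' as xx
from tablea
where somedate_column < @startdate_parameter
union all
select yy, xx
from tablea
where somedate_column between @startdate_parameter and @enddate_parameter
[/code]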

Access form front end to database

Posted: 25 Jun 2013 01:36 PM PDT

Hi,

I have created a database with a table called "cable". The fields are:

ID - identity
cableID - nchar(8) not null

Other fields are present but irrelevant for what I need to ask. The cableID field consists of an alphanumeric code. I have created a trigger on update of the cable table which works the way I want.

My problem is that I want to use an Access 2010 form as the front end to enter information into the cable table to cause the trigger to run, but as I have made cableID not null it must be filled in, and it cannot be duplicated. In my form the identity field autoincrements, which is great, and what I want to appear in the cableID field is the next cableID based on the highest cableID present in the database.

The code I have written below as an SQL query does what I want and creates the value (@newcableID), but I do not know how to have this appear automatically in the cableID field in the Access form, so that when the user finishes filling in the other fields and inserts the record, it gets put into the database and therefore runs the trigger. This query would have to run each time a new record is created so that it increments. I cannot use the ID field, as there is some data already in the table that needs to stay and it is not in line with the ID field. There may also be instances where data needs to be entered manually, which is why I have not used the identity field; it is there for other future uses.

[code]
declare @newcableID nchar(8)
declare @cableIDnum int
declare @maxcableID nchar(8)
set @maxcableID = (select max(cableid) from tblCable)
set @cableIDnum = (convert(int, substring(@maxcableID,3,8)))
set @cableIDnum = @cableIDnum + 1
set @newcableID = (select 'CA' + right('000000' + cast((@cableIDnum) as varchar),6))
[/code]

I hope somebody understands what I am trying to do. I don't know if this is the correct site to ask this question, but if anyone can help it would be great.
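
One way to avoid pushing the value from Access at all is to compute it on the server with an INSTEAD OF INSERT trigger, so any client that leaves cableID blank gets the next code automatically while manually entered values still pass through. A sketch under those assumptions (single-row inserts from the form; other columns elided):

[code]
CREATE TRIGGER trg_cable_nextid ON dbo.tblCable
INSTEAD OF INSERT
AS
BEGIN
    -- Compute the next numeric part from the highest existing cableID
    DECLARE @next int;
    SELECT @next = COALESCE(MAX(CONVERT(int, SUBSTRING(cableID, 3, 6))), 0) + 1
    FROM dbo.tblCable WITH (UPDLOCK, HOLDLOCK);

    INSERT INTO dbo.tblCable (cableID /*, other columns */)
    SELECT COALESCE(NULLIF(RTRIM(i.cableID), ''),
                    'CA' + RIGHT('000000' + CAST(@next AS varchar(6)), 6))
           /*, i.othercolumn */
    FROM inserted i;   -- assumes one row per insert; multi-row inserts would share @next
END
[/code]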

IDENTITY_INSERT, and Using Alphanumeric Autoincrement Primary Key

Posted: 31 May 2013 11:38 PM PDT

Hi Folks,

Please help me with the queries below. My table is:

Users(ID IDENTITY(1,1) PRIMARY KEY, NAME nvarchar(10), MANAGER_ID fk_User_id References ID)

1) I am trying SET IDENTITY_INSERT USERS.ID OFF, but the error message I receive is 'Cannot find the object 'Users' because it does not exist or you do not have permissions'.

2) I want my PRIMARY KEY to be alphanumeric as well as autoincrementing, for example values like 'E1, E2, E3' and so on.

Why am I getting that error, and how do I eradicate it? And how do I make the primary key alphanumeric and autoincrementing?
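
Two sketches. First, SET IDENTITY_INSERT takes a (schema-qualified) table name, not table.column, which is what produces the 'Cannot find the object' error; it also requires ALTER permission on the table, and is only set to ON when you need to insert explicit ID values. Second, an alphanumeric autoincrementing value is usually built as a computed column over the identity rather than as the identity itself:

[code]
-- 1) Correct syntax: table name only
SET IDENTITY_INSERT dbo.Users ON;
INSERT INTO dbo.Users (ID, NAME) VALUES (42, N'Alice');
SET IDENTITY_INSERT dbo.Users OFF;

-- 2) E1, E2, E3 ... as a persisted computed column (sketch)
CREATE TABLE dbo.Users2
(
    ID   int IDENTITY(1,1) PRIMARY KEY,
    NAME nvarchar(10),
    Code AS ('E' + CAST(ID AS varchar(10))) PERSISTED UNIQUE
);
[/code]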

SSIS: Not able to transfer a FLAT FILE Data to Database. Please help

Posted: 24 Jun 2013 08:19 PM PDT

Hi team

My flat file content is:

emp_no emp_fname emp_lname dept_no,25348 Matthew Smith d3 ,10102 Ann Jones d3 ,18316 John Barrimore d1 ,29346 James James d2 ,9031 Elsa Bertoni d2 ,2581 Elke Hansel d2 ,28559 Sybill Moser d1

The ROW DELIMITER is COMMA & COLUMN DELIMITER IS COLUMN.

My table in SQL Server is:

EMP_DATA(EMP_ID int, F_NAME varchar(20), L_NAME varchar(20), DEPT_ID varchar(10))

But when I run the package execution, the errors below stop me:

[Data Conversion 0 - 0 [47]] Error: The "output column "Column 3" (59)" failed because truncation occurred, and the truncation row disposition on "output column "Column 3" (59)" specifies failure on truncation. A truncation error occurred on the specified object of the specified component.

[Data Conversion 0 - 0 [47]] Error: Data conversion failed while converting column "DEPT_ID" (22) to column "Column 3" (59). The conversion returned status value 4 and status text "Text was truncated or one or more characters had no match in the target code page.".

[SSIS.Pipeline] Error: SSIS Error Code DTS_E_PROCESSINPUTFAILED. The ProcessInput method on component "Data Conversion 0 - 0" (47) failed with error code 0xC020902A while processing input "Data Conversion Input" (48). The identified component returned an error from the ProcessInput method. The error is specific to the component, but the error is fatal and will cause the Data Flow task to stop running. There may be error messages posted before this with more information about the failure.

The mapping is done correctly. The lengths of the column variables match in the control flow and also the data flow. Could anyone please help me understand the above errors and the possible scenarios?

How to move an excel file if it contains a specific sheet name like [sheet1$]

Posted: 25 Jun 2013 02:05 AM PDT

I am trying to move Excel files to a new folder if they contain a specific sheet name like Sheet1. Can anyone help me with this?

'No process is on the other end of the pipe'

Posted: 25 Jun 2013 01:17 AM PDT

When I try to log in to one instance of SQL Server as anything [i]other than sa[/i]*, I get the following error message (hopefully attached). The short version is:

[b]A connection was successfully established with the server, but then an error occurred during the login process. No process is on the other end of the pipe.[/b]

I've Googled it but haven't found much.

SQL Server 2008 Standard Edition.
Mixed authentication mode is ON.
Max user connections is set to 0 (unlimited).
The password is correct.
Shared Memory is enabled.
Named Pipes is enabled.
TCP/IP is enabled.
VIA is disabled.

Any ideas please?

* edit: Windows authentication works fine. It's just SQL Server authentication that seems to cause this.
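
One cause that matches these symptoms exactly (connection established, then the login step fails, and only for SQL logins) is a login whose default database is offline, dropped, or not accessible to it. A sketch of the check and fix (the login name is a placeholder):

[code]
-- Check each SQL login's default database
SELECT name, default_database_name FROM sys.sql_logins;

-- Point the affected login at a database it can always reach
ALTER LOGIN [myapp_login] WITH DEFAULT_DATABASE = [master];
[/code]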

cant find download link for SQL server 2005 (64 bit)

Posted: 18 Jul 2011 04:14 AM PDT

Hi all

I have been using SQL Server 2005 Standard (32-bit) and am planning to upgrade to 64-bit. Unfortunately, I can't find a download link for [b]SQL Server 2005 Standard (64-bit)[/b]. If anyone knows the Microsoft link, it would be a great help. Thanks a million.

AD

external keyword

Posted: 24 Jun 2013 04:26 PM PDT

What is the EXTERNAL keyword? When and where do we use it? Please give an example.
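
In SQL Server the keyword most often shows up in the EXTERNAL NAME clause, which binds a CLR object (procedure, function, trigger) to a method in a .NET assembly previously registered with CREATE ASSEMBLY; a sketch with hypothetical names:

[code]
-- Assumes an assembly was registered first:
-- CREATE ASSEMBLY MyAssembly FROM 'C:\libs\MyAssembly.dll';

CREATE PROCEDURE dbo.usp_DoWork
AS EXTERNAL NAME MyAssembly.[MyNamespace.MyClass].DoWork;
[/code]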

[how to] How to set up ODBC for oracle in Windows7


How to set up ODBC for oracle in Windows7

Posted: 25 Jun 2013 07:37 PM PDT

Currently, I'm trying to figure out how to connect to an Oracle database from my client PC.

The purpose is to manipulate the database from within FileMaker/Access (but mainly FileMaker).

My environment(192.168.5.40) is

  • Windows7(64bit)
  • Filemaker12

Server environment(192.168.10.100)

  • Oracle version 10.0.2

I've tried to install instant client(instantclient-basic-win-x86-64-10.2.0.3.0, and instantclient-odbc-win32-10.2.0.3-20061115)

It didn't work right, so I tried a few ways; however, I always get some kind of error.
In particular, I'm stuck on the SysWow64 thing.

Does anyone know what files exactly I have to install for this situation?

Thanks for the help:)

Is it possible to convert mysql binlog from statement format to row format?

Posted: 25 Jun 2013 08:19 PM PDT

The MySQL server online is version 4.1, which doesn't support row-based binary logging. Nevertheless, I need a row-based binlog. Can I take the binlog generated by the old MySQL server and import it into another MySQL server of a higher version that supports row-based binlog, to obtain a row-based binlog?

Xtrabackup manager failed with apply-log step

Posted: 25 Jun 2013 06:27 PM PDT

I am using xtrabackup manager to backup mysql server. I have a problems with it. All my backup is run ok. But with the server that has about 200 GB of data, the backup task is always failed. I open the log file and see:

2013-06-26 01:15:09 +0700 [INFO] : [ XtraBackup step completed OK. Proceeding with apply-log step... ]
2013-06-26 01:15:09 +0700 [INFO] : [ Cleaning up files... ]
2013-06-26 01:15:21 +0700 [INFO] : [ Released lock on port 10000. ]
2013-06-26 01:15:21 +0700 [ERROR] : [ An error occurred while trying to perform the backup. Proceeding to log some details to help debug... ]
2013-06-26 01:15:21 +0700 [ERROR] : [ Error Caught: genericBackupTaker->takeFullBackupSnapshot: Error: Unable to create dir './tmp' in backup dir for apply log process to utilize. ]
2013-06-26 01:15:21 +0700 [ERROR] : [ Trace: #0 /usr/local/xtrabackup-manager/includes/fullonlyBackupTaker.class.php(110): genericBackupTaker->takeFullBackupSnapshot(Object(backupJob), Object(backupSnapshotGroup))
#1 /usr/local/xtrabackup-manager/includes/backupSnapshotTaker.class.php(221): fullonlyBackupTaker->takeScheduledBackupSnapshot(Object(backupJob))
#2 /usr/local/xtrabackup-manager/includes/cliHandler.class.php(902): backupSnapshotTaker->takeScheduledBackupSnapshot(Object(scheduledBackup))
#3 /usr/local/xtrabackup-manager/includes/cliHandler.class.php(103): cliHandler->handleBackupActions(Array)
#4 /usr/local/xtrabackup-manager/xbm(36): cliHandler->handleArguments(Array)
#5 {main} ]

It looks like xtrabackup got stuck when performing the apply-log step. Does anyone have an idea how to solve this problem?

Here is info about my software:

  • XtraBackup Manager v0.81
  • xtrabackup version 2.1.3 for Percona Server
  • 5.1.59 InnoDB Backup Utility v1.5.1-xtrabackup
  • Mysql Server 5.1.69

When importing raw files for internal conversion, should I use a secondary database or just isolate them within the database?

Posted: 25 Jun 2013 05:21 PM PDT

When importing raw files for internal conversion, should I use a secondary database or just isolate them within the database?

I've got between 1.25 and 2GB of CSV files to be imported (and already have most of the process running smooth as butter) so my question is: does it make sense as a "best practice" to use a "secondary" database for the import or just load them into the database that I'm going to be working in?

Example:

FinalDatabase  -- this is the one I'm wanting to do, unless there's a net negative
    core.Users
    core.Addresses
    core.Emails
    core.OtherTables
    staging.UsersImport
    staging.ContactImport

OR

FinalDatabase
    core.Users
    core.Addresses
    core.Emails
    core.OtherTables
StagingDB
    dbo.UsersImport
    dbo.ContactImport

Obviously I'll be migrating from one table to the others via scripts, so it doesn't make much difference what the four-part name is going to be, whether it's just

INSERT INTO core.Users (fieldlist) SELECT fieldlist FROM staging.Users  

or

INSERT INTO core.Users (fieldlist) SELECT fieldlist FROM StagingDB.dbo.Users  

The cons I've identified with the first style are the following:

  • Increases backup size
  • Increases file size (including increasing the file size of the existing database -- autogrowth issues)
  • Negates the potential use of multiple spindles (in my case not an option :( )

Some of the pros I've identified with the first style:

  • The original import data stays with the database in case of future questions (until someone truncates it, of course)
  • It gets backed up with the latter data in case you need to recover something

What would be considered a best practice in this situation and why?

Convert units of measurement

Posted: 25 Jun 2013 07:52 PM PDT

Looking to calculate the most suitable unit of measurement for a list of substances where the substances are given in differing (but compatible) unit volumes.

Unit Conversion Table

The unit conversion table stores various units and how those units relate:

id  unit          coefficient                  parent_id
36  "microlitre"  0.0000000010000000000000000  37
37  "millilitre"  0.0000010000000000000000000   5
 5  "centilitre"  0.0000100000000000000000000  18
18  "decilitre"   0.0001000000000000000000000  34
34  "litre"       0.0010000000000000000000000  19
19  "dekalitre"   0.0100000000000000000000000  29
29  "hectolitre"  0.1000000000000000000000000  33
33  "kilolitre"   1.0000000000000000000000000  35
35  "megalitre"   1000.0000000000000000000000   0

Sorting by the coefficient shows that the parent_id links a child unit to its numeric superior.

This table can be created in PostgreSQL using:

CREATE TABLE unit_conversion (
  id serial NOT NULL, -- Primary key.
  unit text NOT NULL, -- Unit of measurement name.
  coefficient numeric(30,25) NOT NULL DEFAULT 0, -- Conversion value.
  parent_id integer NOT NULL DEFAULT 0, -- Relates units in order of increasing measurement volume.
  CONSTRAINT pk_unit_conversion PRIMARY KEY (id)
)

There should be a foreign key from parent_id to id.

Substance Table

The Substance Table lists specific quantities of substances. For example:

id  unit          label    quantity
1   "microlitre"  mercury  5
2   "millilitre"  water    500
3   "centilitre"  water    2
4   "microlitre"  mercury  10
5   "millilitre"  water    600

The table might resemble:

CREATE TABLE substance (
  id bigserial NOT NULL, -- Uniquely identifies this row.
  unit text NOT NULL, -- Foreign key to unit conversion.
  label text NOT NULL, -- Name of the substance.
  quantity numeric( 10, 4 ) NOT NULL, -- Amount of the substance.
  CONSTRAINT pk_substance PRIMARY KEY (id)
)

Problem

How would you create a query that finds a measurement to represent the sum of the substances using the fewest digits that has a whole number (and optionally real component)?

For example, how would you return:

quantity  unit        label
      15  microlitre  mercury
     3.1  centilitre  water

Mostly, I'm having troubles picking "centilitre" over "millilitre", taking into consideration millitre's parent measurement unit being the centilitre.

Source Code

So far I have (obviously non-working):

-- Normalize the quantities
select
  sum( coefficient * quantity ) AS kilolitres
from
  unit_conversion uc,
  substance s
where
  uc.unit = s.unit
group by
  s.label

Ideas

Does this require using log10 to determine the number of digits?
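
One way to sidestep log10 arithmetic: normalise each label's total to kilolitres, then join back to unit_conversion and keep, per label, the largest unit whose coefficient does not exceed the total (which yields a whole-number component). A PostgreSQL sketch:

SELECT DISTINCT ON (totals.label)
       round(totals.kilolitres / uc.coefficient, 2) AS quantity,
       uc.unit,
       totals.label
FROM (
    SELECT s.label, SUM(uc.coefficient * s.quantity) AS kilolitres
    FROM substance s
    JOIN unit_conversion uc ON uc.unit = s.unit
    GROUP BY s.label
) AS totals
JOIN unit_conversion uc ON uc.coefficient <= totals.kilolitres
ORDER BY totals.label, uc.coefficient DESC;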

Does my.cfg affect number of inserts / sec?

Posted: 25 Jun 2013 04:17 PM PDT

I'm a complete noob to MySQL, and after a few hours of InnoDB tuning I got nowhere. Either I'm doing something wrong, which I really hope :), or my.cfg settings don't affect insert performance?

On many websites I read that it does, so here it goes: I'll start with the basics and try to explain my steps. Bear with me, and I hope someone can point out what I'm doing wrong.

Server info: VPS on Tilaa, 8 GB RAM, 120 GB RAID 10, 4 x 2.4 GHz (90%, so 8.6 GHz).

Database info: The table engine used is InnoDB. The table is named persons; the primary key is a "random" bigint (this ID is provided by a third party), plus 6 other columns which are all ints, and one is a date. 2 triggers are fired, one after insert and one after update (they insert each row into a mapping table with 2 columns (int, bigint), so I have a gapless table, which I need).

Current number of inserts:
At the moment I am able to insert 800 rows a second into the persons table. But based on the "simple" table and the decent VPS, I should get more, right? I read somewhere that InnoDB is not very fond of the random ID part, but this still seems slow, right?

Steps I took: I opened the my.cfg file located at /etc/my.cfg and saw this:

[mysqld]
innodb_file_per_table=1
default-storage-engine=MyISAM

So basically this doesn't do a lot. After reading some questions on DBA Stack Exchange and surfing the Internet, I added these extra settings:

key_buffer = 128M
thread_stack = 4M
thread_cache_size = 8
table_cache = 8192
max_heap_table_size = 256M
query_cache_limit = 4M
query_cache_size = 512M
innodb_buffer_pool_size = 4G
innodb_buffer_pool_instances = 4
innodb_log_file_size = 512M

After saving these settings I deleted the files ib_logfile1, ib_logfile0 and ibdata1 in the folder /var/lib/mysql; I read somewhere that this must be done. Last, I restarted the MySQL server with cPanel WHM.

Finally, running the insert query (10,000 times) I again see a 9.7 second runtime, which is exactly the same as before the change to the cfg file.

Does anyone have an idea what I'm doing wrong or forgetting? Side note: when I open phpMyAdmin and go to system variables, I do indeed see buffer pool size 4096 MB (so the settings really have changed?).

Any help is welcome!
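
If each row is inserted with autocommit on, the per-commit log flush (not the buffer pool) usually caps the rate at a few hundred rows per second, which would explain why the my.cfg changes made no difference. Two standard levers, sketched (the durability trade-off is yours to judge):

-- Batch many inserts into one transaction: one log flush per COMMIT instead of per row
START TRANSACTION;
INSERT INTO persons (/* columns */) VALUES (/* ... */);
-- ... more rows ...
COMMIT;

-- Or relax the flush-per-commit behaviour (can lose ~1 second of commits on a crash)
SET GLOBAL innodb_flush_log_at_trx_commit = 2;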

InnoDB pop queue

Posted: 25 Jun 2013 04:09 PM PDT

I have a question about implementing a queue with an InnoDB table, using the Python/Django framework. In a multiprocessing environment, my code hits a deadlock. Please see my question on StackOverflow for all the details.

Postgresql not starting correctly, or is it repairing itself first?

Posted: 25 Jun 2013 07:30 PM PDT

I have never seen this problem before. I had problems and many postgresql processes were stuck, so I killed them with -KILL...

When I tried to restart, it said that it cannot restart, but the daemon continues to run and uses a little processor and a lot of I/O. Is it trying to repair the database?

I get no log at all. I think there is a way to increase log output... I'll look into that. At this point, the socket to connect to the server doesn't get created, but the server does not quit or emit any error/message, so I have no clue what is going on!?

If anyone has a clue, I'd be glad to hear about it.

How to find mongo document by ObjectID age

Posted: 25 Jun 2013 03:19 PM PDT

I have a collection of documents and I'd like to pull the subset created after a certain point in time. I understand the timestamp of creation is encoded in each document's ObjectId (assuming they are auto-generated). I see the ObjectId has a getTimestamp method that returns that portion of the ObjectId as an ISODate.

I'm not very fluent in mongo and am having trouble constructing this seemingly simple query.

For bonus points, once I figure out the "where clause", if you will, I want to select a single field from the documents, using mongodump or whatever else might be available, to export the results to a text file via a mongo shell.

Wrong return results

Posted: 25 Jun 2013 01:09 PM PDT

I'm trying to grab all the rows that have a risk of Critical or High, where the description, synopsis, solution or cve is like 'password'. But it keeps showing all rows, not just the rows with a risk of Critical or High.

SELECT host, plugin_id, min(cve) as cve, risk, synopsis, description, solution, count(distinct(plugin_id))
FROM internal_network
WHERE risk in ('Critical','High') and description like '%password%'
   or synopsis like '%password%'
   or solution like '%password%'
   or cve like '%password%'
GROUP BY risk, plugin_id, host, synopsis, description, solution;

Sample of the wrong output (abbreviated; note both rows have risk = None):

192.168.21.128 | 17651 |  | None | It is possible to retrieve the remote host's password policy using the supplied credentials. | n/a | 1
192.168.25.126 | 17651 |  | None | It is possible to retrieve the remote host's password policy using the supplied credentials. | n/a | 1

If I execute the following query I get the correct return:

SELECT host, plugin_id, min(cve) as cve, risk, synopsis, description, solution, count(distinct(plugin_id))
FROM internal_network
WHERE risk in ('Critical','High') and description like '%password%'
GROUP BY risk, plugin_id, host, synopsis, description, solution;

Sample of the correct output (abbreviated):

host           | plugin_id | cve           | risk     | synopsis                                                                  | solution                                             | count
172.18.12.150  | 11255     | CVE-1999-0502 | Critical | An account on the remote host uses a known password.                     | Change the password for this account or disable it.  | 1
10.124.2.10    | 61708     |               | Critical | A VNC server running on the remote host is secured with a weak password. | Secure the VNC service with a strong password.       | 1
172.18.12.130  | 10205     | CVE-1999-0651 | High     | The rlogin service is listening on the remote port.                      | Comment out the 'login' line in /etc/inetd.conf      | 1
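
This looks like operator precedence: AND binds more tightly than OR, so the first query effectively means (risk IN (...) AND description LIKE ...) OR synopsis LIKE ... OR solution LIKE ... OR cve LIKE ..., and the OR branches bypass the risk filter. Parenthesizing the OR group should fix it; a sketch:

SELECT host, plugin_id, min(cve) AS cve, risk, synopsis, description, solution,
       count(distinct plugin_id)
FROM internal_network
WHERE risk IN ('Critical', 'High')
  AND (description LIKE '%password%'
       OR synopsis LIKE '%password%'
       OR solution LIKE '%password%'
       OR cve LIKE '%password%')
GROUP BY risk, plugin_id, host, synopsis, description, solution;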

Query returning correct data, and additional data [on hold]

Posted: 25 Jun 2013 12:58 PM PDT

Thank you all in advance for any responses.

I am querying various Snort tables in order to produce a report. When I run my current query, I receive the expected results (as verified by our IDS console), but I also get incorrect results.

I suspect the issue is with my JOIN statements.

Here is a link for the Snort database layout:

http://acidlab.sourceforge.net/acid_db_er_v102.html

Here is my current query:

SELECT `event`.`sid`                AS `sid`,
       `event`.`cid`                AS `cid`,
       `event`.`signature`          AS `signature`,
       /* Other columns removed for brevity */
       `signature`.`sig_gid`        AS `sig_gid`,
       `sig_class`.`sig_class_id`   AS `sig_class_sig_class_id`,
       `sig_class`.`sig_class_name` AS `sig_class_name`
FROM   `event`
       INNER JOIN `iphdr`
         ON ( `event`.`cid` = `iphdr`.`cid` )
       INNER JOIN `sensor`
         ON ( `event`.`sid` = `sensor`.`sid` )
       INNER JOIN `signature`
         ON ( `event`.`signature` = `signature`.`sig_id` )
       INNER JOIN `sig_class`
         ON ( `signature`.`sig_class_id` = `sig_class`.`sig_class_id` )
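
One thing to check, assuming the standard Snort/ACID schema where event and iphdr share the composite key (sid, cid): joining iphdr on cid alone will match rows from other sensors and multiply the results. A sketch of the tightened join:

FROM   `event`
       INNER JOIN `iphdr`
               ON `event`.`sid` = `iphdr`.`sid`
              AND `event`.`cid` = `iphdr`.`cid`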

Oracle 11g bug ? not returning the record until I triggered index to invisible and then to visible

Posted: 25 Jun 2013 12:42 PM PDT

We are using Oracle 11g, 11.2.0.3.

We know a record exists in a table but a select is not returning it for some odd reason.

  1. The execution plan for the general query (not forcing any index) shows that the index oneIndex is used. No record is returned.
  2. Forcing another index, the same query returns the missing record.
  3. Setting oneIndex to invisible and back to visible...
  4. Re-executing the initial query (not forcing any index) does return the record (but somehow the execution plan no longer uses oneIndex). Maybe it woke up and found that the stats were old???
  5. Taking the first query again and forcing the use of the faulty index (oneIndex) works fine now.

Facts: Statistics were quite old.

Session Parameter for ORACLE SQL Developer:

ALTER SESSION SET
    OPTIMIZER_FEATURES_ENABLE = '11.2.0.1'
    OPTIMIZER_MODE = FIRST_ROWS_1
    "_HASH_JOIN_ENABLED" = FALSE
    "_PUSH_JOIN_PREDICATE" = TRUE
    "_PUSH_JOIN_UNION_VIEW" = TRUE
    "_COMPLEX_VIEW_MERGING" = TRUE
    "_TABLE_SCAN_COST_PLUS_ONE" = TRUE
    "_ORDERED_NESTED_LOOP" = TRUE
    "_NEW_INITIAL_JOIN_ORDERS" = TRUE
    "_UNNEST_SUBQUERY" = TRUE
    "_INDEX_JOIN_ENABLED" = TRUE
    "_LIKE_WITH_BIND_AS_EQUALITY" = TRUE
    NLS_SORT = BINARY
    NLS_COMP = BINARY
    NLS_NUMERIC_CHARACTERS = '.,'
    QUERY_REWRITE_ENABLED = FORCE
    CURSOR_SHARING = EXACT
    DB_FILE_MULTIBLOCK_READ_COUNT = 0
    OPTIMIZER_INDEX_COST_ADJ = 5
    OPTIMIZER_INDEX_CACHING = 95

The SQL is a bit hard to follow since it is produced by a Content Management application. I will not provide it for now.

  • Q1: Why is an index no longer chosen by the execution plan after switching to invisible and then back to visible?

  • Q2: How can Oracle momentarily not see a record when searched from an index instead of another index?

Page Break is splitting in the middle of a single row in SSRS

Posted: 25 Jun 2013 12:56 PM PDT

I have an SSRS report for an Invoice and it generally works perfectly but occasionally it will page break in the middle of a row in the main Tablix. The row will split and leave part of the text on one page and the rest on the next. The Tablix has no inherent page breaking. I was just relying on it to break between rows (regardless of which rows). There are a couple rows that repeat on each page. There is a second Tablix below the Detail one with Summary Totals. This is just some background info about the report. I'm relatively new to SSRS and haven't had the greatest luck with the formatting thus far.

Thanks for any help!

Is it possible to build an UNDO framework for postgres?

Posted: 25 Jun 2013 05:25 PM PDT

I was thinking of a table which would automatically log all transactions made to other tables, along with the command to undo those modifications. Every time you issue an UNDO() command, you would read the statement from the log table and execute it.

Example:

You issue a

INSERT INTO sales (id, client_id, product, value)
VALUES (4621, 231, 'Hamburguer', 500)

so the log-trigger would do

INSERT INTO log (undo) VALUES ('DELETE FROM sales WHERE id = 4621')  

When you issue UNDO() the function would issue something like EVAL(SELECT undo FROM log ORDER BY id DESC LIMIT 1).

Someone experienced, please tell me if this is possible.
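
This is possible in PostgreSQL (9.1+ assumed for format()): a trigger records a compensating statement, and a function EXECUTEs the most recent one, which plays the role of the EVAL above. A minimal sketch for INSERTs on sales, with no concurrency handling, purely as an illustration:

CREATE TABLE log (id serial PRIMARY KEY, undo text NOT NULL);

CREATE OR REPLACE FUNCTION log_sales_undo() RETURNS trigger AS $$
BEGIN
    -- Record the statement that would reverse this insert
    INSERT INTO log (undo)
    VALUES (format('DELETE FROM sales WHERE id = %s', NEW.id));
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER sales_undo_log
AFTER INSERT ON sales
FOR EACH ROW EXECUTE PROCEDURE log_sales_undo();

CREATE OR REPLACE FUNCTION undo() RETURNS void AS $$
DECLARE
    stmt text;
BEGIN
    -- Pop the newest undo entry and run it
    DELETE FROM log
    WHERE id = (SELECT max(id) FROM log)
    RETURNING undo INTO stmt;

    IF stmt IS NOT NULL THEN
        EXECUTE stmt;
    END IF;
END;
$$ LANGUAGE plpgsql;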

Latin1 to UTF8 on large MySQL database, minimal downtime

Posted: 25 Jun 2013 04:23 PM PDT

Let me lay out the basic scenario for the problem I'm up against.

  • I'm running MySQL 5.1 and have a single database with 2 large MyISAM tables (9M and 20M rows respectively).
  • Each of these tables has several VARCHAR and TEXT columns which are currently set to the latin1 character set and latin1_ci collation.
  • These tables are actively read / written to, so we want to avoid as much downtime as possible.

I'm confident in actually converting the data between the two character sets (using ALTER TABLE to convert columns to their binary counterparts, then back to the correct original column type in the correct charset). What I'm not sure about is how to go about the conversion without locking the entire table for the duration (which could take hours / days due to the number of rows and indexes).
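
For reference, a sketch of the binary round-trip described above for a single VARCHAR column (the column name and lengths are placeholders; TEXT columns go through BLOB instead of VARBINARY):

ALTER TABLE big_table MODIFY description VARBINARY(255);
ALTER TABLE big_table MODIFY description VARCHAR(255) CHARACTER SET utf8;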

One idea is to set up replication to a slave; turn off the replication; convert everything; re-enable replication and let the slave catch up; then finally promote the slave to master.

However, this doesn't solve the issue of how to set up replication in the first place. From what I've read, even mysqlhotcopy & xtrabackup still require table locks for MyISAM tables.

Is there another option I've missed, or am I going to have to turn off my entire application in order to set this up? Thanks for any advice.

Postgres Write Performance on Intel S3700 SSD

Posted: 25 Jun 2013 12:05 PM PDT

I'm not seeing the Postgres write performance increases I thought I would with a single SSD vs a hardware RAID 10 array of (8) 15k RPM SAS drives.

I have a Dell R820 with a PERC H700 hardware RAID card and 8 15k RPM SAS drives in a RAID 10 array, as well as an 800GB Intel s3700 SSD. The server has 128GB of RAM and 64 cores of Xeon E5-4640 at 2.40GHz, running CentOS 6.4 and Postgres 9.2.4.

I'm using pgbench to compare the SAS drives in a RAID 10 array to the single SSD.

15k RPM SAS RAID 10 Results

pgbench -U postgres -p 5432 -T 50 -c 10 pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 1
query mode: simple
number of clients: 10
number of threads: 1
duration: 50 s
number of transactions actually processed: 90992
tps = 1819.625430 (including connections establishing)
tps = 1821.417384 (excluding connections establishing)

Single Intel s3700 SSD Results

pgbench -U postgres -p 5444 -T 50 -c 10 pgbench
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 1
query mode: simple
number of clients: 10
number of threads: 1
duration: 50 s
number of transactions actually processed: 140597
tps = 2811.687286 (including connections establishing)
tps = 2814.578386 (excluding connections establishing)

In real world usage we have a very write-intensive process that takes about 7 minutes to complete, and the RAID 10 array and SSD are within 10 or 15 seconds of each other.

I expected far better performance from the SSD.

Here are Bonnie++ results for the SSD:

Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
openlink2.rady 252G   532  99 375323  97 183855  45  1938  99 478149  54 +++++ +++
Latency             33382us   82425us     168ms   12966us   10879us   10208us
Version  1.96       ------Sequential Create------ --------Random Create--------
openlink2.radyn.com -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  5541  46 +++++ +++ +++++ +++ 18407  99 +++++ +++ +++++ +++
Latency              1271us    1055us    1157us     456us      20us     408us

Here are Bonnie++ results for the RAID 10 15k RPM drives:

Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
openlink2.rady 252G   460  99 455060  98 309526  56  2156  94 667844  70 197.9  85
Latency             37811us   62175us     393ms   75392us     169ms   17633us
Version  1.96       ------Sequential Create------ --------Random Create--------
openlink2.radyn.com -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 12045  95 +++++ +++ +++++ +++ 16851  98 +++++ +++ +++++ +++
Latency              7879us     504us     555us     449us      24us     377us

Here are dd results for the SSD:

dd if=/dev/zero of=/path/on/ssd bs=1M count=4096 conv=fdatasync,notrunc
4294967296 bytes (4.3 GB) copied, 12.7438 s, 337 MB/s

And here are dd results for the RAID 10 15k RPM drives:

dd if=/dev/zero of=/path/on/array bs=1M count=4096 conv=fdatasync,notrunc
4294967296 bytes (4.3 GB) copied, 8.45972 s, 508 MB/s

I'd post the Postgres config, but it's clear the SSD isn't outperforming the RAID 10 array, so it doesn't seem applicable.

So is the SSD performing as it should be?

Or is the RAID 10 with fast drives just so good that it outperforms a single SSD? A RAID 10 array of the SSD's would be awesome, but at $2,000 each the $8,000 price tag is hard to justify (unless we were sure to see the 2x to 5x gains we were hoping for in real world performance gains).

pg_upgrade fails with lc_ctype cluster values do not match

Posted: 25 Jun 2013 07:46 PM PDT

I'm upgrading my PostgreSQL version 9.1.4 database to version 9.2.4. Both the old and the new version are the bundled versions of postgresapp.com for Mac OS X.

When trying to upgrade the database I get this error:

# pg_upgrade -b /tmp/Postgres.app/Contents/MacOS/bin \
  -B /Applications/Postgres.app/Contents/MacOS/bin \
  -d var-9.1 \
  -D var

Performing Consistency Checks
-----------------------------
Checking current, bin, and data directories                 ok
Checking cluster versions                                   ok
Checking database user is a superuser                       ok
Checking for prepared transactions                          ok
Checking for reg* system OID user data types                ok
Checking for contrib/isn with bigint-passing mismatch       ok
Creating catalog dump                                       ok

lc_ctype cluster values do not match:  old "de_DE", new "de_DE.UTF-8"
Failure, exiting

I searched this error message and found no useful tip to fix this. Any idea?

Why is Postgres on 64-bit CentOS 6 significantly slower than Postgres on 32-bit CentOS 6

Posted: 25 Jun 2013 07:44 PM PDT

We have some Postgres + PostGIS applications that run well on CentOS 6 32-bit machines.

We've recently been testing them on CentOS 6 64-bit machines, with similar configuration (all our machines are managed by Puppet), and the applications run significantly slower.

Even loading the database schemas take several times as long. (On the 32-bit machines, this takes 7 seconds to load PostGIS, the schema, and fixtures; on the 64-bit machines, this takes 50-110 seconds.)

We initially had the problems with virtual servers, so ran tests on a physical machine and found the same problems. Note that the physical 64-bit machine is slower than virtual 32-bit machines.

The databases are not large at all.

We've experimented with various parameters on postgresql.conf and not gotten any improvement.

Is this a known issue with Postgres or PostGIS on 64-bit CentOS?

If not, how do we diagnose this?

How to repair Microsoft.SqlServer.Types assembly

Posted: 25 Jun 2013 04:39 PM PDT

When I run a checkdb('mydb') this is the only error message printed.

Msg 8992, Level 16, State 1, Line 1
Check Catalog Msg 3857, State 1: The attribute (clr_name=NULL) is required but is missing for row (assembly_id=1) in sys.assemblies.

It is referring to 'Microsoft.SqlServer.Types'. I do see that in this DB the clr_name is blank, but under the master DB there is a value in there.

I tried to drop or alter the assembly to add this value, but it's restricted.

BTW, this DB was recently upgraded from SQL Server 2005 to 2008 R2.

Unable to connect to Amazon RDS instance

Posted: 25 Jun 2013 01:38 PM PDT

I recently created an oracle instance on Amazon RDS. Unfortunately, I'm not able to connect to the instance using Oracle SQL Developer.

The (relevant) information I have from Amazon;

Endpoint - The DNS address of the DB Instance: xxx.yyy.eu-west-1.rds.amazonaws.com

DB Name - The definition of the term Database Name depends on the database engine in use. For the MySQL database engine, the Database Name is the name of a database hosted in your Amazon DB Instance. An Amazon DB Instance can host multiple databases. Databases hosted by the same DB Instance must have a unique name within that instance. For the Oracle database engine, Database Name is used to set the value of ORACLE_SID, which must be supplied when connecting to the Oracle RDS instance: ZZZ

Master Username - Name of master user for your DB Instance: org

Port - Port number on which the database accepts connections: 1521

From this information, the connection settings in SQL Developer are pretty obvious, so I don't really see what I could be missing...

Will Partitions and Indexes on the same table help in performace of Inserts and Selects?

Posted: 25 Jun 2013 02:39 PM PDT

I have a table containing the list of visitors and this table has the following information.

  • Visitor Browser Information
  • Visitor Location Information
  • Visitor Time Information
  • No of Visits

I have a second table that maintains the history of each visit; if the same visitor visits the site, I insert a row into the second table and update the number of visits in the first table.

The kind of reports that I have to generate for this table are

  1. Count of Visitors/day or days (Search Between days)
  2. Count of Visitors/month
  3. Count of Visitors/year
  4. Count of Visitors/browser or grouped by browsers

On average there are about 20,000 inserts to the second table and about 15,000 inserts to the first table, meaning 5,000 were updates to the first table (5,000 repeat visits).

I need to decide between partitioning the tables by month (sub-partitioned by day) for reports 1, 2 and 3, and indexing the browser-related columns for report 4.

There will be more reports in the future not sure on what clauses.

Does partitioning/sub-partitioning along with indexing help in the performance of inserts and selects?

Should I perform partitioning on both the tables?

I am currently using MySQL 5.5 + InnoDB
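
For the date-based reports, monthly RANGE partitioning is the usual layout in MySQL 5.5. Note MySQL requires the partitioning column to be part of every unique key (including the primary key), and pruning only helps queries that filter on the date column, so report 4 (by browser) would still rely on a secondary index. A sketch, with table and column names assumed:

ALTER TABLE visit_history
PARTITION BY RANGE (TO_DAYS(visit_time)) (
    PARTITION p201305 VALUES LESS THAN (TO_DAYS('2013-06-01')),
    PARTITION p201306 VALUES LESS THAN (TO_DAYS('2013-07-01')),
    PARTITION pmax    VALUES LESS THAN MAXVALUE
);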

"Arithmetic overflow" when initializing SQL Server 2012 replication from backup

Posted: 25 Jun 2013 03:39 PM PDT

I'm initializing SQL Server replication from a backup, by following instructions from here:

http://www.mssqltips.com/sqlservertip/2386/initialize-sql-server-replication-using-a-database-backup/

...but, when I execute

USE MyDatabase
GO
EXEC sp_addsubscription
    @publication = MyDatabasePublication,
    @subscriber = 'AMAZONA-XXXXXXX',
    @destination_db = MyDatabase,
    @sync_type = 'initialize with backup',
    @backupdevicetype = 'disk',
    @backupdevicename = 'D:\Temp\MyDatabasepublication.bak'

I get the following error:

Msg 8115, Level 16, State 2, Procedure sp_MSsetupnosyncsubscriptionwithlsn, Line 237
Arithmetic overflow error converting expression to data type nvarchar.

Any idea why, or at least where I can find this stored procedure to troubleshoot further?

SQL Server 2012, Standard Edition.

UPDATE: It looks like the problem is caused by the fact that the database was created using SQL Server 2008 R2 and then attached here. Anyway, I still need a solution for it.

limit the number of rows returned when a condition is met?

Posted: 25 Jun 2013 08:19 PM PDT

Is it possible to limit the number of rows returned when a condition is met? I am working on a query to check if a student is ready to graduate, and they need to meet a certain number of credits per subject. I don't want all classes, because any class past the number of credits needed can be used for electives.

EDIT: I forgot to mention that this is SQL 2008 R2

I was hoping to be able to do something like this (which I know doesn't work)

select top(sum(credits) > 3.0) c.name, c.subject, c.credits, c.pass
from classes as c
where c.Subject = 'Math' and c.passed = 1

Any help would be great


Data

Subject        | Name              | Grade | Credit | Pass
Social Studies | DL Psychology     | 83    | 0.50   | 1
Social Studies | Global Studies 10 | 82    | 1.00   | 1
Social Studies | Global Studies 9  | 83    | 1.00   | 1
Social Studies | Part. In Govern   | 84    | 0.50   | 1
Social Studies | US History 11     | 87    | 1.00   | 1

Query

select c.Subject,
       c.Name,
       c.credits,
       c.pass,
       c.score
from @classes as c
where (c.Subject = 'Social Studies' and c.pass = 1 and c.Name like '%Econ%')
   or (c.Subject = 'Social Studies' and c.pass = 1 and c.Name like '%Gov%')
   or (c.Subject = 'Social Studies' and c.pass = 1)
group by c.Subject, c.Name, c.credits, c.pass, c.score
having Sum(credits) <= 2.0

I'm expecting to see these rows returned:

Expected Results

Subject        | Name              | Grade | Credit | Pass
Social Studies | Part. In Govern   | 84    | 0.50   | 1
Social Studies | DL Psychology     | 83    | 0.50   | 1
Social Studies | Global Studies 10 | 82    | 1.00   | 1
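
On 2008 R2 (before windowed SUM ... OVER (ORDER BY ...) was available), a correlated running total can cut the list off once the credit threshold is reached; a sketch that keeps each passing class whose preceding credits are still under 2.0 (the ordering column, here Name, is an assumption - use whatever defines which classes count first):

select c.Subject, c.Name, c.credits, c.pass
from @classes as c
where c.Subject = 'Social Studies' and c.pass = 1
  and (select isnull(sum(c2.credits), 0)
       from @classes as c2
       where c2.Subject = c.Subject
         and c2.pass = 1
         and c2.Name < c.Name) < 2.0;  -- running total of the classes ordered before this one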

tempdb logs on same drive as tempdb data or with other logs?

Posted: 25 Jun 2013 02:04 PM PDT

For many reasons I only have 3 hard drives (RAIDed and in an Always-On AG) for all my database files:

  • D: Data
  • E: Logs
  • F: Tempdb

Should the tempdb log file go on F: with the data file(s) or on E:?

My tempdb data file has the highest stalls by far, with the log file 4th out of 24.

In my limited DBA experience (I'm a developer) I would lean toward putting tempdb.ldf on E:, as the writes will all be sequential.

How to remove column output in a for xml path query with a group by expression?

Posted: 25 Jun 2013 11:38 AM PDT

I forget how to prevent a column from being output in a FOR XML PATH query that uses a GROUP BY expression. I used this before, but somehow I lost the article. In the example below, I do not want idForSomething output in my result, but I want to use it as a condition for my inner query.

SELECT
    idForSomething,
    SUM(allSomething) AS [@sum],
    (SELECT innerSomething AS [@inner], innerSomething2 AS [@inner2]
     FROM someTable s2
     WHERE s2.innerSomething = s1.idForSomething
     FOR XML PATH('innerlist'), TYPE)
FROM someTable s1
WHERE idForSomething = 1
GROUP BY idForSomething
FOR XML PATH('listofsomethings')

Added XML Body:

    <listofsomethings @sum="10">          <innerlist @inner="..." @inner2="..." />          <innerlist @inner="..." @inner2="..." />          <innerlist @inner="..." @inner2="..." />      </listofsomethings>  

I will look around again online, but I am asking for the syntax to make SQL Server NOT output the idForSomething column in the final result. I thought it was something like NOOUTPUT, but I can't remember, and that does not work.
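
There is no NOOUTPUT keyword; a column appears in FOR XML PATH output only if it is in the SELECT list, and GROUP BY does not require the grouped column to be selected. So simply dropping idForSomething from the SELECT list (it can stay in WHERE and GROUP BY) removes it; a sketch:

SELECT
    SUM(allSomething) AS [@sum],
    (SELECT innerSomething AS [@inner], innerSomething2 AS [@inner2]
     FROM someTable s2
     WHERE s2.innerSomething = s1.idForSomething
     FOR XML PATH('innerlist'), TYPE)
FROM someTable s1
WHERE idForSomething = 1
GROUP BY idForSomething
FOR XML PATH('listofsomethings')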

Why I don't need to COMMIT in database trigger?

Posted: 25 Jun 2013 11:51 AM PDT

We can't COMMIT/ROLLBACK in DML triggers because the transaction is handled manually after the DML statement. However, database triggers seem to be an exception. For example, suppose there's a database trigger:

CREATE OR REPLACE TRIGGER user_login_as
  AFTER LOGON ON SCHEMA
BEGIN
  INSERT INTO user_login_log(username, log_date, action)
  VALUES (user, sysdate, 'User has logged in');
END user_login_as;

The trigger does not contain an autonomous transaction procedure with a commit inside it, so who is committing the insert? This trigger works like a charm and inserts a new record into the log table after user logon. It smells like hidden Oracle functionality, and I can't find any reference in the Oracle docs about it. I'm using Oracle 11g.

Inserting query result to another table hangs on "Copying to temp table on disk" on MySQL

Posted: 25 Jun 2013 01:10 PM PDT

I started the process of inserting returned results into another table. The query groups the rows by indexed IDs, which reduces 149,000,000 rows to 460,000 rows.

The query includes 3 table INNER JOINs, with each table having about 20,000,000 rows.

Further information, the process completes in about 12 seconds for a test file which has 1000 input rows, and returns 703 rows.

I started the query earlier ### we don't know when earlier is ###, but it is still running in the state "Copying to temp table on disk" after 38,000 seconds (10 and a half hours).

I think there is a problem during the insertion process. What am I probably doing wrong here? If it helps, the operating system of the computer is Windows 7, with 3 GB RAM and an Intel Core2Duo 2.27 GHz processor. ### you forgot to tell us details on the hard drive. One partition in, one out, same disk, same partitions, etc ###

Here's my query as it currently reads:

INSERT INTO kdd.contents
            (adid,
             descriptionwords,
             purchasedkeywordwords,
             titlewords)
SELECT t.adid,
       dt.tokensid,
       pkt.tokensid,
       tt.tokensid
FROM   kdd.training t
       INNER JOIN kdd.purchasedkeywordid_tokensid pkt
               ON t.keywordid = pkt.purchasedkeywordid
       INNER JOIN kdd.titleid_tokensid tt
               ON t.titleid = tt.titleid
       INNER JOIN kdd.descriptionid_tokensid dt
               ON t.descriptionid = dt.descriptionid
GROUP  BY adid;
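
With only 3 GB of RAM, the grouped intermediate result over roughly 149 million joined rows will not fit in memory, so MySQL spills it to an on-disk temp table, which is exactly the state shown. Raising the in-memory temp-table ceiling for the session (the values below are illustrative, not tuned) and making sure every join column is indexed are the usual first levers:

SET SESSION tmp_table_size      = 256 * 1024 * 1024;  -- illustrative 256 MB
SET SESSION max_heap_table_size = 256 * 1024 * 1024;  -- the effective limit is the smaller of the two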

Primary key type change not reflected in foreign keys with MySQL Workbench

Posted: 25 Jun 2013 03:18 PM PDT

I have a problem with MySQL Workbench and primary/foreign keys.

I have some tables with PKs involved in relationship with other tables. If I modify the type of the PK, the type of the FK doesn't automatically update to reflect the change.

Is there any solution? Do I have to manually modify all the relations?

[Articles] More Data Security Issues


More Data Security Issues

Posted: 24 Jun 2013 11:00 PM PDT

The state of data security is getting better in the US, according to the 2013 survey from Symantec. However, we still have issues.
