Friday, June 21, 2013

[MS SQL Server] SQL browser


SQL browser

Posted: 21 Jun 2013 04:14 AM PDT

If I have two instances running on a server and both are named instances, do [b]I have to[/b] do some special configuration of the SQL Browser, or is leaving it at the default OK?
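For what it's worth, a single SQL Browser service handles name-to-port resolution for every named instance on the machine, so the defaults are normally fine as long as the service is running and UDP 1434 is reachable from the clients. A quick sanity check (a sketch, not specific to this setup) is to confirm which TCP port each instance actually ended up listening on:

[code="sql"]
-- Run in each named instance: shows the "Server is listening on" entries
-- from the current error log, i.e. the ports SQL Browser will hand out.
EXEC xp_readerrorlog 0, 1, N'Server is listening on';
[/code]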

Database Mail failing with attachment

Posted: 20 Jun 2013 09:00 AM PDT

We are using Database Mail on SQL Server 2008 R2 to send the results of queries as jobs using SQL Server Agent. We are using an MS Exchange mail server (mail.mycompany.com) and the default port 25. All emails that have the query results in the body of the email work fine. However, those that use @attach_query_result_as_file = 1, or those that have nothing in the body of the email, fail with this error message:

The mail could not be sent to the recipients because of the mail server failure. (Sending Mail using Account 3 (2013-06-20T13:54:39). Exception Message: Cannot send mails to mail server. (Mailbox unavailable. The server response was: 5.7.1 Requested action not taken: message refused). )

The event viewer and error logs on the mail server provide no data, which leads us to believe that the emails are not making it to the mail server. All firewalls and virus protection software have been checked and are not configured to stop emails with attachments from being sent.

The following code from the sproc in question works with no issues whatsoever with the exact setup described above, except it is hosted on SQL Server 2005 (please note that the value for @RecipientList is passed as a parameter to the sproc and that we've added values for @body and @body_format to ensure there is something in the body of the email):

[code="sql"]
DECLARE @SQL nvarchar(MAX)
DECLARE @File varchar(1000)
DECLARE @SubjectLine varchar(200)

SET @File = 'Dupes.txt'
SET @SubjectLine = 'Active devices with duplicate MfgSerialNumbers'

SET @SQL = 'SET NOCOUNT ON;
DECLARE @Dupes TABLE ( MfgSerialNumber varchar(100) )
INSERT INTO @Dupes
SELECT MfgSerialNumber FROM Devices
WHERE Active = 1 AND MfgSerialNumber <> ''NO SERIAL NUMBER FOUND''
GROUP BY MfgSerialNumber
HAVING ( COUNT(MfgSerialNumber) > 1 )
ORDER BY MfgSerialNumber;

DECLARE @Data TABLE ( Customer varchar(100)
    ,Campus varchar(60)
    ,Bldg varchar(75)
    ,Floor varchar(15)
    ,CostCenter varchar(50)
    ,Dept varchar(75)
    ,Area varchar(100)
    ,Location varchar(100)
    ,AuxBarcode varchar(10)
    ,MfgSerialNumber varchar(50)
    ,DeviceID int
    ,Active varchar(10)
    ,Mfg varchar(30)
    ,Model varchar(60) )

INSERT INTO @Data
SELECT TOP (100) PERCENT dbo.Customers.Name AS Customers, dbo.Campuses.Name AS Campus, dbo.Buildings.Name AS Building, dbo.Floors.Floor, dbo.Departments.DepartmentNumber, dbo.Departments.DepartmentName,
CASE WHEN (CHARINDEX(''\'', dbo.Locations.LocationDescription)) = 0 THEN '''' ELSE LTRIM(RTRIM(LEFT(dbo.Locations.LocationDescription, (CHARINDEX(''\'', dbo.Locations.LocationDescription) - 1)))) END AS Area,
CASE WHEN (CHARINDEX(''\'', dbo.Locations.LocationDescription)) = 0 THEN dbo.Locations.LocationDescription ELSE LTRIM(RIGHT(dbo.Locations.LocationDescription, (LEN(dbo.Locations.LocationDescription) - (CHARINDEX(''\'', dbo.Locations.LocationDescription))))) END AS Location,
dbo.Devices.AuxBarcode, dbo.Devices.MfgSerialNumber, dbo.Devices.DeviceID,
CASE WHEN Devices.Active = 0 THEN ''FALSE'' ELSE ''TRUE'' END AS Active,
dbo.DeviceMfgs.MfgName, dbo.DeviceModels.ModelName
FROM dbo.Customers
INNER JOIN dbo.Campuses ON dbo.Customers.CustomerID = dbo.Campuses.CustomerID
INNER JOIN dbo.CampusBusinessUnits ON dbo.Campuses.CampusID = dbo.CampusBusinessUnits.CampusID
INNER JOIN dbo.BusinessUnits ON dbo.CampusBusinessUnits.BusinessUnitID = dbo.BusinessUnits.BusinessUnitID
INNER JOIN dbo.Buildings ON dbo.Campuses.CampusID = dbo.Buildings.CampusID
INNER JOIN dbo.Floors ON dbo.Buildings.BuildingID = dbo.Floors.BuildingID
INNER JOIN dbo.Departments ON dbo.Campuses.CampusID = dbo.Departments.CampusID AND dbo.CampusBusinessUnits.CampusBusinessUnitID = dbo.Departments.CampusBusinessUnitID
INNER JOIN dbo.SubDepartments ON dbo.Departments.DepartmentID = dbo.SubDepartments.DepartmentID
INNER JOIN dbo.Devices ON dbo.SubDepartments.SubDepartmentID = dbo.Devices.SubDepartmentID
INNER JOIN dbo.Locations ON dbo.Devices.LocationID = dbo.Locations.LocationID AND dbo.Floors.FloorID = dbo.Locations.FloorID
INNER JOIN dbo.DeviceMfgs ON dbo.Devices.DeviceMfgID = dbo.DeviceMfgs.DeviceMfgID
INNER JOIN dbo.DeviceModels ON dbo.Devices.DeviceModelID = dbo.DeviceModels.DeviceModelID
INNER JOIN dbo.DeviceTypes ON dbo.DeviceModels.DeviceTypeID = dbo.DeviceTypes.DeviceTypeID
INNER JOIN dbo.PrintTechnology ON dbo.DeviceModels.PrintTechnologyID = dbo.PrintTechnology.PrintTechnologyID
LEFT OUTER JOIN dbo.NetworkInterfaces ON dbo.Devices.DeviceID = dbo.NetworkInterfaces.DeviceID
WHERE (dbo.Customers.Active = 1) AND (dbo.Customers.CustomerID <> 81) AND (dbo.Campuses.Active = 1) AND (dbo.Devices.Active = 1)
AND (dbo.Devices.MfgSerialNumber IN (SELECT MfgSerialNumber From @Dupes))
ORDER BY dbo.Customers.Name, dbo.Devices.MfgSerialNumber;

select * From @Data;'

EXEC msdb.dbo.sp_send_dbmail
    @recipients = @RecipientList,
    @query_result_separator = ' ',
    @subject = @SubjectLine,
    @body = 'end of message',
    @body_format = 'text',
    @profile_name = 'appropriate profile name that works with other emails',
    @query = @SQL,
    @attach_query_result_as_file = 1,
    @query_attachment_filename = @File,
    @query_result_header = 0,
    @query_result_no_padding = 1,
    @execute_query_database = 'appropriate db name'
[/code]

I've gone through a good number of the posts regarding Database Mail on this site but did not come across anything like this problem. I apologize if a solution has been posted previously.

Thank you in advance.
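One observation worth testing: the 5.7.1 "message refused" response comes back from an SMTP conversation, so something is answering on port 25 even if the Exchange logs show nothing. A sketch to narrow it down is to send a trivial one-row attachment through the same profile; if this also fails with 5.7.1, the mail server (or a relay in front of it) is refusing attachments from this host and the problem is not your query. Profile name and recipient below are placeholders:

[code="sql"]
EXEC msdb.dbo.sp_send_dbmail
    @profile_name                = 'YourProfile',
    @recipients                  = 'you@mycompany.com',
    @subject                     = 'DB Mail attachment test',
    @body                        = 'test',
    @query                       = 'SELECT 1 AS test_value;',
    @attach_query_result_as_file = 1,
    @query_attachment_filename   = 'test.txt';
[/code]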

SQL Linked Server to MySQL Problem

Posted: 21 Jun 2013 01:19 AM PDT

I successfully downloaded the MySQL ODBC drivers and installed them. There are two: ANSI and Unicode. Both test connections to MySQL succeed and I can select the MySQL database that I want to use. Is the ANSI driver the correct one?

I am having trouble getting the linked server set up. I have found various instructions online, but I must be doing something wrong, because I cannot establish a connection to MySQL. Any good "MySQL Linked Server for Dummies" articles to recommend?

EDIT: Running SQL 2005 64 bit, so I should have posted in the SQL 2005 forum.
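For what it's worth, the ANSI driver is the one most often suggested for linked servers through the ODBC provider, though either can work. A sketch of the usual setup, assuming a 64-bit System DSN named MySQL_DSN created with the MySQL ODBC driver and a placeholder MySQL login:

[code="sql"]
EXEC master.dbo.sp_addlinkedserver
     @server     = N'MYSQL_LINK',
     @srvproduct = N'MySQL',
     @provider   = N'MSDASQL',
     @datasrc    = N'MySQL_DSN';

EXEC master.dbo.sp_addlinkedsrvlogin
     @rmtsrvname  = N'MYSQL_LINK',
     @useself     = 'FALSE',
     @locallogin  = NULL,
     @rmtuser     = N'mysql_user',
     @rmtpassword = N'********';

-- Four-part names are often unreliable against MySQL; OPENQUERY is the safer test:
SELECT * FROM OPENQUERY(MYSQL_LINK, 'SELECT 1');
[/code]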

Tuning Parallelism

Posted: 20 Jun 2013 06:34 PM PDT

Hi All

I've been trying to tune Cost Threshold for Parallelism on a test system, using the query below, taken from [url]http://www.sqlskills.com/blogs/jonathan/tuning-cost-threshold-for-parallelism-from-the-plan-cache/[/url]. I have isolated a query that has a subtree cost of 93. I changed my Cost Threshold for Parallelism to 95 but the query still uses parallelism. Am I missing something here?

Query:
[code="sql"]
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT query_plan AS CompleteQueryPlan,
       n.value('(@StatementText)[1]', 'VARCHAR(4000)') AS StatementText,
       n.value('(@StatementOptmLevel)[1]', 'VARCHAR(25)') AS StatementOptimizationLevel,
       n.value('(@StatementSubTreeCost)[1]', 'VARCHAR(128)') AS StatementSubTreeCost,
       n.query('.') AS ParallelSubTreeXML,
       ecp.usecounts,
       ecp.size_in_bytes
FROM sys.dm_exec_cached_plans AS ecp
CROSS APPLY sys.dm_exec_query_plan(plan_handle) AS eqp
CROSS APPLY query_plan.nodes('/ShowPlanXML/BatchSequence/Batch/Statements/StmtSimple') AS qn(n)
WHERE n.query('.').exist('//RelOp[@PhysicalOp="Parallelism"]') = 1
[/code]

Thanks
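Two things are probably in play here (an explanatory sketch, not specific to your plan). First, changing the threshold does not invalidate plans that are already in cache, so the old parallel plan keeps being reused until it is recompiled. Second, the threshold is compared against the estimated cost of the *serial* plan, while the 93 you see in the cached plan is the cost of the parallel version, which can be well below the serial cost.

[code="sql"]
-- Make sure the new threshold is in effect, then force a recompile so the
-- optimizer re-costs the statement (test system only; FREEPROCCACHE clears
-- the whole plan cache).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 95;
RECONFIGURE;

DBCC FREEPROCCACHE;   -- or add OPTION (RECOMPILE) to just that query
[/code]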

Can't access database deferred transactions

Posted: 20 Jun 2013 08:57 AM PDT

Hi, we have a big problem. Today we changed a table structure in the database. While SQL Server was working on it, the disk where our database lives failed. After fixing that we restarted the server and our database started to recover. These are the logs:

[code="plain"]
2013-06-20 18:08:26.94 spid16s Recovery of database 'sistema' (7) is 96% complete (approximately 67 seconds remain). Phase 2 of 3. This is an informational message only. No user action is required.
2013-06-20 18:08:31.70 spid16s Recovery of database 'sistema' (7) is 97% complete (approximately 62 seconds remain). Phase 2 of 3. This is an informational message only. No user action is required.
2013-06-20 18:08:51.70 spid16s Recovery of database 'sistema' (7) is 97% complete (approximately 42 seconds remain). Phase 2 of 3. This is an informational message only. No user action is required.
2013-06-20 18:08:53.61 spid16s Recovery of database 'sistema' (7) is 98% complete (approximately 40 seconds remain). Phase 2 of 3. This is an informational message only. No user action is required.
2013-06-20 18:09:11.89 spid16s Recovery of database 'sistema' (7) is 99% complete (approximately 20 seconds remain). Phase 2 of 3. This is an informational message only. No user action is required.
[/code]

After the database recovered we can't access the table or its properties (if we try, Management Studio says there are blockings). If I run a query against sys.databases I see that the database is ONLINE. If I try to run DBCC CHECKDB, it says there are deferred transactions.

SQL Server version: Microsoft SQL Server 2008 (SP3) - 10.0.5500.0 (X64) Sep 21 2011 22:45:45 Copyright (c) 1988-2008 Microsoft Corporation [b]Enterprise[/b] Edition (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1)

I don't know what to do now, and I don't know whether the server will try to roll back the deferred transactions when it starts. Thanks for any help. Greetings from Argentina.
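Some background that may help: a deferred transaction is one that recovery could not roll back because the data it needs is damaged or offline, and it keeps its locks (hence the blocking) until that data is brought back, typically by restoring the damaged file, filegroup or pages from backup. A diagnostic sketch:

[code="sql"]
-- Deferred transactions hold their locks under the internal session id -3,
-- and damaged pages, if any, are recorded in msdb.
SELECT resource_type, resource_database_id, request_mode, request_status
FROM sys.dm_tran_locks
WHERE request_session_id = -3;

SELECT * FROM msdb.dbo.suspect_pages;
[/code]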

Linked Server Help

Posted: 20 Jun 2013 09:55 AM PDT

Hi, I am unable to query the database on the remote server. Could you please refer to the pic below and tell me the necessary actions? I am using a linked server to query the database from my local server (ABC12345) against the other server (25.24.147.25); please let me know if you need additional information. On the other server (25.24.147.25) I only have read permissions.

Msg 18456, Level 14, State 1, Line 1
Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'.

Thanks
Aswin
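The ANONYMOUS LOGON failure is the classic Kerberos "double hop": your Windows credentials don't flow from your workstation through ABC12345 on to 25.24.147.25 unless delegation and SPNs are configured. A common workaround (a sketch; the remote SQL login shown is a placeholder that would have to exist on 25.24.147.25 with read-only rights) is to map the linked server to an explicit remote login instead of relying on pass-through Windows authentication:

[code="sql"]
EXEC sp_addlinkedsrvlogin
    @rmtsrvname  = N'25.24.147.25',
    @useself     = 'FALSE',
    @locallogin  = NULL,            -- applies to all local logins
    @rmtuser     = N'readonly_user',
    @rmtpassword = N'********';
[/code]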

[SQL 2012] GPT Partitions


GPT Partitions

Posted: 21 Jun 2013 02:54 AM PDT

Hello, I installed Windows 2012 and have three additional disk arrays that I'm going to configure for SQL 2012 data and logs. Are there any issues with creating the data and log disks as GPT partitions?

Thank you,
Terry

Windows Cluster with AG

Posted: 20 Jun 2013 07:29 AM PDT

So I have a 2012 windows cluster with three nodes. Each node has sql 2012 Enterprise sp1. We have been playing around in our lab environments trying things out, and we blew it up. The listener points to the primary sql server, but the windows cluster points to a secondary. Also in cluster manager it shows current host server as our secondary as well. I am trying to figure out what needs to be done to get the current host back to the primary. I have tried a few things, including this: http://technet.microsoft.com/en-us/library/ee461002.aspx No luck.
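For what it's worth, the "current host server" in Failover Cluster Manager is just the node that owns the core cluster resource group; it does not have to match the AG primary, and the listener follows the availability group rather than that group. If you still want the core group back on the original node, a sketch (the node name is a placeholder):

[code="powershell"]
Import-Module FailoverClusters
Move-ClusterGroup -Name "Cluster Group" -Node "PrimaryNodeName"
[/code]

If what you actually need is to make the original server the primary replica again, that is an AG failover (ALTER AVAILABILITY GROUP ... FAILOVER, run on the replica you want to become primary), not a cluster group move.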

Application fails on cluster failover

Posted: 21 Jun 2013 01:04 AM PDT

Hi everyone

I have a problem with one of the applications/services in my cluster. Initially four of the five applications resided on node 1 and the remaining one on node 2. For some reason a failover happened and all the applications moved to node 2. Here comes the problem: every application is working fine except one. When I fail back to node 1 it works well. Both nodes are possible owners for the application. Any ideas please... rookie here :hehe:

ssis package deployment

Posted: 20 Jun 2013 10:01 AM PDT

Hi,

I have an SSIS project with one parent package which calls two child packages, and I want to deploy this project to SQL Server 2012. I use SQL Server Data Tools (the VS 2010 shell) to design my packages. My packages use OLE DB connection managers (a different OLE DB connection manager in each package, all pointing to the same connection). I have set an expression for the ConnectionString property of each of these connection managers in every package. It looks like this:

"Data Source="+ @[$Project::DataSource] +";User ID="+ @[$Project::UserName] +";Pwd="+ @[$Project::Password] +";Initial Catalog="+ @[$Project::InitialCatalog] +";Provider=SQLNCLI11.1;Persist Security Info=True;"

I have four project parameters for data source, username, password and initial catalog. When I deploy the project to my local machine (which has the same server name as I used while designing the packages) it runs fine. For this, I had to create SQL Server Agent jobs for each of the three packages and change the "Execute Child Package" task in the main package to an "Execute SQL Server Agent Job" task.

In the job step properties >> Configuration tab >> Parameters for the job for the main package, I set the parameter values to point to the server that I deployed to, but in the Connection Managers tab the values stay the same as the design-time values. Is this something I have to set for every package? Is the overall design I did bad? The thing is, I want to be able to deploy the project on any server with minimal changes.

I have gone through the following links, but they explain the deployment and design for SQL Server 2008:
[url=http://www.mssqltips.com/sqlservertip/2450/ssis-package-deployment-model-in-sql-server-2012-part-1-of-2/] ssis package deployment - part 1[/url]
[url=http://www.mssqltips.com/sqlservertip/2451/ssis-package-deployment-model-in-sql-server-2012-part-2-of-2/] ssis package deployment - part 2[/url]
[url=http://www.youtube.com/watch?v=GiGHkQi2SAk]passing values to child packages - youtube[/url]
[url=http://www.youtube.com/watch?v=zdxOxHM5Ijw]deploying ssis package designed in above link - Youtube[/url]

I am a n00bie. Please correct me if I have used any term incorrectly. TIA.

SSIS 2012 Log Provider for SQL Server displays progress error when logging messages using shared project OLE DB connection

Posted: 20 Jun 2013 03:58 PM PDT

Hi All

I am building an SSIS project using the project deployment model of SSIS 2012. I am configuring package-level logging using the SSIS log provider for SQL Server, with a [i]project[/i]-level connection in the configuration. The [i]project[/i]-level connection gets its connection string from a project parameter.

The log events are correctly logged to SQL Server, however the Progress tab shows two duplicate errors saying that the project-level connection used for the log provider cannot be found:

"Error: The connection "xxxx_SSIS_Logging" is not found. This error is thrown by Connections collection when the specific connection element is not found."

If I use a [i]package[/i]-level connection in the SSIS log provider, the log events are correctly logged and there are no connection errors in the Progress tab. Is anyone else observing this behaviour with project-level connections in the SQL Server log provider? Has anyone had a failure to log events because of a project-level connection in the SQL Server log provider?

[T-SQL] MERGE Statement DELETE from TARGET when not in SOURCE 'hanging'


MERGE Statement DELETE from TARGET when not in SOURCE 'hanging'

Posted: 20 Jun 2013 11:39 PM PDT

My code runs fine until I try to DELETE an entry that is in my target table but not in my source. The insert and update both work.

[code="sql"]
MERGE dbo.cart AS c_target
USING (SELECT @sessionid as sessionid
             ,ISNULL(o.free_subsku,0) as subsku
             ,o.free_sku as mainsku
             ,ko.cartrsn
             ,o.qty_free * ko.qty as qty
             ,COALESCE(sp.price,p.price) as price
             ,COALESCE(sp.[weight],p.[weight]) as [weight]
       FROM offer o
       inner join @cart_offer ko on o.id = ko.offer_id
       inner join products p on o.free_sku = p.sku
       left join subproducts sp on o.free_subsku = sp.sku
       WHERE o.active = 1 and o.offer_type = 'GWP'
      ) as c_source
ON c_target.sessionid = c_source.sessionid
   and c_target.mainsku = c_source.mainsku
   and c_target.subsku = c_source.subsku
--Issue here?!?!
WHEN NOT MATCHED BY SOURCE AND c_target.sessionid = @sessionid THEN
    DELETE
WHEN MATCHED AND c_target.qty <> c_source.qty THEN
    UPDATE SET c_target.qty = c_source.qty
              ,c_target.price = c_source.price
              ,c_target.[weight] = c_source.[weight]
WHEN NOT MATCHED BY TARGET THEN
    INSERT (sessionid,subsku,mainsku,qty,price,[weight])
    VALUES (@sessionid,c_source.subsku,c_source.mainsku,c_source.qty,c_source.price,c_source.[weight]);
[/code]

I know I should attach some test data, but can anyone see anything obviously wrong in my code? Also, the cart table has about 5 million rows in it. Thanks
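Nothing is syntactically wrong; the usual suspect with a 5-million-row cart is the WHEN NOT MATCHED BY SOURCE branch, which has to consider every target row (the AND c_target.sessionid = @sessionid condition decides whether a row is deleted, it does not limit the scan). A common rework is to pre-filter the target through a filtered view or an updatable CTE so the MERGE only ever looks at the current session's rows. A minimal, self-contained illustration of the pattern with hypothetical temp tables, not your real schema:

[code="sql"]
DECLARE @sessionid int = 42;

CREATE TABLE #cart (sessionid int, mainsku int, subsku int, qty int);
CREATE TABLE #src  (mainsku int, subsku int, qty int);

;WITH tgt AS
(
    -- only this session's rows are visible to the MERGE
    SELECT * FROM #cart WHERE sessionid = @sessionid
)
MERGE tgt
USING #src AS src
   ON tgt.mainsku = src.mainsku
  AND tgt.subsku  = src.subsku
WHEN MATCHED AND tgt.qty <> src.qty THEN
    UPDATE SET qty = src.qty
WHEN NOT MATCHED BY TARGET THEN
    INSERT (sessionid, mainsku, subsku, qty)
    VALUES (@sessionid, src.mainsku, src.subsku, src.qty)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;

DROP TABLE #cart, #src;
[/code]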

Update raise error

Posted: 20 Jun 2013 09:21 PM PDT

Hi,

Is there any setting in SQL Server that makes an UPDATE statement raise an error if no rows are affected? I'm using Red Gate SQL Data Compare to sync databases, but there are some issues with the GUIDs (in a first phase they were NEWID() and not static), and when updating old databases the sync script can update 0 rows. I'd like the script to raise an error if no rows are affected. Is there anything to do this, other than writing code after every update statement checking @@ROWCOUNT?

Thanks,
Pedro
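As far as I know there is no server or database setting that turns "0 rows affected" into an error. A sketch of the usual workaround, with a hypothetical table and filter: check @@ROWCOUNT immediately after the UPDATE (it must be the very next statement, since almost anything else resets it) and raise the error yourself.

[code="sql"]
UPDATE dbo.SomeTable
SET    SomeColumn = 'new value'
WHERE  SomeGuid   = '00000000-0000-0000-0000-000000000000';

IF @@ROWCOUNT = 0
    RAISERROR ('UPDATE affected no rows.', 16, 1);
[/code]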

Do I have to use repetitive joins to get values

Posted: 21 Jun 2013 12:21 AM PDT

Hello,

I have a query that returns several key values:

SELECT * FROM TableX

Which returns:
Value1 = 1001
Value2 = 9671
...etc

Now Value1 and Value2 are both stored in the same ValueTable. To get one of them translated to a textual representation I would join like so:

SELECT MRX.Col1, VAT.TextName, MRX.Col3, Col4....
FROM TableX MRX
INNER JOIN ValueTable VAT ON MRX.Col2 = VAT.Value1

This is fine and dandy and gets one key translated to user-friendly text, but what about the other (Col4)? Do I have to join again, add an OR clause, or...?

SELECT MRX.Col1, VAT.TextName, MRX.Col3, VAT2.TextName....
FROM TableX MRX
INNER JOIN ValueTable VAT ON MRX.Col2 = VAT.Value1
INNER JOIN ValueTable VAT2 ON MRX.Col4 = VAT2.Value2

TIA
'JB

Retrieve the subject-wise highest marks list in a table

Posted: 17 Jul 2011 11:45 PM PDT

Hi All,

I have a requirement like this: we have two tables in our database.

Table names: student, marklist

Student table values:

id  studname
1   x
2   y
3   z
4   a
5   b

Marklist table values:

id  maths  physics  English
1   50     60       70
2   70     60       40
3   50     80       70
4   50     100      70
5   90     60       70

My requirement is to display, for each student, the subject with the highest mark. For example:

id  name  highestmark
1   x     English

Can anybody help me with how to approach this scenario?

Thanks in advance
Best regards
Radh
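One way to do this (a sketch, assuming SQL Server 2008 or later and that marklist.id matches student.id) is to unpivot the three subject columns with a VALUES constructor and pick the top one per student:

[code="sql"]
SELECT s.id,
       s.studname AS name,
       x.subject  AS highestmark
FROM student AS s
INNER JOIN marklist AS m ON m.id = s.id
CROSS APPLY (
    SELECT TOP (1) v.subject
    FROM (VALUES ('maths',   m.maths),
                 ('physics', m.physics),
                 ('English', m.English)) AS v(subject, mark)
    ORDER BY v.mark DESC
) AS x;
[/code]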

roll up the data as per period between start and end date

Posted: 20 Jun 2013 06:33 AM PDT

Hi

I have a query that returns aggregated data per day between a start and end date. Now my manager says he will send the start date, the end date and a period parameter (day, month, quarter or year), and wants the same data rolled up to that period. For example, if he sends startdate = 06/20/2012, enddate = 06/20/2013 and period = month, he wants the data aggregated for every month up to the end date; if period = quarter, he wants an aggregate for every three months of data from start to end date.

Below is the query:

[code="sql"]
;with Vol_Factdata as
(
    Select CONVERT(char(10),EventStartDate,126) as Eventstartdate,
           Agg.ProductTypeID,
           Agg.ProductTypeName,
           Agg.LoanProgramTypeId,
           MIN(MetricValue) AS MetricValall,
           SUM(SampleSize) as SampleSizeall
    FROM [Reporting].[AggregatedRatesforWidgets] Agg with (NOLOCK)
    WHERE Agg.EventStartDate >= @ipstartdate
      and Agg.EventStartDate <= @ipenddate
      and Agg.LoanProgramTypeID = @ipProducttypeID
    GROUP BY CONVERT(char(10),agg.EventStartDate,126), Agg.ProductTypeID, Agg.ProductTypeName, Agg.LoanProgramTypeId
)
Select vcy.EventStartDate as EventStartDate,
       vcy.ProductTypeID as ProductTypeID,
       vcy.ProductTypeName as ProductTypeName,
       vcy.LoanProgramTypeID AS LoanProgramTypeId,
       'ALL' as Loanstate,
       vcy.MetricValall_1year as MetricValue,
       vcy.SampleSizeall_1year as SampleSize
FROM Vol_Factdata vcy
[/code]

Can someone suggest me a better way?

Thanks & Regards
SC
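A sketch of one way to do it: pass the period in as a parameter and group on a computed "period start" date, so the same statement handles day, month, quarter and year. Names follow the post; @period is a new, hypothetical parameter.

[code="sql"]
-- @period is assumed to be one of 'day', 'month', 'quarter', 'year'.
SELECT
    CASE @period
        WHEN 'day'     THEN DATEADD(DAY,     DATEDIFF(DAY,     0, Agg.EventStartDate), 0)
        WHEN 'month'   THEN DATEADD(MONTH,   DATEDIFF(MONTH,   0, Agg.EventStartDate), 0)
        WHEN 'quarter' THEN DATEADD(QUARTER, DATEDIFF(QUARTER, 0, Agg.EventStartDate), 0)
        WHEN 'year'    THEN DATEADD(YEAR,    DATEDIFF(YEAR,    0, Agg.EventStartDate), 0)
    END AS PeriodStart,
    Agg.ProductTypeID,
    Agg.ProductTypeName,
    Agg.LoanProgramTypeId,
    MIN(Agg.MetricValue) AS MetricValall,
    SUM(Agg.SampleSize)  AS SampleSizeall
FROM Reporting.AggregatedRatesforWidgets AS Agg
WHERE Agg.EventStartDate >= @ipstartdate
  AND Agg.EventStartDate <= @ipenddate
  AND Agg.LoanProgramTypeID = @ipProducttypeID
GROUP BY
    CASE @period
        WHEN 'day'     THEN DATEADD(DAY,     DATEDIFF(DAY,     0, Agg.EventStartDate), 0)
        WHEN 'month'   THEN DATEADD(MONTH,   DATEDIFF(MONTH,   0, Agg.EventStartDate), 0)
        WHEN 'quarter' THEN DATEADD(QUARTER, DATEDIFF(QUARTER, 0, Agg.EventStartDate), 0)
        WHEN 'year'    THEN DATEADD(YEAR,    DATEDIFF(YEAR,    0, Agg.EventStartDate), 0)
    END,
    Agg.ProductTypeID,
    Agg.ProductTypeName,
    Agg.LoanProgramTypeId;
[/code]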

include the output of a stored procedure in a select

Posted: 20 Jun 2013 04:46 AM PDT

I have a stored procedure like:

[code="sql"]
create proc calcaulateavaerage
    @studentid int
as
begin
    -- some complicated business logic and query
    return @result -- single decimal value
end
[/code]

and then I want a proc that returns the whole result:

[code="sql"]
select * , ................................
from X
where X.value > (calcaulateavaerage X.Id)
[/code]

It always gives the error "multi part identifier calcaulateavaerage couldn't be bound". Any idea how to solve that?
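You can't call a stored procedure inline inside a SELECT or WHERE clause, which is why the name can't be bound. A sketch of the usual alternative: move the calculation into a scalar function and call that in the predicate (the function name below is a corrected-spelling placeholder and its body stands in for your real logic). Another option is to keep the proc, capture its result in a variable or OUTPUT parameter first, and compare against that.

[code="sql"]
CREATE FUNCTION dbo.fn_calculateaverage (@studentid int)
RETURNS decimal(10, 2)
AS
BEGIN
    DECLARE @result decimal(10, 2);
    -- ... the same complicated business logic and query as in the proc ...
    RETURN @result;
END
GO

SELECT *
FROM X
WHERE X.value > dbo.fn_calculateaverage(X.Id);
[/code]

Be aware that a scalar function called once per row can be slow on large tables; an inline table-valued function with CROSS APPLY scales better if that becomes a problem.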

Remove NewLine Characters on a Select All.

Posted: 20 Jun 2013 03:20 AM PDT

Hi Gurus,

I'm trying to run a SELECT on an entire table via [quote]Select * From[/quote]. However, a couple of the columns contain addresses with the newline character embedded in them, which is ruining my export to Excel. I know that I can run [code="sql"]REPLACE(REPLACE(REPLACE(MyField, CHAR(10), ''), CHAR(13), ''), CHAR(9), '')[/code] on a specific field; my question is how to do this on the entire selection?

Thanks.
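There's no way to apply the REPLACE to every column from a plain SELECT *; the usual trick is to generate the column list dynamically so only the character columns get wrapped. A sketch (assumes SQL Server 2005 or later; dbo.MyTable is a placeholder table name):

[code="sql"]
DECLARE @cols nvarchar(MAX), @sql nvarchar(MAX);

SELECT @cols = STUFF((
    SELECT ', ' +
           CASE WHEN c.system_type_id IN (167, 175, 231, 239)  -- varchar, char, nvarchar, nchar
                THEN 'REPLACE(REPLACE(REPLACE(' + QUOTENAME(c.name)
                     + ', CHAR(10), ''''), CHAR(13), ''''), CHAR(9), '''') AS ' + QUOTENAME(c.name)
                ELSE QUOTENAME(c.name)
           END
    FROM sys.columns AS c
    WHERE c.object_id = OBJECT_ID('dbo.MyTable')
    ORDER BY c.column_id
    FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)'), 1, 2, '');

SET @sql = N'SELECT ' + @cols + N' FROM dbo.MyTable;';
EXEC sys.sp_executesql @sql;
[/code]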

Thursday, June 20, 2013

[how to] mongodb replication node stuck at “STARTUP2” with optimeDate as 1970


mongodb replication node stuck at “STARTUP2” with optimeDate as 1970

Posted: 20 Jun 2013 08:36 PM PDT

I have just set up replica sets with three nodes. The third node is stuck at stateStr STARTUP2 with "optimeDate" : ISODate("1970-01-01T00:00:00Z"). However, it shows no error message. Is this alright? On the primary, rs.status() yields:

{      "set" : "qdit",      "date" : ISODate("2013-06-18T22:49:41Z"),      "myState" : 1,      "members" : [          {              "_id" : 0,              "name" : "q.example.com:27017",              "health" : 1,              "state" : 1,              "stateStr" : "PRIMARY",              "uptime" : 2940,              "optime" : {                  "t" : 1371593311,                  "i" : 1              },              "optimeDate" : ISODate("2013-06-18T22:08:31Z"),              "self" : true          },          {              "_id" : 1,              "name" : "q1.example.com:27017",              "health" : 1,              "state" : 2,              "stateStr" : "SECONDARY",              "uptime" : 457,              "optime" : {                  "t" : 1371593311,                  "i" : 1              },              "optimeDate" : ISODate("2013-06-18T22:08:31Z"),              "lastHeartbeat" : ISODate("2013-06-18T22:49:40Z"),              "lastHeartbeatRecv" : ISODate("2013-06-18T22:49:40Z"),              "pingMs" : 0,              "syncingTo" : "twitnot.es:27017"          },          {              "_id" : 2,              "name" : "q2.example.com:27017",              "health" : 1,              "state" : 5,              "stateStr" : "STARTUP2",              "uptime" : 300,              "optime" : {                  "t" : 0,                  "i" : 0              },              "optimeDate" : ISODate("1970-01-01T00:00:00Z"),              "lastHeartbeat" : ISODate("2013-06-18T22:49:40Z"),              "lastHeartbeatRecv" : ISODate("2013-06-18T22:49:41Z"),              "pingMs" : 7          }      ],      "ok" : 1  }  

also

db.printSlaveReplicationInfo() yields:

source:   qdit1.queuedit.com:27017       syncedTo: Tue Jun 18 2013 22:08:31 GMT+0000 (UTC)           = 2894 secs ago (0.8hrs)  source:   qdit2.queuedit.com:27017       syncedTo: Thu Jan 01 1970 00:00:00 GMT+0000 (UTC)           = 1371596205 secs ago (380998.95hrs)  

Is this alright? Also, how can I test my replication, especially on the third node?

How to convert Postgres from 32 bit to 64 bit

Posted: 20 Jun 2013 09:06 PM PDT

I would like to convert from PG 32 bit to 64 bit. I am testing with pg_upgrade (in several scenarios) but I think it cannot do that.

With pg_upgrade, I can upgrade PG version 8.5 to v.9 (both of them run in 32 bit or 64 bit).

Until now, I haven't found how to change from a 32 bit install to 64 bit. How can this be done?
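For what it's worth, pg_upgrade requires the old and new clusters to use the same architecture, so it cannot take a 32-bit cluster to 64-bit; a logical dump and restore is the usual route. A sketch (ports and the file path are placeholders; the old 32-bit cluster is assumed to listen on 5432 and the freshly initialised 64-bit one on 5433):

    # run against the old 32-bit cluster
    pg_dumpall -p 5432 -f /tmp/full_dump.sql
    # restore into the new 64-bit cluster
    psql -p 5433 -d postgres -f /tmp/full_dump.sql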

storing csv files

Posted: 20 Jun 2013 06:51 PM PDT

Does anyone have any recommendations on how to handle the storage of CSV files? Anyone can upload any number of columns, with differing column names, and the amount of data doesn't matter. I am validating the data with an application-side CSV parser.

Right now, the data does need to be queried for searching.

I am storing the data in a EAV table, so it would have a column in the database that holds the column name from the CSV and the data for that row.

If the csv file has 10 columns and 10 rows, the database would have 100 rows, so it can get large fairly quickly.

Lastly, I am able to efficiently query the data due to the application building out the query on the fly, it gathers the distinct column names and then uses max if to return the column names with the data, even if there is null data present.

Update primary SQL Server in mirrored environment using transaction logs from old environment

Posted: 20 Jun 2013 08:00 PM PDT

I'm currently migrating a large (~40gb) database to a new environment. The new environment is a HA/mirrored set up with primary, mirror and witness nodes.

I'm backing up the database in the old environment, restoring to primary & mirror, and turning on mirroring. During this time the old environment is still active, and data is changing. When we are ready to go live with the new environment I plan to take another transaction log from the old environment and restore that to the new primary server. Is that possible? Will this be successfully synchronised to the mirror?

Why do my WAL logs start with 000000010000000000000004 instead of 000000010000000000000001?

Posted: 20 Jun 2013 02:38 PM PDT

I could swear that when I first created my cluster a week ago, my logs started with 000000010000000000000001 or perhaps even 000000000000000000000001. Now, if I look in my pg_xlog directory, it begins with this 000000010000000000000004. I have wal_keep_segments = 1024, and there are only currently 70 segments in the directory, so why would it have deleted the first few already? Or, is there something I'm missing / doing wrong?

Note, I'm using Postgres 9.2.4 on Ubuntu 12.04.2 (via Pitt's ppa)

Running functions in parallel

Posted: 20 Jun 2013 04:42 PM PDT

This is for SQL Server 2012.

We have some import processes for FTP files that are picked up and read into a staging table, from there we massage/check the data before moving into production. One of the areas that are causing some issues is dates, some are valid, some are typos, some are just plain gibberish.

I have the following example table(s):

Create Table RawData  (   InsertID int not null,   MangledDateTime1 varchar(10) null,   MangledDateTime2 varchar(10) null,   MangledDateTime3 varchar(10) null  )  

I also have a destination table(say in production)

Create Table FinalData  (    PrimaryKeyID int not null, -- PK constraint here, ident    ForeighKeyID int not null, -- points to InsertID of RawData    ValidDateTime1 SmallDateTime null,    ValidDateTime2 SmallDateTime null,    ValidDateTime3 SmallDateTime null  )  

I insert the following into the RawData table:

 Insert Into RawData(InsertID, MangledDateTime1, MangledDateTime2, MangledDateTime3)   Values(1, '20001010', '20800630', '00000000') -- First is legit, second two are not   Insert Into RawData(InsertID, MangledDateTime1, MangledDateTime2, MangledDateTime3)   Values(1, '20800630', '20130630', '20000000') -- middle is legit, first/third are not   Insert Into RawData(InsertID, MangledDateTime1, MangledDateTime2, MangledDateTime3)   Values(1, '00001010', '00800630', '20130630') -- Last is legit, first two are not  

I wrote a function dbo.CreateDate to address the issue(s). We try to clean the data as best we can (use NULL if we can't), then convert the data to the correct datatype (in this case smalldatetime).

Insert Into FinalData(ForeighKeyID , ValidDateTime1, ValidDateTime2, ValidDateTime3)  Select    InsertID   ,dbo.CreateDate(MangledDateTime1)   ,dbo.CreateDate(MangledDateTime2)   ,dbo.CreateDate(MangledDateTime3)  From RawData  

We're running into some performance problems with functions. I'm wondering if/how they work in parallel.

I'm assuming here that the function CreateDate is being run in parallel as each row inserts, such that each column/value gets its "own" function call running at the same time as the insert.

But I could be wrong; is it running serially over each column in each row as it inserts?

CreateDate() code:

Alter Function dbo.CreateDate  (  @UnformattedString  varchar(12)  )  Returns smalldatetime  As  Begin  Declare @FormattedDate smalldatetime    If(@UnformattedString Is Not Null)  Begin      Declare @MaxSmallDate varchar(8) = '20790606'          -- We got gibberish      If Len(@UnformattedString) = 1      Begin          return null      End        -- To account for date and time      If Len(@UnformattedString) = 12      Begin          Select @UnformattedString = Substring(@UnformattedString, 0,9)      End        If @UnformattedString = '20000000'      Begin          Select @UnformattedSTring = @MaxSmallDate      End        -- Some people are sending us two digit years, won't parse right      If Substring(@UnformattedString,0,3) = '00'      Begin          Select @UnformattedString = Replace(@UnformattedString, '00','20')      End        -- Some people are fat fingering in people born in 18??, so change to 19??      If Substring(@UnformattedString,0,3) in ('18')      Begin          -- We only want to change the year '18', not day 18           SELECT @UnformattedString = STUFF(@UnformattedString,                              CHARINDEX('18', @UnformattedString), 2, '19')      End        -- We're getting gibberish      If Substring(@UnformattedString,0,3) not in ('19','20')                  And Len(@UnformattedString) != 6      Begin          Select @UnformattedString = Replace(@UnformattedString,                          Substring(@UnformattedString,0,3),'20')      End        -- If the 4 digit year is greater than current year, set to max date      If Convert(int, Substring(@UnformattedString,0,5)) > Year(getdate())      Begin          Set @FormattedDate = CONVERT(smalldatetime,@MaxSmallDate,1)      End      -- If the 4 digit year is less than 100 years ago, set to max date      Else If Year(getdate()) - Convert(int, Substring(@UnformattedString,0,5)) >= 100      Begin          Set @FormattedDate = CONVERT(smalldatetime,@MaxSmallDate,1)      End      Else -- valid date(we hope)      Begin          Set @FormattedDate = CONVERT(smalldatetime,@UnformattedString,1)       End        End    Return @FormattedDate  End  Go  
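For what it's worth, a scalar UDF like this is invoked once per row per column, is never inlined, and (prior to SQL Server 2019) also prevents the whole INSERT...SELECT from using a parallel plan, so the columns are not being cleaned "in parallel" in any useful sense. The usual fix is to express the cleansing as an inline table-valued function (or a plain CASE expression in the SELECT) so the optimizer can fold it into the plan. A sketch using the table names from the post; the branches below are only a simplified subset of dbo.CreateDate's rules, shown to illustrate the shape:

    CREATE FUNCTION dbo.CreateDateInline (@UnformattedString varchar(12))
    RETURNS TABLE
    AS
    RETURN
    (
        SELECT CleanDate =
            CASE
                WHEN @UnformattedString IS NULL OR LEN(@UnformattedString) = 1
                    THEN CAST(NULL AS smalldatetime)
                WHEN LEFT(@UnformattedString, 8) > '20790606'          -- beyond smalldatetime range
                    THEN CONVERT(smalldatetime, '20790606', 112)
                WHEN ISDATE(LEFT(@UnformattedString, 8)) = 1
                    THEN CONVERT(smalldatetime, LEFT(@UnformattedString, 8), 112)
                ELSE CAST(NULL AS smalldatetime)
            END
    );
    GO

    -- Used with CROSS APPLY, the function body is expanded into the plan:
    INSERT INTO FinalData (ForeighKeyID, ValidDateTime1, ValidDateTime2, ValidDateTime3)
    SELECT r.InsertID, d1.CleanDate, d2.CleanDate, d3.CleanDate
    FROM RawData AS r
    CROSS APPLY dbo.CreateDateInline(r.MangledDateTime1) AS d1
    CROSS APPLY dbo.CreateDateInline(r.MangledDateTime2) AS d2
    CROSS APPLY dbo.CreateDateInline(r.MangledDateTime3) AS d3;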

Make sure slave has finished reading from master

Posted: 20 Jun 2013 04:19 PM PDT

I am planning to have a master-slave system in which occasionally the master and slave will reverse roles. To do this, I will stop further updates to master, and redirect traffic to the slave. But, before I make the slave the new master, I need to make sure the slave has finished reading all the updates from the old master. How can I do that? Is running "flush logs" on the master sufficient?

A solution that can be readily scripted would be preferred.
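Flushing logs on its own doesn't prove the slave has caught up. A scriptable sketch: quiesce writes on the master, capture its binlog coordinates, then block on the slave until it has executed past that point (the file name and position below are placeholders taken from the master's output):

    -- On the master, after stopping application writes:
    FLUSH TABLES WITH READ LOCK;
    SHOW MASTER STATUS;            -- note File and Position

    -- On the slave:
    SELECT MASTER_POS_WAIT('mysql-bin.000123', 456789);   -- returns >= 0 once caught up
    SHOW SLAVE STATUS\G            -- Seconds_Behind_Master = 0 and Exec_Master_Log_Pos
                                   -- matching the master confirm it as well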

Create table group sub-folders in Management Studio

Posted: 20 Jun 2013 12:36 PM PDT

When I was exploring the master database in SSMS I noticed that under the Tables folder there is another folder called Systems Tables that houses a bunch of tables. Is it possible for us to create a structure akin to Systems Tables within our database?

I am looking to organize tables and stored procedures into project specific folders. Under the new setup - when I am referring to my table object I would have to use the following syntax (I am guessing here):

[dbName].[projectName].[dbo].[tableName]   

Also, apart from clearing up the clutter, do anybody foresee any performance improvement/degradation because of this re-organization? I use Microsoft SQL Server 2008 R2.
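SSMS doesn't let you add your own folders under the Tables node (System Tables is a built-in grouping), but schemas are the supported way to group objects per project, and they keep references at three parts rather than four. A sketch with placeholder names:

    CREATE SCHEMA ProjectA AUTHORIZATION dbo;
    GO
    CREATE TABLE ProjectA.Customers (CustomerID int PRIMARY KEY, CustomerName nvarchar(100));
    GO
    -- referenced with three parts: database.schema.object
    SELECT * FROM MyDatabase.ProjectA.Customers;

Grouping objects into schemas has no meaningful performance impact either way; it is purely organisational, plus an optional permissions boundary.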

SQL Import/Export defaults to byte stream on export

Posted: 20 Jun 2013 08:21 PM PDT

So, I've done some research and I can't seem to figure this out. I'm not sure if it's some setting that I'm missing or what.

For the basic info, running SQL2012, 64 bit, all that good stuff. I noticed that, for some reason, when I export from a table into a flat file using a query, the data type is defaulting to bytestream. In the past, it always defaulted to DT_STR and went through without any issues.

Here's the query:

SELECT pin from IncrementalDM_Emails  where email in (select email from [7755_forsql])  order by pin  

Here's the error I'm getting:

Error dialog

Here's what the export is showing when I select "Edit Mappings..."

Mapping dialog

Now, this can be easily fixed by simply selecting "DT_STR" under the Mappings dialog but I frequently export using this method so I'd like to find out why it's doing it and fix it so I don't always have to go into the Edit Mappings dialog.

Is it something to do with the query and the use of the IN function? Any help would be greatly appreciated!

EDIT: The data in both tables is stored as varchar(50)

MySQL bin log missing data?

Posted: 20 Jun 2013 10:34 AM PDT

I'm trying to make heads and tails of my binary logs and am coming up short. I have many entries similar to the following from mysqlbinlog but I'm missing log data that I know should be there.

# at 1050548  #130617 18:40:03 server id 1  end_log_pos 1050619   Query   thread_id=844988    exec_time=0 error_code=0  SET TIMESTAMP=1371519603/*!*/;  BEGIN  /*!*/;  # at 1050619  # at 1050782  #130617 18:40:03 server id 1  end_log_pos 1050809   Xid = 33121028  COMMIT/*!*/;  

It's puzzling because I get expected SQL in the mysqlbinlog output for statements executed in phpmyadmin but those coming from other PHP-based remote web servers appear to not be recorded.

My binary logging settings are:

bin_log = /path_to_sql_incrementals  binlog_ignore_db = mysql  

Am I missing a logging option?

Mysql 5.0.95 / CentOS 5.9

Azure compatible client tools

Posted: 20 Jun 2013 01:53 PM PDT

I'm building a DB for someone as a favor and whilst I'm perfectly OK to create a DB in Azure SQL for them and do the requisite TSQL coding they require I do also need to give them a client based way to access the data that involves no non-SQL coding from me.

Ideally this would be a lightweight alternative to SSMS that is compatible with Azure. That way I can give them a series of scripts and parameterised SPs to run. Can someone recommend something that works, please?

Thanks for reading

MYSQL matching one column to another in a different table

Posted: 20 Jun 2013 11:08 AM PDT

I current have two different tables.

The first table has a list of titles and IDs associated with those titles; the second table is a list of random headings.

What I would like to know is whether there is a way to match up every title in table2 to the closest matching title in table1. Is this possible?

Ive tried :

SELECT title_table .*,random_titles.*, MATCH (title_table.title)AGAINST (random_titles.title) AS relevance FROM title_table ORDER BY relevance DESC

But that did not work.

I know I could use this query as each title is being put in table2 with PHP but I already have a lot of titles in the database.

Any help would be amazing thanks

SQL Server grant permissions to STANDBY database

Posted: 20 Jun 2013 12:25 PM PDT

So, first: the setup. We have SQL Server 2012 (ServerA) running in domain A. We have SQL Server 2012 (ServerB) running in domain B, separate subnet, no trusts. These are completely separate domains for security reasons and they cannot be joined/trusted/etc. We need to be able to query the data directly from domain B via Windows Authenticated logins. I was able to use this guide to set up transaction log shipping to get the databases from ServerA to ServerB (summary: create the transaction log shipping config, use WinSCP to copy the logs to the remote server, manually create the secondary using SQL script). So now we have the two databases running in STANDBY/read-only on ServerB.

Now, the problem: we cannot grant access to these databases because they are in read-only so we cannot modify the permissions. How can I grant read-only access to these databases (either at the server level or DB level) to a domain group from DomainB on ServerB? I've found several references to creating a SQL login on the sending side, but I can't find any way to replicate it with a Windows Auth Login.

Transaction Log Maintanance While Using AlwaysOn Availability Group

Posted: 20 Jun 2013 10:45 AM PDT

We are using HADR ( AlwaysOn Availability Group) feature of SQL Server 2012. Server and AG Configuration as below:

  1. SQL12_SRV1 --> AG1(PRIMARY) - SYNC -->> DBTest

  2. SQL12_SRV2 --> AG1(SECONDARY) -->> DBTest - ASYNC

The DBTest database is growing by roughly 200 GB per month, and the transaction log file grows along with the data.

So how do we keep the transaction log file size under control with a proper log backup routine, and on which replica should we take the log backups?
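For what it's worth, log growth is controlled by taking regular log backups; the availability group itself never truncates the log. Log backups can be taken on the primary or on a synchronized secondary (COPY_ONLY is only required for full backups on a secondary), and the common pattern is to schedule the same job on every replica and let each one check whether it is the preferred backup replica as configured in the AG's backup preferences. A sketch, with a placeholder backup path:

    IF sys.fn_hadr_backup_is_preferred_replica(N'DBTest') = 1
    BEGIN
        BACKUP LOG DBTest
            TO DISK = N'\\backupshare\DBTest_log.trn'
            WITH COMPRESSION;
    END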

Thanks In Advance.

trouble creating a database with postgreql

Posted: 20 Jun 2013 04:36 PM PDT

I just installed Postgresql on my Windows laptop. I am now trying to create my first database. I launched the Windows PowerShell and tried the following:

PS C:\Users\Chris\Documents\test> & 'C:\Program Files\PostgreSQL\9.2\bin\createdb.exe' mydb  Password:  Password:  createdb: could not connect to database template1: FATAL:  password authentication failed for user "Chris"  

From what I can gather, the password that you give PostgreSQL during installation is for a different user type? After some poking around on the internet, I modified my pg_hba.conf file by appending the following:

local   all all trust  

Now I get the following error message:

PS C:\Users\Chris\Documents\test> & 'C:\Program Files\PostgreSQL\9.2\bin\createdb.exe' mydb  createdb: could not connect to database template1: FATAL:  could not load pg_hba.conf  

Not sure what I am doing wrong. Any suggestions?


Well, this is interesting. I went back to the pg_hba.conf file and added a newline after that line of code I added earlier. I got a new error message:

PS C:\Users\Chris\Documents\test> & 'C:\Program Files\PostgreSQL\9.2\bin\createdb.exe' mydb  createdb: could not connect to database template1: could not connect to server: Connection refused (0x0000274D/10061)      Is the server running on host "localhost" (::1) and accepting      TCP/IP connections on port 5432?  could not connect to server: Connection refused (0x0000274D/10061)      Is the server running on host "localhost" (127.0.0.1) and accepting      TCP/IP connections on port 5432?  

When I installed postgreSQL on my laptop, I set the port to 5432. Looks like postgreSQL is expecting the server to be running. Going to look into this...
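A guess at what is happening, based on the two errors: "could not load pg_hba.conf" usually means the file failed to parse, and on a Windows build a `local` (Unix-domain socket) line is not usable at all; after the failed start the server is simply not running, hence the later "connection refused". A sketch of host-based entries that should work on Windows (restart the PostgreSQL service after editing, and expect to be prompted for the password you set for the postgres superuser at install time):

    # TYPE  DATABASE        USER            ADDRESS                 METHOD
    host    all             all             127.0.0.1/32            md5
    host    all             all             ::1/128                 md5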

Is this a good strategy for importing a large amount of data and decomposing as an ETL?

Posted: 20 Jun 2013 02:15 PM PDT

I have a set of five tables (a highly decomposed schema for an ETL if I understand the nomenclature) that I'm going to load via bulk import, then run some inserts from those five tables into a SLEW of other tables, including inserts that just rely on the values I just inserted into the first tables.

I can do my inserts as an A, B, C process, where I insert into the first table, then insert into some table S where exists in A + T (being some other table that has preloaded "configuration data"), then inserting into Z where exists in B + U, etc.

Should I be trying to batch those inserts with a cursor (I know, stone the traitor) or should I just run the raw INSERT INTO ... SELECT scripts and let the server die a thousand heat deaths? Basically I'm worried about starving the server, or causing it to collapse under too many index updates or something else during the inserts.

Should I stage out the inserts as:

  1. Insert into one set of tables
  2. Insert into secondary tables based on the first tables
  3. Insert into tertiary tables, etc.

OR should I insert into all the tables where the data is needed but do it via a cursor, in a "for loop" style pattern of 100k rows at a time.

FWIW, this is a behavior I saw from the DBAs at my last job, so I figure that's "what I'm supposed to do" (the batch process via cursors) but maybe I don't understand enough about what they were doing (they were also live-loading into systems that already had data, and were loading new data afterwards).

Also bear in mind that I'm normally a C# dev, but I've got the most TSQL experience here and I'm trying to make the best process I can for raw-loading this data as opposed to our "current" method that is mostly webservice fetches and NHibernate save-commits.

Things I think are important to the question:

  1. There will be no other load on this server when I do this, I will have complete and total control of the box, and I'm the only user interested here (this is the initial data load before anyone else can do anything with the box)
  2. There are indexes on all the tables, where appropriate, and there are FKs and other predicates
  3. The entire db system is in use for slews of other clients right now, so I know the indexes are needed for operation, but if I should do something to disable those and re-enable them to make the inserts etc faster, I'm looking for those sorts of optimization hacks.
  4. I'm waiting on sample data but I expect the maximum records in a given table to be about a million rows, fairly wide, and that I'm going to insert into those other tables at most a million rows for the ancillary tables that can come secondary or tertiary.
  5. The hardware is "average".
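For what it's worth, a common middle ground is to keep everything set-based and in dependency order (the staged option 1, then 2, then 3), and only cap the transaction size if log growth or blocking becomes a problem; a WHILE loop over TOP (N) batches does that without a row-by-row cursor. A sketch with placeholder table names and an arbitrary batch size:

    DECLARE @batch int = 100000, @rows int = 1;

    WHILE @rows > 0
    BEGIN
        INSERT INTO dbo.SecondaryTable (StagingID, SomeValue)
        SELECT TOP (@batch) s.StagingID, s.SomeValue
        FROM dbo.StagingTable AS s
        WHERE NOT EXISTS (SELECT 1
                          FROM dbo.SecondaryTable AS t
                          WHERE t.StagingID = s.StagingID);

        SET @rows = @@ROWCOUNT;
    END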

Dropping Hypothetical Indexes

Posted: 20 Jun 2013 10:33 AM PDT

In the past I thought I'd deleted hypothetical indexes using either a DROP INDEX statement for clustered indexes and DROP STATISTICS statement for non-clustered indexes.

I have a database that is full of DTA remnants that I would like to cleanup; however, when I try to drop the object I always receive an error telling me that I cannot drop the object "because it does not exist or you do not have permission". I am a full sysadmin on the server so would expect to have rights to do anything.

I've tried this with both DROP STATS and DROP INDEX statements but both give me the same error.

Has anyone deleted these before and is there a trick I'm missing?


Addendum

Poking around in this, I just noticed that if I R-Click on the object, both the 'Script As' and 'DELETE' options are greyed out.
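A diagnostic sketch: hypothetical (DTA) indexes are flagged in sys.indexes, and the DROP has to be issued while connected to the database that owns the object — being in the wrong database produces exactly the "does not exist or you do not have permission" message. Generating the drop statements avoids guessing names:

    -- Run this in the database that contains the DTA leftovers.
    SELECT 'DROP INDEX ' + QUOTENAME(i.name) + ' ON '
           + QUOTENAME(OBJECT_SCHEMA_NAME(i.object_id)) + '.'
           + QUOTENAME(OBJECT_NAME(i.object_id)) + ';'
    FROM sys.indexes AS i
    WHERE i.is_hypothetical = 1;

    -- DTA's leftover statistics usually have names starting with _dta_stat
    -- and are removed with DROP STATISTICS instead:
    SELECT 'DROP STATISTICS ' + QUOTENAME(OBJECT_SCHEMA_NAME(s.object_id)) + '.'
           + QUOTENAME(OBJECT_NAME(s.object_id)) + '.' + QUOTENAME(s.name) + ';'
    FROM sys.stats AS s
    WHERE s.name LIKE '[_]dta[_]stat%';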

SA permissions issues with many nested objects

Posted: 20 Jun 2013 04:42 PM PDT

I have a broker application that's relatively complicated.

Today, after I made some changes, I started getting the error:

The server principal 'sa' is not able to access the database 'XYZ' under the current security context.

The whole scenario up to the point of the error is:

(In Database ABC)

  • A broker message gets submitted into a queue
  • The queue reader picks it up
  • The queue reader updates a table to indicate the work is starting
  • This table has an update trigger on it. The trigger
    • Checks a table in database XYZ to make sure the inserted value for this field is valid
    • The table is accessed via a synonym

The check in the trigger I believe is what is causing the issue.

If I run the update manually, it works fine. I have also used EXECUTE AS to run the update manually as sa which works fine.

Other relevant facts:

  • sa is the owner of both database ABC and database XYZ
  • there's no funny business with the sa account - it's db_owner role in both DBs as well

Is there some sort of strange scoping happening because all this is running in the context of broker?

Updates

Some more info:

  • DB ownership chaining is on at the server level but not in the DBs. I turned it on and it made no difference.
  • Using a three part name instead of a synonym didn't make a difference
  • Profiler trace shows that the statements are running as SA and not another login
  • Setting both databases TRUSTWORTHY to ON didn't make a difference
  • If I run the queue activation proc manually, it processes correctly (under my credentials).

General tool to load dump files

Posted: 20 Jun 2013 03:33 PM PDT

I am a big fan of Postgres, both for its price and for its features. I am going to need to load both Oracle dumps and SQL Server files into it. I will ask (and beg) for plain .csv files plus schema DDL, but I suspect that I will be given dmp files.

Is there a tool, preferably an open source one, that would allow me to read, profile and possibly load Oracle/SQL Server files into Postgres?

Thank you, Edmon

Optimizing bulk update performance in Postgresql

Posted: 20 Jun 2013 02:55 PM PDT

Using PG 9.1 on Ubuntu 12.04.

It currently takes up to 24h for us to run a large set of UPDATE statements on a database, which are of the form:

UPDATE table  SET field1 = constant1, field2 = constant2, ...  WHERE id = constid  

(We're just overwriting fields of objects identified by ID.) The values come from an external data source (not already in the DB in a table).

The tables have handfuls of indices each and no foreign key constraints. No COMMIT is made till the end.

It takes 2h to import a pg_dump of the entire DB. This seems like a baseline we should reasonably target.

Short of producing a custom program that somehow reconstructs a dataset for Postgresql to re-import, is there anything we can do to bring the bulk UPDATE performance closer to that of the import? (This is an area that we believe log-structured merge trees handle well, but we're wondering if there's anything we can do within Postgresql.)

Some ideas:

Basically there's a bunch of things to try and we're not sure what the most effective are or if we're overlooking other things. We'll be spending the next few days experimenting, but we thought we'd ask here as well.

I do have concurrent load on the table but it's read-only.
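For comparison with the ideas above, the approach that usually gets closest to dump/restore speed is to stop issuing millions of single-row UPDATEs and instead bulk-load the incoming values with COPY, then apply one join-based UPDATE. A sketch with placeholder table and column names:

    -- staging table holds the new values keyed by id
    CREATE UNLOGGED TABLE staging_updates (
        id     bigint PRIMARY KEY,
        field1 text,
        field2 text
    );

    COPY staging_updates FROM '/tmp/updates.csv' WITH (FORMAT csv);

    UPDATE some_table AS t
    SET    field1 = s.field1,
           field2 = s.field2
    FROM   staging_updates AS s
    WHERE  t.id = s.id;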

Thanks.

High Mysql Load , over 700% CPU

Posted: 20 Jun 2013 02:33 PM PDT

I have high MySQL load on a 64-bit Linux server with 24 GB RAM and an Intel(R) Core(TM) i7 CPU 950 @ 3.07GHz.

A lot of queries sit in the "Sending data" state.

Here is the MySQL process list:

+------+--------------+-----------+--------------+---------+------+----------------+---------------------------------------------------------------------------------------------+  | Id   | User         | Host      | db           | Command | Time | State          | Info                                                                                        |  +------+--------------+-----------+--------------+---------+------+----------------+---------------------------------------------------------------------------------------------+  | 1    | leechprotect | localhost | leechprotect | Sleep   | 507  |                |                                                                                             |  | 422  | face_book  | localhost | face_book  | Query   | 0    | Sending data   | SELECT * FROM likes WHERE userid='100002047302002' and pageid='113623891994626' Limit 1     |  | 440  | face_book  | localhost | face_book  | Query   | 0    | Sending data   | SELECT * FROM likes WHERE userid='100003610486105' and pageid='137067399778568' Limit 1     |  | 745  | face_book  | localhost | face_book  | Query   | 3    | Sending data   | SELECT * FROM likes WHERE userid='100000156154972' and pageid='259472294088694' Limit 1     |  | 813  | face_book  | localhost | face_book  | Query   | 1    | Sending data   | SELECT * FROM likes WHERE userid='100001079730269' and pageid='111612552199698' Limit 1     |  | 817  | face_book  | localhost | face_book  | Query   | 2    | Sending data   | SELECT * FROM likes WHERE userid='100003718366503' and pageid='105790599509795' Limit 1     |  | 888  | face_book  | localhost | face_book  | Query   | 0    | Sending data   | SELECT * FROM friends WHERE userid='100004673917946' and friendid='100004836366957' Limit 1 |  | 930  | face_book  | localhost | face_book  | Query   | 4    | Sending data   | SELECT * FROM likes WHERE userid='100001857826693' and pageid='379878825440539' Limit 1     |  | 940  | face_book  | localhost | face_book  | Query   | 3    | Sending data   | SELECT * FROM likes WHERE userid='100002525443288' and pageid='432454306781258' Limit 1     |  | 976  | face_book  | localhost | face_book  | Query   | 3    | Sending data   | SELECT * FROM likes WHERE userid='100001786746020' and pageid='266169045612' Limit 1        |  | 980  | face_book  | localhost | face_book  | Query   | 3    | Sending data   | SELECT * FROM likes WHERE userid='100000721604707' and pageid='188587591283392' Limit 1     |  | 999  | face_book  | localhost | face_book  | Query   | 1    | Sending data   | SELECT * FROM likes WHERE userid='100001661124181' and pageid='161323847303028' Limit 1     |  | 1033 | face_book  | localhost | face_book  | Query   | 2    | Sending data   | SELECT * FROM likes WHERE userid='100002273583367' and pageid='447287665321823' Limit 1     |  | 1064 | face_book  | localhost | face_book  | Query   | 2    | Sending data   | SELECT * FROM likes WHERE userid='100003902289965' and pageid='315361025251697' Limit 1     |  | 1100 | face_book  | localhost | face_book  | Query   | 2    | Sending data   | SELECT * FROM likes WHERE userid='100002720670629' and pageid='114370700612' Limit 1        |  | 1109 | face_book  | localhost | face_book  | Query   | 1    | Sending data   | SELECT * FROM friends WHERE userid='100002279885489' and friendid='100002385585461' Limit 1 |  | 1111 | face_book  | localhost | face_book  | Query   | 2    | Sending data   | SELECT * FROM friends WHERE userid='100001087100886' and friendid='100005456647732' Limit 1 |  | 1132 | face_book 
 | localhost | face_book  | Query   | 3    | Sending data   | SELECT * FROM likes WHERE userid='100005404566097' and pageid='225594034251253' Limit 1     |  | 1148 | face_book  | localhost | face_book  | Query   | 2    | Sending data   | SELECT * FROM likes WHERE userid='100005551654543' and pageid='104088939622341' Limit 1     |  | 1172 | face_book  | localhost | face_book  | Query   | 3    | Sending data   | SELECT * FROM likes WHERE userid='100000009185323' and pageid='110343285691930' Limit 1     |  | 1188 | face_book  | localhost | face_book  | Query   | 1    | Sending data   | SELECT * FROM likes WHERE userid='100003468150624' and pageid='182937471830173' Limit 1     |  | 1192 | face_book  | localhost | face_book  | Query   | 2    | Sending data   | SELECT * FROM likes WHERE userid='100000619411698' and pageid='527695070578211' Limit 1     |  | 1196 | face_book  | localhost | face_book  | Query   | 3    | Sending data   | SELECT * FROM likes WHERE userid='100002866966279' and pageid='244651575605946' Limit 1     |  | 1208 | face_book  | localhost | face_book  | Query   | 1    | Sending data   | SELECT * FROM friends WHERE userid='100001057034709' and friendid='1080136538' Limit 1      |  | 1230 | face_book  | localhost | face_book  | Query   | 1    | Sending data   | SELECT * FROM friends WHERE userid='100005246283878' and friendid='100002513789129' Limit 1 |  | 1240 | face_book  | localhost | face_book  | Query   | 2    | Sending data   | SELECT * FROM friends WHERE userid='100005028574119' and friendid='100001229276848' Limit 1 |  | 1241 | face_book  | localhost | face_book  | Query   | 2    | Sending data   | SELECT * FROM friends WHERE userid='1681467791' and friendid='1537753959' Limit 1           |  | 1242 | face_book  | localhost | face_book  | Query   | 0    | Sending data   | SELECT * FROM friends WHERE userid='100001845705855' and friendid='1668437534' Limit 1      |  | 1247 | face_book  | localhost | face_book  | Query   | 2    | Sending data   | SELECT * FROM friends WHERE userid='100003854517927' and friendid='100002322873932' Limit 1 |  | 1257 | face_book  | localhost | face_book  | Query   | 0    | Sorting result | SELECT lastpost,id FROM facesessions ORDER BY lastpost DESC                                 |  | 1276 | face_book  | localhost | face_book  | Query   | 3    | Sending data   | SELECT * FROM likes WHERE userid='100001579975664' and pageid='402312375509' Limit 1        |  | 1284 | face_book  | localhost | face_book  | Query   | 1    | Sending data   | SELECT * FROM likes WHERE userid='100001827038305' and pageid='254365179238' Limit 1        |  | 1291 | face_book  | localhost | face_book  | Query   | 3    | Sending data   | SELECT * FROM likes WHERE userid='1587203387' and pageid='197678701083' Limit 1             |  | 1309 | face_book  | localhost | face_book  | Query   | 1    | Sending data   | SELECT * FROM friends WHERE userid='100002156769339' and friendid='100001641695726' Limit 1 |  | 1318 | face_book  | localhost | face_book  | Query   | 3    | Sending data   | SELECT * FROM likes WHERE userid='100002230633105' and pageid='343669279056732' Limit 1     |  | 1325 | face_book  | localhost | face_book  | Sleep   | 8    |                |                                                                                             |  | 1333 | face_book  | localhost | face_book  | Sleep   | 2    |                |                                                                                             |  | 1338 | face_book  | localhost | face_book  | Sleep  
 | 0    |                |                                                                                             |  | 1339 | root         | localhost |              | Query   | 0    |                | show processlist                                                                            |  +------+--------------+-----------+--------------+---------+------+----------------+---------------------------------------------------------------------------------------------+  Uptime: 508  Threads: 38  Questions: 65938  Slow queries: 0  Opens: 51156  Flush tables: 1  Open tables: 34  Queries per second avg: 129.799  

and here is /etc/my.cnf

[client]  socket=/var/lib/mysql/mysql.sock    [mysqld]  tmpdir=/mysqltmp    port = 3306  socket = /var/lib/mysql/mysql.sock  skip-networking  skip-external-locking  local-infile=0    back_log = 100  #skip-innodb  max_connections = 50  table_cache = 32  key_buffer_size = 12284M  myisam_sort_buffer_size = 512M  myisam_max_sort_file_size = 8192M  join_buffer_size = 512M  read_buffer_size = 512M  sort_buffer_size = 512M  read_rnd_buffer_size = 512M  table_definition_cache = 8000  table_open_cache = 8000  thread_cache_size = 4  wait_timeout = 360  interactive_timeout = 60  connect_timeout = 10  tmp_table_size = 256M  max_heap_table_size = 256M  max_allowed_packet = 512M  max_seeks_for_key = 1000  group_concat_max_len = 1024  max_length_for_sort_data = 1024  net_buffer_length = 16384  max_connect_errors = 100000  concurrent_insert = 2  bulk_insert_buffer_size = 8M  query_cache_limit = 1M  query_cache_size = 256M  query_cache_type = 1  query_prealloc_size = 262144  query_alloc_block_size = 65536  range_alloc_block_size = 4096  transaction_alloc_block_size = 8192  transaction_prealloc_size = 4096  default-storage-engine = MyISAM  max_write_lock_count = 8    innodb_open_files = 500  innodb_data_file_path= ibdata1:10M:autoextend  innodb_buffer_pool_size = 64M  innodb_additional_mem_pool_size = 32M    innodb_log_files_in_group = 2  innodb_log_file_size = 5M  innodb_log_buffer_size = 8M  innodb_flush_log_at_trx_commit = 2  innodb_thread_concurrency = 8    #slow_query_log=0  #long_query_time=1  #slow_query_log_file=/var/log/mysql/log-slow-queries.log    open_files_limit=50000  [mysqld_safe]  log-error=/var/log/mysqld.log  pid-file=/var/run/mysqld/mysqld.pid  nice = -5  open-files-limit = 8192    [mysqldump]  quick  max_allowed_packet = 512M    [myisamchk]  key_buffer_size = 6400M  sort_buffer_size = 1M  read_buffer_size = 1M  write_buffer_size = 1M    [mysqlhotcopy]  interactive-timeout  

I tried optimizing the tables and adjusting my.cnf with mysqlreport; still the same.

I don't use InnoDB.

mysql version

# mysql -V  mysql  Ver 14.14 Distrib 5.1.68, for unknown-linux-gnu (x86_64) using readline 5.1  

mysql> SHOW CREATE TABLE friends\G

*************************** 1. row ***************************         Table: friends  Create Table: CREATE TABLE `friends` (    `id` int(100) unsigned NOT NULL AUTO_INCREMENT,    `userid` mediumtext COLLATE latin1_general_ci,    `friendid` mediumtext COLLATE latin1_general_ci,    `name` varchar(255) COLLATE latin1_general_ci NOT NULL,    `dateline` varchar(255) COLLATE latin1_general_ci NOT NULL,    `lastsend` varchar(255) COLLATE latin1_general_ci NOT NULL,    `sendstatus` varchar(255) COLLATE latin1_general_ci NOT NULL,    PRIMARY KEY (`id`),    KEY `name` (`name`),    KEY `lastsend` (`lastsend`),    KEY `sendstatus` (`sendstatus`)  ) ENGINE=MyISAM AUTO_INCREMENT=1079024 DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci  1 row in set (0.00 sec)  

mysql> SHOW CREATE TABLE likes\G

*************************** 1. row ***************************
       Table: likes
Create Table: CREATE TABLE `likes` (
  `id` int(100) unsigned NOT NULL AUTO_INCREMENT,
  `userid` mediumtext COLLATE latin1_general_ci,
  `pageid` mediumtext COLLATE latin1_general_ci,
  `name` varchar(255) COLLATE latin1_general_ci NOT NULL,
  `link` varchar(255) COLLATE latin1_general_ci NOT NULL,
  `dateline` varchar(255) COLLATE latin1_general_ci NOT NULL,
  `lastsend` varchar(255) COLLATE latin1_general_ci NOT NULL,
  `sendstatus` varchar(255) COLLATE latin1_general_ci NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=MyISAM AUTO_INCREMENT=2008744 DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci
1 row in set (0.00 sec)

mysql> SHOW CREATE TABLE facesessions\G

*************************** 1. row ***************************
       Table: facesessions
Create Table: CREATE TABLE `facesessions` (
  `id` int(100) unsigned NOT NULL AUTO_INCREMENT,
  `session_key` varchar(255) COLLATE latin1_general_ci NOT NULL,
  `uid` varchar(255) COLLATE latin1_general_ci NOT NULL,
  `expires` varchar(255) COLLATE latin1_general_ci NOT NULL,
  `secret` varchar(255) COLLATE latin1_general_ci NOT NULL,
  `access_token` varchar(255) COLLATE latin1_general_ci NOT NULL,
  `sig` varchar(255) COLLATE latin1_general_ci NOT NULL,
  `username` varchar(255) COLLATE latin1_general_ci NOT NULL,
  `lastposttime` varchar(255) COLLATE latin1_general_ci NOT NULL,
  `lastpost` varchar(255) COLLATE latin1_general_ci NOT NULL,
  `nextsend` varchar(50) COLLATE latin1_general_ci DEFAULT NULL,
  `lastpoststatus` varchar(50) COLLATE latin1_general_ci DEFAULT NULL,
  `gender` varchar(20) COLLATE latin1_general_ci DEFAULT NULL,
  `birthday` varchar(20) COLLATE latin1_general_ci DEFAULT NULL,
  `location` varchar(50) COLLATE latin1_general_ci DEFAULT NULL,
  `imported` int(10) DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `uid` (`uid`),
  KEY `access_token` (`access_token`),
  KEY `sig` (`sig`),
  KEY `username` (`username`),
  KEY `lastposttime` (`lastposttime`),
  KEY `lastpost` (`lastpost`),
  KEY `nextsend` (`nextsend`),
  KEY `lastpoststatus` (`lastpoststatus`)
) ENGINE=MyISAM AUTO_INCREMENT=16238 DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci
1 row in set (0.00 sec)

mysql> SELECT SUM(index_length) FROM information_schema.tables WHERE engine='MyISAM';

+-------------------+
| SUM(index_length) |
+-------------------+
|         150684672 |
+-------------------+
1 row in set (0.01 sec)
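For context (this check is not part of the original post), a common MyISAM sanity check is to compare key_buffer_size against the index size actually on disk: the 12 GB key buffer configured above dwarfs the ~150 MB of MyISAM indexes reported by the query. A minimal sketch of such a check, using an arbitrary 10% headroom factor:

-- Hedged sketch: suggest a key_buffer_size based on the real MyISAM index footprint.
-- Assumes information_schema is available (MySQL 5.0+); the 1.1 factor is an arbitrary safety margin.
SELECT CONCAT(CEILING(SUM(index_length) * 1.1 / POWER(1024, 2)), 'M') AS suggested_key_buffer_size
FROM information_schema.tables
WHERE engine = 'MyISAM';

Remember that the key buffer caches only MyISAM index blocks; data blocks are served from the operating system's file cache, so an oversized key buffer mostly takes memory away from that cache.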

How do I execute an Oracle SQL script without sqlplus hanging on me?

Posted: 20 Jun 2013 01:33 PM PDT

For an automated task I would very much like to run some SQL scripts and make sure that sqlplus does not hang under any circumstances, i.e.:

  • If the script contains any named substitution variable that has to be entered by the user, sqlplus should return with an error instead of prompting for the variable -- I cannot use set define off, as some of these scripts need to accept command line parameters that have to be resolved with &1
  • The script must not "hang" when it doesn't contain an exit; at the end.

    Solved: I think I can achieve this by wrapping the SQL script in a secondary "caller script": the caller script invokes the real script with @ and then has a fixed exit; after it, so the wrapped script doesn't need its own exit (see the sketch after this list).

  • Anything else: if the script would require a prompt for any other reason, sqlplus should return with an error instead.
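A minimal sketch of that caller-script idea (the file names and connect string below are invented for illustration, not taken from the question):

-- caller.sql: wraps the real script so sqlplus always terminates
WHENEVER SQLERROR EXIT FAILURE    -- abort with a non-zero exit code on SQL errors
WHENEVER OSERROR EXIT FAILURE     -- abort on OS errors (e.g. a missing script file)
@real_script.sql &1               -- pass the first command-line argument through to the wrapped script
EXIT;                             -- guarantees an exit even if real_script.sql has none

Invoking it as sqlplus -S -L user/password@db @caller.sql some_value < /dev/null adds another safety net: with stdin redirected from /dev/null, an unexpected substitution-variable or ACCEPT prompt reads end-of-file, so sqlplus should terminate instead of waiting for input.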

How can I do this with Oracle (using sqlplus or something else)?

Why does the second execution of a query run faster, even with different constants?

Posted: 20 Jun 2013 11:33 AM PDT

Can someone explain, or point me to material on, how index usage behaves in MySQL when the same query is executed with different constants at intervals? I notice that only the first execution against the table takes a long time; after that, the query runs very quickly even with different constants. I would like to execute the query in such a way that it takes the same amount of time on every run, regardless of the constants. Is there a parameter I can switch on or off to achieve that?

Query execution time: 9 minutes.

mysql> EXPLAIN SELECT chargetype, COUNT(br.`id`), SUM(br.`charge`) FROM billingreport AS br WHERE br.`addeddate` BETWEEN '2013-02-01 00:00:00' AND '2013-02-28 23:59:59' AND br.`status` = 'success' AND br.`idvendor` = 10 GROUP BY chargetype \G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: br
         type: index_merge
possible_keys: NewIndex3,NewIndex6,idx_br_status
          key: NewIndex3,idx_br_status
      key_len: 4,1
          ref: NULL
         rows: 2887152
        Extra: Using intersect(NewIndex3,idx_br_status); Using where; Using temporary; Using filesort
1 row in set (0.00 sec)

Query execution time: 18 seconds.

mysql> EXPLAIN SELECT chargetype, COUNT(br.`id`), SUM(br.`charge`) FROM billingreport AS br WHERE br.`addeddate` BETWEEN '2013-01-01 00:00:00' AND '2013-01-31 23:59:59' AND br.`status` = 'success' AND br.`idvendor` = 10 GROUP BY chargetype \G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: br
         type: index_merge
possible_keys: NewIndex3,NewIndex6,idx_br_status
          key: NewIndex3,idx_br_status
      key_len: 4,1
          ref: NULL
         rows: 3004089
        Extra: Using intersect(NewIndex3,idx_br_status); Using where; Using temporary; Using filesort
1 row in set (0.01 sec)
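For what it is worth, one way to check whether caching is skewing these timings (a sketch for context, not taken from the original post) is to bypass the query cache explicitly and reset the session counters between runs. Note that blocks already loaded into the key cache, InnoDB buffer pool or OS file cache will still make repeated runs faster:

-- Reset the session status counters so the next statement can be measured in isolation (MySQL 5.x).
FLUSH STATUS;

-- SQL_NO_CACHE bypasses the query cache for this statement only; it does not evict
-- index/data blocks that earlier runs have already warmed up.
SELECT SQL_NO_CACHE chargetype, COUNT(br.`id`), SUM(br.`charge`)
FROM billingreport AS br
WHERE br.`addeddate` BETWEEN '2013-02-01 00:00:00' AND '2013-02-28 23:59:59'
  AND br.`status` = 'success'
  AND br.`idvendor` = 10
GROUP BY chargetype;

-- Rough indicator of how much row-fetching work the statement actually did.
SHOW SESSION STATUS LIKE 'Handler_read%';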

SSRS appears to be ignoring Permissions set using Report Manager

Posted: 20 Jun 2013 05:33 PM PDT

I have setup SSRS on SQL Server 2008 in native mode.

As an administrator I can login to report manager, upload reports and run them, and also use the Web Service URL to generate reports.

I have also created a local user on the machine. I went into Report Manager as Admin and, at the top level, set permissions that should assign the local user to all roles.

When I log in to the machine as that user and then navigate to Report Manager, I just get the heading of the page but do not see any of the configured folders.

I've checked, and the folders are set to inherit parent permissions; they show the newly created local user in there too.

It seems odd that I have set the permissions, yet SSRS is still not showing what I should be able to see. Is there another step I need to take other than configuring the permissions in Report Manager?

When logged in as the newly created local user:

Report Manager - shows the heading for the page, but no folders/items
Web Service URL (http://machine/ReportServer) - rsAccessDeniedError

postgis problem with shortest distance calculation

Posted: 20 Jun 2013 12:33 PM PDT

while working with POSTGIS pgrouting, for calculateing the distance between two roads(lines) i got the shortest_path function.

But its logic is based on start_point (start_id) and end_point (end_id), whereas in my data each linestring contains many internal points, e.g. 'linestring(1 1,2 2,3 3,4 4,5 5)'.

It takes (1 1) as the start point and (5 5) as the end point.

If another line starts at (5 5), such as 'linestring(5 5,6 6)', it is shown as part of the route.

But a line that crosses the first one at an interior point, such as (2 2), (3 3) or (4 4), is not reported as connected. For example:

table roads:
id  name  way
1   A     linestring(1 1,2 2,3 3,4 4,5 5)
2   B     linestring(5 5,6 6)
3   c     linestring(2 1,2 2,2 3)

If I apply the shortest_path function from point (1 1) to (6 6) it shows the route, but from (1 1) to (2 3) it returns nothing, even though a route exists: (1 1, 2 2, 2 3).

Can anyone please help me find a solution?

Regards Deepak M
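For context (this is not part of the original question): pgRouting only connects edges at their endpoints, so a crossing at an interior vertex such as (2 2) stays invisible until the network is noded. A minimal sketch of noding the example roads table, assuming PostGIS 2.x (ST_Node available) and that edge ids can simply be regenerated:

-- Split every linestring wherever it intersects another, so each crossing becomes a
-- segment endpoint. Table and column names follow the example above (roads.way).
CREATE TABLE roads_noded AS
SELECT row_number() OVER ()              AS id,
       (ST_Dump(ST_Node(geom_all))).geom AS way
FROM (SELECT ST_Collect(way) AS geom_all FROM roads) AS collected;

The routing topology (source/target vertex ids) then has to be rebuilt on roads_noded with the topology-building function of your pgRouting version before shortest_path can use the new crossings.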

How do I find my current SCN?

Posted: 20 Jun 2013 01:07 PM PDT

Given any version of Oracle:

  • How do I find my current SCN?
  • What is the maximum possible SCN?
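For the first part, two commonly used queries (shown for context only, assuming at least Oracle 10g and the necessary privileges):

-- Current SCN as recorded for the database.
SELECT current_scn FROM v$database;

-- Current SCN via the flashback package (requires EXECUTE on DBMS_FLASHBACK).
SELECT dbms_flashback.get_system_change_number AS current_scn FROM dual;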

Meaning of 'SET' in error message 'Null value is eliminated by an aggregate or other SET operation'

Posted: 20 Jun 2013 10:55 AM PDT

I saw the above 'ANSI warning' message today when running a colleague's script (and I don't know which of the many statements caused the warning to be shown).

In the past I've ignored it: I avoid nulls myself, so anything that would eliminate them is a good thing in my book! Today, however, the word 'SET' jumped out at me and I realised I don't know what it is supposed to mean in this context.

My first thought, based on the fact it is upper case, is that it is referring to the SET keyword and means 'assignment', as in

UPDATE <table> SET ...

...ON DELETE SET NULL...

SET IDENTITY_INSERT <table> ON

According to the SQL Server Help, the 'ANSI warnings' feature is based on ISO/ANSI SQL-92, the spec for which makes just one use of the term 'Set operation' in a subsection title, hence in title case, in the data assignment section. However, after a quick Googling of the error message I see examples that are SELECT queries with seemingly no assignment involved.
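For instance, a plain aggregate over a nullable column is enough to raise it; a minimal hypothetical repro (table and data invented for illustration):

-- Produces: Warning: Null value is eliminated by an aggregate or other SET operation.
CREATE TABLE #t (x int NULL);
INSERT INTO #t (x) VALUES (1);
INSERT INTO #t (x) VALUES (NULL);

SELECT SUM(x) AS total FROM #t;   -- the NULL row is silently dropped before the SUM

DROP TABLE #t;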

My second thought, based on the wording of the SQL Server warning, was that the mathematical meaning of set is implied. However, I don't think that aggregation in SQL is strictly speaking a set operation. Even if the SQL Server team consider it to be a set operation, what is the purpose of putting the word 'set' in capitals?

While Googling I noticed a SQL Server error message:

Table 'T' does not have the identity property. Cannot perform SET operation.  

The same words 'SET operation' in the same case here can only refer to the assignment of the IDENTITY_INSERT property, which brings me back to my first thought.

Can anyone shed any light on the matter?

Can you give me one example of Business Intelligence?

Posted: 20 Jun 2013 06:11 PM PDT

I don't really understand what Business Intelligence is all about. If I start from having a corporate DB, what would a BI person actually do? I found plenty of material on the web, but it is usually a bit too complex. I want a simple example that would make me understand what BI is all about and what a BI person would produce that is of value to my organization.
