Monday, June 3, 2013

[SQL 2012] Availability Group Wizard Fails on TransactionLog restore

Availability Group Wizard Fails on TransactionLog restore

Posted: 03 Jun 2013 02:46 AM PDT

So I am running through the wizard in a lab environment to create an AG, using Windows 2012 and SQL 2012 SP1, with a database containing just one table of 42 rows. Every single time I run through the wizard, it restores the full backup and then fails to restore the transaction log backup with the error "Can't find backup" - even though the wizard itself is creating the files. Has anyone else run across this issue?

Group by field, include or not include

Posted: 02 Jun 2013 06:01 PM PDT

Guys, I found some strange behavior with indexes (or I just think it is strange). It looks like adding a field as a key column of an index gains less performance than adding the GROUP BY field as an included column. Can anyone tell me what to do with GROUP BY fields: add them as key columns, or as included columns?

[T-SQL] triggers

triggers

Posted: 02 Jun 2013 09:38 PM PDT

Hello all, I've created a trigger that I'm having a bit of a problem with. After a user completes a warranty form online, I do a lookup against an internal table to see if they are a registered installer, and mark them as such...

[code="sql"]ALTER TRIGGER [dbo].[TRIG_Update_Installer]
ON [dbo].[tbl_warranties]
AFTER INSERT
AS
BEGIN
    UPDATE tbl_Warranties
    SET siteID = 'O'
    WHERE ID IN (
        SELECT dbo.tbl_warranties.ID
        FROM dbo.Tbl_InstallersList
        INNER JOIN dbo.tbl_warranties
            ON dbo.Tbl_InstallersList.Gas_id = dbo.tbl_warranties.business_gsn
        WHERE dbo.tbl_warranties.business_gsn IS NOT NULL
          AND LEN(dbo.tbl_warranties.business_gsn) > 0
          AND datePickup IS NULL
    )
END[/code]

If I run the code inside the BEGIN...END block by itself, it works perfectly. I just can't get it to run from the trigger after the record has been inserted :( I've tried 'on insert' and 'after insert'. Any help would be most appreciated. Many thanks, Dave
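A common pattern worth trying here (a sketch, not from the original post, reusing the poster's table and column names) is to scope the trigger to only the rows that fired it via the `inserted` pseudo-table, rather than re-scanning the whole table:

```sql
ALTER TRIGGER [dbo].[TRIG_Update_Installer]
ON [dbo].[tbl_warranties]
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    UPDATE w
    SET w.siteID = 'O'
    FROM dbo.tbl_warranties AS w
    INNER JOIN inserted AS i
        ON i.ID = w.ID                      -- only the rows just inserted
    INNER JOIN dbo.Tbl_InstallersList AS il
        ON il.Gas_id = w.business_gsn
    WHERE w.business_gsn IS NOT NULL
      AND LEN(w.business_gsn) > 0
      AND w.datePickup IS NULL;
END
```

This also handles multi-row inserts correctly, which a trigger that assumes one row at a time does not.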

(SQL) Excel file has bigger size using OPENQUERY to update it

Posted: 02 Jun 2013 10:13 PM PDT

Maybe the solution is easy, but I can't find it, so I'm writing here for some help. We have this SQL function:

CREATE FUNCTION [dbo].[updateExcel]
(
    -- Add the parameters for the function here
    @cell VARCHAR(4),
    @description VARCHAR(200)
)
RETURNS BIT
AS
BEGIN
    DECLARE @sql NVARCHAR(1000)
    SET @sql = 'UPDATE openquery(LinkedServer2ExcelFile, ''SELECT * FROM [Sheet1$' + @cell + ':' + @cell + ']'') set F1 = ''' + @description + ''''
    --PRINT @sql
    EXEC sp_executesql @sql
    RETURN 0
END

that we use to update an Excel file:

EXEC @Result = updateExcel 'somecell', 'somevalue'

The problem is that after this update the Excel file has a bigger size. But when we open it and save it again, the file's size goes back to normal. I hope to find some answers here... Thanx!!!

[SQL Server 2008 issues] Import Export Wizard Performance

Import Export Wizard Performance

Posted: 02 Jun 2013 05:58 PM PDT

I have access to SQL Server 2005. Question: I need to load data from a text file into a SQL Server table. Which of the following options is better?

1. Use BIDS, Import & Export Wizard, with the text file as source, and load it into a SQL Server table.
2. Use BIDS, Import & Export Wizard, with an XML file as source, and load it into a SQL Server table.
3. Use SSMS, Import & Export Wizard, with the text file as source, and load it into a SQL Server table.
4. Use SSMS, Import & Export Wizard, with an XML file as source, and load it into a SQL Server table.

Connectivity issue

Posted: 02 Jun 2013 12:52 PM PDT

Hi, I am trying to connect to a named instance on another server (myhost\myinstance), but get an error message:

A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) (Microsoft SQL Server, Error: -1)
For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=-1&LinkId=20476

If I use the IP address and the port number, it all works fine. I can even connect using the hostname and the port number, but if I use the instance name, it does not work. E.g.:

xxx.xxx.xxx.xxx,3000 -- works
myhost,3000 -- works
myhost\myinstance -- does not work

I can connect to myhost\myinstance from another server just fine. The server that has the problem is a clustered instance of SQL 2008 R2 SP2; the two other servers are SQL 2008 R2 SP1. Any ideas? Thanks.

How to reduce the size of the t-log file of a mirrored DB that is still in restoring mode

Posted: 02 Jun 2013 12:50 PM PDT

Hello gents. This might look like an oddball question, but one mirrored DB on our DR box has a huge t-log file that is occupying way too much space, and my boss is asking me to reduce it. Has anyone encountered this kind of situation before? I am a bit reluctant to shrink on the PROD side (hopefully, and theoretically, the t-log shrink action will be propagated to the DR?). Is there any way to accomplish this purely on the DR side? Thanks in advance for any input!
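For context, a commonly cited approach (a sketch with hypothetical database and file names, not from the post): a mirror that is in RESTORING state cannot be shrunk directly, but `DBCC SHRINKFILE` run on the principal is a logged operation and is replayed on the mirror, so the shrink happens on the principal side:

```sql
-- On the PRINCIPAL: back up the log first so inactive VLFs can be released,
-- then shrink the log file; the shrink is mirrored to the DR copy.
BACKUP LOG [MyMirroredDb] TO DISK = N'X:\Backups\MyMirroredDb_log.trn';

USE [MyMirroredDb];
DBCC SHRINKFILE (N'MyMirroredDb_log', 1024);  -- target size in MB
```

Whether the log can actually shrink depends on where the active portion of the log sits, so the backup/shrink cycle may need to run more than once.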

SSRS Plug-ins

Posted: 02 Jun 2013 06:28 PM PDT

Hi, I have users who want to graphically create their own SQL reports (in R2 and 2012). We have set up a set of standard SQL reports for them; however, specific users have been asking for a graphical interface (drag and drop) so they can create their own reports using the SSRS engine. Can anyone recommend front-ends I could install so semi-skilled users can create their own reports relatively easily? This can be either open source or a cheap application to purchase. Thanks, Pete

Error

Posted: 03 Dec 2011 12:45 AM PST

I am getting an error while using the database. I was trying to take the database OFFLINE:

Msg 952, Level 16, State 1, Line 1
Database abc is in transition. Try the statement later.

SQL server Query

Posted: 02 Jun 2013 06:26 PM PDT

Please tell me: what is query recasting?

How to Decrease the Memory Pressure in SSIS

Posted: 02 Jun 2013 06:04 PM PDT

How do you decrease memory pressure in SSIS?

1. Set the diff buffer max = 0
2. Reduce the engine threads
3. Increase BLOB temp storage
4. Reduce buffer size

Gmail Setup For SMTP

Posted: 02 Jun 2013 04:56 PM PDT

Hi, I set up Gmail as the SMTP server in MSSQL. The configuration was completed, but while sending mail it shows the error "530 5.5.1 Must issue a STARTTLS command first". Does anyone have a solution? Thanks in advance.

SSRS - reports rendering incorrectly

Posted: 02 Jun 2013 05:16 PM PDT

Hi all, I am currently having a problem with SSRS rendering incorrectly. The problem is basically that reports do not display correctly in browsers such as Chrome, Firefox, and Safari: the IFRAME's height defaults to a few hundred pixels, so you only see the top 2 inches of the report. There are articles which suggest changes to the ReportViewer.aspx file found under the SSRS directory. I am not comfortable making changes to SQL system files, but if it has to get done, then so be it. Other than making a change to this file, are there any other options I can look at? Thanks all for your input!!!

How to roll back an insert query on SQL Server? Please suggest, high priority... thanks in advance

Posted: 02 Jun 2013 01:28 AM PDT

How can I roll back an insert query on SQL Server? Please suggest; this is fairly high priority. Thanks in advance.
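A minimal sketch of the standard pattern (table and column names are placeholders): an insert can only be rolled back if it runs inside an explicit transaction that has not yet committed; an already-committed insert has to be undone with a DELETE instead.

```sql
BEGIN TRANSACTION;

INSERT INTO dbo.MyTable (Col1)   -- hypothetical table
VALUES ('test');

-- Inspect the result, then undo the insert:
ROLLBACK TRANSACTION;            -- or COMMIT TRANSACTION to keep it
```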

Location and name of SQL Server 2008 R2 for Connection String

Posted: 01 Jun 2013 11:21 PM PDT

I have SQL Server 2008 R2. I need the server name and instance name of SQL Server 2008 R2 to put into my connection string.
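A sketch of how the value can be found (the database name below is a placeholder): the server/instance name used in a connection string can be read from the instance itself.

```sql
-- Run this in SSMS; the result is the value for Server / Data Source:
SELECT @@SERVERNAME AS server_name,
       SERVERPROPERTY('InstanceName') AS instance_name;  -- NULL for a default instance
```

For a default instance the connection string then looks like `Server=MACHINENAME;Database=MyDb;Integrated Security=True;`; for a named instance, `Server=MACHINENAME\INSTANCENAME;Database=MyDb;Integrated Security=True;` (MyDb is hypothetical).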

Agent Job Links to Another SQL Server and Fails due to Service Account Privs

Posted: 22 May 2013 02:54 PM PDT

To my friends in the community, here is a stumper that I can't seem to find sufficient information on despite repeated Google searches. Let me preface my explanation by saying that I am not a DBA, and I have only limited access to the involved servers because they belong to a customer for which we are implementing an application. What is going to be needed is kind of a step-by-step, cookbook approach, because I'm not sure at quite what level their DBA resources operate. Without further ado:

1. Our application runs on a virtualized server named SQL2. I do not have direct access to this server, but I can access the database for our application using SSMS from the application tier's server.
2. There is a SQL Server Agent job defined that runs on SQL2 but links to a different SQL Server (SQL1) containing another application from which our application needs to draw some data. I do have limited access directly to SQL1, but very limited privileges there.
3. On both servers, there is a SQL account defined ('myaccount') and the password is the same on both. This account does have sufficient privileges on SQL1 to access the customer's application database, or at least the table I need to read from.

On my application server, I can launch SSMS, connect to SQL2, and run the SP below, and it runs just fine. It is but a small subset of the actual SP that needs to run, to keep this somewhat simple. I have defined an agent job on SQL2 whose owner is 'myaccount'. When the agent job runs it gets an error message. It is set up to run the exact same EXEC as shown below.

[code="sql"]-- From SSMS this runs without issues:
EXEC [dbo].[TestConnectToLinkedServer];
-- Above is defined as the job step in the agent job named "Test Linked Server Connection"

-- When I run this, it fails:
EXEC msdb..sp_start_job N'Test Linked Server Connection';

-- I can get the error message it fails on with this (top 2 rows):
EXEC dbo.sp_help_jobhistory @job_name = N'Test Linked Server Connection', @mode = 'FULL';[/code]

The error message returned by the last query is:

[code="plain"]The job failed. The Job was invoked by User myaccount. The last step to run was step 1 (Run SP).
Executed as user: NT AUTHORITY\SYSTEM. Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'. [SQLSTATE 28000] (Error 18456). The step failed.[/code]

Clearly the account that is being used on SQL1 (NT AUTHORITY\ANONYMOUS LOGON) isn't the same as the one on SQL2 (NT AUTHORITY\SYSTEM). I'm thinking this must have something to do with the account mappings, but I am not sure on which server the mapping needs to be changed (or, for that matter, what it needs to be changed to). Here is the SP that is run either manually or by the agent:

[code="sql"]CREATE PROCEDURE [dbo].[TestConnectToLinkedServer]
AS
BEGIN
    DECLARE @CustAppDBname NVARCHAR(20) = N'CustApp'
        ,@LinkedServer NVARCHAR(20) = N'SQL1'
        ,@retval INT
        ,@ErrorCount INT = 0
        ,@ErrorCount1 INT = 0
        ,@SQL NVARCHAR(MAX);

    IF NOT EXISTS (SELECT 1 FROM sys.servers WHERE name = @LinkedServer)
        EXEC sp_addlinkedserver @server = @LinkedServer, @srvproduct = N'SQL Server';

    BEGIN TRY
        -- This will fail if there is no server to link to (i.e., account not recognized)
        EXEC master.dbo.sp_addlinkedsrvlogin @rmtsrvname = @LinkedServer
            ,@useself = 'TRUE', @locallogin = 'myaccount';
        SELECT @retval = 0;
    END TRY
    BEGIN CATCH
        SELECT -3, 'Problem linking to Customer'' app server: ' + @LinkedServer, @retval
            ,[SQL Error Number] = ERROR_NUMBER()
            ,[SQL Error Severity] = ERROR_SEVERITY()
            ,[SQL Error State] = ERROR_STATE()
            ,[SQL Error Procedure] = ERROR_PROCEDURE()
            ,[SQL Error Line] = ERROR_LINE()
            ,[SQL Error Message] = ERROR_MESSAGE();
        SELECT @ErrorCount = @ErrorCount + 1, @retval = -1;
        SELECT @ErrorCount1 = -3
    END CATCH

    IF @ErrorCount = 0
        EXEC @retval = sys.sp_testlinkedserver @LinkedServer;

    IF @retval <> 0
    BEGIN
        SELECT -4, 'Test of linked server failed'
        SELECT @ErrorCount = @ErrorCount + 1, @retval = -1;
    END

    SELECT @SQL = N' SELECT TOP 10 * FROM [' + @LinkedServer + N'].[' + @CustAppDBname + N'].dbo.AppTable'
    EXEC sp_executesql @SQL

    IF @LinkedServer IS NOT NULL AND EXISTS (SELECT 1 FROM sys.servers WHERE name = @LinkedServer)
    BEGIN
        IF @ErrorCount1 <> -3
            EXEC sp_droplinkedsrvlogin @rmtsrvname = @LinkedServer, @locallogin = 'myaccount';
        EXEC sp_dropserver @LinkedServer, 'droplogins';
    END
END[/code]

Any help would be greatly appreciated. BTW, I am not sure there's sufficient information here for someone to identify and precisely resolve the issue. If there is not, please let me know what additional information you need to diagnose it and I'll try to get it. It may take a day or two, as I'll most likely be relying on others' arms and legs to get it done.

help needed urgently with datediff query

Posted: 01 Jun 2013 09:03 PM PDT

Hi - I have been trying to return the DateDiff value and keep getting an error saying:

character constant must contain exactly one character

This is what I am currently trying - can anyone let me know what I am doing wrong, please? Any help appreciated...

return Convert.ToDecimal(Microsoft.VisualBasic.DateDiff(Microsoft.VisualBasic.DateInterval.Day, Date.Parse("[CUST_Tickets.CREATE_DATE)"), Microsoft.VisualBasic.Now))

Julie

Sunday, June 2, 2013

[how to] Transforming natural language to SQL [closed]

Transforming natural language to SQL [closed]

Posted: 02 Jun 2013 03:33 PM PDT

So I have this example.

E ≡ O (f.codf, COUNT(DISTINCT f.codp)) I (furnizeaza f) W(E1) G(f.codf)
E1 ≡ 2 = E2
E2 ≡ O (COUNT(s.cods)) I (sectii s) W ((s.cods, f.codf) τ furnizeaza f2[f2.cods, f2.codf])

SELECT f.codf, COUNT(DISTINCT f.codp) AS numar
FROM furnizeaza f
WHERE 2 = (SELECT COUNT(s.cods)
           FROM sectii s
           WHERE (s.cods, f.codf) IN (SELECT f2.cods, f2.codf
                                      FROM furnizeaza f2))
GROUP BY f.codf

It demonstrates a technique that helps create SQL queries from natural language. I tried finding more information on this subject, but all I found were libraries that automate the process. Can anyone direct me to a book or a tutorial that describes the exact step-by-step process, so I could do something like that manually?

Optimize multiple self-JOINs or redesign DB?

Posted: 02 Jun 2013 03:55 PM PDT

I'm looking for advice on either optimizing multiple self-joins, or a better table/DB design.

One of the tables looks as follows (relevant cols only):

CREATE TABLE IF NOT EXISTS CountryData (
    countryDataID INT PRIMARY KEY AUTO_INCREMENT,
    dataID INT NOT NULL REFERENCES DataSources (dataID),
    dataCode VARCHAR(30) NULL,
    countryID INT NOT NULL REFERENCES Countries (countryID),
    year INT NOT NULL,
    data DEC(20,4) NULL,
    INDEX countryDataYear (dataID, countryID, year)
);

The data column has values for a few hundred indicators, 90 countries, and 30 years for ~1mn rows total. A standard query requires selecting N indicators for a particular year and C countries, yielding a CxN table for 90 rows max.

With all values in a single column, self-joins seemed like the way to go. So I have experimented with various suggestions to speed those up, including indexing and creating new (temp) tables. At 9 self-joins, the query takes a little under 1 min. Beyond that, it spins forever.

The new table from where the self-joins take place has only about 1,000 rows, indexed on what seem to be the relevant variables - creation takes about 0.5 sec:

CREATE TABLE Growth
    SELECT dataID, countryID, year, data
    FROM CountryData
    WHERE dataID > 522 AND year = 2017;

CREATE INDEX growth_ix
    ON Growth (dataID, countryID);

The SELECT query then arranges up to XX indicators in the results table, with XX unfortunately less than 10:

SELECT
    Countries.countryName AS Country,
    em01.em,
    em02.em,
    em03.em,
    ...
    emX.em
FROM
    (SELECT em1.data AS em, em1.countryID
     FROM Growth AS em1
     WHERE em1.dataID = 523) AS em01
JOIN
    (SELECT em2.data AS em, em2.countryID
     FROM Growth AS em2
     WHERE em2.dataID = 524) AS em02
    USING (countryID)
JOIN
    (SELECT em3.data AS em, em3.countryID
     FROM Growth AS em3
     WHERE em3.dataID = 525) AS em03
    USING (countryID)
...
JOIN
    (SELECT emX.data AS em, emX.countryID
     FROM Growth AS emX
     WHERE emX.dataID = 527) AS emXX
    USING (countryID)
JOIN Countries
    USING (countryID)

I'd actually like to retrieve a few more variables, plus potentially join other tables. Now I'm wondering whether there's a way to run this more efficiently, or whether I should take an altogether different approach, such as using wide tables with indicators in different columns to avoid self-joins.
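One way to avoid the self-joins entirely (a sketch, not from the post, reusing the Growth table and the dataID values shown above) is conditional aggregation: a single pass over the table, pivoting each indicator into its own column.

```sql
SELECT
    c.countryName AS Country,
    MAX(CASE WHEN g.dataID = 523 THEN g.data END) AS em01,
    MAX(CASE WHEN g.dataID = 524 THEN g.data END) AS em02,
    MAX(CASE WHEN g.dataID = 525 THEN g.data END) AS em03
    -- ...one CASE line per indicator
FROM Growth AS g
JOIN Countries AS c USING (countryID)
GROUP BY c.countryName;
```

Because the table is scanned once regardless of how many indicators are requested, adding a 10th or 20th indicator costs only another CASE expression rather than another join.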

Is my table design wrong, or did I choose the wrong indexes when making the tables?

Posted: 02 Jun 2013 05:08 PM PDT

I've built a web application as a tool to eliminate unnecessary data from a peoples table; the application mainly filters all data for people who are valid to receive election rights. At first it wasn't a problem, while the main table still had few rows, but it is really bad (6 seconds) now that the table holds about 200K rows (and it will be much worse, because the table will grow to 6 million rows).

I have the table design below, and I am joining with 4 region tables (province, city, district and town). Each region table is related to the others through its own id:

CREATE TABLE `peoples` (
  `id` mediumint(8) unsigned NOT NULL AUTO_INCREMENT,
  `id_prov` smallint(2) NOT NULL,
  `id_city` smallint(2) NOT NULL,
  `id_district` smallint(2) NOT NULL,
  `id_town` smallint(4) NOT NULL,
  `tps` smallint(4) NOT NULL,
  `urut_xls` varchar(20) NOT NULL,
  `nik` varchar(20) NOT NULL,
  `name` varchar(60) NOT NULL,
  `place_of_birth` varchar(60) NOT NULL,
  `birth_date` varchar(30) NOT NULL,
  `age` tinyint(3) NOT NULL DEFAULT '0',
  `sex` varchar(20) NOT NULL,
  `marital_s` varchar(20) NOT NULL,
  `address` varchar(160) NOT NULL,
  `note` varchar(60) NOT NULL,
  `m_name` tinyint(1) NOT NULL DEFAULT '0',
  `m_birthdate` tinyint(1) NOT NULL DEFAULT '0',
  `format_birthdate` tinyint(1) NOT NULL DEFAULT '0',
  `m_sex` tinyint(1) NOT NULL DEFAULT '0',
  `m_m_status` tinyint(1) NOT NULL DEFAULT '0',
  `sex_double` tinyint(1) NOT NULL DEFAULT '0',
  `id_import` bigint(10) NOT NULL,
  `id_workspace` tinyint(4) unsigned NOT NULL DEFAULT '0',
  `stat_valid` smallint(1) NOT NULL DEFAULT '0',
  `add_manual` tinyint(1) unsigned NOT NULL DEFAULT '0',
  `insert_by` varchar(12) NOT NULL,
  `update_by` varchar(12) DEFAULT NULL,
  `mark_as_duplicate` smallint(1) NOT NULL DEFAULT '0',
  `mark_as_trash` smallint(1) NOT NULL DEFAULT '0',
  `in_date_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`),
  KEY `ind_import` (`id_import`),
  KEY `ind_duplicate` (`mark_as_duplicate`),
  KEY `id_workspace` (`id_workspace`),
  KEY `add_manual` (`add_manual`),
  KEY `il` (`stat_valid`,`mark_as_trash`,`in_date_time`),
  KEY `region` (`id_prov`,`id_city`,`id_district`,`id_town`,`tps`),
  KEY `name` (`name`),
  KEY `place_of_birth` (`place_of_birth`),
  KEY `ind_birth` (`birth_date`(10)),
  KEY `ind_sex` (`sex`(2))
) ENGINE=MyISAM AUTO_INCREMENT=1 DEFAULT CHARSET=latin1;

CREATE TABLE `test_prov` (
  `id` smallint(2) NOT NULL,
  `name_prov` varchar(60) NOT NULL,
  `head_manager` varchar(60) DEFAULT NULL,
  `handprint` blob,
  `ibu_kota` varchar(60) DEFAULT NULL,
  `ketua_kpu` varchar(60) DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `name` (`name_prov`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

CREATE TABLE `test_city` (
  `id` smallint(2) NOT NULL,
  `id_prov` smallint(2) NOT NULL,
  `name_city` varchar(60) NOT NULL,
  `head_manager` varchar(60) DEFAULT NULL,
  `ibu_kota` varchar(60) DEFAULT NULL,
  `ketua_kpu` varchar(60) DEFAULT NULL,
  PRIMARY KEY (`id_prov`,`id`),
  KEY `name_city` (`name_city`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

CREATE TABLE `test_district` (
  `id` smallint(2) NOT NULL,
  `id_city` smallint(2) NOT NULL,
  `id_prov` smallint(2) NOT NULL,
  `name_district` varchar(60) NOT NULL,
  `head_manager` varchar(60) DEFAULT NULL,
  `handprint` blob,
  `ppk_1` varchar(60) DEFAULT NULL,
  `ppk_2` varchar(60) DEFAULT NULL,
  `ppk_3` varchar(60) DEFAULT NULL,
  `ppk_4` varchar(60) DEFAULT NULL,
  `ppk_5` varchar(60) DEFAULT NULL,
  PRIMARY KEY (`id_prov`,`id_city`,`id`),
  KEY `name_district` (`name_district`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

CREATE TABLE `test_town` (
  `id` smallint(4) NOT NULL,
  `id_district` smallint(2) NOT NULL,
  `id_city` smallint(2) NOT NULL,
  `id_prov` smallint(2) NOT NULL,
  `name_town` varchar(60) NOT NULL,
  `handprint` blob,
  `pps_1` varchar(60) DEFAULT NULL,
  `pps_2` varchar(60) DEFAULT NULL,
  `pps_3` varchar(60) DEFAULT NULL,
  `tpscount` smallint(2) DEFAULT NULL,
  `pps_4` varchar(60) DEFAULT NULL,
  `pps_5` varchar(60) DEFAULT NULL,
  PRIMARY KEY (`id_prov`,`id_city`,`id_district`,`id`),
  KEY `name_town` (`name_town`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1;

and the query like

SELECT E.`id`, E.`id_prov`, E.`id_city`, E.`id_district`, E.`id_town`,
       B.`name_prov`, C.`name_city`, D.`name_district`, A.`name_town`, E.`tps`,
       E.`urut_xls`, E.`nik`, E.`name`, E.`place_of_birth`, E.`birth_date`, E.`age`, E.`sex`,
       E.`marital_s`, E.`address`, E.`note`
FROM peoples E
JOIN test_prov B ON E.id_prov = B.id
JOIN test_city C ON E.id_city = C.id AND C.id_prov = B.id
JOIN test_district D ON E.id_district = D.id AND D.id_city = C.id AND D.id_prov = B.id
JOIN test_town A ON E.id_town = A.id AND A.id_district = D.id AND A.id_city = C.id AND A.id_prov = B.id
WHERE E.stat_valid = 1 AND E.mark_as_trash = 0

mark_as_trash is a flag column containing only 1 or 0, indicating whether the record has been marked as deleted, and stat_valid is the filtered result value: if it is 1, the person is valid to receive election rights.

I've looked at the EXPLAIN output, but no column is used for an index lookup. I believe that's why the application is so slow at 200K rows. The query above only shows two conditions, but the application can also filter by name, place of birth, birth date, age ranges, and so on.

How can I make this perform better?

Looking for SQL Training Options? [closed]

Posted: 02 Jun 2013 10:43 AM PDT

I am interested in getting an entry level job working with SQL and I'm wondering if anyone has any recommendations for training programs. I am looking for something that would provide a certificate that employers would value. I'm in the age old position of no job without experience, no experience without a job. I've done some online training, but I am interested in doing something a little more in-depth that would give me some kind of credentials, but not to the extent of going through a 2 or 4 year program.

How to "SUM" and "JOIN" in a single query for poll results?

Posted: 02 Jun 2013 10:54 AM PDT

I have a poll database as

CREATE TABLE poll_answers (
  answer_id int(11),
  answer varchar(255),
  poll_id int(11),
  FOREIGN KEY(poll_id) REFERENCES polls(poll_id) ON DELETE CASCADE,
  PRIMARY KEY(answer_id)
) ENGINE=InnoDB;

CREATE TABLE poll_results (
  vote_id int(11),
  user_id int(11),
  answer_id int(11),
  poll_id int(11),
  FOREIGN KEY(answer_id) REFERENCES poll_answers(answer_id) ON DELETE CASCADE,
  FOREIGN KEY(poll_id) REFERENCES polls(poll_id) ON DELETE CASCADE,
  PRIMARY KEY(vote_id)
) ENGINE=InnoDB;

How can I run a query to sum the votes for each answer to get results for the list of answers in the poll?

SELECT ... FROM poll_results ... WHERE poll_results.poll_id = 'x'

answer_id  |  answer  |  SUM of votes  |  % of votes
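A sketch of one possible answer (not from the post), assuming the schema above: group the votes per answer with a LEFT JOIN so answers with zero votes still appear, and compute the percentage against the poll's total.

```sql
SELECT a.answer_id,
       a.answer,
       COUNT(r.vote_id) AS votes,
       100.0 * COUNT(r.vote_id) /
           NULLIF((SELECT COUNT(*) FROM poll_results
                   WHERE poll_id = 1), 0) AS pct_of_votes
FROM poll_answers a
LEFT JOIN poll_results r ON r.answer_id = a.answer_id
WHERE a.poll_id = 1              -- the poll being reported on
GROUP BY a.answer_id, a.answer;
```

The NULLIF guards against division by zero for a poll with no votes yet.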

Linked server available to users without permissions

Posted: 02 Jun 2013 02:08 PM PDT

I have users seeing a linked server which they are not supposed to see.

The linked server is defined so that only I will have access to it, but everybody can see and use it.

I have created the new linked server using the following steps:

  1. Connect SSMS Object Explorer to the SQL Server instance
  2. Expand Server Objects, right click Linked Servers and left-click New Linked Server...
  3. Under General tab choose SQL Server for Server type and write the Server's name
  4. Under Security tab in the upper part click Add, choose "sa" as Local Login, enter Remote User's name and Remote Password
  5. Under Security tab in the lower part (under For a login not defined in the list above, connections will:) choose the first option: Not be made
  6. Click OK and start testing

Now the only person supposed to see the linked server is me ("sa"), but somehow other users can see it and use it.

Note 1: The users who can use the linked server have permissions on the remote server, they are not seeing data that they should not see, they are just able to access it from the linked server when they should not be able to.

Note 2: I am the only sysadmin on both instances.

unloading data from oracle dbf files [duplicate]

Posted: 02 Jun 2013 10:47 AM PDT

This question is an exact duplicate of:

I have a list of Oracle DBF files and I need to restore the information inside these files; there are no control files or anything else, only the DBF files: http://s9.postimg.org/mxmx0g73j/Untitled.jpg

Is there any parsing or recovery tool that can open DBF files and unload the information in them?

Restore oracle database from DBF files [closed]

Posted: 02 Jun 2013 05:40 AM PDT

I have a list of Oracle DBF files and I need to rebuild a database from these files, but I am a bit of a newbie in Oracle DB administration.

http://s9.postimg.org/mxmx0g73j/Untitled.jpg

postgres service suddenly not working after the computer crashed while working

Posted: 02 Jun 2013 09:37 AM PDT

I've been working with Postgres on a Linux server for the last couple of months. Last week, half of our servers crashed, and mine with them. I was in the middle of uploading data to the Postgres database with osm2pgsql when this happened. After we revived the server, Postgres has stopped working.

commands:

service postgresql start (Alternatively, /etc/rc.d/init.d/postgresql start, same thing happens)

Starting postgresql service: [ OK ]

service postgresql status

postmaster dead but pid file exists

Trying to start the postmaster using

su postgres

postmaster -D /var/lib/pgsql/data

yields nothing either.


As for the pid file, /var/run/postmaster.5432.pid exists, and when I remove it then the status changes to "postmaster is stopped", but if I try to start it again:

service postgresql start

Starting postgresql service: [FAILED]

If anyone has any idea what I could do to get it working, where the problem might be, etc., I would be overjoyed to have an answer. What really gets me is that it was working until last week; I even rebooted a backup and nothing seems to have changed. I'm really at my wit's end with this problem!

Unable to create a new listener in oracle 11g

Posted: 02 Jun 2013 12:44 PM PDT

In Net Manager I get an error message, "A valid logfile is required", when I click on Listeners and choose the '+' button, and then it keeps giving me that error when I click on any other tab or button in Net Manager.

Format function returning minutes instead of months?

Posted: 02 Jun 2013 05:56 AM PDT

While exploring the new FORMAT function I have come across the following issue. I would like to know if anyone else has faced a similar problem and found a fix, or whether it is a bug.

SELECT FORMAT(GETDATE(), 'mm/dd/yyyy', 'en-US');
SELECT GETDATE();

Output:

  1. 02/02/2013
  2. 2013-06-02 12:02:54.067
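For the record, this is documented behavior rather than a bug: FORMAT uses .NET format strings, where lowercase `mm` means minutes and uppercase `MM` means months (the output above shows 02 because the time was 12:02). The fix:

```sql
-- MM = month, mm = minute in .NET custom date/time format strings:
SELECT FORMAT(GETDATE(), 'MM/dd/yyyy', 'en-US');
```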

How to import a text file with '|' delimited data to PostgreSQL database?

Posted: 02 Jun 2013 06:19 PM PDT

I have a text file with '|'-delimited data that I want to import into a table in a PostgreSQL database. PgAdmin III only exports CSV files. I converted the file to CSV, but I was still unsuccessful importing the data into the PostgreSQL database. It says an error has occurred:

Extra data after last expected column.
CONTEXT: COPY <file1>, line 1:

What am I doing wrong here?
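There should be no need to convert the file at all: PostgreSQL's COPY (and psql's \copy) accept an arbitrary single-character delimiter. A sketch with hypothetical table and file names:

```sql
-- Server-side; the file must be readable by the postgres server process:
COPY my_table FROM '/path/to/data.txt' WITH DELIMITER '|';

-- Or client-side, from psql, reading a local file:
-- \copy my_table FROM 'data.txt' WITH DELIMITER '|'
```

The "Extra data after last expected column" error usually means the delimiter COPY was told about does not match the one in the file, so every line looks like one giant extra-wide column.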

SSIS - How do I execute a single OLE DB Command for multiple paths?

Posted: 02 Jun 2013 02:54 AM PDT

I have an SSIS package (2008 R2) that I'm working on, and I find myself duplicating the same OLE DB commands repeatedly; whenever there is a change, it's a nightmare to go back through and update all the duplicate OLE DB Command tasks.

Is there a way to execute a single OLE DB command for multiple paths?


Does a SQL Server job run on the server or the local machine?

Posted: 02 Jun 2013 09:38 AM PDT

Does a SQL Server job always run on the server, or will it run in the context of the local machine, similar to a DTS package run from Enterprise Manager on the user's machine?

The job in question calls a DTS package. The package succeeds when run locally; the job fails whether run manually or scheduled.

SQL server 2012 upgrade causing reindexing to take a long time / fill up log file

Posted: 02 Jun 2013 12:55 AM PDT

I have a customer who has upgraded their SQL Server from SQL Server 2005 to SQL Server 2012. According to them, no other changes were made. They have an ERP database which does a full reindex once a week (the SQL just locates every index in the DB and rebuilds it) and it completed normally in 3 hours before the upgrade.

The DB is 178GB. After the upgrade, performance suddenly tanked. They performed a reindex, and it was taking several hours; before it was halfway finished, the log file grew to close to 500 GB. They ran out of disk space, so they dumped the log and did the reindex in pieces.

Would an upgrade to SQL Server 2012 change how indexes need to be rebuilt? Did something about the upgrade change the indexes? Do upgrades like this change indexes? We are going to monitor this on the next go around and see if it drops back to 3 hours or if it takes a huge amount of time and space again. Any imaginative thoughts would be appreciated as well, since I'm getting a lot of this information second hand from my customer :)

How to see what is cached in memory in SQL server 2008?

Posted: 02 Jun 2013 11:10 AM PDT

Is there a way to find out what is cached in SQL Server 2008 R2? I have found the following nice article: http://blog.sqlauthority.com/2010/06/17/sql-server-data-pages-in-buffer-pool-data-stored-in-memory-cache . However, I would like to know how much data (e.g. as a percentage and in KB) is cached for each table and index. Is there a simple way to obtain this?
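A sketch of the usual approach (standard DMVs, but verify the joins against your instance): `sys.dm_os_buffer_descriptors` exposes one row per 8 KB page in the buffer pool, which can be aggregated per table and index for the current database.

```sql
SELECT o.name AS table_name,
       i.name AS index_name,
       COUNT(*) * 8 AS cached_kb,
       100.0 * COUNT(*) / SUM(COUNT(*)) OVER () AS pct_of_db_cache
FROM sys.dm_os_buffer_descriptors AS bd
JOIN sys.allocation_units AS au
     ON au.allocation_unit_id = bd.allocation_unit_id
JOIN sys.partitions AS p
     ON (au.type IN (1, 3) AND au.container_id = p.hobt_id)
     OR (au.type = 2      AND au.container_id = p.partition_id)
JOIN sys.objects AS o ON o.object_id = p.object_id
LEFT JOIN sys.indexes AS i
     ON i.object_id = p.object_id AND i.index_id = p.index_id
WHERE bd.database_id = DB_ID()
  AND o.is_ms_shipped = 0
GROUP BY o.name, i.name
ORDER BY cached_kb DESC;
```

Run it in the database of interest; pages are counted, so `cached_kb` is pages × 8.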

Re-enable Windows Authentication in SQL Server

Posted: 02 Jun 2013 05:38 PM PDT

A former employee disabled Windows Authentication on our server. Now I'm not able to access SQL Server even though I have Administrator access to the machine. I need to reset the sa password.

I tried logging in using single-user mode, as Remus described, but I get the following error:

Login failed for user 'SERVER\Administrator'.
Reason: The account is disabled.
(Microsoft SQL Server, Error: 18470)
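
If the instance can be started in single-user mode (the `-m` startup option) and a connection is possible under some other sysadmin login, a sketch of the repair might look like this (the Windows login name is taken from the error message above; the password is a placeholder):

```sql
-- Run from sqlcmd while the instance is in single-user mode.
-- Re-enable the disabled Windows login named in the error message:
ALTER LOGIN [SERVER\Administrator] ENABLE;

-- Re-enable sa and set a new password (placeholder shown):
ALTER LOGIN sa ENABLE;
ALTER LOGIN sa WITH PASSWORD = 'NewStrongPassword!1';
```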

Configuring PostgreSQL for read performance

Posted: 02 Jun 2013 12:52 PM PDT

Our system writes a lot of data (a kind of Big Data system). The write performance is good enough for our needs, but the read performance is really too slow.

The primary key (constraint) structure is similar for all our tables: timestamp(Timestamp) ; index(smallint) ; key(integer)

A table can have millions of rows, even billions of rows, and a read request is usually for a specific period (timestamp / index) and tag. It's common to have a query that returns around 200k rows. Currently we can read about 15k rows per second, but we need to be 10 times faster. Is this possible, and if so, how?

Note: PostgreSQL is packaged with our software, so the hardware is different from one client to another.

[Edit] Added details below, performance was better for this test because I don't have access to the real setup right now. I will update as soon as I can access the setup.

[Edit2] Applied dezso's suggestions; see the configuration changes below and the specs of the server used for testing. Yes, it's a VM used for testing; the VM host is a Server 2008 R2 x64 with 24.0 GB of RAM.

Server Spec (Virtual Machine VMWare)

Server 2008 R2 x64
2.00 GB of memory
Intel Xeon W3520 @ 2.67GHz (2 cores)

postgresql.conf optimisations

shared_buffers = 512MB               (default: 32MB)
effective_cache_size = 1024MB        (default: 128MB)
checkpoint_segments = 32             (default: 3)
checkpoint_completion_target = 0.9   (default: 0.5)
default_statistics_target = 1000     (default: 100)
work_mem = 100MB                     (default: 1MB)
maintenance_work_mem = 256MB         (default: 16MB)

Table Definition

CREATE TABLE "AnalogTransition"
(
  "KeyTag" integer NOT NULL,
  "Timestamp" timestamp with time zone NOT NULL,
  "TimestampQuality" smallint,
  "TimestampIndex" smallint NOT NULL,
  "Value" numeric,
  "Quality" boolean,
  "QualityFlags" smallint,
  "UpdateTimestamp" timestamp without time zone, -- (UTC)
  CONSTRAINT "PK_AnalogTransition" PRIMARY KEY ("Timestamp", "TimestampIndex", "KeyTag"),
  CONSTRAINT "FK_AnalogTransition_Tag" FOREIGN KEY ("KeyTag")
      REFERENCES "Tag" ("Key") MATCH SIMPLE
      ON UPDATE NO ACTION ON DELETE NO ACTION
)
WITH (
  OIDS=FALSE,
  autovacuum_enabled=true
);

Query

The query takes about 30 seconds to execute in pgAdmin3, but we would like to have the same result in under 5 seconds if possible.

SELECT
    "AnalogTransition"."KeyTag",
    "AnalogTransition"."Timestamp" AT TIME ZONE 'UTC',
    "AnalogTransition"."TimestampQuality",
    "AnalogTransition"."TimestampIndex",
    "AnalogTransition"."Value",
    "AnalogTransition"."Quality",
    "AnalogTransition"."QualityFlags",
    "AnalogTransition"."UpdateTimestamp"
FROM "AnalogTransition"
WHERE "AnalogTransition"."Timestamp" >= '2013-05-16 00:00:00.000'
  AND "AnalogTransition"."Timestamp" <= '2013-05-17 00:00:00.00'
  AND ("AnalogTransition"."KeyTag" = 56 OR "AnalogTransition"."KeyTag" = 57
    OR "AnalogTransition"."KeyTag" = 58 OR "AnalogTransition"."KeyTag" = 59
    OR "AnalogTransition"."KeyTag" = 60)
ORDER BY "AnalogTransition"."Timestamp" DESC, "AnalogTransition"."TimestampIndex" DESC
LIMIT 500000;

Explain (Edit2: Updated)

"Limit  (cost=0.00..125668.31 rows=500000 width=33) (actual time=2.193..3241.319 rows=500000 loops=1)"
"  Buffers: shared hit=190147"
"  ->  Index Scan Backward using "PK_AnalogTransition" on "AnalogTransition"  (cost=0.00..389244.53 rows=1548698 width=33) (actual time=2.187..1893.283 rows=500000 loops=1)"
"        Index Cond: (("Timestamp" >= '2013-05-16 01:00:00-04'::timestamp with time zone) AND ("Timestamp" <= '2013-05-16 15:00:00-04'::timestamp with time zone))"
"        Filter: (("KeyTag" = 56) OR ("KeyTag" = 57) OR ("KeyTag" = 58) OR ("KeyTag" = 59) OR ("KeyTag" = 60))"
"        Buffers: shared hit=190147"
"Total runtime: 3863.028 ms"

In my latest test, it took 7 minutes to select the data! See below.

Explain (Edit3)

"Limit  (cost=0.00..313554.08 rows=250001 width=35) (actual time=0.040..410721.033 rows=250001 loops=1)"
"  ->  Index Scan using "PK_AnalogTransition" on "AnalogTransition"  (cost=0.00..971400.46 rows=774511 width=35) (actual time=0.037..410088.960 rows=250001 loops=1)"
"        Index Cond: (("Timestamp" >= '2013-05-22 20:00:00-04'::timestamp with time zone) AND ("Timestamp" <= '2013-05-24 20:00:00-04'::timestamp with time zone) AND ("KeyTag" = 16))"
"Total runtime: 411044.175 ms"
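
For the access pattern shown above (an equality filter on "KeyTag" plus a "Timestamp" range), the primary key leads with "Timestamp", so the KeyTag condition ends up as a Filter rather than an Index Cond. A hedged sketch of an index shaped to match the query (the index name is hypothetical):

```sql
CREATE INDEX "idx_AnalogTransition_KeyTag_Timestamp"
    ON "AnalogTransition" ("KeyTag", "Timestamp" DESC, "TimestampIndex" DESC);
```

With KeyTag leading, each tag's rows for the requested period are contiguous in the index, which may cut the number of buffers touched for queries like the one above.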

Thanks a lot for help!!

Mysql Connection Pooling Similar To PostgreSQL's PgBouncer

Posted: 02 Jun 2013 04:32 PM PDT

I'm looking for a pooling solution for MySQL that is independent of the language. I am using PgBouncer as an example because it:

  1. Resides on the server the database is on
  2. Just change the connection string to use the pooling connector
  3. No modification to the code required

Is there something similar in MySQL, where I can use pooling by installing a 3rd-party connector and just connecting through that port?

What are the different ways to keep track of active and archived data?

Posted: 02 Jun 2013 07:23 PM PDT

I'm looking for different ways to keep track of both active and archived data so I can weigh their pros and cons.

The system: I have a computer with a database on it. The database has several tables; one contains a list of users that can use the computer, and several are for auditing (user 1 did this, user 2 did that, etc.). This database is a slave of a master database, where a Content Management System is used to, say, add a new user and see reports on what each user did.

Example: As stated above, I have a table (let's call it users) that keeps track of all the users that are allowed to use the computer. As time goes by, users will be added and removed. The problem is that the audit tables keep track of a user ID, so if the user is removed I lose the user information because the rows can't be joined. One idea I had was to use MySQL triggers: when a user is added, an insert trigger fires and inserts a copy of the data into an 'archived' user table (let's call it users_archive). That way the computer can use users to determine whether a user has permission, and reports can use users_archive.
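
A hedged sketch of the trigger idea described above; the table and column names (users, users_archive, id, name) are illustrative, not taken from an actual schema:

```sql
-- Copy each newly added user into the archive table as it is inserted.
DELIMITER //
CREATE TRIGGER users_after_insert
AFTER INSERT ON users
FOR EACH ROW
BEGIN
    INSERT INTO users_archive (id, name)
    VALUES (NEW.id, NEW.name);
END//
DELIMITER ;
```

Rows in users_archive then survive deletes from users, so the audit tables can always join against the archive.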

This seems like the simplest way to do it, but I can't find anything via a Google search on whether there are other ways to do something like this.

Database design - do I need another table?

Posted: 02 Jun 2013 08:23 PM PDT

I am trying to make a database that follows a form the company uses. When a client walks in, members of staff fill in a form and the data is recorded. The form on paper is 10 pages long. The first time a client is seen, the entire form is filled in and the client gets a clientID.

I have split the form into sections that make sense, like accommodation and employment. I know I can link these tables together with the clientID. That's the simple part.

Now, when a client returns, the form comes out again, but this time only certain parts are filled in, whatever the client's needs are. The records for most parts don't need updating, but a new record needs inserting. What would be the best way to handle this?

So at the moment I have, for example, a table called client with an id and name, another table called accommodation with clientid and address, and another table employment with clientid and employer.

But how do I go about it when a client comes in to let us know he has a new employer? I can't update the current record, as the history is needed; I would need to add a new record for the client.

Would this mean I would have to add a lookup table for each of my current tables?
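
One way to sketch the history idea described above, with hypothetical names (the DBMS is not stated in the question; this uses MySQL-flavored syntax):

```sql
-- Keep a history row per visit instead of updating in place.
CREATE TABLE employment (
    employment_id INT AUTO_INCREMENT PRIMARY KEY,
    client_id     INT NOT NULL,
    employer      VARCHAR(100) NOT NULL,
    recorded_on   DATE NOT NULL,
    FOREIGN KEY (client_id) REFERENCES client(id)
);

-- The current employer is simply the most recent row per client:
SELECT e.client_id, e.employer
FROM employment AS e
JOIN (SELECT client_id, MAX(recorded_on) AS latest
      FROM employment
      GROUP BY client_id) AS m
  ON m.client_id = e.client_id AND m.latest = e.recorded_on;
```

A new employer then means a plain INSERT, and no separate lookup table is required: the date column distinguishes current from historical rows.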

event scheduler not called

Posted: 02 Jun 2013 02:17 PM PDT

I created an event that looks like this:

mysql> show create event event1\G
*************************** 1. row ***************************
               Event: event1
            sql_mode:
           time_zone: SYSTEM
        Create Event: CREATE DEFINER=`root`@`localhost` EVENT `event1`
                      ON SCHEDULE EVERY 1 MONTH STARTS '2013-02-02 00:00:00'
                      ON COMPLETION NOT PRESERVE ENABLE
                      DO BEGIN update samp set col1 = col1 + 1; END
character_set_client: utf8
collation_connection: utf8_general_ci
  Database Collation: latin1_swedish_ci
1 row in set (0.00 sec)

This event did not run on the 1st of the month. So I ran SHOW PROCESSLIST to check whether the scheduler is running; it showed the following output:

mysql> show processlist;
+-----+-----------------+-----------+------+---------+------+-----------------------------+------+
| Id  | User            | Host      | db   | Command | Time | State                       | Info |
+-----+-----------------+-----------+------+---------+------+-----------------------------+------+
| 136 | event_scheduler | localhost | NULL | Daemon  | 1855 | Waiting for next activation | NULL |
+-----+-----------------+-----------+------+---------+------+-----------------------------+------+

So does NULL in the db column mean that no database is assigned to it?

Please help me to solve it.

How can I improve my table design for different types of an entity?

Posted: 02 Jun 2013 06:23 PM PDT

Consider an accounting system as an example. I have an entity called Client. Clients can be of different types, with different fields applicable to each type. I am considering creating a separate table for each type of Client, each with the fields applicable to that type, plus one master table referencing all of them that holds the fields applicable to all types.

Currently, I come up with the following design:

[schema diagram omitted]

But I don't think my design is efficient enough (or even correct and free of errors). What would you suggest? Also, if this is important in any way, I am planning to use MariaDB.
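
The master-table-plus-subtype-tables approach described above is commonly called class-table inheritance; a hedged sketch with hypothetical names and types:

```sql
-- Master table: fields common to every client type.
CREATE TABLE client (
    client_id   INT AUTO_INCREMENT PRIMARY KEY,
    client_type ENUM('person', 'company') NOT NULL,
    name        VARCHAR(100) NOT NULL
);

-- One table per type, sharing the master key.
CREATE TABLE client_person (
    client_id  INT PRIMARY KEY,
    birth_date DATE,
    FOREIGN KEY (client_id) REFERENCES client(client_id)
);

CREATE TABLE client_company (
    client_id  INT PRIMARY KEY,
    tax_number VARCHAR(32),
    FOREIGN KEY (client_id) REFERENCES client(client_id)
);
```

Queries that only need common fields touch the master table alone; type-specific queries join the master to exactly one subtype table.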

FETCH API_CURSOR causing open transaction in tempDB

Posted: 02 Jun 2013 04:00 PM PDT

A SELECT statement run from Oracle 11gR1 against SQL Server 2005 using Gateway leaves an open transaction in tempdb. The user is a datareader in database XYZ. I'm not sure why the open transaction is in tempdb when the statement being run is a SELECT.

Has anyone seen this issue before?

Thanks in advance,
Sekhar

Ubuntu 12, PostgreSQL 9.2, PostGIS 2.0

Posted: 02 Jun 2013 03:17 AM PDT

As of today, March 4th 2013, can PostGIS 2.0 be installed with PostgreSQL 9.2?

I checked their website and to my understanding it is not possible...

I hope that's not the case. Can anyone point me to instructions for installing PostGIS 2.0 on PostgreSQL 9.2 on Ubuntu?

Cannot find MySQL Server Config Wizard for MySQL 5.6 build

Posted: 02 Jun 2013 01:17 PM PDT

Not sure if this is the right Stack Exchange site for this but...

I am trying to reconfigure my MySQL instance but cannot find the config wizard. I looked here: http://dev.mysql.com/doc/refman/5.0/en/mysql-config-wizard-starting.html

As one user pointed out, the config .exe file is not included in versions above 5.5. Does anyone know how to reconfigure the MySQL server instance?

Viewing MySQL Account Resource Limits

Posted: 02 Jun 2013 02:16 AM PDT

Is there any way of viewing an account's remaining allocated resources? I set up an account that's allowed 7200 queries an hour. At any point, could I then run a query to find out how many queries it is still allowed?

MySQL must be storing this information somewhere, since FLUSH USER_RESOURCES; resets the counters. However, I tried a few variants such as SHOW USER_RESOURCES and they don't seem to display anything. I've also hunted around the information_schema and mysql tables.

Is it just not possible to retrieve that information?
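
For reference, the per-hour limit described above is set with documented GRANT syntax like this (the account name is hypothetical):

```sql
GRANT USAGE ON *.* TO 'app_user'@'localhost'
    WITH MAX_QUERIES_PER_HOUR 7200;
```

The grant itself (though not the live counter) can be inspected afterwards with SHOW GRANTS FOR 'app_user'@'localhost'.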

[SQL Server] get lookup data

[SQL Server] get lookup data


get lookup data

Posted: 02 Jun 2013 01:15 PM PDT

I have two tables, let's say Table 1 and Table 2, as below.

Table 1:

    Col1  Col2
    1     _A
    2     _B
    3     1
    4     2

Table 2:

    ID    Val
    _A    A
    _B    B
    _C    C
    1     AA
    2     BB
    3     CC

I need an output from Table 1 and Table 2 as below:

    1  _A  A
    2  _B  B
    3  1   AA
    4  2   BB

I am very new to SQL Server. Can you please help me get this output? Thank you in advance.
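
The lookup described in the question maps Table 1's Col2 onto Table 2's ID; a hedged T-SQL sketch (table and column names as given in the question, assuming Col2 and ID are compatible character types):

```sql
SELECT t1.Col1, t1.Col2, t2.Val
FROM Table1 AS t1
JOIN Table2 AS t2
  ON t2.ID = t1.Col2
ORDER BY t1.Col1;
```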

[SQL 2012] mysql connector ODBC 5.2 not showing on destination drop down

[SQL 2012] mysql connector ODBC 5.2 not showing on destination drop down


mysql connector ODBC 5.2 not showing on destination drop down

Posted: 01 Jun 2013 08:38 PM PDT

I tried to transfer data from MS SQL Server 2012 to MySQL using SQL Server Export, but could not find MySQL Connector/ODBC 5.2 in the destination dropdown. I already have MySQL Connector/ODBC 5.2 installed. Any ideas on how to make it work? Thanks.

DQSInstaller issue ...

Posted: 01 Jun 2013 08:09 AM PDT

(If this entry is not in the correct area, please move it - and sorry.)

The DQS Installer log is below. I saw elsewhere to make sure that Named Pipes and TCP/IP were enabled, so I made sure of this (changed one or two settings) and restarted SQL Server 2012 (both the service and the application). I am still getting the same error, though.

I am experienced with SQL Server, but not overly so. I need to study DQS/MDS for the 70-463 certification test, and I would really like to get this properly installed on my home computer. The DQS/MDS files were installed initially (a few months back), but I never ran the DQS Installer until now.

One note: I am trying to install this on my personal computer (plenty of power to run it). I do not think this is limited to Windows Server, but I could be wrong.

Also, note the failure to connect: "Login failed for user 'FrogRoom-PC\Frog Room'". There is no user by that name - that is the computer/SQL Server instance. So I am not sure why a user is not getting picked up here.

Your help is very much appreciated.

Scott MacDonald
Tucson, AZ

Microsoft (R) DQS Installer Command Line Tool
Copyright (c) 2012 Microsoft. All rights reserved.

[5/31/2013 10:19:47 PM] DQS Installer started. Installation log will be written to c:\Program Files\Microsoft SQL Server\MSSQL11.FROGOFFICE\MSSQL\Log\DQS_install.log
[5/31/2013 10:19:47 PM] Setting the collation to default value: SQL_Latin1_General_CP1_CI_AS
[5/31/2013 10:19:47 PM] Failed to connect to local instance 'FROGROOM-PC\FROGOFFICE'.
System.Data.SqlClient.SqlException (0x80131904): Login failed for user 'FrogRoom-PC\Frog Room'.
   at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
   at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning()
   at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
   at System.Data.SqlClient.SqlInternalConnectionTds.CompleteLogin(Boolean enlistOK)
   at System.Data.SqlClient.SqlInternalConnectionTds.LoginNoFailover(ServerInfo serverInfo, String newPassword, Boolean redirectedUserInstance, SqlConnection owningObject, SqlConnectionString connectionOptions, TimeoutTimer timeout)
   at System.Data.SqlClient.SqlInternalConnectionTds.OpenLoginEnlist(SqlConnection owningObject, TimeoutTimer timeout, SqlConnectionString connectionOptions, String newPassword, Boolean redirectedUserInstance)
   at System.Data.SqlClient.SqlInternalConnectionTds..ctor(DbConnectionPoolIdentity identity, SqlConnectionString connectionOptions, Object providerInfo, String newPassword, SqlConnection owningObject, Boolean redirectedUserInstance)
   at System.Data.SqlClient.SqlConnectionFactory.CreateConnection(DbConnectionOptions options, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningConnection)
   at System.Data.ProviderBase.DbConnectionFactory.CreatePooledConnection(DbConnection owningConnection, DbConnectionPool pool, DbConnectionOptions options)
   at System.Data.ProviderBase.DbConnectionPool.CreateObject(DbConnection owningObject)
   at System.Data.ProviderBase.DbConnectionPool.UserCreateRequest(DbConnection owningObject)
   at System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject)
   at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
   at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
   at System.Data.SqlClient.SqlConnection.Open()
   at Microsoft.Ssdqs.DqsInstaller.Utilities.InstallerUtility.TestConnection(String instance, Exception& exception)
