Tuesday, October 8, 2013

[SQL Server 2008 issues] Transferring sql logins and whilst retaining User Mappings


Transferring sql logins and whilst retaining User Mappings

Posted: 07 Oct 2013 01:16 AM PDT

Hi, I need to re-create the logins from my old server on my new server. I want a method that transfers each user's password but also keeps their current database mappings and permissions. I keep getting forwarded to the following link, http://support.microsoft.com/kb/918992, which creates the login OK (I still have to test whether it keeps the current password), but it does not map the user's database permissions, so I am having to go in and add those manually, which is a pain. Is there a way to script this task?
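The KB918992 script (sp_help_revlogin) only carries server-level logins and their password hashes; database users, role memberships, and object permissions live inside each database and have to be scripted separately. A rough sketch of generating that script (run in each user database on the old server; extend it for object-level GRANTs as needed):

```sql
-- Hedged sketch: generate CREATE USER and role-membership statements for every
-- database user that maps to a login (principal_id > 4 skips dbo, guest, sys,
-- and INFORMATION_SCHEMA). Save the output and replay it on the new server
-- after sp_help_revlogin has recreated the logins.
SELECT 'CREATE USER ' + QUOTENAME(dp.name)
     + ' FOR LOGIN ' + QUOTENAME(dp.name) + ';'
FROM sys.database_principals AS dp
WHERE dp.type IN ('S', 'U', 'G')          -- SQL users, Windows users/groups
  AND dp.principal_id > 4;

SELECT 'EXEC sp_addrolemember ' + QUOTENAME(r.name, '''')
     + ', ' + QUOTENAME(m.name, '''') + ';'
FROM sys.database_role_members AS rm
JOIN sys.database_principals AS r ON r.principal_id = rm.role_principal_id
JOIN sys.database_principals AS m ON m.principal_id = rm.member_principal_id
WHERE m.principal_id > 4;
```

Note that if you restore the databases themselves from the old server, the users and permissions travel with them, and logins recreated via sp_help_revlogin re-link automatically because the SIDs match.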

error when add and update in sp

Posted: 07 Oct 2013 07:26 AM PDT

CREATE PROCEDURE w AS
ALTER TABLE t ADD x char(1)
UPDATE t SET x = 1

Even when SQL Server lets me create that stored procedure (it will, if I create it while x already exists), there is an error on the UPDATE statement when it runs, because column x doesn't exist at compile time. What's the conventional way to deal with this? It must come up all the time. I can work around it by putting the UPDATE inside EXEC; is there another/better way? Thanks
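One conventional approach, as the post hints, is to defer the statement that references the new column so it is compiled only at run time; a sketch:

```sql
CREATE PROCEDURE w
AS
BEGIN
    ALTER TABLE t ADD x char(1);
    -- Deferred name resolution covers missing *tables* but not missing
    -- *columns*, so the direct UPDATE fails at compile time. Dynamic SQL
    -- is compiled only when executed, by which point the column exists:
    EXEC sp_executesql N'UPDATE t SET x = ''1'';';
END
```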

Split character sting - Selected Part

Posted: 07 Oct 2013 08:32 AM PDT

I am looking for help with splitting a string and selecting only the nth occurrence, e.g.:

@String = '123,0.934,98,928.34,987.45'

The string above contains 5 values delimited by commas. I need to display only the 3rd value (98), or only the 5th value (987.45). I could find functions that display all the values together, but I want only the nth occurrence. I appreciate your kind help.
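One way to pull out just the nth element on SQL Server 2008 is to convert the list to XML and index into it; a sketch, assuming the values contain no characters that need XML escaping:

```sql
DECLARE @String varchar(100) = '123,0.934,98,928.34,987.45';
DECLARE @n int = 3;   -- which element to return

-- Turn 'a,b,c' into <x>a</x><x>b</x><x>c</x>, then take the nth node:
DECLARE @x xml =
    CAST('<x>' + REPLACE(@String, ',', '</x><x>') + '</x>' AS xml);

SELECT @x.value('(/x[position() = sql:variable("@n")])[1]', 'varchar(50)')
       AS nth_value;   -- '98' for @n = 3, '987.45' for @n = 5
```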

Replication Log Reader Agent error

Posted: 07 Oct 2013 08:25 AM PDT

I have a replication that was working fine, then a reboot occurred, and now I'm getting these error messages from Replication Monitor. The funny thing is I don't know why it reports the user 'NA\MajAdmins', because we are using the account 'NA\MajService' for the replication jobs and as the account used to set up replication on the publisher and subscriber servers. Publisher and Subscriber are on two different servers. Help is much appreciated! Thanks.

Error messages:
The process could not execute 'sp_replcmds' on 'Server1\MAJ'. (Source: MSSQL_REPL, Error number: MSSQL_REPL20011) Get help: http://help/MSSQL_REPL20011
Could not obtain information about Windows NT group/user 'NA\MajAdmins', error code 0xffff0002. (Source: MSSQLServer, Error number: 15404) Get help: http://help/15404
The process could not execute 'sp_replcmds' on 'Server1\MAJ'. (Source: MSSQL_REPL, Error number: MSSQL_REPL22037) Get help: http://help/MSSQL_REPL22037
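A likely explanation for the surprise account: the Log Reader Agent runs sp_replcmds in the security context of the published database's *owner*, not the agent's service account. Error 15404 suggests that owner ('NA\MajAdmins') can no longer be resolved against the domain. A hedged sketch of the check and fix (database name hypothetical):

```sql
-- See who currently owns each database:
SELECT name, SUSER_SNAME(owner_sid) AS owner
FROM sys.databases;

-- If the published database is owned by the unresolvable account,
-- reassign ownership to a valid login such as sa:
ALTER AUTHORIZATION ON DATABASE::PublishedDb TO sa;
```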

sparse column - how it works

Posted: 23 Jan 2011 10:21 PM PST

I have read a few articles on sparse columns. I found that when you store NULL values in sparse columns they need no storage space, but when you store a non-null value in a sparse column it needs an additional 4 bytes. Can anyone tell me WHY? It may be something related to a bitmap, but I want to know more. If anyone has the answer, please let me know.

Calling Netbackup from SQL Job

Posted: 05 Oct 2012 02:29 AM PDT

Hi there, I am trying to initiate a NetBackup task from a SQL job using the following CmdExec code:

"C:\Program Files\VERITAS\NetBackup\bin\dbbackex.exe" -f "C:\Program Files\Veritas\NetBackup\DbExt\MsSql\<Filename.bch>" -p <profilename> -u <username> -pw <password> -s <server> -np

If I run it from a Command Prompt window it works fine; however, if I copy the exact same code into a SQL job and use the job type 'Operating System (CmdExec)', it fails to run. All I get in the job history is:

"Executed as user: Domain\Username. The step did not generate any output. Process Exit Code 2. The step failed."

I have managed to get this to work on another almost identical server using the same accounts and everything, but for some reason it won't run on the one I'm currently using. Any suggestions? Anything that needs to be enabled on the server, perhaps? Thanks, Matt

select joining tables without any where clause

Posted: 07 Oct 2013 06:57 AM PDT

Could someone decipher this query that was handed to me? Primarily, how are they joining OeOrders with DOeCategories? I can't understand how you can join tables this way. Shouldn't there be a WHERE clause somewhere?[code="sql"]
SELECT  IP.SourceID, IP.FacilityID, IP.AccountNumber, IP.UnitNumber,
        IP.Name AS PatientName, IP.VisitID, IP.PatientID, IP.AdmitDateTime,
        IP.DischargeDateTime, IP.BirthDateTime, IP.ErServiceDateTime, IP.PtStatus,
        OE.OrderID, OE.OrderNumber, DOP.[Name] AS [OrderName], OE.OrderDateTime,
        OMC.Interface AS CategoryInterface,
        OE2.OrdSourceID, /* Whether CPOE was used */
        CASE WHEN DARP.OeDefaultSourceID IS NOT NULL THEN 'Y' ELSE NULL END AS CPOE
INTO    dbo.#TotalOrders
FROM    ( dbo.OeOrders AS OE
          INNER JOIN dbo.DOeCategories AS OMC
                  ON OE.SourceID = OMC.SourceID
                 AND OE.Category = OMC.CategoryID
                 AND OE.OrderDateTime BETWEEN @FromDate AND @ThruDate
                 AND ( OMC.Interface IN ( SELECT ParameterValue FROM dbo.#MedInterface )
                    OR OMC.Interface IN ( SELECT ParameterValue FROM dbo.#LabInterface )
                    OR OMC.Interface IN ( SELECT ParameterValue FROM dbo.#RadInterface ) )
                 AND OE.[Status] NOT IN ( 'CANC', 'UNVER' )
          INNER JOIN dbo.DOeProcs AS DOP
                  ON OE.SourceID = DOP.SourceID
                 AND OE.Category = DOP.CategoryID
                 AND OE.OrderedProcedure = DOP.ProcedureID
          INNER JOIN dbo.OeOrders2 AS OE2
                  ON OE.SourceID = OE2.SourceID
                 AND OE.OrderID = OE2.OrderID
          LEFT OUTER JOIN ( dbo.DrArraAppReportParameters AS DARP
                            INNER JOIN dbo.mt_fn_2014_MG_NPRApplicationSourceIDs(@EnvironmentID, 'DR') AS DRSI
                                    ON DARP.SourceID = DRSI.DatabaseSourceID )
                  ON OE2.OrdSourceID = DARP.OeDefaultSourceID )
        INNER JOIN dbo.mt_fn_2014_CoreMenuInitialPopulation(
                       @BeginDate, @EndDate,
                       NULL, -- Indicates do not use Admit or Discharge DateTime
                       CASE WHEN UPPER(@EDMethod) = 'A' THEN 'IN_ER'  -- All ED Visits Method
                            WHEN UPPER(@EDMethod) = 'O' THEN 'IN_INO' -- Observation Services Method
                            ELSE 'IN_ER' END,
                       @EnvironmentID) AS IP
                ON IP.SourceID = OE.SourceID
               AND IP.VisitID = OE.VisitID[/code]
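On the missing WHERE clause: for INNER JOINs, extra filter predicates placed in the ON clause (like the date range and status checks above) are logically equivalent to putting them in a WHERE clause, so the query is valid without one. A minimal illustration with hypothetical tables:

```sql
-- These two produce the same rows for an INNER JOIN:
SELECT o.OrderID
FROM dbo.Orders AS o
JOIN dbo.Categories AS c
  ON o.CategoryID = c.CategoryID
 AND o.OrderDate >= '20130101';      -- filter in ON

SELECT o.OrderID
FROM dbo.Orders AS o
JOIN dbo.Categories AS c
  ON o.CategoryID = c.CategoryID
WHERE o.OrderDate >= '20130101';     -- filter in WHERE

-- For OUTER joins the placement matters: a predicate in ON is applied
-- before unmatched rows are preserved; in WHERE it is applied after,
-- effectively turning the outer join back into an inner join.
```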

SQL Server Agent Performance Issue

Posted: 07 Oct 2013 02:38 AM PDT

Hello everyone, I am facing an issue with SQL Server Agent. I scheduled one job containing a single T-SQL step. If I execute that T-SQL in SSMS it takes only 12 minutes to run, but the same T-SQL script scheduled from SQL Server Agent takes 6 hours to finish. I restarted SQL Server Agent once and re-ran the job, but it still takes the same time. If anybody knows this issue, please help me resolve it.
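A common cause of this exact symptom (fast in SSMS, slow from Agent) is different session SET options, typically ARITHABORT: SSMS connects with it ON, while Agent and most ODBC clients default to OFF, so the two contexts can compile and cache different plans for the same statement. A quick way to test the theory, sketched with a hypothetical procedure name:

```sql
-- Reproduce the Agent environment inside SSMS; if the query now runs slowly
-- here too, the problem is a settings-dependent cached plan (often combined
-- with parameter sniffing), not Agent itself:
SET ARITHABORT OFF;
-- EXEC dbo.YourSlowProcedureHere;   -- hypothetical name: run the job's T-SQL
```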

Rounding issue

Posted: 07 Oct 2013 05:11 AM PDT

Hello, I need to convert the following calculation. The 2 and 13 are sums from other columns that I need to divide. Not sure why this isn't working, but here is what I have to do. This should be the end result: 2/13 = 15.385%. However, I keep getting 2/13 = 0.1538461. Any help would be greatly appreciated!
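Two things are going on: integer division (2/13 with two integer operands truncates to 0) and scaling to a percentage. Making one operand a decimal first, then multiplying by 100 and rounding, gives the desired figure; a sketch using the literal sums:

```sql
SELECT 2 / 13;                                          -- 0, integer division
SELECT 100.0 * 2 / 13;                                  -- 15.384615..., decimal operand forces decimal math
SELECT CAST(ROUND(100.0 * 2 / 13, 3) AS decimal(9,3));  -- 15.385
```

Append the '%' sign in the presentation layer, or concatenate it after converting to varchar.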

High Latch Waits/Sec

Posted: 07 Oct 2013 03:24 AM PDT

Hi, one of my servers is running SQL Server 2008 R2, and while monitoring perfmon I can see high values for Latch Waits/sec. The value fluctuates consistently from 20 to 100. I read somewhere that a value above 10 is considered a problem. How do I find where these latches come from? I want to find which queries running in the database cause these high latches, and I am unable to find good documentation on troubleshooting this. Any help regarding this is highly appreciated. Thanks in advance!
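The latch DMVs are the usual starting point: the latch class tells you what kind of structure is contended (BUFFER classes point at data-page contention, which you can then tie back to sessions and queries via the waiting-tasks view). A sketch:

```sql
-- Cumulative latch waits by class since the last restart:
SELECT latch_class, waiting_requests_count, wait_time_ms
FROM sys.dm_os_latch_stats
ORDER BY wait_time_ms DESC;

-- Which sessions are waiting on latches right now, and on what resource
-- (resource_description identifies the page for PAGELATCH waits):
SELECT session_id, wait_type, wait_duration_ms, resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'LATCH%' OR wait_type LIKE 'PAGELATCH%';
```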

Sharepoint Slow, Queries Recompiling everytime

Posted: 07 Oct 2013 12:39 AM PDT

We have a SharePoint 2010 server running in native mode; the backend server is SQL 2008 R2 RTM. We recently changed the credentials a report was using to connect to SSAS. After that change we have been getting timeouts regularly for the report. We changed the credentials back, but the timeouts persist. When I run sp_WhoIsActive I see queries that appear to be recompiling, blocking the SELECT queries. [IMG]http://i44.tinypic.com/f07v9t.jpg[/IMG] The stored procs do not have the WITH RECOMPILE option in them. The CREATE statements run for more than 2 minutes, which exceeds the SharePoint timeout. I found this reference - [url=http://support.microsoft.com/kb/2691331]http://support.microsoft.com/kb/2691331[/url] - and based on it I redeployed the entire solution; however, I'm still seeing these problems.

Using Execute as and encryption in a stored procedure

Posted: 06 Oct 2013 08:47 PM PDT

Hi experts, I am searching for the correct syntax: I need to create a stored procedure with EXECUTE AS along with encryption. How can I use both? My main motive is to create the proc to execute as a given user while at the same time encrypting it, so other users cannot see the code. The query below gives errors; kindly, someone help.

Create procedure testproc with execute as 'user' and with encryption as truncate table sometable

Thanks, Mj
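The error is purely syntactic: procedure options form a comma-separated list after a single WITH; there is no AND and no second WITH. A sketch of the corrected form, using the names from the post:

```sql
CREATE PROCEDURE testproc
WITH ENCRYPTION, EXECUTE AS 'user'   -- one WITH, options separated by a comma
AS
    TRUNCATE TABLE sometable;
```

EXECUTE AS is useful here because TRUNCATE TABLE requires ALTER permission on the table, which callers of the procedure need not hold themselves.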

How to register SP automatically on the database.

Posted: 06 Oct 2013 10:27 PM PDT

Hi, I am using SQL Server 2008 R2. In my organisation we restore the database daily, and we need to produce the daily reports from this newly restored database. Currently, after restoring the database, we register (create) all the stored procedures on the server and then execute them. My requirement is that once the database is restored, all the SPs should be created automatically on the restored database, or any other alternative to achieve this. Please help me with how to do this; it is urgent for me. Thanks in advance!!

There is insufficient system memory to run this query

Posted: 06 Oct 2013 07:47 PM PDT

Hi, we have SQL 2005 running on Windows 2003, and a user is getting the error "There is insufficient system memory to run this query". RAM is 4 GB, and SQL is assigned max 2.4 GB and min 1 GB. At present CPU and memory look normal. I connected to the server and there is no blocking. I tried to read the error log (xp_readerrorlog) but got the error below:

Msg 22004, Level 16, State 1, Line 0
Failed to open loopback connection. Please see event log for more information.
Msg 22004, Level 16, State 1, Line 0
error log location not found

Then I checked Event Viewer and found the errors below.

1) The description for Event ID ( 17052 ) in Source ( MSSQLSERVER ) cannot be found. The local computer may not have the necessary registry information or message DLL files to display messages from a remote computer. You may be able to use the /AUXSOURCE= flag to retrieve this description; see Help and Support for details. The following information is part of the event: Severity: 16 Error:8, OS: 8 [Microsoft][SQL Native Client]SQL Network Interfaces: Not enough storage is available to process this command..

2) There is insufficient system memory to run this query.

Any help is much appreciated. Thanks in advance.

Audit record in Audit Table

Posted: 06 Oct 2013 11:13 PM PDT

Hello geeks, I have a problem with the SSIS audit values recorded in the SSISAudit table. I execute a group of packages with working "audit open" and "audit close" stored procedures. When I execute the packages individually, all the values are recorded in the audit table. But when I execute the packages as a group, I see the values recorded for each child package, while the values for the main package are not fully recorded: it shows the start-of-execution values such as StartDateTime and version, but does not show values like EndDateTime and ProcessDuration (which come from the close stored procedure). Can anyone help me out with this issue? Thanks, San

Backup on shared drive

Posted: 06 Oct 2013 07:41 PM PDT

Can we create split backup files on a shared drive while generating the backup? If so, please help me with the script.
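Yes: a native backup can be striped across several files, including files on a UNC share, provided the SQL Server service account has write permission on the share. A sketch with hypothetical share and database names:

```sql
-- Striped (split) backup to three files on a network share:
BACKUP DATABASE MyDb
TO DISK = '\\FileServer\SQLBackups\MyDb_part1.bak',
   DISK = '\\FileServer\SQLBackups\MyDb_part2.bak',
   DISK = '\\FileServer\SQLBackups\MyDb_part3.bak'
WITH INIT, STATS = 10;

-- Restoring requires all stripes together:
-- RESTORE DATABASE MyDb
-- FROM DISK = '\\FileServer\SQLBackups\MyDb_part1.bak',
--      DISK = '\\FileServer\SQLBackups\MyDb_part2.bak',
--      DISK = '\\FileServer\SQLBackups\MyDb_part3.bak';
```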

MaxDOP setting?

Posted: 06 Oct 2013 09:57 PM PDT

As of now the SQL Server MAXDOP value is 0, so it is using all available CPUs. The generic/general-settings guidance is a MAXDOP value of 8.

SELECT COUNT(*) AS proc# FROM sys.dm_os_schedulers WHERE status = 'VISIBLE ONLINE' AND is_online = 1

The result is 80. Does the server have 80 logical CPUs? If server hyperthreading is enabled, what MAXDOP value should be set given this result of 80?

select (select top 1 count(*) schedulers_per_node from sys.dm_os_schedulers where scheduler_id < 255 group by parent_node_id) schedulers_per_node, (select value_in_use from sys.configurations where configuration_id = 1539) current_max_dop
-- schedulers_per_node = 20
-- current_max_dop = 10

Can we set the value 10 in MAXDOP? Here I have a doubt: the first query's result is 80 and the second's is 20, so which is the correct one? How do I check whether the server has NUMA enabled? Is there a relation between MAXDOP and NUMA, and how should MAXDOP be matched to NUMA?

Thanks, ananda
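The two results are not in conflict: the first query counts all schedulers on the server, while the second counts only one NUMA node's schedulers (its TOP 1 picks a single parent_node_id). Grouping by node shows both the layout and whether NUMA is enabled; a sketch:

```sql
-- Online schedulers per NUMA node; more than one row here means NUMA is
-- enabled. 80 schedulers spread over 4 nodes would show 4 rows of 20.
SELECT parent_node_id, COUNT(*) AS schedulers_per_node
FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE' AND is_online = 1
GROUP BY parent_node_id;

-- Common guidance is to keep MAXDOP at or below the scheduler count of a
-- single NUMA node (20 here), and frequently no higher than 8:
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 8;
RECONFIGURE;
```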


Monday, October 7, 2013

[SQL Server] How to collapse rows but take a specific value?


How to collapse rows but take a specific value?

Posted: 07 Oct 2013 02:57 AM PDT

Hi, I'm struggling with how to collapse rows in a view based on different values in a column, but take one specific value. Here's the sample data with four columns:

AccountNumber  MaturityDate  ResetFreq  Amount
123            20140331      1D          100.00
123            20140331      1M         2400.00
123            20140331      ''         8700.00
123            20140331      1Y          -99.00
123            20140331      1M          299.00

I want to collapse these five rows into one row:

AccountNumber  MaturityDate  ResetFreq  Sum(Amount)
123            20140331      1Y         11400.00

For ResetFreq, I need to take the highest value among the distinct values listed for that account. The hierarchy is: '' (blank) < 1D (1 day) < 1M (1 month) < 3M (3 months) < 1Y (1 year) < 3Y < 5Y < 10Y. So, in this case, the distinct values are '', 1D, 1M, and 1Y, and I take 1Y as it is the highest. Any help will be appreciated. Amit
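Since the ResetFreq hierarchy is fixed, one sketch is to map each value to a numeric rank, take the MAX rank per group, and map it back (the table name is hypothetical; columns follow the sample data):

```sql
SELECT AccountNumber, MaturityDate,
       -- map the winning rank back to its ResetFreq label:
       CASE MAX(CASE ResetFreq
                  WHEN ''   THEN 0 WHEN '1D' THEN 1 WHEN '1M'  THEN 2
                  WHEN '3M' THEN 3 WHEN '1Y' THEN 4 WHEN '3Y'  THEN 5
                  WHEN '5Y' THEN 6 WHEN '10Y' THEN 7 END)
         WHEN 0 THEN ''   WHEN 1 THEN '1D' WHEN 2 THEN '1M' WHEN 3 THEN '3M'
         WHEN 4 THEN '1Y' WHEN 5 THEN '3Y' WHEN 6 THEN '5Y' ELSE '10Y'
       END AS ResetFreq,
       SUM(Amount) AS TotalAmount
FROM dbo.AccountRows          -- hypothetical table holding the sample rows
GROUP BY AccountNumber, MaturityDate;
```

For the sample data this returns one row: 123, 20140331, 1Y, 11400.00.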

Duplicate Data Reports Issue

Posted: 07 Oct 2013 05:42 AM PDT

We have a database named NorthStar (NS) that pushes data into a CRM application (Salesforce) through an SSIS package run by a SQL Agent job. Lately NS has been pushing duplicate data, and Salesforce users are complaining about receiving duplicate records. The average number received daily should be 156; today 270 records were received. How do I solve this problem? Here is the path to the package: J:\Projects\NS to SF\NS Contacts Update with Org\Weekly NS Push\Package.dtsx Thanks!

Site for good SQL learnings

Posted: 06 Oct 2013 07:35 PM PDT

Hi to all, can someone give me a site that has good content about SQL? Thanks :-)

[how to] postgres create extension postgis error


postgres create extension postgis error

Posted: 07 Oct 2013 08:20 PM PDT

In the process of creating the PostgreSQL database "map", which uses the PostGIS extension (via CREATE EXTENSION postgis;), user "mapmaker" gets the following error:

permission denied to create extension "postgis"   HINT:  Must be superuser to create this extension.   

But user "mapmaker" is the database owner as specified by sudo -u postgres psql via the command:

CREATE DATABASE map OWNER mapmaker;   GRANT ALL PRIVILEGES ON DATABASE map TO mapmaker;   

Once mapmaker is a superuser at the user level, I no longer receive the error and the extension is created, so I understand that all I have to do is make mapmaker a superuser via the postgres user. But I am interested in knowing why this is the case, given that mapmaker was granted all privileges on the database map. Are extensions treated differently? To use extensions, does a user have to be a superuser at the user level, or can the permission be granted at the database level?

I did see http://stackoverflow.com/questions/16527806/cannot-create-extension-without-superuser-role but the answer to the question did not explain why and, unfortunately, I do not have enough points to comment, hence the question.

PostgreSQL 9.1.9 PostGIS 2.0.3
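As to the why: CREATE EXTENSION postgis installs C-language functions into the database, and loading native code is restricted to superusers regardless of database ownership; GRANT ALL ON DATABASE only conveys CONNECT, CREATE (schemas), and TEMP, so extensions are indeed treated differently. The usual pattern is to have a superuser install the extension once, after which the owner can use it freely; a sketch:

```sql
-- Run once as the postgres superuser (e.g. sudo -u postgres psql -d map):
CREATE EXTENSION postgis;

-- Alternatively, if mapmaker really must manage extensions himself
-- (with all the security implications of superuser):
-- ALTER ROLE mapmaker SUPERUSER;
```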

Run a program on a remote computer after a SQL job has completed

Posted: 07 Oct 2013 07:24 PM PDT

We have a SQL Agent job that performs a few tasks every night (scheduled to run using SQL Agent). The database server is SQL 2012, if that makes a difference.

We have another process (a .net program) that needs to run on a different server after the job has finished on each server. The program cannot be installed on the database server.

Is there a way to get a SQL Agent Job to run a program on another computer after it has finished, or is there an easy way to indicate to a program on another computer that the job has finished?

Seeing recompiles when changed AUTO_UPDATE_STATISTICS to OFF

Posted: 07 Oct 2013 03:34 PM PDT

The reasons why the proc cache is flushed are listed here: http://msdn.microsoft.com/en-us/library/bb522682.aspx, and AUTO_UPDATE_STATISTICS is not in that list. So I changed this option to OFF on production and saw a 100% spike that lasted about 4 minutes, along with a lot of recompiles. What am I missing?

Help designing tables for storing lab test results: values include numbers, <number, >number

Posted: 07 Oct 2013 03:18 PM PDT

We're using MySQL 5.5 and attempting to design tables to store the results from tests run in a lab. The results of some of the tests can be either a plain number, e.g. "5", or a range such as "< 5" or "> 25", and this is where we're getting stuck. The < and > values are due to the limits of the sensitivity of the tests; above or below a certain threshold for each test the exact amount can't be detected. Values right at the threshold are possible, so for the example above, both < 5 and 5 would be valid values for that particular test, and we need to be able to distinguish between these values.

There's a set list of tests, so needing to add new ones easily isn't a concern. Our preliminary design has a TestResults table and each test is a column in that. We were thinking of maybe adding a column for the tests that require < and > just to hold those values or possibly having "flag" values in the application code that signals a value is below/above the threshold and having the thresholds in a reference table. Neither of these solutions seems that great though. We don't want to just store the results as strings because we'll need to be able to get statistics, averages, etc. on the results.

Anyway we were hoping someone out there had done something like this and had some ideas on the best way to do it. We've been Googling (e.g. on LIMS) and haven't been able to find anything on database designs, but it seems like it wouldn't be that uncommon of a problem... Any ideas on the most appropriate way to model this type of data would be greatly appreciated.
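A common pattern for censored measurements like these is to split each result into a numeric value plus a qualifier column, so the numbers stay numeric for statistics while "< 5" and "5" remain distinguishable. A sketch (table and column names are illustrative, not from the post):

```sql
-- '=' marks an exact reading; '<'/'>' mark values censored at the
-- test's sensitivity threshold.
CREATE TABLE test_result (
    result_id  INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    sample_id  INT UNSIGNED NOT NULL,
    test_id    INT UNSIGNED NOT NULL,
    qualifier  ENUM('=', '<', '>') NOT NULL DEFAULT '=',
    value      DECIMAL(12,4) NOT NULL
) ENGINE=InnoDB;

-- Statistics can then be restricted to exact readings, or can treat
-- censored readings explicitly:
SELECT AVG(value)
FROM test_result
WHERE test_id = 7 AND qualifier = '=';
```

This also replaces the one-column-per-test layout with one row per result, which tends to be easier to query and extend.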

How to optimize a query that includes counts of rows in multiple tables?

Posted: 07 Oct 2013 05:05 PM PDT

My query looks like this:

EXPLAIN SELECT ...
    FROM USER u
    LEFT OUTER JOIN
         ( SELECT COUNT(*) notified_count, user_id
             FROM user_email_tracking
            WHERE email_type_id = 4
            GROUP BY user_id ) email_tracking
      ON u.id = email_tracking.user_id
    JOIN ( SELECT user_id, COUNT(*) AS subscriptionCount
             FROM user_filter
            GROUP BY user_id ) uf
      ON u.id = uf.user_id
   WHERE u.stage = 10

My intuition is that this is pretty doable. That u.stage = 10 narrows it down a lot; there are only a few hundred users it applies to. If I could narrow down by that first and then perform all the other work, it seems like this query should take a few seconds.

But user_filter is huge, and user_email_tracking is pretty big (both are indexed on their user_id columns). If those subqueries have to be evaluated over everything, it's much slower. And I also must avoid joining user_filter against user_email_tracking on user_id, since that would be a huge table times a huge table.

If I only had one count to do, I could move the group by all the way outside and just have a row for each row of user_filter (or user_email_tracking).

How to get SQL to do these things in the right order?

Engine is InnoDB.
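One way to force the intended order, sketched against the names in the post, is to filter USER first and let correlated subqueries probe the two big tables only for the qualifying rows; their user_id indexes make each probe cheap:

```sql
SELECT u.id,
       (SELECT COUNT(*)
          FROM user_email_tracking t
         WHERE t.user_id = u.id AND t.email_type_id = 4) AS notified_count,
       (SELECT COUNT(*)
          FROM user_filter uf
         WHERE uf.user_id = u.id) AS subscriptionCount
FROM USER u
WHERE u.stage = 10;
```

Note one semantic difference: this keeps users with zero user_filter rows (subscriptionCount = 0), which the original INNER JOIN excluded; add `HAVING subscriptionCount > 0` or a filter if that matters.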

How to import .FRM Tables into MySQL DB as .SQL?

Posted: 07 Oct 2013 03:03 PM PDT

After upgrading to the latest version of MAMP (2.2), all my localhost websites and databases are unusable. All the DB files are .FRM (table definition) files. In phpMyAdmin the tables look like they are present, but upon clicking one, an error message appears saying "Table Not Found/Doesn't Exist."

TempDB on Solid State Drives on SQL Server Cluster

Posted: 07 Oct 2013 07:18 PM PDT

Can solid state drives be used for the TempDB on a SQL Server failover cluster if they are local drive on both machines? It is supported in 2012 but our cluster is a SQL 2005. The application that it is running isn't upgradeable to 2012.

If the SSD drives are on the same local path on both servers, wouldn't it just rebuild the TempDB when it failed over? Wouldn't the cluster recognize any drive the OS recognizes? Or are there other issues that need to be accounted for?

I realize that Microsoft won't support it but the performance gain would be great. Our TempDB on that cluster is the heaviest used DB and we can't change any code in the application it runs.

Cloning rows with minor changes

Posted: 07 Oct 2013 01:47 PM PDT

I am attempting to clone a row in the database, not that difficult. However the row has a reference to another row in the database that when inserted needs to use a different value, perhaps provided by a previous insert.

For example this is the table structure.

CREATE TABLE `usergroup` (      `id` int(11) unsigned NOT NULL AUTO_INCREMENT,      `vhost` int(11) unsigned NOT NULL,      `groupType` tinyint(3) unsigned NOT NULL DEFAULT '1',      `name` varchar(80) NOT NULL,      `description` text,      `note` text,      `logo` int(11) unsigned NOT NULL DEFAULT '0',      `created` datetime DEFAULT NULL,      `treeleft` int(10) NOT NULL,      `treeright` int(10) NOT NULL,      `parentGroup` int(11) unsigned NOT NULL DEFAULT '0',      `geo_radius` int(11) unsigned NOT NULL DEFAULT '0',      `geo_autoRemove` tinyint(4) NOT NULL DEFAULT '1',      `geo_autoAdd` tinyint(4) NOT NULL DEFAULT '1',      PRIMARY KEY (`id`),      KEY `vhost` (`vhost`,`groupType`,`moderationStatus`,`parentGroup`,`created`),      KEY `tree` (`vhost`,`treeleft`,`treeright`)  ) ENGINE=InnoDB  

With the following rows

INSERT INTO `usergroup` (id, vhost, groupType, name, description, note, created, treeleft,                            treeright, parentGroup, geo_radius, geo_autoRemove, geo_autoAdd)         VALUES (9696, 222, 1, 'Group 1.1', NULL, NULL, NULL, 1, 12, 0, 0, 0, 0),               (9697, 222, 1, 'Group 1.2', '', '', NULL, 2, 3, 9696, 0, 0, 0),               (9698, 222, 1, 'Group 1.3', '', '', NULL, 4, 9, 9696, 0, 0, 0),               (9700, 222, 1, 'Group 1.4', NULL, NULL, NULL, 5, 6, 9698, 0, 0, 0),               (9701, 222, 1, 'Group 1.5', NULL, NULL, NULL, 7, 8, 9698, 0, 0, 0),               (9702, 222, 1, 'Group 1.6', NULL, NULL, NULL, 10, 11, 9696, 0, 0, 0),               (9706, 222, 1, 'Group 1.9', NULL, NULL, NULL, 13, 16, 0, 0, 0, 0),               (9707, 222, 1, 'Group 1.10', NULL, NULL, NULL, 14, 15, 9706, 0, 0, 0),               (9704, 222, 1, 'Group 1.7', NULL, NULL, NULL, 17, 18, 0, 0, 0, 0),               (9705, 222, 1, 'Group 1.8', NULL, NULL, NULL, 19, 20, 0, 0, 0, 0),               (9799, 222, 1, 'Group 1.11', '', '', '2012-06-21 17:11:41', 21, 22, 0, 0, 0, 0),               (9800, 222, 1, 'Group 1.12', '', '', '2012-06-24 10:21:34', 23, 24, 0, 0, 0, 0);  

I now want to clone all these usergroups into a new vhost, 333 for example.

    INSERT INTO usergroup (vhost, groupType, name, description, note,
          created, treeleft, treeright, parentGroup, geo_radius,
          geo_autoRemove, geo_autoAdd)
    SELECT 333 AS vhost, groupType, name, description,
           note, created, treeleft, treeright, parentGroup,
           geo_radius, geo_autoRemove, geo_autoAdd
      FROM usergroup
     WHERE vhost = 222;

The problem of course is while everything works out perfectly the parentGroup is incorrect. I suppose inserting by joining the table with itself might work but I need to be able to clone a vhost that has 8 levels of depth but could be as few as two in some places.

Does anyone have any ideas?
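One sketch that avoids per-level self-joins entirely: clone first, then rewrite parentGroup through an old-id to new-id mapping. This assumes `name` is unique within a vhost; any other natural key (e.g. treeleft, since the nested-set positions are copied verbatim) works the same way:

```sql
-- 1) Clone the rows into vhost 333 (as in the post's INSERT ... SELECT).
INSERT INTO usergroup (vhost, groupType, name, description, note, created,
                       treeleft, treeright, parentGroup, geo_radius,
                       geo_autoRemove, geo_autoAdd)
SELECT 333, groupType, name, description, note, created,
       treeleft, treeright, parentGroup, geo_radius, geo_autoRemove, geo_autoAdd
FROM usergroup
WHERE vhost = 222;

-- 2) Build the old-id -> new-id mapping by matching on the natural key.
CREATE TEMPORARY TABLE idmap (old_id INT UNSIGNED, new_id INT UNSIGNED);
INSERT INTO idmap (old_id, new_id)
SELECT o.id, n.id
FROM usergroup o
JOIN usergroup n ON o.name = n.name
WHERE o.vhost = 222 AND n.vhost = 333;

-- 3) Remap parentGroup in one pass; root groups (parentGroup = 0) have no
--    mapping row and are left untouched.
UPDATE usergroup n
JOIN idmap m ON n.parentGroup = m.old_id
SET n.parentGroup = m.new_id
WHERE n.vhost = 333;
```

Because the remap is a single set-based UPDATE, the depth of the hierarchy (2 levels or 8) makes no difference.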

Create new database with large initial size

Posted: 07 Oct 2013 01:50 PM PDT

I want to create a new database with a 200 GB MDF and a 50 GB LDF, but it is very slow (it has been about 15 minutes and the database still hasn't been created), and it is very time consuming. Is that normal? If yes, what does it do that takes so long? Can I speed up its creation?

I am using SQL Server 2008 R2, Windows Server 2008 R2, 16 GB RAM (which I limited to 12 GB in SSMS) and a Core i7 CPU.
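The likely culprit is file zeroing: without instant file initialization, SQL Server zero-writes the entire 200 GB data file on creation. Granting the service account the "Perform volume maintenance tasks" privilege (Local Security Policy) lets data files be allocated almost instantly; the log file is always zero-initialized regardless, so a smaller initial LDF with autogrow will still save most of the time. A hedged way to check whether zeroing is happening:

```sql
-- Trace flags 3004 + 3605 write file-initialization activity to the errorlog.
-- Create a small test database afterwards and look for "Zeroing ..." messages:
-- if the *data* file is being zeroed, instant file initialization is off.
DBCC TRACEON (3004, 3605, -1);
-- CREATE DATABASE ZeroTest;  -- small test DB
EXEC xp_readerrorlog;
DBCC TRACEOFF (3004, 3605, -1);
```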

Is DBCA in Oracle for a single database or the entire system?

Posted: 07 Oct 2013 12:12 PM PDT

I ask this because I was going to create a simple phone-list database for learning. But when I used dbca, I was asked whether I wanted to install the sample schemas (HR, etc.). It also had the default locations for all archive log files, and passwords for SYS, SYSMAN... I thought dbca was just to create a single database.

What is dbca for?

How to completely remove EnterpriseDB Installation of Postgresql on Ubuntu?

Posted: 07 Oct 2013 12:10 PM PDT

All,

I'm trying to install/run PostgreSQL 9.3, but am receiving a multitude of errors which seem to point to a failed installation of postgresql from the enterpriseDB package.

For example, trying to start the server with this command:

sudo /etc/init.d/postgresql-9.3 restart (or start)

...yields the following:

Restarting PostgreSQL 9.3:   -su: /opt/PostgreSQL/9.3/bin/pg_ctl: No such file or directory  PostgreSQL 9.3 did not start in a timely fashion, please see /opt/PostgreSQL/9.3/data/pg_log/startup.log for details  

Except there is no /opt/PostgreSQL/ folder, and I've been told that creation of that was attempted during the EnterpriseDB installation...why the start command is trying to go there, I do not know!

I've started another thread which gave me a very temporary solution, but that doesn't seem to help after I do a restart of my machine.

So the question, how do I completely remove the EnterpriseDB failed installation?

I've tried these commands, but the issue still lingers:

sudo apt-get autoremove postgresql-9.3-postgis pgadmin3  sudo apt-get --purge remove postgresql\*  

Any thoughts or recommendations are welcome!

Thank you!

-m

SQL Server 2012 Resource Governor and memory used by CLR

Posted: 07 Oct 2013 11:34 AM PDT

In SQL 2012 memory used by CLR is now part of what is controlled by "MAX Server Memory"

a) Does Resource Governor in SQL 2012 Enterprise (MAX_MEMORY_PERCENT for resource pool) governs CLR memory?

b) Also, is MAX_MEMORY_PERCENT a percentage of "max server memory"? My max server memory is 28 GB on a server with 32 GB RAM. Does setting MAX_MEMORY_PERCENT to 10 percent mean I set 2.8 GB for that particular resource pool?

How to determine which `Foos` have no associated `Bars` [duplicate]

Posted: 07 Oct 2013 10:53 AM PDT

This question already has an answer here:

I want to find all Foos that do not have any associated Bars.

Here's the query I'm using. It's returning no records:

select * from foos f where f.id not in(select b.foo_id from bars b);  

However, I know that the foo with id = 1583 has no associated Bars, so there should be at least one result in my previous query.

Can somebody point out a mistake I am making? Thanks.
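This is the classic NOT IN / NULL trap: if any bars.foo_id is NULL, the predicate `id NOT IN (...)` involves a comparison with NULL, evaluates to UNKNOWN for every row, and the query returns nothing. Either exclude the NULLs from the subquery or use NOT EXISTS, which is NULL-safe:

```sql
-- NULL-safe version:
SELECT *
FROM foos f
WHERE NOT EXISTS (SELECT 1 FROM bars b WHERE b.foo_id = f.id);

-- Or keep NOT IN but filter the NULLs out of the subquery:
SELECT *
FROM foos f
WHERE f.id NOT IN (SELECT b.foo_id FROM bars b WHERE b.foo_id IS NOT NULL);
```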

built-in administrator is disabled and sa is lost [duplicate]

Posted: 07 Oct 2013 10:41 AM PDT

This question already has an answer here:

I've set a password for the SQL Server SA account on the instance and then disabled the Windows built-in administrator. Now I've forgotten the SA account's password. How can I recover access to the database? I can log on to the Windows OS.
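If you can log on to Windows as a local administrator, SQL Server's single-user startup mode grants members of the local Administrators group sysadmin access, which is enough to reset sa. A sketch, assuming a default instance (service name MSSQLSERVER):

```sql
-- 1) Stop the service, then start it in single-user mode from an elevated
--    prompt:   net start MSSQLSERVER /m
-- 2) Connect as the Windows admin (e.g. sqlcmd -S . -E) and reset sa:
ALTER LOGIN sa WITH PASSWORD = 'NewStr0ngPassword!';
ALTER LOGIN sa ENABLE;
-- 3) Restart the service normally:
--    net stop MSSQLSERVER  then  net start MSSQLSERVER
```

Make sure nothing else (Agent, applications) grabs the single connection before you do.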

mysql match against boolean mode

Posted: 07 Oct 2013 10:12 AM PDT

I'm using MATCH AGAINST in boolean mode to add a simple search functionality to my site.

The column to be searched is varchar, with no full-text index. The column value can be alphanumeric, all letters or all numbers, as well as certain combinations of both (e.g. 'ABC-123').

There seem to be a few problems. First, the search doesn't work when there's a hyphen (like ABC-123). Second, it will not find partial matches. For a value of 856049, the searches look like this:

MATCH (field) AGAINST ('+856049' IN BOOLEAN MODE) // finds the record  MATCH (field) AGAINST ('+85604' IN BOOLEAN MODE) // does not find the record  MATCH (field) AGAINST ('+8560' IN BOOLEAN MODE) // does not find the record  

I'm not sure why this is working this way. For various reasons I can't add a fulltext index to the column and use the "default" match against mode.
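Two boolean-mode behaviors explain this: matching is word-based, so a bare term only matches whole words (a trailing `*` is needed for prefix matches, and words shorter than ft_min_word_len, default 4, are ignored entirely), and the hyphen is an operator (NOT), so hyphenated terms need phrase quotes. A sketch (table name hypothetical):

```sql
-- Prefix match: finds the row containing 856049
SELECT * FROM t WHERE MATCH (field) AGAINST ('+85604*' IN BOOLEAN MODE);

-- Hyphenated value: quote it as a phrase so '-' is not parsed as NOT
SELECT * FROM t WHERE MATCH (field) AGAINST ('"ABC-123"' IN BOOLEAN MODE);

-- For a plain varchar with no FULLTEXT index, a simple LIKE may actually
-- be the better tool (both scan the table anyway):
SELECT * FROM t WHERE field LIKE '%85604%';
```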

Optimizing a query using derived tables

Posted: 07 Oct 2013 07:39 PM PDT

This is actually a cross-post. I had posted this already on StackOverflow, and then someone suggested me to post here instead. If required, I'm willing to delete the StackOverflow post, if it's more relevant to keep a single copy here.

As part of a daily cron job, I need to run a query that processes a whole lot of data. This data is related to the visitors coming to a website, and updating the data with what we have captured previously.

The query relies on 2 derived tables (select queries in the FROM section), to do its work —

SELECT new_visits.visitor_id, new_visits.visit_id, new_visits.visit_first_action_time,
       new_visits.purchased AS purchased,
       IFNULL(existing_visitors.purchased, 0) AS existing_purchased
  FROM ( SELECT tv.visitor_id, tv.visit_id, tv.visit_first_action_time,
                IF(tc.idgoal = 0, 1, 0) AS purchased
           FROM tbl_log_visit tv
           LEFT OUTER JOIN tbl_log_conversion tc
             ON tv.visit_id = tc.visit_id AND tc.idgoal = 0
          WHERE tv.idsite = 12 AND tv.visit_id >= 477256
          ORDER BY tv.visit_id
          LIMIT 1000 ) new_visits
  LEFT JOIN
       ( SELECT visitor_id, MAX(visit_seq) AS visit_seq, purchased
           FROM tbl_last_input_visit
          WHERE site_id = 12
          GROUP BY visitor_id, purchased ) existing_visitors
    ON new_visits.visitor_id = existing_visitors.visitor_id
 ORDER BY new_visits.visitor_id, new_visits.visit_id;

With smaller datasets, this query works just fine. However, as the data grows, the query becomes progressively slower, to the point where it now takes around 30 seconds to execute (at the start it took around 1.5 seconds).

The query plan is as follows —

+----+-------------+-----------------+-------+-----------------------------------------------------------------------------------+---------------+---------+------------+---------+---------------------------------+
| id | select_type | table           | type  | possible_keys                                                                     | key           | key_len | ref        | rows    | Extra                           |
+----+-------------+-----------------+-------+-----------------------------------------------------------------------------------+---------------+---------+------------+---------+---------------------------------+
|  1 | PRIMARY     | <derived2>      | ALL   | NULL                                                                              | NULL          | NULL    | NULL       |    1000 | Using temporary; Using filesort |
|  1 | PRIMARY     | <derived3>      | ALL   | NULL                                                                              | NULL          | NULL    | NULL       |  705325 |                                 |
|  3 | DERIVED     | tbl_input_visit | ref   | visitorid_seq,visitorid_idx                                                       | idvisitor_seq | 4       |            |  490047 | Using where                     |
|  2 | DERIVED     | tv              | range | PRIMARY,index_idsite_config_datetime,index_idsite_datetime,index_idsite_idvisitor | PRIMARY       | 4       | NULL       | 4781309 | Using where                     |
|  2 | DERIVED     | tc              | ref   | PRIMARY                                                                           | PRIMARY       | 8       | tv.idvisit |       1 | Using index                     |
+----+-------------+-----------------+-------+-----------------------------------------------------------------------------------+---------------+---------+------------+---------+---------------------------------+

At this point, one option I have explored is creating temporary tables. However, the overhead of doing so is quite significant. I also realise that since this query relies on derived tables, MySQL will not be able to use any underlying indexes.

Here are the create statements for the tables involved —

CREATE TABLE `tbl_last_input_visit` (
  `site_id` int(10) unsigned NOT NULL,
  `visitor_id` binary(8) NOT NULL,
  `visit_seq` int(10) unsigned NOT NULL,
  `purchase_cycle_seq` int(10) unsigned NOT NULL,
  `visit_in_cycle_seq` int(10) unsigned NOT NULL,
  `purchased` smallint(5) unsigned NOT NULL COMMENT 'l_ij',
  UNIQUE KEY `idvisitor_seq` (`site_id`,`visitor_id`,`visit_seq`),
  KEY `idvisitor_idx` (`site_id`,`visitor_id`)
) ENGINE=InnoDB

CREATE TABLE `tbl_log_visit` (
  `visit_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `idsite` int(10) unsigned NOT NULL,
  `visitor_id` binary(8) NOT NULL,
  `visit_last_action_time` DATETIME,
  `config_id` int(10) unsigned NOT NULL,
  PRIMARY KEY (`visit_id`),
  KEY `index_idsite_config_datetime` (`site_id`,`config_id`,`visit_last_action_time`),
  KEY `index_idsite_datetime` (`site_id`,`visit_last_action_time`),
  KEY `index_idsite_idvisitor` (`site_id`,`visitor_id`)
) ENGINE=InnoDB

CREATE TABLE `tbl_log_conversion` (
  `visit_id` int(10) unsigned NOT NULL,
  `site_id` int(10) unsigned NOT NULL,
  `visitor_id` binary(8) NOT NULL,
  `idgoal` int(10) NOT NULL,
  `idorder` int(10) NOT NULL,
  PRIMARY KEY (`visit_id`,`idgoal`),
  UNIQUE KEY `unique_idsite_idorder` (`site_id`,`idorder`)
) ENGINE=InnoDB

Is there some way I can go about improving the performance of this query?
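One common workaround when MySQL cannot push indexes into a derived table is to materialize the inner result into an explicitly indexed temporary table and join against that instead. A rough sketch, with illustrative column names and an illustrative filter (your real query will differ):

```sql
-- Hypothetical sketch: materialize the derived result once, index it,
-- then join the outer query against it instead of the inline subquery.
CREATE TEMPORARY TABLE tmp_visits (
  site_id    INT UNSIGNED NOT NULL,
  visitor_id BINARY(8)    NOT NULL,
  visit_id   INT UNSIGNED NOT NULL,
  KEY idx_site_visitor (site_id, visitor_id)
) ENGINE=InnoDB;

INSERT INTO tmp_visits (site_id, visitor_id, visit_id)
SELECT tv.idsite, tv.visitor_id, tv.visit_id
FROM tbl_log_visit tv
WHERE tv.visit_last_action_time >= '2013-09-01';  -- your real filter here

-- The outer query can now use idx_site_visitor rather than
-- scanning an unindexed derived table of ~700k rows.
```

You mention the overhead of temporary tables is significant, but it is still worth comparing EXPLAIN output for both variants: the filesort over the large unindexed derived table is usually the dominant cost in plans like the one above.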

Database design: Modifying an open source database without a lot of changes

Posted: 07 Oct 2013 10:17 AM PDT

  • I am adding more functionality/features to an open source project.
  • The project is about a forum where each post can be given a specific list of tags.
  • The tags are homogeneous, i.e. they are of a similar type, e.g. Subject.
  • I am required to add multiple homogeneous tag types, for example: Subject, Course, College, Experience. All of them should have the same features as the current tag.
  • Currently each post can have multiple tags of the same type. The requirement is to support multiple types, where each type allows multiple tags.
  • Below is a snippet of an already existing data model.

A snippet of already existing database.

  • How do I make changes to the already existing database? (Please don't discard the question as opinion-based; I have seen many people with similar doubts.)
  • Do I need to add a separate table for each type of tag and redo whatever the original tag did?
  • Or should I create a level above tags, a tag type, from which all tags inherit the basic tag functionality?
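One common way to support several homogeneous tag types without duplicating the tag machinery is the third option: add a tag-type lookup table and reference it from the existing tag table. A rough sketch (table and column names are illustrative, not taken from the project):

```sql
-- Hypothetical sketch: one lookup table for tag types,
-- referenced by the existing tag table.
CREATE TABLE tag_type (
  id   INT UNSIGNED NOT NULL AUTO_INCREMENT,
  name VARCHAR(50)  NOT NULL,   -- 'Subject', 'Course', 'College', ...
  PRIMARY KEY (id),
  UNIQUE KEY uq_name (name)
);

-- The existing tag table gains a type column:
ALTER TABLE tag
  ADD COLUMN tag_type_id INT UNSIGNED NOT NULL DEFAULT 1,
  ADD CONSTRAINT fk_tag_type
      FOREIGN KEY (tag_type_id) REFERENCES tag_type (id);
```

All existing tag features (the post-to-tag mapping table, tag queries, and so on) keep working unchanged; queries that need a specific type simply filter on tag_type_id, and adding a new type is an INSERT rather than a schema change.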

One or two tables (for two things: cancellations and moves)?

Posted: 07 Oct 2013 10:54 AM PDT

A lesson may be canceled, moved to another time, and possibly switched to another teacher.

In other words, we have

lessonid  INT UNSIGNED
canceled  BIT(1)
time      TIME
teacherid INT UNSIGNED

Should all these "lesson modifications" be stored in one table? Or one table for canceled and another for moved lessons?

I am inclined to store it all in one table (with a possibly NULL time and a probably NULL teacherid), because I can then use a common "modification ID" (the primary key of the table) for both cancellations and moves to another time. But I am open to hearing arguments pro and contra.

MySQL 5
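For what it's worth, the single-table design can be sketched like this (names are illustrative, not prescriptive):

```sql
-- Hypothetical sketch: one table for all lesson modifications.
CREATE TABLE lesson_modification (
  id        INT UNSIGNED NOT NULL AUTO_INCREMENT,  -- the common "modification ID"
  lessonid  INT UNSIGNED NOT NULL,
  canceled  BIT(1)       NOT NULL DEFAULT b'0',
  new_time  TIME         NULL,   -- NULL unless the lesson was moved
  teacherid INT UNSIGNED NULL,   -- NULL unless the teacher changed
  PRIMARY KEY (id),
  KEY idx_lesson (lessonid)
);
```

The consistency rule (if canceled = 1 then new_time and teacherid must be NULL) would have to be enforced in application code, since MySQL 5 parses but ignores CHECK constraints. The two-table alternative avoids the nullable columns but costs a UNION whenever you need all modifications in one list.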

Script out XMLA for all databases

Posted: 07 Oct 2013 01:26 PM PDT

I have several SSAS databases. I only want to script out the XMLA for them on a nightly basis. This will be a second tier backup in addition to the regular backups we have.

How do I auto-generate XMLA scripts for all the databases?
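One approach is a nightly PowerShell job using AMO. This is a sketch from memory, assuming the Microsoft.AnalysisServices assembly is installed; the Scripter.WriteCreate call, instance name, and output path are illustrative and should be verified against the AMO documentation:

```powershell
# Hypothetical sketch: script every SSAS database to a .xmla file via AMO.
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.AnalysisServices")

$server = New-Object Microsoft.AnalysisServices.Server
$server.Connect("localhost")                         # your SSAS instance

foreach ($db in $server.Databases) {
    $file = "C:\SSASBackup\$($db.Name).xmla"         # illustrative path
    $xw = New-Object System.Xml.XmlTextWriter($file, [System.Text.Encoding]::UTF8)
    $xw.Formatting = [System.Xml.Formatting]::Indented
    # Writes the CREATE script for the database (and its dependents) as XMLA.
    [Microsoft.AnalysisServices.Scripter]::WriteCreate($xw, $server, @($db), $true, $true)
    $xw.Close()
}
$server.Disconnect()
```

Scheduled as a SQL Agent PowerShell job step, this produces one CREATE script per database each night, alongside your regular backups.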

Connecting to sql azure from ssms throw exception 'sys.configurations'

Posted: 07 Oct 2013 01:45 PM PDT

When connecting to SQL Azure from SQL Server Management Studio, the following exception occurs:

Cannot connect to xxxxxxxxx.database.windows.net
An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
sql management studio connection to windows azure invalid exception object name 'sys.configurations'

Any clue?

And here's the whole message:

Cannot connect to ndkkyxvogj.database.windows.net.

===================================

An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)

Program Location:

at Microsoft.SqlServer.Management.Common.ServerConnection.ExecuteWithResults(String sqlCommand)
at Microsoft.SqlServer.Management.UI.VSIntegration.ObjectExplorer.ObjectExplorer.CheckDacAndSmo(SqlConnection sqlConnection)
at Microsoft.SqlServer.Management.UI.VSIntegration.ObjectExplorer.ObjectExplorer.ValidateConnection(UIConnectionInfo ci, IServerType server)
at Microsoft.SqlServer.Management.UI.ConnectionDlg.Connector.ConnectionThreadUser()

===================================

Invalid object name 'sys.configurations'. (.Net SqlClient Data Provider)

For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&ProdVer=11.00.9164&EvtSrc=MSSQLServer&EvtID=208&LinkId=20476

Server Name: ndkkyxvogj.database.windows.net
Error Number: 208
Severity: 16
State: 1
Line Number: 17

Program Location:

at Microsoft.SqlServer.Management.Common.ConnectionManager.ExecuteTSql(ExecuteTSqlAction action, Object execObject, DataSet fillDataSet, Boolean catchException)
at Microsoft.SqlServer.Management.Common.ServerConnection.ExecuteWithResults(String sqlCommand)

Can non-domain-connected users use Report Builder

Posted: 07 Oct 2013 02:07 PM PDT

I'm dealing with a scenario where SSRS is being provided to remote clients on a different domain. We've discovered that they can launch Report Builder (a ClickOnce application) from their Report Manager browser session, but RB then uses their local credentials when trying to do anything (e.g. hit a datasource) in the RB session.

This article (http://technet.microsoft.com/en-us/library/ms365173(v=sql.105).aspx) seems to indicate that it won't be possible to hack our way around this; whereas this one about Azure (http://msdn.microsoft.com/en-us/library/windowsazure/dn189155.aspx) appears to indicate that a local RB session may be able to interact successfully with Azure - does this suggest there's a glimmer of hope?

We are currently running SQL 2008 R2.

Query_cache doesn't work with join

Posted: 07 Oct 2013 12:25 PM PDT

I have a simple join query, but for some reason query_cache won't cache it!

SELECT id, news, approve, FIXED, DATE, allow_main
FROM post
LEFT JOIN post_plus ON post.id = post_plus.news_id
WHERE approve = 1 AND allow_main = 1
ORDER BY FIXED DESC, DATE DESC
LIMIT 7, 7;

Without the JOIN post_plus ON (post.id = post_plus.news_id), it works.

query_cache is on

| query_cache_limit            | 10485760  |
| query_cache_min_res_unit     | 4096      |
| query_cache_size             | 536870912 |
| query_cache_strip_comments   | OFF       |
| query_cache_type             | ON        |
| query_cache_wlock_invalidate | OFF       |

Server version: 5.6.12-56 Percona Server (GPL), Release rc60.4, Revision 393

both tables are innodb with utf8 charset

mysqldump freezing on a specific table

Posted: 07 Oct 2013 03:25 PM PDT

I dumped a database (sys_data), which is very big (800 GB, all data in one ibdata file), from a remote server, but the dump got stuck at one table (tb_trade_376). My dump command:

mysqldump -uxx -pxx -h192.168.1.xxx --single-transaction sys_data > /home/sys_data.sql

When the dump was blocked:

show processlist;

5306612 | root | 192.168.1.161:57180 | sys_data | Query | 23955 | Sending data | SELECT /*!40001 SQL_NO_CACHE */ * FROM `tb_trade_376`

On the other hand I can dump the table tb_trade_376 successfully if I just dump the table only.

mysqldump -uxx -pxx -h192.168.1.xxx \
  --single-transaction sys_data tb_trade_376 > /home/tb_trade_376.sql

This works well and quickly! The table tb_trade_376 has about 700,000-800,000 rows.

What is the next step in investigating why I can't dump the whole database? How can I make it work?

In place upgrade from MySQL 5.5 to 5.6.11 removes all users from user table

Posted: 07 Oct 2013 02:25 PM PDT

On Windows, I upgraded from 5.1 to 5.5 no problem.

  1. Copied my 5.1 data folder into my 5.5 instance
  2. Started mysqld skipping grants
  3. Ran mysql_upgrade

All good, but going from 5.5 to 5.6:

  1. Copied 5.5 data folder to 5.6 instance
  2. Started mysqld skipping grants
  3. Ran mysql_upgrade

but I get:

C:\Users\QAdmin>mysql_upgrade
Looking for 'mysql.exe' as: C:\Program Files\MySQL\MySQL Server 5.6\bin\mysql.exe
Looking for 'mysqlcheck.exe' as: C:\Program Files\MySQL\MySQL Server 5.6\bin\mysqlcheck.exe
Running 'mysqlcheck' with connection arguments: "--port=3306"
Running 'mysqlcheck' with connection arguments: "--port=3306"
mysql.user_info                                    OK
Running 'mysql_fix_privilege_tables'...
Running 'mysqlcheck' with connection arguments: "--port=3306"
C:\Program Files\MySQL\MySQL Server 5.6\bin\mysqlcheck.exe: Got error: 1130: Host 'localhost' is not allowed to connect to this MySQL server when trying to connect
FATAL ERROR: Upgrade failed

If I look at the mysql.user table it is completely empty.

  • Has anyone seen this or know what is going on?
  • During the "upgrade", does the user table get erased, so that when mysql_upgrade tries to connect it can't?

Thanks.

SSRS 2008 R2 setup issue

Posted: 07 Oct 2013 05:25 PM PDT

I have installed SSRS 2008 R2 on my desktop and server. When I hit the reports link on my desktop

http://mypc/Reports_mypc/Pages/Folder.aspx  

all I get to see is the home page of the desktop SSRS instance.

I can't create a new folder, a data source, or anything of the sort.

On the server where I am attempting to set up SSRS 2008 R2, all I get is a white screen showing the virtual folder name in a large font, followed by the version of the Reporting Services server on the next line. This leaves me no clues as to what needs fixing. On both PCs I am using the credentials of the local admin. Any clues on what needs to be fixed?

I would like to query a range of criteria on multiple columns in MySQL

Posted: 07 Oct 2013 11:53 AM PDT

I have 3 columns in a mysql table. I'm using Innodb engine. I want to be able to search for some values on those columns and also anything close to those values. For example :

We have 3 columns, "Fast", "Smart", and "Sharp", and they are of type boolean/int. I have 1000 records in the table; as an example, I list 5 here.

Fast | Smart | Sharp
--------------------
  0  |   1   |   1
  1  |   1   |   1
  0  |   0   |   1
  1  |   1   |   1
  1  |   0   |   0

Let's say we want to search for people who are smart and sharp, i.e. 0 1 1, but if there is no record with the value 0 1 1, we want the closest match to it, such as 1 1 1 or 1 0 1. Right now when I search, I only get exact matches; if the value doesn't exist, I get nothing back from the DB. How can I achieve this, and if I get a bunch of results, how can I sort them from the closest to my query to the furthest?
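Since the columns hold 0/1 values, one way (a sketch; the table name `people` is illustrative, the column names come from the example) is to score each row by how many columns match the requested pattern and sort by that score:

```sql
-- Hypothetical sketch: rank rows by similarity to the pattern (0, 1, 1).
SELECT fast, smart, sharp,
       (fast = 0) + (smart = 1) + (sharp = 1) AS matches
FROM people
ORDER BY matches DESC
LIMIT 10;
```

In MySQL a boolean comparison evaluates to 0 or 1, so `matches` is simply the count of matching columns: exact matches score 3 and sort first, rows differing in one column score 2, and so on down to 0.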

What is the difference between int(8) and int(5) in mysql?

Posted: 07 Oct 2013 01:25 PM PDT

I found out that if you have a field defined as INT(8) without ZEROFILL, it will behave exactly like INT(5):

in both cases the range is

−2,147,483,648 to 2,147,483,647, from −(2^31) to 2^31 − 1  

Or am I missing something?

I found this Question: http://dba.stackexchange.com/a/370/12923

The (5) represents the display width of the field. From the manual, it states:

The display width does not constrain the range of values that can be stored in the column. Nor does it prevent values wider than the column display width from being displayed correctly. For example, a column specified as SMALLINT(3) has the usual SMALLINT range of -32768 to 32767, and values outside the range permitted by three digits are displayed in full using more than three digits.

The display width, from what I can tell, can be used to left-pad numbers that are less than the defined width. So 00322, instead of 322. TBH, I've never used it.

But it doesn't affect the storage size of the column. An int will take up more space than a smallint.

So there seems to be no difference, then.
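A quick session demonstrates that only the display changes (a sketch; table name is illustrative):

```sql
-- The display width only matters together with ZEROFILL.
CREATE TABLE t (a INT(5) ZEROFILL, b INT(8), c INT(5));
INSERT INTO t VALUES (322, 322, 123456);
SELECT * FROM t;
-- a is displayed as 00322 (left-padded to 5 digits by ZEROFILL),
-- b as 322, and c as 123456: the width never truncates the stored value.
```

All three columns occupy the same 4 bytes on disk; note also that ZEROFILL implicitly makes the column UNSIGNED.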

MySQL table relations, inheritance or not?

Posted: 07 Oct 2013 06:25 PM PDT

I'm building a micro CMS, using MySQL as the RDBMS and the Doctrine ORM for mapping.

I would like to have two types of pages: Static Page and Blog Page. A static page would have page_url and page_content stored in the database. A blog page would have page_url, but no page_content; the blog would have posts, categories, and so on.

Lets say I have route like this:

/{pageurl}  

This is a page whose URL can be home, news, blog, etc. That page can be a Static page, in which case I would just print page_content. But it can also be a Blog Page, and then I would print the latest posts as content.

How should I relate these Static Page and Blog Page tables? Is this inheritance, since both are pages with their own URL but different content? Should I use inheritance, so that both Static and Blog page extend a Page that has page_url? Or should I make another table, page_types, and store the information about available page types there?
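One lightweight option, short of full Doctrine inheritance mapping, is a single page table with a type discriminator (a sketch; names are illustrative):

```sql
-- Hypothetical sketch: one table with a discriminator column.
CREATE TABLE page (
  id           INT UNSIGNED NOT NULL AUTO_INCREMENT,
  page_url     VARCHAR(255) NOT NULL,
  page_type    ENUM('static', 'blog') NOT NULL,
  page_content TEXT NULL,            -- NULL for blog pages
  PRIMARY KEY (id),
  UNIQUE KEY uq_url (page_url)
);
```

The /{pageurl} route then looks up the row once: if page_type is 'static' it prints page_content, otherwise it loads the latest posts. In Doctrine terms this maps naturally to single-table inheritance, with page_type as the discriminator column.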

Most idiomatic way to implement UPSERT in Postgresql nowadays

Posted: 07 Oct 2013 07:16 PM PDT

I've read about different UPSERT implementations in PostgreSQL, but all of these solutions are relatively old or relatively exotic (using writeable CTE, for example).

And I'm just not enough of a PostgreSQL expert to tell immediately whether these solutions are old because they are well proven, or whether they are (well, almost all of them) just toy examples not appropriate for production use.

So my question is the following: given that it is the year 2012, what is the most common, most thread-safe way to implement UPSERT in PostgreSQL?
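For reference, the canonical concurrency-safe pattern from that era is the retry loop given in the PostgreSQL documentation, sketched here for a hypothetical db(key, data) table:

```sql
-- Sketch of the documented retry-loop UPSERT (table and names illustrative).
CREATE FUNCTION merge_db(k INT, v TEXT) RETURNS VOID AS
$$
BEGIN
    LOOP
        -- First try to update an existing row.
        UPDATE db SET data = v WHERE key = k;
        IF found THEN
            RETURN;
        END IF;
        -- Row not there: try to insert; another session may race us.
        BEGIN
            INSERT INTO db (key, data) VALUES (k, v);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- Someone else inserted concurrently; loop and retry the UPDATE.
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```

(Readers on PostgreSQL 9.5 or later have a native alternative: INSERT ... ON CONFLICT DO UPDATE has since made this pattern largely obsolete.)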

What's the difference between a CTE and a Temp Table?

Posted: 07 Oct 2013 06:06 PM PDT

What is the difference between a Common Table Expression (CTE) and a temp table? And when should I use one over the other?

CTE

WITH cte (Column1, Column2, Column3)
AS
(
    SELECT Column1, Column2, Column3
    FROM SomeTable
)
SELECT * FROM cte

Temp Table

SELECT Column1, Column2, Column3
INTO #tmpTable
FROM SomeTable

SELECT * FROM #tmpTable

[MS SQL Server] Seeing recompiles when changed AUTO_UPDATE_STATISTICS to OFF



Seeing recompiles when changed AUTO_UPDATE_STATISTICS to OFF

Posted: 07 Oct 2013 04:15 AM PDT

The reasons why the proc cache is flushed are listed here: http://msdn.microsoft.com/en-us/library/bb522682.aspx, and AUTO_UPDATE_STATISTICS is not in this list. So I changed this option to OFF on production and saw a 100% spike that lasted about 4 minutes, along with a lot of recompiles. What am I missing?

Update statistics job is failing

Posted: 07 Oct 2013 02:12 AM PDT

I've set up a Maintenance Plan (yeah, yeah, code it up myself and create the Agent job, I know... :hehe:) to run Update Statistics once a week on a server here.

Well, the past couple of times it's run, it's come back with an error, and I'm not sure how to resolve it:

Executing the query "UPDATE STATISTICS [dbo].[TABLENAME] WITH FULLSCAN" failed with the following error: "Error converting data type varchar to numeric.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.

No, the table in question isn't really named "TABLENAME." The table does have quite a few computed columns, but there's nothing in BOL to indicate that these should be a problem. Yes, this table has quite a few columns (106, of which 39 are computed), but it was (I believe) inherited from a mainframe system.

I know I could exclude this DB from the update job, but I'd rather not. Further, it has worked in the recent past (about 3 weeks back), and the dev responsible for it hasn't made any schema changes in months.

Any suggestions?

Thanks,
Jason
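Since the error looks like a computed-column definition performing an implicit varchar-to-numeric conversion, one place to start (a sketch; substitute the real table name) is listing the computed columns with their definitions:

```sql
-- List the computed columns on the suspect table with their definitions.
SELECT name, definition, is_persisted
FROM sys.computed_columns
WHERE object_id = OBJECT_ID(N'dbo.TABLENAME');
```

Any definition that converts character data to numeric will fail during the statistics scan as soon as a non-numeric value appears in the underlying column, which would also explain why the job succeeded three weeks ago and fails now.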

Missing Drives on Remote server

Posted: 07 Oct 2013 03:42 AM PDT

Hi, I am looking to find missing drives on a remote server. We monitor servers in bulk, but we had an issue with one server where a drive disappeared. We had no way of noticing until it came to our attention: backup failures started, and then we identified that the drive was missing. Can anyone help me find these issues using a T-SQL script, or any better way to catch them? Replies are most welcome. Cheers
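As a starting point (a sketch; xp_fixeddrives is undocumented but widely used), you can enumerate the fixed drives SQL Server currently sees on each monitored instance and diff the list against a stored baseline:

```sql
-- Returns one row per visible fixed drive with its free space (MB).
EXEC master..xp_fixeddrives;

-- Illustrative pattern: capture the result so it can be compared
-- against a persisted baseline of expected drive letters.
CREATE TABLE #drives (drive CHAR(1), free_mb INT);
INSERT INTO #drives EXEC master..xp_fixeddrives;
-- Alert when a letter present in the baseline is absent from #drives.
```

Run from a central monitoring job against each registered server, a drive letter that drops out of the result set can raise an alert before the backups start failing.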

[SQL 2012] SQL 2012 Availability groups



SQL 2012 Availability groups

Posted: 07 Oct 2013 03:00 AM PDT

I have had a Google around how to set these up, and it seems fairly simple; I am just trying to get my head around the storage side. We know it can't be shared storage, as both nodes (in a two-node cluster) need access at the same time. On a Windows 2008 server, how would you set the storage up? Would you use a file share as the witness? And could you have one of the SQL instances taking the data load etc. while the replica handles the reporting, backups, etc.? How would it work on a Windows 2012 server?

SSRS report with multiple datasets

Posted: 06 Oct 2013 11:11 AM PDT

I need to create one report with 5 different datasets from 5 different data sources. The 5 data sources are on 5 different servers, and they are not linked servers. So I am creating 5 datasets with 5 different data sources, executing 5 different stored procedures, and getting 5 result sets with the same column names. I need to union the result sets of all 5 stored procedures and use them in one report. How can I do this? Thanks.

How to capture trace data for update statistics testing

Posted: 06 Oct 2013 11:10 PM PDT

I want to focus on tracing queries that are most dependent on really good statistics.

I've learned how to do server-side traces, which seem to have a light impact on our production server. Now I'm asked to capture traces that can be replayed in our disaster-recovery environment, in an attempt to find the "sweet spot" for the update-statistics sample rate. For years we ran them at full scan (100% sample); then, as the database grew, we backed off to 80%. Now we want to see whether we should go back to 100%, which will multi-thread (parallelize), or lower the rate.

This database has clients of many different sizes, and the application uses an ORM. It's quite relational, so it often does numerous joins. The business allows end users to run LIKE queries from the application, sometimes double-wildcard LIKE queries, so I was leaning towards capturing those. After numerous Monday-morning performance crises over the years, everyone is afraid to lower the sample rate, but I found several articles indicating that until you get below a 50% sample, full scan may actually run faster, since it doesn't have to sort and use tempdb (and runs parallel threads).

Auto create/update stats is on, as is asynchronous update. We update stats using an algorithm that finds those needing an update the most, based on rows modified and table size: 3 hours every weeknight and several hours on Saturday, all in addition to regular index rebuilds/reorgs. SQL 2012 Enterprise, 64 cores hyperthreaded, max memory at 368 GB.

About a SQL Server career

Posted: 06 Oct 2013 09:43 PM PDT

Hi all, this is Prashant. I have completed my B.Tech in Electronics and Communication Engineering. I am working on .NET, and my favorite subject is SQL Server. I want to be a SQL Server DBA. Are there any opportunities for freshers as a SQL Server DBA? Please reply; I am confused about choosing my career. Thank you.

How do I add a new sheet with the servers name to an open workbook? using power shell

Posted: 07 Oct 2013 12:45 AM PDT

Hi, I am running my PS script against a list of servers and pushing the output to Excel. What I would like to end up with is one sheet per server. Right now, I am getting one workbook per server. How do I add a new sheet, named after the server, to an open workbook?
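With the Excel COM object, the trick is to create the workbook once, outside the server loop, and add a worksheet per server inside it. A sketch (the server list and save path are illustrative; COM member names are as I recall them and worth verifying):

```powershell
# Hypothetical sketch: one workbook, one named sheet per server.
$excel = New-Object -ComObject Excel.Application
$excel.Visible = $true
$workbook = $excel.Workbooks.Add()              # created ONCE, before the loop

$servers = @('SERVER1', 'SERVER2', 'SERVER3')   # illustrative list
foreach ($server in $servers) {
    $sheet = $workbook.Worksheets.Add()
    $sheet.Name = $server                       # sheet named after the server
    $sheet.Cells.Item(1, 1) = "Results for $server"
    # ... write your per-server output into $sheet here ...
}
$workbook.SaveAs("C:\temp\ServerReport.xlsx")   # illustrative path
```

Note that Excel sheet names are limited to 31 characters and cannot contain characters such as \ / ? * [ ], so long server names may need truncating.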

File Format Not Valid message when installing SQL Enterprise 2012

Posted: 06 Oct 2013 07:48 PM PDT

Hi all, I am getting the "File format not valid" message at a very early stage of the SQL Server 2012 Enterprise installation process on my Windows 7 machine. I have tried these 2 options, but both failed:

(1) mounting the ISO on Virtual CloneDrive and running setup from there;
(2) extracting the files from the ISO using WinRAR and running setup from the extracted files.

The message appears just after I press the Next button, following the Product Key entry. Any ideas how to fix this, please?

Need help in writing a query

Posted: 06 Oct 2013 04:47 AM PDT

Hi, I have the following tables:

1. Units table: ID, Serial number, Name, Location
2. UnitsData table: ID, UnitID (int), DateOfData (datetime), Value (float)

I have to show the following information in a gridview, with one line for every unit, and I have not succeeded in doing it with a SQL query:

1. Unit's name.
2. Unit's serial number.
3. Unit's location.
4. Average of the Value column for the unit over the last 30 days.
5. The latest DateOfData of the unit.
6. The Value at that latest DateOfData.
7. True if the latest data was received in the last 48 hours; otherwise false.
8. True if the same value was received on different dates within the past 48 hours.

I know it is a big question. I will appreciate any help about it. Thanks
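A sketch of one way to do it in a single query with OUTER APPLY (assuming SQL Server; column names are adapted from the description, e.g. SerialNumber for "Serial number"):

```sql
SELECT u.Name,
       u.SerialNumber,
       u.Location,
       avg30.AvgValue                  AS Avg30Days,     -- 30-day average
       latest.DateOfData               AS LatestDate,
       latest.Value                    AS LatestValue,
       CASE WHEN latest.DateOfData >= DATEADD(HOUR, -48, GETDATE())
            THEN 'true' ELSE 'false' END AS FreshLast48h,
       CASE WHEN EXISTS (SELECT d.Value
                         FROM UnitsData d
                         WHERE d.UnitID = u.ID
                           AND d.DateOfData >= DATEADD(HOUR, -48, GETDATE())
                         GROUP BY d.Value
                         HAVING COUNT(DISTINCT d.DateOfData) > 1)
            THEN 'true' ELSE 'false' END AS RepeatedValue48h
FROM Units u
OUTER APPLY (SELECT AVG(d.Value) AS AvgValue
             FROM UnitsData d
             WHERE d.UnitID = u.ID
               AND d.DateOfData >= DATEADD(DAY, -30, GETDATE())) avg30
OUTER APPLY (SELECT TOP (1) d.DateOfData, d.Value
             FROM UnitsData d
             WHERE d.UnitID = u.ID
             ORDER BY d.DateOfData DESC) latest;
```

An index on UnitsData (UnitID, DateOfData) INCLUDE (Value) would keep both APPLY subqueries cheap.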
