Monday, July 15, 2013

[T-SQL] Sum Negative Numbers



Sum Negative Numbers

Posted: 14 Jul 2013 03:53 PM PDT

Hi All, I have a column GNLBalance with negative and positive balances. I want to add these numbers together when another column GNLFSMID in the same table is equal to a series of numbers. I need the total closing/opening balance for each client with their account, account group & division.

Balance business rules:
2013 Opening balance = Closing balance 2012 (opening 2012 + Jan 12 to Dec 12 balance)
Closing balance 2013 = Opening + Jan 13 to current month balance
So:
Opening: FSMID 113 to 125
Closing: FSMID (113 to 125) + 127, 128, 129, 130

DDL:

Table GeneralLedger
GNLID  GNLCLIID  GNLCOAID  GNLFSMID  GNLBalanceBase
1      1385      576       113       -4845.0000
2      1385      713       114       -395.8500
3      3139      39        115       8703.3400
4      727       39        116       -1321.6500
5      727       39        117       -8811.0000
6      3139      713       118       -15416.5200
7      1480      713       119       18429.9200
8      3144      39        120       1321.6500
9      1480      713       121       -10799.0000
10     3144      576       122       4371.0000
11     3139      713       123       1619.8500
12     727       45        124       1786.1400
13     3139      39        125       46.4200
14     727       576       127       -12802.4000
15     1480      713       128       1198.2600
16     1480      713       129       -1785.0000
17     3139      713       130       800.0000

Table Clients
CLIID  CLICode  CLIName
727    HARK     HARK CONSULTING
1385   3HPARC   3HARCHITECTS
1480   GUYCAR   GUY CARPENTER
3139   ABNAP    ABB PTE LTD
3144   SYSACC   SYSTEM ACCESS

Table ChartOfAccount
COAID  COAName1        COACode2Code  COACode2Name2           COAAGPID
39     Total Billings  RV10          RV10 Billings           1
45     Prod Billing    RV40          RV40 Revenue            2
576    Loan            FD100G        FD100G Financial Debts  3
713    Receivables     FA301G        FA301G Loans Advances   4

Table AccountGroup
AGPID  AGPName1
1      Assets
2      Goodwill
3      Deferred
4      Trade
5      Cash

Table FiscalMonth
FSMID  FSMName1
113    Opening Balance 12
114    Jan 12
115    Feb 12
116    Mar 12
117    Apr 12
118    May 12
119    Jun 12
120    Jul 12
121    Aug 12
122    Sep 12
123    Oct 12
124    Nov 12
125    Dec 12
127    Opening Balance 13
128    Jan 13
129    Feb 13
130    Mar 13

Table ClientOwner
CLOID  CLOCLIID  CLODIVID
1      727       1
2      1385      2
3      1480      3
4      3139      3
5      3144      4
6      727       5

Table Division
DIVID  DIVName1
1      Digital
2      Zenith
3      Stars
4      MSL
5      Leo

Expected results: Client code (CLICode), GL Account Group (AGPName1), GL Account (COAName1), Closing balance (total of GNLBalanceBase when FSMID = 113 to 130), Division (DIVName1), GLChartCode (COACode2Code):

3HPARC, Deferred, Loan, -4845, Zenith, FD100G
3HPARC, Trade, Receivables, -395.85, Zenith, FA301G
HARK, Assets, Total Billings, -10132.65, Digital, RV10
GUYCAR, Trade, Receivables, 7044.18, Stars, FA301G

[code="sql"]select distinct
    c.CLIName1 as 'Client',
    c.CLICode as 'Client Code',
    g.AGPName1 as 'GL Account Group',
    a.COAName1 as 'GL Account',
    CASE WHEN l.GNLFSMID IN (113,114,115,116,117,118,119,120,121,122,123,124,125,127,128,129,130)
         THEN SUM(l.GNLBalanceBase) ELSE '0' END AS [Closing Balance],
    d.DIVName1 as 'Division',
    d.DIVName2 as 'Division (2)',
    t.CLTName1 as 'Client Type',
    a.[COACode2Code] as 'GLChart Code 2',
    a.COACode2Name1 as 'GLChart Code 2 Name',
    CASE WHEN l.GNLFSMID IN (113,114,115,116,117,118,119,120,121,122,123,124,125)
         THEN SUM(l.GNLBalanceBase) ELSE '0' END AS [Opening Balance]
from vwClient c
inner join [vwChartOfAccount] a on a.[COASourceID] = c.[CLISourceID]
inner join [vwChartOfAccountGroup] g on g.[AGPID] = a.COAAGPID
inner join [vwGeneralLedger] l on l.GNLCLIID = c.[CLIID]
inner join [vwclientOwner] o on o.CLOID = c.CLIID
inner join [vwDivision] d on d.DIVID = o.CLODIVID
inner join [vwClientType] t on t.CLTID = c.[CLICLTID]
inner join [DW].[vwFiscalMonth] m on l.GNLFSMID = m.FSMID
group by c.CLIName1, c.CLICode, g.AGPName1, a.COAName1, l.GNLFSMID,
    d.DIVName1, d.DIVName2, t.CLTName1, a.COACode2Code, a.COACode2Name1[/code]

previous week query 0700 - 0700

Posted: 14 Jul 2013 07:34 AM PDT

Hi all, I have been asked to generate reports from my C# package every Monday at 0700hrs, covering the previous week up to Monday 0700hrs:

Monday 0700 > 0000
Tues 24hrs
Wed 24hrs
Thurs 24hrs
Fri 24hrs
Sat 24hrs
Sun 24hrs
Mon 0000 > 0700

One week's worth of data, 0700hrs Monday to Monday. Can anybody help me make this possible? I am using this, but it needs amending for the time of day and I am not sure how:

select * from dbo.DocketTB where Docket_EngFinish between (getdate()-7) and getdate()

Docket_EngFinish is a DateTime datatype. Many thanks for help and advice. Jay
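One way to pin both endpoints to Monday 07:00 (a sketch, not tested against your schema): day 0 in SQL Server date arithmetic (1900-01-01) was a Monday, so integer-dividing the day count by 7 lands on the most recent Monday regardless of the DATEFIRST setting. The DECLARE-with-initializer syntax needs SQL Server 2008 or later.

```sql
-- Most recent Monday at 00:00, shifted to 07:00:
DECLARE @monday0700 datetime =
    DATEADD(hour, 7, DATEADD(day, (DATEDIFF(day, 0, GETDATE()) / 7) * 7, 0));

SELECT *
FROM dbo.DocketTB
WHERE Docket_EngFinish >= DATEADD(day, -7, @monday0700)
  AND Docket_EngFinish <  @monday0700;  -- half-open range avoids double-counting rows at 07:00
```

If the job fires at 07:00 on Monday, @monday0700 evaluates to that same morning, so the range is exactly the previous Monday 07:00 up to (but excluding) now.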

[SQL Server 2008 issues] Error on Restore



Error on Restore

Posted: 14 Jul 2013 06:24 PM PDT

Hi,

The file 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\sample.mdf' cannot be overwritten. It is being used by database 'sample'.
File 'sample' cannot be restored to 'C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\sample.mdf'. Use WITH MOVE to identify a valid location for the file.
Problems were identified while planning for the RESTORE statement. Previous messages provide details.
RESTORE DATABASE is terminating abnormally.

Please help!
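The message itself points at the fix: either restore over the live database (after closing its connections, adding WITH REPLACE), or restore as a separate database using WITH MOVE. A hedged sketch; the backup path, the new database name, and the logical log-file name are assumptions:

```sql
-- Restore as a separate database, relocating both files:
RESTORE DATABASE sample_copy
FROM DISK = N'C:\Backups\sample.bak'
WITH MOVE N'sample'     TO N'C:\Data\sample_copy.mdf',     -- logical data file (named in the error)
     MOVE N'sample_log' TO N'C:\Data\sample_copy_log.ldf', -- logical log name is an assumption
     RECOVERY;
```

Run `RESTORE FILELISTONLY FROM DISK = ...` first if you need to confirm the logical names.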

Restore the database into temp database

Posted: 14 Jul 2013 05:28 PM PDT

Hi, how can I restore a database into a temp (copy) database in SQL Server 2008 R2? Thanks in advance.
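A minimal sketch of restoring a copy under a new name in 2008 R2: list the logical file names in the backup first, then relocate each one with MOVE (all paths and names below are illustrative):

```sql
-- 1. Inspect the logical file names contained in the backup:
RESTORE FILELISTONLY FROM DISK = N'C:\Backups\mydb.bak';

-- 2. Restore under a temporary name, relocating each logical file:
RESTORE DATABASE mydb_temp
FROM DISK = N'C:\Backups\mydb.bak'
WITH MOVE N'mydb_data' TO N'C:\Data\mydb_temp.mdf',
     MOVE N'mydb_log'  TO N'C:\Data\mydb_temp_log.ldf';
```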

Raid 10

Posted: 14 Jul 2013 06:15 PM PDT

Hello. The average bytes/read is between 64K and 128K, and the average bytes/write is between 32K and 64K. Which is better: RAID 10 2+2 with a 128K stripe size and 64K cluster size, or RAID 10 4+4 with a 128K stripe size and 32K cluster size? Thanks.

declare select

Posted: 14 Jul 2013 05:02 PM PDT

create table kateqor (
    id int not null identity(1,1),
    Name nvarchar(50)
)
insert into kateqor(Name) values (N'Mənzildə qurum')
select * from kateqor

but when I do a query

declare @kat nvarchar(50)
set @kat = 'Mənzildə qurum'
select * from kateqor where Name = N(@kat)

I receive an error:

Msg 195, Level 15, State 10, Line 4
'N' is not a recognized function name.
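The N prefix is only valid in front of a string literal; it is not a function and cannot be applied to a variable. @kat is already nvarchar, so no conversion is needed; this sketch of the corrected query drops the N():

```sql
declare @kat nvarchar(50)
set @kat = N'Mənzildə qurum'   -- N belongs on the literal here
select * from kateqor where Name = @kat
```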

SQL 2008 Error: 4014, Severity: 20, State: 11

Posted: 23 Feb 2010 10:46 PM PST

Hi, I have had a problem with our SQL server which has resulted in the error message below. I have searched Microsoft but they don't have any further information on the error. I wondered if anyone else has come across it. It is causing a problem because it caused our ISA to crash, as it was unable to write to the database for more than 30 seconds! Any ideas much appreciated! Many thanks! Fraggle

Log Name: Application
Source: MSSQL$CORPORATE
Date: 24/02/2010 12:20:53
Event ID: 4014
Task Category: Server
Level: Error
Keywords: Classic
User: N/A
Computer: W8-SQL3A.XXX
Description: A fatal error occurred while reading the input stream from the network. The session will be terminated (input error: 64, output error: 0)

SQL Server Error Log retention

Posted: 14 Jul 2013 02:14 PM PDT

Hi guys, I understand that by default SQL Server keeps 7 error logs, and the oldest error log is deleted each time SQL Server restarts. I'm looking for a solution that will retain the error logs for one year (due to a security policy). Increasing the number of error logs doesn't help much, as I can't predict how many times SQL Server will be restarted in a year. Does anyone have a solution for a rule that only removes error logs older than one year? Thanks
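One commonly used approach (a sketch; verify it against your security policy): cycle the log on a fixed schedule so each file maps to a known period, and raise the retained-file cap so restarts don't age files out early. An Agent job can then archive or delete files by age.

```sql
-- Roll to a new ERRORLOG file, e.g. from a monthly SQL Agent job:
EXEC sp_cycle_errorlog;

-- Keep up to 99 files (the maximum) instead of the default 6 archives;
-- this is the same setting SSMS writes via "Configure SQL Server Error Logs":
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE',
    N'Software\Microsoft\MSSQLServer\MSSQLServer',
    N'NumErrorLogs', REG_DWORD, 99;
```

With monthly cycling, 99 files cover a year comfortably even with a fair number of unplanned restarts.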

sp_addextendedproperty

Posted: 14 Jul 2013 05:38 PM PDT

Hi All, I came across sp_addextendedproperty in AdventureWorks2008R2:

EXEC sys.sp_addextendedproperty
    @name = N'MS_Description',
    @value = N'Primary key for ContactType records.',
    @level0type = N'SCHEMA', @level0name = N'Person',
    @level1type = N'TABLE',  @level1name = N'ContactType',
    @level2type = N'COLUMN', @level2name = N'ContactTypeID'
GO

Can someone explain the use of this sp? Where and when do you use it? If possible, please explain with another example.
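In short, extended properties attach free-form metadata (here, a description) to database objects, which documentation and data-dictionary tools can read back. For the column documented above, the stored property can be retrieved like this:

```sql
-- List the MS_Description property for the ContactTypeID column:
SELECT objtype, objname, name, value
FROM sys.fn_listextendedproperty(
    N'MS_Description',
    N'SCHEMA', N'Person',
    N'TABLE',  N'ContactType',
    N'COLUMN', N'ContactTypeID');
```

Passing NULLs for the lower levels widens the listing (e.g. all documented columns of a table).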

SQL Disk Error

Posted: 14 Jul 2013 05:00 PM PDT

Dear Gurus, I am getting the attached disk error when installing a SECOND instance of SQL 2008 R2. Please help:

http://tinypic.com/view.php?pic=2nuk85z&s=5
http://tinypic.com/r/14crv5u/5
http://tinypic.com/r/t057jr/5

Your kind help is needed. Thanks, Malik Adeel Imtiaz

Restoring diff backup problem

Posted: 14 Jul 2013 09:14 AM PDT

I am trying to mirror a database. I have restored the full database backup; now I need to restore the diff backup and the log backup. Can I please have some assistance? I am not sure how to do it.

Clustered Index Ordering & Insert Speeds

Posted: 14 Jul 2013 12:13 PM PDT

Hi, we used to have a clustered index consisting of 5 fields which was ordered incorrectly, so new inserts caused a lot of bad page splits.

The CI was reordered so that the composite (5-field) key is now always ascending, meaning nothing is inserted in between pages. I confirmed via the transaction logs that page splits are now minimal. However, inserts now take approximately 10 times longer than before.

An additional non-clustered index was added to cover scenarios the new CI doesn't; this is causing some new page splits, but I doubt those alone make that much of a difference. I use the SSIS OLE DB Destination with the Fast Load & Check Constraints options.

Any idea what the issue would be, or what I should look at?

Restoring database to set up mirroring

Posted: 14 Jul 2013 11:55 AM PDT

I have a 123 GB database and I am trying to set up mirroring for it. We take a full backup every week, a diff backup every night, and a transaction log backup every hour. The last full backup was taken 2 days ago, and we have diff backups since that full backup. What should the restore strategy for the mirrored database be? Please advise.
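For mirroring, the standard sequence is: restore the last full, then the most recent diff, then every log backup taken since that diff, all WITH NORECOVERY, and keep applying log backups until the mirroring session is started. A sketch with illustrative paths:

```sql
-- Everything stays NORECOVERY so the mirror can keep accepting log records:
RESTORE DATABASE MyDb FROM DISK = N'\\backups\MyDb_full.bak' WITH NORECOVERY;
RESTORE DATABASE MyDb FROM DISK = N'\\backups\MyDb_diff.bak' WITH NORECOVERY;  -- most recent diff only
RESTORE LOG MyDb FROM DISK = N'\\backups\MyDb_log_01.trn' WITH NORECOVERY;     -- then each log backup
RESTORE LOG MyDb FROM DISK = N'\\backups\MyDb_log_02.trn' WITH NORECOVERY;     -- ...in sequence
```

Once the mirror is within one log backup of the principal, ALTER DATABASE ... SET PARTNER completes the setup.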

SQL 2000 - Maintenance Plan

Posted: 14 Jul 2013 09:13 AM PDT

In SQL 2000, while creating a maintenance plan there was an Optimizations tab which REORGANIZED the data and indexes. What does it really do? Does it reorganize all the indexes in the database?

How to get a month-wise row of records whether a transaction is present or not

Posted: 14 Jul 2013 03:06 AM PDT

Hi, is it possible to write a single query to get the results below? I need to get month-wise records from the transaction table. If no records are present for a month, then the previous month's set of common column values (Company, Customer ID, Account No, Type) should be created for the current month, with the previous month's amount as the opening balance, the current amount as 0, and the closing balance equal to the opening balance. Thanks
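Without the table definition this can only be sketched. The usual shape is to generate the month list first, then outer-join the transactions so months with no rows still produce a row; every table and column name below is an assumption:

```sql
-- Generate one row per month, then outer-join the (assumed) transactions table:
;WITH months AS (
    SELECT CAST('2013-01-01' AS date) AS month_start
    UNION ALL
    SELECT DATEADD(month, 1, month_start)
    FROM months
    WHERE month_start < '2013-12-01'
)
SELECT m.month_start, t.Company, t.CustomerID, t.AccountNo, t.[Type], t.Amount
FROM months AS m
LEFT JOIN dbo.Transactions AS t
       ON t.TranMonth = m.month_start;
-- Carrying the previous month's closing balance into the empty rows can then
-- be done with OUTER APPLY (SELECT TOP (1) ... ORDER BY TranMonth DESC).
```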

Sunday, July 14, 2013

[how to] SQL Database Hosting Solutions?



SQL Database Hosting Solutions?

Posted: 14 Jul 2013 05:30 PM PDT

I have a client who is running a local SQL Server (for 2 computers) for a client application. They are considering upgrading the local DB to a hosted database.

In all the places I look (Amazon RDS, Azure, etc.), the prices seem expensive. Their database has fewer than 1,000 records. Any suggestions? Am I looking in the wrong places?

SQL Server 2005 onwards- Capturing Baseline

Posted: 14 Jul 2013 10:30 AM PDT

I want to design a process to capture baseline data from multiple SQL Servers (2005 onwards) into a single database, from which I can get all the info and alerts. For now, I want to capture the following counters:

  1. Free space available on disks
  2. Database file and log space available
  3. Wait statistics
  4. CPU and memory counters
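Two of these counters can be captured with plain DMV queries (both available from SQL Server 2005 on), which a collection job could write into the central database; a starting point, not a full solution:

```sql
-- File size and internal free space per database file, in MB:
SELECT DB_NAME() AS db, name, type_desc,
       size / 128 AS size_mb,
       size / 128 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) / 128 AS free_mb
FROM sys.database_files;

-- Top waits accumulated since the last restart:
SELECT TOP (10) wait_type, wait_time_ms, waiting_tasks_count
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;
```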

Is there any free tool that I can refer to and modify if needed, or any useful article that can help me set this up?

As of now I am referring to the article by Erin Stellato in SQLServerCentral.com.

Any help would be greatly appreciated.

Thanks Subhro

Finding rows for a specified date range

Posted: 14 Jul 2013 02:08 PM PDT

I have a table which stores which teacher (teacherid) works for which group (groupid) of pupils starting from the date startdate:

CREATE TABLE `pupilgroupteacher` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `groupid` smallint(5) unsigned NOT NULL,
  `startdate` date NOT NULL,
  `teacherid` int(10) unsigned DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `date` (`groupid`,`startdate`),
  KEY `teacher` (`teacherid`),
  KEY `group` (`groupid`),
  CONSTRAINT `fk_pupilgroupteacher_2` FOREIGN KEY (`groupid`) REFERENCES `pupilgroup` (`id`) ON DELETE NO ACTION ON UPDATE NO ACTION,
  CONSTRAINT `fk_pupilgroupteacher_1` FOREIGN KEY (`teacherid`) REFERENCES `employee` (`personid`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin

Having a teacher ID and a month, I need to find all groups for which this teacher worked (or is going to work) at least one day in the given month.

Note: For a given teacherid there may be more than one startdate (if, for example, a teacher worked for a group, was replaced with another teacher and then replaced back on a later startdate).
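A sketch of one approach, treating each row as valid from its startdate until the same group's next startdate (@teacher and @month_start are placeholder values):

```sql
SET @teacher := 42;
SET @month_start := '2013-07-01';

SELECT DISTINCT p.groupid
FROM pupilgroupteacher AS p
WHERE p.teacherid = @teacher
  AND p.startdate <= LAST_DAY(@month_start)  -- assignment begins before the month ends
  AND NOT EXISTS (                           -- ...and is not fully superseded by a later
      SELECT 1                               -- assignment of the group before the month starts
      FROM pupilgroupteacher AS q
      WHERE q.groupid   = p.groupid
        AND q.startdate > p.startdate
        AND q.startdate <= @month_start
  );
```

The boundary day (a replacement starting exactly on the 1st) is ambiguous in the schema; adjust the `<=` on the last line to match how you interpret handover days.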

How does log shipping know to keep track

Posted: 14 Jul 2013 12:39 PM PDT

I have set up log shipping between 2 SQL Servers (SQL Server 2008 and SQL Server 2012).

At first I took a full backup of the first one and restored it on the other one, and then I set up the log shipping. The initial backup and restore took a few hours, and below are the questions I need to understand:

  • How does the log shipping process know from when to start the logs it is shipping?
  • Does it start from when the last backup-restore happened, or from the moment log shipping itself was set up?
  • Can I be sure that no information was lost in those few hours?

GeomFromText in trigger has error

Posted: 14 Jul 2013 05:50 PM PDT

I have this trigger:

DELIMITER $$
CREATE TRIGGER geometrytrigger BEFORE INSERT ON users
FOR EACH ROW BEGIN
  SET NEW.user_geometry = GeomFromText(NEW.user_geometry);
END$$
DELIMITER ;

when I insert a row with this query:

INSERT INTO `users` (`id`, `name`, `email`, `password`, `user_geometry`) VALUES (NULL, 'arash', 'arash@host.com', 'sssssss', 'POINT(-0.23 169.93)');  

this error has occurred:

 #1416 - Cannot get geometry object from data you send to the GEOMETRY field  
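The failure happens before the trigger body runs: MySQL converts the inserted value to the GEOMETRY column type first, and a plain string cannot be coerced, so NEW.user_geometry never receives the text. Converting in the statement itself is the usual fix (a sketch using the same row):

```sql
-- Convert the WKT literal at insert time instead of in the trigger:
INSERT INTO `users` (`id`, `name`, `email`, `password`, `user_geometry`)
VALUES (NULL, 'arash', 'arash@host.com', 'sssssss',
        GeomFromText('POINT(-0.23 169.93)'));
```

(In MySQL 5.7+ the function is spelled ST_GeomFromText.)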

How does log shipping know to keep track [duplicate]

Posted: 14 Jul 2013 01:21 PM PDT

This question already has an answer here:

I have set up log shipping between 2 SQL Servers (SQL Server 2008 and SQL Server 2012).

At first I took a full backup of the first one and restored it on the other one, and then I set up the log shipping.

The initial backup and restore took a few hours, and I wish to know:

  • How does the log shipping process know from when to start the logs it is shipping? Does it start from when the last backup-restore happened, or from the moment log shipping itself was set up?
  • Can I be sure that no information was lost in those few hours?

Thanks

How can we check for data integrity when we upgrade from MySQL 4.1 to MySQL 5.5?

Posted: 14 Jul 2013 02:57 AM PDT

By data integrity, I mean:

  • Schema integrity
  • Data (record level) integrity

Checksum was an option, but since the storage format for some data types in MySQL 5.5 has been changed, it won't be reliable.

Another option is to parse the XML file generated by mysqldump. Since the file is too large (in GBs), is it worth it?

Is there any other way you have confirmed it?

School database design

Posted: 14 Jul 2013 02:56 AM PDT

I am trying to design a database management system with my PHP and intermediate MySQL knowledge, keeping normalization in mind.

These are the assumptions:

  1. The school has many students from grade 1 to grade 6.
  2. Each grade is divided into classes (e.g. we have grade 1 A, grade 1 B, grade 1 C) due to the class size.
  3. Grade 1 to grade 3 offer the same subjects, but from grade 4 to grade 6 users can select which courses to offer.
  4. Many teachers teach ONLY one subject BUT can teach MORE THAN one grade.

As an intermediate programmer, it all got twisted. I was able to create the following tables and columns:

students{(id), (name), (name), (parent_id)} - student details
parents{(id), (name), (email), (phone), (address)} - parent details
teachers{(id), (name), (email), (subject_id)} - teacher details
subjects{(id), (name), (description)} - subject details
attendance{(date), (status), (student_id)} - attendance of the student with id student_id; status is boolean, TRUE = present, FALSE = absent

This is how far I have gone.

Now I want to show a table where a student can be linked to all the subjects he/she is taking and a teacher linked to all the grades (no need to specify classes) he is teaching. How can I do that?

I will appreciate it if I can be pointed in the right direction.
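The missing pieces are many-to-many link tables. A minimal sketch (table and column names are assumptions) for student↔subject and teacher↔grade:

```sql
-- Which subjects each student takes:
CREATE TABLE student_subjects (
    student_id INT NOT NULL,
    subject_id INT NOT NULL,
    PRIMARY KEY (student_id, subject_id),
    FOREIGN KEY (student_id) REFERENCES students(id),
    FOREIGN KEY (subject_id) REFERENCES subjects(id)
);

-- Which grades each teacher teaches (one subject per teacher is already
-- captured by teachers.subject_id):
CREATE TABLE teacher_grades (
    teacher_id INT NOT NULL,
    grade TINYINT NOT NULL,   -- 1..6
    PRIMARY KEY (teacher_id, grade),
    FOREIGN KEY (teacher_id) REFERENCES teachers(id)
);
```

Joining students → student_subjects → subjects then lists each student's subjects, and teachers → teacher_grades lists the grades each teacher covers.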

Postgres database locked up

Posted: 13 Jul 2013 10:10 PM PDT

I was running a process to update a bunch of rows in a Postgres database and it locked up. Now I can't select anything out of it. None of my processes are "idle in transaction" when I execute ps auxwww|grep ^postgres

Also the output of my locks select is:

-[ RECORD 1 ]------+----------------
locktype           | relation
database           | 16384
relation           | 11000
page               |
tuple              |
virtualxid         |
transactionid      |
classid            |
objid              |
objsubid           |
virtualtransaction | 3/4
pid                | 18655
mode               | AccessShareLock
granted            | t
-[ RECORD 2 ]------+----------------
locktype           | virtualxid
database           |
relation           |
page               |
tuple              |
virtualxid         | 3/4
transactionid      |
classid            |
objid              |
objsubid           |
virtualtransaction | 3/4
pid                | 18655
mode               | ExclusiveLock
granted            | t

but I'm not sure what I can do with that info. I also restarted my db, and no luck. Any ideas?

Check existence with EXISTS outperform COUNT! ... Not?

Posted: 14 Jul 2013 07:26 PM PDT

I've often read that checking for the existence of a row should always be done with EXISTS instead of COUNT.

Yet in several recent scenarios I've measured a performance improvement when using count.
The pattern goes like this:

LEFT JOIN (
    SELECT
        someID
        , COUNT(*)
    FROM someTable
    GROUP BY someID
) AS Alias ON (
    Alias.someID = mainTable.ID
)

I'm not familiar with methods to tell what's happening "inside" SQL Server, so I was wondering if there is an unheralded flaw with EXISTS that would make sense of the measurements I've done (could EXISTS be RBAR?!).

Do you have an explanation for this phenomenon?

EDIT:

Here's a full script you can run:

SET NOCOUNT ON
SET STATISTICS IO OFF

DECLARE @tmp1 TABLE (
    ID INT UNIQUE
)

DECLARE @tmp2 TABLE (
    ID INT
    , X INT IDENTITY
    , UNIQUE (ID, X)
)

; WITH T(n) AS (
    SELECT
        ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
    FROM master.dbo.spt_values AS S
)
, tally(n) AS (
    SELECT
        T2.n * 100 + T1.n
    FROM T AS T1
    CROSS JOIN T AS T2
    WHERE T1.n <= 100
    AND T2.n <= 100
)
INSERT @tmp1
SELECT n
FROM tally AS T1
WHERE n < 10000

; WITH T(n) AS (
    SELECT
        ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
    FROM master.dbo.spt_values AS S
)
, tally(n) AS (
    SELECT
        T2.n * 100 + T1.n
    FROM T AS T1
    CROSS JOIN T AS T2
    WHERE T1.n <= 100
    AND T2.n <= 100
)
INSERT @tmp2
SELECT T1.n
FROM tally AS T1
CROSS JOIN T AS T2
WHERE T1.n < 10000
AND T1.n % 3 <> 0
AND T2.n < 1 + T1.n % 15

PRINT '
COUNT Version:
'

WAITFOR DELAY '00:00:01'

SET STATISTICS IO ON
SET STATISTICS TIME ON

SELECT
    T1.ID
    , CASE WHEN n > 0 THEN 1 ELSE 0 END AS DoesExist
FROM @tmp1 AS T1
LEFT JOIN (
    SELECT
        T2.ID
        , COUNT(*) AS n
    FROM @tmp2 AS T2
    GROUP BY T2.ID
) AS T2 ON (
    T2.ID = T1.ID
)
WHERE T1.ID BETWEEN 5000 AND 7000
OPTION (RECOMPILE) -- Required since tables are filled within the same scope

SET STATISTICS TIME OFF

PRINT '

EXISTS Version:'

WAITFOR DELAY '00:00:01'

SET STATISTICS TIME ON

SELECT
    T1.ID
    , CASE WHEN EXISTS (
        SELECT 1
        FROM @tmp2 AS T2
        WHERE T2.ID = T1.ID
    ) THEN 1 ELSE 0 END AS DoesExist
FROM @tmp1 AS T1
WHERE T1.ID BETWEEN 5000 AND 7000
OPTION (RECOMPILE) -- Required since tables are filled within the same scope

SET STATISTICS TIME OFF

On SQL Server 2008R2 (Seven 64bits) I get this result

COUNT Version:

Table '#455F344D'. Scan count 1, logical reads 8, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table '#492FC531'. Scan count 1, logical reads 30, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 81 ms.

EXISTS Version:

Table '#492FC531'. Scan count 1, logical reads 96, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table '#455F344D'. Scan count 1, logical reads 8, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 76 ms.

Constraint based rules engine

Posted: 14 Jul 2013 10:03 AM PDT

I am evaluating software systems which configure products that tout constraint based versus rules based engines. Can someone explain (with maybe an example or two) to a non-expert what the difference is? My guess is that a constraint based engine is rules defining relationships between objects, and rules based engines are more like the WHERE clause of a SQL language query.

Oracle schema migration to new database with zero downtime

Posted: 14 Jul 2013 07:03 AM PDT

I have a problem: I have two production schemas in one database, each serving a different application.

I need to migrate (move) one schema to an entirely new database that will be dedicated to that application, with zero downtime, so EXPORT/IMPORT cannot be used.

I even have a license for Active Data Guard. But if I set up Data Guard from the current database to the new one, should I switch over, fail over, or something else, so that the current primary continues in its primary role and only the new DB changes its role from standby to a fully operational primary? Then we simply point the application at the new DB and start it. Thanks for your help.

MySQL HDD activity with empty process list

Posted: 14 Jul 2013 01:04 PM PDT

What does this mean, and how can I understand what MySQL is doing? I use InnoDB, and my database is huge. This happens when I disconnect my app.

Thanx.

How do I use subquery on the same table in MySQL?

Posted: 14 Jul 2013 06:03 AM PDT

I have a query like this which takes a really long time to run. The table is around 4 million rows.

DELETE FROM TABLE WHERE value_was IS NULL OR value_was <= value_now;  

I'm hoping I could create an index for (value_was, value_now) so I could do something like

DELETE FROM TABLE WHERE   ID1 IN (SELECT ID1 from TABLE where value_was IS NULL)   OR ID2 IN (SELECT ID2 FROM TABLE WHERE value_was <= value_now);  

This table doesn't have a primary key; it has two composite keys. I guess I cannot use the same table in the subquery, but how do I improve the performance of the first query?

Thanks very much any suggestion would be much appreciated.
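One commonly suggested rewrite (a sketch; the table name is a placeholder): split the OR into two DELETE statements. The IS NULL half can then use an index on value_was; the column-to-column comparison still has to scan, but splitting at least stops the OR from blocking index use entirely:

```sql
-- Assumed table name; the index helps only the IS NULL predicate:
ALTER TABLE t ADD INDEX idx_value_was (value_was);

DELETE FROM t WHERE value_was IS NULL;
DELETE FROM t WHERE value_was <= value_now;
```

On a 4-million-row table, deleting in batches (adding LIMIT and looping) also keeps the undo/lock footprint manageable.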

InnoDB Tablespace critical error in great need of a fix

Posted: 14 Jul 2013 05:03 AM PDT

Link to screenshot : http://www.nouvellesduquartier.com/i/1/p/Munin_%20Critical_MySql_InnoDB_.JPG (The value reported is outside the allowed range - Byte free, free, gauge, warn, critic)

Question: Could the error shown on the screenshot be the reason why my site is very slow?

If so, I really need help fixing it, since I am far from being an engineer! Thank you in advance.

Getting "BadImageFormatException" when attempting to connect through Visual Studio to Oracle

Posted: 14 Jul 2013 05:04 PM PDT

I get the following message: "Attempt to load Oracle client libraries threw BadImageFormatException. This problem will occur when running in 64 bit mode with the 32 bit Oracle client components installed." when I attempt to create a connection from both Visual Studio 2008 and 2012 to Oracle from my Windows 8 64bit workstation.

I'm not running a web app when this message happens; I'm only trying to establish an Oracle connection from Visual Studio. The solutions posted on the internet that resolve this issue for others don't seem to apply to my circumstances.

  • I did not have this issue when I was using Windows 7 64bit.
  • I have set the Active Solution Platform to both Any PC and x86 when I have a solution open, but I still get the same message with either setting.
  • I have the latest Oracle 32 bit Client since 64 bit Oracle client won't work with 32 bit Visual Studio.
  • At this point, I am only trying to connect to Oracle from Visual Studio and not run a web application. So IIS settings have nothing to do with this issue.

My environment is

  • Windows 8 64bit
  • Visual Studio 2012 and 2008
  • Oracle 32 bit Client. But also tried 64 Bit Client.
  • Oracle 11.2.0.3 database.
  • Active Solution Platform was tried with both Any PC and x86.

Any suggestions would be appreciated.

phpMyAdmin Error After Upgrade to 3.5.8.1

Posted: 14 Jul 2013 06:23 AM PDT

I'm a newbie in MySQL; I use it only for PHP.

To the point: I tried to upgrade phpMyAdmin to 3.5.8.1, and after that I can't access my databases and get an error like this:

#1064 - You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'WHERE db_name = 'happy2'  AND table_name = 'foto'  ORDER BY version DESC' at line 1  

ALL my other DBs have an error like that.

Can anyone help me?

I'd very much appreciate your answers.

SHOW TABLE STATUS very slow on InnoDB

Posted: 14 Jul 2013 03:04 PM PDT

Recently we've been migrating from MyISAM to InnoDB. I understand that MyISAM keeps meta information against each table; the database is about 1.8 GB with 1+ million records across 400+ or so tables.

The problem comes when software like phpMyAdmin runs "SHOW TABLE STATUS FROM dbase;", which can take up to 50 seconds to execute. Is there a way to optimise this? (MySQL or MariaDB)

Thanks!

Syncronize mysql databases between local and hosted servers automatically

Posted: 14 Jul 2013 04:03 AM PDT

We have many websites with development, staging, and production servers, and many developers on many projects. We need a solution to synchronize each developer's database with the staging database; once that works, we can move the changes to the live database.

It needs to be fully automatic, so that developers don't need to run a tool every single time.

SELECTing multiple columns through a subquery

Posted: 14 Jul 2013 08:03 AM PDT

I am trying to SELECT 2 columns from the subquery in the following query, but I am unable to do so. I tried creating an alias table, but still couldn't get them.

SELECT DISTINCT petid, userid,
  (SELECT MAX(comDate) FROM comments WHERE petid=pet.id) AS lastComDate,
  (SELECT userid FROM comments WHERE petid=pet.id ORDER BY id DESC LIMIT 1) AS lastPosterID
FROM pet LEFT JOIN comments ON pet.id = comments.petid
WHERE userid='ABC' AND deviceID!='ABC'
  AND comDate >= DATE_SUB(CURRENT_TIMESTAMP, INTERVAL 2 MONTH);

Basically, I am trying to get lastComDate & lastPosterID from the same row - the row which is the latest one in comments for the specific pet. Please suggest how I can get them efficiently.

The above query works, but seems overkill, as the same row is fetched twice. Moreover, the ORDER BY clause is significantly slower than the aggregate function, as I found while profiling the query. So, a solution avoiding sorting would be appreciated.
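One way to read both columns from the same latest row only once is the classic groupwise-maximum pattern: a derived table of per-pet maxima joined back to comments (a sketch; ties on comDate would need an extra tiebreaker such as the comment id):

```sql
SELECT p.id AS petid,
       c.comDate AS lastComDate,
       c.userid  AS lastPosterID
FROM pet AS p
JOIN (
    -- latest comment date per pet
    SELECT petid, MAX(comDate) AS maxDate
    FROM comments
    GROUP BY petid
) AS latest ON latest.petid = p.id
JOIN comments AS c
  ON c.petid = latest.petid
 AND c.comDate = latest.maxDate;
```

With an index on comments(petid, comDate) this avoids the per-row ORDER BY ... LIMIT 1 subquery.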

Designing Simple Schema for Disaggregation of Demand Forecast

Posted: 14 Jul 2013 02:04 PM PDT

I am doing a simple database design task as a training exercise where I have to come up with a basic schema design for the following case:

I have a parent-child hierarchy of products (example, Raw Material > Work in Progress > End Product).

  • Orders are placed at each level.
  • Number of orders shall be viewable in weekly buckets for the next 6 months.
  • Demand forecast can be done for each product level.
  • Demand forecast for any week within next 6 months can be done today.
  • Demand forecast is done for weekly buckets, for the next 6 months.

Demand Forecast is usually done at the higher level in hierarchy (Raw Material or Work in Progress level) It has to be disaggregated to a lower level (End Product).

There are 2 ways in which demand forecast can be disaggregated from a higher level to lower level:

  1. User specifies percentage distribution for end product. Say, there's a forecast of 1000 for Work In Progress.. and user says I want 40% for End Product 1 and 60% for End Product 2 in bucket 10.. Then for 10th week (Sunday to Saturday) from now, forecast value for End Product 1 would be 400 and, for End Product 2 would be 600.
  2. User says, just disaggregate according to orders placed against end products in Bucket 5, and orders in bucket 5 for End Product 1 and 2 are 200 and 800 respectively, then forecast value for EP1 would be ((200/1000) * 100)% and for EP2 would be ((800/1000) * 100)% of forecast for 'Work in Progress'.

Forecast shall be viewable in weekly buckets for the next 6 months and the ideal format should be:

product name | bucket number | week start date | week end date | forecast value | created_on  

PRODUCT_HIERARCHY table could look like this:

id  |   name                |   parent_id
____|_______________________|____________
1   |   raw material        |   (null)
2   |   work in progress    |   1
3   |   end product 1       |   2
4   |   end product 2       |   2

ORDERS table might look like this:

id | prod_id | order_date | delivery_date | delivered_date  

where,

prod_id is a foreign key that references the id of the PRODUCT_HIERARCHY table.

How to store forecast? What would be a good basic schema for such a requirement?


My idea to select orders for 26 weekly buckets is:

SELECT
    COUNT(*) TOTAL_ORDERS,
    WIDTH_BUCKET(
        delivery_date,
        SYSDATE,
        ADD_MONTHS(sysdate, 6),
        TO_NUMBER( TO_CHAR(SYSDATE,'DD-MON-YYYY') - TO_CHAR(ADD_MONTHS(sysdate, 6),'DD-MON-YYYY') ) / 7
    ) BUCKET_NO
FROM
    orders_table
WHERE
    delivery_date BETWEEN SYSDATE AND ADD_MONTHS(sysdate, 6);

But this will give weekly buckets starting from today, irrespective of the day. How can I convert them to Sunday-to-Saturday weeks in Oracle?
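On the Sunday-to-Saturday question: Oracle's TRUNC(d, 'IW') snaps a date to its ISO week start (Monday), and shifting the input by one day turns that into Sunday-based weeks:

```sql
-- Sunday on or before a given date, and the Saturday that ends its week:
SELECT TRUNC(SYSDATE + 1, 'IW') - 1 AS week_start_sunday,
       TRUNC(SYSDATE + 1, 'IW') + 5 AS week_end_saturday
FROM dual;
```

Grouping delivery_date by TRUNC(delivery_date + 1, 'IW') - 1 would then bucket orders into Sunday-to-Saturday weeks without WIDTH_BUCKET.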

Please help designing this database structure.

(will be using Oracle 11g)

SQL Server BPA 2008R2

Posted: 14 Jul 2013 12:04 PM PDT

I have been using SQL Server BPA to get good information from Microsoft. I used the 2005 version, which could export its results in CSV format, but recently I got two new servers with SQL Server 2008 R2 installed. I know I can't run BPA 2005 on those, so I chose the R2 version, but it has no option to save the report in CSV format, only XML. I have tried Excel to convert it to CSV, but no use; even though it displays the details, I can't narrow down the results. Any idea how to convert the results to CSV format?

MySQL optimization - year column grouping - using temporary table, filesort

Posted: 14 Jul 2013 04:04 PM PDT

I have a transactions table with 600,000 records, and I need to list counts for the dashboard on a financial-year basis. The table uses MyISAM. I tried adding an index on the transaction date (tran_date). Even though the query uses the index, it creates a temporary table, which takes more time because of the temporary table and the filesort. Is there any way to optimize the query to improve the query time?

  SELECT COUNT( * ) AS cnt, CASE WHEN MONTH( tran_date ) >=3  THEN concat( YEAR( tran_date ) , '-', YEAR( tran_date ) +1 )  ELSE concat( YEAR( tran_date ) -1, '-', YEAR( tran_date ) )  END AS financial_year  FROM `transactions1`  WHERE tran_date >= '2010-06-01'  GROUP BY financial_year    Showing rows 0 - 4 (5 total, Query took 1.2095 sec)  
  id  select_type  table          type   possible_keys      key        key_len  ref   rows    Extra
  1   SIMPLE       transactions1  range  PRIMARY,tran_date  tran_date  8        NULL  346485  Using where; Using index; Using temporary; Using filesort
  Keyname       Type   Unique  Packed  Field          Cardinality  Collation
  PRIMARY       BTREE  Yes     No      tran_date      205720       A
                                       tran_ID        617162       A
  coupon_No     BTREE  No      No      coupon_No      617162       A
  account_type  BTREE  No      No      account_type   3            A
  prodCode      BTREE  No      No      prodCode       430          A
                                       tran_date      308581       A
  tran_date     BTREE  No      No      tran_date      205720       A
  cust_ID       BTREE  No      No      cust_ID        3265         A
                                       tran_date      308581       A
                                       account_type   308581       A
                                       points_earned  617162       A

Update :

Tried adding partitioning, but it is not much more helpful than the non-partitioned table. Would replication help for reading this table? There will be more grouping based on the dates (using the date functions) when reading the data.

Edit:

I altered the query and reduced the query execution time. The query I used is,

  SELECT SUM(count)
  FROM (
      SELECT COUNT(*) AS count,
             CASE WHEN MONTH(tran_date) >= 3
                  THEN CONCAT(YEAR(tran_date), '-', YEAR(tran_date) + 1)
                  ELSE CONCAT(YEAR(tran_date) - 1, '-', YEAR(tran_date))
             END AS format_date
      FROM transactions1
      GROUP BY tran_date
  ) AS s
  GROUP BY format_date

  Showing rows 0 - 4 (5 total, Query took 0.5636 sec)
  id  select_type  table          type   possible_keys  key        key_len  ref   rows    Extra
  1   PRIMARY      <derived2>     ALL    NULL           NULL       NULL     NULL  229676  Using temporary; Using filesort
  2   DERIVED      transactions1  index  NULL           tran_date  8        NULL  617162  Using index

But when using

  SELECT COUNT(*) AS count,
         CASE WHEN MONTH(tran_date) >= 3
              THEN CONCAT(YEAR(tran_date), '-', YEAR(tran_date) + 1)
              ELSE CONCAT(YEAR(tran_date) - 1, '-', YEAR(tran_date))
         END AS format_date
  FROM transactions1
  GROUP BY tran_date

  Showing rows 0 - 29 (229,676 total, Query took 0.0006 sec)

takes less time without the SUM(count) over the derived table. Is there any other way to get the sum without using the subquery in MySQL, or can the subquery be optimized so it uses the index?
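Another option, assuming the schema can be altered, is to materialize the fiscal year at write time so the GROUP BY runs straight off an index instead of computing date functions per row. A sketch under that assumption (the column and index names are illustrative):

```sql
-- Store the fiscal year once per row instead of deriving it per query.
ALTER TABLE transactions1
    ADD COLUMN fiscal_year CHAR(9) NULL,
    ADD INDEX idx_fiscal_year (fiscal_year);

-- Backfill existing rows (new rows would set it on INSERT).
UPDATE transactions1
SET fiscal_year = CASE
        WHEN MONTH(tran_date) >= 3
            THEN CONCAT(YEAR(tran_date), '-', YEAR(tran_date) + 1)
        ELSE CONCAT(YEAR(tran_date) - 1, '-', YEAR(tran_date))
    END;

-- The dashboard count then becomes a plain indexed GROUP BY.
SELECT fiscal_year, COUNT(*) AS cnt
FROM transactions1
WHERE tran_date >= '2010-06-01'
GROUP BY fiscal_year;
```

The trade-off is a little extra write cost and storage in exchange for removing the per-row CASE/CONCAT work from every dashboard read.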

oracle local partition index

Posted: 14 Jul 2013 05:31 AM PDT

I have a table like this:

  CREATE TABLE employees
  (employee_id   NUMBER(4) NOT NULL,
   last_name     VARCHAR2(10),
   department_id NUMBER(2))
  PARTITION BY RANGE (department_id)
  (PARTITION employees_part1 VALUES LESS THAN (11),
   PARTITION employees_part2 VALUES LESS THAN (21),
   PARTITION employees_part3 VALUES LESS THAN (31));

Are these statements the same?

  CREATE INDEX employees_local_idx ON employees (employee_id) LOCAL;

  CREATE INDEX employees_local_idx ON employees (employee_id) LOCAL
  (PARTITION employees_part1,
   PARTITION employees_part2,
   PARTITION employees_part3);

Thanks,

[MS SQL Server] PREEMPTIVE_COM_GETDATA



PREEMPTIVE_COM_GETDATA

Posted: 28 Oct 2011 06:58 AM PDT

We are running SQL Server 2008 R2 64-bit Standard Edition. One of the applications (via a Tomcat service) encountered the error "database.pool.ConnectionPool Exception: com.microsoft.sqlserver.jdbc.SQLServerException: The TCP/IP connection to the host has failed. java.net.BindException: Address already in use: connect" when the application has high usage. I noticed there were PREEMPTIVE_COM_GETDATA wait types associated with it. What is the PREEMPTIVE_COM_GETDATA wait type? Or does anyone know what the Tomcat error means? Thanks in advance for any response.

Jenny

Please share the adventurenetworks database link for downloading

Posted: 13 Jul 2013 02:41 PM PDT

Hi all, please share the AdventureWorks sample database link for downloading.

[T-SQL] Help needed-checking TSQL statements for errors



Help needed-checking TSQL statements for errors

Posted: 13 Jul 2013 03:16 AM PDT

While trying to hone my skills in T-SQL, I came across a procedure that loads blank per-activity data files for multi-instance activities into a database's BlankData table, creating this table if BlankData isn't in the DB and recreating it otherwise. The directory containing the files to load is specified as a @path argument to the procedure.

MY QUESTIONS:

1. Should the procedure warn that the table is already present before dropping BlankData?
2. How do I add checks for the following conditions, with suitable messages:
   - a failed "exec xp_cmdshell @cmd" command
   - @path referencing a directory that's devoid of .xml files
   - failed attempts to read .xml files
   - a failed "select name from #filenames where name like '%.xml'" command
   - a failed "exec (@sql)" command
3. Should an option be added for writing messages to a log?
4. How do I collapse all the sp_Load<documentXX>toDB procedures into a single, parameterized procedure?
5. How do I specify a second parameter that gives a qualifier (e.g., 201308) for the table from which to load documents, defaulting to the value given by a new "current epoch" function? The concern here is supporting multi-schema operation by allowing extraction of different schema versions' blank documents from different tables in a set of related tables: e.g., BlankData_201308, BlankData_201309...

THE PROCEDURE:

  CREATE PROCEDURE [dbo].[sp_LoadBlankDataToDB] @path varchar(256)
  AS
  BEGIN
      -- To allow advanced options to be changed.
      EXEC sp_configure 'show advanced options', 1
      -- To update the currently configured value for advanced options.
      RECONFIGURE
      -- To enable the feature.
      EXEC sp_configure 'xp_cmdshell', 1
      -- To update the currently configured value for this feature.
      RECONFIGURE

      SET NOCOUNT ON;

      IF (EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES
                  WHERE TABLE_SCHEMA = 'dbo' AND TABLE_NAME = 'BlankData'))
      BEGIN
          print('exist')
          drop table fas.dbo.[BlankData]
      END

      create table BlankData(id int identity(1,1), fileName varchar(256), blankFile xml)

      declare @cmd varchar(256)
      set @cmd = 'dir /b ' + '"' + @path + '"'

      create table #filenames(name varchar(256))
      insert into #filenames exec xp_cmdshell @cmd

      declare @file nvarchar(256)
      declare fileNameCursor CURSOR SCROLL FOR
          select name from #filenames where name like '%.xml'
      open fileNameCursor
      fetch next from fileNameCursor into @file
      WHILE @@FETCH_STATUS = 0
      begin
          declare @sql varchar(max)
          --insert into fas.dbo.SampleData(fileName) values (@file)
          set @sql = 'insert into [fas].[dbo].[BlankData] select ''' + @file
                   + ''', * from openrowset (BULK N''' + @path + '\' + @file
                   + ''', SINGLE_BLOB) as xmlfile'
          exec (@sql)
          FETCH NEXT FROM fileNameCursor INTO @file
      end
      CLOSE fileNameCursor
      DEALLOCATE fileNameCursor

      DECLARE @fileCount int
      select @fileCount = COUNT(*) from #filenames
      print ('There are ' + convert(varchar(max), (@fileCount - 1)) + ' files under the directory')
      select @fileCount = COUNT(*) from BlankData
      print (convert(varchar(max), @fileCount) + ' xml files are imported')

      select name as 'File Not Imported' from #filenames
      where name not in (select fileName from fas.dbo.BlankData)
      select fileName as 'File Imported' from BlankData
  END
  GO
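For question 2, one common pattern is to capture xp_cmdshell's return code, verify the directory listing actually contained .xml files before opening the cursor, and wrap each dynamic insert in TRY/CATCH so one bad file doesn't abort the whole load. A minimal sketch of those checks (variable names match the procedure above; the error messages are illustrative):

```sql
-- Check that xp_cmdshell itself succeeded (returns 0 on success).
DECLARE @rc int
INSERT INTO #filenames EXEC @rc = xp_cmdshell @cmd
IF @rc <> 0
    RAISERROR('xp_cmdshell failed listing directory: %s', 16, 1, @cmd)

-- Check that @path actually contains .xml files.
IF NOT EXISTS (SELECT 1 FROM #filenames WHERE name LIKE '%.xml')
    RAISERROR('No .xml files found under %s', 16, 1, @path)

-- Wrap each dynamic insert so a bad file does not abort the whole load.
BEGIN TRY
    EXEC (@sql)
END TRY
BEGIN CATCH
    PRINT 'Failed to import ' + @file + ': ' + ERROR_MESSAGE()
END CATCH
```

The same TRY/CATCH plus RAISERROR pattern extends naturally to the cursor's SELECT and to logging: the CATCH block could insert ERROR_MESSAGE() into a log table instead of printing it.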

[SQL Server 2008 issues] Filegroup



Filegroup

Posted: 10 Jul 2013 08:39 PM PDT

Hi All,
Please let me know whether it is possible to use the same secondary filegroup for two different databases. I tried manually and it did not allow it. I just wanted to confirm this.

Find The Similar Matching String

Posted: 13 Jul 2013 04:35 PM PDT

Dear,
I have a customer table containing many duplicate names; CustID is the unique key. But the customer names are not 100% similar. For example,

[code="plain"]
CustID  CustName
------  -------------
100     ABC CO
101     ABC CO.
102     ABC CO&
103     ABC CO,NY
[/code]

Here all the CustName values refer to the same customer. Now I want to delete the duplicate names. I need to search for CustNames which are 80% similar and delete those customers. Please help me to do this.

Regards,
Akbar
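SQL Server has no built-in percentage similarity, but a rough first pass can use SOUNDEX/DIFFERENCE, which scores a pair of strings from 0 (no match) to 4 (strong match); a true 80% threshold would need a Levenshtein-distance UDF. A hedged sketch that lists sound-alike pairs for review, assuming the table is named Customer (the post doesn't give a table name):

```sql
-- Pair up customers whose names sound alike, keeping the lower CustID
-- as the candidate to retain.
SELECT a.CustID   AS keep_id,
       a.CustName AS keep_name,
       b.CustID   AS dup_id,
       b.CustName AS dup_name
FROM Customer a
JOIN Customer b
  ON a.CustID < b.CustID
 AND DIFFERENCE(a.CustName, b.CustName) = 4;

-- After manually reviewing the pairs, the duplicates could be removed:
-- DELETE FROM Customer WHERE CustID IN (<reviewed dup_id values>);
```

Because SOUNDEX only looks at the leading sound of a string, it is best treated as a candidate generator; a manual review step (or a stricter edit-distance function) should gate the actual DELETE.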

How to take backup of Single table and how to restore?

Posted: 13 Jul 2013 03:44 PM PDT

Hi... How do I take a backup of a single table, and how do I restore it? Is there a query for this like there is for a database backup?

shivakumar...
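SQL Server's BACKUP statement works at the database/filegroup level, not per table; the usual workarounds are copying the table inside the database with SELECT INTO, or exporting it with the bcp utility. A sketch under those assumptions (database, table, and file names are illustrative):

```sql
-- Option 1: keep a copy of the table inside the same database.
SELECT * INTO dbo.MyTable_backup FROM dbo.MyTable;

-- "Restore" by copying the rows back:
-- INSERT INTO dbo.MyTable SELECT * FROM dbo.MyTable_backup;
```

For moving a single table between servers, bcp in native format is the usual command-line companion, e.g. `bcp MyDb.dbo.MyTable out C:\backup\MyTable.dat -n -T` to export and the matching `bcp ... in` to import. Note that neither approach carries over indexes, constraints, or triggers; those would need to be scripted separately.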

what is database refresh?

Posted: 13 Jul 2013 03:45 PM PDT

Hi... How do I refresh a database in SQL Server 2008? Are a database refresh and a rebuild the same thing or not?

IN and NOT IN for sql xml column

Posted: 13 Jul 2013 12:18 AM PDT

I have a table with an XML column. The XML is similar to:

[code="xml"]
<Root>
  <Row>
    <user>abc</user>
    <Rowid>1</Rowid>
  </Row>
  <Row>
    <user>vf</user>
    <Rowid>2</Rowid>
  </Row>
  <Row>
    <user>ert</user>
    <Rowid>3</Rowid>
  </Row>
  <Maxrowid>3</Maxrowid>
</Root>
[/code]

The query below returns the Sl_no column and myxmlcolumn for rows whose XML has the value 'abc' or 'xyz' in a <user> node; it behaves like SQL's IN option.

[code]
SELECT [mytable].[Sl_no], [mytable].[myxmlcolumn]
FROM [mydb].dbo.[mytable]
WHERE ( [myxmlcolumn].exist('for $x in /Root/Row
                             where ( ($x/user[fn:upper-case(.)=(''ABC'',''XYZ'')]) )
                             return $x') > 0 )
[/code]

I want a similar kind of query that does what SQL's NOT IN does: in my case, rows that do not have the value 'abc' or 'xyz' in any <user> node of the XML column. Please help me with this.
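Since exist() returns 0 when no node matches, the NOT IN behaviour can usually be had by comparing the same predicate to 0 instead of > 0. A sketch against the same table and column names as above (this excludes any row containing a matching <user> anywhere in its XML):

```sql
SELECT [mytable].[Sl_no], [mytable].[myxmlcolumn]
FROM [mydb].dbo.[mytable]
WHERE [myxmlcolumn].exist(
    'for $x in /Root/Row
     where $x/user[fn:upper-case(.)=("ABC","XYZ")]
     return $x') = 0;
```

One subtlety: `= 0` means "no Row in this document matches", which is the row-level negation of the original query; if the goal were instead "documents where at least one Row does not match", the where-clause inside the XQuery would need the fn:not() wrapper rather than the outer comparison.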

Will mirroring still work properly if I set restricted growth or change the database's max size?

Posted: 13 Jul 2013 09:19 AM PDT

I have a mirrored database set to unrestricted growth. Per company policy the database should be set to restricted growth. Should I change it to restricted growth, or change the max size setting? Either way, I am unsure whether mirroring will still work properly afterward.

Alter column on table with many constraints HHEELLPP!!

Posted: 13 Jul 2013 05:05 AM PDT

Thank you very much for your time; it's greatly appreciated. The situation seems like it should be simple, but for some reason it is giving me massive headaches. I have a table with a column whose definition needs to be altered from numeric to int. This table also has 41 constraints on it, which I found by listing all the constraints. The server is set up so I should be able to make the changes: in Object Explorer I went to Tools -> Options -> Designers -> Table and Database Designers and unchecked the proper box.

  SELECT * FROM sys.objects
  WHERE OBJECT_DEFINITION(OBJECT_ID) LIKE '%bl%'

Then, taking all the constraints that came back, I ran:

  IF EXISTS (SELECT * FROM sys.foreign_keys
             WHERE object_id = OBJECT_ID(N'[dbo].[DF__bl__count_em__0275918A]')
               AND parent_object_id = OBJECT_ID(N'[dbo].[bl]'))
  ALTER TABLE [dbo].[bl] DROP CONSTRAINT [DF__bl__count_em__0275918A]
  GO

I ran this for all the constraints and then tried:

  ALTER TABLE [dbo].[bl]
  --NOCHECK CONSTRAINT all
  ALTER COLUMN [count_em] int NOT NULL;

Yet this still returns the error:

  Msg 5074, Level 16, State 1, Line 1
  The object 'DF__bl__count_em__0275918A' is dependent on column 'count_em'.
  Msg 4922, Level 16, State 9, Line 1
  ALTER TABLE ALTER COLUMN count_em failed because one or more objects access this column.

Any help or ideas are greatly appreciated. THANK YOU
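The DF__ prefix in the error suggests the blocker is a default constraint, which sys.foreign_keys never reports; that would make the IF EXISTS guard evaluate false and silently skip the DROP. A sketch that finds defaults on the column via sys.default_constraints instead (object names taken from the post):

```sql
-- Default constraints live in sys.default_constraints, not sys.foreign_keys.
SELECT dc.name
FROM sys.default_constraints dc
JOIN sys.columns c
  ON c.object_id  = dc.parent_object_id
 AND c.column_id  = dc.parent_column_id
WHERE dc.parent_object_id = OBJECT_ID(N'[dbo].[bl]')
  AND c.name = 'count_em';

-- Then, for the name(s) returned:
-- ALTER TABLE [dbo].[bl] DROP CONSTRAINT [DF__bl__count_em__0275918A];
-- ALTER TABLE [dbo].[bl] ALTER COLUMN [count_em] int NOT NULL;
```

After the ALTER COLUMN succeeds, the default would need to be re-created if the column should keep it.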

How do I run query from Query Designer ?

Posted: 12 Jul 2013 11:54 PM PDT

When I right-click in [b]Query Designer[/b], I find the [b]Execute SQL[/b] menu is disabled. How do I run a query from the Query Designer editor then? I am using MS SQL Server Management Studio 2008.

help with query

Posted: 13 Jul 2013 06:39 AM PDT

hi,
I have a table:

  version  item  price
  1        a     1.00
  2        a     1.20
  3        a     1.30
  1        b     2.00
  2        b     2.20

I need to create a query that will select the latest version's price for each item. The outcome should look like this:

  version  item  price
  3        a     1.30
  2        b     2.20

please help :-)
regards,
aviv
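This is the classic greatest-value-per-group problem; one portable approach is to join the table to the maximum version per item. A sketch, assuming the table is named prices (the post doesn't give a table name):

```sql
-- Keep only the row whose version equals the item's maximum version.
SELECT t.version, t.item, t.price
FROM prices t
JOIN (SELECT item, MAX(version) AS max_version
      FROM prices
      GROUP BY item) m
  ON m.item = t.item
 AND m.max_version = t.version;
```

On engines that support window functions, the same result can be had with ROW_NUMBER() OVER (PARTITION BY item ORDER BY version DESC) filtered to row 1, which avoids the self-join.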

Data center migration

Posted: 10 Jul 2013 07:35 AM PDT

I hope this forum is "general" enough for this question; I figure most people here have experienced this issue in one way or another. We are migrating our data center, and we are concerned about our SQL Servers and their data. Our goal is to move the servers as smoothly as possible to the new location, which is only about 200 yards around the corner, but there are bumps, door entries, rough sidewalks, etc. Does anybody here have best practices regarding server transportation that he or she would be willing to share? Thanks in advance.

Difference between Full backup and Copy-only full backup

Posted: 07 Jul 2013 06:13 PM PDT

I saw in this [url=http://www.sqlservercentral.com/Forums/Topic567010-357-1.aspx#bm1311005]link[/url] that a full backup does not truncate the log. So what is the difference between a full backup and a copy-only full backup? For the log there is the copy-only log backup, which prevents the log chain from breaking without truncating the log. So what is a copy-only full backup?
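The practical difference is the differential base: a normal full backup resets the base that later differential backups diff against, while a copy-only full backup leaves it untouched, so an ad-hoc copy (say, to refresh a test server) doesn't disturb an existing full-plus-differential schedule. Syntax sketch (database name and file paths are illustrative):

```sql
-- Normal full backup: becomes the new differential base.
BACKUP DATABASE MyDb TO DISK = N'D:\backup\MyDb_full.bak';

-- Copy-only full backup: does not reset the differential base.
BACKUP DATABASE MyDb TO DISK = N'D:\backup\MyDb_copy.bak' WITH COPY_ONLY;
```

Neither kind of full backup truncates the log; log truncation is the job of log backups under the full recovery model.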
