Monday, June 17, 2013

[SQL 2012] Nesting of Transaction



Nesting of Transaction

Posted: 17 Jun 2013 01:02 AM PDT

SQL Server allows the nesting of transactions, like:

BEGIN TRAN
    BEGIN TRAN
        -- code...
    COMMIT TRAN
    BEGIN TRAN
        -- code...
    COMMIT TRAN
COMMIT TRAN

But is nesting transactions really necessary, given that we can do all the work in a single BEGIN...COMMIT TRAN?
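A minimal sketch of what the nesting actually does: only the @@TRANCOUNT counter changes, and nothing is made durable until the outermost COMMIT.

SELECT @@TRANCOUNT;       -- 0
BEGIN TRAN;
SELECT @@TRANCOUNT;       -- 1
    BEGIN TRAN;
    SELECT @@TRANCOUNT;   -- 2
    COMMIT TRAN;          -- only decrements the counter; nothing is committed yet
SELECT @@TRANCOUNT;       -- 1
COMMIT TRAN;              -- the real commit happens here
-- A single ROLLBACK at any depth undoes all levels and resets @@TRANCOUNT to 0.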

sql server 2012 scheduled jobs won't start

Posted: 16 Jun 2013 05:57 AM PDT

Hello,

I have done a big migration from SQL 2008 to SQL 2012 and, besides a few Oracle-related problems, I have run into strange behavior with scheduled jobs - they won't run. :)

I created the jobs by migrating the create scripts from the SQL 2008 server and just replacing the server names and the users that own the jobs. I can run the jobs fine manually, but nothing happens when they are scheduled. I have even added the accounts running both the SQL Server and SQL Agent services to the local Admin group.

Microsoft SQL Server 2012 (SP1) - 11.0.3128.0 (X64) Dec 28 2012 20:23:12 Copyright (c) Microsoft Corporation Enterprise Edition (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1)

Can you help?

Ville
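A hedged first check for a case like this: query the Agent metadata in msdb to confirm the migrated schedules actually came across enabled and attached to the jobs.

SELECT j.name    AS job_name,
       j.enabled AS job_enabled,
       s.name    AS schedule_name,
       s.enabled AS schedule_enabled,
       js.next_run_date,
       js.next_run_time
FROM msdb.dbo.sysjobs j
LEFT JOIN msdb.dbo.sysjobschedules js ON js.job_id = j.job_id
LEFT JOIN msdb.dbo.sysschedules s ON s.schedule_id = js.schedule_id;
-- A job with job_enabled = 0, schedule_enabled = 0, or no schedule row at all
-- will run fine manually but never fire on its own.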

AG Groups Vs Shared Storage

Posted: 29 May 2013 09:22 PM PDT

Hello,

I'm pretty new to SQL and I'm currently on a steep learning curve in my new job role. I'm going through setting up an Availability Group (AG) with the need for shared storage; I have this working in conjunction with WSFC on Server 2012.

My question is: the AG asks for a shared network location that both nodes can see. Now correct me if I'm wrong, but isn't this essentially the same as using shared storage on a SAN? We currently use SQL 2008 on a NetApp SAN for shared storage, using full-blown Server 2008 clustering for failover.

It occurred to me while writing documentation that it's basically the same thing as shared storage, so why would you want to use an AG over shared storage? I realize it has the benefit of offloading reporting to secondary replicas, but I assume the same functions can be had with SAN storage. Also, would you use a local shared folder (local to the node) or a share on a separate server altogether that isn't part of the cluster?

Apologies if I'm not making sense; it's my first time and I want to get it right in my head before I present it to my colleagues.

Cheers

[T-SQL] performance tuning.



performance tuning.

Posted: 16 Jun 2013 11:50 PM PDT

I have a query. This SELECT has a few fat tables joined to each other. Is there a way I can improve the performance of this query? Currently it is taking more than 50 seconds.

SELECT VISIT.PAT_VISIT_ID id,
       DBO.GET_CD_TITLE_FROM_CD_ID(VISIT.VISIT_TYP_CD, 'Visit Type') code_title,
       '' code_mnemonic,
       VISIT.CRTE_DTTM created_at,
       VISIT.VISIT_END_DTTM ends_at,
       DBO.GET_CD_TITLE_BY_CDID_USERCODE(LOCSITE.SITE_TYP_CD, 'Mnemonic') location_code_mnemonic,
       DBO.GET_CD_TITLE_BY_CDID_USERCODE(LOCSITE.SITE_TYP_CD, 'title') location_code_title,
       LOCSITE.SITE_DESC location_site_description,
       LOCSITE.SITE_NM location_site_name,
       LOCSITEORG.ORG_NM organization_name,
       LOCSITEORG.ORG_ABBR organization_abbr,
       LOCSITEORG.ORG_ID organization_id,
       LOCSITEORG.CRTE_DTTM organization_created_at,
       LOCSITEORG.UPDT_DTTM organization_updated_at,
       VISIT.PAT_ID patient_id,
       VISIT.SCHEDULED_PROV_ID performer_id,
       PERFORMER.NM_FIRST performer_name_first,
       PERFORMER.NM_LAST performer_name_last,
       PERFORMER.NM_MID performer_name_middle,
       PERFORMER.NM_SUFFIX performer_suffix,
       (CASE WHEN PRINC.USER_NM = @P0 THEN 1 ELSE 0 END) is_mine,
       DBO.GET_CD_TITLE_BY_CDID_USERCODE(VISIT.VISIT_REASON, 'Mnemonic') reason_mnemonic,
       DBO.GET_CD_TITLE_BY_CDID_USERCODE(VISIT.VISIT_REASON, 'description') reason_original_text,
       DBO.GET_CD_TITLE_BY_CDID_USERCODE(VISIT.VISIT_REASON, 'title') reason_title,
       VISIT.VISIT_REASON_TXT reason_text,
       (CASE (SELECT COUNT(ALIAS_USE) FROM W_CODE_ALIAS
              WHERE STS_CD = '137' AND CD_ID = VISIT.VISIT_REASON AND ALIAS_USE = 'requiresConsent')
            WHEN 0 THEN 0 ELSE 1 END) reason_requires_consent,
       VISIT.VISIT_START_DTTM start_date,
       VISIT.SCHED_START_DTTM sched_start_date,
       DBO.GET_CD_TITLE_BY_CDID_USERCODE(VISIT.LIFECYCLE_CD, 'title') state,
       VISIT.UPDT_DTTM updated_at,
       PARENT_ORG.ORG_ABBR parent_org_abbr,
       PARENT_ORG.ORG_NM parent_org_name,
       PARENT_ORG.ORG_ID parent_org_id,
       PATIENT.NM_LAST patient_nm_last,
       PATIENT.NM_FIRST patient_nm_first,
       PATIENT.NM_MID patient_nm_mid,
       PATIENT.DOB patient_dob,
       DBO.GET_CD_TITLE_BY_CDID_USERCODE(PATIENT.GENDER_CD, 'Mnemonic') patient_gender,
       DBO.GET_PATIENT_MRNS_BY_PATID(VISIT.PAT_ID) patient_mrn,
       MPI.ENTITY_ALIAS patient_mpi,
       PATIENT.UPDT_SRC_ID authority_id,
       PATREL.PAT_RELTN_ID patient_provider_rltnshp_id
FROM W_PATIENT_VISIT VISIT
LEFT OUTER JOIN W_LOCATION_SITE LOCSITE ON VISIT.ENTITY_LOC_ID = LOCSITE.LOC_SITE_ID
LEFT OUTER JOIN W_ORGANIZATION LOCSITEORG ON LOCSITE.ORG_ID = LOCSITEORG.ORG_ID
JOIN W_ORGANIZATION PARENT_ORG ON DBO.ORG_ID_FROM_DDID(VISIT.DATA_DOMAIN_ID) = PARENT_ORG.ORG_ID
JOIN W_PATIENT PATIENT ON VISIT.PAT_ID = PATIENT.PAT_ID
JOIN W_ENTITY_ALIAS MPI ON (VISIT.PAT_ID = MPI.ENTITY_ID AND MPI.ALIAS_TYP_CD = '1000516' AND MPI.STS_CD = '137')
LEFT OUTER JOIN W_PERSON PERFORMER ON VISIT.SCHEDULED_PROV_ID = PERFORMER.PERS_ID
LEFT OUTER JOIN W_MPI_USERID_VIEW PRINC ON (PRINC.PERS_ID = VISIT.SCHEDULED_PROV_ID)
LEFT OUTER JOIN W_PATIENT_RELATIONSHIP PATREL ON (VISIT.PAT_ID = PATREL.PAT_ID
    AND PATREL.STS_CD = '137' AND PATREL.ENTITY_TYP_CD = '935'
    AND PATREL.ENTITY_ID IN (SELECT PERS_ID FROM W_ORG_USER_VIEW WHERE USER_NM = @P1)
    AND PATREL.RLTN_CD IN (SELECT CD_ID FROM W_CODE_ALIAS
        WHERE CD_SET_CD IN (SELECT CD_SET_CD FROM W_CODE_SET WHERE CD_SET_NM = 'Provider-Patient Relationship')
        AND ALIAS_USE = 'title' AND STS_CD = '137'))
WHERE VISIT.STS_CD = '137'
AND VISIT.SCHEDULED_PROV_ID IN (@P2, @P3, @P4, @P5, @P6)
AND VISIT.SCHED_START_DTTM >= DBO.CONVERT_DATE_FROM_TO(@P7, @P8)
AND VISIT.SCHED_START_DTTM < DATEADD(DAY, 1, DBO.CONVERT_DATE_FROM_TO(@P9, @P10))
ORDER BY VISIT.VISIT_START_DTTM ASC
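Two hedged starting points for a query like this: the scalar functions in the SELECT list (DBO.GET_CD_TITLE_BY_CDID_USERCODE and friends) execute once per call per returned row, which on its own can dominate a 50-second runtime; and it is worth measuring before changing anything, for example:

SET STATISTICS IO ON;
SET STATISTICS TIME ON;
-- run the query, then read the Messages tab:
-- tables with the highest logical reads are the first index candidates,
-- and CPU time far above the I/O cost usually points at the scalar functions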

how to get the name only from the ename column of the emp table?

Posted: 16 Jun 2013 06:33 PM PDT

Hi friends,

My emp table is:

create table emp (location varchar(20), ename varchar(30))

insert into emp (location, ename) values ('A', '00001:ravi')
insert into emp (location, ename) values ('A', '00002:rahie')
insert into emp (location, ename) values ('B', '00003:raghul')
insert into emp (location, ename) values ('B', '00004:ram')

My required output is only the name part from the ename column of the emp table.
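Assuming the name is always everything after the first colon, a minimal sketch using CHARINDEX and SUBSTRING would be:

SELECT location,
       ename,
       SUBSTRING(ename, CHARINDEX(':', ename) + 1, LEN(ename)) AS name
FROM emp;
-- returns ravi, rahie, raghul, ram for the sample data above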

Need Query for a problem

Posted: 16 Jun 2013 01:57 AM PDT

Hello everybody,

Consider the following table:

create table SampleTable (Category varchar(100), Organisams varchar(100))

Insert into Sampletable values ('Animals','Lion')
Insert into Sampletable values ('Birds','Dove')
Insert into Sampletable values ('Plants','Neem')
Insert into Sampletable values ('Animals','Tiger')
Insert into Sampletable values ('Birds','Eagle')
Insert into Sampletable values ('Plants','Mango Tree')
Insert into Sampletable values ('Animals','Cow')
Insert into Sampletable values ('Birds','Parrot')
Insert into Sampletable values ('Plants','Lime Tree')

I need the following output:

Category | Organisms
-----------------------------
Animals  | Lion,Tiger,Cow
Birds    | Dove,Eagle,Parrot
Plants   | Neem,Mango Tree,Lime Tree

Thanks in advance
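On SQL Server 2005 and later, the usual trick for this kind of group-concatenation is FOR XML PATH with STUFF - a sketch against the posted table (note the column is spelled Organisams in the DDL):

SELECT t1.Category,
       STUFF((SELECT ',' + t2.Organisams
              FROM SampleTable t2
              WHERE t2.Category = t1.Category
              FOR XML PATH('')), 1, 1, '') AS Organisms
FROM SampleTable t1
GROUP BY t1.Category;
-- add an ORDER BY inside the subquery if the concatenation order matters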

[SQL Server 2008 issues] CHECKDB issue



CHECKDB issue

Posted: 16 Jun 2013 06:57 PM PDT

Hi all,

Someone restarted a server last week and SQL Server came back up as "in recovery", then completed after a while. Today I've just tried to run "DBCC CHECKDB WITH PHYSICAL_ONLY" on 2005 SP3. The output I get is as follows:

Msg 926, Level 21, State 6, Line 1
Database 'mdb' cannot be opened. It has been marked SUSPECT by recovery. See the SQL Server errorlog for more information.
Msg 0, Level 20, State 0, Line 0
A severe error occurred on the current command. The results, if any, should be discarded.

My session is then disconnected from the database. It says suspect (although it later says to discard this), but when I reconnect and look at 'IsSuspect' it says it isn't (0). Any idea why this would be the case?

The error log says:

Date,Source,Severity,Message
06/17/2013 08:45:58,spid123,Unknown,Database 'mdb' cannot be opened. It has been marked SUSPECT by recovery. See the SQL Server errorlog for more information.
06/17/2013 08:45:58,spid123,Unknown,Error: 926, Severity: 21, State: 6.
06/17/2013 08:45:58,spid80,Unknown,E:\Application-data\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\mdb.mdf:MSSQL_DBCC6: Operating system error 1784(error not found) encountered.
06/17/2013 08:45:58,spid80,Unknown,Error: 17053, Severity: 16, State: 1.
06/17/2013 08:45:58,spid80,Unknown,The operating system returned error 1784(error not found) to SQL Server during a write at offset 0x0000006430e000 in file 'E:\Application-data\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\mdb.mdf:MSSQL_DBCC6'. Additional messages in the SQL Server error log and system event log may provide more detail. This is a severe system-level error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.
06/17/2013 08:45:58,spid123,Unknown,DBCC CHECKDB (mdb) WITH no_infomsgs, physical_only executed by GBAHES777\GBAHES777 found 0 errors and repaired 0 errors. Elapsed time: 0 hours 0 minutes 3 seconds.
06/17/2013 08:45:58,spid80,Unknown,E:\Application-data\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\mdb.mdf:MSSQL_DBCC6: Operating system error 1784(error not found) encountered.
06/17/2013 08:45:58,spid80,Unknown,Error: 17053, Severity: 16, State: 1.
06/17/2013 08:45:58,spid80,Unknown,The operating system returned error 1784(error not found) to SQL Server during a write at offset 0x0000034f840000 in file 'E:\Application-data\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\mdb.mdf:MSSQL_DBCC6'. Additional messages in the SQL Server error log and system event log may provide more detail. This is a severe system-level error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.
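A hedged observation: the failing writes are against mdb.mdf:MSSQL_DBCC6, which is the hidden internal database snapshot CHECKDB creates, not the database file itself. One workaround worth trying in a quiet window (it takes locks instead of using a snapshot) is:

DBCC CHECKDB ('mdb') WITH PHYSICAL_ONLY, TABLOCK;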

merge query

Posted: 16 Jun 2013 06:46 PM PDT

I want to merge this query:

DECLARE @MinDate datetime, @MaxDate datetime

SELECT @MinDate = MIN([Date]), @MaxDate = MAX([Date])
FROM attend_log

SELECT p.Date, q.TimeIn, q.TimeOut, q.shift, p.eid
FROM
(
    SELECT f.[Date], eid
    FROM dbo.CalendarTable(@MinDate, @MaxDate, 0, 0) f
    CROSS JOIN (SELECT DISTINCT eid FROM attend_log) t
) p
LEFT JOIN attend_log q
    ON q.eid = p.eid
    AND q.[Date] = p.[Date]

into this query:

--drop table #temp1
select [date],
    min([Timein]) as First_Record,
    sum(DATEDIFF(minute, [Timein], [Timeout])) as Time_Minutes
into #temp1
from Atend
where eid = 26446
group by [date]
GO

select t.[date],
    t.eid,
    t.[Timein] as timein,
    t.[Timeout] as timeout,
    CONVERT(VARCHAR(8), DATEADD(minute, Time_Minutes, 0), 108) AS SpendTime,
    CONVERT(VARCHAR(8), DATEADD(minute, ABS(540 - Time_Minutes), 0), 108) as excesshorttime,
    case when (540 - Time_Minutes) > 0 then 'Short'
         when (540 - Time_Minutes) < 0 then 'Excess'
         else NULL end as ExcessShort,
    case when (540 - Time_Minutes) >= 120 then 'HalfDay' else '' end as Remarks
FROM Atend t
left join #temp1 t2 on t.[date] = t2.[date] and t.[Timein] = t2.First_Record
where eid = 26446
order by t.[date], t.[Timein]

and show the result like this:

date                    | eid   | timein                  | timeout                 | spendtime | excessshort
2013-01-04 00:00:00.000 | 26446 | 2013-06-12 09:29:00.000 | 2013-06-12 18:47:00.000 | 09:18:00  | 00:18:00
2013-01-05 00:00:00.000 | 26446 | 2013-06-12 09:08:00.000 | 2013-06-12 13:34:00.000 | 07:41:00  | 01:19:00
2013-01-06 00:00:00.000 | 26446 | null                    | null                    | null      | null

Thanks for the help.

Delete records without logging

Posted: 16 Jun 2013 06:15 PM PDT

Dear all,

We have a database AA in SQL Server. While inserting records into a table the log file grows, and the same thing happens while deleting records. We don't want the log to grow when deleting records, without affecting database mirroring. How can we handle this? Please help me with this.
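For context: deletes are always fully logged in SQL Server, and database mirroring requires the FULL recovery model, so minimal logging is not an option here. What usually keeps the log in check is deleting in small batches while frequent log backups free the space for reuse - a hedged sketch with made-up names:

WHILE 1 = 1
BEGIN
    -- dbo.YourTable and the date filter are hypothetical placeholders
    DELETE TOP (5000) FROM dbo.YourTable WHERE CreatedDate < '20130101';
    IF @@ROWCOUNT = 0 BREAK;
END
-- schedule transaction log backups frequently while this runs,
-- so each batch's log space can be reused instead of growing the file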

Shrinking the log file does not reduce size

Posted: 29 Apr 2013 10:23 PM PDT

I have a database with an mdf size of 350 MB and an ldf size of 4.9 GB. When I try to shrink the log file, it does not shrink. I know shrinking a database is not good and should not be done, but I am still trying to shrink just the log file. The recovery model is set to FULL.

I followed these steps:

When I run DBCC SQLPERF(LOGSPACE) I find that the log size is 4932 MB and the log space used is 98.76%, so a large amount (98%) of the log is in use.

Then I ran:

use <databasename>
dbcc loginfo

Almost all VLFs have status 2, which means they are all in use.

Then I tried to take a log backup. Even after the log backup, shrinking didn't reduce the size. Then I changed the recovery model to SIMPLE and tried shrinking, but this also didn't help. I also ran DBCC OPENTRAN (database) and found that no transaction is open now.

So what is preventing me from shrinking the log? How can I solve this?
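One hedged diagnostic worth running before anything else: ask SQL Server directly what the log is waiting on.

SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'YourDatabase';   -- hypothetical name
-- common answers: LOG_BACKUP (take a log backup), ACTIVE_TRANSACTION,
-- REPLICATION, DATABASE_MIRRORING
-- once it reports NOTHING, the shrink should succeed:
DBCC SHRINKFILE (N'YourDatabase_log', 512);  -- logical log file name, target size in MB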

snapshot generation

Posted: 16 Jun 2013 04:38 PM PDT

What is the difference between the snapshot generation process in snapshot replication and in transactional replication?

Maintenance plans

Posted: 16 Jun 2013 04:16 PM PDT

Hi all,

Five days ago I created a maintenance plan to delete backup files older than 2 days, and the plan runs every day. Today I saw the disk filling up, so I went to the maintenance plan history and looked at it. The history shows the task executed successfully. I went to the jobs and checked the history there as well; the jobs are running fine.

Then I went to the location where the backups are taken. In this location all the backups still exist, from the day I created the maintenance plan onwards. I am unable to figure it out.
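Under the covers the Maintenance Cleanup Task calls xp_delete_file; running it by hand is a hedged way to check whether the task's settings match the files. In particular, the extension is commonly expected without the leading dot (bak, not .bak), a frequent reason the task "succeeds" while deleting nothing. A sketch with made-up values:

EXECUTE master.dbo.xp_delete_file
    0,                        -- 0 = backup files
    N'D:\Backups\',           -- hypothetical folder
    N'bak',                   -- extension, no leading dot
    N'2013-06-15T00:00:00',   -- delete files older than this
    0;                        -- 0 = this folder only, 1 = include subfolders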

which is better performance wise

Posted: 16 Jun 2013 02:48 PM PDT

Two scenarios:

1) A table with 200 fields, OR
2) 200 fields spanning multiple tables, e.g. 10 tables with 20 fields each.

Which one is better in terms of performance and space used, and why?

How to avoid deadlocks

Posted: 16 Jun 2013 01:56 AM PDT

Hi,

I am facing a few problems with deadlocks. We have a few tables, each with more than 6 million rows, and we have done proper indexing on those tables. But when two different users access the same table for different rows - for example, one user is retrieving data from the table while another user is inserting/updating one particular row - we get deadlocks.

We are using SQL Server 2008, but the compatibility level of the database is 80. We can't change that right now, because if we did our application would not work.

So please help me on how to avoid the deadlocks.

Regards,
BSL
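Whatever the eventual fix, the first step is to capture the deadlock graphs so you can see exactly which statements and indexes are involved. Trace flags are server-scoped, so the database compatibility level of 80 does not get in the way - a hedged sketch:

DBCC TRACEON (1222, -1);  -- write deadlock graph details to the SQL Server error log
-- leave it on until a few deadlocks have been logged, then read the error log
-- to see the resources, statements, and lock modes on each side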

How can we find NULL values in indexed columns across all tables in SQL Server?

Posted: 06 Jun 2013 07:16 AM PDT

Hi,

Please help me with finding NULL values in indexed columns across all tables in SQL Server. I need a SQL query to find all indexed columns that contain NULL values. Thanks in advance.
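Finding indexed columns that are merely nullable is a plain metadata query; checking for actual NULL values on top of that needs dynamic SQL. A hedged sketch of the first part:

SELECT t.name AS table_name,
       i.name AS index_name,
       c.name AS column_name
FROM sys.tables t
JOIN sys.indexes i        ON i.object_id = t.object_id
JOIN sys.index_columns ic ON ic.object_id = i.object_id AND ic.index_id = i.index_id
JOIN sys.columns c        ON c.object_id = ic.object_id AND c.column_id = ic.column_id
WHERE c.is_nullable = 1
ORDER BY t.name, i.name;
-- to check for actual NULLs, generate one
-- SELECT COUNT(*) ... WHERE <column> IS NULL per row of this result (dynamic SQL)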

Sunday, June 16, 2013

[how to] How to keep an Oracle 11g instance lean?



How to keep an Oracle 11g instance lean?

Posted: 16 Jun 2013 09:13 PM PDT

As a Mac user I run a local Oracle Enterprise Linux (OEL) Developer Day VM that houses an Oracle 11g instance for development/unit testing. I frequently refresh the data in the schemas (sometimes multiple times a day) and do not have a need for snapshots or backups. The number of rows in each schema's tables are orders of magnitude smaller than those of a production instance.

I've had to extend the tablespaces' size more than a few times and I suspect I have improper resource allocation to begin with (SGA, PGA) for the way in which I'm using the instance (not for SQL Developer, TimesTen, etc. -- just for the Oracle 11g instance).

I'm looking for recommendations/resources to help me

  • Size my instance for a small amount of data in the first place
  • 'Clean' my instance so that I don't need to increase data file size
  • Modify my OEL VM instance to most efficiently run the only thing I need it to run -- the Oracle instance

For example, I have turned off snapshot collection. But I think there are other (more?) impactful changes I can make to help increase the performance and reduce the footprint of the VM.
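For the sizing part, a hedged starting point on 11g is to cap the instance with a small MEMORY_TARGET and shrink oversized datafiles back down. The values and path below are illustrative, not recommendations:

-- cap total instance memory (takes effect after a restart when set via SPFILE)
ALTER SYSTEM SET memory_target = 800M SCOPE = SPFILE;

-- reclaim space from an over-extended datafile
-- (fails with ORA-03297 if data still sits above the target size)
ALTER DATABASE DATAFILE '/u01/app/oracle/oradata/orcl/users01.dbf' RESIZE 512M;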

MYSQL Timezone support

Posted: 16 Jun 2013 07:23 PM PDT

We have a shared hosting plan, and the host says they do not provide MySQL time zone support on it. I can create the time-zone-related tables in our database and populate them with the required data (data from our local MySQL time zone tables). How can I view the code/syntax of the MySQL CONVERT_TZ function?

Thanks Arun
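For what it's worth, CONVERT_TZ is a native compiled-in function, so there is no SQL source to view; it simply looks up the mysql.time_zone% tables you populate. A hedged sketch of loading and using them (the zoneinfo path assumes a Linux box you control):

shell> mysql_tzinfo_to_sql /usr/share/zoneinfo | mysql -u root mysql

SELECT CONVERT_TZ('2013-06-16 12:00:00', 'UTC', 'Asia/Kolkata');
-- returns NULL if the named zones are missing from the time zone tables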

Which design to choose in this case?

Posted: 16 Jun 2013 04:16 PM PDT

I am designing a database for a video game. I have some data structures like {x, y, z, dimension} used across multiple tables.

For instance:

Table player_death

id: int
player_id: int
death_x: int
death_y: int
death_z: int
death_dimension: int

Since {x, y, z, dimension} represents a geographic vector in the video game, a lot of tables contain those columns. My question is: should I create a table named vector containing a pool of all the vectors used in my tables (and then reference them), or should I keep adding {x, y, z, dimension} columns to each table that uses a vector? What is the best practice? I am a beginner in database design, sorry if the question sounds stupid.
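For comparison, a hedged sketch of what the shared-vector design would look like (MySQL syntax assumed; all names are made up). The usual advice is to do this only if identical vectors are genuinely reused; a position that belongs to exactly one event is normally just inlined as columns:

CREATE TABLE vector (
    id        INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    x         INT NOT NULL,
    y         INT NOT NULL,
    z         INT NOT NULL,
    dimension INT NOT NULL
);

CREATE TABLE player_death (
    id              INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    player_id       INT NOT NULL,
    death_vector_id INT NOT NULL,
    FOREIGN KEY (death_vector_id) REFERENCES vector (id)
);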

reset mysql root password on ubuntu

Posted: 16 Jun 2013 03:24 PM PDT

In short: my goal is to reset mysql root password on ubuntu.

Background: I set up an Ubuntu desktop and installed a LAMP stack last week. I then went to do something else, and just got back to carry on today. Either I did not set a MySQL password, or I have forgotten what I typed in.

So I tried to reset it.

I stopped mysql by

sudo stop mysql  

I tried to start mysql with command-line options, but it seems start only takes a service name (??)

So I ran:

sudo /etc/init.d/mysql start --skip-network  --skip-grant-tables  

Went into mysql, fine. Then

mysql> UPDATE mysql.user SET Password=PASSWORD('newpasswd') WHERE User='root';
ERROR 1142 (42000): UPDATE command denied to user ''@'localhost' for table 'user'
mysql> show grants for ''@'localhost';
+--------------------------------------+
| Grants for @localhost                |
+--------------------------------------+
| GRANT USAGE ON *.* TO ''@'localhost' |
+--------------------------------------+
1 row in set (0.00 sec)

If ''@'localhost' has USAGE on *.*, I'd think I could just update the password like this... What did I miss?

I used the instructions on the MySQL official website, the last part. I also tried the Unix one, but with no luck. And I was unsure: when it says 'unix', does it mean all *nix systems or just Unix?

Anyway, any thoughts are welcome! Many thanks!
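A hedged reading of the error itself: an anonymous user ''@'localhost' answered, which means the grant tables were loaded, i.e. the init.d script did not pass --skip-grant-tables through to the server. A retry that launches the daemon directly:

sudo service mysql stop
sudo mysqld_safe --skip-grant-tables --skip-networking &

mysql -u root mysql
mysql> UPDATE mysql.user SET Password = PASSWORD('newpasswd') WHERE User = 'root';
mysql> FLUSH PRIVILEGES;

# then stop the temporary server and start mysql normally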

Deleting large chunks of data

Posted: 16 Jun 2013 02:12 PM PDT

    create table Grid (
        grid_id,        -- primary key
        -- other columns
    )

    create table grid_info (
        grid_id,        -- foreign key to grid_id of table Grid
        column_id,
        column_value
    )

    -- there are around 10 million rows

    select grid_id into #temp from grid where -- some logic

    CREATE CLUSTERED INDEX ix_temp_id ON #temp (grid_id)

    set rowcount 10000
    while 1 = 1
    begin
        delete from grid_info where grid_id in (select grid_id from #temp)
        if @@rowcount = 0
            break
    end
    set rowcount 0

This batch is slow because of the large number of rows to be deleted. What should be considered to make it faster? For example, how do indexing and triggers play a role, and how can you test the performance of your queries, and with what kinds of tools?
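On the indexing point specifically: each batch probes grid_info by grid_id, so a supporting index there usually matters more than anything else, and a joined delete is worth comparing against the IN form. A hedged sketch (T-SQL-style syntax assumed):

CREATE INDEX ix_grid_info_grid_id ON grid_info (grid_id);

-- joined form of the batched delete, to compare under the same SET ROWCOUNT loop
DELETE gi
FROM grid_info gi
JOIN #temp t ON t.grid_id = gi.grid_id;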

A table with a foreign key referencing the table's own primary key

Posted: 16 Jun 2013 08:42 PM PDT

I was looking at the sample database that is provided with the DB2 server installation. This database is the one created when running db2sampl.exe from the "bin" folder. I noticed that the table "ACT" contains a primary key with a foreign key referencing itself (see the SQL below), and I was wondering whether there is any logical reason for this, or is it just a glitch in the design?

CREATE TABLE "DB2ADMIN"."ACT" (
    "ACTNO" SMALLINT NOT NULL,
    "ACTKWD" CHAR(6) NOT NULL,
    "ACTDESC" VARCHAR(20) NOT NULL
);

ALTER TABLE "DB2ADMIN"."ACT" ADD CONSTRAINT "PK_ACT" PRIMARY KEY ("ACTNO");

ALTER TABLE "DB2ADMIN"."ACT" ADD CONSTRAINT "RPAA" FOREIGN KEY ("ACTNO")
    REFERENCES "DB2ADMIN"."ACT" ("ACTNO")
    ON DELETE RESTRICT;

Update performance: clustered versus covering index

Posted: 16 Jun 2013 07:19 PM PDT

I have a simple, 3-column table with about 50 million rows in it. This table gets about 5,000 inserts/updates per second, and perhaps 20 queries per second are executed against the table. The table looks like this:

Controller: Int
ExecutionTime: DateTime
Result: Int

To maximize the efficiency of my queries, I need two indexes: one on (Result) including (ExecutionTime), and one on (Controller, ExecutionTime). These two indexes fully cover my queries - all information is served directly from the indexes, no table lookups required.

I chose nonclustered indexes because I was worried about the performance hit of using a clustered index with so many updates. But it occurs to me that since I am fully covering the queries, this might not be a valid concern - perhaps my covering, nonclustered indexes require the same amount of maintenance as a clustered index would.

So my question: In a table with a lot of inserts/updates, will a covering, nonclustered index usually have a lower UPDATE performance hit than a clustered index?

Thanks for your time and help!

Mysql slow.log with noindex, but without tmp_tables

Posted: 16 Jun 2013 01:17 PM PDT

Is it possible to change anything in the config so that queries not using an index are logged to the slow log, but queries that only hit temporary tables are not? I have quite a few subqueries, so temporary tables (which cannot have an index) get created all the time, and the full query lands in the slow.log. How can I change this?

Why is simple timestamp comparison performing slow

Posted: 16 Jun 2013 11:00 AM PDT

I have a table with 12,582,912 rows and can't figure out why a simple comparison, ad_hot_expire > 1371399048, is so slow (SHOW PROFILES: http://i.imgur.com/6L5HQyW.png). The subquery version is faster, but still too slow for a backend.

Explain http://i.imgur.com/PdOnH1V.png

Queries

SELECT SQL_NO_CACHE id
FROM mod_ad
WHERE active != 0 AND usr_active != 0 AND ad_hot_expire != 0
ORDER BY id DESC LIMIT 9;

SELECT SQL_NO_CACHE id
FROM mod_ad
WHERE active != 0 AND usr_active != 0 AND ad_hot_expire > 1371399048
ORDER BY id DESC LIMIT 9;

SELECT SQL_NO_CACHE id
FROM (
    SELECT id FROM mod_ad
    WHERE active != 0 AND usr_active != 0 AND ad_hot_expire > 1371399048
    LIMIT 9
) as ad
ORDER BY id DESC;

Schema

CREATE TABLE `mod_ad` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `user_id` int(10) unsigned NOT NULL,
  `page_id` int(10) unsigned NOT NULL,
  `cat_id` int(10) unsigned NOT NULL,
  `subcat_id` int(10) unsigned NOT NULL,
  `program` tinyint(1) unsigned NOT NULL,
  `region_id` int(10) unsigned NOT NULL,
  `zone_id` int(10) unsigned NOT NULL,
  `city_id` int(10) unsigned NOT NULL,
  `sort` enum('firm','priv') NOT NULL,
  `type` enum('predaj','kúpa','výmena','darujem','hľadám','ponúkam','iné') NOT NULL,
  `condition` varchar(24) NOT NULL,
  `name` varchar(128) NOT NULL,
  `desc` text NOT NULL,
  `location` varchar(128) NOT NULL,
  `keywords` varchar(255) NOT NULL,
  `price` decimal(10,2) NOT NULL,
  `price_type` varchar(20) NOT NULL,
  `cperson` varchar(128) NOT NULL,
  `firmname` varchar(128) NOT NULL,
  `zip` varchar(5) NOT NULL,
  `email` varchar(255) NOT NULL,
  `tel` varchar(20) NOT NULL,
  `tel2` varchar(20) NOT NULL,
  `web` varchar(255) NOT NULL,
  `video` varchar(255) NOT NULL,
  `marked_expire` int(11) unsigned NOT NULL,
  `top_expire` int(11) unsigned NOT NULL,
  `ad_hot_expire` int(11) unsigned NOT NULL,
  `ad_border_expire` int(11) unsigned NOT NULL,
  `ad_heading_expire` int(11) unsigned NOT NULL,
  `ad_weblink_expire` int(11) unsigned NOT NULL,
  `active` int(10) unsigned NOT NULL,
  `usr_active` int(10) unsigned NOT NULL,
  `added` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `archive` int(10) unsigned NOT NULL,
  `expire` int(11) unsigned NOT NULL,
  `token` varchar(32) NOT NULL,
  `views` mediumint(9) NOT NULL,
  `ip` varchar(15) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `user_id` (`user_id`),
  KEY `page_id` (`page_id`),
  KEY `cat_id_subcat_id` (`cat_id`,`subcat_id`),
  KEY `region_id_zone_id` (`region_id`,`zone_id`),
  KEY `program` (`program`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
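A hedged guess from the schema: nothing indexes ad_hot_expire, so the range predicate has to examine a large share of the table. Something like the index below narrows the range scan; the ORDER BY id DESC LIMIT 9 may still need a sort, so compare both plans with EXPLAIN before and after:

ALTER TABLE mod_ad ADD INDEX idx_ad_hot_expire (ad_hot_expire);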

Should my Postgres backup server be as beefy (IO, processor, RAM) as my master?

Posted: 16 Jun 2013 04:48 PM PDT

I've set up a Postgres database on a server with 2 CPU cores, 4GB of RAM and an SSD volume (on top of RAID 10 of course). This is our (sort-of) "beefy" production database server (I can easily add more RAM, etc. when the time comes).

Now it's time to set up a backup Postgres server (not used for reads or anything else, just strictly a replication/backup). I don't want to have to fork out another $150/mo for the same exact setup for my backup server, and I know that a lot of the server resources used for a production database have to do with complicated queries, sequential scans, etc., so my thinking was that I could get away with a 512MB server with about 1/6 the I/O performance, and 1 CPU core.

Is that thinking correct, or does a replication/backup Postgres server typically need nearly the same specs as a production Postgres server?

Note: I'm using Postgres 9.2.4 with streaming (non-synchronous) replication on an Ubuntu 12.04.2 server.

What is the best way to transport database fields from one database to another?

Posted: 16 Jun 2013 01:41 PM PDT

I have two databases. The table names and field names are different, and the numbers of fields are unequal. I need to transport all fields from one database to the other. I can import the data in CSV format.

I could use a PHP script to accomplish this.

But is there any other way to do this easily, without any script?
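If both ends happen to be MySQL (the engines aren't stated), one scriptless route is a CSV round-trip with an explicit column list on the load side to absorb the differing names and field counts - a hedged sketch with made-up names:

SELECT col_a, col_b, col_c
INTO OUTFILE '/tmp/transfer.csv'
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM source_table;

LOAD DATA INFILE '/tmp/transfer.csv'
INTO TABLE dest_table
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
(dest_col_1, dest_col_2, dest_col_3);   -- map into the differently named fields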

Is it possible to integrate Oracle NoSQL Enterprise solution with Google app engine. If yes, then how..? [closed]

Posted: 16 Jun 2013 05:34 AM PDT

Integrating Google App Engine and Oracle NoSQL for an enterprise application.

Is it possible to avoid filesort?

Posted: 16 Jun 2013 06:11 AM PDT

Is it possible to avoid 'Using temporary' and 'Using filesort' for the following SELECT query? I can't figure out a way to do it.

I tried adding indexes on both top_expire and program, but it didn't help. With the ORDER BY the query takes over 1 second; without it, just 0.003 seconds on localhost.

Query

SELECT ad.*, p.link
FROM (
    SELECT ad.*
    FROM mod_ad ad
    JOIN mod_ad_auta auta ON ad.id = auta.ad_id
    WHERE ad.active != 0 AND ad.usr_active != 0 AND ad.expire > 1371151608
        AND ad.cat_id = '1' AND ad.price <= '10000'
        AND auta.rocnik BETWEEN '1950' AND '2013'
        AND auta.km BETWEEN '0' AND '500000'
    ORDER BY top_expire DESC, program DESC, ad.id DESC
    LIMIT 0,10
) as ad
JOIN pages p ON ad.page_id = p.page_id;

Schema

CREATE TABLE `mod_ad` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `user_id` int(10) unsigned NOT NULL,
  `page_id` int(10) unsigned NOT NULL,
  `cat_id` int(10) unsigned NOT NULL,
  `subcat_id` int(10) unsigned NOT NULL,
  `program` tinyint(1) unsigned NOT NULL,
  `region_id` int(10) unsigned NOT NULL,
  `zone_id` int(10) unsigned NOT NULL,
  `city_id` int(10) unsigned NOT NULL,
  `sort` enum('firm','priv') NOT NULL,
  `type` enum('predaj','kúpa','výmena','darujem','hľadám','ponúkam','iné') NOT NULL,
  `condition` varchar(24) NOT NULL,
  `name` varchar(128) NOT NULL,
  `desc` text NOT NULL,
  `location` varchar(128) NOT NULL,
  `keywords` varchar(255) NOT NULL,
  `price` decimal(10,2) NOT NULL,
  `price_type` varchar(20) NOT NULL,
  `cperson` varchar(128) NOT NULL,
  `firmname` varchar(128) NOT NULL,
  `zip` varchar(5) NOT NULL,
  `email` varchar(255) NOT NULL,
  `tel` varchar(20) NOT NULL,
  `tel2` varchar(20) NOT NULL,
  `web` varchar(255) NOT NULL,
  `video` varchar(255) NOT NULL,
  `marked_expire` int(11) unsigned NOT NULL,
  `top_expire` int(11) unsigned NOT NULL,
  `ad_hot_expire` int(11) unsigned NOT NULL,
  `ad_border_expire` int(11) unsigned NOT NULL,
  `ad_heading_expire` int(11) unsigned NOT NULL,
  `ad_weblink_expire` int(11) unsigned NOT NULL,
  `active` int(10) unsigned NOT NULL,
  `usr_active` int(10) unsigned NOT NULL,
  `added` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `archive` int(10) unsigned NOT NULL,
  `expire` int(11) unsigned NOT NULL,
  `token` varchar(32) NOT NULL,
  `views` mediumint(9) NOT NULL,
  `ip` varchar(15) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `user_id` (`user_id`),
  KEY `page_id` (`page_id`),
  KEY `cat_id` (`cat_id`),
  KEY `region_id` (`region_id`),
  KEY `zone_id` (`zone_id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;

CREATE TABLE `mod_ad_auta` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `ad_id` int(10) unsigned NOT NULL,
  `model` int(10) unsigned NOT NULL,
  `rocnik` smallint(5) unsigned NOT NULL,
  `palivo` varchar(10) NOT NULL,
  `karoseria` varchar(12) NOT NULL,
  `km` mediumint(8) unsigned NOT NULL,
  `prevodovka` varchar(12) NOT NULL,
  `farba` varchar(16) NOT NULL,
  `metaliza` tinyint(1) unsigned NOT NULL,
  `obsah` smallint(5) unsigned NOT NULL,
  `vykon` smallint(5) unsigned NOT NULL,
  `vybava` text NOT NULL,
  PRIMARY KEY (`id`),
  KEY `ad_id` (`ad_id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;

What happens when we create index

Posted: 16 Jun 2013 11:14 AM PDT

What happens exactly when I create an index on the empid column of this employee table?

CREATE TABLE employee
(
    empid number(5),
    ename varchar2(30),
    sal   number(10,2)
);

Let's assume the table contains the below data

empid   ename      sal
1       hema       10000
3       suma       10000
2       bheema     15000
4       bhama      20000
5       panama     25000

My doubt is exactly this: what is created and stored internally in the index to improve performance - the sorted content of empid, or some other information related to the rows?
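In Oracle terms, the statement below builds a B-tree whose leaf entries are the sorted empid values, each paired with the ROWID (physical address) of its row - so it is essentially sorted key plus row address, not a copy of the whole row:

CREATE INDEX emp_empid_idx ON employee (empid);
-- leaf entries, conceptually: (1, rowid), (2, rowid), (3, rowid), (4, rowid), (5, rowid)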

Fast way to delete MySQL records

Posted: 16 Jun 2013 12:16 PM PDT

I have 1.2 million records in a database table (it uses an indexed column as well).

I want to delete the oldest 1 million records. I tried to do it using phpMyAdmin, but it takes a long time.

DELETE FROM `table` WHERE `id` < 1000000  

Is there any way to do it fast?

I have another question: if I do this using an SSH client on the command line, will it be fast?
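Two hedged options. Chunking keeps each transaction small (and works the same from phpMyAdmin or the mysql client over SSH - the client doesn't make the DELETE itself faster, it just avoids PHP timeouts); alternatively, since most of the table is going away, copying the survivors out is often quicker:

-- option 1: chunked delete; repeat until it reports 0 rows affected
DELETE FROM `table` WHERE `id` < 1000000 LIMIT 10000;

-- option 2: keep the survivors, then swap (table names hypothetical)
CREATE TABLE table_new LIKE `table`;
INSERT INTO table_new SELECT * FROM `table` WHERE `id` >= 1000000;
RENAME TABLE `table` TO table_old, table_new TO `table`;
DROP TABLE table_old;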

Several PCs Cannot Connect To MySQL Server at a Certain Time

Posted: 16 Jun 2013 11:54 AM PDT

I have a network of 10 PCs and 1 server. I installed MySQL Community Server v5.6.10 64-bit on the server; all the PCs are Windows XP, using ODBC Connector 5.2(a).

Some of the PCs (previously it was 3, now it has become 4) cannot connect to MySQL at a certain time, namely 2.00 pm. Before that the clients can connect to the server normally, but at 2.00 pm those PCs cannot connect anymore, with this error:

Your connection attempt failed for user xxx@xxx from your host to server at xxx.xxx.xxx.xxx:xxxx.
Cannot connect to MySQL Server on xxx.xxx.xxx.xxx (10055)
Please:
1. Check that mysql is running on server xxx.xxx.xxx.xxx
2. Check that mysql is running on port xxxx
3. Check the xxxxxx has rights to connect to xxx.xxx.xxx.xxx from your address (mysql rights define what clients can connect to the server and from which machines)
4. Make sure you are both providing a password if needed and using the correct password for xxx.xxx.xxx.xxx connection from the host address you're connecting from

From the above error message:

  1. I already check that mysql is running (other PC can access it normally)
  2. I also check MySQL to run in port xxxx
  3. I also check the user xxxxxx has rights to connect (because it can connect to the server before 2 pm)
  4. I also check the password has been provided correctly
  5. I also check the maximum number of connection in my.ini, which I set to 0 (unlimited connection)
  6. I also check the network connection, but find no problem since I can still ping the server and other PC

The only action that makes everything work again is restarting the client PC; however, I don't want that kind of solution. I want to know the cause - is it the ODBC connector, the PC, or something else? Could anyone give me some hints on what I should check?

Thank you

Oracle DBF import brings me to a weird situation

Posted: 16 Jun 2013 01:01 PM PDT

I have a big problem.

One of my clients sent me some files to install on his Oracle server, but as a new Oracle user I am not sure how to do that. What I received are these files:

DRSYS01.DBF
INDX01.DBF
SYSTEM01.DBF
TEMP01.DBF
TOOLS01.DBF
UNDOTBS01.DBF
USERS01.DBF
XDB01.DBF

And if it helps, those files are dated 2007 (maybe I need another version of Oracle?).

I've googled those file names and what I found suggests that maybe the files are from Access. Am I right?

I've opened the files in a hex editor, and at the top they have this hex:

6D6C6B6A1635 >>> mlkj.5

In some parts of the files (in the hex view) there are references to Oracle, so I don't know if it's Access or Oracle. I've also tried to import the files using Navicat,

but the result was just not as expected, because all tables were nulled and all columns had the same name (from f1-fx).

That's all I know. Thank you for your help.

How to drop inheritance?

Posted: 16 Jun 2013 10:05 AM PDT

I am new to PostgreSQL. The situation I have is that someone created a child table inheriting from a parent table, and then dropped the child table. But the parent table still has the "down arrow" sign on it.

I checked and there is no other link/relation on the parent table. Is this really an issue?

I was told that the parent table is still in the 'being inherited' status and is causing a performance issue. How do I resolve this? By removing the 'being inherited' status from the parent table?
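Whether any inheritance link is really left can be checked directly in the catalog; only if a child is still listed does it need detaching. A hedged sketch ('parent_table' and some_child stand in for your names):

SELECT c.relname AS remaining_child
FROM pg_inherits i
JOIN pg_class c ON c.oid = i.inhrelid
WHERE i.inhparent = 'parent_table'::regclass;

-- only needed if a child still shows up:
ALTER TABLE some_child NO INHERIT parent_table;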

Call pl/sql function and discard result

Posted: 16 Jun 2013 04:59 AM PDT

I'm calling a function from PL/SQL, but I do not need the result in this case. Is there a cleaner way to call this function without using a dummy variable?

Works:

junk := performWork();  

Does not work:

performWork();                        -- PLS-00221: performWork is not a procedure or is undefined
EXEC performWork();                   -- this might work from SQL*Plus, but not from PL/SQL
SELECT pkg.performWork() FROM DUAL;   -- PLS-00428: an INTO clause is expected in this SELECT statement
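PL/SQL has no built-in way to discard a function result, so the usual workarounds are a local dummy inside an anonymous block, or a one-line procedure wrapper - a hedged sketch (the NUMBER return type is assumed):

DECLARE
  dummy NUMBER;   -- assumed return type
BEGIN
  dummy := performWork();
END;
/

-- or wrap it once and call the procedure from then on:
CREATE OR REPLACE PROCEDURE performWork_proc IS
  dummy NUMBER;
BEGIN
  dummy := performWork();
END;
/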

Oracle DB 11gR2 netca installation failing

Posted: 16 Jun 2013 04:29 AM PDT

I am getting the below error when I run netca from the command line:

cmd: /u01/app/oracle/product/11.2.0/db_1/bin/netca /silent /responsefile=/oracle_binaries/database/response/netca.rsp

[main] [ 2013-05-17 10:52:47.975 IST ] [CmdlineArgs.parseLogParam:290]  Parsing /log argument...
[main] [ 2013-05-17 10:52:47.976 IST ] [CmdlineArgs.parseLogParam:345]  Returning Log File = null
java.lang.ArrayIndexOutOfBoundsException: 2
        at oracle.net.ca.CmdlineArgs.parseArgs(CmdlineArgs.java:416)
        at oracle.net.ca.InitialSetup.<init>(NetCA.java:3969)
        at oracle.net.ca.NetCA.main(NetCA.java:405)

copy package from schema B to schema C

Posted: 16 Jun 2013 06:29 AM PDT

I am in the following situation: I am using Oracle 11g. I am connected as a user, let's say schema1, which has a lot of permissions. I want to create a script which will copy Package1 (and its body) from schema2 to schema3.

The script will be executed from schema1.

PS: I have already looked for a solution, and I am not interested in export/import or in using other tools such as Toad, SQL Developer, etc.
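One hedged script-friendly approach from schema1 (assuming it has SELECT_CATALOG_ROLE or similar privileges on the other schemas): pull the DDL with DBMS_METADATA, swap the owner, and execute it.

SELECT DBMS_METADATA.GET_DDL('PACKAGE_SPEC', 'PACKAGE1', 'SCHEMA2') FROM dual;
SELECT DBMS_METADATA.GET_DDL('PACKAGE_BODY', 'PACKAGE1', 'SCHEMA2') FROM dual;
-- replace "SCHEMA2". with "SCHEMA3". in the returned DDL, then run it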

If an account has REQUIRE SUBJECT, does it still need a password?

Posted: 16 Jun 2013 09:29 AM PDT

I'm in the process of setting up SSL-secured replication between two servers. Each server has its own public/private keypair, and the CA cert is just the concatenation of the two public certs, like this answer.

Now I'm updating the replication account with REQUIRE SUBJECT "exact subject of the client"

Is there any practical value to also having a password on the replication account (IDENTIFIED BY "secret")?

MySQL users corrupt

Posted: 16 Jun 2013 11:29 AM PDT

I have a strange situation here:

From time to time I cannot log in with any of my MySQL users. I cannot even make a mysqldump.

So I started searching in the MySQL files and found that users.MYD and users.MYI are modified at the time the login problem occurs. The only way to get everything working again is to restore the users.* files from a time when the system was running okay.

I searched for the problem and found that there was a bug in MySQL, but that was in older versions (4.x). I'm running 5.5.

Any ideas? Thanks!
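Since the grant tables in 5.5 are MyISAM, a hedged next step when this happens again (instead of restoring the files) is to check/repair them in place:

CHECK TABLE mysql.user;
REPAIR TABLE mysql.user;
FLUSH PRIVILEGES;

# if you cannot log in at all, repair offline with the server stopped:
myisamchk --recover /var/lib/mysql/mysql/user.MYI   # path assumed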

Databases list not showing up?

Posted: 16 Jun 2013 05:29 AM PDT

This morning when I logged into SQL Server Management Studio (v 10.0.16) I clicked on the databases node and saw all my DBs (they are hosted remotely), as I have done for the last 3 years. This evening, when I click on the databases node I see NOTHING - except the system databases.

According to my hosting company, this is an issue with my local permissions. I have searched everywhere and found nothing to help me. I'm now desperate, and any help would be massively appreciated.
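If the host recently tightened security, the usual culprit is the VIEW ANY DATABASE server permission - without it a login only sees databases it owns, plus the system databases. A hedged check and fix (the GRANT must be run by the host's administrator):

-- what can this login actually see?
SELECT name FROM sys.databases;

-- server-level fix, run as an administrator:
GRANT VIEW ANY DATABASE TO [your_login];   -- hypothetical login name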

I have multiple sources INSERTing into a MySQL innodb table. My periodic aggregation script never makes accurate aggregates. Why?

Posted: 16 Jun 2013 12:29 PM PDT

I apologize in advance if this is a repeat. I'm not really sure how to properly ask for what I'm running into.

I have a large InnoDB table set up. I have 3 sources that all INSERT concurrently at a moderately high volume (300-500 INSERT/s).

I have a PERL script running on a cron job every 5 minutes that aggregates data from the last time it ran. It keeps track of the last record it processed by storing the auto_increment value of the last row in a metadata table.

The aggregates are always off, but not by much. This has been so frustrating because it's just plain simple math (SELECT account_id, SUM(sold) ... GROUP BY account_id). I have a suspicion that it has something to do with the transaction isolation level (REPEATABLE READ).

I recently found FlexViews which looks very cool and might address my problem. But I was wondering if anyone could:

  • Tell me what is going on here. Why is this happening?
  • What can be done to produce 100% accurate aggregates

I'm like 95% sure the auto_increment thing along with transaction commit ordering is hosing me up, but I don't understand enough about DBs to really ask the right question.

Oh, one thing to note: I've already checked the field types. This issue isn't the result of rounding.

Converting dbo file from windows to linux

Posted: 16 Jun 2013 07:29 AM PDT

I have a .dbo file which was created on Windows. This file reloads successfully into the MySQL database on Windows. I need to reload the .dbo file into a MySQL/MariaDB database on Linux. How do I convert the file that was created on Windows so that it works on Linux?
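If the .dbo file is just a SQL dump (which the "reload" wording suggests), there may be nothing to convert beyond Windows line endings - a hedged sketch on the Linux box, with made-up names:

dos2unix dump.dbo             # only needed if CRLF line endings cause trouble
mysql -u root -p mydatabase < dump.dbo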

Allow users to do anything within his own schema but cannot create or drop the schema itself

Posted: 16 Jun 2013 09:29 AM PDT

I have created a schema in SQL Azure using the following permissions on my database role:

CREATE ROLE myrole AUTHORIZATION dbo;
EXEC sp_addrolemember 'myrole', 'myuser';

CREATE SCHEMA myschema AUTHORIZATION dbo;

GRANT ALTER, CONTROL, DELETE, EXECUTE, INSERT, REFERENCES, SELECT, UPDATE, VIEW DEFINITION ON SCHEMA::myschema TO myrole;

GRANT CREATE TABLE, CREATE PROCEDURE, CREATE FUNCTION, CREATE VIEW TO myrole;

With the permissions defined above a user can create/drop his own schema, so to overcome the problem I tried the ALTER ANY SCHEMA permission. But that permission also prevents the user from creating/dropping tables. What permissions are required to allow the user to do anything within his own schema, but not to create or drop the schema itself?

[SQL Server] ldf file size



ldf file size

Posted: 15 Jun 2013 08:22 PM PDT

Dear All,

I have a DB whose recovery model is FULL, and I am taking only full backups. The .ldf is bigger than the .mdf file. I want to reduce the file size. On one site I saw the following solution:

1. Change the recovery model to SIMPLE.
2. Change the ldf initial file size to 1 MB.
3. Change the recovery model back to FULL.

Is this a correct way? What are the side effects?

Regards,
Krishna1
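One side effect of that recipe: switching to SIMPLE breaks the log backup chain, so point-in-time recovery is lost until a new full backup is taken. If FULL recovery is genuinely needed, the supported pattern is to back up the log regularly (which is what FULL expects) and then shrink - a hedged sketch with made-up names and paths:

BACKUP LOG MyDb TO DISK = N'D:\Backup\MyDb_log.trn';
DBCC SHRINKFILE (N'MyDb_log', 1024);   -- logical log file name, target size in MB
-- if log backups will never be taken, switching to SIMPLE once and staying there
-- is the more honest configuration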

[MS SQL Server] How to Export data from SQL Server to CSV using SQL server Management studio?



How to Export data from SQL Server to CSV using SQL server Management studio?

Posted: 15 Jun 2013 12:57 PM PDT

Hi all, good morning,

I have a requirement to export data from SQL Server to CSV using SSMS, because we don't have access to execute an SSIS package from Management Studio using xp_cmdshell. Can anyone please help me: how do I export the data without xp_cmdshell?

Thanks in advance.
Kanagarajan S.
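Two xp_cmdshell-free options: the Export wizard in SSMS (right-click the database > Tasks > Export Data...), or sqlcmd run from a client machine writing straight to a file - a hedged sketch with placeholder names:

sqlcmd -S myserver -d MyDatabase -E -W -s "," ^
  -Q "SET NOCOUNT ON; SELECT * FROM dbo.MyTable" ^
  -o C:\temp\MyTable.csv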

[SQL 2012] Best Way for reporting Purpose:In the Same Instance



Best Way for reporting Purpose:In the Same Instance

Posted: 16 Jun 2013 12:04 AM PDT

In our environment we have SQL Server 2012. We need a database on this instance to be used for Reporting Services. We are already using it for reporting, but we face lots of blocking. What is the best way to serve report queries? We don't need to replicate to a secondary for reporting purposes, as that depends on the application settings.

As of now I take a snapshot of the database every day (via a job) for reporting purposes on the live server itself. Snapshots have the disadvantage of I/O overhead on the live database. If we consider scheduling a backup/restore job instead, I am not sure whether the restore would complete without problems, as there would be existing connections to the database that couldn't be killed. So can anyone suggest which type of high availability can be used here? Also, log shipping and mirroring cannot be done on the same instance. Could anyone help me solve this problem?
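If snapshots remain the chosen approach, scripting the daily step at least keeps the overhead window predictable - a hedged sketch (the logical file name and paths are placeholders; the NAME must match the source database's logical data file):

CREATE DATABASE MyDb_Report_Snap ON
    (NAME = MyDb_Data,
     FILENAME = N'E:\Snaps\MyDb_Report_Snap.ss')
AS SNAPSHOT OF MyDb;
-- drop and recreate on each run; report readers connect to MyDb_Report_Snap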
