A Possible indexing strategy for the given schema?

Posted: 28 Aug 2013 08:46 PM PDT

We have these four tables:

Store (
    row bigint,
    id uniqueidentifier,
    name varchar
)

Products (
    row bigint,
    id uniqueidentifier,
    storeID uniqueidentifier,
    productname varchar
)

Customer (
    row bigint,
    id uniqueidentifier,
    storeID uniqueidentifier,
    fName,
    lName,
    email
)

orders (
    row bigint,
    id uniqueidentifier,
    store_ID uniqueidentifier,
    cust_id uniqueidentifier,
    prod_id uniqueidentifier,
    date datetime
)

After looking at the current design: since row just counts up by 1 with each insert, I don't see any need to index this column. The row column is present on all the tables, and it's hard to imagine it ever showing up in a WHERE clause to justify any indexing strategy.

However, all the tables have an id column which uniquely identifies that row. It will be a highly random, unordered value, so I figured this column would make a great primary key for the table, hence a clustered index on this column along with the name associated with that row. So the indexing would look like this for each table:

 CREATE CLUSTERED INDEX CIX_Store ON Store (id, name);

--> This will speed up queries that look up the id for a given store name, or vice versa...

 CREATE CLUSTERED INDEX CIX_Products ON Products (id, storeID, productname);

--> Easy to imagine a lot of queries trying to find a product id for a given product name, or the name for a given product id. But I'm not sure if I should be using a primary key for this combination?

 CREATE CLUSTERED INDEX CIX_Customer ON Customer (id, fName, lName);

--> I imagine there will be a lot of queries taking a name (first, last) to find the id so we can join the tables together to find relevant data. So I was guessing a primary key on these columns, or a clustered index. Am I thinking right on this?

 CREATE CLUSTERED INDEX CIX_orders ON orders (id, cust_id, store_ID, prod_id);

--> This table is the most interesting because it has id columns that link it to the other three. It is very perplexing. I would think the first column listed should be the order id (same as the tables above), since I see this table appearing in a lot of queries for a given order id, store_ID, or cust_id. Since this table will most likely have all of those ids in a join statement, I would think that creating a primary key over all the id columns is a good idea. But is it?
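
For concreteness, here is what the trade-off looks like in T-SQL, with the more conventional alternative of a narrow clustered primary key on id plus separate nonclustered indexes for the lookup columns (a sketch only; index names are illustrative, and note that random uniqueidentifier values make for a fragmenting clustered key):

-- The narrow-key alternative, sketched for two of the tables:
ALTER TABLE Store ADD CONSTRAINT PK_Store PRIMARY KEY CLUSTERED (id);
CREATE NONCLUSTERED INDEX IX_Store_name ON Store (name);

ALTER TABLE orders ADD CONSTRAINT PK_orders PRIMARY KEY CLUSTERED (id);
CREATE NONCLUSTERED INDEX IX_orders_cust ON orders (cust_id);
CREATE NONCLUSTERED INDEX IX_orders_store ON orders (store_ID);
CREATE NONCLUSTERED INDEX IX_orders_prod ON orders (prod_id);

Only one clustered index is allowed per table, so widening the clustered key versus adding nonclustered indexes is the real decision being asked about here.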


Thank you for any assistance from a struggling DBA.

v/r

How to make Postgres autovacuum not impact performance?

Posted: 28 Aug 2013 05:49 PM PDT

I am running into problems where Postgres autovacuum processes are interfering with the overall performance of my database, making my SELECT, INSERT, and UPDATE queries slower than usual. What settings and optimization strategies will make autovacuum have less of an impact on performance?

Is there a way to make it not consume as much memory/CPU so that client queries can run more efficiently?
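
For reference, these are the knobs usually involved; the values below are illustrative only, and busy_table is a hypothetical name, not a recommendation:

-- In postgresql.conf (server-wide; reload required):
--   autovacuum_vacuum_cost_delay = 20ms   -- sleep between I/O bursts; higher = gentler
--   autovacuum_vacuum_cost_limit = 200    -- work allowed per burst; lower = gentler
--   autovacuum_max_workers = 3            -- concurrent autovacuum workers
--   maintenance_work_mem = 256MB          -- memory each vacuum worker may use

-- Per-table overrides for an especially hot table:
ALTER TABLE busy_table SET (autovacuum_vacuum_cost_delay = 20);
ALTER TABLE busy_table SET (autovacuum_vacuum_scale_factor = 0.05);

Throttling autovacuum harder (larger delay, smaller cost limit) trades foreground latency for slower cleanup, so bloat can grow; tuning usually means making it gentler but more frequent rather than disabling it.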

Postgres nested hstore select query

Posted: 28 Aug 2013 05:36 PM PDT

I have a carts table with an items hstore column. An example entry in this column is:

carts.items row#1 = {"1614"=>{:quantity=>"100"}, "1938"=>{:quantity=>"50"}, "1983"=>{:quantity=>"100"}, "1322"=>{:quantity=>"10"}, "1691"=>{:quantity=>"25"}, "1734"=>{:quantity=>"20"}}
carts.items row#2 = {"1614"=>{:quantity=>"50"}}

You will notice that one id (1614) appears in both rows, but with different quantities.

I want to write a query that will return a table with the item id counts and the total quantity. It should look like this:

 item_id | count | total
---------+-------+-------
   1614  |   2   |  150
   1938  |   1   |   50
   1983  |   1   |   50
   1322  |   1   |  100

Here is the query that I am working with:

SELECT
  skeys(carts.items) AS item_ids,
  COUNT(*) AS count,
  svals(carts.items) AS items
FROM carts
GROUP BY
  skeys(carts.items),
  svals(carts.items)

It returns:

 item_id | count | total
---------+-------+-------------------
   1614  |   1   |  {:quantity=>100}
   1614  |   1   |  {:quantity=>50}
   1938  |   1   |  {:quantity=>50}
   1983  |   1   |  {:quantity=>50}
   1322  |   1   |  {:quantity=>100}

I have also tried:

SELECT key, count(*)
FROM (SELECT (each(items)).key FROM carts) AS stat
GROUP BY key
ORDER BY count DESC, key;

Which gives me this:

 item_id | count
---------+-------
   1614  |   2
   1938  |   1
   1983  |   1
   1322  |   1
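
Building on that second attempt, here is a sketch that also sums the quantities; it assumes every value has the literal shape {:quantity=>"N"}, so the first run of digits in the value is the quantity:

SELECT (kv).key AS item_id,
       count(*) AS count,
       sum(substring((kv).value FROM '[0-9]+')::int) AS total
FROM (SELECT each(items) AS kv FROM carts) AS stat
GROUP BY (kv).key
ORDER BY count DESC, item_id;

Storing a serialized Ruby hash inside each hstore value is what forces the regex here; storing the quantity as the hstore value directly (e.g. "1614"=>"100") would make the cast trivial.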

Deciding the best way to partition (Postgresql)

Posted: 28 Aug 2013 11:56 AM PDT

I have been trying to research this for quite a while now and am having a hard time figuring it out. I am out of ideas other than setting up an extra test database and running through all of the possibilities to see how they perform. Hopefully someone can pass on a bit of knowledge to prevent having to do that.

I have a database (PostgreSQL 9.2 with PostGIS) with a large amount of spatial data in it. Typically I look at it by location and date; however, I do have to be able to search across all of the fields pretty regularly. The amount of data we receive in a day varies and continues to grow over time, so we can't guarantee the size of a partition on disk, or the number of rows in one, if we partition by date. I have read that capping the number of rows in a partition helps, because you can basically set a maximum size for each partition and allow for faster reads, but I do not have a sequential id field to partition on either.

I have found functions that will allow me to partition by date, or possibly partition on a calculated serial field (not sure on this one yet). My main question for now is: how should I decide which is the best way to do it? Is there an inherent benefit to one way or the other? Is there even a way to tell which will be better without implementing both and testing them?
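
For what it's worth, on 9.2 both options end up as inheritance-based partitioning; a minimal sketch of the date flavor, with hypothetical table and column names:

-- Parent table (stand-in schema):
CREATE TABLE readings (
    id       bigserial,
    payload  text,            -- stand-in for the PostGIS geometry column
    obs_date date NOT NULL
);

-- One child per month; the CHECK constraint lets constraint_exclusion
-- skip irrelevant partitions when queries filter on obs_date:
CREATE TABLE readings_2013_08 (
    CHECK (obs_date >= DATE '2013-08-01' AND obs_date < DATE '2013-09-01')
) INHERITS (readings);

CREATE INDEX ON readings_2013_08 (obs_date);

-- Rows must be routed into the right child by an INSERT trigger or by
-- the application, and constraint_exclusion = partition should be set.

A serial-based scheme looks identical except that the CHECK is a range over the computed id, which is what gives you the fixed rows-per-partition property.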

Postgres DDL and DML best practices and coding standards

Posted: 28 Aug 2013 11:53 AM PDT

Are there best practices or coding/design conventions for Postgres, covering both DDL and DML/SQL, that you could share and recommend?

I am looking for something similar to what Google has for programming languages.

Thank you in advance.

What is the difference between Completion and Success in SQL Server Maintenance Plans?

Posted: 28 Aug 2013 11:59 AM PDT

I'm using SQL Server 2008/R2 Maintenance Plans to set up some automated backups/cleanups and notifications. One thing that I can't figure out is: what is the difference between Success and Completion?

The way I think about it is that Completion doesn't care whether the job failed or succeeded; but if it failed, then obviously it didn't complete... maybe?

Perhaps it's left there for scripts where there might not be a way to tell what the actual outcome was, just that the script completed, but this still feels like it's the same as Success and Failed.

I couldn't find much documentation for it either.

Oracle 11gr2 instant client installation

Posted: 28 Aug 2013 10:45 AM PDT

I have an application that uses Oracle 11gr2. For the Oracle client installation, I first install InstantClient and then run the setup again, choose Custom, and choose "Oracle Database Utilities" since the InstantClient doesn't have exp/imp with it.

My question is, instead of doing 2 steps for installation, how can I just do it once? What corresponding options do I have to select in Custom mode to install InstantClient? I have attached a pic for the complete list of items available.

Thanks.

[Screenshot: Oracle 11gr2 Custom client install options]

Incorrect information in .frm file after a power outage?

Posted: 28 Aug 2013 10:25 AM PDT

  • CentOS release 6.3 (Final)
  • x86_64
  • mysql Ver 14.14 Distrib 5.5.31, for Linux (x86_64) using readline 5.1

SHOW SLAVE STATUS\G:

             Slave_IO_Running: Yes
            Slave_SQL_Running: No
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 1696
                   Last_Error: Error executing row event: 'Failed to read from the .par file'

/var/log/mysqld.log:

130706  2:26:04 [ERROR] /usr/libexec/mysqld: Incorrect information in file: './reportingdb/v3_ban_date.frm'
130706  2:26:04 [ERROR] Slave SQL: Error executing row event: 'Failed to read from the .par file', Error_code: 1696
130706  2:26:04 [Warning] Slave: Failed to read from the .par file Error_code: 1696
130706  2:26:04 [Warning] Slave: Incorrect information in file: './reportingdb/v3_ban_date.frm' Error_code: 1033

v3_ban_date is a partitioned table. The .par file is... empty:

# ls -l v3_ban_date.par
-rw-rw---- 1 mysql mysql 0 Jul  6 01:54 v3_ban_date.par

and this is the partition clause embedded in the .frm file (the ASCII column of hexdump -v -C v3_ban_date.frm, reassembled here for readability):

PARTITION BY RANGE (TO_DAYS(dt))
(PARTITION p00 VALUES LESS THAN (0) ENGINE = InnoDB,
 PARTITION p1 VALUES LESS THAN (734653) ENGINE = InnoDB,
 PARTITION p02 VALUES LESS THAN (734745) ENGINE = InnoDB,
 PARTITION p031 VALUES LESS THAN (734863) ENGINE = InnoDB,
 PARTITION p04 VALUES LESS THAN (734959) ENGINE = InnoDB,
 PARTITION p05 VALUES LESS THAN (735103) ENGINE = InnoDB,
 PARTITION p06 VALUES LESS THAN (735160) ENGINE = InnoDB,
 PARTITION p07 VALUES LESS THAN (735210) ENGINE = InnoDB,
 PARTITION MERGER_2013324 VALUES LESS THAN (735316) ENGINE = InnoDB,
 PARTITION pcurrent_2013324 VALUES LESS THAN (735317) ENGINE = InnoDB,
 PARTITION MERGER_201375 VALUES LESS THAN (735419) ENGINE = InnoDB,
 PARTITION pcurrent_201375 VALUES LESS THAN (735420) ENGINE = InnoDB)

The .par partition definitions on the first slave look like this:

00000000  1f 00 00 00 74 00 01 07  0c 00 00 00 0c 0c 0c 0c  |....t...........|
00000010  0c 0c 0c 0c 0c 0c 0c 0c  60 00 00 00 70 30 30 00  |........`...p00.|
00000020  70 31 00 70 30 32 00 70  30 33 31 00 70 30 34 00  |p1.p02.p031.p04.|
00000030  70 30 35 00 70 30 36 00  70 30 37 00 4d 45 52 47  |p05.p06.p07.MERG|
00000040  45 52 5f 32 30 31 33 33  32 34 00 70 63 75 72 72  |ER_2013324.pcurr|
00000050  65 6e 74 5f 32 30 31 33  33 32 34 00 4d 45 52 47  |ent_2013324.MERG|
00000060  45 52 5f 32 30 31 33 38  32 38 00 70 63 75 72 72  |ER_2013828.pcurr|
00000070  65 6e 74 5f 32 30 31 33  38 32 38 00              |ent_2013828.|
0000007c

I've tried copying this file to the second slave, using bvi to edit the date to 201375 (corresponding to what is in the .frm file), and then starting the second slave, but it didn't work.

So the questions are:

  1. How do I know what information is incorrect in the .frm file?
  2. Can I re-create the .par file from the one on the first slave?

PS: I know I can rebuild the second slave but I think this is an interesting challenge.

MySQL multiple joins, not getting one value, but no error

Posted: 28 Aug 2013 10:02 AM PDT

I apologize if this is a noob question. I have already tried google without much success, so I'm trying this.

I have the query below, which works fine for the most part, but I am not getting a value from the f.date_modified field. All other fields populate fine, including r.rating (which comes from a similar join to the favorite one), and the query does not throw an error, so I'm not sure what else to try.

SELECT i.*, u.username, r.rating, f.date_modified
FROM image i
INNER JOIN user u ON u.uid = 1
LEFT JOIN rating r ON r.iid = 32 AND r.uid = 1
LEFT JOIN favorite f ON f.iid = 32 AND f.uid = 1
WHERE i.iid = 32
GROUP BY i.iid

FAVORITE table includes only:
- uid (int)
- iid (int)
- date_modified (timestamp)

Any help or advice would be appreciated. Thanks, TE

How to check what stats were used in query

Posted: 28 Aug 2013 09:35 AM PDT

Respected DBAs... is it possible to tell when statistics were last accessed by a query, using T-SQL? I'm trying to ascertain which statistics are involved in a query. I know I can check the graphical query plan, but I wondered whether there is a table (or tables) with this information.
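
For what it's worth, the catalog exposes when statistics were last updated, not when they were last read; a sketch against a hypothetical table:

SELECT s.name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID('dbo.MyTable');  -- hypothetical table name

I'm not aware of a DMV that records which statistics a particular query loaded; the execution plan remains the usual source for that.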

Thanks!

Why doesn't running a full sample *always* improve query performance

Posted: 28 Aug 2013 10:41 AM PDT

Conventional wisdom suggests that gathering statistics with a full sample provides SQL Server with the optimum information for making the best decisions when forming a query plan, and therefore the best performance.

However, I have a number of queries (produced by Business Objects) which prefer a sample below 100%; the exact percentage required for success varies. What I want to know (but am struggling to find online) is why a 100% sample doesn't always produce the best performance. Index maintenance is regular, every night following the Ola Hallengren method.

The nature of the queries makes them difficult to post (they contain sensitive information), but those who are familiar with Business Objects queries will know they can be unwieldy... rather unusual in their construction; I often think it's their setup which hurts performance.
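
For anyone reproducing the comparison, the sample size is set per statistics update; a sketch with a hypothetical table:

UPDATE STATISTICS dbo.MyFactTable WITH FULLSCAN;          -- 100% sample
UPDATE STATISTICS dbo.MyFactTable WITH SAMPLE 50 PERCENT; -- partial sample

One hedged explanation for the observation: the histogram is limited to a fixed number of steps, so a different sample can shift step boundaries and happen to produce estimates that favour a better plan; a fuller sample is more accurate, but accuracy is not a guarantee of a faster plan.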

Thanks!

Ranking MySQL, group by, with ties

Posted: 28 Aug 2013 09:13 AM PDT

I am trying to rank some MySQL results and this works quite well (from here):

SET @rank := 0;
SET @prev := NULL;
UPDATE rankings,
      (SELECT @rank := IF(@prev = subset, @rank + 1, 1) AS rank,
              id, subset, score, @prev := subset
       FROM rankings
       ORDER BY subset, score DESC) AS rankcalc
SET rankings.lrank = rankcalc.rank
WHERE rankings.id = rankcalc.id;

However, it doesn't allow for ties: if you have the same number of points as someone else, one of you will get a different rank from the other. Using other ideas from Google I have been able to get the rank working correctly, but not able to use any kind of GROUP BY function, which is quite important.
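
For reference, a variable-based sketch that gives equal scores an equal (dense) rank within each subset, i.e. 1, 1, 2 rather than 1, 1, 3; untested against your schema, so treat it as a starting point:

SET @rank := 0;
SET @prev_subset := NULL;
SET @prev_score := NULL;
UPDATE rankings,
      (SELECT id,
              @rank := IF(@prev_subset = subset,
                          IF(@prev_score = score, @rank, @rank + 1),
                          1) AS rank,
              @prev_subset := subset,
              @prev_score := score
       FROM rankings
       ORDER BY subset, score DESC) AS rankcalc
SET rankings.lrank = rankcalc.rank
WHERE rankings.id = rankcalc.id;

The extra @prev_score comparison is the only change from the original pattern: when the score repeats within a subset, the rank is reused instead of incremented.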

Please help! Any ideas gratefully received.

Get missed transactions from slave

Posted: 28 Aug 2013 08:42 AM PDT

I'm trying to write a disaster recovery plan. I have one-way replication with 1 master (A) and 3 slaves (B, C, D):

    A
   /|\
  / | \
 B  C  D

Let's say my master crashes and I lose all data on that server. I still have 3 slaves, and I'll promote the most up-to-date one to be my new master. Say slave B is up to date while C and D are not; at the point I promote B to master, my slaves show the following for SHOW SLAVE STATUS\G:

slaveB> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
      Relay_Master_Log_File: master-bin.000002
        Exec_Master_Log_Pos: 1476884

slaveC> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
      Relay_Master_Log_File: master-bin.000002
        Exec_Master_Log_Pos: 78684

slaveD> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
      Relay_Master_Log_File: master-bin.000001
        Exec_Master_Log_Pos: 746848

In order to promote B to master, I want to apply all missing transactions on C and D so that, before B becomes the new master and starts to receive queries from applications, all slaves are in a consistent state. How can I find the transactions missing from C and D in the binary logs on B? (I have the --log-slave-updates option enabled on all servers.)

For legacy reasons, the environment uses MySQL 5.0.
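
With --log-slave-updates and no GTIDs, the mechanics come down to reading B's own binary logs with mysqlbinlog; a rough sketch, with the caveat that B's byte offsets do not match the master's, so C's and D's stopping points have to be located by inspecting event contents or timestamps rather than by reusing Exec_Master_Log_Pos (file names and positions below are placeholders):

# Dump B's binlog and find the last event C (or D) actually executed:
mysqlbinlog slaveB-bin.000012 > /tmp/b-binlog.sql

# Once the corresponding position on B is identified, replay from there:
mysqlbinlog --start-position=<pos_on_B> slaveB-bin.000012 | mysql -h slaveC

After C and D have caught up this way, they can be repointed at B with CHANGE MASTER TO.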

How to find a previous date, and the counts between those dates, based on these tables

Posted: 28 Aug 2013 08:37 AM PDT

I have yet to find an answer to this problem; can anyone help? I figured it out when I only had two tables; now I have broken them up into three tables.

The SQL Fiddle is here: SQL Fiddle Code.

I'm looking to keep the tables as-is but find the previous date that a song was played; once that's found, count the shows between the current and previous date. Right now the previous date finds the previous show date, but not based on an individual song_id, which is what I'm trying to do. If it would help, here's my original code for the previous date based on only two tables, which worked: WORKING ORIGINAL TWO TABLE ONLY Fiddle

This has been holding me up for weeks. Thank you for any assistance!

How to model medium.com paragraph comment

Posted: 28 Aug 2013 09:20 AM PDT

If you have used medium.com, I am interested to know more about how to model readers' paragraph comments in a database.

Specifically, how are the connections between these comments and paragraphs modeled?

My initial thought is (1) using the order of the paragraphs in the article, e.g. paragraph 1, 2... This won't work if the author deletes/adds/moves paragraphs, which messes up the order.

Then I thought about (2) assigning an id to each paragraph, e.g. using an SHA hash of the paragraph as its unique id. But that won't work if the author edits the paragraph.
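
A third option (an assumption on my part about how such sites work, not a confirmed description of medium.com): assign each paragraph a server-generated surrogate id at creation time and store it with the paragraph, so the identity survives both edits and reordering. A sketch:

CREATE TABLE paragraph (
    id         bigint PRIMARY KEY,   -- assigned once at creation; never changes
    article_id bigint NOT NULL,
    position   int    NOT NULL,      -- display order; free to change on reorder
    body       text   NOT NULL       -- free to change on edit
);

CREATE TABLE paragraph_comment (
    id           bigint PRIMARY KEY,
    paragraph_id bigint NOT NULL REFERENCES paragraph (id),
    author_id    bigint NOT NULL,
    body         text   NOT NULL
);

The key move is separating identity (id) from both position and content, which is exactly what breaks schemes (1) and (2).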

Could anyone help me?

Btw, could the solution extends to sentences/phrases comments as well?

ORA-1033: ORACLE initialization or shutdown in progress

Posted: 28 Aug 2013 08:17 AM PDT

I am stuck with a little Oracle problem. My PC was shut down improperly after my graphics card freaked out, which meant Oracle 11g Express did not shut down properly either. As a result I get the above error when I try to connect. My PC has been on for 3 hours and I still get the error, so it is safe to assume something is wrong.

The usual way of remedying the problem is to use the following sequence of commands from the command line:

sqlplus /nolog
SQL> connect / as sysdba
SQL> shutdown abort
SQL> startup nomount
SQL> alter database mount;
SQL> alter database open;

However, when I give the shutdown command, I get the error

ORA-12560: TNS:protocol adapter error

So now I can't mount my db. Any help would be appreciated.
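
One hedged observation: on Windows, ORA-12560 from a local connect / as sysdba usually means the session isn't attaching to an instance at all, typically because ORACLE_SID isn't set in that console or the Windows service is down, so a first check might be:

C:\> set ORACLE_SID=XE
C:\> net start OracleServiceXE
C:\> sqlplus /nolog
SQL> connect / as sysdba

If the service name differs on your install, running net start with no arguments lists what is currently running.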

SQL Server Database Design for Reporting

Posted: 28 Aug 2013 08:04 AM PDT

We are designing a reporting solution for survey results. Although datasets are reasonable in size (rarely more than 500,000 respondents and 50 questions), performance is obviously a major concern.

Due to the nature of the solution, most queries return aggregated values and no locks are needed.

Storing answers in a "normal" tabular format (i.e. a column for each question and a row for each respondent) works well in terms of performance, and allows us to query the data like so:

SELECT COUNT(*) FROM Answers WHERE Gender = 'M' and Age < 20   

However, this design requires a new table for each survey as the questions (columns) differ, which is obviously not an ideal solution.

Therefore, we are considering a design where we store answer data in a table that would basically just hold a respondent ID, a question ID and an answer value, thereby "transposing" the data (i.e. there would be a row for each respondent/question combination in the Answers table).
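
In table form, the proposed design might look like this (a sketch; the names and types are assumptions):

CREATE TABLE Answers (
    RespondentID int           NOT NULL,
    QuestionID   varchar(50)   NOT NULL,
    Value        nvarchar(100) NULL,
    CONSTRAINT PK_Answers PRIMARY KEY (RespondentID, QuestionID)
);

-- Filters like QuestionID = 'Age' AND Value > 18 would lean on:
CREATE NONCLUSTERED INDEX IX_Answers_Question ON Answers (QuestionID, Value);

Note that with Value stored as a string, numeric comparisons such as Value > 18 require a cast, which can defeat the index; typed value columns (one per data type) are a common refinement of this key/value pattern.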

In this design, we would have to use EXISTS conditions (or joins) to filter our data, e.g.:

SELECT COUNT(*)
FROM Answers AS A1
WHERE A1.QuestionID = 'Gender' AND A1.VALUE = 'M' AND EXISTS
(
    SELECT *
    FROM Answers AS A2
    WHERE A2.RespondentID = A1.RespondentID AND A2.QuestionID = 'Age' AND A2.Value > 18
)

This would allow us to handle any survey without changing the database schema, but we are concerned about what the impact might be on performance.

Or perhaps there is a better way to deal with this issue altogether?

Is there a difference between granting SELECT a user and adding them to the db_datareader role?

Posted: 28 Aug 2013 08:48 AM PDT

I was asked to grant SELECT, INSERT and UPDATE to a given user for all tables in a database. I ended up adding the user to db_datareader and granting them INSERT and UPDATE at the database level.

But that got me thinking: what is the difference (if any) between granting a user SELECT permission at the database level and adding them to the db_datareader role? And is there a best practice one way or the other?
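
For comparison, the two forms side by side (a sketch; the principal name is hypothetical):

USE MyDb;
-- Role membership (2008-era syntax):
EXEC sp_addrolemember 'db_datareader', 'app_user';
-- Database-scoped permission; broadly the same effect, but it shows up
-- in sys.database_permissions rather than as role membership:
GRANT SELECT TO app_user;

Both cover current and future objects; the visible difference is mainly where the permission is recorded and how it is reported by permission audits.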

Reasons to Backup (or not backup) QA and Dev Databases

Posted: 28 Aug 2013 11:16 AM PDT

Pretty simple question: is it standard/best practice to back up dev and QA instances of the database?

Assume that the schema information is kept in some form of source control.

What is causing Waiting for table level lock errors?

Posted: 28 Aug 2013 11:51 AM PDT

We have had the database hang twice already and are trying to find the cause.

show processlist;
Waiting for global read lock | INSERT INTO {myisam_table} ...

At that point the disk was full, so we thought the problem was solved after giving it some more space, but the next day at midday it hung again:

show processlist;
Waiting for table level lock | UPDATE {myisam_table} ...

What could be causing it?

MySQL default engine: InnoDB.

The database has a mixture of tables using both the MyISAM and InnoDB engines.

Log posted here:

http://arturito.net/2013/08/28/mysql-waiting-for-table-level-lock-errors/
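
When it next hangs, it may help to capture who holds what; both of these are standard MySQL commands:

SHOW FULL PROCESSLIST;               -- every session, its state and current statement
SHOW OPEN TABLES WHERE In_use > 0;   -- tables currently locked or in use

With MyISAM in the mix, a single long write, or a backup taking a global read lock (e.g. FLUSH TABLES WITH READ LOCK), is enough to queue everything behind it, which would match both states shown above.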

Does TRIGGER improve the performance?

Posted: 28 Aug 2013 12:55 PM PDT

I understand that TRIGGERs are useful, as they take care of subsequent queries, but do they have an advantage from a performance point of view too?

Replacing two or three queries such as

INSERT INTO table1 ...
UPDATE table2 ...
UPDATE table3 ...

with a trigger-based query such as

INSERT INTO table1 ...

with this trigger:

CREATE TRIGGER test
AFTER INSERT ON table1
FOR EACH ROW
BEGIN
  UPDATE table2 ... WHERE id = NEW.id;
  UPDATE table3 ... WHERE id = NEW.id;
END
  1. Do we still have the same three queries with identical performance? I mean, does it matter whether we perform the subsequent queries ourselves or the TRIGGER does?

  2. What if we use an API (like issuing queries through PHP)? The subsequent queries are internal, with no need for a connector/driver. Does the TRIGGER improve performance?

Additional information: the database is MySQL 5.5 with InnoDB.

CASE Statement - 1st case is being ignored

Posted: 28 Aug 2013 04:52 PM PDT

I have a table (usr_t_user_reg) in which I register all the users of the database. Now I want to create a view (usr_v_user_not_reg) which shows me all the users that I haven't registered yet, or users that are no longer on the database.

The table looks like this:

| username | db_instance | forename | surname | ...... |
|----------|-------------|----------|---------|--------|
|  xxxxx   |    DB1      |   xxxxx  |   xxxx  | ...... |
|  xxxxx   |    DB1      |   xxxxx  |   xxxx  | ...... |
|  xxxxx   |    DB2      |   xxxxx  |   xxxx  | ...... |

Here is the statement to create the view:

CREATE OR REPLACE FORCE VIEW usr_v_user_not_reg AS
  SELECT username "User", db_instance "Instance",
  (
    CASE
      WHEN username IN
      (
        SELECT username
        FROM SYS.dba_users
        MINUS
        SELECT username
        FROM usr_t_user_reg
      ) THEN 'not registered'
      WHEN username IN
      (
        SELECT username
        FROM usr_t_user_reg
        MINUS
        SELECT username
        FROM SYS.dba_users
      ) THEN 'no longer present'
    END
  ) "Status"
  FROM usr_t_user_reg
  WHERE db_instance = 'DB1'
  AND username NOT IN
  (
    SELECT username
    FROM usr_t_user_reg
    INTERSECT
    SELECT username
    FROM SYS.dba_users
  );

At first I thought it was working fine, but then I noticed that the first case is effectively being ignored: I have some users who are not registered yet, and they are not displayed in the output. If I use the first case as a standalone query

SELECT username
FROM SYS.dba_users
MINUS
SELECT username
FROM usr_t_user_reg
WHERE db_instance = 'DB1';

then it works. As soon as I use it in the whole query (even without the CREATE command) it doesn't work. The second case works without any problems.

I think I'm missing something, but I don't know what.

Edit: I think I forgot to mention something. There are multiple instances on the database, and with the view I want to know on which instance there is a user who isn't registered yet or who isn't available anymore. The view I want to create should have output similar to this:

| username | instance |     status      |
|----------|----------|-----------------|
|   XYA    |   DB1    | not registered  |
|   XYB    |   DB2    | not registered  |
|   XYC    |   DB3    | non-existent    |

SQL Server Replication: "ALTER TABLE ALTER COLUMN" is not propagated to subscribers

Posted: 28 Aug 2013 09:17 AM PDT

We are running SQL Server 2008 R2 SP1 as publisher & distributor, and SQL Server 2005 SP3 as subscriber. The replication of schema changes is activated, and the replication has been running for years, including frequent schema changes (new columns, new constraints, etc.).

The following instruction was sent on the publisher:

use myDatabase
alter table Tbl_TypeLignePaye
      alter column natureTypeLignePaye nvarchar(12)
go

where the column natureTypeLignePaye was originally nvarchar(3) null.

The query ran without errors on the main database. The result is the following:

  1. The field natureTypeLignePaye still appears as nvarchar(3) in the object browser
  2. The column properties screen indicates an nvarchar type with a length of 12
  3. The change was not propagated to the subscribers

Any idea on what is going on with this database?

[Screenshots: the publisher's object browser window vs. the column properties window, showing inconsistent field type and length]

Is it possible to have extra tables in a Slave with MySQL Replication

Posted: 28 Aug 2013 03:17 PM PDT

As my title mentions, I have a Master and a Slave database.

The Master is for operational data and my Slave is mainly for reporting stuff.

The issue is that I need to create extra tables on the reporting side that can't be on the master, but the way my replication is currently set up (the simplest one mentioned by the official docs), this breaks replication.

How could I add tables on the Slave without the Master caring about it? Is it even possible?
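
One approach, assuming the extra tables can live in their own schema (the schema name and pattern below are hypothetical): tell the slave's SQL thread to ignore that schema, then create the reporting tables only on the slave.

# my.cnf on the slave only:
[mysqld]
replicate-wild-ignore-table = reporting_extras.%

Everything else replicates as before; the slave simply skips any event touching reporting_extras.*, so tables created there locally can never collide with replicated statements.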

Oracle 11g listener fails with ORA-12514 and ORA-12505 errors

Posted: 28 Aug 2013 12:01 PM PDT

I run an instance of Oracle 11g locally on my development machine and can connect to the local instance directly via SqlPlus:

c:\>sqlplus ace

SQL*Plus: Release 11.2.0.2.0 Production on Mon Mar 11 11:50:20 2013

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Enter password:

Connected to:
Oracle Database 11g Express Edition Release 11.2.0.2.0 - Beta

SQL> select count(*) from my_table;

  COUNT(*)
----------
      5297

But I cannot connect to it via the listener:

c:\>sqlplus -L "user/pw@(DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = XE)))"

SQL*Plus: Release 11.2.0.2.0 Production on Mon Mar 11 11:52:40 2013

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

ERROR:
ORA-12514: TNS:listener does not currently know of service requested in connect descriptor

SP2-0751: Unable to connect to Oracle.  Exiting SQL*Plus

Similarly, if I connect via SQL Developer I get an error (albeit ORA-12505, TNS:listener does not currently know of SID given in connect descriptor).

This instance has been stable and working fine for a year or more until today, a Monday morning. Our corporate IT do sometimes push new policies and updates over the weekend, so I'm assuming that something has changed, but I've not been able to work out what.

I've restarted the service and the listener several times; the listener log doesn't give any clues.

The listener seems fine:

c:\>lsnrctl status

LSNRCTL for 32-bit Windows: Version 11.2.0.2.0 - Beta on 11-MAR-2013 11:55:33

Copyright (c) 1991, 2010, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for 32-bit Windows: Version 11.2.0.2.0 - Beta
Start Date                11-MAR-2013 11:17:30
Uptime                    0 days 0 hr. 38 min. 3 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Default Service           XE
Listener Parameter File   C:\oraclexe\app\oracle\product\11.2.0\server\network\admin\listener.ora
Listener Log File         C:\oraclexe\app\oracle\diag\tnslsnr\FBC305BB46560\listener\alert\log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(PIPENAME=\\.\pipe\EXTPROC1ipc)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=machine.domain.com)(PORT=1521)))
Services Summary...
Service "CLRExtProc" has 1 instance(s).
  Instance "CLRExtProc", status UNKNOWN, has 1 handler(s) for this service...
Service "PLSExtProc" has 1 instance(s).
  Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully

Port 1521 seems ok:

c:\>netstat -an -O | find /i "1521"
  TCP    0.0.0.0:1521           0.0.0.0:0              LISTENING       4368
  TCP    169.243.90.109:55307   159.185.207.100:1521   ESTABLISHED     12416
  TCP    [::]:1521              [::]:0                 LISTENING       4368

(PID 4368 is TNSLSNR.exe process.)

Also, I can tnsping the XE service:

c:\>tnsping xe

TNS Ping Utility for 32-bit Windows: Version 11.2.0.2.0 - Beta on 11-MAR-2013 12:27:47

Copyright (c) 1997, 2010, Oracle.  All rights reserved.

Used parameter files:
C:\oraclexe\app\oracle\product\11.2.0\server\network\admin\sqlnet.ora

Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = machine.domain.com)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = XE)))
OK (210 msec)

The listener.ora file:

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = C:\oraclexe\app\oracle\product\11.2.0\server)
      (PROGRAM = extproc)
    )
    (SID_DESC =
      (SID_NAME = CLRExtProc)
      (ORACLE_HOME = C:\oraclexe\app\oracle\product\11.2.0\server)
      (PROGRAM = extproc)
    )
  )

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1))
      (ADDRESS = (PROTOCOL = TCP)(HOST = machine.domain.com)(PORT = 1521))
    )
  )

DEFAULT_SERVICE_LISTENER = (XE)

Additionally, and I've no idea if it is related, I can't seem to access apex on https://127.0.0.1:8080/apex (even though the permissions for that seem fine).

So where else should I be looking?

Update with requested information:

SQL> show parameter service_names

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
service_names                        string      XE

SQL> show parameter local_listener

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
local_listener                       string

Update 2: as @miracle173 correctly points out, the listener was not fine. With the local_listener parameter updated, the listener status now shows extra information:

Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(PIPENAME=\\.\pipe\EXTPROC1ipc)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=machine.domain.com)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=1521)))
Services Summary...
Service "CLRExtProc" has 1 instance(s).
  Instance "CLRExtProc", status UNKNOWN, has 1 handler(s) for this service...
Service "PLSExtProc" has 1 instance(s).
  Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
Service "XEXDB" has 1 instance(s).
  Instance "xe", status READY, has 1 handler(s) for this service...
Service "xe" has 1 instance(s).
  Instance "xe", status READY, has 1 handler(s) for this service...
The command completed successfully
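
For completeness, the change referred to in Update 2 would have been along these lines (a sketch):

ALTER SYSTEM SET local_listener =
  '(ADDRESS=(PROTOCOL=TCP)(HOST=127.0.0.1)(PORT=1521))' SCOPE=BOTH;
ALTER SYSTEM REGISTER;  -- force immediate re-registration with the listener

Before the fix, no "xe" service appeared in the Services Summary, meaning the instance had not registered with the listener, which is exactly what ORA-12514 complains about.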

Replication issue - CREATE SELECT alternative?

Posted: 28 Aug 2013 05:17 PM PDT

I have a MySQL 5.1 slave for our BI team.

They need to run some CREATE ... SELECT statements with big SELECT queries (several million rows).

As CREATE ... SELECT is DDL, if replication attempts to update rows in the same tables the SELECT statement reads from, replication is blocked until the CREATE ... SELECT finishes and releases its locks.

Do you know a good non-blocking alternative to those CREATE ... SELECT statements?

I thought of a SELECT ... INTO OUTFILE followed by LOAD DATA INFILE, but that will fill up our disks, as BI guys like to do... :)
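
One alternative sometimes suggested, sketched here with hypothetical names (whether it actually avoids the blocking depends on isolation level and binlog format, since with statement-based logging InnoDB still takes shared locks on the rows that INSERT ... SELECT reads):

CREATE TABLE bi_summary LIKE big_source;  -- instant DDL on an empty table

INSERT INTO bi_summary
SELECT * FROM big_source
WHERE event_date >= '2013-01-01';         -- hypothetical filter

Splitting the statement at least confines the long-running part to plain DML, which can also be chunked by primary-key ranges to shorten each lock window.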

Max.

How to drop a DB2 instance when the instance owner was removed

Posted: 28 Aug 2013 12:17 PM PDT

This is a real sticky situation. I was handed a machine (running AIX 7.1), and my first task was to re-install the DB2 server on it. But someone before me had conveniently removed an instance owner account, and probably recreated it. Now the problem is this:

1) When I try to uninstall DB2, it says the instance is active and has to be dropped first.

2) When I try to drop this instance, DB2 says there is no such instance.

I am quite new to DB2 administration and not sure how to proceed here. Any help is appreciated.
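
If the instance is merely orphaned, the instance utilities may still see it; a sketch, where the install path and instance name are assumptions for illustration:

# as root:
/opt/IBM/db2/V9.7/instance/db2ilist            # list the instances DB2 has registered
/opt/IBM/db2/V9.7/instance/db2idrop db2inst1   # drop the orphaned instance

db2idrop expects the instance owner's account and home directory to exist, which is presumably what broke here, so recreating a matching owner account first may be necessary.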

Thanks

Ensure correct username when using pg_restore

Posted: 28 Aug 2013 10:17 AM PDT

I have just installed Postgres 9.1.6 on a local Ubuntu server. Now I'm trying to restore a database dump from a database on Heroku. The local database is set up like this:

sudo -u postgres psql -c "create user app_user with password 'pass';"
sudo -u postgres psql -c "create database app_production owner app_user;"

Now, when I try to restore the dump I use the following command:

pg_restore --verbose --schema=public --no-acl --no-owner --jobs=8 --exit-on-error --username=app_user --dbname=app_production /tmp/app_production.dump  

Now, in psql, using \l to see ownership, I get the following:

                                 List of databases
      Name       |   Owner   | Encoding |   Collate   |    Ctype    |   Access privileges
-----------------+-----------+----------+-------------+-------------+-----------------------
 app_production  | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 postgres        | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 template0       | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
                 |           |          |             |             | postgres=CTc/postgres
 template1       | postgres  | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
                 |           |          |             |             | postgres=CTc/postgres

As you can see, the ownership of the app_production database has changed from app_user to postgres. I would have expected the owner to still be app_user, so what am I doing wrong?
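
As an aside, ownership can be reasserted after the restore, assuming the app_user role exists; this works around the symptom rather than explaining it:

ALTER DATABASE app_production OWNER TO app_user;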

BTW, the --schema=public flag was added because I was getting a weird error:

"Could not execute query: ERROR: must be owner of extension plpgsql"

Another thing is that the owner role in the dump is the user the database had on Heroku, which would be something like 'jebf473b73bv73v749b7'.
