Sunday, June 16, 2013

[how to] How to keep an Oracle 11g instance lean?

How to keep an Oracle 11g instance lean?

Posted: 16 Jun 2013 09:13 PM PDT

As a Mac user I run a local Oracle Enterprise Linux (OEL) Developer Day VM that houses an Oracle 11g instance for development/unit testing. I frequently refresh the data in the schemas (sometimes multiple times a day) and do not have a need for snapshots or backups. The number of rows in each schema's tables is orders of magnitude smaller than in a production instance.

I've had to extend the tablespaces' size more than a few times and I suspect I have improper resource allocation to begin with (SGA, PGA) for the way in which I'm using the instance (not for SQL Developer, TimesTen, etc. -- just for the Oracle 11g instance).

I'm looking for recommendations/resources to help me

  • Size my instance for a small amount of data in the first place
  • 'Clean' my instance so that I don't need to increase data file size
  • Modify my OEL VM instance to most efficiently run the only thing I need it to run -- the Oracle instance

For example, I have turned off snapshot collection, but I think there are other (perhaps more impactful) changes I can make to help increase the performance and reduce the footprint of the VM.
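For reference, a minimal sketch of the kind of settings involved, assuming Automatic Memory Management is in use; the memory values, tablespace name, and datafile path are illustrative, not recommendations:

ALTER SYSTEM SET memory_max_target = 1G SCOPE = SPFILE;
ALTER SYSTEM SET memory_target     = 1G SCOPE = SPFILE;
-- restart the instance for the new caps to take effect

-- after a data refresh, reclaim space instead of letting datafiles keep growing
ALTER TABLESPACE users COALESCE;
ALTER DATABASE DATAFILE '/u01/app/oracle/oradata/orcl/users01.dbf' RESIZE 200M;
-- the RESIZE fails (ORA-03297) if segments still sit above the target size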

MYSQL Timezone support

Posted: 16 Jun 2013 07:23 PM PDT

We have a shared hosting plan, and the host says they do not provide MySQL time zone support on shared hosting. I can create the time-zone-related tables in our database and populate them with the required data (data taken from our local MySQL time zone tables). What is the syntax for using MySQL's CONVERT_TZ function in this setup?
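For what it's worth, a small sketch of CONVERT_TZ usage; the second call assumes the mysql.time_zone% tables have actually been populated (for example with the rows copied over as described above), since named zones do not work without them:

-- numeric offsets work even without the time zone tables
SELECT CONVERT_TZ('2013-06-16 12:00:00', '+00:00', '+05:30');

-- named zones require the populated time zone tables
SELECT CONVERT_TZ(NOW(), 'UTC', 'Asia/Kolkata');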

Thanks Arun

Which design to choose in this case?

Posted: 16 Jun 2013 04:16 PM PDT

I am designing a database for a video game. I have some data structures like {x, y, z, dimension} used across multiple tables.

For instance :

Table player_death

id: int
player_id: int
death_x: int
death_y: int
death_z: int
death_dimension: int

Since {x, y, z, dimension} represents a geographic vector in the video game, a lot of tables contain those columns. My question is: should I create a table named vector containing a pool of all the vectors used in my tables (and then reference them), or should I keep adding {x, y, z, dimension} columns to each table that uses a vector? What is the best practice? I am a beginner in database design, sorry if the question sounds stupid.
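For illustration only, a sketch of the two designs being compared; table and column names are made up, and the second player_death variant is named differently just so both sketches can coexist in one script:

-- Option A: inline columns on every table that needs a position
CREATE TABLE player_death (
    id              INT PRIMARY KEY,
    player_id       INT NOT NULL,
    death_x         INT NOT NULL,
    death_y         INT NOT NULL,
    death_z         INT NOT NULL,
    death_dimension INT NOT NULL
);

-- Option B: a shared vector table referenced by foreign key
CREATE TABLE vector (
    id        INT PRIMARY KEY,
    x         INT NOT NULL,
    y         INT NOT NULL,
    z         INT NOT NULL,
    dimension INT NOT NULL
);

CREATE TABLE player_death_b (
    id        INT PRIMARY KEY,
    player_id INT NOT NULL,
    vector_id INT NOT NULL REFERENCES vector (id)
);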

reset mysql root password on ubuntu

Posted: 16 Jun 2013 03:24 PM PDT

In short: my goal is to reset mysql root password on ubuntu.

Background: I set up an Ubuntu desktop and installed a LAMP stack last week. I then went to do something else, and just got back to carry on today. Either I did not set a MySQL password, or I have forgotten what I typed in.

So I tried to reset it.

I stopped mysql by

sudo stop mysql  

I tried to start mysql using command line options, but it seems start only takes service name(??)

So I ran:

sudo /etc/init.d/mysql start --skip-network  --skip-grant-tables  

Went into mysql, fine. Then

mysql> UPDATE mysql.user SET Password=PASSWORD('newpasswd') WHERE User='root';
ERROR 1142 (42000): UPDATE command denied to user ''@'localhost' for table 'user'
mysql> show grants for ''@'localhost';
+--------------------------------------+
| Grants for @localhost                |
+--------------------------------------+
| GRANT USAGE ON *.* TO ''@'localhost' |
+--------------------------------------+
1 row in set (0.00 sec)

If ''@'localhost' has USAGE on *.* (all schemas, all tables), I'd think I could just update the password like this... What did I miss?

I followed the instructions on the official MySQL website, the last part. I also tried the Unix instructions, but with no luck. I was also unsure: when it says 'Unix', does it mean all *nix systems, or just Unix?
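For reference, the usual reset sequence boils down to the statements below; they only work while mysqld is really running with --skip-grant-tables (the "command denied" error above suggests the option never reached mysqld, since with grant checking disabled no UPDATE would be denied):

UPDATE mysql.user SET Password = PASSWORD('newpasswd') WHERE User = 'root';
FLUSH PRIVILEGES;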

Anyway, any thoughts are welcome! many thx!!!

Deleting large chunks of data

Posted: 16 Jun 2013 02:12 PM PDT

create table Grid (
    grid_id,
    -- other columns
)
-- grid_id is the primary key

create table grid_info (
    grid_id,
    column_id,
    column_value
)
-- grid_id is a foreign key to grid_id of table Grid

-- there are around 10 million rows
select grid_id into #temp from grid where -- some logic

CREATE CLUSTERED INDEX ix_temp_id ON #temp(grid_id)

set rowcount 10000
while 1 = 1
begin
    delete from grid_info where grid_id in (select grid_id from #temp)
    if @@rowcount = 0
        break
end
set rowcount 0

This batch is slow because of the large number of rows to be deleted. What should be considered to make it faster? For example, what role do indexing and triggers play, and what kinds of tools can be used to test the performance of the queries?
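One variant sometimes used instead of SET ROWCOUNT, assuming this is SQL Server (on Sybase ASE the SET ROWCOUNT loop above is the usual idiom); the batch size of 10,000 is arbitrary and worth tuning:

WHILE 1 = 1
BEGIN
    DELETE TOP (10000) FROM grid_info
    WHERE grid_id IN (SELECT grid_id FROM #temp);

    IF @@ROWCOUNT = 0 BREAK;
END

Keeping each batch small limits lock escalation and transaction log growth, which is usually what makes large deletes crawl.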

A table with a foreign-key, referencing the tables own primary key

Posted: 16 Jun 2013 08:42 PM PDT

I was looking at the sample database that is provided with the DB2 server installation. This database is the one created when running db2sampl.exe from the "bin" folder. I noticed that the table "ACT" contains a primary key with a reference to itself (see the SQL below), and I was wondering if there is any logical reason for this, or is it just a glitch in the design?

CREATE TABLE "DB2ADMIN"."ACT" (          "ACTNO" SMALLINT NOT NULL,          "ACTKWD" CHAR(6) NOT NULL,          "ACTDESC" VARCHAR(20) NOT NULL      )    ALTER TABLE "DB2ADMIN"."ACT" ADD CONSTRAINT "PK_ACT" PRIMARY KEY      ("ACTNO");    ALTER TABLE "DB2ADMIN"."ACT" ADD CONSTRAINT "RPAA" FOREIGN KEY      ("ACTNO")      REFERENCES "DB2ADMIN"."ACT"      ("ACTNO")      ON DELETE RESTRICT;  

Update performance: clustered versus covering index

Posted: 16 Jun 2013 07:19 PM PDT

I have a simple, 3-column table with about 50 million rows in it. This table gets about 5,000 inserts/updates per second, and perhaps 20 queries per second are executed against the table. The table looks like this:

Controller: Int
ExecutionTime: DateTime
Result: Int

To maximize the efficiency of my queries, I need two indexes: one on Result with ExecutionTime as an included column, and one on (Controller, ExecutionTime). These two indexes fully cover my queries; all information is served directly from the indexes, with no table lookups required.

I chose nonclustered indexes because I was worried about the performance hit of using a clustered index with so many updates. But it occurs to me that since I am fully covering the queries, this might not be a valid concern; perhaps my covering, nonclustered indexes require the same amount of maintenance as a clustered index would.

So my question: In a table with a lot of inserts/updates, will a covering, nonclustered index usually have a lower UPDATE performance hit than a clustered index?
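For concreteness, a sketch of the two covering nonclustered indexes described above; the table name dbo.Results is an assumption, since the question does not give one:

CREATE NONCLUSTERED INDEX IX_Result_Incl_ExecutionTime
    ON dbo.Results (Result)
    INCLUDE (ExecutionTime);

CREATE NONCLUSTERED INDEX IX_Controller_ExecutionTime
    ON dbo.Results (Controller, ExecutionTime);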

Thanks for your time and help!

Mysql slow.log with noindex, but without tmp_tables

Posted: 16 Jun 2013 01:17 PM PDT

Is it possible to change something in the configuration so that queries that don't use an index are logged to the slow log, but queries that only hit temporary tables are not? I have quite a few subqueries, so temporary tables (which cannot have indexes) are created all the time, and those full queries end up in slow.log. How can I change this?
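As far as I know, stock MySQL has no slow-log filter keyed specifically on temporary-table use (forks such as Percona Server and MariaDB expose a log_slow_filter option for that); the closest built-in lever is min_examined_row_limit, which keeps small index-less queries out of the log. A sketch, with illustrative values:

SET GLOBAL slow_query_log = ON;
SET GLOBAL long_query_time = 1;
SET GLOBAL log_queries_not_using_indexes = ON;
-- queries examining fewer rows than this are never written to the slow log
SET GLOBAL min_examined_row_limit = 1000;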

Why is simple timestamp comparison performing slow

Posted: 16 Jun 2013 11:00 AM PDT

I have a table with 12,582,912 rows and can't figure out why a simple comparison, ad_hot_expire > 1371399048, is so slow (SHOW PROFILES: http://i.imgur.com/6L5HQyW.png). The subquery version is faster, but still too slow for a backend.

Explain http://i.imgur.com/PdOnH1V.png

Queries

SELECT SQL_NO_CACHE id
FROM mod_ad
WHERE active != 0 AND usr_active != 0 AND ad_hot_expire != 0
ORDER BY id DESC LIMIT 9;

SELECT SQL_NO_CACHE id
FROM mod_ad
WHERE active != 0 AND usr_active != 0 AND ad_hot_expire > 1371399048
ORDER BY id DESC LIMIT 9;

SELECT SQL_NO_CACHE id
FROM (
    SELECT id FROM mod_ad
    WHERE active != 0 AND usr_active != 0 AND ad_hot_expire > 1371399048
    LIMIT 9) as ad
ORDER BY id DESC;

Schema

CREATE TABLE `mod_ad` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `user_id` int(10) unsigned NOT NULL,
  `page_id` int(10) unsigned NOT NULL,
  `cat_id` int(10) unsigned NOT NULL,
  `subcat_id` int(10) unsigned NOT NULL,
  `program` tinyint(1) unsigned NOT NULL,
  `region_id` int(10) unsigned NOT NULL,
  `zone_id` int(10) unsigned NOT NULL,
  `city_id` int(10) unsigned NOT NULL,
  `sort` enum('firm','priv') NOT NULL,
  `type` enum('predaj','kúpa','výmena','darujem','hľadám','ponúkam','iné') NOT NULL,
  `condition` varchar(24) NOT NULL,
  `name` varchar(128) NOT NULL,
  `desc` text NOT NULL,
  `location` varchar(128) NOT NULL,
  `keywords` varchar(255) NOT NULL,
  `price` decimal(10,2) NOT NULL,
  `price_type` varchar(20) NOT NULL,
  `cperson` varchar(128) NOT NULL,
  `firmname` varchar(128) NOT NULL,
  `zip` varchar(5) NOT NULL,
  `email` varchar(255) NOT NULL,
  `tel` varchar(20) NOT NULL,
  `tel2` varchar(20) NOT NULL,
  `web` varchar(255) NOT NULL,
  `video` varchar(255) NOT NULL,
  `marked_expire` int(11) unsigned NOT NULL,
  `top_expire` int(11) unsigned NOT NULL,
  `ad_hot_expire` int(11) unsigned NOT NULL,
  `ad_border_expire` int(11) unsigned NOT NULL,
  `ad_heading_expire` int(11) unsigned NOT NULL,
  `ad_weblink_expire` int(11) unsigned NOT NULL,
  `active` int(10) unsigned NOT NULL,
  `usr_active` int(10) unsigned NOT NULL,
  `added` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `archive` int(10) unsigned NOT NULL,
  `expire` int(11) unsigned NOT NULL,
  `token` varchar(32) NOT NULL,
  `views` mediumint(9) NOT NULL,
  `ip` varchar(15) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `user_id` (`user_id`),
  KEY `page_id` (`page_id`),
  KEY `cat_id_subcat_id` (`cat_id`,`subcat_id`),
  KEY `region_id_zone_id` (`region_id`,`zone_id`),
  KEY `program` (`program`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
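Note that none of the existing keys cover the filter columns, so MySQL has to examine a large share of the 12.5 million rows for each query; also, the != predicates on active and usr_active cannot be used as index ranges. One thing commonly tried, sketched below with no guarantee the optimizer will pick it given the ORDER BY id DESC LIMIT 9, is an index that at least lets the ad_hot_expire range be resolved from the index:

ALTER TABLE mod_ad ADD KEY idx_ad_hot_expire (ad_hot_expire);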

Should my Postgres backup server be as beefy (IO, processor, RAM) as my master?

Posted: 16 Jun 2013 04:48 PM PDT

I've set up a Postgres database on a server with 2 CPU cores, 4GB of RAM and an SSD volume (on top of RAID 10 of course). This is our (sort-of) "beefy" production database server (I can easily add more RAM, etc. when the time comes).

Now it's time to set up a backup Postgres server (not used for reads or anything else, just strictly a replication/backup). I don't want to have to fork out another $150/mo for the same exact setup for my backup server, and I know that a lot of the server resources used for a production database have to do with complicated queries, sequential scans, etc., so my thinking was that I could get away with a 512MB server with about 1/6 the I/O performance, and 1 CPU core.

Is that thinking correct, or does a replication/backup Postgres server typically need nearly the same specs as a production Postgres server?

Note: I'm using Postgres 9.2.4 with streaming (non-synchronous) replication on an Ubuntu 12.04.2 server.

What is the best way to transport database fields from one database to another?

Posted: 16 Jun 2013 01:41 PM PDT

I have two databases. The table names and field names are different, and the numbers of fields are unequal. I need to transfer all the fields from one database to the other. I can import the data in CSV format.

I can use a PHP script to accomplish this.

But is there any other way to do this easily, without any script?
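Assuming both databases are MySQL, one script-free route is the server's own CSV export/import, mapping only the columns that exist on each side; the names and the file path here are placeholders:

SELECT col_a, col_b, col_c
INTO OUTFILE '/tmp/export.csv'
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
FROM source_db.source_table;

LOAD DATA INFILE '/tmp/export.csv'
INTO TABLE target_db.target_table
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
(dest_col_a, dest_col_b, dest_col_c);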

Is it possible to integrate Oracle NoSQL Enterprise solution with Google app engine. If yes, then how..? [closed]

Posted: 16 Jun 2013 05:34 AM PDT

Is it possible to integrate the Oracle NoSQL enterprise solution with Google App Engine for an enterprise application? If yes, how?

Is it possible to avoid filesort?

Posted: 16 Jun 2013 06:11 AM PDT

Is it possible to avoid 'Using temporary' and 'Using filesort' for the following SELECT query? I can't figure out a way to do it.

I tried adding indexes on both top_expire and program, but it didn't help. With the ORDER BY the query takes over 1 second, and without it just 0.003 seconds on localhost.

Query

SELECT ad.*, p.link
FROM (SELECT ad.*
      FROM mod_ad ad
      JOIN mod_ad_auta auta ON ad.id = auta.ad_id
      WHERE ad.active != 0 AND ad.usr_active != 0 AND ad.expire > 1371151608
        AND ad.cat_id = '1' AND ad.price <= '10000'
        AND auta.rocnik BETWEEN '1950' AND '2013'
        AND auta.km BETWEEN '0' AND '500000'
      ORDER BY top_expire DESC, program DESC, ad.id DESC
      LIMIT 0,10) as ad
JOIN pages p ON ad.page_id = p.page_id;

Schema

CREATE TABLE `mod_ad` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `user_id` int(10) unsigned NOT NULL,
  `page_id` int(10) unsigned NOT NULL,
  `cat_id` int(10) unsigned NOT NULL,
  `subcat_id` int(10) unsigned NOT NULL,
  `program` tinyint(1) unsigned NOT NULL,
  `region_id` int(10) unsigned NOT NULL,
  `zone_id` int(10) unsigned NOT NULL,
  `city_id` int(10) unsigned NOT NULL,
  `sort` enum('firm','priv') NOT NULL,
  `type` enum('predaj','kúpa','výmena','darujem','hľadám','ponúkam','iné') NOT NULL,
  `condition` varchar(24) NOT NULL,
  `name` varchar(128) NOT NULL,
  `desc` text NOT NULL,
  `location` varchar(128) NOT NULL,
  `keywords` varchar(255) NOT NULL,
  `price` decimal(10,2) NOT NULL,
  `price_type` varchar(20) NOT NULL,
  `cperson` varchar(128) NOT NULL,
  `firmname` varchar(128) NOT NULL,
  `zip` varchar(5) NOT NULL,
  `email` varchar(255) NOT NULL,
  `tel` varchar(20) NOT NULL,
  `tel2` varchar(20) NOT NULL,
  `web` varchar(255) NOT NULL,
  `video` varchar(255) NOT NULL,
  `marked_expire` int(11) unsigned NOT NULL,
  `top_expire` int(11) unsigned NOT NULL,
  `ad_hot_expire` int(11) unsigned NOT NULL,
  `ad_border_expire` int(11) unsigned NOT NULL,
  `ad_heading_expire` int(11) unsigned NOT NULL,
  `ad_weblink_expire` int(11) unsigned NOT NULL,
  `active` int(10) unsigned NOT NULL,
  `usr_active` int(10) unsigned NOT NULL,
  `added` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `archive` int(10) unsigned NOT NULL,
  `expire` int(11) unsigned NOT NULL,
  `token` varchar(32) NOT NULL,
  `views` mediumint(9) NOT NULL,
  `ip` varchar(15) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `user_id` (`user_id`),
  KEY `page_id` (`page_id`),
  KEY `cat_id` (`cat_id`),
  KEY `region_id` (`region_id`),
  KEY `zone_id` (`zone_id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;

CREATE TABLE `mod_ad_auta` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `ad_id` int(10) unsigned NOT NULL,
  `model` int(10) unsigned NOT NULL,
  `rocnik` smallint(5) unsigned NOT NULL,
  `palivo` varchar(10) NOT NULL,
  `karoseria` varchar(12) NOT NULL,
  `km` mediumint(8) unsigned NOT NULL,
  `prevodovka` varchar(12) NOT NULL,
  `farba` varchar(16) NOT NULL,
  `metaliza` tinyint(1) unsigned NOT NULL,
  `obsah` smallint(5) unsigned NOT NULL,
  `vykon` smallint(5) unsigned NOT NULL,
  `vybava` text NOT NULL,
  PRIMARY KEY (`id`),
  KEY `ad_id` (`ad_id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
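The ORDER BY top_expire DESC, program DESC, ad.id DESC does not match any existing index, and because the rows are also filtered through the join to mod_ad_auta and several range conditions, MySQL generally falls back to a temporary table plus filesort here. The usual thing to try is an index matching the sort order, sketched below, with the caveat that the optimizer may still prefer filtering first and sorting the small result instead:

ALTER TABLE mod_ad ADD KEY idx_top_expire_program_id (top_expire, program, id);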

What happens when we create index

Posted: 16 Jun 2013 11:14 AM PDT

What happens exactly when I create an index on the empid column of this employee table

CREATE TABLE employee
(
  empid number(5),
  ename varchar2(30),
  sal   number(10,2)
);

Let's assume the table contains the below data

empid   ename      sal
1       hema       10000
3       suma       10000
2       bheema     15000
4       bhama      20000
5       panama     25000

My doubt is exactly this: what will be created and stored internally in the index to improve performance? The sorted contents of empid, or some other information related to the rows?
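Creating the index itself is just:

CREATE INDEX employee_empid_idx ON employee (empid);

What Oracle stores for a normal (B-tree) index is a separate structure whose leaf entries hold the sorted empid values together with the ROWIDs of the matching rows; the table's own rows are not reordered.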

Fast way of delete MySQL record

Posted: 16 Jun 2013 12:16 PM PDT

I have 1.2 million records in a database table (it has an indexed column as well).

I want to delete the oldest 1 million records. I tried to do it using phpMyAdmin, but it takes too much time.

DELETE FROM `table` WHERE `id` < 1000000  

Is there any way to do it faster?

I have another question: if I did this using an SSH client on the command line, would it be faster?
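Running the same statement over SSH will not make the server-side delete itself faster, although it does avoid phpMyAdmin's web timeouts. Two sketches that usually help more, with arbitrary batch sizes and placeholder names:

-- Option 1: delete in small chunks, repeating until 0 rows are affected
DELETE FROM `table` WHERE id < 1000000 LIMIT 10000;

-- Option 2: when most rows are going away, copying the keepers is often faster
CREATE TABLE table_new LIKE `table`;
INSERT INTO table_new SELECT * FROM `table` WHERE id >= 1000000;
RENAME TABLE `table` TO table_old, table_new TO `table`;
DROP TABLE table_old;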

Several PC Cannot Connect To MySQL Server on Certain Time

Posted: 16 Jun 2013 11:54 AM PDT

I have a network of 10 PCs and 1 Server. I installed MySQL Community Server v5.6.10 64-bit on the Server, and all the PCs are Windows XP, using ODBC Connection 5.2(a).

Some of the PCs (previously 3, now 4) cannot connect to MySQL at a certain time, namely 2.00 pm. Before that, everything is OK and the clients can connect to the server normally, but at 2.00 pm those PCs cannot connect anymore, with this error:

Your connection attempt failed for user xxx@xxx from your host to server at
xxx.xxx.xxx.xxx:xxxx. Cannot connect to MySQL Server on xxx.xxx.xxx.xxx (10055)
Please:
1. Check that mysql is running on server xxx.xxx.xxx.xxx
2. Check that mysql is running on port xxxx
3. Check the xxxxxx has rights to connect to xxx.xxx.xxx.xxx from your address
   (mysql rights define what clients can connect to the server and from which machines)
4. Make sure you are both providing a password if needed and using the correct
   password for xxx.xxx.xxx.xxx connection from the host address you're connecting from

From the above error message:

  1. I already checked that mysql is running on the server (other PCs can access it normally)
  2. I also checked that MySQL is running on port xxxx
  3. I also checked that the user xxxxxx has rights to connect (it could connect to the server before 2 pm)
  4. I also checked that the password is being provided correctly
  5. I also checked the maximum number of connections in my.ini, which I set to 0 (unlimited connections)
  6. I also checked the network connection, but found no problem since I can still ping the server and the other PCs

The only action that makes everything work again is restarting the client PC; however, I don't want that kind of solution. I want to know the cause: is it the ODBC connector, the PC, or something else? Could anyone give me some hints on what I should check?

Thank you

Oracle DBF import brings me to a weird situation

Posted: 16 Jun 2013 01:01 PM PDT

I have a big problem.

One of my clients sent me some files to install on his Oracle server, but as a new Oracle user I am not sure how to do that. These are the files I received:

DRSYS01.DBF
INDX01.DBF
SYSTEM01.DBF
TEMP01.DBF
TOOLS01.DBF
UNDOTBS01.DBF
USERS01.DBF
XDB01.DBF

If it helps, those files are dated 2007 (maybe I need another version of Oracle?).

I've googled those files, and what I found suggests they might be from Access. Am I right?

I've looked at the files in a hex editor, and at the top they have this hex:

6D6C6B6A1635 >>> mlkj.5  

In some parts of the files (in the hex dump) there are references to Oracle, so I don't know whether it's Access or Oracle. I've also tried to import the files using Navicat,

but the result was not as expected, because all the tables were empty and all the columns had the same kind of name (f1 to fx).

That's all I know. Thank you for your help.

How to drop inheritance?

Posted: 16 Jun 2013 10:05 AM PDT

I am new to PostgreSQL. The situation I have is that someone created a child table inheriting from a parent table, and then dropped the child table. But the parent table still has the "down arrow" sign on it.

I checked, and there's no other link/relation on the parent table. Is this really an issue?

I was told that the parent table is still in the 'being inherited' state and is causing a performance issue. How do I resolve this? By removing the 'being inherited' status from the parent table?
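A hedged sketch of how this can be checked and, if needed, fixed in PostgreSQL; parent_table and child_table are placeholders:

-- list any children still attached to the parent
SELECT c.relname AS child
FROM pg_inherits i
JOIN pg_class c ON c.oid = i.inhrelid
WHERE i.inhparent = 'parent_table'::regclass;

-- if a child still exists, the link can be removed without dropping it
ALTER TABLE child_table NO INHERIT parent_table;

If the first query returns no rows, the inheritance link really is gone and the arrow is most likely just how the admin tool displays the table.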

Call pl/sql function and discard result

Posted: 16 Jun 2013 04:59 AM PDT

I'm calling a function from pl/sql, but I do not require the result in this case. Is there a cleaner way to call this function without using a dummy variable?

Works:

junk := performWork();  

Does not work:

performWork();         --PLS-00221: performWork is not a procedure or is undefined
EXEC performWork();    --this might work from SQL*Plus, but not from pl/sql
SELECT pkg.performWork() FROM DUAL;  --PLS-00428: INTO clause is expected in this SELECT statement
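As far as I know, PL/SQL only allows a function call inside an expression, so a throwaway variable is the standard workaround; a sketch that at least keeps it scoped to the narrowest block (the declared type is an assumption and must match performWork's actual return type):

DECLARE
  l_discard  PLS_INTEGER;  -- assumed return type
BEGIN
  l_discard := performWork();
END;
/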

Oracle DB 11gR2 netca installation failing

Posted: 16 Jun 2013 04:29 AM PDT

I am getting the error below when I run netca from the command line:

cmd: /u01/app/oracle/product/11.2.0/db_1/bin/netca /silent /responsefile=/oracle_binaries/database/response/netca.rsp

[main] [ 2013-05-17 10:52:47.975 IST ] [CmdlineArgs.parseLogParam:290]  Parsing /log argument...
[main] [ 2013-05-17 10:52:47.976 IST ] [CmdlineArgs.parseLogParam:345]  Returning Log File = null
java.lang.ArrayIndexOutOfBoundsException: 2
        at oracle.net.ca.CmdlineArgs.parseArgs(CmdlineArgs.java:416)
        at oracle.net.ca.InitialSetup.<init>(NetCA.java:3969)
        at oracle.net.ca.NetCA.main(NetCA.java:405)

copy package from schema B to schema C

Posted: 16 Jun 2013 06:29 AM PDT

I am in the following situation: I am using Oracle 11g, and I am connected as a user, let's say schema1, which has a lot of permissions. I want to create a script that will copy Package1 (and its body) from schema2 to schema3.

The script will be executed from schema1.

PS: I have already looked for a solution, and I am not interested in export/import or in using other tools such as Toad, SQL Developer, etc.
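One commonly used starting point, sketched here under the assumption that schema1 is allowed to use DBMS_METADATA against other schemas, is to pull the DDL and then re-run it against schema3 after editing the owner:

SELECT DBMS_METADATA.GET_DDL('PACKAGE_SPEC', 'PACKAGE1', 'SCHEMA2') FROM dual;
SELECT DBMS_METADATA.GET_DDL('PACKAGE_BODY', 'PACKAGE1', 'SCHEMA2') FROM dual;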

If an account has REQUIRE SUBJECT, does it still need a password?

Posted: 16 Jun 2013 09:29 AM PDT

I'm in the process of setting up SSL-secured replication between two servers. Each server has its own public/private keypair, and the CA cert is just the concatenation of the two public certs, like this answer.

Now I'm updating the replication account with REQUIRE SUBJECT "exact subject of the client"

Is there any practical value to also having a password on the replication account (IDENTIFIED BY "secret")?
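For what it's worth, the two checks are independent: REQUIRE SUBJECT is evaluated in addition to the normal credential check, so an account can carry both, as in this sketch (the account name and subject string are placeholders):

GRANT REPLICATION SLAVE ON *.*
    TO 'repl'@'slave.example.com'
    IDENTIFIED BY 'secret'
    REQUIRE SUBJECT '/C=XX/O=Example/CN=replication-client';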

MySQL users corrupt

Posted: 16 Jun 2013 11:29 AM PDT

I have a strange situation here:

From time to time I cannot log in with any of my MySQL users. I can't even make a MySQL dump.

So I started looking through the MySQL files and found that users.MYD and users.MYI are modified at the time the login problem occurs. The only way to get everything working again is to restore the users.* files from a time when the system was running okay.

I searched for the problem and found that there was a bug in MySQL, but that was in older versions (4.x). I'm running 5.5.

Any ideas? Thanks!

Databases list not showing up?

Posted: 16 Jun 2013 05:29 AM PDT

This morning when I logged into SQL Server Management Studio (v10.0.16) and clicked on the Databases node, I saw all my databases (they are hosted remotely), as I have for the last 3 years. This evening, when I click on the Databases node, I see NOTHING except the system databases.

According to my hosting company, this is an issue with my local permissions. I have searched everywhere and found nothing to help me. I'm now desperate, and any help would be massively appreciated.

I have multiple sources INSERTing into a MySQL innodb table. My periodic aggregation script never makes accurate aggregates. Why?

Posted: 16 Jun 2013 12:29 PM PDT

I apologize in advance if this is a repeat. I'm not really sure how to properly ask for what I'm running into.

I have a large InnoDB table set up. I have 3 sources that all INSERT concurrently at a moderately high volume (300-500 INSERT/s).

I have a PERL script running on a cron job every 5 minutes that aggregates data from the last time it ran. It keeps track of the last record it processed by storing the auto_increment value of the last row in a metadata table.

The aggregates are always off. But not by much. This has been so frustrating because it's just plain simple math (SELECT account_id,sum(sold) GROUP BY account_id). I have a suspicion that it has something to do with the transaction isolation (repeatable-read).

I recently found FlexViews which looks very cool and might address my problem. But I was wondering if anyone could:

  • Tell me what is going on here. Why is this happening?
  • What can be done to produce 100% accurate aggregates

I'm like 95% sure the auto_increment thing along with transaction commit ordering is hosing me up, but I don't understand enough about DBs to really ask the right question.

Oh, one thing to note: I've already checked the field types. This issue isn't the result of rounding.
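For what it's worth, the suspicion about auto_increment plus commit ordering matches a well-known race: the id is assigned at INSERT time, but the row only becomes visible at COMMIT time, so a high-water-mark script can skip rows. A sketch of the interleaving, with made-up table and column names:

-- session A
START TRANSACTION;
INSERT INTO sales (account_id, sold) VALUES (1, 10);  -- assigned id = 100
-- ... not yet committed ...

-- session B (autocommit)
INSERT INTO sales (account_id, sold) VALUES (2, 5);   -- assigned id = 101

-- the cron job runs now: it sees id 101 and records last_id = 101

-- session A
COMMIT;  -- id 100 becomes visible only now, and every later run skips it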

Converting dbo file from windows to linux

Posted: 16 Jun 2013 07:29 AM PDT

I have a .dbo file that was created on Windows. This file loads successfully into the MySQL database on Windows. I need to load the .dbo file into a MySQL/MariaDB database on Linux. How do I convert a file created on Windows for use on Linux?

Allow users to do anything within his own schema but cannot create or drop the schema itself

Posted: 16 Jun 2013 09:29 AM PDT

I have created a schema in SQL Azure using the following permissions on my database role:

CREATE ROLE myrole AUTHORIZATION dbo;
EXEC sp_addrolemember 'myrole', 'myuser';

CREATE SCHEMA myschema AUTHORIZATION dbo;

GRANT ALTER, CONTROL, DELETE, EXECUTE, INSERT, REFERENCES, SELECT, UPDATE, VIEW DEFINITION ON SCHEMA::myschema TO myrole;

GRANT CREATE TABLE, CREATE PROCEDURE, CREATE FUNCTION, CREATE VIEW TO myrole;

With the permissions defined above, a user can create/drop his own schema, so to overcome the problem I tried the ALTER ANY SCHEMA permission. But that also ended up denying the user the ability to create/drop tables. What permissions are required to allow the user to do anything within his own schema, but not be able to create or drop the schema itself?
