Sunday, May 12, 2013

[how to] Database Volume Statistics in Oracle - dba_hist_tbspc_space_usage

Database Volume Statistics in Oracle - dba_hist_tbspc_space_usage

Posted: 12 May 2013 08:36 PM PDT

I am looking to generate a database volume consumption report in Oracle 11g for a specific time period, for example: 1) total volume (excluding index size) at 12 AM, 2) total volume (excluding index size) at 4 AM.

Once I have that information, I can take the difference between the two values, which gives me the database growth during that period. I have tried using dba_hist_tbspc_space_usage, and I want to confirm whether this is the right view to query for this information. The index size also needs to be excluded.

Please let me know your thoughts on this.
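A minimal sketch of the kind of query involved, assuming AWR is licensed and that the snapshot window and block-size handling below suit your system (the timestamps are placeholders). Note that this view reports usage per tablespace, not per segment type, so excluding indexes would still require DBA_SEGMENTS:

    SELECT s.snap_id,
           t.name AS tablespace_name,
           u.tablespace_usedsize * p.value / 1024 / 1024 AS used_mb   -- sizes appear to be in database blocks
    FROM   dba_hist_tbspc_space_usage u
           JOIN v$tablespace t      ON t.ts#     = u.tablespace_id
           JOIN dba_hist_snapshot s ON s.snap_id = u.snap_id
           JOIN v$parameter p       ON p.name    = 'db_block_size'
    WHERE  s.begin_interval_time BETWEEN TIMESTAMP '2013-05-12 00:00:00'
                                     AND TIMESTAMP '2013-05-12 04:00:00'
    ORDER  BY t.name, s.snap_id;

Subtracting the first snapshot's used_mb from the last, per tablespace, gives the growth over the window.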

Large INSERTs performance optimization

Posted: 12 May 2013 05:09 PM PDT

I have 15 Amazon AWS EC2 t1.micro instances which simultaneously populate an Amazon RDS MySQL db.m2.xlarge database with data using large INSERTs (40,000 rows per query).

The queries are sent continuously. The table is InnoDB with two INT columns, and there is an index on both columns. CPU utilization of the RDS instance is about 30% while the data is being received.

With a single EC2 instance, loading is orders of magnitude faster than when I run 15 instances simultaneously, and the group of 15 instances works slower and slower until the speed becomes totally unsatisfactory.

How can I optimize performance of this process?
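A hedged sketch of settings sometimes used to reduce contention during concurrent bulk loads; the table and column names are placeholders, and relaxing these checks is only safe if the incoming data is already known to be consistent:

    -- per-session settings for each loading connection (my_table, col_a, col_b are placeholders)
    SET SESSION unique_checks = 0;        -- skip secondary unique-key checks during the load
    SET SESSION foreign_key_checks = 0;   -- only if no foreign keys need enforcing
    START TRANSACTION;
    INSERT INTO my_table (col_a, col_b) VALUES (1, 2), (3, 4) /* ... 40,000 rows ... */;
    COMMIT;

Whether this helps with 15 concurrent writers depends mostly on how the secondary index and the InnoDB log are contended, so it is worth measuring each change separately.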

How to perform ETL from RDBMS to Neo4j

Posted: 12 May 2013 04:01 PM PDT

I have a mature 50+ table web application based on MySQL. In order to do some advanced data mining, I want to use Neo4j and the goodness of Cypher. However, I'm having a hard time migrating my data from the RDBMS to Neo4j. I don't want to do it by hand, because that is generally unmaintainable.

I use Pentaho, which supports the Neo4j JDBC driver; however, I'm having a hard time wrapping my head around loading the extracted data into Neo4j, since the GUI is clearly designed for RDBMSs and there is little documentation on how to do it. Has anyone run into a similar problem and found a solution?
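One common workaround, sketched under the assumption that the table and column names below are placeholders and that the MySQL server may write to the given path: export each table to CSV and feed the files to Neo4j's CSV/batch import tooling instead of pushing rows through JDBC.

    SELECT id, name, email
    INTO OUTFILE '/tmp/users.csv'
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    LINES TERMINATED BY '\n'
    FROM users;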

Ubuntu/MySQL - Error "Unable to locate package innotop"

Posted: 12 May 2013 04:58 PM PDT

I am using Ubuntu 12.04.2 LTS, with MySQL 5.5.29.

I'm trying to install innotop for database monitoring, but I get the error "Unable to locate package innotop":

    root@server1:~# apt-get install innotop
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    E: Unable to locate package innotop

I checked the Ubuntu package site at http://packages.ubuntu.com/search?keywords=innotop&searchon=names&suite=raring&section=all and there is no mention, but I know I used to see it there.

    root@server1:~# apt-get update

did not help.

I think I have had this problem before and fixed it by installing some prerequisite, but I can't recall the details anymore.

I searched Google for the error message, but found no mention of it.
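A sketch of one thing worth checking, under the assumption that innotop is published in Ubuntu's "universe" component, which is not always enabled on minimal server installs:

    # make sure a universe line like this exists in /etc/apt/sources.list (12.04 = "precise"):
    #   deb http://archive.ubuntu.com/ubuntu precise universe
    apt-get update
    apt-get install innotop

If the package really has been dropped from the archive, installing innotop from the upstream release (a Perl script plus a few CPAN modules) is the usual fallback.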

How can I benchmark a PostgreSQL query?

Posted: 12 May 2013 01:02 PM PDT

I want to benchmark a query containing a user-defined function I have written for PostgreSQL. Are there any standard ways to run such a benchmark?

I know that timing can be turned on with \timing at the psql prompt, but ideally I'd like a script that takes care of everything automatically: running the query a few times, clearing the PostgreSQL cache after each run (probably by restarting the PostgreSQL service), and outputting the average running time (reporting memory used would be a plus).
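A minimal sketch of such a harness, assuming you may restart the cluster between runs, that the OS page cache should also be dropped, and that mydb, query placeholders and my_function are all invented names; pgbench with a custom script file (pgbench -f) is the more standard tool once cold-cache runs are not required.

    #!/bin/bash
    DB=mydb
    for run in 1 2 3 4 5; do
        sudo service postgresql restart                                   # clear shared_buffers
        sync && echo 3 | sudo tee /proc/sys/vm/drop_caches > /dev/null    # clear the OS page cache
        # EXPLAIN ANALYZE reports the server-side execution time for this run
        psql -d "$DB" -c "EXPLAIN ANALYZE SELECT my_function(42);"
    done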

Multi-variance products

Posted: 12 May 2013 11:26 AM PDT

My current system stores the different variants of products in different tables:

  • Productgroups
  • Productcolors
  • Productsizes
  • Productmodels

A product resides under one Productgroup and can have multiple Color, Size and Model variants.

The logic for populating the database currently relies entirely on the website scripts: when I create a new product in my back office, I fill in and relate the Colors, Sizes and Models to the product and make sure every combination of them exists.

This structure grows quickly; for example, 3 color variants with 6 sizes and 2 models already translate into 36 different product combinations.

I wonder if there is a more logical way to do this database-wise. The problem is that in the future a product might get yet another property (say, Sublabel). How can I make this design extensible?

I have read about EAV, but that is really too much to take on for now. Before I dive in at the deep end, I wonder whether there are other approaches to this problem.
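For comparison, here is a sketch of a generic option/variant model that sits between the current fixed tables and full EAV; all table and column names are invented for illustration. Adding a new property such as Sublabel then becomes a new row in product_option rather than a new table.

    CREATE TABLE product_option (
        id   INT PRIMARY KEY,
        name VARCHAR(50)                  -- 'Color', 'Size', 'Model', later 'Sublabel'
    );
    CREATE TABLE product_option_value (
        id        INT PRIMARY KEY,
        option_id INT NOT NULL,
        value     VARCHAR(50),            -- 'Red', 'XL', ...
        FOREIGN KEY (option_id) REFERENCES product_option (id)
    );
    CREATE TABLE product_variant (
        id         INT PRIMARY KEY,
        product_id INT NOT NULL           -- one sellable combination of a product
    );
    CREATE TABLE variant_value (
        variant_id      INT NOT NULL,
        option_value_id INT NOT NULL,
        PRIMARY KEY (variant_id, option_value_id),
        FOREIGN KEY (variant_id)      REFERENCES product_variant (id),
        FOREIGN KEY (option_value_id) REFERENCES product_option_value (id)
    );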

How to shred .docx XML?

Posted: 12 May 2013 12:08 PM PDT

I am trying to import an XML file (actually from a .docx file) into a SQL Server 2008 database. I am almost a novice at XML programming. I googled a lot, but almost all of the examples there use a simple XML file; here the XML is a little more complex (please see below). Can you please give me some idea of how I should create the table for this XML and what query I should run in SQL Server? I need the values of all the tags and attributes, e.g. w:rsidP, w:rsidRDefault, w:rsidR of w:p, plus w:pStyle, w:bookmarkStart, w:t, etc.

    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <w:document xmlns:ve="http://schemas.openxmlformats.org/markup-compatibility/2006" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:r="http://schemas.openxmlformats.org/officeDocument/2006/relationships" xmlns:m="http://schemas.openxmlformats.org/officeDocument/2006/math" xmlns:v="urn:schemas-microsoft-com:vml" xmlns:wp="http://schemas.openxmlformats.org/drawingml/2006/wordprocessingDrawing" xmlns:w10="urn:schemas-microsoft-com:office:word" xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main" xmlns:wne="http://schemas.microsoft.com/office/word/2006/wordml">
    <w:body>
    <w:p w:rsidR="00EF42E0" w:rsidRDefault="00EF42E0" w:rsidP="00EF42E0">
    <w:pPr><w:pStyle w:val="Heading1"/>
    </w:pPr><w:bookmarkStart w:id="0" w:name="_Toc212523610"/>
    <w:r>
    <w:t>Summary</w:t>
    </w:r>
    <w:bookmarkEnd w:id="0"/>
    </w:p>
    <w:p w:rsidR="00EF42E0" w:rsidRDefault="00EF42E0" w:rsidP="00EF42E0"><w:pPr><w:pStyle w:val="mainbodytext"/><w:ind w:right="-694"/><w:rPr><w:b/><w:bCs/></w:rPr></w:pPr><w:r><w:rPr><w:b/><w:bCs/></w:rPr><w:t>What is the Group Defined Practice for Integrity Management?</w:t></w:r></w:p>
    <w:p w:rsidR="00EF42E0" w:rsidRDefault="00EF42E0" w:rsidP="00EF42E0"><w:pPr><w:pStyle w:val="mainbodytext"/></w:pPr><w:r><w:t xml:space="preserve">This Practice is derived from the GP Group Standard, GRP 01 January 2006, </w:t></w:r><w:proofErr w:type="gramStart"/><w:r><w:t>Integrity</w:t></w:r><w:proofErr w:type="gramEnd"/><w:r><w:t xml:space="preserve"> Management.  In developing QMS it has been possible to embed much of the content of the IM Standard directly into the Group Essentials statements.  For elements 2, 7, 8 and 9 of the Standard it was possible to do that in their entirety and therefore content of those elements are not repeated within this Practice.</w:t></w:r></w:p></w:body></w:document>
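A minimal sketch of shredding the paragraph-level attributes with the xml data type's nodes()/value() methods; @doc and the attribute list are placeholders to adapt once the XML has been loaded (for example via OPENROWSET(BULK ...)):

    DECLARE @doc xml = N'...';   -- the document.xml content loaded here

    ;WITH XMLNAMESPACES ('http://schemas.openxmlformats.org/wordprocessingml/2006/main' AS w)
    SELECT p.n.value('@w:rsidR',        'varchar(20)')    AS rsidR,
           p.n.value('@w:rsidRDefault', 'varchar(20)')    AS rsidRDefault,
           p.n.value('@w:rsidP',        'varchar(20)')    AS rsidP,
           p.n.value('(w:pPr/w:pStyle/@w:val)[1]',   'varchar(50)')  AS style,
           p.n.value('(w:bookmarkStart/@w:name)[1]', 'varchar(100)') AS bookmark,
           p.n.value('.', 'nvarchar(max)')                AS paragraph_text
    FROM   @doc.nodes('/w:document/w:body/w:p') AS p(n);

Each w:p paragraph becomes one row, which also suggests a natural table shape (one column per attribute plus the paragraph text).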

Why is MySQL order by performance poor within a single partition of a partitioned table?

Posted: 12 May 2013 10:30 AM PDT

I have to store some sequence-numbered data in MySQL. I have about 300,000 data items per day over roughly a 10-year span. Let's say the table structure is just a sequence number (a BIGINT) and data (a VARCHAR). One very common query I'll be making is for all data for a single day, ordered by sequence number.

Storing the data in one table per day works really well. Querying all data from a table takes 0.8 seconds (which is acceptable), and adding ORDER BY sequence number takes it up to 1.0 seconds (still acceptable), but this does result in a massive number of tables that I'd rather have fewer of.

Storing in one table per month (and adding an indexed day number field), the time to get a day's data goes up to 1.6 seconds (not great), but adding partitioning by day number brings it right back down to 0.8.

However... when I add the order by sequence number to the partitioned table query, the time goes up to 2.5 seconds! Does this make sense? I would have thought that since my where clause (day number = X) instantly limits to all the data in a single partition, it should then be about the same performance as one of my original day tables (which it is, until I add the order by clause, then everything goes to hell).

It's like it is doing the order by before checking the partitions or something. Anyone got any ideas? (either "yes, that'll happen, and here's why", or "here's what you need to do to speed it up").

Thanks.
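A hedged starting point for investigation; the table and column names below are assumptions based on the description. The first statement confirms that only one partition is actually read, and the composite index lets the ORDER BY be satisfied by index order within the pruned partition instead of a filesort.

    EXPLAIN PARTITIONS
    SELECT data
    FROM   monthly_table
    WHERE  day_number = 12
    ORDER  BY sequence_number;

    ALTER TABLE monthly_table
        ADD INDEX idx_day_seq (day_number, sequence_number);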

Disk space full but logical space available in database

Posted: 12 May 2013 11:28 AM PDT

We have a fairly big MS SQL 2008 R2 database that resides on an SSD drive. The drive itself only has ~110 GB of space, and the database files are the only files on the drive.

The database is in "Simple" recovery mode, and only has two files, .MDF and .LDF.

The disk is now nearly full: the MDF is currently 109 GB in size. However, SSMS tells me there is nearly 18 GB of "Space Available" (on the 'General' properties page), and if I go through the motions of shrinking the file it also tells me there is 18 GB of space free. SSMS also tells me the database size is around 132 GB, which surprises me - that wouldn't fit on the drive!

From what I've read, shrink is a really bad idea. However, I'm starting to see replication errors (could not allocate space for object). We've previously tried shrinking the database, but within a few hours the file was back to its original size.

How should we proceed? Given that there is apparently 18 GB of free space, should SQL Server automatically use that free space? Or is it as simple as: we really need more disk space?
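A small sketch that may help separate the numbers SSMS is adding together; it reports allocated versus used space per file (data and log), so it shows whether the 132 GB figure includes the LDF and where the 18 GB of free space actually lives:

    SELECT name,
           type_desc,
           size * 8 / 1024                                      AS allocated_mb,
           FILEPROPERTY(name, 'SpaceUsed') * 8 / 1024           AS used_mb,
           (size - FILEPROPERTY(name, 'SpaceUsed')) * 8 / 1024  AS free_mb
    FROM   sys.database_files;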

SQL Server simple model database transaction log full 'CHECKPOINT'

Posted: 12 May 2013 10:15 AM PDT

I have a SQL Server 2012 SP1 database whose transaction log is full. It consists of a single data file and a single log file, both about 800 MB.

I had 5GB free disk space, and I've enlarged it to 20GB+, just in case there was a disk space problem.

The database is in SIMPLE recovery model.

I cannot do a full database backup - it fails with

transaction log is full, with wait_desc of 'CHECKPOINT'

I cannot add a log file - it fails with the same error.

I cannot change the recovery model to full or bulk-logged - it fails with the same error.

I cannot execute a manual CHECKPOINT - it fails with the same error.

DBCC LOGINFO (dbid) shows that there are a couple hundred VLFs, all in status 2.

I'm not using change tracking, mirroring, or transaction log shipping.

What can I do to resolve this?

(As recommended, moved from stackoverflow to dba.stackexchange)
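A sketch of the usual diagnostics for this state; the database name is a placeholder. They confirm what the log is waiting on and whether an old open transaction is pinning it:

    SELECT name, log_reuse_wait_desc
    FROM   sys.databases
    WHERE  name = N'YourDb';        -- placeholder database name

    DBCC OPENTRAN ('YourDb');       -- oldest active transaction, if any
    DBCC SQLPERF (LOGSPACE);        -- log size and percent used per database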

Drop database locked the server

Posted: 12 May 2013 11:11 AM PDT

I tried to drop one of the databases on the server. That locked the entire server up so I stopped the mysql process via the console.

I restarted the server, started mysql, and now it seems to be hung on:

    debian-sys-maint | localhost | | Query | 192 | Opening tables | select count(*) into @discard from information_schema.COLUMNS |

It is not necessarily hung, but it takes a long time to move on to a different query.

The other databases seem locked, but I see no locks in SHOW OPEN TABLES.

Any idea what's going on and what I could try?

Thanks.

Edit: it seems like the locks are moving from database to database and table to table. I guess it's doing some kind of checks?

Edit 2: I think I understand what's happening; the answer to why it's doing this is here: Linux Mint trigger slowly query on mysql on system booting
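A sketch of things worth checking while that query runs. One plausible cause (an assumption, not a diagnosis): with innodb_stats_on_metadata enabled, metadata queries against information_schema can make InnoDB re-sample statistics for every table, which looks exactly like a lock crawling from table to table.

    SHOW FULL PROCESSLIST;
    SHOW GLOBAL VARIABLES LIKE 'innodb_stats_on_metadata';
    SET GLOBAL innodb_stats_on_metadata = OFF;   -- dynamic; available in MySQL 5.1.17+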

File locations for Full Text Catalogs using SQL Server 2005 on VMware

Posted: 12 May 2013 10:17 AM PDT

I have an issue with one of our development SQL Servers. We have databases that use full-text search catalogs. As background, our servers are configured to use separate drives for logs, data, and the default installation location, and this server is a virtual machine. When I try to create a new full-text search catalog, it forces me to create it in the default installation directory, citing that you can't create a catalog on a removable drive. This seems kind of contrived since this is a VM: none of the drives are "physical", and the drive it will let me install on isn't any less "removable" than the others.

It does this on any of the test servers, but for this conversation the specifications are:

Windows Server 2008-R2 x64 Standard Edition

SQL Server 2005 9.0.5057

VMware vCenter Server 5.1.0.88016

All of these drives are attached to the VM as hard disks, not mapped drives. [screenshot: attached disks]

But when creating a new catalog, I always receive an error unless I put the catalog on the same drive the instance is installed on.

[screenshot: the error message]

Is this just a quirk of SQL Server, or is there a workaround? I know that in newer versions of SQL Server the catalogs are stored internally, so this isn't an issue, but a number of our servers are still on SQL Server 2005 and will stay there for quite some time. Has anyone else seen this issue? Did you find a solution?
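For reference, a sketch of the statement involved (the path is a placeholder). On SQL Server 2005 the catalog is a directory in the file system, so the target volume has to be reported to SQL Server as a local, non-removable fixed disk; what matters is how the guest OS reports the virtual disk, not whether it is "really" physical.

    CREATE FULLTEXT CATALOG MyCatalog
        IN PATH 'D:\FullTextCatalogs'     -- placeholder path on the drive that is rejected
        AS DEFAULT;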

Oracle 11g - what's the most appropriate NLS_SORT parameter to cover all/most European languages?

Posted: 12 May 2013 09:50 AM PDT

I'm developing an Oracle-based application which is likely to be used by clients in many different (mostly European) countries. I want the program to change its NLS_SORT on startup so that sorting is case-insensitive, using the ALTER SESSION SET NLS_SORT='???'; command.

I'm going to have a config file which contains the NLS_SORT setting the user wants to use so it is editable, but I want a default which will cover European languages. Which is the best one to choose?

Thanks!
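One hedged candidate for the default is a multilingual linguistic sort with the case-insensitive suffix, for example GENERIC_M_CI; whether it matches every client's expectations still needs checking per language. NLS_COMP is included because comparisons only follow NLS_SORT when it is set to LINGUISTIC:

    ALTER SESSION SET NLS_COMP = LINGUISTIC;
    ALTER SESSION SET NLS_SORT = 'GENERIC_M_CI';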

MySQL Synchronise Data From One DB Table To Another & Vice-Versa Using Triggers

Posted: 12 May 2013 10:56 AM PDT

We are migrating from one database structure to another - very slowly. The system is MySQL. There are two databases. For example's sake, we will call the databases old_db and new_db. Both databases have tables that will need to be synchronised for around a year whilst the business's application is migrated from the old db to the new db.

Again, to slim down the example, we have one table in each database: old_db.myoldertable and new_db.mynewtable.

Structure:

    old_db.myoldertable
        primary_id, Date, amount, unneedded_column1, unneedded_column2 ... and so-on

    new_db.mynewtable
        primary_id, date, amount, old_table_primary_id

They look similar in structure, but this is simplified.

Problem:

  • If I put an insert/update/delete trigger on both tables, I will end up in a recursion loop. This is because the tables need to be synchronised, so the insert / update / delete triggers need to be on both tables.

Requirements

  • Must be 100% MySQL. Cannot use a programming language for this.
  • The new and old database tables must be synchronised immediately if either have rows that are inserted/updated/deleted.
  • Daily / hourly updates are NOT an option. Again, must be synchronised immediately.

Solutions?

  • Ideas to break the possibility of recursion for all three insert/update/delete scenarios.
  • Any other ideas.

EDIT :

So far I have looked at

  • One solution I have explored is replacing the old table with a view. However, our new database has two (or more) tables where the old database had only one. Views in MySQL cannot be writable against more than one base table, and views cannot have triggers in MySQL.

    • I also looked at using a UUID that represents the last update: I would create a column in both tables to contain a unique ID and, as part of the triggers, make sure both tables contain the same UUID. However, this does not stop recursion.
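A sketch of one way to break the loop with a connection-scoped guard variable (column names come from the simplified example; DELIMITER handling and the UPDATE/DELETE triggers are omitted). Because a trigger runs in the same connection as the statement that fired it, the flag set by one side's trigger is visible to the other side's trigger and suppresses the write-back:

    CREATE TRIGGER old_db.old_to_new_ins AFTER INSERT ON old_db.myoldertable
    FOR EACH ROW
    BEGIN
        IF @sync_in_progress IS NULL THEN
            SET @sync_in_progress = 1;
            INSERT INTO new_db.mynewtable (date, amount, old_table_primary_id)
            VALUES (NEW.Date, NEW.amount, NEW.primary_id);
            SET @sync_in_progress = NULL;
        END IF;
    END;

    -- the mirror trigger on new_db.mynewtable tests and sets the same @sync_in_progress flag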

SQL Server-MySQL master-master replication

Posted: 12 May 2013 07:48 AM PDT

We are looking into setting up a master-master replicated MSSQL database, and I am interested to hear about any potential pitfalls we may run into along the way, and generally any advice you might have for us.

To expand on our situation a bit: we currently have an internal SQL Server 2008 R2 server and a web-based MySQL server. We use SSIS a lot to transfer data between the two servers. This leaves the data on the two servers out of sync, as the SSIS packages only run hourly or nightly. We have investigated MSSQL-to-MySQL replication, but we've never found a comprehensive solution.

This has brought us to what we are now hoping to set up: the web tier using both the current MySQL database and an additional, replicated MSSQL database.

Specific areas that I'm interested in finding out more on are:

  • How does a master-master set up deal with an internet connection being dropped? I've read that one node becomes read-only but I can't seem to find much detail about this.
  • Is there anything to be wary of connecting from PHP to an MSSQL server?
  • What type of replication would you recommend? Bi-directional transactional seems to fit the bill quite well.

Thanks for your time

createdb: could not connect to database postgres: FATAL: could not write init file

Posted: 12 May 2013 06:41 PM PDT

Red Hat Enterprise Server 3.0, 32-bit

psql (PostgreSQL) 8.2.3

user: postgres

The server is running:

    /soft/postgres/8.2.3/bin/pg_ctl start
    pg_ctl: another server may be running; trying to start server anyway
    2013-05-09 11:23:07 BRST---:FATAL:  lock file "postmaster.pid" already exists
    2013-05-09 11:23:07 BRST---:HINT:  Is another postmaster (PID 12810) running in data directory "/opt/psql/dba/bdadms/data1/pstg"?
    pg_ctl: could not start server
    Examine the log output.

I had just created a new database cluster with initdb; but when I run createdb:

    8.2.3:postgres:pstg:>/soft/postgres/8.2.3/bin/createdb pstg
    createdb: could not connect to database postgres: FATAL:  could not write init file
    8.2.3:postgres:pstg:>/soft/postgres/8.2.3/bin/createdb postgres
    createdb: could not connect to database template1: FATAL:  could not write init file
    8.2.3:postgres:pstg:>/soft/postgres/8.2.3/bin/createdb template1
    createdb: could not connect to database postgres: FATAL:  could not write init file

Any clues as to the cause of this problem and possible solutions?
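A sketch of basic checks, using the paths reported in the log above. Since "could not write init file" happens while the backend writes a small cache file inside the data directory, permissions, ownership and free space on that volume are the usual suspects, as is accidentally connecting to a different, already-running postmaster:

    ps aux | grep postmaster                       # which server owns PID 12810, and which data directory?
    cat /opt/psql/dba/bdadms/data1/pstg/postmaster.pid
    ls -ld /opt/psql/dba/bdadms/data1/pstg         # owned and writable by the postgres user?
    df -h  /opt/psql/dba/bdadms/data1/pstg         # any space left on the volume?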

Real-time merge of two SQL Server databases on different servers

Posted: 12 May 2013 07:28 PM PDT

One of the requirements our customer has given us is to merge two databases running on two different machines in real time. This means we have to build some sync policy (or use SQL Server tools to manage it). But I have some doubts:

  • How can I be sure that my databases are fully merged? I mean, we are in a real-time environment with many web and local requests per second; it seems impossible to propagate table updates/merges at the same speed.

  • Is this a good architecture compared to a master/slave topology (with a single database)? I'm pretty sure syncing both databases with each other will cost too much, since our tables store a lot of data in terms of rows and information.

  • Can anyone provide a good solution to build the best architecture possible?

MySQL PDO Cannot assign requested address

Posted: 12 May 2013 01:07 PM PDT

Can someone help me with this error?

    [08-Apr-2013 17:44:08 Europe/Berlin] PHP Warning:  PDO::__construct(): [2002]
    Cannot assign requested address (trying to connect via tcp://****:3306) in
    /var/www/***
    [08-Apr-2013 17:44:08 Europe/Berlin] PHP Fatal error:  Uncaught exception
    'PDOException' with message 'SQLSTATE[HY000] [2002] Cannot assign requested
    address' in /var/www/***

I have a server with a lot of connections per second; out of roughly 100 connections, a single one gets this error.

I've tried this recommendation from Stack Overflow; however, it does not solve my problem.
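A sketch of checks based on one common cause (an assumption, not a diagnosis): "Cannot assign requested address" on the client side often means the ephemeral port range is exhausted because every request opens a fresh TCP connection that then lingers in TIME_WAIT.

    ss -tan state time-wait | wc -l        # how many sockets are stuck in TIME_WAIT?
    sysctl net.ipv4.ip_local_port_range    # size of the ephemeral port range
    sysctl net.ipv4.tcp_tw_reuse           # reuse of TIME_WAIT sockets (enable with care)

Persistent connections (PDO::ATTR_PERSISTENT) are the usual application-side mitigation if port exhaustion turns out to be the cause.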

RID vs INCLUDE on a large field

Posted: 12 May 2013 07:57 AM PDT

I have a table that stores notes

    create table tblNote (
        Id int identity(1,1),
        ParentId int,
        ParentType varchar(32),
        NoteType varchar(32),
        Note varchar(max),
        CreatedBy varchar(25),
        CreatedDate datetime,
        .
        .
        .
        <other metadata about the note>
    )

I have done a lot of reading recently about how SQL Server handles indexes (2005 and forward).

I have a clustered index on ID

[ I have considered changing the clustered index to parentId, parentType since that is reasonably narrow and it is static. ]

The overwhelming majority of queries against this table are going to be along the lines of:

    select NOTE, createdDate, createdBy
    from tblNote
    where parentId = 12 and parentType = 'RFQ'

The question I want to ask today (though any feedback is welcome) is this:

The NC index I could add is:

    create index idx_nc_note_parent
        on tblNote (parentId, parenttype)
        include (createdby, createdDate)

This would be useful in creating little lists of the notes where we might include who and when type info.

I am hesitant to include a varchar(max) field. It seems like it would really hurt the portion of the index that could be cached (is this reasonable or unreasonable?).

Assuming I don't include the NOTE field, a RID lookup will be necessary to actually fetch the note content when it is requested.

While I have read quite a bit about how expensive RID lookups are, it still has to be better to have this index as opposed to doing a table scan, RIGHT?

[Apologies for the code blocks - I added the 4 spaces, but maybe I did it wrong?]
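For comparison, a sketch of the two index shapes under discussion (names from the question). Option A keeps the index narrow and pays a lookup whenever Note is needed; Option B covers the query at the cost of a larger index, and, as a hedged note, varchar(max) values beyond the in-row limit are typically pushed to off-row pages, so the cached portion of the index may suffer less than expected:

    -- Option A: narrow index; Note is fetched via a lookup when requested
    CREATE INDEX idx_nc_note_parent
        ON tblNote (ParentId, ParentType)
        INCLUDE (CreatedBy, CreatedDate);

    -- Option B: covering index; no lookup, but a larger index
    CREATE INDEX idx_nc_note_parent_covering
        ON tblNote (ParentId, ParentType)
        INCLUDE (CreatedBy, CreatedDate, Note);

Either shape should beat a full scan for selective parent values: a seek on (ParentId, ParentType) plus a handful of lookups touches far fewer pages than reading the whole table.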

1286 - Unknown storage engine 'InnoDB'

Posted: 12 May 2013 05:07 PM PDT

I am trying to use Roundcube and it recently broke. I don't know whether this is due to a recent MySQL update, but in phpMyAdmin I get the following error if I try to view a table:

1286 - Unknown storage engine 'InnoDB'  

and

    mysql> SHOW ENGINES;
    +--------------------+---------+----------------------------------------------------------------+--------------+------+------------+
    | Engine             | Support | Comment                                                        | Transactions | XA   | Savepoints |
    +--------------------+---------+----------------------------------------------------------------+--------------+------+------------+
    | MRG_MYISAM         | YES     | Collection of identical MyISAM tables                          | NO           | NO   | NO         |
    | CSV                | YES     | CSV storage engine                                             | NO           | NO   | NO         |
    | MyISAM             | DEFAULT | MyISAM storage engine                                          | NO           | NO   | NO         |
    | BLACKHOLE          | YES     | /dev/null storage engine (anything you write to it disappears) | NO           | NO   | NO         |
    | FEDERATED          | NO      | Federated MySQL storage engine                                 | NULL         | NULL | NULL       |
    | PERFORMANCE_SCHEMA | YES     | Performance Schema                                             | NO           | NO   | NO         |
    | ARCHIVE            | YES     | Archive storage engine                                         | NO           | NO   | NO         |
    | MEMORY             | YES     | Hash based, stored in memory, useful for temporary tables      | NO           | NO   | NO         |
    +--------------------+---------+----------------------------------------------------------------+--------------+------+------------+
    8 rows in set (0.00 sec)

and

    [mysqld]
    default-storage-engine=MyISAM
    local-infile=0
    symbolic-links=0
    skip-networking
    max_connections = 500
    max_user_connections = 20
    key_buffer = 512M
    myisam_sort_buffer_size = 64M
    join_buffer_size = 64M
    read_buffer_size = 12M
    sort_buffer_size = 12M
    read_rnd_buffer_size = 12M
    table_cache = 2048
    thread_cache_size = 16K
    wait_timeout = 30
    connect_timeout = 15
    tmp_table_size = 64M
    max_heap_table_size = 64M
    max_allowed_packet = 64M
    max_connect_errors = 10
    query_cache_limit = 1M
    query_cache_size = 64M
    query_cache_type = 1
    low_priority_updates=1
    concurrent_insert=ALWAYS
    log-error=/var/log/mysql/error.log
    tmpdir=/home/mysqltmp
    myisam_repair_threads=4

    [mysqld_safe]
    open_files_limit = 8192
    log-error=/var/log/mysql/error.log

    [mysqldump]
    quick
    max_allowed_packet = 512M

    [myisamchk]
    key_buffer = 64M
    sort_buffer = 64M
    read_buffer = 16M
    write_buffer = 16M

Any ideas as to how to fix this? It used to work just fine.
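SHOW ENGINES above no longer lists InnoDB at all, so a sketch of what to check (assuming a 5.1/5.5-era server): whether InnoDB is disabled by configuration or failed to initialize at startup, in which case the reason will be in the error log configured above (/var/log/mysql/error.log).

    SHOW VARIABLES LIKE 'have_innodb';   -- DISABLED usually means skip-innodb in some config file
                                         -- or InnoDB failing to start (check the error log)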

Alternative tools to export Oracle database to SQL Server?

Posted: 12 May 2013 03:07 PM PDT

I've got an Oracle database that I need to export (schema and data) to SQL Server.

I am trying the Microsoft SQL Server Migration Assistant for Oracle, but it is horribly slow, grossly inefficient, and very un-user-friendly. For example, I was having problems connecting to the SQL Server database during data migration, but it still spent ~5 minutes preparing all the data before attempting a connection to SQL Server; when that failed, the 5 minutes of preparatory work were wasted.

Right now I'm just trying to connect to another Oracle DB using this tool. I left it overnight and came back this morning, and it's still stuck at 19% of "Loading objects...". And this is on a machine with a good 18 GB of RAM, of which maybe 8.5 GB is currently in use. Task Manager shows me that SSMAforOracle.exe is using 0 CPU, 0 PF Delta, and no change whatsoever in memory usage. In other words: frozen stiff. Absolutely intolerable.

Are there any other tools out there that can migrate an Oracle DB to SQL Server a little more efficiently?

Performing SELECT on EACH ROW in CTE or Nested QUERY?

Posted: 12 May 2013 04:07 PM PDT

This is a problem in PostgreSQL

I have a table which stores a tree of users:

    +------+---------+
    |  id  | parent  |
    |------+---------|
    |  1   |   0     |
    |------|---------|
    |  2   |   1     |
    |------|---------|
    |  3   |   1     |
    |------|---------|
    |  4   |   2     |
    |------|---------|
    |  5   |   2     |
    |------|---------|
    |  6   |   4     |
    |------|---------|
    |  7   |   6     |
    |------|---------|
    |  8   |   6     |
    +------+---------+

I can query a complete tree from any node by using the connectby function, and I can separately query the size of a tree in terms of the total nodes in it, for example:

tree for #1 has size 7
tree for #5 has size 0
tree for #6 has size 2, and so on

Now I want to do something like selecting all possible trees from this table (again using connectby), count the size of each, and create another data set with records of the ID and the size of the underlying tree, like this:

    +------------------+-------------+
    |  tree_root_node  |  tree_size  |
    |------------------+-------------|
    |      1           |     7       |
    |------------------+-------------|
    |      2           |     3       |
    |------------------+-------------|
    |      3           |     0       |
    |------------------+-------------|
    |      4           |     3       |
    |------------------+-------------|
    |      5           |     0       |
    |------------------+-------------|
    |      6           |     2       |
    |------------------+-------------|
    |      7           |     0       |
    |------------------+-------------|
    |      8           |     0       |
    +------------------+-------------+

The problem is that I am unable to perform the same SELECT statement for every row in the original table in order to fetch each tree and calculate its size, and even if I could, I don't know how to create a separate data set using the fetched and calculated data.

I am not sure whether this is a simple use of some function available in Postgres, or whether I'd have to write a function for it; I simply don't know what this kind of query is called. Googling for hours and searching for another hour here on dba.stackexchange returned nothing.

Can someone please point me in the right direction?
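One hedged way to get the whole result in a single statement is a recursive CTE instead of calling connectby per row; this sketch assumes the table is named users with the id and parent columns shown above, and counts all descendants of every node (the root itself is excluded from its own count):

    WITH RECURSIVE descendants AS (
        SELECT id AS root, id AS node
        FROM   users
        UNION ALL
        SELECT d.root, u.id
        FROM   descendants d
        JOIN   users u ON u.parent = d.node
    )
    SELECT root         AS tree_root_node,
           COUNT(*) - 1 AS tree_size      -- subtract 1 so the root is not counted
    FROM   descendants
    GROUP  BY root
    ORDER  BY root;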

"ORA-01950: no privileges on tablespace 'SYSTEM' " : What error is it?

Posted: 12 May 2013 07:55 PM PDT

Earlier I created a user:

    SQL> create user suhail identified by password
    SQL> User created.
    SQL> Grant create session,create table to suhail;
    SQL> Grant Succeeded.

But when I log in as suhail and try to create a table, I get an error saying ORA-01950: no privileges on tablespace 'SYSTEM'. What error is this? Why does it occur?


    SQL> connect suhail
    Enter password:
    Connected.
    SQL> create table suhail_table(name char(10));
    create table suhail_table(name char(10))
    *
    ERROR at line 1:
    ORA-01950: no privileges on tablespace 'SYSTEM'

I am using Oracle Database 11g Express Edition Release 11.2.0.2.0
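A sketch of the usual fix, run as a privileged user: the new user has no quota on any tablespace, so it cannot allocate segments. Pointing it at a non-SYSTEM tablespace (USERS exists by default in XE, but verify on your system) and granting a quota resolves the error:

    ALTER USER suhail DEFAULT TABLESPACE users QUOTA UNLIMITED ON users;
    -- or, more broadly: GRANT UNLIMITED TABLESPACE TO suhail;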

List all permissions for a given role?

Posted: 12 May 2013 02:07 PM PDT

I've searched around all over and haven't found a conclusive answer to this question.

I need a script that can give ALL permissions for an associated role.

Any thoughts, or is it even possible?

This gets me CLOSE - but I can't seem to flip it around and give the summary for roles, rather than users.

http://consultingblogs.emc.com/jamiethomson/archive/2007/02/09/SQL-Server-2005_3A00_-View-all-permissions--_2800_2_2900_.aspx

    WITH perms_cte AS
    (
        SELECT USER_NAME(p.grantee_principal_id) AS principal_name,
               dp.principal_id,
               dp.type_desc AS principal_type_desc,
               p.class_desc,
               OBJECT_NAME(p.major_id) AS object_name,
               p.permission_name,
               p.state_desc AS permission_state_desc
        FROM   sys.database_permissions p
        INNER  JOIN sys.database_principals dp
               ON p.grantee_principal_id = dp.principal_id
    )
    -- role members
    SELECT rm.member_principal_name, rm.principal_type_desc, p.class_desc,
           p.object_name, p.permission_name, p.permission_state_desc, rm.role_name
    FROM   perms_cte p
    RIGHT OUTER JOIN (
        SELECT role_principal_id, dp.type_desc AS principal_type_desc,
               member_principal_id, USER_NAME(member_principal_id) AS member_principal_name,
               USER_NAME(role_principal_id) AS role_name --,*
        FROM   sys.database_role_members rm
        INNER  JOIN sys.database_principals dp
               ON rm.member_principal_id = dp.principal_id
    ) rm
           ON rm.role_principal_id = p.principal_id
    ORDER  BY 1
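To flip it around, a sketch that lists the permissions granted directly to each database role (rather than to the role's members), which is the summary the query above approaches from the member side:

    SELECT dp.name                    AS role_name,
           perm.class_desc,
           OBJECT_NAME(perm.major_id) AS object_name,
           perm.permission_name,
           perm.state_desc
    FROM   sys.database_principals dp
    JOIN   sys.database_permissions perm
           ON perm.grantee_principal_id = dp.principal_id
    WHERE  dp.type = 'R'              -- database roles only
    ORDER  BY dp.name, object_name;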

Named Pipe Provider Error code 40

Posted: 12 May 2013 06:07 PM PDT

I have literally tried everything, from enabling named pipes, to adding port exceptions to the firewall, to everything possible in the Surface Area Configuration. I can connect to the SQL instance (using TCP and Named Pipes) with SQL Server Management Studio, but sqlcmd throws an error:

Login timeout expired  

Help!
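A sketch worth trying (server and instance names are placeholders): force sqlcmd to use an explicit protocol and endpoint so it resolves the same thing SSMS does, and lengthen the login timeout to rule out slow name resolution:

    REM explicit TCP endpoint, Windows authentication, 30-second login timeout
    sqlcmd -S tcp:MYSERVER,1433 -E -l 30

    REM explicit named-pipe endpoint for a default instance
    sqlcmd -S np:\\MYSERVER\pipe\sql\query -E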

MySQL auto increment problem with deleting rows / archive table

Posted: 12 May 2013 08:07 PM PDT

A hosted server is running "maintenance" each weekend. I am not privy to the details.

In a database on this server there is a MyISAM table. This table never holds more than 1000 rows and usually far fewer. It is MyISAM so that the auto-increment does not reset (and with so few rows it really doesn't matter). Rows are regularly deleted from this table and moved to an archive table (1M rows).

The problem is that lately the auto-increment value has "rolled back" slightly after each maintenance window.

Is there any easy way to verify the auto increment of the insert table by reading the max id from both the insert and the archive table?

I'd rather not verify before each insert unless that is the only solution.

Here are the basic table layouts:

    CREATE TABLE x
    (
        xid int(10) unsigned NOT NULL AUTO_INCREMENT, //snip
        PRIMARY KEY (xid)
    ) ENGINE=MyISAM AUTO_INCREMENT=124 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

    CREATE TABLE xhistory
    (
        xid int(10) unsigned NOT NULL DEFAULT '0', //snip
        PRIMARY KEY (xid)
    ) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

Far from perfect workaround: (this was somewhat urgent, I had to manually update over 100 rows)

    select xid from xhistory where x=?

Check whether the just-inserted row in x already exists in the history table. If it does:

    select greatest(max(x.xid),max(xhistory.xid)) as newval from x,xhistory

Find a new id.

    INSERT INTO x SELECT * FROM x AS iv WHERE iv.xid=? ON DUPLICATE KEY UPDATE xid=?

And update our row with this id.
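A sketch of a one-off repair that could be run after each maintenance window instead of checking every insert (assuming the rollback really is the counter being re-derived from the now-small table): bump the table's AUTO_INCREMENT back above the archive's high-water mark. ALTER TABLE cannot take a variable directly, so the statement is built and prepared:

    SET @next := GREATEST(
                     (SELECT IFNULL(MAX(xid), 0) FROM x),
                     (SELECT IFNULL(MAX(xid), 0) FROM xhistory)
                 ) + 1;
    SET @stmt := CONCAT('ALTER TABLE x AUTO_INCREMENT = ', @next);
    PREPARE s FROM @stmt;
    EXECUTE s;
    DEALLOCATE PREPARE s;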

MySQL Data Synchronization

Posted: 12 May 2013 06:22 PM PDT

I have a user who will be away from the office for a while, but who still wants to be able to insert a load of records into our MySQL database. The user will have a laptop, but no reliable internet connection, so no VPN etc.

I'm aware that MySQL has a Master/Slave Replication feature, but unless I've misunderstood, this only provides read-only synchronization for the slaves.

I'm thinking that I can take a snapshot of the master database and copy it to a local server on the user's laptop. From that point onwards the local and master databases will of course be out of sync. However, because the user only intends to add records while away, and not delete or update them, I'm thinking, perhaps naively, that a re-sync should be relatively easy to do with some SQL scripting.

Is this the right way to go? Are there any hidden dangers to be aware of?
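A sketch of what that re-sync could look like, assuming the new rows can be identified by an AUTO_INCREMENT id above the snapshot's high-water mark and that the database, table and id values below are placeholders; giving the laptop and the office server distinct auto_increment_offset values is the usual way to avoid key collisions in the meantime.

    # on the laptop, after returning: dump only the rows added while away
    mysqldump --no-create-info --where="id > 123456" mydb mytable > new_rows.sql

    # on the master: replay the new rows
    mysql mydb < new_rows.sql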
