Monday, April 15, 2013

[how to] impdp failing with ORA-01031: insufficient privileges

impdp failing with ORA-01031: insufficient privileges

Posted: 15 Apr 2013 06:38 PM PDT

I'm trying to import a dmp file provided by our dba to my dev database. I've been able to export the contents to a SQLFILE and the contents look ok. I tried importing the schema and it kept on failing (same errors as shown below); so I tried running the SQL directly from the sqlfile - this created all the schema objects but didn't import the data.

The export was from the expenses schema from our test database. I already have another schema by that name in my dev database that I don't want to touch; so I'm remapping expenses to expenses1.

This schema (expenses1) has READ and WRITE privilege on the data_pump_dir directory. It has, as far as I can tell, all the privileges it needs (create table, view, procedure, sequence, etc) as well as IMP_FULL_DATABASE.

[oradev@DIAS45 dpdump]$ impdp expenses1/expenses1 logfile=expenses1.log remap_schema=EXPENSES:EXPENSES1 directory=data_pump_dir dumpfile=ossarct_expenses_expdp.dmp content=DATA_ONLY schemas=EXPENSES include=TABLE

Import: Release 11.2.0.3.0 - Production on Tue Apr 16 09:28:21 2013

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

UDI-31626: operation generated ORACLE error 31626
ORA-31626: job does not exist
ORA-39086: cannot retrieve job information
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3326
ORA-06512: at "SYS.DBMS_DATAPUMP", line 4551
ORA-06512: at line 1

[oradev@DIAS45 dpdump]$ cat expenses1.log
;;;
Import: Release 11.2.0.3.0 - Production on Tue Apr 16 09:28:21 2013

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
;;;
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "EXPENSES1"."SYS_IMPORT_SCHEMA_01" successfully loaded/unloaded
Starting "EXPENSES1"."SYS_IMPORT_SCHEMA_01":  expenses1/******** logfile=expenses1.log remap_schema=EXPENSES:EXPENSES1 directory=data_pump_dir dumpfile=ossarct_expenses_expdp.dmp content=DATA_ONLY schemas=EXPENSES include=TABLE
ORA-39097: Data Pump job encountered unexpected error -1031
ORA-39065: unexpected master process exception in DISPATCH
ORA-01031: insufficient privileges

I can't tell from the above which privilege is missing, or whether my command is telling impdp to do more than I expect it to. I've also tried omitting the schemas and include parameters, with no difference.
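If it helps to rule things out, one way to double-check what the importing schema actually holds is to query the standard Oracle data dictionary views (a diagnostic sketch; run it as a DBA user):

-- System privileges and roles granted to the importing schema
SELECT privilege    FROM dba_sys_privs  WHERE grantee = 'EXPENSES1';
SELECT granted_role FROM dba_role_privs WHERE grantee = 'EXPENSES1';

-- READ/WRITE grants on the Data Pump directory object
SELECT grantee, privilege
FROM   dba_tab_privs
WHERE  table_name = 'DATA_PUMP_DIR';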

Avoiding Multiple Queries when Searching for Records Associated with a Set of Records

Posted: 15 Apr 2013 03:45 PM PDT

So, I am sure I have done something really stupid while designing this, and I'm open to schema changes if they'll really help me out.

On to the problem:

I have a custom shopping cart system (backed by a MySQL database) that includes a products table and a price_rules table that's used for computing discounts and applying promo code discounts. Some price rules don't have promo codes attached to them; some are simply "10% off of product X from March 1st through April 1st" or similar. Because a single price rule can apply to many individual products, I also have a join table called price_rule_product.

When showing a set of products (for example, on the main shop page or listing all products in a category) I'm currently running a separate query for each product to look for price rules that apply to that product. Here's what one of those queries looks like:

SELECT DISTINCT price_rule.*
FROM price_rule
    INNER JOIN price_rule_product
        ON price_rule.id = price_rule_product.price_rule_id
    INNER JOIN product
        ON product.id = price_rule_product.product_id
WHERE product.id = 10
    AND price_rule.apply_to = 'line_items'
    AND ( price_rule.start_date IS NULL
        OR price_rule.start_date = 0
        OR price_rule.start_date <= 1366063902 )
    AND ( price_rule.end_date IS NULL
        OR price_rule.end_date = 0
        OR price_rule.end_date >= 1366063902 )
    AND ( price_rule.promo_code IS NULL
        OR price_rule.promo_code = '' )
ORDER BY sort ASC

Oh SQL Gods, I pray you have some suggestions/solutions for this. It is causing some significant performance issues, and I'm just not experienced enough with SQL to figure out where to go from here.
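One common way to avoid the per-product round trips, sketched against the tables above, is to fetch the rules for the whole product set in one query and then group the rows by product_id in application code (the id list is illustrative):

SELECT price_rule_product.product_id, price_rule.*
FROM price_rule
    INNER JOIN price_rule_product
        ON price_rule.id = price_rule_product.price_rule_id
WHERE price_rule_product.product_id IN (10, 11, 12)   -- ids of all products on the page
    AND price_rule.apply_to = 'line_items'
    AND ( price_rule.start_date IS NULL OR price_rule.start_date = 0 OR price_rule.start_date <= 1366063902 )
    AND ( price_rule.end_date IS NULL OR price_rule.end_date = 0 OR price_rule.end_date >= 1366063902 )
    AND ( price_rule.promo_code IS NULL OR price_rule.promo_code = '' )
ORDER BY price_rule_product.product_id, sort ASC

Indexes on price_rule_product (product_id, price_rule_id) and on the date columns usually matter as much as the query shape.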

Can putting mysql DB into memory or having the innodb_buffer_pool_size match the size of DB increase performance?

Posted: 15 Apr 2013 02:28 PM PDT

We have a MySQL database that has roughly 80 gigabytes (GB) of data using the InnoDB engine. We are looking to increase performance on the database (we have looked at the slow log and gone through and optimized the DB and indexes), and one avenue is to get better hardware.

Currently our MySQL DB is on a six-year-old server with 16 GB of memory and 2 x Xeon 2.00 GHz CPUs (2 cores), with a maximum capacity of 32 GB of memory.

We are looking to buy a new server for the mysql database with the following specs:

Dell PowerEdge R420 - Highlight specs

  • 2 Intel Xeon E5-2450 2.10GHz, 20M Cache
  • 128 GB RAM
  • 400GB Solid State Drive

So if the database is 80 GB and we have 128 GB of memory on this new server, my questions are:

Can we put the full database into memory and see performance gains?

Would setting the innodb_buffer_pool_size to 88GB effectively be the same as putting "the DB in memory"?

Can mysql take advantage of this memory? Are there any limitations on this on the mysql side?

Would there be any pitfalls to loading the full MySQL database into memory, or to having such a large innodb_buffer_pool_size?
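For context, a minimal my.cnf sketch of the kind of settings usually discussed for a dedicated InnoDB box with 128 GB of RAM; the numbers are illustrative assumptions, not tuned recommendations:

[mysqld]
# leave headroom for the OS, connection buffers, etc.
innodb_buffer_pool_size = 96G
# flush method often suggested for dedicated database hosts on Linux
innodb_flush_method     = O_DIRECT
# larger redo logs reduce checkpoint pressure with a big buffer pool
innodb_log_file_size    = 1G

A buffer pool larger than the data set mainly helps reads; writes are still bound by redo logging, flushing and durability settings such as innodb_flush_log_at_trx_commit.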

MySQL data table seems to be stuck at 16777215 rows

Posted: 15 Apr 2013 02:21 PM PDT

I am creating a MySQL table and loading data from a local file.

CREATE TABLE `patent` (
  `seq_number` mediumint(9) unsigned NOT NULL auto_increment,
  `seq_id` varchar(24) default NULL,
  `date` date default NULL,
  `patent_number` varchar(16) default NULL,
  `pat_seq_number` smallint(6) unsigned default '0',
  `organism` text,
  `assignee` text,
  `seq_length` int(8) default '0',
  `seq` text,
  PRIMARY KEY (`seq_number`),
  INDEX `seq_id` (`seq_id`),
  INDEX `patent_number` (`patent_number`),
  INDEX `pat_seq_number` (`pat_seq_number`),
  INDEX `seq_length` (`seq_length`)
) ENGINE = MyISAM MAX_ROWS=536870912;

LOAD DATA LOCAL INFILE '/data/p_table.txt' INTO TABLE patent IGNORE 0 LINES
  (seq_id, date, patent_number, pat_seq_number, organism, assignee, seq, seq_length);

It seems that I am stuck at the number of rows:

mysql> select count(*) from patent;
+----------+
| count(*) |
+----------+
| 16777215 |
+----------+
1 row in set (0.00 sec)

My text data file, however, has about 20 million lines (records).

I tried Engine=InnoDB

I also tried Engine = MyISAM MAX_ROWS=200000000 (200 million)

mysql> show table status like 'patent'\G
*************************** 1. row ***************************
           Name: patent
         Engine: MyISAM
        Version: 10
     Row_format: Dynamic
           Rows: 16777215
 Avg_row_length: 244
    Data_length: 4098768552
Max_data_length: 281474976710655
   Index_length: 715227136
      Data_free: 0
 Auto_increment: 16777216
    Create_time: 2013-04-14 12:46:10
    Update_time: 2013-04-14 12:50:43
     Check_time: 2013-04-14 12:54:08
      Collation: latin1_swedish_ci
       Checksum: NULL
 Create_options: max_rows=536870912
        Comment:
1 row in set (0.00 sec)

The manuals and online advices all say I should be able to create large tables.

Here is my uname output on Linux:

Linux ... 2.6.32-279.el6.x86_64 #1 SMP Wed Jun 13 18:24:36 EDT 2012 x86_64 x86_64 x86_64 GNU/Linux

Here is the MySQL version:

Welcome to the MySQL monitor. Commands end with ; or \g.

Your MySQL connection id is 318
Server version: 5.1.67 Source distribution

How can I create a larger table and get the row count past 2^24 - 1 = 16777215 rows?
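One likely explanation, based on the column definition above (stated as an assumption, not a certainty): seq_number is declared mediumint unsigned, whose maximum value is 16777215, so AUTO_INCREMENT stops exactly there regardless of MAX_ROWS. A sketch of the corresponding fix:

ALTER TABLE patent
  MODIFY `seq_number` INT UNSIGNED NOT NULL AUTO_INCREMENT;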

Reclaiming space from a large partitioned database

Posted: 15 Apr 2013 02:36 PM PDT

  • We have a database where one of the business groups decided to remove several million rows as part of their upgrade process (we removed them for them). This database is partitioned on a date, and removing all these rows would let us reclaim around 30% of the database size. But obviously, if we run the shrink command there is a possibility of the database getting fragmented, and running a rebuild afterwards grows the database again. Is there any better way of reclaiming the space other than moving to a different filegroup, as suggested by Paul Randal?
  • If we go via the traditional shrink-and-rebuild-index route to reclaim the space on several of these partitions, we will have to put the database in SIMPLE recovery (bulk-logged did not help much with the reindex) to keep the transaction log from filling up. This will obviously break log shipping, and we will have to set up log shipping on this VLDB again, which would be another tedious job. For a VLDB, what would be the best way to reclaim space for a few of these tables without breaking log shipping, using the existing commands?

Thanks for your help with this

Generate script for SQL Broker objects

Posted: 15 Apr 2013 01:50 PM PDT

I'm using SQL Server Management Studio to generate scripts for all the objects in an existing database, but the objects related to SQL Server Service Broker (messages, queues and services) are not generated.

I've fiddled with the advanced options but still can't get them out. Is there a way to do it?

import csv data stored in a blob column

Posted: 15 Apr 2013 08:03 PM PDT

Here's what I need to do:

  1. Users will upload CSV files into a SQL Server 2012 blob column
  2. Each night, I would like to take each file and import the data into a table

Is it possible to use a stored procedure to read the file inside the blob column, loop over each line and insert the data into a table?

Should I:

  1. Export the file to a temporary file on the file system
  2. Read that file and do the import

or can I:

  1. Read the file directly from the blob column and do the import? (See the sketch below.)
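A rough T-SQL sketch of the read-directly-from-the-blob idea. The table and column names (dbo.UploadedFiles, FileData, FileId) are hypothetical, it assumes the CSV was stored as single-byte/ASCII-compatible bytes, and the actual line/field splitting step still has to be supplied:

-- Cast the blob back to text; the rest is ordinary string handling
DECLARE @csv VARCHAR(MAX);

SELECT @csv = CAST(FileData AS VARCHAR(MAX))   -- FileData is VARBINARY(MAX)
FROM   dbo.UploadedFiles
WHERE  FileId = 1;

-- From here, split @csv on CHAR(13)/CHAR(10) and commas
-- (e.g. with a string-splitting function) and INSERT the parsed rows.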

Order by on an alphanumerical column

Posted: 15 Apr 2013 12:52 PM PDT

I have a column that has strings of the following type:
Miller 10
Allen 20
King 10
....
Jones 100

I am trying to sort the column based on the numerical part of the data.
I tried the following:

SELECT * FROM the_table ORDER BY CAST(RIGHT(data,2) AS UNSIGNED);

But this does not work; it places Jones 100 as the first entry, and so on.

What am I doing wrong?
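RIGHT(data, 2) only looks at the last two characters, so 'Jones 100' is compared on '00'. A hedged sketch that extracts everything after the last space instead (MySQL syntax, matching the query above):

SELECT *
FROM the_table
ORDER BY CAST(SUBSTRING_INDEX(data, ' ', -1) AS UNSIGNED);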

MySQL database drop insanely slow

Posted: 15 Apr 2013 03:04 PM PDT

I just installed MySQL 5.0.45-log Source distribution on my Debian 6.0.6 server.

I installed it under my user home directory like I'm used to doing.

But this time the queries are extremely slow to run.

Running a create table or a database drop takes ages. I can literally watch tv in the meantime.

So I did a profiling of the database drop statement.

mysql> SHOW PROFILES;
+----------+--------------+------------------------------+
| Query_ID | Duration     | Query                        |
+----------+--------------+------------------------------+
|        1 | 369.54719400 | drop database db_madeintouch |
|        2 |   0.00004600 | SELECT DATABASE()            |
+----------+--------------+------------------------------+
2 rows in set (0.00 sec)

mysql> SHOW PROFILE FOR QUERY 1;
+----------------------+-----------+
| Status               | Duration  |
+----------------------+-----------+
| (initialization)     | 0.000001  |
| checking permissions | 369.54705 |
| Opening table        | 0.000103  |
| System lock          | 0.000003  |
| Table lock           | 0.000018  |
| query end            | 0.000004  |
| freeing items        | 0.000004  |
| logging slow query   | 0.000002  |
+----------------------+-----------+
8 rows in set (0.00 sec)

We can see that the "checking permissions" step takes 369 seconds.

I also did a show status of the InnoDB engine.

mysql> show engine innodb status\G
*************************** 1. row ***************************
Status:
=====================================
130415 23:11:27 INNODB MONITOR OUTPUT
=====================================
Per second averages calculated from the last 9 seconds
----------
SEMAPHORES
----------
OS WAIT ARRAY INFO: reservation count 781, signal count 781
Mutex spin waits 0, rounds 8629, OS waits 231
RW-shared spins 379, OS waits 190; RW-excl spins 380, OS waits 342
------------
TRANSACTIONS
------------
Trx id counter 0 7599
Purge done for trx's n:o < 0 7430 undo n:o < 0 0
History list length 3
Total number of lock structs in row lock hash table 0
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 0 0, not started, process no 14133, OS thread id 140617364518656
MySQL thread id 16, query id 1305 localhost stephane checking permissions
drop database db_madeintouch
---TRANSACTION 0 0, not started, process no 14133, OS thread id 140617364383488
MySQL thread id 13, query id 1307 localhost stephane
show engine innodb status
---TRANSACTION 0 7597, COMMITTED IN MEMORY, process no 14133, OS thread id 140617364518656 dropping table
COMMITTING , undo log entries 16
MySQL thread id 16, query id 1305 localhost stephane checking permissions
drop database db_madeintouch
--------
FILE I/O
--------
I/O thread 0 state: waiting for i/o request (insert buffer thread)
I/O thread 1 state: waiting for i/o request (log thread)
I/O thread 2 state: waiting for i/o request (read thread)
I/O thread 3 state: waiting for i/o request (write thread)
Pending normal aio reads: 0, aio writes: 0,
 ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0
Pending flushes (fsync) log: 1; buffer pool: 0
174 OS file reads, 3781 OS file writes, 2099 OS fsyncs
0.00 reads/s, 0 avg bytes/read, 0.00 writes/s, 0.00 fsyncs/s
-------------------------------------
INSERT BUFFER AND ADAPTIVE HASH INDEX
-------------------------------------
Ibuf: size 1, free list len 0, seg size 2,
0 inserts, 0 merged recs, 0 merges
Hash table size 17393, used cells 122, node heap has 1 buffer(s)
0.00 hash searches/s, 0.00 non-hash searches/s
---
LOG
---
Log sequence number 0 7801057
Log flushed up to   0 7798962
Last checkpoint at  0 7798962
1 pending log writes, 0 pending chkp writes
1535 log i/o's done, 0.00 log i/o's/second
----------------------
BUFFER POOL AND MEMORY
----------------------
Total memory allocated 22136914; in additional pool allocated 1048576
Buffer pool size   512
Free buffers       2
Database pages     509
Modified db pages  18
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages read 183, created 1444, written 6980
0.00 reads/s, 0.00 creates/s, 0.00 writes/s
No buffer pool page gets since the last printout
--------------
ROW OPERATIONS
--------------
0 queries inside InnoDB, 0 queries in queue
1 read views open inside InnoDB
Main thread process no. 14133, id 140617334142720, state: waiting for server activity
Number of rows inserted 0, updated 0, deleted 0, read 0
0.00 inserts/s, 0.00 updates/s, 0.00 deletes/s, 0.00 reads/s
----------------------------
END OF INNODB MONITOR OUTPUT
============================

1 row in set (0.00 sec)

And here is my environment.

mysql> SHOW VARIABLES;

| Variable_name | Value |
| auto_increment_increment | 1 |
| auto_increment_offset | 1 |
| automatic_sp_privileges | ON |
| back_log | 50 |
| basedir | /home/stephane/programs/mysql-5.0.45/install/ |
| binlog_cache_size | 32768 |
| bulk_insert_buffer_size | 8388608 |
| character_set_client | latin1 |
| character_set_connection | latin1 |
| character_set_database | latin1 |
| character_set_filesystem | binary |
| character_set_results | latin1 |
| character_set_server | latin1 |
| character_set_system | utf8 |
| character_sets_dir | /home/stephane/programs/mysql-5.0.45/install/share/mysql/charsets/ |
| collation_connection | latin1_swedish_ci |
| collation_database | latin1_swedish_ci |
| collation_server | latin1_swedish_ci |
| completion_type | 0 |
| concurrent_insert | 1 |
| connect_timeout | 5 |
| datadir | /home/stephane/programs/mysql/install/data/ |
| date_format | %Y-%m-%d |
| datetime_format | %Y-%m-%d %H:%i:%s |
| default_week_format | 0 |
| delay_key_write | ON |
| delayed_insert_limit | 100 |
| delayed_insert_timeout | 300 |
| delayed_queue_size | 1000 |
| div_precision_increment | 4 |
| engine_condition_pushdown | OFF |
| expire_logs_days | 0 |
| flush | OFF |
| flush_time | 0 |
| ft_boolean_syntax | + -><()~*:""&| |
| ft_max_word_len | 84 |
| ft_min_word_len | 4 |
| ft_query_expansion_limit | 20 |
| ft_stopword_file | (built-in) |
| group_concat_max_len | 1024 |
| have_archive | NO |
| have_bdb | NO |
| have_blackhole_engine | NO |
| have_compress | YES |
| have_crypt | YES |
| have_csv | NO |
| have_dynamic_loading | YES |
| have_example_engine | NO |
| have_federated_engine | NO |
| have_geometry | YES |
| have_innodb | YES |
| have_isam | NO |
| have_merge_engine | YES |
| have_ndbcluster | NO |
| have_openssl | NO |
| have_ssl | NO |
| have_query_cache | YES |
| have_raid | NO |
| have_rtree_keys | YES |
| have_symlink | YES |
| hostname | server1 |
| init_connect | |
| init_file | |
| init_slave | |
| innodb_additional_mem_pool_size | 1048576 |
| innodb_autoextend_increment | 8 |
| innodb_buffer_pool_awe_mem_mb | 0 |
| innodb_buffer_pool_size | 8388608 |
| innodb_checksums | ON |
| innodb_commit_concurrency | 0 |
| innodb_concurrency_tickets | 500 |
| innodb_data_file_path | ibdata1:10M:autoextend |
| innodb_data_home_dir | |
| innodb_doublewrite | ON |
| innodb_fast_shutdown | 1 |
| innodb_file_io_threads | 4 |
| innodb_file_per_table | OFF |
| innodb_flush_log_at_trx_commit | 1 |
| innodb_flush_method | |
| innodb_force_recovery | 0 |
| innodb_lock_wait_timeout | 50 |
| innodb_locks_unsafe_for_binlog | OFF |
| innodb_log_arch_dir | |
| innodb_log_archive | OFF |
| innodb_log_buffer_size | 1048576 |
| innodb_log_file_size | 5242880 |
| innodb_log_files_in_group | 2 |
| innodb_log_group_home_dir | ./ |
| innodb_max_dirty_pages_pct | 90 |
| innodb_max_purge_lag | 0 |
| innodb_mirrored_log_groups | 1 |
| innodb_open_files | 300 |
| innodb_rollback_on_timeout | OFF |
| innodb_support_xa | ON |
| innodb_sync_spin_loops | 20 |
| innodb_table_locks | ON |
| innodb_thread_concurrency | 8 |
| innodb_thread_sleep_delay | 10000 |
| interactive_timeout | 28800 |
| join_buffer_size | 131072 |
| key_buffer_size | 16384 |
| key_cache_age_threshold | 300 |
| key_cache_block_size | 1024 |
| key_cache_division_limit | 100 |
| language | /home/stephane/programs/mysql-5.0.45/install/share/mysql/english/ |
| large_files_support | ON |
| large_page_size | 0 |
| large_pages | OFF |
| lc_time_names | en_US |
| license | GPL |
| local_infile | ON |
| locked_in_memory | OFF |
| log | ON |
| log_bin | OFF |
| log_bin_trust_function_creators | OFF |
| log_error | /home/stephane/programs/mysql/install/mysql.error.log |
| log_queries_not_using_indexes | OFF |
| log_slave_updates | OFF |
| log_slow_queries | ON |
| log_warnings | 1 |
| long_query_time | 10 |
| low_priority_updates | OFF |
| lower_case_file_system | OFF |
| lower_case_table_names | 0 |
| max_allowed_packet | 1047552 |
| max_binlog_cache_size | 18446744073709551615 |
| max_binlog_size | 1073741824 |
| max_connect_errors | 10 |
| max_connections | 100 |
| max_delayed_threads | 20 |
| max_error_count | 64 |
| max_heap_table_size | 16777216 |
| max_insert_delayed_threads | 20 |
| max_join_size | 18446744073709551615 |
| max_length_for_sort_data | 1024 |
| max_prepared_stmt_count | 16382 |
| max_relay_log_size | 0 |
| max_seeks_for_key | 18446744073709551615 |
| max_sort_length | 1024 |
| max_sp_recursion_depth | 0 |
| max_tmp_tables | 32 |
| max_user_connections | 0 |
| max_write_lock_count | 18446744073709551615 |
| multi_range_count | 256 |
| myisam_data_pointer_size | 6 |
| myisam_max_sort_file_size | 9223372036854775807 |
| myisam_recover_options | OFF |
| myisam_repair_threads | 1 |
| myisam_sort_buffer_size | 8388608 |
| myisam_stats_method | nulls_unequal |
| net_buffer_length | 2048 |
| net_read_timeout | 30 |
| net_retry_count | 10 |
| net_write_timeout | 60 |
| new | OFF |
| old_passwords | OFF |
| open_files_limit | 1024 |
| optimizer_prune_level | 1 |
| optimizer_search_depth | 62 |
| pid_file | /home/stephane/programs/mysql/install/data/server1.pid |
| port | 3306 |
| preload_buffer_size | 32768 |
| profiling | OFF |
| profiling_history_size | 15 |
| protocol_version | 10 |
| query_alloc_block_size | 8192 |
| query_cache_limit | 1048576 |
| query_cache_min_res_unit | 4096 |
| query_cache_size | 0 |
| query_cache_type | ON |
| query_cache_wlock_invalidate | OFF |
| query_prealloc_size | 8192 |
| range_alloc_block_size | 2048 |
| read_buffer_size | 258048 |
| read_only | OFF |
| read_rnd_buffer_size | 258048 |
| relay_log_purge | ON |
| relay_log_space_limit | 0 |
| rpl_recovery_rank | 0 |
| secure_auth | OFF |
| secure_file_priv | |
| server_id | 1 |
| skip_external_locking | ON |
| skip_networking | OFF |
| skip_show_database | OFF |
| slave_compressed_protocol | OFF |
| slave_load_tmpdir | /tmp/ |
| slave_net_timeout | 3600 |
| slave_skip_errors | OFF |
| slave_transaction_retries | 10 |
| slow_launch_time | 2 |
| socket | /tmp/mysql.sock |
| sort_buffer_size | 65528 |
| sql_big_selects | ON |
| sql_mode | |
| sql_notes | ON |
| sql_warnings | OFF |
| ssl_ca | |
| ssl_capath | |
| ssl_cert | |
| ssl_cipher | |
| ssl_key | |
| storage_engine | MyISAM |
| sync_binlog | 0 |
| sync_frm | ON |
| system_time_zone | MSK |
| table_cache | 4 |
| table_lock_wait_timeout | 50 |
| table_type | MyISAM |
| thread_cache_size | 0 |
| thread_stack | 131072 |
| time_format | %H:%i:%s |
| time_zone | SYSTEM |
| timed_mutexes | OFF |
| tmp_table_size | 33554432 |
| tmpdir | /tmp/ |
| transaction_alloc_block_size | 8192 |
| transaction_prealloc_size | 4096 |
| tx_isolation | REPEATABLE-READ |
| updatable_views_with_limit | YES |
| version | 5.0.45-log |
| version_comment | Source distribution |
| version_compile_machine | x86_64 |
| version_compile_os | unknown-linux-gnu |
| wait_timeout | 28800 |
225 rows in set (43.41 sec)

Is there a way to truncate a table that has a foreign key?

Posted: 15 Apr 2013 01:49 PM PDT

Is there a way to truncate a table that has foreign keys? DELETE and reseed can take too long. Is dropping and recreating the keys the only way? If so, is there a tool that does this?
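If this is SQL Server (the mention of reseeding suggests so; an assumption), TRUNCATE TABLE is refused whenever the table is referenced by a foreign key, even a disabled one, so the usual pattern is: drop the constraint, truncate, and re-create it. A sketch with hypothetical table and constraint names:

-- Hypothetical parent/child tables and constraint name
ALTER TABLE dbo.Child DROP CONSTRAINT FK_Child_Parent;

TRUNCATE TABLE dbo.Parent;

ALTER TABLE dbo.Child
  ADD CONSTRAINT FK_Child_Parent
  FOREIGN KEY (ParentId) REFERENCES dbo.Parent (Id);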

How do I list all schemas in PostgreSQL?

Posted: 15 Apr 2013 12:59 PM PDT

When using PostgreSQL v9.1, how do I list all of the schemas using SQL?

I was expecting something along the lines of:

SELECT something FROM pg_blah;  
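Two standard queries that should work on 9.1, one via the portable information schema and one via the system catalog:

SELECT schema_name FROM information_schema.schemata;

SELECT nspname FROM pg_catalog.pg_namespace;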

Multiple like conditions for same column?

Posted: 15 Apr 2013 11:17 AM PDT

How do I write a query for an Oracle database when I have several "like" values for the same column (in my example data, "name")?

My example data for table "address":

name like "John"  name like "Jon%"  name like "Jonathan"  name like "Frank"  

Is there something like IN() combined with LIKE in Oracle, or do I have to write a separate OR condition for each value?
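There is no IN () variant of LIKE in standard SQL, so the usual options are OR-separated LIKEs or, in Oracle, REGEXP_LIKE. A sketch against the example table:

-- OR-separated LIKE conditions
SELECT * FROM address
WHERE  name LIKE 'John'
   OR  name LIKE 'Jon%'
   OR  name LIKE 'Jonathan'
   OR  name LIKE 'Frank';

-- Or a single regular expression (Oracle)
SELECT * FROM address
WHERE  REGEXP_LIKE(name, '^(John|Jon.*|Jonathan|Frank)$');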

How to Return XML Node Ordinal, or delete node based on element value?

Posted: 15 Apr 2013 01:16 PM PDT

I have an XML document similar to this:

<Root>
  <Sub>
    <Record>
      <Guid>aslkjflaksjflkasjfkljsd</Guid>
    </Record>
    <Record>
      <Guid>opqiwuerl;kasdlfkjawop</Guid>
    </Record>
  </Sub>
</Root>

I am replacing an entire <Record> node based on some criteria. The <Record> nodes contain a <Guid> that I can use to identify them (pretend those are valid GUIDs please!).

Knowing the GUID, I am returning that node's XML into a variable for further processing. Downstream, I need to be able to delete that particular node so that I can insert the modified version of the node back into the original document.

Is there a way to determine the ordinal, or another way to use a delete/replace method to remove a node based on an element's value?
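In SQL Server's XML DML you can address the node by its <Guid> value rather than by ordinal, so a delete can look like the following sketch (variable names are illustrative):

DECLARE @doc  XML = '<Root><Sub><Record><Guid>aslkjflaksjflkasjfkljsd</Guid></Record><Record><Guid>opqiwuerl;kasdlfkjawop</Guid></Record></Sub></Root>';
DECLARE @guid VARCHAR(50) = 'aslkjflaksjflkasjfkljsd';

-- Remove the <Record> whose <Guid> matches the variable
SET @doc.modify('delete /Root/Sub/Record[Guid = sql:variable("@guid")]');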

SQL Procedure working when run manually, not running from sql server agent

Posted: 15 Apr 2013 06:31 PM PDT

I have a procedure that runs fine using the EXECUTE command in SSMS; however, putting the same command in a job gives the following error:

line 9, character 9, unexpected end of input    

The code takes a very long XML string in UTF-8 encoding and puts it into a single nvarchar(max) cell. It then puts this string into an XML column in a different table, allowing me to query the individual parts of the XML using the nodes() function. I cannot put the data directly into an nvarchar cell due to encoding differences.

I can't reproduce the string here as it is very very long.

I'm just looking for ideas really as to where it might be going wrong.

Here is what I know so far:

  1. The procedure runs without issue when executed manually

  2. I have checked permission issues, and that doesn't seem to be the problem. The agent runs under my own account and I am a sysadmin on the database

  3. I split the procedure into separate parts to locate exactly where the problem is occurring. Once again the separate procedures run fine when executed manually, but an error occurs when run through SQL Server Agent.

  4. When the query is run separately through SQL Server Agent it gives a slightly different error. This leads me to believe it is an encoding issue. However, I am getting the XML from a web page and I can't change the encoding on the web page.

    line 1, character 38, unable to switch the encoding

I know this is a long shot since you can't replicate the issue but if anyone could give an idea as to where to start looking for an answer, it would be greatly appreciated.

Memcache implementation

Posted: 15 Apr 2013 01:29 PM PDT

I have a WordPress site that uses the WP Super Cache plugin to keep my blog posts and pages as static HTML so that less PHP/MySQL code is executed. However, there's a featured box that can't be cached, so I configured it manually to bypass the cache and stay dynamic (its code is executed on every page load).

So most of the page is served as static HTML, and a specific part (the featured bar box) uses PHP/MySQL.

The code that runs on every page load is some simple <?php $query = mysql_query("SELECT * [... rest of my code here ...]"); ?> to select some data from my database, plus some $fetch calls and <?php echo [...] ?> to print the results on the page. But instead of calling the database every time to get these results, it would be much better to cache them. So I have three options:

  1. Let WP Super Cache cache my entire page. But I can't do that, because inside the box that currently isn't getting cached, the database data can change at any time (normally once per day). So I can't cache this box, and we have to discard option 1.
  2. Activate query_cache in my.cnf. However, this will only cache MySQL, not PHP. :(
  3. Then there is memcache (or APC; I don't know much about the difference). After all my explanations, here's the final question: I'd like to know whether installing memcache on my server is enough to get it working immediately, or whether I have to make changes in my code to adapt it to memcache. I'd also like to know whether it will really help me with both PHP and MySQL for this specific piece of code I have to cache, or whether its result is similar to enabling query_cache. Or is using APC rather than memcache better in my case?

Thanks,
André.

Trying to allocate 3G for buffer pool in InnoDB fails

Posted: 15 Apr 2013 02:06 PM PDT

I'm trying to boost my InnoDB driver on my Windows 7 PC since each page request takes just about 1.7 seconds to load versus 0.002 seconds on my Ubuntu 12.10 workstation.

The setting I'm playing around with is the well-known innodb_buffer_pool_size line, which should increase performance a lot, and it did on my Ubuntu workstation - but my Windows 7 workstation only seems to be able to allocate 1G of RAM, while my system currently has 16 GB.

If I try to exceed 4GB I get [ERROR] innobase_buffer_pool_size can't be over 4GB on 32-bit systems, so I try to allocate 3G, but then my server won't start; the same goes for 2G. 1GB works, but it only contributed about 0.1 s of speed increase - which isn't much.

Is there something more I have to account for on Windows to get better performance, and why can I not allocate more than 1G of RAM? (I get no errors when I try to allocate more than 1GB, the server just won't start)

SHOW TABLE STATUS very slow on InnoDB

Posted: 15 Apr 2013 11:09 AM PDT

Recently we've been migrating from MyISAM to InnoDB, and I understand that MyISAM uses meta information against each table to track information and such - however, the database is about 1.8 GB with 1+ million records across 400 or so tables.

The problem comes when software like phpMyAdmin runs "SHOW TABLE STATUS FROM dbase;", which can take up to 50 seconds to execute. Is there a way to optimise this? (MySQL or MariaDB)
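One server-side setting that is often pointed at for slow SHOW TABLE STATUS on InnoDB (a hedged suggestion, not a guaranteed fix) is innodb_stats_on_metadata, which makes InnoDB re-sample index statistics whenever table metadata is requested:

-- stop statistics from being recalculated on every metadata query
SET GLOBAL innodb_stats_on_metadata = 0;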

Thanks!

Does WITH SCHEMABINDING on a multi-statement TVF improve cardinality estimates?

Posted: 15 Apr 2013 08:59 PM PDT

Based on http://blogs.msdn.com/b/psssql/archive/2010/10/28/query-performance-and-multi-statement-table-valued-functions.aspx and other articles, SQL Server assumes that a multi-statement table-valued function returns one row. This causes the selection of a poor execution plan for the calling statement if it actually returns many rows.

Does adding WITH SCHEMABINDING to the RETURNS clause of the CREATE FUNCTION result in a more correct cardinality estimate for the return value of the function?

If we assume that we are passing a UserId to this function and getting back a table of RecordId values that the user is allowed to access, and that some users are only allowed to see a few records and that some are allowed to see many or even all records, would either the function or the calling statements (or the procedures that include them) benefit from using FORCE RECOMPILE? Does the use of WITH SCHEMABINDING in the function change this answer?

I realize that I could figure this out by experimentation, but I am hoping that someone has already figured out the answer. A pointer to someplace where this is well documented would be helpful.

PostgreSQL newbie - how to create objects in a database

Posted: 15 Apr 2013 08:24 PM PDT

I've just finished installing PostgreSQL and pgadmin3 on Ubuntu. Using pgAdmin, I've been able to create a new user called 'test' and also a new database with 'test' as the owner. Now I'm trying to create tables using SQL statements, but I can't seem to figure out how to do that in pgAdmin. I don't want to create each column individually using the GUI. Is there any way I can run a SQL statement like:

create table test
(
  id serial primary key,
  name varchar(64),
  handler varchar(16),
  desc varchar(255)
);
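pgAdmin has a Query tool window where a statement like this can be pasted and executed; the same SQL can also be run from a shell with psql. A small sketch, assuming the role and database are both named test and the statements are saved in create_tables.sql (note that desc is a reserved word in PostgreSQL, so that column would need quoting or renaming):

psql -U test -d test -f create_tables.sql                          # run a file of SQL statements
psql -U test -d test -c "CREATE TABLE t (id serial PRIMARY KEY);"  # or a single inline statement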

sqlplus command not working

Posted: 15 Apr 2013 08:59 PM PDT

I am trying to install Oracle Application Express on Windows 7, which is why I need SQL*Plus. However, the sqlplus command is not recognized in cmd.

I am new to Oracle. What can I do in order to run SQL*Plus ?
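"Not recognized in cmd" usually just means sqlplus.exe is not on the PATH. A sketch for a cmd session, with a purely illustrative Oracle home path; point it at wherever your client or database home actually is:

rem add the Oracle bin directory to PATH for this session (path is an example)
set PATH=%PATH%;C:\oracle\product\11.2.0\dbhome_1\bin
sqlplus /nolog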

Designing Simple Schema for Disaggregation of Demand Forecast

Posted: 15 Apr 2013 12:02 PM PDT

I am doing a simple database design task as a training exercise where I have to come up with a basic schema design for the following case:

I have a parent-child hierarchy of products (example, Raw Material > Work in Progress > End Product).

  • Orders are placed at each level.
  • Number of orders shall be viewable in weekly buckets for the next 6 months.
  • Demand forecast can be done for each product level.
  • Demand forecast for any week within next 6 months can be done today.
  • Demand forecast is done for weekly buckets, for the next 6 months.

Demand Forecast is usually done at a higher level in the hierarchy (Raw Material or Work in Progress level). It has to be disaggregated to a lower level (End Product).

There are 2 ways in which demand forecast can be disaggregated from a higher level to lower level:

  1. User specifies percentage distribution for end product. Say, there's a forecast of 1000 for Work In Progress.. and user says I want 40% for End Product 1 and 60% for End Product 2 in bucket 10.. Then for 10th week (Sunday to Saturday) from now, forecast value for End Product 1 would be 400 and, for End Product 2 would be 600.
  2. User says, just disaggregate according to orders placed against end products in Bucket 5, and orders in bucket 5 for End Product 1 and 2 are 200 and 800 respectively, then forecast value for EP1 would be ((200/1000) * 100)% and for EP2 would be ((800/1000) * 100)% of forecast for 'Work in Progress'.

Forecast shall be viewable in weekly buckets for the next 6 months and the ideal format should be:

product name | bucket number | week start date | week end date | forecast value | created_on  

PRODUCT_HIERARCHY table could look like this:

id  |   name                |   parent_id
__________________________________________
1   |   raw material        |   (null)
2   |   work in progress    |   1
3   |   end product 1       |   2
4   |   end product 2       |   2

ORDERS table might look like this:

id | prod_id | order_date | delivery_date | delivered_date  

where,

prod_id is a foreign key that references id of the PRODUCT_HIERARCHY table.

How should I store the forecast? What would be a good basic schema for such a requirement?


My idea to select orders for 26 weekly buckets is:

SELECT
    COUNT(*) TOTAL_ORDERS,
    WIDTH_BUCKET(
        delivery_date,
        SYSDATE,
        ADD_MONTHS(sysdate, 6),
        TO_NUMBER( TO_CHAR(SYSDATE,'DD-MON-YYYY') - TO_CHAR(ADD_MONTHS(sysdate, 6),'DD-MON-YYYY') ) / 7
    ) BUCKET_NO
FROM
    orders_table
WHERE
    delivery_date BETWEEN SYSDATE AND ADD_MONTHS(sysdate, 6);

But this will give weekly buckets starting from today irrespective of the day. How can I convert them to Sunday to Saturday weeks in Oracle?
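For the Sunday-to-Saturday requirement, one hedged option is to group on the delivery date truncated to the start of its week; in Oracle, TRUNC(date, 'D') truncates to the first day of the week as defined by the session's NLS territory (Sunday for US settings):

SELECT
    TRUNC(delivery_date, 'D')      AS week_start,   -- Sunday under US NLS settings
    TRUNC(delivery_date, 'D') + 6  AS week_end,     -- the following Saturday
    COUNT(*)                       AS total_orders
FROM orders_table
WHERE delivery_date BETWEEN SYSDATE AND ADD_MONTHS(SYSDATE, 6)
GROUP BY TRUNC(delivery_date, 'D')
ORDER BY week_start;

A bucket number can then be derived from week_start rather than from WIDTH_BUCKET over raw dates.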

Please help designing this database structure.

(will be using Oracle 11g)

Query is slow for certain users

Posted: 15 Apr 2013 05:21 PM PDT

I have a couple of queries called from a C# .NET web application which are always fast for me (I am a local admin on the SQL Server), but for a group of users (a domain group with the required permissions) the query is incredibly slow, to the point that it times out in the application.

What would cause the exact same query to run differently for different users?

More info:

  • The query is inline SQL in the C# code, not a stored procedure
  • The app uses domain authentication and both the user and myself run the query through the app
  • It seems the issue is that different plans were generated and one was cached, which is why the behaviour differed between users. Something is affecting the cache, because now the query is slow for me via the app and fast in SQL Server Management Studio.

DB2 Authentication TRUST_CLNTAUTH

Posted: 15 Apr 2013 04:02 PM PDT

I am using db2inst1 to connect to a database in DB2 which I have installed on my machine. Therefore, the db2inst1 user does not require username/password authentication (it borrows them from the OS). I would like to change that, and force a username/password to be requested every time a connection is initiated.

More specifically, this is what the authentication configuration looks like:

db2 get dbm cfg|grep -i auth

 GSS Plugin for Local Authorization    (LOCAL_GSSPLUGIN) =
 Server Connection Authentication          (SRVCON_AUTH) = NOT_SPECIFIED
 Database manager authentication        (AUTHENTICATION) = CLIENT
 Alternate authentication           (ALTERNATE_AUTH_ENC) = NOT_SPECIFIED
 Cataloging allowed without authority   (CATALOG_NOAUTH) = NO
 Trusted client authentication          (TRUST_CLNTAUTH) = SERVER
 Bypass federated authentication            (FED_NOAUTH) = NO

db2 connect to dbName

   Database Connection Information

     Database server        = DB2/LINUXX8664 10.1.0
     SQL authorization ID   = DB2INST1
     Local database alias   = DBNAME

db2 connect to dbName user db2inst1 using password

   SQL1639N  The database server was unable to perform authentication because
   security-related database manager files on the server do not have the required
   operating system permissions.  SQLSTATE=08001

I have played with some authentication combinations for "AUTHENTICATION" and "TRUST_CLNTAUTH" without much luck.
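For reference, moving authentication to the server side is normally a matter of updating the database manager configuration and restarting the instance; a sketch (the value shown is the commonly cited one, adjust to your security setup):

db2 update dbm cfg using AUTHENTICATION SERVER
db2stop
db2start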

mysql duplicate entry error 1062 when restoring backup

Posted: 15 Apr 2013 08:11 PM PDT

Sorry, I have seen similar threads but I still couldn't find one addressing my issue; plus, I needed some more info on this.

Requirement: To create an exact replica 'db4' of an existing DB 'db3'.

Procedure followed:

  • mysqldump -uuser -ppass db3 > db3.sql (size is 6G)
  • mysql -uuser -ppass db4 < db3.sql (db4 was a newly created blank database)

The 2nd step throws the error:

ERROR 1062 (23000) at line 5524: Duplicate entry '600806' for key 1

I ran the 2nd step again with --force. The restore completed but with 2 additional similar errors:

ERROR 1062 (23000) at line 6309: Duplicate entry '187694' for key 1
ERROR 1062 (23000) at line 6572: Duplicate entry '1567400' for key 1

On completion, when I queried certain tables of the db4 database, I could see that records were missing.

Question:

  1. Does this indicate a corrupted/problematic db3 database?

  2. How to proceed to create a 'consistent/working' replica (db4) of db3?

  3. If (2) fails, how to possibly troubleshoot and find the reason behind why it occurs?

Thanks,

Upgrade production database during software development lifecycle

Posted: 15 Apr 2013 02:57 PM PDT

Background

Using Oracle, there are a number of tools that help with migrating and applying development changes into the production environment (such as Embarcadero Change Manager). These tools can be configured to perform any database DDL upgrade with little to no human intervention.

Problem

I have development and production servers running PostgreSQL 9.x. After the initial deployment of the database DDL to the production server, I will continue to make changes to the development database. These changes will include bug fixes to stored procedures, changes to tables, additional sequences, new views, more types, more tables, and so forth.

Question

What are the steps to upgrade/migrate the DDL for a production PostgreSQL database application in an automatic fashion (or nearly automatically) such that it has the new changes made in development?

Thank you!

Setting up DRBD on an active MySQL server

Posted: 15 Apr 2013 07:02 PM PDT

When it comes to setting up DRBD and MySQL, is the following possible?

  • Set up DRBD on an active MySQL server
  • Set up DRBD with no downtime allowed

MySQL optimization - year column grouping - using temporary table, filesort

Posted: 15 Apr 2013 02:02 PM PDT

I have a transactions table which has 600,000 records; I need to list the counts for the dashboard on a financial-year basis. The table uses the MyISAM engine. I tried adding an index on the transaction date (tran_date). Even though the query uses the index, it creates a temporary table, which takes more time because of the temporary table and the filesort. Is there any way to optimize the query to improve the query time?

SELECT COUNT( * ) AS cnt,
  CASE WHEN MONTH( tran_date ) >= 3
    THEN concat( YEAR( tran_date ), '-', YEAR( tran_date ) + 1 )
    ELSE concat( YEAR( tran_date ) - 1, '-', YEAR( tran_date ) )
  END AS financial_year
FROM `transactions1`
WHERE tran_date >= '2010-06-01'
GROUP BY financial_year

Showing rows 0 - 4 (5 total, Query took 1.2095 sec)
id  select_type  table          type   possible_keys      key        key_len  ref   rows    Extra
1   SIMPLE       transactions1  range  PRIMARY,tran_date  tran_date  8        NULL  346485  Using where; Using index; Using temporary; Using filesort
Keyname       Type   Unique  Packed  Field          Cardinality  Collation
PRIMARY       BTREE  Yes     No      tran_date      205720       A
                                     tran_ID        617162       A
coupon_No     BTREE  No      No      coupon_No      617162       A
account_type  BTREE  No      No      account_type   3            A
prodCode      BTREE  No      No      prodCode       430          A
                                     tran_date      308581       A
tran_date     BTREE  No      No      tran_date      205720       A
cust_ID       BTREE  No      No      cust_ID        3265         A
                                     tran_date      308581       A
                                     account_type   308581       A
                                     points_earned  617162       A

Update :

I tried adding partitioning, which was not much more helpful than the non-partitioned table. Would replication help in this case for reading from this table? There will be more grouping based on dates (using the date functions) when reading the data.

Edit:

I altered the query and reduced the query execution time. The query I used is,

SELECT SUM( count )
FROM (
  SELECT COUNT( * ) AS count,
    CASE WHEN MONTH( tran_date ) >= 3
      THEN concat( YEAR( tran_date ), '-', YEAR( tran_date ) + 1 )
      ELSE concat( YEAR( tran_date ) - 1, '-', YEAR( tran_date ) )
    END AS format_date
  FROM transactions1
  GROUP BY tran_date
) AS s
GROUP BY format_date

Showing rows 0 - 4 (5 total, Query took 0.5636 sec)
id  select_type  table          type   possible_keys  key        key_len  ref   rows    Extra
1   PRIMARY      <derived2>     ALL    NULL           NULL       NULL     NULL  229676  Using temporary; Using filesort
2   DERIVED      transactions1  index  NULL           tran_date  8        NULL  617162  Using index

But when using

SELECT COUNT( * ) AS count,
  CASE WHEN MONTH( tran_date ) >= 3
    THEN concat( YEAR( tran_date ), '-', YEAR( tran_date ) + 1 )
    ELSE concat( YEAR( tran_date ) - 1, '-', YEAR( tran_date ) )
  END AS format_date
FROM transactions1
GROUP BY tran_date

Showing rows 0 - 29 (229,676 total, Query took 0.0006 sec)

takes less time without using the SUM(count) over the derived table. Is there any other way to get the sum without using the subquery in MySQL, or can the subquery be optimized to use the index?

MySQL join for multiple child records in a single joined row

Posted: 15 Apr 2013 03:02 PM PDT

I have one master table (teacher) structured like

teacherId   Name        Class
1           Praveen     10
2           John        9

and a child table (student) structured like

studentId   teacherId   Name
1           1           David
2           1           Creg
3           2           Mike
4           2           Steve
5           2           Jim

How can I get a result set like

teacherId   teacher     studentId_1   student_1   studentId_2   student_2   studentId_3   student_3
1           Praveen     1             David       2             Creg        null          null
2           John        3             Mike        4             Steve       5             Jim
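A hedged sketch of one way to do this in MySQL, assuming at most three students per teacher as in the sample data: number the students within each teacher with a correlated count, then pivot with conditional aggregation.

SELECT t.teacherId,
       t.Name AS teacher,
       MAX(CASE WHEN x.rn = 1 THEN x.studentId END) AS studentId_1,
       MAX(CASE WHEN x.rn = 1 THEN x.Name      END) AS student_1,
       MAX(CASE WHEN x.rn = 2 THEN x.studentId END) AS studentId_2,
       MAX(CASE WHEN x.rn = 2 THEN x.Name      END) AS student_2,
       MAX(CASE WHEN x.rn = 3 THEN x.studentId END) AS studentId_3,
       MAX(CASE WHEN x.rn = 3 THEN x.Name      END) AS student_3
FROM teacher t
JOIN (
    SELECT s.*,
           (SELECT COUNT(*)
              FROM student s2
             WHERE s2.teacherId = s.teacherId
               AND s2.studentId <= s.studentId) AS rn   -- per-teacher row number
      FROM student s
) x ON x.teacherId = t.teacherId
GROUP BY t.teacherId, t.Name;

If a delimited list per teacher is acceptable, GROUP_CONCAT(Name) with a simple GROUP BY teacherId is much shorter.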

Thanks in advance..

Regards, Praveen

Add a new Column and define its position in a table

Posted: 15 Apr 2013 03:59 PM PDT

I have TableA with the following columns:

TableA
--
Name
Tel
Email
Address

I want to add a new column (mobile) in between Tel & Email:

TableA
--
Name
Tel
Mobile
Email
Address

If I use

ALTER TABLE TableA  ADD COLUMN Mobile INT NOT NULL  

the mobile column is added to the end of the table.

Is there a way to achieve this without dropping the table and moving data to a new table?
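If this is MySQL (an assumption based on the ADD COLUMN syntax above), the position can be chosen directly with the AFTER clause:

ALTER TABLE TableA
  ADD COLUMN Mobile INT NOT NULL AFTER Tel;

Most other engines (SQL Server, Oracle) have no equivalent clause; there the column order is cosmetic and only changes with a table rebuild.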
