Tuesday, August 20, 2013

[how to] How to use the check_constraints in postgresql?

How to use the check_constraints in postgresql?

Posted: 20 Aug 2013 08:28 PM PDT

I need to check the constraints of a table in the database, but I don't understand how to check them from inside a Python program.
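For reference, a minimal sketch: the check constraints can be read from the standard information_schema views with a plain query, which any Python driver (psycopg2, for instance) can execute; 'my_table' is a placeholder name:

SELECT tc.constraint_name,
       cc.check_clause
FROM   information_schema.table_constraints tc
JOIN   information_schema.check_constraints cc
       ON  cc.constraint_name   = tc.constraint_name
       AND cc.constraint_schema = tc.constraint_schema
WHERE  tc.constraint_type = 'CHECK'
  AND  tc.table_name = 'my_table';   -- placeholder table name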

Is data encrypted by ENCRYPTBYPASSPHRASE() protected by the service master key?

Posted: 20 Aug 2013 06:24 PM PDT

In a recent question, What does the Service Master Key protect?, it was suggested that the service master key protects, among other things, data encrypted by ENCRYPTBYPASSPHRASE(). Is this true?
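For reference, a minimal T-SQL round trip with the function in question; the same passphrase must be supplied for both calls:

-- Encrypt with a passphrase-derived key, then decrypt with the same passphrase
DECLARE @cipher varbinary(8000) = ENCRYPTBYPASSPHRASE('my passphrase', N'secret data');
SELECT CONVERT(nvarchar(4000), DECRYPTBYPASSPHRASE('my passphrase', @cipher));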

What am I doing wrong with pg_hba.conf?

Posted: 20 Aug 2013 07:13 PM PDT

I'm trying to make it so that local connections to the database don't require a login. So I added the line local all all trust to the pg_hba.conf file, but then when I load pgAdmin and try to connect to the server it just says FATAL: could not load pg_hba.conf. What am I doing wrong?
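For reference, pg_hba.conf lines are whitespace-separated columns, and local lines take no ADDRESS field; a FATAL "could not load pg_hba.conf" usually points at a malformed line somewhere in the file rather than at the policy itself. A minimal sketch of the intended line:

# TYPE  DATABASE  USER  METHOD
local   all       all   trust

After saving, reload the configuration (for example with pg_ctl reload, or SELECT pg_reload_conf(); from an existing session) and check the server log for the exact line being rejected. Note also that pgAdmin usually connects over TCP, which is matched by host lines, not local ones.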

Create two columns from one, based on two different WHERE clauses

Posted: 20 Aug 2013 06:37 PM PDT

SELECT TOP 1000 [Value]
FROM [OnlineQnres].[dbo].[tmp_DataSets]
WHERE [VariableID] in ('1')
UNION ALL
SELECT TOP 1000 [Value]
FROM [OnlineQnres].[dbo].[tmp_oDataSets]
WHERE [VariableID] in ('4')

The above query produces 2000 rows: 1000 with Value for VariableID = 1, and 1000 with Value for VariableID = 4. What I want is 1000 rows, with one column (val1) containing the values for VariableID = 1, and the other column (val2) containing the values for VariableID = 4.
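One hedged sketch: number the rows on each side and join the two sets positionally. This assumes the two result sets genuinely pair up row-for-row; if a shared key column exists, joining on it would be more robust:

;WITH v1 AS (
    SELECT TOP 1000 [Value],
           ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS rn
    FROM [OnlineQnres].[dbo].[tmp_DataSets]
    WHERE [VariableID] = '1'
), v4 AS (
    SELECT TOP 1000 [Value],
           ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS rn
    FROM [OnlineQnres].[dbo].[tmp_oDataSets]
    WHERE [VariableID] = '4'
)
SELECT v1.[Value] AS val1, v4.[Value] AS val2
FROM v1
JOIN v4 ON v4.rn = v1.rn;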

Queryplan changes depending on filter values

Posted: 20 Aug 2013 08:38 PM PDT

I created a clustered index on a table expecting it to make queries with ranges perform better, but different values in the WHERE clause can produce different query plans: one uses the clustered index and one does not.

My question is: What can I do to make the DBMS use the better query plan? Or better yet, should I change my schema to something better?

Details:

  • I'm using Amazon's RDS (Server version: 5.5.31-log)
  • I executed OPTIMIZE TABLE on each table, expecting it to "rebuild" the clustered index and reset the statistics. Sometimes it changes nothing, sometimes the DBMS picks a worse query plan, and sometimes queries get faster because the clustered index is used.
  • EXPLAIN EXTENDED followed by SHOW WARNINGS did not produce anything interesting/useful.
  • I'm aware of index hinting (see the sketch after this list). I tested it and the query plan used the primary index, but I don't know if it always works. Also, I'm using Django, and Django's ORM does not support index hinting, so a solution that does not require it would be nice.
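For reference, a sketch of what such a hint would look like, assuming the primary key that the fast plan's range scan used; 12345 is a placeholder zip code:

-- Force the range scan over the zip table's clustered (primary) key
SELECT *
FROM api_locationzip FORCE INDEX (PRIMARY)
WHERE zip_start <= 12345
  AND zip_end   >= 12345;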

queries:

-- DDL
create table api_route (
    id int(11) not null auto_increment primary key,
    origin_id int(11) not null,
    destination_id int(11) not null,
    group_id int(11) not null,
    foreign key (origin_id) references api_area (id),
    foreign key (destination_id) references api_area (id),
    foreign key (group_id) references api_group (id)
) engine=innodb, collate=utf8;

create table api_area (
    id int(11) not null auto_increment primary key,
    name varchar(50) not null
) engine=innodb, collate=utf8;

create table api_location (
    id int(11) not null auto_increment primary key,
    area_id int(11),
    foreign key (area_id) references api_area (id)
) engine=innodb, collate=utf8;

create table api_locationzip (
    location_ptr_id int(11) not null,
    zip_start int(11) not null,
    zip_end int(11) not null,
    foreign key locationzip_to_location (location_ptr_id) references api_location (id)
) engine=innodb, collate=utf8;

create table api_locationkey (
    location_ptr_id int(11) not null,
    key varchar(10) not null,
    foreign key locationkey_to_location (location_ptr_id) references api_location (id)
) engine=innodb, collate=utf8;

-- query
select * from
    api_route,
    api_area origin,
    api_area destination,
    api_location location_origin,
    api_location location_destination,
    api_locationzip origin_zip,
    api_locationzip destination_zip
where
    api_route.group_id IN (1,2,3,...) and
    -- filter origin by zip code
    api_route.origin_id = origin.id and
    origin.id = location_origin.area_id and
    location_origin.id = origin_zip.location_ptr_id and
    origin_zip.zip_start <= <zipcode_origin> and
    origin_zip.zip_end >= <zipcode_origin> and
    -- filter destination by zip code
    api_route.destination_id = destination.id and
    destination.id = location_destination.area_id and
    location_destination.id = destination_zip.location_ptr_id and
    destination_zip.zip_start <= <zipcode_destination> and
    destination_zip.zip_end >= <zipcode_destination>
limit 100

An area has many locations; every location has either a zip or a key.

Here is the EXPLAIN of a slow query (~1.6s):

*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: destination
         type: index
possible_keys: PRIMARY
          key: api_area_group_id_599f0627e68b9613_uniq
      key_len: 156
          ref: NULL
         rows: 3794
        Extra: Using index
*************************** 2. row ***************************
           id: 1
  select_type: SIMPLE
        table: api_route
         type: ref
possible_keys: api_route_0261d0a2,api_route_8de262d6
          key: api_route_8de262d6
      key_len: 4
          ref: master.T6.id
         rows: 9
        Extra:
*************************** 3. row ***************************
           id: 1
  select_type: SIMPLE
        table: origin
         type: eq_ref
possible_keys: PRIMARY
          key: PRIMARY
      key_len: 4
          ref: master.api_route.origin_id
         rows: 1
        Extra:
*************************** 4. row ***************************
           id: 1
  select_type: SIMPLE
        table: location_origin
         type: ref
possible_keys: PRIMARY,api_location_a4563695
          key: api_location_a4563695
      key_len: 4
          ref: master.origin.id
         rows: 39
        Extra: Using where; Using index
*************************** 5. row ***************************
           id: 1
  select_type: SIMPLE
        table: origin_zip
         type: ref
possible_keys: PRIMARY,locationzip_to_location
          key: locationzip_to_location
      key_len: 4
          ref: master.location_origin.id
         rows: 1
        Extra: Using where; Using index
*************************** 6. row ***************************
           id: 1
  select_type: SIMPLE
        table: location_destination
         type: ref
possible_keys: PRIMARY,api_location_a4563695
          key: api_location_a4563695
      key_len: 4
          ref: master.destination.id
         rows: 39
        Extra: Using index
*************************** 7. row ***************************
           id: 1
  select_type: SIMPLE
        table: destination_zip
         type: ref
possible_keys: PRIMARY,locationzip_to_location
          key: locationzip_to_location
      key_len: 4
          ref: master.location_destination.id
         rows: 1
        Extra: Using where; Using index
7 rows in set (0.00 sec)

Here is the EXPLAIN of a fast query (~100ms):

*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: destination_zip
         type: range
possible_keys: PRIMARY,locationzip_to_location
          key: PRIMARY
      key_len: 4
          ref: NULL
         rows: 119268
        Extra: Using where; Using index
*************************** 2. row ***************************
           id: 1
  select_type: SIMPLE
        table: location_destination
         type: eq_ref
possible_keys: PRIMARY,api_location_a4563695
          key: PRIMARY
      key_len: 4
          ref: master.destination_zip.location_ptr_id
         rows: 1
        Extra:
*************************** 3. row ***************************
           id: 1
  select_type: SIMPLE
        table: api_route
         type: ref
possible_keys: api_route_0261d0a2,api_route_8de262d6
          key: api_route_8de262d6
      key_len: 4
          ref: master.location_destination.area_id
         rows: 9
        Extra:
*************************** 4. row ***************************
           id: 1
  select_type: SIMPLE
        table: origin
         type: eq_ref
possible_keys: PRIMARY
          key: PRIMARY
      key_len: 4
          ref: master.api_route.origin_id
         rows: 1
        Extra:
*************************** 5. row ***************************
           id: 1
  select_type: SIMPLE
        table: location_origin
         type: ref
possible_keys: PRIMARY,api_location_a4563695
          key: api_location_a4563695
      key_len: 4
          ref: master.origin.id
         rows: 39
        Extra: Using where; Using index
*************************** 6. row ***************************
           id: 1
  select_type: SIMPLE
        table: origin_zip
         type: ref
possible_keys: PRIMARY,locationzip_to_location
          key: locationzip_to_location
      key_len: 4
          ref: master.location_origin.id
         rows: 1
        Extra: Using where; Using index
*************************** 7. row ***************************
           id: 1
  select_type: SIMPLE
        table: destination
         type: eq_ref
possible_keys: PRIMARY
          key: PRIMARY
      key_len: 4
          ref: master.location_destination.area_id
         rows: 1
        Extra:
7 rows in set (0.00 sec)

edit: Added create table and full query

Guaranteeing a FK relationship through multiple tables

Posted: 20 Aug 2013 06:26 PM PDT

I'm using MySQL / InnoDB, and using foreign keys to preserve relationships across tables. In the following scenario (depicted below), a 'manager' is associated with a 'recordLabel', and an 'artist' is also associated with a 'recordLabel'. When an 'album' is created, it is associated with an 'artist' and a 'manager', but both the artist and the manager need to be associated with the same recordLabel. How can I guarantee that relationship with the current table setup, or do I need to redesign the tables?

[schema diagram: recordLabel, manager, artist, and album tables]
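The usual fix is to carry the recordLabel id redundantly on album and reference composite keys, so the database itself enforces that both parents belong to the same label. A sketch; table and column names are guesses from the description above:

-- Hypothetical names; the point is the composite keys.
ALTER TABLE artist  ADD UNIQUE KEY uq_artist_label  (id, recordLabel_id);
ALTER TABLE manager ADD UNIQUE KEY uq_manager_label (id, recordLabel_id);

ALTER TABLE album
  ADD COLUMN recordLabel_id INT NOT NULL,
  ADD FOREIGN KEY (artist_id,  recordLabel_id) REFERENCES artist  (id, recordLabel_id),
  ADD FOREIGN KEY (manager_id, recordLabel_id) REFERENCES manager (id, recordLabel_id);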

How can I look up table relationships in SQL if I don't have physical/logical models?

Posted: 20 Aug 2013 12:39 PM PDT

What is the best way to look up/learn table relationships in a database that I'm not familiar with? I don't have any documentation on that DB available to me.

Thank you
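As a starting point, on MySQL the foreign-key metadata can be read from the catalog (other engines expose equivalents, e.g. sys.foreign_keys on SQL Server); 'your_db' is a placeholder:

SELECT table_name, column_name,
       referenced_table_name, referenced_column_name
FROM   information_schema.KEY_COLUMN_USAGE
WHERE  table_schema = 'your_db'
  AND  referenced_table_name IS NOT NULL;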

SQL 2012 Availability Group issue

Posted: 20 Aug 2013 01:10 PM PDT

I need some assistance and some bandages for shooting myself in the foot. This is a DEV environment, but I need to bring it up.

I am running SQL Server 2012 Enterprise on Windows 2008 R2. Somehow our AG listener group disappeared. I tried to recreate the AG listener and it gave an error about the computer account not having permissions. The genius inside of me decided to delete the AG group from Windows Cluster Administrator, and now I cannot recreate the AG group. (I should have done it via SQL instead of Windows.)

I keep on getting the following error :

TITLE: Microsoft SQL Server Management Studio

Creating availability group resulted in an error. (Microsoft.SqlServer.Management.HadrTasks)

ADDITIONAL INFORMATION:

Create failed for Availability Group 'SQLAG_D'. (Microsoft.SqlServer.Smo)

An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)

The availability group 'SQLAG_D' already exists. This error could be caused by a previous failed CREATE AVAILABILITY GROUP or DROP AVAILABILITY GROUP operation. If the availability group name you specified is correct, try dropping the availability group and then retry CREATE AVAILABILITY GROUP operation.

Failed to create availability group 'SQLAG_D'. The operation encountered SQL Server error 41042 and has been rolled back. Check the SQL Server error log for more details. When the cause of the error has been resolved, retry CREATE AVAILABILITY GROUP command. (Microsoft SQL Server, Error: 41042)

Does anyone have any suggestions?
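The error text itself points at the usual fix: leftover availability group metadata from the earlier failed attempts. A sketch of the cleanup, assuming the remnant definition still exists on the instance:

-- On the instance that still holds the remnant definition
DROP AVAILABILITY GROUP [SQLAG_D];
-- Then re-run the original CREATE AVAILABILITY GROUP script or wizard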

What does the Service Master Key protect?

Posted: 20 Aug 2013 07:34 PM PDT

I am looking for a complete list of items that the Service Master Key is used to protect in SQL Server 2008.

Recover encrypted Oracle database

Posted: 20 Aug 2013 02:57 PM PDT

A client fired their DBA. He used OWM to add a wallet to their Oracle database and secure a few necessary columns of tables that they need. I can see the wallet using:

orapki wallet display -wallet /opt/oracle/home/owm/root  

I do not know the password. The ex-employee will not divulge the password. With root access to the system, is it possible to remove the wallet or change the wallet's password?

NOTE: Not part of this problem, but for reference... Oracle Support told the client they are out of luck. I told them they are out of luck. They want a third (and fourth (and fifth)) opinion.

Help with Oracle Pipelined function

Posted: 20 Aug 2013 02:16 PM PDT

I have a contract with tank numbers separated by ';', and I've been given this example to pivot them down into rows (which works):

SELECT oks_c.contract_number,
       oks_l.line_id,
       oks_l.tank_numbers,
       column_value AS TANK_NUMBER
FROM oks_contract_header oks_c
INNER JOIN oks_contract_lines oks_l
    ON oks_l.contract_number = oks_c.contract_number AND
       oks_l.item_name LIKE '%.55201'
   , table(str2tbl(oks_l.tank_numbers,';')) acbb
ORDER BY oks_c.contract_number,
         oks_l.line_id,
         TANK_NUMBER

Here's the pipelined function:

CREATE OR REPLACE function DBRAJAH.str2tbl( p_str in varchar2, p_delim in varchar2 default ',' )
return str2tblType
PIPELINED
as
    l_str   long default p_str || p_delim;
    l_n     number;
begin
    loop
        l_n := instr( l_str, p_delim );
        exit when (nvl(l_n,0) = 0);
        pipe row( ltrim(rtrim(substr(l_str,1,l_n-1))) );
        l_str := substr( l_str, l_n+1 );
    end loop;
    return;
end;

But I'm trying to understand why it works, particularly why the pipelined function's results are not cross-joined with every row.

I'm familiar with outer apply and table-valued functions in SQL Server, but this seems to be quite different - or is this really an OUTER APPLY?
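For what it's worth, the function can be exercised on its own, and because its argument references oks_l earlier in the FROM list, Oracle evaluates it once per oks_l row — a left-correlated (lateral) join, effectively the same shape as CROSS APPLY rather than a cross join:

SELECT column_value
FROM   table(str2tbl('A1;B2;C3', ';'));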

EF Code First uses nvarchar(max) for all strings. Will this hurt query performance?

Posted: 20 Aug 2013 12:46 PM PDT

I have some databases created using Entity Framework Code First; the apps are working and in general I'm pretty happy with what Code First lets me do. I am a programmer first, and a DBA second, by necessity. I am reading about DataAttributes to further describe in C# what I want the database to do, and my question is: what penalty will I be eating by having these nvarchar(max) strings in my table (see example below)?

There are several columns in this particular table; in C# they are defined as such:

[Key]
[DatabaseGeneratedAttribute(DatabaseGeneratedOption.Identity)]
public int ID { get; set; }
public string Name { get; set; }
public string Message { get; set; }
public string Source { get; set; }
public DateTime Generated { get; set; }
public DateTime Written { get; set; }

I expect to query and/or sort based on Name, Source, Generated, and Written. I expect Name and Source to be in the 0-50 character length, occasionally up to 150. I expect this table to start pretty small (<100k rows), but grow significantly over time (>1m rows). Obviously message could be small or large, and will probably not be queried against.

What I want to know is: is there a performance hit for my Name and Source columns being defined as nvarchar(max) when I never expect them to be larger than 150 characters?
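One concrete cost is indexing: nvarchar(max) columns cannot be key columns of an index, so filters and sorts on Name and Source can't be index-supported until the columns are capped. A sketch with a hypothetical table name matching the model above:

-- Hypothetical table name; cap the columns, then index them
ALTER TABLE dbo.LogEntries ALTER COLUMN [Name]   nvarchar(150) NULL;
ALTER TABLE dbo.LogEntries ALTER COLUMN [Source] nvarchar(150) NULL;
CREATE INDEX IX_LogEntries_Name_Source ON dbo.LogEntries ([Name], [Source]);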

Advice needed for Querying DateTimeOffset data by end users

Posted: 20 Aug 2013 08:30 PM PDT

We are building an Operational Data Store (ODS). The data is coming from our production databases which are located in the United States and Asia. The data in our production databases store date/time values as local time (no UTC or offsets). When we bring the data into our ODS, we're storing all date/time values as DateTimeOffset values.

When our users query the ODS data, we don't want them to have to think about including the appropriate offset value in their queries involving date/time values. We want them to be able to query the data as if it were simple local time values.

We're masking the offsets by having users query views. The views return local time values and strip off the offsets. While this works, converting the datetimeoffset values to datetime inside the views prevents any indexes on the underlying datetimeoffset columns from being used.

So, I'm looking for advice on how others who have dealt with this issue handled it. We need to be able to utilize indexes while, at the same time, not requiring users to think about time zones and daylight saving time when querying the data.

We're using SQL Server 2012.
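One approach that can keep both goals (a sketch, not tested here): materialize the local time as a persisted computed column, index it, and point the views at it. CAST from datetimeoffset to datetime2 keeps the stored local clock time and is deterministic, so it can be persisted; table and column names below are hypothetical:

-- Hypothetical names: EventDto is the datetimeoffset column
ALTER TABLE dbo.Facts
  ADD EventLocal AS CAST(EventDto AS datetime2) PERSISTED;

CREATE INDEX IX_Facts_EventLocal ON dbo.Facts (EventLocal);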

Query to list all tables in a schema

Posted: 20 Aug 2013 02:29 PM PDT

I am using Rational Application Developer to run queries on a database. We are unable to locate anyone who has a list of the tables in the schema, and so far the queries we have found for listing tables have been unsuccessful.

USE <DB>
GO
SELECT *
FROM sys.Tables
GO

USE breaks, but I replaced it with CALL.

GO breaks as well and I cannot locate a function that does the same thing.
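USE, GO, and sys.Tables are SQL Server idioms; if the database behind Rational Application Developer is DB2 (an assumption, but a common pairing), the catalog query would instead look like this, with 'MYSCHEMA' a placeholder:

SELECT TABSCHEMA, TABNAME
FROM   SYSCAT.TABLES
WHERE  TABSCHEMA = 'MYSCHEMA'
ORDER BY TABNAME;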

Schema Change/Update script for Database deploy

Posted: 20 Aug 2013 12:37 PM PDT

I need to change the database schema. I'm planning to write schema change and update scripts for tracking database changes and applying them. As a start I followed

Versioning Databases – Change Scripts

and I got the gist of what he is getting at. However, since I haven't worked much with SQL scripts before, a tutorial or something to start with would be good. I did some research on the web and found that most people use automatic comparison tools to generate the scripts, which I don't want to do, for the obvious reason that I wouldn't learn anything in the process.

I'm looking for tutorials/links on how to write change scripts and update scripts — especially update scripts, as I couldn't find even a single script or piece of pseudo-code showing how to update a schema by checking a SchemaChangeLog table before applying a change.

Thanks in advance!
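For a concrete starting point, a minimal sketch (T-SQL flavored, hypothetical names): a log table plus a guard so each change script can be re-run safely and is applied exactly once:

CREATE TABLE SchemaChangeLog (
    ScriptId    INT          NOT NULL PRIMARY KEY,
    Description VARCHAR(200) NOT NULL,
    AppliedAt   DATETIME     NOT NULL DEFAULT CURRENT_TIMESTAMP
);

-- Change script 42, guarded so it runs only once
IF NOT EXISTS (SELECT 1 FROM SchemaChangeLog WHERE ScriptId = 42)
BEGIN
    ALTER TABLE Customers ADD MiddleName VARCHAR(50) NULL;
    INSERT INTO SchemaChangeLog (ScriptId, Description)
    VALUES (42, 'Add Customers.MiddleName');
END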

Which version of Oracle supports Streams

Posted: 20 Aug 2013 01:49 PM PDT

I am using Oracle 10g Express Edition. I want to enable the Streams feature in my current edition. However, when I searched the net I couldn't find any information regarding Streams setup on Oracle XE.

Can anyone tell me which editions of Oracle support Streams? Is it only the Enterprise Edition? If so, how can I add Streams support to the Express or Standard editions?

Handling or preventing conflicts in a multi-user system

Posted: 20 Aug 2013 12:55 PM PDT

I have a web application that is accessed by multiple users from different locations worldwide. Let's say I have an "edit" form for a specific document and two (or more) users are editing the document at the same time. I know that both of the users can edit the document and submit it, but user1 will not see that user2 made changes to the document and might submit an older version of it.

I was thinking about adding some kind of lock, but then, if user1 only opened the document for editing but never changed anything (and kept it open), user2 would never be able to edit it. So I thought of adding a timeout for the editor, but then user1 might time out before he finished making his changes to the document (say, he went out for lunch).

The question is: how would one prevent the document from being edited by one user while another is changing it?
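The standard pattern here is optimistic concurrency: keep a version counter on each document row and make the save conditional on the version the editor originally read. A minimal sketch with placeholder table and parameter names:

UPDATE documents
   SET body    = @new_body,
       version = version + 1
 WHERE id      = @doc_id
   AND version = @version_seen_at_load;
-- 0 rows affected => someone else saved first: reload, show a diff/merge, retry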

Migration to databases Domain Users do not have access

Posted: 20 Aug 2013 02:10 PM PDT

I migrated databases to new servers; however, the applications that were previously used with the databases are failing to load. I have changed the connection strings, etc. The jobs also seem to be failing. I have a domain account that is the job owner. However, when I try to execute the job under my username I get the following error:

Executed as user: NT AUTHORITY\SYSTEM. Login failed for user.....(SQLSTATE 28000) (Error 18456).

Is this related to domain users not having appropriate read and write access to the databases? Also, how would I give all domain users permission to execute stored procedures?
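For the permissions part, the usual approach is to grant to a domain group rather than to every user individually; the names below are placeholders:

-- Hypothetical domain/group and database names
CREATE LOGIN [MYDOMAIN\AppUsers] FROM WINDOWS;
GO
USE YourDatabase;
CREATE USER [MYDOMAIN\AppUsers] FOR LOGIN [MYDOMAIN\AppUsers];
GRANT EXECUTE ON SCHEMA::dbo TO [MYDOMAIN\AppUsers];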

Creating a global temp table in MySQL

Posted: 20 Aug 2013 12:10 PM PDT

I am working on a MySQL stored procedure.

I need to create a temp table and I want to access this temp table whenever I execute this stored procedure.

I know we can't access the temp table in another request in MySQL.

Is there a way to create a temp table globally, or how can I access a temp table across multiple requests?
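Since MySQL temporary tables are per-connection, a common workaround (a sketch, with hypothetical names) is a permanent scratch table that callers tag with their own key and clean up when done:

-- A permanent table standing in for a "global temp" table
CREATE TABLE IF NOT EXISTS global_scratch (
    session_key VARCHAR(64) NOT NULL,
    payload     TEXT,
    created_at  TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    KEY (session_key)
) ENGINE=InnoDB;

-- Callers tag rows with their own key and remove them when finished
DELETE FROM global_scratch WHERE session_key = 'job-123';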

MySQL server crashed.

Posted: 20 Aug 2013 05:10 PM PDT

Help! I managed to crash MySQL last night. I am on a Mac using the native version that came with Mountain Lion. I was upgrading from 5.5 to 5.6. I have followed instructions in this forum to delete the installation, but trying to re-install 5.5 says that there is a newer version and won't install. Trying to install 5.6 fails. I found this error in the console:

4/21/13 10:16:56.000 PM kernel[0]: exec of /Volumes/mysql-5.6.11-osx10.7-x86/mysql-5.6.11  osx10.7-x86.pkg/Contents/Resources/preflight denied since it was quarantined by TextEdit    and created without user consent, qtn-flags was 0x00000006  

Help me please! I am stuck and in a world of hurt and despair.
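The console line says the installer's preflight script was refused because the downloaded volume carries the macOS quarantine attribute. One common workaround (assuming quarantine is the only blocker) is to clear the attribute and re-run the installer:

sudo xattr -r -d com.apple.quarantine /Volumes/mysql-5.6.11-osx10.7-x86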

MySQL database drop insanely slow

Posted: 20 Aug 2013 03:10 PM PDT

I just installed MySQL 5.0.45-log Source distribution on my Debian 6.0.6 server.

I installed it under my user home directory like I'm used to doing.

But this time the queries are extremely slow to run.

Running a CREATE TABLE or a DROP DATABASE takes ages; I can literally watch TV in the meantime.

So I profiled the DROP DATABASE statement.

mysql> SHOW PROFILES;
+----------+--------------+------------------------------+
| Query_ID | Duration     | Query                        |
+----------+--------------+------------------------------+
|        1 | 369.54719400 | drop database db_madeintouch |
|        2 |   0.00004600 | SELECT DATABASE()            |
+----------+--------------+------------------------------+
2 rows in set (0.00 sec)

mysql> SHOW PROFILE FOR QUERY 1;
+----------------------+-----------+
| Status               | Duration  |
+----------------------+-----------+
| (initialization)     | 0.000001  |
| checking permissions | 369.54705 |
| Opening table        | 0.000103  |
| System lock          | 0.000003  |
| Table lock           | 0.000018  |
| query end            | 0.000004  |
| freeing items        | 0.000004  |
| logging slow query   | 0.000002  |
+----------------------+-----------+
8 rows in set (0.00 sec)

We can see that checking permissions alone takes 369 seconds.

I also ran SHOW ENGINE INNODB STATUS:

mysql> show engine innodb status\G
*************************** 1. row ***************************
Status:
=====================================
130415 23:11:27 INNODB MONITOR OUTPUT
=====================================
Per second averages calculated from the last 9 seconds
----------
SEMAPHORES
----------
OS WAIT ARRAY INFO: reservation count 781, signal count 781
Mutex spin waits 0, rounds 8629, OS waits 231
RW-shared spins 379, OS waits 190; RW-excl spins 380, OS waits 342
------------
TRANSACTIONS
------------
Trx id counter 0 7599
Purge done for trx's n:o < 0 7430 undo n:o < 0 0
History list length 3
Total number of lock structs in row lock hash table 0
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 0 0, not started, process no 14133, OS thread id 140617364518656
MySQL thread id 16, query id 1305 localhost stephane checking permissions
drop database db_madeintouch
---TRANSACTION 0 0, not started, process no 14133, OS thread id 140617364383488
MySQL thread id 13, query id 1307 localhost stephane
show engine innodb status
---TRANSACTION 0 7597, COMMITTED IN MEMORY, process no 14133, OS thread id 140617364518656 dropping table
COMMITTING , undo log entries 16
MySQL thread id 16, query id 1305 localhost stephane checking permissions
drop database db_madeintouch
--------
FILE I/O
--------
I/O thread 0 state: waiting for i/o request (insert buffer thread)
I/O thread 1 state: waiting for i/o request (log thread)
I/O thread 2 state: waiting for i/o request (read thread)
I/O thread 3 state: waiting for i/o request (write thread)
Pending normal aio reads: 0, aio writes: 0,
ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0
Pending flushes (fsync) log: 1; buffer pool: 0
174 OS file reads, 3781 OS file writes, 2099 OS fsyncs
0.00 reads/s, 0 avg bytes/read, 0.00 writes/s, 0.00 fsyncs/s
-------------------------------------
INSERT BUFFER AND ADAPTIVE HASH INDEX
-------------------------------------
Ibuf: size 1, free list len 0, seg size 2,
0 inserts, 0 merged recs, 0 merges
Hash table size 17393, used cells 122, node heap has 1 buffer(s)
0.00 hash searches/s, 0.00 non-hash searches/s
---
LOG
---
Log sequence number 0 7801057
Log flushed up to   0 7798962
Last checkpoint at  0 7798962
1 pending log writes, 0 pending chkp writes
1535 log i/o's done, 0.00 log i/o's/second
----------------------
BUFFER POOL AND MEMORY
----------------------
Total memory allocated 22136914; in additional pool allocated 1048576
Buffer pool size   512
Free buffers       2
Database pages     509
Modified db pages  18
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages read 183, created 1444, written 6980
0.00 reads/s, 0.00 creates/s, 0.00 writes/s
No buffer pool page gets since the last printout
--------------
ROW OPERATIONS
--------------
0 queries inside InnoDB, 0 queries in queue
1 read views open inside InnoDB
Main thread process no. 14133, id 140617334142720, state: waiting for server activity
Number of rows inserted 0, updated 0, deleted 0, read 0
0.00 inserts/s, 0.00 updates/s, 0.00 deletes/s, 0.00 reads/s
----------------------------
END OF INNODB MONITOR OUTPUT
============================

1 row in set (0.00 sec)

And here is my environment.

mysql> SHOW VARIABLES;

auto_increment_increment = 1
auto_increment_offset = 1
automatic_sp_privileges = ON
back_log = 50
basedir = /home/stephane/programs/mysql-5.0.45/install/
binlog_cache_size = 32768
bulk_insert_buffer_size = 8388608
character_set_client = latin1
character_set_connection = latin1
character_set_database = latin1
character_set_filesystem = binary
character_set_results = latin1
character_set_server = latin1
character_set_system = utf8
character_sets_dir = /home/stephane/programs/mysql-5.0.45/install/share/mysql/charsets/
collation_connection = latin1_swedish_ci
collation_database = latin1_swedish_ci
collation_server = latin1_swedish_ci
completion_type = 0
concurrent_insert = 1
connect_timeout = 5
datadir = /home/stephane/programs/mysql/install/data/
date_format = %Y-%m-%d
datetime_format = %Y-%m-%d %H:%i:%s
default_week_format = 0
delay_key_write = ON
delayed_insert_limit = 100
delayed_insert_timeout = 300
delayed_queue_size = 1000
div_precision_increment = 4
engine_condition_pushdown = OFF
expire_logs_days = 0
flush = OFF
flush_time = 0
ft_boolean_syntax = + -><()~*:""&|
ft_max_word_len = 84
ft_min_word_len = 4
ft_query_expansion_limit = 20
ft_stopword_file = (built-in)
group_concat_max_len = 1024
have_archive = NO
have_bdb = NO
have_blackhole_engine = NO
have_compress = YES
have_crypt = YES
have_csv = NO
have_dynamic_loading = YES
have_example_engine = NO
have_federated_engine = NO
have_geometry = YES
have_innodb = YES
have_isam = NO
have_merge_engine = YES
have_ndbcluster = NO
have_openssl = NO
have_ssl = NO
have_query_cache = YES
have_raid = NO
have_rtree_keys = YES
have_symlink = YES
hostname = server1
init_connect =
init_file =
init_slave =
innodb_additional_mem_pool_size = 1048576
innodb_autoextend_increment = 8
innodb_buffer_pool_awe_mem_mb = 0
innodb_buffer_pool_size = 8388608
innodb_checksums = ON
innodb_commit_concurrency = 0
innodb_concurrency_tickets = 500
innodb_data_file_path = ibdata1:10M:autoextend
innodb_data_home_dir =
innodb_doublewrite = ON
innodb_fast_shutdown = 1
innodb_file_io_threads = 4
innodb_file_per_table = OFF
innodb_flush_log_at_trx_commit = 1
innodb_flush_method =
innodb_force_recovery = 0
innodb_lock_wait_timeout = 50
innodb_locks_unsafe_for_binlog = OFF
innodb_log_arch_dir =
innodb_log_archive = OFF
innodb_log_buffer_size = 1048576
innodb_log_file_size = 5242880
innodb_log_files_in_group = 2
innodb_log_group_home_dir = ./
innodb_max_dirty_pages_pct = 90
innodb_max_purge_lag = 0
innodb_mirrored_log_groups = 1
innodb_open_files = 300
innodb_rollback_on_timeout = OFF
innodb_support_xa = ON
innodb_sync_spin_loops = 20
innodb_table_locks = ON
innodb_thread_concurrency = 8
innodb_thread_sleep_delay = 10000
interactive_timeout = 28800
join_buffer_size = 131072
key_buffer_size = 16384
key_cache_age_threshold = 300
key_cache_block_size = 1024
key_cache_division_limit = 100
language = /home/stephane/programs/mysql-5.0.45/install/share/mysql/english/
large_files_support = ON
large_page_size = 0
large_pages = OFF
lc_time_names = en_US
license = GPL
local_infile = ON
locked_in_memory = OFF
log = ON
log_bin = OFF
log_bin_trust_function_creators = OFF
log_error = /home/stephane/programs/mysql/install/mysql.error.log
log_queries_not_using_indexes = OFF
log_slave_updates = OFF
log_slow_queries = ON
log_warnings = 1
long_query_time = 10
low_priority_updates = OFF
lower_case_file_system = OFF
lower_case_table_names = 0
max_allowed_packet = 1047552
max_binlog_cache_size = 18446744073709551615
max_binlog_size = 1073741824
max_connect_errors = 10
max_connections = 100
max_delayed_threads = 20
max_error_count = 64
max_heap_table_size = 16777216
max_insert_delayed_threads = 20
max_join_size = 18446744073709551615
max_length_for_sort_data = 1024
max_prepared_stmt_count = 16382
max_relay_log_size = 0
max_seeks_for_key = 18446744073709551615
max_sort_length = 1024
max_sp_recursion_depth = 0
max_tmp_tables = 32
max_user_connections = 0
max_write_lock_count = 18446744073709551615
multi_range_count = 256
myisam_data_pointer_size = 6
myisam_max_sort_file_size = 9223372036854775807
myisam_recover_options = OFF
myisam_repair_threads = 1
myisam_sort_buffer_size = 8388608
myisam_stats_method = nulls_unequal
net_buffer_length = 2048
net_read_timeout = 30
net_retry_count = 10
net_write_timeout = 60
new = OFF
old_passwords = OFF
open_files_limit = 1024
optimizer_prune_level = 1
optimizer_search_depth = 62
pid_file = /home/stephane/programs/mysql/install/data/server1.pid
port = 3306
preload_buffer_size = 32768
profiling = OFF
profiling_history_size = 15
protocol_version = 10
query_alloc_block_size = 8192
query_cache_limit = 1048576
query_cache_min_res_unit = 4096
query_cache_size = 0
query_cache_type = ON
query_cache_wlock_invalidate = OFF
query_prealloc_size = 8192
range_alloc_block_size = 2048
read_buffer_size = 258048
read_only = OFF
read_rnd_buffer_size = 258048
relay_log_purge = ON
relay_log_space_limit = 0
rpl_recovery_rank = 0
secure_auth = OFF
secure_file_priv =
server_id = 1
skip_external_locking = ON
skip_networking = OFF
skip_show_database = OFF
slave_compressed_protocol = OFF
slave_load_tmpdir = /tmp/
slave_net_timeout = 3600
slave_skip_errors = OFF
slave_transaction_retries = 10
slow_launch_time = 2
socket = /tmp/mysql.sock
sort_buffer_size = 65528
sql_big_selects = ON
sql_mode =
sql_notes = ON
sql_warnings = OFF
ssl_ca =
ssl_capath =
ssl_cert =
ssl_cipher =
ssl_key =
storage_engine = MyISAM
sync_binlog = 0
sync_frm = ON
system_time_zone = MSK
table_cache = 4
table_lock_wait_timeout = 50
table_type = MyISAM
thread_cache_size = 0
thread_stack = 131072
time_format = %H:%i:%s
time_zone = SYSTEM
timed_mutexes = OFF
tmp_table_size = 33554432
tmpdir = /tmp/
transaction_alloc_block_size = 8192
transaction_prealloc_size = 4096
tx_isolation = REPEATABLE-READ
updatable_views_with_limit = YES
version = 5.0.45-log
version_comment = Source distribution
version_compile_machine = x86_64
version_compile_os = unknown-linux-gnu
wait_timeout = 28800

225 rows in set (43.41 sec)
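Nothing in the dump explains a 369-second permissions check by itself, but several values are tiny for any real workload: innodb_buffer_pool_size is 8M, key_buffer_size is 16K, table_cache is 4, and the general query log (log = ON) is enabled. A my.cnf starting-point sketch; the sizes are assumptions to be adjusted to the host's RAM, not a diagnosis:

[mysqld]
innodb_buffer_pool_size = 256M   # currently 8M
key_buffer_size         = 32M    # currently 16K
table_cache             = 256    # currently 4; a tiny cache forces constant table re-opens
# also consider removing the "log" option: the general query log is ON above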

SQL Server replication conflicts after migration from 2000 to 2008

Posted: 20 Aug 2013 08:10 PM PDT

I got a suggestion over at Stack Overflow to post here; grateful for any and all help.

Please bear with me; I think this might take a while to explain. For many years now my company has hosted a solution for a client involving a complex web application with a smaller mobile solution, consisting of IIS 6 for the web app, SQL Server 2000 on its own server, and a Visual Studio 2005 Pocket PC app replicating with SQL Server via merge replication. This whole time the mobile solution has been very solid and did not require many updates, so we have replicated with sscesa20.dll the entire time.

We recently migrated this entire solution as follow:

  • Web Box - New Win Server 2008 R2 running IIS 7.5
  • SQL Server Box - New Win Server 2008 R2 running SQL Server 2008
  • Mobile app - small updates converted to Visual Studio 2008 and Windows for Mobile 5

The new web box received the 64 bit version of SQL Server Compact 3.5 tools and we now call sqlcesa35.dll from the mobile device to perform merge replication.

The basic idea of the entire process is that mobile field users get assigned safety inspections to perform on buildings. When a facility in the system needs an inspection an inspection record is created via the web app in the DB. A status flag is set such that the HOST_NAME() is utilized to make sure only records for a given inspector with this particular status will let them show up on their mobile device. The user can synch multiple times in the field sending their data up to the SQL Server/web app and receive more inspections down or other updates such as look up table data...typical merge replication here and has been working great for years. Once the field user changes the status of the inspection, it will travel from mobile device to SQL Server database and be removed from their iPaq. The inspection has additional work flow on the web app from here on out.

Now on to the problem. We migrated everything, publishing the exact same subset of tables with the same joins/filters. The settings on the publication are, as far as I can tell, the same. However, when a user gets a new inspection down to the handheld for the very first time, enters data, then synchronizes back to the database, every row has a conflict. Since we have default conflict resolution, the publisher wins and the data collected in the field is lost. The inspection now looks blank, just as it did when it first came down to the mobile device. If the user syncs again, with or without changes on the mobile (subscriber), all is well. Any future changes from the mobile device are intact.

It is as if the web/DB data is newer than the handheld data. I am 100% sure it is not. I have looked at table triggers, web app logic, etc. We were very careful not to include any application changes to the DB/web app/mobile app with respect to data manipulation during this migration.

Here is a summary of the order of operation:

New row created in the database >> Mobile user receives data >> mobile user updates data >> synchronizes - data is lost. Conflicts show up for all data lost.

From here on out any additional mobile changes are captured. Merge replication works in both directions flawlessly.

Thanks for taking the time to read please help. I am stuck after 3 days.
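One way to see what is actually conflicting, per article (the publication name below is a placeholder):

EXEC sp_helpmergeconflictcounts @publication = N'YourPublication';

The losing rows themselves land in the MSmerge_conflict_* tables on the conflict-logging side, which should show whether the publisher's rows really carry newer metadata than the handheld's.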

Running a TPC-C Benchmark Without sleep() i.e. key-in + think time

Posted: 20 Aug 2013 11:10 AM PDT

We are running a TPC-C benchmark against a PostgreSQL 9.2 server using the JdbcRunner-1.2 implementation. During the first few tests we were not getting a smooth TPS graph even in the supposedly steady state (that is, we saw sudden spikes down to almost 0 from the ~300 TPS we got at times). During these tests we had not specified the sleep times for transactions (different sleep times are recommended for different transaction types). So, by default, all agents (for example, 100) continuously ran a mix of five transaction types without any sleeps in between. Later, we found out that we could in fact specify the sleep times; and when we did, we saw a smoother graph without much variation.

So, the question is: could not specifying the sleep times be the real reason behind the bad results in the first few tests?

How to connect to a Database made by Oracle SQL Database?

Posted: 20 Aug 2013 07:10 PM PDT

I am fairly new at this, so please keep that in mind in your answers. I installed Oracle SQL Database on my Windows PC. It came in two zip files. I installed it and the online portion of it works fine. I can log in with Username: sys and Password: **. What I am trying to do is connect to this newly created database from another computer through SQL Developer. I have read that in order to do this, you need to change the hostname of the database from "localhost" to an IP address. How do I do that, and is there anything else I need to do to make this work?

I also found this LINK. Is this something I should do? I do not have a Domain though.

listener.ora

# listener.ora Network Configuration File:
#   C:\app\hyderz\product\11.2.0\dbhome_2\network\admin\listener.ora
# Generated by Oracle configuration tools.

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = CLRExtProc)
      (ORACLE_HOME = C:\app\hyderz\product\11.2.0\dbhome_2)
      (PROGRAM = extproc)
      (ENVS = "EXTPROC_DLLS=ONLY:C:\app\hyderz\product\11.2.0\dbhome_2\bin\oraclr11.dll")
    )
  )

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1522))
      (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1522))
    )
  )

ADR_BASE_LISTENER = C:\app\hyderz

tnsnames.ora

# tnsnames.ora Network Configuration File:
#   C:\app\hyderz\product\11.2.0\dbhome_2\network\admin\tnsnames.ora
# Generated by Oracle configuration tools.

LISTENER_ORCL =
  (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1522))

ORACLR_CONNECTION_DATA =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1522))
    )
    (CONNECT_DATA =
      (SID = CLRExtProc)
      (PRESENTATION = RO)
    )
  )

ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1522))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl)
    )
  )
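Based on the files above, the usual change is to replace localhost with the machine's network IP or hostname in both files and then restart the listener; 192.168.1.50 below is a placeholder:

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1522))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.50)(PORT = 1522))
    )
  )

Make the matching HOST edits in tnsnames.ora, run lsnrctl stop and lsnrctl start, and make sure port 1522 is open in the Windows firewall. The remote SQL Developer connection then uses that IP, port 1522, and service name orcl.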

Deleting folders after Oracle upgrade

Posted: 20 Aug 2013 01:10 PM PDT

I upgraded from Oracle 10g to Oracle 11g in Windows Server 2003.

New Oracle home : C:\app\oracle\product\11.2.0\db_1

Old Oracle 10g was installed on: C:\oracle\product\10.2.0\

Questions

  • Is the 10g folder still useful?
  • Could I delete the 10g folder without causing a problem?

How to optimize a log process in MySQL?

Posted: 20 Aug 2013 04:10 PM PDT

In my project, I have about 100,000 users and can't control their behavior. Now, what I would like to do is log their activity in a certain task. Every activity is one record, which includes columns like user_id and some tag_id's.

The problem I have is that these tasks can in some cases reach 1,000,000 per year per user. So if I stored all these activities in one table, it would obviously become very big (and slow).

What is best to do here? Create a single table per user (so I have 100,000 log tables) or put all these activities in one table? And what kind of engine should I use?

One important thing to note: although I simplified the situation a bit and this may not look normal, users can also change values in these tables (like tag_id's).
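For scale reference, a single InnoDB table with a composite index is the usual shape here rather than 100,000 per-user tables (InnoDB because the rows get updated, so row-level locking matters). A sketch with hypothetical column names:

CREATE TABLE activity_log (
    id         BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
    user_id    INT UNSIGNED    NOT NULL,
    tag_id     INT UNSIGNED    NOT NULL,
    created_at DATETIME        NOT NULL,
    PRIMARY KEY (id),
    KEY idx_user_time (user_id, created_at)   -- serves "activity for user X in range Y" lookups
) ENGINE=InnoDB;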

Disabling Checking of Schema On Function/Stored Procedure Creation

Posted: 20 Aug 2013 07:32 PM PDT

I'm trying to automate the process that executes changes to the SQL Server 2008 R2 database. The process I put in place drops and recreates my stored procedures and functions, as well as run scripts to change the tables/columns/data. Unfortunately, one of the scripts requires one of the functions to be put in place first. But I can't run all stored proc/function changes first because it relies on columns being added from the tables/columns/data change scripts first.

I was wondering: is it possible to create stored procedures and functions without SQL Server validating the columns used in the definition of the function/SP? I tried looking but couldn't find a setting or command to enable this.

Thanks.
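For reference, SQL Server's deferred name resolution covers procedures that reference tables which don't exist yet, but not functions, and not missing columns on existing tables — which is why ordering the scripts (table changes first, then modules) is the usual fix. A minimal demonstration with hypothetical names:

-- Succeeds: procedures use deferred name resolution, so a reference to a
-- table that does not exist yet is allowed at CREATE time.
CREATE PROCEDURE dbo.UsesFutureTable
AS
    SELECT NewColumn FROM dbo.TableAddedLater;
GO
-- The equivalent CREATE FUNCTION fails until dbo.TableAddedLater exists,
-- so function scripts have to run after the table-change scripts.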

Designing a database for a site that stores content from multiple services?

Posted: 20 Aug 2013 06:10 PM PDT

I'm building a site that implements David Allen's Getting Things Done that pulls in your email, Facebook newsfeed, tweets from those you follow on Twitter, and more services are planned. The problem is that I'm not a DBA, and I'm not sure how to design the database so that as I add features to the site, I won't have to artificially corrupt people's raw data for the purposes of storing it (for example, I want to add the ability to get RSS feeds sometime in the future, but I'm not sure how I'd do that without making a mess).

I've put down my initial ideas using DBDesigner 4, below, you'll find the diagram and the SQL.

A few notes to help clarify things.

  • The Accounts table is for storing authentication tokens and such for facebook, twitter, and such.
  • The messages table is incomplete.
  • The password fields in emailconfiguration and users are encrypted, users with a one-way hash, emailconfiguration with a two-way.
  • I'm using a MySQL database using the InnoDB storage engine on Amazon RDS.
  • Each project may have one context associated with it.
  • Each message may have a project and context, but it's not required.
  • The imap, smtp, and pop3 tables exist to remove duplication within email configuration.
  • Queries to this database are generated by Korma, a Clojure library.

Can someone please point me in the right direction? I'd also be willing to look at using a NoSQL database if suggested. Thank you for your time and consideration.

[diagram: site database schema]

Here's the SQL create script just in case anyone wants to see it.

CREATE TABLE Pop3 (
  domain VARCHAR NOT NULL,
  host VARCHAR NULL,
  port INTEGER UNSIGNED NULL,
  ssl BOOL NULL,
  PRIMARY KEY(domain)
)
TYPE=InnoDB;

CREATE TABLE Imap (
  domain VARCHAR NOT NULL,
  Host VARCHAR NULL,
  port INTEGER UNSIGNED NULL,
  ssl BOOL NULL,
  PRIMARY KEY(domain)
)
TYPE=InnoDB;

CREATE TABLE users (
  Username VARCHAR NOT NULL AUTO_INCREMENT,
  email VARCHAR NULL,
  password_2 VARCHAR NULL,
  activation VARCHAR NULL,
  is_active BOOL NULL,
  PRIMARY KEY(Username)
)
TYPE=InnoDB;

CREATE TABLE smtp (
  domain VARCHAR NOT NULL,
  host VARCHAR NULL,
  port INTEGER UNSIGNED NULL,
  ssl BOOL NULL,
  PRIMARY KEY(domain)
)
TYPE=InnoDB;

CREATE TABLE projects (
  id INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
  users_Username VARCHAR NOT NULL,
  name VARCHAR NULL,
  description TEXT NULL,
  context INTEGER UNSIGNED NULL,
  PRIMARY KEY(id, users_Username),
  INDEX projects_FKIndex1(users_Username),
  FOREIGN KEY(users_Username)
    REFERENCES users(Username)
      ON DELETE NO ACTION
      ON UPDATE NO ACTION
)
TYPE=InnoDB;

-- ------------------------------------------------------------
-- This is the table where access info for facebook, twitter, and others is stored.
-- ------------------------------------------------------------

CREATE TABLE Accountsi (
  id INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
  users_Username VARCHAR NOT NULL,
  owner INTEGER UNSIGNED NULL,
  service VARCHAR NULL,
  username VARCHAR NULL,
  send INTEGER UNSIGNED NULL,
  receive INTEGER UNSIGNED NULL,
  info TEXT NULL,
  PRIMARY KEY(id, users_Username),
  INDEX Accountsi_FKIndex1(users_Username),
  FOREIGN KEY(users_Username)
    REFERENCES users(Username)
      ON DELETE NO ACTION
      ON UPDATE NO ACTION
)
TYPE=InnoDB;

CREATE TABLE EmailConfiguration (
  id INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
  users_Username VARCHAR NOT NULL,
  owner INTEGER UNSIGNED NOT NULL,
  address VARCHAR NULL,
  psswd VARCHAR BINARY NULL,
  domain VARCHAR NULL,
  PRIMARY KEY(id, users_Username),
  INDEX EmailConfiguration_FKIndex1(users_Username),
  FOREIGN KEY(users_Username)
    REFERENCES users(Username)
      ON DELETE NO ACTION
      ON UPDATE NO ACTION
)
TYPE=InnoDB;

CREATE TABLE Messages (
  id INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
  users_Username VARCHAR NOT NULL,
  message_id VARCHAR NULL,
  user_id VARCHAR NULL,
  account INTEGER UNSIGNED NULL,
  service VARCHAR NULL,
  project INTEGER UNSIGNED NOT NULL,
  context INTEGER UNSIGNED NOT NULL,
  PRIMARY KEY(id, users_Username),
  INDEX Messages_FKIndex1(users_Username),
  FOREIGN KEY(users_Username)
    REFERENCES users(Username)
      ON DELETE NO ACTION
      ON UPDATE NO ACTION
)
TYPE=InnoDB;

CREATE TABLE context (
  id INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
  projects_id INTEGER UNSIGNED NOT NULL,
  projects_users_Username VARCHAR NOT NULL,
  users_Username VARCHAR NOT NULL,
  name VARCHAR NULL,
  description TEXT NULL,
  PRIMARY KEY(id, projects_id, projects_users_Username, users_Username),
  INDEX context_FKIndex1(projects_id, projects_users_Username),
  INDEX context_FKIndex2(users_Username),
  FOREIGN KEY(projects_id, projects_users_Username)
    REFERENCES projects(id, users_Username)
      ON DELETE NO ACTION
      ON UPDATE NO ACTION,
  FOREIGN KEY(users_Username)
    REFERENCES users(Username)
      ON DELETE NO ACTION
      ON UPDATE NO ACTION
)
TYPE=InnoDB;
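One way to avoid reshaping raw data every time a service is added (a sketch, not a full design): keep each fetched item verbatim in a service-agnostic table and hang the GTD metadata (project, context) off it in separate tables. Names below are hypothetical:

CREATE TABLE raw_items (
  id          INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
  account_id  INTEGER UNSIGNED NOT NULL,
  service     VARCHAR(32)      NOT NULL,   -- 'email', 'facebook', 'twitter', 'rss', ...
  external_id VARCHAR(255)     NULL,       -- the service's own id for the item
  raw_payload TEXT             NOT NULL,   -- item stored verbatim, never rewritten
  fetched_at  DATETIME         NOT NULL,
  PRIMARY KEY (id),
  KEY idx_account_service (account_id, service)
) ENGINE=InnoDB;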
