Friday, June 28, 2013

[how to] "SQL1042C An unexpected system error" when creating a database

[how to] "SQL1042C An unexpected system error" when creating a database


"SQL1042C An unexpected system error" when creating a database

Posted: 28 Jun 2013 04:16 PM PDT

I installed DB2 Express-C 10.1 on OS X 10.8.4. I installed it in user mode, and it seems to have created an instance named after my username (avernet). I checked that the instance is indeed there by running db2ilist. The database manager is running: when I run db2start, I get the message SQL1026N The database manager is already active.

Now, I am trying to create a database by running db2 from the command line, and typing the command create database orbeon, but it returns:

SQL1042C  An unexpected system error occurred.  SQLSTATE=58004

What am I missing here? How can I create a simple database from the command line (I prefer not to bother with Data Studio as it isn't available for OS X)?
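For reference, here is a minimal sketch of the commands involved, plus one way to dig out the real cause (the diagnostic log path assumes a default user-mode install; adjust to your instance owner's home directory):

db2ilist                        # confirm the instance exists
db2start                        # SQL1026N here just means it is already running
db2 "create database orbeon"    # the statement that fails with SQL1042C
# SQL1042C is generic; the underlying cause is usually written to db2diag.log
tail -n 50 ~/sqllib/db2dump/db2diag.log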

What is the best practice when making changes to the application and database?

Posted: 28 Jun 2013 02:38 PM PDT

I need to make changes to an application that uses a database replicated across several locations. I was wondering what the best practice is when working with databases and making changes to them.

I will be restoring a backup of the database into a dev environment and making changes to both the application and the database. While testing the application and the database, I will likely also modify records. In that case, would it make sense to run the same T-SQL scripts I applied in dev against the production database?
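One common pattern (a minimal sketch, not the only way) is to keep every schema change made in dev as an idempotent T-SQL script, and run only those scripts against production, so test-data edits never travel with them. Object and column names below are hypothetical:

-- 001_add_discount_column.sql : safe to run more than once
IF NOT EXISTS (SELECT 1 FROM sys.columns
               WHERE object_id = OBJECT_ID(N'dbo.Orders')
                 AND name = N'DiscountPct')
BEGIN
    ALTER TABLE dbo.Orders ADD DiscountPct DECIMAL(5,2) NOT NULL
        CONSTRAINT DF_Orders_DiscountPct DEFAULT (0);
END;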

Creating a view from a recursive query in Oracle DB results in ORA-00600 (ORA-600)

Posted: 28 Jun 2013 02:29 PM PDT

Hello Ladies and Gents,

I've been trying to deploy a recursive query as a view in both Oracle XE and Standard Edition, without much success in either.

The query is located in this question here: http://stackoverflow.com/questions/17358109/how-to-retrieve-all-recursive-children-of-parent-row-in-oracle-sql

with recursion_view(base, parent_id, child_id, qty) as (
    -- first step, get rows to start with
    select
       parent_id base,
       parent_id,
       child_id,
       qty
    from
       md_boms
    union all
    -- subsequent steps
    select
      -- retain base value from previous level
      previous_level.base,
      -- get information from current level
      current_level.parent_id,
      current_level.child_id,
      -- accumulate sum
      (previous_level.qty + current_level.qty) as qty
    from
      recursion_view previous_level,
      md_boms        current_level
    where
      current_level.parent_id = previous_level.child_id
)
select
    base, parent_id, child_id, qty
from
    recursion_view
order by
    base, parent_id, child_id

The query itself works and returns results. However, when I try to create a view with that query, I receive errors.

I've posted two screenshots: http://www.williverstravels.com/JDev/Forums/StackOverflow/17358109/ViewError.jpg and http://www.williverstravels.com/JDev/Forums/StackOverflow/17358109/InternalError.jpg. The first one (with the ! and *) is what I see in JDeveloper 11g when I use the Database Navigator, right-click on Views, select "New View", and then click OK. I can indeed create the view through a SQL script, but when I then attempt to view the data, I receive the ORA-00600 error.

I've tried this on both my local machine for XE (version 11.2.0.2.0) and on 11g Standard Edition (11.2.0.2.v6) via Amazon Web Services. The result is the same.
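For reference, a minimal sketch of the CREATE VIEW statement being attempted (the view name is hypothetical):

create or replace view bom_rollup_vw (base, parent_id, child_id, qty) as
with recursion_view(base, parent_id, child_id, qty) as (
    select parent_id base, parent_id, child_id, qty
    from   md_boms
    union all
    select previous_level.base,
           current_level.parent_id,
           current_level.child_id,
           previous_level.qty + current_level.qty
    from   recursion_view previous_level,
           md_boms        current_level
    where  current_level.parent_id = previous_level.child_id
)
select base, parent_id, child_id, qty
from   recursion_view;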

Does anyone know how to get around this?

Transactional replication: altering tables and adding stored procedures

Posted: 28 Jun 2013 02:42 PM PDT

I have a database that is replicated to several locations, set up as transactional replication. Now I need to add more columns to a table, add more stored procedures, and so on. If I make these changes at the publisher, will they replicate across to the subscribers, or will they negatively impact the replication?
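By default, transactional publications replicate schema changes made with ALTER TABLE at the publisher, so a minimal sketch of the workflow looks like this (publication, table, and column names are hypothetical):

-- Check that 'replicate schema changes' is enabled (replicate_ddl = 1)
EXEC sp_helppublication @publication = N'MyPublication';

-- DDL issued at the publisher is then propagated to the subscribers
ALTER TABLE dbo.MyReplicatedTable
    ADD NewStatusCode INT NULL;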

Suggestions will be very helpful.

Implementing a Record Keeper/Rollback System for a MySQL Database

Posted: 28 Jun 2013 11:46 AM PDT

What's the best way to create a record keeping table for all actions in my database?

I have a number of tables inside one database and I want to create another that records all changes in every other table - inserts, deletes and edits - which can then be reviewed by a user with top-level administrative privileges and reversed, if necessary, at the click of a button.

Is there a tried and tested method for this, or a native feature of MySQL (InnoDB) that is useful here?
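For illustration, here is a minimal sketch of the kind of trigger-based audit table I have in mind (names are hypothetical, and each audited table would need its own set of triggers):

CREATE TABLE audit_log (
    audit_id    BIGINT AUTO_INCREMENT PRIMARY KEY,
    table_name  VARCHAR(64)  NOT NULL,
    action      ENUM('INSERT','UPDATE','DELETE') NOT NULL,
    row_id      INT          NOT NULL,
    old_data    TEXT         NULL,     -- enough detail to review/undo the change
    changed_at  TIMESTAMP    NOT NULL DEFAULT CURRENT_TIMESTAMP
) ENGINE=InnoDB;

DELIMITER //
CREATE TRIGGER items_au AFTER UPDATE ON items
FOR EACH ROW
BEGIN
    INSERT INTO audit_log (table_name, action, row_id, old_data)
    VALUES ('items', 'UPDATE', OLD.id, CONCAT_WS('|', OLD.name, OLD.price));
END//
DELIMITER ;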

If possible, can you point me to resources specific to this please. I'm working on my first database so I apologise if this has been asked here before, but I couldn't find it.

Choosing a database type for recording infrequent changes in a large number of variables

Posted: 28 Jun 2013 11:38 AM PDT

I am writing an application that monitors and archives the values of hundreds of thousands of variables. Since they change infrequently, only updates are recorded after the initial value.

Three primary types of queries are expected:

  1. Return the values of a set of variables at a given time:

| var_name_1 | time_stamp | value |
| var_name_2 | time_stamp | value |
  .
  .

    For each var_name, the listed time_stamp is the last time the value changed before or at the given time.

  2. Compare the values of a set of variables between two given times:

| var_name_1 | time_stamp_1 | value | time_stamp_2 | value |
| var_name_2 | time_stamp_1 | value | time_stamp_2 | value |
  .
  .

    For each var_name, time_stamp_1 is the last time the value changed before or at the first given time and time_stamp_2 is the last time the value changed before or at the second given time.

  3. Return all the values of a set of variables between two given times:

| var_name_1 | time_stamp_1 | value |
              | time_stamp_2 | value |
                .
                .
              | time_stamp_m | value |
| var_name_2 | time_stamp_1 | value |
              | time_stamp_2 | value |
                .
                .
              | time_stamp_n | value |
  .
  .

    For each var_name, the first time_stamp is the last time the value changed before or at the first given time and the last time_stamp is the last time the value changed before or at the second given time.

The set of var_names returned is determined by a text matching search.

I would also like to indicate that a var_name was not being recorded for a requested time if the application had not been running then.

There is more information stored for each var_name for a given time_stamp than just the value, and it should be returned as well, but this should give the basic idea.
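For what it's worth, query type 1 maps onto a plain "one row per change" table fairly directly; a minimal SQL sketch under assumed table and column names:

-- Assumed schema: var_updates(var_name, time_stamp, value), one row per change
SELECT u.var_name, u.time_stamp, u.value
FROM   var_updates u
JOIN  (SELECT var_name, MAX(time_stamp) AS last_change
       FROM   var_updates
       WHERE  time_stamp <= TIMESTAMP '2013-06-28 12:00:00'
         AND  var_name LIKE '%pump%'          -- the text-matching search
       GROUP  BY var_name) latest
  ON   latest.var_name = u.var_name
 AND   latest.last_change = u.time_stamp;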

I was wondering what kind of database type might be best suited for this kind of application? Does a custom database already exist?

Linking a third table to two other tables to get dynamically populated data, using PostgreSQL

Posted: 28 Jun 2013 11:34 AM PDT

I am at a point where I need data populated dynamically in a table that is associated with two other tables through foreign key constraints. Below are the schemas/tables that I have created in PostgreSQL.

Table#1:

application
----------------------------
id | name | size | rate

Table #1, "application", has 'id' as serial (which is also the primary key), 'name' as varchar, and 'size' and 'rate' as integer.

Table#2:

create
-------------------------
createId | name | score

Similarly, createId is again a serial and the primary key of this table, 'name' is varchar, and 'score' is integer.

Now, I am creating a third table which is supposed to be a mapping table for the above two, with the following fields:

Table# 3

app_create
---------------------
app_create_id | id | createId

In the above table, I have made app_create_id again a sequence and the primary key, and the other two fields foreign keys referencing the tables above: application(id) and create(createId).

Next, when I insert values into tables #1 and #2, my understanding was that the third table (#3) would automatically be populated with respect to the data I inserted into the other two tables. Instead, I am getting a blank table, i.e. no values in table #3 (app_create), the mapping table, after inserting into tables #1 and #2.

I am looking for output something like the example shown below; say the application and create tables have the following data populated:

application
--------------------------
id | name | size | rate
1  | xyz  | 2096 | 12
2  | mno  | 1280 | 34

create
-------------------------
createId | name  | score
1        | a.b.c | 50
2        | m.cpm | 20

Now, after inserting data into these two tables, I am not sure what will happen in my third table below. I thought it would get populated the same way, as soon as I populated the two tables above, since there is a primary and foreign key relationship.

Table# 3

app_create
----------------------------------
app_create_id | id | createId

I believe this has something to do with a many-to-many relationship, but I am not sure how to set it up correctly or how to proceed to get that output. Please help me understand, correct me if I am doing anything wrong, and guide me on how to get the data into mapping table #3 dynamically (see the sketch below).
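A minimal sketch of how the mapping row would be created explicitly (foreign keys only validate rows, they never create them); identifier quoting here is an assumption about how the tables were created:

-- The join row must be inserted explicitly once both parent rows exist
-- (quote "createId" instead if the column was created with mixed case)
INSERT INTO app_create (id, createid)
VALUES (1, 1);   -- links application 1 (xyz) to create 1 (a.b.c)

-- Typical query through the mapping table
SELECT a.name AS app_name, c.name AS create_name, c.score
FROM   app_create ac
JOIN   application a ON a.id = ac.id
JOIN   "create"    c ON c.createid = ac.createid;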

Any suggestion/help would really help.

Thank you!

What's a good way to model user authorization to hierarchical data?

Posted: 28 Jun 2013 02:14 PM PDT

I'm wanting to implement something that's basically an ACL in SQL Server (2008, we'll be moving to 2012). We're working with farm production information. We will have users who need to be authorized to view production records, but only at certain vertical levels in a hierarchy.

Our data hierarchy looks like this:

System
 - Farm
   - Group
     - Animal

The idea is that some users will have access at the System level, and can see records for all Farms, Groups, and Animals within that System. Likewise, some users will have permission starting at the Farm level, and need access only linked to that Farm and all Groups (and animals) within it.

Each table contains a primary key column, and a foreign key column linking it to the parent record (along with whatever other attributes each entity requires).

What I've implemented in the past is a two-table system for linking users to the appropriate items they're allowed to see. Implemented here, it would look like this:

Table:   Authorizations          Table:   FullAuthorizations
Columns: Id (PK)                 Columns: Id (PK)
         UserId                           UserId
         ObjectId                         SystemId
         ObjectType                       FarmId
                                          GroupId
                                          AnimalId

The application inserts a record into Authorizations with the user to authorize, the record id (System id, Farm id, etc.), and the type of record (System, Farm, etc.). The FullAuthorizations table denormalizes the farm hierarchy for easier/faster filtering of data. A trigger on the Authorizations table (and on each of the Farm, etc., tables) keeps FullAuthorizations up to date. I considered using a view here instead; in a previous project with additional levels of entities, view performance became quite poor once we reached several hundred thousand records.

The queries would look something like:

SELECT *
FROM dbo.Animals a
WHERE EXISTS (
    SELECT 1
    FROM dbo.FullAuthorizations fa
    WHERE fa.UserId = 1 AND fa.AnimalId = a.Id
)

In the other project where we're doing this, the solution is performant, but feels like a bit of a hack, and I especially don't like that we can't maintain referential integrity on Authorizations with the associated objects. I'd appreciate feedback on some other possible solutions. I've been looking at things like Nested Sets, but not sure something like that fits this particular problem.
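For illustration, a minimal sketch of the kind of trigger that expands a Farm-level grant into FullAuthorizations (table and column names follow the layout above; a complete implementation would also cover System-level grants, updates, and deletes):

CREATE TRIGGER trg_Authorizations_Insert
ON dbo.Authorizations
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- Expand Farm-level grants down to every Group and Animal in that Farm
    INSERT INTO dbo.FullAuthorizations (UserId, SystemId, FarmId, GroupId, AnimalId)
    SELECT i.UserId, f.SystemId, f.Id, g.Id, a.Id
    FROM   inserted i
    JOIN   dbo.Farms   f ON f.Id = i.ObjectId AND i.ObjectType = 'Farm'
    LEFT JOIN dbo.Groups  g ON g.FarmId  = f.Id
    LEFT JOIN dbo.Animals a ON a.GroupId = g.Id;
END;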

In order to reproduce a display bug

Posted: 28 Jun 2013 10:57 AM PDT

Unless I asked you to, please do not act on this question, even if it appears in the suggested aardvark edit review queue or the close votes review queue.

Thanks, Aaron

Need to install Oracle Express 11g Release 2 on a Windows 7 64-bit laptop

Posted: 28 Jun 2013 10:50 AM PDT

I need the Oracle 11g Release 2 sample schemas (HR, OE, etc.) in order to do most of the available online tutorials. I was hoping to install Oracle Express Edition on my Windows 7 laptop to get these; but I have never heard of anybody successfully installing Oracle XE on a 64-bit Windows platform.

Is there a version of Oracle XE 11g R2 available for Windows 7? And if so, could you please point me to it?

Thanks...

Mysql is larger than wamp

Posted: 28 Jun 2013 03:14 PM PDT

WAMP, which includes MySQL, PHP and Apache, is only 24.7 MB, but MySQL on its own is 207 MB here. Why?

How to edit publication's generation_leveling_threshold configuration value

Posted: 28 Jun 2013 11:57 AM PDT

I don't know where to run this code:

update sysmergepublications set [generation_leveling_threshold] = 0

I only need to do this, but do I need to edit any function of the agent?
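For reference, sysmergepublications lives in the publication database, so the statement would be run there; a minimal sketch, with the database and publication names as hypothetical placeholders:

USE MyPublicationDB;
UPDATE dbo.sysmergepublications
SET    generation_leveling_threshold = 0
WHERE  name = N'MyMergePublication';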

Terminology for table types

Posted: 28 Jun 2013 11:58 AM PDT

I am currently unsure of the correct terminology for types of tables that exist, or indeed whether there is a term for the examples I have.

I have included my current understanding of the types that I am aware of further down in this question.

Q: Please provide a list of the different table types, with brief definitions and examples to illustrate them.

Sidenote: The context to this is that I am currently attempting to reduce a database to a bare-bones version (for testing). As part of this I am hoping to identify tables that contain values to be looked up and will not change (eg job titles and department titles) vs the "data" tables which may be emptied with no ill-effect on the remainder of the database (cf, referential integrity in the case of adding a staff member where no departments or job titles are defined).


Lookup tables

I believe this to be where a table refers to a second table, where many rows/records in the first table share identical fields/column data. In the simpler cases, this table may be replaced by an ENUM type on the original table.

Example, where many users can have the same job title, the "Job Title" table is identified as a lookup table and the "Users" table is identified as the data table (see below):

Users >- Job Title  

Transactional / Junction tables

I believe this to be where a many-to-many relationship occurs between two tables, and this transaction/junction table resolves this relationship.

Example: where many developers may be working on many projects, the "developer_project" table is identified as the transaction/junction table, and the "Developer"/"Project" tables are identified as data tables (see below).

Developer -< developer_project >- Project  

Data/Entity table

I am pretty much making this one up, but where a table is able to exist on its own, irrespective of other tables I consider this to be primarily a data storage table as opposed to a structural table. It may use lookup tables as part of the normalisation/optimisation process, but this is incidental.

I have seen the term "weak" entity floating around and it may/may not apply here. I need to do further reading to be sure.

Example, "staff" and "building" may exist independently of any other tables whereas transactional or relational tables/entities may not:

Staff
Building
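To make the three types concrete, here is a minimal DDL sketch (hypothetical names) with one lookup table, two entity/data tables, and one junction table:

-- Lookup table: small, rarely-changing reference values
CREATE TABLE job_title (id INT PRIMARY KEY, title VARCHAR(50) NOT NULL);

-- Entity/data tables: stand on their own, may reference lookups
CREATE TABLE developer (id INT PRIMARY KEY, name VARCHAR(50) NOT NULL,
                        job_title_id INT REFERENCES job_title(id));
CREATE TABLE project   (id INT PRIMARY KEY, name VARCHAR(50) NOT NULL);

-- Junction table: resolves the many-to-many between developer and project
CREATE TABLE developer_project (
    developer_id INT NOT NULL REFERENCES developer(id),
    project_id   INT NOT NULL REFERENCES project(id),
    PRIMARY KEY (developer_id, project_id)
);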

No predicate warning [on hold]

Posted: 28 Jun 2013 03:47 PM PDT

I am getting a No predicate warning on this query. Can someone help?

SELECT DISTINCT TOP 100
       T0.nsid, T0.id, T0.version, T0.name, T1.currency,
       T0.disbursmentamount, T0.effectivedate, T0.enddate,
       T0.contractname, T0.scheduledbillingdate, T0.dueamount,
       T0.paidamount, T0.cashpaidamount, T0.paidamount,
       T0.writeoffamount, T0.canceled, T0.creationdate,
       T0.creationtime, T0.mustbepaidat, T0.isrebill, T1.name,
       T0.status, T2.sendtoprint, T2.isprinted
FROM   (awfbillingstatementitem T0
        INNER JOIN awfbusinessaccount T1
                ON ((T0.attachedtoclientaccount_nsid = T1.nsid)
                    AND (T0.attachedtoclientaccount_id = T1.id)))
       LEFT OUTER JOIN awfbillingstatementitem T2
                    ON (((T2.nsid = T0.nsid) AND (T2.id = T0.id))
                        AND (T2.h_iskilled = 0)
                        AND (T2.h_clid = 218169570)),
       awfcontract T3
       LEFT OUTER JOIN awfdescversionedbydate T5
                    ON (((T3.versionedpolicyowner_nsid = T5.nsid)
                         AND (T3.versionedpolicyowner_id = T5.id)
                         AND (-T3.versionedpolicyowner_version = T5.version))
                        AND (T5.h_iscurvers = 1)
                        AND (T5.h_iskilled = 0)
                        AND (T5.h_clid = 234821058))
       LEFT OUTER JOIN (awfpolicyownerversion T6
                        LEFT OUTER JOIN awfdescversionedbydate_versions T8
                                     ON (T8.nsid = T6.nsid
                                         AND T8.id = T6.id
                                         AND T8.version = -T6.version))
                    ON (T5.nsid = T8.lonsid
                        AND T5.id = T8.loid
                        AND T5.versions_version = T8.loversion
                        AND (T6.h_iscurvers = 1)
                        AND (T6.h_iskilled = 0)),
       awfperson T4
WHERE  ((0 = 0) AND (0 = 0) AND (0 = 0) AND (0 = 0)
        AND (0 = 0) AND (0 = 0) AND (0 = 0) AND (0 = 0)
        AND ((T0.canceled = 0))
        AND (0 = 0) AND (0 = 0) AND (0 = 0) AND (0 = 0)
        AND (0 = 0) AND (0 = 0) AND (0 = 0) AND (0 = 0)
        AND (((T0.contract_nsid = T3.nsid)
              AND (T0.contract_id = T3.id)))
        AND (0 = 0)
        AND (1 = 1)
        AND ((((T0.client_nsid = T4.nsid)
               AND (T0.client_id = T4.id)))
             OR (((T1.thirdparty_nsid = T4.nsid)
                  AND (T1.thirdparty_id = T4.id))))
        AND (0 = 0)
        AND ((((T3.subscriber_nsid = T4.nsid)
               AND (T3.subscriber_id = T4.id)))
             OR (((T3.policyowner_nsid = T4.nsid)
                  AND (T3.policyowner_id = T4.id)))
             OR (((T6.policyowner_nsid = T4.nsid)
                  AND (T6.policyowner_id = T4.id))))
        AND (0 = 0))
       AND (T3.h_iscurvers = 1)
       AND (T4.h_iscurvers = 1)
       AND (T0.h_iskilled = 0)
       AND (T3.h_iskilled = 0)
       AND (T4.h_iskilled = 0)
       AND ((T4.h_clid = 234817316)
            OR (T4.h_clid = 234820248)
            OR (T4.h_clid = 234816906)
            OR (T4.h_clid = 234817143)
            OR (T4.h_clid = 234821693)
            OR (T4.h_clid = 218169345))
ORDER  BY T0.creationdate DESC,
          T0.creationtime DESC,
          T0.effectivedate DESC

Oracle shared memory exception ORA-04031

Posted: 28 Jun 2013 01:10 PM PDT

I'm trying to establish an Oracle database connection. It throws the following exception:

ORA-04031: unable to allocate 4048 bytes of shared memory
("shared pool","unknown object","sga heap(1,0)","kglsim heap")

I have tried connecting to the database through JDBC as well as SQL Developer; it throws the exception in both cases.

How can I fix this?

MySQL bin log missing data?

Posted: 28 Jun 2013 04:28 PM PDT

I'm trying to make heads or tails of my binary logs and am coming up short. I have many entries similar to the following from mysqlbinlog, but I'm missing log data that I know should be there.

# at 1050548
#130617 18:40:03 server id 1  end_log_pos 1050619   Query   thread_id=844988    exec_time=0 error_code=0
SET TIMESTAMP=1371519603/*!*/;
BEGIN
/*!*/;
# at 1050619
# at 1050782
#130617 18:40:03 server id 1  end_log_pos 1050809   Xid = 33121028
COMMIT/*!*/;

It's puzzling because I get the expected SQL in the mysqlbinlog output for statements executed in phpMyAdmin, but statements coming from other PHP-based remote web servers appear not to be recorded.

My binary logging settings are:

bin_log = /path_to_sql_incrementals
binlog_ignore_db = mysql

Am I missing a logging option?

Mysql 5.0.95 / CentOS 5.9

Error 3154 while restoring a backup using WITH REPLACE

Posted: 28 Jun 2013 07:02 PM PDT

I have SQL Server 2012 with SP1 installed on my computer. I made a backup of a database, test.bak.

I have a database named test2, which is the same database but with changed data.

I want to restore test.bak over test2 database.

I am always getting the error:

Error 3154: The backup set holds a backup of a database other than the existing database.

I tried:

  1. I right-clicked on test2 -> Restore database -> From device

    I chose test.bak and checked With Replace but I get the error.

  2. Then I tried to right-click on test2 -> Restore file and filegroups

    I chose test.bak and checked With Replace but I get the error.

I can delete my old database and then restore my backup with the right name, but when I was using SQL 2008, I had no problem restoring over an existing database.

It seems that since I started using SQL Server 2012, I get this error a lot!
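For comparison, a minimal T-SQL sketch of the same restore done outside the wizard (the file path and logical file names are hypothetical; the MOVE clauses are only needed if the file locations differ):

RESTORE DATABASE test2
FROM DISK = N'C:\Backups\test.bak'
WITH REPLACE,
     MOVE N'test'     TO N'C:\Data\test2.mdf',
     MOVE N'test_log' TO N'C:\Data\test2_log.ldf';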

Memcached plugin on MariaDB?

Posted: 28 Jun 2013 02:22 PM PDT

I'd like to try the new NoSQL feature in MySQL 5.6, but I am using MariaDB 10.0.2 on Debian 6 and don't fancy going back to MySQL.

I'm wondering whether the memcached plugin has been added to MariaDB, and if not, whether one can still use it as an add-on?

And if not, can I use the existing Cassandra plugin of MariaDB to the same effect?

How to embed a sub-procedure call in a SELECT statement in an Oracle 11g PL/SQL stored procedure

Posted: 28 Jun 2013 12:45 PM PDT

I need to figure out a way to embed an Oracle PL/SQL sub-procedure call in a SELECT statement, within another procedure in the same package.

I am using SQL Developer 3.0 with an Oracle 11g database.

I have already developed a stored procedure 'acctg_detail_proc()' that generates a detailed list of accounting transactions within a specified period. What I am trying to do is create a summary report procedure 'acctg_summary_proc()' using the detailed data returned by acctg_detail_proc().

Since I am new to PL/SQL, all I have learned so far is how to retrieve the detailed data via a ref cursor, then LOOP through it, FETCHing each individual detail row. I want to figure out how acctg_summary_proc() can substitute the call to acctg_detail_proc() for a table name in a SELECT statement with a GROUP BY clause. Here is the source code for an anonymous block where I tried to test it:

SET SERVEROUTPUT ON;
DECLARE
    start_date VARCHAR2(50) := '04/01/2012';
    end_date   VARCHAR2(50) := '04/30/2012';
    c_acctg_refcur    SYS_REFCURSOR;
BEGIN
  acctg_rpt_pkg.acctg_detail_proc(start_date, end_date, c_acctg_refcur);
  SELECT
    date_posted,
    debit_acct,
    credit_acct,
    SUM(dollar_amt)
  FROM c_acctg_refcur
  GROUP BY
    date_posted,
    debit_acct,
    credit_acct;
  CLOSE c_acctg_refcur;
END;

When I try to execute this code, I get the following error:

PL/SQL: ORA-00942: table or view does not exist

I realize I could use a nested SELECT statement instead of a table name, but I want to avoid duplication of source code. Is there any way to 'alias' a ref cursor so I can reference its data in a SELECT statement?

Here is some further background info: The called sub-procedure has ~600 lines of code, and selects 40 columns of data from a de-normalized VIEW. The corporate DBAs will not let me create any VIEWs that contain WHERE clauses, so that is not an option for me.

Thanks in advance, Ken L.

How to debug a DB memory leak causing MySQL to go beyond its own limits?

Posted: 28 Jun 2013 08:45 PM PDT

We are having a problem with the database server of one of our applications, possibly caused by some code that is interfering with the way MySQL manages its memory.

Until the second week of April, our DB server had stable memory consumption of about 5 GB (with a maximum of 7 GB). But then it started to increase without limit, even surpassing its theoretical maximum possible allocation.

This is our yearly munin graph showing the increase in the last 2 months:

This is another view from the last seven days after a restart in mysql:

This is the report created by mysqltuner.pl:

-------- Performance Metrics -------------------------------------------------
[--] Up for: 4d 1h 56m 28s (152M q [431.585 qps], 383K conn, TX: 593B, RX: 29B)
[--] Reads / Writes: 90% / 10%
[--] Total buffers: 5.3G global + 10.2M per thread (200 max threads)
[OK] Maximum possible memory usage: 7.3G (46% of installed RAM)
[OK] Slow queries: 0% (2K/152M)
[OK] Highest usage of available connections: 13% (26/200)
[OK] Key buffer size / total MyISAM indexes: 16.0M/300.0K
[OK] Key buffer hit rate: 100.0% (61M cached / 9 reads)
[OK] Query cache efficiency: 70.8% (103M cached / 146M selects)
[!!] Query cache prunes per day: 501819
[OK] Sorts requiring temporary tables: 0% (926 temp sorts / 3M sorts)
[!!] Joins performed without indexes: 39128
[OK] Temporary tables created on disk: 16% (821K on disk / 5M total)
[OK] Thread cache hit rate: 99% (26 created / 383K connections)
[!!] Table cache hit rate: 10% (845 open / 7K opened)
[OK] Open file limit used: 3% (148/4K)
[OK] Table locks acquired immediately: 99% (65M immediate / 65M locks)
[!!] InnoDB data size / buffer pool: 5.5G/5.0G

We are in unknown territory here. Any help will be appreciated!

Edit: Adding my.cnf

# The MySQL database server configuration file.

[client]
port            = 3306
socket          = /var/run/mysqld/mysqld.sock

[mysqld_safe]
socket          = /var/run/mysqld/mysqld.sock
nice            = 0

[mysqld]
character_set_server = utf8
collation_server = utf8_general_ci

user            = mysql
socket          = /var/run/mysqld/mysqld.sock
pid-file        = /var/run/mysqld/mysqld.pid
port            = 3306
basedir         = /usr
datadir         = /var/lib/mysql
tmpdir          = /tmp
skip-external-locking
bind-address            = 0.0.0.0

# Fine Tuning
max_connections         = 200
key_buffer              = 16M
max_allowed_packet      = 16M
thread_stack            = 192K
join_buffer_size        = 2M
sort_buffer_size        = 2M
read_buffer_size        = 2M
read_rnd_buffer_size    = 4M
thread_cache_size       = 128
thread_concurrency      = 24
table_cache             = 2K
table_open_cache        = 2K
table_definition_cache  = 4K

# This replaces the startup script and checks MyISAM tables if needed
# the first time they are touched
myisam-recover         = BACKUP

# innodb
innodb_buffer_pool_size         = 5G
innodb_flush_log_at_trx_commit  = 1
innodb_support_xa               = 1
innodb_additional_mem_pool_size = 32M
innodb_log_buffer_size          = 8M
innodb_flush_method             = O_DIRECT

# Query Cache Configuration
query_cache_limit               = 32M
query_cache_size                = 256M
query_cache_min_res_unit        = 256

# Logging and Replication
log_error                       = /var/log/mysql/error.log
log-slow-queries                = /var/log/mysql/slow.log
long_query_time                 = 1

# REPLICATION CONFIGURATION
log_bin                 = /var/log/mysql/mysql-bin.log
log-bin                 = mysql-bin
expire_logs_days        = 15
sync_binlog             = 1
server-id               = 1

ssl-ca   =/etc/ssl/private/repl/cacert.pem
ssl-cert =/etc/ssl/private/repl/master-cert.pem
ssl-key  =/etc/ssl/private/repl/master-key.pem

[mysqldump]
quick
quote-names
max_allowed_packet      = 16M

[isamchk]
key_buffer              = 16M

Updating a column after a specific time has elapsed in MySQL

Posted: 28 Jun 2013 06:45 PM PDT

  1. I have a table with a column named 'state' (int(2)) and a column 'modify_time' (a timestamp with ON UPDATE CURRENT_TIMESTAMP).

  2. I have an update trigger which changes the value of the state column to 0 based on some condition.

  3. I want to set the value of the state column to 1 twenty-four hours after modify_time, if it is still 0.

  4. I tried the following to test this:

CREATE EVENT myevent1 ON SCHEDULE AT current_timestamp + interval 1 minute DO UPDATE test.mytabletable SET state = 0;

But it is not doing anything.

Is there any alternative method?
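A couple of notes that may help: the event scheduler must be switched on for any event to fire, and a recurring event can apply the 24-hour rule directly against modify_time. A minimal sketch, assuming the table is test.mytable with the columns described above:

SET GLOBAL event_scheduler = ON;   -- events never fire while this is OFF

CREATE EVENT reset_stale_state
ON SCHEDULE EVERY 5 MINUTE
DO
  UPDATE test.mytable
  SET    state = 1
  WHERE  state = 0
    AND  modify_time <= NOW() - INTERVAL 24 HOUR;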

Repairing Broken Binary Replication in PostgreSQL 9.0

Posted: 28 Jun 2013 11:45 AM PDT

I have binary replication that was stopped for so long that the WAL segments it needed were removed; as a result, the standby is now too far behind to catch up. I'm trying to re-establish the replication, and the best way I've found so far is to follow the steps on the PostgreSQL wiki:

  • Issue select pg_start_backup('clone',true); on master
  • rsync everything except for pg_xlog from master to slave
  • Issue select pg_stop_backup(); on master
  • rsync pg_xlog

But the database is too big (300 GB), my connection is not really amazing (around 800 kB/s), and the files in base keep changing. So I was wondering if there's a more efficient way to do this.
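For reference, a minimal sketch of those wiki steps as commands (host name and data directory paths are hypothetical; rsync's compression and --delete options can help on a slow link):

# on the master
psql -c "SELECT pg_start_backup('clone', true);"

# on the slave: copy the data directory, excluding WAL
rsync -az --delete --exclude=pg_xlog/ \
      master:/var/lib/postgresql/9.0/main/ /var/lib/postgresql/9.0/main/

# on the master
psql -c "SELECT pg_stop_backup();"

# on the slave: finally copy pg_xlog
rsync -az master:/var/lib/postgresql/9.0/main/pg_xlog/ \
      /var/lib/postgresql/9.0/main/pg_xlog/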

Failed copy job deletes all users

Posted: 28 Jun 2013 01:45 PM PDT

Since the progression of this was apparently somewhat difficult to follow the first time around:

I attempted a completely boring, been-done-a-thousand-times-before copy of a database using the copy database wizard with the detach/reattach method.

The copy failed. The log indicates that it was unable to execute a CREATE VIEW action for a particular view because the data source for the view did not exist. This is interesting in its own right, as the source most certainly exists and the view(s) in question are fully functional in the source database. I'm not clear yet on how significant this is, as I've yet to figure out precisely why this generated an error.

This resulted in the deletion of all non-system user associations from the source database, leaving me with users dbo, information_schema, sys, and guest. Non-system roles were also deleted. Schemas were unaffected.

I have since restored the damaged database from backup. Academically, however, I would like to know the following:

  1. Why would a failed copy operation strip the user associations from the source database?
  2. Is there any sort of maintenance/rebuild/repair operation that could be performed on the source database to repair it?
  3. The loss of the users was immediately obvious, but given the rather mysterious nature of a failed copy job apparently damaging the source database, how concerned should I be about less obvious effects? In theory, I would expect restoring from backup would alleviate this concern, but do I have any cause to worry about, e.g., the master database?

This is entirely repeatable. I've made a handful of copies (manually) for the sake of experimenting with this particular issue, and in each case, the failed copy job obliterates the users and roles from the source database.

Removing the views that generated errors allows the copy to complete, and, as one would expect, produces a copy with identical data, users, etc., in addition to leaving the source database unaltered.

If it's important, I've tried rebuilding the indexes of the system databases, as well as the damaged database, to no appreciable effect.

The error generated:

1:00:25 PM,5/28/2013 1:00:25 PM,0,0x,ERROR : errorCode=-1073548784
description=Executing the query "CREATE VIEW [Sourcing].[PermittedArrProducts]
AS
SELECT     dbo.tblArrProducts.ArrProductID, dbo.tblArrProducts.ArrangementID,
           dbo.tblArrProducts.ContainerTypeID, dbo.tblArrProducts.Quantity
FROM       Sourcing.PermittedArrangements INNER JOIN
           dbo.tblArrProducts ON Sourcing.PermittedArrangements.ArrangementID = dbo.tblArrProducts.ArrangementID
" failed with the following error: "Invalid object name 'Sourcing.PermittedArrangements'.".
Possible failure reasons: Problems with the query, "ResultSet" property not set
correctly, parameters not set correctly, or connection not established correctly.

MySQL - How to optimize retrieval time in a table

Posted: 28 Jun 2013 07:45 PM PDT

I have a query like the one below against a single table with 200 million records. I am using B-tree indexes on the table.

mysql> select COUNT(DISTINCT id) from [tablename] where [columname] >=3;
+------------------------------+
| COUNT(DISTINCT id) |
+------------------------------+
| 8242063
+------------------------------+
1 row in set (3 min 23.53 sec)

I am not satisfied with this timing. How can I reduce the query time to less than 30 seconds? Any suggestions would be very helpful.

Thank you!
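One thing worth checking for this shape of query is whether a composite index can cover it, so the count is answered from the index alone; a minimal sketch with hypothetical table and column names:

-- Composite index: range-filter column first, then the counted id
ALTER TABLE mytable ADD INDEX idx_col_id (mycolumn, id);

-- EXPLAIN should then show "Using index" (a covering index scan)
EXPLAIN SELECT COUNT(DISTINCT id) FROM mytable WHERE mycolumn >= 3;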

SQL Server 2005 Replication

Posted: 28 Jun 2013 05:45 PM PDT

I am in the process of setting up replication between 2 remote servers; server 1 is the Distributor and Publisher, and server 2 is the Subscriber.

server 1
windows 2003 server
192.168.10.1 connected by vpn
SQL Server 2005
domain1.local

server 2
windows 2003 server
192.168.10.6 connected by vpn
SQL Server 2005
domain2.local

When I set up the replication, everything looked fine until I looked at the sync status, which said:

The Agent could not be started.
An exception occurred while executing a Transact-SQL statement or batch.
SQLServerAgent error: request to run job server1-username (blah blah blah)
from user sa refused because the job is already running from a request by user sa;
changed database context to ... (error 22022).

I have cleared the jobs in SQL Server Agent as well as restarted the service.

Could this be something to do with authentication between two non-trusted domains? I can browse and even control each SQL Server via SQL Server Management Studio, but I just can't set up replication.

Yes, I can manage each SQL Server in SSMS, and we are using merge replication with a snapshot.

Multiple parents and multiple children in product categories

Posted: 28 Jun 2013 10:45 AM PDT

I am building an ecommerce site. On this site I want to categorise the items into three different layers:

primary category    sub category     sub category
>> electronic       >> smart phone   samsung
   cameras             tablets       nokia
                       laptop        apple
                       headphone

In the table above, I want to display the sub category after the customer selects the primary one. At the same time, 'samsung' also comes under 'cameras'. So a parent can have any number of children, and one child can have many parents. In the future we may change the primary and secondary categories.

What is the best solution for this? Which model fits our categories better: a tree or nested sets?
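Since a child can sit under several parents, this is a many-to-many relationship rather than a strict tree; one minimal way to model it (hypothetical names) is a separate link table instead of a single parent_id column:

CREATE TABLE category (
    id   INT PRIMARY KEY,
    name VARCHAR(100) NOT NULL
);

-- One row per parent/child link, so 'samsung' can hang off both
-- 'smart phone' and 'cameras'
CREATE TABLE category_parent (
    child_id  INT NOT NULL REFERENCES category(id),
    parent_id INT NOT NULL REFERENCES category(id),
    PRIMARY KEY (child_id, parent_id)
);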

Mysqldump tables excluding some fields

Posted: 28 Jun 2013 03:45 PM PDT

Is there a way to mysqldump a table without some fields?

Let me explain:
I have a MySQL database called tests. In tests I have 3 tables: USER, TOTO and TATA. I just want to mysqldump some fields of table USER, so excluding some fields like mail, ip_login, etc.

How can I do this?
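mysqldump itself only dumps whole tables, so one common workaround is to dump a temporary copy holding only the wanted columns; a minimal sketch, assuming id and name are the columns to keep:

mysql tests -e "CREATE TABLE USER_export AS SELECT id, name FROM USER"
mysqldump tests USER_export > user_partial.sql
mysql tests -e "DROP TABLE USER_export"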

Replication master binlog rotation when network is unavailable

Posted: 28 Jun 2013 04:45 PM PDT

I recently experienced an issue where the binlog file on the master rotated because network connectivity between the slave and master was unavailable.

After solving the network issue, the slave was not able to follow the master from the binlog file and position it had been using.

That was solved by purging the binlogs on the master up to the last binlog, and pointing the slave at that last binlog and at the position in the previous binlog it had been following.
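For reference, a minimal sketch of that repointing step (the binlog file name and position are hypothetical placeholders):

-- on the master
PURGE BINARY LOGS TO 'mysql-bin.000123';

-- on the slave
STOP SLAVE;
CHANGE MASTER TO MASTER_LOG_FILE = 'mysql-bin.000123',
                 MASTER_LOG_POS  = 4;
START SLAVE;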

I am wondering if this issue is normal?

Mongo connection failing with 'Transport endpoint is not connected'

Posted: 28 Jun 2013 02:45 PM PDT

I have a 2 server installation.

A web server with Apache and a DB server with MongoDB.

I am load testing it, and at ~300 RPS I am getting this error:

PHP Fatal error: Uncaught exception 'MongoConnectionException' with message 'Transport endpoint is not connected'.

The only thing I am noticing is that right before the failure, I get a lot of connections in Mongo:

insert  query  update  delete  getmore  command  flushes  mapped  vsize  res  faults  locked %  idx miss %  qr|qw  ar|aw  netIn  netOut  conn  time
0       659    0       0       0        1        0        208m    1.28g  40m  0       0         0           0|0    0|0    62k    217k    486   03:57:20

Almost 500 connections here... but never more than that!

Mongo is 2.0.3. PHP is 5.3.x (latest of Debian install...)

Help!
