Thursday, April 11, 2013

[how to] SUSPENDED in Index Rebuild SQL Server 2012



SUSPENDED in Index Rebuild SQL Server 2012

Posted: 11 Apr 2013 08:06 PM PDT

In SQL Server 2012, an index rebuild job is taking a very long time (up to 8 hours). However, not even one index rebuild completed, so I stopped the job.

Monitoring the SQL task shows:

state = SUSPENDED
command = ALTER INDEX
application name = Microsoft SQL Server Management Studio
query wait = LCK_M_SCH_M
head blocker = object lock
lock partition = 15
objid = 585105175
subresource = FULL
dbid = 5
id = lockaa7aae200
mode = Sch-S
associatedObjectId = 585105175 1386
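The LCK_M_SCH_M wait means the ALTER INDEX is queued for a schema-modification lock it cannot acquire while another session holds a schema-stability (Sch-S) lock on the table, typically a long-running query. A minimal diagnostic sketch using the standard DMVs:

SELECT r.session_id,
       r.blocking_session_id,   -- the head blocker to investigate
       r.wait_type,
       r.wait_time,
       t.text AS running_statement
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;

Once the blocking session_id is known, DBCC INPUTBUFFER(session_id) shows its last batch.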

Thanks in advance for any helpful info.

When REBUILDing indexes on SQL Server, what bearing does tempdb & LOG disk speed have?

Posted: 11 Apr 2013 08:10 PM PDT

Say I have a data disk that is 50x faster (measured by random write speed) than the LOG and tempdb disks. (Don't ask why; that's something we'll be fixing if needed.)

I have a table with 19 million rows and about 30 indexes (so basically non-trivial).

How much of a factor is the disk speed of the LOG and tempdb drives, compared to giving them the same disk speed as the data disk?

Edit: some more stats.

We moved the LOG to the RAID data drive as a test and unfortunately there was no improvement in the speed of the REBUILD. It still took 1 hour.

  • tempdb is on its own RAID 10 array
  • LOG is on its own RAID 10 array
  • Data is on its own RAID 10 array

  • RAM > 100 GB (yes, 100 GB) and no limit on SQL Server using it for data or index creation

Note this is not a transactional system, rather a Data Warehouse.
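Whether tempdb matters at all for a rebuild depends on where the sort is directed; a sketch (table and index names are hypothetical):

-- With SORT_IN_TEMPDB = OFF (the default), intermediate sort runs stay in
-- the user database, so tempdb disk speed is largely irrelevant; with ON,
-- the sort runs in tempdb and its disks join the critical path.
ALTER INDEX IX_BigTable_SomeCol ON dbo.BigTable
REBUILD WITH (SORT_IN_TEMPDB = ON);

The log disk, by contrast, is always on the path: under the FULL recovery model a rebuild is fully logged, so log write speed bounds it regardless of where the sort happens.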

Insert performance deterioration over network

Posted: 11 Apr 2013 05:37 PM PDT

I have a strange issue with a Windows application (written in Delphi) running on a Windows 7 client. The application loops, inserting rows into a SQL Server table with a simple INSERT statement repeated against the same table. For the first 9-10 inserts the performance is good (approx. 10 milliseconds per insert). From the 10th insert, the time suddenly increases to approx. 400-500 ms. It's not a gradual deterioration: from that point on the performance is constant, always 400-500 ms. The change happens only between the 9th and 10th insert, jumping from 10 to 500 ms. During this time, the CPUs of client and server do not seem to be overloaded, and neither does the network.

The server is an HP ML 370 G6 with Windows Server 2008 R2 and multiple SQL Server versions installed: 2005, 2008 and 2008 R2.

If I attach the DB to 2005, 2008 or 2008 R2, it's the same. If I move the DB files from the SAN to a local SAS disk, it's the same. If I change the client, it's the same.

But... if I run the application directly on the server, the performance is excellent, straight through to the end of the loop.

So I decided to use another server... well, it's not a server at all, just a dual-core PC, but it has the same operating system (Server 2008 R2) and SQL Server 2008 installed. If I move the database to this "server" and launch the application from the same client, I get excellent performance, not much different from running the application directly on the HP server.
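One thing worth capturing while the slow inserts run is what the server itself is waiting on; a small diagnostic sketch:

-- If ASYNC_NETWORK_IO dominates, the server is waiting on the client or the
-- network between them; WRITELOG would instead point at per-insert log flushes.
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type IN ('ASYNC_NETWORK_IO', 'WRITELOG')
ORDER BY wait_time_ms DESC;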

What else can I do? Am I missing something?

Thanks.

Andrea

Tuning advisor with Extended events?

Posted: 11 Apr 2013 01:36 PM PDT

With SQL traces, I was able to analyze them with Database Engine Tuning Advisor to obtain basic performance tuning recommendations (missing indexes, statistics, ...).

Now, with SQL Server 2012 and Extended Events, how can I do something similar?
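One route, sketched under the assumption that your tuning advisor version cannot consume .xel files directly, is to capture the workload with an event_file target and load the events into a table afterwards; the session name and file path are hypothetical:

CREATE EVENT SESSION dta_workload ON SERVER
ADD EVENT sqlserver.sql_batch_completed
    (ACTION (sqlserver.sql_text, sqlserver.database_name))
ADD TARGET package0.event_file (SET filename = N'C:\xe\dta_workload.xel');

ALTER EVENT SESSION dta_workload ON SERVER STATE = START;

-- Later, read the captured batches back for analysis:
SELECT CAST(event_data AS xml) AS event_data
FROM sys.fn_xe_file_target_read_file('C:\xe\dta_workload*.xel', NULL, NULL, NULL);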

Thanks

SQL Server bulk insert performance

Posted: 11 Apr 2013 12:30 PM PDT

I'm running SQL Server 2012 and the machine has 128 GB of RAM installed. Bulk insert performance is very low: a bulk insert of 10 million records takes about 2 hours. We have checked CPU, disk I/O and memory, and everything seems to be OK. I've just checked the wait types and found that for CXPACKET, wait_time_ms is 79346890761, max_wait_time_ms is 2164694 and signal_wait_time_ms is 2849080871. Are these numbers too high, and could they be the reason for the slow performance? The degree of parallelism is set to 0.
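Those CXPACKET figures are cumulative since the instance last started, so by themselves they don't prove parallelism is the problem; a common experiment is to snapshot the wait before and after the load, then cap parallelism and compare. A sketch (the value 8 is illustrative):

SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'CXPACKET';

-- Cap instance-wide parallelism for the test (0 means unlimited):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 8;
RECONFIGURE;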

Would using timestamps solve consistency issues with snapshot isolation?

Posted: 11 Apr 2013 01:53 PM PDT

I'm considering using snapshot isolation for some large queries being sent from an Access 2007 frontend. These queries occasionally cause ASYNC_NETWORK_IO waits, and some blocking as well. I'm trying to solve this with application design, but since I can't control how the Jet engine loads data from the server, I haven't been able to completely eliminate the problem.

I know that one major concern with using snapshot isolation is the problem of updating dirty records, such as expressed in this blog post.

In my database, all of the tables have timestamp fields, which I would think would prevent the problem of updating dirty records. It might cause some update conflicts, but that seems like an acceptable outcome, while getting incorrect data is not. Am I right that having timestamps is a solution for potential data integrity problems, or could I still run into trouble?

EDIT: Max Vernon asked for an example. In the article I linked to, Craig Freedman uses this as an example:

We can demonstrate this outcome using SQL Server. Note that snapshot isolation is only available in SQL Server 2005 and must be explicitly enabled on your database:

alter database database_name set allow_snapshot_isolation on

Begin by creating a simple table with two rows representing two marbles:

create table marbles (id int primary key, color char(5))
insert marbles values(1, 'Black')
insert marbles values(2, 'White')

Next, in session 1 begin a snapshot transaction:

set transaction isolation level snapshot
begin tran
update marbles set color = 'White' where color = 'Black'

Now, before committing the changes, run the following in session 2:

set transaction isolation level snapshot
begin tran
update marbles set color = 'Black' where color = 'White'
commit tran

Finally, commit the transaction in session 1 and check the data in the table:

commit tran
select * from marbles

Here are the results:

id          color
1           White
2           Black

This is an issue without using any sort of row versioning. My question is: do timestamps solve this problem, or do they leave similar issues outstanding? Or does the row versioning SQL Server performs when you enable snapshot isolation solve the problem and make timestamps unnecessary? I'm assuming it doesn't, based on Freedman's example.
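For reference, a sketch of how a timestamp (rowversion) column is typically used as an optimistic concurrency guard; the rv column and the variable are hypothetical additions to the marbles example:

DECLARE @rv binary(8);
SELECT @rv = rv FROM marbles WHERE id = 1;   -- capture the version at read time

UPDATE marbles
SET color = 'White'
WHERE id = 1
  AND rv = @rv;                              -- succeeds only if the row is unchanged

IF @@ROWCOUNT = 0
    RAISERROR('Row changed since it was read; reload and retry.', 16, 1);

This turns a silent lost update into a detectable conflict, which matches the trade-off described in the question: more update conflicts, but no incorrect data applied.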

Types of cursor in sql server [closed]

Posted: 11 Apr 2013 11:38 AM PDT

How many types of cursors does SQL Server support? Please can anyone explain all the cursor types with examples. Any help appreciated!
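Not a full answer, but for orientation, a minimal sketch of declaring one of the documented types (STATIC); FORWARD_ONLY, KEYSET and DYNAMIC are declared the same way:

-- A STATIC cursor works on a tempdb snapshot of the result set, so changes
-- made to the underlying tables after OPEN are not visible through it.
DECLARE c CURSOR STATIC FOR
    SELECT name FROM sys.tables;

OPEN c;
FETCH NEXT FROM c;
CLOSE c;
DEALLOCATE c;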

MySQL my.cnf Optimization

Posted: 11 Apr 2013 06:43 PM PDT

We are an email marketing company and we recently switched our setup to MySQL. We need to configure MySQL (my.cnf) for extreme performance.

We have tried to configure my.cnf, but heavy queries get stuck for many hours without producing a result. The same queries take one or two hours to complete on our SQL Server (8 GB RAM).

It's a dedicated MySQL server.

  • OS = Ubuntu
  • Installed RAM = 512 GB
  • Table type = MyISAM

We have very large tables (a few having more than 70 million rows). Almost every run matches against new tables, so index caching won't help that much, although the necessary indexes exist.

We do not have any application connecting to the database, just 2 users who write ad hoc queries directly.

Below are some variables we have changed from the default values.

key_buffer_size         = 250G
read_buffer_size        = 2G
read_rnd_buffer_size    = 50M
join_buffer_size        = 25M
tmp_table_size          = 5G
sort_buffer_size        = 2G
innodb_buffer_pool_size = 2G
query_cache_limit       = 10M
query_cache_size        = 100M
max_connections         = 20

Below is the MySQLTuner recommendation:

-------- General Statistics --------------------------------------------------
[--] Skipped version check for MySQLTuner script
[OK] Currently running supported MySQL version 5.5.29-0ubuntu0.12.04.2
[OK] Operating on 64-bit architecture

-------- Storage Engine Statistics -------------------------------------------
[--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
[--] Data in MyISAM tables: 1909G (Tables: 940)
[--] Data in InnoDB tables: 2G (Tables: 3)
[--] Data in PERFORMANCE_SCHEMA tables: 0B (Tables: 17)
[!!] Total fragmented tables: 3

-------- Performance Metrics -------------------------------------------------
[--] Up for: 6m 59s (9 q [0.021 qps], 6 conn, TX: 10K, RX: 625)
[--] Reads / Writes: 100% / 0%
[--] Total buffers: 252.1G global + 4.1G per thread (20 max threads)
[OK] Maximum possible memory usage: 333.6G (66% of installed RAM)
[OK] Slow queries: 0% (0/9)
[OK] Highest usage of available connections: 5% (1/20)
[OK] Key buffer size / total MyISAM indexes: 250.0G/131.6G
[!!] Query cache efficiency: 0.0% (0 cached / 4 selects)
[OK] Query cache prunes per day: 0
[OK] Temporary tables created on disk: 0% (0 on disk / 2 total)
[OK] Thread cache hit rate: 83% (1 created / 6 connections)
[OK] Table cache hit rate: 78% (26 open / 33 opened)
[OK] Open file limit used: 1% (18/1K)
[OK] Table locks acquired immediately: 100% (36 immediate / 36 locks)
[!!] Connections aborted: 16%
[!!] InnoDB data size / buffer pool: 2.2G/2.0G

-------- Recommendations -----------------------------------------------------
General recommendations:
    Run OPTIMIZE TABLE to defragment tables for better performance
    MySQL started within last 24 hours - recommendations may be inaccurate
    Enable the slow query log to troubleshoot bad queries
    Your applications are not closing MySQL connections properly
Variables to adjust:
    query_cache_limit (> 10M, or use smaller result sets)
    innodb_buffer_pool_size (>= 2G)
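One observation, offered as a sketch rather than a prescription: read_buffer_size, sort_buffer_size and friends are allocated per thread (and per operation), so 2G each is extreme even with 512 GB of RAM, which is what MySQLTuner's "4.1G per thread" line reflects. These are dynamic variables, so smaller values can be trialled before committing them to my.cnf (values below are illustrative):

SET GLOBAL sort_buffer_size     = 256 * 1024 * 1024;
SET GLOBAL read_buffer_size     = 8 * 1024 * 1024;
SET GLOBAL read_rnd_buffer_size = 16 * 1024 * 1024;
SET GLOBAL join_buffer_size     = 64 * 1024 * 1024;
-- New connections pick up the new globals; existing sessions keep the old ones.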

Any help would be greatly appreciated.

Column or new table

Posted: 11 Apr 2013 02:10 PM PDT

I am working on a student management system for a private school. In the "student" table I need to indicate which public school the student came from. Can I just keep this information as a "school" column in the student table, or do I need to make a new "school" table? NB: the school has just one attribute, its name.
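A sketch of the separate-table option (MySQL-flavoured DDL; all names are hypothetical): even with a single attribute, a school table plus a foreign key keeps one spelling per school and makes a rename a one-row update.

CREATE TABLE school (
  school_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  name      VARCHAR(150) NOT NULL UNIQUE
);

ALTER TABLE student
  ADD COLUMN school_id INT NULL,
  ADD CONSTRAINT fk_student_school
      FOREIGN KEY (school_id) REFERENCES school (school_id);

If duplicate or inconsistently spelled school names are acceptable, a plain column works too; the extra table buys consistency, not capability.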

Oracle: Full database (user) restore requires drop?

Posted: 11 Apr 2013 05:03 PM PDT

In our development environment (Oracle 11g), we often have to restore databases (users) to previous versions to test upgrades, verify defects, etc.

We are using expdp/impdp to perform our backup/restores.

For back-ups we run...

expdp 'sys/password@orcl as sysdba' schemas=CurrentUser directory=DirLabel Dumpfile=MyDatabase.dmp logfile=MyDatabase.log  

For restores we have been dropping the user (database) manually, and then running

impdp 'sys/password@orcl as sysdba' directory=DirLabel Dumpfile=MyOldDatabase.dmp logfile=MyRestore.log remap_schema=OLDUser:CurrentUser  

We are now trying to automate this process and have a question.

Do we have to drop the current user (database) prior to the restore, or is there some option/flag on impdp that can be used to force a complete restore? I looked at TABLE_EXISTS_ACTION=REPLACE, but I noticed I still get the same "errors" during the restore if I leave that option off the command line.
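For what it's worth, TABLE_EXISTS_ACTION only governs tables; other pre-existing objects (sequences, procedures, and so on) are skipped with "already exists" errors regardless, which may be the errors you're seeing. A sketch of keeping an explicit drop in the automation (CurrentUser stands for the schema being restored):

-- Run as a privileged user before impdp; CASCADE drops the schema's objects.
DROP USER CurrentUser CASCADE;

-- A schema-mode dump taken by a privileged user carries the CREATE USER
-- metadata, so the subsequent impdp with remap_schema can recreate the schema.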

Thoughts?


Individual queries run in 10 ms; with UNION ALL they take 290 ms+ (7.7M-record MySQL DB). How to optimise?

Posted: 11 Apr 2013 05:45 PM PDT

I have a table that stores available appointments for teachers, allowing two kinds of insertions:

  1. Hourly based: with total freedom to add unlimited slots per day per teacher (as long as slots don't overlap): on 15/Apr a teacher may have slots at 10:00, 11:00, 12:00 and 16:00. A person is served after choosing a specific teacher time/slot.

  2. Time period/range: on 15/Apr another teacher may work from 10:00 to 12:00 and then from 14:00 to 18:00. A person is served by order of arrival, so if a teacher works from 10:00 to 12:00, all persons that arrive in this period will be attended by order of arrival (local queue).

Since I have to return all available teachers in a search, I need all slots to be saved in the same table as the order-of-arrival ranges. This way I can order by date_from ASC, showing the earliest available slots first in the search results.

Current table structure

CREATE TABLE `teacher_slots` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `teacher_id` mediumint(8) unsigned NOT NULL,
  `city_id` smallint(5) unsigned NOT NULL,
  `subject_id` smallint(5) unsigned NOT NULL,
  `date_from` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  `date_to` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  `status` tinyint(4) NOT NULL DEFAULT '0',
  `order_of_arrival` tinyint(1) unsigned NOT NULL DEFAULT '0',
  PRIMARY KEY (`id`),
  KEY `by_hour_idx` (`teacher_id`,`order_of_arrival`,`status`,`city_id`,`subject_id`,`date_from`),
  KEY `order_arrival_idx` (`order_of_arrival`,`status`,`city_id`,`subject_id`,`date_from`,`date_to`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;

Search query

I need to filter by: actual datetime, city_id, subject_id and if a slot is available (status = 0).

For hourly based slots I have to show all available slots for the closest available day for every teacher (show all time slots of a given day, and never more than one day for the same teacher). (I got the query with help from mattedgod.)

For range based (order_of_arrival = 1), I have to show the closest available range, just once per teacher.

The first query runs individually in around 0.10 ms, the second in 0.08 ms, and the UNION ALL takes an average of 300 ms.

(
    SELECT id, teacher_slots.teacher_id, date_from, date_to, order_of_arrival
    FROM teacher_slots
    JOIN (
        SELECT DATE(MIN(date_from)) as closestDay, teacher_id
        FROM teacher_slots
        WHERE date_from >= '2014-04-10 08:00:00' AND order_of_arrival = 0
              AND status = 0 AND city_id = 6015 AND subject_id = 1
        GROUP BY teacher_id
    ) a ON a.teacher_id = teacher_slots.teacher_id
        AND DATE(teacher_slots.date_from) = closestDay
    WHERE teacher_slots.date_from >= '2014-04-10 08:00:00'
        AND teacher_slots.order_of_arrival = 0
        AND teacher_slots.status = 0
        AND teacher_slots.city_id = 6015
        AND teacher_slots.subject_id = 1
)

UNION ALL

(
    SELECT id, teacher_id, date_from, date_to, order_of_arrival
    FROM teacher_slots
    WHERE order_of_arrival = 1 AND status = 0 AND city_id = 6015 AND subject_id = 1
        AND (
            (date_from <= '2014-04-10 08:00:00' AND date_to >= '2014-04-10 08:00:00')
            OR (date_from >= '2014-04-10 08:00:00')
        )
    GROUP BY teacher_id
)

ORDER BY date_from ASC;

Question

Is there a way to optimise the UNION so I can get a reasonable response time of at most ~20 ms, or even return range-based + hourly-based results in just one query (with an IF, etc.)?

SQL Fiddle: http://www.sqlfiddle.com/#!2/59420/1/0

EDIT:

I tried some denormalization by creating a field "only_date_from" where I stored only the date, so I could change this...

DATE(MIN(date_from)) as closestDay / DATE(teacher_slots.date_from) = closestDay  

... to this

MIN(only_date_from) as closestDay / teacher_slots.only_date_from = closestDay  

It already saved me 100ms! Still 200ms on average.
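One more experiment, sketched under the assumption that the only_date_from column from the edit exists: an index that leads with the constant equality predicates and ends with teacher_id and the date, so the grouped MIN() subquery can be resolved from the index alone.

ALTER TABLE teacher_slots
  ADD KEY search_idx (order_of_arrival, status, city_id, subject_id,
                      teacher_id, only_date_from);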

how to resolve ELFCLASS32 error in mysql for UDF lib_mysqludf_sys.so

Posted: 11 Apr 2013 07:31 PM PDT

I'm trying to install lib_mysqludf_sys.so in MySQL to create a sys_exec function which will run an external Java program.

First of all I issued this command:

CREATE FUNCTION sys_exec RETURNS INT SONAME 'lib_mysqludf_sys.so';

and I got this error:

ERROR 1126 (HY000): Can't open shared library 'lib_mysqludf_sys.so' (errno: 0 /usr/lib/mysql/plugin/lib_mysqludf_sys.so: wrong ELF class: ELFCLASS32)

This error is due to the 64-bit OS...

I'm using 64-bit Ubuntu and this library file is 32-bit.

I can't find a 64-bit build of the library.

I have gone through this:

gcc -Wall -m64 -I/usr/include/mysql -I. -shared lib_mysqludf_sys.c -o /usr/lib/lib_mysqludf_sys.so -fPIC

This also gives an error that the file is not present. How should I resolve this?

Why is a "non-single-group group function" allowed in a subselect but not on it's own?

Posted: 11 Apr 2013 01:27 PM PDT

Why does the first query not fail with the same error as the second:

with w as (select 1 product_id, 10 units from dual union all select 2, 5 from dual)
select sum(units) from (select product_id, sum(units) units from w);

/*
SUM(UNITS)
----------
        15
*/

with w as (select 1 product_id, 10 units from dual union all select 2, 5 from dual)
select product_id, sum(units) units from w;

/*
Error starting at line 7 in command:
with w as (select 1 product_id, 10 units from dual union all select 2, 5 from dual)
select product_id, sum(units) units from w
Error at Command Line:8 Column:8
Error report:
SQL Error: ORA-00937: not a single-group group function
00937. 00000 -  "not a single-group group function"
*Cause:
*Action:
*/
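For comparison, a sketch of the form ORA-00937 is asking for: once product_id appears alongside the aggregate in the select list, a GROUP BY makes each output row correspond to a single group.

with w as (select 1 product_id, 10 units from dual union all select 2, 5 from dual)
select product_id, sum(units) units
from w
group by product_id;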

edit: version info added:

select * from v$version;
/*
BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Release 11.2.0.3.0 - 64bit Production
PL/SQL Release 11.2.0.3.0 - Production
CORE    11.2.0.3.0  Production
TNS for Linux: Version 11.2.0.3.0 - Production
NLSRTL Version 11.2.0.3.0 - Production
*/

edit: non-default params added:

select name, value from v$parameter where isdefault = 'FALSE' order by name;
/*
NAME                              VALUE
--------------------------------- ------------------------------------------------------------------
aq_tm_processes                   1
archive_lag_target                3600
audit_file_dest                   /u01/app/oracle/admin/oracle/adump
audit_trail                       NONE
compatible                        11.2.0.3
control_file_record_keep_time     31
control_files                     /home/oracle/cfile/controlfile.dat, +DATA/oracle/controlfile/current.915.730988607, +FRA/oracle/controlfile/current.970.730988607
core_dump_dest                    /u01/app/oracle/admin/oracle/cdump
db_block_size                     4096
db_create_file_dest               +DATA
db_domain
db_file_multiblock_read_count     1
db_name                           oracle
db_recovery_file_dest             +FRA
db_recovery_file_dest_size        375809638400
diagnostic_dest                   /u01/app/oracle
dispatchers                       (PROTOCOL=TCP) (SERVICE=oracleXDB)
event
filesystemio_options              setall
global_names                      TRUE
job_queue_processes               10
log_archive_dest_1
log_archive_dest_10               LOCATION=USE_DB_RECOVERY_FILE_DEST MANDATORY REOPEN=60
log_checkpoint_timeout            30
memory_max_target                 36507222016
memory_target                     36507222016
nls_language                      ENGLISH
nls_length_semantics              BYTE
nls_territory                     UNITED KINGDOM
open_cursors                      300
pga_aggregate_target              0
processes                         150
remote_login_passwordfile         EXCLUSIVE
sga_max_size                      32212254720
sga_target                        0
shared_pool_size                  536870912
smtp_out_server                   mailout.redacted.com
streams_pool_size                 1073741824
undo_management                   AUTO
undo_retention                    900
undo_tablespace                   TS_UNDO

41 rows selected
*/

With SQL Server 2008, how can I roll back a transaction done some time before?

Posted: 11 Apr 2013 12:36 PM PDT

I am coding up a script to merge some accounts (and other tables) as an out-of-band customer request. All the updates will be done in a single transaction. They are all updates (no inserts or deletes) - effectively changing the "account_id" of some entities to achieve the account merge.

That's all fine, but my boss wants to be able to roll back the affected accounts after other transactions have occurred, i.e. DAYS down the road on a very active server.

My ideas are:

  1. print out all the affected rows (select all the previous data to console) so I could manually move everything back if required (there aren't too many affected tables, about 8); a sketch of this idea follows the list
  2. export the transaction and "somehow" roll that individual transaction back, without affecting any subsequent transactions
  3. something else?
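A sketch of the in-database variant of idea 1 (table and column names are hypothetical): snapshot the before-image of every row the merge will touch, so a reversal UPDATE can be generated days later without replaying the log.

-- Capture the rows as they are now, tagged with the capture time:
SELECT entity_id, account_id, SYSDATETIME() AS captured_at
INTO dbo.account_merge_backup
FROM dbo.entities
WHERE account_id IN (111, 222);   -- the accounts being merged (hypothetical IDs)

-- Days later, reverse the merge for exactly those rows:
-- UPDATE e
-- SET    e.account_id = b.account_id
-- FROM   dbo.entities e
-- JOIN   dbo.account_merge_backup b ON b.entity_id = e.entity_id;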

Many thanks!

Leaf nodes for averages

Posted: 11 Apr 2013 05:01 PM PDT

I have the following MDX tuple calculation for my KPI in Dashboard Designer:

AVG([Person].[Person].children,[Measures].[Hours])  

This works perfectly when, for instance, I view it by Team name.

However, when I view it by [Person] it returns no values. Does AVG not work when you're looking directly at the leaf nodes, or is there something else I'm doing wrong?
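That is the expected behaviour of this pattern: a leaf member has no children, so the set passed to AVG is empty. A sketch of one common workaround, falling back to the member's own value at leaf level:

IIF(
  IsLeaf([Person].[Person].CurrentMember),
  [Measures].[Hours],   // no children to average over, show the raw measure
  AVG([Person].[Person].CurrentMember.Children, [Measures].[Hours])
)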

Breaking Semisynchronous Replication in MySQL 5.5

Posted: 11 Apr 2013 03:44 PM PDT

I've set up Semisynchronous Replication between two MySQL 5.5 servers running on Windows 7.

My application is running and updating the database of the master server and same is being updated in the slave database server.

But due to some unknown reason, replication sometimes breaks.

On running the command:

SHOW STATUS LIKE 'Rpl_semi_sync%';  

It gives this status:

'Rpl_semi_sync_master_no_times', '0'
'Rpl_semi_sync_master_no_tx', '0'
'Rpl_semi_sync_master_status', 'ON'     <<-------------
'Rpl_semi_sync_master_timefunc_failures', '0'
'Rpl_semi_sync_master_tx_avg_wait_time', '338846'
'Rpl_semi_sync_master_tx_wait_time', '29479685'
'Rpl_semi_sync_master_tx_waits', '87'
'Rpl_semi_sync_master_wait_pos_backtraverse', '0'
'Rpl_semi_sync_master_wait_sessions', '0'
'Rpl_semi_sync_master_yes_tx', '3106'

Ideally, in semisynchronous replication, when the sync breaks the status should change to OFF, since the master is not able to receive any acknowledgement from the slave. Please help us in this regard.
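Two more things worth checking when it happens, sketched below: whether any semisynchronous slave is still attached, and how long the master waits for an acknowledgement before silently continuing asynchronously (while the plugin itself stays enabled).

SHOW STATUS LIKE 'Rpl_semi_sync_master_clients';      -- 0 means no semi-sync slave attached
SHOW VARIABLES LIKE 'rpl_semi_sync_master_timeout';   -- ms to wait for an ACK before falling back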

Public Database for Mobile Application

Posted: 11 Apr 2013 11:01 AM PDT

What are the proper steps to hosting a public database? Currently I am building a mobile application with Adobe Flash Builder. I have used Microsoft SQL Server, MySQL, as well as WAMP Server. All of these, I have been told, are private databases and thus will only allow my application to show the information as long as the users are on the same internet connection. I have also been told that port forwarding my database isn't safe either.

In this case, what is the best approach to making my database public, or what other option should I use to make my application accessible to worldwide users?

DB2 Server Table Space Locked

Posted: 11 Apr 2013 04:01 PM PDT

At work we keep receiving the following DataException seemingly at random when one of our processes tries to write/access a table for one of our clients:

com.ibm.db.DataException: A database manager error occurred. :
[IBM][CLI Driver][DB2/NT64] SQL0290N  Table space access is not allowed.  SQLSTATE=55039

Has anyone encountered this? I'm not the person who primarily does administrative tasks on our databases, but even they seem to be having difficulty finding the root of this problem. Any suggestions? This error comes up for only one of our clients at a time, and it generally seems to rotate. We have Rackspace support, but they won't be of much help unless we can provide screenshots, etc. at the exact moment this occurs.

Apologies if this post may be too vague, please let me know what information to supply to clarify things more. I'm one of the developers in my office, but I don't primarily handle the administrative tasks on our databases.

edit: We spoke with IBM, and this could possibly be caused by some sort of virus scan being run by IBM/Rackspace as part of maintenance? They said this somewhat dubiously though, so I doubt it is the culprit, because the tables remain locked for variable amounts of time.
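The table space state itself can be checked the next time the error appears; a sketch using the standard administrative view (a NORMAL state rules the table space out, while states such as backup pending, load in progress or quiesced would explain SQL0290N):

SELECT tbsp_name, tbsp_state
FROM sysibmadm.tbsp_utilization;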

unable to login oracle as sysdba

Posted: 11 Apr 2013 07:01 PM PDT

I just got 11gR2 running and was able to connect as sysdba. I shut down and started up the database to mount a new pfile. Now I cannot log in as sysdba. My password file parameter is:

 *.remote_login_passwordfile='EXCLUSIVE'  

I am using sqlplus within the server. This is not a remote connection.

[oracle@oel56 ~]$ sqlplus /nolog

SQL*Plus: Release 11.2.0.1.0 Production on Tue Feb 5 22:50:46 2013

Copyright (c) 1982, 2009, Oracle.  All rights reserved.

SQL> conn / as sysdba
ERROR:
ORA-01031: insufficient privileges

Here's some more information:

[oracle@oel56 ~]$ grep -E "ine SS_DBA|ine SS_OPER" $ORACLE_HOME/rdbms/lib/config.c
#define SS_DBA_GRP "oracle"
#define SS_OPER_GRP "oracle"
[oracle@oel56 ~]$ id oracle
uid=500(oracle) gid=500(oracle) groups=500(oracle),54321(oinstall),54322(dba),54323(oper) context=user_u:system_r:unconfined_t

"connectivity libraries needed are not found" error in IBM Data Studio

Posted: 11 Apr 2013 03:01 PM PDT

UPDATE

I am getting the following error when I try to create a new database in IBM Data Studio v3.1.1.0.

The connectivity libraries that are needed for local or remote non-JDBC operations were not found. To provide these libraries, you can install IBM data server client or a local DB2 server instance.  

I have already started the instance using

db2start  

command.

After searching exhaustively, I am not able to find any help on the internet regarding this error.

mysql replication delay very serious

Posted: 11 Apr 2013 12:01 PM PDT

We use an SSD disk for the master database and a SATA disk for the slave.

On high write load (300 writes/s), slave delay is very serious. I have set:

  • innodb_flush_log_at_trx_commit = 0
  • increased innodb_log_file_size

on the slave, but the delay continues.

The slave is for backup only; it serves no read or write requests.

Seconds_Behind_Master: 92265, and it continues to increase.

Some config:

innodb_buffer_pool_size=15G
innodb_log_file_size=1G
innodb_log_buffer_size=32M

du -hs mysql
338G    mysql

iostat
Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sde               0.00    73.00   34.50  185.00   552.00  7086.00    69.59     5.63   25.63   19.19   26.83   3.65  80.20
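A quick way to confirm where the lag accumulates, sketched below: in MySQL 5.5 the slave applies events with a single SQL thread, so if the I/O thread keeps up while the SQL thread falls behind, the SATA disk's apply speed is the bottleneck and slave-side buffer tuning can only help so much.

SHOW SLAVE STATUS\G
-- Compare Read_Master_Log_Pos (I/O thread, downloading the binlog) with
-- Exec_Master_Log_Pos (SQL thread, applying it): a steadily growing gap
-- means events arrive faster than the slave can execute them.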

How to insert into junction table using triggers

Posted: 11 Apr 2013 02:01 PM PDT

Sorry in advance if this is "basic SQL." I wanted to know how to update my junction tables automatically. For example, these are my tables.

Artist and Song are base tables and SongArtist is the junction table. Everything in SongArtist is both a PK and an FK.

CREATE TABLE IF NOT EXISTS `Artist` (
  `artistID` INT NOT NULL AUTO_INCREMENT,
  `artistName` VARCHAR(150) NOT NULL,
  PRIMARY KEY (`artistID`))
ENGINE = InnoDB

CREATE TABLE IF NOT EXISTS `Song` (
  `songName` VARCHAR(150) NOT NULL,
  `songID` INT NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`songID`))
ENGINE = InnoDB

CREATE TABLE IF NOT EXISTS `SongArtist` (
  `songID` INT NOT NULL,
  `artistID` INT NOT NULL,
  PRIMARY KEY (`songID`, `artistID`),
  INDEX `fk_Artist_Artist_idx` (`artistID` ASC),
  INDEX `fk_Song_Song_idx` (`songID` ASC),
  CONSTRAINT `fk_Song_Song`
    FOREIGN KEY (`songID`)
    REFERENCES `Song` (`songID`)
    ON DELETE CASCADE
    ON UPDATE CASCADE,
  CONSTRAINT `fk_Artist_Artist`
    FOREIGN KEY (`artistID`)
    REFERENCES `Artist` (`artistID`)
    ON DELETE CASCADE
    ON UPDATE CASCADE)
ENGINE = InnoDB

I created some triggers like this, but they don't seem to work: I can't INSERT INTO the junction table when I only know one of its fields, because both columns are part of the PK.

CREATE TRIGGER after_song_insert AFTER INSERT ON Song
FOR EACH ROW
BEGIN
    INSERT INTO SongArtist (songID) values (songID);
END;
CREATE TRIGGER after_song_update AFTER UPDATE ON Song
FOR EACH ROW
BEGIN
    INSERT INTO SongArtist (songID) values (songID);
END;
CREATE TRIGGER after_song_delete AFTER DELETE ON Song
FOR EACH ROW
BEGIN
    DELETE FROM SongArtist (songID) values (songID);
END;
$$
DELIMITER ;

What should I do?
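For reference, a sketch of the usual pattern: the junction row is inserted explicitly once both keys are known, rather than from a single-table trigger that only ever sees one half of the composite key.

INSERT INTO Artist (artistName) VALUES ('Some Artist');
SET @artist_id = LAST_INSERT_ID();

INSERT INTO Song (songName) VALUES ('Some Song');
SET @song_id = LAST_INSERT_ID();

-- Only now are both halves of the composite key available:
INSERT INTO SongArtist (songID, artistID) VALUES (@song_id, @artist_id);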

Data dictionary best practices in SQL Server 2008 r2

Posted: 11 Apr 2013 08:01 PM PDT

We are interested in sharing the metadata and data dictionary among the team. I know that we can use extended properties for this purpose, but in my experience they get out of date easily, because team members tend to forget to update them or skip this step.

I'm wondering if there is a more convenient way to create the data dictionary, one that can be maintained with the least amount of effort and time.
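Whatever tooling you settle on, staleness can at least be made visible; a sketch that lists columns still lacking an MS_Description extended property, suitable for a scheduled report:

SELECT s.name AS schema_name, t.name AS table_name, c.name AS column_name
FROM sys.columns AS c
JOIN sys.tables  AS t ON t.object_id = c.object_id
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
LEFT JOIN sys.extended_properties AS ep
       ON ep.major_id = c.object_id
      AND ep.minor_id = c.column_id
      AND ep.class = 1                 -- object/column scope
      AND ep.name  = 'MS_Description'
WHERE ep.value IS NULL
ORDER BY s.name, t.name, c.name;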

Thank you.

What GUI tools can be used for PostgreSQL in Mac OS X?

Posted: 11 Apr 2013 06:06 PM PDT

I have recently started using PostgreSQL with my rails app.

I was looking for a SQL GUI to the PostgreSQL database (for Mac).

Anything better than pgadmin3?

Why use both TRUNCATE and DROP?

Posted: 11 Apr 2013 11:50 AM PDT

In the system I work on there are a lot of stored procedures and SQL scripts that make use of temporary tables. After using these tables it's good practice to drop them.

Many of my colleagues (almost all of whom are much more experienced than I am) typically do this:

TRUNCATE TABLE #mytemp
DROP TABLE #mytemp

I typically use a single DROP TABLE in my scripts.

Is there any good reason for doing a TRUNCATE immediately before a DROP?
