Friday, March 15, 2013

[how to] oracle huge hash_area_size


oracle huge hash_area_size

Posted: 15 Mar 2013 08:09 PM PDT

I am wondering: what is the maximum setting for the hash_area_size parameter? I'm trying to perform a huge join which ends up allocating 10 GB of temp space and completes in an hour. 10 GB is not that much by today's standards, and I have the RAM available in the system, so it should be possible to process the join in memory. Is there a way to tell Oracle to do that? When I try to set hash_area_size to 10 GB, it complains, because the parameter seems to be limited to 32-bit integers.
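
For reference, a hedged sketch of the two knobs involved, assuming the documented behavior that hash_area_size only takes effect under a manual workarea policy and is stored as a 32-bit value:

-- hash_area_size is honored only when workarea_size_policy is MANUAL,
-- and as a 32-bit integer it tops out just under 2 GB per hash join:
ALTER SESSION SET workarea_size_policy = MANUAL;
ALTER SESSION SET hash_area_size = 2147483647;

-- Under the default AUTO policy, per-operation memory is carved out of
-- pga_aggregate_target instead, so raising that is the usual alternative:
-- ALTER SYSTEM SET pga_aggregate_target = 12G;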

Cannot initialize the data source object of OLE DB provider “Microsoft.ACE.OLEDB.12.0” for linked server

Posted: 15 Mar 2013 02:25 PM PDT

I'm trying to establish a linked server from SQL Server 2008 R2 to an Access database. The Access database is not password protected. As I have seen in other posts (specifically this one), this is usually due to the current user not having access to the Temp folder inside the NetworkService folder. Since we are running this on Windows Server 2008, the directory structure is quite different from what most are alluding to. I have allowed access to every temp folder in the "root:\Documents and Settings\" directory, but to no avail.

This is the procedure I am using to add the linked server:

EXEC sp_addlinkedserver
    @server = N'OS_Access',
    @provider = N'Microsoft.ACE.OLEDB.12.0',
    @srvproduct = N'',
    @datasrc = N'C:\RTBData\Data\OS.mdb';
GO

It creates the linked server with no problem, but I am not able to view the tables/views of the database. Likewise, my ASP.NET application cannot access it either.

I have tried both ACE and JET (64-bit, by installing the Data Connectivity Components for Office), and neither works. I have also tried configuring the ACE provider with "Dynamic parameter" and "Allow inprocess" set to true.
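
For completeness, those two provider options can also be set from T-SQL rather than through the Object Explorer dialog; a sketch using the undocumented (but widely used) procedure that backs the GUI checkboxes:

-- Equivalent of ticking "Allow inprocess" and "Dynamic parameter"
-- in the provider options dialog:
EXEC master.dbo.sp_MSset_oledb_prop N'Microsoft.ACE.OLEDB.12.0',
     N'AllowInProcess', 1;
EXEC master.dbo.sp_MSset_oledb_prop N'Microsoft.ACE.OLEDB.12.0',
     N'DynamicParameters', 1;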

Additionally, I tried upping the memory available to the MSSQLSERVER service by adding "-g512;" to the startup parameters string in SQL Server Configuration Manager, to rule out memory issues.

If anyone could shed some light on this that would be fantastic! Thanks!

Mysql 5.6 Failure

Posted: 15 Mar 2013 08:25 PM PDT

I have the following in RPM_UPGRADE_HISTORY in /var/lib/mysql

MySQL RPM upgrade to version 5.6.10-1.linux_glibc2.5
'pre' step running at Fri Mar 15 15:41:26 EDT 2013
ERR file(s):
-rw-r----- 1 mysql root 1725 2013-03-15 13:45 /var/lib/mysql/mysql.err

I checked the error file and found this information:

2013-03-15 15:43:22 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2013-03-15 15:43:22 11993 [Note] Plugin 'FEDERATED' is disabled.
2013-03-15 15:43:22 11993 [Note] InnoDB: The InnoDB memory heap is disabled
2013-03-15 15:43:22 11993 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2013-03-15 15:43:22 11993 [Note] InnoDB: Compressed tables use zlib 1.2.3
2013-03-15 15:43:22 11993 [Note] InnoDB: CPU does not support crc32 instructions
2013-03-15 15:43:22 11993 [Note] InnoDB: Using Linux native AIO
2013-03-15 15:43:22 11993 [Note] InnoDB: Initializing buffer pool, size = 128.0M
2013-03-15 15:43:22 11993 [Note] InnoDB: Completed initialization of buffer pool
2013-03-15 15:43:22 11993 [ERROR] InnoDB: auto-extending data file ./ibdata1 is of a different size 640 pages (rounded down to MB) than specified in the .cnf file: initial 768 pages, max 0 (relevant if non-zero) pages!
2013-03-15 15:43:22 11993 [ERROR] InnoDB: Could not open or create the system tablespace. If you tried to add new data files to the system tablespace, and it failed here, you should now edit innodb_data_file_path in my.cnf back to what it was, and remove the new ibdata files InnoDB created in this failed attempt. InnoDB only wrote those files full of zeros, but did not yet use them in any way. But be careful: do not remove old data files which contain your precious data!
2013-03-15 15:43:22 11993 [ERROR] Plugin 'InnoDB' init function returned error.
2013-03-15 15:43:22 11993 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
2013-03-15 15:43:22 11993 [ERROR] Unknown/unsupported storage engine: InnoDB
2013-03-15 15:43:22 11993 [ERROR] Aborting

2013-03-15 15:43:22 11993 [Note] Binlog end
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'partition'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'BLACKHOLE'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'PERFORMANCE_SCHEMA'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'ARCHIVE'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_SYS_DATAFILES'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_SYS_TABLESPACES'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN_COLS'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_SYS_FIELDS'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_SYS_COLUMNS'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_SYS_INDEXES'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_SYS_TABLESTATS'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_SYS_TABLES'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_FT_INDEX_TABLE'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_FT_INDEX_CACHE'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_FT_CONFIG'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_FT_BEING_DELETED'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_FT_DELETED'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_FT_INSERTED'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_FT_DEFAULT_STOPWORD'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_METRICS'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_BUFFER_POOL_STATS'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE_LRU'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX_RESET'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_CMPMEM_RESET'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_CMPMEM'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_CMP_RESET'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_CMP'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_LOCK_WAITS'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_LOCKS'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'INNODB_TRX'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'MRG_MYISAM'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'MEMORY'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'CSV'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'MyISAM'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'sha256_password'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'mysql_old_password'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'mysql_native_password'
2013-03-15 15:43:22 11993 [Note] Shutting down plugin 'binlog'
2013-03-15 15:43:22 11993 [Note] /usr/sbin/mysqld: Shutdown complete

130315 15:43:22 mysqld_safe mysqld from pid file /var/lib/mysql/mysql.pid ended

I installed using the RPM packages from the MySQL web site. What went wrong?
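
Reading the [ERROR] lines above, the mismatch is concrete: ibdata1 on disk is 640 pages (10 MB at the default 16 KB page size), while the configuration asks for 768 pages (12 MB). A hedged sketch of the corresponding my.cnf adjustment, assuming the default page size:

[mysqld]
# Match the system tablespace file that actually exists
# (640 pages x 16 KB = 10 MB), not the 12 MB the failing config requested.
innodb_data_file_path = ibdata1:10M:autoextend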

After Rackspace server creation, PostgreSQL query planner doesn't work as expected

Posted: 15 Mar 2013 06:32 PM PDT

We created an image of one of our database servers in Rackspace. Then we created a new server using that image, expecting things to work. However, index performance seems to be bad, and even the query plans, when comparing the EXPLAIN ANALYZE outputs from the first server to the second one, are different. Basically, the second server (the one that was restored from the saved image) does not seem to rely on indexes as it should.

Is there something related to image restore that would cause this behavior? If so, what may need to be done to fix the issue? Should I look somewhere else?
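
One hedged first step, not a guaranteed fix: if the restored cluster's planner statistics are missing or stale, the planner will misprice index scans, and rebuilding statistics is cheap:

-- Rebuild planner statistics for the current database:
ANALYZE VERBOSE;

-- Then confirm when each table was last analyzed:
SELECT relname, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
ORDER BY relname;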

Thanks very much! I can provide more information if needed, so let me know if that would be useful.

Create schema that allows for fluidity between entities

Posted: 15 Mar 2013 01:16 PM PDT

We are a small, rapidly evolving company, and we recently investigated switching our database from SQL Server to a graph database. We like the graph approach because of the fluidity and openness it offers.

For instance, with a graph DB you can add or remove new relationships quite easily. However, in an RDBMS it is much more difficult.

Is it possible to have some or all of that fluidity in an RDBMS like SQL Server?
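
To make the question concrete, here is one hedged sketch of how graph-style fluidity is sometimes approximated in SQL Server: generic node and edge tables, where adding a new relationship type is a new row rather than a schema change.

-- Nodes and edges as plain tables; EdgeType values like 'WORKS_FOR' or
-- 'KNOWS' can be invented on the fly without altering the schema.
CREATE TABLE dbo.Node (
    NodeId   int IDENTITY PRIMARY KEY,
    NodeType nvarchar(50)  NOT NULL,
    Name     nvarchar(200) NOT NULL
);

CREATE TABLE dbo.Edge (
    FromNodeId int NOT NULL REFERENCES dbo.Node (NodeId),
    ToNodeId   int NOT NULL REFERENCES dbo.Node (NodeId),
    EdgeType   nvarchar(50) NOT NULL,
    PRIMARY KEY (FromNodeId, ToNodeId, EdgeType)
);

The trade-off is that the integrity checks and join performance a conventional schema gives you for free must now be enforced and tuned by hand.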

Database Tasks in C#

Posted: 15 Mar 2013 07:02 PM PDT

We have two database environments: a "Production" database and a "Reporting" database, located on different physical Windows Server 2003 boxes. The Reporting DB is derived from the Production database and contains some modified table schemas plus additional "pre-cooked" data tables which would be too costly to otherwise process on the fly.

What I had originally created was a database script (MS SQL) on our Reporting Server to grab the live Production database at 1 AM, restore it in our Reporting environment, and run all the scripts to get it up and going, such as the table restores and the modification of the data. The problems with this approach are that a) I quickly began to realize how long it takes (there is no option of putting a 'modifieddate' on any production records), and b) a junior DB developer (me) should probably not dive into this stuff yet, because he's more of a programmer :).

Would it be part of 'best practice methodology' to create a Windows service which basically runs the restore at 1 AM and sends an email if the process failed? I'm not sure a C# component will allow me the level of granularity that MS SQL scripts allow. So, using a programming language to accomplish this task: is this nuts, and should this all be done in the database?
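
For what it's worth, the nightly refresh can stay entirely inside SQL Server: a SQL Server Agent job can run plain T-SQL on a schedule and alert on failure (for example via Database Mail). A hedged sketch, where all paths, logical file names, and the wrapper procedure are placeholders:

-- Step 1 of the 1 AM Agent job: replace the reporting copy with the
-- latest production backup (illustrative paths and logical file names):
RESTORE DATABASE Reporting
FROM DISK = N'\\prodserver\backup\Production.bak'
WITH REPLACE,
     MOVE N'Production_Data' TO N'D:\Data\Reporting.mdf',
     MOVE N'Production_Log'  TO N'E:\Logs\Reporting.ldf';

-- Step 2: apply the schema tweaks and rebuild the pre-cooked tables
-- (hypothetical wrapper procedure):
EXEC Reporting.dbo.usp_RebuildPrecookedTables;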

How would you recommend I run the syncing of the two environments?

Let me know what you think!

What is the difference between leaf and non-leaf In SQL Server?

Posted: 15 Mar 2013 08:28 PM PDT

I've been running some index usage reports, and I'm trying to get a definition of leaf and non-leaf. There seem to be both leaf and non-leaf inserts, updates, deletes, page merges, and page allocations. I really don't know what these mean, or whether one is better than the other.

If someone could give a simple definition of each, and also explain why Leaf or Non-leaf matters, it would be appreciated!
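
As a concrete way to see the two layers, the query below (a hedged example; the database and table names are placeholders) lists every level of an index's B-tree: index_level 0 is the leaf, and everything above it is non-leaf.

-- DETAILED mode walks the whole index and reports one row per level.
SELECT index_id, index_level, page_count, record_count
FROM sys.dm_db_index_physical_stats(
         DB_ID(N'MyDatabase'),        -- placeholder database
         OBJECT_ID(N'dbo.MyTable'),   -- placeholder table
         NULL, NULL, 'DETAILED');

Leaf pages hold the actual index rows (for a clustered index, the data itself), while non-leaf pages only hold the keys needed to navigate down the tree, which is why leaf activity usually dwarfs non-leaf activity.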

Best way to perform SQL Server replication over multiple servers?

Posted: 15 Mar 2013 12:00 PM PDT

My current setup requires me to use three separate SQL Server 2012 instances (installed on 3 separate machines, separated geographically) and have the data replicated across all 3. Any changes made in the 7 tables within my database need to be replicated based on where the change was made. For example, if I add a record on Server A, it needs to be added on Server B and Server C; if a record is added on Server B, it needs to be added on Server A and Server C; and the same goes for Server C.

From what I've read, bidirectional transactional replication seems to be the best avenue for me in SQL Server 2012 Standard. The issue is that no one has a very good guide to bidirectional transactional replication that really explains much. I think I've pored over Hilary Cotter's and Microsoft's information a million times, and I still can't get it to work. The fluff data they provide in examples just isn't practical, and no one explains how to do it between two physical machines; they assume you're replicating between two instances on the same machine.

Am I doing something wrong or am I going about this the wrong way?

MySQL backup InnoDB

Posted: 15 Mar 2013 10:34 AM PDT

I have a VoIP server running 24x7. Even at off-peak hours, at least 150+ users are connected. My server runs MySQL with the InnoDB engine on the Windows 2008 platform. I would like to take a full database backup at least twice a day without shutting down my service.

According to Peter Zaitsev, the founder of Percona, mysqldump --single-transaction is not always a good option.

read here if you are interested

As I'm not a DBA, I would like to know which would be the best solution for taking a database backup in my scenario.

Thanks,

Collect CPU, reads and writes when replaying SQL Profiler Trace

Posted: 15 Mar 2013 09:48 AM PDT

I'm trying to replay a trace on a test environment with SQL Profiler in order to benchmark a database prior to making some performance tweaks. I intend to replay the same trace at various stages to compare the stats and measure the improvements.

I've been following the post at keepitsimpleandfast, which seems good; however, I'm not getting the CPU, Reads and Writes data saved when I replay the trace in step 6.

I had to add the CPU, Reads and Writes columns to the events when I set up the TSQL-Replay template for the original trace, but I don't have any options in Replay Configuration to select events or columns.

Strangely, when I ran a separate trace first and then replayed the original trace, nothing got traced at all! The events are different, though: instead of RPC:Completed I get Replay Result Set Event and Replay Result Row Event. Would this make a difference?

Does anyone know how to either amend the columns logged when you replay a trace OR how to set up another trace to trace the replay?

Getting timeouts and backup errors after mirroring and log shipping is setup

Posted: 15 Mar 2013 01:00 PM PDT

OS is Windows Server 2003 32 bit. SQL Server is SQL Server 2008 SP3 32 bit

Today I finished configuring database mirroring and log shipping for 3 databases on this SQL instance, and started receiving timeouts (connection and query timeouts from our applications) and log backup errors (the log backups are performed by the log shipping executable). We never had these kinds of issues before, when database mirroring was not active and log backups were performed via our custom SSIS package instead of log shipping (previously, log backups were also set up for only 1 database every 15 minutes; now it is 3 databases every 5 minutes).

More details:


1) Connection timeout from one of our applications:

Event Text: System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
   at System.Data.ProviderBase.DbConnectionPool.GetConnection(DbConnection owningObject)
   at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
   at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
   at System.Data.SqlClient.SqlConnection.Open()
   at App.Component1.EventBroadcaster.ConnectToSql(SqlConnection connect, Boolean bulk)
   at App.Component1.Broadcaster.ReBroadCastLocate(Int32 LocateId, BroadCastType bcType_)

2) Query timeout from another of our applications:

Event Text: System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
   at App1.RTFS.DBProcess.exeNonQuery(SqlCommand cmd_)
   at App1.RTFS.RTDB.submitOrder(NewOrderParams& params_)
   at App1.RTFS.NewOrderMessageProcess.processSingle()
   at App1.RTFS.NewOrderMessageProcess.processSubmit()
   at App1.RTFS.NewOrderMessageProcess.process()
   at App1.RTFS.Application.processMessageRequest(IMessageProcess msgProcess, SessionID sessionID)

3) Log backup failure (all these are created at the same time in Event log)

Event 3041
BACKUP failed to complete the command BACKUP LOG DB0. Check the backup application log for detailed messages.

Event 701
There is insufficient system memory in resource pool 'default' to run this query.

Event 701
There is insufficient system memory in resource pool 'internal' to run this query.

These errors are not that frequent: 5-6 times a day.


I think the error messages related to the failed log backups ("There is insufficient system memory in resource pool") indicate a possible bottleneck. From what I found, a possible solution is to allocate more memory to SQL Server by using AWE. However, this server only has 4 GB of physical RAM, and SQL Server is already using 1.8 GB (as shown by Task Manager).

How can I better troubleshoot these issues? Is there a way to see detailed memory situation for SQL Server and trace exactly what queries/connections are timing out?
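
Two standard starting points on SQL Server 2008 for the memory side (both are built in, and neither changes anything): DBCC MEMORYSTATUS for a full snapshot, and the memory-clerk DMV to see which component owns the memory.

-- Full (very verbose) memory snapshot:
DBCC MEMORYSTATUS;

-- Top memory consumers by clerk type (column names as of SQL Server 2008):
SELECT TOP (10) type,
       SUM(single_pages_kb + multi_pages_kb) AS memory_kb
FROM sys.dm_os_memory_clerks
GROUP BY type
ORDER BY memory_kb DESC;

For the timeouts themselves, a server-side trace filtered on long-running or aborted batches is the usual companion to these numbers.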

MS Access 2007-2010 SQL - how to create multiple rows from a single-row record

Posted: 15 Mar 2013 01:39 PM PDT

I need help with the table update below, using SQL, in an MS Access 2007-2010 database.

I have a table called table1 which has the entries below:

table1:
---------------------------------------
|  col1  |  col2  |       col3        |
---------------------------------------
|   A1   |  abcd  |    123;456;789    |
---------------------------------------
|   B2   |  efgh  |      246;369      |
---------------------------------------
|   C3   |  ijk   |                   |
---------------------------------------

Using SQL, I want to transform the table above (specifically, splitting the multiple values in column col3 into individual rows) as shown below:

---------------------------------------
|  col1  |  col2  |       col3        |
---------------------------------------
|   A1   |  abcd  |        123        |
---------------------------------------
|   A1   |  abcd  |        456        |
---------------------------------------
|   A1   |  abcd  |        789        |
---------------------------------------
|   B2   |  efgh  |        246        |
---------------------------------------
|   B2   |  efgh  |        369        |
---------------------------------------
|   C3   |  ijk   |                   |
---------------------------------------

How do I do it? I have tried various forums but could not find an exact solution so far.
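
For what it's worth, one pattern that works in Access SQL is the numbers-table trick (a sketch, assuming a helper table named Numbers with a single integer column n holding 1 through at least the length of the longest col3): every position n where the preceding character is a ';' marks the start of an element, and the element runs to the next ';'. Access SQL has no comment syntax, so the explanation stays up here:

SELECT t.col1, t.col2,
       Mid(t.col3, Numbers.n, InStr(Numbers.n, t.col3 & ';', ';') - Numbers.n) AS col3
FROM table1 AS t, Numbers
WHERE Numbers.n <= Len(t.col3)
  AND Mid(';' & t.col3, Numbers.n, 1) = ';'
UNION ALL
SELECT col1, col2, col3
FROM table1
WHERE col3 IS NULL OR col3 = '';

The UNION ALL branch keeps rows like C3 whose col3 is empty; to make the change permanent, the result can feed an INSERT INTO a new table.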

Appreciate your help.

Thanks, Jak.

SQL Server 2008 R2 replication high delivery latency

Posted: 15 Mar 2013 09:25 AM PDT

I am seeing an unusually high delivery latency between our distributor and subscribers, and I do not understand why.

We have in this configuration 3 sql servers using transactional push replication to replicate data from one master server to two reporting servers.

We have 9 publications. The distribution agents for most publications show under 5 ms, but one shows 2000+ ms to both subscribers.

The suspect publication has only 4 small articles (tables) that rarely, if ever, change. I've checked, and each table has a primary key.

I've also checked the @status parameter for each article according to the MS KB article: The distribution agent may experience high latency when you configure transactional replication with articles that are configured not to replicate changes as parameterized statements.

I'm tempted to start dropping articles to find out if one particular table is the culprit.

Does anyone have any suggestions as to what I can look at?

Refresh Postgres Linked Tables in MS Access

Posted: 15 Mar 2013 10:42 AM PDT

Does anybody know of a VBA procedure that automatically links and refreshes Postgres linked tables (via ODBC) in MS Access 2010?

How can I verify I'm using SSL to connect to mysql?

Posted: 15 Mar 2013 02:23 PM PDT

I have configured my server to allow SSL, and have modified my client ~/.my.cnf so I use SSL:

[client]
ssl
ssl-cipher=DHE-RSA-AES256-SHA
ssl-ca=~/certs/ca-cert.pem

When I log in with my client and view the status, it lists a cipher on the SSL line:

mysql> \s
--------------
SSL:            Cipher in use is DHE-RSA-AES256-SHA

Without installing something like Wireshark to verify that the connection is secure, can I assume that I'm connecting via SSL based on this information?
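
There is also a server-side check from within the same session (standard MySQL; an empty Value means the connection is not encrypted). On a connection using the cipher above, it would report:

mysql> SHOW SESSION STATUS LIKE 'Ssl_cipher';
+---------------+--------------------+
| Variable_name | Value              |
+---------------+--------------------+
| Ssl_cipher    | DHE-RSA-AES256-SHA |
+---------------+--------------------+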

Database design that handles data growth with time

Posted: 15 Mar 2013 12:43 PM PDT

I have a big table that stores video rental data with the following columns:

id, title, language, duration, owner, remarks, closing_date_for_loan

Assuming there are thousands of inserts to this table every day, within a year I could have a million rows of data. Searches on this table involve range queries on several columns and will always contain a WHERE closing_date_for_loan > NOW() condition.

To maintain this table, I could periodically run a query to transfer data with closing_date_for_loan < NOW() out of this big table, so that the table won't get too big and cause excessive query times. However, I am looking for a more automated way of handling such data growth, similar to how log rotation works. Any ideas?

Additional Note:

I have tested a few composite indexes, and the query time can range from a few seconds to 50 s when the table reaches 5 million rows. Range queries are hard to optimize, so I am looking for other ways, like keeping the table at a manageable size.
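
If the table lives in MySQL (an assumption; the sketch below uses MySQL range partitioning, and the table name rentals is a placeholder), the log-rotation behavior can be approximated with partitions, since dropping a partition is a near-instant metadata operation compared to a bulk DELETE. One caveat baked into the sketch: the partitioning column must be part of the primary key.

CREATE TABLE rentals (
    id                    int NOT NULL,
    title                 varchar(200),
    language              varchar(50),
    duration              int,
    owner                 varchar(100),
    remarks               varchar(500),
    closing_date_for_loan date NOT NULL,
    PRIMARY KEY (id, closing_date_for_loan)  -- partition key must be in the PK
)
PARTITION BY RANGE (TO_DAYS(closing_date_for_loan)) (
    PARTITION p2012 VALUES LESS THAN (TO_DAYS('2013-01-01')),
    PARTITION p2013 VALUES LESS THAN (TO_DAYS('2014-01-01')),
    PARTITION pmax  VALUES LESS THAN MAXVALUE
);

-- Retiring long-closed loans then becomes metadata-only:
-- ALTER TABLE rentals DROP PARTITION p2012;

Queries that include the closing_date_for_loan predicate also benefit from partition pruning, which keeps the working set small without manual data shuffling.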

Strange characters in mysqlbinlog output

Posted: 15 Mar 2013 08:26 AM PDT

Has anyone experienced this? The data replicates fine, but when it is output by mysqlbinlog there are hidden characters that break the input.

  • mysqlbinlog Ver 3.3 for Linux at x86_64
  • mysql 5.5.28 server

Thanks! Julie

High Buffer I/O SQL Server 2008 R2 causes?

Posted: 15 Mar 2013 08:17 AM PDT

I am trying to troubleshoot performance on a SQL Server instance we use for our document management system, and today performance hit an all-time low. Looking at the MDW I set up recently, I can see a huge spike in Buffer I/O. The combined SQL wait time is in the region of 1500-2000 ms, which is considerably higher than normal (around 500 ms). The large proportion of this increase is Buffer I/O.

Not being a DBA, and doing this out of necessity, I am massively out of my depth. Is there any way to tell why there has been a large increase in buffer I/O? Is this likely due to a query pulling large amounts of data, or an incorrect configuration of SQL Server memory limits? Is there anything specific (or general, for that matter) I should be looking at, or DMVs I can query, to help troubleshoot?

The server is 2008 R2 as is SQL. It is a VM running two vCPUs and 8GB RAM with the disks hosted on a separate array on a SAN.

[Screenshot: MDW wait statistics graph]

EDIT: I think I may have misread the graph on the server. I actually think it is Lock Wait that has increased dramatically, not Buffer I/O. The colours are far too similar for me to pick out at a glance :(
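
Given that uncertainty, the server's own wait statistics remove the guesswork (a standard DMV, cumulative since the last restart): buffer I/O waits show up as PAGEIOLATCH_*, lock waits as LCK_M_*.

-- Top waits since the instance last started:
SELECT TOP (10) wait_type, wait_time_ms, waiting_tasks_count
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;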

Where are PostgreSQL databases stored on my computer?

Posted: 15 Mar 2013 04:40 PM PDT

This might be a stupid question, but I'm really new to this. I'm creating a Postgres database on the PostgreSQL server, which I started with this command:

sudo -u postgres psql  

And then I did the command:

CREATE DATABASE database;  

Which supposedly created the database. But where is this database? Where can I find it on my computer?
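
Two standard queries answer this from inside psql: one for the cluster's on-disk location, one for the subdirectory belonging to the new database.

-- Where the whole cluster lives on disk:
SHOW data_directory;

-- Which subdirectory under <data_directory>/base/ holds this database:
SELECT oid FROM pg_database WHERE datname = 'database';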

QGIS PostGIS Authentication Failure with "trust" option

Posted: 15 Mar 2013 09:37 AM PDT

I'm trying to set up a PostGIS DB using PostgreSQL 9.1 on Ubuntu 12.04. I've set up the database, created a user account "jay" with no password, added spatial functionality following section 2.5 here, and set the pg_hba.conf entry for local to:

# "local" is for Unix domain socket connections only
local   all             all                                     trust

I can connect to the database using pgAdmin III, requiring no password, but the connection fails when I try to connect via PostGIS or with QuantumGIS. Here's a screencap of the connection window in QuantumGIS:
[Screenshot: QuantumGIS PostGIS connection dialog]

Looking into the problem a bit, I came across this post, which led me to try connecting via the terminal using psql -U jay -h localhost gis. psql prompted me for a password; I hit Enter because I had not set a password, and it returned psql: fe_sendauth: no password supplied. I tried again, this time supplying my Ubuntu user password, which returned

psql: FATAL:  password authentication failed for user "jay"
FATAL:  password authentication failed for user "jay"

I then tried setting a password for jay using ALTER USER and entering this password at the prompt, and this failed as well. Clearly, I am unable to connect, but I'm having trouble figuring out what troubleshooting steps to take next.

Any ideas?
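
One detail worth checking, offered as an observation about pg_hba.conf matching rather than a guaranteed fix: a local line covers only Unix-domain socket connections, while psql -h localhost and the QGIS connection dialog both go over TCP/IP, which is matched by host lines instead. A sketch of lines that would extend the same trust to local TCP connections:

host    all             all             127.0.0.1/32            trust
host    all             all             ::1/128                 trust

The config must be reloaded (e.g. pg_ctl reload, or a service restart) before new connections see the change.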

Proper indexes or anything else to optimize

Posted: 15 Mar 2013 08:40 AM PDT

I need help optimizing the following query so that it avoids "Using temporary" and "Using filesort".

CREATE TABLE `target_bahrain_pepsi` (
    `id` int(11) DEFAULT NULL,
    `col_value` varchar(50) DEFAULT NULL,
    `source_file_id` int(11) DEFAULT NULL,
    KEY `id` (`id`)
) ENGINE=InnoDB;

SELECT su.adbrand_id, sum(st.col_value) as amount
FROM statex_with_id su
INNER JOIN target_bahrain_pepsi st ON st.id = su.id
GROUP BY su.adbrand_id
ORDER BY amount DESC;

Table statex_with_id also has an index.

mysql> EXPLAIN select su.adbrand_id, sum(st.col_value) as amount
    -> from statex_with_id su
    -> INNER JOIN target_bahrain_pepsi st
    -> ON st.id = su.id GROUP BY su.adbrand_id ORDER BY amount DESC ;
+----+-------------+-------+--------+---------------+---------+---------+------------------+---------+----------------------------------------------+
| id | select_type | table | type   | possible_keys | key     | key_len | ref              | rows    | Extra                                        |
+----+-------------+-------+--------+---------------+---------+---------+------------------+---------+----------------------------------------------+
|  1 | SIMPLE      | st    | index  | id            | id      | 58      | NULL             | 1804021 | Using index; Using temporary; Using filesort |
|  1 | SIMPLE      | su    | eq_ref | PRIMARY       | PRIMARY | 4       | dashboard1.st.id |       1 | Using where                                  |
+----+-------------+-------+--------+---------------+---------+---------+------------------+---------+----------------------------------------------+
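
One hedged suggestion rather than a definitive fix: the plan shows st being scanned via the single-column id index, so every row needs a second lookup for col_value before grouping into a temporary table. A composite index covering both columns lets the join read everything it needs from the index alone:

ALTER TABLE target_bahrain_pepsi
    ADD KEY idx_id_value (id, col_value);

-- Note: "Using filesort" on ORDER BY amount DESC may remain, since the
-- sort is over an aggregate, but the temporary-table work should shrink.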

Connecting to a SQL Server database from a Flash program

Posted: 15 Mar 2013 10:40 AM PDT

I currently have the ability to utilize Microsoft SQL Server 2012. I am developing a project with Adobe Flash Builder 4.7.

If I link my database with Adobe Flash Builder, are there any additional steps I must take in order to make the database live? Or, as long as my computer is running, will this database be accessible from any device that is utilizing it?

In other words, is this a LAN-only system, or does it automatically make itself available to the programs I link to it?

Oracle Express edition on Ubuntu - control file missing

Posted: 15 Mar 2013 01:40 PM PDT

I have installed the Oracle Express edition on Ubuntu as mentioned here.

I am facing issues when I try to create a sample table.

Started oracle

$ sudo service oracle-xe start
Starting Oracle Database 11g Express Edition instance.

Started sqlplus

$ sqlplus / as sysdba  

Executed the CREATE command

SQL> CREATE TABLE SAMPLE (ID NUMBER);
CREATE TABLE SAMPLE (ID NUMBER)
*
ERROR at line 1:
ORA-01109: database not open

After some research on the web, I tried to shut down and restart Oracle:

Shutdown command

SQL> shutdown
ORA-01507: database not mounted
ORACLE instance shut down.

Started the oracle instance

SQL> startup
ORACLE instance started.
Total System Global Area  688959488 bytes
Fixed Size                  2229688 bytes
Variable Size             411044424 bytes
Database Buffers          272629760 bytes
Redo Buffers                3055616 bytes
ORA-00205: error in identifying control file, check alert log for more info

I realized that the control file is missing at /u01/app/oracle/oradata/XE. XE is the DB name.

So I tried to create the control file as follows:

SQL> CREATE CONTROLFILE SET DATABASE XE RESETLOGS;

Control file created.

Tried to create the sample table again

SQL> CREATE TABLE SAMPLE(ID NUMBER);
CREATE TABLE SAMPLE(ID NUMBER)
*
ERROR at line 1:
ORA-01109: database not open

So I tried to issue the following command

SQL> ALTER DATABASE OPEN RESETLOGS;
ALTER DATABASE OPEN RESETLOGS
*
ERROR at line 1:
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: '/u01/app/oracle/product/11.2.0/xe/dbs/dbs1XE.dbf'

What should be done next? I am clueless, as I am not a database guy.
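
For reference, the usual next step after an ORA-01194/ORA-01110 pair looks like the sketch below (run as SYSDBA; hedged, because it only succeeds if the redo Oracle asks for is still available, and on a scratch XE install a clean reinstall is often simpler):

-- Apply whatever redo Oracle requests, cancelling when none is left:
RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;

-- If recovery reaches a consistent point, the database can be opened:
ALTER DATABASE OPEN RESETLOGS;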

Note:

Output of

$ lsnrctl services

LSNRCTL for Linux: Version 11.2.0.2.0 - Production on 04-JAN-2013 09:15:37

Copyright (c) 1991, 2011, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC_FOR_XE)))
Services Summary...
Service "PLSExtProc" has 1 instance(s).
  Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0
         LOCAL SERVER
Service "XE" has 1 instance(s).
  Instance "XE", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0 state:ready
         LOCAL SERVER
The command completed successfully

Find the first gap in an aggregation of integers in SQL Server

Posted: 15 Mar 2013 02:03 PM PDT

Let's say I have a table called dbo.GroupAssignment.

GroupID | Rank
------------------
   1         1
   1         2
   1         3
   2         1
   2         3
   2         4
   3         2
   3         4
   3         5

The PK is GroupID, Rank. Normally, the ranks within a group are a contiguous sequence of integers starting from 1. But it's possible for a GroupAssignment to get removed, leaving a gap. When a new assignment is made for the group, I want to fill the first gap available.

So, how could I calculate this in SQL Server?
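
Here is one hedged way to phrase it in T-SQL (the variable is illustrative): handle a missing rank 1 separately; otherwise the answer is one past the smallest rank whose successor is missing, which also yields MAX + 1 when the sequence has no gap.

DECLARE @GroupID int = 3;

SELECT CASE
           WHEN NOT EXISTS (SELECT 1 FROM dbo.GroupAssignment
                            WHERE GroupID = @GroupID AND [Rank] = 1)
               THEN 1   -- the gap (or an empty group) is at the very start
           ELSE (SELECT MIN(ga.[Rank]) + 1
                 FROM dbo.GroupAssignment AS ga
                 WHERE ga.GroupID = @GroupID
                   AND NOT EXISTS (SELECT 1
                                   FROM dbo.GroupAssignment AS g2
                                   WHERE g2.GroupID = ga.GroupID
                                     AND g2.[Rank] = ga.[Rank] + 1))
       END AS FirstAvailableRank;

Against the sample data this returns 1 for group 3, 2 for group 2, and 4 (a plain append) for group 1.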

mysql cannot start because of init.d folder?

Posted: 15 Mar 2013 08:27 AM PDT

After I set up MySQL 5.5, I changed the data dir to another folder and modified the related config in my.cnf. This is quite simple, but MySQL cannot start anymore, even after reinstalling or reverting the data dir change.

The error log has no useful information, so I cannot find the reason:

111207 16:28:02 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
111207 16:28:02 mysqld_safe mysqld from pid file /var/lib/mysql/localhost.localdomain.pid ended

But if I use the command as the start script, mysql starts normally.

/usr/bin/mysqld_safe --datadir=/data/mysql/ --pid-file=/data/mysql/localhost.localdomain.pid > /dev/null 2>&1

Even stranger, if I use /usr/share/mysql/mysql.server start instead of /etc/init.d/mysql start, MySQL also starts without error, even though these two files are the same. So in the end I had to use cp -l /usr/share/mysql/mysql.server /etc/init.d/mysql to fix the problem.

I really want to know why, and whether there is a better solution!

One Big Database vs. Several Smaller Ones

Posted: 15 Mar 2013 08:54 AM PDT

We have a situation where we can (A) deploy instances of an application in one MySQL database using table prefixing, or (B) use a different MySQL database for each instance of the application, e.g.:

Setup "A":

central_database
    app1_table1
    app1_table2
    app1_tablen
    ...
    appn_table1
    appn_table2
    appn_tablen

The end result being a large db with many tables.

Setup "B":

app1_db
    table1
    table2
    tablen
...
appn_db
    table1
    table2
    tablen

The end result being many databases with some tables.

All things being equal (e.g., amount of data, number of app instances), what are the pros and cons of either approach? What would be detrimental to database performance and maintenance? The application is PHP 5 based, runs over Apache 2.x, and we're running MySQL 5.x.

Many thanks for your time and thoughts!

Is it possible to mysqldump a subset of a database required to reproduce a query?

Posted: 15 Mar 2013 01:22 PM PDT

Background

I would like to provide the subset of my database required to reproduce a select query. My goal is to make my computational workflow reproducible (as in reproducible research).

Question

Is there a way that I can incorporate this SELECT statement into a script that dumps the queried data into a new database, such that the database could be installed on a new MySQL server and the statement would work with the new database? The new database should not contain records other than those that have been used in the query.

Update: For clarification, I am not interested in a csv dump of query results. What I need to be able to do is to dump the database subset so that it can be installed on another machine, and then the query itself can be reproducible (and modifiable with respect to the same dataset).

Example

For example, my analysis might query a subset of data that requires records from multiple (in this example 3) tables:

select table1.id, table1.level, table2.name, table2.level
from table1
join table2 on table1.id = table2.table1_id
join table3 on table3.id = table2.table3_id
where table3.name in ('fee', 'fi', 'fo', 'fum');
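
One hedged sketch of the idea in plain SQL: materialize exactly the rows the query touches into a scratch database, then dump that database whole (for instance with mysqldump subset_db) so the original query runs unchanged against the restored copy.

CREATE DATABASE subset_db;

-- Rows of table3 the WHERE clause selects:
CREATE TABLE subset_db.table3 AS
SELECT * FROM table3
WHERE name IN ('fee', 'fi', 'fo', 'fum');

-- Rows of table2 that join to those:
CREATE TABLE subset_db.table2 AS
SELECT DISTINCT t2.*
FROM table2 AS t2
JOIN subset_db.table3 AS t3 ON t3.id = t2.table3_id;

-- Rows of table1 that join to the surviving table2 rows:
CREATE TABLE subset_db.table1 AS
SELECT DISTINCT t1.*
FROM table1 AS t1
JOIN subset_db.table2 AS t2 ON t2.table1_id = t1.id;

One caveat: CREATE TABLE ... AS SELECT does not copy indexes or keys, so they may need to be re-added before dumping if the reproduced query's plan matters.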
