Monday, June 10, 2013

[how to] SQL-server 2008 TMG access logs required?

SQL-server 2008 TMG access logs required?

Posted: 10 Jun 2013 08:14 PM PDT

I'm using this guide http://msdn.microsoft.com/en-us/library/cc280424%28v=sql.105%29.aspx

Now, after enabling this, I'm using a JDBC connection from another machine (a SIEM) to pull the logs, and that part works fine. But I'm only able to get log entries such as alert, insert, etc.

What do I need to do to pull the entire access logs? Do I need to change the object schema?

PostgreSQL "freeze"/"unfreeze" command equivalents

Posted: 10 Jun 2013 07:05 PM PDT

In Derby (an embedded database written in Java which is mostly used for testing or prototyping) there are "freeze" and "unfreeze" commands which can be used during an online backup. "Freeze" simply causes all database accesses to block until "unfreeze" is called. This is useful for backing up using an external program, which you might do if the external program is much faster than using Derby's internal backup solution. For my use case, I can take a snapshot almost instantaneously using some built-in filesystem utilities, so it is a constant-time operation (not O(length of DB files)).

I'm migrating an application which has outgrown Derby to PostgreSQL, and I was wondering if there's anything comparable there which I can use to quiesce all connections. Also, I'd prefer to know what my serialization point is from inside my application so that I don't get caught in some awkward state, so being able to pause/resume all other accesses is really nice-to-have for me.

Since PostgreSQL has a transaction log, I could just take a snapshot without "freeze"ing, but the snapshot would need to be run through PostgreSQL's recovery mechanism before I can use it because otherwise what's stored on disk would be the same as if I pulled the plug on a normal filesystem. This solution is not ideal.

EDIT: I learned that pg_start_backup() is close, but it doesn't cause incoming transactions to block until a matching call to pg_stop_backup(), forcing me to do a point-in-time recovery from a filesystem snapshot back to the transaction ID pg_start_backup() returns. It would be nice not to have to actually shut down PostgreSQL to get this (perhaps there is a pseudo-shutdown command that keeps connections open?).
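
For reference, the non-blocking built-in flow looks like this (a sketch; it assumes WAL archiving is already configured, and, as noted, it does not stop other sessions from writing):

SELECT pg_start_backup('fs_snapshot');
-- take the filesystem snapshot here, outside PostgreSQL
SELECT pg_stop_backup();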

Which hardware do you suggest?

Posted: 10 Jun 2013 03:28 PM PDT

I'm completely new to buying database hosting and I would greatly appreciate your opinion on the following situation. I just created an app and I want to go live this weekend. I want to be as well prepared as possible, so here are the database usage statistics I expect to see every 24 hours:

- 20 000 000 inserts into a table which contains 1 bigint, 7 ints and 1 date
- 5 000 000 updates on the above table
- 10 000 000 inserts into a mapping table (1 bigint, 1 int)
- 2 000 000 inserts into a table with 3 ints
- 5 000 000 select queries with a WHERE clause on the PK of the first table described above

I will use the MySQL 5.5 database with all InnoDB tables. At the moment I am thinking about buying a dedicated server with these stats:

Intel Xeon E3-1220LV2 2.3GHz
100 Mbps Uplink
4 GB DDR3 Memory
500 GB RAID-1 Drives
10 TB Bandwidth
Up to 5 Dedicated IPs

For a price of $140 a month. What do you think: will this be enough? Would it be overkill? Do you know a better host? At the moment I am on a shared webhost ($12 a month) and I see about a fifth of the numbers above, but the shared database is really not optimized for InnoDB.

How do I clear the log copy queue in Red Gate SQL Backup? [closed]

Posted: 10 Jun 2013 01:46 PM PDT

Does anyone know the steps to take to clear the Log Copy Queue in Red Gate's SQL Monitor software?

pgAdmin3 can't read log

Posted: 10 Jun 2013 06:04 PM PDT

I have pgAdmin3 and PostgreSQL running on my iMac. If I point to the server and run Tools/Server Status,

I get:

ERROR: could not read directory pg_log: No such file or directory  

The PG log is located in:

macintosh HD/usr/local/var/postgres/server.log  

How do I change pgAdmin3 to read the log?
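
For what it's worth, pgAdmin's Server Status page reads files produced by the logging collector, so the usual fix is to enable it in postgresql.conf and restart (a sketch; these are standard settings, but verify the paths for a Homebrew-style install):

logging_collector = on
log_directory = 'pg_log'        # relative to the data directory
log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'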

SQL Server cube - count distinct of dimension key

Posted: 10 Jun 2013 05:35 PM PDT

I have a fact table which has a lot of records with the same person ID. I would like to create a calculated member that counts them without duplicate IDs.

I tried to use just count(distinct([Dimension Person].[PersonID].members)) and some expressions with crossjoin that I found on the Internet, but everything gives wrong record counts.

How to count it correctly?

What collation to use to store characters from the Character Map?

Posted: 10 Jun 2013 03:47 PM PDT

I have a SQL Server 2012 database with the default collation SQL_Latin1_General_CP1_CI_AS. I've noticed that certain characters from the Windows Character Map application, namely small letters with a circumflex (some, not all), get replaced with their plain Latin equivalents without the diacritic mark. That's not OK for me. I'm using the nvarchar data type, but this does not help.

Basically I need the database to focus mainly on English-language data and apply sorting rules based on that language. But I also need the DB to store all other characters without damaging them. It's OK if sorting is slightly off, but I must retain the exact characters and be able to perform precise searches with respect to these diacritic marks.

What collation should I use instead?

P.S. Just in case someone implies I haven't done my homework: I've been googling to no avail. Information on collations seems to be either absent or very superficial. I'm not even sure where to start. I've read some MSDN articles too, but they were less than helpful. I hope somebody can point me in the right direction.

UPDATE: The problem is narrowed down to a UDF which processes the data. Something goes wrong in it.

CREATE FUNCTION [dbo].[MyFunc]
(
    @Input NVARCHAR (MAX)
)
RETURNS NVARCHAR (MAX)
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @Output NVARCHAR (MAX)

    SET @Output = @Input

    -- replaces with accented letters, this site cannot correctly display them
    SET @Output = REPLACE (@Output, 'something', 'a')
    SET @Output = REPLACE (@Output, 'somethingelse', 'c')
    -- And so on repeated

    RETURN @Output
END

P.S. WITH SCHEMABINDING is required for a persisted computed column.

P.S.2. And I've found out that the REPLACE function doesn't work correctly, even when run manually in SSMS. Does anybody know what is wrong with it?

P.S.3. EPILOGUE.

I've found out what was wrong.

I missed the N'' prefix in the call to the REPLACE function. Hiding in plain sight, as they say...

SET @Output = REPLACE (@Output, 'something', N'a')

Special thanks to Martin Smith for reminding me of the obvious.
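
To illustrate the failure mode described in the epilogue (a hedged sketch with a made-up literal): without the N prefix, the search string is a varchar, so a character outside the code page is silently best-fit converted before REPLACE ever runs:

-- 'ĉ' is not in code page 1252, so the varchar literal degrades to 'c'
SELECT REPLACE (N'ĉokolado', 'ĉ', 'a')   -- actually searches for 'c'; the 'ĉ' survives untouched
SELECT REPLACE (N'ĉokolado', N'ĉ', N'a') -- searches for the real 'ĉ'; returns N'aokolado'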

A certain MySQL database organisation

Posted: 10 Jun 2013 08:19 PM PDT

Hypothetically, let's say that I have three tables:

- Products (id, product_name, added_timestamp)
- Individuals (id, name, last_name)
- Companies (id, company_name, company_number, company_address)

Now, let's say that both individuals and companies can add their products to my virtual shop.

Now, let's say I want to display all the added products sorted in descending order by the Products.added_timestamp column. However, I don't know how to connect Products with Individuals and Companies and display all the data.

How would I approach this problem?
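
One common pattern (a sketch, not the only design: the individual_id and company_id columns are assumptions, i.e. two nullable foreign keys on Products, exactly one of which is set per row):

SELECT p.id, p.product_name, p.added_timestamp,
       COALESCE(i.name, c.company_name) AS seller
FROM Products p
LEFT JOIN Individuals i ON i.id = p.individual_id
LEFT JOIN Companies   c ON c.id = p.company_id
ORDER BY p.added_timestamp DESC;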

find sold subscriptions from table 1 and store name in table 2, joined by store id. (oracle)

Posted: 10 Jun 2013 12:39 PM PDT

I am trying to get the number of sold subscriptions from one table and the store name from another. The tables have a common column containing a store id; however, the columns do not have the same name.

Table1                  |   Table2
Amount     store_code   |   store_name    store_no
1          nf1ii        |   Jims kiosk    nf1ii
1          nfoi1        |   Henrys kiosk  nfoi1
...        ...          |   ...           ...

I am trying to count the number of subscriptions sold for each store name. How is this done in Oracle? I am fairly new to SQL in general, so please forgive me if the question is phrased strangely.

I tried something like this; however, it did not return what I expected. (I'm not at work at the moment, so I wrote this from memory, i.e. it might be terribly wrong.)

select count(*)
from Table1 a, Table2 b
where a.store_code = b.store_no
and a.subs = 'twin' /* type of subscription */
and a.date > '01.05.13'
group a.store_code;

Does anyone know a simple way to solve the example above? I am grateful for any hint, answer or constructive comment.
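
For what it's worth, a sketch of the query as I read the description (names are guesses; sale_date stands in for the real column, since DATE is a reserved word in Oracle, and the literal gets an explicit format mask):

SELECT b.store_name, COUNT(*) AS subscriptions_sold
FROM   Table1 a
JOIN   Table2 b ON a.store_code = b.store_no
WHERE  a.subs = 'twin'
AND    a.sale_date > TO_DATE('01.05.13', 'DD.MM.RR')
GROUP BY b.store_name;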

Store large hierarchy in NoSQL each day and compare

Posted: 10 Jun 2013 11:36 AM PDT

I'm in the process of creating a utility that stores a fairly large hierarchy in a NoSQL document. The data is similar to an Active Directory / LDAP structure, which contains sub-groupings, etc. for each record.

These are fairly large, a JSON representation is about 40MB per copy.

Each day a new copy of this hierarchy would be added as a new document in the collection.

Through a web interface, a user would be able to select any two copies (documents) and they would be compared. The differences between the two would be displayed, sort of like a diff command would display differences between two documents.

My questions are

  • Is NoSQL a good fit for storing a LDAP like structure for just viewing purposes?
  • If so, what sort of techniques are useful for comparing two documents and getting the differences?

I'm currently trying this in MongoDB and I haven't figured out how to do a direct comparison from a MongoDB command to get the differences. So, my attempt so far is to simply export each record as a JSON text file and run a custom diff command to compare and format as needed.

I am curious if someone has a better method for this.

An example document in NoSQL

{
    date: "2013-06-10",
    users: [
        {
            _id: 1,
            dn: "CN=B A Baracus,OU=Testing,OU=People,DC=my,DC=domain,DC=com",
            name: { first: 'B', middle: 'A', last: 'Baracus' },
            employeeId: '001234',
            domain: 'mydomain',
            username: 'babaracus',
            created: new Date('Dec 03, 1924'),
            modified: new Date('Mar 17, 2007'),
            memberOf: [
                "CN=TEST-DeleteMe,OU=Testing,OU=People,DC=my,DC=domain,DC=com",
                "CN=CTX - Epic Prod,OU=Access,OU=Groups,DC=my,DC=domain,DC=com",
                "CN=Public Folder Admins,CN=Users,DC=my,DC=domain,DC=com",
                "CN=Exchange Services,CN=Users,DC=my,DC=domain,DC=com",
                ...etc...
            ],
            ...etc...
        }
        ...etc...
    ]
}

How to create concurrent databases, so that when one database is down the other can work?

Posted: 10 Jun 2013 12:21 PM PDT

I have a local application running on a LAN. Sometimes the system hosting the database gets shut down. In such a scenario, how do I keep the application working? Can I have two concurrent databases? I am using PostgreSQL.

Mirrored tables on PostgreSQL

Posted: 10 Jun 2013 11:20 AM PDT

I have some values under "account" and I need the same values under "invoice", so I'm trying to find a way to link my tables so that every time one gets updated, the other gets the new value too. I'm developing under OpenERP; I can't put that update directly in code, so I need to write the rule using pgAdmin or something similar.

I need to, in effect, symlink 2 tables under Postgres.

ACCOUNT                      INVOICE
|ID|payment_type|            |ID|account_payment_type|
|#1|cc          |            |#1|cc                  |
|#2|cash        |            |#2|cash                |
|#3|cash        |            |#3|cash                |
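
One way to do this from pgAdmin is an AFTER UPDATE trigger (a one-directional sketch for PostgreSQL 9.0+; table and column names are taken from the example above, and syncing in both directions would additionally need a guard against trigger recursion):

CREATE OR REPLACE FUNCTION sync_invoice_payment_type() RETURNS trigger AS $$
BEGIN
    -- copy the new value across, matching rows by id
    UPDATE invoice
    SET account_payment_type = NEW.payment_type
    WHERE id = NEW.id;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER account_payment_type_sync
AFTER UPDATE OF payment_type ON account
FOR EACH ROW EXECUTE PROCEDURE sync_invoice_payment_type();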

Find the salary difference between two months

Posted: 10 Jun 2013 02:01 PM PDT

Suppose we have 3 columns in a table: emp_no, salary, month.

emp_no    salary    month
123       1000      jan
123       2000      feb
123       3000      march

So, output should look like:

emp_no    salary    month     diff
123       1000      jan       1000
123       2000      feb       1000
123       3000      march
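
A sketch with a window function, assuming your database supports them; the table name emp_salary and a numeric ordering column month_no are assumptions (month names alone don't sort chronologically):

SELECT emp_no, salary, month,
       LEAD(salary) OVER (PARTITION BY emp_no ORDER BY month_no) - salary AS diff
FROM emp_salary;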

Is there a limit to how many credentials SSMS can remember?

Posted: 10 Jun 2013 10:08 AM PDT

Using SQL Server Management Studio that came with SQL Server 2008 R2, I make connections to the same server using a number of different sets of SQL Server Authentication credentials. On my development machine, I use the Remember password checkbox extensively to avoid having to look up the password each time.

When I choose a Login which I have previously connected with (using Remember password), the Password entry field gets populated with asterisks, and clicking Connect allows me in - most of the time. But sometimes, with no pattern that I can discern, on choosing a previously-used Login, the Password field will prepopulate with asterisks, but clicking Connect tells me the login fails. I enter the password, click Connect, and get in, and then typically the next time I use the same login it will work.

My suspicion is that there is a maximum number of credentials that SSMS is prepared to remember, and only the most-recently-used n credentials are actually remembered, but I can't find any mention of such behaviour in the documentation.

Is there a known limit on how many credentials SSMS will remember?

SQL Server giving intermittent connection failures

Posted: 10 Jun 2013 03:02 PM PDT

I have a SQL Server 2000 running on Windows 2003 Server that is giving me intermittent connection failures.

User connections are getting closed, but only established connections that have been sitting idle. Meaning: when users are connected to an application running on this server, the programs start up fine; the problem usually shows up after the connection has been idle for a bit. I'm wondering if there is some odd setting that is killing older, inactive connections?

What could be causing this? What can I check to further troubleshoot it?

delete rows in 3 tables with on delete cascade

Posted: 10 Jun 2013 12:19 PM PDT

For a 0..n relation between two tables (t1, t2), the physical data model generates an intermediate table (t3) with two foreign keys.
Given two constraints on the t3 table, each with "on delete cascade", if I do:

delete from t1 where id1=$id1;  

I want to delete all rows of t3 which contain the FK id1 and all rows of t2 which contain the FK id2.
Is it possible in one query?
Edit: here are my tables:

create T1 (
  id1 ...
  ...
  CONSTRAINT pk_id1 PRIMARY KEY (id1) );

create T2 (
  id2 ...
  ...
  CONSTRAINT pk_id2 PRIMARY KEY (id2) );

create T3 (
  id1 ...,
  id2 ...,
  CONSTRAINT pk_T3 PRIMARY KEY (id1, id2),
  CONSTRAINT fk_T3_1 FOREIGN KEY (id2)
    REFERENCES T2 (id2) MATCH SIMPLE
    ON UPDATE CASCADE ON DELETE CASCADE,
  CONSTRAINT fk_T3_2 FOREIGN KEY (id1)
    REFERENCES T1(id1) MATCH SIMPLE
    ON UPDATE CASCADE ON DELETE CASCADE )
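
Note that DELETE FROM t1 cascades into t3 (t3 is the child), but nothing cascades onward into t2, because t2 is a parent of t3, not a child. A hedged two-statement sketch that also clears the t2 rows (:id1 is a placeholder, and it assumes those t2 rows aren't referenced by other t1 rows):

DELETE FROM t2
WHERE id2 IN (SELECT id2 FROM t3 WHERE id1 = :id1);

DELETE FROM t1 WHERE id1 = :id1;  -- cascades, removing the remaining t3 rows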

InnoDB tables don't show after reboot

Posted: 10 Jun 2013 10:58 AM PDT

After each reboot I can't see my InnoDB tables. When I use the command line I can see my tables, but I cannot access them.

mysql> show tables;
+----------------------+
| Tables_in_xpl_ticket |
+----------------------+
| active_list          |
| bill_block           |
| bill_block_list      |
| block                |
| block_list           |
| box                  |
| cisco_switch         |
| mac_add              |
| mac_change           |
| month                |
| new_connect          |
| new_user             |
| open                 |
| package_change       |
| paid_list            |
| pay                  |
| problem              |
| re_open              |
| refund               |
| ticket               |
| user                 |
| user_log             |
+----------------------+
22 rows in set (0.00 sec)

But when I want to access a table, it says the table doesn't exist:

mysql> select * from active_list;
ERROR 1146 (42S02): Table 'xpl_ticket.active_list' doesn't exist

Edit by RolandoMySQLDBA

Please run the following in MySQL:

SELECT VERSION();
SELECT CONCAT('[',table_name,']') tbl_name
FROM information_schema.tables WHERE table_schema='xpl_ticket';

Please run the following in the OS:

cd /var/lib/mysql/xpl_ticket
ls -l

disk I/O error in SQLite

Posted: 10 Jun 2013 10:51 AM PDT

What are the possible things that would trigger "disk I/O error"? I've been having this problem and I couldn't find a solution. I have a SQLite3 database, and I'm trying to insert data from a file that contains SQL inserts.

Sample data in the file:

insert into files (filesize, filedate, md5, fullpath, origin) values (5795096, 1370159412, "e846355215bbb9bf5f30102a49304ef1", "SDs/16G-1/DSC00144.JPG", "SDs");
insert into files (filesize, filedate, md5, fullpath, origin) values (5435597, 1370159422, "1a7bcf3a4aaee3e8fdb304ab995ff80f", "SDs/16G-1/DSC00145.JPG", "SDs");
insert into files (filesize, filedate, md5, fullpath, origin) values (5121224, 1370159432, "16d28e83599c731657a6cd7ff97a4903", "SDs/16G-1/DSC00146.JPG", "SDs");

I tried inserting that in the db file with the following command:

$ sqlite3 allfiles.db < insert.sql  

See below the error that I get:

Error: near line 27: disk I/O error
Error: near line 28: disk I/O error
Error: near line 34: disk I/O error
Error: near line 39: disk I/O error
Error: near line 47: disk I/O error
Error: near line 129: disk I/O error

The input lines that don't generate an error are successfully inserted, but I don't understand why some lines fail and are not inserted into the DB. There's nothing special about the lines with errors, and if I run the command again I get errors on different lines, which means it's random (not related to the data itself). I tried adding pragma synchronous = off; and pragma temp_store = memory;, with no success. I'm running this on Lubuntu, which runs in a VirtualBox virtual machine. The host machine is Windows 7. The working directory of the files is a shared folder, i.e., it's a folder on the host machine. If I run it in a "local folder" on the guest machine, the error doesn't happen, although for some reason it's much slower... In any case, I'd like to know what causes the I/O error.

BULK INSERT in SQL

Posted: 10 Jun 2013 08:14 PM PDT

I have a text file, the entries of which I want to insert into a table in my DB.

My query looks like this:

BULK INSERT myDepartment FROM 'C:\myDepartment-c-t.txt'
WITH (
  DATAFILETYPE = 'char',
  FIELDTERMINATOR = '\t', -- default terminator
  ROWTERMINATOR = '\n'
);

On executing it, my SQL Server Management Studio 2008 gives the following error:

'BULK' rowset provider not supported in this version of SQL Server.  

I googled it and found that SQL Server 2008 does support BULK INSERT. Can anyone tell me what I am missing?

It's urgent, please help.

Postgres Problems with Phusion Deployment (Rails)

Posted: 10 Jun 2013 01:47 PM PDT

FATAL:  Ident authentication failed for user "test-postgres"  

So I have a Postgres server hosting part of the database for my Rails application. I have a production database and a production user set up, but I keep getting the above error whenever I try to view my Rails application from a browser. Can somebody help me out? The postgres user and the database are already created...
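
The error usually means the matching pg_hba.conf entry uses ident authentication for the connection Rails is making. A hedged sketch of a password-based entry instead (the database name and address are assumptions; reload PostgreSQL after editing):

# TYPE  DATABASE          USER           ADDRESS        METHOD
host    myapp_production  test-postgres  127.0.0.1/32   md5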

Troubleshooting high CPU usage from postgres and postmaster services?

Posted: 10 Jun 2013 05:59 PM PDT

I'm using an open source (RHEL 6.2 based) machine running SIEM software. When I run the top command, I see postgres and postmaster both at 96% CPU usage. Is there a way to pinpoint what is causing these services to stack up?
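
One starting point is to match the busy PIDs from top against pg_stat_activity (a sketch; on PostgreSQL 9.2+ the columns are pid and query rather than procpid and current_query):

SELECT procpid, usename, query_start, current_query
FROM pg_stat_activity
ORDER BY query_start;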

Is it safe to remove an entry in a "config.mongos" MongoDB collection?

Posted: 10 Jun 2013 01:08 PM PDT

There are old entries in my "config.mongos"; they reference hosts which previously had "mongos" running on them but no longer do.

Can I safely remove those entries?

e.g.:
# Old entries
{ "_id" : "silicon:27117", "ping" : ISODate("2012-12-27T19:51:04.560Z"), "up" : 9301, "waiting" : true }
...
# Current entries
{ "_id" : "app1:27017", "ping" : ISODate("2013-06-07T17:12:02.856Z"), "up" : 5593404, "waiting" : true }
{ "_id" : "app2:27017", "ping" : ISODate("2013-06-07T16:17:49.812Z"), "up" : 5590271, "waiting" : false }
{ "_id" : "app4:27017", "ping" : ISODate("2013-06-07T17:12:02.110Z"), "up" : 5593413, "waiting" : true }
{ "_id" : "app3:27017", "ping" : ISODate("2013-06-07T17:12:02.854Z"), "up" : 5593578, "waiting" : true }

Update:

One reason why having these old entries in config.mongos is a problem is that making changes to the shard takes a long time, while we wait for old mongoses to time out.

For example, stopping the balancer takes a long time (several minutes):

mongos> sh.stopBalancer()
Waiting for active hosts...
Waiting for active host magnesium:27117 to recognize new settings... (ping : Thu Dec 22 2011 03:08:14 GMT+0000 (UTC))
Waited for active ping to change for host magnesium:27117, a migration may be in progress or the host may be down.
Waiting for active host Boron:27017 to recognize new settings... (ping : Tue Jan 31 2012 21:48:12 GMT+0000 (UTC))
Waited for active ping to change for host Boron:27017, a migration may be in progress or the host may be down.
Waiting for active host Beryllium:27017 to recognize new settings... (ping : Wed Feb 01 2012 17:34:15 GMT+0000 (UTC))
Waited for active ping to change for host Beryllium:27017, a migration may be in progress or the host may be down.
Waiting for active host helium:27017 to recognize new settings... (ping : Mon Jun 10 2013 19:07:41 GMT+0000 (UTC))
Waited for active ping to change for host helium:27017, a migration may be in progress or the host may be down.
Waiting for active host boron:27017 to recognize new settings... (ping : Sun Jun 02 2013 08:46:57 GMT+0000 (UTC))
Waited for active ping to change for host boron:27017, a migration may be in progress or the host may be down.
Waiting for the balancer lock...

Does MySQL cache queries?

Posted: 10 Jun 2013 08:19 PM PDT

I'm interfacing with a MySQL database through PDO and executing an extensive SQL query. Normally it takes about 1500 ms (I still need to optimize it). When I run the PHP script twice with a short interval in between, the query only takes about 90 ms; the query is the same in both cases. When I run the script with the same query again after some time, it takes 1500 ms again.

Why is that? Does the database cache automatically? Is there some time the database saves the cache and then automatically deletes it?

I assume the results can't be cached by PHP because this happens in two different threads. I wouldn't think PHP would cache the results because it can't know if the database has changed.

I have a script running every minute that inserts new rows into the database. This might also be why the query takes 1500 ms again after some time: the cached result would have been invalidated because the relevant tables have changed.
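
That pattern is consistent with the MySQL query cache: it returns the stored result set for byte-identical query text, and it invalidates every cached result that touches a table as soon as that table changes. You can inspect it directly (a sketch):

SHOW VARIABLES LIKE 'query_cache%';  -- is the cache enabled, and how large?
SHOW STATUS LIKE 'Qcache%';          -- hits, inserts and invalidations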

how to build table relationships in a firebird database?

Posted: 10 Jun 2013 11:52 AM PDT

I am a beginner in Delphi and I have a Firebird database with 2 tables, namely masterlist and daily collection. I use Zeos 7.0.3 to access my Firebird database.

My masterlist contains the following columns:

╔══════╦══════╦═════════╦════════╗
║ name ║ date ║ balance ║ status ║
╚══════╩══════╩═════════╩════════╝

My daily collection contains the following columns:

╔══════╦══════╦═════════╦═════════╗
║ date ║ name ║ payment ║ balance ║
╚══════╩══════╩═════════╩═════════╝

I would like to build a relation in which the balance from masterlist will be copied to the balance column of the daily collection, and when I update the column in the daily collection it will also update the content of the masterlist.

I hope this will be considered a good question; I have tried very hard to make it a useful one.

deadlock when updating

Posted: 10 Jun 2013 11:25 AM PDT

Update Operation.TrTable
Set    RecordId = RecordTABLE.newRecordId
From   Operation.TrTable tr
       Inner Join (
           SELECT r.Id AS newRecordId, r.KeyM AS MappingKey
           From  Operation.Record r
           WHERE r.KeyM > 0
       ) RecordTABLE
         ON RecordTABLE.MappingKey = tr.KeyM
WHERE  tr.KeyM > 0

UPDATE Operation.Record SET KeyM = 0
WHERE KeyM > 0

UPDATE Operation.TrTable SET KeyM = 0
WHERE KeyM > 0

The error is:

Transaction (Process ID 93) was deadlocked on lock | communication buffer resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
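
A hedged diagnostic step (SQL Server 2005 or later): enable trace flag 1222 so the full deadlock graph is written to the error log, showing exactly which statements and resources collide:

DBCC TRACEON (1222, -1);  -- -1 applies the trace flag server-wide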

Creating the MySQL slow query log file

Posted: 10 Jun 2013 10:25 AM PDT

What do I need to do to generate the slow query log file in MySQL?

I did:

log_slow_queries = C:\Program Files\MySQL\MySQL Server 5.1\mysql-slow.log
long_query_time  = 1

What more do I need to do?
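
On MySQL 5.1 the slow log can also be switched on at runtime, without a restart (a sketch; requires the SUPER privilege):

SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;
SHOW VARIABLES LIKE 'slow_query%';  -- confirm the log is on and where it writes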

DB2 Server Table Space Locked

Posted: 10 Jun 2013 06:25 PM PDT

At work we keep receiving the following DataException, seemingly at random, when one of our processes tries to write to or access a table for one of our clients:

com.ibm.db.DataException: A database manager error occurred. :
[IBM][CLI Driver][DB2/NT64] SQL0290N  Table space access is not allowed.  SQLSTATE=55039

Has anyone encountered this before? I'm not the person who primarily does administrative tasks on our databases, but even they seem to be having difficulty finding the root of this problem. Any suggestions? This error comes up for only one of our clients at a time, and it generally seems to rotate. We have Rackspace service, but they won't be of much help unless we can provide screenshots, etc. at the exact moment this occurs.

Apologies if this post may be too vague, please let me know what information to supply to clarify things more. I'm one of the developers in my office, but I don't primarily handle the administrative tasks on our databases.

edit: We spoke with IBM, and this could possibly be caused by some sort of virus scan being run by IBM/Rackspace as part of maintenance? They said this somewhat dubiously, though, so I doubt it is the culprit, because the tables remained locked for variable amounts of time.

unable to log in to oracle as sysdba

Posted: 10 Jun 2013 07:25 PM PDT

I just got 11gR2 running and was able to connect as sysdba. I shut down and started up the database to mount a new pfile. Now I cannot log in as sysdba. My parameter for the password file is:

 *.remote_login_passwordfile='EXCLUSIVE'  

I am using sqlplus within the server. This is not a remote connection.

[oracle@oel56 ~]$ sqlplus /nolog

SQL*Plus: Release 11.2.0.1.0 Production on Tue Feb 5 22:50:46 2013

Copyright (c) 1982, 2009, Oracle.  All rights reserved.

SQL> conn / as sysdba
ERROR:
ORA-01031: insufficient privileges

Here's some more information:

[oracle@oel56 ~]$ grep -E "ine SS_DBA|ine SS_OPER" $ORACLE_HOME/rdbms/lib/config.c
#define SS_DBA_GRP "oracle"
#define SS_OPER_GRP "oracle"
[oracle@oel56 ~]$ id oracle
uid=500(oracle) gid=500(oracle) groups=500(oracle),54321(oinstall),54322(dba),54323(oper) context=user_u:system_r:unconfined_t

"connectivity libraries needed are not found" error in IBM Data Studio

Posted: 10 Jun 2013 03:25 PM PDT

UPDATE

I am getting the following error when I try to create a new database in IBM Data Studio v3.1.1.0.

The connectivity libraries that are needed for local or remote non-JDBC operations were not found. To provide these libraries, you can install IBM data server client or a local DB2 server instance.  

I have already started the instance using

db2start  

command.

After searching exhaustively, I am not able to find any help on the internet regarding this error.

MySql - changing innodb_file_per_table for a live db

Posted: 10 Jun 2013 02:59 PM PDT

I have a large MySQL DB (150 GB) and only now have I noticed that innodb_file_per_table is set to off, which causes the entire DB to be stored in one single file (ibdata1). I want to enable innodb_file_per_table and have it retroactively split the DB into several files. What's the best way to do this?
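
A common approach (a sketch, with mydb.mytable as a placeholder): enable the setting, then rebuild each table so it moves into its own .ibd file. Note that ibdata1 itself never shrinks, so a full dump and reload is the usual way to actually reclaim the space.

SET GLOBAL innodb_file_per_table = 1;     -- or set it in my.cnf and restart, if your version isn't dynamic
ALTER TABLE mydb.mytable ENGINE = InnoDB; -- rebuilds the table into its own file; repeat per table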
