Wednesday, September 11, 2013

[how to] Not able to login in Oracle 11g console using sys/system@SID as sysdba

Not able to login in Oracle 11g console using sys/system@SID as sysdba

Posted: 11 Sep 2013 08:12 PM PDT

I am not able to log in to the Oracle 11g console using an administrative account; when I try, it gives me a wrong username/password error. Then I tried from SQL*Plus and it logged in without any problem. I also use PL/SQL Developer, and on connecting to the database with it I receive an insufficient privileges error. Please tell me the solution for this, because if the SYS account were locked then SQL*Plus would not work either, which is not the case here.
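
For reference, a minimal sketch of the connection attempts described (the SID and password are placeholders); the AS SYSDBA clause is what distinguishes an administrative connection from an ordinary one, and the password-file settings below govern whether such logins are accepted:

-- From SQL*Plus (this is the connection that works, per the question):
CONNECT sys/password@ORCL AS SYSDBA

-- Check whether SYSDBA logins via the password file are allowed:
SHOW PARAMETER remote_login_passwordfile
SELECT * FROM v$pwfile_users;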

How do I access the full text of an error in SSIS GUI?

Posted: 11 Sep 2013 07:11 PM PDT

I have an Excel task that is showing error text in the GUI, but I don't know how to access the error text without the GUI, and the error disappears if I move the cursor away from the X.

(screenshot: the error tooltip shown in the SSIS designer)

I have the project property Run64BitRuntime set to False (though it is not editable, so even if I wanted to I couldn't change it to True).

MySQL Table across multiple servers

Posted: 11 Sep 2013 02:58 PM PDT

I have been looking at different types of replication, and was wondering if this type is available:

(diagram of the proposed replication layout)

To explain this more, here is how I envision this working:

The Setup

  • You have one master server that forwards queries to 2+ slaves
    • The slaves don't have all the data; for example, with 2 slaves each would have 50% of the data, with 3 slaves 33% of the data, and so on.
  • Slaves return data to the master
  • Master returns final data

The Process

  1. If a SELECT is sent to the master
    1. The master forwards the SELECT to the slaves
    2. The slaves search their own databases/tables
    3. The slaves return their result sets to the master
    4. The master reassembles the slaves' result sets into a final result set
    5. The master returns the result set to the web server/service
  2. If an INSERT/UPDATE/DELETE is sent to the master
    1. The master forwards the query to the slaves
    2. The slaves INSERT/UPDATE/DELETE their own data
    3. Once done, they alert the master
    4. Once all slaves have alerted the master, the master alerts the web server/service

So, basically I would like to know, is this type of database setup possible with MySQL?
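
For comparison, the closest built-in MySQL feature to this layout is MySQL Cluster (NDB), which transparently partitions each table's rows across data nodes and reassembles result sets for the client. A minimal sketch, with hypothetical table and column names:

-- With MySQL Cluster, rows are distributed across the data nodes automatically:
CREATE TABLE readings (
    id INT NOT NULL,
    payload VARCHAR(255),
    PRIMARY KEY (id)
) ENGINE=NDBCLUSTER
  PARTITION BY KEY (id);

-- Ordinary SQL against any SQL node is fanned out to the data nodes holding the rows:
SELECT * FROM readings WHERE id = 42;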

BerkeleyDB: Receiving truncated keys to bt_compare function in python BTREE

Posted: 11 Sep 2013 02:38 PM PDT

I am using BerkeleyDB 6.0 with the bsddb3 Python bindings. I have a dataset with the BTREE access method, whose keys are strings representing floating point numbers. I have set a compare function via set_bt_compare().

When I try to use db.set_range(key) function, the keys that the compare function receives are sometimes truncated. for instance,

--------------------------------------------------
'left :1378934633890000.0'
--------------------------------------------------
'right:13789346362'

Here, the right key should be '1378934636286548.8'.

Has anyone seen this problem? Any suggestions as to how to fix it?

Thank you.

Postgres functions vs prepared queries

Posted: 11 Sep 2013 01:14 PM PDT

I'm sure it's there but I've not been able to find a simple definitive answer to this in the docs or via Google:

In Postgres, are prepared queries and user defined functions equivalent as a mechanism for guarding against SQL injection? Are there particular advantages in one approach over the other?
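
For concreteness, a minimal sketch of the two mechanisms being compared (the table and column names are hypothetical); in both cases the user-supplied value travels as a parameter rather than being spliced into the SQL text:

-- Prepared statement:
PREPARE find_account (text) AS
    SELECT * FROM accounts WHERE username = $1;
EXECUTE find_account('alice');

-- User-defined SQL function:
CREATE FUNCTION find_account(p_username text) RETURNS SETOF accounts AS $$
    SELECT * FROM accounts WHERE username = p_username;
$$ LANGUAGE sql STABLE;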

Thanks

Can I have a Distributed Database in which one database is MariaDB and the other one is MySQL Cluster?

Posted: 11 Sep 2013 12:33 PM PDT

This is a hypothetical question, of course; I just want to know, because it's one of the questions I'm going to be asked in my presentation, and after all my research over the past 2 weeks I have a feeling the answer is "yes".

Setting timezone per database in MySQL

Posted: 11 Sep 2013 12:27 PM PDT

I wanted to know whether MySQL lets the admin set a time zone per database.

I have three different databases for three different customers, each in its own time zone.

I would like to load the exact same event for each customer and have it run every day at 00:00:01 in their respective time zones. Is this possible?

I can load three different SQL scripts, each specifying its respective time zone, but that doesn't scale well and leaves me with three versions of almost the same file.
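
One workaround worth sketching (there is no per-database setting): MySQL events remember the session time zone in effect when they were created, so the same event body can be created three times, once per customer time zone. This assumes the named time zone tables are loaded and that run_nightly_rollup is a hypothetical procedure:

SET time_zone = 'America/New_York';
CREATE EVENT customer_a_midnight_job
    ON SCHEDULE EVERY 1 DAY STARTS '2013-09-12 00:00:01'
    DO CALL run_nightly_rollup('customer_a');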

SQL output formatting

Posted: 11 Sep 2013 04:38 PM PDT

I want this:

"ACCOUNT_ID","MAJOR_VERSION","MINOR_VERSION","COUNTRY","ACCEPTANCE_TIMESTAMP","AGREEMENT_ID"  "abcdefgh-1234-5678-ijkl-mnopqrstuvwx","20110901","1","CN","1329574013737","tos"  

But I get this:

'"'||"ACCOUNT_ID"||'"'||','||'"'||"MAJOR_VERSION"||'"'||','||'"'||"MINOR_VERSION"||'"'||','||'"'||"COUNTRY"||'"'||','||'"'||"ACCEPTANCE_TIMESTAMP"||'"'||','||'"'||"AGREEMENT_ID"||'"'  "abcdefgh-1234-5678-ijkl-mnopqrstuvwx","20110901","1","CN","1329574013737","tos"  

Using the following SQL:

select '"'|| "ACCOUNT_ID"||'"'||','||    '"'|| "MAJOR_VERSION"||'"'||','||    '"'|| "MINOR_VERSION"||'"'||','||    '"'|| "COUNTRY"||'"'||','||    '"'|| "ACCEPTANCE_TIMESTAMP"||'"'||','||    '"'|| "AGREEMENT_ID"||'"'  from THE_TABLE aaa  where aaa.country='CN' and rownum < 10;  

How can I get the first output?
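
One possible approach, sketched on the assumption that the query is run from SQL*Plus: suppress the generated heading and emit the header line yourself, so the concatenation expression never appears as a column title:

SET HEADING OFF
SET FEEDBACK OFF
SELECT '"ACCOUNT_ID","MAJOR_VERSION","MINOR_VERSION","COUNTRY","ACCEPTANCE_TIMESTAMP","AGREEMENT_ID"'
  FROM dual
UNION ALL
SELECT '"' || "ACCOUNT_ID" || '","' || "MAJOR_VERSION" || '","' || "MINOR_VERSION" || '","'
           || "COUNTRY" || '","' || "ACCEPTANCE_TIMESTAMP" || '","' || "AGREEMENT_ID" || '"'
  FROM THE_TABLE
 WHERE country = 'CN' AND ROWNUM < 10;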

Under what conditions does SQL Server encrypt tempdb?

Posted: 11 Sep 2013 12:37 PM PDT

When does tempdb get encrypted with Transparent Data Encryption? What configuration, etc, causes tempdb to be encrypted this way?
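
For reference, a quick way to check whether tempdb (or any database) is currently encrypted is to query the standard catalog view and DMV; a sketch:

SELECT db.name, db.is_encrypted, dek.encryption_state
FROM sys.databases AS db
LEFT JOIN sys.dm_database_encryption_keys AS dek
       ON dek.database_id = db.database_id;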

Is it possible to dynamically retrieve the number of columns in a view in Oracle?

Posted: 11 Sep 2013 12:42 PM PDT

As the title says, is it possible to retrieve the number of columns in a view dynamically?

I'm learning Oracle 11gR2 and I'm checking out the V$ views. One of the things I want to do is output them to a file (for various reasons I am unable to spool the document), so I was going to output them using Java; however, I need to know how many columns are in each view - a daunting task with 536 views - and now I'm curious whether I can do this dynamically.

I tried using user_tab_columns but it returns 0 for views.

Note: performance is not key; this is a learning exercise for me, so doing it correctly is more important than doing it quickly (if it's even possible).
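
One way to sketch this: the V$ views are public synonyms for SYS-owned views named V_$..., so their column counts can be read from ALL_TAB_COLUMNS (which covers views as well as tables, despite the name):

SELECT table_name AS view_name, COUNT(*) AS column_count
  FROM all_tab_columns
 WHERE owner = 'SYS'
   AND table_name LIKE 'V\_$%' ESCAPE '\'
 GROUP BY table_name
 ORDER BY table_name;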

One Materialized Views in Two Refresh Groups

Posted: 11 Sep 2013 08:37 PM PDT

I have 5 MViews that I want to refresh on two occasions: every Sunday and on the 1st of the month. I created a refresh group for the weekly refresh and that works fine. But when I tried to create the second refresh group for the monthly refresh I got a "materialized view is already in a refresh group" error.

Can a materialized view only be in one refresh group? What options do I have to refresh it at different intervals?
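
One alternative worth sketching: instead of a second refresh group, schedule a job that refreshes the same list of materialized views directly with DBMS_MVIEW.REFRESH (the MV names and schedule below are hypothetical):

BEGIN
    DBMS_SCHEDULER.CREATE_JOB(
        job_name        => 'REFRESH_MVIEWS_MONTHLY',
        job_type        => 'PLSQL_BLOCK',
        job_action      => 'BEGIN DBMS_MVIEW.REFRESH(''MV_A,MV_B,MV_C,MV_D,MV_E'', ''C''); END;',
        repeat_interval => 'FREQ=MONTHLY;BYMONTHDAY=1',
        enabled         => TRUE);
END;
/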

Thanks

No SA password. No SQL Server Management Studio. No OS authentication

Posted: 11 Sep 2013 11:06 AM PDT

I have this problem.

I need to do some administrative tasks on a MS SQL Database using SA account.

  • OS authentication is not set.
  • SQL Server Management Studio is not installed.
  • I have no other admin database account.
  • I do have access to a Windows admin account
  • The only tools installed on the database server are as shown:

(screenshot: the tools installed on the database server)

How can I activate OS authentication so I can log into the database and reset the SA password?
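
For reference, the documented recovery path is to start the instance in single-user mode, in which members of the local Administrators group can connect as sysadmin; a sketch, assuming a default instance and that the sqlcmd client is among the installed tools:

REM from an elevated command prompt:
net stop MSSQLSERVER
net start MSSQLSERVER /m
sqlcmd -S . -E

-- then, inside sqlcmd:
ALTER LOGIN sa ENABLE;
ALTER LOGIN sa WITH PASSWORD = 'NewStrongPassword1!';
GO

Remove the /m startup option and restart the service normally afterwards.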

Moving Data between Partitions

Posted: 11 Sep 2013 12:20 PM PDT

My database is partitioned up to 2013, and that last partition contains all the >= 2013 records. Now I have a test table with records beyond 2013, so I created a 2014 file and filegroup and added partitions on a quarterly basis; but as soon as the partitions were created, some records switched from the 2013 partition to the 2014 partitions.

I have tried this with only one table of almost 12 million records, and I have almost 100 tables with more than 100 million records. Is this good practice, or should I use the approach described at http://technet.microsoft.com/en-us/library/ms191174(v=sql.105).aspx instead?
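
For context, the behaviour described is expected: splitting the last, open-ended partition at a new boundary physically moves every row above that boundary into the new partition, which is a size-of-data operation. The statements involved look like this (function, scheme and filegroup names are hypothetical):

ALTER PARTITION SCHEME ps_bydate NEXT USED FG_2014;
ALTER PARTITION FUNCTION pf_bydate() SPLIT RANGE ('2014-01-01');

ALTER PARTITION SCHEME ps_bydate NEXT USED FG_2014;
ALTER PARTITION FUNCTION pf_bydate() SPLIT RANGE ('2014-04-01');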

Create user for alternate port

Posted: 11 Sep 2013 11:32 AM PDT

I am running a private MySQL instance on WebFaction

WebFaction is a shared host that provides both a public MySQL instance and the option of your own private MySQL instance. The private instance is a 1-click install using their system, which then provides a root user and a port to you. The public instance is always just localhost, and the private instance is localhost:port.

What is the proper way to set up a user for that private instance?

CREATE USER 'user'@'localhost:port' IDENTIFIED BY 'password';
CREATE USER 'user'@'localhost' IDENTIFIED BY 'password';
CREATE USER 'user'@'%' IDENTIFIED BY 'password';

Does it matter?

I'm running into an issue trying to set up a Simple Machines Forum instance. The SMF forum will connect to localhost just fine, and it seems to connect to localhost:port okay (at the point in the code where it makes the connection to the DB, it is successful), but then it will only read and write to localhost and not localhost:port.

I have successfully created a db user which can connect to, read, and write via localhost:port, but refuses to do so via SMF.

Any thoughts?
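
For what it's worth, MySQL account names are matched on user and host only; the port is never part of the account, so 'user'@'localhost:port' is just a literal host string that will never match. A sketch of the usual setup (user, password and database names are hypothetical); whether 'localhost' or '127.0.0.1' matches depends on how the client connects (socket vs TCP) and on name resolution:

CREATE USER 'smf_user'@'localhost' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON smf_db.* TO 'smf_user'@'localhost';
FLUSH PRIVILEGES;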

Attach trigger to stored procedure

Posted: 11 Sep 2013 01:01 PM PDT

I have a CLR sp in SQL Server 2008 R2, and I want to count how often it is called, in order to create a statistic on data quality. (The sp allows manual correction of data).

How can I make a counter go up every time the CLR SP is called? Do I necessarily have to change the SP itself?
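
If an approximate count is enough, one option that needs no change to the procedure is the plan-cache statistics DMV; a sketch (counts reset whenever the plan leaves the cache or the instance restarts):

SELECT OBJECT_NAME(ps.object_id, ps.database_id) AS procedure_name,
       ps.execution_count,
       ps.last_execution_time
FROM sys.dm_exec_procedure_stats AS ps
WHERE ps.database_id = DB_ID();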

Suggestions are appreciated.

Find the size of each index in a MySQL table

Posted: 11 Sep 2013 08:13 PM PDT

I am trying to figure out the size of each index of a table.

SHOW TABLE STATUS gives "Index_length", which is the sum of all the indexes of the table. However, if a table has multiple indexes (e.g. in an employee table, emp_id, ename and deptno might be 3 different indexes), I want their sizes separately:

emp_id : xx Kb    ename  : yy Kb    deptno : zz Kb     

How can I get these?
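
If the table is InnoDB on MySQL 5.6 or later (with persistent statistics enabled, the default there), one sketch is to read the per-index page counts from mysql.innodb_index_stats; schema and table names below are placeholders:

SELECT index_name,
       ROUND(stat_value * @@innodb_page_size / 1024) AS size_kb
  FROM mysql.innodb_index_stats
 WHERE database_name = 'mydb'
   AND table_name = 'employee'
   AND stat_name = 'size';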

Can't connect to service after updating ODBC driver

Posted: 11 Sep 2013 08:22 PM PDT

I am upgrading a program at work, and one of the changes is that it now uses PostgreSQL 9.2.4 instead of 8. I was getting a 'client encoding mismatch' error, so I updated the ODBC driver and the problem went away. However, with the new driver, my program no longer wants to connect to a custom service that it uses.

The custom service uses postgres a lot. The error I'm getting is '(10061) connection is forcefully rejected'. Postgres is configured to accept connections from any IP address, so I'm not sure why I'm getting this error. The program will connect fine to the custom service with the old version of the ODBC driver, but as soon as I start using the new driver, it does not want to connect. I've checked the services list and both postgres and the custom service are started.

At one point, while trying to connect to the custom service, I was getting an error that said something like "OLE DB error: cannot send query to the backend". However, I can't seem to reproduce this error message anymore, it is simply not connecting.

I don't have a lot of database experience, so I apologize if this information is confusing or incomplete. Please let me know if you need clarification on anything.

Any suggestions would be appreciated, even if they are just ideas on how to troubleshoot this issue.

Can I run concurrent backups of multiple read only filegroups?

Posted: 11 Sep 2013 06:10 PM PDT

I have 62 read-only filegroups in a SQL Server 2008 Enterprise database. Can I back up multiple read-only filegroups at the same time? I would assume this to be the case, but given that this database is over 20 TB in size I do not want to invalidate any files by giving it the ol' college try.
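
For reference, the per-filegroup backup statement in question looks like this (names are hypothetical); whether several of them can run against the same database at once is the open question:

BACKUP DATABASE BigDb
    FILEGROUP = N'ReadOnlyFG01'
    TO DISK = N'X:\Backups\BigDb_ReadOnlyFG01.bak'
    WITH CHECKSUM;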

MySQL MyISAM index causes query to match no rows; indexes disabled, rows match

Posted: 11 Sep 2013 07:20 PM PDT

I created a table and index as described in this SE post, but when I query the table for a particular ID, no matches are found. When I disable the index, matches are found.

Commands ran:

CREATE TABLE mytable (id1 int, id2 int, score float) ENGINE=MyISAM;
LOAD DATA INFILE '50-billion-records.txt' INTO TABLE mytable (id1, id2, score);
ALTER TABLE mytable ADD INDEX id1_index (id1);

SELECT COUNT(*) FROM mytable;                    # returns 50 billion
SELECT COUNT(DISTINCT id1) FROM mytable;         # returns 50K, should be about 50M
SELECT COUNT(*) FROM mytable WHERE id1 = 49302;  # returns 0 results

ALTER TABLE mytable DISABLE KEYS;
SELECT * FROM mytable WHERE id1 = 49302 LIMIT 1; # returns 1 row

Is this a bug with MySQL, or does this behavior make sense for some reason?

Note: When I ran ALTER TABLE mytable ENABLE KEYS; just now, the command acted as if it were building the index for the first time (it's still running after 30 minutes, and memory usage is at 80 GB, which matches my setting of myisam_sort_buffer_size=80G). I'll reply when this command finishes running (the original ALTER .. ADD INDEX took 7.5 hours, so it may be a while).

Update: Running SHOW PROCESSLIST indicates "Repair with keycache" is taking place with my ENABLE KEYS command.

Update 2: I killed the repair job on the original index because after several hours, the memory and IO seemed pretty constant, and I hoped if I started over, it may just work. So in second pass, I rebuilt the table and index, and after doing so, the exact same result occurs.

As requested, here is the EXPLAIN output for the queries with the index enabled:

mysql> explain select * from mytable where id1 = 49302;
+----+-------------+---------+------+---------------+-----------+---------+-------+------+-------------+
| id | select_type | table   | type | possible_keys | key       | key_len | ref   | rows | Extra       |
+----+-------------+---------+------+---------------+-----------+---------+-------+------+-------------+
|  1 | SIMPLE      | mytable | ref  | id1_index     | id1_index | 5       | const |    1 | Using where |
+----+-------------+---------+------+---------------+-----------+---------+-------+------+-------------+
1 row in set (0.00 sec)

mysql> explain SELECT COUNT(DISTINCT id1) FROM mytable;
+----+-------------+---------+-------+---------------+-----------+---------+------+-----------+--------------------------+
| id | select_type | table   | type  | possible_keys | key       | key_len | ref  | rows      | Extra                    |
+----+-------------+---------+-------+---------------+-----------+---------+------+-----------+--------------------------+
|  1 | SIMPLE      | mytable | range | NULL          | id1_index | 5       | NULL | 170331743 | Using index for group-by |
+----+-------------+---------+-------+---------------+-----------+---------+------+-----------+--------------------------+
1 row in set (0.01 sec)

Here are the EXPLAIN plans after disabling the indexes (note: 25 billion is the correct number of records in the table, not 50 billion as mentioned above):

mysql> explain select * from mytable where id1 = 66047071;
+----+-------------+---------+------+---------------+------+---------+------+-------------+-------------+
| id | select_type | table   | type | possible_keys | key  | key_len | ref  | rows        | Extra       |
+----+-------------+---------+------+---------------+------+---------+------+-------------+-------------+
|  1 | SIMPLE      | mytable | ALL  | NULL          | NULL | NULL    | NULL | 25890424835 | Using where |
+----+-------------+---------+------+---------------+------+---------+------+-------------+-------------+
1 row in set (0.00 sec)

mysql> explain SELECT COUNT(DISTINCT id1) FROM mytable;
+----+-------------+---------+------+---------------+------+---------+------+-------------+-------+
| id | select_type | table   | type | possible_keys | key  | key_len | ref  | rows        | Extra |
+----+-------------+---------+------+---------------+------+---------+------+-------------+-------+
|  1 | SIMPLE      | mytable | ALL  | NULL          | NULL | NULL    | NULL | 25890424835 |       |
+----+-------------+---------+------+---------------+------+---------+------+-------------+-------+
1 row in set (0.00 sec)

Update 3: Still hoping to solve this oddity. Is there something I can do with myisamchk that might fix this? Since I completely repopulated the table and rebuilt the index (i.e. starting from scratch) and got the same result, I assume this was not just some freak occurrence, and that it is due to some internal limit I'm unaware of. On a side note, I've tried switching to Postgres for this dataset, but running into some other unrelated issues (that I'll leave to a different question), so fixing this index is still a top priority for me. Thanks!!
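
In case it helps, a sketch of the myisamchk route mentioned above (paths and buffer size are placeholders; it must be run with the server stopped, or with the table flushed and locked, since it works on the .MYI file directly):

myisamchk --check --extend-check /var/lib/mysql/mydb/mytable.MYI
myisamchk --recover --sort_buffer_size=8G /var/lib/mysql/mydb/mytable.MYI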

Update 4: Running CHECK TABLE currently. Will post back with updates as I have them.

Error: "Storage Engine for the Table Doesn't Support Nullable Columns" (SequelPro)

Posted: 11 Sep 2013 02:20 PM PDT

I'm trying to load a very normal .csv file (created with Excel 2011 for Mac) into SequelPro (using MySQL) on my Mac, and I've recently started getting this error consistently. Can anybody tell me what it is and how to fix it?

An error occurred while trying to add the new table 'wblist' by

CREATE TABLE `wblist` (
  `FILE` VARCHAR(255),
  `FIRSTNAME` VARCHAR(255),
  `MIDDLE` VARCHAR(255),
  `LASTNAME` VARCHAR(255),
  `FULLNAME` VARCHAR(255),
  `GENDER` VARCHAR(255),
  `ADDRESS` VARCHAR(255),
  `CITY` VARCHAR(255),
  `STATE` VARCHAR(255),
  `ZIP` VARCHAR(255),
  `PHONE` BIGINT(11),
  `UNIT` VARCHAR(255),
  `JOB` VARCHAR(255),
  `AREA` VARCHAR(255),
  `TIME` VARCHAR(255),
  `MAILINGADDRESS` VARCHAR(255),
  `MAILINGCITY` VARCHAR(255),
  `MAILINGSTATE` VARCHAR(255),
  `MAILINGZIP` VARCHAR(255),
  `ID` BIGINT(11),
  `CONFIDENCE` VARCHAR(255),
  `BIRTHDATE` VARCHAR(255),
  `AGE` INT(11),
  `RACE` VARCHAR(255),
  `ETHNICITY` VARCHAR(255),
  `RELIGION` VARCHAR(255),
  `PARTY` VARCHAR(255),
  `REGISTRATIONDATE` VARCHAR(255),
  `VOTERSTATUS` VARCHAR(255),
  `OtherPhone` VARCHAR(255),
  `POSSIBLEADDRESS` VARCHAR(255),
  `POSSIBLEMAILADDRESS` VARCHAR(255),
  `RECID` VARCHAR(255)
) ENGINE=CSV;

MySQL said: The storage engine for the table doesn't support nullable columns

This is stopping me before I'm able to import the table. Thanks for the help!
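
The CSV storage engine does not support nullable columns, so either every column must be declared NOT NULL or a different engine must be used. A sketch of both fixes, trimmed to two columns for brevity:

-- Option 1: keep ENGINE=CSV, but declare every column NOT NULL
CREATE TABLE `wblist` (
  `FILE`      VARCHAR(255) NOT NULL,
  `FIRSTNAME` VARCHAR(255) NOT NULL
  -- ... remaining columns, all NOT NULL ...
) ENGINE=CSV;

-- Option 2: import into an engine that allows NULLable columns
CREATE TABLE `wblist` (
  `FILE`      VARCHAR(255),
  `FIRSTNAME` VARCHAR(255)
  -- ... remaining columns unchanged ...
) ENGINE=MyISAM;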

optimizing MySQL for traffic analytics system

Posted: 11 Sep 2013 08:20 PM PDT

background:

I've developed a URL shortener system like Bitly with the same features, so the system also tracks clicker info and presents it as graphs (analytics data) to the person who shortened the link. Currently I'm using MySQL and have a table to store click info with this schema:

visit_id (int)
ip (int)
date (datetime)
country
browser
device
os
referrer (varchar)
url_id (int)   // foreign key to the shortened URL

and, for now, only the url_id field has an index.

The system should present click analytics for whatever time period the user wants, for example the past hour, the past 24 hours, the past month, ...

For example, to generate graphs for the past month, I run the following queries:

SELECT all DAY(date) AS period, COUNT( * )
FROM (
      SELECT *
      FROM visits
      WHERE url_id = '$url_id'
     ) AS URL
WHERE DATE > DATE_SUB( CURRENT_TIMESTAMP( ), INTERVAL 1 MONTH )
GROUP BY DAY( DATE )

//another query to display clicker browsers in this period
//another query to display clicker countries in this period
// ...

issues:

  • for a shortened link with about 500,000 clicks, it takes about 3-4 seconds just to calculate the first query, so about 10-12 seconds for all the queries, which is terrible.
  • a lot of memory and CPU is needed to run such queries

questions:

1- How can I improve and optimize the structure so that the analytics of high-traffic links are shown in less than 1 second (like Bitly and similar web apps) and with less CPU and RAM usage? Should I make an index on the fields date, country, browser, device, os, referrer? If yes, how do I do that for the date field, since I sometimes group clicks by DAY(date), sometimes by HOUR(date), sometimes by MINUTE(date), and so on?

2- Is MySQL suitable for this application? Assume that at maximum my application should handle 100 million links and 10 billion clicks on them in total. Should I consider switching to a NoSQL solution, for example?

3- If MySQL is OK, is my database design and table structure proper and well designed for my application's needs? Or do you have better recommendations and suggestions?

UPDATE: I made an index on the referrer column, but it didn't help at all and even hurt performance; I think that's because of the low cardinality of this column (and the others) and the large resulting index size relative to the RAM of my server.

I think making indexes on these columns would not solve my problem; my thinking is along one of these lines:

1- if using MySQL, maybe generating statistics for high-traffic links with background processing is better than calculating them live at the user's request.

2- using some caching solution like memcached to help MySQL with high traffic links.

3- using a NoSQL database such as MongoDB with solutions like map-reduce, which I am only vaguely familiar with and have never used.

what do you think?
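
For what it's worth, a sketch of idea 1 combined with a covering index (table and column names follow the schema above; the rollup table is hypothetical):

-- covering index for the per-link, per-period scans
ALTER TABLE visits ADD INDEX idx_url_date (url_id, date);

-- pre-aggregated daily rollup, refreshed by a background job
CREATE TABLE visits_daily (
    url_id INT NOT NULL,
    day    DATE NOT NULL,
    clicks INT NOT NULL,
    PRIMARY KEY (url_id, day)
);

INSERT INTO visits_daily (url_id, day, clicks)
SELECT url_id, DATE(date), COUNT(*)
  FROM visits
 WHERE date >= CURRENT_DATE - INTERVAL 1 DAY
 GROUP BY url_id, DATE(date)
ON DUPLICATE KEY UPDATE clicks = VALUES(clicks);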

Primary replica set server goes secondary after secondary fails

Posted: 11 Sep 2013 05:20 PM PDT

I have a 2-server replica set in which, after the secondary fails, the primary goes into secondary mode while the secondary is in STARTUP2 (recovering). The problem with this is that I can't use the collection stored in that replica set freely; I get errors when trying to use the collection:

pymongo.errors.OperationFailure: database error: ReplicaSetMonitor no master found for set: rs2  

Sometimes if I restart the mongod instances, the server rs2-1 is the primary for a while, but after some time (while the secondary is recovering) I see this in the logs of rs2-1 (the primary):

Tue May  7 17:43:40.677 [rsHealthPoll] replSet member XXX.XXX.XXX.XXX:27017 is now in state DOWN
Tue May  7 17:43:40.677 [rsMgr] can't see a majority of the set, relinquishing primary
Tue May  7 17:43:40.682 [rsMgr] replSet relinquishing primary state
Tue May  7 17:43:40.682 [rsMgr] replSet SECONDARY
Tue May  7 17:43:40.682 [rsMgr] replSet closing client sockets after relinquishing primary

Is there an easy way to make the primary keep being primary after the secondary fails? Am I doing something wrong?

Thanks in advance!

MySQL backup InnoDB

Posted: 11 Sep 2013 01:20 PM PDT

I have a VoIP server running 24x7. Even at the lowest-traffic hour, at least 150+ users are connected. My server runs MySQL with the InnoDB engine on the Windows 2008 platform. I would like to take at least two full database backups without shutting down my service.

As per Peter Zaitsev, the founder of Percona, mysqldump --single-transaction is not always good.

read here if you are interested

As I'm not a DBA, I would like to know which would be the best solution for taking a database backup in my scenario.
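
For reference, the mysqldump form being debated looks like this; with all tables in InnoDB it takes a consistent snapshot without blocking writers, though (as the linked post argues) it can be slow to dump and much slower to restore for large datasets:

mysqldump --single-transaction --routines --events --all-databases > full_backup.sql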

Thanks,

Strange characters in mysqlbinlog output

Posted: 11 Sep 2013 11:20 AM PDT

Has anyone experienced this? Data replicates fine, but when output with mysqlbinlog there are hidden characters that break the input.

  • mysqlbinlog Ver 3.3 for Linux at x86_64
  • mysql 5.5.28 server

Thanks! Julie

Connecting to a SQL Server database from a Flash program

Posted: 11 Sep 2013 12:20 PM PDT

I currently have the ability to utilize Microsoft SQL Server 2012. I am developing a project with Adobe Flash Builder 4.7.

If I link my database with Adobe Flash Builder, are there any additional steps I must take in order to make the database live, or, as long as my computer is running, will this database be accessible from any device that is utilizing it?

In other words is this a LAN only system or does it automatically make itself available for the programs I link to it?

Why Does the Transaction Log Keep Growing or Run Out of Space?

Posted: 11 Sep 2013 03:06 PM PDT

This one seems to be a common question in most forums and all over the web, it is asked here in many formats that typically sound like this:

In SQL Server -

  • What are some reasons the transaction log grows so large?
  • Why is my log file so big?
  • What are some ways to prevent this problem from occurring?
  • What do I do when I get myself on track with the underlying cause and want to put my transaction log file to a healthy size?
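
A first diagnostic step for any of these variants is to ask SQL Server why it cannot reuse the log; a sketch:

SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases;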

Does SQL manage foreign key constraints in terms of Insertion?

Posted: 11 Sep 2013 01:26 PM PDT

Say I have a table tbl1 with columns A1 and A2, where A1 is the primary key. Then I have another table tbl2 with columns A3 and A1, which together form the primary key, and A1 references A1 in tbl1 as a foreign key.

Am I able to insert a tuple into tbl2 with an A1 value that is not in tbl1? Does SQL handle this for us as an error? Or what happens in this situation?
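
A minimal sketch of the situation described (any engine that enforces foreign keys behaves this way; note that MySQL's MyISAM engine silently ignores FOREIGN KEY clauses):

CREATE TABLE tbl1 (
    A1 INT PRIMARY KEY,
    A2 INT
);

CREATE TABLE tbl2 (
    A3 INT,
    A1 INT,
    PRIMARY KEY (A3, A1),
    FOREIGN KEY (A1) REFERENCES tbl1 (A1)
);

INSERT INTO tbl1 VALUES (1, 10);
INSERT INTO tbl2 VALUES (100, 1);   -- succeeds: A1 = 1 exists in tbl1
INSERT INTO tbl2 VALUES (101, 2);   -- rejected with a foreign key violation: no row in tbl1 has A1 = 2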

Why are queries parsed in such a way that disallows the use of column aliases in most clauses?

Posted: 11 Sep 2013 11:29 AM PDT

While trying to write a query, I found out (the hard way) that SQL Server logically processes the WHERE clause of a query long before the SELECT clause when executing it.

The MSDN docs say that the general logical parsing order is such that SELECT is parsed nearly last (thus resulting in "no such object [alias]" errors when trying to use a column alias in other clauses). There was even a suggestion to allow for aliases to be used anywhere, which was shot down by the Microsoft team, citing ANSI standards compliance issues (which suggests that this behavior is part of the ANSI standard).
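
A minimal sketch of the behaviour (the table and columns are hypothetical), including the usual derived-table workaround:

-- fails: the alias is not visible to WHERE
SELECT price * qty AS total
FROM orders
WHERE total > 100;

-- works: wrap the expression in a derived table (or repeat the expression)
SELECT t.total
FROM (SELECT price * qty AS total FROM orders) AS t
WHERE t.total > 100;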

As a programmer (not a DBA), I found this behavior somewhat confusing, since it seems to me that it largely defeats the purpose of having column aliases (or, at the very least, column aliases could be made significantly more powerful if they were parsed earlier in the query execution), since the only place you can actually use the aliases is in ORDER BY. As a programmer, it seems like it's missing a huge opportunity for making queries more powerful, convenient, and DRY.

It looks like it's such a glaring issue that it stands to reason, then, that there are other reasons for deciding that column aliases shouldn't be allowed in anything other than SELECT and ORDER BY, but what are those reasons?

Sql Server 2008 x64 ODBC Linked Server to Oracle Not Working

Posted: 11 Sep 2013 02:01 PM PDT

I have an install of Sql Server 2008 x64, installed on two boxes. One is a Win 7 x64 workstation (Sql Server 2008 x64 SP2 developer edition), the other is a Windows Server 2003 x64 (Sql Server 2008 x64 SP2 Enterprise edition). I am trying to create a linked server to an external vendor's Oracle instance. I have installed the Oracle ODAC 11g 64 bit drivers using the full install (not the XCopy version).

The drivers appear to have all been installed correctly. I have created and updated my tnsnames.ora file using the correct IP, port, etc to the remote server and rebooted. Using the 64 ODBC admin tool, I am able to create the ODBC connection to the Oracle server and the "Test" button returns as successful using the alias in the tnsnames.ora file and the correct user ID and password.

I then go in to Sql Server 2008 and try to create my linked servers. I can create the OLE DB linked server and it connects successfully, I can list the tables/views in the catalog and do queries against them with only one significant problem. For tables with TIMESTAMP fields, a normal 4-part query throws fits. Looking around this appears to be a common problem with OLEDB linked servers from Sql Server to Oracle and using OPENQUERY is the most common workaround, which I do have working.

The ODBC connection, which is what our vendor recommends using to connect to them, is where I have big problems. I can create the linked server, which appears to be successfully created using the System DSN ODBC connection I created earlier. I can view the lists of tables/views in the catalog and they all appear to show up correctly. However, when I try to get data, it fails completely.

If I try to right click on a view name and select Script -> SELECT To.... I get a message that says that:

[tablename] contains no columns that can be selected or the current user does not have permissions on that object.

If I try to script out a SELECT manually and run it, as I know most of the column names, I get an error message:

The OLE DB provider "MSDASQL" for linked server [linked server name] returned an invalid column definition for table [table name].

The vendor states that the user ID (the same one in both cases) has the proper rights to the tables/views, which it appears to as the OLE DB connection mostly works. The Oracle server is 10g, but I don't know if it's 32 or 64 bit. Would that make a difference?

Right now I'm working on getting this working from the Win7 x64 workstation, but a short test on the 2003 server yielded the same results. If I have to, I guess I can make the OLEDB/OPENQUERY solution work. However it's not ideal or recommended by our vendor. Any ideas what I might be missing on getting the ODBC connection working?
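
For reference, a sketch of the MSDASQL linked server definition and the pass-through form that sidesteps column-metadata translation (server, DSN and login names are placeholders):

EXEC master.dbo.sp_addlinkedserver
     @server = N'ORA_ODBC', @srvproduct = N'', @provider = N'MSDASQL', @datasrc = N'MyOracleDSN';
EXEC master.dbo.sp_addlinkedsrvlogin
     @rmtsrvname = N'ORA_ODBC', @useself = N'False', @rmtuser = N'ora_user', @rmtpassword = N'ora_pwd';

-- pass-through query, the same style of workaround already used for the OLE DB TIMESTAMP problem:
SELECT * FROM OPENQUERY(ORA_ODBC, 'SELECT some_column FROM some_view WHERE ROWNUM <= 10');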

MySQL - fastest way to ALTER TABLE for InnoDB

Posted: 11 Sep 2013 02:00 PM PDT

I have an InnoDB table that I want to alter. The table has ~80M rows and quite a few indexes.

I want to change the name of one of the columns and add a few more indices.

  • What is the fastest way to do it (assuming I could suffer even downtime - the server is an unused slave)?
  • Is a "plain" ALTER TABLE the fastest solution?

At this time, all I care about is speed :)
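
One point worth sketching: putting the rename and all the new indexes into a single ALTER TABLE means the table is copied and rebuilt once instead of once per change (column names and types below are hypothetical):

ALTER TABLE big_table
    CHANGE COLUMN old_name new_name INT NOT NULL,
    ADD INDEX idx_col_a (col_a),
    ADD INDEX idx_col_b (col_b);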
