Saturday, May 18, 2013

oracle database query [closed]

Posted: 18 May 2013 03:40 PM PDT

I have a problem writing a database query.

Problem: my table has the following fields: PRIMARY_ID (PK), E_ID, BANK, REQUEST_STATUS, START_DATE, STATUS.

Data: REQUEST_STATUS may contain [new, mod, bulk load, bulk delete, delete]

Condition

Scenario 1:

If REQUEST_STATUS is new, mod or bulk load (or any mix of the three) for a particular E_ID, then I need all the rows for that E_ID. But if that E_ID also has a REQUEST_STATUS of delete or bulk delete, then I don't need that E_ID's data up to that date; if any data exists after the bulk delete or delete date, I do need those rows.

I have attached the data for better understanding. Let me know if you need more information.

  100001 55554111 SBI NEW       5/5/2013  Complete
  100002 55556112 SBI NEW       6/5/2013  Complete
  100003 55554111 SBI MOD       6/5/2013  Complete
  100004 55554111 SBI MOD       7/5/2013  Complete
  100005 55554111 SBI MOD       8/5/2013  Failure
  100006 55556112 SBI MOD       8/5/2013  Complete
  100007 55556113 UTI BULK LOAD 8/5/2013  Complete
  100008 55556111 SBI MOD       9/5/2013  Complete
  100009 55556113 UTI MOD       9/5/2013  Complete
  100010 55556113 UTI MOD       10/5/2013 Failure
  100011 55554111 SBI MOD       11/5/2013 Complete
  100012 55556113 UTI DEL       11/5/2013 Complete
  00013  55556112 SBI DEL       12/5/2013 Complete
  100014 55554111 SBI MOD       12/5/2013 Completed

Result set:

  100001 55554111 SBI NEW 5/5/2013  Complete
  100003 55554111 SBI MOD 6/5/2013  Complete
  100004 55554111 SBI MOD 7/5/2013  Complete
  100011 55554111 SBI MOD 11/5/2013 Complete
  100014 55554111 SBI MOD 12/5/2013 Complete
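A minimal sketch of one way to express this in Oracle, using an analytic window per E_ID; the table name requests is an assumption, and the status literals follow the sample data:

  SELECT primary_id, e_id, bank, request_status, start_date, status
  FROM (
      SELECT r.*,
             -- latest delete/bulk-delete date per E_ID; NULL if none
             MAX(CASE WHEN request_status IN ('DEL', 'BULK DELETE')
                      THEN start_date END)
               OVER (PARTITION BY e_id) AS last_delete_date
      FROM requests r
  )
  WHERE request_status NOT IN ('DEL', 'BULK DELETE')
    AND (last_delete_date IS NULL OR start_date > last_delete_date)
    AND status = 'Complete';  -- the sample result also drops the Failure rows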

Oracle 11g install on Debian Wheezy does not start

Posted: 18 May 2013 02:40 PM PDT

I successfully installed Oracle 11g R2 on my laptop for evaluation, by piecing together various sources of documentation. I saw the Oracle daemons running after installation, but I have a few problems:

  • After a reboot, the instance does not start and does not even create a log file (only the TNS listener starts).
  • The oracle user's ORACLE_SID gets a default value (AL32UTF8) instead of the value INFO that I set in the /etc/profile.d/oracle.sh initialisation script (created during install). Where does this come from?
  • Where can I start investigating? (See the sketch below.)
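A place to start, as a sketch (the SID value and script path come from the question; the rest is generic): Linux installs do not autostart databases unless /etc/oratab carries the Y flag and dbstart is wired into an init script, so bring the instance up by hand and read the error it prints:

  su - oracle
  echo $ORACLE_SID            # check what the environment actually contains
  sqlplus / as sysdba <<'EOF'
  STARTUP
  EOF
  # autostart on boot additionally needs the Y flag in /etc/oratab
  # plus a dbstart hook in an init script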

Thanks for your help.

Best regards,

Fred

Database management tool for compact edition (.sdf) database

Posted: 18 May 2013 02:00 PM PDT

What is the best database management tool for remotely managing a compact edition (.sdf) database created in WebMatrix?

Change the data directory of a Postgres database cluster

Posted: 18 May 2013 01:43 PM PDT

I have my present PostgreSQL database cluster at /mnt/my_hard_drive, which I want to change to /home/myfolder. I also want to move all the databases in the present cluster to /home/myfolder. Is there a way to do this?

One way I know is to dump my databases in some form (e.g. .sql) and reconstruct them from the dump, but since my database size is 5 TB I don't want to use that approach. Is there some other way to achieve this? The machine stays the same; all I want to do is move my database from the hard drive to my home folder.
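A minimal sketch, assuming a Debian-style service name and that the path above is the cluster's data directory; a file-level copy avoids the dump entirely:

  sudo service postgresql stop
  sudo rsync -a /mnt/my_hard_drive/ /home/myfolder/
  # repoint the server: either edit data_directory in postgresql.conf
  # or update PGDATA in the init script, depending on the distribution
  sudo service postgresql start

Ownership and permissions (the postgres user) must survive the copy, which rsync -a preserves.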

SQL Server: Change drive letter (which contains system dbs)

Posted: 18 May 2013 08:47 PM PDT

Is it possible to safely change the drive letter of a volume that holds only system databases? What precautions should be taken, and how should it be done? (I know I can just go to Computer Management > Storage and change the drive letter, but could that have negative consequences for SQL Server operation?)
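A sketch of the usual preparation, not an official procedure; the new letter E: and the file paths are placeholders, and the logical file names should be checked against sys.master_files. The non-master system databases can be repointed with ALTER DATABASE before the letter changes, while master's location lives in the service startup parameters:

  ALTER DATABASE msdb  MODIFY FILE (NAME = MSDBData, FILENAME = 'E:\Data\MSDBData.mdf');
  ALTER DATABASE msdb  MODIFY FILE (NAME = MSDBLog,  FILENAME = 'E:\Data\MSDBLog.ldf');
  ALTER DATABASE model MODIFY FILE (NAME = modeldev, FILENAME = 'E:\Data\model.mdf');
  ALTER DATABASE model MODIFY FILE (NAME = modellog, FILENAME = 'E:\Data\modellog.ldf');
  -- master and the error log are set via the -d, -l and -e startup parameters
  -- in SQL Server Configuration Manager; the changes take effect on restart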

Any suggestion will be helpful, thanks in advance!

LISTEN / NOTIFY privileges

Posted: 18 May 2013 06:51 PM PDT

I have a single postgres database with two users: Alice and Bob.

I would like to be able to do a NOTIFY alice_channel 'sensitive data' without Bob being able to sneakily LISTEN in just by guessing that the channel name is 'alice_channel'.

In practice the channel names are very hard to guess, but this is security through obscurity at best.

Am I correct in believing that there is no way to prevent a database user from using (or abusing) LISTEN and NOTIFY? That is, there do not appear to be any associated privileges that can be granted or revoked.
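For illustration, a minimal sketch (channel name from the question): both statements are ordinary SQL available to any connected role, which is exactly why there is nothing to revoke:

  LISTEN alice_channel;                    -- nothing stops Bob from issuing this too
  NOTIFY alice_channel, 'sensitive data';  -- delivered to every current listener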

Is this a dead end?

MySQL: How to recover/restore corrupted Innodb data files?

Posted: 18 May 2013 05:53 PM PDT

A while ago, the Windows 7 system running my MySQL Server 5.5.31 crashed and corrupted the InnoDB database. The weekly backup that's available does not cover all the tables created in the meantime, so I want to recover as much as possible from the data. Right after the crash, I copied the whole MySQL data folder to an external drive; I would like to use this as the starting point for my rescue attempts.

In the following I'll describe the steps of my (not yet convincing) rescue attempt, and would be thankful for any comments or guidance on how to improve it:
1. I did a fresh install of MySQL Server 5.5.31 on another PC.
2. I stopped the MySQL service with "net stop MySQL" at the command prompt.
3. I already figured out that I need to adjust the InnoDB log file size in my.ini, as it deviates (256 MB) from the default value (19 MB).
4. In my.ini, I also set innodb_force_recovery=6.
5. In the data folder of the fresh installation, I overwrote ibdata1, ib_logfile0 and ib_logfile1 with those recovered from the crashed machine. I also copied the relevant database folders in here (NOT the standard mysql, test and performance folders).
6. I started the MySQL service with "net start MySQL".
7. I went into MySQL Workbench, opened my server instance, went to Data Export, basically left the default settings, and had every table of my databases exported as an individual dump file. I also set stored procedures to be dumped; otherwise I did not change the defaults.
8. I started the dump process; it made its way through 43 of 195 tables. Of these 43:
   - some cannot be recovered, yielding the error "mysqldump: Got error: 1146: Table '...whatever...' doesn't exist when doing LOCK TABLES",
   - but many can. I assume that when the dump yields no error, the table's data is not corrupted.

Then, after the 44th table, all remaining dumps fail because the server can no longer be connected to:

  mysqldump: Got error: 2003: Can't connect to MySQL server on 'localhost' (10061) when trying to connect
  Operation failed with exitcode 2

These errors repeat for all the remaining tables, from the 44th to the 195th. For the 44th table itself, the error is: "mysqldump: Error 2013: Lost connection to MySQL server during query when dumping table ...table 44... at row: 57". So it seems that corruption is, or begins, at row 57 of this table.

Now to my questions: why does the connection break down, given that innodb_force_recovery is set to 6? How should I proceed? I worked out which table the 44th was, where the connection was lost, and I can try to resume the process from the 45th table; but isn't there a better way? Once the data has been copied and the server restarts cleanly, should I just try a dump of each table, or what alternatives are there?
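For the per-table resumption, a minimal shell sketch (the database name mydb is a placeholder): dumping table by table means a corrupted table only fails its own file instead of aborting the whole run, and --skip-lock-tables avoids the LOCK TABLES errors seen above:

  for t in $(mysql -N -e "SELECT table_name FROM information_schema.tables
                          WHERE table_schema = 'mydb'"); do
      mysqldump --skip-lock-tables mydb "$t" > "dump_$t.sql" \
          || echo "$t" >> failed_tables.txt
  done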

Thanks.

Allocating 8GB memory to MySQL on a 64bit system

Posted: 18 May 2013 11:10 AM PDT

Specs:

  MySQL 5.0.x, Red Hat 5.9, physical memory 16 GB

I am trying to set the InnoDB buffer pool size to 8 GB (innodb_buffer_pool_size=8G). When I do, and then start MySQL, I get the following error:

  InnoDB: Error: cannot allocate 8589950976 bytes of
  InnoDB: memory with malloc! Total allocated memory
  InnoDB: by InnoDB 37679104 bytes. Operating system errno: 12
  InnoDB: Check if you should increase the swap file or
  InnoDB: ulimits of your operating system.

Here is the output of free -m:

               total       used       free     shared    buffers     cached
  Mem:         15922      15269        653          0        248       4275
  -/+ buffers/cache:      10744       5178
  Swap:         2047          0       2047

Here is the output of ulimit -a:

  core file size          (blocks, -c) 0
  data seg size           (kbytes, -d) unlimited
  scheduling priority             (-e) 0
  file size               (blocks, -f) unlimited
  pending signals                 (-i) 127352
  max locked memory       (kbytes, -l) 32
  max memory size         (kbytes, -m) unlimited
  open files                      (-n) 1024
  pipe size            (512 bytes, -p) 8
  POSIX message queues     (bytes, -q) 819200
  real-time priority              (-r) 0
  stack size              (kbytes, -s) 10240
  cpu time               (seconds, -t) unlimited
  max user processes              (-u) 127352
  virtual memory          (kbytes, -v) unlimited
  file locks                      (-x) unlimited

I checked /etc/security/limits.conf: nothing in there, all lines are commented out (they start with #). I also checked the directory /etc/security/limits.d/: empty.

Something is preventing the allocation of more than 4 GB of memory to MySQL, but I'm not sure what.
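One quick check worth adding here (an assumption, not from the post): a hard ceiling just above 4 GB is the classic symptom of a 32-bit mysqld binary, which cannot address 8 GB no matter what the OS allows:

  file $(which mysqld)   # "ELF 32-bit" would explain the ~4 GB malloc ceiling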

Any ideas?

Thanks.


Storing history of full/partial tables in MySQL

Posted: 18 May 2013 11:12 AM PDT

I'm building a web application with Django and MySQL (InnoDB) and am currently pondering how to manage historical changes on various tables.

I wonder if it's efficient to store a lot of rows with NULLs in the columns that didn't change. For example, this is a simplistic representation of my products table:

[image: products and products_history table schema]

The products_history table has all its fields (except the FK and non-logical columns) set to allow NULL, whereas the normal products table has NOT NULL on all of them except description.

Now what I had in mind is to push a duplicate of the actual product row, plus the change, into the products_history table. So let's say I have this row in products:

  {600, CURRENT_TIMESTAMP, 2, 'BS001', NULL, 49.95}

and change the price to 48.50. I would push the values that have changed, in this case base_price, and add a change row to the history table:

  {600, THE_date_created_FROM_PRODUCT, NULL, NULL, 49.95}
  {600, CURRENT_TIMESTAMP, NULL, NULL, 48.50}

So on the first modification of a product it gets two rows, and each modification afterwards adds only one new row with the modified fields. Afterwards the current product row is updated with the new base_price.
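As SQL, the write path described above might look like this sketch (table and column names are assumptions inferred from the example rows):

  -- record old and new value of the changed column; untouched columns stay NULL
  INSERT INTO products_history (product_id, created, category, sku, base_price)
  VALUES (600, '2013-05-01 12:00:00', NULL, NULL, 49.95),  -- value before
         (600, CURRENT_TIMESTAMP,     NULL, NULL, 48.50);  -- value after

  UPDATE products SET base_price = 48.50 WHERE product_id = 600;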

This approach works for me and is quite effective (the history table is only filled once a product actually gets edited for the first time), but I am wondering whether storing all those NULL values is efficient. Would it affect my performance after a while, or is the impact not that great?

Otherwise, I'm curious what good approaches to this are in MySQL, or even Django-ORM-specific ways.

Data Migration from Oracle to SQL Server [duplicate]

Posted: 18 May 2013 10:07 AM PDT


If you need to do a data migration from an Oracle database to SQL Server, what approaches and technical solutions are best practice?

Our database has about 100 million rows and a total of 52 tables.

How to make a continuous cluster in Postgres?

Posted: 18 May 2013 06:34 AM PDT

I have a report table with an index on providerid, date.

The table is around 30M records and grows on a daily basis based on provider actions (about 100K rows a day).

I want to use the index above as a cluster, but as I understand it, I need to run the CLUSTER command each time I want to cluster the data, so new data doesn't get clustered.

Is there a way to define a clustered index like in MSSQL, so that new rows are also clustered? I cannot stop all my processes each day (CLUSTER needs an exclusive lock).

The table is a report table that records an event for each provider point. The query is:

  select date, providerid, count(*) from report_table where date > x and date < y group by date, providerid
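For reference, a sketch of what re-clustering looks like (the index name is assumed); PostgreSQL's CLUSTER is a one-shot physical rewrite that is not maintained for subsequent inserts, and it holds an exclusive lock while it runs:

  CLUSTER report_table USING report_providerid_date_idx;  -- one-time reorder
  ANALYZE report_table;                                   -- refresh statistics afterwards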

Help with best practice for merging SQL databases

Posted: 18 May 2013 08:21 AM PDT

Bad news: our website server (Windows 2003) crashed because of a dead RAID controller.

Luckily, a few hours later our backup server was up and the website was live again. Hopefully tomorrow our original server will be fixed, and I'm looking for the best practice to merge our MS SQL 2005 data.

This is the situation right now:

  • The original server has backed-up data up to the 16th, 01:30 AM (night.bak).
  • The original server has data that is not backed up, up to the 17th, 00:30 AM; so basically 23 hours are missing from night.bak.

I restored night.bak to the temporary server, and since the 17th, 03:00, new data has been added to the temporary server.

Tomorrow I want to take the data from the temporary server (17th 03:00 until the 19th) and put it back on the original server.

I believe I can't take a differential backup on the temporary server and restore it on the original server, because the last backup on the original server has a timestamp of the 16th, 01:30 AM; but I don't really know, so I'm asking here.

My main concern is to preserve data in joined tables that share index keys.


EDIT (I don't know how to reply to the answer, so I'm replying here):

Thank you. I'm testing Red Gate's Data Compare (which, by the way, is a 14-day trial, not 30) and it seems good at adding/updating/deleting rows very easily.

But here is the problem: the old database had 120 records in tableA (primary keys 1-120) when I left it.

The new temp database doesn't have all 120 records, because only 100 were backed up, so it has only 100 records (primary keys 1-100).

Since we kept using the temp database, it now has 140 records (primary keys 1-100 from the original database and 101-140 created on the temp database).

The SQL compare would want to insert 40 new records into tableA, but it cannot use keys 101-120 because they already exist in the original database (as different rows), so I guess it would try to update and destroy them. And anyway, it cannot insert with correct keys, because:

For example, I have a table tbl_users (code (index), firstname, email) and a table tbl_priceoffers (code (index), usercode (from tbl_users), price).

Red Gate generates a script that inserts tbl_priceoffers before tbl_users; but even in the correct order (I can edit it), it cannot insert a tbl_priceoffers row without having the @@identity from the freshly inserted tbl_users row.
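One possible direction, sketched under loud assumptions (the temp database name, the offset, and the premise that code is an IDENTITY column are all illustrative, not from the post): re-key the rows created on the temp server past the original's key range, and shift the child table's foreign key by the same offset so the pairs stay joined:

  DECLARE @offset int = 40;  -- anything clearing the original's max key (120)

  SET IDENTITY_INSERT tbl_users ON;
  INSERT INTO tbl_users (code, firstname, email)
  SELECT code + @offset, firstname, email
  FROM   temp.dbo.tbl_users
  WHERE  code > 100;         -- rows created after the restore on the temp server
  SET IDENTITY_INSERT tbl_users OFF;

  INSERT INTO tbl_priceoffers (usercode, price)
  SELECT usercode + @offset, price
  FROM   temp.dbo.tbl_priceoffers
  WHERE  usercode > 100;     -- offers for pre-existing users need separate review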

Any thoughts?

mysql-5.5 errors while creating multiple instances

Posted: 18 May 2013 09:00 AM PDT

I have installed a 3rd MySQL instance on my testing server. Two instances are already running without any issues. I installed the 3rd instance from the mysql-5.5.30 zip source and it installed successfully, but when I try to restart the 3rd instance, it says:

  MySQL server PID file could not be found!                  [FAILED]
  Starting MySQL........................................................ The server quit without updating PID file.

The 1st instance runs on 3305:

  BASEDIR: /usr/local/mysql
  Configuration File: /etc/my.cnf
  Socket: /tmp/mysql.sock

The 2nd instance runs on 3306:

  BASEDIR: /home/mysql-5.5.30
  Configuration File: /home/mysql-5.5.30/my.cnf
  Socket: /home/mysql-5.5.30/mysql.sock

The 3rd instance should run on 3307:

  BASEDIR: /home/mysql-5/
  Configuration File: /home/mysql-5/my.cnf
  Socket: /home/mysql-5/mysql.sock

How can I start the 3rd instance?

The error log is as follows:

  130513 11:22:23 mysqld_safe Starting mysqld daemon with databases from /backup/mysql-cbt/data
  130513 11:22:23 InnoDB: The InnoDB memory heap is disabled
  130513 11:22:23 InnoDB: Mutexes and rw_locks use GCC atomic builtins
  130513 11:22:23 InnoDB: Compressed tables use zlib 1.2.3
  130513 11:22:23 InnoDB: Using Linux native AIO
  130513 11:22:23 InnoDB: Initializing buffer pool, size = 128.0M
  130513 11:22:23 InnoDB: Completed initialization of buffer pool
  130513 11:22:23 InnoDB: highest supported file format is Barracuda.
  130513 11:22:23 InnoDB: Waiting for the background threads to start
  130513 11:22:24 InnoDB: 5.5.30 started; log sequence number 1595675
  130513 11:22:24 [ERROR] /backup/mysql-cbt/bin/mysqld: unknown option '--safe-show-database'
  130513 11:22:24 [ERROR] Aborting
  130513 11:22:24 InnoDB: Starting shutdown...
  130513 11:22:25 InnoDB: Shutdown completed; log sequence number 1595675
  130513 11:22:25 [Note] /backup/mysql-cbt/bin/mysqld: Shutdown complete
  130513 11:22:25 mysqld_safe mysqld from pid file /backup/mysql-cbt/cbt-instance.pid ended

I am still unable to figure out the error. Any luck?
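The log itself names a blocker: mysqld 5.5 no longer knows the old safe-show-database option, so the 3rd instance's my.cnf is presumably carrying it. A sketch of the fix (the config path comes from the question; the mysqld_safe path is an assumption):

  # comment the removed option out of this instance's config, then retry
  sed -i 's/^safe-show-database/# safe-show-database/' /home/mysql-5/my.cnf
  /home/mysql-5/bin/mysqld_safe --defaults-file=/home/mysql-5/my.cnf &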

Database design: Two 1 to many relationships to the same table

Posted: 18 May 2013 05:22 AM PDT

I have to model a situation where a table Chequing_Account (which contains budget, IBAN number and other details of the account) has to be related to two different tables, Person and Corporation, each of which can have 0, 1 or many chequing accounts.

In other words, I have two 1-to-many relationships with the same Chequing_Account table.

I would like to hear solutions to this problem that respect the normalization requirements. The solutions I have mostly heard are:

1) Find a common entity to which both Person and Corporation belong and create a link table between it and Chequing_Account. This is not possible in my case, and even if it were, I want to solve the general problem, not this specific instance.

2) Create two link tables, PersonToChequingAccount and CorporationToChequingAccount, which relate the two entities to the chequing accounts. However, I don't want two Persons to share the same chequing account, and I don't want a natural person and a Corporation to share a chequing account! See this image:

http://i41.tinypic.com/35i6kbk.png

3) Create two foreign keys in Chequing_Account, pointing to Corporation and Person. A Person and a Corporation could then have many chequing accounts, but I would have to manually ensure for each Chequing_Account row that the two relations don't both point somewhere, because a chequing account belongs either to a corporation or to a natural person (see the sketch below the image). See this image:

http://i40.tinypic.com/1rpv9z.png
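Option 3's manual rule can in fact be made declarative; a sketch with assumed column names:

  CREATE TABLE chequing_account (
      account_id     int PRIMARY KEY,
      person_id      int REFERENCES person (person_id),
      corporation_id int REFERENCES corporation (corporation_id),
      -- exactly one of the two owners must be set
      CHECK ((person_id IS NULL AND corporation_id IS NOT NULL)
          OR (person_id IS NOT NULL AND corporation_id IS NULL))
  );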

Is there any other cleaner solution to this problem?

replication breaks after upgrading master

Posted: 18 May 2013 10:08 AM PDT

I have a replication setup with a 5.1.30 master and a 5.5.16 slave, and replication is working well.

Now I have upgraded the MySQL master to 5.1.47.

As far as I know, we have to turn off binary logging with sql_log_bin=0 before running the mysql_upgrade program, so that the upgrade does not replicate to the slave.

The problem here is that binary logging was not turned off while mysql_upgrade was running. The reason, I found, is that in 5.1 sql_log_bin is a session variable, and mysql_upgrade runs in another session.

So how do I upgrade the server, and the replication setup along with it, without any breakage of replication?
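One avenue to check, hedged because it depends on the exact build: later 5.1/5.5 releases document a --write-binlog/--skip-write-binlog pair for mysql_upgrade, which sidesteps the session-variable problem entirely. Verify the flag exists before relying on it:

  mysql_upgrade --help | grep write-binlog   # confirm the build supports it
  mysql_upgrade --skip-write-binlog -u root -p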

Any suggestions are really useful.

Do I need client certs for mysql ssl replication?

Posted: 18 May 2013 02:08 PM PDT

I'm setting up MySQL replication using SSL, and have found two different guides.

The first one creates both client and server certs, while the second one creates only server certs.

I don't know enough about SSL to understand the implications of one option over the other. Should the slave be using the client certs or the server certs?
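For orientation, a sketch of how the slave side is typically wired (paths are placeholders): the slave connects as an SSL client, so client credentials go into CHANGE MASTER TO, while the server cert/key stay in the master's my.cnf (ssl-ca, ssl-cert, ssl-key):

  CHANGE MASTER TO
      MASTER_HOST     = 'master.example.com',
      MASTER_SSL      = 1,
      MASTER_SSL_CA   = '/etc/mysql/certs/ca-cert.pem',
      MASTER_SSL_CERT = '/etc/mysql/certs/client-cert.pem',  -- client pair, needed
      MASTER_SSL_KEY  = '/etc/mysql/certs/client-key.pem';   -- if the master enforces X509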

optimize big sql query

Posted: 18 May 2013 07:08 AM PDT

My query has to return statistics, for example for March, containing the product name and price and the product's side dishes (a 1:n relation), with the right price for each side dish and the right tax for each side dish and product (they can differ).

  SELECT count(*) as daySum,
         p_name,
         SUM(pp_price) as daySumPrice,
         o_day,
         o_time,
         s_id,
         s_name,
         t_name,
         t_id,
         SUM(ust10) as ust10,
         SUM(ust20) as ust20,
         SUM(sdUst10) as sdUst10,
         SUM(sdUst20) as sdUst20,
         fk_p_id,
         u_name
  FROM (
      SELECT tbl_orders.o_id,
             tbl_orders.o_comment,
             DATE_FORMAT( $dateField, '$sqlDateFormat' ) AS o_day,
             tbl_orders.o_time,
             tbl_orders.o_t_name AS t_name,
             tbl_orders.fk_t_id AS t_id,
             tbl_orders.fk_u_id AS u_id,
             tbl_orders.fk_s_id AS s_id,
             tbl_orders.o_s_name AS s_name,
             tbl_orders.o_u_name AS u_name,
             tbl_orders.fk_p_id,
             CONCAT(tbl_orders.o_p_name,
                    IF(sideDish.p_name IS NULL, '', ' ('),
                    IFNULL(GROUP_CONCAT(DISTINCT(sideDish.p_name) SEPARATOR ', '), ''),
                    IF(sideDish.p_name IS NULL, '', ')'),
                    IF(tbl_orders.o_comment = '', '', ' - ".getLanguageKey('label_comment').": '),
                    tbl_orders.o_comment,
                    IF(tbl_orders.o_comment = '', '', '')) as p_name,
             ROUND(IFNULL(tbl_orders.o_p_price, 0) + IFNULL(sideDishPrice.pp_price, 0), 2) as pp_price,
             IF(tbl_orders.o_p_ust = 10, IFNULL(tbl_orders.o_p_price/(100+tbl_orders.o_p_ust)*tbl_orders.o_p_ust, 0), 0) AS ust10,
             IF(tbl_orders.o_p_ust = 20, IFNULL(tbl_orders.o_p_price/(100+tbl_orders.o_p_ust)*tbl_orders.o_p_ust, 0), 0) AS ust20,
             sideDishPrice.ust10 AS sdUst10,
             sideDishPrice.ust20 AS sdUst20
      FROM tbl_orders
      INNER JOIN tbl_dailyReport dr1
          ON dr1.dr_id = tbl_orders.fk_dr_id
      LEFT JOIN (
          SELECT o2.o_id,
                 SUM(IFNULL(o2.o_p_price, 0)) as pp_price,
                 o2.o_p_name as p_name,
                 o2.o_parent AS o_parent,
                 SUM(IF(o2.o_p_ust = 10, IFNULL(o2.o_p_price/(100+o2.o_p_ust)*o2.o_p_ust, 0), 0)) AS ust10,
                 SUM(IF(o2.o_p_ust = 20, IFNULL(o2.o_p_price/(100+o2.o_p_ust)*o2.o_p_ust, 0), 0)) AS ust20
          FROM tbl_orders o2
          GROUP BY o_parent
      ) sideDishPrice
          ON tbl_orders.o_id = sideDishPrice.o_parent
      LEFT JOIN (
          SELECT o_id, o_p_name AS p_name, o_parent
          FROM tbl_orders
          GROUP BY o_id
      ) sideDish
          ON tbl_orders.o_id = sideDish.o_parent
      WHERE tbl_orders.fk_c_id = '$c_id'
            $stornoNull
            AND tbl_orders.o_parent IS NULL
            $onlyDayClosingData
            $whereClauseTime
            $whereClauseID
            $whereClauseStorno
      GROUP BY tbl_orders.o_id, tbl_orders.o_p_name, sideDish.o_parent
  ) products
  $groupBy $orderBy;

The query works and returns the right numbers, but it takes too long.

Rows in the database: 7,000; orders in March: 3,500. Time to print 6 of these queries with different GROUP BYs (date, waiter, table, products, payment method, cancellations, ...): about 30-40 seconds.

Imagine how long it would take with 10,000,000 rows (which could be realistic in a few years).

Is there any way to improve this performance?
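As a first, speculative step (index names invented; whether they help must be confirmed with EXPLAIN): the derived tables scan tbl_orders repeatedly, grouped by o_parent and filtered by fk_c_id, so covering those columns is the usual opening move:

  CREATE INDEX idx_orders_parent ON tbl_orders (o_parent);
  CREATE INDEX idx_orders_client ON tbl_orders (fk_c_id, o_parent);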

EDIT: I already solved it using a second table. As tbl_orders is used for the orders themselves (recursively, for orders with side dishes), I simply joined it into a new table, tbl_report. There I can now group things the way I want, with good speed. :)

Thank you for your advice; some of it was helpful. :)

How should I mark my question, solved?

SQL Server Login failed for user Error: 18456, Severity: 14, State: 11

Posted: 18 May 2013 05:08 AM PDT

I have an AD group XYZ that I have added to SQL Server security with db_datareader permissions.

The XYZ group has around 10 users, who can all successfully access the SQL Server database. I recently added a new user to this group (at the AD level), but this person is not able to access SQL Server (through Management Studio) and gets the error below:

Login failed for user. Reason: Token-based server access validation failed with an infrastructure error. Check for previous errors.

Error: 18456, Severity: 14, State: 11.

I have already verified that the AD permissions are set up properly; the user has restarted his machine; he is not part of any group that has DENY access; the SQL Server XYZ group has been removed and re-added to the SQL Server instance in Management Studio; and the server has been restarted.
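A diagnostic worth running, as a sketch (the account name is a placeholder): it shows which group path SQL Server actually resolves for the user, which is where token-based validation (state 11) usually goes wrong:

  EXEC xp_logininfo 'DOMAIN\newuser', 'all';  -- permission paths for the account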

Any ideas on how to proceed further?

Thanks!

Custom sp_who/sp_whoUsers

Posted: 18 May 2013 03:08 PM PDT

I need to allow a client in a dev DW SQL 2008 R2 environment to view and kill processes, but I do not want to grant VIEW SERVER STATE to this person (he's a former SQL DBA and is considered a potential internal threat).

When I run the following, it returns one row, as if the user had run the sp themselves with their current permissions.

  USE [master]
  GO

  SET ANSI_NULLS ON
  GO
  SET QUOTED_IDENTIFIER ON
  GO

  CREATE PROCEDURE [dbo].[usp_who] WITH EXECUTE AS OWNER
  AS
  BEGIN
      SET NOCOUNT ON;
      EXEC master.dbo.sp_who;
  END

Changing the "with execute as" to "self" (I'm a sysadmin) returns the same results. I've also tried the query below instead of calling sp_who, and it too returns only one row.

  select * from sysprocesses

It seems that the context isn't switching, or isn't persisting, throughout the execution of the procedure. And this says nothing of how I'm going to allow this person to "kill" processes.

Does anyone have a solution or some suggestions for this seemingly unique problem?
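One established pattern that fits this constraint, sketched with placeholder names and a dummy password: EXECUTE AS OWNER only switches database context, so server-level rights such as VIEW SERVER STATE have to come from signing the module with a certificate; the client never holds the permission personally:

  USE master;
  CREATE CERTIFICATE WhoCert
      ENCRYPTION BY PASSWORD = 'Pl@ceholder1'   -- dummy password
      WITH SUBJECT = 'Sign usp_who';
  ADD SIGNATURE TO dbo.usp_who
      BY CERTIFICATE WhoCert WITH PASSWORD = 'Pl@ceholder1';
  CREATE LOGIN WhoCertLogin FROM CERTIFICATE WhoCert;
  GRANT VIEW SERVER STATE TO WhoCertLogin;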

Need to suppress rowcount headers when using \G

Posted: 18 May 2013 11:08 AM PDT

Is there a command to suppress the row-count headers and asterisks when using \G to execute a SQL statement? I am executing mysql with the -s and --skip-column-names options, but these don't suppress the row counts.

How to search whole MySQL database for a particular string

Posted: 18 May 2013 01:08 PM PDT

Is it possible to search all of a database's tables (every row and column) to find a particular string?

I have a database named A with about 35 tables. I need to search for the string "hello", and I don't know in which table this string is stored. Is it possible?

Using MySQL.

I am a Linux admin and not familiar with databases; it would be really helpful if you could explain the query too.
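SQL itself has no cross-table text search, so two usual approaches are sketched below (the database name A and the search string come from the question). The first greps a text dump; the second generates one LIKE probe per string column from the information_schema catalog, and you then run the generated statements:

  -- option 1 (shell): dump only the data and search the dump
  --   mysqldump --no-create-info A | grep -i hello

  -- option 2: generate a probe query for every text-ish column in database A
  SELECT CONCAT('SELECT * FROM `', table_name, '` WHERE `', column_name,
                '` LIKE ''%hello%'';') AS probe
  FROM information_schema.columns
  WHERE table_schema = 'A'
    AND data_type IN ('char', 'varchar', 'text', 'mediumtext', 'longtext');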

multivalued weak key in ER database modeling

Posted: 18 May 2013 12:08 PM PDT

I was wondering about this, since I didn't find any clarification for it. I want to store movies that exist in different formats (DVD, Blu-ray etc.), where the price of each format differs, as does the quantity of each format, so I came up with this:

[image: first ER design, with format as a multivalued attribute]

Is this correct from a design perspective? Does it imply redundancy? I don't understand how this would be stored in a table. Would it be better to do it like this:

[image: alternative ER design]

Thanks in advance.

EDIT: I'll add some more descriptive information about what I want to store at this point of the design. I want to store information about sales. For each movie the company carries, I need to store format, price and stock quantity. I will also need to store customer information: a unique id, name, surname, address, movies that he/she has already bought, and credit card number. Finally, I will have a basket that temporarily keeps items the customer wants to buy (let's suppose items other than movies exist too).
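For the "how will this be stored in a table" part, a relational sketch (all names illustrative): the format becomes part of the key of a dependent table, which is exactly how a weak entity / multivalued attribute lands in tables, with no redundancy in the movie data:

  CREATE TABLE movie (
      movie_id int PRIMARY KEY,
      title    varchar(200) NOT NULL
  );

  CREATE TABLE movie_format (
      movie_id int NOT NULL REFERENCES movie (movie_id),
      format   varchar(20) NOT NULL,     -- 'dvd', 'bluray', ...
      price    decimal(6,2) NOT NULL,
      quantity int NOT NULL,
      PRIMARY KEY (movie_id, format)     -- price/stock vary per (movie, format)
  );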

Microsoft Office Access database engine could not find the object 'tableName'

Posted: 18 May 2013 04:08 PM PDT

First, a little background: I am using MS Access to link to tables in an Advantage database. I created a System DSN. In the past, in Access, I've created a new database and, using the external data wizard, successfully linked to tables. Those databases and the linked tables are working fine.

Now I am trying to do the same thing: create a new Access db and link to this same DSN. I get as far as seeing the tables, but after making my selection I get the error: "The Microsoft Office Access database engine could not find the object 'tableSelected'. Make sure the object exists and that you spell its name and the path name correctly."

I've tried creating another data source (system and user) with no luck. The environment is Win XP, Access 2007, Advantage DB 8.1.

MySQL 5.5 fails to start on Fedora 16

Posted: 18 May 2013 06:08 AM PDT

I installed mysql and mysql-server from the repos (MySQL version 5.5), then tried to start it, but got an error:

  [root@server]# service mysqld start
  Redirecting to /bin/systemctl start mysqld.service
  Job failed. See system logs and 'systemctl status' for details.

Here is the log:

  121118  2:41:38 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
  121118  2:41:38 [Note] Plugin 'FEDERATED' is disabled.
  121118  2:41:38 InnoDB: The InnoDB memory heap is disabled
  121118  2:41:38 InnoDB: Mutexes and rw_locks use GCC atomic builtins
  121118  2:41:38 InnoDB: Compressed tables use zlib 1.2.5
  121118  2:41:38 InnoDB: Using Linux native AIO
  /usr/libexec/mysqld: Can't create/write to file '/tmp/ibhsfQfU' (Errcode: 13)
  121118  2:41:38  InnoDB: Error: unable to create temporary file; errno: 13
  121118  2:41:38 [ERROR] Plugin 'InnoDB' init function returned error.
  121118  2:41:38 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
  121118  2:41:38 [ERROR] Unknown/unsupported storage engine: InnoDB
  121118  2:41:38 [ERROR] Aborting

  121118  2:41:38 [Note] /usr/libexec/mysqld: Shutdown complete

  121118 02:41:38 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended

This is a fresh installation; nothing was changed prior to that, I just ran yum update.

Here is the systemctl status trace

  [root@linyansho /]# systemctl status mysqld.service
  mysqld.service - MySQL database server
    Loaded: loaded (/lib/systemd/system/mysqld.service; disabled)
    Active: failed since Sun, 18 Nov 2012 02:45:19 +0300; 5min ago
    Process: 864 ExecStartPost=/usr/libexec/mysqld-wait-ready $MAINPID (code=exited, status=1/FAILURE)
    Process: 863 ExecStart=/usr/bin/mysqld_safe --basedir=/usr (code=exited, status=0/SUCCESS)
    Process: 842 ExecStartPre=/usr/libexec/mysqld-prepare-db-dir %n (code=exited, status=0/SUCCESS)
    CGroup: name=systemd:/system/mysqld.service
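Reading the first log, the failure is upstream of InnoDB: errno 13 (permission denied) while creating /tmp/ibhsfQfU. A check-and-fix sketch, assuming no custom tmpdir is configured:

  ls -ld /tmp        # should show drwxrwxrwt (world-writable with sticky bit)
  chmod 1777 /tmp    # restore the standard permissions if they differ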

Workaround to importing data

Posted: 18 May 2013 08:20 AM PDT

I am trying to import data into a SQL Server instance. I can import through the Import and Export Data wizard, but I cannot import from my machine using BULK INSERT or OPENROWSET, since the file is not on the server machine. How does the wizard insert data into its target database? Does it go row by row?

Are there any other possible solutions for importing data from my machine?
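One client-side alternative, sketched with placeholder names: bcp runs on your machine and streams rows over the network, so the file never has to sit on the server (unlike BULK INSERT and OPENROWSET, which resolve paths server-side):

  bcp TargetDb.dbo.TargetTable in C:\data\file.csv -S yourserver -T -c -t,

The wizard itself runs an SSIS data flow over your client connection, which is why it works where the server-side bulk statements don't.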

Sql Anywhere 11: Restoring incremental backup failure

Posted: 18 May 2013 08:08 AM PDT

We want to create remote incremental backups after a full backup. This will allow us to restore in the event of a failure and bring up another machine with backups as close to real time as possible, using SQL Anywhere network servers.

We are doing a full backup as follows:

dbbackup -y -c "eng=ServerName.DbName;uid=dba;pwd=sql;links=tcpip(host=ServerName)"      c:\backuppath\full  

This makes a backup of the database and log files, which can be restored as expected. For incremental backups I've tried both live and incremental transaction logs, with a renaming scheme when there are multiple incremental backups:

dbbackup -y -t -c "eng=ServerName.DbName;uid=dba;pwd=sql;links=tcpip(host=ServerName)"      c:\backuppath\inc    dbbackup -y -l -c "eng=ServerName.DbName;uid=dba;pwd=sql;links=tcpip(host=ServerName)"       c:\backuppath\live  

However, when applying the transaction logs during a restore, I always receive an error:

10092: Unable to find table definition for table referenced in transaction log

The transaction log restore command is:

dbeng11 "c:\dbpath\dbname.db" -a "c:\backuppath\dbname.log"  

The error doesn't specify which table it can't find, but this is a controlled test and no tables are being created or dropped. I insert a few rows, then kick off an incremental backup, before attempting to restore.

Does anyone know the correct way to do incremental backup and restore on SQL Anywhere 11?

UPDATE: Thinking it might be related to the complexity of the target database, I made a new blank database and network service, added one table with two columns and inserted a few rows. I made a full backup, then inserted and deleted a few more rows and committed the transactions, then made an incremental backup. This also failed with the same error when applying the incremental transaction-log backups after restoring the full backup.

Edit:

You can follow this link to see the same question with slightly more feedback on SA: http://sqlanywhere-forum.sybase.com/questions/4760/restoring-incrementallive-backup-failure
