[how to] oracle database query [closed]
- oracle database query [closed]
- Oracle 11g install on Debian Wheezy does not start
- Database management tool for compact edition (.sdf) database
- change data directory postgres with database cluster
- SQL Server: Change drive letter (which contains system dbs)
- LISTEN / NOTIFY privileges
- MySQL: How to recover/restore corrupted Innodb data files?
- Allocating 8GB memory to MySQL on a 64bit system
- How to Change location of postgres cluster and database within the same machine?
- Storing history of full/partial tables in MySQL
- Data Migration from Oracle to SQL Server [duplicate]
- How to make a continuous cluster in postgres?
- help with best practice of merging sql databases
- mysql-5.5 errors while creating multiple instances
- Database design: Two 1 to many relationships to the same table
- replication breaks after upgrading master
- Do I need client certs for mysql ssl replication?
- optimize big sql query
- SQL Server Login failed for user Error: 18456, Severity: 14, State: 11
- Custom sp_who/sp_whoUsers
- Need to suppress rowcount headers when using \G
- How to search whole MySQL database for a particular string
- multivalued weak key in ER database modeling
- Microsoft Office Access database engine could not find the object 'tableName'
- MYSQL 5.5 Fail start Fedora 16
- Workaround to importing data
- Sql Anywhere 11: Restoring incremental backup failure
oracle database query [closed] Posted: 18 May 2013 03:40 PM PDT I have a problem writing a database query. My table has the fields Primary_ID (PK), E_ID, Bank, REQUEST_STATUS, START_DATE, and STATUS. REQUEST_STATUS may contain: new, mod, bulk load, bulk delete, delete. Condition scenario: if an E_ID has REQUEST_STATUS of new, mod, or bulk load (or all three), I need all rows for that E_ID; but if that E_ID also has a REQUEST_STATUS of delete or bulk delete, I don't need its rows up to that date, and if any rows exist for that E_ID after the delete or bulk delete, I need those rows. I have attached the data for better understanding; let me know if you need more information.

Data:
    100001  55554111  SBI  NEW        5/5/2013   Complete
    100002  55556112  SBI  NEW        6/5/2013   Complete
    100003  55554111  SBI  MOD        6/5/2013   Complete
    100004  55554111  SBI  MOD        7/5/2013   Complete
    100005  55554111  SBI  MOD        8/5/2013   Failure
    100006  55556112  SBI  MOD        8/5/2013   Complete
    100007  55556113  UTI  BULK LOAD  8/5/2013   Complete
    100008  55556111  SBI  MOD        9/5/2013   Complete
    100009  55556113  UTI  MOD        9/5/2013   Complete
    100010  55556113  UTI  MOD        10/5/2013  Failure
    100011  55554111  SBI  MOD        11/5/2013  Complete
    100012  55556113  UTI  DEL        11/5/2013  Complete
    100013  55556112  SBI  DEL        12/5/2013  Complete
    100014  55554111  SBI  MOD        12/5/2013  Complete

Result set:
    100001  55554111  SBI  NEW        5/5/2013   Complete
    100003  55554111  SBI  MOD        6/5/2013   Complete
    100004  55554111  SBI  MOD        7/5/2013   Complete
    100011  55554111  SBI  MOD        11/5/2013  Complete
    100014  55554111  SBI  MOD        12/5/2013  Complete
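One way to express this in Oracle SQL is to find the latest delete-type row per E_ID and keep only non-delete rows dated after it (or all rows when no delete exists). This is a sketch only: the table name `requests` and the exact status spellings are assumptions based on the question, and if failed rows should be excluded (as the expected result suggests), add `AND t.status = 'Complete'`.

```sql
-- Sketch: table/column names assumed from the question text.
SELECT t.*
FROM   requests t
LEFT JOIN (
         SELECT e_id, MAX(start_date) AS last_delete
         FROM   requests
         WHERE  UPPER(request_status) IN ('DEL', 'DELETE', 'BULK DELETE')
         GROUP  BY e_id
       ) d
       ON d.e_id = t.e_id
WHERE  UPPER(t.request_status) NOT IN ('DEL', 'DELETE', 'BULK DELETE')
AND    (d.last_delete IS NULL OR t.start_date > d.last_delete);
```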
Oracle 11g install on Debian Wheezy does not start Posted: 18 May 2013 02:40 PM PDT I successfully installed Oracle 11g R2 on my laptop for evaluation, by combining various sources of documentation, and saw the Oracle daemons running after installation. But I have a few problems:
Thanks for your help. Best regards, Fred
Database management tool for compact edition (.sdf) database Posted: 18 May 2013 02:00 PM PDT What is the best database management tool for remotely managing a compact edition (.sdf) database created in WebMatrix?
change data directory postgres with database cluster Posted: 18 May 2013 01:43 PM PDT My current postgres database cluster is at /mnt/my_hard_drive and I want to move it, together with all the databases it contains, to /home/myfolder. Is there a way to do so? One way I know is to dump my databases in some form, e.g. .sql, and reconstruct them from there, but since my database is 5 TB I don't want to use this approach. Is there some other way to achieve this? The machine stays the same; all I want to do is move my database from the hard drive to my home folder.
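One common approach is to stop the server, copy the cluster while preserving ownership and permissions, and repoint the server at the new location. This is a sketch, not a definitive procedure: service names, paths, and whether you edit `postgresql.conf` or the init script's PGDATA vary by distribution.

```shell
# Stop the cluster, then copy it preserving owners/permissions/symlinks.
sudo service postgresql stop
sudo rsync -a /mnt/my_hard_drive/ /home/myfolder/

# Repoint the server: either set in postgresql.conf
#     data_directory = '/home/myfolder'
# or change PGDATA in the init script, then restart.
sudo service postgresql start
```

Verify with `SHOW data_directory;` from psql before deleting the old copy.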
SQL Server: Change drive letter (which contains system dbs) Posted: 18 May 2013 08:47 PM PDT Is it possible to safely change the drive letter of a volume that holds only system databases? What precautions should be taken and how should it be done? (I know I can just go to Computer Management > Storage and change the drive letter, but could that have negative consequences for SQL Server operation?) Any suggestion will be helpful, thanks in advance!
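It can work, but the paths SQL Server has recorded internally must be updated before the letter changes. As a sketch (the logical file names and paths below are examples, not your actual values): msdb, model, and tempdb paths are changed in the catalog, while master's location lives in the service startup parameters.

```sql
-- Repoint a system database's files to the future drive letter
-- (repeat MODIFY FILE per data/log file, per database).
ALTER DATABASE msdb
  MODIFY FILE (NAME = MSDBData, FILENAME = 'E:\SQLData\MSDBData.mdf');
ALTER DATABASE msdb
  MODIFY FILE (NAME = MSDBLog,  FILENAME = 'E:\SQLData\MSDBLog.ldf');
-- master (and the error log) are NOT changed via ALTER DATABASE:
-- edit the service startup parameters (-d, -l, -e) in
-- SQL Server Configuration Manager instead.
```

Then stop the service, change the drive letter, and start it again; the MODIFY FILE changes take effect on that restart.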
LISTEN / NOTIFY privileges Posted: 18 May 2013 06:51 PM PDT I have a single postgres database with two users, Alice and Bob. I would like to be able to do a In practice the channel names are very hard to guess, but this is security through obscurity at best. Am I correct in believing that there is no way to prevent a database user from using (abusing) Is this a dead end?
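As far as the standard privilege system goes, there is indeed no GRANT that gates LISTEN or NOTIFY on a channel. One partial workaround (a sketch; the function and channel names here are made up) is to keep channel names out of application code entirely and mediate NOTIFY through a SECURITY DEFINER function:

```sql
-- Hypothetical example: only the function body knows the real channel name.
CREATE FUNCTION notify_alice(payload text) RETURNS void
LANGUAGE sql SECURITY DEFINER AS
$$ SELECT pg_notify('alice_private_channel', payload) $$;

REVOKE ALL ON FUNCTION notify_alice(text) FROM PUBLIC;
GRANT EXECUTE ON FUNCTION notify_alice(text) TO bob;
```

This controls who can send, but it cannot stop a user who learns the channel name from issuing LISTEN on it — so it narrows, rather than closes, the obscurity gap.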
MySQL: How to recover/restore corrupted Innodb data files? Posted: 18 May 2013 05:53 PM PDT A while ago, my Windows 7 system on which a MySQL Server 5.5.31 was running crashed and corrupted the InnoDB database. The weekly backup that's available does not cover all the tables that were created in the meantime, so I would like to recover as much as possible of the data. Right after the crash, I copied the whole MySQL data folder to an external drive, and I would like to use this as the starting point for my rescue attempts. In the following I'll describe the steps of my (not yet convincing) rescue attempt and would be thankful for any comments or guidance on how to improve it: Now to my questions: Thanks.
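A common rescue pattern for this situation (a sketch, under the assumption that the copied data directory is intact enough to start a server at all) is to start a throwaway mysqld against a copy of the salvaged files with forced InnoDB recovery, and dump whatever comes up:

```shell
# Work on a COPY of the salvaged data directory, never the only copy.
# Start with --innodb-force-recovery=1 and raise toward 6 only if the
# server still crashes; higher levels skip more recovery steps and
# should be treated as strictly read-only.
mysqld --datadir=/path/to/rescue_copy --port=3310 \
       --innodb-force-recovery=1 &

# If it comes up, dump everything immediately, then rebuild fresh
# data files and reload from the dump.
mysqldump --host=127.0.0.1 --port=3310 --all-databases > rescue_dump.sql
```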
Allocating 8GB memory to MySQL on a 64bit system Posted: 18 May 2013 11:10 AM PDT Specs - I am trying to set the InnoDB buffer pool size to 8 GB (innodb_buffer_pool_size=8G). When I do, and start MySQL, I get the following error - Here is the output of free -m - Here is the output of ulimit -a - I checked the file '/etc/security/limits.conf'; nothing in there, all lines are commented out (start with #). The directory '/etc/security/limits.d/' is empty. Something is preventing allocation of more than 4 GB of memory to MySQL. Not sure what. Any ideas? Thanks.
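A ceiling of roughly 4 GB, despite a 64-bit OS, often points at a 32-bit mysqld binary, which cannot address more. Two quick checks (a sketch; the binary path may differ on your install):

```shell
# A 32-bit mysqld caps out near 4 GB regardless of system RAM.
file $(which mysqld)    # look for "ELF 64-bit" vs "ELF 32-bit"

# What architecture was the server compiled for? Expect x86_64.
mysql -e "SHOW VARIABLES LIKE 'version_compile_machine';"
```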
How to Change location of postgres cluster and database within the same machine? Posted: 18 May 2013 03:41 PM PDT I have my present database cluster of postgres at /mnt/my_hard_drive which I want to change to /home/myfolder. I also want to move all my databases present in the present cluster to /home/myfolder. Is there a way to do so? I know one way for doing so is to dump my databases in some form e.g. .sql and reconstruct them from there. But considering my database size if 5TB I dont want to use this approach. Is there some other way to achieve this..since my machine is same...all I want to do is to move my database from hard drive to my home folder. Please suggest |
Storing history of full/partial tables in MySQL Posted: 18 May 2013 11:12 AM PDT I'm building a web application with Django and MySQL (InnoDB) and am currently pondering how to manage historical changes on various tables. I wonder if it's efficient to store a lot of rows with NULLs in the columns that didn't change. For example, this is a simplistic representation of my products table; The Now what I had in mind is to push a duplicate of the actual product row and push the change into the and would change the price to So here, on the first modification of a Product it gets 2 rows, and each modification afterwards adds only one new row (with the modified fields). After that it will update the current This approach works for me and is quite effective (as the history table is only filled with product data once a product actually gets edited for the first time), but I am wondering whether storing all those NULL values is efficient. Would it affect my performance after a while, or would the impact not be that great? Otherwise, I'm curious what good approaches to this would be in MySQL, or even Django ORM-specific ways.
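On the storage side: in InnoDB's COMPACT/DYNAMIC row formats, NULL columns are recorded in a per-row NULL bitmap and take essentially no data space, so sparse history rows are cheap. A minimal sketch of such a history table (names and columns are assumptions, since the original schema was not shown):

```sql
-- NULL in a tracked column means "unchanged in this revision".
CREATE TABLE product_history (
    history_id  INT UNSIGNED  NOT NULL AUTO_INCREMENT PRIMARY KEY,
    product_id  INT UNSIGNED  NOT NULL,
    price       DECIMAL(10,2) NULL,
    name        VARCHAR(255)  NULL,
    changed_at  DATETIME      NOT NULL,
    KEY idx_product_changed (product_id, changed_at),
    CONSTRAINT fk_ph_product FOREIGN KEY (product_id)
        REFERENCES products (id)
) ENGINE=InnoDB;
```

The index on (product_id, changed_at) keeps "history for one product, in order" queries fast as the table grows.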
Data Migration from Oracle to SQL Server [duplicate] Posted: 18 May 2013 10:07 AM PDT This question already has an answer here: If you need to do a data migration from an Oracle database to SQL Server, what approaches and technical solutions are best practice? Our database has about 100 million rows and a total of 52 tables.
How to make a continuous cluster in postgres? Posted: 18 May 2013 06:34 AM PDT I have a report table with the following index: providerid, date. The table is around 30M records and grows daily based on provider actions (about 100K rows a day). I want to cluster the table on the index above, but as I understand it I need to run the CLUSTER command each time I want the data clustered, so new data doesn't get clustered. Is there a way to define a clustered index, as in MS SQL Server, so that new rows are clustered too? I cannot stop all my processes each day (CLUSTER needs an exclusive lock). The table is a report table that records an event for each provider point. The query is: select date, providerid, count(*) from report_table where date > x and date < y group by date, providerid
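PostgreSQL has no maintained clustered index: `CLUSTER` is a one-shot physical reorder, and rows inserted afterwards land wherever free space allows. A sketch of the built-in options (the index name is assumed):

```sql
-- One-time physical reorder; holds an ACCESS EXCLUSIVE lock while it runs.
CREATE INDEX report_provider_date_idx ON report_table (providerid, date);
CLUSTER report_table USING report_provider_date_idx;

-- Later runs remember the recorded index, but still take the same lock:
CLUSTER report_table;
```

Two mitigations worth considering: since your rows arrive roughly in date order anyway, the composite index alone may keep the range query fast without re-clustering; and external tools such as pg_repack can reorganize a table while holding only brief locks.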
help with best practice of merging sql databases Posted: 18 May 2013 08:21 AM PDT Bad news: our website server (Windows 2003) crashed because of a dead RAID controller. Luckily, a few hours later our backup server was up and the website was live again. Hopefully tomorrow our original server will be fixed, and I'm looking for the best practice to merge our MS SQL 2005 data. This is the situation right now:
I restored the night.bak to the temporary server, and since the 17th 03:00 new data has been added to the temporary server. Tomorrow I wish to take the data from the temporary server (17th 03:00 till the 19th) and put it back on the original server. I believe I can't make a differential backup on the temporary server and restore it on the original server, because the last backup on the original server has a timestamp of the 16th 01:30 AM, but I don't really know, so I'm asking here. My main concern is to preserve data of joined tables that share index keys. I don't know how to reply to the answer, so I'm replying here: thank you. I'm testing Redgate's Data Compare (which is, by the way, a 14-day trial and not 30) and it seems good at adding/updating/deleting rows very easily, but this is the problem: the old database had 120 records in tableA (primary keys 1-120) when I left it. The new temp database doesn't have all 120 records, because only 100 records were backed up, so it has only 100 records (primary keys 1-100). Since we continued using the temp database, it now has 140 records (primary keys 1-100 from the original database and 101-140 from the temp database). The SQL compare would want to insert 40 new records into tableA, but it cannot use keys 101-120 because they already exist in the original database, so I guess it will try to update and destroy them. And anyway, it cannot insert with correct keys, because for example: I have a table "tbl_users" (code (index), firstname, email) and a table "tbl_priceoffers" (code (index), usercode (from tbl_users), price). Redgate generates a script that inserts "tbl_priceoffers" before "tbl_users", but even if it were in the correct order (I can edit it), it cannot insert a "tbl_priceoffers" row without having @@IDENTITY from the recently inserted "tbl_users" row. Any thoughts?
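One way around the key collision (a sketch only; the linked-server name TEMPSRV, the database name, and the boundary keys 100/120 are assumptions standing in for your real values) is to copy the temp server's post-crash rows with an explicit key offset, remapping child foreign keys with the same offset so the user/offer link survives:

```sql
-- SQL Server 2005 sketch: run on the ORIGINAL server against a
-- linked server pointing at the temporary one.
DECLARE @offset INT;
SET @offset = 1000;   -- safely above the max key on BOTH servers

SET IDENTITY_INSERT dbo.tbl_users ON;
INSERT INTO dbo.tbl_users (code, firstname, email)
SELECT code + @offset, firstname, email
FROM   TEMPSRV.mydb.dbo.tbl_users
WHERE  code > 100;                 -- rows created after the crash
SET IDENTITY_INSERT dbo.tbl_users OFF;

-- Child rows: shift only references to post-crash parents.
INSERT INTO dbo.tbl_priceoffers (usercode, price)
SELECT CASE WHEN usercode > 100 THEN usercode + @offset
            ELSE usercode END,
       price
FROM   TEMPSRV.mydb.dbo.tbl_priceoffers
WHERE  code > 120;                 -- offers created after the crash

-- Finally, move the identity seed past the shifted range.
DBCC CHECKIDENT ('dbo.tbl_users', RESEED, 1200);
```

The offset approach sidesteps @@IDENTITY entirely: parents keep deterministic new keys, so children can be remapped with plain arithmetic.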
mysql-5.5 errors while creating multiple instances Posted: 18 May 2013 09:00 AM PDT I have installed a 3rd MySQL instance on my testing server. Two instances are already running without any issues. When I installed the 3rd instance from the mysql-5.5.30 zip source, it installed successfully, but when I tried to restart the 3rd instance of MySQL it says: 1st instance running on 3305, 2nd instance running on 3306, 3rd instance running on 3307. How can I start the 3rd instance? The error log is as follows. Still unable to figure out the error. Any luck?
Database design: Two 1 to many relationships to the same table Posted: 18 May 2013 05:22 AM PDT I have to model a situation where a table Chequing_Account (which contains budget, IBAN number and other details of the account) has to be related to two different tables, Person and Corporation, each of which can have 0, 1 or many chequing accounts. In other words, I have two 1-to-many relationships with the same Chequing_Account table. I would like to hear solutions to this problem which respect the normalization requirements. The solutions I have heard most often are: 1) Find a common entity to which both Person and Corporation belong and create a link table between it and the Chequing_Account table. This is not possible in my case, and even if it were, I want to solve the general problem and not this specific instance. 2) Create two link tables, PersonToChequingAccount and CorporationToChequingAccount, which relate the two entities with the chequing accounts. However, I don't want two Persons to have the same chequing account, and I don't want a natural person and a Corporation to share a chequing account! See this image. 3) Create two foreign keys in Chequing_Account which point to Corporation and Person. This would let a Person and a Corporation have many chequing accounts, but I would have to manually ensure for each Chequing_Account row that not both relations are set, because a chequing account belongs either to a corporation or to a natural person. See this image. Is there any other cleaner solution to this problem?
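Option 3 can be made self-enforcing: two nullable foreign keys plus a CHECK constraint that exactly one owner is set, so no manual vigilance is needed. A DDL sketch (table and column names assumed):

```sql
CREATE TABLE chequing_account (
    account_id     INT PRIMARY KEY,
    iban           VARCHAR(34) NOT NULL,
    budget         DECIMAL(12,2),
    person_id      INT NULL REFERENCES person (person_id),
    corporation_id INT NULL REFERENCES corporation (corporation_id),
    -- exactly one of the two owners must be present
    CONSTRAINT one_owner CHECK (
        (person_id IS NOT NULL AND corporation_id IS NULL) OR
        (person_id IS NULL     AND corporation_id IS NOT NULL)
    )
);
```

The textbook alternative is the supertype pattern (a "party" table that Person and Corporation both specialize), but the CHECK-constraint form keeps the schema flat when, as here, no common supertype fits the domain.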
replication breaks after upgrading master Posted: 18 May 2013 10:08 AM PDT I have a replication setup with master 5.1.30 and slave 5.5.16, and the replication is working well. Now I have upgraded the MySQL master to 5.1.47. As far as I know, we have to turn off binary logging with sql_log_bin=0 before using the mysql_upgrade program, in order to upgrade the replication setup as well, but the problem here is that the binary log was not turned off while mysql_upgrade was running. The reason I found is that in 5.1 sql_log_bin is a session variable, and mysql_upgrade runs in another session. So how do I upgrade the replication along with the server without any breakage of the replication setup? Any suggestions are really useful.
Do I need client certs for mysql ssl replication? Posted: 18 May 2013 02:08 PM PDT I'm setting up MySQL replication using SSL, and have found two different guides. The first one creates both client and server certs, while the second one only creates server certs. I don't know enough about SSL to understand the implications of one option over the other. Should the slave be using the client certs or the server certs?
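In MySQL replication over SSL, the slave acts as the TLS client. The master always needs a server certificate; the slave needs a client certificate only if the master's replication account demands one (REQUIRE X509, or REQUIRE SUBJECT/ISSUER). A sketch of the slave side (host and paths are placeholders):

```sql
CHANGE MASTER TO
    MASTER_HOST = 'master.example.com',
    MASTER_SSL  = 1,
    MASTER_SSL_CA   = '/etc/mysql/certs/ca.pem',
    -- the two lines below are only needed when the replication user
    -- on the master was created with REQUIRE X509:
    MASTER_SSL_CERT = '/etc/mysql/certs/client-cert.pem',
    MASTER_SSL_KEY  = '/etc/mysql/certs/client-key.pem';
```

So the first guide (client plus server certs) gives mutual authentication; the second (server certs only) merely encrypts the channel and authenticates the master to the slave.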
optimize big sql query Posted: 18 May 2013 07:08 AM PDT My query has to return statistics, for example for March, containing The query works and gets the right numbers, but it takes too long. Rows in database: 7000. Orders in March: 3500. Time to print 6 of these queries with different GROUP BYs (date, waiter, table, products, payment method, cancellations, ...): about 30-40 secs. Imagine how long it would take if we had 10000000 rows (which could be realistic in a few years?). Is there any way to improve this performance? EDIT: I already solved it using a second table. As the table tbl_orders is used for the orders themselves (recursive, for orders with side dishes), I just put the data, joined, into a new table tbl_report. There it is now possible for me to group things the way I want, with good speed :) Thank you for your advice; some of it was helpful. How should I mark my question? Solved?
SQL Server Login failed for user Error: 18456, Severity: 14, State: 11 Posted: 18 May 2013 05:08 AM PDT I have an AD group The
I have already verified that AD permissions are set up properly, the user has restarted his machine, and he is not part of any group that has Any ideas on how to proceed further? Thanks!
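Error 18456 state 11 generally means the Windows login authenticated successfully but was then denied access to the server — typically an explicit server-level DENY, or a token/group-resolution problem. Two checks worth running (a sketch; the login name is a placeholder):

```sql
-- Show the group memberships SQL Server actually resolves for the user.
EXEC xp_logininfo 'DOMAIN\problem.user', 'all';

-- Look for explicit server-level DENYs on CONNECT.
SELECT pr.name, pe.state_desc, pe.permission_name
FROM   sys.server_permissions pe
JOIN   sys.server_principals pr
       ON pr.principal_id = pe.grantee_principal_id
WHERE  pe.permission_name = 'CONNECT SQL'
  AND  pe.state_desc = 'DENY';
```

A DENY through any one group membership overrides GRANTs through all others, so the xp_logininfo output is often where the culprit shows up.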
Custom sp_who/sp_whoUsers Posted: 18 May 2013 03:08 PM PDT I need to allow a client in a dev DW SQL 2K8R2 environment to view and kill processes, but I do not want to grant VIEW SERVER STATE to this person (he's a former SQL DBA and is considered a potential internal threat). When I run the following, it returns one row, as if the user had run the sp themselves with their current permissions. Changing the "with execute as" to "self" (I'm a sysadmin) returns the same results. I've also tried the below instead of calling sp_who, and it only returns one row. It seems that the context isn't switching, or persisting, throughout the execution of the procedure. And this is to say nothing of how I'm going to allow this person to "kill" processes. Does anyone have a solution or some suggestions for this seemingly unique problem?
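The root cause is that EXECUTE AS on a procedure switches only database-level context; server-level permissions such as VIEW SERVER STATE do not come along, so the proc sees only the caller's own session. The standard fix is module signing: the permission rides on a certificate-mapped login, not on the user. A sketch (the certificate, login, procedure names, and password are all made up; the certificate must also be copied into the user database before ADD SIGNATURE):

```sql
-- In master: create a certificate and a login mapped to it,
-- and grant the server permission to that login.
CREATE CERTIFICATE WhoCert
    ENCRYPTION BY PASSWORD = 'Str0ng!Passw0rd'
    WITH SUBJECT = 'Signs the limited who/kill procedures';
CREATE LOGIN WhoCertLogin FROM CERTIFICATE WhoCert;
GRANT VIEW SERVER STATE TO WhoCertLogin;

-- In the user database (after importing the same certificate):
-- sign the procedure; callers get the permission only inside it.
ADD SIGNATURE TO dbo.usp_who_limited
    BY CERTIFICATE WhoCert WITH PASSWORD = 'Str0ng!Passw0rd';
```

The same pattern, with ALTER ANY CONNECTION granted to the certificate login, can cover a tightly scoped "kill" procedure without ever granting the user the permission directly.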
Need to suppress rowcount headers when using \G Posted: 18 May 2013 11:08 AM PDT Is there a command to suppress the rowcount headers and asterisks when using '\G' to execute a SQL statement? I am executing mysql with the
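As far as I know the mysql client has no flag that removes the `*** N. row ***` separators from vertical output, but since they follow a fixed shape they are easy to filter in the pipeline (a sketch; the query is just an example):

```shell
mysql -e "SELECT user, host FROM mysql.user\G" | grep -v '^\*\*\*'
```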
How to search whole MySQL database for a particular string Posted: 18 May 2013 01:08 PM PDT Is it possible to search a whole database's tables (rows and columns) to find a particular string? I have a database named A with about 35 tables; I need to search for the string "hello" and I don't know in which table this string is saved. Is it possible, using MySQL? I am a Linux admin and I am not familiar with databases; it would be really helpful if you could explain the query as well.
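Since you're comfortable on the Linux shell, the simplest route needs no SQL at all: dump the database as text and grep it. A sketch (`A` is your database name; credentials are omitted and would normally be passed with -u/-p):

```shell
# --skip-extended-insert writes one INSERT per row, so each grep hit
# is a single row; --no-create-info skips the schema DDL.
mysqldump --no-create-info --skip-extended-insert A | grep -i -n 'hello'
```

Each matching line is an `INSERT INTO \`tablename\` ...` statement, so the table the string lives in is visible right in the match.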
multivalued weak key in ER database modeling Posted: 18 May 2013 12:08 PM PDT I was wondering about this, since I didn't find any clarification for it. I want to store movies that exist in different formats (DVD, Blu-ray, etc.), where the price of each format differs, as does the quantity of each format, so I came up with this: Is this correct from a design perspective? Does this imply redundancy? I don't understand how this will be stored in a table. Would it be better to do it like this: Thanks in advance. EDIT: I'll add some more descriptive information about what I want to store at this point of the design. I want to store information about sales. For each movie that exists in the company I need to store format, price and stock quantity. I will also need to store customer information with a unique id, name, surname, address, movies that he/she has already bought, and his credit card number. Finally, I will have a basket that temporarily keeps items (let's suppose that other items exist apart from movies) that the customer wants to buy.
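However the ER diagram is drawn, the usual relational shape this maps to is a junction table keyed by (movie, format): each price/quantity pair is stored exactly once per movie-format combination, so there is no redundancy. A DDL sketch (names assumed):

```sql
CREATE TABLE movie (
    movie_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    title    VARCHAR(255) NOT NULL
) ENGINE=InnoDB;

CREATE TABLE movie_format (
    movie_id INT NOT NULL,
    format   VARCHAR(20) NOT NULL,     -- 'DVD', 'BLU-RAY', ...
    price    DECIMAL(8,2) NOT NULL,
    quantity INT NOT NULL,
    PRIMARY KEY (movie_id, format),    -- one row per movie/format pair
    CONSTRAINT fk_mf_movie FOREIGN KEY (movie_id)
        REFERENCES movie (movie_id)
) ENGINE=InnoDB;
```

This is exactly the "weak entity identified by its owner plus a partial key" pattern: movie_format's key borrows movie_id and adds format.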
Microsoft Office Access database engine could not find the object 'tableName' Posted: 18 May 2013 04:08 PM PDT First, a little background: I am using MS Access to link to tables in an Advantage database. I created a System DSN. In the past in Access I've created a new database and, using the external data wizard, successfully linked to tables. Those databases and the linked tables are working fine. Now I am trying to do the same thing: create a new Access db and link to this same DSN. I get as far as seeing the tables, but after making my selection I get the error, "The Microsoft Office Access database engine could not find the object 'tableSelected'. Make sure the object exists and that you spell its name and the path name correctly." I've tried creating another data source (system and user) with no luck. Environment is Win XP, Access 2007, Advantage DB 8.1.
MYSQL 5.5 Fail start Fedora 16 Posted: 18 May 2013 06:08 AM PDT I installed mysql and mysql-server from the repos (MySQL version 5.5), then tried to start it, but got an error. Here is the log: It was a fresh installation; nothing was changed prior to that, I just ran yum update. Here is the systemctl status trace
Workaround to importing data Posted: 18 May 2013 08:20 AM PDT I am trying to import data into a SQL Server. I can import through the Import and Export Data wizard. I cannot import from my machine using Are there any other possible solutions for importing data from my machine?
Sql Anywhere 11: Restoring incremental backup failure Posted: 18 May 2013 08:08 AM PDT We want to create remote incremental backups after a full backup. This will allow us to restore in the event of a failure and bring up another machine with as close to real time backups as possible with SQL Anywhere network servers. We are doing a full backup as follows: This makes a backup of the database and log files and can be restored as expected. For incremental backups I've tried both live and incremental transaction logs with a renaming scheme if there are multiple incremental backups: However, on applying the transaction logs on restore I always receive an error when applying the transaction logs to the database:
The transaction log restore command is: The error doesn't specify which table it can't find, but this is a controlled test and no tables are being created or dropped. I insert a few rows, then kick off an incremental backup before attempting to restore. Does anyone know the correct way to do incremental backup and restore on SQL Anywhere 11? UPDATE: Thinking it may be related to the complexity of the target database, I made a new blank database and network service, then added one table with two columns and inserted a few rows. I made a full backup, then inserted and deleted a few more rows and committed transactions, then made an incremental backup. This also failed with the same error when attempting to apply the incremental backups of transaction logs after restoring the full backup ... Edit: You can follow this link to see the same question with slightly more feedback on SA: http://sqlanywhere-forum.sybase.com/questions/4760/restoring-incrementallive-backup-failure
You are subscribed to email updates from Recent Questions - Database Administrators Stack Exchange. To stop receiving these emails, you may unsubscribe now. Email delivery powered by Google.
Google Inc., 20 West Kinzie, Chicago IL USA 60610