[how to] Ubuntu 12, POSTGRES 9.2, POSTGIS 2.0
- Ubuntu 12, POSTGRES 9.2, POSTGIS 2.0
- Unable to restore dump file in MySQL 5.5
- Database describing databases and references to tables
- implementing DAC and MAC in Oracle
- Versioning data in Databases
- mysqldump: flush-privileges option
- How does MySQL determine the 'index first key' and 'index last key' with indexes?
- Adding an ID column to a join table (many-to-many) so that it can be targeted
- How to grant permission for an administrator account in MS SQL Server 2012?
- Unable to configure Informatica sources with ODBC 64 bit drivers
- Problem with a surrogate key?
- Process sessions for each minute
- What queries is Oracle executing, how frequently, and time taken?
- Need logic to collate data
- "custom archiver out of memory" error when restoring large DB using pg_restore
- SQL Server - Separating data, log, and TempDB files on a SAN
- The login already has an account under a different user name
- compare the same table
- Database Restoration Issue
- SQL Server Select Count READ_COMMITTED_SNAPSHOT QUESTION
- Viewing MySQL Account Resource Limits
- sql server database sharding - what to do with common data / non sharded data
- set the clock on mysql
- Oracle server connection reset by peer socket write error
- How to break down a query into sub-queries for simplicity?
- SQL Server to Oracle Export - Data conversion error
- Inserting and updating and selecting at thousands of times per second
Ubuntu 12, POSTGRES 9.2, POSTGIS 2.0 Posted: 03 Mar 2013 08:38 PM PST At the current moment, March 4th 2013, can PostGIS 2.0 be installed with PostgreSQL 9.2? I checked their website and, to my understanding, it is not possible... I hope that's not the case. Can anyone point out instructions for installing PostGIS 2.0 on PostgreSQL 9.2 on Ubuntu? Thanks
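At the time of the question, PostGIS 2.0 packages for PostgreSQL 9.2 were not yet in the stock Ubuntu 12.04 repositories, so the usual route was building from source or using a third-party package archive; that part is system administration rather than SQL. Once the PostGIS binaries are installed, enabling it in a database is plain SQL, roughly:

```sql
-- PostGIS 2.0 ships as an extension; run this in the target database.
CREATE EXTENSION postgis;
CREATE EXTENSION postgis_topology;   -- optional, if topology support was built

-- Verify what got installed.
SELECT PostGIS_full_version();
```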
Unable to restore dump file in MySQL 5.5 Posted: 03 Mar 2013 08:16 PM PST I'm trying to restore a MySQL 5.5 dump file at the mysql command prompt using the command given below: mysql> -u root -p root123 mydb < "D:\mysql_dump\backup_2013_01_mydb.sql" But it is giving the below error: ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '-u ro ot -p root123 mydb< "D:\mysql_dump\backup_2013_01_mydb.sql"' at line 1 Please help. I've tried it on both WinXP and Win7. Also, please advise how to do the same using MySQL Workbench. Thank you
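Judging from the error text, the whole command was typed at the mysql> prompt, where the server tries to parse it as SQL; the mysql -u user -p dbname < dumpfile form is an operating-system command and must be run from the Windows command prompt instead. From inside the command-line client, the equivalent is the SOURCE command; a sketch using the path from the error message:

```sql
-- Run inside the mysql command-line client; forward slashes work on Windows.
USE mydb;
SOURCE D:/mysql_dump/backup_2013_01_mydb.sql
```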
Database describing databases and references to tables Posted: 03 Mar 2013 05:23 PM PST

Context: The company I work for makes a number of different software products, each with its own database. As they're all about related things (accounting), a good amount of data is theoretically the same among several of them, and if the user owns more than one there are options to import the appropriate data from the others. In general each database is "structured" differently; most tables use absolute positions and numbered fields (reflecting the software's data entry forms), and some are much worse. I had the feeling that other systems such as RDF could be more suited to the task, but I had to do it with an Access database. I think the matter is problematic for any SQL database, though.

Database structure: The database I made is divided into data definitions and associations, under certain conditions, of a location (in some cases a cell in an absolute position, in others a field) of a software's database with a datum or class of data. I decided to make a table in the database for each general type of data, with fields indicating its properties to identify an exact specific datum (or class of data if some field is left null), except for properties common to all data, such as the period to which it refers. I might just have made a table with each possible exact datum, but it is a lot more convenient to be able to select based on the properties. The grouping of data is at the most general level that is subjectively reasonable. I will use code to operate on the various softwares' databases; I don't need to make cross-database SQL queries.

Question: The issue now is that a certain location in a target database may be associated with any one of the types of data, and thus with any one of the data definition tables. Of course, in terms of information I'd just need a field identifying the table and one for the record ID, but as far as I can tell, neither normal SQL queries nor constraints can apply to different tables based on the current contents of a cell. SQL queries could probably still be made by putting a conditional join for every possible table; not very convenient, but I could actually also do without them and use only code. There seems to be no way to make foreign key constraints, on the other hand; I could do without them too, but I'd prefer not to. I figured out different alternatives to face the problem, each with serious disadvantages; I'd like you to tell me if you know any better method, or which you think would be the least bad:
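One common pattern for this kind of polymorphic reference, sketched here with invented table names in generic SQL (Access's DDL support for CHECK constraints is limited, so treat this as illustrative), is an "exclusive arc": one nullable foreign key per candidate table plus a check that exactly one of them is set.

```sql
-- Two of the per-type data-definition tables (names invented).
CREATE TABLE AccountDatum (id INTEGER PRIMARY KEY);
CREATE TABLE TaxDatum     (id INTEGER PRIMARY KEY);

-- Each location row references exactly one datum, via exactly one FK.
CREATE TABLE LocationMapping (
    id                INTEGER PRIMARY KEY,
    account_datum_id  INTEGER REFERENCES AccountDatum (id),
    tax_datum_id      INTEGER REFERENCES TaxDatum (id),
    CHECK (
        (CASE WHEN account_datum_id IS NOT NULL THEN 1 ELSE 0 END
       + CASE WHEN tax_datum_id     IS NOT NULL THEN 1 ELSE 0 END) = 1
    )
);
```

The cost is one extra column per data-definition table, but it preserves real foreign keys, which the table-name-plus-record-ID approach cannot.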
implementing DAC and MAC in Oracle Posted: 03 Mar 2013 04:49 PM PST I want an example that might help me implement the dynamic modes of an access-mode model.

Static access modes are:
- use (1): allows the subject to use the object without modifying the object
- read (2): allows the subject to read the contents of the object
- update (3): allows the subject to modify the contents of the object
- create (4): allows the subject to add instances to the object
- delete (4): allows the subject to remove instances of the object

Dynamic access modes are:
- grant (1): allows the subject to grant any static access mode to any other subject
- revoke (1): allows the subject to revoke a granted static access mode from a subject
- delegate (2): allows the subject to grant the grant privilege to other subjects
- abrogate (2): allows the subject to grant the revoke privilege to other subjects

I was thinking about adopting the Linux implementation of DAC and MAC; can you think of a simple way?
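Oracle's discretionary (DAC) machinery already covers the grant/revoke/delegate family natively, while MAC-style labels would normally come from Oracle Label Security rather than plain grants. A rough mapping, as a sketch (schema, table, and user names are invented):

```sql
-- 'grant': confer a static mode (read ~ SELECT) on another subject.
GRANT SELECT ON hr.employees TO alice;

-- 'delegate': WITH GRANT OPTION lets the grantee pass the privilege on.
GRANT SELECT ON hr.employees TO bob WITH GRANT OPTION;

-- 'revoke': for object privileges, Oracle cascades the revoke through
-- any grants bob made, which approximates 'abrogate' as well.
REVOKE SELECT ON hr.employees FROM bob;
```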
Versioning data in Databases Posted: 03 Mar 2013 06:52 PM PST Is there any ready-made mechanism in Oracle, SQL Server, PostgreSQL, or an O/R framework (C#) that can version information in a database? By versioning information I mean, e.g., that I have a table Persons(id, version, isEnable, enableFrom, enableTo, name, surname), and any change to the name or surname column makes a new record with an incremented version and adjusted isEnable, enableFrom, and enableTo.
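Oracle has built-in options here (Flashback Data Archive, Workspace Manager), and O/R-mapper add-ons such as NHibernate Envers do it at the application layer; absent those, a hand-rolled pattern over the described table looks roughly like this (PostgreSQL syntax, column names from the question):

```sql
CREATE TABLE persons (
    id          integer     NOT NULL,
    version     integer     NOT NULL DEFAULT 1,
    isEnable    boolean     NOT NULL DEFAULT true,
    enableFrom  timestamptz NOT NULL DEFAULT now(),
    enableTo    timestamptz,              -- NULL marks the current version
    name        text        NOT NULL,
    surname     text        NOT NULL,
    PRIMARY KEY (id, version)
);

-- "Updating" a person means closing the current row and inserting the
-- next version; in practice this would live in a trigger or procedure.
UPDATE persons
   SET isEnable = false, enableTo = now()
 WHERE id = 1 AND enableTo IS NULL;

INSERT INTO persons (id, version, name, surname)
SELECT id, max(version) + 1, 'NewName', 'NewSurname'
  FROM persons
 WHERE id = 1
 GROUP BY id;
```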
mysqldump: flush-privileges option Posted: 03 Mar 2013 04:50 PM PST I'd like to get some clarification about the --flush-privileges option. Here's the description of the option from the MySQL docs: When it says that it sends the flush statement after dumping the database, I read that to mean that the data, schema, etc. are dumped into the backup file and then the flush statement is sent to the database that was just dumped (after it was dumped). I was wondering what dumping the data did that required the privileges to be flushed, so I started searching for an explanation to be sure when and why to use it. As I read various answers, it occurred to me that it would make sense if the flush statement were instead being included in the dump file, so that after the contents of the file were loaded into a database, the FLUSH PRIVILEGES statement would run to update the settings after the new info was imported. So, how does it work?
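For what it's worth, the second reading matches the behavior generally reported: despite the wording of the documentation, mysqldump writes the statement into the dump output after the mysql database's tables, so it executes on whichever server later loads the dump, reloading the grant tables that were just restored. The tail of such a dump contains roughly the following (the exact comment decoration varies by mysqldump version):

```sql
-- ... INSERTs into the mysql.* grant tables above ...

-- Reload the grant tables so the restored privileges take effect.
FLUSH PRIVILEGES;
```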
How does MySQL determine the 'index first key' and 'index last key' with indexes? Posted: 03 Mar 2013 07:44 PM PST In other words, how does MySQL find the result range using indexes as the first step when it executes a SELECT query?
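The terms presumably refer to the endpoints of a B-tree range scan; on that reading, the optimizer derives the tightest bounds it can from any left prefix of the index definition and scans only between them. A hypothetical illustration:

```sql
CREATE TABLE t (
    a INT,
    b INT,
    c INT,
    KEY idx_ab (a, b)
);

-- Using idx_ab, MySQL can seek straight to the first matching entry
-- ("index first key" = (5, 10)) and stop after the last entry with
-- a = 5 ("index last key"), reading only the range in between.
EXPLAIN SELECT * FROM t WHERE a = 5 AND b > 10;
```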
Adding an ID column to a join table (many-to-many) so that it can be targeted Posted: 03 Mar 2013 08:23 AM PST I'm building a web application for a company that sells cars. They buy leads (personal details of people interested in buying cars) and try to turn them into customers. The company has a number of sales persons, and they want each lead to be assignable to several sales persons at once.
However, every sales person must have his own individual copy of the lead. This is because sales persons work on a commission basis, so they need to be able to:
The only option I see for this is to add an ID column to the join table so that other tables can target it. However, I always read that it's wrong to add an ID column to a many-to-many join table.
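If it helps, the usual compromise is exactly what is described here: keep the composite pair unique so the model stays honest, and add the surrogate key purely as a stable target for other tables. A sketch with invented names:

```sql
CREATE TABLE lead_assignment (
    id          INT IDENTITY PRIMARY KEY,              -- surrogate, FK target
    lead_id     INT NOT NULL REFERENCES lead (id),
    salesman_id INT NOT NULL REFERENCES salesman (id),
    CONSTRAINT uq_lead_salesman UNIQUE (lead_id, salesman_id)
);

-- Per-salesman work on "their copy" of the lead hangs off the surrogate.
CREATE TABLE assignment_note (
    id                 INT IDENTITY PRIMARY KEY,
    lead_assignment_id INT NOT NULL REFERENCES lead_assignment (id),
    note               VARCHAR(1000) NOT NULL
);
```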
How to grant permission for an administrator account in MS SQL Server 2012? Posted: 03 Mar 2013 07:09 PM PST I installed MS SQL Server 2012 on my laptop. I am trying to run a script to create a simple database. SQL Server version: 11.0.3128. SSMS starting parameters: I did some online searching, and it seems like the account I used doesn't have proper permission to write to the folder in which the .mdf and .ldf files should be held. I would like to know how to check the account's permissions and make the necessary modifications so the files can be created in the destination folder. Thanks for your advice and help! I found a forum post on a similar issue. I received the following message: Here is the code I am using to create the database:
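A common cause of this error is that it is the SQL Server service account, not the login used in SSMS, that needs NTFS write permission on the target folder. As a hedged workaround, the files can be pointed at a directory the service account is known to write to, or the file clauses can be omitted entirely; paths and names below are made up:

```sql
CREATE DATABASE SampleDb
ON PRIMARY
    (NAME = SampleDb_data, FILENAME = 'C:\SqlData\SampleDb.mdf')  -- hypothetical writable folder
LOG ON
    (NAME = SampleDb_log,  FILENAME = 'C:\SqlData\SampleDb.ldf');

-- Or let the files land in the server's default data directory,
-- which the service account can always write to:
CREATE DATABASE SampleDb2;
```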
Unable to configure Informatica sources with ODBC 64 bit drivers Posted: 03 Mar 2013 06:31 AM PST I have an Oracle database installed on my local system (64-bit) and am trying to use Informatica as an ETL tool. My problem is that although my database is working fine (I am able to connect to it via SQL*Plus and run queries), and I was able to configure a domain for Informatica, I am not able to import sources from my database because I am not able to configure a System DSN for my installed database. The entry that's supposed to show (Oracle in Any idea how to configure this?
Problem with a surrogate key? Posted: 03 Mar 2013 07:55 AM PST I am working on an application for student management. For the students table I used a surrogate key (an auto-increment id). The problem I face with the surrogate key is that when the user wants to show information about a student, he must enter the id, but the user knows only the name of the student (and I can't use the name as a primary key because it's not unique). How can I solve this problem?
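A common resolution is to keep the surrogate key internal and let the user search by the natural attributes, then pick the right student from the matches; the application uses the selected id from then on. A sketch (table and column names are assumed):

```sql
-- 'Smith' stands in for the user's search text.
SELECT id, name, date_of_birth, class
  FROM student
 WHERE name LIKE CONCAT('%', 'Smith', '%')
 ORDER BY name;
```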
Process sessions for each minute Posted: 03 Mar 2013 12:54 AM PST I have a table that stores sessions. The columns are: It's for online radio. A listener connects to the stream, and a row is created. Once they stop listening, Stop is updated to the current time. Currently, it's about 80K rows per day. I'd like to process the sessions once a day by determining the maximum number of listeners for each minute, and push that data into a different table: I'm not super concerned with speed, just with working out how I can process this data within MySQL. I'm assuming I'd use a stored procedure (not much experience there), loop over each minute between the Does this sound viable? Is there a better way? Thanks for your time.
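Rather than looping minute by minute in a stored procedure, this can be done set-based: keep a helper table with one row per minute and count the sessions that overlap each one. A sketch, assuming the Start/Stop columns from the question (Stop NULL while someone is still listening) and hypothetical minutes and listeners_per_minute tables:

```sql
-- minutes(minute_start) holds one DATETIME row per minute of the day
-- being processed (a small calendar table, generated once).
INSERT INTO listeners_per_minute (minute_start, listener_count)
SELECT m.minute_start,
       COUNT(s.id) AS listener_count
  FROM minutes AS m
  LEFT JOIN sessions AS s
    ON s.Start < m.minute_start + INTERVAL 1 MINUTE
   AND (s.Stop IS NULL OR s.Stop >= m.minute_start)
 GROUP BY m.minute_start;
```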
What queries is Oracle executing, how frequently, and time taken? Posted: 03 Mar 2013 06:53 AM PST I have a (Java) web application that of course executes many queries (selects, inserts, updates, deletes) against the Oracle database. I want to find out the following for the last 7 days: Is this something Oracle can give me? If so, how can I get it? If not Oracle, should I look at the JDBC driver?
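Oracle does track this. For a full 7-day window you would need AWR (DBA_HIST_SQLSTAT, which requires the Diagnostics Pack license) or STATSPACK; for statements still cached in the shared pool, V$SQL can be queried directly. A minimal sketch of the latter (entries age out of the cache, so coverage of a whole week is not guaranteed):

```sql
-- Top 20 cached statements by total elapsed time (microseconds).
SELECT *
  FROM (SELECT sql_id,
               executions,
               elapsed_time / 1e6 AS total_elapsed_seconds,
               sql_text
          FROM v$sql
         ORDER BY elapsed_time DESC)
 WHERE ROWNUM <= 20;
```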
Need logic to collate data Posted: 03 Mar 2013 04:11 AM PST I have a table with the structure below. I need to group all phone numbers and, for each, show the total number of attempts along with how many times the call was answered and how many times it was not. The condition is answer_time = '0000-00-00 00:00:00' for not answered and answer_time > '0000-00-00 00:00:00' for answered. There can be any number of calls made to a particular number. Any suggestions or approach would be helpful; I'm terribly stuck on this. The result should be: In this, at the first attempt the call is answered; at the second it is not. The attempts may range up to a maximum of 16. FOR MAX
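A conditional-aggregation query covers this; table and column names below are assumed from the description:

```sql
SELECT phone_number,
       COUNT(*) AS total_attempts,
       SUM(CASE WHEN answer_time >  '0000-00-00 00:00:00' THEN 1 ELSE 0 END) AS answered,
       SUM(CASE WHEN answer_time =  '0000-00-00 00:00:00' THEN 1 ELSE 0 END) AS not_answered
  FROM call_log               -- hypothetical table name
 GROUP BY phone_number;
```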
"custom archiver out of memory" error when restoring large DB using pg_restore Posted: 03 Mar 2013 09:34 AM PST I'm trying to a restore a local copy of a 30GB DB (with only one table) dumped using the Heroku wrappers (described here) using pg_restore. The Heroku DB is using 9.1.7 and I have had the same result using 9.1.5 and 9.1.7 under OS X 10.8. Using the command: I get the following output on the command line ("cleaning" commands omitted) after 20 minutes and seeing the DB size grow to about 30 GB: Turning up log level to I have tried increasing Any recommendations? |
SQL Server - Separating data, log, and TempDB files on a SAN Posted: 03 Mar 2013 02:30 PM PST I have a SQL Server connected to a SAN. Our new storage vendor recommends that all LUNs span the entire disk array, which is RAID5. I would normally request 3 separate LUNs (data, log, and TempDB) from the SAN administrator, but given the new vendor's recommendation, is there any point in creating separate LUNs? Or would I see the same performance if everything was in one LUN since it would span all disks anyway? Great article here, but doesn't quite address my exact situation: http://www.brentozar.com/archive/2008/08/sql-server-on-a-san-dedicated-or-shared-drives/
The login already has an account under a different user name Posted: 03 Mar 2013 06:56 AM PST When I execute this SQL: I get the following error:
How do I know what this different user name is for my login account?
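Database users map to server logins by SID, so the existing user name can be found by matching SIDs between the database and server catalogs; a sketch (run in the database that raised the error, substituting the login name):

```sql
SELECT dp.name AS existing_user_name
  FROM sys.database_principals AS dp
  JOIN sys.server_principals   AS sp
    ON dp.sid = sp.sid
 WHERE sp.name = N'YourLoginName';   -- placeholder login name
```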
compare the same table Posted: 03 Mar 2013 09:31 AM PST I am facing an issue with the following query. When I execute it, it takes very long. I broke the query into two parts and compared them with a shell script, but is there any chance to go with one query? Any suggestions welcome.
Database Restoration Issue Posted: 03 Mar 2013 06:13 AM PST I am new to PostgreSQL. When I attempt to restore a database, I get an error, and for this reason I again end up with empty tables after the restore. How can I resolve this issue?
SQL Server Select Count READ_COMMITTED_SNAPSHOT QUESTION Posted: 03 Mar 2013 02:31 PM PST I seem to be getting a lot of deadlocks when doing SELECT COUNT(*) on a particular table. I have already changed all the required parameters and made them row-level locking only. I've also changed the database to use READ_COMMITTED_SNAPSHOT isolation; however, it seems that a SELECT COUNT(*) WHERE column = ? on the table still triggers deadlocks or locks on the table. Am I correct that the SELECT COUNT(*) should only be accessing the versioned rows? It doesn't seem that way, and I'm still encountering deadlocks. Proper indexing would probably help. The question is: does SQL Server 2008 R2 place shared locks on the table during SELECT COUNT(*) even when READ_COMMITTED_SNAPSHOT is set to ON? Thanks
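One thing worth verifying is that the database-level option actually took effect, since switching it on requires exclusive access to the database and only affects transactions started afterwards; a quick check:

```sql
-- 1 means READ_COMMITTED_SNAPSHOT is on for the database.
SELECT name, is_read_committed_snapshot_on
  FROM sys.databases
 WHERE name = N'YourDatabase';   -- placeholder database name
```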
Viewing MySQL Account Resource Limits Posted: 03 Mar 2013 04:31 PM PST Is there any way of viewing an account's remaining resources that are allocated to it? I set up an account that's allowed 7200 queries an hour. At any point, could I then run a query to find out how many remaining queries it's allowed? MySQL must be storing this information somewhere as Is it just not possible to retrieve that information?
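The configured limits are stored in the grant tables and are easy to read back; the in-flight counters, however, are kept in memory (and reset by FLUSH USER_RESOURCES), and to my knowledge are not exposed through any table. Viewing the limits themselves:

```sql
-- Configured per-account resource limits (0 = unlimited).
SELECT User, Host,
       max_questions,        -- MAX_QUERIES_PER_HOUR
       max_updates,          -- MAX_UPDATES_PER_HOUR
       max_connections,      -- MAX_CONNECTIONS_PER_HOUR
       max_user_connections
  FROM mysql.user;
```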
sql server database sharding - what to do with common data / non sharded data Posted: 03 Mar 2013 03:31 PM PST We have a very large scale enterprise-level database. As part of our business model, all web users hit our web servers at the same time each month, which in turn hammers our SQL box. The traffic is very heavy and continues to grow heavier as the company grows. SQL proc optimization has been performed and the hardware has already been scaled up to a very high level.

We are looking to shard the database now to ensure that we can handle company growth and future loads. We have decided what particular data should be sharded. It is a subset of our database which is highly utilized. However, my question is regarding the non-sharded data which is common/universal. An example of data like this may be an Inventory table, or possibly an Employee table, user table, etc. I see two options to handle this common/universal data:

1) Design 1 - Place the common/universal data in an external database. All writes will occur here. This data will then be replicated down to each shard, allowing each shard to read this data and inner join to it in T-SQL procs.

2) Design 2 - Give each shard its own copy of all common/universal data. Let each shard write locally to these tables and utilize SQL merge replication to update/sync this data on all other shards.

Concerns about design #1:

1) Transactional issues: If you have a situation in which you must write or update data in a shard and then write/update a common/universal table in one stored proc, for instance, you will no longer be able to do this easily. The data now exists on separate SQL instances and databases. You may need to involve MS DTC to see if you can wrap these writes into a transaction, since they are in a separate database. Performance is a concern here, and possible rewrites may be involved for procs that write to sharded and common data.

2) A loss of referential integrity: it is not possible to enforce cross-database referential integrity.

3) Recoding large areas of the system so that it knows to write common data to the new universal database but read common data from the shards.

4) Increased database trips: as in #1 above, when you run into a situation in which you must update sharded data and common data, you are going to make multiple round trips to accomplish this, since the data is now in separate databases. There is some network latency here, but I am not as worried about this issue as about the above three.

Concerns about design #2:

In design #2 each shard gets its own instance of all common/universal data. This means that all code that joins to or updates common data continues to work/run just like it does today. There is very little recoding/rewriting needed from the development team. However, this design completely depends on merge replication to keep data in sync across all shards. The DBAs are highly skilled and are very concerned that merge replication may not be able to handle this, and that should merge replication fail, recovery from the failure is not great and could impact us very negatively.

I am curious to know if anyone has gone with design option #2. I am also curious to know if I am overlooking a third or fourth design option that I do not see. Thank you in advance.
set the clock on mysql Posted: 03 Mar 2013 07:31 AM PST I would like to know where MySQL gets its current time. I am updating a table and the time used is 2 hours early; the time on the server is completely different, so I'm a little confused... I tried but I get
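MySQL takes NOW() from the host's system clock as interpreted through its time zone settings, so a consistent 2-hour offset usually points to a time_zone mismatch between the session, the server, and the OS. A quick way to inspect, and to override if needed:

```sql
SELECT NOW(), UTC_TIMESTAMP(),
       @@global.time_zone, @@session.time_zone, @@system_time_zone;

-- Set the zone explicitly for the session (or GLOBAL for the server):
SET time_zone = '+02:00';
```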
Oracle server connection reset by peer socket write error Posted: 03 Mar 2013 02:31 AM PST Full disclosure: I'm not a database admin, but if anyone is able to help me, it would be you guys. Our Oracle 11g database server stopped working; I'm not aware of anything that could have caused it, it was out of the blue. When I try to connect via SQL Developer I get either 'connection reset by peer: socket write error' or 'Input/Output error: connection reset'. I've tried restarting it, checked whether the Windows services are running, and went through every item in the configuration that I could find, but nothing helped. It seems that the listener is unable to start (I've tried adding a new listener with identical settings and also got the socket write error). Since I've made no changes to the configuration, I'm perplexed as to what the issue could be. Does anyone have any ideas as to what could be the cause of this issue? And if not, can someone post a link to something like SQL Server's "Repair Installation", which I didn't find? Any help is appreciated. EDIT: I was asked for the following: Output for lsnrctl status: And according to lsnrctl start, the listener is already running.
How to break down a query into sub-queries for simplicity? Posted: 03 Mar 2013 11:32 AM PST I'm trying to learn Relational Algebra using Could someone show me how to convert the following SQL statement using i.e. something like: This is due to the fact that I'm finding it hard to understand how the statements are broken down and then put together in one RA query using
SQL Server to Oracle Export - Data conversion error Posted: 03 Mar 2013 10:31 AM PST I have a rather weird problem with exporting a bunch of tables from SQL Server 2005 to an Oracle database. I have a SQL Agent job that deletes the Oracle tables, recreates them, and then exports the data using My problem is that the job fails while exporting data at the last table.
Inserting and updating and selecting at thousands of times per second Posted: 03 Mar 2013 03:59 PM PST I have a table that gets inserted into, updated, and selected from thousands of times within one second. I am having deadlock issues though.
It seems as though it is the updates conflicting with the inserts that are causing the deadlocks. The example is a one-to-one with the stored procedure; the only difference is the names. Assume the 1 and 2, regardless of the stored procedures' names.
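A pattern that often removes this class of deadlock in SQL Server is to take the update lock up front, so two sessions working on the same key queue behind each other instead of both reading and then escalating. A sketch under assumed object names:

```sql
CREATE PROCEDURE dbo.UpsertCounter   -- hypothetical procedure/table names
    @Key INT
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION;

    -- UPDLOCK + HOLDLOCK take and hold an update (range) lock, so a
    -- concurrent caller for the same @Key waits here rather than
    -- deadlocking after a shared-lock read.
    UPDATE dbo.Counters WITH (UPDLOCK, HOLDLOCK)
       SET HitCount = HitCount + 1
     WHERE CounterKey = @Key;

    IF @@ROWCOUNT = 0
        INSERT INTO dbo.Counters (CounterKey, HitCount)
        VALUES (@Key, 1);

    COMMIT TRANSACTION;
END;
```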