[how to] Choosing A PostgreSQL Authentication Method For A Large Course
- Choosing A PostgreSQL Authentication Method For A Large Course
- Conversion failed when converting date and/or time from character string
- Administering PostgreSQL For Database Course [on hold]
- Oracle won't start
- Best practice for upgrading mysql on a master-master setup
- Mysql ON DUPLICATE KEY UPDATE does not work with specific key
- Why set up static data in views vs. using tables in mysql?
- How to delete duplicate records
- How to query a database for empty tables
- How to execute this procedure in PL/SQL?
- Execution plan vs STATISTICS IO order
- Indexing for query containing xml column
- ROW wise SUM VS COLUMN wise SUM in MySQL
- SSIS package blocks itself if uses TRUNCATE
- How to build a database that contain only the delta from yesterday
- Simple XML .query SELECT statement times out on 1 SQL SERVER instance
- Totals Column - Blank if Zero or NULL
- Can I force a user to use WITH NOLOCK?
- How to select from a table without including repeated column values?
- Why does Log Shipping .TRN file copy just stop
- oracle alter table move taking long time
- How to determine Oracle LOB storage footprint?
- Oracle schema import is not importing all the tables present in the schema dump file
- SQL Server update query on linked server causing remote scan
- Difference between idx_tup_read and idx_tup_fetch on Postgres
- User login error when trying to access secured SQL Server database
- hstore for versioning fields in postgresql
Choosing A PostgreSQL Authentication Method For A Large Course Posted: 07 Aug 2013 08:22 PM PDT I am teaching a first course in databases for the first time. Students will need a database management system to which they can connect to do much of their work for the course. I have chosen PostgreSQL (running on a GNU/Linux-based VPS), since I am familiar with it from my own personal projects. But I have never needed to administer a server with more than one user, so I want to make sure that I am making wise decisions before setting things in stone. I would like students to be able to do the following, and (of course) have their accounts reasonably secure from attack:
There are many authentication methods available (see http://www.postgresql.org/docs/9.2/static/auth-methods.html), but none seem to meet all of my requirements well.
Option A is what I have always used myself, and it would be my preference. But I believe it would rule out accessing the database from any client other than psql running on the same machine. Option B seems the most flexible, but it also seems terribly ugly for students to need to set and maintain passwords in two disparate systems. Option C would only allow connecting from clients on remote machines, which is not really acceptable. I am fairly unfamiliar with GSSAPI / Kerberos, but it does not really sound like what I want either. My ideal connection method would have PostgreSQL ask the OS on which it is running to verify a username and password, no matter where the client software is running. Is there some better option for my requirements than B above?
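For context, the options under discussion map to pg_hba.conf entries roughly like the following (a sketch; the method names are from the PostgreSQL documentation linked above, and the `all` / CIDR values are placeholders):

```text
# Option A: OS-user identity, local Unix-socket connections only
local   all   all                  peer

# Option B: passwords stored inside PostgreSQL, works from any client
host    all   all   0.0.0.0/0      md5

# The "pam" method delegates password checking to the OS even for
# TCP clients, which is close to the ideal described above
host    all   all   0.0.0.0/0      pam
```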
Conversion failed when converting date and/or time from character string Posted: 07 Aug 2013 09:08 PM PDT I am getting this error:
Administering PostgreSQL For Database Course [on hold] Posted: 07 Aug 2013 06:13 PM PDT I am teaching a first course in databases for the first time. Students will need a database management system to which they can connect to do much of their work for the course. I have chosen PostgreSQL (running on a GNU/Linux-based VPS), since I am familiar with it from my own personal projects. But I have never needed to administer a server with more than one user, so I want to make sure that I am making wise decisions before setting things in stone. There are a few different aspects of administration that I know I need to think about (but I am also interested in others I may be overlooking). Apologies if this would have been better as five distinct questions, but they all seem interrelated.

Aspect #1: Organization of databases
I can see several different ways to possibly partition our data:

Option A seems like it would be a management headache and require running many instances of the server, while option C would cause namespace clashes between projects. Option B has no downsides that I am aware of, so that's what I believe I should go with. Agreed?

Aspect #2: Client software
There are several ways that students could access the server:

Option A is the only one I have any personal experience with. Option B might be easier because I will not have to worry about teaching them to use UNIX at all. But the pgAdmin interface is enormously complex, exposing details that I would not like students to worry about before they have mastered the basics, and it would allow them to do things with point-and-click rather than mastering SQL. I think I would like to use option A initially and introduce option B later in the course as an alternative. Reasonable?

Aspect #3: Users, roles, & authentication
There are several ways the system could authenticate users:

Option C does not seem like a good choice, because I would like students to be able to write database-driven webapps in ~/public_html on the same machine that runs the database server, and that would seem to require user accounts. Option B seems ugly, because students would need to set passwords in two different systems. Option A seems cleanest (and is most familiar to me), but it seems like it would be incompatible with Aspect #2 Option B. And I am not sure it would allow the writing of webapps either, since apache would not be running as the file's owner. So option B looks like the least bad alternative. Agreed?

Aspect #4: Querying student-owned objects
Once students have done some work, I need to be able to view it (hopefully in an automated way, since I will have 40+ students in the course). I could log in as the superuser role to do this, but being a superuser unnecessarily seems like a bad idea. I can grant myself CONNECT privileges on the students' databases after I create them (for aspect #1 option B, adjusted as necessary for other options). But once students create tables I will not have SELECT privileges unless the student specifically grants them to me. Is there some way that I can get that privilege (and only that privilege) automatically granted to my non-superuser role on all objects that will later be created? If not, are there any other, better alternatives to logging in as a superuser?

Aspect #5: Protecting my disk space
Since many students will be sharing one system with constrained resources, I need a way to protect it from malicious or accidental havoc. There does not appear to be any way to limit database size within PostgreSQL. I thought about creating a tablespace per user in their home directories (this rules out aspect #3 option C) and using the OS's disk quotas. But it looks like this would not work because the files would still be owned by postgres rather than the individual user. Is there any good solution for this?
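On Aspect #4, PostgreSQL's ALTER DEFAULT PRIVILEGES (available since 9.0) covers exactly the "automatically granted on objects created later" case; a minimal sketch, where the role names student1 and instructor are placeholders:

```sql
-- Run once per student role: every table student1 subsequently
-- creates in schema public becomes readable by instructor.
ALTER DEFAULT PRIVILEGES FOR ROLE student1 IN SCHEMA public
    GRANT SELECT ON TABLES TO instructor;
```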
Oracle won't start Posted: 07 Aug 2013 03:28 PM PDT While it was working fine, I had to stop the server once. When trying to start Oracle using a script we have, I got the following error: Also when trying to start SQL*Plus manually AS SYSDBA I get: Using SQL*Plus with other users, I get: Any help appreciated ...
Best practice for upgrading mysql on a master-master setup Posted: 07 Aug 2013 02:03 PM PDT Does anyone have experience upgrading a master-master setup of MySQL 5.1 to a newer release while trying to keep at least one server online? (In my case we are upgrading to MariaDB 5.5.)
Mysql ON DUPLICATE KEY UPDATE does not work with specific key Posted: 07 Aug 2013 01:09 PM PDT While my query works on almost all entries, it does not work with one particular key. If I remove the ON DUPLICATE KEY part, both keys throw an error that they already exist. Before and after query: Table layout: Do you have an idea what could cause this? Engine: MySQL 5.6.12
Why set up static data in views vs. using tables in mysql? Posted: 07 Aug 2013 12:45 PM PDT I get an LDAP feed nightly. I get it as a text file and dump/create my LDAPALL table. There are roughly 75K employees times about 50 fields. I have the following too:
- LDAPIMPORTANT - view that stores all 75K but only 15 fields
- LDAPSHORT - view that stores all 75K but 5 fields
- LDAPAB - view that only stores 9K employees based on two groups (field lookup)
Each of these is used a lot, for different apps, and there are also many views written against these views. But there are no updates to them; we do not update employee data. It is just the LDAPALL reload once a night. In this circumstance should I create tables from the LDAPALL table instead of views? I could set up jobs to create these tables once a night. What is the best practice here? Speak in layman's terms because I am a PHP developer made to do all the DB admin stuff.
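Since the underlying data changes only once a night, the views could be materialized into real tables by the same nightly job; a sketch of the idea in MySQL (the derived table name, column names, and index are placeholders):

```sql
-- Rebuild a static copy right after the nightly LDAPALL load
DROP TABLE IF EXISTS LDAPIMPORTANT_TBL;

CREATE TABLE LDAPIMPORTANT_TBL AS
SELECT emp_id, last_name, first_name   -- ...the 15 fields
FROM LDAPALL;

-- Index the columns the dependent apps filter on
ALTER TABLE LDAPIMPORTANT_TBL ADD INDEX idx_emp (emp_id);
```

The trade-off is extra disk and a brief rebuild window in exchange for real indexes, which views over a 75K-row base table cannot provide on their own.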
How to delete duplicate records Posted: 07 Aug 2013 10:58 AM PDT I'm able to find duplicate records using the following query; however, I'm not sure how to delete the duplicate records while ignoring any records that start with 0 in the starttime field. Sample output
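The original query and table layout did not survive the digest, so the following is only a generic sketch of the usual MySQL pattern: keep the lowest id per duplicate group and delete the rest. The table name calls, the key id, and grouping on starttime alone are all assumptions:

```sql
-- t2 is the survivor (smaller id); t1 rows are deleted
DELETE t1
FROM calls AS t1
JOIN calls AS t2
  ON  t1.starttime = t2.starttime
  AND t1.id > t2.id
WHERE t1.starttime NOT LIKE '0%';   -- leave records starting with 0 alone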
How to query a database for empty tables Posted: 07 Aug 2013 05:40 PM PDT Due to some 'developers' we had working on our system, we have had issues with empty tables. We found that during the transfer to the cloud several tables were copied, but the data in them wasn't. I would like to run a query against the system tables to find which user tables are empty. We are using MS SQL 2008 R2. Thanks for the help.
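One way to do this from the catalog views (a sketch; the row counts in sys.partitions are approximate for heaps under heavy churn, but that is fine for spotting completely empty tables):

```sql
SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE NOT EXISTS (
    SELECT 1
    FROM sys.partitions AS p
    WHERE p.object_id = t.object_id
      AND p.index_id IN (0, 1)   -- heap or clustered index
      AND p.rows > 0
);
```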
How to execute this procedure in PL/SQL? Posted: 07 Aug 2013 12:49 PM PDT I have this table in the below format: This is how I want it: As I don't know all the fields in the userfieldcd column, I am trying to dynamically pivot the table. So I am using this procedure, but I don't know how to call it in PL/SQL Developer. I am using Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production When I call the procedure using: it gives me
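The procedure itself was stripped from this digest, but dynamic-pivot procedures of this shape usually return their result through a SYS_REFCURSOR OUT parameter. Assuming a procedure named dynamic_pivot with that signature (both the name and signature are assumptions), it can be invoked from a SQL*Plus-style command window like this:

```sql
-- bind a host ref-cursor variable, call the procedure, print the rows
VARIABLE rc REFCURSOR;
EXEC dynamic_pivot(:rc);
PRINT rc;

-- or from an anonymous PL/SQL block:
DECLARE
  l_rc SYS_REFCURSOR;
BEGIN
  dynamic_pivot(l_rc);
END;
/
```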
Execution plan vs STATISTICS IO order Posted: 07 Aug 2013 06:11 PM PDT SQL Server graphical execution plans read right to left and top to bottom. Is there a meaningful order to the output generated by SET STATISTICS IO ON? The following query: Generates this plan: And this output: So, I reiterate: what gives? Is there a meaningful ordering to the STATISTICS IO output?
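For readers unfamiliar with the feature in question, STATISTICS IO is enabled per session and emits one line per table touched by the statement; a minimal sketch (table names are placeholders):

```sql
SET STATISTICS IO ON;

SELECT o.OrderID, c.Name
FROM dbo.Orders AS o
JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID;

-- The Messages tab then shows lines of the form:
-- Table 'Customers'. Scan count 1, logical reads ...
-- Table 'Orders'. Scan count 1, logical reads ...
```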
Indexing for query containing xml column Posted: 07 Aug 2013 03:21 PM PDT I have the queries below, which perform a little slowly, so I am planning to create an index. I created a primary XML index and performance improved a little (judging by the execution plan). The queries are shown below. The table has the below structure. I am using SQL Server 2008 R2 Express, so I think I cannot use full-text search. Please advise me how the index should be created so that the performance of the above queries is improved.
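The actual queries were stripped from this digest, so only the general pattern can be sketched: on top of a primary XML index, SQL Server supports secondary XML indexes tuned to how the XML is queried (PATH for exist()/path predicates, VALUE for value searches, PROPERTY for id-plus-path lookups). Table and column names below are placeholders:

```sql
-- prerequisite: a clustered primary key on the table
CREATE PRIMARY XML INDEX PXML_Docs ON dbo.Docs(XmlCol);

-- secondary index that helps path-based predicates such as exist()
CREATE XML INDEX SXML_Docs_Path ON dbo.Docs(XmlCol)
USING XML INDEX PXML_Docs FOR PATH;

-- secondary index that helps searches on node values
CREATE XML INDEX SXML_Docs_Value ON dbo.Docs(XmlCol)
USING XML INDEX PXML_Docs FOR VALUE;
```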
ROW wise SUM VS COLUMN wise SUM in MySQL Posted: 07 Aug 2013 09:34 PM PDT I have a table: I modified this structure into: Assume I have 210k rows. In some cases I want to sum all the values in the table: Query 1 is executing faster than query 2. Why is query 2 slower than query 1?
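The two queries were stripped from this digest, but the title suggests the classic comparison sketched below; this is only an illustrative reconstruction with placeholder table and column names:

```sql
-- Query 1 (column-wise): one aggregate per column, added at the end
SELECT SUM(val1) + SUM(val2) + SUM(val3) FROM t;

-- Query 2 (row-wise): the addition happens per row before aggregating
SELECT SUM(val1 + val2 + val3) FROM t;
```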
SSIS package blocks itself if uses TRUNCATE Posted: 07 Aug 2013 10:09 AM PDT There is an SSIS package in which the Delete block deletes everything from six tables, and the Parse block loads six files into the six tables. If the Delete uses TRUNCATE, the package blocks itself: at that moment on the server side I can see an SSIS spid being blocked by another spid from the same package. If I switch the only SQL Server connection manager used to a different configuration, the blocking goes away.
Is there a way to set up the package so that it can use TRUNCATE without blocking itself?
How to build a database that contain only the delta from yesterday Posted: 07 Aug 2013 06:32 PM PDT I need to know what has been changed on my database since last night. Is it possible to extract this data from the LDF file and build a new database that contains the delta? For example, let's say I have a table for users, and overnight a new user was added and one of the users updated his home address. I need to be able to build a new database whose users table will contain two records: 1. The new user (plus a new column to indicate whether it is a new or updated row) 2. The user that updated his record (it would be nice to know which fields were updated). BTW, I have two SQL Servers that I can use (2008 and 2012). Thanks in advance
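Reading the LDF directly generally requires third-party tools, but SQL Server 2008+ has a built-in feature aimed at exactly this scenario: Change Data Capture records inserts, updates, and deletes into queryable change tables. A minimal sketch (the schema and table names are assumptions):

```sql
-- enable CDC for the database, then for each table to track
EXEC sys.sp_cdc_enable_db;
EXEC sys.sp_cdc_enable_table
     @source_schema = N'dbo',
     @source_name   = N'Users',
     @role_name     = NULL;

-- changes then accumulate in cdc.dbo_Users_CT, where the
-- __$operation column distinguishes inserts (2) from updates (3/4)
```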
Simple XML .query SELECT statement times out on 1 SQL SERVER instance Posted: 07 Aug 2013 07:44 PM PDT I dropped and recreated a stored procedure on a DB on a SQL Server 2012 instance (SERVER1/INSTANCE1), after which the sproc started hanging on one line. Using Profiler I have reduced it to a reproducible scenario on this SQL Server instance with the following code: This statement hangs. If you comment out one of the returned columns in the SELECT statement, then the statement runs fine. So one or two returned columns is fine, but anything over that is bad. The same applies if you take out the first node selector. Here's the rub though:
It seems to me that SERVER1/INSTANCE1 has borked the way it runs this statement. I have heard that SQL Server does things to optimise the execution of statements, but my knowledge stops there. I'm sure there must be some way to get it to behave again, but how? UPDATE The following adjustment (getting the singleton of the XPath result set rather than potentially multiple first elements) solves the issue on SERVER1/INSTANCE1. I would post this as the answer, except that I don't believe it identifies the underlying problem (please correct me if I'm wrong!). Given that our codebase deployed on other servers implements the statement above (or derivatives of it), I don't want to rework our entire XML-shredding approach without good reason (or at least I would have to justify it to my peers). Any help gratefully received. Thanks, Ali
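The adjusted query itself was stripped from this digest, but the singleton adjustment described in the UPDATE normally means wrapping the path in parentheses with a [1] ordinal, so the optimiser knows exactly one value comes back per row. An illustrative sketch with placeholder element names:

```sql
DECLARE @x xml = N'<Items><Item><Price>1.50</Price></Item></Items>';

-- potentially many "first elements" - harder for the optimiser:
SELECT T.c.value('Price[1]', 'decimal(10,2)')
FROM @x.nodes('/Items/Item') AS T(c);

-- singleton form: the (...)[1] guarantees exactly one node
SELECT T.c.value('(Price/text())[1]', 'decimal(10,2)')
FROM @x.nodes('/Items/Item') AS T(c);
```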
Totals Column - Blank if Zero or NULL Posted: 07 Aug 2013 08:48 PM PDT A report currently computes a "totals" column like so: How can I get a blank cell when the total equals zero, instead of NULL or "0", and avoid doing the computation twice?
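The report's expression was stripped from this digest, but the "computation twice" problem it describes usually looks like the first form below; NULLIF avoids the repetition (column and table names are placeholders):

```sql
-- repeats the computation in the CASE:
SELECT CASE WHEN a + b + c = 0 THEN NULL ELSE a + b + c END AS Total
FROM t;

-- computes it once; NULLIF returns NULL when the sum equals 0
SELECT NULLIF(a + b + c, 0) AS Total
FROM t;
```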
Can I force a user to use WITH NOLOCK? Posted: 07 Aug 2013 10:37 AM PDT Can I force a user's queries to always run with the hint NOLOCK? e.g. they type a plain query, but what is executed on the server carries the hint. This question is not:
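A sketch of the intended transformation (the query and table are hypothetical); note that setting the session's isolation level to READ UNCOMMITTED has the same effect as NOLOCK on every table in the session, which is one way such a policy is usually approximated:

```sql
-- what the user types:
SELECT * FROM dbo.Orders;

-- what should effectively run:
SELECT * FROM dbo.Orders WITH (NOLOCK);

-- session-wide equivalent:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT * FROM dbo.Orders;
```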
How to select from a table without including repeated column values? Posted: 07 Aug 2013 10:36 AM PDT In a previous question, How to merge data sets without including redundant rows?, I asked about filtering redundant historical data during import, but @DavidSpillett correctly replied that I couldn't do what I was trying to do. Instead of filtering the table during import, I now want to create a view on the table that returns only records where the price has changed. Here's the original scenario rephrased to suit this question: We have a table of historical prices for items. The table contains rows where the same price is recorded for multiple dates. I want to create a view on this data which only shows price changes over time, so if a price changes from A to B I want to see it, but if it "changes" from B to B then I don't want to see it. Example: if the price yesterday was $1, and the price today is $1, and there were no other price changes, then the price today can be inferred from the price yesterday, so I only need the record from yesterday. Example (http://sqlfiddle.com/#!3/c95ff/1): My initial attempt used ROW_NUMBER: Which returned: I tried searching for a similar question/answer but it's hard to work out how to phrase the search; an example is worth a lot of words. Any suggestions appreciated. Thanks
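A common shape for this kind of view compares each row with the previous row per item; a sketch using LAG (SQL Server 2012+; the table and column names are assumptions based on the description):

```sql
CREATE VIEW dbo.PriceChanges AS
WITH ordered AS (
    SELECT ItemId, PriceDate, Price,
           LAG(Price) OVER (PARTITION BY ItemId
                            ORDER BY PriceDate) AS PrevPrice
    FROM dbo.PriceHistory
)
SELECT ItemId, PriceDate, Price
FROM ordered
WHERE PrevPrice IS NULL          -- first record for the item
   OR PrevPrice <> Price;        -- price actually changed
```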
Why does Log Shipping .TRN file copy just stop Posted: 07 Aug 2013 01:59 PM PDT I apologize in advance for a long post, but I have had it up to here with this error and with having to delete the LS configuration and start over for any DB that gets it. I have LS set up on 3 Win2k8R2 servers (primary, secondary, monitor) with the transaction logs of 100 databases backed up and shipped from the primary to the secondary, monitored by the monitor. Backups and copies run every 15 min, and files older than 24 hrs are deleted. Some DBs are very active and some not so much, but all are shipped regardless for uniformity's sake (basically to make the secondary server identical to the primary). Some DBs are for SP2010 and the majority for an in-house app. The issue is that after all LS configs are set up, all works well for about 3 to 4 days. Then I go to the Transaction LS Status report on the secondary and see that, randomly, some LS jobs have an Alert Status because the time since the last copy is over 45 min, so no restore has occurred. This seems random, and the only errors I see are from an SP2010 DB (WebAnalyticsServiceApplication_ReportingDB_77a60938_##########), which I believe is a reports DB that gets created weekly; LS cannot figure out which is the last copy to back up or restore. I posted here about that and have yet to find a permanent solution. For my main error (time since last copy) I have not seen anything that could have caused it, and I don't get any messages (even though some alert statuses have been ignored for 3 days). Anyway, I would really appreciate any input on understanding what's causing this and how I could fix it. Thanks.
oracle alter table move taking long time Posted: 07 Aug 2013 10:55 AM PDT I'm currently trying to compress a table in Oracle with the following statements. My question is: the ALTER TABLE ... MOVE has been running for a long time. If I cancel it, will I lose data? Or is there any way to find out how long it will take?
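On the progress question, long-running operations such as a table move usually report progress in v$session_longops; a sketch (requires SELECT privilege on the view):

```sql
SELECT sid, opname, sofar, totalwork,
       ROUND(sofar / totalwork * 100, 1) AS pct_done,
       time_remaining                    -- estimated seconds left
FROM   v$session_longops
WHERE  totalwork > 0
  AND  sofar < totalwork;
```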
How to determine Oracle LOB storage footprint? Posted: 07 Aug 2013 10:36 AM PDT Timing is fairly easy to profile, but what's the easiest way to get a reasonably accurate measurement of how much space a specific LOB column takes up?
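One common way to measure this is through the segment that backs the LOB column: LOB data is stored in its own segment, whose name is listed in USER_LOBS. A sketch (the table name is a placeholder):

```sql
SELECT l.column_name,
       s.bytes / 1024 / 1024 AS lob_mb
FROM   user_lobs     l
JOIN   user_segments s ON s.segment_name = l.segment_name
WHERE  l.table_name = 'MY_TABLE';
```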
Oracle schema import is not importing all the tables present in the schema dump file Posted: 07 Aug 2013 09:55 AM PDT I have exported an existing Oracle schema from another machine and then imported it on my local machine. The import was successful, but some tables which are present in the export dump file were not imported. Here are the export and import commands I used. The Oracle version we are using is 10g EE. What could be going wrong? Can you please suggest a solution to this issue?
SQL Server update query on linked server causing remote scan Posted: 07 Aug 2013 04:54 PM PDT I have a SQL Server 2012 instance set up as a linked server on a SQL Server 2008 server. The following query executes in less than 1 second:
However, if I run this query to do a remote update, it takes 24 seconds, and 2 rows are affected: I tested using The table joins are identical in both queries, so why is it using a Remote Scan for the second query, and how do I fix this?
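When an UPDATE references a table on a linked server by its four-part name, SQL Server often pulls the remote rows across and filters locally (which shows up as the Remote Scan), whereas wrapping the remote side in OPENQUERY pushes the work to the remote server. An illustrative sketch with placeholder names:

```sql
-- four-part name: the predicate may be evaluated locally (Remote Scan)
UPDATE LNK.RemoteDb.dbo.Accounts
SET    Status = 'closed'
WHERE  AccountId = 42;

-- OPENQUERY: the filtering runs on the linked server itself
UPDATE OPENQUERY(LNK,
    'SELECT Status FROM RemoteDb.dbo.Accounts WHERE AccountId = 42')
SET Status = 'closed';
```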
Difference between idx_tup_read and idx_tup_fetch on Postgres Posted: 07 Aug 2013 08:35 PM PDT On Postgres 8.4, when you query the index statistics, the result includes the fields idx_tup_read and idx_tup_fetch. What is the difference?
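The stripped query was presumably against one of the statistics views that expose these counters; for example:

```sql
SELECT relname, indexrelname, idx_tup_read, idx_tup_fetch
FROM   pg_stat_user_indexes
ORDER  BY idx_tup_read DESC;
```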
User login error when trying to access secured SQL Server database Posted: 07 Aug 2013 01:55 PM PDT We have a username that was recently renamed from one username to another (think getting married). The Active Directory admin renamed the user because "it has always worked in the past". One vendor package we use uses the built-in MS SQL Server security. Each module has three groups:
So we can add a person to one of these groups and they get the appropriate access. I don't have the actual error message in front of me anymore, but it said that they are not authorized for table CriticalVendorTable. It worked before the rename. The admin removed the person from each group and re-added them. Still no go. I even restarted the server and it still doesn't work. My best guess is that there is a UUID (or unique id) somewhere that is causing problems. The vendor's response is to delete the user and then re-add them. I have only had time to do some brief searching, but I found this page: AD User SID Mis-mapping. Would this be worth trying? Would it be better to just delete the user and recreate them?
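The "unique id" guess can be checked directly: Windows logins are tied to the account's SID, and a rename keeps the SID while delete-and-recreate changes it. A sketch for comparing what SQL Server has stored (the login name is a placeholder):

```sql
-- SID as the server sees the login
SELECT name, sid FROM sys.server_principals
WHERE name = N'DOMAIN\renamed.user';

-- SID as recorded inside the vendor database; a mismatch here
-- (an "orphaned user") would explain the authorization failures
SELECT name, sid FROM sys.database_principals
WHERE name = N'DOMAIN\renamed.user';
```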
hstore for versioning fields in postgresql Posted: 07 Aug 2013 08:55 PM PDT I have a PostgreSQL database that will hold about 50 tables, each of them having about 15 fields, with at least 300,000 rows in each table. In order to track the changes made to each field I am thinking of creating a table defined as: I would expect the table to grow and grow, and to retrieve from it. When the tables are modified I would add a new record with something like: where Are there better approaches to accomplishing this? The table would grow to at most 225 million rows; or would it be better to have 50 tables of at most 4.5 million rows each, noting that the actual trace of each field will go into the hstore?
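A common implementation of this idea is a generic audit trigger that stores only the changed fields as an hstore delta; a sketch, assuming PostgreSQL 9.1+ with the hstore extension and an integer id primary key on the audited tables (all names are placeholders):

```sql
CREATE EXTENSION IF NOT EXISTS hstore;

CREATE TABLE audit_log (
    audit_id   bigserial   PRIMARY KEY,
    table_name text        NOT NULL,
    row_id     integer     NOT NULL,
    changed_at timestamptz NOT NULL DEFAULT now(),
    changes    hstore      NOT NULL   -- only the fields that changed
);

CREATE OR REPLACE FUNCTION log_field_changes() RETURNS trigger AS $$
BEGIN
    -- hstore(NEW) - hstore(OLD) keeps just the modified key/value pairs
    INSERT INTO audit_log (table_name, row_id, changes)
    VALUES (TG_TABLE_NAME, NEW.id, hstore(NEW) - hstore(OLD));
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_audit
AFTER UPDATE ON some_table
FOR EACH ROW EXECUTE PROCEDURE log_field_changes();
```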