[how to] How to create login and logout trigger for client audit?
- How to create login and logout trigger for client audit?
- Best way to display number of overspent projects
- How to drop a column named 'column'?
- Deny access to one database, allow all the rest
- MYSQL - Unable to change 'old_passwords' variable
- How to interpret and modify a dBase (.dbf) file?
- Can I speed up this slow query which should be limited by MIN/MAX values?
- Xml stored as varbinary
- How do I find out how far along my PostgreSQL query is?
- Why would one use the Innodb Plugin instead of the builtin Innodb in Mysql 5.1? (GPL)
- MongoDB cloneCollection between CentOS servers times out at 2 hours
- Convert SQL Server query to MS Access syntax
- Oracle query is slower the second time
- Should I be concerned about excessive Com_change_db commands?
- db2ckbkp - How to determine what tablespaces are temp-tablespace
- Find rows where any column matches a search pattern
- SQL Server 2005 get execution plan of an overnight stored procedure
- Attribute and Value tables on SQL
- How to speed up query on table with millions of rows
- How can I integrate a new system to a design scheme I previously created?
- What's a good way to model user authorization to hierarchical data?
- Oracle shared memory exception ORA-04031
- Memcached plugin on MariaDB?
- Failed copy job deletes all users
- MySQL Tables Require Daily Repairs - Server, Table or HD?
- Azure SQL Administrator Can't Access Master Database
- Constructing an SQL query with time intervals
- Mysqldump tables excluding some fields
- Sybase SQL Anywhere 12 - Get all indexes which are unique -> ambiguous sysindexes error
- Mongo connection failing with 'Transport endpoint is not connected'
How to create login and logout trigger for client audit? Posted: 26 Sep 2013 09:30 PM PDT I need a table to record login and logout information for clients (not the sysadmin), so that I can later use the dates to compute user login/logout statistics. Can anyone offer me a simple method to create login and logout triggers that audit a specific client but not the sysadmin? Thanks in advance.
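The platform isn't stated; assuming SQL Server, a minimal sketch might pair an audit table with a server-level LOGON trigger. The table and trigger names below are hypothetical. Note that SQL Server has no logoff trigger; logout events would need SQL Trace or Extended Events instead.

```sql
-- Hypothetical audit table; kept in master for simplicity.
CREATE TABLE master.dbo.LoginAudit (
    LoginName sysname       NOT NULL,
    HostName  nvarchar(128) NULL,
    EventTime datetime      NOT NULL DEFAULT GETDATE()
);
GO

-- Caution: if a logon trigger raises an error, the login is refused.
CREATE TRIGGER trg_LoginAudit
ON ALL SERVER
WITH EXECUTE AS 'sa'   -- so the insert works no matter who logs in
FOR LOGON
AS
BEGIN
    -- Record only logins that are not members of the sysadmin role
    IF IS_SRVROLEMEMBER('sysadmin', ORIGINAL_LOGIN()) = 0
        INSERT INTO master.dbo.LoginAudit (LoginName, HostName)
        VALUES (ORIGINAL_LOGIN(), HOST_NAME());
END;
GO
```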
Best way to display number of overspent projects Posted: 26 Sep 2013 07:46 PM PDT I have the first view displayed as: Which gives: However, I want to display a "NumberOfOverBudgetProjects" for each department in this view. I created a view which lists which projects are going over their hourly budget: Which gives: Marketing department should list 2 projects being over budget, and Accounting and Finance should both list 0. Any ideas how I can incorporate a column of "OverBudgetProjects" in the original view? Or by creating a new view?
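Since the view definitions and screenshots didn't survive this digest, the following is only a sketch of the usual shape of such a fix, with hypothetical table and view names: LEFT JOIN the existing over-budget view and count its matches per department.

```sql
-- Department and OverBudgetProjects are hypothetical stand-ins for the
-- poster's base table and existing over-budget view.
CREATE VIEW DepartmentBudgetSummary AS
SELECT d.DepartmentName,
       COUNT(obp.ProjectId) AS NumberOfOverBudgetProjects
FROM Department AS d
LEFT JOIN OverBudgetProjects AS obp
       ON obp.DepartmentId = d.DepartmentId
GROUP BY d.DepartmentName;
```

COUNT(obp.ProjectId) counts only matched rows, so the LEFT JOIN lets departments with no over-budget projects still appear with 0 rather than disappearing.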
How to drop a column named 'column'? Posted: 26 Sep 2013 05:41 PM PDT I need to delete a column named "column" from a table, but the standard: returns: I am thinking "column" is a reserved word and shouldn't be used. But how do I get around to deleting it?
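Since "column" is indeed a reserved word in most dialects, quoting the identifier is the usual way out; my_table below is a placeholder, and the quoting style depends on the engine:

```sql
ALTER TABLE my_table DROP COLUMN `column`;   -- MySQL (backticks)
ALTER TABLE my_table DROP COLUMN "column";   -- standard SQL / PostgreSQL / Oracle
ALTER TABLE my_table DROP COLUMN [column];   -- SQL Server (brackets)
```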
Deny access to one database, allow all the rest Posted: 26 Sep 2013 07:49 PM PDT How can I deny access to specific databases from a login? I know that I can walk through and make the login a db_owner on all the ones I want the login to have access to, but how do I do the opposite? I want the login to have access to everything but the ones that I specify.
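Assuming SQL Server (the db_owner mention suggests it), one sketch is to deny CONNECT on just the databases to exclude; the login and database names below are placeholders.

```sql
USE RestrictedDb;
GO
-- Map a database user to the login (if one doesn't already exist),
-- then deny it the right to connect to this database:
CREATE USER [SomeLogin] FOR LOGIN [SomeLogin];
DENY CONNECT TO [SomeLogin];
```

Access to every other database is unaffected, so the login keeps whatever permissions it has elsewhere.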
MYSQL - Unable to change 'old_passwords' variable Posted: 26 Sep 2013 05:53 PM PDT I have full access to a 'MySQL 5.0.27-community-nt-log' database. One of our new updates for a core app requires PHP 5.4; after installing and configuring it, I'm stuck at the error regarding old vs. new passwords. I've followed every thread I can find and I'm pretty sure I understand the process:
But I cannot seem to get 'old_passwords' set to 'OFF'. Logging in as root, I've tried the following:
and restarted the service, but SHOW VARIABLES LIKE 'old_passwords' always reports 'ON'. Changing the password still produces a 16-character hash. I'm sure once I'm able to turn this variable off I'll be fine. Any advice?
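For reference, a minimal sequence that usually flips this over, with 'app_user'@'localhost' as a placeholder account, looks like this; note that if my.cnf still contains old_passwords=1 (or old-passwords) under [mysqld], it will re-enable the setting on every restart, so that line has to be removed or changed too.

```sql
-- As root; affects sessions opened after this point:
SET GLOBAL old_passwords = 0;
SET SESSION old_passwords = 0;

-- Re-set the password so the long hash is stored:
SET PASSWORD FOR 'app_user'@'localhost' = PASSWORD('new_password');

-- Verify: 41 characters = new-format hash, 16 = old format
SELECT user, host, LENGTH(password) AS hash_len FROM mysql.user;
```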
How to interpret and modify a dBase (.dbf) file? Posted: 26 Sep 2013 03:48 PM PDT I'm working with dBASE (.dbf) files for a project using GIS software, because it seems they are more efficient. I've converted several .xml files to .dbf with OpenOffice 3.4 and loaded them into the GIS; however, in some columns, the fields that appeared with different numbers in OpenOffice turn into a series of ones (1) in the GIS attribute table. This is an example of the first row of one element: Na,N,6,2. I think I can affect this by changing the first row of my .dbf file. First question: I would like to know what every character means. I know that the first (e.g. Na) is the name of the field; N is probably the type? Number? I've no idea about the third and the fourth (red and blue arrows). Second question: why, if I write "Na", does it become "NA", with capitalized letters, when I load the file? Thanks a lot!
Can I speed up this slow query which should be limited by MIN/MAX values? Posted: 26 Sep 2013 07:21 PM PDT The intention here is to look at user retention over a period of time. My current solution employs a temporary table, since I do this over several ranges. Here I would have liked to limit this query to only match player_ids whose first/last activity falls in a specific period of time. Instead I'm forced to do that in the subsequent query: This is pretty awful, since I'm filling up the temporary table with lots of data I don't need. Is there a faster way? Assume at least 2-3 million entries in the table.
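The original queries didn't survive the digest, so this is only a sketch under assumed names (a plays table with player_id and event_time columns): conditions on per-player MIN/MAX belong in HAVING, which collapses the temp-table step into one pass.

```sql
SELECT player_id,
       MIN(event_time) AS first_seen,
       MAX(event_time) AS last_seen
FROM plays
GROUP BY player_id
HAVING MIN(event_time) >= '2013-01-01'
   AND MAX(event_time) <  '2013-02-01';
```

With an index on (player_id, event_time), the engine can often compute each group's MIN/MAX from the index alone instead of scanning every row.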
Xml stored as varbinary Posted: 26 Sep 2013 09:22 PM PDT I'm wondering: if I have to store XML information inside a column (just storing, not querying) in SQL Server, which datatype is better to use, XML or varbinary? I did an example, and the result is that varbinary uses more space... any thoughts on that?
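Since the example didn't survive, a comparable test is easy to reproduce (SQL Server 2008+ syntax): store the same document both ways and compare DATALENGTH. The xml type keeps a parsed internal binary representation, which is why its size differs from the raw UTF-16 bytes of the nvarchar-derived varbinary.

```sql
DECLARE @doc nvarchar(max) =
    N'<root><item id="1">value</item><item id="2">value</item></root>';

DECLARE @as_xml xml            = @doc;
DECLARE @as_bin varbinary(max) = CONVERT(varbinary(max), @doc);

SELECT DATALENGTH(@as_xml) AS xml_bytes,
       DATALENGTH(@as_bin) AS varbinary_bytes;
```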
How do I find out how far along my PostgreSQL query is? Posted: 26 Sep 2013 02:45 PM PDT I have a pretty decent idea of how many rows my SELECT...INTO query will actually process (e.g. I know how many will materialize). I understand Postgres won't tell me percentage completeness; is there a way (buried deep in logs, system tables, or otherwise) that I can find out how many rows have been pumped into the destination table or have been read by the SELECT query?
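There's no percentage counter, but on PostgreSQL 9.2+ you can at least watch the statement's elapsed time from another session:

```sql
SELECT pid,
       now() - query_start AS runtime,
       state,
       query
FROM pg_stat_activity
WHERE state = 'active';
```

Row counts are harder: a table created by SELECT ... INTO isn't visible to other sessions until the transaction commits, so its growth can't be read from the catalog mid-query.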
Why would one use the Innodb Plugin instead of the builtin Innodb in Mysql 5.1? (GPL) Posted: 26 Sep 2013 04:44 PM PDT InnoDB works in MySQL 5.1 whether or not the InnoDB plugin, ha_innodb_plugin.so, is loaded at startup (configuration dependent, of course). What are the advantages and disadvantages of using the built-in versus the modular InnoDB? Thanks very much for any explanation :)
MongoDB cloneCollection between CentOS servers times out at 2 hours Posted: 26 Sep 2013 12:46 PM PDT I'm trying to clone a MongoDB collection from CentOS server A to CentOS server B. Both are running 2.4.6. The copy works great until exactly 2 hours, at which point I get a 9001 socket exception: "errno:10054 An existing connection was forcibly closed by the remote host." I have tried to set tcp_keepalive_time to 3600, down from 7200 (2 hours), on both servers, with the hope that my keepalive probes would tell routers to keep the connection alive past the 1-hour mark. No dice. Even with a shorter keepalive time (and a 70-second interval for keepalive probes), my connection dies at 2 hours. Does anyone know how to set up my servers and/or my MongoDB configuration to keep the TCP connection active past 2 hours? Thanks!
Convert SQL Server query to MS Access syntax Posted: 26 Sep 2013 12:40 PM PDT I have this query in SQL Server: How can I convert it to Access syntax?
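The original query didn't survive this digest, so only a generic sketch of the translations that most often bite can be given; the identifiers below are invented for illustration.

```sql
-- SQL Server flavour:
SELECT TOP 10 ISNULL(t.col, 0) AS col, GETDATE() AS now_ts
FROM t INNER JOIN u ON t.id = u.id;

-- Access (Jet) equivalent: Nz() replaces ISNULL() (it works when the
-- query runs inside Access), Now() replaces GETDATE(), and chains of
-- more than one join must be parenthesized:
SELECT TOP 10 Nz(t.col, 0) AS col, Now() AS now_ts
FROM t INNER JOIN u ON t.id = u.id;
```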
Oracle query is slower the second time Posted: 26 Sep 2013 01:20 PM PDT I have a general question about something I'm seeing in performance tuning in Oracle. I have a query that I'm testing. It's too big and complex for me to provide a simple example, sorry! (I tried paring it down, but lost the symptom.) The first time I run the query, it's 2 seconds. Subsequent times it's 3, 4, or even 5 seconds. This happens consistently - if I change the spacing, I get 2 seconds again, then it goes back up. This is the opposite of what I would expect: after the first time, the database doesn't need to parse the query, and blocks should have been read into the buffer cache. The only thing I can think of is that it is writing temp data somewhere, in such a way that it is faster to write and read it than it is to just read it. But that of course makes no sense. Does anyone know why this is happening? Or any red flags in the execution plan that would cause something like this? FYI, this is a SELECT, with multiple joins, aggregation, and views. No internal PL/SQL. Thank you
Should I be concerned about excessive Com_change_db commands? Posted: 26 Sep 2013 04:38 PM PDT I installed MONyog recently to check for problems on a MySQL server that I admin. Most of the information was useful and helpful, but there's one warning I am not sure is worth fixing. It reports that I typically connect such that I specify which database to use when creating a connection. I can "fix" it by not specifying a database, and instead qualifying the database within the query every time (e.g. db.table.col instead of table.col). Is it worth going through the entire code base of the application to make this change? Is MONyog concerned with something that doesn't matter?
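To see the counter MONyog is reading, and how fast it grows, you can sample it yourself:

```sql
SHOW GLOBAL STATUS LIKE 'Com_change_db';
```

The counter tracks how often the default database is changed (e.g. a USE statement or mysql_select_db() call); the per-call cost is tiny, so a high count alone is rarely worth a code-base rewrite.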
db2ckbkp - How to determine what tablespaces are temp-tablespace Posted: 26 Sep 2013 11:55 AM PDT I am writing a script to restore databases from backup. The script should need as little input as possible to fulfill this task (in an efficient way). The restore command takes the parallelism parameter. The idea is to set this parameter to the number of tablespaces that are not temporary tablespaces. I found the db2ckbkp command, which not only verifies the backup file but also outputs lots of (useful) information. Currently I plan to run it with -t to get the tablespace information. I just have trouble interpreting the printed information. Following is the output printed for one of the tablespaces. To determine whether it is a temp space or not, the When I use the Can I use the type from this table? It seems to be 1 for temporary tablespaces and 2 for all other tablespaces. I tried to find documentation on the internet that explained the output of db2ckbkp but wasn't successful. Any help in this matter is appreciated.
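For context on the parameter being tuned, the restore invocation would look something like this (alias, path, and timestamp are placeholders):

```sql
-- DB2 CLP: restore with 4 parallel buffer manipulators
RESTORE DATABASE mydb
    FROM /backups
    TAKEN AT 20130926120000
    PARALLELISM 4;
```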
Find rows where any column matches a search pattern Posted: 26 Sep 2013 02:49 PM PDT I have a table. I'm looking for rows having characters like I cannot restrict the rows using a WHERE clause, because I really do not have any idea in which column the
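The table definition and search pattern were lost from this post, so the sketch below just shows the usual shape with placeholder names: one LIKE per column, OR'd together.

```sql
SELECT *
FROM my_table
WHERE col1 LIKE '%pattern%'
   OR col2 LIKE '%pattern%'
   OR col3 LIKE '%pattern%';
```

For a wide table, the OR list can be generated from information_schema.columns instead of typed by hand.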
SQL Server 2005 get execution plan of an overnight stored procedure Posted: 26 Sep 2013 03:49 PM PDT There is an overnight job that runs a long time. I have used Profiler to get the trace and identified a statement (in a stored procedure) that takes most of the execution time. Now I need to see the execution plan of the statement so I can see what I can do. But I can't rerun the query, since it uses a temp table created in the proc. And the logic of the proc is rather complex and very hard to reproduce (involving getting data from the file system and exchanging data with an Oracle database via a linked server). I am not saying reproducing is not possible, but it would be rather time-consuming. So is there a way I can see the execution plan of the proc, or the statement in particular?
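One option that avoids rerunning anything: pull the plan from the plan cache while it is still there. 'MyNightlyProc' is a placeholder for the actual procedure name; these DMVs exist in SQL Server 2005.

```sql
SELECT st.text,
       qs.total_elapsed_time,
       qp.query_plan           -- XML plan, clickable in Management Studio
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle)    AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE st.text LIKE '%MyNightlyProc%';
```

If the plan has already been evicted from cache, capturing a Showplan XML event in Profiler during the next overnight run is the fallback.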
Attribute and Value tables on SQL Posted: 26 Sep 2013 07:21 PM PDT I'm not a SQL expert and I'd like to ask for the best practices on the following schema. On my current project, we're using Attribute and AttributeValue tables quite often. I'm thinking this might not be a good practice, since it involves joins when we need that data. Example: given the Person table, there are 2 more tables that basically contain "extra" data about a person; those are PersonAttribute and PersonAttributeValue. I don't have a strong argument to prevent this practice on my team, but I'd like to hear what SQL experts have to say. Thanks
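To make the trade-off concrete, this is roughly the join every read pays under that design; the column names are assumed, since the actual schema wasn't included.

```sql
SELECT p.PersonId,
       pa.AttributeName,
       pav.Value
FROM Person AS p
JOIN PersonAttributeValue AS pav ON pav.PersonId   = p.PersonId
JOIN PersonAttribute      AS pa  ON pa.AttributeId = pav.AttributeId;
```

Besides the joins, the usual objections to entity-attribute-value schemas are that types and constraints can't be enforced per attribute, and that reporting queries need a pivot.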
How to speed up query on table with millions of rows Posted: 26 Sep 2013 12:02 PM PDT The Issue: I'm working on a big table that contains about 37 million rows. The data includes measurements of many devices made at certain times, e.g. '2013-09-24 10:45:50'. Each day, all of those devices send many measurements at different intervals and times. I want to make a query which selects the most recent measurement of each day (by 'recent' I mean the latest of all measurements made that day) for 2 months, e.g. from 2013-01-01 to 2013-02-01. The problem is that this query takes a very long time to run, despite all of the indexes I've made on different columns. I've also created an auxiliary table that contains max(MeterDate) and the MeasurementsId when the measurement was taken. I've noticed that an index can't be made on MeterDate because it contains date and time, which is not useful for making an index on it. So I converted MeterDate -> CONVERT(date, MeterDate). I thought that after joining the auxiliary table with [dbo].[Measurements] the query would be faster, but the query still takes more than 12s, which is too long for me. The structure of the table: Every row of the Measurements table includes a measurement value at a specific MeterDate, e.g. "2008-04-04 13:28:44.473". Direct select structure: Does anyone know how to rebuild the table, or which new indexes to add, to speed up the query a bit? Thanks in advance for any info. Edit: The table that I used was created by this query. Then I wanted to join the newly created table [dbo].[MaxDatesMeasurements] with the old [dbo].[Measurements] and select the direct rows
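Since the actual queries were stripped from the post, here is only a sketch of a single-pass alternative (SQL Server 2008+; DeviceId is a placeholder for however devices are identified):

```sql
WITH ranked AS (
    SELECT m.*,
           ROW_NUMBER() OVER (
               PARTITION BY m.DeviceId, CONVERT(date, m.MeterDate)
               ORDER BY m.MeterDate DESC
           ) AS rn
    FROM dbo.Measurements AS m
    WHERE m.MeterDate >= '20130101'
      AND m.MeterDate <  '20130201'
)
SELECT *
FROM ranked
WHERE rn = 1;   -- the last measurement of each device's day
```

Also, datetime columns index fine: the range predicate above is exactly what an index on (DeviceId, MeterDate) can seek on; it's wrapping the column in CONVERT() inside the WHERE clause that defeats an index, not the presence of a time part.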
How can I integrate a new system to a design scheme I previously created? Posted: 26 Sep 2013 08:18 PM PDT This is a follow-up to a question I wrote a while ago; for reference, you can look it up here. I've developed a curriculum-based database scheme that looks like this (thanks Joel Brown). Now that a new educational system is introduced (while the old one is still not "deprecated"), I have been looking for a way to integrate both curriculum systems together without creating a whole new different system; as the two systems are fundamentally different, this has been quite a challenge for me. Here's a picture describing the new system . . .
(English level 1, 2, and 3 are classified as courses) See, the new system removes the whole grades system. A jr. high school student can enroll in the exact same course (English level 3, for example) as a sr. high school student. Is there a way to make those two systems work with my current design scheme? Or should I create a whole new scheme for the new system?
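One hedged possibility, sketched with invented names only (the original diagrams didn't survive this digest): make the course the single unit of enrollment and express the old grade-based curriculum as a bundle of courses, so both systems share one enrollment path.

```sql
-- Purely illustrative; types and names are assumptions.
CREATE TABLE Course (
    CourseId   INT PRIMARY KEY,
    CourseName VARCHAR(100) NOT NULL      -- e.g. 'English level 3'
);

CREATE TABLE GradeCourse (                 -- maps the old grade system
    GradeId  INT NOT NULL,
    CourseId INT NOT NULL REFERENCES Course (CourseId),
    PRIMARY KEY (GradeId, CourseId)
);

CREATE TABLE Enrollment (                  -- used by both systems
    StudentId INT NOT NULL,
    CourseId  INT NOT NULL REFERENCES Course (CourseId),
    PRIMARY KEY (StudentId, CourseId)
);
```

Under this shape, the old system enrolls a student in every course its grade bundles, while the new system enrolls directly in individual courses.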
What's a good way to model user authorization to hierarchical data? Posted: 26 Sep 2013 08:23 PM PDT I'm wanting to implement something that's basically an ACL in SQL Server (2008, we'll be moving to 2012). We're working with farm production information. We will have users who need to be authorized to view production records, but only at certain vertical levels in a hierarchy. Our data hierarchy looks like this: The idea is that some users will have access at the System level, and can see records for all Farms, Groups, and Animals within that System. Likewise, some users will have permission starting at the Farm level, and need access only linked to that Farm and all Groups (and animals) within it. Each table contains a primary key column, and a foreign key column linking it to the parent record (along with whatever other attributes each entity requires). What I've implemented in the past is a two-table system for linking users to the appropriate items they're allowed to see. Implemented here, it would look like this: The application inserts a record into Authorizations, with the user to authorize, the record id (System id, Farm id, etc), and the type of record (System, Farm, etc). The FullAuthorizations table is used to denormalize the farm hierarchy for easier/faster filtering of data. A trigger is used on the Authorizations table (and each of the farm, etc, tables) to update FullAuthorizations. I considered using a View here, in a previous project with additional levels of entities, and the performance was quite poor once we began getting several hundred thousand records. The queries would look something like: In the other project where we're doing this, the solution is performant, but feels like a bit of a hack, and I especially don't like that we can't maintain referential integrity on Authorizations with the associated objects. I'd appreciate feedback on some other possible solutions. I've been looking at things like Nested Sets, but not sure something like that fits this particular problem.
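For comparison, the join-time alternative to the denormalized FullAuthorizations table looks roughly like this (names are hypothetical); the OR-join is what tends to kill its performance at scale, which matches the poor view performance described above.

```sql
SELECT a.AnimalId
FROM Animal AS a
JOIN [Group] AS g ON g.GroupId = a.GroupId
JOIN Farm    AS f ON f.FarmId  = g.FarmId
JOIN Authorizations AS auth
  ON  auth.UserId = @UserId
  AND (   (auth.RecordType = 'System' AND auth.RecordId = f.SystemId)
       OR (auth.RecordType = 'Farm'   AND auth.RecordId = f.FarmId)
       OR (auth.RecordType = 'Group'  AND auth.RecordId = g.GroupId));
```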
Oracle shared memory exception ORA-04031 Posted: 26 Sep 2013 02:23 PM PDT I'm trying to establish an Oracle database connection. It throws the following exception: I have tried connecting to the database through JDBC as well as SQL Developer; however, it throws the exception in both cases. How can I fix this?
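ORA-04031 means Oracle could not allocate a contiguous chunk of shared memory, most often in the shared pool. A first-look sketch (requires DBA privileges; 512M is an arbitrary example value):

```sql
-- See what is consuming the shared pool:
SELECT *
FROM v$sgastat
WHERE pool = 'shared pool'
ORDER BY bytes DESC;

-- Raise it; SCOPE=SPFILE takes effect at the next restart:
ALTER SYSTEM SET shared_pool_size = 512M SCOPE = SPFILE;
```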
Memcached plugin on MariaDB? Posted: 26 Sep 2013 04:23 PM PDT I'd like to try the new NoSQL feature in MySQL 5.6, but I am using MariaDB 10.0.2 on Debian 6 and don't fancy going back to MySQL. I'm wondering whether the memcached plugin has been added to MariaDB? And if not, whether one can still use it as an add-on? And if not, can I use MariaDB's existing Cassandra plugin to the same effect?
Failed copy job deletes all users Posted: 26 Sep 2013 09:23 PM PDT Since the progression of this was apparently somewhat difficult to follow the first time around: I attempted a completely boring, been-done-a-thousand-times-before copy of a database using the copy database wizard with the detach/reattach method. The copy failed. The log indicates that it was unable to execute a This resulted in the deletion of all non-system user associations from the source database, leaving me with users dbo, information_schema, sys, and guest. Non-system roles were also deleted. Schemas were unaffected. I have since restored the damaged database from backup. Academically, however, I would like to know the following:
This is entirely repeatable. I've made a handful of copies (manually) for the sake of experimenting with this particular issue, and in each case, the failed copy job obliterates the users and roles from the source database. Removing the views that generated errors allows the copy to complete, and, as one would expect, produces a copy with identical data, users, etc., in addition to leaving the source database unaltered. If it's important, I've tried rebuilding the indexes of the system databases, as well as the damaged database, to no appreciable effect. The error generated:
MySQL Tables Require Daily Repairs - Server, Table or HD? Posted: 26 Sep 2013 05:23 PM PDT I've been experiencing a weird issue with one of my MySQL DBs. Every day, sometimes 2-3 times per day, I'll need to repair the tables. The MySQL DB has 25 tables with 5.6m rows in total. The bigger ones are: It seems table C needs to be repaired pretty frequently, tables A & B not as much. When the table needs to be repaired, I'm not seeing it marked as crashed or in use, but through other tools I can see the data is not what it should be. When I do repair the table, I'll see a message similar to: or I've tried making adjustments in my.cnf, but no difference. The server is a cloud server running both MySQL and Apache, with plenty of space available on all HDs. I'm not sure if this is a problem with the cloud HD, the server, or the tables themselves. The problem didn't start happening until about 2 months ago, and the size of the DB has only changed by 300-400MB since then. Any idea what I should be looking at to verify where the problem might be? Using MySQL v5.1.66 and MyISAM. Thanks in advance. Best, Cent
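Since the repair messages didn't survive the digest, here is the generic check/repair cycle for reference (the table name is a placeholder):

```sql
CHECK TABLE tableC EXTENDED;
REPAIR TABLE tableC;
```

If CHECK TABLE keeps finding corruption in a freshly repaired MyISAM table, the usual suspects are crashes during writes or the underlying storage; MySQL's error log and the host's syslog around the corruption times are the quickest way to tell which.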
Azure SQL Administrator Can't Access Master Database Posted: 26 Sep 2013 12:23 PM PDT I created a SQL server and database. I can log in to the database management page with my administrator account and manage the database I created, but when I try to look at the master database, it says: Failed to create context. Details: The current user does not have VIEW DATABASE STATE permissions on this database. What am I doing wrong? I couldn't find anything else like this on the internet. I log in with the account that the SQL server page on the Azure management portal says is the "Administrator Login".
Constructing an SQL query with time intervals Posted: 26 Sep 2013 03:23 PM PDT I have a simple but large table in an SQL database which has four fields: id (int, PK), time (unix timestamp, not null), value (double, not null) and ID_fk (integer foreign key). Data is recorded every minute, but at a seemingly random second, for subsets of sensors; each sensor has a foreign key value, though. My question is this: I need to write a query that will tell me when, over all two-minute intervals in the dataset, a series of sensors have a value greater than 0.9. I was thinking of trying to create a temporary table and doing it that way, simplifying the time by only storing it at minute resolution? Any advice would be gratefully received. Thanks, James
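A temporary table may not be needed: integer division on the epoch seconds buckets rows into two-minute windows directly. A sketch assuming MySQL, with sensor_readings as a placeholder table name:

```sql
SELECT ID_fk,
       FROM_UNIXTIME(FLOOR(`time` / 120) * 120) AS bucket_start,
       MAX(`value`) AS max_value
FROM sensor_readings
GROUP BY ID_fk, FLOOR(`time` / 120)
HAVING MAX(`value`) > 0.9;
```

FLOOR(time / 120) assigns every row to its two-minute bucket, so HAVING keeps only the sensor/bucket pairs where the threshold was exceeded.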
Mysqldump tables excluding some fields Posted: 26 Sep 2013 07:23 PM PDT Is there a way to mysqldump a table without some fields? Let me explain: How can I do this?
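mysqldump has no per-column filter, so the usual workaround is to dump a copy (or a view) that contains only the wanted columns; all names below are placeholders.

```sql
CREATE TABLE export_copy AS
SELECT col1, col2          -- list only the fields you want dumped
FROM original_table;

-- then, from the shell:
--   mysqldump mydb export_copy > export_copy.sql

DROP TABLE export_copy;
```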
Sybase SQL Anywhere 12 - Get all indexes which are unique -> ambiguous sysindexes error Posted: 26 Sep 2013 01:23 PM PDT We are using a Sybase SQL Anywhere 12 DB. In the DB there are indexes which are unique but shouldn't be unique. Therefore I'm searching for a quick way to list all tables with unique primary keys. I tried The result was an error message (translated from German): "The statement could not be executed. Table name 'sysindexes' is ambiguous. SQLCODE=-852, ODBC 3 state=42S13, line 1, column 1." I found on the internet the query: which looked like what I wanted, but there was still the same result of ambiguous sysindexes. What does an ambiguous table name mean in this context? Will/can this cause any errors in the future? As a workaround I used Sybase Central (which, by the way, always opens on the first monitor, not on the one where it was closed - ugly behaviour), and found that an item 'Indices' showed what I searched for. But I still want to know what a programmatic solution looks like.
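The ambiguity most likely arises because SQL Anywhere exposes more than one view named sysindexes under different owners, so an unqualified reference can match either; owner-qualifying the catalog avoids it. Below is a sketch against SYS.SYSIDX and SYS.SYSTAB; the interpretation of the "unique" column is an assumption to verify against the documentation for your version.

```sql
SELECT t.table_name,
       i.index_name,
       i."unique"
FROM SYS.SYSIDX AS i
JOIN SYS.SYSTAB AS t ON t.table_id = i.table_id
WHERE i."unique" <> 0;   -- assumed: non-zero flags unique indexes/constraints
```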
Mongo connection failing with 'Transport endpoint is not connected' Posted: 26 Sep 2013 06:23 PM PDT I have a 2-server installation: a web server with Apache and a DB server with MongoDB. I am load testing it, and at ~300 RPS I am getting this error:
The only thing I am noticing is that right before the failure, I see a lot of connections on Mongo: almost 500 connections here... but never more than that! Mongo is 2.0.3. PHP is 5.3.x (latest of the Debian install...). Help!