[how to] How to go about modeling the main object of a relationship?
- How to go about modeling the main object of a relationship?
- Use Database to Store Data Extracted From Internet
- Mysql performance issue when selecting text column
- Efficiently query MAX over multiple ranges
- What is the best way to manage user access to MySQL via PHP?
- ORA-21780: Maximum number of object durations exceeded
- Enabling JMX For Hadoop HDFS & Also MapReduce
- Dynamic file name for file import
- Does SQL Server support restoring to a point in time down to the millisecond?
- Improve performance with the WHERE NOT IN sub-select clause
- Getting the employee history of Sales department from HR database in Oracle
- MySQL: mysqld_safe options
- How to force a specific table to not log (for all DML)?
- How to setup SSL in MSSQL Server 2012 with own certificate?
- What are the circumstances where autovacuum can be disabled?
- SQL Server 2008 - How to check FK integrity?
- How to keep the structure of the Tablix when there is no data to show
- What's a good way to model user authorization to hierarchical data?
- Oracle shared memory exception ORA-04031
- How can I extract the table schema from just the .frm file?
- Memcached plugin on MariaDB?
- How to embed a sub-procedure call in a SELECT statement in an Oracle 11g PL/SQL stored procedure
- Repairing Broken Binary Replication in PostgreSQL 9.0
- Failed copy job deletes all users
- SQL Server 2005 Replication
- Multiple parents and multiple children in product categories
- Mysqldump tables excluding some fields
- Sybase SQL Anywhere 12 - Get all indexes which are unique -> ambiguous sysindexes error
- Mongo connection failing with 'Transport endpoint is not connected'
How to go about modeling the main object of a relationship? Posted: 27 Aug 2013 08:36 PM PDT I'm using MySQL and MyISAM tables. I have a bunch of objects that I want to group together (an object belongs to one and only one group, and a group can have many objects), and I absolutely need to know the main object of the group (a group has exactly one required main object). So far, I've built a model that I believe is the right one, but I don't know how to go about modeling the main object in the group. What is the right way to model this?
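One common approach (a minimal sketch, not from the original post; table and column names are hypothetical) is to put the group membership on the object and a nullable main_object_id on the group:

    CREATE TABLE groups (
        group_id       INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        name           VARCHAR(100) NOT NULL,
        main_object_id INT UNSIGNED NULL   -- the group's one main object
    ) ENGINE=MyISAM;

    CREATE TABLE objects (
        object_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        group_id  INT UNSIGNED NOT NULL,   -- each object belongs to exactly one group
        name      VARCHAR(100) NOT NULL,
        KEY idx_group (group_id)
    ) ENGINE=MyISAM;

Note that MyISAM does not enforce foreign keys, so the invariants (the main object exists and belongs to its own group) would have to be checked in application code, or by switching to InnoDB.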
Use Database to Store Data Extracted From Internet Posted: 27 Aug 2013 07:50 PM PDT I'm mining 500 million users, and their "followers", from a social network using its API. The extraction itself is not a problem, since I can do it with my scripts, but holding 500 million users and their followers in memory is very costly. My script keeps two lists: one with the users whose followers I have already fetched, and one with the users still to be looked at (I get each user, put their followers in the queue, write to file, and then move to the next one). Those two lists are too long to handle in memory, so I thought of a database. Finally, my question: is it better to use a relational database, or a NoSQL graph database like Neo4j? The only information I'm collecting now is the user ID and the IDs of the followers, which I later want to analyse (for graph theory research), though I might add more information later. Thank you.
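If a relational store is chosen, the follower data is just an edge list; a minimal sketch (hypothetical names, not from the post) might be:

    CREATE TABLE follows (
        user_id     BIGINT NOT NULL,   -- the followed user
        follower_id BIGINT NOT NULL,   -- the follower
        PRIMARY KEY (user_id, follower_id)
    );

    CREATE TABLE crawl_queue (         -- replaces the two in-memory lists
        user_id BIGINT NOT NULL PRIMARY KEY,
        done    BOOLEAN NOT NULL DEFAULT FALSE
    );

A graph database stores the same edges natively, which mainly pays off if the later analysis involves traversals (paths, neighborhoods) rather than bulk aggregate queries.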
Mysql performance issue when selecting text column Posted: 27 Aug 2013 07:35 PM PDT I have a performance problem when selecting rows containing text fields: 5 rows in set (0.07 sec) 5 rows in set (0.15 sec) EXPLAIN is the same for both queries. Indexes (I know there's probably no need for the recipient_id index when there's recipient_id_2, but that's what I have there): How do I deal with that? I can even do something like the following, and it runs faster: 5 rows in set (0.06 sec) My guess is it has something to do with filesort - larger rows sort more slowly, or something like that. I copied the database from production to another server to experiment with, but strangely enough it runs fast in both cases there.
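A common mitigation when sorting rows that carry large text columns (a sketch, not the poster's actual query; messages, id, recipient_id, and created_at are hypothetical names) is a deferred join - sort and limit on a narrow index first, then fetch the wide rows:

    SELECT m.*
    FROM messages m
    JOIN (
        SELECT id
        FROM messages
        WHERE recipient_id = 123
        ORDER BY created_at DESC
        LIMIT 5
    ) t ON t.id = m.id;
    -- the filesort now handles only ids, never the text column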
Efficiently query MAX over multiple ranges Posted: 27 Aug 2013 07:53 PM PDT When performing a MIN() or MAX() over a single range covered by an appropriately sorted index, SQL Server does a TOP() and so returns the value after fetching just one row. When the search criteria include more than one range, SQL Server instead grabs all the indexed values from both ranges and does a stream aggregate, which is far slower than performing a TOP() for each sub-range. For example, assume a large number of orders per customer in a table like: Running this query: will result in a query that takes several times as long as when only one customer ID is specified. What is the most efficient way to perform a query like the above? Relatedly, if separate results were needed (i.e. GROUP BY customer_id), what would the best method be? SQL Fiddle: http://sqlfiddle.com/#!3/ef0c6/1
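One rewrite that often restores the per-range TOP behavior (a sketch against a hypothetical orders(customer_id, order_id) table, not from the original post) drives one index seek per customer with CROSS APPLY:

    SELECT c.customer_id, m.max_order_id
    FROM (VALUES (123), (456)) AS c (customer_id)
    CROSS APPLY (
        SELECT TOP (1) o.order_id AS max_order_id
        FROM orders o
        WHERE o.customer_id = c.customer_id
        ORDER BY o.order_id DESC   -- one seek per customer
    ) AS m;

Since this already returns one row per customer, it also covers the GROUP BY customer_id variant.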
What is the best way to manage user access to MySQL via PHP? Posted: 27 Aug 2013 02:29 PM PDT I am working on a web application that requires auditing of DB queries in MySQL. In every application that I have developed or worked on, it is standard practice to store the DB credentials in a protected configuration file and access the DB through that one user. Auditing, then, must be managed by the code (I'm using PHP). My question: would it be better to create user accounts for each user in MySQL and use these credentials to create the DB connection? Doing so would allow MySQL to log the queries by each user, either through the general query log or McAfee's mysql-audit plugin. I have read some older posts that claim allowing multiple user accounts creates security vulnerabilities, but I have not read a clear explanation of how or why. It may be worth noting: admins create user accounts and set user permissions; users cannot self-register. Thanks for any help you can provide.
ORA-21780: Maximum number of object durations exceeded Posted: 27 Aug 2013 02:00 PM PDT I am getting this error when records are streamed over to another database. Oracle DB version: 11.2.0.3.0. Transaction flow: DML on Schema1@DB1 is streamed to Schema2@DB2; a trigger on that table inserts into Schema3@DB2; a trigger on that table in turn inserts into a table in Schema4@DB2. The ORA-21780 error happens at the last stage. Please advise. Thanks, Sreedhar.
Enabling JMX For Hadoop HDFS & Also MapReduce Posted: 27 Aug 2013 12:03 PM PDT I'm having a hard time figuring out how to enable JMX to expose metrics for HDFS and MapReduce jobs in Hadoop (CDH4). I've seen several links and read the 'monitoring' chapters of 'The Definitive Guide' and 'Hadoop Operations'; they give a great background on Metrics1, Metrics2, and the JMX JSON output, and say the latter is the preferred method, but I can't find out how to enable it for HDFS and MapReduce independently. This blog has a good intro, but it doesn't cover Task and Data Nodes. I'm not sure how to enable it for everything. Does anyone have any steps? Thanks!
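For reference, remote JMX is typically enabled per daemon through the *_OPTS variables in hadoop-env.sh, with a distinct port for each daemon that shares a host (a sketch with example ports, not CDH4-specific documentation; verify the variable names against your distribution, and note authenticate=false is only suitable for testing):

    # hadoop-env.sh
    export HADOOP_NAMENODE_OPTS="-Dcom.sun.management.jmxremote \
      -Dcom.sun.management.jmxremote.port=8004 \
      -Dcom.sun.management.jmxremote.authenticate=false \
      -Dcom.sun.management.jmxremote.ssl=false $HADOOP_NAMENODE_OPTS"
    export HADOOP_DATANODE_OPTS="-Dcom.sun.management.jmxremote \
      -Dcom.sun.management.jmxremote.port=8005 \
      -Dcom.sun.management.jmxremote.authenticate=false \
      -Dcom.sun.management.jmxremote.ssl=false $HADOOP_DATANODE_OPTS"

The same pattern applies to the JobTracker and TaskTracker variables for MRv1 daemons.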
Dynamic file name for file import Posted: 27 Aug 2013 12:12 PM PDT I'm trying to figure out a way to pass a filename into a stored procedure that imports a file. However, after building the SQL dynamically, the object can't be found. The result:
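BULK INSERT does not accept a variable for the file path, so the usual pattern (a sketch; dbo.Staging and the WITH options are hypothetical) wraps the statement in dynamic SQL:

    CREATE PROCEDURE dbo.ImportFile
        @FileName NVARCHAR(260)
    AS
    BEGIN
        DECLARE @sql NVARCHAR(MAX) =
            N'BULK INSERT dbo.Staging FROM ''' +
            REPLACE(@FileName, N'''', N'''''') +   -- escape quotes in the path
            N''' WITH (FIELDTERMINATOR = '','', ROWTERMINATOR = ''\n'');';
        EXEC sys.sp_executesql @sql;
    END;

Printing @sql before executing it usually reveals why the object can't be found (for example, the file name landing where the table name should be).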
Does SQL Server support restoring to a point in time down to the millisecond? Posted: 27 Aug 2013 11:13 AM PDT Using the Management Studio GUI, I am only able to restore a database to a point in time down to the second. Is this just a limitation of the GUI, or if I use a script will it restore to the millisecond I specify? I'm trying to do it with a script and it 'seems' like it's ignoring the milliseconds. Unfortunately, the date value I'm comparing to is not necessarily the exact time the record was written, so I just want to rule out the possibility that the problem is a limitation of SQL Server.
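For reference, a scripted point-in-time restore uses STOPAT, whose datetime literal can carry milliseconds (a sketch with hypothetical database and file names):

    RESTORE DATABASE MyDb FROM DISK = N'C:\backup\MyDb_full.bak'
        WITH NORECOVERY;
    RESTORE LOG MyDb FROM DISK = N'C:\backup\MyDb_log.trn'
        WITH STOPAT = '2013-08-27T14:30:15.123',  -- milliseconds in the literal
             RECOVERY;

Note that the classic datetime type only resolves to roughly 3 ms ticks (.000/.003/.007), which can look like the milliseconds are being ignored.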
Improve performance with the WHERE NOT IN sub-select clause Posted: 27 Aug 2013 01:59 PM PDT In the following query, I have to count transactions for each customer. [EDIT] However, I have to exclude entirely from the result set any customer that has a transaction older than one year. Shouldn't the query optimizer be smart enough to evaluate existence only once per customer? There are no naturals in my query plan. This query performs as if the database is running the existence clause for every transaction instead of once per customer. Performance is the same if I remove the GROUP BY in the sub-query. Is there a better way to do this so that I can get better performance out of the database? Hopefully a simple SELECT will work, avoiding a CTE if possible (that would introduce other challenges). Due to other GROUP BY criteria (not shown here) I'm not able to simply check MIN(DATED); I really need to perform another query.
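One formulation that often lets the optimizer evaluate the check once per customer (a sketch; DATED comes from the post, while the transactions table and customer_id column are hypothetical, and DATEADD/GETDATE assume SQL Server) is a correlated NOT EXISTS:

    SELECT t.customer_id, COUNT(*) AS transaction_count
    FROM transactions t
    WHERE NOT EXISTS (
        SELECT 1
        FROM transactions old
        WHERE old.customer_id = t.customer_id
          AND old.DATED < DATEADD(YEAR, -1, GETDATE())  -- any txn older than a year
    )
    GROUP BY t.customer_id;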
Getting the employee history of Sales department from HR database in Oracle Posted: 27 Aug 2013 11:22 AM PDT I am currently using the 'hr' database with Oracle 10g. You can view the schema diagram here: http://docs.oracle.com/cd/B13789_01/server.101/b10771/diagrams001.htm#sthref63 What I would like is to retrieve the history of all employees who have worked in the 'Sales' department. I have selected only first name, last name, and job title for the working example, but I'm getting the same job title where I should be getting two different ones. This is the query I have so far: Any help would be appreciated. JazakAllah khayr.
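In the HR schema, past assignments live in JOB_HISTORY while current ones are on EMPLOYEES, and a common cause of seeing the same title twice is joining JOBS on the employee's current job_id instead of the historical one. A sketch (not the poster's query) that keeps the two distinct:

    SELECT e.first_name, e.last_name, j.job_title
    FROM employees e
    JOIN job_history jh ON jh.employee_id = e.employee_id
    JOIN jobs j         ON j.job_id = jh.job_id          -- historical job, not e.job_id
    JOIN departments d  ON d.department_id = jh.department_id
    WHERE d.department_name = 'Sales'
    UNION ALL
    SELECT e.first_name, e.last_name, j.job_title        -- current Sales employees
    FROM employees e
    JOIN jobs j        ON j.job_id = e.job_id
    JOIN departments d ON d.department_id = e.department_id
    WHERE d.department_name = 'Sales';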
MySQL: mysqld_safe options Posted: 27 Aug 2013 09:06 PM PDT Is there a way to specify these options in the configuration file? I've tried to add them into - Please note that I'm using Percona Server.
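For reference, mysqld_safe reads its own group in my.cnf, so options for the wrapper go under [mysqld_safe] rather than [mysqld] (a sketch with example options; the paths and values are hypothetical). Percona Server uses the same startup script, so the same group applies:

    [mysqld_safe]
    log-error        = /var/log/mysql/error.log
    open-files-limit = 8192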
How to force a specific table to not log (for all DML)? Posted: 27 Aug 2013 07:20 PM PDT We are using Oracle 9i. I'm still learning Oracle administration (I'm a DBA, but for another vendor's product). We implemented in our databases a trigger over LOGON and LOGOFF to save some information about our users' sessions: we execute an INSERT at LOGON and an UPDATE at LOGOFF on our table. This table was created as NOLOGGING. Our problem is that the amount of archive log generated per day has grown a lot, and after checking with LogMiner I found that the updates on this table are the reason. I researched a little and found that NOLOGGING works only in very specific situations, and all UPDATEs are still logged. I don't need to keep this table safe or back it up: we extract the information every day, and if I lose some days I can live with it. But I can't keep this overhead. Is there no way to force a table to not log all DML in Oracle? Regards
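For reference, the documented behavior is that NOLOGGING only suppresses redo for direct-path operations (direct-path INSERT and certain DDL); conventional INSERTs and all UPDATEs always generate redo. The attribute itself can be checked with a standard catalog query (the table name is hypothetical):

    SELECT table_name, logging
    FROM   user_tables
    WHERE  table_name = 'SESSION_LOG';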
How to setup SSL in MSSQL Server 2012 with own certificate? Posted: 27 Aug 2013 01:26 PM PDT Environment: Windows 7 Professional x64 in a domain, running MS SQL Server 2012. I have local admin rights, so messing up the system is easy. My SQL Server instance names are 'SQLEXPRESS' and 'MSSQLSERVER'. The FQDN should be, for example, 'my-pc.mydomain.local'. Problem: I don't know the right way to set up SSL on this MS SQL Server. I only want to set up SSL with a certificate I created myself. Can anybody help me out here? Most of the documentation online only describes going through a CA and assumes different prerequisite knowledge.
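One way to produce a suitable self-signed certificate is makecert from the Windows SDK; SQL Server requires the CN to match the server's FQDN and the Server Authentication EKU (a sketch using the FQDN from the post; verify the flags against your SDK version):

    makecert -r -pe -n "CN=my-pc.mydomain.local" ^
      -b 01/01/2013 -e 01/01/2020 ^
      -eku 1.3.6.1.5.5.7.3.1 ^
      -ss my -sr localMachine -sky exchange

The certificate is then selected on the Certificate tab under Protocols for the instance in SQL Server Configuration Manager, with Force Encryption enabled; the SQL Server service account needs read access to the certificate's private key.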
What are the circumstances where autovacuum can be disabled? Posted: 27 Aug 2013 01:03 PM PDT I have a table where I perform only inserts, never deletes or updates. I notice that autovacuum sometimes runs on this table even so. It takes a long time and impacts the performance of my DB. Is it safe for me to just disable autovacuum for this table? Since I am not performing deletes or updates, I don't understand why autovacuum is even needed, and why Postgres decides to run it.
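For context: autovacuum also runs ANALYZE to refresh statistics and, eventually, an anti-wraparound VACUUM to freeze old transaction IDs, and both apply to insert-only tables. Autovacuum can still be disabled per table with a storage parameter (standard syntax; the table name is hypothetical):

    ALTER TABLE events SET (autovacuum_enabled = false);
    -- anti-wraparound vacuum will still run when the table's XID age forces it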
SQL Server 2008 - How to check FK integrity? Posted: 27 Aug 2013 03:28 PM PDT I'd like to pull a partial dataset from production for test environments - e.g. copy the production database with only the first 100 rows of each table, or 5% of the data, whichever comes first. Simple enough. For each table, just: SELECT TOP 100 * INTO DESTINATION FROM SOURCE The problem is maintaining data integrity for FKs. Is there a native SQL Server SP/function to verify the integrity of each row's FKs after the data has been copied over? If not, once the production schema is copied to sandbox environments, I was planning to implement the following logic: PSEUDO: I don't want to reinvent the wheel or make it harder than it needs to be. If you see any holes in my logic, please let me know. Thanks in advance.
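There is a native check worth knowing about: DBCC CHECKCONSTRAINTS reports rows that violate constraints, which covers FKs after a bulk copy (standard syntax; the database name is hypothetical):

    USE SandboxDb;
    DBCC CHECKCONSTRAINTS WITH ALL_CONSTRAINTS;
    -- returns the table, the constraint, and a WHERE clause locating each offending row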
How to keep the structure of the Tablix when there is no data to show Posted: 27 Aug 2013 02:19 PM PDT We have an SSRS report which shows the result of a dataset in a tablix. If the query in the dataset returns no data, the header of the tablix gets displayed but not the tablix cells below the header. I know there is a property for this; I even set the rule for each cell value of the tablix, but it didn't help. Do you have any idea? Thanks for your help in advance.
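A common approach (an assumption about the intended fix, not from the post; "DataSet1" and MyField are hypothetical names) is to wrap each cell value in an expression that yields a blank instead of nothing when the dataset is empty:

    =IIf(CountRows("DataSet1") = 0, "", Fields!MyField.Value)

For completeness, the tablix-level NoRowsMessage property replaces the whole body with a message, which is the opposite of keeping the structure.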
What's a good way to model user authorization to hierarchical data? Posted: 27 Aug 2013 08:16 PM PDT I'm wanting to implement something that's basically an ACL in SQL Server (2008; we'll be moving to 2012). We're working with farm production information. We will have users who need to be authorized to view production records, but only at certain vertical levels in a hierarchy. Our data hierarchy looks like this: The idea is that some users will have access at the System level, and can see records for all Farms, Groups, and Animals within that System. Likewise, some users will have permission starting at the Farm level, and need access only linked to that Farm and all Groups (and Animals) within it. Each table contains a primary key column and a foreign key column linking it to the parent record (along with whatever other attributes each entity requires). What I've implemented in the past is a two-table system for linking users to the appropriate items they're allowed to see. Implemented here, it would look like this: The application inserts a record into Authorizations, with the user to authorize, the record id (System id, Farm id, etc.), and the type of record (System, Farm, etc.). The FullAuthorizations table is used to denormalize the farm hierarchy for easier/faster filtering of data. A trigger is used on the Authorizations table (and each of the Farm, etc., tables) to update FullAuthorizations. I considered using a View here, in a previous project with additional levels of entities, and the performance was quite poor once we began getting several hundred thousand records. The queries would look something like: In the other project where we're doing this, the solution is performant, but feels like a bit of a hack, and I especially don't like that we can't maintain referential integrity on Authorizations with the associated objects. I'd appreciate feedback on some other possible solutions. I've been looking at things like Nested Sets, but not sure something like that fits this particular problem.
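The described two-table scheme might be sketched like this (hypothetical column choices; the post's actual definitions were not included):

    CREATE TABLE Authorizations (
        UserId     INT NOT NULL,
        RecordId   INT NOT NULL,      -- SystemId, FarmId, or GroupId
        RecordType TINYINT NOT NULL   -- 1 = System, 2 = Farm, 3 = Group
        -- no FK possible: RecordId points at a different table per RecordType
    );

    CREATE TABLE FullAuthorizations (  -- denormalized by trigger
        UserId  INT NOT NULL,
        FarmId  INT NOT NULL,
        GroupId INT NOT NULL,
        PRIMARY KEY (UserId, FarmId, GroupId)
    );

    -- typical filter
    SELECT g.*
    FROM Groups g
    WHERE EXISTS (SELECT 1 FROM FullAuthorizations fa
                  WHERE fa.UserId = @UserId AND fa.GroupId = g.GroupId);

The comment on Authorizations mirrors the referential-integrity complaint: a polymorphic RecordId cannot carry a foreign key.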
Oracle shared memory exception ORA-04031 Posted: 27 Aug 2013 02:16 PM PDT I'm trying to establish an Oracle database connection, and it throws the following exception: I have tried connecting to the database through JDBC as well as SQL Developer, and it throws the exception in both cases. How can I fix this?
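ORA-04031 means a request for shared pool (or other SGA) memory could not be satisfied. A standard first diagnostic (requires SELECT on the V$ views) is to see how the shared pool is being consumed:

    SELECT pool, name, ROUND(bytes / 1024 / 1024) AS mb
    FROM   v$sgastat
    WHERE  pool = 'shared pool'
    ORDER  BY bytes DESC;

Typical remedies are raising SHARED_POOL_SIZE (or SGA_TARGET) and reducing hard parsing by using bind variables.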
How can I extract the table schema from just the .frm file? Posted: 27 Aug 2013 08:08 PM PDT I have extracted the data directory of MySQL from a backup and need to get the schema of an old table; however, it's a backup from a different machine. I have read a fair number of tutorials today on how to do this, but each time it seems to fail, or I end up having to reinstall MySQL because it hangs or crashes. (Create a different database, create a table with the same name, replace the file, stop/start the engine, recover from the .frm - tried in varying orders and combinations.) Is there any external tool that can extract the schema from the .frm file? I can see the column names if I open up the file. I had a look but I can't seem to find anything that will enable me to do this. Thanks in advance.
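One external tool that may help is mysqlfrm from MySQL Utilities; its diagnostic mode parses the .frm directly without a running server (the path is hypothetical):

    mysqlfrm --diagnostic /backup/mysql/data/mydb/mytable.frm
    # prints a best-effort CREATE TABLE statement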
Memcached plugin on MariaDB? Posted: 27 Aug 2013 03:16 PM PDT I'd like to try the new NoSQL feature (the memcached plugin) in MySQL 5.6, but I am using MariaDB 10.0.2 on Debian 6 and don't fancy switching back to MySQL. I'm wondering whether the memcached plugin has been added to MariaDB, and if not, whether one can still use it as an add-on. And if not, can I use MariaDB's existing Cassandra plugin to the same effect?
How to embed a sub-procedure call in a SELECT statement in an Oracle 11g PL/SQL stored procedure Posted: 27 Aug 2013 04:16 PM PDT I need to figure out a way to embed an Oracle PL/SQL sub-procedure call in a SELECT statement, within another procedure in the same package. I am using SQL Developer 3.0 with an Oracle 11g database. I have already developed a stored procedure acctg_detail_proc() that generates a detailed list of accounting transactions within a specified period. What I am trying to do is create a summary report procedure acctg_summary_proc() using the detailed data returned by acctg_detail_proc(). Since I am new to PL/SQL, all I have learned so far is how to retrieve the detailed data via a ref cursor, then LOOP through it, FETCHing each individual detail row. I want to figure out how acctg_summary_proc() can substitute the call to acctg_detail_proc() for a table name in a SELECT statement with a GROUP BY clause. Here is the source code for an anonymous block where I tried to test it: When I try to execute this code, I get the following error: PL/SQL: ORA-00942: table or view does not exist I realize I could use a nested SELECT statement instead of a table name, but I want to avoid duplicating source code. Is there any way to 'alias' a ref cursor so I can reference its data in a SELECT statement? Here is some further background info: the called sub-procedure has ~600 lines of code and selects 40 columns of data from a de-normalized VIEW. The corporate DBAs will not let me create any VIEWs that contain WHERE clauses, so that is not an option for me. Thanks in advance, Ken L.
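A ref cursor cannot be queried by name in SQL, but the usual workaround (a sketch, not the poster's code; names and columns are simplified) is to expose the detail logic as a pipelined table function, which a SELECT can then treat like a table:

    CREATE OR REPLACE PACKAGE acctg_pkg AS
        TYPE detail_rec IS RECORD (acct_id NUMBER, amount NUMBER);
        TYPE detail_tab IS TABLE OF detail_rec;
        FUNCTION acctg_detail_fn(p_from DATE, p_to DATE)
            RETURN detail_tab PIPELINED;
    END acctg_pkg;
    /
    -- (package body with the PIPE ROW loop omitted)

    -- the summary procedure can then simply:
    SELECT acct_id, SUM(amount) AS total
    FROM   TABLE(acctg_pkg.acctg_detail_fn(:p_from, :p_to))
    GROUP  BY acct_id;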
Repairing Broken Binary Replication in PostgreSQL 9.0 Posted: 27 Aug 2013 01:16 PM PDT I have a binary replication setup that was stopped for so long that the WALs were removed, and as a result the standby ended up too far behind to catch up. I'm trying to reestablish replication, and the best way I've found so far is following the steps on the PostgreSQL wiki:
But the database is too big (300 GB), my connection is not great (about 800 kB/s), and the files in base keep changing. So I was wondering if there's a more optimal way to do this.
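One option suited to a slow link (a sketch, not from the wiki page; paths and host are hypothetical) is an rsync-based base backup: files changing between pg_start_backup() and pg_stop_backup() is fine, and rsync transfers only deltas where the standby already has an older copy:

    psql -c "SELECT pg_start_backup('resync', true);"
    rsync -av --delete --exclude=pg_xlog \
        /var/lib/postgresql/9.0/main/ standby:/var/lib/postgresql/9.0/main/
    psql -c "SELECT pg_stop_backup();"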
Failed copy job deletes all users Posted: 27 Aug 2013 06:16 PM PDT Since the progression of this was apparently somewhat difficult to follow the first time around: I attempted a completely boring, been-done-a-thousand-times-before copy of a database using the Copy Database Wizard with the detach/reattach method. The copy failed. The log indicates that it was unable to execute a This resulted in the deletion of all non-system user associations from the source database, leaving me with the users dbo, information_schema, sys, and guest. Non-system roles were also deleted; schemas were unaffected. I have since restored the damaged database from backup. Academically, however, I would like to know the following:
This is entirely repeatable. I've made a handful of copies (manually) for the sake of experimenting with this particular issue, and in each case the failed copy job obliterates the users and roles from the source database. Removing the views that generated errors allows the copy to complete and, as one would expect, produces a copy with identical data, users, etc., in addition to leaving the source database unaltered. If it's important, I've tried rebuilding the indexes of the system databases, as well as of the damaged database, to no appreciable effect. The error generated:
SQL Server 2005 Replication Posted: 27 Aug 2013 09:16 PM PDT I am in the process of creating replication between 2 remote servers; server 1 is the Distributor and Publisher and server 2 is the Subscriber. Server 1: Windows Server 2003, 192.168.10.1, connected by VPN, SQL Server 2005, domain1.local. Server 2: Windows Server 2003, 192.168.10.6, connected by VPN, SQL Server 2005, domain2.local. When I set up replication, everything looked fine until I looked at the sync status, which said the request to run the job (from user sa) was refused because the job is already running from a request by user sa; changed database context to ... (error 22022). I have cleared the jobs in SQL Server Agent as well as restarted the service. Could this be something to do with authentication between two non-trusted domains? I can browse and even control each SQL Server via SSMS; I just cannot set up replication. Yes, I can manage each SQL Server in SSMS, and we are using merge replication with a snapshot.
Multiple parents and multiple children in product categories Posted: 27 Aug 2013 12:16 PM PDT I am making an ecommerce site. In this site I want to categorise the items into three different layers. In the above table, I want to display the sub-category after the customer selects the primary one. At the same time, 'samsung' also comes under 'camera': a parent can have any number of children, and one child can have many parents. In the future we may change the primary and secondary levels. What is the best solution for this? Which model suits our categories: a tree or nested sets? A sketch of one option follows.
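Since a category can have multiple parents, the structure is a graph rather than a tree, so neither an adjacency-list tree nor nested sets models it directly; a separate edge table does (a sketch with hypothetical names):

    CREATE TABLE categories (
        category_id INT NOT NULL PRIMARY KEY,
        name        VARCHAR(100) NOT NULL
    );

    CREATE TABLE category_edges (       -- many-to-many parent/child links
        parent_id INT NOT NULL,
        child_id  INT NOT NULL,
        PRIMARY KEY (parent_id, child_id),
        FOREIGN KEY (parent_id) REFERENCES categories (category_id),
        FOREIGN KEY (child_id)  REFERENCES categories (category_id)
    );

    -- sub-categories of the selected primary category
    SELECT c.category_id, c.name
    FROM   category_edges e
    JOIN   categories c ON c.category_id = e.child_id
    WHERE  e.parent_id = 42;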
Mysqldump tables excluding some fields Posted: 27 Aug 2013 07:16 PM PDT Is there a way to mysqldump a table without some of its fields? Let me explain: How can I do this?
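mysqldump itself has no option to exclude columns; a common workaround (a sketch with hypothetical names) is to dump a temporary copy containing only the wanted columns:

    mysql mydb -e "CREATE TABLE export_copy AS
                   SELECT col1, col2 FROM big_table;"   # omit the unwanted fields
    mysqldump mydb export_copy > export_copy.sql
    mysql mydb -e "DROP TABLE export_copy;"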
Sybase SQL Anywhere 12 - Get all indexes which are unique -> ambiguous sysindexes error Posted: 27 Aug 2013 11:16 AM PDT We are using a Sybase SQL Anywhere 12 db. In the db there are indexes which are unique but shouldn't be unique, so I am looking for a quick way to list all tables with unique primary keys. I tried a query against sysindexes, but the result was an error message (translated from German): 'The statement could not be executed. Table name sysindexes is ambiguous. SQLCODE=-852, ODBC 3 state=42S13, line 1, column 1.' I found a query on the internet that looked like what I wanted, but it gave the same ambiguous-sysindexes error. What does an ambiguous table name mean in this context? Will this cause any errors in the future? As a workaround I used Sybase Central (which, by the way, always opens on the first monitor, not on the one where it was closed - ugly behaviour), and found that an 'Indexes' item showed what I was searching for. But I still want to know what a programmatic solution looks like.
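The ambiguity arises because SQL Anywhere exposes more than one object named sysindexes (the SYS.SYSINDEXES compatibility view and the Transact-SQL dbo.sysindexes view), so an unqualified reference matches both; owner-qualifying the name resolves it. A programmatic route via the current catalog views might look like this (a sketch from memory - the "unique" filter is an assumption; verify the column's values in the SYSIDX documentation):

    SELECT t.table_name, i.index_name
    FROM   SYS.SYSIDX i
    JOIN   SYS.SYSTAB t ON t.table_id = i.table_id
    WHERE  i."unique" IN ('Y', 'U');  -- assumption: 'Y' = unique index, 'U' = unique constraint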
Mongo connection failing with 'Transport endpoint is not connected' Posted: 27 Aug 2013 05:16 PM PDT I have a 2-server installation: a web server with Apache and a DB server with MongoDB. I am load testing it, and at ~300 RPS I am getting this error:
The only thing I am noticing is that right before the failure, I am getting lots of connections on Mongo: almost 500 connections here... but never more than that! Mongo is 2.0.3. PHP is 5.3.x (the latest of the Debian install). Help!
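The round number near 500 may suggest a connection limit rather than a network fault; current usage and remaining headroom can be checked from the mongo shell (standard serverStatus fields):

    mongo --eval "printjson(db.serverStatus().connections)"
    # { "current" : ..., "available" : ... }  -- "available" at 0 means the cap is hit

The cap itself usually comes from the server's open-file ulimit or the maxConns setting, with Apache prefork workers each holding a connection.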