[how to] Better to create a table for each user account, or one huge table for all user data in MySQL
- Better to create a table for each user account, or one huge table for all user data in MySQL
- Table Structure: 2 Related Entities Share the Same Entity
- Remove duplicates from two inner joins
- general memory requirements for sql server 2008 r2
- Targeting MSX with job on Multi-Server Administration
- How to solve ORA-01033: ORACLE initialization or shutdown in progress
- Why is SQL Server consuming more server memory?
- most effective architecture to apply multiple functions
- MS Access Restricting table/form access [migrated]
- oracle tablespace extending
- how to begin building a data warehouse [on hold]
- How can I write a routine to check, optimize and analyze all tables in databases in MySQL
- changing TEMPORARY TABLE default collation
- Does disconnecting network stop a query?
- Connect to SQL Server 2012 from an Windows XP client
- Select rows after a specific ID
- How to do aggregate functions in GROUP_CONCAT
- How can I determine how many IOPs I need for my AWS RDS database?
- How to select from a table without including repeated column values?
- FileWatcher Troubleshooting Options
- restrict user host settings to socket connection only
- Unable to create a new listener in oracle 11g
- Grant access to a table to all users
- What is Transaction Path Analysis?
- event scheduler not called
- What is a good Diff tool when comparing on-premise to SQL Azure?
- Shrink a database below its initial size
Better to create a table for each user account, or one huge table for all user data in MySQL Posted: 01 Aug 2013 08:58 PM PDT I'm creating a web directory that will allow individual users to sign up for an account and store what are essentially text documents in a MySQL database. Initially there will likely be only a few hundred users, but we hope to have 10,000 to 100,000 at some point, and each user would be able to upload 100-200 "documents". Would it be more efficient to create one massive table, indexed by a user number, which could theoretically grow to 20,000,000 rows? Or to create a separate table for each user with that user's documents? I'm assuming it's not healthy to have thousands of tables in a database, but I really can't find any concrete data on this.
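For scale, one shared table with an index on the owning user is the conventional design, and twenty million rows is unremarkable for an indexed InnoDB table. A minimal sketch (all table and column names here are illustrative, not taken from the question):

```sql
-- One table for every user's documents; names are hypothetical.
CREATE TABLE documents (
    doc_id   BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
    user_id  INT UNSIGNED    NOT NULL,
    title    VARCHAR(255)    NOT NULL,
    body     MEDIUMTEXT      NOT NULL,
    created  DATETIME        NOT NULL,
    PRIMARY KEY (doc_id),
    KEY idx_user (user_id, created)  -- per-user lookups become index range scans
) ENGINE=InnoDB;

-- Fetching one user's documents touches only that user's index entries:
SELECT doc_id, title
FROM documents
WHERE user_id = 42
ORDER BY created DESC;
```

With the composite index, query cost tracks the number of documents per user rather than total table size, which is the main reason one large indexed table generally beats thousands of per-user tables.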
Table Structure: 2 Related Entities Share the Same Entity Posted: 01 Aug 2013 08:55 PM PDT I was hoping I could get some advice on my table structure. I'm writing a simple workout tracker for me and my buddies. These are the requirements:
I have a way to accomplish what I need, but I feel like there's a much simpler way that I'm missing. tbl_user tbl_workout tbl_user_workout tbl_set tbl_user_set I figure this way I could get:
Is this structure logical?
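The five tables named above suggest a fairly standard design: definitions on one side, per-user performances on the other. A sketch under assumed columns (the requirements list did not come through, so every column below is a guess):

```sql
CREATE TABLE tbl_user    (user_id INT PRIMARY KEY, name VARCHAR(100));
CREATE TABLE tbl_workout (workout_id INT PRIMARY KEY, name VARCHAR(100));

-- Template sets that belong to a workout definition:
CREATE TABLE tbl_set (
    set_id      INT PRIMARY KEY,
    workout_id  INT NOT NULL REFERENCES tbl_workout(workout_id),
    target_reps INT
);

-- A user's performance of a workout on a given day:
CREATE TABLE tbl_user_workout (
    user_workout_id INT PRIMARY KEY,
    user_id         INT  NOT NULL REFERENCES tbl_user(user_id),
    workout_id      INT  NOT NULL REFERENCES tbl_workout(workout_id),
    performed       DATE NOT NULL
);

-- What the user actually did for each set in that session:
CREATE TABLE tbl_user_set (
    user_workout_id INT NOT NULL REFERENCES tbl_user_workout(user_workout_id),
    set_id          INT NOT NULL REFERENCES tbl_set(set_id),
    actual_reps     INT,
    weight          DECIMAL(6,2),
    PRIMARY KEY (user_workout_id, set_id)
);
```

If the sketch matches the intent, the structure is logical: tbl_user_workout and tbl_user_set are ordinary junction tables, which is the simplest shape this kind of tracker can take.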
Remove duplicates from two inner joins Posted: 01 Aug 2013 08:13 PM PDT Hi everyone. The following query executes with duplicate values; if I use DISTINCT, it returns only one row.
general memory requirements for sql server 2008 r2 Posted: 01 Aug 2013 09:12 PM PDT I'm not experienced with DBA work, but I'm trying to make a case for requesting additional resources for our SQL Server, and I was hoping some smart folks could provide a rough estimate of what we should be running. I suspect the allocation of resources IT has given our production SQL Server is low. Hardware & software: Database: SQL Server 2008 R2 Enterprise. Windows: Windows 2008 R2 Enterprise 64-bit, pretty sure running on VMware. Processor: Intel(R) Xeon(R) CPU E7-4860 @ 2.27GHz (2 processors). Installed memory: 4GB. Hard drive for database files: 300GB. Hard drive for backups: 150GB. Hard drive for logs: 100GB. Application: We have 3 main databases that add up to about 170GB of data, plus a Reporting Services (SSRS) database on the same server that houses maybe 10 different reports (comprising an average of 700k records each) that get generated daily. Our user base is about 20 simultaneous users; maybe 5 of those could be considered "resource intensive", generating large data-crunching reports. The majority of users interact with the database through an ASP.NET website and the Report Server website. Additionally, our developers use SSIS in BIDS extensively by remoting directly onto the server (2 remote connections max). Finally, we have a fairly involved data warehousing operation that brings in about 3 million records per day by way of SSIS packages that also run on the server. Current problems: We have chronic, severe server timeouts, and the response time of the website is pretty bad. I suspect the amount of memory we have (4GB) is probably a large bottleneck. Our previous requests for additional memory have been denied, with the common response that we need to perform more query optimization.
While we aren't SQL pros or (as I'm sure you can tell by our setup) DBA pros, I want to make sure I'm not spending all my time trying to squeeze out small performance gains if the hardware is the bottleneck. Thanks, all, for the tl;dr avoidance!
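To support the case with numbers, two standard DMV queries (both available on 2008 R2) show what the server itself sees; sustained low page life expectancy in particular points at memory pressure rather than query plans:

```sql
-- How much memory the OS reports, and whether SQL Server considers it low:
SELECT total_physical_memory_kb / 1024      AS total_mb,
       available_physical_memory_kb / 1024  AS available_mb,
       system_memory_state_desc
FROM sys.dm_os_sys_memory;

-- Page life expectancy: persistently low values (a few hundred seconds)
-- suggest buffer-pool pressure, i.e. not enough RAM for the working set.
SELECT cntr_value AS page_life_expectancy_sec
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
  AND object_name LIKE '%Buffer Manager%';
```

Capturing these numbers during the timeout windows gives IT concrete evidence instead of a hunch.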
Targeting MSX with job on Multi-Server Administration Posted: 01 Aug 2013 02:03 PM PDT I am using Multi-Server Administration to execute jobs on multiple targets. This works well, but I am not able to enlist the master as a target. I receive the following error when attempting to enlist the MSX as a TSX:
I would think that running these same jobs on the master should not require managing a local job as well as the multi-server jobs. Any help is appreciated.
How to solve ORA-01033: ORACLE initialization or shutdown in progress Posted: 01 Aug 2013 08:57 PM PDT I have followed the "How to resolve ORA-01033: ORACLE initialization or shutdown in progress" link to solve the problem. When it gives ORA-00600: internal error code, arguments: [kcratr1_lostwrt], I run: SQL> startup mount The above should return "ORACLE instance started", but instead it shows "cannot start already-running Oracle - shut it down first". Need help.
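The "already-running" message means an instance is partially up, so STARTUP cannot be issued again. A sketch of the commonly suggested sequence for this case (connected AS SYSDBA; this is the generic recovery pattern for the lost-write ORA-00600, not advice specific to this system, so test on a copy first):

```sql
-- Stop the half-started instance before trying to mount again:
SHUTDOWN IMMEDIATE;

-- Now "startup mount" no longer conflicts with a running instance:
STARTUP MOUNT;

-- Media recovery for the lost-write condition, then open:
RECOVER DATABASE;
ALTER DATABASE OPEN;
```

If RECOVER DATABASE itself fails, the instance state and alert log contents are needed before going further.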
Why is SQL Server consuming more server memory? Posted: 01 Aug 2013 11:45 AM PDT SQL Server is consuming 87.5% of my server's RAM. This recently caused a lot of performance bottlenecks, such as slowness. I researched the issue, and one common solution I found on the internet is to set a maximum memory limit for SQL Server. This was done, and much improvement was gained. I want to know: if no maximum memory value is set, why does SQL Server keep consuming the resources?
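By design, SQL Server's buffer pool keeps caching data pages until the OS signals memory pressure, so without a cap it will eventually hold most of the RAM; that is expected behavior, not a leak. Setting the cap looks like this (the 4096 MB figure is only an example value, to be sized for the host):

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Cap the buffer pool so the OS and other processes keep headroom:
EXEC sp_configure 'max server memory (MB)', 4096;  -- example value
RECONFIGURE;
```

The slowness usually comes from the OS and SQL Server fighting over the last of the RAM; the cap prevents that contention rather than making SQL Server faster per se.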
most effective architecture to apply multiple functions Posted: 01 Aug 2013 01:14 PM PDT I have a table that is the only source for a second table. Currently the second table is created using a CTAS (CREATE TABLE AS SELECT) statement that pulls in all the columns from the first table and includes a dozen more columns derived from functions that we have written. These functions take a column or two from the source table and transform them into new data elements. In some cases, the columns used as parameters for the functions are sparsely populated. The functions are compiled natively and utilize the result cache. The first thing that happens in each function is a NULL check on the input parameters, with an immediate return of NULL. I have a few questions:
MS Access Restricting table/form access [migrated] Posted: 01 Aug 2013 09:33 AM PDT Is there a simple way to allow users to edit only the form entries they create, and to allow only an administrator to edit the table/all forms? Ex: I need my entire staff to be able to add and edit their own meetings while not having the ability to edit other staff's meetings.
oracle tablespace extending Posted: 01 Aug 2013 08:48 PM PDT While under massive write load (INSERT and CREATE TABLE AS), when Oracle RAC needs to extend a tablespace, all write activity gets blocked for however long the extension takes (tens of minutes), with all sessions blocked on "data file init write". Is there a way to configure Oracle to extend the file proactively so that things can keep moving while it does so?
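Two common mitigations are to pre-grow the datafile during a quiet window, so no mid-load extension is needed, or to use a larger autoextend increment so extensions happen less often. A sketch (the file name and sizes are invented):

```sql
-- Pre-allocate space ahead of the load:
ALTER DATABASE DATAFILE '+DATA/mydb/users01.dbf' RESIZE 200G;

-- Or extend in larger, less frequent steps:
ALTER DATABASE DATAFILE '+DATA/mydb/users01.dbf'
    AUTOEXTEND ON NEXT 1G MAXSIZE UNLIMITED;
```

Neither makes extension itself concurrent, but pre-sizing moves the "data file init write" cost out of the load window entirely.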
how to begin building a data warehouse [on hold] Posted: 01 Aug 2013 09:33 AM PDT We have an ERP system that uses a PostgreSQL database; each table has many rows, and we want to build a data warehouse. We also want to add BI using ETL. Where would we begin? Would we use schemas, or two separate databases?
How can I write a routine to check, optimize and analyze all tables in databases in MySQL Posted: 01 Aug 2013 09:21 AM PDT How can I create a routine in MySQL to check, optimize and then analyze all tables? Also, how can I schedule this routine to run every night at a set time? Thanks
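One way to keep everything inside the server is a stored procedure that walks information_schema and an event to run it nightly. A sketch (assumes MySQL 5.6+, where OPTIMIZE/ANALYZE TABLE are permitted in prepared statements; CHECK TABLE is not preparable, so checking is easiest done externally with the bundled mysqlcheck client):

```sql
DELIMITER $$
CREATE PROCEDURE maintain_all_tables()
BEGIN
  DECLARE done INT DEFAULT 0;
  DECLARE db  VARCHAR(64);
  DECLARE tbl VARCHAR(64);
  DECLARE cur CURSOR FOR
    SELECT table_schema, table_name
    FROM information_schema.tables
    WHERE table_type = 'BASE TABLE'
      AND table_schema NOT IN ('mysql','information_schema','performance_schema');
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
  OPEN cur;
  read_loop: LOOP
    FETCH cur INTO db, tbl;
    IF done THEN LEAVE read_loop; END IF;
    SET @s = CONCAT('OPTIMIZE TABLE `', db, '`.`', tbl, '`');
    PREPARE st FROM @s; EXECUTE st; DEALLOCATE PREPARE st;
    SET @s = CONCAT('ANALYZE TABLE `', db, '`.`', tbl, '`');
    PREPARE st FROM @s; EXECUTE st; DEALLOCATE PREPARE st;
  END LOOP;
  CLOSE cur;
END$$

CREATE EVENT nightly_maintenance
ON SCHEDULE EVERY 1 DAY STARTS '2013-08-02 02:00:00'
DO CALL maintain_all_tables()$$
DELIMITER ;
```

The event only fires if the scheduler is on (SET GLOBAL event_scheduler = ON, or event_scheduler=ON in my.cnf). On InnoDB, OPTIMIZE TABLE rebuilds the table, so this belongs in an off-peak window.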
changing TEMPORARY TABLE default collation Posted: 01 Aug 2013 01:35 PM PDT when I use
Does disconnecting network stop a query? Posted: 01 Aug 2013 07:50 PM PDT I was running an UPDATE statement on 100,000 records. I realized a mistake while it was running, and quickly unplugged the network cable. Did this stop the query, did it continue processing on the server, or did it do a partial update?
Connect to SQL Server 2012 from a Windows XP client Posted: 01 Aug 2013 10:44 AM PDT We have a SQL Server 2012 Standard database; to connect to it we use SQL Server 2008 Management Studio (since the 2012 SSMS is not supported on Windows XP). The problem is that SQL Server 2008 Management Studio constantly throws an exception ("Index was outside the bounds of the array"), and it's not possible to edit or open tables in the visual editor. The client can currently work with database objects only through queries.
Select rows after a specific ID Posted: 01 Aug 2013 10:16 AM PDT I have the following query; it selects from a news articles table. Due to multiple sorting factors, I cannot locate the next page of data. The only parameter I have is the last ID in the previous query's result set, so how can I fetch the next page? Again, I don't have a page index; all I have is the last ID from the previous result set. This is an example of my data
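This is usually solved with keyset (seek) pagination: repeat the full sort key of the last row seen in the WHERE clause of the next query. A sketch with invented table/column names, assuming MySQL and a sort on (publish_date DESC, id DESC):

```sql
-- First page:
SELECT id, title, publish_date
FROM articles
ORDER BY publish_date DESC, id DESC
LIMIT 10;

-- Next page: pass back the last row's full sort key, not just its ID.
-- The row-value comparison keeps paging stable even when many rows
-- share the same publish_date.
SELECT id, title, publish_date
FROM articles
WHERE (publish_date, id) < ('2013-08-01', 1234)  -- values from the last row
ORDER BY publish_date DESC, id DESC
LIMIT 10;
```

Since only the last ID is available here, the other sort-column values can be looked up first (SELECT publish_date FROM articles WHERE id = ...) and then fed into the seek predicate.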
How to do aggregate functions in GROUP_CONCAT Posted: 01 Aug 2013 01:11 PM PDT I am generating a report to show outstanding fees: which fee type, how much is fixed for that fee type, how much has been paid for that fee type, and finally the balance amount still to be paid for that fee type. I am pulling data from 5 tables: class names from the classes table, roll number and student name from the admissions table, fee types from the feevars table, the amount fixed for each fee type from the studentfees table, and finally the amounts paid for fee types from the fee collection table. I am able to generate partial results by naming the fee types in the SELECT statement and using summing and subtraction operations. Here is the full database and my query producing the result; please look at the MySQL query. In the SELECT statement I have mentioned the fee types manually, but I want to generate the result without naming the fee types as columns. For this I collected all the fee types into a SQL variable, which results in all the fee types as column names on a single line. Finally, I wrote code to produce the output I am expecting, but I am getting an error: Error Code 1064: You have an error in your SQL syntax. This is my final code. Please can anyone tell me what the error in my SQL query is, and suggest any other way to build this report?
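The usual MySQL pattern for a dynamic pivot like this is to build the column list with GROUP_CONCAT and then execute the assembled text as a prepared statement; a concatenated string cannot be run directly, which is the typical source of the 1064 error. A sketch with invented table/column names:

```sql
-- One SUM(CASE ...) expression per fee type (names are illustrative):
SELECT GROUP_CONCAT(
         CONCAT('SUM(CASE WHEN f.fee_type = ''', fee_type,
                ''' THEN f.amount ELSE 0 END) AS `', fee_type, '`')
       ) INTO @cols
FROM (SELECT DISTINCT fee_type FROM feevars) t;

SET @sql = CONCAT('SELECT s.student_id, ', @cols,
                  ' FROM studentfees f JOIN admissions s USING (student_id)',
                  ' GROUP BY s.student_id');

PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
```

Two caveats: group_concat_max_len (default 1024) may truncate a long column list, so it often needs raising with SET SESSION, and the generated SQL is worth printing (SELECT @sql) to spot syntax problems before preparing it.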
How can I determine how many IOPs I need for my AWS RDS database? Posted: 01 Aug 2013 09:04 AM PDT I am migrating part of a MySQL database into AWS. The data in question is 99% writes, and each row has about 1K of VARCHAR fields, a DATETIME and 4 INTs. I estimate that we will need between 20,000 and 25,000 records inserted per hour during peak times. I ran iostat -h on the current database server, and it reported around 40 tps. How do I figure out what type of IOPS I'll need?
How to select from a table without including repeated column values? Posted: 01 Aug 2013 07:11 PM PDT In a previous question, "How to merge data sets without including redundant rows?", I asked about filtering redundant historical data during import, but @DavidSpillett correctly replied that I couldn't do what I was trying to do. Instead of filtering the table during import, I now want to create a view on the table that returns only the records where the price has changed. Here's the original scenario, rephrased to suit this question: we have a table of historical prices for items. The table contains rows where the same price is recorded for multiple dates. I want to create a view on this data which shows only price changes over time: if a price changes from A to B I want to see it, but if it "changes" from B to B then I don't want to see it. Example: if the price yesterday was $1, and the price today is $1, and there were no other price changes, then today's price can be inferred from yesterday's, so I only need yesterday's record. Example (http://sqlfiddle.com/#!3/c95ff/1): My initial attempt used ROW_NUMBER: Which returned: I tried searching for a similar question/answer, but it's hard to work out how to phrase the search; an example is worth a lot of words. Any suggestions appreciated. Thanks
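On SQL Server 2008 (which the fiddle link suggests), LAG is not available, so the usual shape is a ROW_NUMBER self-join that compares each row with its predecessor. Table and column names below are assumed, since the actual schema is in the fiddle:

```sql
-- Keep a row only when its price differs from the previous row for the
-- same item, or when it is the first row recorded for that item.
WITH ordered AS (
    SELECT item_id, price_date, price,
           ROW_NUMBER() OVER (PARTITION BY item_id
                              ORDER BY price_date) AS rn
    FROM item_prices
)
SELECT cur.item_id, cur.price_date, cur.price
FROM ordered AS cur
LEFT JOIN ordered AS prev
       ON prev.item_id = cur.item_id
      AND prev.rn = cur.rn - 1
WHERE prev.item_id IS NULL       -- first price ever recorded for the item
   OR prev.price <> cur.price;   -- a genuine change
```

Wrapped in CREATE VIEW, this gives exactly the "A to B yes, B to B no" behavior described; on SQL Server 2012+ the same logic collapses to a single LAG(price) comparison.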
FileWatcher Troubleshooting Options Posted: 01 Aug 2013 10:40 AM PDT I have set up a Credential/Procedure/Program/File Watcher/Job to monitor a folder for new files, but it is not working. I am trying to figure out what I can check to troubleshoot why this isn't working. After scaling back the code to a bare minimum, here are some things I have already done.
Could someone explain what I am doing wrong or give me a way to troubleshoot this further? Here is my test code:
restrict user host settings to socket connection only Posted: 01 Aug 2013 12:48 PM PDT Is there a way to restrict a user's host setting so that they can only connect by socket and not TCP/IP? I'd like to have this setting for the root account. edit: As Abdul Manaf pointed out
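One way to get this effect on MySQL (a sketch; assumes a Unix build where the auth_socket plugin library is available) is to authenticate the account via the socket's peer credentials, which a TCP connection can never satisfy:

```sql
INSTALL PLUGIN auth_socket SONAME 'auth_socket.so';

-- On pre-5.7 servers, switch an existing account's plugin directly:
UPDATE mysql.user
SET plugin = 'auth_socket'
WHERE User = 'root' AND Host = 'localhost';
FLUSH PRIVILEGES;
```

auth_socket accepts a connection only when the connecting OS user matches the MySQL user name over the local socket, so TCP logins for that account fail. If no TCP access at all is wanted for anyone, skip-networking in my.cnf disables it server-wide instead.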
Unable to create a new listener in oracle 11g Posted: 01 Aug 2013 01:48 PM PDT In Net Manager I get an error message, "A valid logfile is required", when I click on Listeners and choose the '+' button, and then it keeps giving me that error when I click on any other tab or button in Net Manager.
Grant access to a table to all users Posted: 01 Aug 2013 10:09 AM PDT Is it possible to assign a grant to a table for all users, or a default permission so that when new users are created they will have the specific grants for that table to
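The DBMS isn't stated; in both Oracle and PostgreSQL the PUBLIC pseudo-role covers existing and future users alike, and PostgreSQL can additionally set default privileges for objects created later (object names below are examples):

```sql
-- Existing and future users alike, via the PUBLIC role:
GRANT SELECT, INSERT, UPDATE ON my_table TO PUBLIC;

-- PostgreSQL only: make tables this role creates in future
-- automatically readable by everyone:
ALTER DEFAULT PRIVILEGES IN SCHEMA public
    GRANT SELECT ON TABLES TO PUBLIC;
```

Because PUBLIC is evaluated at access time, newly created users pick up the grant with no per-user step.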
What is Transaction Path Analysis? Posted: 01 Aug 2013 08:41 PM PDT I came across the term Transaction Path Analysis, which I am not familiar with. A Google search returned only a few results, most of which seem to assume everyone knows what it means. From these, my understanding is that it is about tracking which columns are used by queries to access data, in preparation for deciding indexes. This seems like common sense, but I'm surprised I can't find more specifics on it. Is there a concise definition for transaction path analysis, a process for completing one, or any alternative more commonly used phrases that mean the same thing?
event scheduler not called Posted: 01 Aug 2013 02:48 PM PDT I had created one event, which looks like this. The event was not called on the 1st of the month, so I tried checking it. Does NULL in the db column mean that no DB is assigned to it? Please help me to solve it.
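The most common cause of MySQL events silently never firing is that the scheduler thread itself is off, which is worth ruling out before anything else:

```sql
SHOW VARIABLES LIKE 'event_scheduler';  -- OFF means no event will ever run

SET GLOBAL event_scheduler = ON;        -- enable until the next restart

SHOW EVENTS;  -- confirms the event exists, its status and next execution time
```

To keep the scheduler on across restarts, event_scheduler=ON belongs in the [mysqld] section of my.cnf.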
What is a good Diff tool when comparing on-premise to SQL Azure? Posted: 01 Aug 2013 04:59 PM PDT We're working on migrating our database (~400 tables) from SQL 2008 R2 to SQL Azure. We're currently in the proof-of-concept stage, figuring out the process we'll be following. We have a process that we think is solid, but I'd like to perform some validation diffs on both schema and data to confirm that we have a successful migration. I've never done this before with SQL Azure and am looking for a good tool to do this. I'd love a free tool, but we can pay for one if necessary. Ultimately, this is a one-time migration (we'll do it a few times, but the real migration will only be done once). Whatcha got?
Shrink a database below its initial size Posted: 01 Aug 2013 06:56 AM PDT I've got a dev database that is a copy of live (30 GB); we have deleted some data that is not needed in dev, which brings the data file's used space down to 20 GB. It's SQL Server 2005, so we have about 33% unused. I need to reclaim the space, which will allow us to have a second dev DB on the server (based on the cut-down version). However, I cannot reclaim the space. I have done the following. The initial size of the file SMS2_Data is 30 GB. Followed by... no joy. I have tried making a backup, creating a new DB with a low initial size, then restoring; no joy, as the initial size gets overwritten. I have tried another approach, which errored with "MODIFY FILE failed. Specified size is less than current size." I have done the shrink, then changed the recovery model from FULL to SIMPLE and back again. No joy. Any ideas? ADDITION: I thought it was to do with some TEXT fields; we have about 6 across the system, so as a test I dropped them all and then shrank the file, and still no change. ADDITION 2: Cheers everyone for your answers; no single answer has sorted the problem. The only option left is to re-import the data into another DB. This is not practical, as it would have to be done on the live DB, which carries too much risk. I've no idea what will happen to the bonus points; I can't pick an answer, as nothing has given me a solution to my problem. ADDITION 3: I've tried lots of things and nothing has worked. Thanks all for your help, but I've just got to accept that it's going to be 30 GB at least.
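For reference, DBCC SHRINKFILE (unlike DBCC SHRINKDATABASE) can take a file below its initial size, so the usual sequence looks like this (the logical file name is taken from the question; the database name and target size are assumptions):

```sql
USE SMS2;  -- assumed database name, inferred from the file name SMS2_Data

-- Shrink the data file toward 22 GB, leaving some working free space:
DBCC SHRINKFILE (SMS2_Data, 22000);  -- target size in MB

-- Check how much allocated vs. unallocated space remains:
EXEC sp_spaceused;
```

If SHRINKFILE reports success but the file does not budge, the free space usually is not at the end of the file; on 2005, LOB/TEXT allocations are a known offender because shrink cannot always relocate those pages, which matches the TEXT-column suspicion in the question. A shrink also fragments indexes, so a rebuild afterwards is standard practice.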
You are subscribed to email updates from Recent Questions - Database Administrators Stack Exchange. To stop receiving these emails, you may unsubscribe now. Email delivery powered by Google.