Thursday, August 1, 2013

[how to] Better to create a table for each user account, or one huge table for all user data in MYSQL

Better to create a table for each user account, or one huge table for all user data in MYSQL

Posted: 01 Aug 2013 08:58 PM PDT

I'm creating a web directory that will allow individual users to sign up for an account and store what are essentially text documents in a MySQL database.

Initially there will likely be only a few hundred users, but we hope for 10,000 to 100,000 at some point, and each user would be able to upload 100-200 "documents".

Would it be more efficient to create one massive table indexed by user number, which could theoretically grow to 20,000,000 entries, or to create a separate table for each user holding that user's documents?

I'm assuming it's not healthy to have thousands of tables in a database, but I really can't find any concrete data on this.
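For context, here is a minimal sketch of the single-table approach described above; the table and column names are hypothetical (MySQL/InnoDB assumed). With an index on user_id, fetching one user's 100-200 documents stays cheap even at 20,000,000 rows:

CREATE TABLE documents (
    id      BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    user_id INT UNSIGNED    NOT NULL,
    title   VARCHAR(255)    NOT NULL,
    body    MEDIUMTEXT      NOT NULL,
    created DATETIME        NOT NULL,
    -- per-user lookups go through this index
    INDEX idx_documents_user (user_id)
) ENGINE=InnoDB;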

Table Structure: 2 Related Entities Share the Same Entity

Posted: 01 Aug 2013 08:55 PM PDT

I was hoping I could get some advice on my table structure. I'm writing a simple workout tracker for me and my buddies. These are the requirements:

  • Multiple users
  • A user can have multiple workouts
  • A workout has multiple sets
  • A set can have multiple users

I have a way to accomplish what I need, but I feel like there's a much simpler way that I'm missing.

tbl_user

| UserId |   Name |
-------------------
|      1 |  John  |
|      2 |  Greg  |

tbl_workout

| WorkoutId |
-------------
|     1     |
|     2     |

tbl_user_workout

| WorkoutId | UserId |
----------------------
|     1     |    1   |
|     2     |    1   |

tbl_set

| SetId | WorkoutId |
---------------------
|   1   |      1    |
|   2   |      1    |

tbl_user_set

| UserSetId | SetId | UserId | Reps | Weight |
----------------------------------------------
|     1     |    1  |    1   |  20  |   50   |
|     2     |    1  |    2   |  15  |   60   |

I figure this way I could get:

  • A user's sets via user->userset
  • A user's workouts via user->userworkout
  • A workout's sets via workout->set

Is this structure logical?
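For reference, the same design expressed as DDL — a sketch only, since the question shows just data; the types and foreign keys are assumed. The composite key on tbl_user_workout and the surrogate key plus two foreign keys on tbl_user_set capture the two many-to-many relationships:

CREATE TABLE tbl_user (
    UserId INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    Name   VARCHAR(50) NOT NULL
);

CREATE TABLE tbl_workout (
    WorkoutId INT NOT NULL AUTO_INCREMENT PRIMARY KEY
);

CREATE TABLE tbl_user_workout (
    WorkoutId INT NOT NULL,
    UserId    INT NOT NULL,
    PRIMARY KEY (WorkoutId, UserId),
    FOREIGN KEY (WorkoutId) REFERENCES tbl_workout (WorkoutId),
    FOREIGN KEY (UserId)    REFERENCES tbl_user (UserId)
);

CREATE TABLE tbl_set (
    SetId     INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    WorkoutId INT NOT NULL,
    FOREIGN KEY (WorkoutId) REFERENCES tbl_workout (WorkoutId)
);

CREATE TABLE tbl_user_set (
    UserSetId INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    SetId     INT NOT NULL,
    UserId    INT NOT NULL,
    Reps      INT NOT NULL,
    Weight    INT NOT NULL,
    FOREIGN KEY (SetId)  REFERENCES tbl_set (SetId),
    FOREIGN KEY (UserId) REFERENCES tbl_user (UserId)
);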

Remove duplicates from two inner joins

Posted: 01 Aug 2013 08:13 PM PDT

Hi everyone. The following query executes with duplicate values; if I use DISTINCT it returns only one row.

select a.name, b.id as order_id, b.mode, c.id as job_id
from customers a
inner join orders b on a.id = b.customer_id
inner join jobs c on a.id = b.customer_id
where a.id = 1;
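For what it's worth, the duplicates are consistent with the second join's ON clause never referencing the jobs alias c, so every job row matches every order row. A hedged fix, assuming jobs carries a customer_id column (if jobs instead references orders, join on that key):

select a.name, b.id as order_id, b.mode, c.id as job_id
from customers a
inner join orders b on b.customer_id = a.id
inner join jobs c on c.customer_id = a.id  -- assumed column; join jobs on its own key
where a.id = 1;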

general memory requirements for sql server 2008 r2

Posted: 01 Aug 2013 09:12 PM PDT

I'm not experienced with DBA work, but I'm trying to make a case for requesting additional resources for our SQL Server, and was hoping some smart folks could provide a rough estimate of what we should be running. I suspect the resources IT has allocated to our production SQL Server are low.

Hardware & Software:

Database: SQL Server 2008 R2 Enterprise

Windows: Windows Server 2008 R2 Enterprise 64-bit, pretty sure running on VMware.

Processor: Intel(R) Xeon(R) CPU E7-4860 @ 2.27 GHz (2 processors)

Installed memory: 4GB

Hard drive for Database files: 300GB

Hard drive for backups: 150GB

Hard drive for logs: 100GB

Application:

We have 3 main databases that add up to about 170 GB of data, plus a Reporting Services (SSRS) database on the same server that houses maybe 10 different reports (averaging 700k records each) that get generated daily. Our user base is about 20 simultaneous users; maybe 5 of those could be considered "resource intensive", generating large data-crunching reports. The majority of users interact with the database through an ASP.NET website and the Report Server website. Additionally, our developers use SSIS in BIDS extensively by remoting directly onto the server (2 remote connections max). Finally, we have a fairly involved data warehousing operation that brings in roughly 3 million records per day via SSIS packages that also run on the server.

Current Problems:

We have chronic server timeouts, and response time to the website is pretty bad. I suspect the amount of memory we have (4GB) is a large bottleneck. Our previous requests for additional memory have been denied, with the common response that we need to perform more query optimization. While we aren't SQL pros or (as I'm sure you can tell from our setup) DBA pros, I want to make sure I'm not spending all my time squeezing out small performance gains if the hardware is the real bottleneck.

Thanks all for the tl;dr avoidance!
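One way to back a request like this with numbers rather than suspicion is the buffer pool's Page Life Expectancy counter; sustained low values (a commonly cited rule of thumb is under ~300 seconds) indicate memory pressure. A standard DMV query:

-- How long pages survive in the buffer pool, in seconds.
SELECT [object_name], counter_name, cntr_value AS page_life_expectancy_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
  AND [object_name] LIKE '%Buffer Manager%';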

Targeting MSX with job on Multi-Server Administration

Posted: 01 Aug 2013 02:03 PM PDT

I am using Multi-Server Administration to execute jobs on multiple targets. This works well, but I am not able to enlist the master as a target. I receive the following error when attempting to enlist the MSX as a TSX:

Server 'XXXXX' is an MSX. Cannot enlist one MSX into another MSX. (Microsoft SQL Server, Error: 14299)

I would have thought that running these same jobs on the master should not require managing a local job in addition to the multi-server jobs.

Any help is appreciated.

How to solve ORA-01033: ORACLE initialization or shutdown in progress

Posted: 01 Aug 2013 08:57 PM PDT

I followed the link How to resolve ORA-01033: ORACLE initialization or shutdown in progress to solve the problem.

When it gives ORA-00600: internal error code, arguments: [kcratr1_lostwrt], I run:

SQL> startup mount

This should return "ORACLE instance started", but instead it shows "cannot start already-running Oracle - shut it down first".

Need Help.

Why is SQL Server consuming more server memory?

Posted: 01 Aug 2013 11:45 AM PDT

SQL Server is consuming 87.5% of my server's RAM. This has recently caused a lot of performance bottlenecks, such as slowness. I researched the issue, and one common solution I found on the internet is to set a maximum memory limit for SQL Server. This was done and much improvement was gained. What I want to know is: if the maximum memory value is not set, why does SQL Server keep consuming resources?
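For reference, the maximum-limit fix the poster mentions is applied with sp_configure; the 4096 MB value below is purely illustrative and should be sized to leave room for the OS and other processes:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 4096;  -- illustrative value
RECONFIGURE;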

most effective architecture to apply mulitple functions

Posted: 01 Aug 2013 01:14 PM PDT

I have a table that is the only source for a second table. Currently the second table is created using a CTAS (create table as select) statement that pulls in all the columns from the first table and adds a dozen more columns computed by functions we have written. These functions take a column or two from the source table and transform them into new data elements.

In some cases, the columns used as parameters for the functions are sparsely populated. The functions are compiled natively and use the result_cache. The first thing that happens in each function is a null check on the input parameters and, if they are null, an immediate return of NULL.
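A minimal sketch of the function pattern being described, with hypothetical names and a placeholder transformation (the real functions are presumably compiled with PLSQL_CODE_TYPE = NATIVE):

CREATE OR REPLACE FUNCTION transform_value(p_in IN VARCHAR2)
    RETURN VARCHAR2
    RESULT_CACHE
IS
BEGIN
    -- Short-circuit on the sparsely populated inputs.
    IF p_in IS NULL THEN
        RETURN NULL;
    END IF;
    -- Placeholder for the real transformation logic.
    RETURN UPPER(TRIM(p_in));
END transform_value;
/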

I have a few questions:

  • I have full control over the architecture of the source table (as I create it in an earlier process). Is there anything I can do with the source tables (indexes, partitions, etc.) that will help the CTAS run more efficiently?
  • Should I be using a CTAS at all, or would an UPDATE be more efficient? (My gut here says no way, but I'd like some validation.)
  • Is there something else I can do to the functions to make them more efficient?
  • Is there anything else I should be thinking about?

MS Access Restricting table/form access [migrated]

Posted: 01 Aug 2013 09:33 AM PDT

Is there a simple way to allow users to only edit form entries they create and only allow an administrator to edit the table/all forms? Ex: I need my entire staff to be able to add and edit their own meetings while not having the ability to edit other staff's meetings.

oracle tablespace extending

Posted: 01 Aug 2013 08:48 PM PDT

While under massive write (insert and create table as) load, when Oracle RAC needs to extend tablespace, all write activity gets blocked for however long it takes it to finish extending (tens of minutes), with all sessions blocked by "data file init write". Is there way to configure Oracle to extend the file proactively so that things can keep moving while it is doing it?
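If the files are not already pre-sized, growing them ahead of demand (or letting Oracle autoextend in comfortable chunks) is the usual mitigation; a sketch with a hypothetical datafile path and sizes:

-- Grow the file ahead of demand during a quiet window:
ALTER DATABASE DATAFILE '/u01/oradata/MYDB/users01.dbf' RESIZE 200G;

-- Or let Oracle extend automatically in 1 GB steps:
ALTER DATABASE DATAFILE '/u01/oradata/MYDB/users01.dbf'
    AUTOEXTEND ON NEXT 1G MAXSIZE UNLIMITED;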

how to begin building a data warehouse [on hold]

Posted: 01 Aug 2013 09:33 AM PDT

We have an ERP system that uses a PostgreSQL database; each table has many rows, and we want to build a data warehouse. We also want to add BI using ETL.

Where would we begin? Would we use schemas within one database, or two separate databases?

How can I write a routine to check, optimize and analyze all tables in all databases in MySQL

Posted: 01 Aug 2013 09:21 AM PDT

How can I create a routine in MySQL to check, optimize and then analyze the tables?

Also how can I schedule this routine to run every night at set time?

Thanks
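A sketch of one way to do this entirely inside MySQL, iterating information_schema with prepared statements. Note that CHECK TABLE cannot be run as a prepared statement, so the check step is typically done externally with mysqlcheck -c; OPTIMIZE and ANALYZE are allowed:

DELIMITER //
CREATE PROCEDURE maintain_all_tables()
BEGIN
  DECLARE done INT DEFAULT 0;
  DECLARE v_schema, v_table VARCHAR(64);
  DECLARE cur CURSOR FOR
    SELECT table_schema, table_name FROM information_schema.tables
    WHERE table_type = 'BASE TABLE'
      AND table_schema NOT IN ('mysql', 'information_schema', 'performance_schema');
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
  OPEN cur;
  read_loop: LOOP
    FETCH cur INTO v_schema, v_table;
    IF done THEN LEAVE read_loop; END IF;
    -- OPTIMIZE rebuilds the table and indexes; ANALYZE refreshes key statistics.
    SET @s = CONCAT('OPTIMIZE TABLE `', v_schema, '`.`', v_table, '`');
    PREPARE stmt FROM @s; EXECUTE stmt; DEALLOCATE PREPARE stmt;
    SET @s = CONCAT('ANALYZE TABLE `', v_schema, '`.`', v_table, '`');
    PREPARE stmt FROM @s; EXECUTE stmt; DEALLOCATE PREPARE stmt;
  END LOOP;
  CLOSE cur;
END//
DELIMITER ;

-- Run it nightly at 02:00 (requires SET GLOBAL event_scheduler = ON):
CREATE EVENT nightly_maintenance
ON SCHEDULE EVERY 1 DAY STARTS TIMESTAMP(CURRENT_DATE) + INTERVAL 26 HOUR
DO CALL maintain_all_tables();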

changing TEMPORARY TABLE default collation

Posted: 01 Aug 2013 01:35 PM PDT

When I use CREATE TEMPORARY TABLE to create a temporary table, the collation of the table is utf8_unicode_ci by default. Why is utf8_unicode_ci the default, and how can I change it?
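The default comes from the database's default character set and collation (and ultimately the collation_server setting) rather than anything specific to temporary tables, and it can be overridden per table or changed at the database level. A short illustration with a hypothetical table and database name:

-- Override per table:
CREATE TEMPORARY TABLE tmp_demo (
    name VARCHAR(100)
) CHARACTER SET utf8 COLLATE utf8_general_ci;

-- Or change the default for the database:
ALTER DATABASE mydb CHARACTER SET utf8 COLLATE utf8_general_ci;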

Does disconnecting network stop a query?

Posted: 01 Aug 2013 07:50 PM PDT

I was running an UPDATE statement against 100,000 records. I realized a mistake while it was running and quickly unplugged the network cable.

Did this stop the query, did it continue processing on the server, or did it do a partial update?
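Generally, with a transactional storage engine a single UPDATE statement is atomic, so a partial update should not persist: depending on when the server notices the dropped connection, the statement either completes or rolls back (a non-transactional engine such as MyISAM can leave a partial update). A common safeguard for next time is an explicit transaction; table and predicate here are hypothetical, and the syntax shown is T-SQL (MySQL uses START TRANSACTION):

BEGIN TRANSACTION;

UPDATE my_table
SET    col = 'new value'
WHERE  id <= 100000;

-- Inspect the result, then run exactly one of:
-- COMMIT;    -- keep the change
-- ROLLBACK;  -- undo it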

Connect to SQL Server 2012 from an Windows XP client

Posted: 01 Aug 2013 10:44 AM PDT

We have a SQL Server 2012 Standard database; to connect to it we use SQL Server 2008 Management Studio (the 2012 SSMS is not supported on Windows XP). The problem is that SQL Server 2008 Management Studio constantly throws an exception ("Index outside the bounds of the array"), and it's not possible to edit or open tables in the visual editor. The client can currently work with database objects only through queries.
So the question is: is there any solution on Windows XP to work properly with a 2012 database?

Select rows after a specific ID

Posted: 01 Aug 2013 10:16 AM PDT

I have the following query, which selects from a news articles table. Due to multiple sorting factors, I cannot locate the next page of data.

The only parameter I have is the last ID in the previous query's result set, so how can I fetch the next page? Again, I don't have a page index; all I have is that last ID.

SELECT TOP (20)
    id,
    DATEDIFF(SECOND, CONVERT(DATETIME, '1970-01-01', 102), release_date) AS date,
    DATEDIFF(SECOND, CONVERT(DATETIME, '1970-01-01', 102), edit_date) AS lastchanged,
    important
FROM articles AS news WITH (NOLOCK)
WHERE (status = 1) AND (release_date <= GETUTCDATE())
ORDER BY important DESC, date DESC, lastchanged DESC

This is an example of my data

id      date        lastchanged  important
187346  1366070400  1345212540   1
187306  1365984000  1345290300   1
187330  1365984000  1342457880   1
187344  1363392000  1342461540   1
187343  1363392000  1342461300   1
187342  1363392000  1342459980   1
187339  1363392000  1342459800   1
187337  1363392000  1342458960   1
187335  1363392000  1342458720   1
187334  1363392000  1342458600   1
187332  1363392000  1342458060   1
187331  1363392000  1342457940   1
187327  1363392000  1342457340   1
187328  1363392000  1342457340   1
187326  1363392000  1342456860   1
187323  1363392000  1342456020   1
187322  1363392000  1342455480   1
187321  1363392000  1342454700   1
187316  1363392000  1342454580   1
187320  1363392000  1342454520   1
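One approach that works with only the last ID: look up that row's sort-key values, then apply a compound "rows after" predicate. Since the date and lastchanged aliases are monotonic transforms of release_date and edit_date, ordering by the raw columns is equivalent. A sketch, with @last_id as the hypothetical input:

DECLARE @last_id INT = 187320;  -- hypothetical: last id from the previous page
DECLARE @imp INT, @rel DATETIME, @edit DATETIME;

-- Fetch the sort-key values of the last row seen.
SELECT @imp = important, @rel = release_date, @edit = edit_date
FROM articles
WHERE id = @last_id;

-- Fetch the next page: rows strictly "after" that row in sort order.
SELECT TOP (20) id, release_date, edit_date, important
FROM articles
WHERE status = 1
  AND release_date <= GETUTCDATE()
  AND (important < @imp
       OR (important = @imp AND release_date < @rel)
       OR (important = @imp AND release_date = @rel AND edit_date < @edit))
ORDER BY important DESC, release_date DESC, edit_date DESC;

If lastchanged values can tie (as 1342457340 does above), add id as a final ORDER BY column and a fourth predicate branch so no row is skipped.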

How to do aggregate functions in GROUP_CONCAT

Posted: 01 Aug 2013 01:11 PM PDT

I am generating a report to show outstanding fees: which fee type, how much is fixed for that fee type, how much has been paid towards that fee type, and finally the balance amount still to be paid for that fee type.

I am pulling data from 5 tables: class names from the classes table, roll number and student name from the admissions table, fee types from the feevars table, the amount fixed for each fee type from the studentfees table, and finally the amounts paid for fee types from the fee collection table.

I am able to generate partial results by naming the fee types in the SELECT statement with summing and subtracting operations.

Here is the full database and my query producing the result. Please look at the MySQL query: in the SELECT statement I have mentioned the fee types manually, but I want to generate the result without naming each fee type as a column.

To do this, I pulled all the fee types into a SQL variable like this:

set @sqlList = null;
SELECT GROUP_CONCAT(concat('tsf.', Replace(FeeName, ' ', '')))
INTO @sqlList
FROM tbl_feevars;

This puts all the fee types on a single line as column names. Finally, I wrote code to produce the output I am expecting, but I am getting an error: Error Code 1064: You have an error in your SQL syntax.
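The 1064 error is usually because a user variable cannot be referenced as a column list directly; the generated list has to be concatenated into a full statement and run as a prepared statement. A sketch against a deliberately simplified, hypothetical schema (studentfees(student_id, fee_name, amount)):

-- Build one SUM(CASE ...) column per fee type.
-- Note: raise group_concat_max_len if the list is long.
SET @sqlList = NULL;
SELECT GROUP_CONCAT(
         CONCAT('SUM(CASE WHEN sf.fee_name = ''', FeeName,
                ''' THEN sf.amount ELSE 0 END) AS `',
                REPLACE(FeeName, ' ', ''), '`')
       SEPARATOR ', ')
INTO @sqlList
FROM tbl_feevars;

-- Splice the list into a dynamic statement and execute it.
SET @sql = CONCAT('SELECT sf.student_id, ', @sqlList,
                  ' FROM studentfees sf GROUP BY sf.student_id');
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;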

This is My final code

Expected Output Code

Please can anyone tell me what the error in my SQL query is, and suggest any other way to do this report?

How can I determine how many IOPs I need for my AWS RDS database?

Posted: 01 Aug 2013 09:04 AM PDT

I am migrating part of a MySQL database into AWS. The data in question is 99% writes, and each row has about 1 KB of varchar fields, a datetime and 4 ints.

I estimate that we will need between 20-25k records inserted/hour during peak times.

I ran iostat -h on the current database, and it reported around 40 tps.

How do I figure out what type of IOPS I'll need?
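As a rough worked example (assumptions, not a sizing guarantee): 25,000 inserts per hour is only about 7 inserts per second, but each insert can cost several physical I/Os once index maintenance and log writes are counted, which is consistent with the ~40 tps already measured. A reasonable starting point is to provision at least the measured peak (around 40 IOPS) with generous headroom for bursts and growth, then validate against the instance's I/O metrics in CloudWatch after migration.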

How to select from a table without including repeated column values?

Posted: 01 Aug 2013 07:11 PM PDT

In a previous question How to merge data sets without including redundant rows? I asked about filtering redundant historical data during import, but @DavidSpillett correctly replied that I couldn't do what I was trying to do.

Instead of filtering the table during import, I now want to create a view on the table that returns only records where the price has changed.

Here's the original scenario rephrased to suit this question:

We have a table of historical prices for items. The table contains rows where the same price is recorded for multiple dates. I want to create a view on this data which only shows price changes over time, so if a price changes from A to B I want to see it, but if it "changes" from B to B then I don't want to see it.

Example: if the price yesterday was $1, and the price today is $1, and there were no other price changes, then the price today can be inferred from the price yesterday so I only need the record from yesterday.

Example (http://sqlfiddle.com/#!3/c95ff/1):

Table data:

Effective            Product  Kind  Price
2013-04-23T00:23:00  1234     1     1.00
2013-04-24T00:24:00  1234     1     1.00 -- redundant, implied by record 1
2013-04-25T00:25:00  1234     1     1.50
2013-04-26T00:26:00  1234     1     2.00
2013-04-27T00:27:00  1234     1     2.00 -- redundant, implied by record 4
2013-04-28T00:28:00  1234     1     1.00 -- not redundant, price changed back to 1.00

Expected view data:

Effective            Product  Kind  Price
2013-04-23T00:23:00  1234     1     1.00
2013-04-25T00:25:00  1234     1     1.50
2013-04-26T00:26:00  1234     1     2.00
2013-04-28T00:28:00  1234     1     1.00

My initial attempt used ROW_NUMBER:

SELECT
    Effective,
    Product,
    Kind,
    Price
FROM
(
    SELECT
        History.*,
        ROW_NUMBER() OVER
        (
            PARTITION BY Product, Kind, Price
            ORDER BY Effective ASC
        ) AS RowNumber
    FROM History
) H
WHERE RowNumber = 1
ORDER BY Effective

Which returned:

Effective               Product  Kind  Price
2013-04-23T00:23:00     1234     1     1.00
                                             -- not 2013-04-24, good
2013-04-25T00:25:00     1234     1     1.50
2013-04-26T00:26:00     1234     1     2.00
                                             -- not 2013-04-27, good
                                             -- not 2013-04-28, bad

I tried searching for a similar question/answer, but it's hard to work out how to phrase the search; an example is worth a lot of words.

Any suggestions appreciated. Thanks
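The ROW_NUMBER attempt partitions by price value, so a price that changes back to an earlier value (the 2013-04-28 row) is treated as a duplicate of its first occurrence and dropped. Comparing each row to the immediately preceding row avoids that; if SQL Server 2012 or later is available, LAG is a direct fit. A sketch using the question's table:

SELECT Effective, Product, Kind, Price
FROM
(
    SELECT History.*,
           -- Price of the previous row for the same product/kind.
           LAG(Price) OVER
           (
               PARTITION BY Product, Kind
               ORDER BY Effective
           ) AS PrevPrice
    FROM History
) H
WHERE PrevPrice IS NULL OR PrevPrice <> Price  -- first row, or a real change
ORDER BY Effective;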

FileWatcher Troubleshooting Options

Posted: 01 Aug 2013 10:40 AM PDT

I have set up a Credential/Procedure/Program/File Watcher/Job to monitor a folder for new files, but it is not working. I am trying to figure out what I can check to troubleshoot why this isn't working. After scaling back the code to a bare minimum, here are some things I have already done.

  • Verify that files are being created in the folder monitored - They are.
  • Verify that the procedure can be run by itself - It can.
  • Verify that the File Watcher shows up in DBA_Scheduler_File_Watchers and is enabled. - It does and is.
  • Verify that DBA_Scheduler_Job_Run_Details shows a successful execution - It does NOT show any entries for this job.
  • Check the alert log - Nothing interesting.
  • Check for other trace files - I am getting trace files matching *_j000_*.trc, but they just contain this: FILE_TRANSFER error is: with nothing after it.

Could someone explain what I am doing wrong or give me a way to troubleshoot this further? Here is my test code:

--Create Credential.
BEGIN
    sys.dbms_scheduler.create_credential(
        username => 'oracle',
        password => 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
        database_role => NULL,
        windows_domain => NULL,
        comments => NULL,
        credential_name => 'TestCredential'
    );
END;
/

CREATE TABLE FilesFound(FoundDate Date);

CREATE OR REPLACE PROCEDURE TestProcedure(iFileInfo In sys.scheduler_filewatcher_result) AS
BEGIN
   INSERT INTO FilesFound VALUES (sysdate);
   COMMIT;
END;
/

BEGIN
   DBMS_SCHEDULER.create_program(
      program_name => 'TestProgram',
      program_type => 'stored_procedure',
      program_action => 'TestProcedure',
      number_of_arguments => 1,
      enabled => False);
   DBMS_SCHEDULER.define_metadata_argument(
      program_name => 'TestProgram',
      metadata_attribute => 'event_message',
      argument_position => 1);
END;
/

BEGIN
   dbms_scheduler.create_file_watcher(
      file_watcher_name => 'TestFileWatcher',
      directory_path => '/u01/test',
      file_name => '*.*',
      credential_name => 'TestCredential',
      destination => NULL,
      enabled => False);
END;
/

BEGIN
   dbms_scheduler.create_job(
      job_name => 'TestJob',
      program_name => 'TestProgram',
      event_condition => NULL,
      queue_spec => 'TestFileWatcher',
      auto_drop => False,
      enabled => False);

   --Enable Everything.
   dbms_scheduler.enable('TestProgram, TestFileWatcher, TestJob');
END;
/

--Set a one minute check interval.
BEGIN
   DBMS_SCHEDULER.set_attribute(
      'file_watcher_schedule',
      'repeat_interval',
      'freq=minutely; interval=1');
END;
/

--Create a file.
DECLARE
   vFile utl_file.file_type;
BEGIN
   EXECUTE IMMEDIATE 'create or replace directory TESTDIRECTORY as ''/u01/test''';
   vFile := utl_file.fopen('TESTDIRECTORY', 'TestFileWatcher.txt', 'w', NULL);
   utl_file.put_line(vFile, 'File has arrived '||SYSTIMESTAMP, TRUE);
   utl_file.fclose(vFile);
END;
/

--Delay to give time for file to appear.
BEGIN
   DBMS_LOCK.SLEEP(120);
END;
/

--Check the table.
BEGIN
   FOR vx IN (select count(*) ct from sys.FilesFound) LOOP
      If (vx.ct = 0) Then
         DBMS_Output.Put_Line('Failed - No File Watcher Activity.');
      Else
         DBMS_Output.Put_Line('Success - File Watcher Activity Found.');
      End If;
   END Loop;
END;
/

--Cleanup.
EXECUTE dbms_scheduler.drop_job(job_name => 'TestJob');
EXECUTE dbms_scheduler.drop_program(program_name => 'TestProgram');
EXECUTE dbms_scheduler.drop_file_watcher(file_watcher_name => 'TestFileWatcher');
EXECUTE DBMS_SCHEDULER.drop_credential(credential_name => 'TestCredential');
drop table FilesFound;
drop procedure TestProcedure;
drop directory TestDirectory;

restrict user host settings to socket connection only

Posted: 01 Aug 2013 12:48 PM PDT

Is there a way to restrict a user's host setting so that (s)he can only connect by socket and not TCP/IP? I'd like to have this setting for the root account.

edit: As Abdul Manaf pointed out, skip-networking can be used to turn off TCP/IP connectivity altogether. But can it be done on a per-user basis?
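One avenue worth testing (an assumption, not a guarantee for every version): the auth_socket authentication plugin shipped with MySQL on Unix (MariaDB's equivalent is unix_socket) authenticates only over the Unix socket by matching the connecting OS user, which effectively shuts out TCP/IP for that account:

-- Load the plugin once:
INSTALL PLUGIN auth_socket SONAME 'auth_socket.so';

-- A hypothetical account that can only connect via the socket,
-- as the OS user named 'admin':
CREATE USER 'admin'@'localhost' IDENTIFIED WITH auth_socket;

Converting the existing root account would mean switching its authentication plugin the same way.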

Unable to create a new listener in oracle 11g

Posted: 01 Aug 2013 01:48 PM PDT

In Net Manager I get an error message "A valid logfile is required" when I click on Listeners and choose the '+' button, and then it keeps giving me that error when I click on any other tab or button in Net Manager.

Grant access to a table to all users

Posted: 01 Aug 2013 10:09 AM PDT

Is it possible to assign a grant to a table for all users, or a default permission so that when new users are created they will have the specific grants for that table to SELECT, UPDATE, INSERT and DELETE?
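In most engines the built-in PUBLIC role covers both current and future users, so a single grant does it (table name hypothetical; in PostgreSQL, ALTER DEFAULT PRIVILEGES additionally handles tables created later):

-- Applies to every existing and future user:
GRANT SELECT, INSERT, UPDATE, DELETE ON mytable TO PUBLIC;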

What is Transaction Path Analysis?

Posted: 01 Aug 2013 08:41 PM PDT

I came across the term Transaction Path Analysis, which I am not familiar with. A Google search turned up only a few results, most of which seem to assume everyone already knows what it means. From these, my understanding is that it is about tracking which columns queries use to access data, in preparation for deciding on indexes. This seems like common sense, but I'm surprised I can't find more specifics on it.

Is there a concise definition for transaction path analysis, a process for completing one, or any alternative more commonly used phrases that mean the same thing?

event scheduler not called

Posted: 01 Aug 2013 02:48 PM PDT

I created an event which looks like this:

mysql> show create event event1\G
*************************** 1. row ***************************
               Event: event1
            sql_mode:
           time_zone: SYSTEM
        Create Event: CREATE DEFINER=`root`@`localhost` EVENT `event1`
                      ON SCHEDULE EVERY 1 MONTH STARTS '2013-02-02 00:00:00'
                      ON COMPLETION NOT PRESERVE ENABLE
                      DO BEGIN update samp set col1 = col1 + 1; END
character_set_client: utf8
collation_connection: utf8_general_ci
  Database Collation: latin1_swedish_ci
1 row in set (0.00 sec)

The event did not fire on the 1st of the month, so I ran show processlist\g to find out whether it is running; it showed the following output:

mysql> show processlist\g;
+-----+-----------------+-----------+------+---------+------+-----------------------------+------+
| Id  | User            | Host      | db   | Command | Time | State                       | Info |
+-----+-----------------+-----------+------+---------+------+-----------------------------+------+
| 136 | event_scheduler | localhost | NULL | Daemon  | 1855 | Waiting for next activation | NULL |

So does NULL in the db column mean that no DB is assigned to it?

Please help me to solve it.

What is a good Diff tool when comparing on-premise to SQL Azure?

Posted: 01 Aug 2013 04:59 PM PDT

We're working on migrating our database (~400 tables) from SQL Server 2008 R2 to SQL Azure. We're currently in the proof-of-concept stage, figuring out the process we'll follow. We have a process that we think is solid, but I'd like to perform some validation diffs on both schema and data to confirm that we have a successful migration. I've never done this before with SQL Azure and am looking for a good tool to do it. I'd love a free tool, but we can pay for one if necessary. Ultimately, this is a one-time migration (we'll run it a few times, but the real migration will only be done once).

Whatcha got?

Shrink a database below its initial size

Posted: 01 Aug 2013 06:56 AM PDT

I've got a dev database that is a copy of live (30 GB). We have deleted some data that is not needed in dev, which brings the data file's used space down to 20 GB.

It's SQL Server 2005.

So we have about 33% unused.

I need to reclaim the space, which will allow us to have a second dev DB on the server (based on the cut-down version).

However I cannot reclaim the space. I have done the following.

The initial size of the file SMS2_Data is 30 GB.

DBCC SHRINKFILE (N'SMS2_Data' , 0, TRUNCATEONLY)  

followed by

DBCC SHRINKFILE (N'SMS2_Data' , 19500)  

No joy.
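Before shrinking further, it can help to confirm how much of the file the engine actually reports as in use; a standard check using FILEPROPERTY (size is stored in 8 KB pages, so dividing by 128 yields MB):

SELECT name,
       size / 128 AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb
FROM sys.database_files;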

I have tried making a backup and restoring it into a new DB created with a low initial size; no joy, as the initial size gets overwritten.

I have tried:

ALTER DATABASE SMS2HazSub MODIFY FILE
    (NAME = 'SMS2_Data', SIZE = 20000)

which errored, saying "MODIFY FILE failed. Specified size is less than current size."

I have done the shrink, then changed the recovery model from FULL to SIMPLE and back again. No joy.

Any ideas??

ADDITION: I thought it might be to do with some TEXT fields. We have about 6 across the system, so as a test I dropped them all and then shrank the file; still no change.

ADDITION 2: Cheers everyone for your answers; no single answer has sorted the problem. The only option left is to re-import the data into another DB. This is not practical, as it would have to be done on the live DB, which carries too much risk. I've no idea what will happen to the bonus points. I can't pick an answer, as nothing has given me a solution to my problem.

ADDITION 3: I've tried lots of things and nothing has worked. Thanks all for your help, but I've just got to accept that it's going to be 30 GB at least.
