[how to] How do I create rows of test data in each table?
- How do I create rows of test data in each table?
- Database structure for tags?
- Percona Xtradb cluster only replicating DDL statements
- Table constraints to limit value insertions based on other values in a row
- MySQL Insert Row With Foreign Key Equal to ID - Circular Reference
- packages that PHP needs to connect to MySQL
- MySQL Master/Slave without populating tables
- Add new shard - always best?
- Sudden increase in log_file_sync waits
- "relation does not exist" trying to import mysql dump into postgres
- How do I add an index column to a database with non-unique columns
- Candidate key = Key = Minimal Superkey?
- import openoffice .ods file with specific structure like date and value fields
- Gathering data from independent databases to a central one
- Performance of large queries on low bandwidth connections
- Performance of database with transactional data
- MySQL : Query WHERE clause and JOIN
- MySQL Federated tables and Triggers
- Re-enable Windows Authentication in SQL Server
- TokuDB/InnoDB Question
- Auditing specific database events in SQL Server
- Large INSERTs performance optimization
- MySQL Hanging Completely when `ALTER TABLE... ENABLE KEYS`
- How can I convert an Oracle dump file into SQL Server?
- MySQL users corrupt
- I have multiple sources INSERTing into a MySQL innodb table. My periodic aggregation script never makes accurate aggregates. Why?
- MySQL ignoring some params from my.cnf
- PostgreSQL is running locally but I cannot connect. Why?
- Why are numbers tables "invaluable"?
How do I create rows of test data in each table? Posted: 17 May 2013 09:28 PM PDT I am currently working on an assignment. Part of the assignment states: "create a minimum of 15 rows of test data in each table and create at least 3 queries that join two tables and return values from both tables." How do I do this? Any suggestions are much appreciated. Thanks, M
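A minimal sketch of what the assignment asks for, using two invented tables (all names and values here are assumptions, not from the question):

```sql
-- two toy tables with a foreign key between them
CREATE TABLE students (
  student_id INT PRIMARY KEY,
  name       VARCHAR(50)
);
CREATE TABLE enrollments (
  enrollment_id INT PRIMARY KEY,
  student_id    INT REFERENCES students(student_id),
  course        VARCHAR(50)
);

-- test rows (repeat in the same style until each table has at least 15)
INSERT INTO students VALUES (1, 'Ann'), (2, 'Ben'), (3, 'Cam');
INSERT INTO enrollments VALUES (1, 1, 'SQL 101'), (2, 2, 'SQL 101'), (3, 3, 'Calculus');

-- a query that joins the two tables and returns values from both
SELECT s.name, e.course
FROM   students s
JOIN   enrollments e ON e.student_id = s.student_id;
```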
Database structure for tags? Posted: 17 May 2013 04:46 PM PDT I posted a thread on Stack Overflow and one person pointed out that my database structure wasn't the best, so now I'm here to ask how to do this in a good and proper way. Currently I have one table called "users", and in that table I have a lot of user info. In "users" I have a column called "tags" where the information inside the column looks like "#dj #personal #coding", and things like that for every user. Is there a better way to store tags describing the user? I have one column called "tags" where the user has stored tags describing him/her, and also a column called "interests" where tags are stored to remember what the user is looking for. Basically, is there any better way to store these tags for each user?
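The usual normalization is a tags table plus a junction table, rather than a delimited string column. A sketch (MySQL assumed; the tag_kind column, distinguishing "describes me" tags from "interests", is my invention):

```sql
CREATE TABLE tags (
  tag_id INT PRIMARY KEY AUTO_INCREMENT,
  name   VARCHAR(50) NOT NULL UNIQUE
);
CREATE TABLE user_tags (
  user_id  INT NOT NULL,
  tag_id   INT NOT NULL,
  tag_kind ENUM('describes','interest') NOT NULL,
  PRIMARY KEY (user_id, tag_id, tag_kind)
);

-- find all users tagged #dj
SELECT ut.user_id
FROM   user_tags ut
JOIN   tags t ON t.tag_id = ut.tag_id
WHERE  t.name = 'dj' AND ut.tag_kind = 'describes';
```

This makes queries like "users sharing a tag" or "most common interests" simple joins instead of string parsing.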
Percona XtraDB Cluster only replicating DDL statements Posted: 17 May 2013 07:00 PM PDT I'm having a hard time with Percona XtraDB Cluster. The cluster has 3 nodes and is up and running (all tip-top according to the wsrep status indicators). It did the SST perfectly fine from one node to the other 2, but every time I do an update in a table, the write is not propagated to the other nodes, even though DDL statements such as table creation are! Any idea what's going on? I turned all sorts of logging on and I keep seeing "Skipping empty log_xid" messages in the error log, but they don't mention the tables I'm testing, so I'm not sure this is related. I'm talking about InnoDB tables, by the way, that were inherited from MySQL 5.5 (not Percona MySQL). Attaching the wsrep_ status of a single node; the other 2 are very similar. At the very least there should be some logging indicating where the problem is, but the debug-level logs of Galera are silent about these queries! wsrep status: gist.github.com/anonymous/fdb2501ee146fba99c5d dump of mysql variables (I have too many config files): gist.github.com/anonymous/d6bae549ea9d4bae9ea6 error log: gist.github.com/anonymous/8050335826568394cbf3
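DML that doesn't replicate while DDL does usually points at a session or binlog setting rather than the cluster itself. A few standard checks worth running on the node taking the writes (schema and table names below are placeholders):

```sql
SHOW VARIABLES LIKE 'binlog_format';   -- must be ROW for Galera to replicate DML
SHOW VARIABLES LIKE 'wsrep_on';        -- must be ON in the session doing the writes
SELECT engine                          -- only InnoDB DML replicates; DDL goes via TOI
FROM   information_schema.tables
WHERE  table_schema = 'mydb' AND table_name = 'mytable';
```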
Table constraints to limit value insertions based on other values in a row Posted: 17 May 2013 08:21 PM PDT Using SQL Server 2008 R2. Is it possible to limit what values are allowed in a column based on other values in the row? For example, myTable:

ID | Test_mode | Active
 1 | 1         | NULL
 2 | 0         | 1
 3 | 1         | 0

Is there a way to either change the value of one of the columns, or throw some kind of error, when an invalid combination is inserted? I hope this makes sense; if not, let me know and I'll update the question.
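A row-level CHECK constraint can enforce this, since it may reference any columns of the same row. A sketch matching the sample data, assuming the rule is "a row in test mode must not also be active" (the constraint name and the rule itself are assumptions):

```sql
-- rejects INSERT/UPDATE where Test_mode = 1 and Active = 1;
-- NULL Active evaluates to UNKNOWN, which CHECK treats as a pass (row 1 is allowed)
ALTER TABLE dbo.myTable
ADD CONSTRAINT CK_myTable_TestMode_Active
    CHECK (NOT (Test_mode = 1 AND Active = 1));
```

For the "change the value instead of erroring" variant, an INSTEAD OF trigger is the usual tool, at the cost of more complexity than a plain constraint.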
MySQL Insert Row With Foreign Key Equal to ID - Circular Reference Posted: 17 May 2013 03:18 PM PDT In a MySQL database I have an events table which has a circular reference to itself with the following columns. Each new row inserted has a circular reference to another row in the table. The problem is: how do I create the first event for each user? I have tried manually inserting the IDs, so I know it is possible for an event to reference itself. For example, if I send an explicit query with id = 1, user_id = 1, event_id = 1, title = 'test', then the row is added fine, and adding other events for that user is then straightforward. The problem is that this solution is only appropriate for testing. In production, when a user wants to create their first event, how can I construct a query that sets event_id to the same value as the new auto-increment value of the id column? To clarify: the first event for a user will throw a foreign key constraint error, as I do not know the event_id that should be set. As requested
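One common workaround (a sketch using the column names from the question; the user_id value is invented): insert with the FK check suspended, then point the row at itself via LAST_INSERT_ID(). Alternatively, make event_id NULLable and treat NULL as "references itself".

```sql
-- create a user's first event so that event_id = id
SET foreign_key_checks = 0;                 -- the placeholder 0 would otherwise fail the FK
INSERT INTO events (user_id, event_id, title)
VALUES (42, 0, 'first event');
UPDATE events
SET    event_id = LAST_INSERT_ID()          -- id assigned by the INSERT above
WHERE  id = LAST_INSERT_ID();
SET foreign_key_checks = 1;
```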
packages that PHP needs to connect to MySQL Posted: 17 May 2013 01:24 PM PDT I want to know the minimum set of packages that PHP needs to connect to MySQL. I know some packages, such as php5-mysql, mysql-common, mysql-client, ... I would also like a short description of the architecture of a PHP-to-MySQL connection; for example, what do php5-mysql, mysql-common, etc. actually do?
MySQL Master/Slave without populating tables Posted: 17 May 2013 01:33 PM PDT I have some Zabbix instances which use MySQL for their backends. For DR purposes, I would like to back up the DBs for each instance using a master/slave configuration, with the slave node sitting at the DR site. However, I don't want to capture trends/historical data; all I want is configuration data. Is there a way to replicate Zabbix's database without populating the tables which hold all of the application's historical monitoring metrics?
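Slave-side replication filters can do this. A sketch for the slave's my.cnf — the patterns assume Zabbix keeps its metric data in its history* and trends* tables, which is worth verifying against your schema version:

```
[mysqld]
# skip the high-volume metric tables; everything else replicates normally
replicate-wild-ignore-table = zabbix.history%
replicate-wild-ignore-table = zabbix.trends%
```

Note the filtered tables will simply stay empty on the slave, so a restore from it recovers configuration but not history.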
Add new shard - always best? Posted: 17 May 2013 12:56 PM PDT In our setup, we currently have a sharded cluster of 3 shards, each shard being a replica set of 3. Our writes are about to go up significantly to implement a new feature, and we know the extra capacity will be necessary. Our writes are basically all upserts (which will likely be updates) and updates where we increment a particular field by 1. Because of the way our data is distributed, not all documents are treated equally; some get their fields incremented a lot more. An alternative solution that I thought could be effective is to have some type of middleman, like a few Redis databases (or some smaller mongods), where we apply the updates first; after about 5 minutes (or via some queueing system), a bunch of workers consume the data and update the actual live cluster with the documents. This would allow update-heavy documents to accumulate their increments and could save our main cluster a ton of writes (exact numbers I will post shortly in an edit). So, bottom line: when is adding another shard not the right solution?
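The accumulate-then-flush idea described above, sketched in SQL terms for concreteness (the live store here is MongoDB; the tables and columns below are invented purely to illustrate the pattern): a staging table absorbs the hot +1 events, and a periodic worker folds them into the main table in one pass.

```sql
-- fold accumulated deltas into the live counters in a single statement
-- (hypothetical schema: counters(doc_id PRIMARY KEY, n), counter_staging(doc_id, delta))
INSERT INTO counters (doc_id, n)
SELECT doc_id, SUM(delta)
FROM   counter_staging
GROUP  BY doc_id
ON DUPLICATE KEY UPDATE n = n + VALUES(n);

-- a real worker would delete only the rows it just read (or swap tables)
TRUNCATE TABLE counter_staging;
```

A document incremented 1,000 times in the window costs the main store one write instead of 1,000.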
Sudden increase in log_file_sync waits Posted: 17 May 2013 03:08 PM PDT I'm on Oracle 11gR2 with a 2-node RAC system, on shared fiber storage to an EMC CLARiiON. Last Friday things went bad, fast. All of a sudden, processes that had run fine for years became very, very slow. I noticed a sudden increase in log file sync waits, and the LGWR process is listed as a blocker for several processes. Nothing changed that Friday that we're aware of. Also, it appears to be on just one node. Statspack reports confirm that log file sync wait time went from around 1 ms to 47 ms! Additionally, Statspack shows this (meaning some of them are waiting a lot): And before, it was this: What can cause this? What should I be checking for?
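One place to look (a sketch, not a diagnosis): the wait-time distribution for the event, which shows whether a few very slow syncs are skewing the average — a long tail usually points at redo log I/O on the storage side rather than commit rate.

```sql
SELECT inst_id, wait_time_milli, wait_count
FROM   gv$event_histogram
WHERE  event = 'log file sync'
ORDER  BY inst_id, wait_time_milli;
```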
"relation does not exist" trying to import mysql dump into postgres Posted: 17 May 2013 08:18 PM PDT environment: Here's the sequence of steps:
At this point the process began, and after tons of output I get "relation does not exist" errors. I checked the mydump.sql file, and it has a CREATE TABLE "pricedata", but obviously this didn't have the desired effect on the postgres db. I'm not sure what I'm supposed to do to get this to work. Any assistance would be greatly appreciated.
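Two common causes when feeding a MySQL dump to Postgres (guesses, since the full log is not shown): earlier MySQL-specific statements in the dump error out, so the CREATE TABLE never actually runs; or identifiers are quoted with different case than later references use. The quoting pitfall in isolation:

```sql
CREATE TABLE "PriceData" (id integer);  -- quoted: case is preserved exactly

SELECT * FROM pricedata;    -- ERROR: relation "pricedata" does not exist
SELECT * FROM "PriceData";  -- works
```

Checking the very first errors in the import output (rather than the final flood) usually identifies which statement failed.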
How do I add an index column to a database with non-unique columns Posted: 17 May 2013 12:04 PM PDT I have a database with non-unique data in its columns. How do I add an index column, or add a primary key column?
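If this is MySQL (an assumption; syntax varies by engine, and the table name is a placeholder), a surrogate auto-increment key can be added to an existing table in one statement:

```sql
-- adds a new first column that numbers existing rows and auto-numbers new ones
ALTER TABLE mytable
  ADD COLUMN id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY FIRST;
```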
Candidate key = Key = Minimal Superkey? Posted: 17 May 2013 11:56 AM PDT I got a little confused by all these keys. As I understand it, candidate key = key = minimal superkey; that is, they all name the same concept. Is this correct or not?
import openoffice .ods file with specific structure like date and value fields Posted: 17 May 2013 01:19 PM PDT How can I import the data while keeping its structure, i.e. date fields and numeric fields (the rest of the fields are text)? When importing into MySQL from an .ods spreadsheet with phpMyAdmin's import function, everything is converted to text fields.
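One workaround (a sketch; the file path, table, column names, and date format below are all assumptions): save the sheet as CSV from OpenOffice, then load it with explicit type conversion so dates and numbers land in properly typed columns:

```sql
-- assumes a CSV export with a header row and columns: date, amount, note
LOAD DATA LOCAL INFILE '/tmp/sheet.csv'
INTO TABLE imported
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
IGNORE 1 LINES
(@d, amount, note)
SET date_col = STR_TO_DATE(@d, '%d.%m.%Y');  -- convert the text date explicitly
```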
Gathering data from independent databases to a central one Posted: 17 May 2013 11:13 AM PDT I need to do the following. In a LAN there are 4 postgres boxes, each having a small postgres database. These servers gather data from the internet and insert it as records in a table in their db. I need to set up a server that will gather the data from each table on those 4 servers and merge the records into one table of the same structure. This table will then be used for centralized processing. The 4 servers fill their tables at about 10 records per minute, and there is a requirement that at least once a minute each server's new data must be propagated to the "master". Data on the servers must be held for some time (~ one month). The tables have a field of type UUID, so uniqueness across all servers is not a problem. But I don't know what would be best practice in my scenario:
The obvious solution of having the servers update the master directly is not an option, because it creates a single point of failure for the 4 servers, and in any case it is a given condition that the servers update their own database instance. Any comments or other solutions? (All the servers are Fedora 17 32-bit / Postgres 9.2.) EDIT: I started investigating replication solutions, but they seem overkill for my case since I don't need any fancy failover or conflict management; but maybe I am mistaken.
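Given the UUID column, a simple pull-based collector may be enough: the central server polls each box once a minute and inserts only rows it hasn't seen. A sketch using the dblink extension (connection string, table, and column names are invented; the overlap window is deliberately wider than the poll interval so nothing is missed):

```sql
-- run once per source server, e.g. from a per-minute cron job;
-- requires: CREATE EXTENSION dblink;
INSERT INTO central_readings (id, payload, created_at)
SELECT t.id, t.payload, t.created_at
FROM   dblink('host=server1 dbname=collector',
              'SELECT id, payload, created_at FROM readings
               WHERE created_at > now() - interval ''10 minutes''')
       AS t(id uuid, payload text, created_at timestamptz)
WHERE  NOT EXISTS (SELECT 1 FROM central_readings c WHERE c.id = t.id);
```

If the master is down, the sources are unaffected, and the next successful poll catches up.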
Performance of large queries on low bandwidth connections Posted: 17 May 2013 11:10 AM PDT I've been running some measurements on bandwidth needs and the effects of latency on query performance against Postgres 8.3 on Windows (waiting for management to approve an upgrade). I tested SELECTs of various quantities of rows, and some INSERTs, at bandwidths of 25 Mbps, 5 Mbps, and 1 Mbps, with latencies of 50, 40, 30, 20, and 10 ms. For selects of 1, 3, and 250 rows, performance was basically the same at each bandwidth, with the same effects of latency. However, for a select of 7100 rows, the 1 Mbps case had identical performance across each latency value. Additionally, while 25 and 5 Mbps performance scaled linearly with row count (28x rows, 28x time), the 1 Mbps queries are at about 44x. So I was curious whether Postgres has some built-in throttling that would mask the effects of latency. This may be more of a networking question, but I don't think so; it seems to be behavior specific to larger queries on slower connections.
Performance of database with transactional data Posted: 17 May 2013 11:06 AM PDT There are lots of row modifications (several thousand rows per minute) in our PostgreSQL database, which stores transactional data. We have a problem because PostgreSQL's vacuuming process slows down our database performance considerably. I am looking for the most efficient solution to this problem. Could some of these points be a solution?:
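Rather than fighting vacuum globally, per-table autovacuum settings often help: vacuum the hot table sooner but more gently. A sketch (the table name and the exact values are assumptions to tune against your workload):

```sql
-- trigger vacuum at 1% dead tuples (default 20%) but throttle its I/O impact
ALTER TABLE transactions SET (
  autovacuum_vacuum_scale_factor = 0.01,
  autovacuum_vacuum_cost_delay   = 10   -- milliseconds of sleep per cost unit
);
```

Frequent small vacuums generally hurt far less than the huge, overdue ones that kick in with default settings on a high-churn table.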
MySQL : Query WHERE clause and JOIN Posted: 17 May 2013 01:53 PM PDT I'm having an issue using a WHERE clause with a JOIN. I'm just trying to limit the data set by time ("starttime"), but I get a query error that refers to the join. Any ideas? I will be happy to provide more info if required.
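Without the query or error text it's hard to be specific, but the usual working shape is below (a sketch with invented table and column names): qualify starttime with its table alias, and keep the join condition in ON rather than WHERE.

```sql
SELECT c.starttime, a.name
FROM   calls  AS c
JOIN   agents AS a ON a.id = c.agent_id        -- join condition lives here
WHERE  c.starttime >= '2013-05-17 00:00:00'    -- filter lives here
  AND  c.starttime <  '2013-05-18 00:00:00';
```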
MySQL Federated tables and Triggers Posted: 17 May 2013 11:12 AM PDT This is the scenario: I have two MySQL servers (S1, S2) on different machines, with a database on each (DB1, DB2). I have a table (T2) on DB2 that needs to "fetch" rows from another table (T1) on DB1. I have created a temporary table (a base table on DB1 and a FEDERATED table on DB2), so when I insert a row into T1 which complies with some requirements, I copy it to my DB1 temporary table. That's OK. Through the federated table on DB2 I can read the rows I need for T2. I supposed that with a trigger on the federated temporary table of DB2 it would be possible to insert those rows into T2, but the trigger never fires. Any suggestions?
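The likely explanation: a trigger fires only for statements executed on the server where it lives, and rows arriving on DB1 never execute any statement on S2, so the trigger on the FEDERATED table has nothing to react to. One workaround is polling with the event scheduler (a sketch; table and column names are invented, and event_scheduler must be ON):

```sql
CREATE EVENT copy_new_rows
ON SCHEDULE EVERY 1 MINUTE
DO
  INSERT INTO t2 (id, payload)
  SELECT f.id, f.payload
  FROM   federated_tmp AS f
  LEFT JOIN t2 ON t2.id = f.id
  WHERE  t2.id IS NULL;   -- copy only rows not yet seen
```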
Re-enable Windows Authentication in SQL Server Posted: 17 May 2013 02:34 PM PDT A former employee disabled Windows Authentication on our server. Now I'm not able to access SQL Server, even though I have Administrator access to the machine. I need to reset the sa password. I tried logging in using single-user mode as Remus described, but I get the following error:
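Once a connection does succeed (single-user mode with -m, or the Dedicated Admin Connection, run as a local Administrator), the repair itself is short. A sketch (choose your own password; the registry path shown is the standard instance-relative one that xp_instance_regwrite resolves):

```sql
-- switch the instance back to mixed mode (LoginMode 2), then fix sa;
-- a service restart is needed for the registry change to take effect
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE',
     N'Software\Microsoft\MSSQLServer\MSSQLServer',
     N'LoginMode', REG_DWORD, 2;
ALTER LOGIN sa ENABLE;
ALTER LOGIN sa WITH PASSWORD = 'N3w-Str0ng-P@ssword';
```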
TokuDB/InnoDB Question Posted: 17 May 2013 11:22 AM PDT What's the faster scheme for inserting data? It's okay to answer for just one engine.
Auditing specific database events in SQL Server Posted: 17 May 2013 12:54 PM PDT For auditing purposes, we have a requirement to enable auditing for these events:
Changes made by our application are stored in table columns (CreatedBy, ModifiedBy, CreatedOn, ModifiedOn), and row changes are stored as XML in a specific table, so we won't need to log changes made by our application. Previously we had C2 Audit Mode temporarily enabled, but due to data volume and performance considerations, and having these requirements in mind, we considered it excessive and disabled it. Enabling C2 audit mode is fairly easy; how can I configure the database to perform this kind of logging? Additional notes:
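SQL Server Audit is the usual middle ground between C2 and nothing. A sketch (the audit names and file path are assumptions, and the action groups shown are examples — the exact groups depend on the event list above; note that in 2008 R2, database-level audit specifications require Enterprise Edition):

```sql
CREATE SERVER AUDIT AppAudit
TO FILE (FILEPATH = 'D:\Audit\');

CREATE SERVER AUDIT SPECIFICATION AppAuditSpec
FOR SERVER AUDIT AppAudit
    ADD (FAILED_LOGIN_GROUP),
    ADD (SUCCESSFUL_LOGIN_GROUP)
WITH (STATE = ON);

ALTER SERVER AUDIT AppAudit WITH (STATE = ON);
```

Unlike C2, this records only the selected action groups, which keeps volume and overhead proportional to what the requirement actually demands.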
Large INSERTs performance optimization Posted: 17 May 2013 01:04 PM PDT I have 15 Amazon AWS EC2 t1.micro instances which simultaneously populate an Amazon RDS MySQL db.m2.xlarge database with data, using large INSERTs (40,000 rows per query). The queries are sent continuously. The table is InnoDB with two INT columns, and there is an index on both columns. CPU utilization of the RDS instance is about 30% while receiving data. When I have one EC2 instance, the speed is orders of magnitude faster than when I run 15 instances simultaneously; the 15-instance group works slower and slower until the speed becomes totally unsatisfactory. How can I optimize performance for this process? UPDATE: I need 2 indexes because it is necessary for me to fetch data by each of the two columns. The 15 parallel instances insert ~121,000,000 rows in 2 hours, but I am sure that it can be much faster.
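Two generic levers for this pattern (a sketch; the table and values are invented, and RDS restricts which server settings you can change): make each transaction carry many rows, and consider whether one of the two indexes can serve as the primary key, since every secondary index multiplies InnoDB's write amplification under concurrent load.

```sql
-- batch several multi-row INSERTs per commit instead of autocommitting each one
SET autocommit = 0;
INSERT INTO t (a, b) VALUES (1, 2), (3, 4) /* ... thousands of rows ... */;
INSERT INTO t (a, b) VALUES (5, 6), (7, 8) /* ... */;
COMMIT;
```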
MySQL Hanging Completely when `ALTER TABLE... ENABLE KEYS` Posted: 17 May 2013 12:39 PM PDT I know very little about database administration, but I have to deal with some very large tables on my site. This server has 64GB of RAM and an Intel Core i7-3820 (4 x 3600 MHz). Most of everything it does is MySQL. I use half MyISAM and half InnoDB tables. I have a couple of tables in MyISAM with billions of rows. Every day I have a script which disables keys, adds a few million more rows, then enables keys again. The ENABLE KEYS step hangs the entire server. Please also advise me on how to set up the my.cnf file to fix this issue and optimize rebuilding these indexes as fast as possible. Someone told me to increase key_buffer_size, but I'm unsure whether this is good, as everyone seems to have a different opinion. Currently it looks like this: MySQL version: UPDATE: I've started a bounty. I changed some of the my.cnf settings (also updated in this post). Then when I tried to rebuild the indexes on the large table, it started out with: Please help me optimize this! It's supposed to run every day but currently takes several days just to rebuild the indexes. Here are some other variables I was asked for, which I do not understand but might help you help me:
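The usual culprit for multi-day ENABLE KEYS runs (an educated guess; the values below are assumptions to size against your disk and RAM): when the temporary sort space is too small, MyISAM falls back from "Repair by sorting" to the vastly slower "Repair with keycache". Watch SHOW PROCESSLIST during the rebuild to see which path it takes.

```sql
-- give the sort path room to work so ENABLE KEYS can use "Repair by sorting"
SET GLOBAL myisam_max_sort_file_size = 200 * 1024 * 1024 * 1024; -- temp-file cap
SET GLOBAL myisam_sort_buffer_size   = 4 * 1024 * 1024 * 1024;   -- in-memory sort
```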
How can I convert an Oracle dump file into SQL Server? Posted: 17 May 2013 09:08 PM PDT I want to ask about converting Oracle dump files into SQL Server. I've searched for related technologies, such as Oradump to SQL Server. Do you have another suggestion to solve this? Open source ones, I mean. Thanks for both of your responses. I see how difficult it will be, but is there any other way to convert an Oracle dump file? All of the converter tools seem to require a connection to a database server. I'd be thankful if you could suggest another tool. Thanks anyway.
MySQL users corrupt Posted: 17 May 2013 11:08 AM PDT I have a strange situation here: from time to time I cannot log in with any of my MySQL users. I can't even make a mysqldump. So I started searching in the MySQL files and found that: I searched about the problem and found that there was a similar bug in MySQL, but in the older versions (4.x). I'm running 5.5. Any ideas? Thanks!
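Since the privilege tables in 5.5 are MyISAM and the symptoms suggest corruption, the built-in checks are a cheap first step (standard statements; whether they apply depends on what the elided file inspection actually showed):

```sql
CHECK TABLE mysql.user;        -- report corruption in the privilege table
REPAIR TABLE mysql.user;       -- attempt an in-place repair (MyISAM only)
FLUSH PRIVILEGES;              -- reload the grant tables afterwards
```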
I have multiple sources INSERTing into a MySQL innodb table. My periodic aggregation script never makes accurate aggregates. Why? Posted: 17 May 2013 12:08 PM PDT I apologize in advance if this is a repeat; I'm not really sure how to properly ask for what I'm running into. I have a large InnoDB table set up. I have 3 sources that all INSERT concurrently at a moderately high volume (300-500 INSERTs/s). I have a Perl script running on a cron job every 5 minutes that aggregates data since the last time it ran. It keeps track of the last record it processed by storing the auto_increment value of the last row in a metadata table. The aggregates are always off, but not by much. This has been so frustrating because it's just plain simple math (SELECT account_id, SUM(sold) ... GROUP BY account_id). I have a suspicion that it has something to do with the transaction isolation (REPEATABLE READ). I recently found FlexViews, which looks very cool and might address my problem. But I was wondering if anyone could:
I'm like 95% sure the auto_increment thing along with transaction commit ordering is hosing me up, but I don't understand enough about DBs to really ask the right question. Oh, one thing to note: I've already checked the field types; this issue isn't the result of rounding.
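That suspicion is probably right: auto_increment values are handed out at INSERT time, but rows only become visible at COMMIT time, so a script that records the highest id it saw can permanently skip lower ids whose transactions committed later. One simple defense (a sketch; the table and created_at column are invented, account_id and sold come from the question) is to aggregate only rows older than a safety window:

```sql
-- rows newer than 1 minute may still have uncommitted neighbors with
-- smaller ids, so leave them for the next run
SET @last_run = '2013-05-17 12:00:00';  -- in practice, read from the metadata table
SELECT account_id, SUM(sold)
FROM   sales
WHERE  created_at >= @last_run
  AND  created_at <  NOW() - INTERVAL 1 MINUTE
GROUP  BY account_id;
```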
MySQL ignoring some params from my.cnf Posted: 17 May 2013 12:12 PM PDT I have a virtual box with 2 installed instances of MySQL. First instance: Second instance: I can connect to both. The first instance is fine; the second is not. MySQL ignores some params from its my.cnf. I need to make MySQL read all params from my.cnf. I have tried a few things, including moving the config file, with no luck. Could someone guide me?
PostgreSQL is running locally but I cannot connect. Why? Posted: 17 May 2013 01:49 PM PDT I recently updated my machine from Mac OS X Lion (10.7.4) to Mountain Lion (10.8), and I think it borked my PostgreSQL installation. It was installed originally via Homebrew. I'm not a DBA, but I'm hoping someone can tell me how to troubleshoot this. I am unable to connect (but was able to before Mountain Lion): But Postgres is still clearly running: And it's responding to queries (both to a test db and the development db) from a local Rails app. There appears to be no … How can I troubleshoot this?
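A frequent cause after OS X upgrades (a guess, since the error text is elided here): the psql client and the server disagree about where the Unix socket lives. If the server can be reached another way — for instance through the Rails app — these show what it expects (the socket parameter name applies to 9.2 and earlier):

```sql
SHOW listen_addresses;         -- which TCP interfaces the server binds
SHOW unix_socket_directory;    -- where the server creates its socket file
SHOW port;                     -- the port both sides should agree on
```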
Why are numbers tables "invaluable"? Posted: 17 May 2013 12:53 PM PDT Our resident database expert tells us that numbers tables are invaluable. I don't quite understand why. Here's a numbers table: Per the blog post, the rationale given is:
But I don't understand what those uses are, exactly. Can you provide some compelling, specific examples of where a "numbers table" saves you a ton of work in SQL Server, and why we should have them?
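One classic example (a sketch, assuming the numbers table is called Numbers with a column n starting at 0): expanding a date range into one row per day without a loop or cursor, e.g. to left-join against sales data and expose days that have no rows at all.

```sql
-- every day in May 2013, whether or not any data exists for it
SELECT DATEADD(DAY, n, '20130501') AS the_day
FROM   Numbers
WHERE  n < 31
ORDER  BY n;
```

The same set-based trick splits delimited strings, finds gaps in sequences, and generates test data — anywhere a procedural loop would otherwise creep into SQL.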