[how to] ER Schema "Optimisation"
- ER Schema "Optimisation"
- Classifieds Database Design
- DB2 10.1 Client throws SQL0552N running a script
- Which Postgresql Replication Solution to Use? (Asynchronous Multimaster / Bucardo?)
- SSRS partial printing issue
- how to create a data warehouse with Kettle ETL and OLAP?
- SSRS 2008 parameters question
- Oracle no privileges on tablespace USERS
- "Restore With Replace" or drop/restore with Instant Initialization on?
- MySQL Lat/Lon Analytics [on hold]
- Normalizing nearly identical tables
- SQL Server: deadlocked on lock communication buffer resources
- How do I efficiently get "the most recent corresponding row"?
- Plan cache memory: parameterized SQL vs stored procedures
- Percona Xtradb Cluster : How to speed up insert?
- mysql innodb space x did not exist in memory
- Is there any rule of thumb to optimize sql queries like this
- MySQL shutdown unexpectedly
- Why are these two INSERTs deadlocking? Is it the trigger? What does this deadlock trace 1222 log tell me?
- xbase sql query for limiting the output
- Looking for a database-design to model a availability problem (large data sets are expected) [on hold]
- SqlPackage does not pick up variables from profile
- Speeding up mysqldump / reload
- Database Mail sending functionality not working on local system
- createdb: could not connect to database postgres: FATAL: could not write init file
- Repeated values in group_concat
- MySQL PDO Cannot assign requested address
- How much data is needed to show MySQL Cluster's performance scaling vs. InnoDB
- SQL Server: index creation date
ER Schema "Optimisation" Posted: 09 Sep 2013 07:20 PM PDT I am trying to make an ER schema for a person. The person will have a Current Address and a Permanent Address. So far, both the Current and Permanent Address are composite attributes, and they have the same sub-attributes. Picture: http://www.flickr.com/photos/101617879@N08/9715094700/ As the picture shows, one is pretty much a copy of the other's attributes. How can I combine them or otherwise make this better?
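If this ER model were mapped to tables, one common way to avoid duplicating the address attributes is to pull the address out into its own entity and tag each row with its role. A minimal sketch; all table and column names here are assumptions, not taken from the diagram.

    -- One reusable address entity instead of two copies of the same attributes
    CREATE TABLE person (
        person_id INT PRIMARY KEY,
        full_name VARCHAR(100) NOT NULL
    );

    CREATE TABLE address (
        address_id   INT PRIMARY KEY,
        person_id    INT NOT NULL REFERENCES person (person_id),
        address_type VARCHAR(10) NOT NULL,  -- 'CURRENT' or 'PERMANENT'
        street       VARCHAR(100),
        city         VARCHAR(60),
        state        VARCHAR(60),
        postal_code  VARCHAR(20),
        UNIQUE (person_id, address_type)    -- at most one address of each type per person
    );

In ER terms this amounts to a single composite Address attribute (or a weak Address entity) used twice via a typed relationship, rather than two separate composite attributes with identical parts.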
Classifieds Database Design Posted: 09 Sep 2013 07:40 PM PDT I have always worked with CMSs, but I am trying to get into using frameworks like Laravel and Yii. My main issue is that when working with CMSs, I didn't have to think much about the database design since it was done for me. I have my plan drawn out on paper, but I am not really sure where to go from here. I am trying to develop a Craigslist clone, but a little more specific. I have Googled all over for designs, and this is currently what I have. However, I want certain categories to have specific fields. Some categories may have fields in common with other categories, but not all categories are the same. For example: Those are just two examples, but I have a huge list of categories and the required fields for each category. My current plan is to load all of these fields into the ad table. What effect will this have on performance? At some point there could be 60 fields attached to the ad table, but only 5-10 may be filled at a time, and the others would be empty/NULL. What is the best way to go about associating images with ads? I was thinking of just creating an assets folder, creating subfolders based on the ad id, and uploading images to the subfolder of the corresponding ad id. Something like... What's the best way to set up this kind of database? Would sticking to MySQL be best for this? What if I want some states to have certain categories but not others?
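Regarding the 60-mostly-NULL-columns concern, one commonly suggested alternative is to keep only the universal columns on the ad table and move category-specific fields into lookup tables (an EAV-style layout). A rough MySQL sketch; every name below is invented for illustration.

    CREATE TABLE category (
        category_id INT PRIMARY KEY,
        name        VARCHAR(60) NOT NULL
    );

    CREATE TABLE field (
        field_id INT PRIMARY KEY,
        name     VARCHAR(60) NOT NULL          -- e.g. 'mileage', 'bedrooms'
    );

    -- Which fields a category requires
    CREATE TABLE category_field (
        category_id INT NOT NULL,
        field_id    INT NOT NULL,
        PRIMARY KEY (category_id, field_id)
    );

    -- Values actually filled in for a given ad
    CREATE TABLE ad_field_value (
        ad_id    INT NOT NULL,
        field_id INT NOT NULL,
        value    VARCHAR(255),
        PRIMARY KEY (ad_id, field_id)
    );

The trade-off is that searching and reporting need extra joins or pivots, so a wide, sparse ad table can still be the simpler choice if the field list is stable; NULL columns themselves are cheap in most storage engines.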
DB2 10.1 Client throws SQL0552N running a script Posted: 09 Sep 2013 05:05 PM PDT I am trying to run a SQL script on a remote database using DB2 10.1 on AIX 6.1. I have granted my admin user (Adm101) on the database server SECADM. But when I run my script on the remote server with the DB2 client (Client101) I get: Adm101 is listed in the database but Client101 is not. How can I create a table from the remote client?
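SQL0552N generally means the connecting authorization ID lacks the privilege needed for the statement it tried to run. If the goal is simply to let Client101 create tables over the remote connection, a minimal sketch of the grants usually involved looks like this, run by a user holding SECADM or DBADM; the schema name is a placeholder.

    -- Let the client authorization ID connect and create tables
    GRANT CONNECT ON DATABASE TO USER client101;
    GRANT CREATETAB ON DATABASE TO USER client101;

    -- Plus the right to create objects in the target schema, if one is used
    GRANT CREATEIN ON SCHEMA myschema TO USER client101;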
Which Postgresql Replication Solution to Use? (Asynchronous Multimaster / Bucardo?) Posted: 09 Sep 2013 03:54 PM PDT First, I'm not a DBA, so pardon me if any of this question seems "off." I've written a peer-to-peer multiplayer game (the client) which connects to one of multiple servers for matchmaking. Currently, there is only one server (a linode, let's call it Server 1) which runs the game's custom matchmaking process and PostgreSQL 8.4 (I will be upgrading this to 9.1, 9.2 or 9.3 if necessary). The matchmaking process uses libpq asynchronously for all SQL statements. Statements are not too complex, so load balancing is not an issue. I plan to add more linodes (call them Servers 2, 3, 4, etc.) that run the matchmaking process and PostgreSQL as necessary. The challenge is that I want high availability for all clients. If Server 1 is unreachable, then Server 2 can be used instead, with access to all the same data. The original plan was to have all servers connect to Server 1's database and asynchronously send SQL statements via libpq. The problem there is that if Server 1 is ever temporarily offline or unreachable, then every other server will fail. The "simplest" solution I can imagine is for each of the servers to completely mirror the database. If Server 1 is down, clients can connect to Server 2, which reads and writes to its own database, replicating any changes to Servers 3 and 4 immediately, and to Server 1 once it comes back online. In this fashion, every server would hold an entire "mirrored" copy of the database. After reading through the introductory sections of PostgreSQL 9.3's documentation on replication, it seems like the way to implement this solution would be asynchronous multi-master replication. (Is Bucardo the only choice here?) The thing I'm worried about with asynchronous replication is SQL inserts. When a new client plays for the first time, a player database entry is created. If Servers 2, 3, and 4 are online and Server 1 is offline, will there be any issues with 1, 3 or 4 if 2 inserts a new player row? (Imagine 1 coming back online and immediately trying to insert another player row.) Is asynchronous multi-master the right way to go for the above-mentioned scenario? Or is there a simpler or easier solution that I am overlooking? Perhaps one that doesn't require middleware, but just uses PostgreSQL 9.3's built-in replication?
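On the insert question specifically: with asynchronous multi-master, two nodes can hand out the same sequence value for a new player row and then conflict when the changes replicate. One common mitigation, shown here purely as an illustration and not as a Bucardo requirement, is to use keys that cannot collide across nodes, such as UUIDs.

    -- Requires the uuid-ossp extension (PostgreSQL 9.1+); table and column names are invented
    CREATE EXTENSION IF NOT EXISTS "uuid-ossp";

    CREATE TABLE player (
        player_id  uuid PRIMARY KEY DEFAULT uuid_generate_v4(),
        name       text NOT NULL,
        created_at timestamptz NOT NULL DEFAULT now()
    );

Another option is to give each node its own sequence range, for example by creating the sequence with an INCREMENT equal to the number of nodes and a different START value on each node.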
SSRS partial printing issue Posted: 09 Sep 2013 07:34 PM PDT We have SSRS 2005 that publishes reports. The reports are often as big as 1000-3000 pages. The problem is that when users try to print a report, it prints some of the pages and then stops, and this happens randomly. For instance, 10 people can print at approximately the same time and 4 of them could face the problem. When I look at the event log it shows an ASP.NET warning that says:
In a development setting, I tried upgrading it to SSRS 2012 and moved all the reports. But I got the same error message:
And the error log is:
What I have already done is make changes to the config file and set ... Do you know why this is happening, and why it happens so randomly?
how to create a data warehouse with Kettle ETL and OLAP? Posted: 09 Sep 2013 01:51 PM PDT I'm building a data warehouse for an ERP system written in Java, and I have some doubts. First, I'm using Kettle (ETL) to transform the data from the transactional tables and store it in a separate database for the data warehouse. For example, I join the data in the invoices, debit notes and credit notes tables from the transactional database, format it, and save it in the sales table of the data warehouse database. Data in these tables can be modified or deleted within the past year, so the ETL checks and updates data from the last year, scheduled with Kitchen (a cron job). For that reason the ETL only runs occasionally. But if I want to generate a sales report on demand, how do I show up-to-date data without slowing the report down? If I run the ETL right before the report, I add a big delay to it. Second, in what cases should I use OLAP, and for what? Thanks
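As a concrete illustration of the transformation described above, the sales-fact load might boil down to something like the statement below. All table and column names are guesses, not taken from the actual ERP schema, and :cutoff_date stands for the one-year boundary the Kettle job passes in.

    -- Consolidate invoices, debit notes and credit notes into one sales table
    INSERT INTO dw_sales (document_id, document_type, customer_id, sale_date, amount)
    SELECT invoice_id, 'INVOICE',     customer_id, invoice_date,  total
      FROM invoices     WHERE invoice_date >= :cutoff_date
    UNION ALL
    SELECT note_id,    'DEBIT_NOTE',  customer_id, note_date,     total
      FROM debit_notes  WHERE note_date    >= :cutoff_date
    UNION ALL
    SELECT note_id,    'CREDIT_NOTE', customer_id, note_date,    -total
      FROM credit_notes WHERE note_date    >= :cutoff_date;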
SSRS 2008 parameters question Posted: 09 Sep 2013 12:43 PM PDT SSRS allows you to create 5 types of parameters: Text, Boolean, Date/Time, Integer and Float. I created a new Integer parameter which allows the user to enter a minimum price and view a report. However, if the user tries to enter, for example, 100.00, the report throws an error. Is there a way to convert that parameter to allow decimal data types?
Oracle no privileges on tablespace USERS Posted: 09 Sep 2013 03:58 PM PDT I have a brand new Oracle database that is giving the error: I have done: Still, a single insert returns that error. Other than disk quota, what else causes the "no privileges on tablespace 'USERS'" error? UPDATE: The Oracle version is 11.2.0.3.0 (11g). I am logging in from the command prompt on the server. So, I run ALTER USER kainaw while connected as SYSDBA. Then, I log out and log in as user kainaw to test: Note: i.test is a table with only a number field. I get the error above. I log out as kainaw, log in as SYSDBA, play with permissions, log out, log in, test, error, log out, log in, ...
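For completeness, the quota-related statements that usually clear this particular error look like the following, run as a DBA; the username and tablespace come from the question, the rest is generic.

    -- Give the user space quota on the USERS tablespace
    ALTER USER kainaw QUOTA UNLIMITED ON users;

    -- Or, more broadly, quota on every tablespace
    GRANT UNLIMITED TABLESPACE TO kainaw;

    -- Check what quota the user currently has
    SELECT tablespace_name, max_bytes
    FROM dba_ts_quotas
    WHERE username = 'KAINAW';

Also worth checking: the quota that matters is the one for the owner of the table's segment, so inserting into i.test requires quota for the owner of schema I on that tablespace, not necessarily for the session user doing the insert.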
"Restore With Replace" or drop/restore with Instant Initialization on? Posted: 09 Sep 2013 07:09 PM PDT What exactly does the restore argument "With Replace" do? I'm looking at needing to restore a database back to a beginning point on a regular basis, and I've been trying to figure out if there are any disadvantages to using Restore With Replace versus Dropping/Deleting the database entirely and restoring it. Will "With Replace" wipe the log files and reset whatever bits might be left in system databases as well? It seems that it would be much quicker, as I don't have to wait for the database to finish dropping (the database in question is around 2TB). I've already checked the TechNet article on the Restore arguments, it doesn't go into this specific question. |
MySQL Lat/Lon Analytics [on hold] Posted: 09 Sep 2013 10:20 AM PDT DB n00b here. What is the best way to store GPS data points along with a product ID (or key) for location-based advertising? Each advertisement will post its location, along with its currently displayed product, to our central server. Our server will then store that data in an analytics table to show customers how often their product ad is displayed. Should I aggregate (sum up) the data points daily or hourly, or just insert them into the database one by one?
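One possible layout, purely as a sketch with invented names, is to store the raw impressions as they arrive and roll them up on a schedule for reporting.

    -- Raw impressions posted by each advertisement unit
    CREATE TABLE ad_impression (
        impression_id BIGINT AUTO_INCREMENT PRIMARY KEY,
        product_id    INT NOT NULL,
        lat           DECIMAL(9,6) NOT NULL,
        lon           DECIMAL(9,6) NOT NULL,
        shown_at      DATETIME NOT NULL,
        KEY idx_product_time (product_id, shown_at)
    );

    -- Daily rollup used for customer-facing reports
    CREATE TABLE ad_impression_daily (
        product_id INT NOT NULL,
        day        DATE NOT NULL,
        views      INT NOT NULL,
        PRIMARY KEY (product_id, day)
    );

    -- Run once a day (e.g. from cron) to summarize yesterday's rows
    INSERT INTO ad_impression_daily (product_id, day, views)
    SELECT product_id, DATE(shown_at), COUNT(*)
    FROM ad_impression
    WHERE shown_at >= CURDATE() - INTERVAL 1 DAY
      AND shown_at <  CURDATE()
    GROUP BY product_id, DATE(shown_at);

Keeping the raw rows preserves the lat/lon detail for future location analysis, while the rollup keeps the customer-facing display counts fast; if volume becomes a problem, old raw rows can be purged once they are summarized.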
Normalizing nearly identical tables Posted: 09 Sep 2013 09:29 AM PDT Background: I'm managing a relatively small database project in which we are adding support for reporting on status updates for items in our product table. I was a bit thrown into this, and I've only got about a month of experience writing SQL. Problem description: At its core we have a central table [product] with a bigint unique key. We now want to record various messages that come in from a satellite application. The messages come in 2 major types (MessageA and MessageB) that are almost identical; MessageB contains an extra column that MessageA doesn't possess. Also of note: there is no overlap between the 2 message_type columns and no columns are NULL. That is to say, both messages have their own set of message_types. MessageA:
MessageB:
What I tried: My initial design was to add 2 tables, one for each new data type, exactly mirroring the datatypes. This "seemed" more "normalized" based on my month or so of SQL experience. But after I started writing a query that tried to combine the data into a report, I couldn't come up with a non-redundant query to build the dataset. My primitive query looked like (pseudocode): I'm a little paranoid about the long-term performance implications of querying [product] twice, since it's our biggest table (it adds up to 1M rows a year, maybe more), the DB runs largely unmaintained off-site on consumer-level hardware for an average life-cycle of 3-5 years between upgrades, and we have had some reports of issues trickle in from our largest sites. These 2 new tables would potentially grow at 3-7 times the rate of [product] (possibly 5 million rows per year or more). I started to think it might be simpler to just have 1 table and make section_number NULL: if section_number = NULL then it is of type A, otherwise it is B. The actual question: Is this a good idea? Should I be worrying about this optimization? Is this even an optimization, or just a more accommodating design? I'm looking for some guidance on whether I should shape the data based on the "input" format or the "output". Normalization is elegant, but at what point should I bend the schema to look like the desired output structure?
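A sketch of the single-table variant being considered; the column names are approximations based on the description above, not the real schema.

    CREATE TABLE product_message (
        message_id     BIGINT NOT NULL PRIMARY KEY,
        product_id     BIGINT NOT NULL REFERENCES product (product_id),
        message_type   INT NOT NULL,
        message_date   TIMESTAMP NOT NULL,
        section_number INT NULL   -- NULL = MessageA-style row, NOT NULL = MessageB-style row
    );

A middle ground that stays closer to normalized is a shared base message table plus a one-column extension table holding section_number only for type-B rows; a report can then LEFT JOIN the extension table once and still touch [product] only a single time.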
SQL Server: deadlocked on lock communication buffer resources Posted: 09 Sep 2013 09:23 AM PDT What could be the possible reason for this specific deadlock type, "lock communication buffer resources" (not deadlocks in general)? Does it indicate that the system is low on memory and has run out of communication buffers? Detailed error: Transaction (Process ID 59) was deadlocked on lock communication buffer resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
How do I efficiently get "the most recent corresponding row"? Posted: 09 Sep 2013 06:39 PM PDT I have a query pattern that must be very common, but I don't know how to write an efficient query for it. I want to look up the rows of a table that correspond to "the most recent date not after" the rows of another table. I have a table, "inventory" say, which represents the inventory I hold on a certain day, and a table, "price" say, which holds the price of a good on a given day. How can I efficiently get the "most recent" price for each row of the inventory table, i.e. I know one way of doing this: and then join this query again to inventory. For large tables, even doing the first query (without joining again to inventory) is very slow. However, the same problem is quickly solved if I simply use my programming language to issue one ... Is there a standard way to do this efficiently? It feels like it must come up often and that there should be a way to write a fast query for it. [EDIT: I'm using Postgres, but an SQL-generic answer would be appreciated]
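Since the question mentions Postgres, here is a hedged sketch of two idiomatic approaches. It assumes columns roughly like inventory(good_id, inv_date, quantity) and price(good_id, price_date, price), which are guesses at the actual names.

    -- DISTINCT ON: the latest price at or before each inventory date
    SELECT DISTINCT ON (i.good_id, i.inv_date)
           i.good_id, i.inv_date, i.quantity, p.price
    FROM inventory i
    JOIN price p
      ON p.good_id = i.good_id
     AND p.price_date <= i.inv_date
    ORDER BY i.good_id, i.inv_date, p.price_date DESC;

    -- LATERAL (PostgreSQL 9.3+): typically plans as one index probe per inventory row
    SELECT i.good_id, i.inv_date, i.quantity, p.price
    FROM inventory i
    CROSS JOIN LATERAL (
        SELECT price
        FROM price
        WHERE good_id = i.good_id
          AND price_date <= i.inv_date
        ORDER BY price_date DESC
        LIMIT 1
    ) p;

Either way, an index on price (good_id, price_date) is what makes the per-row lookup cheap, mirroring the fast per-row lookups you get from application code.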
Plan cache memory: parameterized SQL vs stored procedures Posted: 09 Sep 2013 10:27 AM PDT In making a case to disallow parameterized SQL in my company's development environment, the lead developer related a story about how the last time they used parameterized SQL the server had major performance issues. He said this was because the plan caching ate up almost all of the available memory on the server, and switching to stored procedures cleared up the performance issues. My question: is there a major difference in the memory footprint of a compiled/cached stored procedure and the cached plan for a parameterized SQL statement? I have a guess that they also simplified the number of calls by going to procs, and that probably had as much impact or more than just going to procs by itself, but I don't know.
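Rather than settling this by anecdote, the plan cache footprint of each category can be measured directly. A sketch against the standard SQL Server DMV:

    -- Memory used by cached plans, broken down by object type
    SELECT objtype,
           COUNT(*) AS plan_count,
           SUM(CAST(size_in_bytes AS BIGINT)) / 1024 / 1024 AS total_mb
    FROM sys.dm_exec_cached_plans
    GROUP BY objtype
    ORDER BY total_mb DESC;

Stored procedures show up as 'Proc', parameterized SQL as 'Prepared', and non-parameterized ad hoc text as 'Adhoc'. In most cache-bloat stories the culprit is a flood of single-use 'Adhoc' plans rather than properly parameterized statements, which cache one plan per statement shape much like a procedure does.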
Percona Xtradb Cluster : How to speed up insert? Posted: 09 Sep 2013 09:15 AM PDT I recently installed a 3-node full-master cluster based on Percona XtraDB (very easy install). But now I need to do some tuning to speed up INSERT/UPDATE requests. Currently I run around 100 inserts and around 400 updates every 5 minutes. All of these operations took less than 3 minutes on a single-server architecture. Now, with a 3-node cluster, they take more than 5 minutes. Is there any tuning I can do to speed these operations up? Here is my current cnf configuration: Here is the hardware configuration of the 3 servers: Node#1 Node#2 Node#3 UPDATE: There are currently around 2.4M records (24 fields each) in the table concerned by the INSERT/UPDATE statements (6 fields indexed).
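One Galera-specific adjustment that often helps regardless of my.cnf values is reducing the number of commits, since every commit triggers a certification round-trip across all three nodes. A hedged sketch with invented table and column names:

    -- One certification round-trip for the whole 5-minute batch instead of one per row
    START TRANSACTION;
    INSERT INTO metrics (device_id, recorded_at, value) VALUES (1, NOW(), 42.0);
    INSERT INTO metrics (device_id, recorded_at, value) VALUES (2, NOW(), 17.5);
    -- ... remaining inserts and updates of the batch ...
    UPDATE metrics SET value = 43.0 WHERE device_id = 1 AND recorded_at = '2013-09-09 12:00:00';
    COMMIT;

Keeping each batch to a reasonable size (for example a few thousand rows per transaction) avoids very large write-sets, which Galera handles poorly.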
mysql innodb space x did not exist in memory Posted: 09 Sep 2013 09:33 AM PDT In my InnoDB log I got the errors below. What do they mean, and how do I fix them? Some tables are corrupted, but not all.
Is there any rule of thumb to optimize sql queries like this Posted: 09 Sep 2013 07:31 PM PDT This is my first question here, although this forum has helped me out over a hundred times. I'm having difficulty optimizing a SQL query. It takes hours to execute and the record set is quite large. The query was not written by me. Just to find the bottleneck I tried removing conditional clauses, but that doesn't make any difference. The IDs are already indexed. Can any SQL guru throw some light on it? Is there any room left for fine tuning in the query below? The server hosting the database is DB2. I'm not too experienced in SQL. Thanks as always. Regards, Nuh. This is the query:
MySQL shutdown unexpectedly Posted: 09 Sep 2013 08:15 PM PDT Hello guys, I need your help. My phpMyAdmin is not working. How can I make it work again? I have a very large amount of data and I just want to get my database back. I have the following files. I want to reinstall XAMPP, but is there a way to back up my DB when I can't run it? For example, by copying the 'DBNAME' folder with its files?
Why are these two INSERTs deadlocking? Is it the trigger? What does this deadlock trace 1222 log tell me? Posted: 09 Sep 2013 04:29 PM PDT We are seeing intermittent deadlocks in production when receiving multiple simultaneous API requests. Each request basically culminates in an INSERT statement into the same table, which is where we see the deadlock. I wrote a double-threaded console application that can reliably reproduce the issue by simply executing two API requests simultaneously, but only in production, not in staging. (This leads me to believe that there is something about our staging database -- possibly the volume of data, SQL Server 2012 vs 2005, or index tuning -- that differs from production in such a way that the deadlock is avoided. The code is identical, as I believe is the schema.) Since I can now reproduce the deadlock, I was able to convince my boss to enable trace flag 1222 temporarily, and captured the log below: One thing to note is that there is a trigger on the insert into the relevant table. The trigger is necessary to determine a status code for the overall record, which may depend on sibling records in the same table. For a long time we thought the trigger was the cause of the deadlocks, so we added increasingly aggressive locking hints to the trigger, culminating in the current setup where we do a TABLOCKX, HOLDLOCK on the relevant table(s) before the critical section. We figured this would completely prevent the deadlocks, at the expense of some performance, by effectively serializing all inserts. But it seems that is not the case. As I understand it, something else prior to our exclusive table locks must already be holding a shared or update lock. But what? Other info that might help you help me: The table DomainTransferRANT is heavily indexed. Its primary key is a non-clustered GUID. There is a clustered index on another important INT column. And there are 7 other non-clustered indexes. Finally, there are several foreign key constraints.
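For readers following along, here is a rough sketch of the setup the question describes: enabling the trace flag and serializing the trigger body with table-level hints. The trigger name and body are placeholders, an illustration of the described pattern rather than the actual production code.

    -- Write deadlock graphs to the SQL Server error log
    DBCC TRACEON (1222, -1);

    -- Rough shape of the serialization step inside the trigger, as described above
    CREATE TRIGGER trg_DomainTransferRANT_Status
    ON dbo.DomainTransferRANT
    AFTER INSERT
    AS
    BEGIN
        SET NOCOUNT ON;
        DECLARE @dummy INT;

        -- Intended to take an exclusive table lock and hold it to the end of the transaction
        SELECT TOP (1) @dummy = 1
        FROM dbo.DomainTransferRANT WITH (TABLOCKX, HOLDLOCK);

        -- ... status-code calculation over sibling rows goes here ...
    END;

One thing the 1222 trace may reveal is that each session already holds an intent-exclusive (IX) lock from its own INSERT by the time the trigger requests the exclusive table lock, so two concurrent inserts can each block the other's escalation. Taking sp_getapplock at the very start of the transaction, before any data access, is a common alternative when strict serialization is really wanted.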
xbase sql query for limiting the output Posted: 09 Sep 2013 05:16 PM PDT I want to query my xBase database and limit the output like: But it does not work in xBase.
Looking for a database-design to model a availability problem (large data sets are expected) [on hold] Posted: 09 Sep 2013 06:43 PM PDT I am looking for a database design to store and query information about the availability of cars in a car-sharing community, where users can rent cars provided by other users. The data relevant for this query will be:
Some example queries that should be possible:
The last query is the most difficult. It seems that information about when cars are booked (not available) cannot be stored in just one date field, and it cannot be stored as a string (i.e. something similar to iCalendar), because that would be hard to query. I guess the dates need to be "normalized" somehow. I don't think I'm the first one facing this kind of problem - does anyone know of open-source projects or literature dealing with this topic, or could anyone help with a draft? BTW: large data sets are expected (100,000 to 1,000,000 cars and 10,000,000 or more booking contracts per year).
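A hedged sketch of the usual normalization for this kind of availability problem: one row per booking with a start and end timestamp, instead of encoding a calendar into a single field or string. All names and types here are illustrative.

    CREATE TABLE car (
        car_id   BIGINT PRIMARY KEY,
        owner_id BIGINT NOT NULL,
        model    VARCHAR(100) NOT NULL
    );

    CREATE TABLE booking (
        booking_id BIGINT PRIMARY KEY,
        car_id     BIGINT NOT NULL REFERENCES car (car_id),
        renter_id  BIGINT NOT NULL,
        starts_at  TIMESTAMP NOT NULL,
        ends_at    TIMESTAMP NOT NULL,
        CHECK (ends_at > starts_at)
    );

    CREATE INDEX idx_booking_car_time ON booking (car_id, starts_at, ends_at);

    -- Cars with no booking overlapping the requested period (:wanted_start, :wanted_end)
    SELECT c.car_id
    FROM car c
    WHERE NOT EXISTS (
        SELECT 1
        FROM booking b
        WHERE b.car_id    = c.car_id
          AND b.starts_at < :wanted_end
          AND b.ends_at   > :wanted_start
    );

If the platform ends up being PostgreSQL, range types with an exclusion constraint (a tsrange column and the && overlap operator) are another common way to both model and index the no-overlap rule.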
SqlPackage does not pick up variables from profile Posted: 09 Sep 2013 06:20 PM PDT I want to upgrade a database using a .dacpac and sqlpackage.exe. Here is how I run sqlpackage: The error I get is: * The following SqlCmd variables are not defined in the target scripts: foo. I have verified that the myprofile.publish.xml file does contain that variable: I also verified that the project that creates the dacpac publishes successfully from within Visual Studio using ... What else could I be missing? (I'm using SQL Server 2012)
Speeding up mysqldump / reload Posted: 09 Sep 2013 10:19 AM PDT I am converting a large schema to file-per-table, and I will be performing a mysqldump/reload with --all-databases. I have edited my.cnf and changed "innodb_flush_log_at_trx_commit=2" to speed up the load. I am planning to "SET GLOBAL innodb_max_dirty_pages_pct=0;" at some point before the dump. I am curious to know which combination of settings will get me the fastest dump and reload times. SCHEMA stats: 26 MyISAM tables, 413 InnoDB tables, ~240GB of data. [--opt = --disable-keys, --extended-insert, --quick, etc.] --no-autocommit ?? vs prepending session vars like: "SET autocommit=0; SET unique_checks=0; SET foreign_key_checks=0;" Are the mysqldump options equivalent or not really? Thanks for your advice!
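As far as the documented behaviour goes, --no-autocommit simply wraps each table's INSERT stream in SET autocommit=0 ... COMMIT, so it is very close to the manual prepend, only scoped per table rather than per dump. If you go the manual route, the wrapper usually looks like this; these are ordinary session variables, so they only affect the connection doing the reload.

    -- Prepended before the dump's INSERT statements
    SET autocommit = 0;
    SET unique_checks = 0;
    SET foreign_key_checks = 0;

    -- ... dump contents are replayed here ...

    -- Appended after the dump
    COMMIT;
    SET unique_checks = 1;
    SET foreign_key_checks = 1;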
Database Mail sending functionality not working on local system Posted: 09 Sep 2013 01:20 PM PDT I am using the Database Mail functionality to send mail from a SQL Server 2008 database via the following stored procedure execution: I have tried with my Gmail account profile on my local system and it works properly, but not with my company or Outlook profile. Error message:
Reference What could be the problem? Thanks
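For reference, the call in question is typically along these lines, followed by the views that show why a send failed; the profile name and recipient below are placeholders.

    EXEC msdb.dbo.sp_send_dbmail
         @profile_name = N'CompanyMailProfile',
         @recipients   = N'someone@example.com',
         @subject      = N'Test message from Database Mail',
         @body         = N'If you can read this, the profile and SMTP account work.';

    -- Outcome of recent send attempts
    SELECT TOP (10) mailitem_id, recipients, sent_status, sent_date
    FROM msdb.dbo.sysmail_allitems
    ORDER BY mailitem_id DESC;

    -- Error details for failed items
    SELECT TOP (10) log_id, description, log_date
    FROM msdb.dbo.sysmail_event_log
    ORDER BY log_id DESC;

The description column usually carries the SMTP-level reason, which helps distinguish an authentication, port, or TLS problem on the company/Outlook server from a misconfigured profile or account.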
createdb: could not connect to database postgres: FATAL: could not write init file Posted: 09 Sep 2013 08:20 PM PDT RedHat Enterprise Server 3.0, 32 bits; psql (PostgreSQL) 8.2.3; user: postgres. The server is running: I had just created a new database cluster with initdb, but when I run createdb: Any clues as to the cause and possible solutions to this problem?
Repeated values in group_concat Posted: 09 Sep 2013 04:20 PM PDT I have two tables: the first is the table food, and the second is Activity: For now I'm using the following query: Could you please help me? I need output in the format below:
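A generic illustration of the usual fix for repeated values in GROUP_CONCAT: duplicates typically come from a one-to-many join, and DISTINCT inside the aggregate removes them. The column names below are invented, since the actual ones are not shown above.

    SELECT f.food_id,
           f.food_name,
           GROUP_CONCAT(DISTINCT a.activity_name ORDER BY a.activity_name SEPARATOR ', ') AS activities
    FROM food f
    JOIN Activity a ON a.food_id = f.food_id
    GROUP BY f.food_id, f.food_name;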
MySQL PDO Cannot assign requested address Posted: 09 Sep 2013 02:20 PM PDT Can someone help me with this error? I have a server with a lot of connections per second; out of about 100 connections, a single one gets this error. I've tried this recommendation from Stack Overflow, however it does not solve my problem.
How much data is needed to show MySQL Cluster's performance scaling vs. InnoDB Posted: 09 Sep 2013 09:20 AM PDT I am evaluating MySQL Cluster as a possible replacement for an InnoDB schema. So far, I have tested it with tens of MB of data and found MySQL Cluster slower than InnoDB; however, I have been told MySQL Cluster scales much better. How much data does it take to show a performance benefit for MySQL Cluster vs. an InnoDB schema? Or is there a better way to demonstrate MySQL Cluster's merits? EDIT Perhaps an important note: my cluster is currently a heterogeneous cluster with 4 machines. On each machine, I have given an equal amount of Data and Index Memory: 4GB, 2GB, 2GB, and 1GB respectively. The machines are running i7s and are connected over a Gigabit LAN. NoOfReplicas is set to 2. EDIT This application is a low-usage analytics database, which has roughly 3 tables >= 200M rows and 5 tables <= 10K rows. When we use it, it takes 15 seconds to run our aggregate functions. My boss asked me to research MySQL Cluster to see if we could increase performance, since we thought aggregate functions could run pretty well in parallel.
SQL Server: index creation date Posted: 09 Sep 2013 12:08 PM PDT In SQL Server 2005 and above, how can I find when an index was created?
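There is no creation-date column on sys.indexes, so the usual answers are approximations; a hedged sketch of the two common ones:

    -- 1) Exact, but only for indexes backing PRIMARY KEY / UNIQUE constraints:
    --    the constraint is an object and therefore has a create_date
    SELECT o.name AS table_name, kc.name AS index_name, kc.create_date
    FROM sys.key_constraints AS kc
    JOIN sys.objects AS o ON o.object_id = kc.parent_object_id;

    -- 2) Approximation for other indexes: the statistics date, which matches the
    --    creation date only if the index's statistics have never been updated since
    SELECT OBJECT_NAME(i.object_id) AS table_name,
           i.name AS index_name,
           STATS_DATE(i.object_id, i.index_id) AS stats_date
    FROM sys.indexes AS i
    WHERE i.object_id = OBJECT_ID('dbo.YourTable');   -- placeholder table name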