[how to] Defining a two-way link
- Defining a two-way link
- Is it worth separating columns into multiple tables for a one-to-one relational table?
- How to rebuild / reinstall SSRS (ReportServer, ReportServerTempDB) databases?
- "Row not found at subscriber" with a row filter
- Select unique value whereas the time is highest in the most optimal way
- How to restrict row explosion in join? - Distinct or union?
- How to run a SELECT query within while loop in PHP?
- If I update a column record in a table, will indexes that do NOT have this column in it be affected?
- Using wm_concat to concatenate rows, but in an order determined by another column
- Firebird database performance after server upgrade/restart
- SQL Agent embedded PowerShell script in CmdExec step fails with import-module sqlps
- Database replication using WAMP?
- Can I add a unique constraint that ignores existing violations?
- PostgreSQL constraints on FK
- How can I reset a MySQL table auto-increment to 1 in phpMyAdmin?
- Convert Oracle database to Derby
- Index on foreign key makes query extremely slow
- How should I best handle a rapidly growing database?
- Why would an increase in innodb_buffer_pool_size slow down MySQL?
- How to import a text file with '|' delimited data to PostgreSQL database?
- How to recover/restore corrupted InnoDB data files?
- Is MySQL reliable with 1000 new entries per minute?
- TokuDB not much faster than MySQL
- How can database administrators see my requests to SQL Server?
- How do I check if a constraint exists on Firebird?
- ORA-16000 when trying to perform a select on a read-only Oracle database
- How to run a cold backup with Linux/tar without shutting down MySQL slave?
- SQL to read XML from file into PostgreSQL database
Defining a two-way link Posted: 04 Jun 2013 05:28 PM PDT I have a Up until now, I've used two different methods for this:
I'm not particularly satisfied with either of these solutions. The first one feels messy with that So that's why I'm here. What would you suggest for such a link? Note that I don't need any more information saved with it, I just need two user IDs associated with each other, and preferably some kind of status like |
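One common design for this, sketched below under assumed names: store each link exactly once as an ordered pair (user_lo &lt; user_hi) together with a status column. The CHECK plus the primary key make the relationship inherently two-way and duplicate-free, so (1, 2) and (2, 1) can never coexist. A runnable sketch using SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
-- Store each two-way link exactly once: the CHECK forces a canonical
-- ordering of the pair, so (1,2) and (2,1) map to the same row.
CREATE TABLE friendship (
    user_lo INTEGER NOT NULL REFERENCES users(id),
    user_hi INTEGER NOT NULL REFERENCES users(id),
    status  TEXT NOT NULL DEFAULT 'pending',   -- e.g. pending/accepted
    PRIMARY KEY (user_lo, user_hi),
    CHECK (user_lo < user_hi)
);
""")

def link(conn, a, b, status="pending"):
    lo, hi = sorted((a, b))          # normalize the pair before insert
    conn.execute("INSERT OR REPLACE INTO friendship VALUES (?, ?, ?)",
                 (lo, hi, status))

link(conn, 2, 1, "accepted")
link(conn, 1, 2, "accepted")          # same pair, stays a single row
rows = conn.execute("SELECT * FROM friendship").fetchall()
print(rows)                           # [(1, 2, 'accepted')]
```

Queries then ask for a user on either side (`WHERE user_lo = ? OR user_hi = ?`); an index on `user_hi` covers the second half of that lookup.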
Is it worth separating columns into multiple tables for a one-to-one relational table? Posted: 04 Jun 2013 05:14 PM PDT I need to make a decision on database structure: whether to separate one-to-one relational columns into multiple tables linked by one relationship id, or just to put all columns into one table. The number of columns would be around 45, and I need to sort data on different columns in different queries (one sort per query). I will be using the MyISAM storage engine. Furthermore, there will be millions of rows in the table(s). |
How to rebuild / reinstall SSRS (ReportServer, ReportServerTempDB) databases? Posted: 04 Jun 2013 04:30 PM PDT Our server crashed. We got it back up and running; however, the mentioned databases have been corrupted. Is there a programmatic / automatic way of rebuilding or reinstalling the SSRS databases? If not:
|
"Row not found at subscriber" with a row filter Posted: 04 Jun 2013 03:53 PM PDT I had a production issue today where delivery of a handful of update statements failed at the subscriber with "row not found". What's odd about it is that I have a horizontal filter set up on the article in question such that the rows in question shouldn't have been at the subscriber. What's especially odd is that there were many other rows within the same transaction that also qualified for exclusion via the filter that didn't trigger the same error. I got past it by setting the distribution agent to ignore errors. Does anyone have any idea what happened and how I can keep it from happening in the future? |
Select unique value whereas the time is highest in the most optimal way Posted: 04 Jun 2013 03:24 PM PDT Given a simple table with a text and a time field, I want to select X unique values from the text field, where each row contains the highest value for time. The query that meets most of my requirements is: For small tables, this works perfectly. Though in my table (300k+ rows) it makes MySQL crash, due to the subquery. Is it possible to optimize this query? If it cannot be optimized, would it be possible to select the last inserted unique values for text? (The id and time are theoretically uncorrelated, though in 99% of cases a correlation will be found: the higher the id, the higher the time.) Thank you |
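The usual fix for this greatest-value-per-group pattern is to replace the correlated subquery with a single derived table of per-group maxima and join against it; with an index on (text, time), MySQL can often satisfy the derived table with a loose index scan. A runnable sketch (table and column names assumed):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, txt TEXT, tm INTEGER)")
conn.executemany("INSERT INTO t (txt, tm) VALUES (?, ?)",
                 [("a", 1), ("a", 5), ("b", 2), ("b", 9), ("c", 3)])

# Greatest-per-group without a correlated subquery: compute each group's
# maximum once in a derived table, then join rows back against it.
rows = conn.execute("""
    SELECT t.txt, t.tm
    FROM t
    JOIN (SELECT txt, MAX(tm) AS max_tm FROM t GROUP BY txt) m
      ON t.txt = m.txt AND t.tm = m.max_tm
    ORDER BY t.txt
""").fetchall()
print(rows)   # [('a', 5), ('b', 9), ('c', 3)]
```

The correlated form re-runs the subquery per row (O(n) scans); the derived-table form scans the table a bounded number of times regardless of row count.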
How to restrict row explosion in join? - Distinct or union? Posted: 04 Jun 2013 02:10 PM PDT Here are the cardinalities of my tables: I am joining in cross-product, which as you can see outputs The join is in a There are no constraints on the tables, as this is an OLAP data-set. I can however impose some uniqueness constraints. Each table is joined on the same attribute. How do I reduce the size of the output to only show the distinct results, rather than the full cross-product? |
How to run a SELECT query within while loop in PHP? Posted: 04 Jun 2013 03:47 PM PDT Within a but this does not work. I cannot |
If I update a column record in a table, will indexes that do NOT have this column in it be affected? Posted: 04 Jun 2013 02:16 PM PDT In terms of performance if I have a table like so: And then create a non-clustered index like this: If I perform an update to the table, only changing Just curious how far performance touches all indexes? |
Using wm_concat to concatenate rows, but in an order determined by another column Posted: 04 Jun 2013 01:10 PM PDT Let's say I have 3 columns: I would like to concatenate the description for all like How do I concatenate rows in this fashion? |
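Note that wm_concat is undocumented and offers no ordering guarantee; on Oracle 11gR2 and later the supported form is LISTAGG(description, ',') WITHIN GROUP (ORDER BY some_column). A sketch of the semantics in Python, with assumed column names (grp, seq, description):

```python
from itertools import groupby

# Rows of (grp, seq, description) -- hypothetical column names.
rows = [("A", 2, "second"), ("A", 1, "first"), ("B", 1, "only")]

# Equivalent of LISTAGG(description, ',') WITHIN GROUP (ORDER BY seq):
# sort by group key, then by the ordering column, then concatenate.
rows.sort(key=lambda r: (r[0], r[1]))
result = {g: ",".join(r[2] for r in grp)
          for g, grp in groupby(rows, key=lambda r: r[0])}
print(result)   # {'A': 'first,second', 'B': 'only'}
```

The point is that the ordering column (`seq` here) participates in the sort but not in the concatenated output.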
Firebird database performance after server upgrade/restart Posted: 04 Jun 2013 12:02 PM PDT Got a 350 GB database (more than 40M records, plus 0-1000 BLOBs per record in another table). After upgrading Firebird to version 2.1.5 (mainly because of a filesystem cache issue), the database became terribly slow, both for insertion and fetching. How do I restore performance? I tried running some queries to force caching, which was somewhat helpful, and currently left it with |
SQL Agent embedded PowerShell script in CmdExec step fails with import-module sqlps Posted: 04 Jun 2013 12:14 PM PDT SQL Server 2008R2 PowerShell 2.1 I am trying to create a SQL Agent job that dynamically backs up all non-corrupted SSAS databases on an instance without the use of SSIS. In my SQL Agent job, when I create a CmdExec step and point to a PowerShell script file (.ps1) like this: the job executes successfully (or at least gets far enough to only encounter logic or other syntax issues). This approach won't work for a final solution, because there is a requirement to keep the PowerShell script internal to SQL. So I have a different CmdExec step that embeds the PowerShell script like so: However, when executed with the embedded script, the job errors out quickly with the following response:
Why can't I reference the module from an embedded script, but doing so in a ps1 file works just fine? |
Database replication using WAMP? Posted: 04 Jun 2013 02:05 PM PDT I have created a POS system for our corporation. In the HQ we have a WAMP server with the main database, and we also have more than 25 branches across the country. I will set up a WAMP server in each branch, so I can access its database directly when putting WAMP online. I want to set up MySQL replication with all branches, so that every query on any branch also affects the main database at HQ. I tried to test this but found nothing explaining how to do it using WAMP on different PCs. |
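Replication is a MySQL-level feature and is independent of WAMP itself: enable binary logging on the HQ master and give every server a distinct server-id in its my.ini. A config sketch (schema name, ids and paths are illustrative):

```ini
# HQ master (my.ini) -- values are illustrative
[mysqld]
server-id    = 1
log-bin      = mysql-bin
binlog-do-db = pos            # replicate only the POS schema

# Branch slave (my.ini) -- each branch needs its own server-id
[mysqld]
server-id    = 2
relay-log    = relay-bin
```

Each slave is then pointed at the master with CHANGE MASTER TO MASTER_HOST=..., MASTER_USER=..., MASTER_LOG_FILE=..., followed by START SLAVE. Be aware that classic MySQL replication is one-way (master to slaves): branch-local writes that must land in the HQ database need master-master topologies or application-level synchronization, not plain slaves.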
Can I add a unique constraint that ignores existing violations? Posted: 04 Jun 2013 06:45 PM PDT I have a table which currently has duplicate values in a column. I cannot remove these erroneous duplicates but I would like to prevent additional non-unique values from being added. Can I create a I have tried using In this case I have a table which ties licensing information to "CompanyName" EDIT: Having multiple rows with the same "CompanyName" is bad data, but we can't remove or update those duplicates at this time. One approach is to have the This data is queried by company name. For the few existing duplicates this will mean that multiple rows are returned and displayed... While this is wrong, it's acceptable in our use case. The goal is to prevent it in the future. It seems to me from the comments that I have to do this logic in the stored procedures. |
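SQL Server 2008 supports exactly this idea as a filtered unique index, e.g. CREATE UNIQUE NONCLUSTERED INDEX ... ON Licenses(CompanyName) WHERE IsLegacy = 0, assuming a flag column is added to mark the pre-existing duplicates. A runnable sketch of the pattern using SQLite's partial indexes:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE license (
    id      INTEGER PRIMARY KEY,
    company TEXT NOT NULL,
    legacy  INTEGER NOT NULL DEFAULT 0   -- 1 marks pre-existing bad rows
);
-- Two existing duplicates that cannot be removed: flag them as legacy.
INSERT INTO license (company, legacy) VALUES ('Acme', 1), ('Acme', 1);
-- Partial (filtered) unique index: only rows with legacy = 0 must be
-- unique, so old duplicates are tolerated but new ones are blocked.
CREATE UNIQUE INDEX ux_company_new ON license(company) WHERE legacy = 0;
""")

conn.execute("INSERT INTO license (company) VALUES ('Widgets')")      # fine
blocked = False
try:
    conn.execute("INSERT INTO license (company) VALUES ('Widgets')")  # dup
except sqlite3.IntegrityError:
    blocked = True
print(blocked)   # True: new duplicates are rejected, old ones are kept
```

Queries by company name still return the legacy duplicates, matching the stated requirement; only future inserts are policed.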
PostgreSQL constraints on FK Posted: 04 Jun 2013 12:23 PM PDT I am trying to design a (part of a) database which has to accomplish the following:
I try to create a good database design to represent this data, however there are quite a few difficulties. A design I came up with is as followed:
However, there is a problem with this design: a Would it be possible to add a constraint which can check exactly that? If not, what other options are available? |
How can I reset a MySQL table auto-increment to 1 in phpMyAdmin? Posted: 04 Jun 2013 01:01 PM PDT I know that in MySQL at the command line I can reset a table's auto-increment field to 1 with this: I am curious whether there is a way to do this from within phpMyAdmin. Something like a checkbox to reset the auto-increment, or something else along those lines? Not that there is anything wrong with the command-line approach; it's more one of those curiosity things I keep thinking about... Thanks in advance! |
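In MySQL the underlying statement is ALTER TABLE t AUTO_INCREMENT = 1, and phpMyAdmin exposes the same setting on the table's Operations tab (under Table options). A runnable analogue using SQLite, where the counter lives in the sqlite_sequence catalog table instead:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY AUTOINCREMENT, v TEXT)")
conn.executemany("INSERT INTO t (v) VALUES (?)", [("a",), ("b",)])
conn.execute("DELETE FROM t")

# SQLite's analogue of MySQL's  ALTER TABLE t AUTO_INCREMENT = 1:
# the per-table counter is a row in the sqlite_sequence catalog.
conn.execute("DELETE FROM sqlite_sequence WHERE name = 't'")

conn.execute("INSERT INTO t (v) VALUES (?)", ("c",))
print(conn.execute("SELECT id FROM t").fetchall())   # [(1,)]
```

As in MySQL, resetting the counter below existing key values would risk collisions, so it is normally done only after emptying the table.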
Convert Oracle database to Derby Posted: 04 Jun 2013 08:10 PM PDT I need to migrate an existing Oracle database to a Derby one. I want to know if there's a tool, a script, or another way to do that work. It isn't using any of the interesting features of Oracle, as far as I can see from the database information in SQL Developer, except sequences and indexes. Thanks! |
Index on foreign key makes query extremely slow Posted: 04 Jun 2013 01:40 PM PDT We are recently experiencing a tremendous query slowdown with spilled-over temp tablespace. A specific query causes this problem. The queried table ( Only an instance restart solved the issue. Even killing connections did not help. After further investigation of the FKs and the indexes, it turned out that the index on the The problematic part is Now the question is: how can an index cause such a slowdown, and moreover, how can I investigate the cause? It simply doesn't add up for me. Comparative times: normal 10 s, slowdown > 2 min or never returning. |
How should I best handle a rapidly growing database? Posted: 04 Jun 2013 03:31 PM PDT I have a database that I need to maintain. Sadly, the setup and use of that database I can't change much (thanks to some internal politics). It's running on SQL Server 2008 R2. It's only been live for 5 days and has grown from 20GB to upwards of 120GB in that time. (Essentially most of the data gets deleted and then re-imported, but as I say, I can't control that side of things.) I would love to run nightly jobs to shrink the database and reorganise the indexes, but I know that's a long way from best practice and could lead to more problems than I've already got! QUESTIONS
|
Why would an increase in innodb_buffer_pool_size slow down MySQL? Posted: 04 Jun 2013 01:27 PM PDT 5.1.68-cll - MySQL Community Server on CentOS. The system has 32GB of RAM. I increased innodb_buffer_pool_size from 10240M to 15360M (10GB -> 15GB). Time taken for a series of identical operations increased from 720 to 822 seconds (a 14% increase). This was the result of only a single test at each setting, but 4 previous tests performed a few months ago resulted in times between 726 and 740s. I just tried running it again with 8GB, and the time taken was 719s. Why would more memory result in a slower process? EDIT: More details on the process. The process that I'm testing involves emptying some tables and rebuilding them from data from existing tables. I'm not sure if it's using There are no schema definition changes being made. Here is the output of And Edit by RolandoMySQLDBA Please run this query RESULT: and this one RESULT: |
How to import a text file with '|' delimited data to PostgreSQL database? Posted: 04 Jun 2013 09:10 PM PDT I have a text file with It says an error has occurred: Extra data after last expected column. CONTEXT: COPY , line 1: What am I doing wrong here? Column header: Table Schema:

CREATE TABLE medicaldevice1 (
  medical_device_id serial NOT NULL,
  k_number character varying(8),
  applicant character varying(150) NOT NULL,
  contact character varying(50),
  street1 character varying(80),
  street2 character varying(40),
  city character varying(50),
  state character varying(8),
  zip character varying(16),
  device_name character varying(500) NOT NULL,
  date_received character varying(8),
  decision_date character varying(8),
  decision character varying(2),
  review_advise_comm character varying(2),
  product_code character varying(3),
  state_or_summary character varying(16),
  class_advise_comm character varying(2),
  ssp_indicator character varying(25),
  third_party character varying(2),
  expedited_review character varying(4),
  CONSTRAINT medical_device_id_pk PRIMARY KEY (medical_device_id)
) |
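"Extra data after last expected column" usually means the field count on a data line does not match the target column list. Here the serial medical_device_id should be excluded by listing the 19 data columns explicitly, e.g. COPY medicaldevice1 (k_number, applicant, ..., expedited_review) FROM '/path/file.txt' WITH (FORMAT csv, DELIMITER '|', HEADER true) — path and exact column list as in your schema. A quick pre-load sanity check of the file (header and sample row are made up to match the schema):

```python
import csv, io

# The file is '|'-delimited; the table has a serial PK plus 19 data
# columns, so every data line must split into exactly 19 fields.
EXPECTED = 19
sample = io.StringIO(
    "K_NUMBER|APPLICANT|CONTACT|STREET1|STREET2|CITY|STATE|ZIP|DEVICE_NAME|"
    "DATE_RECEIVED|DECISION_DATE|DECISION|REVIEW_ADVISE_COMM|PRODUCT_CODE|"
    "STATE_OR_SUMMARY|CLASS_ADVISE_COMM|SSP_INDICATOR|THIRD_PARTY|EXPEDITED_REVIEW\n"
    "K123456|Acme Corp||1 Main St||Springfield|IL|62701|Widget|20130101|"
    "20130201|SE||ABC|Summary||No|N|\n"
)

reader = csv.reader(sample, delimiter="|")
header = next(reader)
bad = [(n, row) for n, row in enumerate(reader, start=2) if len(row) != EXPECTED]
print("header fields:", len(header), "bad lines:", bad)
```

Any line listed in `bad` (a stray `|` inside a value, or a short row) is what COPY will reject; quoting such values or cleaning the file fixes it.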
How to recover/restore corrupted InnoDB data files? Posted: 04 Jun 2013 02:38 PM PDT A while ago, my Windows 7 system on which a MySQL Server 5.5.31 was running crashed and corrupted the InnoDB database. The weekly backup that's available does not cover all the tables that were created in the meantime, so I would like to recover as much of the data as possible. Right after the crash, I copied the whole MySQL data folder to an external drive. I would like to use this as the starting point for my rescue attempts. In the following I'll describe the steps of my (not yet convincing) rescue attempt so far, and would be thankful for any comments or guidance on how to improve it:
Now to my questions:
Thanks. UPDATE: Additional notes for my later reference |
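A common rescue path for a corrupted InnoDB data directory: work on the copy, start mysqld against it in forced-recovery mode, dump whatever is readable, and rebuild into a fresh instance. A config sketch (paths illustrative; raise the level 1-6 only as far as needed, since levels 4 and above can permanently lose data):

```ini
# my.ini -- point mysqld at the copied data folder and start it in
# forced-recovery mode so crashed pages don't abort startup.
[mysqld]
datadir               = D:/recovery/data    # copied data folder (illustrative)
innodb_force_recovery = 1                   # try 1 first, raise if needed
```

Once the server starts, run mysqldump --all-databases to salvage the tables, then reinstall a clean instance and reimport the dump; forced-recovery mode is for extraction only, not for continued operation.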
Is MySQL reliable with 1000 new entries per minute? Posted: 04 Jun 2013 06:47 PM PDT I have been developing an application that in the WORST case writes 1000 entries each minute into a database, for over a year. I wanted to use MySQL as the DB, but I have read that it becomes unreliable when writing at high data-transfer rates. Is this true? Are 1000 entries considered a high amount of data? What would be such a high amount of data? Would corrupt data mean that I miss one entry, or that I lose the whole table? Thanks |
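For scale: 1000 rows per minute is roughly 17 inserts per second, which is modest for MySQL on commodity hardware. The reliability question is mostly about engine choice and batching: InnoDB is crash-safe per transaction (a crash loses at most the uncommitted batch), while whole-table corruption stories are generally about MyISAM. A sketch of batched, transactional inserts (SQLite stands in for the server so the example runs anywhere):

```python
import sqlite3, time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (ts REAL, payload TEXT)")

# One transaction per batch: a minute's worth of rows in a single commit
# is far cheaper than 1000 autocommitted single-row inserts.
batch = [(time.time(), f"event-{i}") for i in range(1000)]
with conn:                                  # wraps the batch in a transaction
    conn.executemany("INSERT INTO events VALUES (?, ?)", batch)

print(conn.execute("SELECT COUNT(*) FROM events").fetchone()[0])   # 1000
```

The same shape applies with a MySQL driver: accumulate rows, then executemany inside one transaction rather than committing each row.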
TokuDB not much faster than MySQL Posted: 04 Jun 2013 07:55 PM PDT I have converted a MySQL database with 80,000,000 rows to TokuDB. Now when I run: it takes 90% of the time of the normal MySQL request. What do I have to further optimize so that it runs faster? The table definition: |
How can database administrators see my requests to SQL Server? Posted: 04 Jun 2013 06:36 PM PDT I'm a SQL Server 2008 user. I have access to some tables. I need to request a few columns from a table, as I usually do, but I need to do it once every 5 seconds (for example), and the system administrators shouldn't notice my activity. The result of the request is a table with approximately 100 rows. My query contains only a select and a where clause on an indexed column (it is light and executes very fast). As far as I can see in the properties, C2 audit is disabled. Are there any other ways for them to see my activity? Thanks. |
How do I check if a constraint exists on Firebird? Posted: 04 Jun 2013 09:19 PM PDT I'm about to publish a script which will update a lot of Firebird databases all at once. Some will not have this constraint, so I would like to check for the existence of a constraint before I try to drop it. |
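Firebird exposes constraints in the RDB$RELATION_CONSTRAINTS system table, and a conditional drop can be wrapped in EXECUTE BLOCK with EXECUTE STATEMENT (Firebird 2.0+). A sketch that generates such a script; the table and constraint names are placeholders for your own:

```python
# Generate a drop-if-exists script for a Firebird constraint.
# CONSTRAINT and TABLE are hypothetical names -- substitute your own.
CONSTRAINT = "UQ_CUSTOMER_CODE"
TABLE = "CUSTOMER"

script = f"""
EXECUTE BLOCK AS BEGIN
  IF (EXISTS(SELECT 1 FROM RDB$RELATION_CONSTRAINTS
             WHERE RDB$CONSTRAINT_NAME = '{CONSTRAINT}')) THEN
    EXECUTE STATEMENT 'ALTER TABLE {TABLE} DROP CONSTRAINT {CONSTRAINT}';
END
"""
print(script)
```

Because the check runs inside the database, the same script can be shipped unchanged to databases that do and do not have the constraint.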
ORA-16000 when trying to perform a select on a read-only Oracle database Posted: 04 Jun 2013 01:23 PM PDT My application's SQL encounters ORA-16000 when trying to access a read-only Oracle database. This is the query that involves the XMLTYPE; the INTERFACE_CONTENT is a CLOB column: I also did a lot of EXTRACTVALUE() calls on an XML-typed field. The SQL works perfectly if the database is not read-only (read-write). My question is: what is the issue here? Is it related to some missing privileges/grants? |
How to run a cold backup with Linux/tar without shutting down MySQL slave? Posted: 04 Jun 2013 03:23 PM PDT I run the following before tar-ing up the data directory: However, tar will sometimes complain that the The slave machine is in a cold standby machine so there are no client processes running while tar is running. CentOS release 5.6 64bits, MySQL 5.1.49-log source distribution. |
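tar's complaint is typically "file changed as we read it": even with the replication threads stopped and no clients connected, InnoDB background threads (purge, change-buffer merges) can still write to ibdata and the log files. The robust options are shutting mysqld down or taking a filesystem snapshot; a hedged workaround is to wait until the data directory is quiescent before archiving. A self-contained sketch of that wait-then-tar idea (a scratch directory stands in for the real datadir):

```python
import os, tarfile, tempfile, time

# Illustrative stand-ins: a scratch directory plays the MySQL datadir.
datadir = tempfile.mkdtemp()
archive = os.path.join(tempfile.mkdtemp(), "mysql-cold.tar.gz")
with open(os.path.join(datadir, "ibdata1"), "wb") as f:
    f.write(b"\0" * 1024)

def snapshot(path):
    """Map each file under path to its (size, mtime) pair."""
    out = {}
    for root, _, files in os.walk(path):
        for name in files:
            st = os.stat(os.path.join(root, name))
            out[name] = (st.st_size, st.st_mtime)
    return out

# Only archive once two consecutive snapshots of the directory agree,
# i.e. no background thread touched any file in between.
before = snapshot(datadir)
time.sleep(1)
while snapshot(datadir) != before:
    before = snapshot(datadir)
    time.sleep(1)

with tarfile.open(archive, "w:gz") as tar:
    tar.add(datadir, arcname="mysql-data")
print(os.path.exists(archive))   # True
```

This reduces, but does not eliminate, the race; for a guaranteed-consistent copy the server must be down or the copy must come from an atomic snapshot.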
SQL to read XML from file into PostgreSQL database Posted: 04 Jun 2013 07:45 PM PDT How can I write SQL to read an XML file into a PostgreSQL PostgreSQL has a native XML data type with the But I don't see a way to write native PostgreSQL SQL statements to read the content from a filesystem entry and use that to populate an |
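Two common approaches: server-side, pg_read_file() (superuser only, path relative to the data directory) feeding XMLPARSE(DOCUMENT ...); or client-side, reading the file and passing its content as a bound parameter to an INSERT into an xml column. A client-side sketch — SQLite's TEXT column stands in so the example runs anywhere; with psycopg2 against PostgreSQL the INSERT would take the same shape against an xml column:

```python
import os, sqlite3, tempfile
import xml.etree.ElementTree as ET

# Write a sample XML file (stands in for the file to be loaded).
path = os.path.join(tempfile.mkdtemp(), "doc.xml")
with open(path, "w", encoding="utf-8") as f:
    f.write("<report><item id='1'>ok</item></report>")

# Client-side load: read the file, check it parses, insert as a parameter.
text = open(path, encoding="utf-8").read()
ET.fromstring(text)                      # raises if the XML is malformed

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (body TEXT)")
conn.execute("INSERT INTO docs VALUES (?)", (text,))
print(conn.execute("SELECT body FROM docs").fetchone()[0][:8])   # '<report>'
```

The server-side route avoids moving the file through the client, but requires the file to live under the PostgreSQL data directory and the session to have superuser rights, which is why the client-side parameterized INSERT is usually preferred.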