[how to] the value of undo_sql in flashback_transaction_query view is always null
- the value of undo_sql in flashback_transaction_query view is always null
- Does SQL Server check disk space before allocating a new page?
- Contact system database design
- How to select a row on one distinct attribute?
- Errors keeping stored procedures through restore from backup
- How to group by when querying duplicate records?
- [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified
- How do these snapshot isolation level configurations interact on a SQL Server 2005 instance?
- related column contents
- Constraint based rules engine
- How to drop inheritance in Postgresql
- Transaction Log maintenance when switching to Simple Recovery
- Stored procedures vs. inline SQL
- How to find memory usage by table?
- Does the size of the primary key contribute to table size?
- Weird SQL error
- Auto-increment key and Foreign Key as composite for relational database
- Automated query log
- MySQL HDD activity with empty process list
- Why don't databases create their own indexes automatically?
- Is using SUM() twice suboptimal?
- Oracle RAC 11g R2 DB silent install: parameter needed for system class
- Getting "BadImageFormatException" when attempting to connect through Visual Studio to Oracle
- SHOW TABLE STATUS very slow on InnoDB
- Difference between database vs user vs schema
- Designing Simple Schema for Disaggregation of Demand Forecast
- SQL Server BPA 2008R2
- mysql duplicate entry error 1062 when restoring backup
- MySQL optimization - year column grouping - using temporary table, filesort
- Copy Database option not found in SQL Server Management Studio 2008 R2
the value of undo_sql in flashback_transaction_query view is always null Posted: 14 Jun 2013 09:07 PM PDT So I have a table defined as follows: and ran the following update statements: I want to find the undo_sql for the transaction above, so I ran the following statement to find the transaction ID, and then ran the following statement to find the undo_sql statement. As you can see, I got nothing; if I run the following statement, I also get nothing. It looks like undo_sql is always null. Is there anything I didn't configure? Here are the values from v$system_parameter:
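One common cause (not confirmable from the truncated question, so treat this as a hypothesis): FLASHBACK_TRANSACTION_QUERY only populates UNDO_SQL when supplemental logging was enabled at the time the transaction ran. A hedged sketch of what to check and enable:

```sql
-- Check whether minimal supplemental logging is enabled
SELECT supplemental_log_data_min FROM v$database;

-- If it reports NO, enable it (requires a suitably privileged session).
-- Only transactions executed AFTER this point will have UNDO_SQL populated.
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
```

After enabling it, rerun the UPDATE and query FLASHBACK_TRANSACTION_QUERY again for the new transaction ID.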
Does SQL Server check disk space before allocating a new page? Posted: 14 Jun 2013 06:27 PM PDT I have a bet with my old boss. I bet her that SQL Server, when allocating a new extent, always allocates from the buffer pool and never checks to see whether there is some place on disk where the allocation could be stored. Essentially, she contends that SQL Server should check for available space on the LUN before allocating a page. This seems wrong, since I could place my storage on the moon, which would cause some serious latency. I feel that she really wants SQL Server always to bring a page in from disk first and then perform the DML task(s). Here is my "proof" that she is wrong. If you disagree with my "proof", then please most definitely respond with a better one! Let us create a trivial database and table. The database's recovery model will be set to SIMPLE and AUTO_CREATE_STATISTICS will be shut off, in order to minimize log record bloat. Before we begin, let me divulge the version of SQL Server that I am using. SELECT @@VERSION; ------------------------------------------------------------------------------------- Microsoft SQL Server 2012 - 11.0.2100.60 (X64) Developer Edition (64-bit) on Windows NT 6.1 (Build 7601: Service Pack 1) Now, the code... USE master; GO IF DATABASEPROPERTYEX(N'PageAllocDemo' , N'Version') > 0 BEGIN ALTER DATABASE PageAllocDemo SET SINGLE_USER WITH ROLLBACK IMMEDIATE; DROP DATABASE PageAllocDemo; END; GO CREATE DATABASE PageAllocDemo GO USE PageAllocDemo; GO SET NOCOUNT ON; GO -- Set the database to SIMPLE and turn off log generating crapola ALTER DATABASE PageAllocDemo SET RECOVERY SIMPLE; GO ALTER DATABASE PageAllocDemo SET AUTO_CREATE_STATISTICS OFF; GO CREATE TABLE dbo.X ( c1 INT IDENTITY (1,1) ) ON [PRIMARY]; GO Now, let us check how many pages were allocated. I suspect zero, since we have only created a "logical table", in our case an empty heap. -- How many pages are allocated to our table? 
DBCC IND (PageAllocDemo,X,-1); GO Now, clear the log. -- Clear the log CHECKPOINT; GO What is currently in the log? -- What is in the log right now? SELECT * FROM fn_dblog(NULL,NULL); GO /* --------------------------------------- -- Operation -------------- Context --- --------------------------------------- LOP_BEGIN_CKPT LCX_NULL LOP_XACT_CKPT LCX_BOOT_PAGE_CKPT LOP_END_CKPT LCX_NULL */ This is expected, since we are in the SIMPLE recovery model. We will now create an explicit transaction that will insert one and only one record into our table; but before we do that, let us open Process Monitor and filter for our MDF and LDF files as well as the PID for the SQL Server process. Begin the transaction: BEGIN TRAN INSERT INTO dbo.X DEFAULT VALUES; GO Process Monitor shows two writes to the transaction log file. Let us check the log records. -- What is in the log right now? SELECT * FROM fn_dblog(NULL,NULL); /* I omitted all the log records for PFS, GAM, SGAM, etc. --------------------------------------------------------------- -- Operation -------------- Context ------- Transaction ID --- --------------------------------------------------------------- LOP_BEGIN_XACT LCX_NULL 0000:0000030e LOP_BEGIN_XACT LCX_NULL 0000:0000030f LOP_FORMAT_PAGE LCX_HEAP 0000:0000030f LOP_COMMIT_XACT LCX_NULL 0000:0000030f LOP_INSERT_ROWS LCX_HEAP 0000:0000030e LOP_COMMIT_XACT LCX_NULL 0000:0000030e */ I omitted the bitmap and PFS allocations; we can see that a page is allocated and one row is inserted, as one would anticipate. How many pages are allocated to our heap? -- How many pages are allocated to our table? DBCC IND (PageAllocDemo,X,-1); GO /* One IAM page and one data page and nothing else --------------------------------- PageFID PagePID IAMFID IAMPID ------- ----------- ------ ------ 1 264 NULL NULL 1 231 1 264 */ This is as anticipated. We have one IAM page and one data page. Now, our penultimate action is committing the transaction. 
I expect a 512B log block flush to occur at this point. COMMIT TRAN; Let us finish the "proof" with a checkpoint operation. So far, nothing has been committed to the data file, only to the log file. CHECKPOINT; GO Cool, the data pages flushed to disk as expected. My conclusion, from the Process Monitor evidence, is that SQL Server allocates in memory, adds the record in memory, and commits the page to disk, all without checking anything at the storage level. Does anyone object to this hypothesis? If yes, why?
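A hedged supporting check for the "it all happens in the buffer pool first" claim, using the demo's database name from above: before the CHECKPOINT, the newly allocated page should appear in sys.dm_os_buffer_descriptors flagged as dirty (modified in memory, not yet written to the data file):

```sql
-- Count dirty (modified, not-yet-flushed) pages cached for the demo database.
-- Run between COMMIT TRAN and CHECKPOINT; a nonzero count indicates
-- in-memory allocations that have not touched the data file yet.
SELECT COUNT(*) AS dirty_pages
FROM sys.dm_os_buffer_descriptors
WHERE database_id = DB_ID(N'PageAllocDemo')
  AND is_modified = 1;
```

After the CHECKPOINT, the same query should report zero (or near-zero) dirty pages for the database.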
Contact system database design Posted: 14 Jun 2013 03:31 PM PDT I'm designing a database for contacts and I'm looking for some feedback on the current design. What I want:
What I have:
What bothers me the most is that the contact entity has one organization or one person, and also a self-reference (EmployerId) if an organization has employees (which are also contact entities) and vice versa. Is this a problem, or do you see a better way?
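The actual design didn't survive into this digest, but a common pattern for this situation is supertype/subtype tables plus the self-reference. A minimal sketch — all table and column names here are assumptions, not the asker's schema:

```sql
CREATE TABLE Contact (
    ContactId  INT PRIMARY KEY,
    -- Self-reference: an employer is itself a contact (an organization)
    EmployerId INT NULL REFERENCES Contact (ContactId)
);

-- Exactly one subtype row should exist per contact
CREATE TABLE Person (
    ContactId INT PRIMARY KEY REFERENCES Contact (ContactId),
    FirstName VARCHAR(100) NOT NULL,
    LastName  VARCHAR(100) NOT NULL
);

CREATE TABLE Organization (
    ContactId INT PRIMARY KEY REFERENCES Contact (ContactId),
    Name      VARCHAR(200) NOT NULL
);
```

The "one organization or one person" exclusivity can then be enforced with a type discriminator column on Contact plus check constraints, rather than nullable foreign keys in both directions.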
How to select a row on one distinct attribute? Posted: 14 Jun 2013 02:16 PM PDT I want to
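The question body is truncated in this digest, but for the problem the title describes — returning one full row per distinct value of an attribute — a common sketch looks like this (table and column names are assumptions):

```sql
-- One row per name, keeping the row with the lowest id for each name
SELECT t.*
FROM mytable AS t
JOIN (
    SELECT name, MIN(id) AS id
    FROM mytable
    GROUP BY name
) AS d ON d.id = t.id;
```

On engines with window functions, ROW_NUMBER() OVER (PARTITION BY name ORDER BY id) = 1 achieves the same thing; PostgreSQL also offers DISTINCT ON (name).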
Errors keeping stored procedures through restore from backup Posted: 14 Jun 2013 06:22 PM PDT We are running SQL Server 2008 R2 on two identical servers. One, called LIVE, is our production environment. The other, called QA, is our dev environment. In order to check backup integrity and run integrity checks, and also so that our dev environment has reasonably fresh data, we restore LIVE's database backups on QA daily and then run integrity checks. It all works well but for one issue: We need QA to maintain its own versions of all the stored procedures. We develop new SPs and modify existing SPs on QA, and test them there. Only after they have been tested do we push them over to LIVE. In order to retain QA's versions on QA, what I have been doing is scripting the CREATE PROCEDURE commands, which are saved to files. After the databases are restored, all the procedures are dropped. The CREATE PROCEDURE scripts are then executed, recreating all the databases' SPs as they were on QA prior to the restore from backup. For a time this worked well, although the occasional disappearance of a SP raised eyebrows. However, this week the process has not worked, as several SPs disappeared each day, escalating to 20 disappearances yesterday and--ugh--over 100 this morning. A look through the job history log indicates the nature of the problem: And so forth, up to the limit for the log's message. I have run the command that generates the CREATE script for this particular database, and there are no syntax issues in it of any kind. Starting tomorrow and until this is resolved, I will be retaining the scripts as they are generated; unfortunately, they have been deleted after running up to this point, so I cannot rule out the possibility that the script did have syntax errors. The command that generates the CREATE script is this: I have only two ideas as to what might be the issue. 
My leading suspicion is that bcp has a character limit on the fields, which causes db1.sys.sql_modules.definition to be truncated. This would explain why the errors started with The only other idea is that somehow I am not calling bcp correctly. Since I'm only selecting a single field, I turned the default \t field separator off by using -t with no argument. The default row separator \n should be fine. Perhaps I should not be specifying Unicode with -w, or something else is missing. If it makes any difference, the CREATE scripts are executed with I would appreciate any ideas anyone could provide as to why this issue is occurring. I am not opposed to a different approach to persisting SPs across restores--I haven't been able to shake the feeling that this is somewhat of a Rube-Goldberg-like solution.
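If bcp truncation is the culprit, one hedged alternative is to script the definitions with sqlcmd instead; its -y option controls the display width for large variable-length types (check the sqlcmd documentation for the exact limit that -y 0 implies). Database and file names below are assumptions:

```sql
-- Save as script_procs.sql and run with something like:
--   sqlcmd -d db1 -y 0 -h -1 -i script_procs.sql -o qa_procs.sql
SET NOCOUNT ON;
SELECT m.definition
FROM sys.sql_modules AS m
JOIN sys.procedures AS p
  ON p.object_id = m.object_id;
```

A batch separator (GO on its own line) between definitions would still need to be interleaved, since CREATE PROCEDURE must be the first statement in a batch.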
How to group by when querying duplicate records? Posted: 14 Jun 2013 12:51 PM PDT I have a table with duplicates and I want to present it to the stakeholders so they can help me determine the source of duplication. Currently I can print one row for each duplicated entry, but I want to show all attributes for each. Here's the query I'm using: I'm not sure how to alter the query to get every record whose name is shared with another record. Thanks for any guidance.
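The original query was lost from this digest, but the usual pattern is to keep the GROUP BY ... HAVING as a derived table that identifies the duplicated names, then join back to pull every full row. A sketch (table and column names assumed):

```sql
-- All columns for every row whose name appears more than once
SELECT t.*
FROM mytable AS t
JOIN (
    SELECT name
    FROM mytable
    GROUP BY name
    HAVING COUNT(*) > 1
) AS dup ON dup.name = t.name
ORDER BY t.name;
```

Sorting by the duplicated attribute groups the offending rows together, which makes the result easy to walk through with stakeholders.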
[Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified Posted: 14 Jun 2013 12:20 PM PDT I am trying to open a program for the first time on Windows XP Pro that uses PostgreSQL 9. I'm getting an error message that says: A problem was encountered while trying to log into or create the production database. Details: [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified In my ODBC manager, I have a list of User DSNs and System DSNs. I tried installing a Postgres ODBC driver to see if that would help, but it didn't. There is a connect.dat file in the program folder with a line saying "OLE DB Provider = MSDASQL". Changing this entry alters the error message I get to "Provider cannot be found, it may not be properly installed". I don't know what provider name to insert to get this to work properly. I have done extensive research on this error to no avail. Any suggestions would be greatly appreciated.
How do these snapshot isolation level configurations interact on a SQL Server 2005 instance? Posted: 14 Jun 2013 01:49 PM PDT I started troubleshooting a
I looked at the database configuration in I looked up what this meant in BOL and the salient information is as follows:
If I'm understanding this correctly,
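The specific settings under discussion are cut off in this digest, but the two database-level snapshot options that interact on SQL Server 2005 can be inspected like this:

```sql
-- ALLOW_SNAPSHOT_ISOLATION governs whether sessions may use
-- SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
-- READ_COMMITTED_SNAPSHOT makes the default READ COMMITTED level
-- use row versioning instead of shared locks.
SELECT name,
       snapshot_isolation_state_desc,
       is_read_committed_snapshot_on
FROM sys.databases;
```

The two are independent: either, both, or neither can be on, and both draw row versions from tempdb when enabled.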
Posted: 14 Jun 2013 11:18 AM PDT I've designed a MySQL database made of several independent/unrelated tables for use in my office. Different tables hold information about clients (names, addresses, etc.), visits to the office (dates, reason, etc.) or other information like comments (related to a specific visit). When a known client enters the office I can retrieve all the relevant data, insert the new visit data and possible comments about the visit. I need to store visits in a specific order (1st visit 1st, etc.). Sometimes I forget to store a visit and I have to enter the data later. The order of the visits gets messed up, and I correct it manually (with UPDATE commands) by changing the visits' auto-incrementing primary key number (which represents the order). This way I get the right order of visits, but the comments (related to the changed primary key) can no longer be retrieved. Is there any way to make sure that changing the primary key makes the necessary changes to the rest of the database, so that everything works as before the changes? Do relational 1:many or other implementations accomplish that? And how? Please note that I'm an amateur, self-taught "database designer".
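Two hedged ideas, sketched below with assumed table and column names: the cleaner fix is to leave the auto-increment key alone and order visits by a date column instead; but if the key really must be renumbered, an InnoDB foreign key with ON UPDATE CASCADE propagates the change to the comments automatically:

```sql
CREATE TABLE visits (
    visit_id   INT AUTO_INCREMENT PRIMARY KEY,
    client_id  INT  NOT NULL,
    visit_date DATE NOT NULL   -- order visits by date, not by the key
) ENGINE = InnoDB;

CREATE TABLE comments (
    comment_id INT AUTO_INCREMENT PRIMARY KEY,
    visit_id   INT NOT NULL,
    body       TEXT,
    FOREIGN KEY (visit_id) REFERENCES visits (visit_id)
        ON UPDATE CASCADE     -- renumbering visit_id updates comments too
        ON DELETE CASCADE     -- deleting a visit removes its comments
) ENGINE = InnoDB;

-- Retrieval in true chronological order, regardless of insertion order:
SELECT * FROM visits WHERE client_id = 42 ORDER BY visit_date, visit_id;
```

Note that cascading updates require both tables to use InnoDB; MyISAM accepts FOREIGN KEY syntax but silently ignores it.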
Posted: 14 Jun 2013 09:54 AM PDT I am evaluating product-configuration software systems that tout constraint-based versus rules-based engines. Can someone explain (with maybe an example or two) to a non-expert what the difference is? My guess is that a constraint-based engine uses rules defining relationships between objects, while a rules-based engine is more like the WHERE clause of a SQL query.
How to drop inheritance in Postgresql Posted: 14 Jun 2013 06:10 PM PDT I am new to PostgreSQL. The situation I have is that someone created a child table that inherits from the parent table, and then dropped the child table. But the parent table still has the down-arrow sign on it. I checked and there's no other link/relation on the parent table. Is this really an issue? I was told that the parent table is still in the "being inherited" status and is causing a performance issue. How do I resolve this by removing the "being inherited" status from the parent table?
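PostgreSQL records parent/child links in the pg_inherits catalog, so a hedged way to verify whether any inheritance actually survives (parent/child table names below are placeholders):

```sql
-- List any children still inheriting from the parent
SELECT c.relname AS child_table
FROM pg_inherits AS i
JOIN pg_class    AS c ON c.oid = i.inhrelid
WHERE i.inhparent = 'parent_table'::regclass;

-- If a child does still exist, detach it explicitly
ALTER TABLE child_table NO INHERIT parent_table;
```

If the first query returns no rows, the inheritance is truly gone and the arrow icon is likely just a stale display in the GUI tool (try refreshing or reconnecting), since DROP TABLE on a child removes its pg_inherits entry.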
Transaction Log maintenance when switching to Simple Recovery Posted: 14 Jun 2013 08:56 AM PDT Background: I recently inherited 50+ SQL Servers with 450+ databases. The nightly backups are roughly 8TB and, needless to say, we're using more disk space than we'd like. All of the databases are set to FULL recovery and the transaction logs have never been backed up. I've gone through all of the SQL Servers and identified low-priority ones that only need a nightly backup and where a day of data loss is acceptable. Question: I'm switching a lot of low-priority databases to the SIMPLE recovery model.
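A hedged sketch of the usual per-database sequence (database and logical log-file names are assumptions): after the switch, the log truncates on checkpoint and stops accumulating, but a one-time shrink is needed to reclaim the space the never-backed-up log has already grown to.

```sql
ALTER DATABASE LowPriorityDb SET RECOVERY SIMPLE;
GO
USE LowPriorityDb;
GO
-- Find the logical name of the log file
SELECT name, size * 8 / 1024 AS size_mb FROM sys.database_files WHERE type_desc = 'LOG';
-- One-time reclaim of the oversized log; target size in MB
DBCC SHRINKFILE (LowPriorityDb_log, 1024);
```

Shrinking the log this once is reasonable housekeeping; shrinking it on a schedule is not, since the log will simply regrow to its working size.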
Stored procedures vs. inline SQL Posted: 14 Jun 2013 10:58 AM PDT I know stored procedures are more efficient through the execution path (than inline SQL in applications). However, when pressed, I'm not super knowledgeable about why. I'd like to know the technical reasoning for this (in a way that I can explain it to someone later). Can anyone help me formulate a good answer?
How to find memory usage by table? Posted: 14 Jun 2013 10:05 AM PDT I want to discover how much memory is being used by each table. Is this information available in DMVs or elsewhere?
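Assuming SQL Server (the question mentions DMVs), a common sketch counts buffer-pool pages per object via sys.dm_os_buffer_descriptors. Note the join through allocation units is slightly simplified here: IN_ROW_DATA units map container_id to hobt_id as shown, while LOB units map to partition_id instead.

```sql
-- Approximate buffer-pool memory per user table in the current database
SELECT o.name            AS table_name,
       COUNT(*)          AS cached_pages,
       COUNT(*) * 8 / 1024 AS cached_mb     -- pages are 8 KB each
FROM sys.dm_os_buffer_descriptors AS bd
JOIN sys.allocation_units AS au ON au.allocation_unit_id = bd.allocation_unit_id
JOIN sys.partitions       AS p  ON p.hobt_id  = au.container_id
JOIN sys.objects          AS o  ON o.object_id = p.object_id
WHERE bd.database_id = DB_ID()
  AND o.is_ms_shipped = 0
GROUP BY o.name
ORDER BY cached_pages DESC;
```

The result reflects what happens to be cached right now, so it changes with workload; it is a snapshot, not a fixed per-table quota.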
Does the size of the primary key contribute to table size? Posted: 14 Jun 2013 07:46 PM PDT I have a table (InnoDB) with data length around 36G and index length 23G. It had a composite primary key across three columns. As an initial attempt to reduce the size of this table, I dropped the primary key (wasn't actually needed anyway). Yet, the data length and index length are the same. Is this to be expected?
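A plausible explanation (hedged, since the table definition isn't shown): InnoDB stores the table clustered on its primary key and embeds the PK columns in every secondary index, and dropping an index doesn't shrink anything on disk until the table is rebuilt. A sketch of the rebuild (table name assumed):

```sql
-- Rebuild the table so freed space is actually reclaimed.
-- With innodb_file_per_table enabled, this also shrinks the .ibd file;
-- without it, space is only returned to the shared tablespace for reuse.
OPTIMIZE TABLE mytable;
-- equivalent for InnoDB:
-- ALTER TABLE mytable ENGINE = InnoDB;
```

Also note that without an explicit primary key InnoDB generates a hidden 6-byte row ID to cluster on, so the savings from dropping the PK may be smaller than expected.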
Posted: 14 Jun 2013 09:15 AM PDT I am seeing the error logs below on my SQL Server; I'm not sure if they are related to trace flag 3604. I also see something to do with a memory dump as well. Please advise.
Auto-increment key and Foreign Key as composite for relational database Posted: 14 Jun 2013 02:22 PM PDT I do not have any experience in databases other than what I have from just playing with workbench and trying to understand how I need to setup this database for a project. I have been searching the internet and I have become increasingly confused as to how I will do what I want to do. Below is what I am trying to accomplish. I have a need for a MySQL database that will hold patient information. Patient information will be name, id (unique), and so on as well as all of the tests and samples for each test.
So far I understand that a relational database is what I need. I also understand that I can use foreign keys to link the patient IDs for each table, as well as the test ID for each table. I have also come to the conclusion that I would like data integrity so that I don't have to manually manage deleting and updating rows; this would also prevent orphaned rows. I believe this can be done with a composite key, but this is where I start getting confused about how to set it up. I would expect it to work like this (with each line being an insert statement): Patient Table: Tests Table: Sample Table: Can this be done easily? I also know this can be accomplished with one or two triggers, but I understood that approach not to handle deletions and the like. Is there another way to do this that is easier to implement? Also, data being written to and read from this database is handled by my LabVIEW program.
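A hedged sketch of the three tables with cascading foreign keys — all names and column choices below are assumptions, since the asker's insert examples were lost from this digest. Deleting a patient then removes that patient's tests and samples automatically, so no orphaned rows and no triggers:

```sql
CREATE TABLE patients (
    patient_id INT AUTO_INCREMENT PRIMARY KEY,
    name       VARCHAR(100) NOT NULL
) ENGINE = InnoDB;

CREATE TABLE tests (
    test_id    INT AUTO_INCREMENT PRIMARY KEY,
    patient_id INT NOT NULL,
    test_type  VARCHAR(50),
    FOREIGN KEY (patient_id) REFERENCES patients (patient_id)
        ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE = InnoDB;

CREATE TABLE samples (
    sample_id INT AUTO_INCREMENT PRIMARY KEY,
    test_id   INT NOT NULL,
    result    VARCHAR(255),
    FOREIGN KEY (test_id) REFERENCES tests (test_id)
        ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE = InnoDB;
```

With this shape, each table keeps its own single-column auto-increment key and the parent link is an ordinary foreign key column, which is usually simpler than building composite keys out of (patient_id, test_id) pairs.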
Posted: 14 Jun 2013 09:03 AM PDT I would like to ask for some opinions, recommendations or guidelines on how to create an automatic logging system for the executed queries on a MySQL database. Some details:
I am familiar with TRIGGERS and STORED PROCEDURES, but can't really figure out if that's the way, and if yes, how (I don't need code, just pure logic, although a usable minimalistic example would be welcomed). Thank you. Edit: Wow, is this question that hard?
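Before reaching for triggers, note that MySQL has a built-in general query log that can be pointed at a table. A minimal sketch (it records every statement the server receives, so watch the overhead and prune the table regularly):

```sql
-- Write the general log to the mysql.general_log table instead of a file
SET GLOBAL log_output  = 'TABLE';
SET GLOBAL general_log = 'ON';

-- Later, inspect what was executed, most recent first
SELECT event_time, user_host, argument
FROM mysql.general_log
ORDER BY event_time DESC
LIMIT 50;
```

Triggers remain useful for a narrower goal — auditing data changes on specific tables into your own history tables — but they fire on row changes, not on arbitrary queries, so they can't log SELECTs.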
MySQL HDD activity with empty process list Posted: 14 Jun 2013 11:29 AM PDT What does this mean, and how can I understand what MySQL is doing? I use InnoDB, and my database is huge. This happens when I disconnect my app. Thanks.
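A hedged guess for this symptom: background I/O with an empty process list is often InnoDB flushing dirty buffer-pool pages to disk, or purging undo records, after the application's write load stops. Two quick checks:

```sql
-- Dirty pages still waiting to be flushed; a large, shrinking number
-- after disconnect is consistent with background flushing
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_dirty';

-- Detailed background activity: main thread state, pending reads/writes,
-- insert-buffer merges, purge lag
SHOW ENGINE INNODB STATUS;
```

Running the first statement a few times a minute apart shows whether the dirty-page count is draining, which would explain the disk activity.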
Why don't databases create their own indexes automatically? Posted: 14 Jun 2013 01:03 PM PDT I would have thought that databases would know enough about what they encounter often, and be able to respond to the demands they're placed under, that they could decide to add indexes to highly requested data.
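They don't create indexes on their own, but some engines do record what they would have wanted. In SQL Server, for example, the missing-index DMVs sketch candidate indexes observed during query optimization — treat these as suggestions to review, not commands to apply blindly:

```sql
-- Candidate indexes the optimizer wished it had, busiest first
SELECT d.statement AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       s.user_seeks
FROM sys.dm_db_missing_index_details     AS d
JOIN sys.dm_db_missing_index_groups      AS g ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats AS s ON s.group_handle = g.index_group_handle
ORDER BY s.user_seeks DESC;
```

The gap between "suggest" and "auto-create" is deliberate: every index taxes writes and storage, and only a human (or a very careful tool) can weigh that against the read benefit for the whole workload.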
Is using SUM() twice suboptimal? Posted: 14 Jun 2013 01:05 PM PDT I know I have to write My question now is whether or not this is suboptimal. As a programmer, this query looks like the DB will calculate the sum twice. Is that so, or should I rely on optimizations the DB engine will do for me? Update: an explain of a comparable query:
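The elided query presumably resembles the first form below (names are assumptions), where standard SQL forces the aggregate to be repeated because HAVING can't reference the SELECT alias. Most engines recognize the identical aggregate and compute it once per group; if in doubt, the derived-table rewrite makes the single evaluation explicit:

```sql
-- Repeated aggregate: required by standard SQL, but typically
-- evaluated only once per group by the engine
SELECT customer_id, SUM(amount) AS total
FROM orders
GROUP BY customer_id
HAVING SUM(amount) > 100;

-- Equivalent rewrite that states the single computation explicitly
SELECT customer_id, total
FROM (
    SELECT customer_id, SUM(amount) AS total
    FROM orders
    GROUP BY customer_id
) AS t
WHERE total > 100;
```

Comparing EXPLAIN output for the two forms on your own engine is the reliable way to confirm whether they produce the same plan.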
Oracle RAC 11g R2 DB silent install: parameter needed for system class Posted: 14 Jun 2013 08:28 PM PDT I'm installing Oracle RAC DB 11gR2 using response file and silent mode. The installation is failing with the error message:
Which parameter is needed for a server-class install? I tried changing the value for
Getting "BadImageFormatException" when attempting to connect through Visual Studio to Oracle Posted: 14 Jun 2013 02:28 PM PDT I get the following message: "Attempt to load Oracle client libraries threw BadImageFormatException. This problem will occur when running in 64 bit mode with the 32 bit Oracle client components installed." when I attempt to create a connection from both Visual Studio 2008 and 2012 to Oracle from my Windows 8 64-bit workstation. I'm not running a web app when this message happens. I'm only trying to establish an Oracle connection with Visual Studio. Any solutions I have seen posted on the internet that resolve this issue for others don't seem to apply to my circumstances.
My environment is
Any suggestions would be appreciated.
SHOW TABLE STATUS very slow on InnoDB Posted: 14 Jun 2013 12:28 PM PDT Recently we've been migrating from MyISAM to InnoDB, and I understand that MyISAM uses meta information against each table to track information and such. However, the database is about 1.8GB with 1+ million records across 400 or so tables. The problem comes when software like phpMyAdmin runs "SHOW TABLE STATUS FROM dbase;", where it can take up to 50 seconds to execute that command. Is there a way to optimise this? (MySQL or MariaDB) Thanks!
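A frequent cause is that SHOW TABLE STATUS triggers on-the-fly InnoDB statistics sampling for every table it touches. The server-wide setting below controls that behaviour (it defaulted to ON before MySQL 5.6.6), and turning it off is the usual first thing to try:

```sql
-- Stop recomputing InnoDB statistics on every metadata query
-- (SHOW TABLE STATUS, SHOW INDEX, INFORMATION_SCHEMA.TABLES, ...)
SET GLOBAL innodb_stats_on_metadata = OFF;
```

Add `innodb_stats_on_metadata = OFF` under `[mysqld]` in my.cnf to make it persist across restarts; statistics still get refreshed by ANALYZE TABLE and normal background triggers.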
Difference between database vs user vs schema Posted: 14 Jun 2013 03:19 PM PDT I'm really confused by the terms database, user and schema. Can anyone explain how they are different from each other (if they are)? If they are the same, what are the similarities between them? How do we use them? And how do we create them?
Designing Simple Schema for Disaggregation of Demand Forecast Posted: 14 Jun 2013 01:28 PM PDT I am doing a simple database design task as a training exercise where I have to come up with a basic schema design for the following case: I have a parent-child hierarchy of products (example, Raw Material > Work in Progress > End Product).
Demand Forecast is usually done at the higher level in hierarchy (Raw Material or Work in Progress level) It has to be disaggregated to a lower level (End Product). There are 2 ways in which demand forecast can be disaggregated from a higher level to lower level:
Forecast shall be viewable in weekly buckets for the next 6 months and the ideal format should be: PRODUCT_HIERARCHY table could look like this: ORDERS table might look like this: where,
How to store forecast? What would be a good basic schema for such a requirement? My idea to select orders for 26 weekly buckets is: But this will give weekly buckets starting from today irrespective of the day. How can I convert them to Sunday-to-Saturday weeks in Oracle? Please help me design this database structure. (I will be using Oracle 11g.)
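For the Sunday-to-Saturday part specifically: Oracle's TRUNC(d, 'IW') snaps to the ISO week's Monday, so shifting by one day before truncating (and back after) yields Sunday-based weeks independent of NLS settings. A sketch, with the ORDERS table and column names assumed from the description above:

```sql
-- Sunday that starts the week containing each order_date,
-- bucketed over the next 26 weeks
SELECT TRUNC(order_date + 1, 'IW') - 1 AS week_start,
       SUM(quantity)                   AS total_qty
FROM orders
WHERE order_date >= TRUNC(SYSDATE + 1, 'IW') - 1
  AND order_date <  TRUNC(SYSDATE + 1, 'IW') - 1 + 26 * 7
GROUP BY TRUNC(order_date + 1, 'IW') - 1
ORDER BY week_start;
```

Sanity check of the arithmetic: for a Saturday, d + 1 is Sunday, TRUNC(..., 'IW') falls back to the preceding Monday, and - 1 lands on the preceding Sunday — so Saturday correctly belongs to the week that began the previous Sunday.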
Posted: 14 Jun 2013 11:28 AM PDT I have been using SQL Server BPA to get good information from Microsoft. I was using the 2005 version, which let me export the results in CSV format, but recently I got two new servers with SQL Server 2008 R2 installed. I know I can't run BPA 2005 on these, so I chose the R2 version, but it doesn't have an option to save the report in CSV format, only XML. I have tried Excel to convert it to CSV but with no luck; even though it displays the details, I can't narrow down the results. Any idea how to convert the results to CSV format?
mysql duplicate entry error 1062 when restoring backup Posted: 14 Jun 2013 09:28 PM PDT Sorry, I've seen similar threads, but I still couldn't find one addressing my issue; plus, I needed some more info on this. Requirement: to create an exact replica 'db4' of an existing database 'db3'. Procedure followed:
The 2nd step throws the error: I ran the 2nd step again with --force. The restore completed, but with 2 additional similar errors: On completion, when I queried certain tables of the db4 database, I could see missing records. Question:
Thanks, |
MySQL optimization - year column grouping - using temporary table, filesort Posted: 14 Jun 2013 03:28 PM PDT I have a transactions table with 600,000 records, and I need to list the count for the dashboard on a financial-year basis. The table uses MyISAM. I tried adding an index on the transaction date (tran_date). The query:

```sql
SELECT COUNT(*) AS cnt,
       CASE WHEN MONTH(tran_date) >= 3
            THEN CONCAT(YEAR(tran_date), '-', YEAR(tran_date) + 1)
            ELSE CONCAT(YEAR(tran_date) - 1, '-', YEAR(tran_date))
       END AS financial_year
FROM `transactions1`
WHERE tran_date >= '2010-06-01'
GROUP BY financial_year
```

Showing rows 0 - 4 (5 total, Query took 1.2095 sec)

EXPLAIN:

    id  select_type  table          type   possible_keys      key        key_len  ref   rows    Extra
    1   SIMPLE       transactions1  range  PRIMARY,tran_date  tran_date  8        NULL  346485  Using where; Using index; Using temporary; Using filesort

Indexes:

    Keyname       Type   Unique  Packed  Field          Cardinality  Collation
    PRIMARY       BTREE  Yes     No      tran_date      205720       A
                                         tran_ID        617162       A
    coupon_No     BTREE  No      No      coupon_No      617162       A
    account_type  BTREE  No      No      account_type   3            A
    prodCode      BTREE  No      No      prodCode       430          A
                                         tran_date      308581       A
    tran_date     BTREE  No      No      tran_date      205720       A
    cust_ID       BTREE  No      No      cust_ID        3265         A
                                         tran_date      308581       A
                                         account_type   308581       A
                                         points_earned  617162       A

Update: I tried adding a partition, which was not much help compared with the non-partitioned table. Would replication help for reading this table? There will be more grouping based on the dates (using the date functions) when reading the data. Edit: I altered the query and reduced the query execution time. 
The query I used is:

```sql
SELECT SUM(count)
FROM (
    SELECT COUNT(*) AS count,
           CASE WHEN MONTH(tran_date) >= 3
                THEN CONCAT(YEAR(tran_date), '-', YEAR(tran_date) + 1)
                ELSE CONCAT(YEAR(tran_date) - 1, '-', YEAR(tran_date))
           END AS format_date
    FROM transactions1
    GROUP BY tran_date
) AS s
GROUP BY format_date
```

Showing rows 0 - 4 (5 total, Query took 0.5636 sec)

EXPLAIN:

    id  select_type  table          type   possible_keys  key        key_len  ref   rows    Extra
    1   PRIMARY      <derived2>     ALL    NULL           NULL       NULL     NULL  229676  Using temporary; Using filesort
    2   DERIVED      transactions1  index  NULL           tran_date  8        NULL  617162  Using index

But when using

```sql
SELECT COUNT(*) AS count,
       CASE WHEN MONTH(tran_date) >= 3
            THEN CONCAT(YEAR(tran_date), '-', YEAR(tran_date) + 1)
            ELSE CONCAT(YEAR(tran_date) - 1, '-', YEAR(tran_date))
       END AS format_date
FROM transactions1
GROUP BY tran_date
```

Showing rows 0 - 29 (229,676 total, Query took 0.0006 sec)

alone, it gives less time without using the
Copy Database option not found in SQL Server Management Studio 2008 R2 Posted: 14 Jun 2013 12:50 PM PDT I am unable to see the Copy Database option in SQL Server Management Studio 2008 R2. Can anyone tell me what's going on, please?
You are subscribed to email updates from Recent Questions - Database Administrators Stack Exchange.