[how to] Do natural keys provide higher or lower performance in SQL Server than surrogate integer keys? |
- Do natural keys provide higher or lower performance in SQL Server than surrogate integer keys?
- How to flush SQL Server database from RAM to hard disk?
- How to properly create a tree hierarchy
- What does this SQLCMD statement do?
- Snapshot for transactional replication taking too long for large database
- Display executed commands and number of executions
- Can the 'returning' clause return source columns that are not inserted?
- Error on changing ownership of database from Files Page in database properties dialog box
- Entity with >150 attributes: One table or many tables with relationships
- How do I write this script?
- How do I find out how far along my PostgreSQL query is?
- Oracle query is slower the second time
- How to measure the performance of a SQL Server database? [closed]
- Access denied when disabling agent job, despite SqlAgentOperator membership
- Minimizing Page Fetches
- Query to get reposts from people the user is following
- Need to install Oracle Express 11g Release 2 on a Windows 7 64-bit laptop
- Help my database isn't performing fast enough! 100M Merge with 6M need < 1 hour!
- USER_ID field in alert logs (also in V$DIAG_ALERT_EXT view)
- How can I generate a usage log?
- Bitmask Flags with Lookup Tables Clarification
- SQL Server 2012 catalog.executions to sysjobhistory - any way to join them?
- SQLite writing a query where you select only rows nearest to the hour
- Mongo replication lag slowly increasing
- Unable to start Oracle Database Server : Get an error as I try to start it
- SQL Server replication subscriptions marked as inactive
- What's the difference between a temp table and table variable in SQL Server?
- Is nested view a good database design?
Do natural keys provide higher or lower performance in SQL Server than surrogate integer keys? Posted: 28 Sep 2013 07:54 PM PDT I'm a fan of surrogate keys, so there is a risk my findings are confirmation-biased. Many questions I've seen both here and at http://stackoverflow.com use natural keys instead of surrogate keys. My background in computer systems tells me that performing any comparative operation on an integer will be faster than comparing strings, but a comment made me question my beliefs, so I thought I would create a system to investigate my thesis that integers are faster than strings for use as keys in SQL Server. Since there is likely to be very little discernible difference in small datasets, I immediately thought of a two-table setup where the primary table has 1,000,000 rows and the secondary table has 10 rows for each row in the primary table, for a total of 10,000,000 rows in the secondary table. The premise of my test is to create two sets of tables like this, one using natural keys and one using integer keys, and run timing tests on a simple join query. My test-bed code creates a database and four tables and fills them with data, ready to test; I then ran the timing queries and compared the results. Am I doing something wrong here, or are INT keys 3 times faster than 25-character natural keys? |
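The author's actual scripts and results are not shown here; the following is only a minimal sketch of the kind of test bed described (two parent/child pairs, one keyed on a CHAR(25) natural key and one on an INT surrogate, joined and timed). All table and column names are invented for illustration.

    -- Hypothetical schema; row counts as described: 1M parents, 10M children per pair.
    CREATE TABLE dbo.ParentNatural (
        NaturalKey CHAR(25)     NOT NULL PRIMARY KEY,
        Payload    VARCHAR(100) NOT NULL
    );
    CREATE TABLE dbo.ChildNatural (
        NaturalKey CHAR(25)     NOT NULL REFERENCES dbo.ParentNatural (NaturalKey),
        Payload    VARCHAR(100) NOT NULL
    );

    CREATE TABLE dbo.ParentSurrogate (
        Id      INT          NOT NULL PRIMARY KEY,
        Payload VARCHAR(100) NOT NULL
    );
    CREATE TABLE dbo.ChildSurrogate (
        ParentId INT          NOT NULL REFERENCES dbo.ParentSurrogate (Id),
        Payload  VARCHAR(100) NOT NULL
    );

    -- Timing harness: join each pair and compare elapsed/CPU time.
    SET STATISTICS TIME ON;

    SELECT COUNT(*)
    FROM dbo.ParentNatural AS p
    JOIN dbo.ChildNatural  AS c ON c.NaturalKey = p.NaturalKey;

    SELECT COUNT(*)
    FROM dbo.ParentSurrogate AS p
    JOIN dbo.ChildSurrogate  AS c ON c.ParentId = p.Id;

    SET STATISTICS TIME OFF;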
How to flush SQL Server database from RAM to hard disk? Posted: 28 Sep 2013 04:57 PM PDT As part of the nightly maintenance plan, my working MS SQL Server 2012 databases are backed up with the options of simple recovery and full backup; the transaction log is not backed up. Also, at night the MSSQLSERVER service or SQL Server itself is restarted. Checking the properties of the corresponding files in the operating system (Windows Server 2008), I observe that the "Modified" time of the .mdf files corresponds to the time the maintenance plan (backup) ran, but the log files' modification time is one to two weeks earlier. Why is there such a difference, and why does the operating system fail to update the modification time? So, what determines the modification time of an .mdf file? How can I estimate when a database was last written to physical disk, that is, how recent the .mdf file of a corresponding database is? |
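For what it's worth, SQL Server writes dirty pages to the data file at checkpoints rather than at backup time, so the file's "Modified" timestamp is a weak indicator of write recency. A manual checkpoint plus the file-level I/O statistics DMV give a more direct view; the database name below is a placeholder.

    -- Force dirty pages of the current database to be written to disk.
    USE YourDatabase;   -- placeholder name
    CHECKPOINT;

    -- Cumulative writes per database file since the instance last started.
    SELECT DB_NAME(vfs.database_id) AS database_name,
           mf.physical_name,
           vfs.num_of_writes,
           vfs.num_of_bytes_written
    FROM sys.dm_io_virtual_file_stats(DB_ID('YourDatabase'), NULL) AS vfs
    JOIN sys.master_files AS mf
      ON mf.database_id = vfs.database_id
     AND mf.file_id     = vfs.file_id;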
How to properly create a tree hierarchy Posted: 28 Sep 2013 05:01 PM PDT I'm using web2py to create an application. I've got two different roles: clients and referents. Every client can have multiple referents. Both clients and referents can insert rows into a table called object. The user table, where clients and referents are registered, has a column called "client_id". Every referent's client_id holds the id of the client to whom the referent belongs. The problem is: which value should a client have in client_id? A possible answer could be NULL, or the id of the client itself. Which one is better? Or is there something better? |
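A sketch of the two options with invented names: a nullable self-referencing column, where NULL marks a client and a non-NULL value marks a referent pointing at its client.

    -- Illustrative schema: client_id is NULL for clients and references
    -- the owning client's row for referents.
    CREATE TABLE app_user (
        user_id   INTEGER PRIMARY KEY,
        name      VARCHAR(100) NOT NULL,
        client_id INTEGER NULL REFERENCES app_user (user_id)
    );

    -- A client
    INSERT INTO app_user (user_id, name, client_id) VALUES (1, 'Acme Ltd', NULL);
    -- Two referents belonging to that client
    INSERT INTO app_user (user_id, name, client_id) VALUES (2, 'Alice', 1);
    INSERT INTO app_user (user_id, name, client_id) VALUES (3, 'Bob', 1);

    -- All referents of client 1
    SELECT * FROM app_user WHERE client_id = 1;

Making a client point at itself instead of NULL has one convenience: client_id then identifies the owning client for every row, at the cost of a slightly odd self-reference.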
What does this SQLCMD statement do? Posted: 28 Sep 2013 11:41 AM PDT I am new to SqlCmd. Can anyone give a short explanation of the following command? |
Snapshot for transactional replication taking too long for large database Posted: 28 Sep 2013 12:56 PM PDT I want to set up transactional replication of a large database (200 GB). But when I create the publication and the Snapshot Agent starts, it takes a very long time and gets stuck on the following message
The odd thing is that when I look at the size of the snapshot folder, I can see it is growing, but far too slowly.
Do you have any idea about this, or any different way of doing the replication? How can I find more detailed warning or error messages? P.S. My database has two tables for images that make up 98% of its size. When I don't include them in the publication, everything goes fine, but when I include them, I hit this problem. |
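One commonly used alternative for large publications is to skip the snapshot entirely and initialize the subscriber from a restored backup, which transactional replication supports. The sketch below uses placeholder names and paths.

    -- Run in the publication database at the publisher: allow initialization from a backup.
    EXEC sp_changepublication
         @publication = N'MyPublication',                       -- placeholder
         @property    = N'allow_initialize_from_backup',
         @value       = N'true';

    -- After restoring a recent backup of the published database on the subscriber:
    EXEC sp_addsubscription
         @publication      = N'MyPublication',
         @subscriber       = N'SUBSCRIBERSERVER',               -- placeholder
         @destination_db   = N'MyDatabase',                     -- placeholder
         @sync_type        = N'initialize with backup',
         @backupdevicetype = N'disk',
         @backupdevicename = N'\\share\backups\MyDatabase.bak'; -- placeholder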
Display executed commands and number of executions Posted: 28 Sep 2013 08:53 AM PDT How would I go about displaying the SQL statements (say, the first 15 characters) and the number of executions for everything that has been executed? |
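The platform isn't stated; if this is SQL Server, a sketch using the plan-cache DMVs (which only cover statements whose plans are still cached) might look like this.

    -- Statement text (first 15 characters) and execution counts from the plan cache.
    SELECT TOP (50)
           SUBSTRING(st.text, 1, 15) AS command_start,
           qs.execution_count
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.execution_count DESC;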
Can the 'returning' clause return source columns that are not inserted? Posted: 28 Sep 2013 06:24 AM PDT Here's a minimal example of my real-world problem: of course you can return the inserted columns, and you can also return a literal, but you can't return the source columns. Is there any way I can get the source columns back from the insert? |
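Assuming this is PostgreSQL: RETURNING can only reference columns of the target table, so a common workaround is to wrap the INSERT in a data-modifying CTE and join its output back to the source rows. Table and column names below are invented.

    -- Hypothetical tables: staging carries a column (external_ref)
    -- that is not inserted into target.
    WITH ins AS (
        INSERT INTO target (name)
        SELECT s.name
        FROM staging AS s
        RETURNING id, name
    )
    SELECT ins.id, s.external_ref    -- source column that was never inserted
    FROM ins
    JOIN staging AS s ON s.name = ins.name;   -- relies on name being unique in staging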
Error on changing ownership of database from Files Page in database properties dialog box Posted: 28 Sep 2013 08:42 AM PDT I've restored a database and created a new user (the same user I am logged in as) with the db_owner role. Now when I try to view a database diagram I get an error.
When I then try to change the owner from the Files page of the Database Properties dialog, I get another error.
Please help me fix the issue so that I can view database diagrams on this computer after restoring the database backup from another system. |
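If the diagram error is the usual "valid owner" complaint that follows restoring a database from another system, the common fix is to reset the database owner to a login that exists on this instance; a sketch with placeholder names:

    -- Reset the owner of the restored database to an existing login.
    ALTER AUTHORIZATION ON DATABASE::YourRestoredDb TO [sa];

    -- Older equivalent:
    -- USE YourRestoredDb; EXEC sp_changedbowner 'sa';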
Entity with >150 attributes: One table or many tables with relationships Posted: 28 Sep 2013 05:44 AM PDT I have an entity with 170 attributes. It is data from vehicle monitoring. Every record has a datetime label and the unique id of a GPS terminal; the datetime and terminal id are the conditions for GROUP BY operations. I could create one table for the whole entity, or I could create many tables with relationships. Please advise the optimal structure for a database of this type. |
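A sketch of the second option with invented names: a narrow core table carrying the grouping keys, and attribute groups split into side tables that share the same primary key.

    -- Core row: one record per (terminal, timestamp) reading.
    CREATE TABLE reading (
        reading_id   BIGINT   NOT NULL PRIMARY KEY,
        terminal_id  INT      NOT NULL,
        reading_time DATETIME NOT NULL
    );
    CREATE INDEX ix_reading_terminal_time ON reading (terminal_id, reading_time);

    -- One side table per attribute group, 1:1 with the core table.
    CREATE TABLE reading_position (
        reading_id BIGINT NOT NULL PRIMARY KEY REFERENCES reading (reading_id),
        latitude   DECIMAL(9,6),
        longitude  DECIMAL(9,6),
        speed_kmh  DECIMAL(6,2)
        -- ... remaining position attributes
    );
    CREATE TABLE reading_engine (
        reading_id BIGINT NOT NULL PRIMARY KEY REFERENCES reading (reading_id),
        rpm        INT,
        coolant_c  DECIMAL(5,1)
        -- ... remaining engine attributes
    );

    -- GROUP BY stays on the narrow core table; side tables are joined only when needed.
    SELECT r.terminal_id, AVG(p.speed_kmh) AS avg_speed
    FROM reading AS r
    JOIN reading_position AS p ON p.reading_id = r.reading_id
    GROUP BY r.terminal_id;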
How do I write this script? Posted: 28 Sep 2013 11:43 AM PDT I have a very large database and I want to select certain records and then set a column value such as: Then I want to set the I'm just not sure how to string this together. |
How do I find out how far along my PostgreSQL query is? Posted: 28 Sep 2013 07:14 AM PDT I have a pretty decent idea of how many rows my SELECT...INTO query will actually process (e.g. I know how many will materialize). I understand Postgres won't tell me percentage completeness, but is there a way (buried deep in logs, system tables, or otherwise) to find out how many rows have been pumped into the destination table or have been read by the SELECT query? |
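One low-tech trick works because sequences are non-transactional: bolt a nextval() call onto the running SELECT and poll the sequence from another session. The names below are placeholders.

    -- Before starting the long query:
    CREATE SEQUENCE progress_seq;

    -- The long-running statement, with a progress counter added
    -- (the extra column can be dropped from new_table afterwards):
    SELECT t.*, nextval('progress_seq') AS progress_counter
    INTO new_table
    FROM big_table AS t
    WHERE t.some_condition;

    -- From a second session, while the query runs:
    SELECT last_value FROM progress_seq;   -- roughly the rows processed so far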
Oracle query is slower the second time Posted: 28 Sep 2013 11:16 AM PDT I have a general question about something I'm seeing in performance tuning in Oracle. I have a query that I'm testing. It's too big and complex for me to provide a simple example, sorry! (I tried paring it down, but lost the symptom.) The first time I run the query, it takes 2 seconds. Subsequent runs take 3, 4, or even 5 seconds. This happens consistently: if I change the spacing (which forces a new hard parse), I get 2 seconds again, then it goes back up. This is the opposite of what I would expect; after the first run, the database doesn't need to parse the query, and the blocks should already have been read into the buffer cache. The only thing I can think of is that it is writing temp data somewhere, in such a way that it is faster to write and read it than to just read it, but that of course makes no sense. Does anyone know why this is happening, or any red flags in the execution plan that would cause something like this? FYI, this is a SELECT with multiple joins, aggregation, and views. No internal PL/SQL. Thank you. @Colin It's unlikely another process could be affecting this dev machine, at least not so consistently. Not sure what dynamic sampling is; same thing as cardinality feedback? @druzin No bind variables, but no predicate changes either; same hash. @mustaccio Good idea, although I suspect it's something to do with implicit temp tables or caching. I'll give it a try. |
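One way to see whether the slower runs are doing extra work (for example spilling to temp) is to compare actual row-source statistics between a fast and a slow execution; a sketch:

    -- Collect actual execution statistics for the session, then run the query.
    ALTER SESSION SET statistics_level = ALL;

    -- ... run the query here ...

    -- Actual rows, buffer gets, and temp usage for the last executed statement:
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));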
How to measure the performance of a SQL Server database? [closed] Posted: 28 Sep 2013 03:36 PM PDT I have been given the task of improving the performance of a SQL Server 2012 database (one of four in an instance) by 60%, confirmed by corresponding statistics. So I need to measure the "performance" of the database before performance tuning and optimization, and after. Which metrics are best suited for this? Trying to answer the obvious questions ahead of time: I/O (hardware) bottlenecks are absent, since the SQL Server runs on a virtual rack with plenty of physical resources under it. The database is used by approximately 60 users (mostly 8 hours a day) with widely varying load per second. This is a company management task, so the results should be easy to grasp. UPDATE:
Quite good for me. Also, I do not see why it was deleted on SO with the reason "This question does not appear to be about programming" and closed on this site with "There are either too many possible answers, or good answers would be too long for this format." |
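On the metrics themselves, one possible (certainly not definitive) before/after baseline can be captured from the standard DMVs, for example top waits and a throughput counter:

    -- Cumulative wait statistics since the last restart (or last manual clear).
    SELECT TOP (10) wait_type, wait_time_ms, waiting_tasks_count
    FROM sys.dm_os_wait_stats
    ORDER BY wait_time_ms DESC;

    -- Batch Requests/sec counter (cumulative value; sample twice and take the difference).
    SELECT cntr_value
    FROM sys.dm_os_performance_counters
    WHERE counter_name LIKE 'Batch Requests/sec%';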
Access denied when disabling agent job, despite SqlAgentOperator membership Posted: 28 Sep 2013 12:59 PM PDT I am attempting to disable a SQL Agent job from an account which is a member of the SqlAgentOperator role (but not sysadmin). The doco says this should be possible
...and indeed the code for msdb.dbo.sp_update_job appears to support this - it has explicit handling for exactly this scenario. Yet if I run: ... I get the following error:
The error message appears to indicate the proc isn't even being run, but even granting that user explicit EXECUTE on the stored proc doesn't seem to fix it. Does anyone know how to grant a user the ability to disable/enable SQL Agent jobs that they don't own, or has anyone successfully used this functionality? Maybe it's just broken in SQL Server 2012 SP1 CU4 (which is what I am using). |
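For reference, the intended pattern (role membership plus a direct call to the job procedure) looks like the sketch below; the user and job names are placeholders.

    -- Grant the msdb role that is documented to allow enabling/disabling any job.
    USE msdb;
    ALTER ROLE SQLAgentOperatorRole ADD MEMBER [DOMAIN\JobOperator];   -- placeholder user (must exist in msdb)

    -- Then, connected as that user:
    EXEC msdb.dbo.sp_update_job @job_name = N'Nightly ETL', @enabled = 0;   -- placeholder job name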
Minimizing Page Fetches Posted: 28 Sep 2013 02:24 PM PDT I have a complicated database structure and am trying to use it to retrieve records based on multiple selection criteria from several tables. As a general rule, is it better to use correlated sub-queries to check the state of flags in other tables to determine eligibility, or am I better off creating views that use joins to represent the valid records? As a more concrete example, is this superior or inferior to something like this? -- Edited -- As a corollary question: is it productive to create views which contain intermediary validations? I.e., if I repeatedly need to check if ... and then later use the existence of a record in T1Validated in subsequent checks, or is that likely to produce additional database page retrievals and/or table scans? |
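As a generic sketch of the two shapes being compared (invented table and flag names):

    -- Correlated sub-query form: check flags in related tables per row.
    SELECT r.record_id
    FROM record AS r
    WHERE EXISTS (SELECT 1 FROM t1 WHERE t1.record_id = r.record_id AND t1.flag = 1)
      AND EXISTS (SELECT 1 FROM t2 WHERE t2.record_id = r.record_id AND t2.flag = 1);

    -- View/join form: a view that pre-joins to only the valid rows.
    CREATE VIEW valid_record AS
    SELECT r.record_id
    FROM record AS r
    JOIN t1 ON t1.record_id = r.record_id AND t1.flag = 1
    JOIN t2 ON t2.record_id = r.record_id AND t2.flag = 1;

    SELECT record_id FROM valid_record;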
Query to get reposts from people the user is following Posted: 28 Sep 2013 01:24 PM PDT I have a table: | post_id | post_user | post_content | post_date | The users table is as follows: | user_id | username | user_joined | The user relationship table is as follows: | follower | following | rel_date | This is the query I am using to get the posts from people the user is following, to show them.
Now I want users to share posts, for which I created a table repost_user as follows: | repost_user | original_post_user | post_id | repost_date | I want to get posts from people the user is following, including reposts. How do I do this? EDIT: here is how my result set should look.
For example, if it's a normal post, the row should look like this:
If it's a repost, the row would be:
|
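A sketch of one way to combine both, using UNION ALL and the columns listed above. The table names posts and user_relationship, and the @user_id parameter, are assumptions since the original names were not given.

    -- Original posts from people the user follows
    SELECT p.post_id,
           p.post_user AS author,
           NULL        AS reposted_by,
           p.post_content,
           p.post_date AS shown_date
    FROM posts AS p
    JOIN user_relationship AS r ON r.following = p.post_user
    WHERE r.follower = @user_id

    UNION ALL

    -- Reposts made by people the user follows
    SELECT p.post_id,
           rp.original_post_user AS author,
           rp.repost_user        AS reposted_by,
           p.post_content,
           rp.repost_date        AS shown_date
    FROM repost_user AS rp
    JOIN posts AS p             ON p.post_id   = rp.post_id
    JOIN user_relationship AS r ON r.following = rp.repost_user
    WHERE r.follower = @user_id

    ORDER BY shown_date DESC;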
Need to install Oracle Express 11g Release 2 on a Windows 7 64-bit laptop Posted: 28 Sep 2013 07:24 PM PDT I need the Oracle 11g Release 2 sample schemas (HR, OE, etc.) in order to do most of the available online tutorials. I was hoping to install Oracle Express Edition on my Windows 7 laptop to get these; but I have never heard of anybody successfully installing Oracle XE on a 64-bit Windows platform. Is there a version of Oracle XE 11g R2 available for Windows 7? And if so, could you please point me to it? Thanks... |
Help my database isn't performing fast enough! 100M Merge with 6M need < 1 hour! Posted: 28 Sep 2013 06:24 PM PDT I have a server right now receiving more raw data files in one hour than I can upsert (insert -> merge) in an hour. I have a table with 100M (rounded up) rows; the table is currently MyISAM and has 1000 columns, mostly boolean and a few varchar. Currently the fastest way I've found to get the information into my DB is: process the raw data into CSV files, LOAD DATA INFILE into a rawData table, insert the rawData table into Table1 (on duplicate key running my function), truncate rawData, repeat. This worked fine until I was merging 6M+ rows into 100M rows and expecting it to take under an hour. I have 16G of RAM, so I set my key buffer to 6G, my query cache pool to 16M, and my query cache limit to 10M. I would just replace the information, however it has to be an upsert: update the fields that are true if the row exists, and insert if it does not. Things I'm looking into at the moment: possibly switching the table to InnoDB? (Not sure about the performance; the insert into an empty table is fine, it's the merge that's slow.) Maybe allowing more table cache? Or even query cache? Merge code:
To compare my two bool columns. Update:
|
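For reference, a sketch of the load-then-merge pattern described above, with placeholder file, table, and column names (the actual merge code is not reproduced here):

    -- Stage the CSV into the raw table.
    LOAD DATA INFILE '/data/incoming/batch_001.csv'   -- placeholder path
    INTO TABLE rawData
    FIELDS TERMINATED BY ',' ENCLOSED BY '"'
    LINES TERMINATED BY '\n';

    -- Upsert into the main table: insert new keys, OR the boolean flags on duplicates.
    INSERT INTO Table1 (id, flag_a, flag_b, note)
    SELECT id, flag_a, flag_b, note
    FROM rawData
    ON DUPLICATE KEY UPDATE
        flag_a = flag_a OR VALUES(flag_a),
        flag_b = flag_b OR VALUES(flag_b),
        note   = VALUES(note);

    TRUNCATE TABLE rawData;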
USER_ID field in alert logs (also in V$DIAG_ALERT_EXT view) Posted: 28 Sep 2013 12:24 PM PDT Does anyone know what triggers the USER_ID field in the log.xml to be populated? The value also exists in the V$DIAG_ALERT_EXT view. I've found by observing the logs that if a temp tablespace fills up, it will log the USER_ID of the problematic SQL statement causing the issue. But other than that, it appears that value is always NULL. |
How can I generate a usage log? Posted: 28 Sep 2013 05:48 PM PDT I'm trying to automatically generate an events log report in SQL Server that records events including successful logins, account creation/deletion, an account being unlocked, and accounts locked due to the maximum number of login attempts being reached. This query will also be customized for after-hours logins. I already know that some of the data requested can be generated by using SQL Profiler or C2 audit and the event logs. However, I cannot get to a SQL script that would allow me to run the reports at any given time. |
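If SQL Server Audit (available since 2008) is an option, it can write login-related events to a file that is queryable at any time with fn_get_audit_file. The names and path below are placeholders.

    -- Define the audit target and the events to capture.
    USE master;
    CREATE SERVER AUDIT LoginAudit
        TO FILE (FILEPATH = 'D:\Audit\');              -- placeholder path

    CREATE SERVER AUDIT SPECIFICATION LoginAuditSpec
        FOR SERVER AUDIT LoginAudit
        ADD (SUCCESSFUL_LOGIN_GROUP),
        ADD (FAILED_LOGIN_GROUP)
        WITH (STATE = ON);

    ALTER SERVER AUDIT LoginAudit WITH (STATE = ON);

    -- Report over the captured events whenever needed.
    SELECT event_time, action_id, server_principal_name, succeeded
    FROM sys.fn_get_audit_file('D:\Audit\*.sqlaudit', DEFAULT, DEFAULT);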
Bitmask Flags with Lookup Tables Clarification Posted: 28 Sep 2013 08:24 PM PDT I've received a dataset from an outside source which contains several bitmask fields stored as varchars. They range in length from 3 to 21 bits. I need to be able to run SELECT queries based on these fields using AND or OR logic. Using a calculated field where I convert the bits into an integer value, I can easily find rows that match an AND query by using a simple WHERE rowvalue = requestvalue, but the OR logic would require using bitwise & to find matching records. Given that I would need to work with several of these columns and select from hundreds of millions of records, I feel there would be a huge performance hit when doing bitwise & operations to filter my SELECT results. I came across this answer while searching and it looked like it might fit my needs, but I need some clarification on how it is implemented. Is this as simple as creating a lookup table that has all possible search conditions? Example for 3 bits using (a & b) (Edit: wrong bitwise op). The author mentions that it's counter-intuitive initially, but I can't help but feel I'm interpreting the solution incorrectly, as this would give me a single lookup table with likely billions of rows. Any clarifications on the answer I linked above, or other suggestions that would preserve the existing database, are appreciated. Edit: a more concrete example using small data.
Any number of flags, from all to none, can be flipped, and results must be filtered where the selection matches all flags (using an exact value comparison) or at least one (using bitwise &), as sketched below. Adding a single calculated column for each bitmask is OK, but adding a column for each bit for more than 100 bits, coupled with how to insert/update the data, is why I'm trying to find alternative solutions. |
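For clarity, the two filters can be stated directly against the computed integer column; the table, column, and mask names here are invented. The exact-value comparison can use an index, while the bitwise forms generally cannot, which is what pushes people toward lookup tables or one-column-per-flag designs.

    -- "Match ALL selected bits": every bit in the mask must be set in the row.
    SELECT *
    FROM records
    WHERE flags & @mask = @mask;

    -- "Match AT LEAST ONE selected bit": any overlap between mask and row.
    SELECT *
    FROM records
    WHERE flags & @mask <> 0;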
SQL Server 2012 catalog.executions to sysjobhistory - any way to join them? Posted: 28 Sep 2013 03:24 PM PDT I have exhausted my resources and can't find a foolproof way to join the ssisdb.catalog tables to the jobs that run them. I'm trying to write some custom sprocs to monitor my execution times and rows written from the catalog tables, and it would be greatly beneficial to be able to tie them together with the calling job. |
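There is no built-in key linking the two, so any join tends to be approximate. One hedged sketch matches SSISDB executions to Agent job steps by overlapping start times, using the msdb.dbo.agent_datetime helper to convert run_date/run_time; the 30-second window is arbitrary.

    SELECT e.execution_id,
           e.package_name,
           e.start_time,
           e.end_time,
           j.name AS job_name,
           h.step_name
    FROM SSISDB.catalog.executions AS e
    JOIN msdb.dbo.sysjobhistory AS h
      ON msdb.dbo.agent_datetime(h.run_date, h.run_time)
         BETWEEN DATEADD(SECOND, -30, CAST(e.start_time AS DATETIME))
             AND DATEADD(SECOND,  30, CAST(e.start_time AS DATETIME))
    JOIN msdb.dbo.sysjobs AS j
      ON j.job_id = h.job_id
    WHERE h.step_id > 0;   -- skip the job-outcome row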
SQLite writing a query where you select only rows nearest to the hour Posted: 28 Sep 2013 05:24 PM PDT I've got a set of data where readings have been taken approximately every minute for about three months, with the time stored as a Unix timestamp. There is no regularity to the timestamps (the zero minute of the hour may not contain a reading; one measurement could be at 00:59:55 and the next at 01:01:01) and days may be missing. What I need is the row nearest to each hour, with the timestamp rounded to the hour, as long as the nearest value is not more than 30 minutes away from the hour. Where a matching hour cannot be found, it would be helpful if the query could include a time but no value. I realise I'm asking a lot, but this would be incredibly helpful. Thanks for taking the time to read this. James. BTW, the table is just PK (autoincrement), timestamp, value, sensor id (FK). I've tried this to get the data out: |
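A sketch that leans on SQLite's documented bare-column behaviour with MIN() (the other selected columns come from the row holding the minimum). It assumes a table readings(id, ts, value, sensor_id) with ts as an integer Unix timestamp; rounding to the nearest hour means the chosen row is automatically within 30 minutes. Hours with no readings simply produce no row; emitting a placeholder row would need a separate list of hours to LEFT JOIN against.

    SELECT ((ts + 1800) / 3600) * 3600                AS hour_ts,     -- nearest hour as a Unix timestamp
           value,
           MIN(ABS(ts - ((ts + 1800) / 3600) * 3600)) AS seconds_off  -- distance from the hour
    FROM readings
    WHERE sensor_id = 1
    GROUP BY (ts + 1800) / 3600
    ORDER BY hour_ts;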
Mongo replication lag slowly increasing Posted: 28 Sep 2013 09:24 AM PDT I am running a replica set in production with slaveOk = false, using Mongo 2.0.7 on AWS. The replication lag on one of the servers is close to 58 hours. This replication lag sometimes decreases (at a very slow rate) and sometimes increases, but overall it is growing by 1-2 hours per day.
One more weird thing I noted about this (secondary) server is that MMS shows its version as 2.2.1 and its type as a 'standalone' server, which is not the case (cross-checked using the db.version() and rs.status() commands). |
Unable to start Oracle Database Server : Get an error as I try to start it Posted: 28 Sep 2013 10:24 AM PDT I just installed Oracle Database Express Edition 11g Release 2 for Windows. It created a shortcut icon on the desktop, but when I click this icon I see a dialog box with an error. What is it? How do I start my Oracle Database Server? |
SQL Server replication subscriptions marked as inactive Posted: 28 Sep 2013 11:24 AM PDT Is there any way to force SQL Server NOT to mark subscriptions as inactive, ever? It happens sporadically when there are connection issues and I don't want to have to reinitialize the subscription every time. Note, I'm not talking about the subscriptions being marked as expired...just as inactive. Thank you. |
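Subscriptions are typically deactivated when they fall outside the distribution database's retention window, so the closest supported lever (there is no true "never") is to widen that window. A sketch, run at the distributor; the 96-hour value is only illustrative:

    -- Raise the maximum distribution retention (in hours).
    EXEC sp_changedistributiondb
         @database = N'distribution',
         @property = N'max_distretention',
         @value    = 96;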
What's the difference between a temp table and table variable in SQL Server? Posted: 28 Sep 2013 09:18 AM PDT This seems to be an area with quite a few myths and conflicting views. So what is the difference between a table variable and a local temporary table in SQL Server? |
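For readers new to the two forms, a minimal sketch of the syntax difference; the behavioural differences (statistics, transactions, recompiles, scope) are what the answers usually focus on.

    -- Local temporary table: stored in tempdb, visible for the whole session, has column statistics.
    CREATE TABLE #orders_tmp (order_id INT PRIMARY KEY, total MONEY);
    INSERT INTO #orders_tmp VALUES (1, 9.99);

    -- Table variable: also stored in tempdb, but scoped to the batch and without column statistics.
    DECLARE @orders_var TABLE (order_id INT PRIMARY KEY, total MONEY);
    INSERT INTO @orders_var VALUES (1, 9.99);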
Is nested view a good database design? Posted: 28 Sep 2013 10:51 AM PDT I read somewhere, a long time ago, that we should not allow nested views in SQL Server. I am not sure of the reason why, or I might be remembering the statement incorrectly. Students Teachers Schools At my workplace, I have investigated one of our in-house database applications. Checking through the objects, I found that there are two or three layers of views stacked on each other, which reminded me of what I read in the past. Can anyone help explain it? If it is not OK to do so, I want to know whether it is limited to just SQL Server or applies to database design in general. Additional info: I have updated with an example from my company, changed a bit to be more general and without too many technical details (there were too many columns in the example). Mostly the nested views we use are based on abstract or aggregated views. For example, we have a large student table with hundreds of columns. Say, |
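A small sketch of what "nested" means here, with invented names: a base view that abstracts a wide table, and a second view that aggregates the first.

    -- Layer 1: abstract away the wide Student table.
    CREATE VIEW dbo.StudentCore AS
    SELECT StudentId, SchoolId, EnrolledYear, IsActive
    FROM dbo.Student;

    -- Layer 2: aggregate the first view (this is the nested part).
    CREATE VIEW dbo.ActiveStudentsPerSchool AS
    SELECT SchoolId, COUNT(*) AS ActiveStudents
    FROM dbo.StudentCore
    WHERE IsActive = 1
    GROUP BY SchoolId;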