[how to] How can row estimates be improved in order to reduce chances of spills to tempdb
- How can row estimates be improved in order to reduce chances of spills to tempdb
- How do I model the database for an application containing canvases which are divided into building blocks?
- Help my database isn't performing fast enough! 100M Merge with 6M need < 1 hour!
- Should numeric identifiers like serial numbers be stored as integers?
- Security Concerns of SQL Server Express for Web e-Voting web application
- financial transactions with and without commissions
- Does a SQL Server Job Run on the Server or the Local Machine?
- MSSQL compound query [migrated]
- Are there design patterns/best practices in writing dynamic SQL in stored procedures?
- Does Change Data Capture (CDC) work in a SQL Server 2012 AlwaysOn Failover Cluster Instances setup?
- Creating Superset and Subset table with relationship mapping
- Rackspace Cloud Databases: incremental backup
- software to automatically create document specific glossary? [closed]
- USER_ID field in alert logs (also in V$DIAG_ALERT_EXT view)
- Parent-Child Tree Hierarchical ORDER
- How to see what is cached in memory in SQL server 2008?
- How to solve "The table ... is full" with "innodb_file_per_table"?
- How can I dynamically back up all SSAS databases on a given instance?
- Connection pools being reset with Error: 18056, Severity: 20, State: 46. & Perfmon Counters not showing
- Bitmask Flags with Lookup Tables Clarification
- SQL Server 2012 catalog.executions to sysjobhistory - any way to join them?
- BIT columns all "1" after a phpMyAdmin export/import
- SQLite writing a query where you select only rows nearest to the hour
- Breaking Semisynchronous Replication in MySQL 5.5
- Unable to start Oracle Database Server : Get an error as I try to start it
- Postgres: Can the archive command for master and standby point to the same directory?
- SQL Server replication subscriptions marked as inactive
- How do I configure multi master replication on WAMP?
- Initializing Transactional Replication From Backup
How can row estimates be improved in order to reduce chances of spills to tempdb Posted: 31 May 2013 06:01 PM PDT I notice that when there are spill-to-tempdb events (causing slow queries), the row estimates are often way off for a particular join. I've seen spill events occur with merge and hash joins, and they often increase the runtime 3x to 10x. This question concerns how to improve row estimates, under the assumption that doing so will reduce the chance of spill events. The actual number of rows is 40k. For one query, the plan shows a bad row estimate (11.3 rows); for another, the plan shows a good row estimate (56k rows). Can statistics or hints be added to improve the row estimates in the first case? I tried adding statistics with particular filter values (property = 3330) but either could not get the combination correct, or perhaps it is being ignored because the ObjectId is unknown at compile time and it might be choosing an average over all ObjectIds. Is there any mode where it would run the probe query first and then use that to determine the row estimates, or must it fly blind? This particular property has many values (40k) on a few objects and zero on the vast majority. I would be happy with a hint where the maximum expected number of rows for a given join could be specified. This problem generally haunts me because some parameters may be determined dynamically as part of the join or would be better placed within a view (no support for variables). Are there any settings that can be adjusted to minimize the chance of spills to tempdb (e.g. min memory per query)? A robust plan hint had no effect on the estimate.
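For what it's worth, two remedies commonly tried for this kind of skew are filtered statistics on the hot predicate value and a recompile hint so the optimizer can sniff the runtime value. A minimal sketch follows; the table and column names (dbo.ObjectProperties, Property, ObjectId) are assumptions based on the question, not the actual schema.

    -- Assumed schema, for illustration only.
    CREATE TABLE dbo.ObjectProperties (
        ObjectId int NOT NULL,
        Property int NOT NULL,
        Value    int NOT NULL
    );

    -- Filtered statistics scoped to the skewed value, so the optimizer sees
    -- the per-ObjectId distribution for Property = 3330 rows only.
    CREATE STATISTICS st_Property_3330
        ON dbo.ObjectProperties (ObjectId)
        WHERE Property = 3330;

    DECLARE @ObjectId int = 42;

    -- OPTION (RECOMPILE) embeds the runtime value of @ObjectId at compile
    -- time instead of estimating from the average across all ObjectIds.
    SELECT op.ObjectId, op.Value
    FROM dbo.ObjectProperties AS op
    WHERE op.Property = 3330
      AND op.ObjectId = @ObjectId
    OPTION (RECOMPILE);

The recompile hint only helps when the value arrives in a variable or parameter; if it is produced mid-join, the filtered statistics are the piece that can still apply.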
Posted: 31 May 2013 04:40 PM PDT How should I model my database if I want the following functionality: I want to be able to create a canvas which has many building blocks (something like its sections). There are many types of canvases, and each type has a different set of blocks in a different order. There are not too many types, and all of them are predefined by the application (users cannot edit the types; they are added to the database as seed data since they are predefined templates, see the example in the link given later). Users can add and remove canvases. Each building block has its own predefined title and description, but also a specific value which the user can input for each canvas he has created. The use case is very similar to https://bmfiddle.com/f/#/ , except that there only one canvas of each type exists, while in my case the user should be able to add more canvases of the same type. (If you could show me how to model it for the bmfiddle.com use case, that could be enough, as it's not too different.) Thanks a lot for even reading this question; it would be really helpful to see an ER diagram or something similar of a possible solution to this problem.
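One possible shape for this, sketched under assumed names (not a definitive design): canvas types and their block definitions are seed data; each user-created canvas references a type, and the user-entered values hang off the (canvas, block definition) pair.

    -- Predefined, seeded by the application:
    CREATE TABLE CanvasType (
        CanvasTypeId int PRIMARY KEY,
        Name         varchar(100) NOT NULL
    );

    CREATE TABLE BlockDefinition (
        BlockDefinitionId int PRIMARY KEY,
        CanvasTypeId int NOT NULL REFERENCES CanvasType (CanvasTypeId),
        Title        varchar(100) NOT NULL,
        Description  varchar(1000) NOT NULL,
        SortOrder    int NOT NULL      -- block order within the canvas type
    );

    -- Created by users; several canvases of the same type are allowed:
    CREATE TABLE Canvas (
        CanvasId     int PRIMARY KEY,
        CanvasTypeId int NOT NULL REFERENCES CanvasType (CanvasTypeId),
        UserId       int NOT NULL
    );

    -- The user-entered value for one block of one canvas:
    CREATE TABLE BlockValue (
        CanvasId          int NOT NULL REFERENCES Canvas (CanvasId),
        BlockDefinitionId int NOT NULL REFERENCES BlockDefinition (BlockDefinitionId),
        Value             varchar(4000) NULL,
        PRIMARY KEY (CanvasId, BlockDefinitionId)
    );

Keeping titles and descriptions only on BlockDefinition keeps the seed data in one place; the bmfiddle single-canvas case is just this model with a uniqueness rule on (UserId, CanvasTypeId).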
Help my database isn't performing fast enough! 100M Merge with 6M need < 1 hour! Posted: 31 May 2013 04:47 PM PDT I have a server right now receiving more raw data files in one hour than I can upsert (insert -> merge) in an hour. I have a table with 100M (rounded up) rows. The table is currently MyISAM. It has 1000 columns, mostly boolean and a few varchar. Currently the fastest way I've found to get the information into my DB is:

- Process raw data into CSV files.
- LOAD DATA INFILE into the rawData table.
- Insert the rawData table into Table1 (on duplicate key, run my function).
- Truncate rawData.
- Repeat.

This worked fine until I was merging 6M+ rows into 100M rows and expecting it to take under an hour. I have 16 GB of RAM, so I set my Key_Buffer_Pool to 6G, my query cache pool to 16M, and my query cache limit to 10M. I would just replace the information, but it has to be an upsert: update the fields that are true if the row exists, and insert if it does not. Things I'm looking into at the moment:

- Possibly switching the table to InnoDB? Not sure about the performance; the insert into an empty table is fine, it's the merge that's slow.
- Maybe allowing more table cache? Or even query cache?

Tags: mysql, sql, mysqli, innodb, myisam. My merge code compares my two bool columns and updates accordingly.
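For reference, a hedged sketch of the staging-plus-upsert step described above, in MySQL syntax. The file path, table, and column names are placeholders, and GREATEST() stands in for the questioner's "keep true fields true" function.

    -- Stage the CSV (path and format are placeholders):
    LOAD DATA INFILE '/tmp/batch.csv'
    INTO TABLE rawData
    FIELDS TERMINATED BY ',' ENCLOSED BY '"'
    LINES TERMINATED BY '\n';

    -- Upsert: insert new rows; for existing rows keep a flag at 1 if either
    -- the stored or the incoming value is 1 (a boolean OR).
    INSERT INTO Table1 (id, flag_a, flag_b)
    SELECT id, flag_a, flag_b
    FROM rawData
    ON DUPLICATE KEY UPDATE
        flag_a = GREATEST(flag_a, VALUES(flag_a)),
        flag_b = GREATEST(flag_b, VALUES(flag_b));

    -- Reset the staging table for the next batch:
    TRUNCATE TABLE rawData;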
Should numeric identifiers like serial numbers be stored as integers? Posted: 31 May 2013 04:37 PM PDT Many of our databases store numeric tracking codes (like serial numbers) as integers. There is little chance of these codes ever getting alphabetic characters, but it still seems like an incorrect data type, since you would never perform mathematical operations on them. What is the appropriate data type for these kinds of identifiers?
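If the codes are identifiers rather than quantities, fixed-width character data is one common choice: it preserves leading zeros and leaves room for future formats. A small T-SQL sketch, with the table and column names assumed:

    CREATE TABLE Device (
        -- Digits-only serial stored as text: no arithmetic is ever done on
        -- it, and leading zeros survive. The pattern rejects any non-digit.
        SerialNumber char(12) NOT NULL PRIMARY KEY
            CHECK (SerialNumber NOT LIKE '%[^0-9]%')
    );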
Security Concerns of SQL Server Express for Web e-Voting web application Posted: 31 May 2013 03:47 PM PDT I am building a database-driven web application. The web application is basically an e-voting website, where users can register and vote on things that concern their community, etc. The database engine in use is SQL Server Express 2008. I know that my domain model and application code do not allow any silly security breaches, e.g. a user viewing the votes of other users. But that aside, are there (free) things I can do to make my database secure, so that even if a hacker gets access to my DB he can't do anything with it, or at least finds it very hard? I realise this question is a little open-ended, but some simple tips would be greatly appreciated.
financial transactions with and without commissions Posted: 31 May 2013 12:38 PM PDT I'm building a gambling website. All financial transactions from and to external sources (credit cards, money transfer systems) are subject to commission. The amount of the commission depends on the source selected and, moreover, it changes periodically. These external commissions go to the payment systems. Some internal transactions are also subject to commission; internal commissions depend on the type of game the user is playing, and they go to the gambling website as income. I'm looking for the best way to store the history of transactions and commissions. Should they be stored in one TRANSACTIONS table (with a connection between payment and commission), or should I have a separate COMMISSIONS table? The purpose of storing them is to present the history of transactions to the user and, of course, internal bookkeeping (in order to always know how much we have paid to payment systems and how much we have earned ourselves). Any advice greatly appreciated.
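One common shape for this, sketched with assumed names rather than taken from the question: keep commissions as ledger rows in the same table, each linked to the transaction it applies to, so both the user-facing history and the bookkeeping sums fall out of one query.

    CREATE TABLE Transactions (
        TransactionId  int PRIMARY KEY,
        UserId         int NOT NULL,
        Kind           varchar(20) NOT NULL,   -- 'deposit', 'payout', 'bet', 'commission', ...
        Amount         decimal(12,2) NOT NULL, -- negative for money out
        -- For a commission row, the payment or game transaction it belongs to:
        RelatedTransactionId int NULL REFERENCES Transactions (TransactionId),
        CreatedAt      datetime NOT NULL
    );

    -- Bookkeeping example: total commission recorded over a period.
    SELECT SUM(Amount)
    FROM Transactions
    WHERE Kind = 'commission';

A separate COMMISSIONS table works too; the single-ledger layout mainly buys a uniform history view, at the cost of a Kind discriminator column.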
Does a SQL Server Job Run on the Server or the Local Machine? Posted: 31 May 2013 01:05 PM PDT Does a SQL Server job always run on the server, or will it run in the context of the local machine, similar to a DTS package run from Enterprise Manager on the user's machine?
MSSQL compound query [migrated] Posted: 31 May 2013 12:28 PM PDT Hello and thanks for looking. I wrote a simple application for my softball team, and I am having some issues with the query I need to rewrite. The query simply gets the upcoming week's game schedule. What I need to do (and had forgotten) is to also check the RescheduleDate field; we currently have 1 or 2 games that are rescheduled. So somehow I need to modify the query to check whether the reschedule date holds a valid date and not "TBD" or NULL. The second problem is that our games are on Fridays; on the day of the game, the site switches to next week's game, and I need it to not change until the day after the game, which is Saturday. I tried adding 1 day, but that did not work. I would surely appreciate some help with this.
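A sketch of the two fixes, with the schema assumed (a Schedule table whose RescheduleDate is text that may hold 'TBD' or NULL, and whose remaining values convert to dates): fall back to GameDate unless a usable reschedule date exists, and compare against today's date with no time part so Friday's game shows all day Friday and rolls over on Saturday.

    SELECT GameDate, RescheduleDate, Opponent
    FROM Schedule
    -- NULLIF turns 'TBD' into NULL; COALESCE then falls back to GameDate.
    -- Comparing to CAST(GETDATE() AS date) strips the time of day, so a
    -- game dated today is still >= today all day long.
    WHERE COALESCE(NULLIF(RescheduleDate, 'TBD'), GameDate)
          >= CAST(GETDATE() AS date)
    ORDER BY COALESCE(NULLIF(RescheduleDate, 'TBD'), GameDate);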
Are there design patterns/best practices in writing dynamic SQL in stored procedures? Posted: 31 May 2013 12:15 PM PDT I'm an experienced programmer, but I'm new to SQL/databases, so please bear with me :). One of my current tasks involves editing (and refactoring, if needed) a dynamically generated SQL statement in a stored procedure. Are there any best practices or design patterns I can follow for writing stored procedures in general, or better yet, stored procedures that generate long (300-line) dynamic SQL statements?
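One widely used pattern, sketched with illustrative names only: build the structure of the statement dynamically, but pass every value through sp_executesql parameters, which avoids injection and lets each query shape reuse its plan.

    -- Illustrative catch-all search procedure.
    CREATE PROCEDURE dbo.SearchOrders
        @CustomerId int = NULL,
        @Status     varchar(20) = NULL
    AS
    BEGIN
        DECLARE @sql nvarchar(max) =
            N'SELECT OrderId, CustomerId, Status FROM dbo.Orders WHERE 1 = 1';

        -- Only the *shape* of the query is concatenated, never the values.
        IF @CustomerId IS NOT NULL SET @sql += N' AND CustomerId = @CustomerId';
        IF @Status     IS NOT NULL SET @sql += N' AND Status = @Status';

        EXEC sys.sp_executesql @sql,
             N'@CustomerId int, @Status varchar(20)',
             @CustomerId = @CustomerId, @Status = @Status;
    END;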
Does Change Data Capture (CDC) work in a SQL Server 2012 AlwaysOn Failover Cluster Instances setup? Posted: 31 May 2013 04:00 PM PDT Does Change Data Capture (CDC) work in a SQL Server 2012 AlwaysOn Failover Cluster Instances setup? There would be two nodes in the AlwaysOn Failover Cluster Instances setup and two domain controllers. We are not using AlwaysOn Availability Groups. Will CDC work, and will it fail over?
Creating Superset and Subset table with relationship mapping Posted: 31 May 2013 09:52 AM PDT I have a requirement in one of my projects where different entities (Teacher, School, Student) store addresses in a single table and maintain their relationships. Examples: a school can add a teacher and store the teacher's address information; a teacher stores their own other address information; a teacher can add a student's address information; a student can add their own other address information; a teacher can attach their address information to the school entity. Addresses that one entity creates for another are visible to that entity and to the owner. For example, teacher addresses entered by a school are that school's records: other schools cannot view them, and the teacher can view but not edit them. Addresses a teacher creates and adds to the school entity are the teacher's private copies: the school can view but not edit them. The problem is how to map these tables: Students (ID), Teachers (ID), Schools (ID), Addresses (ID, StartDate, EndDate), and another table with the fields holding the relationships between the entities (e.g. who added the address and for whom; private vs public).
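One possible mapping, sketched under assumed names: a single Addresses table plus a link table that records which entity the address describes, who created it, and whether it is a private copy; the view/edit rules then become checks against CreatedBy and IsPrivate.

    CREATE TABLE Addresses (
        AddressId int PRIMARY KEY,
        Line1     varchar(200) NOT NULL,
        StartDate date NULL,
        EndDate   date NULL
    );

    CREATE TABLE EntityAddresses (
        AddressId     int NOT NULL REFERENCES Addresses (AddressId),
        EntityType    varchar(10) NOT NULL,  -- 'School' | 'Teacher' | 'Student'
        EntityId      int NOT NULL,          -- id in the matching entity table
        CreatedByType varchar(10) NOT NULL,  -- who entered the address
        CreatedById   int NOT NULL,
        IsPrivate     bit NOT NULL DEFAULT 0,
        PRIMARY KEY (AddressId, EntityType, EntityId)
    );

The (EntityType, EntityId) pair cannot be enforced with a plain foreign key; the usual trade-off is application-side checks or one link table per entity type.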
Rackspace Cloud Databases: incremental backup Posted: 31 May 2013 12:00 PM PDT Rackspace Cloud Databases are a little bit special in that we do not have access to the bin_log file; they are not servers in the traditional sense. I do have a couple of Cloud Servers with them, but I want to do incremental backups on my own and am currently looking into ways to do this specifically for Cloud Databases, so far without success. What options do we have if we want to do incremental backups on Cloud Databases? Note: I do not want to start a discussion about which method is best; I just want to know which methods are available for this scenario.
software to automatically create document specific glossary? [closed] Posted: 31 May 2013 09:27 AM PDT I'm not sure if this is the right place to ask this, but I couldn't find a more appropriate one. My team needs to create documentation for the software we produce, and the application area has a LOT of acronyms/terms. We'd like to maintain a single master glossary of terms/acronyms plus their definitions, and we'd like a software tool that searches for these terms in new Microsoft Word docs (PDFs too would be a bonus) and, for the terms/acronyms it finds, copies them out of the master glossary for inclusion as a document-specific glossary. Do any of you know a software tool that can do this? Thanks for your time.
USER_ID field in alert logs (also in V$DIAG_ALERT_EXT view) Posted: 31 May 2013 10:02 AM PDT Does anyone know what triggers the USER_ID field in log.xml to be populated? The value also exists in the V$DIAG_ALERT_EXT view. I've found by observing the logs that if a temp tablespace fills up, it will log the USER_ID of the problematic SQL statement causing the issue. But other than that, the value appears to always be NULL.
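A sketch of a query for surveying which events actually populate the field, using the column names the question cites from V$DIAG_ALERT_EXT:

    -- Pull only alert entries where USER_ID was populated, to see which
    -- message types set it.
    SELECT originating_timestamp, user_id, message_text
    FROM   v$diag_alert_ext
    WHERE  user_id IS NOT NULL
    ORDER  BY originating_timestamp;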
Parent-Child Tree Hierarchical ORDER Posted: 31 May 2013 10:00 AM PDT I have the following data in SQL Server 2008 R2. SQLFiddle

Schema:

    CREATE TABLE [dbo].[ICFilters](
        [ICFilterID] [int] IDENTITY(1,1) NOT NULL,
        [ParentID] [int] NOT NULL DEFAULT 0,
        [FilterDesc] [varchar](50) NOT NULL,
        [Active] [tinyint] NOT NULL DEFAULT 1,
        CONSTRAINT [PK_ICFilters] PRIMARY KEY CLUSTERED
        (
            [ICFilterID] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
                IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON,
                ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]

    INSERT INTO [dbo].[ICFilters]
    SELECT 0,'Product Type',1
    UNION ALL SELECT 1,'ProdSubType_1',1
    UNION ALL SELECT 1,'ProdSubType_2',1
    UNION ALL SELECT 1,'ProdSubType_3',1
    UNION ALL SELECT 1,'ProdSubType_4',1
    UNION ALL SELECT 2,'PST_1.1',1
    UNION ALL SELECT 2,'PST_1.2',1
    UNION ALL SELECT 2,'PST_1.3',1
    UNION ALL SELECT 2,'PST_1.4',1
    UNION ALL SELECT 2,'PST_1.5',1
    UNION ALL SELECT 2,'PST_1.6',1
    UNION ALL SELECT 2,'PST_1.7',0
    UNION ALL SELECT 3,'PST_2.1',1
    UNION ALL SELECT 3,'PST_2.2',0
    UNION ALL SELECT 3,'PST_2.3',1
    UNION ALL SELECT 3,'PST_2.4',1
    UNION ALL SELECT 14,'PST_2.2.1',1
    UNION ALL SELECT 14,'PST_2.2.2',1
    UNION ALL SELECT 14,'PST_2.2.3',1
    UNION ALL SELECT 3,'PST_2.8',1

Table:

    | ICFILTERID | PARENTID | FILTERDESC    | ACTIVE |
    --------------------------------------------------
    | 1          | 0        | Product Type  | 1      |
    | 2          | 1        | ProdSubType_1 | 1      |
    | 3          | 1        | ProdSubType_2 | 1      |
    | 4          | 1        | ProdSubType_3 | 1      |
    | 5          | 1        | ProdSubType_4 | 1      |
    | 6          | 2        | PST_1.1       | 1      |
    | 7          | 2        | PST_1.2       | 1      |
    | 8          | 2        | PST_1.3       | 1      |
    | 9          | 2        | PST_1.4       | 1      |
    | 10         | 2        | PST_1.5       | 1      |
    | 11         | 2        | PST_1.6       | 1      |
    | 12         | 2        | PST_1.7       | 0      |
    | 13         | 3        | PST_2.1       | 1      |
    | 14         | 3        | PST_2.2       | 0      |
    | 15         | 3        | PST_2.3       | 1      |
    | 16         | 3        | PST_2.4       | 1      |
    | 17         | 14       | PST_2.2.1     | 1      |
    | 18         | 14       | PST_2.2.2     | 1      |
    | 19         | 14       | PST_2.2.3     | 1      |
    | 20         | 3        | PST_2.8       | 1      |

Every row has the ID of its parent, and the root's ParentID is 0.

The Question: Is it possible to select all the rows in a tree-like manner? If so, how? When I say "tree-like", I mean recursively select the parent followed by all of its children, then all the children of each of those, and so on. My friends and I have tried, but we have fallen short of a working solution; we will keep trying. I am fairly new to SQL, so maybe this can be done easily and I'm just making things harder than necessary.

Example (desired) output:

    | ICFILTERID | PARENTID | FILTERDESC    | ACTIVE |
    --------------------------------------------------
    | 1          | 0        | Product Type  | 1      |
    | 2          | 1        | ProdSubType_1 | 1      |
    | 6          | 2        | PST_1.1       | 1      |
    | 7          | 2        | PST_1.2       | 1      |
    | 8          | 2        | PST_1.3       | 1      |
    | 9          | 2        | PST_1.4       | 1      |
    | 10         | 2        | PST_1.5       | 1      |
    | 11         | 2        | PST_1.6       | 1      |
    | 12         | 2        | PST_1.7       | 0      |
    | 3          | 1        | ProdSubType_2 | 1      |
    | 13         | 3        | PST_2.1       | 1      |
    | 14         | 3        | PST_2.2       | 0      |
    | 17         | 14       | PST_2.2.1     | 1      |
    | 18         | 14       | PST_2.2.2     | 1      |
    | 19         | 14       | PST_2.2.3     | 1      |
    | 15         | 3        | PST_2.3       | 1      |
    | 16         | 3        | PST_2.4       | 1      |
    | 20         | 3        | PST_2.8       | 1      |
    | 4          | 1        | ProdSubType_3 | 1      |
    | 5          | 1        | ProdSubType_4 | 1      |
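For what it's worth, the usual approach on SQL Server 2008 is a recursive CTE that carries a zero-padded sort path, along these lines (the padding width just needs to exceed the largest ID):

    WITH Tree AS (
        -- Anchor: the root rows (ParentID = 0).
        SELECT ICFilterID, ParentID, FilterDesc, Active,
               CAST(RIGHT('0000' + CAST(ICFilterID AS varchar(10)), 4)
                    AS varchar(400)) AS SortPath
        FROM dbo.ICFilters
        WHERE ParentID = 0

        UNION ALL

        -- Recursive step: append each child's padded ID to its parent's path.
        SELECT f.ICFilterID, f.ParentID, f.FilterDesc, f.Active,
               CAST(t.SortPath + '.'
                    + RIGHT('0000' + CAST(f.ICFilterID AS varchar(10)), 4)
                    AS varchar(400))
        FROM dbo.ICFilters AS f
        INNER JOIN Tree AS t ON f.ParentID = t.ICFilterID
    )
    SELECT ICFilterID, ParentID, FilterDesc, Active
    FROM Tree
    ORDER BY SortPath;

Ordering by the accumulated path sorts each child directly under its parent, which matches the desired output above.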
How to see what is cached in memory in SQL server 2008? Posted: 31 May 2013 09:59 AM PDT Is there a way to find out what is cached in SQL Server 2008 R2? I have found the following nice article: http://blog.sqlauthority.com/2010/06/17/sql-server-data-pages-in-buffer-pool-data-stored-in-memory-cache . However, I would like to know how much data (e.g. as a percentage and in KB) is cached for each table and index. Is there a simple way to obtain such data?
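A sketch of the standard DMV approach: count buffer-pool pages per table and index via sys.dm_os_buffer_descriptors (one page is 8 KB; run it in the database of interest).

    SELECT OBJECT_NAME(p.object_id)  AS table_name,
           i.name                    AS index_name,
           COUNT(*)                  AS cached_pages,
           COUNT(*) * 8              AS cached_kb
    FROM sys.dm_os_buffer_descriptors AS bd
    JOIN sys.allocation_units AS au
           ON bd.allocation_unit_id = au.allocation_unit_id
    JOIN sys.partitions AS p
           ON au.container_id = p.hobt_id
          AND au.type IN (1, 3)      -- in-row and row-overflow data
    JOIN sys.indexes AS i
           ON p.object_id = i.object_id AND p.index_id = i.index_id
    WHERE bd.database_id = DB_ID()
    GROUP BY p.object_id, i.name
    ORDER BY cached_pages DESC;

Dividing cached_pages by the total row count of the query gives the percentage share per object.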
How to solve "The table ... is full" with "innodb_file_per_table"? Posted: 31 May 2013 10:20 AM PDT I have a MySQL database that holds a large amount of data (100-200GB - a bunch of scientific measurements). The vast majority of the data is stored in one table
The file I'd be grateful for any ideas what could be the cause, or what to investigate. Update: I'm using My configuration is: Update: Calling Calling |
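One thing worth ruling out, sketched with a placeholder table name: a table created before innodb_file_per_table was enabled still lives in the shared ibdata tablespace, which can fill up regardless of the setting.

    -- Confirm the setting is actually in effect:
    SHOW VARIABLES LIKE 'innodb_file_per_table';

    -- Rebuilding moves a pre-existing table into its own .ibd file
    -- ('measurements' is a placeholder name):
    ALTER TABLE measurements ENGINE = InnoDB;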
How can I dynamically back up all SSAS databases on a given instance? Posted: 31 May 2013 01:11 PM PDT I want to dynamically back up all the databases on a given SSAS instance using a SQL Agent job (which would most likely involve executing an SSIS package). It is imperative that this is a dynamic process: if users add databases or cubes, I want to set up a job one time that can automatically detect all existing SSAS metadata. Unfortunately, I don't see anything out there that tells me how I can automatically and dynamically back up all of the databases on an SSAS instance in a clean way.
Connection pools being reset with Error: 18056, Severity: 20, State: 46. & Perfmon Counters not showing Posted: 31 May 2013 06:25 PM PDT We are using SQL authentication and .NET 4.0 connection strings to connect to a SQL Server 2012 SP1 Enterprise Edition instance on a Windows 2008 R2 Enterprise server. We use about 50 servers split into 8 different groups serving different parts of a website. Our website uses this SQL Server to log visit-tracking data, and over the last few days it has spat out messages about resetting connection pools; the error log reads: Error: 18056, Severity: 20, State: 46. After some digging I found a document from the CSS blog and one by the scholar Aaron Bertrand (I know the error number is different, but the failure ID is the same, and a number of the messages are identical). Failure ID 46 suggests that the login did not have permissions. Our logins default to the master database, and the DB name is specified in the connection string. I wanted to check the number of connection pools, so I checked all the counters in Perfmon for ".NET Data Provider for SqlServer". It only gave me the option of defaultdomain9675 for the instance, so I selected that, assuming it is a system-generated ID name for our datacentre network. Unfortunately, all of the counters read zero. On one of our other main servers the connection pools hover around 10, which is what I expected to see on a healthy server with that kind of load. My question is threefold:

1. Can anyone suggest why the Windows 2008 R2 server is not showing ".NET Data Provider for SqlServer" counters?
2. Has anyone experienced this? I believe that the login not having permissions is a red herring.
3. If different groups of web servers have the same connection string syntax but with slightly different whitespace, would this cause the server to use another connection pool?
Bitmask Flags with Lookup Tables Clarification Posted: 31 May 2013 07:17 PM PDT I've received a dataset from an outside source which contains several bitmask fields as varchars. They range in length from 3 to 21 values. I need to be able to run SELECT queries based on these fields using AND or OR logic. Using a calculated field where I convert the bits into an integer value, I can easily find rows that match an AND query with a simple WHERE rowvalue = requestvalue, but the OR logic would require using bitwise & to find matching records. Given that I would need to work with several of these columns and select from hundreds of millions of records, I feel there would be a huge performance hit when doing bitwise & operations to filter my SELECT results. I came across an answer from searching that looked like it may fit my needs, but I need some clarification on how it is implemented. Is it as simple as creating a lookup table that has all possible search conditions? Example for 3 bits using (a & b). (Edit: wrong bitwise op.) The author mentions that it's counter-intuitive initially, but I can't help feeling I'm interpreting the solution incorrectly, as this would give me a single lookup table with likely billions of rows. Any clarifications of the answer I linked above, or other suggestions that would preserve the existing database, are appreciated. Edit: a more concrete example using small data. Any number of flags, from all to none, can be flipped, and results must be filtered where the selection matches all (using exact value comparison) or at least one (using bitwise &). Adding a single calculated column for each bitmask is OK, but adding a column for each bit, for more than 100 bits, coupled with how to insert/update the data, is why I'm trying to find alternative solutions.
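As a concrete reading of the lookup-table idea (names assumed): the table enumerates (search mask, stored mask) pairs over the distinct mask values, not over the data rows, so for n bits it has at most 2^n x 2^n rows. That is tiny at 3 bits, and it is exactly why the approach explodes at 21 bits.

    -- Pairs of 3-bit masks that share at least one bit (the OR case).
    CREATE TABLE FlagMatch (
        search_mask int NOT NULL,
        stored_mask int NOT NULL,
        PRIMARY KEY (search_mask, stored_mask)
    );

    INSERT INTO FlagMatch (search_mask, stored_mask)
    SELECT s.v, m.v
    FROM  (VALUES (0),(1),(2),(3),(4),(5),(6),(7)) AS s(v)
    CROSS JOIN (VALUES (0),(1),(2),(3),(4),(5),(6),(7)) AS m(v)
    WHERE s.v & m.v > 0;

    -- At query time the per-row bitwise test becomes an indexed join:
    -- SELECT d.* FROM BigTable AS d
    -- JOIN FlagMatch AS fm ON fm.stored_mask = d.mask
    -- WHERE fm.search_mask = @search;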
SQL Server 2012 catalog.executions to sysjobhistory - any way to join them? Posted: 31 May 2013 01:16 PM PDT I have exhausted my resources and can't find a foolproof way to join the ssisdb.catalog tables to the jobs that run them. Trying to write some custom sprocs to monitor my execution times and rows written from the catalog tables, and it would be greatly beneficial to be able to tie them together with the calling job.
BIT columns all "1" after a phpMyAdmin export/import Posted: 31 May 2013 03:16 PM PDT I have to import data from a MySQL database using phpMyAdmin because that's the only interface my hosting provider supports. I have exported my database from my local machine using phpMyAdmin. After that I imported the script file to my host. All of data in the columns that is BIT type are changed to '1'. Hosting database version is 5.5.29. Is it a phpMyAdmin problem, or MySQL version problem? How can I fix this? |
SQLite writing a query where you select only rows nearest to the hour Posted: 31 May 2013 03:52 PM PDT I've got a set of data where readings have been taken approximately every minute for about three months, and the time has been stored as a Unix timestamp. There is no regularity to the timestamps (i.e. the zero minute of the hour may not contain a reading; one measurement could be at 00:59:55 and the next at 01:01:01), and days may be missing. What I need is the row nearest to each hour, with the timestamp rounded to the hour, as long as the nearest value is not more than 30 minutes away from the hour. Where a matching hour cannot be found, it would be helpful if the query could include a time but no value. I realise I'm asking a lot, but this would be incredibly helpful. Thanks for taking the time to read this. James. BTW, the table is just PK (autoincrement), timestamp, value, sensor id (FK).
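A sketch of one way to get the nearest-to-the-hour rows, assuming a table readings(id, ts, value, sensor_id) with ts as an integer Unix timestamp. Integer division rounds each ts to its nearest hour; SQLite's bare-column MIN() behaviour (3.7.11+) then keeps the closest row per hour, and HAVING drops hours whose nearest reading is more than 30 minutes away.

    SELECT ((ts + 1800) / 3600) * 3600                AS hour_ts,  -- nearest hour
           ts,
           value,
           MIN(ABS(ts - ((ts + 1800) / 3600) * 3600)) AS dist      -- seconds from the hour
    FROM readings
    GROUP BY hour_ts
    HAVING dist <= 1800;

This only returns hours that have data; emitting a row with a time but no value for missing hours would additionally need a generated series of hours to LEFT JOIN against.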
Breaking Semisynchronous Replication in MySQL 5.5 Posted: 31 May 2013 07:12 PM PDT I've set up semisynchronous replication between two MySQL 5.5 servers running on Windows 7. My application is running and updating the database on the master server, and the same updates are applied on the slave database server. But sometimes, for unknown reasons, replication breaks. When I check the replication status, it does not show what I expect: ideally, in semisynchronous replication, when the sync breaks the status should come up as OFF, since the master is not able to receive any acknowledgement from the slave. Please help us in this regard.
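For reference, a sketch of the standard 5.5 semisync plugin variables to watch on the master, including the timeout after which the master silently falls back to asynchronous replication:

    SHOW STATUS LIKE 'Rpl_semi_sync_master_status';     -- ON while semisync holds
    SHOW STATUS LIKE 'Rpl_semi_sync_master_clients';    -- connected semisync slaves
    SHOW VARIABLES LIKE 'rpl_semi_sync_master_timeout'; -- ms before async fallback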
Unable to start Oracle Database Server : Get an error as I try to start it Posted: 31 May 2013 09:16 AM PDT I just installed Oracle Database Express Edition 11g Release 2 for Windows. It created a shortcut icon on the desktop, but when I click the icon I see an error dialog box. What is it? How do I start my Oracle Database Server?
Postgres: Can the archive command for master and standby point to the same directory? Posted: 31 May 2013 02:15 PM PDT I use streaming replication and PITR (WAL files) in my cluster; currently I have different versions of postgresql.conf for the master and the standby server. The only difference between the files is the archive command, which points to a different directory on each server. I figure that if the standby is not generating any archives while in 'standby mode', I could use the same directory and have a single postgresql.conf?
SQL Server replication subscriptions marked as inactive Posted: 31 May 2013 10:16 AM PDT Is there any way to force SQL Server NOT to mark subscriptions as inactive, ever? It happens sporadically when there are connection issues, and I don't want to have to reinitialize the subscription every time. Note: I'm not talking about the subscriptions being marked as expired, just as inactive. Thank you.
How do I configure multi master replication on WAMP? Posted: 31 May 2013 09:37 AM PDT I want to configure multi-master replication on WAMP.
Initializing Transactional Replication From Backup Posted: 31 May 2013 08:17 PM PDT There is an option when setting up a publication for replication to Allow Initialization From Backup. We have been creating replicated databases for several years now and have always initialized from a backup, but have never set this flag (we just noticed it for the first time a couple of days ago). The replication has certainly worked without issue all this time. I've found a lot of hits explaining that this flag needs to be used, but none that explain why. Does anyone have an understanding of what it actually does? From my vantage point it doesn't seem to be necessary, but I'm thinking I must be missing something.