Friday, May 31, 2013

[how to] How can row estimates be improved in order to reduce chances of spills to tempdb



How can row estimates be improved in order to reduce chances of spills to tempdb

Posted: 31 May 2013 06:01 PM PDT

I notice that when there are spill-to-tempdb events (causing slow queries), the row estimates are often way off for a particular join. I've seen spill events occur with merge and hash joins, and they often increase the runtime 3x to 10x. This question concerns how to improve row estimates, under the assumption that doing so will reduce the chance of spill events.

The actual number of rows is 40k.

For this query, the plan shows a bad row estimate (11.3 rows):

select Value
  from Oav.ValueArray
 where ObjectId = (select convert(bigint, Value) NodeId
                     from Oav.ValueArray
                    where PropertyId = 3331
                      and ObjectId = 3540233
                      and Sequence = 2)
   and PropertyId = 3330
option (recompile);

For this query, the plan shows a good row estimate (56k rows):

declare @a bigint = (select convert(bigint, Value) NodeId
                       from Oav.ValueArray
                      where PropertyId = 3331
                        and ObjectId = 3540233
                        and Sequence = 2);

select Value
  from Oav.ValueArray
 where ObjectId = @a
   and PropertyId = 3330    -- Workpiece Side Active Objects
option (recompile);

Can statistics or hints be added to improve the row estimates for the first case? I tried adding statistics with particular filter values (PropertyId = 3330), but either I could not get the combination right, or the statistics are being ignored because ObjectId is unknown at compile time and the optimizer may be using an average over all ObjectIds.
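For reference, a minimal sketch of what a filtered statistics object on the hot property could look like; this is an illustration of the idea, not a confirmed fix, and the optimizer can generally only use it when the PropertyId = 3330 predicate is known at compile time:

-- Sketch only: filtered statistics scoped to the hot property.
CREATE STATISTICS st_ValueArray_Prop3330
ON Oav.ValueArray (ObjectId)
WHERE PropertyId = 3330
WITH FULLSCAN;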

Is there any mode where it would do the probe query first and then use that to determine the row estimates, or must it fly blindly?

This particular property has many values (40k) on a few objects and zero on the vast majority. I would be happy with a hint where the maximum expected number of rows for a given join could be specified. This is a generally haunting problem because some parameters may be determined dynamically as part of the join, or would be better placed within a view (which has no support for variables).

Are there any parameters that can be adjusted to minimize the chance of spills to tempdb (e.g. min memory per query)? The robust plan hint had no effect on the estimate.

How do I model the database for an application containing canvases which are divided into building blocks?

Posted: 31 May 2013 04:40 PM PDT

How should I model my database if I wanted to have this functionality:

I want to be able to create a canvas which has many building blocks (they are something like its sections). There are many types of canvases, and each of them has a different set of blocks in a different order. There are not too many types of canvases, and all of them are predefined by the application (users cannot edit the types; they are added to the database as seed data since they are predefined templates, see the example in the link given later). Users can add and remove canvases.

Each building block has its own title and description, which are predefined, but each block also has a specific value which the user can input for each of the canvases that he created.

The use case is very similar to https://bmfiddle.com/f/#/ , except that there can only be one canvas of each type there, while in my case the user should have the option to add more canvases of the same type. (But if you could show me how to model it for the use case at bmfiddle.com, that would be enough, as it's not too different.)

Thanks a lot for even reading this question; it would be really helpful to see an ER diagram or something similar of a possible solution for this problem.
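For illustration only, a minimal relational sketch of one way this could be modeled; all table and column names below are invented for the example, not taken from the question:

-- Canvas types and their block templates are seed data; canvases and block values belong to users.
CREATE TABLE CanvasType (
    CanvasTypeId int PRIMARY KEY,
    Name         varchar(100) NOT NULL
);

CREATE TABLE BlockTemplate (
    BlockTemplateId int PRIMARY KEY,
    CanvasTypeId    int NOT NULL REFERENCES CanvasType (CanvasTypeId),
    Title           varchar(100) NOT NULL,
    Description     varchar(1000) NULL,
    SortOrder       int NOT NULL           -- block order within the canvas type
);

CREATE TABLE Canvas (
    CanvasId     int PRIMARY KEY,
    CanvasTypeId int NOT NULL REFERENCES CanvasType (CanvasTypeId),
    UserId       int NOT NULL,             -- owner; several canvases of the same type are allowed
    Name         varchar(100) NOT NULL
);

CREATE TABLE BlockValue (
    CanvasId        int NOT NULL REFERENCES Canvas (CanvasId),
    BlockTemplateId int NOT NULL REFERENCES BlockTemplate (BlockTemplateId),
    Value           varchar(4000) NULL,    -- the user-entered content for that block
    PRIMARY KEY (CanvasId, BlockTemplateId)
);

CanvasType and BlockTemplate hold the predefined templates, while Canvas and BlockValue hold what users create; allowing several canvases of the same type falls out of Canvas having its own key.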

Help my database isn't performing fast enough! 100M Merge with 6M need < 1 hour!

Posted: 31 May 2013 04:47 PM PDT

I have a server that is currently receiving more raw data files in an hour than I can upsert (insert -> merge) in an hour.

I have a table with 100M (rounded up) rows. The table is currently MyISAM and has 1000 columns, mostly boolean plus a few varchar.

Currently the fastest way I've found to get the information into my DB is:

Process the raw data into CSV files. LOAD DATA INFILE into the rawData table. Insert the rawData table into Table1 (on duplicate key, apply my function). Truncate rawData. Repeat.

This worked fine until I started merging 6M+ rows into 100M rows and expecting it to take under an hour.

I have 16G of RAM, so I set my key buffer to 6G, my query cache size to 16M, and my query cache limit to 10M. I would just replace the information, but it has to be an upsert: update the fields that are true if the row exists, and insert if it does not.

Things I'm looking into at the moment: possibly switching the table to InnoDB? I'm not sure about the performance; the insert into an empty table is fine, it's the merge that's slow.

Maybe allowing more table cache? Or even query cache?

Merge Code:

b.3_InMarket = (b.3_InMarket OR r.3_InMarket),

This is how I compare my two bool columns.
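For context, a hedged sketch of what the full upsert could look like with INSERT ... ON DUPLICATE KEY UPDATE; rawData and Table1 are from the question, while Id stands in for whatever the real unique key is:

-- Sketch only: merge the staging table into the main table,
-- OR-ing the boolean flags together when the key already exists.
INSERT INTO Table1 (Id, 3_InMarket)            -- plus the other columns being loaded
SELECT r.Id, r.3_InMarket
FROM rawData AS r
ON DUPLICATE KEY UPDATE
    3_InMarket = (3_InMarket OR VALUES(3_InMarket));

On the right-hand side of the assignment, the bare column refers to the existing row and VALUES() to the incoming one, which matches the comparison shown above.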

Update

  • OK, I set up RAID 0.
  • Changed my query to lock the tables for writing when inserting.
  • When importing the CSV I'm disabling keys, then re-enabling them before the upsert.
  • Changed concurrent_insert to 2.

Should numeric identifiers like serial numbers be stored as integers?

Posted: 31 May 2013 04:37 PM PDT

Many of our databases store numeric tracking codes (like serial numbers) as integers. There is little chance of these codes ever getting alphabetic characters, but it still seems like an incorrect data type, since you would never perform mathematical operations on them.

What is the appropriate data type for these kinds of identifiers?

Security Concerns of SQL Server Express for Web e-Voting web application

Posted: 31 May 2013 03:47 PM PDT

I am building a database driven web application. The web application is basically an E-Voting website, where users can register and vote on things that concern their community ... etc.

The database engine in use is SQL Server Express 2008. I know that my domain model and application code do not allow any silly security breaches, e.g. a user viewing the votes of other users, etc.

But that aside, are there (free) things I can do to make my database secure, so that even if a hacker gets access to my DB he can't do anything with it, or at least finds it very hard?

I realise this question can be a little bit open ended, but some simple tips would be greatly appreciated.

financial transactions with and without commissions

Posted: 31 May 2013 12:38 PM PDT

I'm building a gambling website. All financial transactions from and to external sources (credit cards, money transfer systems) are subject to commission. The amount of the commission depends on the source selected, and moreover it changes periodically. These external commissions go to the payment systems.

Some internal transactions are also subject to commission. Internal commissions depend on the type of game the user is playing. These internal commissions go to the gambling website; they are its income.

I'm looking for the best way to store the history of transactions and commissions. Should they be stored in one TRANSACTIONS table (with a connection between payment and commission), or should I have a separate COMMISSIONS table?

The purpose of storing them is to present the history of transactions to the user and, of course, for internal bookkeeping (so we always know how much we have paid to the payment systems and how much we have earned ourselves).
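Purely as an illustration of the two-table variant (every name below is invented, not taken from the question):

-- One possible layout: each commission row points back at the transaction it was charged on.
CREATE TABLE Transactions (
    TransactionId bigint PRIMARY KEY,
    UserId        bigint NOT NULL,
    Amount        decimal(12,2) NOT NULL,
    Direction     varchar(10) NOT NULL,   -- 'IN', 'OUT' or 'INTERNAL'
    Source        varchar(50) NULL,       -- credit card, transfer system, game type
    CreatedAt     datetime NOT NULL
);

CREATE TABLE Commissions (
    CommissionId  bigint PRIMARY KEY,
    TransactionId bigint NOT NULL REFERENCES Transactions (TransactionId),
    Amount        decimal(12,2) NOT NULL,
    Beneficiary   varchar(20) NOT NULL    -- 'PAYMENT_SYSTEM' or 'SITE'
);

Keeping commissions in their own table makes the bookkeeping sums (per beneficiary, per period) straightforward, while transactions without a commission simply have no matching row.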

Any advice greatly appreciated.

Does a SQL Server Job Run on the Server or the Local Machine?

Posted: 31 May 2013 01:05 PM PDT

Does a SQL Server Job always run on the server or will it run in the context of the local machine, similar to a DTS package run from Enterprise Manager on the user's machine?
The job in question calls a DTS package. The package succeeds when run locally; the job fails whether run manually or scheduled.

MSSQL compound query [migrated]

Posted: 31 May 2013 12:28 PM PDT

Hello and thanks for looking.

I wrote a simple application for my softball team, and I am having some issues with a query I need to rewrite.

SELECT DISTINCT _StartDate, _RescheduleDate, _GameTime, _FieldNumber, _Notes
FROM   _Schedule
WHERE  (_StartDate >= DATEADD(wk, 0, { fn NOW() }))
  AND  (_StartDate <  DATEADD(wk, 1, { fn NOW() }))

This query simply gets the upcoming week's game schedule. What I need to do, and had forgotten, is to also check _RescheduleDate; we currently have 1 or 2 games that are rescheduled. So somehow I need to modify this to check whether the reschedule date holds a valid date rather than "TBD" or NULL.

The second problem is that our games are on Friday. On the day of the game, this changes the data on the website to next week's games, and I need it to not change until the day after the game, which is Saturday. I tried adding 1 day:

(wk, 1 +1d {fn NOW() }))   

But obviously this did not work, so I would surely appreciate some help with that.
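Not a definitive answer, but one way the query could be adjusted: use the reschedule date when it parses as a real date, and shift the window back a day so Friday's game stays visible until Saturday (this assumes _RescheduleDate is a character column that may contain 'TBD'):

-- Sketch: the effective game date is the reschedule date when valid, else the start date,
-- and the one-week window starts "yesterday" so it only rolls over the day after the game.
SELECT DISTINCT _StartDate, _RescheduleDate, _GameTime, _FieldNumber, _Notes
FROM   _Schedule
WHERE  COALESCE(CASE WHEN ISDATE(_RescheduleDate) = 1
                     THEN CONVERT(datetime, _RescheduleDate) END, _StartDate)
           >= DATEADD(dd, -1, { fn NOW() })
  AND  COALESCE(CASE WHEN ISDATE(_RescheduleDate) = 1
                     THEN CONVERT(datetime, _RescheduleDate) END, _StartDate)
           <  DATEADD(dd, 6, { fn NOW() });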

Are there design patterns/best practices in writing dynamic SQL in stored procedures?

Posted: 31 May 2013 12:15 PM PDT

I'm an experienced programmer, but I'm new to SQL/databases, so please bear with me :).

One of my current tasks involves editing (and refactoring, if needed) a dynamically generated SQL statement in a stored procedure. Are there any best practices or design patterns that I can follow for writing stored procedures in general, or better yet, stored procedures that generate long (300-line) dynamic SQL statements?
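One baseline that is usually cited (offered here as a sketch, not as the answer to this specific procedure): keep values out of the concatenated string and pass them as parameters through sp_executesql, concatenating only validated identifiers. The table and column names below are hypothetical:

-- Sketch: only the identifier is concatenated (after QUOTENAME); the value is a real parameter.
DECLARE @sql  nvarchar(max),
        @col  sysname      = N'CustomerName',   -- hypothetical; validate against sys.columns first
        @city nvarchar(50) = N'Portland';       -- hypothetical filter value

SET @sql = N'SELECT ' + QUOTENAME(@col) + N'
             FROM dbo.Customers
             WHERE City = @city;';

EXEC sys.sp_executesql @sql, N'@city nvarchar(50)', @city = @city;

This keeps plans reusable and removes the most common injection path; long 300-line generators tend to become much easier to refactor once every value flows through the parameter list.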

Does Change Data Capture (CDC) work in a SQL Server 2012 AlwaysOn Failover Cluster Instances setup?

Posted: 31 May 2013 04:00 PM PDT

Does Change Data Capture (CDC) work in a SQL Server 2012 AlwaysOn Failover Cluster Instances setup?

There would be two nodes in the AlwaysOn Failover Cluster Instances setup and two Domain Controllers.

We are not using AlwaysOn Availability Groups.

Will CDC work? And will it fail over?

Creating Superset and Subset table with relationship mapping

Posted: 31 May 2013 09:52 AM PDT

I have a requirement in one of my projects where different entities like Teacher, School, and Student store addresses in a single table and maintain their relationships.

Examples:

-A school can add a teacher and store the teacher's address information.

-A teacher stores their own other address information.

-A teacher can add a student's address information.

-A student can add their own other address information.

-A teacher can attach their address information to the School entity.

Addresses created by one entity for another are visible to that entity and to the owner. For example, teacher addresses entered by a school belong to that school; other schools cannot view them. The teacher can view them but cannot edit them. Likewise, if a teacher creates addresses that are added to school entities, they are the teacher's private copies; the school can view them but not edit them.

The problem is how to map these tables.

Students: ID

Teachers: ID

Schools: ID

Addresses: ID | StartDate | EndDate

AnotherTable: ID

Other fields hold the relationships between the entities (e.g. who added an address and for whom), both private and public.
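Purely as one possible mapping (every name below is invented), a link table can record both who an address belongs to and who created it:

-- Sketch: one address row, many links; each link says which entity the address is for,
-- which entity created it, and whether it is that creator's private copy.
CREATE TABLE Addresses (
    AddressId int PRIMARY KEY,
    Line1     varchar(200) NOT NULL,
    StartDate date NULL,
    EndDate   date NULL
);

CREATE TABLE AddressLinks (
    AddressLinkId int PRIMARY KEY,
    AddressId     int NOT NULL REFERENCES Addresses (AddressId),
    OwnerType     varchar(10) NOT NULL,   -- 'School' | 'Teacher' | 'Student'
    OwnerId       int NOT NULL,           -- ID in the matching entity table
    CreatedByType varchar(10) NOT NULL,
    CreatedById   int NOT NULL,
    IsPrivate     bit NOT NULL DEFAULT 1  -- visible only to the owner and the creator
);

The OwnerType/OwnerId pair cannot be enforced with an ordinary foreign key; if that matters, separate link tables per entity type (SchoolAddresses, TeacherAddresses, StudentAddresses) are the stricter alternative.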

Rackspace Cloud Databases: incremental backup

Posted: 31 May 2013 12:00 PM PDT

Rackspace Cloud Databases are a little bit special in that we do not have access to the bin_log file, they are not servers in the traditional sense. I do have a couple of Cloud Servers with them but I want to do incremental backups on my own and am currently looking into ways to do this specifically for Cloud Databases, so far without success.

What options do we have if we want to do incremental backups on Cloud Databases?

Note: I do not want to start a discussion about which method is best, I just want to know which methods are available for this scenario.

software to automatically create document specific glossary? [closed]

Posted: 31 May 2013 09:27 AM PDT

I'm not sure if this is the right place to ask this but I couldn't find a more appropriate one. My team needs to create documentation for the software we produce and the application area has a LOT of acronyms/terms. We'd like to maintain a single master glossary of terms/acronyms + their definitions, and we'd like to have a software tool that searches for these terms in new microsoft word docs (pdfs too would be a bonus) and for the terms/acronyms it finds it copies them out of the master glossary for inclusion as a document specific glossary. Do any of you know a software tool that can do this?

Thanks for your time.

USER_ID field in alert logs (also in V$DIAG_ALERT_EXT view)

Posted: 31 May 2013 10:02 AM PDT

Does anyone know what triggers the USER_ID field in the log.xml to be populated? The value also exists in the V$DIAG_ALERT_EXT view.

I've found by observing the logs that if a temp tablespace fills up, it will log the USER_ID of the problematic SQL statement causing the issue. But other than that, it appears that value is always NULL.

Parent-Child Tree Hierarchical ORDER

Posted: 31 May 2013 10:00 AM PDT

I have the following data in SQL Server 2008 R2. SQLFiddle

Schema:

  CREATE TABLE [dbo].[ICFilters](
      [ICFilterID] [int] IDENTITY(1,1) NOT NULL,
      [ParentID] [int] NOT NULL DEFAULT 0,
      [FilterDesc] [varchar](50) NOT NULL,
      [Active] [tinyint] NOT NULL DEFAULT 1,
      CONSTRAINT [PK_ICFilters] PRIMARY KEY CLUSTERED
      ( [ICFilterID] ASC )
      WITH (PAD_INDEX = OFF,
            STATISTICS_NORECOMPUTE = OFF,
            IGNORE_DUP_KEY = OFF,
            ALLOW_ROW_LOCKS = ON,
            ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
  ) ON [PRIMARY]

  INSERT INTO [dbo].[ICFilters]
  SELECT 0,'Product Type',1
  UNION ALL SELECT 1,'ProdSubType_1',1
  UNION ALL SELECT 1,'ProdSubType_2',1
  UNION ALL SELECT 1,'ProdSubType_3',1
  UNION ALL SELECT 1,'ProdSubType_4',1
  UNION ALL SELECT 2,'PST_1.1',1
  UNION ALL SELECT 2,'PST_1.2',1
  UNION ALL SELECT 2,'PST_1.3',1
  UNION ALL SELECT 2,'PST_1.4',1
  UNION ALL SELECT 2,'PST_1.5',1
  UNION ALL SELECT 2,'PST_1.6',1
  UNION ALL SELECT 2,'PST_1.7',0
  UNION ALL SELECT 3,'PST_2.1',1
  UNION ALL SELECT 3,'PST_2.2',0
  UNION ALL SELECT 3,'PST_2.3',1
  UNION ALL SELECT 3,'PST_2.4',1
  UNION ALL SELECT 14,'PST_2.2.1',1
  UNION ALL SELECT 14,'PST_2.2.2',1
  UNION ALL SELECT 14,'PST_2.2.3',1
  UNION ALL SELECT 3,'PST_2.8',1

Table:

  | ICFILTERID | PARENTID |    FILTERDESC | ACTIVE |
  --------------------------------------------------
  |          1 |        0 |  Product Type |      1 |
  |          2 |        1 | ProdSubType_1 |      1 |
  |          3 |        1 | ProdSubType_2 |      1 |
  |          4 |        1 | ProdSubType_3 |      1 |
  |          5 |        1 | ProdSubType_4 |      1 |
  |          6 |        2 |       PST_1.1 |      1 |
  |          7 |        2 |       PST_1.2 |      1 |
  |          8 |        2 |       PST_1.3 |      1 |
  |          9 |        2 |       PST_1.4 |      1 |
  |         10 |        2 |       PST_1.5 |      1 |
  |         11 |        2 |       PST_1.6 |      1 |
  |         12 |        2 |       PST_1.7 |      0 |
  |         13 |        3 |       PST_2.1 |      1 |
  |         14 |        3 |       PST_2.2 |      0 |
  |         15 |        3 |       PST_2.3 |      1 |
  |         16 |        3 |       PST_2.4 |      1 |
  |         17 |       14 |     PST_2.2.1 |      1 |
  |         18 |       14 |     PST_2.2.2 |      1 |
  |         19 |       14 |     PST_2.2.3 |      1 |
  |         20 |        3 |       PST_2.8 |      1 |

Every row has the ID of its parent, and the root's ParentID = 0. The FilterDescs are just sample descriptions, so I can't parse those for ordering.

The Question

Is it possible to select all the rows in a tree-like manner? If so, how? By 'tree-like', I mean recursively select the parent followed by all of its children, then all the children of each one of those, and so on. My friends and I have tried, but we have fallen short of a working solution and will keep trying. I am fairly new to SQL, so maybe this can be done easily and I'm just making things harder than necessary.

Example(desired) output:

  | ICFILTERID | PARENTID |    FILTERDESC | ACTIVE |
  --------------------------------------------------
  |          1 |        0 |  Product Type |      1 |
  |          2 |        1 | ProdSubType_1 |      1 |
  |          6 |        2 |       PST_1.1 |      1 |
  |          7 |        2 |       PST_1.2 |      1 |
  |          8 |        2 |       PST_1.3 |      1 |
  |          9 |        2 |       PST_1.4 |      1 |
  |         10 |        2 |       PST_1.5 |      1 |
  |         11 |        2 |       PST_1.6 |      1 |
  |         12 |        2 |       PST_1.7 |      0 |
  |          3 |        1 | ProdSubType_2 |      1 |
  |         13 |        3 |       PST_2.1 |      1 |
  |         14 |        3 |       PST_2.2 |      0 |
  |         17 |       14 |     PST_2.2.1 |      1 |
  |         18 |       14 |     PST_2.2.2 |      1 |
  |         19 |       14 |     PST_2.2.3 |      1 |
  |         15 |        3 |       PST_2.3 |      1 |
  |         16 |        3 |       PST_2.4 |      1 |
  |         20 |        3 |       PST_2.8 |      1 |
  |          4 |        1 | ProdSubType_3 |      1 |
  |          5 |        1 | ProdSubType_4 |      1 |
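For what it's worth, a recursive CTE sketch that should produce this kind of ordering on SQL Server 2008 R2. The zero-padded sort path is just one way to keep children directly under their parent:

-- Sketch: build a materialized path of padded IDs and order by it.
WITH Tree AS
(
    SELECT ICFilterID, ParentID, FilterDesc, Active,
           CAST(RIGHT('00000' + CAST(ICFilterID AS varchar(5)), 5) AS varchar(900)) AS SortPath
    FROM   dbo.ICFilters
    WHERE  ParentID = 0

    UNION ALL

    SELECT f.ICFilterID, f.ParentID, f.FilterDesc, f.Active,
           CAST(t.SortPath + '/' + RIGHT('00000' + CAST(f.ICFilterID AS varchar(5)), 5) AS varchar(900))
    FROM   dbo.ICFilters AS f
    JOIN   Tree AS t ON f.ParentID = t.ICFilterID
)
SELECT ICFilterID, ParentID, FilterDesc, Active
FROM   Tree
ORDER  BY SortPath;

Siblings here come out in ICFilterID order, which matches the desired output above; a different padding width or an explicit sort column would be needed if IDs grow past five digits or a custom sibling order is wanted.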

How to see what is cached in memory in SQL server 2008?

Posted: 31 May 2013 09:59 AM PDT

Is there a way to find out what is cached in SQL Server 2008 R2? I have found the following nice article: http://blog.sqlauthority.com/2010/06/17/sql-server-data-pages-in-buffer-pool-data-stored-in-memory-cache . However, I would like to know how much data (e.g. in percentage and KB) is stored in memory for each table and index. Is there some simple way to obtain such data?
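As a starting point (a sketch along the lines of the article above, not a complete answer): sys.dm_os_buffer_descriptors exposes which pages are in the buffer pool, and can be summed per table and index in the current database:

-- Sketch: pages currently cached per table/index in the current database.
-- au.type 1 and 3 are in-row and row-overflow data; LOB pages (type 2) join via partition_id instead.
SELECT o.name       AS table_name,
       i.name       AS index_name,
       COUNT(*)     AS cached_pages,
       COUNT(*) * 8 AS cached_kb
FROM   sys.dm_os_buffer_descriptors AS bd
JOIN   sys.allocation_units AS au ON bd.allocation_unit_id = au.allocation_unit_id
JOIN   sys.partitions       AS p  ON au.container_id = p.hobt_id AND au.type IN (1, 3)
JOIN   sys.objects          AS o  ON p.object_id = o.object_id
JOIN   sys.indexes          AS i  ON p.object_id = i.object_id AND p.index_id = i.index_id
WHERE  bd.database_id = DB_ID()
  AND  o.is_ms_shipped = 0
GROUP  BY o.name, i.name
ORDER  BY cached_pages DESC;

Dividing cached_kb by the object's total size (from sys.dm_db_partition_stats, for example) gives the percentage in memory.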

How to solve "The table ... is full" with "innodb_file_per_table"?

Posted: 31 May 2013 10:20 AM PDT

I have a MySQL database that holds a large amount of data (100-200GB, a bunch of scientific measurements). The vast majority of the data is stored in one table, Sample. Now I'm creating a slave replica of the database and wanted to take advantage of innodb_file_per_table during the process. So I set innodb_file_per_table in my slave configuration and imported the dump of the database. To my surprise, it failed with

ERROR 1114 (HY000) at line 5602: The table 'Sample' is full

The file Sample.ibd is currently about 93GB, with more than 600GB of free space available on the partition, so it's not a disk free-space issue. Nor does it seem to be hitting any kind of file-system limit (I'm using ext4).

I'd be grateful for any ideas what could be the cause, or what to investigate.


Update: I'm using mysql Ver 14.14 Distrib 5.1.66, for debian-linux-gnu (x86_64).

My configuration is:

# This will be passed to all mysql clients
# It has been reported that passwords should be enclosed with ticks/quotes
# escpecially if they contain "#" chars...
# Remember to edit /etc/mysql/debian.cnf when changing the socket location.
[client]
port            = 3306
socket          = /var/run/mysqld/mysqld.sock

# Here is entries for some specific programs
# The following values assume you have at least 32M ram

# This was formally known as [safe_mysqld]. Both versions are currently parsed.
[mysqld_safe]
socket          = /var/run/mysqld/mysqld.sock
nice            = 0

[mysqld]
#
# * Basic Settings
#
user            = mysql
pid-file        = /var/run/mysqld/mysqld.pid
socket          = /var/run/mysqld/mysqld.sock
port            = 3306
basedir         = /usr
datadir         = /home/var/lib/mysql
tmpdir          = /tmp
language        = /usr/share/mysql/english
skip-external-locking
#
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
bind-address            = 127.0.0.1
#
# * Fine Tuning
#
key_buffer              = 16M
max_allowed_packet      = 16M
thread_stack            = 192K
thread_cache_size       = 8
# This replaces the startup script and checks MyISAM tables if needed
# the first time they are touched
myisam-recover         = BACKUP
#max_connections        = 100
#table_cache            = 64
#thread_concurrency     = 10
#
# * Query Cache Configuration
#
query_cache_limit       = 1M
query_cache_size        = 16M
#
# * Logging and Replication
#
# Both location gets rotated by the cronjob.
# Be aware that this log type is a performance killer.
# As of 5.1 you can enable the log at runtime!
#general_log_file        = /var/log/mysql/mysql.log
#general_log             = 1

innodb_file_per_table

Update: Calling SELECT @@datadir; returns /home/var/lib/mysql/. Then df -h /home/var/lib/mysql/ gives 768G 31G 699G 5% /home.

Calling SHOW VARIABLES LIKE '%innodb_data_file_path%' prints | innodb_data_file_path | ibdata1:10M:autoextend |.

How can I dynamically back up all SSAS databases on a given instance?

Posted: 31 May 2013 01:11 PM PDT

I want to dynamically back up all the databases on a given SSAS instance using a SQL Agent job (which would most likely involve executing an SSIS package). It is imperative that this is a dynamic process - if users add databases or cubes, I want to set up a job one time that can automatically detect all existing SSAS metadata.

Unfortunately, I don't see anything out there that tells me how I can automatically and dynamically back up all of the databases on an SSAS instance in a clean way. By "clean", I mean:

Connection pools being reset with Error: 18056, Severity: 20, State: 46. & Perfmon Counters not showing

Posted: 31 May 2013 06:25 PM PDT

We are using SQL authentication and .NET 4.0 connection strings to connect to an Enterprise Edition 2012 SP1 SQL Server on a Windows 2008 R2 Enterprise server. We use about 50 servers split into 8 different groups serving different parts of a website.

Our website uses this SQL Server to log visit-tracking data, and over the last few days it has spat out the following messages about resetting connection pools.

The client was unable to reuse a session with SPID 1327, which had been reset for connection pooling. The failure ID is 46. This error may have been caused by an earlier operation failing. Check the error logs for failed operations immediately before this error message.

Errorlog reads

Error: 18056, Severity: 20, State: 46.

The client was unable to reuse a session with SPID 959, which had been reset for connection pooling. The failure ID is 46. This error may have been caused by an earlier operation failing. Check the error logs for failed operations immediately before this error message.

Login failed for user 'xxxx'. Reason: Failed to open the database 'xxxxxxxx' configured in the login object while revalidating the login on the connection. [CLIENT: 10.xx.xx.xxx]

After some digging I found this document from CSS blog

http://blogs.msdn.com/b/psssql/archive/2010/08/03/how-it-works-error-18056-the-client-was-unable-to-reuse-a-session-with-spid-which-had-been-reset-for-connection-pooling.aspx

and this one by the scholar Aaron Bertrand (I know the error number is different, but the failure ID is the same and a number of the messages are identical):

http://sqlblog.com/blogs/aaron_bertrand/archive/2011/01/14/sql-server-v-next-denali-additional-states-for-error-18456.aspx

Failure ID 46 suggests that the login did not have permissions. Our logins default to the master database, and the DB name is specified in the connection string.

I wanted to check the number of connection pools, etc., so I checked all the counters in Perfmon for ".NET Data Provider for SqlServer". It only gave me the option of defaultdomain9675 for the instance, so I selected that, assuming it is a system-generated ID name for our datacentre network. Unfortunately, all of the counters read zero. On one of our other main servers the connection pools hover around 10, which is what I would expect to see on a healthy server with that kind of load.

My question is threefold:

1 - Can anyone suggest why the Windows 2008 R2 server is not showing ".NET Data Provider for SqlServer" counters?

2 - Has anyone experienced this? I believe the login not having permissions is a red herring.

3 - If different groups of web servers have the same connection string syntax but with slightly different whitespace, would this cause the server to use another connection pool?

Bitmask Flags with Lookup Tables Clarification

Posted: 31 May 2013 07:17 PM PDT

I've received a dataset from an outside source which contains several bitmask fields as varchars. They range in length from 3 to 21 bits. I need to be able to run SELECT queries based on these fields using AND or OR logic.

Using a calculated field, where I just convert the bits into an integer value, I can easily find rows that match an AND query, by using a simple WHERE rowvalue = requestvalue, but the OR logic would require using bitwise & in order to find matching records.

Given that I would need to work with several of these columns and select from hundreds of millions of records, I feel that there would be a huge performance hit when doing bitwise & operations to filter my SELECT results.

I came across this answer from searching and it looked like it may fit my needs, but I need some clarification on how it is implemented.

Is this as simple as creating a lookup table that has all possible search conditions?

Example for 3 bits using (a & b) (Edit: Wrong bitwise op)

001,001
001,011
001,101
001,111
010,010
010,011
010,110
011,011
011,111
etc

The author mentions that it's counter-intuitive initially, but I can't help but feel I'm interpreting the solution incorrectly, as this would give me a single lookup table with likely billions of rows.

Any clarifications on the answer I linked above or other suggestions that would preserve the existing database are appreciated.

Edit: A more concrete example using small data.

Four flags, HasHouse, HasCar, HasCat, HasDog: 0000 is has none, 1111 is has all.

Any number of flags, from all to none, can be flipped, and results must be filtered where selection matches all (Using exact value comparison) or at least 1 (Using bitwise &).

Adding a single calculated column for each bitmask is OK, but adding a column for each bit for more than 100 bits, coupled with how to insert/update the data, is why I'm trying to find alternative solutions.
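For reference, a sketch of the direct bitwise approach on the HasHouse/HasCar/HasCat/HasDog example; dbo.People and Flags are hypothetical names for a table with the bitmask already converted to an integer column:

-- Sketch: @mask holds the flags the user selected (bit positions are illustrative).
DECLARE @mask int = 5;   -- e.g. 0101

-- "Matches all selected flags": every bit of the mask must be set in the row.
SELECT * FROM dbo.People WHERE (Flags & @mask) = @mask;

-- "Matches at least one selected flag": any bit of the mask may be set.
SELECT * FROM dbo.People WHERE (Flags & @mask) > 0;

Whether this is fast enough at hundreds of millions of rows is exactly the open question here, since the bitwise predicate is not sargable; the lookup-table idea from the linked answer is essentially a way to trade storage for predicates that an ordinary index can serve.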

SQL Server 2012 catalog.executions to sysjobhistory - any way to join them?

Posted: 31 May 2013 01:16 PM PDT

I have exhausted my resources and can't find a foolproof way to join the SSISDB catalog tables to the jobs that run them. I'm trying to write some custom sprocs to monitor my execution times and rows written from the catalog tables, and it would be greatly beneficial to be able to tie them together with the calling job.

BIT columns all "1" after a phpMyAdmin export/import

Posted: 31 May 2013 03:16 PM PDT

I have to import data from a MySQL database using phpMyAdmin because that's the only interface my hosting provider supports.

I exported my database from my local machine using phpMyAdmin. After that I imported the script file on my host. All of the data in the columns of BIT type changed to '1'. The hosting database version is 5.5.29.

Is it a phpMyAdmin problem, or MySQL version problem? How can I fix this?

SQLite writing a query where you select only rows nearest to the hour

Posted: 31 May 2013 03:52 PM PDT

I've got a set of data where readings have been taken approximately every minute for about three months, and the time has been stored as a Unix timestamp. There is no regularity to the timestamps (i.e. the zero minute of the hour may not contain a reading; one measurement could be at 00:59:55 and the next at 01:01:01), and days may be missing.

What I need is the row nearest to the hour, with the timestamp rounded to the hour, as long as the nearest value is not more than 30 minutes away from the hour.

Where a matching hour could not be found it would be helpful if the query could include a time but no value.

I realise I'm asking a lot, but this would be incredibly helpful. Thanks for taking the time to read this. James

BTW, the table is just PK (autoincrement), timestamp, value, sensor id (FK). I've tried this to get the data out:

SELECT strftime('%S', time, 'unixepoch'),
       strftime('%M', time, 'unixepoch'),
       strftime('%H', time, 'unixepoch'),
       strftime('%d', time, 'unixepoch'),
       strftime('%m', time, 'unixepoch'),
       strftime('%Y', time, 'unixepoch'),
       value
FROM   Timestream
WHERE  idSensor = 359;
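Not a full answer, but a sketch of one way to get the nearest-to-the-hour row per hour in SQLite, assuming the time column is an integer Unix timestamp. It relies on SQLite's documented behaviour of returning the row that supplies the MIN() in a grouped query, and it does not emit the empty hour slots where no reading exists (that part needs a calendar of hours to join against):

-- Sketch: each reading falls into the bucket of its nearest hour, so a bucket's closest
-- reading is never more than 30 minutes from that hour; missing hours simply produce no row.
SELECT datetime(((time + 1800) / 3600) * 3600, 'unixepoch') AS hour_slot,
       value,
       MIN(ABS(time - ((time + 1800) / 3600) * 3600))       AS seconds_from_hour
FROM   Timestream
WHERE  idSensor = 359
GROUP  BY (time + 1800) / 3600;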

Breaking Semisynchronous Replication in MySQL 5.5

Posted: 31 May 2013 07:12 PM PDT

I've set up Semisynchronous Replication between two MySQL 5.5 servers running on Windows 7.

My application is running and updating the database on the master server, and the same updates are being applied to the slave database server.

But for some unknown reason, replication sometimes breaks.

On running the command:

SHOW STATUS LIKE 'Rpl_semi_sync%';  

It gives this status:

'Rpl_semi_sync_master_no_times', '0'
'Rpl_semi_sync_master_no_tx', '0'
'Rpl_semi_sync_master_status', 'ON'     <<-------------
'Rpl_semi_sync_master_timefunc_failures', '0'
'Rpl_semi_sync_master_tx_avg_wait_time', '338846'
'Rpl_semi_sync_master_tx_wait_time', '29479685'
'Rpl_semi_sync_master_tx_waits', '87'
'Rpl_semi_sync_master_wait_pos_backtraverse', '0'
'Rpl_semi_sync_master_wait_sessions', '0'
'Rpl_semi_sync_master_yes_tx', '3106'

Ideally, in semi-synchronous replication, when the sync breaks the status should change to OFF, since the master is not able to receive any acknowledgement from the slave. Please help us in this regard.

Unable to start Oracle Database Server : Get an error as I try to start it

Posted: 31 May 2013 09:16 AM PDT

I just installed Oracle Database Express Edition 11g Release 2 for Windows. It created a shortcut icon on the desktop (screenshot omitted), but when I click the icon I see an error dialog (screenshot omitted).

What is it? How do I start my Oracle Database Server?

Postgres: Can the archive command for master and standby point to the same directory?

Posted: 31 May 2013 02:15 PM PDT

I use streaming replication and PITR (WAL archiving) in my cluster; currently I have different versions of postgresql.conf for the master and the standby server.

The only difference between the files is the archive command, which points to a different directory.

For example, on the master I have:

archive_command = 'cp %p /dbckp/server1master/%f'  

And on the standby:

archive_command = 'cp %p /dbckp/server2master/%f'  

I figure that if the standby is not generating any archives while in 'standby mode', I could use the same directory and have a single postgresql.conf?
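If a single shared directory were used, one common precaution (adapted from the example pattern in the PostgreSQL documentation; the path below is a placeholder) is to refuse to overwrite an existing segment, so nothing is lost if both servers ever try to archive the same file name:

archive_command = 'test ! -f /dbckp/shared/%f && cp %p /dbckp/shared/%f'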

SQL Server replication subscriptions marked as inactive

Posted: 31 May 2013 10:16 AM PDT

Is there any way to force SQL Server NOT to mark subscriptions as inactive, ever?

It happens sporadically when there are connection issues and I don't want to have to reinitialize the subscription every time.

Note, I'm not talking about the subscriptions being marked as expired...just as inactive.

Thank you.

How do I configure multi master replication on WAMP?

Posted: 31 May 2013 09:37 AM PDT

I want to configure MySQL multi-master replication, but I've not seen any comprehensive article or tutorial on how to do that on WAMP. I will be grateful if someone could point me to a tutorial, or better still show me how to achieve it.
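Not WAMP-specific, but as a rough sketch of the usual my.ini settings for two masters that replicate from each other (server IDs, log names and offsets below are illustrative placeholders):

# Server A (my.ini) -- sketch only
[mysqld]
server-id                = 1
log-bin                  = mysql-bin
auto_increment_increment = 2
auto_increment_offset    = 1

# Server B (my.ini) -- sketch only
[mysqld]
server-id                = 2
log-bin                  = mysql-bin
auto_increment_increment = 2
auto_increment_offset    = 2

Each server then gets a replication user and a CHANGE MASTER TO statement pointing at the other node, followed by START SLAVE; the auto_increment settings keep the two masters from generating colliding key values.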

Initializing Transactional Replication From Backup

Posted: 31 May 2013 08:17 PM PDT

There is an option when setting up a publication for replication to Allow Initialization From Backup. We have been creating replicated databases for several years now and have always initialized from a backup, but have never set this flag (we just noticed it for the first time a couple of days ago). The replication has certainly worked without issue all this time.

I've found a lot of hits explaining that this needs to be used but none that explain why.

Does anyone have an understanding of what this actually does? From my vantage point it doesn't seem to be necessary, but I'm thinking I must be missing something.
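For context, the publication option matters when the subscription itself is created with a backup-based sync type; a hedged sketch of that call (server, database and path names are placeholders):

-- Sketch: add a subscription that is initialized from an existing backup rather than a snapshot
-- (requires the publication to have 'allow initialization from backup' enabled).
EXEC sp_addsubscription
     @publication      = N'MyPublication',
     @subscriber       = N'SubscriberServer',
     @destination_db   = N'SubscriberDB',
     @sync_type        = N'initialize with backup',
     @backupdevicetype = N'disk',
     @backupdevicename = N'\\backupshare\MyPublishedDb.bak';

If subscriptions are created with other sync types (for example 'replication support only'), the flag never comes into play, which may be why it was never needed here.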

[SQL Server] importing an access database



importing an access database

Posted: 31 May 2013 01:47 AM PDT

I imported a database from Access to SQL Server and am trying to work with the tables. The primary key fields in my Access DB were of type "auto number"; the imported fields were set to int. I found this out when performing an insert and got a message that the primary key field cannot be blank. I researched this and found out that I need to change the properties of the PK field to be an identity. I tried making the change and got an error message telling me that the change could not be saved because the table has to be dropped and recreated. How do I get around this? I'd rather not have to increment the PK manually with every insert.

Secondarily, I had a bunch of yes/no fields in my Access tables. When they were imported, they were set up to not allow blanks and had to be set manually; I eventually figured out that they needed a default value set. Is there anything else that I need to watch out for that will make running my SQL statements a nightmare?

Mike
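On the first point, the designer error is the SSMS "Prevent saving changes that require table re-creation" safeguard (it can be turned off under Tools > Options > Designers), because adding IDENTITY really does require rebuilding the table. A hedged T-SQL sketch of doing that rebuild by hand, with placeholder table and column names:

-- Sketch: recreate the table with an IDENTITY key, copy the data keeping existing key values,
-- then swap the tables. Run inside a transaction and after taking a backup.
CREATE TABLE dbo.MyTable_new (
    Id      int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    SomeCol varchar(50) NULL
);

SET IDENTITY_INSERT dbo.MyTable_new ON;
INSERT INTO dbo.MyTable_new (Id, SomeCol)
SELECT Id, SomeCol FROM dbo.MyTable;
SET IDENTITY_INSERT dbo.MyTable_new OFF;

DROP TABLE dbo.MyTable;
EXEC sp_rename 'dbo.MyTable_new', 'MyTable';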

msbi

Posted: 31 May 2013 01:23 AM PDT

Currently I am undertaking an MSBI course. Will it be an added value to my resume? In which direction should I look for a job? Thank you, please guide me.

[SQL Server] Advanced SQL Server 2008 Extended Events with Examples



[SQL Server] Testing with Profiler Custom Events and Database Snapshots



[SQL Server] Efficiently Reuse Gaps in an Identity Column



[SQL Server] Using REPLACE in an UPDATE statement



[SQL Server] Handling SQL Server Errors



[SQL Server] Using Dynamic SQL in Stored Procedures



[Articles] The Bicentennial



The Bicentennial

Posted: 30 May 2013 11:00 PM PDT

Tomorrow is the bicentennial SQL Saturday event. SQL Saturday #200 takes place in Philadelphia, and Steve Jones has some thoughts as he travels to the City of Brotherly Love.


[MS SQL Server] Transactional Push Replication - Overhead of having Distributor on the Publisher



Transactional Push Replication - Overhead of having Distributor on the Publisher

Posted: 31 May 2013 12:21 AM PDT

I've recently set up transactional replication with push subscriptions. At the moment the distributor is running on the publisher server. I've been getting vague reports of "it's going slow since we put replication on". I'm aware that running the distributor on the publisher server has an overhead, but I can't find any specific information on what that overhead is. I'd imagine that it's an amount of I/O as it writes the transactions into the distribution database, plus a bit of CPU? Weirdly, the reports are that the published database is running slowly, but other databases on the server (and on the same disk) are running fine. I'm a little skeptical, to be honest, as I'd have thought extra I/O or CPU load would affect the whole server rather than one specific database.

If it turns out that it is causing issues, what are my options?

- I've ruled out pull subscriptions because we're replicating to SQL Server Express, which doesn't have SQL Server Agent to run the jobs.

- It would be a very hard sell to convince the customer to buy another server to run the distributor on.

- Would it be worth adding a new disk to put the distribution database on, but still running the distributor on the publisher?

I realise that this post is lacking specific information, but at this stage I'm just looking at what my options might be. I'm a developer who does a bit of DBA-ing and I'm a bit new to replication, so I understand the basics but the minutiae are a bit of a mystery to me! Any help you could provide would be most appreciated. Thanks in advance.

Database just says restoring...for last 20 hours?!

Posted: 13 Jan 2010 12:00 AM PST

Hello, I've got a SQL Server 2005 server that I have remote access onto, as well as using SSMS. I've got a SQL Server 2008 server that I don't have remote access onto and can only use SSMS. I need to move a database from the old 2005 box to the new 2008 box.

I did a backup to our backup server (across the network). That went OK. Then I needed to get the backup to the new server, but I can't just remote onto it and copy from the backup server. I tried a restore backup with MOVE. That seemed to work; it said it was restoring. That was 20 hours ago. The database is a small database.

1) How do I stop it? I tried taking it offline, and it gave me an error message saying that it was busy restoring the database.

2) How do you restore a database through T-SQL from a network server? I thought the restore "WITH MOVE" was the right way.

Thanks!
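For reference, a hedged sketch of a restore from a UNC path, plus the WITH RECOVERY step that brings a database out of the "Restoring..." state when an earlier restore left it in NORECOVERY; server, share, file and logical names are placeholders, and the SQL Server service account needs read access to the share:

-- Sketch: restore from a network share, relocating the files, and bring the database online.
RESTORE DATABASE MyDb
FROM DISK = N'\\BackupServer\Backups\MyDb.bak'
WITH MOVE N'MyDb_Data' TO N'D:\Data\MyDb.mdf',
     MOVE N'MyDb_Log'  TO N'L:\Logs\MyDb_log.ldf',
     RECOVERY, STATS = 10;

-- If a database is already stuck in "Restoring..." because NORECOVERY was used,
-- this finishes the restore sequence without applying anything further:
RESTORE DATABASE MyDb WITH RECOVERY;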

DB Randomly sets in Single User Mode

Posted: 30 May 2013 10:37 PM PDT

I have a database that on at least two occasions has randomly gone into single-user mode. It was not immediately after a backup; I don't see any ALTER DATABASE statements, and I find no comments in syscomments. Is there any idea why it does this, and where else I can look? Please advise.

VLFs sizes

Posted: 30 May 2013 09:42 AM PDT

Hi all, I want to hear an explanation of whether there is a performance loss (and, if so, why) when some VLFs in a transaction log are small and some are big. For example, a database had small growths of 128MB, each VLF being 16MB; then a growth of 4608MB is set, and each new VLF is now 288MB. I know how to make the VLFs all equal. It would be good to post some links where this issue is discussed, too. Thank you, IgorMi

DB Backup Maintenance Plan Won't Execute

Posted: 31 Jul 2012 08:22 AM PDT

I've created a backup plan on one DB using the Maintenance Plan Wizard and the regular way (New Maintenance Plan). Everything looks to be set up correctly, but when I try to execute the plan to see if it'll run properly, the plan fails to execute, giving me an "Execution failed" message. Looking at the SQL Server Agent job history, it seems like my SQL Server can't determine whether my Windows NT group/user ID has the appropriate server access. I'm able to do basically everything I want inside the server except execute the maintenance plan. Does anybody know how to change my server access so I can get my maintenance plans running?

SQL 2008 Audit doesn't show parameters

Posted: 05 Sep 2009 03:54 AM PDT

I need to audit all accesses, updates, inserts, deletes, executes, etc. done to specific tables in our database. I upgraded my SQL Server to 2008 so I could use the new Audit functionality. This works great and shows me everything I need and who accessed it. The problem is I don't know what the query contained. Let me give some examples.

If I run a query from Management Studio it gives me the following:

Query that was run: Select * From Customer Where ID = '123'
Audit shows this: Select * From Customer Where ID = '123'

This is what I would expect all the time. But if I add in a parameter it gives the following:

Query that was run: Declare @id varchar(255) Set @id = '123' Select * From Customer Where ID = @id
Audit shows this: Select * From Customer Where ID = @id

I would like to be able to see what the parameter was in case I need to investigate something. Does anyone have any suggestions? Our application sends in thousands of stored procedure calls with parameters, and how it currently works won't help.

Thanks in advance! JN

[SQL 2012] Log file continues to grow after backup (full recovery mode)



Log file continues to grow after backup (full recovery mode)

Posted: 31 May 2013 02:07 AM PDT

I recently upgraded to SQL Server 2012 from SQL Server 2005. The database is in full recovery mode and I take full backups nightly and transaction log backups hourly. However, the transaction log is not being shrunk after backup and continues to grow daily. Switching the database to simple recovery and running DBCC Shrinkfile will reduce the file size. However, the issue continues after the database is back in full recovery and log backups are taken. Has anyone experienced this issue?
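One routine first check, offered as a sketch rather than a diagnosis: ask SQL Server why it cannot reuse the log, and look at how full the log files actually are (the database name below is a placeholder):

-- Sketch: LOG_BACKUP means the log is waiting for a log backup; values such as REPLICATION
-- or ACTIVE_TRANSACTION point at other causes of continued growth.
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM   sys.databases
WHERE  name = N'MyDatabase';

-- Percentage of each log actually in use:
DBCC SQLPERF (LOGSPACE);

Note that a log backup only truncates (frees space inside) the log; the physical file never shrinks on its own, so a file that has already grown stays at that size until DBCC SHRINKFILE is run.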

Questions on installing a PowerPivot for Sharepoint instance in a cluster

Posted: 31 May 2013 02:25 AM PDT

Hello all, I'm still reading through documentation; however, I have some quick questions (for now) on how to properly set up a SharePoint instance for PowerPivot. The short version is that we have a multi-node, multi-instance cluster in which we host several different database instances, one of which is purely for our enterprise SharePoint. Now that we're moving to SQL 2012 and SharePoint 2013, I'm a bit confused on a couple of key pieces.

1) Does the SSAS instance reside with the database instance, or with the SharePoint application server?

2) If using SSRS integrated mode on the database instance, can I still have the SSRS application end live on its own server?

3) Heading to lunch, will come back to this as I'm drawing a blank at the moment.

Thank you

Update query with right([xxx],2) changes varchar to smallint

Posted: 31 May 2013 12:58 AM PDT

This shouldn't be that hard. My situation: I'm working with addresses on a map. Some of the addresses will be the full length, 4 or 5 characters that are numbers but reside in a varchar column. One column contains the number of characters (numbers) I want presented. For example, think of a row of townhouses: I want both ends to show the full address, i.e. 9900 to 9908, while the inner addresses will only have two, i.e. 02, 04, 06, 08. My query will show apartment numbers, but that isn't a concern.

My query:

UPDATE [Parcels].[tbl_Address_Basic]
SET [FONT_TEXT] = CASE WHEN [FONT_NUM] = 9 THEN [APPT_NUM]
                       WHEN [FONT_NUM] = 4 OR [FONT_NUM] = 5 THEN [ST_NUM]
                       WHEN [FONT_NUM] = 2 THEN RIGHT([ST_NUM_TXT], 2)
                  END

Problem: after the update, the column only returns 2, 4, 6, 8 instead of 02, 04, 06, 08, and 9900 would return 0. ST_NUM is smallint and I thought that was the problem, so I created ST_NUM_TXT, which is varchar(10). Same result. I've tried casting, same thing. I tried RIGHT('0' + [ST_NUM_TXT], 2), same thing. I tried ('a' + [ST_NUM_TXT], 2). I know, I know, but I was out of ideas. I received the error: "Conversion failed when converting the varchar value 'a00' to data type smallint". I've tried casting on the inside and outside. I'm out of ideas. BTW, FONT_TEXT is varchar(10). Any help would be appreciated.

Monte
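Not presented as the definitive cause, but the 'a00'-to-smallint error suggests the CASE expression's result type is being driven by the smallint branch (data type precedence), so the varchar branches are converted back to smallint and the leading zero is lost. A sketch where every branch is explicitly cast to varchar:

-- Sketch: make every branch of the CASE a varchar so the leading zero survives.
UPDATE [Parcels].[tbl_Address_Basic]
SET [FONT_TEXT] = CASE
        WHEN [FONT_NUM] = 9                   THEN CAST([APPT_NUM] AS varchar(10))
        WHEN [FONT_NUM] = 4 OR [FONT_NUM] = 5 THEN CAST([ST_NUM] AS varchar(10))
        WHEN [FONT_NUM] = 2                   THEN RIGHT('0' + CAST([ST_NUM] AS varchar(10)), 2)
    END;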

Help on performance issues using SQL Server 2012

Posted: 30 May 2013 10:03 PM PDT

Hi, I need help with performance issues on our new database server using SQL Server 2012. A migration was done over the weekend and we moved several databases to a new data center. Now the ERP system is slow and users cannot get their work done.

I created reindex and reorganize scripts; we were using a job, but that was taking too long and failing. I ran a fragmentation analysis, using a threshold of 20% for reindexing and reorganizing everything else. This enabled the job to complete. However, the performance is still awful. The database server hosts multiple databases; one of the recommendations I've made was to have a dedicated server for the ERP system.

There is an audit trail file that hasn't been purged in about 10 years. Could this be the reason for the bad performance? It's affecting all systems pointing to this database server.

I didn't have a say in this decision, and now the business is ticked. Any help would be greatly appreciated. And if none of this works, we need to migrate back to SQL Server 2005.

Open SSMS 2012 using RUN window

Posted: 30 May 2013 09:52 PM PDT

Hi, I just want to know what shortcut we can use to open SSMS 2012. We are using RUN (Windows + R) -> ssms for 2K8.

SQL Query autoformating

Posted: 30 May 2013 03:42 PM PDT

Hi all, there are many third-party tools available on the market to format your SQL queries. If you are using any of these tools, please share your experience. Which tool is better?

Parsing huge XML file into a database table using sql server

Posted: 30 May 2013 05:39 PM PDT

I tried to parse a huge XML file into a database table using SQL Server (OPENROWSET), but got an error about duplicate attributes (author). The author element varies for each article, ranging from one author up to six authors. The columns I am really interested in are author, title and journal. Below is a snippet of the XML data. Any suggestions will be highly appreciated. Thanks in advance.

<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE dblp SYSTEM "dblp.dtd">
<dblp>
  <article mdate="2002-01-03" key="persons/CoddD74">
    <author>E. F. Codd</author>
    <author>C. J. Date</author>
    <title>Interactive Support for Non-Programmers: The Relational and Network Approaches.</title>
    <journal>IBM Research Report, San Jose, California</journal>
    <volume>RJ1400</volume>
    <month>June</month>
    <year>1974</year>
  </article>
</dblp>
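As one possible approach (a sketch, not tested against the full DBLP file): load the document with OPENROWSET(BULK ...) and shred it with the xml nodes() method, producing one row per article/author pair so the repeated author elements stop colliding. The file path and the column sizes are placeholders:

-- Sketch: shred the document into one row per (article, author) pair.
DECLARE @x xml;

SELECT @x = CONVERT(xml, BulkColumn, 2)          -- style 2 tolerates the DOCTYPE declaration
FROM   OPENROWSET(BULK N'C:\data\dblp.xml', SINGLE_BLOB) AS src;

SELECT art.value('@key', 'varchar(200)')                 AS article_key,
       a.value('text()[1]', 'nvarchar(200)')             AS author,
       art.value('(title/text())[1]', 'nvarchar(500)')   AS title,
       art.value('(journal/text())[1]', 'nvarchar(500)') AS journal
FROM   @x.nodes('/dblp/article') AS x(art)
CROSS APPLY art.nodes('author') AS au(a);

Loading the whole DBLP dump into a single xml variable may not be practical at that size; splitting the file first, or using SSIS or SQLXML Bulk Load, are common alternatives.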

Kerberos on SQL Server 2012 Reporting Services Sharepoint integrated

Posted: 26 Apr 2012 02:07 AM PDT

Hi. I have SharePoint 2010 installed and SQL Server 2012. With SQL 2012, when using Reporting Services in SharePoint integrated mode there is no longer a Reporting Services Windows service; instead it is fully integrated into SharePoint. This unfortunately leaves me with no idea how to enable Kerberos for Reporting Services when using SSRS 2012 in SharePoint integrated mode. Any info I can find is for SSRS 2008 and SharePoint, not SSRS 2012 and SharePoint. Can anyone advise what needs to be done here to ensure I'm using Kerberos?
