Friday, May 31, 2013

[how to] How can row estimates be improved in order to reduce chances of spills to tempdb

How can row estimates be improved in order to reduce chances of spills to tempdb

Posted: 31 May 2013 06:01 PM PDT

I notice that when there are spill-to-tempdb events (causing slow queries), the row estimates are often way off for a particular join. I've seen spill events occur with merge and hash joins, and they often increase the runtime 3x to 10x. This question concerns how to improve row estimates, on the assumption that better estimates will reduce the chance of spill events.

The actual number of rows is 40k.

For this query, the plan shows a bad row estimate (11.3 rows):

select Value
  from Oav.ValueArray
 where ObjectId = (select convert(bigint, Value) NodeId
                     from Oav.ValueArray
                    where PropertyId = 3331
                      and ObjectId = 3540233
                      and Sequence = 2)
   and PropertyId = 3330
option (recompile);

For this query, the plan shows a good row estimate (56k rows):

declare @a bigint = (select convert(bigint, Value) NodeId
                       from Oav.ValueArray
                      where PropertyId = 3331
                        and ObjectId = 3540233
                        and Sequence = 2);

select Value
  from Oav.ValueArray
 where ObjectId = @a
   and PropertyId = 3330    -- Workpiece Side Active Objects
option (recompile);

Can statistics or hints be added to improve the row estimates for the first case? I tried adding statistics with a particular filter value (PropertyId = 3330), but either I could not get the combination correct, or it is being ignored because the ObjectId is unknown at compile time and the optimizer may be choosing an average over all ObjectIds.
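
For reference, the kind of filtered statistic I was attempting looks roughly like this (the statistic name here is made up):

  CREATE STATISTICS st_ValueArray_Prop3330   -- hypothetical name
  ON Oav.ValueArray (ObjectId)
  WHERE PropertyId = 3330
  WITH FULLSCAN;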

Is there any mode in which it would run the probe query first and then use the result to determine the row estimates, or must it fly blind?

This particular property has many values (40k) on a few objects and zero on the vast majority. I would be happy with a hint where the maximum expected number of rows for a given join could be specified. This is a generally haunting problem because some parameters may be determined dynamically as part of the join, or would be better placed within a view (which has no support for variables).

Are there any parameters that can be adjusted to minimize the chance of spills to tempdb (e.g. min memory per query)? The ROBUST PLAN hint had no effect on the estimate.
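
For example, the instance-wide knob I had in mind can be raised like this (2048 KB is only an illustrative value, not a recommendation):

  EXEC sp_configure 'show advanced options', 1;
  RECONFIGURE;
  EXEC sp_configure 'min memory per query (KB)', 2048;
  RECONFIGURE;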

How do I model the database for an application containing canvases which are divided into building blocks?

Posted: 31 May 2013 04:40 PM PDT

How should I model my database if I wanted to have this functionality:

I want to be able to create a canvas which has many building blocks (they are something like its sections). There are many types of canvases, each with a different set of blocks in a different order. There are not too many types of canvases, and all of them are predefined by the application (users cannot edit the types; they are added to the database as seed data since they are predefined templates, see the example in the link given later). Users can add and remove canvases.

Each building block has its own title and description, which are predefined, but it also has a specific value which the user can input for each of the canvases he created.

The use case is very similar to https://bmfiddle.com/f/#/, except that there can only be one canvas of each type there, while in my case the user should be able to add more canvases of the same type. (But if you could show me how to model it for the use case at bmfiddle.com, that would be enough, as it's not too different.)

Thanks a lot for even reading this question. It would be really helpful to see an ER diagram or something similar of a possible solution to this problem.
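
For concreteness, a minimal sketch of one possible model (all table and column names are placeholders, not a definitive design):

  CREATE TABLE canvas_type (
      canvas_type_id INT PRIMARY KEY,
      name           VARCHAR(100) NOT NULL            -- seed data, predefined
  );

  CREATE TABLE block_template (
      block_template_id INT PRIMARY KEY,
      canvas_type_id    INT NOT NULL REFERENCES canvas_type (canvas_type_id),
      title             VARCHAR(100) NOT NULL,        -- predefined title
      description       VARCHAR(500) NOT NULL,        -- predefined description
      sort_order        INT NOT NULL                  -- order of the block within its canvas type
  );

  CREATE TABLE canvas (
      canvas_id      INT PRIMARY KEY,
      canvas_type_id INT NOT NULL REFERENCES canvas_type (canvas_type_id),
      user_id        INT NOT NULL                     -- owner; users add and remove canvases
  );

  CREATE TABLE block_value (
      canvas_id         INT NOT NULL REFERENCES canvas (canvas_id),
      block_template_id INT NOT NULL REFERENCES block_template (block_template_id),
      value             VARCHAR(2000),                -- the user-entered value per canvas
      PRIMARY KEY (canvas_id, block_template_id)
  );

The idea in this sketch is that canvas types and block templates are seed data, while canvases and block values belong to users; adding a second canvas of the same type just adds another canvas row.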

Help, my database isn't performing fast enough! 100M merge with 6M needs < 1 hour!

Posted: 31 May 2013 04:47 PM PDT

I have a server that is currently receiving more raw data files in an hour than I can upsert (insert -> merge) in an hour.

I have a table with 100M (rounded up) rows. The table is currently MyISAM and has 1000 columns, mostly boolean and a few varchar.

Currently the fastest way I've found to get the information into my DB is:

  • Process raw data into CSV files.
  • LOAD DATA INFILE into the rawData table.
  • Insert the rawData table into Table1 (on duplicate key, run my function).
  • Truncate rawData.
  • Repeat.

This worked fine until I was merging 6M+ rows into 100M rows and expecting it to take under an hour.

I have 16G of RAM, so I set my key buffer to 6G, my query cache size to 16M, and my query cache limit to 10M. I would just replace the information, however it has to be an upsert: update the fields that are true if the row exists, and insert it if it does not.

Things I'm looking into at the moment: possibly switching the table to InnoDB? I'm not sure about the performance, as the insert into an empty table is fine; it's the merge that is slow.

Maybe allowing more table cache? Or even more query cache?

Merge Code:

b.3_InMarket = (b.3_InMarket OR r.3_InMarket),

This is how I combine my two bool columns.
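
In context, the upsert is roughly of this shape (the column list is heavily abbreviated and the `id` key column is an assumption):

  INSERT INTO Table1 (id, `3_InMarket` /* , ...the other columns... */)
  SELECT id, `3_InMarket` /* , ... */
  FROM rawData
  ON DUPLICATE KEY UPDATE
      `3_InMarket` = (Table1.`3_InMarket` OR VALUES(`3_InMarket`));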

Update

  • OK, I set up RAID 0.
  • Changed my query to lock the tables for writing when inserting.
  • When importing the CSV I'm disabling keys, then re-enabling them before the upsert.
  • Changed concurrent_insert to 2.

Should numeric identifiers like serial numbers be stored as integers?

Posted: 31 May 2013 04:37 PM PDT

Many of our databases store numeric tracking codes (like serial numbers) as integers. There is little chance of these codes ever containing alphabetic characters, but it still seems like an incorrect data type, since you would never perform mathematical operations on them.

What is the appropriate data type for these kinds of identifiers?
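
For illustration, a character-typed version of such a column might look like this (the table name and length are placeholders, not a recommendation):

  CREATE TABLE Devices (
      DeviceID     int IDENTITY(1,1) PRIMARY KEY,
      SerialNumber char(12) NOT NULL,                    -- leading zeros preserved, no arithmetic implied
      CONSTRAINT CK_Devices_SerialNumber_Digits
          CHECK (SerialNumber NOT LIKE '%[^0-9]%')       -- digits only
  );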

Security Concerns of SQL Server Express for Web e-Voting web application

Posted: 31 May 2013 03:47 PM PDT

I am building a database driven web application. The web application is basically an E-Voting website, where users can register and vote on things that concern their community ... etc.

The database engine in use is SQL Server Express 2008. I know that my domain model and application code do not allow any silly security breaches, e.g. a user viewing the votes of other users, etc.

But that aside, are there (free) things I can do to make my database secure, so that even if a hacker gets access to my DB he can't do anything with it, or at least finds it very hard?

I realise this question can be a little bit open ended, but some simple tips would be greatly appreciated.
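
For example, one of the simple, free measures usually mentioned is running the application under its own least-privileged login rather than sa (all names below are placeholders):

  CREATE LOGIN evoting_app WITH PASSWORD = 'replace-with-a-strong-password';
  GO
  USE EVotingDb;                                        -- placeholder database name
  CREATE USER evoting_app FOR LOGIN evoting_app;
  GRANT SELECT, INSERT ON dbo.Votes TO evoting_app;     -- only what the app actually needs
  GO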

financial transactions with and without commissions

Posted: 31 May 2013 12:38 PM PDT

I'm building a gambling website. All financial transactions from and to external sources (credit cards, money transfer systems) are subject to commission. The amount of the commission depends on the source selected, and moreover, it changes periodically. These external commissions go to the payment systems.

Also, some internal transactions are subject to commission. Internal commissions depend on the type of game the user is playing. These internal commissions go to the gambling website; they are its income.

I'm looking for the best way to store the history of transactions and commissions. Should they be stored in one table, TRANSACTIONS (with a link between a payment and its commission), or should I have a separate table, COMMISSIONS?

The purpose of storing this is to present the history of transactions to the user and, of course, for internal bookkeeping (in order to always know how much we have paid to the payment systems and how much we have earned ourselves).
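
For concreteness, the single-table option being weighed might look roughly like this (names and types are placeholders): each commission is its own row, linked to the payment it belongs to.

  CREATE TABLE transactions (
      transaction_id          BIGINT PRIMARY KEY,
      user_id                 BIGINT NOT NULL,
      kind                    VARCHAR(20) NOT NULL,   -- 'deposit', 'withdrawal', 'bet', 'commission', ...
      amount                  DECIMAL(12,2) NOT NULL,
      related_transaction_id  BIGINT NULL,            -- for a commission row, the payment it belongs to
      created_at              TIMESTAMP NOT NULL
  );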

Any advice greatly appreciated.

Does a SQL Server Job Run on the Server or the Local Machine?

Posted: 31 May 2013 01:05 PM PDT

Does a SQL Server Job always run on the server or will it run in the context of the local machine, similar to a DTS package run from Enterprise Manager on the user's machine?
The job in question calls a DTS package. The package succeeds when run locally; the job fails whether run manually or scheduled.

MSSQL compound query [migrated]

Posted: 31 May 2013 12:28 PM PDT

Hello and thanks for looking.

I wrote a simple application for my softball team and I am having some issues with the query I need to rewrite.

SELECT DISTINCT _StartDate, _RescheduleDate, _GameTime, _FieldNumber, _Notes
FROM            _Schedule
WHERE           (_StartDate >= DATEADD(wk, 0, { fn NOW() }))
  AND           (_StartDate <  DATEADD(wk, 1, { fn NOW() }))

This query simply gets the upcoming week's game schedule. What I need to do, and had forgotten about, was to also check _RescheduleDate; we currently have 1 or 2 games that have been rescheduled. So somehow I need to modify this to check whether the reschedule date holds a valid date and not "TBD" or NULL.

The second problem is that our games are on Friday. On the day of the game, this changes the data on the website to next week's game, and I need it not to change until the day after the game, which is Saturday. I tried adding 1 day:

(wk, 1 +1d {fn NOW() }))   

But obviously this did not work. So I would surely appreciate some help with that.
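
For what it's worth, the kind of change being described might look roughly like this (a sketch only: it assumes _RescheduleDate is text that may hold 'TBD' or NULL, prefers the reschedule date when it is a real date, and shifts the window back one day so Friday's game stays visible until Saturday):

  SELECT DISTINCT _StartDate, _RescheduleDate, _GameTime, _FieldNumber, _Notes
  FROM   _Schedule
  WHERE  COALESCE(CASE WHEN ISDATE(_RescheduleDate) = 1
                       THEN CONVERT(datetime, _RescheduleDate) END,
                  _StartDate) >= DATEADD(dd, -1, { fn NOW() })
    AND  COALESCE(CASE WHEN ISDATE(_RescheduleDate) = 1
                       THEN CONVERT(datetime, _RescheduleDate) END,
                  _StartDate) <  DATEADD(dd, -1, DATEADD(wk, 1, { fn NOW() }));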

Are there design patterns/best practices in writing dynamic SQL in stored procedures?

Posted: 31 May 2013 12:15 PM PDT

I'm an experienced programmer but I'm new to SQL/databases, so please bear with me :).

One of my current tasks involves editing (and refactoring, if needed) a dynamically generated SQL statement in a stored procedure. Are there any "best practices" or "design patterns" I can follow for writing stored procedures in general, or better yet, stored procedures that generate long (300-line) dynamic SQL statements?
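
As a generic illustration of the pattern most often recommended (the table and parameter names are made up): keep user values out of the concatenated string and pass them to sp_executesql as parameters.

  DECLARE @sql  nvarchar(max),
          @city nvarchar(50) = N'Boston';

  SET @sql = N'SELECT CustomerID, Name
               FROM dbo.Customers
               WHERE City = @city';

  EXEC sys.sp_executesql
       @sql,
       N'@city nvarchar(50)',
       @city = @city;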

Does Change Data Capture (CDC) work in a SQL Server 2012 AlwaysOn Failover Cluster Instances setup?

Posted: 31 May 2013 04:00 PM PDT

Does Change Data Capture (CDC) work in a SQL Server 2012 AlwaysOn Failover Cluster Instances setup?

There would be two nodes in the AlwaysOn Failover Cluster Instances setup and two Domain Controllers.

We are not using AlwaysOn Availability Groups.

Will CDC work? and will it failover?

Creating Superset and Subset table with relationship mapping

Posted: 31 May 2013 09:52 AM PDT

I have a requirement in one of my projects where different entities like Teacher, School, and Student store addresses in a single table and maintain their relationships.

Example:

  • A school can add a teacher and store their address information.
  • A teacher stores their other address information.
  • A teacher can add a student's address information.
  • A student can add their own other address information.
  • A teacher can mark their address information for the School entity.

Addresses created by one entity for another are visible to that entity and to the owner. For example, the teacher address information a school enters belongs to that school; other schools cannot view it.

The teacher can view it but cannot edit it. If a teacher creates an address that is added to the school entity, it is a private copy belonging to the teacher; the school can view it but not edit it.

The problem is how to map these tables.

Students: ID

Teachers: ID

Schools: ID

Addresses: ID | StartDate | EndDate

AnotherTable: ID

...plus other fields which hold the relationships between the entities (e.g. who added an address and for whom), covering both private and public relationships.
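
For illustration only, one way such a mapping is often sketched is a single link table that records both whose address it is and who created it (all names are placeholders):

  CREATE TABLE EntityAddress (
      EntityAddressID INT PRIMARY KEY,
      AddressID       INT NOT NULL,            -- FK to Addresses.ID
      EntityType      VARCHAR(10) NOT NULL,    -- 'School' | 'Teacher' | 'Student'
      EntityID        INT NOT NULL,            -- ID in the matching entity table
      CreatedByType   VARCHAR(10) NOT NULL,    -- which entity entered the address
      CreatedByID     INT NOT NULL,
      IsPrivate       BIT NOT NULL DEFAULT 1   -- creator's private copy vs. shared
  );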

Rackspace Cloud Databases: incremental backup

Posted: 31 May 2013 12:00 PM PDT

Rackspace Cloud Databases are a little bit special in that we do not have access to the binlog file; they are not servers in the traditional sense. I do have a couple of Cloud Servers with them, but I want to do incremental backups on my own and am currently looking into ways to do this specifically for Cloud Databases, so far without success.

What options do we have if we want to do incremental backups on Cloud Databases?

Note: I do not want to start a discussion about which method is best, I just want to know which methods are available for this scenario.

software to automatically create document specific glossary? [closed]

Posted: 31 May 2013 09:27 AM PDT

I'm not sure if this is the right place to ask this, but I couldn't find a more appropriate one. My team needs to create documentation for the software we produce, and the application area has a LOT of acronyms/terms. We'd like to maintain a single master glossary of terms/acronyms and their definitions, and we'd like a software tool that searches for these terms in new Microsoft Word docs (PDFs too would be a bonus) and, for the terms/acronyms it finds, copies them out of the master glossary for inclusion as a document-specific glossary. Do any of you know a software tool that can do this?

Thanks for your time.

USER_ID field in alert logs (also in V$DIAG_ALERT_EXT view)

Posted: 31 May 2013 10:02 AM PDT

Does anyone know what triggers the USER_ID field in the log.xml to be populated? The value also exists in the V$DIAG_ALERT_EXT view.

I've found by observing the logs that if a temp tablespace fills up, it will log the USER_ID of the problematic SQL statement causing the issue. But other than that, it appears the value is always NULL.
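
A query along these lines can be used to see which entries actually carry a USER_ID (it only lists what is already in the view; it does not explain what sets the field):

  SELECT originating_timestamp, user_id, message_text
    FROM v$diag_alert_ext
   WHERE user_id IS NOT NULL
   ORDER BY originating_timestamp;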

Parent-Child Tree Hierarchical ORDER

Posted: 31 May 2013 10:00 AM PDT

I have the following data in SQL Server 2008 R2 (SQLFiddle).

Schema:

  CREATE TABLE [dbo].[ICFilters](
      [ICFilterID] [int] IDENTITY(1,1) NOT NULL,
      [ParentID] [int] NOT NULL DEFAULT 0,
      [FilterDesc] [varchar](50) NOT NULL,
      [Active] [tinyint] NOT NULL DEFAULT 1,
      CONSTRAINT [PK_ICFilters] PRIMARY KEY CLUSTERED ( [ICFilterID] ASC )
      WITH (
          PAD_INDEX = OFF,
          STATISTICS_NORECOMPUTE = OFF,
          IGNORE_DUP_KEY = OFF,
          ALLOW_ROW_LOCKS = ON,
          ALLOW_PAGE_LOCKS = ON
      ) ON [PRIMARY]
  ) ON [PRIMARY]

  INSERT INTO [dbo].[ICFilters]
  SELECT 0,'Product Type',1
  UNION ALL
  SELECT 1,'ProdSubType_1',1
  UNION ALL
  SELECT 1,'ProdSubType_2',1
  UNION ALL
  SELECT 1,'ProdSubType_3',1
  UNION ALL
  SELECT 1,'ProdSubType_4',1
  UNION ALL
  SELECT 2,'PST_1.1',1
  UNION ALL
  SELECT 2,'PST_1.2',1
  UNION ALL
  SELECT 2,'PST_1.3',1
  UNION ALL
  SELECT 2,'PST_1.4',1
  UNION ALL
  SELECT 2,'PST_1.5',1
  UNION ALL
  SELECT 2,'PST_1.6',1
  UNION ALL
  SELECT 2,'PST_1.7',0
  UNION ALL
  SELECT 3,'PST_2.1',1
  UNION ALL
  SELECT 3,'PST_2.2',0
  UNION ALL
  SELECT 3,'PST_2.3',1
  UNION ALL
  SELECT 3,'PST_2.4',1
  UNION ALL
  SELECT 14,'PST_2.2.1',1
  UNION ALL
  SELECT 14,'PST_2.2.2',1
  UNION ALL
  SELECT 14,'PST_2.2.3',1
  UNION ALL
  SELECT 3,'PST_2.8',1

Table:

  | ICFILTERID | PARENTID |    FILTERDESC | ACTIVE |
  --------------------------------------------------
  |          1 |        0 |  Product Type |      1 |
  |          2 |        1 | ProdSubType_1 |      1 |
  |          3 |        1 | ProdSubType_2 |      1 |
  |          4 |        1 | ProdSubType_3 |      1 |
  |          5 |        1 | ProdSubType_4 |      1 |
  |          6 |        2 |       PST_1.1 |      1 |
  |          7 |        2 |       PST_1.2 |      1 |
  |          8 |        2 |       PST_1.3 |      1 |
  |          9 |        2 |       PST_1.4 |      1 |
  |         10 |        2 |       PST_1.5 |      1 |
  |         11 |        2 |       PST_1.6 |      1 |
  |         12 |        2 |       PST_1.7 |      0 |
  |         13 |        3 |       PST_2.1 |      1 |
  |         14 |        3 |       PST_2.2 |      0 |
  |         15 |        3 |       PST_2.3 |      1 |
  |         16 |        3 |       PST_2.4 |      1 |
  |         17 |       14 |     PST_2.2.1 |      1 |
  |         18 |       14 |     PST_2.2.2 |      1 |
  |         19 |       14 |     PST_2.2.3 |      1 |
  |         20 |        3 |       PST_2.8 |      1 |

Every row has the ID of its parent and the root's parentid = 0. The FilterDescs are just sample descriptions so I can't try to parse those for ordering.

The Question

Is it possible to select all the rows in a tree-like manner? If so, how? By 'tree-like', I mean recursively select the parent followed by all of its children, then all the children of each one of those, and so on. My friends and I have tried, but we have fallen short of a working solution and will keep trying. I am fairly new to SQL, so maybe this can be done easily and I'm just making things harder than necessary.

Example(desired) output:

  | ICFILTERID | PARENTID |    FILTERDESC | ACTIVE |
  --------------------------------------------------
  |          1 |        0 |  Product Type |      1 |
  |          2 |        1 | ProdSubType_1 |      1 |
  |          6 |        2 |       PST_1.1 |      1 |
  |          7 |        2 |       PST_1.2 |      1 |
  |          8 |        2 |       PST_1.3 |      1 |
  |          9 |        2 |       PST_1.4 |      1 |
  |         10 |        2 |       PST_1.5 |      1 |
  |         11 |        2 |       PST_1.6 |      1 |
  |         12 |        2 |       PST_1.7 |      0 |
  |          3 |        1 | ProdSubType_2 |      1 |
  |         13 |        3 |       PST_2.1 |      1 |
  |         14 |        3 |       PST_2.2 |      0 |
  |         17 |       14 |     PST_2.2.1 |      1 |
  |         18 |       14 |     PST_2.2.2 |      1 |
  |         19 |       14 |     PST_2.2.3 |      1 |
  |         15 |        3 |       PST_2.3 |      1 |
  |         16 |        3 |       PST_2.4 |      1 |
  |         20 |        3 |       PST_2.8 |      1 |
  |          4 |        1 | ProdSubType_3 |      1 |
  |          5 |        1 | ProdSubType_4 |      1 |
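
One commonly suggested approach (not from the original post) is a recursive CTE that builds a sortable path of zero-padded IDs and orders by it; a sketch against the table above:

  ;WITH FilterTree AS (
      SELECT ICFilterID, ParentID, FilterDesc, Active,
             CAST(RIGHT('0000' + CAST(ICFilterID AS varchar(4)), 4) AS varchar(4000)) AS SortPath
      FROM dbo.ICFilters
      WHERE ParentID = 0

      UNION ALL

      SELECT f.ICFilterID, f.ParentID, f.FilterDesc, f.Active,
             CAST(t.SortPath + '/' + RIGHT('0000' + CAST(f.ICFilterID AS varchar(4)), 4) AS varchar(4000))
      FROM dbo.ICFilters AS f
      JOIN FilterTree AS t ON f.ParentID = t.ICFilterID
  )
  SELECT ICFilterID, ParentID, FilterDesc, Active
  FROM FilterTree
  ORDER BY SortPath;

Ordering by the padded path puts each node directly under its parent with siblings sorted by ID, which matches the desired output above (the pad width of 4 is chosen for this sample data).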

How to see what is cached in memory in SQL server 2008?

Posted: 31 May 2013 09:59 AM PDT

Is there a way to find out what is cached in SQL Server 2008 R2? I have found the following nice article: http://blog.sqlauthority.com/2010/06/17/sql-server-data-pages-in-buffer-pool-data-stored-in-memory-cache . However, I would like to know how much data (e.g. in percentage and KB) is cached for each table and index. Is there some simple way to obtain such data?
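
For reference, the usual DMV-based approach (a sketch, not an exhaustive breakdown; LOB pages are left out for brevity) sums buffer pages per object in the current database:

  SELECT OBJECT_NAME(p.object_id)              AS table_name,
         i.name                                AS index_name,
         COUNT(*) * 8                          AS cached_kb
  FROM sys.dm_os_buffer_descriptors AS bd
  JOIN sys.allocation_units AS au
       ON au.allocation_unit_id = bd.allocation_unit_id
      AND au.type IN (1, 3)                    -- in-row and row-overflow data
  JOIN sys.partitions AS p
       ON p.hobt_id = au.container_id
  JOIN sys.indexes AS i
       ON i.object_id = p.object_id
      AND i.index_id  = p.index_id
  WHERE bd.database_id = DB_ID()
  GROUP BY p.object_id, i.name
  ORDER BY cached_kb DESC;

A percentage can be derived by dividing each object's page count by the total number of cached pages for the database.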

How to solve "The table ... is full" with "innodb_file_per_table"?

Posted: 31 May 2013 10:20 AM PDT

I have a MySQL database that holds a large amount of data (100-200GB, a bunch of scientific measurements). The vast majority of the data is stored in one table, Sample. Now I'm creating a slave replica of the database and I wanted to take advantage of innodb_file_per_table during the process. So I set innodb_file_per_table in my slave configuration and imported the dump of the database. To my surprise, it failed with:

ERROR 1114 (HY000) at line 5602: The table 'Sample' is full

The file Sample.ibd is currently about 93GB, with more than 600GB of free space available on the partition, so it's not a disk free-space issue. Nor does it seem to be hitting any kind of file-system limit (I'm using ext4).

I'd be grateful for any ideas what could be the cause, or what to investigate.


Update: I'm using mysql Ver 14.14 Distrib 5.1.66, for debian-linux-gnu (x86_64).

My configuration is:

# This will be passed to all mysql clients
# It has been reported that passwords should be enclosed with ticks/quotes
# escpecially if they contain "#" chars...
# Remember to edit /etc/mysql/debian.cnf when changing the socket location.
[client]
port            = 3306
socket          = /var/run/mysqld/mysqld.sock

# Here is entries for some specific programs
# The following values assume you have at least 32M ram

# This was formally known as [safe_mysqld]. Both versions are currently parsed.
[mysqld_safe]
socket          = /var/run/mysqld/mysqld.sock
nice            = 0

[mysqld]
#
# * Basic Settings
#
user            = mysql
pid-file        = /var/run/mysqld/mysqld.pid
socket          = /var/run/mysqld/mysqld.sock
port            = 3306
basedir         = /usr
datadir         = /home/var/lib/mysql
tmpdir          = /tmp
language        = /usr/share/mysql/english
skip-external-locking
#
# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
bind-address            = 127.0.0.1
#
# * Fine Tuning
#
key_buffer              = 16M
max_allowed_packet      = 16M
thread_stack            = 192K
thread_cache_size       = 8
# This replaces the startup script and checks MyISAM tables if needed
# the first time they are touched
myisam-recover         = BACKUP
#max_connections        = 100
#table_cache            = 64
#thread_concurrency     = 10
#
# * Query Cache Configuration
#
query_cache_limit       = 1M
query_cache_size        = 16M
#
# * Logging and Replication
#
# Both location gets rotated by the cronjob.
# Be aware that this log type is a performance killer.
# As of 5.1 you can enable the log at runtime!
#general_log_file        = /var/log/mysql/mysql.log
#general_log             = 1

innodb_file_per_table

Update: Calling SELECT @@datadir; returns /home/var/lib/mysql/. Then df -h /home/var/lib/mysql/ gives 768G 31G 699G 5% /home.

Calling SHOW VARIABLES LIKE '%innodb_data_file_path%' prints | innodb_data_file_path | ibdata1:10M:autoextend |.
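
Two further quick checks that may help narrow this down (the table name is the one from the error message):

SHOW VARIABLES LIKE 'innodb_file_per_table';   -- confirm the slave really runs with it ON
SHOW TABLE STATUS LIKE 'Sample';               -- Data_length / Data_free for the table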

How can I dynamically back up all SSAS databases on a given instance?

Posted: 31 May 2013 01:11 PM PDT

I want to dynamically back up all the databases on a given SSAS instance using a SQL Agent job (which would most likely involve executing an SSIS package). It is imperative that this is a dynamic process - if users add databases or cubes, I want to set up a job one time that can automatically detect all existing SSAS metadata.

Unfortunately, I don't see anything out there that tells me how I can automatically and dynamically back up all of the databases on an SSAS instance in a clean way. By "clean", I mean:

Connection pools being reset with Error: 18056, Severity: 20, State: 46. & Perfmon Counters not showing

Posted: 31 May 2013 06:25 PM PDT

We are using SQL authentication and .NET 4.0 connection strings to connect to an Enterprise Edition 2012 SP1 SQL Server on a Windows 2008 R2 Enterprise server. We use about 50 servers split into 8 different groups for different parts of a website.

Our website is using this SQL Server to log visit tracking data, and over the last few days it has spat out the following messages about resetting connection pools.

The client was unable to reuse a session with SPID 1327, which had been reset for connection pooling. The failure ID is 46. This error may have been caused by an earlier operation failing. Check the error logs for failed operations immediately before this error message.

The error log reads:

Error: 18056, Severity: 20, State: 46.

The client was unable to reuse a session with SPID 959, which had been reset for connection pooling. The failure ID is 46. This error may have been caused by an earlier operation failing. Check the error logs for failed operations immediately before this error message.

Login failed for user 'xxxx'. Reason: Failed to open the database 'xxxxxxxx' configured in the login object while revalidating the login on the connection. [CLIENT: 10.xx.xx.xxx]

After some digging I found this document on the CSS blog:

http://blogs.msdn.com/b/psssql/archive/2010/08/03/how-it-works-error-18056-the-client-was-unable-to-reuse-a-session-with-spid-which-had-been-reset-for-connection-pooling.aspx

and this one by the scholar Aaron Bertrand (I know the error number is different, but the failure ID is the same and a number of the messages are identical):

http://sqlblog.com/blogs/aaron_bertrand/archive/2011/01/14/sql-server-v-next-denali-additional-states-for-error-18456.aspx

Failure ID 46 suggests that the login did not have permissions. Our logins default to the master database, and the DB name is specified in the connection string.

I wanted to check the number of connection pools, etc., so I checked all the counters in Perfmon for ".Net Data Provider for SqlServer". It only gave me the option of defaultdomain9675 for the instance, so I selected that, assuming it is a system-generated ID name for our datacentre network. Unfortunately, all of the counters read zero. On one of our other main servers the connection pools hover around 10, which is what I expected to see on a healthy server with that kind of load.

My question is threefold:

1- Can anyone suggest why the Windows 2008 R2 Server is not showing ".Net Data Provider for SqlServer"?

2- Has anyone experienced this? I believe that the login not having permissions is a red herring.

3- If different groups of web servers have the same connection string syntax but with slightly different whitespace, would this cause the server to use another connection pool?

Bitmask Flags with Lookup Tables Clarification

Posted: 31 May 2013 07:17 PM PDT

I've received a dataset from an outside source which contains several bitmask fields as varchars. They range in length from as few as 3 to as many as 21 bits. I need to be able to run SELECT queries against these fields using AND or OR logic.

Using a calculated field, where I just convert the bits into an integer value, I can easily find rows that match an AND query, by using a simple WHERE rowvalue = requestvalue, but the OR logic would require using bitwise & in order to find matching records.

Given that I would need to work with several of these columns and select from hundreds of millions of records, I feel that there would be a huge performance hit when doing bitwise & operations to filter my SELECT results.

I came across this answer from searching and it looked like it may fit my needs, but I need some clarification on how it is implemented.

Is this as simple as creating a lookup table that has all possible search conditions?

Example for 3 bits using (a & b) (Edit: Wrong bitwise op)

001,001
001,011
001,101
001,111
010,010
010,011
010,110
011,011
011,111
etc

The author mentions that it's counter-intuitive initially, but I can't help but feel I'm interpreting the solution incorrectly, as this would give me a single lookup table with likely billions of rows.

Any clarifications on the answer I linked above or other suggestions that would preserve the existing database are appreciated.

Edit: A more concrete example using small data.

Four flags, HasHouse,HasCar,HasCat,HasDog, 0000 is has none, 1111 is has all.

Any number of flags, from all to none, can be flipped, and results must be filtered where selection matches all (Using exact value comparison) or at least 1 (Using bitwise &).

Adding a single calculated column for each bitmask is OK, but adding a column for each bit for more than 100 bits, coupled with how to insert/update the data, is why I'm trying to find alternative solutions.
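
Using the four-flag example above (table and column names are made up; HasHouse = 8, HasCar = 4, HasCat = 2, HasDog = 1), the two filters look like:

  DECLARE @search int = 4 | 1;        -- searching for HasCar and HasDog

  SELECT *                            -- matches ALL selected flags
  FROM dbo.People
  WHERE Flags & @search = @search;

  SELECT *                            -- matches AT LEAST ONE selected flag
  FROM dbo.People
  WHERE Flags & @search > 0;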

SQL Server 2012 catalog.executions to sysjobhistory - any way to join them?

Posted: 31 May 2013 01:16 PM PDT

I have exhausted my resources and can't find a foolproof way to join the ssisdb.catalog tables to the jobs that run them. I'm trying to write some custom sprocs to monitor my execution times and rows written from the catalog tables, and it would be greatly beneficial to be able to tie them together with the calling job.
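
One possible heuristic, shown only as a sketch, is to match the package name against the job step command text; it is not foolproof, which is exactly the problem:

  SELECT e.execution_id, e.package_name, j.name AS job_name,
         h.run_date, h.run_time, h.run_status
  FROM SSISDB.catalog.executions AS e
  JOIN msdb.dbo.sysjobsteps      AS s ON s.command LIKE '%' + e.package_name + '%'
  JOIN msdb.dbo.sysjobs          AS j ON j.job_id  = s.job_id
  JOIN msdb.dbo.sysjobhistory    AS h ON h.job_id  = j.job_id
                                     AND h.step_id = s.step_id;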

BIT columns all "1" after a phpMyAdmin export/import

Posted: 31 May 2013 03:16 PM PDT

I have to import data from a MySQL database using phpMyAdmin because that's the only interface my hosting provider supports.

I exported my database from my local machine using phpMyAdmin, and after that I imported the script file on my host. All of the data in columns of BIT type was changed to '1'. The hosting database version is 5.5.29.

Is it a phpMyAdmin problem or a MySQL version problem? How can I fix it?

SQLite writing a query where you select only rows nearest to the hour

Posted: 31 May 2013 03:52 PM PDT

I've got a set of data where readings have been taken approximately every minute for about three months, and the time has been stored as a Unix timestamp. There is no regularity to the timestamps (i.e. the zero minute of the hour may not contain a reading; one measurement could be at 00:59:55 and the next at 01:01:01) and days may be missing.

What I need is the row nearest to each hour, with the timestamp rounded to the hour, as long as the nearest value is not more than 30 minutes away from the hour.

Where a matching hour could not be found it would be helpful if the query could include a time but no value.

I realise I'm asking a lot, but this would be incredibly helpful. Thanks for taking the time to read this. James

BTW, the table is just PK (autoincrement), timestamp, value, sensor ID (FK). I've tried this to get the data out:

SELECT strftime('%S', time, 'unixepoch'),
       strftime('%M', time, 'unixepoch'),
       strftime('%H', time, 'unixepoch'),
       strftime('%d', time, 'unixepoch'),
       strftime('%m', time, 'unixepoch'),
       strftime('%Y', time, 'unixepoch'),
       value
FROM Timestream
WHERE idSensor = 359;
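
One possible shape for the nearest-to-the-hour query (a sketch only: it buckets each reading to its nearest hour, so the distance can never exceed 30 minutes, and it relies on SQLite's documented behaviour of returning bare columns from the row that produced MIN()):

  SELECT datetime(((time + 1800) / 3600) * 3600, 'unixepoch') AS hour_slot,
         value,
         MIN(ABS(time - ((time + 1800) / 3600) * 3600))       AS seconds_off
  FROM   Timestream
  WHERE  idSensor = 359
  GROUP  BY (time + 1800) / 3600;

Hours with no reading at all would still need a generated calendar of hour slots in order to appear as rows with a time but no value.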

Breaking Semisynchronous Replication in MySQL 5.5

Posted: 31 May 2013 07:12 PM PDT

I've set up Semisynchronous Replication between two MySQL 5.5 servers running on Windows 7.

My application is running and updating the database of the master server and same is being updated in the slave database server.

But sometimes, for unknown reasons, replication breaks.

On running the command:

SHOW STATUS LIKE 'Rpl_semi_sync%';  

It gives this status:

'Rpl_semi_sync_master_no_times', '0'
'Rpl_semi_sync_master_no_tx', '0'
'Rpl_semi_sync_master_status', 'ON'     <<-------------
'Rpl_semi_sync_master_timefunc_failures', '0'
'Rpl_semi_sync_master_tx_avg_wait_time', '338846'
'Rpl_semi_sync_master_tx_wait_time', '29479685'
'Rpl_semi_sync_master_tx_waits', '87'
'Rpl_semi_sync_master_wait_pos_backtraverse', '0'
'Rpl_semi_sync_master_wait_sessions', '0'
'Rpl_semi_sync_master_yes_tx', '3106'

Ideally, in semisynchronous replication, when the sync breaks the status should change to OFF, since the master is not able to receive any acknowledgement from the slave. Please help us in this regard.

Unable to start Oracle Database Server : Get an error as I try to start it

Posted: 31 May 2013 09:16 AM PDT

I just installed Oracle Database Express Edition 11g Release 2 for Windows. It created a shortcut icon on the desktop:

[screenshot: the desktop shortcut]

but when I click this icon I see this dialog box:

[screenshot: the error dialog]

What is it? How do I start my Oracle Database server?

Postgres: Can the archive command for master and standby point to the same directory?

Posted: 31 May 2013 02:15 PM PDT

I use streaming replication and PITR (WAL archiving) in my cluster. Currently I have different versions of postgresql.conf for the master and the standby server.

The only difference between the files is the archive command, which points to a different directory.

For example, on the master I have:

archive_command = 'cp %p /dbckp/server1master/%f'  

And on the standby:

archive_command = 'cp %p /dbckp/server2master/%f'  

I figure that if the standby is not generating any archives while in standby mode, I could use the same directory and have a single postgresql.conf?
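
If a single file is the goal, one untested option would be to let the archive command compute the target directory at run time, since archive_command is executed through the shell (the directory layout here is only an example):

archive_command = 'mkdir -p /dbckp/$(hostname) && cp %p /dbckp/$(hostname)/%f'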

SQL Server replication subscriptions marked as inactive

Posted: 31 May 2013 10:16 AM PDT

Is there any way to force SQL Server NOT to mark subscriptions as inactive, ever?

It happens sporadically when there are connection issues and I don't want to have to reinitialize the subscription every time.

Note, I'm not talking about the subscriptions being marked as expired...just as inactive.

Thank you.

How do I configure multi master replication on WAMP?

Posted: 31 May 2013 09:37 AM PDT

I want to configure MySQL multi-master replication, but I've not seen any comprehensive article or tutorial on how to do that on WAMP. I would be grateful if someone could point me to a tutorial, or better still show me how to achieve it.
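
For reference, the my.ini settings usually involved on each WAMP node look roughly like this (server IDs and log names are placeholders; the second master would use server-id = 2 and auto_increment_offset = 2):

[mysqld]
server-id                = 1
log-bin                  = mysql-bin
auto_increment_increment = 2    # avoids colliding AUTO_INCREMENT values between the two masters
auto_increment_offset    = 1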

Initializing Transactional Replication From Backup

Posted: 31 May 2013 08:17 PM PDT

There is an option when setting up a publication for replication to Allow Initialization From Backup. We have been creating replicated databases for several years now and have always initialized from a backup, but have never set this flag (we just noticed it for the first time a couple of days ago). The replication has certainly worked without issue all this time.

I've found a lot of hits explaining that this needs to be used but none that explain why.

Does anyone have an understanding of what this actually does? From my vantage point it doesn't seem to be necessary, but I'm thinking I have to be missing something.
