SOME OBSERVATIONS ON THE SPAB RESEARCH REPORT 2: THE SPAB BUILDING PERFORMANCE SURVEY 2016 INTERIM REPORT: 2017

Introduction:

The report ‘The SPAB Building Performance Survey 2016 Interim Report: 2017’ was first made available last year. Even a quick glance at its contents made it clear that there were obvious discrepancies in the included tables relating to relative humidity (RH) and absolute humidity (AH) for the 3 properties under investigation.

A more in-depth inspection of the data in the tables has revealed errors and flaws which call into question the recorded information and the interpretation of the results.

The data recorded in the report are given as ‘averages’. It is therefore very important to appreciate that an average is:

“A number that is obtained by adding two or more amounts and dividing the total by the number of amounts.”

Thus the average of the numbers 1 to 5 is 3; the spread is from 1 to 5. It is important to note that there is always a spread around an average. Statistically this spread matters because it is what distinguishes natural variation from real differences.

Be aware that there will be natural variation in data; it is therefore imperative that the data show a significant difference before any real change is identified.
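To make this concrete, here is a minimal Python sketch (the readings are hypothetical) showing that the mean on its own conceals the spread:

```python
# A minimal sketch: the mean alone conceals the spread around it.
from statistics import mean, stdev

readings = [1, 2, 3, 4, 5]  # hypothetical readings

print(mean(readings))                # 3 -- the 'average'
print(min(readings), max(readings))  # 1 5 -- the spread around it
print(stdev(readings))               # ~1.58 -- sample standard deviation
```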

Table 2:

Given that there will distinctly be a spread around the figures below, there is no significant difference between 2012 and 2016 for each sensor. Also note that RH is temperature dependent, so a change in RH can simply reflect a change in temperature and nothing else (a sketch after the figures illustrates this).

S1:   Mean: 65.5%      Min. 64%        Max. 66%

S2:   Mean: 71.3%      Min. 71%        Max. 72%

S3:   Mean: 77.2%      Min. 75%        Max. 80%

S4:   Mean: 81.8%      Min. 79%        Max. 84%

Note the largest spread is in those sensors towards the outer part of the wall, i.e., closest to the variable external atmosphere.
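To illustrate the temperature dependence of RH, a minimal Python sketch. The saturation values use the Magnus approximation, which is my assumption (the report does not state a formula), and the fixed AH of 9.8 g/m³ is simply chosen to be close to the Table 4 means:

```python
import math

def sat_vapour_density(t_c: float) -> float:
    """Saturation water-vapour density (g/m3) at t_c degrees C,
    via the Magnus approximation for saturation vapour pressure."""
    e_s = 611.2 * math.exp(17.62 * t_c / (243.12 + t_c))  # Pa
    return 1000.0 * e_s / (461.5 * (t_c + 273.15))        # g/m3

ah = 9.8  # g/m3, held constant

for t in (10.0, 15.0, 20.0):
    rh = 100.0 * ah / sat_vapour_density(t)
    print(f"{t:4.1f} C -> RH {rh:5.1f}%")
# ~104% at 10 C, ~77% at 15 C, ~57% at 20 C: a large RH shift with
# no change in moisture content at all.
```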

Table 3:

Note: 13 potential data sets; there are 4 omissions for the external data (no data, left blank). To obtain the average, the 4 omissions are ignored and the remaining total is divided, correctly, by 9.

Table 5:

Note: 13 potential data sets for the external data; there are 4 recorded as 0.00. It is impossible to get an AH of 0.00 in the normal Earth atmosphere. Looking at Table 3, these 0.00s are clearly ‘absence of data’, yet they have been included in the total of 13 to obtain the average. This makes a nonsense of the average and distinctly reflects poor data handling.
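The error can be sketched in a few lines of Python; the readings below are hypothetical, and only the 13-slot/4-gap shape matches the table:

```python
# 'None' marks a missing reading (a gap). Treating gaps as 0.00 drags
# the mean down; excluding them does not.
readings = [9.8, 10.1, None, 9.6, None, 10.3, 9.9, None, 10.0, None,
            9.7, 10.2, 9.5]  # 13 slots, 4 gaps

# Wrong: gaps coerced to 0.00 and the total divided by all 13 slots
as_zeros = [0.0 if r is None else r for r in readings]
print(sum(as_zeros) / len(as_zeros))    # ~6.85 -- nonsense average

# Right: gaps excluded, total divided by the 9 real readings
present = [r for r in readings if r is not None]
print(sum(present) / len(present))      # 9.9 -- the true average
```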

Table 4:

Given that there will distinctly be a spread around the figures below, there is no significant difference between 2013 and 2016 for each sensor.

S1:   Mean: 9.80          Min. 9.56                    Max. 9.94

S2:   Mean: 9.74          Min. 9.42                    Max. 9.92

S3:   Mean: 10.25        Min. 9.69                    Max. 10.71

S4:   Mean: 10.01        Min. 9.65                    Max. 10.58

Note again the largest spread is in those sensors towards the outer part of the wall, i.e., closest to the variable external atmosphere.

Table 5 (mis-numbered in report):

Average pre-insulation data were collected over only 4 weeks in January/February 2011 (winter). This is being compared with an annual average (over approximately 52 weeks) for 2012 to 2016. This is extremely biased scientific analysis; the comparison is therefore irrelevant.
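A purely hypothetical illustration (synthetic data, not the report's) of why a 4-week winter baseline cannot be compared with an annual mean:

```python
import math

# Synthetic weekly external AH: annual mean 9 g/m3, seasonal swing
# of +/-4 g/m3, with the minimum falling in mid-winter (week 0).
weekly_ah = [9.0 + 4.0 * math.sin(2 * math.pi * (w - 13) / 52)
             for w in range(52)]

jan_feb = weekly_ah[:4]                 # the 4-week winter window
print(sum(jan_feb) / len(jan_feb))      # ~5.1 -- winter-only baseline
print(sum(weekly_ah) / len(weekly_ah))  # 9.0 -- annual mean
# The ~3.9 g/m3 'difference' is pure season, not insulation.
```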

2012 – 2016:

S1:   Mean: 6.48          Min. 6.33        Max. 6.85

S2:   Mean: 5.09          Min. 5.00        Max. 5.16

S3:   Mean: 4.04          Min. 3.08        Max. 4.24

S4:   Mean: 4.79          Min. 4.62        Max. 5.11

Note again the largest spread is in those sensors towards the outer part of the wall, i.e., closest to the variable external atmosphere.

Table 6:

Note: 13 potential data sets for the external data; there are 4 recorded as 0.00. Looking at Table 3, these 0.00s are clearly ‘absence of data’, yet they have again been included in the total of 13 to obtain the average. This again makes a nonsense of the average and distinctly reflects poor data handling.

Figure 5:

The report records an average internal RH of 69.33%. This figure suggests that there is likely to be an internal atmospheric moisture problem, if the recorded data are correct. This is very likely to influence the data recorded through the wall.

Table 8:

Given that there will distinctly be a spread around the figures below, there is no significant difference between 2012 and 2016 for each sensor. Also note, as before, that RH is temperature dependent, so a change in RH can simply reflect a change in temperature and nothing else.

S1:   Mean: 64.8%      Min. 63%        Max. 68%

S2:   Mean: 88.0%      Min. 85%        Max. 90%

S3:   Mean: 93.3%      Min. 90%        Max. 96%

S4:   Mean: 96.8%      Min. 96%        Max. 98%

Table 9:

Note: 12 potential data sets; there are 2 omissions for the external data (no data, left blank). To obtain the average, the 2 omissions are ignored and the remaining total is divided, correctly, by 10.

Table 10:

Given that there will distinctly be a spread around the figures below, there is no significant difference between 2013 and 2016 for each sensor.

S1:   Mean:  9.34        Min.  9.15                  Max. 9.64

S2:   Mean: 10.59        Min. 10.04                  Max. 11.13

S3:   Mean: 10.91        Min. 10.24                  Max. 11.49

S4:   Mean: 10.68        Min. 10.17                  Max. 11.04

Table 11:

Note: 12 potential data sets for the external data; there are 2 recorded as 0.00. It is impossible to get an AH of 0.00 in the normal Earth atmosphere. Looking at Table 9, these 0.00s are clearly ‘absence of data’, yet they have been included in the total of 12 to obtain the average. This makes a nonsense of the average and distinctly reflects poor data handling.

Also from the recorded data:

 

Month       RH (%)    AH (g/m³)   Temperature (°C)
September   88.28     0.53        ?!!
October     89.01     2.52        ?!!
November    90.19     3.06        ?!!

To obtain AH averages of this order, given the RHs, the average temperatures would have to be so absurdly low as to be completely unbelievable.
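This can be checked with a short sketch that inverts the RH/AH relationship to find the temperature the pair would jointly imply. It reuses the sat_vapour_density() helper sketched earlier (Magnus approximation, my assumption):

```python
def implied_temperature(rh_pct: float, ah_gm3: float) -> float:
    """Bisect for the temperature (C) at which rh_pct of the
    saturation vapour density equals ah_gm3."""
    lo, hi = -60.0, 50.0
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if rh_pct / 100.0 * sat_vapour_density(mid) < ah_gm3:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

for month, rh, ah in [("September", 88.28, 0.53),
                      ("October",   89.01, 2.52),
                      ("November",  90.19, 3.06)]:
    print(f"{month}: ~{implied_temperature(rh, ah):.0f} C")
# Roughly -27 C, -8 C and -5 C: impossible as UK monthly averages.
```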

Table 12:

The average pre-insulation data were obtained over only 2 weeks in March 2011 (towards the end of winter) and then compared with the FULL ANNUAL (around 52 weeks) AVERAGE for 2012-2016. This can only be considered an unrealistic comparison (‘apples to pears’) and is meaningless.

Table 15:

Given that there will distinctly be a spread around the figures below, there is no significant difference between 2013 and 2016 for each sensor. Also note, as before, that RH is temperature dependent, so a change in RH can simply reflect a change in temperature and nothing else.

S1:   Mean: 77.7%      Min. 77%        Max. 78%

S2:   Mean: 90.3%      Min. 89%        Max. 91%

S3:   Mean: 96.7%      Min. 95%        Max. 99%

Note the largest spread is in those sensors towards the outer part of the wall, i.e., closest to the variable external atmosphere.

The recorded average RHs for sensor 4 of 110% and 112% are not possible and are well outside the reported operating parameters of ±3%: this strongly suggests faulty loggers.

Figure 34:

The minimum external RH is recorded as 0.00%. This is an absurd figure: it would indicate a total absence of water vapour in the atmosphere, which is nonsense.

Table 16:

Logger S4 consistently records an RH average of 111-112%. RHs of this order do not exist and likely indicate a faulty logger. The data are well outside the reported operating parameter of ±3%.

Table 17:

Given that there will distinctly be a spread around the figures below, there is no significant difference between 2013 and 2016 for each sensor.

S1:   Mean: 11.98        Min. 11.56                  Max. 12.24

S2:   Mean: 12.97        Min. 12.73                  Max. 13.32

S3:   Mean: 12.76        Min. 12.60                  Max. 12.91

S4:   Mean: 12.29        Min. 11.75                  Max. 13.05

Note the largest spread is in those sensors towards the outer part of the wall, i.e., closest to the variable external atmosphere.

Table 18:

The average external AH for April is given as 2.33 g/m³.

So for April: RH = 93.99%, AH = 2.33 g/m³, Temperature = ?!!

To obtain an AH average of 2.33 given the RH, the average temperature would have to be so ludicrously low as to be completely unbelievable for April; it is an absurd figure.
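Applying the same hypothetical implied_temperature() sketch from earlier to the April figures:

```python
# Reusing the implied_temperature() sketch (Magnus-based, an assumption)
print(f"{implied_temperature(93.99, 2.33):.1f} C")  # ~ -9.4 C: absurd for April
```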

Table 19:

The average pre-insulation data were obtained over only around 2 weeks in February/March (winter) 2011 and then compared with the FULL ANNUAL (around 52 weeks) AVERAGE from 2012 (a 7-month average) to 2016. This can only be considered an unrealistic comparison (‘apples to pears’) and is meaningless.

COMMENT:

The minor average variations in the annual data are the result of the natural variation of internal and external atmospherics. There is no evidence provided, certainly from 2013 onwards, to show that the insulation has effected any change.

There is insufficient pre-insulation data to make any proper evaluation of subsequent conditions.

Walls are effectively a constant and will relatively rapidly come into equilibrium with the internal/external environments should any changes be made to those walls, such as adding insulation; such changes do not take years, and this is clearly shown in the recorded data.

There is no evidence provided that the 2013-2016 data are not simply reflecting the internal/external atmospherics, i.e., that there has been no real change since 2013, and possibly the year before. Demonstrating otherwise would require appropriate statistical analysis.

The data handling and calculation errors are so fundamentally basic and so readily evident that it is surprising they were missed in the reported review, let alone published. If such erroneous data can so readily escape review, then one must seriously question all previous data recorded from 2011 onwards.

The fact that these obvious, fundamental errors have escaped both the researchers and the reported review seriously calls into question the experience of both in investigations of this nature: e.g., averages which include ‘no data’, an RH of 0%, and absurdly low absolute humidities at levels which are not obtainable as averages in the UK. The report distinctly shows a lack of scientific rigour.

Based on the evidence in the report, there seems to be no justification for continuing to monitor the 3 properties when there is no evidence to show that there have been any real changes, certainly since 2013.

This report has been published and is in the public domain. Therefore, given the errors identified, some at an absurd level and readily evident, it might be prudent to ‘pull’ the report from further exposure; the same consideration may also be relevant to previous reports.

G. R. Coleman, B.Sc.(Hons), M.R.S.B., C.Biol., A.I.M.M.M.

RESPONSE FROM ARCHIMETRICS

A Response to the reply from Archimetrics Ltd concerning their report

My response to each section of the reply, where appropriate, is given in bold italics; the reply from Archimetrics is given in full.

“We thank Graham Coleman for taking the time to feedback on this work and appreciate that all comment is welcomed.

In summary we understand that Graham has highlighted a data handling error and he feels the data collected for all 3 properties simply shows that the walls rapidly find their equilibrium with external conditions regardless of wall material and thickness, retrofit measures taken, orientation and other variables which may have an effect on performance.”

There is more than “a data handling error”: e.g., recorded absolute humidities which cannot occur in the UK, a recorded relative humidity of 0% which cannot occur, and concerns over the performance of the data loggers and the interpretation of the data.

“In response we would like to offer the following:

The data handling error concerns external conditions data. Data collection can of course be a difficult undertaking in the real world and when allowing for uncontrolled events. Our external data has indeed suffered outages for variety of reasons which include one home owner turning the mains off every night until the morning without prior notification, damage to our external wireless logger from building maintenance again without notification, sensor failure due to in-service conditions etc. These data holes are both declared and visible.”

I have over 45 years of experience of data collection and analysis in the real world, and fully appreciate the comments about data losses, etc. It is important to show how the data were handled where they are missing or intermittent; this is not done in the report, and as a consequence the results can and will lead to flawed outcomes.

In the report, page 5, it states: “Where data is missing from an analysis values are shown as unchanging or as a gap and where this impinges on the written discussion the absence is noted within the text.” There are only 2 instances where a “gap” has obviously been left, correctly, for absence of data (Tables 3 and 9). In Tables 5 and 11 these same gaps have been filled with 0.00 figures, that is, with data, and these have been included in the average. If there are missing data then it is certainly not clear in which data series they occur, nor how they were actually handled. According to the report, the absence of data can be made up because “analysis values are shown as unchanging”, in other words by adding false data. The data holes are not properly declared, and only in 2 tables are they clearly visible.
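The two gap treatments can be sketched as follows (values hypothetical); “shown as unchanging” is a forward-fill, which manufactures readings that were never taken:

```python
readings = [10.2, None, None, 9.8]  # two missing readings

# 'Shown as unchanging' = forward-fill: the gaps become invented data
filled, last = [], None
for r in readings:
    last = r if r is not None else last
    filled.append(last)
print(filled)  # [10.2, 10.2, 10.2, 9.8] -- the middle two were never measured

# Leaving gaps as gaps keeps the record honest
present = [r for r in readings if r is not None]
print(sum(present) / len(present))  # 10.0 -- mean over real readings only
```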

“We acknowledge that these data holes have been incorrectly handled by the code we have written to analyse the data, this simple error is clear to see and easy to correct. This has obviously slipped through the review process.”

This statement is of concern and may have serious consequences for the project as a whole.

If the “code” you have written has mishandled data holes (and thus probably some of the data itself, e.g., producing some absurdly low absolute humidity figures), it is seriously flawed. If the code has mishandled the data behind the latest report’s figures to give erroneous results, then a serious question must be asked: has the same mishandling of data via faulty code occurred from 2011 to the present?

The reply also states that “this simple error is clear to see and easy to correct. This has obviously slipped through the review process”. If the “simple error” is now so “clear to see”, then why was it not “clear to see” at the time of collection, during data handling and during the review? This leads to the question of how many simple, “clear to see” data errors have been missed over previous years. If such simple errors are being missed, this may call into question the experience and expertise of the researchers and reviewers; it also raises the question of ‘not quite so clear to see’ data errors. As for “easy to correct”, this is not going to be the case: if there are such data errors then correction will require more than a simple adjustment, since the data sets will need to be reviewed, as of course will the code that led to the errors, present and probably past.

“The purpose of this work comes about from the widely accepted concerns that insulating buildings, especially traditional buildings can have unintended consequences. This long-term observation of 3 buildings is intended to shed a little more light on this area of concern with the view that it is better to look than assume.”

Notwithstanding the likely data errors, the published data from past years to the present do not appear to identify anything except internal/external atmospheric changes. There were only 2-4 weeks of data collection in February/March 2011 prior to insulation. Subsequent to that there appear to have been effectively nearly 6 full years of data collection for each property, certainly consistently from 2013 to the present. The data during this period do not reflect any apparent change to the walls.

“We do not hold the view that all walls insulated or not, quickly find equilibrium with the external environment regardless.”

The report’s own data show that walls find equilibrium moderately rapidly following changes in internal/external atmospherics.

“We do hold the view that insulating buildings may have negative impacts on some building fabric and occupants and that we should endeavour to understand the implications as best we can so as to improve the potential outcomes of improving the energy efficiency of our building stock.”

Unfortunately the monitoring data as reported do not throw any light on this; the data show that there has been no effective change since at least 2013. There are no data over the years of monitoring that record any “impact on the occupants”.

“We do hold the view that this work, along with any other study, has its flaws and that there is always scope for improvement-we are constantly evaluating, developing and improving through experience. It is not uncommon that the act of undertaking to answer a question often reveals further questions, acknowledgement of this fact forms an important part of our ethos.”

Any study will have ‘limitations’ but should not have ‘flaws’, in other words wrong information; there is a significant difference between limitations and flaws. If a study is shown to have flaws then its value will be diminished: any flaws should have been identified and resolved prior to publication, and if found subsequently, the paper should be withdrawn for correction and re-evaluation.

“As part of a constructive review process, we would welcome a face to face meeting with Graham to discuss his views, in the hope that this might further understanding of this important field.”

I would be most pleased to meet to discuss.

Posted by Complete Preservation
