eTool recently changed from offering numerous fairly localised benchmark options to a single international average benchmark for each building type. The decision making process was interesting so I thought I’d quickly document it.
The purpose of the eToolLCD benchmark is:
- To establish a common measuring stick against which all projects are assessed so that any project can be comparable to another (for the same building type);
- To create a starting point, or “average, business as usual case” from which to measure improvements.
From the outset we’ve always understood that a benchmark needs to be function specific. That is, there needs to be a residential benchmark for measuring residential buildings, a commercial benchmark for commercial buildings, and so on. The first point essentially addresses this.
The second point introduces some complexity. What is, or should be, “average, business as usual”? More specifically, are people interested in understanding how their building performs compared locally, regionally, nationally, or internationally?
When we started trying to answer this question, some scenarios were very helpful. If a designer wants to compare locally, the benchmark needs to reflect the things that matter most to the overall LCA results. The two most critical are probably the electricity grid and the climate zone. Localising just these two inputs gets tricky, and the number of possible benchmark permutations adds up quickly. In Australia there are four main independent electricity grids (NEM, SWIS, NWIS and Darwin), and the Building Code of Australia defines 10 climate zones. Accounting for which climate zones occur within each grid, about 20 different benchmarks would be required. To add to the complexity, the National Greenhouse and Energy Reporting guidance splits the NEM by state (New South Wales, Victoria, the Australian Capital Territory, Queensland, Tasmania and South Australia), so the NEM is usually treated as six different grids. That takes us to upwards of 50 different benchmarks we’d need to create and maintain for Australia alone, just to localise electricity grid and climate zone.
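The permutation arithmetic can be sketched in a few lines. The grid-to-climate-zone counts below are illustrative assumptions chosen to reproduce the rough figures above (about 20 benchmarks, rising to about 50 once the NEM is split by state); they are not the actual BCA mapping.

```python
# Illustrative sketch of the benchmark-permutation count.
# All zone counts per grid are assumptions, not real BCA data.
climate_zones_per_grid = {
    "NEM": 9,     # assumed: the east-coast grid spans most zones
    "SWIS": 6,    # assumed
    "NWIS": 3,    # assumed
    "Darwin": 2,  # assumed
}

# Localising only grid + climate zone: one benchmark per combination.
base_count = sum(climate_zones_per_grid.values())

# NGER guidance treats the NEM as six state grids, so replace the single
# NEM entry with per-state grids (zone counts again assumed).
nem_state_zones = {"NSW": 8, "VIC": 7, "ACT": 3, "QLD": 9, "TAS": 5, "SA": 7}
split_count = (base_count
               - climate_zones_per_grid["NEM"]
               + sum(nem_state_zones.values()))

print(base_count)   # ~20 benchmarks for four grids
print(split_count)  # ~50 once the NEM is split by state
```

The point is less the exact totals than how fast the combinations multiply once each localised input is crossed with the others.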
One disadvantage of this method is that it’s still not all-accommodating. It doesn’t account for remote grids, of which there are many in Australia. An example is Kununurra, which runs entirely on hydro power. So even in a scenario with 50 or so benchmarks for Australia, there’s still real potential for a designer to pat themselves on the back for a great comparison to the benchmark when really it’s just a local condition, and vice versa. The same can be said of an off-grid scenario (effectively just a micro grid of its own).
The other disadvantage is the maintenance of all these benchmarks. Expanding the above scenario internationally, there could easily be thousands of possible benchmarks. There are so many that it would be hard for eTool to create them initially, and even harder to maintain them afterwards. Clearly the localised benchmark option had some big challenges.
At the other end of the benchmarking philosophy, we considered having only generic benchmarks, or even one global benchmark. This is perhaps a more user-centric, building-occupant-sensitive system. That is, building occupants are probably more interested in this measure as it’s more about how they live compared to the global community. So a building may be “average” in the local context, but actually very low impact compared to the broader average (due to favourable local conditions). Conceivably, the local conditions that make it easier for a building to perform well may be part of people’s motivation for living in a particular area.
The disadvantage of the generic benchmarking approach is that it isn’t as useful for a designer, because the local conditions (which may create a significant advantage or disadvantage) aren’t considered. This was a big consideration for us: eToolLCD is a design tool, and it has to be relevant to designers. Interestingly though, the way eToolLCD is generally used is that the base design is modelled first, and improvements are then identified against that base design. The benchmark is usually only used towards the end of the process, as a communication and marketing tool.
Also, there’s no reason why the designer can’t model their own local benchmark, for example, a code compliant version of their own design.
This topic spurred some serious debate at eTool. In the end, the deciding factors were:
- A local approach couldn’t really be adopted without localising at least the grid and climate zone for each benchmark option. That is, it would have been too difficult to go half way with localisation (for example, localising only climate zone and not grid), as this would have negated the whole advantage of localising the benchmarks.
- Taking the very localised approach was going to put a huge benchmark creation and maintenance burden on eTool, which wasn’t necessarily productive.
- The choice of a generic benchmark didn’t detract from the function of eToolLCD as a design tool.
- Greenhouse gas pollution is a global problem, not a local one; we feel people probably need to measure and improve their performance against a global benchmark rather than a local one.
So the single global benchmark was the direction we chose. Once this decision was made, we needed to determine how to statistically represent global averages. We decided on an aspirational mix of countries to make up the global benchmark; that is, we selected the standard of living that we felt most people in the world aspire to and determined the average environmental impacts of buildings in those demographic locations. This does mean the global benchmarks are generally higher than the actual global average building stock for a given function. That doesn’t stop us from estimating what the sustainable level of GHG savings is against this aspirational benchmark (90%+). It also enables us to strive for this level of savings without adversely affecting our standard of living aspirations (globally). The first global benchmark created using this approach is the residential benchmark. More information about how this was conducted can be found here.
For those people or organisations that would like a customised benchmark, eTool can provide this service. Please get in touch.