That is a great question and it doesn’t have a straightforward answer. Firstly, the certifiers don’t re-review the whole model after every round of review. There is an initial thorough review, then feedback is provided to the user. Once the user has responded to each feedback item and notifies the reviewer that the design is ready for the next round, the reviewer will focus on the items that were questioned during the initial review. This is an inherent risk in most review processes, which address specific issues in each round rather than starting a new ground-up review each time. If you make changes to the design outside of the reviewer's feedback and those changes are not communicated to the reviewer, they can quite easily be missed after the initial thorough review.

Secondly, the reviewer does not exhaustively re-check the whole LCA study to identify every issue with it; that would make the review effort and cost higher than the cost of the study itself. So things may be missed, even though that is certainly not the intent. Our aim in the Life Cycle Design Coaching team is to ensure that design conclusions drawn from the study are not put at risk by inaccuracies in the study. For example, if a reviewer misses something that leads to a design strategy being overlooked, that’s a bad outcome for the planet and the project stakeholders, so it is something we put a lot of effort into avoiding. If, however, the number of door knobs was calculated incorrectly and the reviewer didn’t pick that up, that is very unlikely to change the overall conclusions of the study, so it is not something that will get much attention during the review (provided they’re not made of gold!). So the relative magnitude of the item in question is also very important.
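
To make the magnitude point a little more concrete, here is a minimal sketch of a contribution-style check, not the reviewers' actual method. The item names, impact figures, and 1% cutoff are all made up for illustration; the idea is simply that items contributing a tiny share of total impact won't change the study's conclusions even if they contain errors.

```python
# Hypothetical illustration of a materiality cutoff, not the actual review procedure.

def flag_for_review(items, total_impact, cutoff=0.01):
    """Return items whose share of total impact exceeds the cutoff."""
    return [
        (name, impact, impact / total_impact)
        for name, impact in items
        if impact / total_impact > cutoff
    ]

# Invented example figures (kg CO2e), for illustration only.
items = [
    ("concrete structure", 180_000),
    ("facade glazing", 45_000),
    ("door knobs", 120),
]
total = sum(impact for _, impact in items)

for name, impact, share in flag_for_review(items, total):
    print(f"{name}: {impact} kg CO2e ({share:.1%} of total)")
# The door knobs fall well below the cutoff, so a counting error there
# would not drive the conclusions drawn from the study.
```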