Hindsight bias in high hazard incident investigations
Author : Richard Booth, Professor Emeritus, Aston University, Birmingham, and Director, HASTAM
01 July 2012
The purpose of this paper is to explain the concept of hindsight bias and to show how it can be minimised in incident investigations. Hindsight bias promotes the belief that adverse events are more foreseeable than they actually are, and that errors in the causal chain appear more culpable once the consequences are known.

In this paper I refer briefly to the 2005 Buncefield Explosion where this bias, in my opinion, distorted perceptions of ‘blameworthiness’ and led to a failure to learn important lessons. Those who (quite sincerely) believe that they have ‘factored out’ hindsight in investigations are profoundly mistaken [1]. Reason (2008) has summarised the issue as follows:
“There is a universal tendency to perceive past events as somehow more foreseeable and more avoidable than they actually were. Our knowledge of the outcome unconsciously colours our ideas of how and why it occurred. To the retrospective observer all the lines of causality home in on the bad event; but those on the spot, possessed only of foresight, do not see this convergence.” [My emphases]
In the case of Buncefield, hindsight bias has particular resonance. None of the parties involved anticipated a cold petrol vapour explosion as a result of a storage tank overflow; in the aftermath, it now seems that this was an explosion waiting to happen. Thus shortcomings in the preventive measures taken by some of the participants who were subsequently prosecuted appear more serious with the knowledge of the catastrophic consequences [2].
Research background
The point of departure for studies of hindsight bias is a concept from research on perceptions of risk, namely ‘availability’, introduced by Tversky & Kahneman (1974), though with a longer ancestry.
“The bias of availability refers to the greater role played in the … assessment of probability by information that is regarded as important and salient. This means that people are more influenced by events which have happened recently or frequently or which are particularly vivid” (Hale & Glendon, 1987).
It is a commonplace that people’s perceptions of risk increase sharply following a serious event (an air crash, nuclear incident or terrorist attack, for example), but then gradually decay at a rate affected by the frequency and potency of reminders of the incident, such as newspaper reports, television documentaries and prosecutions. It is equally commonplace for agencies and individuals to be made scapegoats following such events, in my view unfairly in most cases.
The psychologist Fischhoff and his colleagues studied hindsight bias and published papers that now carry uncontested authority. The following quotations from Fischhoff (1975) indicate the nature and consequences of the phenomenon [My emphases throughout]:
“… it appears that what passes for the wisdom of hindsight often contains heady doses of sophistry - that the perceived inevitability of reported outcomes is imposed upon, rather than legitimately inferred from, the available evidence.”
“Consider a decision maker who has been caught unprepared by some turn of events and who tries to see where he went wrong ... If, in retrospect, the event appears to have seemed relatively likely, he can do little more than berate himself for not taking the action which his knowledge seems to have dictated … When second guessed by a hindsightful observer, his misfortune appears to have been incompetence, folly, or worse.”
“Given knowledge of outcome, reviewers will tend to simplify the problem-solving situation that was actually faced by the practitioner. … Possessing knowledge of the outcome, because of the hindsight bias, trivializes the situation confronting the practitioners and makes the correct choice seem crystal clear.”
The prosecution’s case following the Buncefield explosion was replete with hindsight bias: heady doses of sophistry; a presumption of incompetence, folly or worse, and the simplification of the challenges facing those involved, meaning that the adverse events were claimed to be far more preventable than they actually were.
Hindsight bias and Buncefield

The Buncefield blast was the biggest peacetime explosion in Europe since WW2
The devastating Buncefield explosion occurred at 06.01 on Sunday 11 December 2005 at Hertfordshire Oil Storage Ltd (HOSL), 25 miles NW of London. HOSL was a joint venture partnership (JVP) between Total UK Ltd (60%) and the Chevron Corporation (40%). The explosion followed an overflow of 250,000 litres of petrol from Tank T912 into the surrounding bunded area. The source of ignition was a spark from the fire pump motor when the emergency shutdown was belatedly activated.
The overflow was the result of the failure of an endemically unreliable automatic tank gauge (ATG) level indicator/alarm, together with the non-operation of an independent high level switch (IHLS) that appeared to be functioning but had a crippling latent defect. Both the ATG and the IHLS were supplied and maintained by Motherwell Control Systems (2003) Ltd (MCS).
A very experienced night-shift pipeline supervisor, with multiple other tasks and concerns, misunderstood which tank was being filled and from which pipeline. He thought the tank was being filled from a low-flow-rate Chevron pipeline (as was recorded in the day-shift log [3]). This was the result of a misunderstanding by the day-shift supervisor, and a cursory shift handover.
The night supervisor did not realise that the ATG alarm had failed to operate, or, later, that the tank was overflowing (not wholly unaccountably, as he believed that the tank was far from full at the time).
Earlier, he had not monitored the level in the tank; this was not an essential check for him when the tank was (he thought) far from full. Such a check would have revealed that the ATG had seized up, i.e. the level displayed on the VDU was ‘flatlining’ despite the tank filling up; but it would not, of course, have revealed the latent IHLS failure, which nobody knew about. Nonetheless the pipeline supervisor was scapegoated.
The IHLS inoperability was the result of a bizarre error by MCS engineers: they failed to fit essential balance weights to the new IHLSs. The weights supplied by the manufacturer, TAV, were actually padlocks, which MCS staff believed were merely for security and therefore unnecessary.
Total were adopting good and in some respects best practice in promoting and monitoring the safety of their JVP subsidiary. In particular they engaged DNV to carry out audits of HOSL’s safety systems. The audit reports were favourable and the close-out of the audit recommendations was exemplary.
There was a strong Total safety team in support. The key conclusion of my enquiries was that the ‘reach’ of the monitoring systems in relation to a subsidiary company was deficient: this was a ‘failure’ of methods that are generally regarded as very appropriate.
The key manifestations of hindsight bias in the prosecution evidence included the criticism of Total for not anticipating a vapour cloud explosion. Before the event, HSE and the oil majors believed that a cold petrol vapour explosion was not realistically credible (MIIB, 2008). In consequence, Total (HOSL) did not include tank overflow as a major accident hazard in their COMAH Safety Report, at the behest of HSE; Total were nonetheless censured for this omission. As stated above, DNV were engaged to audit HOSL, but their best-practice question-set and verification failed to identify major shortcomings in throughput management and safety-system reliability. The prosecution claimed that these audits were (on the basis of selective quotation) unfavourable, and that Total’s response was cursory.
My Buncefield experience prompted me to consider how the distortions created by hindsight bias can be mitigated.
Hindsight bias and incident investigation
All the foregoing demonstrates that hindsight bias is a malign influence, notably in incident investigation and legal proceedings. But if the bias is minimised, the simplification of hindsight quoted from Reason above (“To the retrospective observer all the lines of causality home in on the bad event”) is actually an advantage. The sequence of events and conditions leading up to the bad outcome is still the important issue, even if people are blamed unduly for their part.
The targeted investigation approach, if it explores underlying and root causes, provides a coherent, comprehensible and simple (but not simplistic) explanation that can be used to devise appropriate precautions to minimise repetitions, and more. But considerable challenges stand in the way of this ideal outcome, and suggestions are made below to overcome these obstacles, which are of course linked to hindsight bias. They are covered in the following sub-sections:

Figure 1: Perceptions of risk following a serious accident (from Booth, 2000)
* The heightened perceptions of risk of investigators immediately after a serious incident;
* Perceptions of the rationality of the behaviour of incident participants;
* Experience of investigators and their beliefs about causation; and
* Analytical methods for incident investigations.
Heightened perceptions of risk
Figure 1 shows how perceptions of risk increase dramatically immediately after a major incident and then gradually decay. An incident investigation naturally takes place when perceptions are close to their peak. At this point hindsight bias exaggerates the likelihood and perhaps the severity of a repetition.
The primary purpose of incident investigations should be to introduce proportionate controls to prevent repetitions of causally-related events, following a genuine root-cause analysis.
But as shown in the figure, all too frequently, controls put in place when perceptions are at their peak may be violated once perceptions decay, and the new controls may come to be viewed as over-bureaucratic and disproportionate. Investigators should recognise that their perceptions are heightened and that this may affect their recommendations.
Secondly, the chosen precautions need to be re-evaluated in, say, a year’s time, when passions have cooled, listening to those who have to comply with them, with a view to dismantling some of the unduly austere new rules. It is a commonplace that investigators (and regulators), having identified rule violations, propose yet further rules, thus exacerbating the challenge.
Perceptions of the rationality of behaviour
Dekker (2003) observes that in order to understand incidents it is essential to study the environments in which people are working and the rationality of their decisions in these circumstances:
“A key commitment … is to understand why it made sense for people to do what they did. A premise is that system goals and individual goals overlap; that people do not come to work to do a bad job. Behaviour is rational within situational contexts.”
“Jens Rasmussen points out that if we cannot find a satisfactory answer to questions such as ‘how could they not have known?’ then this is not because these people were behaving bizarrely [reference omitted]. It is because we have chosen the wrong frame of reference for understanding their behaviour. The frame of reference for understanding people's behaviour is their own normal, individual work context, the context they are embedded in and from which point of view the decisions and assessments made are mostly normal, daily, unremarkable, perhaps even unnoticeable [4]. A challenge is to understand how assessments and actions that from the outside look like ‘errors’, become neutralised or normalised so that from the inside they appear non-remarkable, routine, normal.” [My emphases]

Figure 2: Traditional accident investigations – perceptions of cause (from Booth & Lee, 1995, reproduced from HSC, 1993)
All this counters the bias whereby blameworthiness is magnified if investigators think that acts were not only unsafe but also inexplicable and/or irrational.
Dekker’s and Rasmussen’s comments serve to emphasise that incident investigation must take into consideration the historical, social and cultural context in which the decisions, errors or practices that apparently contributed to the event were made or became the norm. Interpretations of the emerging data must be anchored to what was ‘normal’ knowledge and practice within the industry and organisation when critical contributing decisions were made or actions taken.
Experience and beliefs about causation
A further difficulty is that an investigation, and the hindsight bias within it, are shaped by which aspects of ‘hindsight’ the investigators apply. As Braybrooke and Lindblom (1970) indicate (in the context of evaluating policy), investigations “always begin somewhere, not ab nihilo ...” The human mind is incapable of starting with a completely blank canvas, and investigators come to the investigation with information which they have derived from “historical experience, from contemporary experience in other societies or locations and from imagination stimulated by experience ...”, and this, unwittingly, is the filter through which they examine the emerging facts and draw conclusions.
The investigator begins with the familiar, with what they think they know or have experienced, and then incrementally tries to relate the incident data to it. Those ‘facts’ that are attuned to the familiar frequently form the selected starting point, and influence the thought patterns and the judgement as to what data are important. Not all of the developing hindsight will have equal weight or influence in the conclusions or subsequent improvement recommendations. The malignancy of hindsight bias lies in its selective nature.
Braybrooke & Lindblom’s view that investigators come to an investigation with pre-conceptions is illustrated in its simplest form in Figure 2. Here the sum of the pre-conceptions divides into two views of causation (both mind-sets presume that accidents are uni-causal).
The difference is that some investigators believe that accidents are ‘caused’ by unsafe acts, others that accidents are ‘caused’ by unsafe physical conditions. As can be seen in the figure, this leads either to new rules or to technical changes.
The diagram in fact illustrates long-standing but still current perceptions of very many people, not just investigators. Indeed it is captured by Rasmussen’s ‘stop rule’, where investigators halt investigations as soon as they have found either a human error or a technical shortcoming that in their opinion sufficiently explains what has gone wrong. Rasmussen (1990) states:
“In an analysis to explain an accident, the backtracking will be continued until a cause is found that is familiar to the analysts. … [extended omission] … In one case, a design or manufacturing error, in the other, an operator error will be accepted as an explanation.”
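Rasmussen’s stop rule can be caricatured in a short, purely illustrative Python sketch (the causal chain and the ‘familiar’ labels below are my own hypothetical constructs, loosely echoing the Buncefield narrative, and are not drawn from any actual analysis): a biased backtrack halts at the first cause the analyst finds familiar, whereas a fuller backtrack continues to the underlying and root causes.

# Illustrative caricature of Rasmussen's 'stop rule'; hypothetical data only
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Cause:
    description: str
    familiar: bool                        # does the analyst already recognise this kind of cause?
    antecedent: Optional["Cause"] = None  # the cause lying behind this cause

def biased_backtrack(outcome: Cause) -> List[str]:
    """Backtrack from the outcome, stopping at the first familiar cause (the 'stop rule')."""
    trail, cause = [], outcome
    while cause is not None:
        trail.append(cause.description)
        if cause.familiar:
            break                         # analysis halts here; deeper causes are never examined
        cause = cause.antecedent
    return trail

def full_backtrack(outcome: Cause) -> List[str]:
    """Backtrack past familiar causes to the underlying and root causes."""
    trail, cause = [], outcome
    while cause is not None:
        trail.append(cause.description)
        cause = cause.antecedent
    return trail

root     = Cause("monitoring arrangements with insufficient 'reach'", familiar=False)
latent   = Cause("latent IHLS defect introduced at installation", familiar=False, antecedent=root)
error    = Cause("supervisor misunderstands which tank is filling", familiar=True, antecedent=latent)
overflow = Cause("tank overflow and vapour cloud", familiar=False, antecedent=error)

print(biased_backtrack(overflow))  # stops at the familiar 'operator error'
print(full_backtrack(overflow))    # continues to the underlying and root causes

The biased traversal returns only the ‘operator error’; the fuller traversal also surfaces the latent defect and the monitoring shortfall.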
The gross simplifications just described (together with new precautions that may be wholly inappropriate) may be countered by incident investigation training founded on multi-causality. Contemporary causation models should be the foundation for such training, although they can become an investigative straitjacket, in contrast to ECFA and FTA (see below).
Analytical methods for incident investigations
Dekker (2004) makes the case for analytical models to simplify and thus to make sense of the past as the foundation for future actions:
“The linearisation and simplification that happens in the hindsight bias may be a form of abstraction that allows us to export and project our and others’ experiences onto future situations. … Predictions are possible only because we have created some kind of ‘model’ for the situation we wish to gain control over … This model - any model - is an abstraction away from context, an inherent simplification. The model we create … after past events with a bad outcome inevitably becomes a model … That is the only useful kind of model we can take with us into the future if we want to guard against … pitfalls and forks in the road.” [My emphasis]

The high profile of the Buncefield incident overshadowed Total’s commendable safety record and practices at the site up until then
A key approach to minimising bias and promoting foresight is the adoption of structured incident analysis models, argued well by Dekker above. My preferences are to use Events and Causal Factors Analysis (ECFA) and Fault Tree Analysis (FTA), sometimes in combination.
While hindsight bias may make errors appear more culpable, those errors are nonetheless crucial to understanding what happened and the preventive changes that need to be made. The argument is simply that the detailed analysis of an incident in linear stages provides a comprehensible structure, effectively a map, with which to chart a pathway to the future, which would otherwise be even more opaque and unpredictable.
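As a minimal illustration of the kind of structure FTA provides (the gate layout and event names below are my own simplification, not taken from the actual Buncefield fault trees), the overflow can be framed as an AND gate: it occurs only if filling continues while every layer of level protection fails. The sketch is in Python, with the tree evaluated recursively.

# Minimal fault tree sketch; hypothetical structure for illustration only
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class BasicEvent:
    name: str
    occurred: bool = False

@dataclass
class Gate:
    name: str
    kind: str                                                 # "AND" or "OR"
    inputs: List[Union[BasicEvent, "Gate"]] = field(default_factory=list)

def evaluate(node) -> bool:
    """Return True if a basic event occurred, or if a gate's logic is satisfied."""
    if isinstance(node, BasicEvent):
        return node.occurred
    results = [evaluate(child) for child in node.inputs]
    return all(results) if node.kind == "AND" else any(results)

# Basic events drawn loosely from the narrative above
atg_fails  = BasicEvent("ATG level gauge 'flatlines'; no alarm", occurred=True)
ihls_fails = BasicEvent("IHLS inoperable (latent defect)", occurred=True)
no_check   = BasicEvent("tank level not independently monitored", occurred=True)
filling    = BasicEvent("high-rate filling in progress", occurred=True)

protections_fail = Gate("all level protections fail", "AND", [atg_fails, ihls_fails, no_check])
overflow         = Gate("tank overflow", "AND", [protections_fail, filling])

print(evaluate(overflow))  # True: on the night, every branch of the AND tree failed

Laying the events out in this way makes explicit that the bad outcome required several independent failures, which is precisely the multi-causal picture that unaided hindsight tends to collapse into a single ‘cause’.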
How then are Dekker’s precepts to be implemented? Here are some immediate suggestions which seek to cover the insights contained in the above quotations and hindsight bias more generally:
i) The only way that hindsight bias can truly be eliminated is to have an investigation team that was not aware of the actual outcome. This is manifestly impracticable in almost all cases.
ii) Systematic modelling of the events and conditions leading to a bad outcome helps to minimise bias and to reach an understanding of what went wrong, and why.
iii) Perhaps investigators themselves should create a detailed ‘counter-factual’ (fictitious but plausible) history of a satisfactory outcome, and then, as far as possible, investigate with both this and the actual ‘bad’ outcome in mind (a minimal sketch of such a comparison follows this list).
iv) The creation of the counter-factual outcome should be based in part on the recorded perceptions and beliefs of, for example, rule-violators (if available), taking into account Dekker’s comment: ‘Behaviour is rational within situational contexts’. In any case, investigators should not ‘stop’ until they have examined the case in terms of the challenges and perceptions of the principal parties when they made what were to them rational decisions [5,6].
v) Investigation training would be relatively easy and a deliverable in itself: delegates might be presented with a ‘bad outcome’ fictitious case, perhaps a concatenation of several actual cases with bad outcomes, and asked to create a counter-factual good outcome and then carry out the investigation with the two outcomes, as outlined. There are simpler group exercises that might achieve the same purpose. If the training is effective then the inevitability and consequences of bias might at least be explicitly and coherently addressed in real investigations.
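The counter-factual comparison proposed in (iii) can be sketched very simply (all event descriptions below are hypothetical, loosely echoing the Buncefield narrative): the actual and counter-factual histories are walked in step, and each divergence is flagged as a line of enquiry into why the actual course made sense to those involved at the time.

# Illustrative sketch of suggestion (iii); hypothetical event descriptions
from itertools import zip_longest

actual = [
    "cursory shift handover; pipeline identity misunderstood",
    "tank level not monitored (believed far from full)",
    "ATG 'flatlines'; no high-level alarm raised",
    "IHLS fails to operate (latent defect)",
    "tank overflows; vapour cloud forms and ignites",
]

counterfactual = [
    "structured shift handover; pipeline identity confirmed",
    "tank level checked against the expected fill rate",
    "ATG fault noticed and reported",
    "IHLS trips and shuts down the filling operation",
    "filling completed without loss of containment",
]

for step, (happened, imagined) in enumerate(zip_longest(actual, counterfactual), start=1):
    if happened != imagined:
        print(f"Step {step}: why '{happened}' rather than '{imagined}'?")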
Concluding comments
Hindsight bias is uniformly a harmful influence in the evaluation of bad events. This has been demonstrated here, inter alia, by the extensive quotations and by the Buncefield experience. The key issues are that participants in adverse events are unfairly blamed, and that inappropriate lessons may be learnt from investigations. An equally important conclusion is that, while hindsight bias cannot be eliminated, hindsight is the essential means for learning from experience.
Hindsight-rich investigation of past events, where the facts are structured, ‘simplified’ and made comprehensible by bias-aware experts, is the foundation for coherent efforts to prevent repetitions of adverse events.
Incident investigation training courses embracing an analysis of hindsight bias, together with the practical use of structured investigation models such as ECFA, may maximise the utility of looking at past events. Such training might both improve the efficacy of investigation recommendations and lead to a more balanced view of blameworthiness. The training should also recognise that investigators’ heightened risk perceptions may lead to ‘gold plating’ of new precautions, which may subsequently be violated.

Richard Booth, Professor Emeritus, Aston University, Birmingham, and Director, HASTAM
Footnotes
1. I was an expert engaged to assist the court by Total UK Ltd. This is a source of bias to which readers should be alert, and there could of course be others. The opinions expressed here are wholly my responsibility and are not necessarily the views of any other organisation.
2. Of course, uncontrolled releases of flammable substances from storage tanks are highly undesirable on any terms. This is discussed further below.
3. Shortcomings in shift record keeping and shift handovers were key causal factors in the explosions on Piper Alpha (1988), and at Longford (1998), Port Talbot (2001) and Texas City (2005).
4. Violations that are routine for control room supervisors and condoned or approved by their immediate managers may be particularly difficult for safety auditors to detect. Moreover potential ‘whistle-blowers’ may not see such issues as meriting disclosure.
5. Note that rationally-made decisions may still be culpable. For example, a shift supervisor might ‘skip’ shift handovers for personal reasons, believing them to be a bureaucratic chore.
6. My colleague Professor Andrew Hale’s mantra in accident investigations is “Go on collecting and analysing data until you feel that you too, in the given circumstances, would have made the same decision which proved in practice to be wrong.”
References
Atherley, GRC & Booth, RT (1975) “Could there be another Flixborough?”
Booth, RT & Lee, TR (1995) “The Role of Human Factors and Safety Culture in Safety Management”. Proc. Inst. Mechanical Engineers Vol 209, pp 393-400.
Booth, RT (2000) “An iconoclastic critique of the Sacred Cows of Health & Safety. An analysis of the conventional wisdoms of health & safety management”. Inter alia, Sampson Gamgee Memorial Lecture, Birmingham Medical Institute, 22 November (unpublished).
Braybrooke, D & Lindblom, C (1970) “A Strategy of Decision”. The Free Press, New York, pp 83-84.
BSI (2008) “BS 18004: Guide to achieving effective occupational health and safety performance”. British Standards Institution, London.
Dekker, SWA (2004) “The hindsight bias is not a bias and not about history”. Human Factors and Aerospace Safety, 4(2), pp 87-99, Ashgate Publishing.
Fischhoff, B (2003) “Hindsight ≠ foresight: the effect of outcome knowledge on judgment under uncertainty”. Qual Saf Health Care, 12, pp 304-312. Reprint of a paper in Journal of Experimental Psychology: Human Perception and Performance, 1975, Vol. 1, pp 288-299.
Hale, AR & Glendon, AI (1987) “Individual Behaviour in the Face of Danger”. Elsevier.
HSC (1993) “Organising for safety”. ACSNI Human Factors Study Group third report, HMSO, London.
MIIB (2008) “The Buncefield Incident 11 December 2005. The final report of the Major Incident Investigation Board” [in three parts]. [The MIIB published 14 preliminary reports. The report on the prior role of the Competent Authority (HSE & EA) has not been published; the latter, drawing on pre-event HSE guidance and operational decisions, reveals a picture significantly at variance with the prosecution case.]
Rasmussen, J (1990) “Human error and the problem of causality in analysis of accidents”. Philosophical Transactions of the Royal Society of London, B, Vol. 327, No. 1241, 12 April, pp 449-462.
Reason, JT (2008) “The human contribution: unsafe acts, accidents and heroic recoveries”. Ashgate, Farnham, England.
Tversky, A & Kahneman, D (1974) “Judgment under Uncertainty: Heuristics and Biases: biases in judgments reveal some heuristics of thinking under uncertainty”. Science, New Series, Vol. 185, No. 4157, pp. 1124-1131. American Association for the Advancement of Science.
Wears, RL & Nemeth, CP (2007) “Replacing Hindsight With Insight: Toward Better Understanding of Diagnostic Failures”. Annals of Emergency Medicine Volume 49 No 2. American College of Emergency Physicians.