
Integrating culture and leadership into process incident prevention

12 May 2017

The implementation of process safety management (PSM) has received significant attention in the last 25 years, but despite this attention, major catastrophic incidents continue to occur, with organisational cultures often taking the blame. This pattern tells us that process safety must make the same migration as personal safety – that systems are only as effective as the culture in which they operate, says R. Scott Stricoff of DEKRA Insight.

Culture is often described as “the way we do things around here” or “the unwritten rules.” Culture arises from shared values and beliefs held within the organisation, which lead to shared norms of behaviour. These behavioural norms are reinforced over time as they lead to successful outcomes and positive consequences. A common example relates to operating procedures.

It is not uncommon to find that, over time, operators and their supervisors deviate from the letter of the written procedure, finding ways to do the job more quickly and efficiently (at least in their perception). They are praised, reinforcing the belief that efficiency and speed are valued, and they experience no adverse consequences (e.g., injuries or chemical releases), reinforcing the belief that their practice is sound. In this situation leaders would probably say that operating procedures are important and operational discipline is valued, yet shortcuts become the norm.

Because ultimately all of our PSM systems are implemented through the behaviour of engineers, managers, supervisors, front-line workers, and others within the organisation, the behavioural norms created by the culture have a direct impact on the effectiveness of the PSM systems. In one recent instance, a transfer hose was to be replaced monthly according to the procedures that were in place and that had been the basis of hazard assessments. When the lack of a replacement hose delayed replacement to two months and no safety incident occurred, the replacement cycle was informally changed to every other month, and later to quarterly. This eventually led to a chemical release when a hose failed – not because the hazard assessment was poor or the mechanical integrity programme was flawed on paper, but because various individuals allowed the programmes to slip based on perceptions of risk derived from their recent experience.

We also know that the culture of an organisation and the organisation’s leadership are inextricably linked. Leaders have the ability to create and drive change in a culture through what they do, and through what they do not do. For example, a leader who never asks about resolution of safety audit items but requests frequent updates on installation of new production equipment helps create a culture that values production and is willing to take safety shortcuts to achieve it.

To advance beyond the level of safety achieved through process safety management systems alone, we have found that organisations should adopt a Comprehensive Process Incident Prevention (CPIP) approach that builds on the foundation of PSM to create a safety process integrating the culture and leadership characteristics critical to catastrophic event prevention. This approach includes four major components – Anticipation, Inquiry, Execution, and Resilience – which together build a strong safety culture that supports effective technical and management safety systems.


Anticipation

An organisation that is strong in Anticipation will have mechanisms to capture information from a variety of sources. Process deviations, unusual maintenance requests, and even front-line workers detecting differences in sounds can be meaningful early indicators of changes in exposure. If there are no systems to seek out and capture this information, it can easily be lost.

Once captured, the various types of information that can serve as early indicators must be accessible and structured in a way that allows analysis of patterns across the different types of data. In many organisations a wealth of information is collected, but the different types cannot be integrated and effectively used.

Even when systems are in place, it is incumbent upon the organisation to assure that the systems are used.

Individuals will not bring forward information if they perceive that nothing happens or, even worse, that raising a concern is considered a nuisance. It is important for leaders to encourage individuals to be alert for weak signals and to reinforce this behaviour.

Finally, the information gathered must be used effectively. This requires having people with the right skills who are given appropriate encouragement and attention. Early warning indicators will not always lead to actual findings of increased exposure; by its very nature, sensitivity to weak signals will result in numerous “false positives.” It is easy for the organisation to begin to discount early warnings in the manner of the boy who cried wolf.

Organisations with strong cultures have leaders who visibly value the search for early warnings and reinforce the analysis of these indicators, even when this does not result in identification of serious risk. These leaders understand that supporting the detection and investigation of many false positives is worth it if it results in just one catastrophic event avoided.


Inquiry

Inquiry involves making effective use of information to analyse, understand, and plan the mitigation of risks.

Traditional PSM includes a number of elements (such as process hazard analysis, pre-startup safety review, management of change) designed to evaluate and plan for control of hazards and risks. However there are common (but often undetected) cultural characteristics that can undermine the effectiveness of these efforts and leave the organisation vulnerable.

Cognitive bias refers to the tendency we all have to rely on intuitive rather than analytical thinking in order to process information efficiently. Our knowledge and experience allow us to reach conclusions and make decisions quickly and efficiently in many circumstances. However this can also trap us in poor decisions. Recency bias is a common example – we tend to overweight our recent experience in assessing data and situations. Confirmation bias is another example – we tend to give more weight to data that supports our intuition and belief than to data that refutes it. These and other cognitive biases have been researched and demonstrated to be present in everyone.

Cognitive bias becomes particularly problematic for catastrophic event prevention when it influences our methods of systematic hazard analysis, causing us to underestimate – or miss entirely – potential failure scenarios. In addition, routine day-to-day operational decision-making can be influenced by cognitive bias, leading to unintended increases in risk.

While awareness of cognitive bias helps to counteract its effect, the best way to guard against the insidious effects of cognitive bias is to have an organisational culture that combats it. There are specific leadership behaviours (for example, encouraging the voicing of dissenting opinions) that promote a culture in which the effect of cognitive bias is minimised. There are also specific skills involved in asking the right question in the right way to get the right data. Organisations should promote and measure the use of these leadership behaviours and skills.


Execution

As seen in the examples cited earlier in this article, excellent hazard identification and assessment, as well as hazard control efforts related to mechanical integrity, safe operating procedures, and management of change, can be undermined if the programmes and practices are not followed as intended. While many organisations use periodic audits to provide a check on implementation, the key to assuring consistent and ongoing activity is leaders who monitor, reinforce, and verify effective programme execution.

Monitoring involves regularly acquiring information on what subordinates are doing, how they are progressing toward achievement of goals, and what issues or problems they may be encountering. This is not micromanaging; rather, it is assuring that the leader has sufficient information to meaningfully recognise good performance, provide support when subordinates need it, and provide corrective feedback on those hopefully rare occasions when subordinates fail to fulfil their responsibilities.

A leader’s monitoring behaviour may take many forms. Depending on the situation and the leader, these may include walking around and observing, informal conversations at the front line, periodic meetings with subordinates to review progress, written progress reports, and review of appropriate metrics.

Reinforcement involves providing feedback that recognises good performance. This communicates the importance and priority of the catastrophic event prevention activities and maintains focus on consistent execution. Effective reinforcement is based on effective monitoring, which provides the leader with specific data on which to base reinforcement, avoiding the vague and ineffective “good job” type of feedback to subordinates.

Verification is similar to monitoring, but where monitoring is focused on the performance of subordinates, verification is focused on activities and programmes. Assuring that audit findings are resolved in a timely manner is an example of verification, and leaders who do this effectively are more likely to have organisations in which consistent execution is valued as part of the culture.


Resilience

Upset conditions occur from time to time in any system.

Resilience refers to the organisation’s ability to react in ways that prevent upset conditions from becoming catastrophic events, and then learning from the experience. This has a major influence on ultimate results.

Even where automated control systems are designed to handle upset conditions, it is important that workers understand when and how to intervene, and are not only able but also willing to make appropriate interventions early. An organisation that is strong in resilience is more likely to prevent a small process disruption from becoming a major incident.

One requirement for strong resilience is knowledge: do people at various levels have a broad enough understanding of the operation to make good judgments in an emergency? Some organisations approach this through extensive sets of rules and procedures, intending to assure consistency and avoid reliance on technical knowledge at the operating level. However two problems arise with that approach.

First, the range of possibilities that must be planned for results in a proliferation of procedures and rules that become impractical for anyone to know. Second, this approach assumes that all possibilities (with all variations in every scenario) can be identified in advance – something that is unlikely to be true. The alternative is to develop an organisation in which people are knowledgeable and are taught to make good judgments based on their knowledge and the information at hand.

The second requirement for resilience is willingness, and this relates directly to culture. Simply put, people are less likely to take action on their own initiative if they are not confident that the organisation will support them. Perceptions of the culture’s support for resilience are formed over a long period and are based on many small actions taken and not taken by leaders. An organisation desiring strong catastrophic event prevention will be sensitive to this and intentionally create the culture that supports resilience.

Creating comprehensive process incident prevention

The discussion above emphasises the importance of supplementing systems with specific leadership behaviours to create and sustain a culture that leads to the prevention of catastrophic events. A focused initiative is often needed to introduce these leadership behaviours in a way that integrates them effectively with other safety efforts and assures their use in day-to-day activities.

An important starting point is to assess the cultural impacts on existing technical and management systems for catastrophic event prevention. There are key organisational characteristics indicative of culture that can be objectively measured and that will predict the effectiveness of catastrophic event prevention programmes. One example is Perceived Organisational Support. This measures the extent to which individuals feel that they are valued by the organisation. When this dimension is strong individuals are more likely to take initiative to support the organisation’s objectives, which is critical to strong Anticipation and Resilience.

In addition to measuring these key organisational functioning characteristics it is important to assess the extent to which Anticipation, Inquiry, Execution, and Resilience are present and consistent in the organisation. This assessment will help the organisation understand its strengths and improvement opportunities in creating the cultural foundation to optimise technical and management systems.

Based on the assessment findings, an organisation can take steps to ensure that it has a Comprehensive Process Incident Prevention approach. This may involve strengthening the technical and management systems themselves, improving communication, and/or implementing and using new metrics. However it most often also involves creating the framework of leadership behaviours that drive CPIP-supporting culture.

A common misconception is that leadership behaviours and culture can be changed through training.

While training may be part of the effort, creating a CPIP-supporting culture requires a process that creates understanding of the need for change, elucidates for each individual the leadership behaviours on which he or she must focus, provides new skills and knowledge where necessary, provides structured reinforcement to help establish the new culture, and measures progress.

Through a focused initiative individual leaders can be helped to understand their role in catastrophic event prevention and learn to communicate commitment to that objective. With knowledge of the key behaviours that support Anticipation, Inquiry, Execution, and Resilience and appropriate measurement tools, leaders can receive individual feedback on their use of the behaviours.

Leaders can improve their ability to use the key leadership behaviours, and through a variety of both personal and virtual methods receive reinforcement and guidance in adopting new leadership practices.

About the author

R. Scott Stricoff is president, process safety and Americas region for DEKRA Insight. He has over 40 years of experience in safety performance improvement and today helps organisations build strong safety leadership and culture, working with senior executives to enhance safety performance. He has worked in the chemical, railroad, utilities, and metals industries, as well as with NASA following the Columbia space shuttle accident.
