Maintaining Safe Operations – Is it time for a verification scheme for Management Systems?
23 January 2019
Betteridge’s Law suggests that any newspaper headline that ends with a question mark can be truthfully answered with the word ‘no’. Iain Wilson of DNV GL–Oil & Gas suggests that finding the right answer to the question in the headline may not be quite so straightforward.
It is universally recognised that Process Safety Management (PSM) or the management of Major Accident Hazards (MAH) relies on the interaction between Plant, Process and People risk management barriers.
For offshore production installations there exists in law, under the EU Directive on Offshore Safety(1), a requirement for independent verification of the suitability and sufficiency of the arrangements for the inspection, test and maintenance of the Plant: namely, verification schemes for the management of Safety and Environmental Critical Elements (SECEs).
It is becoming apparent from ongoing incident histories that weaknesses, actual and potential, in the Process and People aspects of risk management are creating opportunities for major incidents. The findings of the Offshore Safety Directive Regulator's (OSDR) first round of 'In Depth Maintaining Safe Operations' (ID MSO) audits support this and highlight the regulator's increased level of attention in this area.
It is estimated that around 40% of ignited process safety incidents occur during normal, steady-state operations, while 60% result from transient activities such as start-up and maintenance(2). This needs to be set against the background that transient operations account for only a small fraction of the running time of any particular piece of equipment.
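To put the imbalance in concrete terms, consider a purely illustrative calculation; the 40/60 incident split is from the article, but the 5% share of running time spent in transient activities is an assumption, not a figure from the HSE database:

```python
# Illustrative only: the 40/60 incident split is from the article; the 5%
# share of operating time spent in transient activities is an assumption.
steady_incidents, transient_incidents = 0.40, 0.60  # share of ignited incidents
steady_time, transient_time = 0.95, 0.05            # assumed share of running time

steady_rate = steady_incidents / steady_time          # relative incident rates
transient_rate = transient_incidents / transient_time

ratio = transient_rate / steady_rate
print(f"Transient operations: ~{ratio:.0f}x the steady-state incident rate")
```

Under these assumed figures, the incident rate per operating hour during transient activities is roughly 28 times that of steady-state operations, which illustrates why the barriers controlling transient operations deserve attention.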
It can be argued that normal operations rely more heavily on Plant barriers, while transient operations are more significantly controlled by Process and People barriers. If this is accepted, it can then be inferred that these barriers are the 'weak links' in the overall risk management picture. As this is clearly not the intended outcome, it suggests that these non-SECE barriers occupy the position of 'poor relations' in terms of assurance, which may be leading to less dependable performance.
Is it time for the Process and People barriers to be subject to the same level of independent scrutiny as the SECEs and for verification to be extended to management systems? And, if the verification of 'Plant' barriers encompasses the inspection, test and maintenance of these barriers as well as checking that these activities are suitable and sufficient, what are the equivalent processes to support the Process and People barriers?
The sections below examine the approach taken for SECE assurance (Plant barriers) and compare it with the approach taken for Safety and Environmental Management Systems (SEMS), encompassing Process and People barriers. Specific questions prompted by the differences are highlighted.
Identification of critical barriers
SECEs are identified from the formal safety assessment and are broadly defined in the PFEER regulations(3). Identified SECEs across all installations show a high degree of commonality.
SCR 2015 guidance(4) gives no clear definition of, or guidance for identifying, critical SEMS-related barriers. While tools such as bowtie analysis can assist in the identification of critical SEMS elements, these analyses most commonly focus on SECEs and deal with SEMS elements superficially, if at all. The technique can, however, be applied to identify the management system elements which are critical to managing major accident hazards, as the sketch below illustrates. Additionally, information from incident investigations and audits can provide indications of the criticality, strength and weakness of SEMS elements, provided root cause analysis is carried out effectively and the findings are analysed sufficiently.
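As a minimal, hypothetical sketch of the idea, the bowtie fragment below tags each barrier as Plant, Process or People, so that the SEMS-related (Process and People) barriers can be identified alongside the SECEs they sit between. All barrier and threat names are invented for the example:

```python
# A minimal, hypothetical bowtie fragment. Each barrier on a threat line
# is tagged by type so that SEMS-related (Process/People) barriers can be
# identified alongside the SECEs. All names are invented for illustration.
bowtie = {
    "top_event": "Loss of containment",
    "threats": {
        "Overpressure": [
            {"barrier": "Relief valve",      "type": "Plant"},    # SECE
            {"barrier": "Permit to work",    "type": "Process"},  # SEMS
            {"barrier": "Operator response", "type": "People"},   # SEMS
        ],
        "Corrosion": [
            {"barrier": "Inspection programme",      "type": "Process"},
            {"barrier": "Wall-thickness monitoring", "type": "Plant"},
        ],
    },
}

sems_critical = sorted({
    b["barrier"]
    for barriers in bowtie["threats"].values()
    for b in barriers
    if b["type"] in ("Process", "People")
})
print(sems_critical)  # ['Inspection programme', 'Operator response', 'Permit to work']
```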
Performance standards
For each SECE, a performance standard describing the required operation of the SECE is developed. This is commonly expressed as a description of the required functionality (what the SECE is intended or designed to do); availability or reliability (the required level of confidence that, when needed, the SECE will operate as intended); and survivability (the expectation that the SECE will continue to function during a developing major accident scenario).
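One way of picturing a performance standard is as structured data covering those three aspects. The sketch below is illustrative only; the field names and example values are assumptions, not extracts from any real performance standard:

```python
from dataclasses import dataclass

# A sketch of how a SECE performance standard might be captured as data.
# The field names and example values are illustrative assumptions, not
# extracts from any real performance standard.
@dataclass
class PerformanceStandard:
    sece: str
    functionality: str          # what the SECE is intended or designed to do
    availability_target: float  # required probability of operating on demand
    survivability: str          # conditions under which it must keep working

ps = PerformanceStandard(
    sece="Emergency shutdown valve",
    functionality="Isolate the riser within 30 seconds of an ESD signal",
    availability_target=0.99,
    survivability="Remain operable during the initial stages of a jet fire",
)
print(ps.sece, "-", ps.functionality)
```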
Often key performance indicators (KPIs) are used to measure the overall health and performance of the organisation. These may be 'leading' (measuring the inputs into the system) or 'lagging' (measuring the outputs from the system, usually in terms of failures). KPIs rarely reflect the performance of a single SEMS element in isolation, so using KPIs as benchmark criteria or health indicators for specific SEMS element assessments is likely to be problematic and imprecise.
KPIs should be analysed to ensure that they are measuring the right things. Criteria should be set for categorising audit findings, and these should be used as pass/fail indicators for the health of SEMS elements, as sketched below.
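A minimal sketch of the idea follows; the finding categories and the zero-major-findings threshold are assumptions chosen for illustration:

```python
# Hypothetical sketch: categorise audit findings and apply a simple
# pass/fail criterion per SEMS element. The categories and the
# zero-major-findings threshold are assumptions for illustration.
findings = [
    {"element": "Permit to work",       "severity": "major"},
    {"element": "Permit to work",       "severity": "minor"},
    {"element": "Management of change", "severity": "minor"},
]

def element_health(element, findings, max_major=0):
    """PASS if the element has no more than max_major 'major' findings."""
    majors = sum(1 for f in findings
                 if f["element"] == element and f["severity"] == "major")
    return "FAIL" if majors > max_major else "PASS"

print(element_health("Permit to work", findings))        # FAIL
print(element_health("Management of change", findings))  # PASS
```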
Inspection, test and maintenance
SECEs are subject to a programme of preventative maintenance designed to ensure continued satisfaction of the requirements set out in the performance standard. This commonly takes the form of tests to confirm that the functionality is as intended, with test and maintenance intervals set such that the availability/reliability meets the criteria in the performance standard.
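A common simplification from the functional safety literature (for example the single-channel, low-demand approximation associated with IEC 61508) links test interval and availability: the average probability of failure on demand is roughly PFDavg ≈ λdu·T/2, where λdu is the dangerous-undetected failure rate and T the proof-test interval. The sketch below, which deliberately ignores diagnostics, redundancy and common-cause failures, estimates the longest test interval consistent with an availability target; the numbers are illustrative:

```python
# Simplified low-demand approximation (cf. IEC 61508, single channel):
# PFDavg ~= lambda_du * T / 2, where lambda_du is the dangerous-undetected
# failure rate (per hour) and T the proof-test interval (hours). Real
# verification schemes also account for diagnostics, redundancy and
# common-cause failures.

def max_test_interval_hours(lambda_du, pfd_target):
    """Longest proof-test interval that still meets the PFD target."""
    return 2 * pfd_target / lambda_du

# Illustrative numbers: failure rate 1e-6 per hour, target PFD of 1e-2.
interval = max_test_interval_hours(1e-6, 1e-2)
print(f"Test at least every {interval / 8760:.1f} years")  # ~2.3 years
```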
Audit is the primary form of assurance for SEMS elements. This can come in a variety of forms. SEMS audits are typically compliance-focussed and follow the definition set out by the European Foundation for Quality Management (EFQM) that “an audit is a check against a defined standard to confirm whether people are doing what they are told they should be doing”. This is necessary but can only provide part of the measures required.
Compliance-based audits can provide information about how well the actions set out in the SEMS elements are being followed (analogous to the availability/reliability of SECEs) but cannot provide information about the success of a SEMS element in achieving its intended objective (the functionality aspect).
Measurement of how rigorously a particular process is being followed does not tell us anything about the effectiveness of the process nor provide opportunities to identify improvements which can be made.
Some form of assessment, defined by EFQM(5) as "a learning activity investigating why people have chosen to do things the way they do and what other options have been considered", would provide the opportunity to explore the effectiveness of the SEMS element and look for opportunities for improvement. Such assessments are often less frequent, less systematic and less detailed than compliance audits.
Pass/fail criteria should be set for audits, and specific, targeted KPIs (both leading and lagging) should be defined. It is common for organisations to measure and trend backlog relating to SECE maintenance, and this should also be applied to SEMS: if the audit programme has fallen behind, a risk assessment should be carried out to identify any exposures and put additional safeguards in place, as sketched below.
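As an illustration of the point, the sketch below trends a hypothetical audit plan and flags when the backlog should trigger a risk assessment; the plan entries, dates and trigger threshold are all assumptions:

```python
from datetime import date

# Illustrative sketch: trend planned-versus-completed audits and flag
# when the backlog warrants a risk assessment. The plan entries, dates
# and trigger threshold are all assumptions.
audit_plan = [
    {"element": "Permit to work",       "due": date(2019, 1, 10), "done": True},
    {"element": "Management of change", "due": date(2019, 1, 15), "done": False},
    {"element": "Competence assurance", "due": date(2018, 12, 1), "done": False},
]

today = date(2019, 1, 23)
backlog = [a for a in audit_plan if not a["done"] and a["due"] < today]

if len(backlog) > 1:  # assumed trigger threshold
    print("Audit backlog exceeds threshold - carry out a risk assessment:")
    for a in backlog:
        print(f"  overdue: {a['element']} (due {a['due']})")
```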
When known degradations or constraints are placed on SECEs, the industry typically instigates an operational risk assessment to mitigate the additional major hazard risk that this presents. Critical weaknesses identified in the SEMS should be treated in exactly the same way as weaknesses in physical control measures, including how we consider the risks associated with management of change, known impairments and deferral of audits.
Continuous improvement
It is an expectation that the data gathered from the SECE inspection, test and maintenance programme be analysed and, if necessary, the performance standards and underlying risk assessment be updated to reflect the findings. There is also an underlying expectation that the overall risk management performance will improve over time and that risk levels will be reduced.

Likewise, there is an expectation that SEMS element performance should improve over time. This cannot be achieved by compliance monitoring alone, as that will only maintain the intended, current situation.
To drive improvement, objective measurement of SEMS element performance is required and the processes themselves must be examined for opportunities for improvement.
As described in the HSE Managing for Health and Safety guide(6), the current favoured management model is 'Plan, Do, Check, Act'. This is a departure from the previous 'Policy, Organising, Planning, Measuring performance, Audit and review' (POPMAR) model. Although audit is no longer a specific management system element, it is noted as being integral to both the Check phase (the audits themselves) and the Act phase (learning from the audit findings). Therefore a combination of audit and measurement is required to assess SEMS element performance and demonstrate improvement.
The audit must go beyond compliance monitoring and into the realm of 'assessment', challenging the effectiveness and efficiency of the SEMS element and potentially benchmarking against best practice and/or implementation elsewhere.
Independent oversight
Verification of SECEs is defined as “a system of independent and competent scrutiny of safety-critical elements throughout the lifecycle of an installation, to obtain assurance that satisfactory standards will be achieved and maintained.”
The verifier is required to confirm that the identified SECEs and defined performance standards are suitable and sufficient. They are also responsible for checking that the activities required to maintain operation in line with the performance standards are being carried out.
SEMS elements are typically audited from within any given organisation. Simple compliance audits are often carried out by line management and higher-level management system audits may be carried out by personnel from other sections or assets. Corporate level audits may also be carried out on an infrequent basis. External third-party audits will normally be confined to ISO certification or similar. The use of third-party resources to support and coach internal auditors can be a way of increasing the quality of audit findings and the effectiveness of audits.
The requirement for independent oversight of SECEs brings a number of benefits. A clear, minimum level of performance is defined; deviation or drift from published maintenance plans can be identified and challenged; and the independent party can provide an insight into best practice.
Third party oversight has the potential to bring many advantages. It offers an incentive to ensure that the audit programme remains on track and provides an external quality check on the audit processes and findings as well as facilitating benchmarking.
Independent assurance and continuous improvement across the entire SEMS could be achieved through use of DNV GL's ISRS™ protocols, which present best practice benchmarks for safe and sustainable management. The system, which is currently being updated, helps change people's behaviour by systematically building risk competence. A new version will be launched later this year.
Summary
It is clear that all barriers are not treated in the same way, nor is the same degree of scrutiny applied to their performance.
The evidence suggests that Process and People barriers, primarily relating to elements of the SEMS, are not as effective or reliable as Plant barriers, primarily our SECEs. Whilst there may be inherent reasons why SECEs should be more reliable than SEMS elements, there can be no sound justification for not assuring the performance of SEMS elements to as high a level as is reasonably practicable. These issues raise pertinent questions about how to identify, measure and assess critical SEMS elements.
The OSDR, through the ID MSO audits, is showing an increased level of interest in specific SEMS-related aspects. Regulations, as they stand, enshrine a different level of oversight for physical barriers, but the question remains: should high performing organisations (or those who aspire to high performance) limit their activities to those required by regulation?
References
1. EU Directive 2013/30/EU, June 2013.
2. HSE, Offshore Hydrocarbon Release Database 1992-2014.
3. HSE, Offshore Installations (Prevention of Fire and Explosion, and Emergency Response) Regulations 1995, Approved Code of Practice and guidance, L65, 2016.
4. HSE, The Offshore Installations (Offshore Safety Directive) (Safety Case etc) Regulations 2015. Guidance on Regulations, L154, December 2015.
5. EFQM, http://www.efqm.org/blog/whats-the-difference-between-assessment-and-audit
6. HSE, Managing for health and safety. HSG65, 2013.
About the author
Iain Wilson is a Senior Principal Consultant at DNV GL - Oil & Gas with 25 years of experience in the management of major accident hazards including the fields of Safety Cases, Safety and Environmental Management Systems, auditing and training.