Measuring and Reporting on Work Health & Safety

by Dave Collins on March 16, 2017

in Safety Statistics



Greg, the author of this article (first published here), certainly does upset some safety people on LinkedIn! He makes an excellent point here – you may get 97%, a big tick or a star for having a plethora of JHAs (or SWMS, JSAs, SOPs etc), another for them being extremely comprehensive, and a few more for indoctrinating all the workers with them and having them sign off on their understanding and compliance. But how do you really know how effective they are? You don’t want to wait until you blow something up and have to give evidence on a witness stand to find out:

Measuring and Reporting on Work Health & Safety

By Greg Smith (WHS Lawyer and co-author of Risky Conversations, The Law, Social Psychology and Risk)

I approach this article with some trepidation.

I was recently sent a copy of Safe Work Australia’s report, Measuring and Reporting on Work Health & Safety, and subsequently saw a post on LinkedIn dealing with the same.  I made some observations on the report in response to the original post which drew the ire of some commentators (although I may be overstating it and I apologise in advance if I have), but I did promise a more fulsome response, and in the spirit of a heartfelt desire to contribute to the improvement of health and safety in Australia – here it is.

I want to start by saying, that I have the utmost respect for the authors of the report and nothing is intended to diminish the work they have produced.  I also accept that I am writing from a perspective heavily influenced by my engagement with health and safety through the legal process.

I also need to emphasise that I am not dismissing what is said in the report, nor saying that some of the structures and processes proposed by the report are not valid and valuable.  But I do think the emphasis in the report on numerical and graphical information has the potential to blind organisations to the effectiveness of crucial systems.

I also want to say that I have witnessed over many years – and many fatalities – organisations that can point to health and safety accreditations, health and safety awards, good personal injury rate data, good audit scores and “traffic lights” all in the green.  At the same time, a serious accident or workplace fatalities exposes that the same “good” safety management systems are riddled with systemic failure – long term systemic departures from the requirements of the system that had not been picked up by any of the health and safety measures or performance indicators.

I am not sure how many ways I can express my frustration when executive leadership hold a sincere belief that they have excellent safety management systems in place, only to realise that those systems do not even begin to stand up to the level of scrutiny they come under in a serious legal process.

In my view, there is a clarity to health and safety assurance that has been borne out in every major accident inquiry, a clarity that was overlooked by the drafters of WHS Legislation and a clarity which is all too often overlooked when it comes to developing assurance programs.  With the greatest possible respect to the authors of this report, I fear this has been overlooked again.

In my view, the report perpetuates activity over assurance, and reinforces that assumptions can be drawn from the measure of activity when those assumptions are simply not valid.

Before I expand on these issues, I want to draw attention to another point in the report.  At page 38 the report states:

Each injury represents a breach of the duty to ensure WHS

To the extent that this comment is meant to represent in some way the “legal” duty, I must take issue with it.  There is no duty to prevent all injuries, and injury does not represent, in and of itself, a breach of any duty to “ensure WHS”.  The Full Court of the Western Australia Supreme Court made this clear in Laing O’Rourke (BMC) Pty Ltd v Kiwin [2011] WASCA 117 [31], citing with approval the Victorian decision, Holmes v RE Spence & Co Pty Ltd (1992) 5 VIR 119, 123 – 124:

“The Act does not require employers to ensure that accidents never happen.  It requires them to take such steps as are practicable to provide and maintain a safe working environment.”

But to return to the main point of this article.

In my view, the objects of health and safety assurance can best be understood from comments of the Pike River Royal Commission:

“The statistical information provided to the board on health and safety comprised mainly personal injury rates and time lost through accidents … The information gave the board some insight but was not much help in assessing the risks of a catastrophic event faced by high hazard industries. …  The board appears to have received no information proving the effectiveness of crucial systems such as gas monitoring and ventilation.”

I have written about this recently and do not want to repeat those observations (see: Everything is Green: The delusion of health and safety reporting), so let me try to explain this in another way.

Whenever I run obligations training for supervisors and managers we inevitably come to the question of JHAs – and I am assuming that readers will be familiar with that “tool” so will not explain it further.

I then ask a question about how important people think the JHA is.  On a scale of 1 to 10, with 1 being the least important and 10 being the most, how important is the JHA?

Inevitably, the group settles on a score of somewhere between 8 and 10.  They all agree that the JHA is “critically important” to managing health and safety risk in their business.  They all agree that every high hazard activity they undertake requires a JHA.

I then ask: what is the purpose of the JHA?  Almost universally, groups agree that the purpose of the JHA is something like:

  • To identify the job steps
  • To identify hazards associated with those job steps
  • To identify controls to manage the hazards; and
  • To help ensure that the work is performed having regard to those hazards and the controls.

So, my question is, if the JHA is a “crucial system” or “critically important” and a key tool for managing every high-risk hazard in the workplace, is it unreasonable to expect that the organisation would have some overarching view about whether the JHA is achieving its purpose?

They agree it is not unreasonable, but such a view does not exist.

I think the same question could be asked of every other potentially crucial safety management system including contractor safety management, training and competence, supervision, risk assessments and so on. If we look again to the comments in the Pike River Royal Commission, we can see how important these system elements are:

“Ultimately, the worth of a system depends on whether health and safety is taken seriously by everyone throughout an organisation; that it is accorded the attention that the Health and Safety in Employment Act 1992 demands.  Problems in relation to risk assessment, incident investigation, information evaluation and reporting, among others, indicate to the commission that health and safety management was not taken seriously enough at Pike.”

But equally, the same question can be asked of high-risk “hazards” – working at heights, fatigue, psychological wellbeing etc.

What is the process to manage the hazard, and does it achieve the purpose it was designed to achieve?

The fact that I have 100% compliance with closing out corrective actions tells me no more about the effectiveness of my crucial systems than the absence of accidents.

The risk of performance measures that are really measures of activity is that they can create an illusion of safety.  The fact that we have 100% compliance with JHA training, that a JHA was done every time it was required to be done, or that a supervisor signed off every JHA that was required to be signed off – these are all measures of activity; they do not tell us whether the JHA process has achieved its intended purpose.

So, what might a different type of “assurance” look like?

First, it would involve a very conscious decision about the crucial systems or critical risks in the organisation, and a focus on those. Before I get called out for ignoring everything else: I do not advocate ignoring everything else. By all means, continue to use numerical and similar statistical measures for the bulk of your safety reporting, but when you want to know that something works – when you want to prove the effectiveness of your crucial systems – make a conscious decision to focus on them.

If I thought that the JHA process was a crucial system, I would want to know how that process was supposed to work. If it is “crucial”, I should understand it to some extent.

I would want a system of reporting that told me whether the process was being managed the way it was supposed to be. And whether it worked. I would like to know, for example:

  • How many JHAs were done?
  • How many were reviewed?
  • How many were checked for technical compliance, and what was the level of technical compliance? Were they done when they were meant to be done? Were they completed correctly?
  • How many were checked for “quality”, and what was the quality of the documents like? Did they identify appropriate hazards? Did they identify appropriate controls? Were people working in accordance with the controls?

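The distinction the questions above are driving at – activity measures versus effectiveness measures – can be sketched in a few lines of code. This is purely an illustrative sketch: the field names, sample sizes and thresholds are my own assumptions, not figures from the article or the SWA report.

```python
# Illustrative sketch only: field names and thresholds below are assumptions,
# not taken from the article or the Safe Work Australia report.
from dataclasses import dataclass


@dataclass
class JHAAssurance:
    # Activity measures: counts of things that were done.
    jhas_required: int
    jhas_completed: int
    # Effectiveness sampling: results of quality reviews of a sample of JHAs
    # (appropriate hazards identified, appropriate controls, work done to them).
    sampled: int
    found_adequate: int

    def activity_rate(self) -> float:
        """Share of required JHAs that were actually completed."""
        return self.jhas_completed / self.jhas_required

    def effectiveness_rate(self) -> float:
        """Share of sampled JHAs judged to achieve their purpose."""
        return self.found_adequate / self.sampled

    def illusion_of_safety(self) -> bool:
        """Flag the pattern the article warns about: high activity
        compliance combined with poor sampled effectiveness."""
        return self.activity_rate() >= 0.95 and self.effectiveness_rate() < 0.7


report = JHAAssurance(jhas_required=200, jhas_completed=198,
                      sampled=20, found_adequate=9)
print(f"activity {report.activity_rate():.0%}, "
      f"effectiveness {report.effectiveness_rate():.0%}, "
      f"red flag: {report.illusion_of_safety()}")
# Prints: activity 99%, effectiveness 45%, red flag: True
```

The point of the sketch is that a board report showing only the first two fields would be all green, while the sampled-quality fields tell a very different story.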
I would also want to know what triggers were in place to review the quality of the JHA process. Was our documented process a good process? Have we ever reviewed it internally? Do we ever get it reviewed externally? Are there any triggers for us to review our process, and was it reviewed during the reporting period? For example, if we are alerted to a case where an organisation was prosecuted for failing to implement its JHA process, does that cause us to go and do extra checks of our own systems?

We could ask the same questions about our JHA training.

I would want someone to validate the reporting. If I am being told that our JHA process is working well – that it is achieving the purpose it was designed for – I would like someone (from time to time) to validate that. To tell me, “Greg, I have gone and looked at operations and I am comfortable that what you are being told about JHAs is accurate. You can trust that information – and this is why …”.

As part of my personal due diligence, if I thought JHAs were crucial, that is what I would check when I went into the field. I would validate the reporting for myself.

I would want some red flags. Most importantly, I would want a mandatory term of reference in every investigation requiring the JHA process to be reviewed for every incident: not whether the JHA for the job was a good JHA, but whether our JHA process achieved its purpose in this case, and if not, why not.

If my reporting is telling me that the JHA process is good, but all my incidents are showing that the process did not achieve its intended purpose, then we may have systemic issues that need to be addressed.

I would want to create as many touch points as possible with this crucial system to understand if it was achieving the purpose it was intended to achieve.

My overarching concern, personally and professionally, is to structure processes to ensure that organisations can prove the effectiveness of their crucial systems. I have had to sit in too many little conference rooms, with too many managers who had audits, accreditations, awards and health and safety reports that made them think everything was OK, while they had a dead body to deal with.

I appreciate the attraction of traffic lights and graphs. I understand the desire to find statistical and numerical measures to assure safety.

I just do not think they achieve the outcomes we ascribe to them.

They do not prove the effectiveness of crucial systems.

  • Goran Prvulovic

    The fascination with numbers and traditional management principles, which comes from various disciplines associated with accounting, engineering and many others does not do any favours to management of occupational health and safety.
    Management of HSE risk is a different beast and requires different thinking as it is vastly different from managing financial risk. This is the point which really needs to be emphasised and understood by directors and managers. We need to stop thinking about numbers and indicators and start thinking about people, leadership, operational decision making and proactive management of risks. If we take care of this as a business input, the performance will improve. So how do we measure leadership, balanced operational decision making and utilisation of people as a solution rather than a source of a problem?
    The problems we have are many. Firstly, we have people running organisations who are looking for data to inform them of the state of culture and management of risks, mostly because this is the only way they can understand its state. Can data measure culture, or climate as described in the report? Many safety practitioners would disagree. Furthermore, individual perceptions are not descriptive of a safety ‘climate’ but rather of the safety culture itself. Culture is really not about values, assumptions and beliefs, but rather about collective practices in operational decision making – a set of visible actions, and actions which the board clearly needs to be aware of when seeking assurances on the management of risks. The focus in this particular space between board and management should be the same, contrary to the report findings. Decisions themselves do not produce safe, healthy and productive work. Senior management practices do. If there is one organisational KPI which needs to be included in the management of risk, it is the observation of those practices at the management level, by the board members. See, we have invested countless millions over the years in observing workers via various BBS programs, but which organisation has a system where the board observes the safety-related practices of senior management? What is the mechanism for that? Board reports with TRIFR and LTIFR?

    There are also two dimensions to the due diligence concept. The first is strictly related to the legislation, and this is well covered. But what about the moral and ethical aspects of due diligence? How are they being met in practice, and what do they mean? We need to think beyond compliance, legislation and the Corporations Act, as we are dealing with people’s lives. Legal frameworks and practices in this space are often dehumanising. We need to depart from thinking of a ‘duty’ under the Act towards a duty to a fellow human, first and foremost. There is a big difference there, and sadly the gap is not getting smaller. It is disappointing that safety material, including this report, does not emphasise those points. How do the suppression of, and limitations placed on, internal accident investigations in some organisations line up with the concept of due diligence and ethics?

    Organisational maturity is a complex subject, and it is not about the data an organisation monitors but rather about how it understands its people and numerous other practices. Here are some of them:

    http://www.riskwisesolutions.com.au/resources/RiskWise%20Pathway%20Leadership%20Safety%20Culture%20Model.png

    An organisational ‘risk picture’ comes from trying to understand uncertainty, and the most effective method for this is through evaluation of critical controls, consultation and group projection on possible scenarios. Hazard ID is only the start of this process, and audits and reviews of past incidents are useful but very limiting and, in some cases, a completely misleading indicator.

    I think that, overall, the report is a good resource, provided that people, especially directors, managers and safety professionals, understand its limitations and relatively narrow envelope.

    • Rob Long

      Thanks Goran. Some great thoughts.

  • Rob Long

    Unfortunately, the SWA Report is loaded with so many assumptions about measurement and due diligence I wouldn’t be quite sure where to start. There is no discussion in the report on the nature of attribution and that both due diligence and the requirements for reporting under the Act are highly subjective and at best should be understood as ‘guides’ not ‘standards’. The assumptions about measurement and the objectivity of measurement are not discussed as even a thought moment in the Report. Unfortunately, Directors and Managers in businesses believe that some statistical and numerical form of data actually tells them something about risk and safety in their business. Indeed, none of these are in any way a measure of culture or even slightly helpful in a forward looking sense. Indeed the assumptions in the report about culture help drive common myths of measurement. This doesn’t mean there is no value in the report but I would have thought safety would be able to move on from the traditional myths that sustain the current problems that plague safety.
    Hindsight bias may tell us about history but doesn’t throw much light on futures. Numerical data is not an indicator or a KPI of WHS ‘performance’, nor does it help us understand the underlying cultural issues to do with the organisational ‘collective unconscious’. When it comes to understanding such things, numerics are not just unhelpful but misleading. Unfortunately, most of the data the industry gives to boards and CEOs simply helps maintain popular mythology about risk and safety. The problem in WHS is not really about negative or positive indicators of risk and safety at all, but rather the assumption that numerics should define safety itself. The ‘risk picture’, indeed, should be able to help CEOs and Directors understand whether they or their organisation are ‘risk intelligent’ and what drives decision making in their organisation. Unfortunately, the semiotics of the report reinforce the traditional assumptions and tools of safety (anchored to pyramids and matrices), but in reality neither risk nor safety conforms to the linear and reductionist assumptions attributed to common safety symbols.
    We need to move away from the mechanistic and systems worldview (and how it has anchored the meaning of safety) to a more sophisticated understanding of risk and culture that embraces what we know about emergence, wicked problems, the unconscious and risk intelligence.

    • Goran Prvulovic

      Spot on

  • Sharron O’Neill

    Dear Greg,

    Thank you for your thoughtful article. As one of the authors of the report in question, I must say I wholeheartedly agree with your sentiments (and see your point about a strict legal interpretation of duty on p38 😊). I suppose in response I make two points. First, the brief for this research report was around measuring WHS. For that reason, it is explicitly focused on quantitative measures. However, we clearly do not expect that measures are, or should be, the only thing managers look at. This data must be contextualised by the types of qualitative information you articulated here. Our key concern was that if managers are going to look at numbers, those numbers need to be a far more useful set of measures for informing decisions than the LTIFR rates that most are currently receiving. To that end, we explain why traditional measures are not useful and outline some measurement quality issues for consideration by those who select and manage WHS data. Second, and aligned with your sentiments above and the Pike River Royal Commission quote cited above, a key thrust of the report is to encourage those developing KPIs in their organisations to measure things that matter by explicitly considering controls in terms of both the activity (measures of which we refer to as the leading KPIs of that control) and the subsequent effectiveness of those controls (which we refer to as the lagging measures of a control, as on p12 and the section on WHS position). It is not possible in a report like this to identify relevant effectiveness measures for every type of business, although we provide a few examples for illustration. In your article, you identify a number of other outcomes that clearly could be monitored in the same way with respect to JHAs. Thank you for that!

    I am pleased to read your article as it would be a shame if our report was interpreted by readers as inferring that measures are all that matter. Equally it would be a shame if readers reject out of hand the potential insights that are available from identifying relevant data and measuring and reporting it well. So thanks for your article, Greg and for taking the time to read the report. Cheers, Sharron
