February 26, 2016

Raising the Bar for Leading Indicators

Over the past several years, many safety organizations have focused on leading indicators. The idea is that OSHA rates are lagging measures, too far downstream to guide preventative action, so metrics that reflect earlier activity are needed instead. In concept this makes sense, but in practice it is often flawed.

A panel of experts convened by the National Safety Council’s Campbell Institute recently surveyed a group of companies about how they approach the development of leading indicators. Much good work has been done by these organizations, and it is helpful to see it summarized. But it is surprising how few of the companies surveyed seem to understand the importance of validating leading indicators before putting them into use, or what it takes to establish that validation. Although the summary defines leading indicators as “proactive, preventative, and predictive measures…,” the predictive part is often spotty.

These days, safety dashboards are filled with safety activity and outcome measures. Some include true leading indicators, but too often that is not the case. Many organizations embrace the concept of leading indicators without the essential step of validation. Just because leaders think a measure is important and predictive doesn’t mean it is. And if a measure is misread this way, precious time and resources can be wasted taking action on the wrong things, credibility can be lost, and safety will suffer.

Measures Have Gotten Better
I remember when nobody tracked anything systematically and OSHA logs were hand-written on paper, then stacked in the corner of someone’s office to gather dust. The first improvement came when organizations began looking at trends in their safety outcomes over time. A few used statistical tools like control charts to separate special-cause variation from common-cause variation. As performance management became more sophisticated, organizations started tracking safety activities as part of their goal-setting and performance appraisals. When they added things like safety observations to the mix, even more leading metrics became available. Today, leading indicators are in common use. I even know of organizations that have discarded lagging indicators altogether and replaced them with safety activities and other indicators of exposure. I’m not sure that was wise, but that’s a subject for another post.
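For readers who have not worked with control charts, here is a minimal sketch of the idea in Python; the monthly counts are hypothetical and used only for illustration. A simple c-chart compares each month’s recordable-injury count against limits derived from the long-run average: only points outside those limits suggest special-cause variation worth a targeted response, while everything inside them is ordinary common-cause noise.

# Minimal c-chart sketch; counts are hypothetical, for illustration only.
monthly_recordables = [3, 1, 4, 2, 0, 5, 2, 3, 1, 2, 9, 2]

c_bar = sum(monthly_recordables) / len(monthly_recordables)  # center line
ucl = c_bar + 3 * c_bar ** 0.5            # upper control limit
lcl = max(0.0, c_bar - 3 * c_bar ** 0.5)  # lower control limit, floored at zero

for month, count in enumerate(monthly_recordables, start=1):
    flag = "  <- possible special cause" if (count > ucl or count < lcl) else ""
    print(f"month {month:2d}: {count:2d}{flag}")

print(f"center line = {c_bar:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")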

More Work to Do
Few organizations I know are tracking leading indicators that fully meet the definition set forth by the NSC in 2013. A notable exception involved work done by an internal consultant I knew in the 1990s. We’ll call him Dave. Dave was implementing a safety observation process throughout his company, and he had an abundance of metrics available to him, such as the frequency of observations and the percentage of observed behaviors that were safe. Theoretically, we knew that a higher % safe, more observations, and fewer observed risks were all better for safety. But the validity of these metrics for measuring and improving safety performance on a broad scale, in the full organizational context, had never been established. We did not know whether the numbers we were getting were predictive of outcome measures, or whether acting on the numbers would cause the outcome metrics to improve.
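To make “predictive of outcome measures” concrete, here is a minimal sketch of one common validation step, written in Python with hypothetical numbers (this is not Dave’s actual data or analysis). It checks whether a candidate leading indicator, monthly % safe from observations, correlates with the recordable rate one month later. A strong, stable relationship that replicates across sites and time periods is evidence the metric is genuinely predictive; a weak or unstable one suggests it is merely activity data.

import numpy as np

# Hypothetical monthly data for one site (illustration only).
pct_safe = np.array([88, 90, 85, 92, 95, 91, 87, 93, 96, 94, 89, 90], dtype=float)
recordable_rate = np.array([2.1, 1.8, 2.4, 1.5, 1.2, 1.6, 2.2, 1.4, 1.1, 1.3, 2.0, 1.7])

lag = 1  # compare this month's indicator with next month's outcome
x = pct_safe[:-lag]
y = recordable_rate[lag:]

r = np.corrcoef(x, y)[0, 1]
print(f"lag-{lag} correlation between % safe and recordable rate: {r:.2f}")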

Dave provided the first data allowing us to validate a whole set of potential leading indicators, and we were surprised to discover which metrics were predictive and which were not. Consider % safe. Today it is common knowledge among anyone running an observation process that a 100% safe observation is not necessarily a good observation. Why? Because there are many ways to get to 100% safe, including pencil whipping. Wasting time is bad enough, but this also ruins % safe as a leading indicator for many organizations.

It’s Worth the Effort to Find Your True Leading Indicators
The fact that a central, seemingly predictive metric was not a true leading indicator was a surprise to Dave and me. Fortunately, the project revealed a handful of excellent leading indicators that he was able to incorporate into his safety dashboard. Dave now had at his disposal information about the health of his safety management systems, giving him actionable feedback to strengthen his processes and prevent more injuries. Moreover, Dave’s project demonstrated that stronger performance on his safety leading indicators correlated with greater efficiency and effectiveness in his organization generally. This was a big deal. By investing the effort to develop and validate true safety leading indicators for his organization, Dave had created the business case for safety that propelled his organization toward excellence.

Learn more about our approach to safety leadership