The risk-benefit misdirection

Peter Selvey, October 2018

While auditing a hearing aid company several years ago, I started to get a little concerned about a device with a peak output of 140dB: surely that sound level was moving into a seriously dangerous region? As a first defence, the manufacturer simply said “that’s what the doctors are asking for”, itself an interesting side question of who should take responsibility in such cases. But the case was also an interesting study of the whole concept of risk and benefit: the manufacturer eventually provided a clinical study showing around 6% of users suffered hearing loss even from normal 120dB type hearing aids – in other words, the medical device that was meant to help the patient was actually causing a significant amount of injury.

Mounting a further defence, the manufacturer quickly turned to other literature showing that without the hearing aid, the brain allocates the unused neural networks to other tasks – making the patient effectively deaf. So those 6% of patients would end up deaf anyway, while the other 94% were happy. The benefits far outweighed the risks, making the overall situation acceptable.

Highlighting benefit to justify risks is an obvious defence for those that provide devices, services, goods or facilities. Google the phrase “balancing risk and benefit” and you will receive “about 170,000,000” hits (a slightly suspicious figure, given that’s one article for every 45 people on the planet, but it nevertheless indicates the phrase is rather popular).

An obvious implication of using benefit to justify risk is that the higher the benefit, the higher the risk that we can tolerate.

Medical devices provide significant benefit. It follows then that, for example, we can accept significantly higher risks for a dialysis machine than, say, a toaster.

Another obvious conclusion: since medical devices vary greatly in the benefits they provide, it follows that the criteria for acceptable risk should also vary greatly with each type of medical device. The risk we can accept from a tongue depressor is vastly different from the risk we can accept from a life-saving, cancer-killing gamma-ray therapy machine.

Or not.

The logic in paragraphs 3, 4 and 5 above is at best misleading, and the logic in 2 and 6 is utterly wrong. Incorrect. Mistaken. Erroneous. Not stupid, mind you … because we have all been seduced at some stage by the argument of balancing risk and benefit. But definitely barking up the wrong tree.

Some 20 years ago, while preparing material for internal risk management training, I decided to analyse my wallet. My target was to find a mathematical model using risk and benefit that could derive the optimal amount of cash to withdraw from the ATM. Withdraw too much (say, $1,000), and the risks of a lost or stolen wallet would be excessive. Withdraw too little (say, $10) and the lack of benefits of having cash to pay for purchases would become apparent. It was a good case study since it was easy to quantify risk in dollars, meaning it should be possible to build a mathematical model and find a specific value. It was just a matter of figuring out the underlying math.

It turned out to be a seminal exercise in understanding risk. Among several light-bulb moments was the realisation that the optimal point had nothing to do with the benefit. The mantra of “balancing risk and benefit” (or mathematically “Risk = Benefit”) was so deeply embedded in my skull that it took some effort to prise away. Eventually I realised the target is to find the point of “maximum net benefit”, a value that takes into account three parameters: benefit, risks and costs. The benefit, though, was constant in the region of interest, so ultimately it played no role in the decision. That meant the optimal value was simply a matter of minimising risks and costs.

From there the equations tumbled out like water from a breached dam: risks (lost or stolen wallet) and costs (withdrawal fees, my time) could both be modelled as functions of the cash withdrawn. Solving the equations finds the point at which risk and costs together are minimised (mathematically, the point where the derivative of the sum is zero; surprisingly, it needs quadratic equations!). I don’t remember the exact result (it was something like $350), but I do remember the optimal point had nothing to do with benefit.
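For the curious, the model can be sketched in a few lines of Python. The numbers below (annual cash spend, fee per withdrawal, chance of losing the wallet) are my own illustrative assumptions, not the figures from the original exercise:

```python
import math

# Illustrative assumptions - not the original figures
ANNUAL_SPEND = 3_000.0   # cash spent per year ($)
FEE = 2.00               # fee (plus time cost) per ATM withdrawal ($)
P_LOSS = 0.10            # chance per year of a lost or stolen wallet
                         # (on average, half the withdrawal is at risk)

def annual_cost(w: float) -> float:
    """Expected yearly cost of risk plus fees for withdrawal size w."""
    risk = P_LOSS * w / 2            # expected loss if the wallet goes
    fees = FEE * ANNUAL_SPEND / w    # ANNUAL_SPEND / w trips to the ATM
    return risk + fees

# Minimising a*w + b/w: setting the derivative a - b/w**2 to zero
# gives the quadratic a*w**2 = b, hence w* = sqrt(b/a).
a = P_LOSS / 2
b = FEE * ANNUAL_SPEND
w_opt = math.sqrt(b / a)
print(f"optimal withdrawal: ${w_opt:,.0f}")
```

Note that the benefit of carrying cash never appears in the calculation: the optimum falls out of risk and cost alone, which matches the observation that benefit was constant in the region of interest.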

Mathematics aside, it’s fairly easy to generate case studies showing that using benefit to justify risk doesn’t make sense. It’s not rocket science. Consider for example the hearing aid case: yes, benefits far exceed the risk. But what if there were a simple, low cost software solution that reduced the injury rate from 6% to 1%? Say a software feature that monitors the accumulated daily exposure and reduces the output accordingly, or minimises hearing aid feedback events (that noisy ringing, which if pumped out at 140dB would surely be destructive). If such software existed, and added only $0.05 to the cost of each unit, it obviously makes sense to use it. But how do you arrive at that decision using risk and benefit?

You can’t. There is no logical reason to accept risk because of the benefit a device provides, whether a toaster, hearing aid, tongue depressor or dialysis machine. Instead, the analysis should focus on whether further risk reduction is practical. If it is not, we can use risk/benefit to justify going ahead anyway. But the actual decision on whether to reduce risk is not based on the risk/benefit ratio; it is based purely on whether risk reduction is “practical”. Risk/benefit is a gate applied at the end of the process, after all reasonable efforts to minimise risk have been taken.
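The hypothetical exposure-limiting feature from the hearing aid example is a good illustration of how cheap such “practical” risk controls can be. A sketch is below; it is entirely hypothetical, and borrows the standard occupational-noise equal-energy rule (3 dB exchange rate) rather than anything from an actual hearing aid design:

```python
# Hypothetical daily-exposure limiter - illustrative only
REFERENCE_DB = 85.0            # level sustainable for a full day
REFERENCE_SECONDS = 8 * 3600   # the "full day" of the reference dose

class ExposureLimiter:
    """Tracks accumulated daily sound dose and backs off the output.

    Equal-energy rule: every +3 dB above the reference halves the
    allowable exposure time (a common occupational-noise convention).
    """

    def __init__(self) -> None:
        self.dose = 0.0   # fraction of the daily allowance consumed

    def update(self, level_db: float, seconds: float) -> float:
        """Accumulate dose; return the gain reduction (dB) to apply."""
        rate = 2 ** ((level_db - REFERENCE_DB) / 3.0) / REFERENCE_SECONDS
        self.dose += rate * seconds
        return 6.0 if self.dose >= 1.0 else 0.0
```

A real implementation would need clinical validation of the thresholds; the point is only that risk controls of this kind cost very little to build, which is exactly what makes the “is further risk reduction practical?” question bite.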

So the hearing aid manufacturer was wrong to point to risk/benefit as a first justification. They should have pointed at solutions A, B and C which they were already implementing, and perhaps X, Y and Z which they had considered but rejected as impractical.

The authors of ISO 14971 obviously knew this, and careful inspection of the standard shows that the word “benefit” appears only twice in the normative text, and each time it is preceded by a qualifier that further risk reduction was “not practicable”.

The standard, though, is a little cheeky: no records are required to document why risk reduction was “not practicable”. This makes it easy for manufacturers to overlook this step and jump directly to risk/benefit, as our hearing aid manufacturer did.

Beyond this, another key reason why manufacturers (and to some extent regulators) might want to skip over documenting the “not practicable” part is that it’s an ethical minefield. Reasons for not taking action can include:

  • the absence of suitable technology

  • risk controls that increase other risks or reduce the benefit

  • high cost of risk control

  • competitive pressure

The latter two items (cost, competition) are valid concerns: the healthcare budget is not unlimited, and making your product more expensive than the rest does not reduce the risk if nobody buys it. Even so, using cost and competition to justify inaction is a tricky thing to write up in a regulatory file. ISO 14971 seems to have quietly (deliberately?) sidestepped the issue by not requiring any records.

Even so, skipping over the “not practicable” part and jumping straight to risk/benefit can lead to reasonable risk controls being overlooked. That not only places patients at risk, it also exposes the manufacturer to claims of negligence when things go wrong. The risk/benefit argument might work for the media, but end up in court and the criterion will be whether or not reasonable action was taken.

There is one more aspect to this story: even before we get to the stage of deciding whether further risk reduction is “not practicable”, there is an earlier point in the process: determining whether risk reduction is necessary in the first place. For this we need to establish the “criteria for acceptable risk” used in the main part of risk management.

Thanks to the prevalence of risk/benefit, there remains a deeply held belief that the criteria should vary significantly between devices. But as the above discussion shows, the criteria should be developed independently of any benefit, and as such should be fairly consistent from device to device, for the same types of harm.

Instead, common sense (supported by the literature) suggests that the initial target should be to make the risk “negligible”. What constitutes “negligible” is of course debatable, and a separate article looks into a possible theory which justifies fairly low rates such as the classic “one in a million” events per year for death. Aside from death, medical devices vary greatly in the type of harm that can occur, so we should expect to see criteria tuned to each particular device. But they are based not on benefit, rather on the different types of harm.

At least for death, we can find evidence in standards that the “negligible” criterion is well respected, irrespective of the benefits.

Dialysis machines, for example, are devices that arguably provide huge benefit: extending the patient’s life for several years. Despite this benefit, designers still work towards a negligible criterion for death wherever possible. Close attention is given to the design of independent control and protection systems associated with temperature, flow, pressure, air and dialysate, using double sensors, two CPUs and double switching, and even applying periodic start-up testing of the protection systems. The effective failure rates are far below the 1/1,000,000 events per year criterion normally applied for death.
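The arithmetic behind such architectures is worth a back-of-envelope sketch. The failure probabilities below are my own illustrative assumptions, not figures from any real dialysis machine:

```python
# Illustrative assumptions - not figures from a real device
P_CONTROL = 1e-3          # yearly failure probability, control channel
P_PROTECT = 1e-3          # yearly failure probability, protection channel
TEST_FRACTION = 1 / 365   # daily start-up test: a protection fault can
                          # sit undetected for at most ~1/365 of a year

# A dangerous event needs the control to fail while the protection
# channel is already (undetectably) failed. For independent channels
# that is roughly the product, scaled by the undetected window.
p_dangerous = P_CONTROL * P_PROTECT * TEST_FRACTION
print(f"{p_dangerous:.1e} events per year")
```

Even with channels that individually fail about once per thousand years, the combination of independence and periodic self-testing lands orders of magnitude below the 1/1,000,000 criterion, which is why the redundant architecture is considered practical rather than gold-plating.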

Other high risk devices such as infant incubators, infusion pumps and surgical lasers follow the same approach: making the risk of death from device failure negligible wherever possible.

Ironically, it is often the makers of medium risk devices (such as our hearing aid manufacturer) that are most likely to misunderstand the role of risk/benefit.

Regardless, the next time you hear someone reach for risk/benefit as a justification: tell them to stop, take a step back and explain how they concluded that further risk reduction was “not practicable”. ISO 14971 may not require any records, but it’s still a requirement.