IEC 60601-1 Clause 4.3 Essential Performance

The basic idea behind essential performance is that some things are more important than others. In a world of limited resources, regulations and standards should focus on the important stuff rather than try to cover everything. A device might have literally thousands of discrete “performance” specifications, from headline items such as equipment accuracy through to mundane details like how many items an alarm log can record. And there can be hundreds of tests proving a device meets specifications in both normal and fault conditions: clearly it’s impossible to check every specification during or after every test. We need some kind of filter to say OK, for this particular test, it’s important to check specifications A, B and F, but not C, D, E and G.

Risk seems like a great foundation on which to decide what is really “essential”.

But it is a very complicated area, and the “essential performance” approach in IEC 60601-1 is doomed to fail because it oversimplifies the problem to a single rule: "performance ... where loss or degradation beyond the limits ... results in an unacceptable risk".

A key point is that using acceptable risk as the criterion is, well, misleading. Risk is in fact the gold standard, but in practice it gets messy because of a bunch of assumptions hiding in the background. For example, most people would assume that the correct operation of an on/off switch is not essential performance. Yet if the switch fails, the device then fails to treat, monitor or diagnose as expected. The user would then have to find another device to perform the same function. In the delay, there is a small but significant probability that the patient is harmed (if there wasn’t, it would mean the medical device had no real purpose). But your gut is still saying … nah, it doesn’t make sense - how can an on/off switch be considered essential performance? The hidden assumption is that the switch will rarely fail - instinctively we know that modern switches are sufficiently reliable that they are not worth checking, the result of decades of evolution in switch design. The two factors combined - switch failure, and harm due to delay - make the overall probability very small and in most cases acceptable.
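To make that gut feeling concrete, here is a rough sketch of the arithmetic behind it. The probabilities are purely illustrative assumptions, not figures from any real device or from the standard:

```python
# Illustrative sketch only: hypothetical probabilities for the on/off switch example.
# Neither number comes from IEC 60601-1 or from real field data.

p_switch_failure = 1e-6     # assumed chance the switch fails during the device's service life
p_harm_from_delay = 1e-2    # assumed chance the delay in finding another device harms the patient

# Harm requires both events: the switch must fail AND the resulting delay must cause harm.
p_overall = p_switch_failure * p_harm_from_delay

print(f"Overall probability of harm: {p_overall:.0e}")  # 1e-08, typically an acceptable risk
```

Change the first assumption to an unreliable switch and the same function suddenly looks much more like something worth checking.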

This implies that what is essential is not just how important the function is, but also how likely it is to fail, and in particular how likely it is to be affected by the tests under consideration.

That turns out to be highly context-driven; you can't derive it purely from the function. Technology A might be susceptible to humidity, technology B to mechanical wear, technology C might be so well established that spot checks are reasonable. Under waterproof testing, function X might be important to check, while under EMC testing function Y is far more susceptible.

Which means that simply deriving a list of what is "essential" out of context makes absolutely no sense.

In fact, a better term to use might be "susceptible performance", which is decided on a test-by-test basis, taking into account:

  • technology used (degree to which it is well established and reliable)

  • susceptibility of the technology to the particular test

  • probability of the test condition occurring (i.e. normal use or an abnormal condition)

  • the severity of harm if the function fails

It would be possible to create a rating system for each of the four factors above, and then build a table that decides whether a parameter should be checked in a particular test. But in practice, engineers can usually make a reasonable judgement call about what items to check in any particular test. It’s not as simple as saying “loss or degradation beyond the limits ... results in an unacceptable risk”, but it’s not overly complicated either.
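As a minimal sketch of what such a rating scheme might look like, assuming a simple 1-3 score per factor and an additive threshold (the function name, scores and threshold below are invented for illustration and are not part of the standard):

```python
# Hypothetical rating scheme: score each of the four factors from 1 (low concern)
# to 3 (high concern), then check the parameter during the test if the total score
# crosses a threshold. Scores, threshold and example values are assumptions only.

def should_check(technology: int,      # 3 = novel/unproven, 1 = well established and reliable
                 susceptibility: int,  # 3 = highly susceptible to this particular test
                 probability: int,     # 3 = test condition common in normal use
                 severity: int,        # 3 = serious harm if the function fails
                 threshold: int = 8) -> bool:
    """Return True if the performance parameter should be monitored during this test."""
    return (technology + susceptibility + probability + severity) >= threshold

# Example: SpO2 accuracy during an EMC immunity test (all scores are made up)
print(should_check(technology=2, susceptibility=3, probability=2, severity=3))  # True
# Example: on/off switch function during the same test
print(should_check(technology=1, susceptibility=1, probability=2, severity=3))  # False
```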

While “susceptible performance” may be a more reasonable approach, there are a few complications to consider:  

First is that "susceptible performance" presumes that, in the absence of any particular test condition, general performance has already been established: for example, by a bench test under base conditions such as 23°C and 60% RH, with no special stresses (water ingress, electrical/magnetic, mechanical etc.). Currently there is no general clause in IEC 60601-1 which establishes what could be called "basic performance" prior to starting stress tests like waterproof, defib, EMC and so on. This is an oversight in the standard.

Second is that third party test labs are often involved, and the CB scheme has set rules that test labs need to cover everything. As such there is a reasonable reluctance to consider true performance, for fear of exposing manufacturers to even higher costs and throwing test labs into testing they are not qualified to perform. This needs to be addressed before embedding too much performance in IEC 60601-1. Either we need to get rid of test labs (not a good idea), or structure the standards in a way that allows them to do their job efficiently and focus on the areas where they are qualified and can add value.

Third is that for well established technology (such as diagnostic ECGs, dialysis, infusion pumps) it is in the interests of society to establish standards for performance. As devices become popular, more manufacturers get involved; standardisation helps users be sure of a minimum level of performance and protects against poor quality imitations. This driver can range from very high risk devices through to mundane low risk devices. But the nature of standards is such that it is very difficult to be comprehensive: for example, monitoring ECGs have well established standards with many performance tests, yet common features like ST segment analysis are not covered by IEC 60601-2-27. The danger here is that using defined terms like “essential performance” when a performance standard exists can mislead people into thinking the standard covers the only critical performance.

Finally, IEC 60601-1 has special requirements for PEMS (programmable electrical medical systems) whose applicability can be critically dependent on what is defined as essential performance. These requirements can be seen as special design controls, similar to what would be expected for Class IIb devices in Europe. They are not appropriate for lower risk devices, and again their inclusion under the single umbrella of essential performance creates more confusion.

Taking these into account, it is recommended to revert to the general term "performance", and then consider four sub-types:

Basic performance: performance according to manufacturer specifications, labelling, public claims and risk controls, or which can reasonably be inferred from the intended purpose of the medical device.

Standardised performance: requirements and tests for the performance of well established medical devices, published in the form of a national or international standard.

Susceptible performance: the subset of basic and/or standardised performance to be monitored during a particular test, decided on a test-by-test basis, taking into account the technology, the nature of the test, the severity of harm if a function fails and other factors as appropriate, and documented in the risk management file.

Critical performance: the subset of basic and/or standardised performance which, if it fails, can lead to significant direct or indirect harm with high probability. This includes functions which deliver or extract energy, liquids, radiation or gases to or from the patient in a potentially harmful way; devices which monitor vital signs with the purpose of providing alarms for emergency intervention; and other devices with a similar risk profile (Class IIb devices in Europe can be used as a guide). Aspects of critical performance are subject to additional design controls as specified in Clause 14 of IEC 60601-1.

Standards should then be structured in a way that allows third party laboratories to be involved without necessarily taking responsibility for performance evaluation that is outside the laboratory's competence.