The basic idea behind essential performance is that some things are more important than others. In a world of limited resources, regulations and standards should focus on the important stuff rather than try to cover everything. A device might have literally thousands of discrete “performance” specifications, from headline items such as equipment accuracy through to mundane details like how many items an alarm log can record. And there can be hundreds of tests proving a device meets specifications in both normal and fault conditions: clearly it’s impossible to check every specification during or after every test. We need some kind of filter to say: OK, for this particular test, it’s important to check specifications A, B and F, but not C, D, E and G.
Risk seems like a great foundation on which to decide what is really “essential”.
But it is a very complicated area, and the “essential performance” approach in IEC 60601-1 is doomed to fail because it oversimplifies it to a single rule: "performance ... where loss or degradation beyond the limits ... results in an unacceptable risk".
The first problem is that using acceptable risk as the criterion is, well, misleading. Generally you need a higher threshold when deciding to vary “special controls” based on risk. The reason is that even without the special controls, most things will be OK anyway, so there is a hidden probability factor involved. For example, when allocating different controls for Class I, II and III devices in medical device regulations, regulators don’t use “acceptable risk” as the criterion. It’s a lot more complicated, and various factors need to be considered. In Europe, for example, there are several pages of questions used to decide which classification applies, while other regulators such as those in the US and Japan prefer to decide for each device individually. It's not easy, and certainly not something that can be handled by a single-sentence rule.
Following on from this, the second problem is that what is important is highly context dependent: you can't derive it purely from the function. Technology A might be susceptible to humidity, technology B to mechanical wear, while technology C might be so well established that spot checks are reasonable. Under waterproof testing, function X might be important to check, while under EMC testing function Y is far more susceptible. Simply deriving a list of what is "essential" out of context makes absolutely no sense.
In fact, a better term to use might be "susceptible performance", decided on a test-by-test basis, taking into account:
the technology used (the degree to which it is well established and reliable)
the susceptibility of the technology to the particular test
the probability of the test condition occurring (i.e. normal use or an abnormal condition)
the severity of harm if the function fails
It could be possible to create a rating system for each of the four factors above, and then a table which decides whether a parameter should be checked in a particular test. But in practice, engineers can usually make a reasonable judgement call about which items to check in any particular test. It’s not as simple as saying “loss or degradation beyond the limits ... results in an unacceptable risk”, but it’s not overly complicated either.
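To make the idea concrete, here is a minimal sketch of what such a rating system could look like. The 1–3 scales, the equal weighting and the threshold are all illustrative assumptions, not anything defined in IEC 60601-1; a real scheme would need calibration and justification in the risk management file.

```python
def check_needed(maturity, susceptibility, probability, severity, threshold=9):
    """Decide whether a performance parameter should be checked in a given test.

    Each factor is rated 1 (low concern) to 3 (high concern):
      maturity       - how novel/unproven the technology is
      susceptibility - how sensitive the function is to this particular test
      probability    - how likely the test condition is in real use
      severity       - severity of harm if the function fails

    The scales and threshold are hypothetical, for illustration only.
    """
    for factor in (maturity, susceptibility, probability, severity):
        if not 1 <= factor <= 3:
            raise ValueError("each factor must be rated 1, 2 or 3")
    return (maturity + susceptibility + probability + severity) >= threshold


# Novel humidity-sensitive sensor, normal-use condition, serious harm if it fails:
check_needed(maturity=3, susceptibility=3, probability=2, severity=3)  # → True

# Mature, robust function under an unlikely stress condition:
check_needed(maturity=1, susceptibility=1, probability=1, severity=2)  # → False
```

A simple additive score like this is just one design choice; a lookup table or a rule that any single factor rated 3 forces a check would be equally defensible.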
While “susceptible performance” may be a more reasonable approach, there are a few complications to consider:
First is that "susceptible performance" presumes that, in the absence of any particular test condition, general performance has already been established: for example, by a bench test under baseline conditions such as 23°C, 60% RH, with no special stress (water ingress, electrical/magnetic, mechanical and so on). Currently IEC 60601-1 has no general clause establishing what could be called "basic performance" prior to starting stress tests such as waterproof, defibrillation or EMC testing. This is an oversight in the standard.
Second is that third party test labs are often involved, and the CB scheme has set rules that test labs need to cover everything. As such, there is understandable reluctance to address performance fully, for fear of exposing manufacturers to even higher costs and throwing test labs into testing they are not qualified to perform. This needs to be addressed before embedding too much performance in IEC 60601-1: either we get rid of test labs (not a good idea), or we structure the standards in a way that allows them to do their job efficiently and focus on the areas where they are qualified and can add value.
Third is that for well established technology (such as diagnostic ECGs, dialysis machines and infusion pumps), it is in the interests of society to establish standards for performance. As devices become popular, more manufacturers get involved; standardisation helps users be sure of a minimum level of performance and protects against poor quality imitations. This driver applies to devices ranging from very high risk through to mundane low risk. But the nature of standards is such that it is very difficult to be comprehensive: for example, monitoring ECGs have well established standards with many performance tests, yet common features like ST segment analysis are not covered by IEC 60601-2-27. The danger here is that using a defined term like “essential performance” when a performance standard exists can mislead people into thinking it covers the only critical performance.
Finally, IEC 60601-1 has special requirements for PEMS, the applicability of which can depend critically on what is defined as essential performance. These requirements can be seen as special design controls, similar to what would be expected for Class IIb devices in Europe. They are not appropriate for lower risk devices, and again their inclusion under the single umbrella of essential performance creates more confusion.
Taking these into account, it is recommended to revert to the general term "performance", and then consider four sub-types:
Basic performance: performance according to manufacturer specifications, labelling, public claims or risk controls, or which can be reasonably inferred from the intended purpose of the medical device.
Standardised performance: requirements and tests for the performance of well established medical devices, published in the form of a national or international standard.
Susceptible performance: the subset of basic and/or standardised performance to be monitored during a particular test, decided on a test-by-test basis, taking into account the technology, the nature of the test, the severity of harm if a function fails and other factors as appropriate, and documented in the risk management file.
Critical performance: the subset of basic and/or standardised performance which, if it fails, can lead to significant direct or indirect harm with high probability. This includes functions which provide or extract energy, liquids, radiation or gases to or from the patient in a potentially harmful way; devices which monitor vital signs for the purpose of providing alarms for emergency intervention; and other devices with a similar risk profile (Class IIb devices in Europe can be used as a guide). Aspects of critical performance are subject to additional design controls as specified in Clause 14 of IEC 60601-1.
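The relationships between the four sub-types can be sketched as a small data model. Note that the class names, fields and the example specification below are purely illustrative assumptions: the point is only that one specification can carry several sub-types at once, and that "susceptible" is assigned per test rather than globally.

```python
from dataclasses import dataclass, field
from enum import Flag, auto


class PerformanceType(Flag):
    """Hypothetical tags for the four proposed performance sub-types."""
    BASIC = auto()         # from specifications, labelling, claims, risk controls
    STANDARDISED = auto()  # covered by a national/international standard
    CRITICAL = auto()      # failure can lead to significant harm with high probability


@dataclass
class PerformanceSpec:
    """One performance specification and where it must be monitored.

    `susceptible_in_tests` lists the stress tests during which this
    specification is judged susceptible and therefore checked.
    """
    name: str
    types: PerformanceType
    susceptible_in_tests: list = field(default_factory=list)


# Illustrative example: ECG gain accuracy is basic and standardised
# performance, judged susceptible under EMC immunity testing only.
ecg_gain = PerformanceSpec(
    name="ECG gain accuracy",
    types=PerformanceType.BASIC | PerformanceType.STANDARDISED,
    susceptible_in_tests=["EMC immunity"],
)
```

Modelling the sub-types as combinable flags rather than a single category reflects the argument above: "essential performance" forces one label onto every specification, whereas in practice a specification can be basic and standardised while being susceptible only in certain tests.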
Standards should then be structured in a way that allows third party laboratories to be involved without necessarily taking responsibility for performance evaluation that is outside the laboratory's competence.