IEC 60601-2-24 Infusion pumps - what's the story?

Users of the standard IEC 60601-2-24:2012 (infusion pumps and controllers) might scratch their heads over some of the requirements and details associated with performance tests. To put it bluntly, many things just don’t make sense.

A recent project comparing the first and second edition along with the US version AAMI ID26 offers some possible clues as to what happened.

The AAMI standard, formally ANSI/AAMI ID26, is a modified version of IEC 60601-2-24:1998. It includes some dramatic deviations and arguably makes performance testing significantly more complex. The AAMI standard has since been withdrawn, is no longer available for sale, and is not used by the FDA as a recognized standard. But its influence on the 2012 edition of IEC 60601-2-24 is such that it is worth getting a copy if you can, just to understand what is going on.

It seems the committee working on the new edition of IEC 60601-2-24 was seriously looking at incorporating some of the requirements from the AAMI standard, and got as far as deleting some parts, adding some definitions and then … forgot about it and published the standard anyway. The result is a standard that does not make sense in several areas.

For example, in IEC 60601-2-24:1998 the sample interval for general performance tests is fixed at 0.5 min. In the AAMI edition, this is changed to 1/6 min (10 sec). In IEC 60601-2-24:2012, the sample interval is … blank. It's as if the authors deleted the text "0.5 min", were about to type in "1/6 min", paused to think about it, and then just forgot and went ahead and published the standard anyway. Maybe there was a lot of debate about the AAMI methods, which are indeed problematic, followed by huge pressure to get the standard published so that it could be used with the 3rd edition of IEC 60601-1, so they just hit the print button without realising some of the edits were only half baked. Which we all do, but … gee … this is an international standard used in a regulatory context that costs $300 to buy. Surely the publication process contains some kind of plausibility check?

Whatever the story is, this article looks at key parameters in performance testing, the history, issues and suggests some practical solutions that could be considered in future amendments or editions. For the purpose of this article, the three standards will be called IEC1, IEC2 and AAMI to denote IEC 60601-2-24:1998, IEC 60601-2-24:2012 and ANSI/AAMI ID 26 (R)2013 respectively.

Minimum rates

In IEC1, the minimum rate is defined as the lowest selectable rate, but not less than 1mL/hr. There is a note that ambulatory use pumps should use the lowest selectable rate, essentially saying ignore the 1mL/hr for ambulatory pumps. OK, time to be pedantic: according to IEC rules, "notes" should be used for explanation only, so strictly speaking we should apply the minimum 1mL/hr rate. The requirement should simply have been "the lowest selectable rate, but not less than 1mL/hr for pumps not intended for ambulatory use". It might sound nit-picking, but putting requirements in notes is confusing, and as we will see below, even the standards committee got themselves in a mess trying to deal with this point.

The reason 1mL/hr makes no sense for ambulatory pumps is that they usually have rates that are tiny compared to a typical bedside pump - insulin pumps for example have basal (background) rates as low as 0.00025mL/hr, and even the highest speed bolus rates are less than 1mL/hr. So a minimum rate of 1mL/hr makes absolutely no sense.

The AAMI standard tried to deal with this by deleting the note, creating a new defined term ("lowest selectable rate") and adding a column to Table 102 to include this as one of the rates that can be tested. The slight problem, though, is that they forgot to select this item in Table 102 for ambulatory pumps, the very devices that need it. Instead, ambulatory pumps must still use the "minimum rate". And with the note deleted, ambulatory pumps have to be tested at 1mL/hr - a rate that for insulin (applied continuously) would probably kill an elephant.

While making a mess of the ambulatory side, the tests for other pumps are made unnecessarily complex, as they now require tests at the lowest selectable rate in addition to 1mL/hr. Many bedside pumps are adjustable down to 0.1mL/hr, which is the resolution of the display. However, no one really expects this rate to be accurate; it's like expecting your car's speedometer to be accurate at 1km/hr. It is technically difficult to test, and the graphs it produces are likely to be messy and meaningless to the user. Clearly this was not well thought out, and it appears to be ignored by manufacturers in practice.

IEC2 also tried to tackle the problem. Obviously following AAMI’s lead, they got as far as deleting the note in the minimum rate definition, adding the new defined term (“minimum selectable rate”), … but that’s as far as it got. This new defined term never gets used in the normative part of the standard. The upshot is that ambulatory pumps are again required to kill elephants, with a “minimum rate” of 1mL/hr.

So what is the recommended solution? What probably makes the most sense is to avoid fixing it at any number, and leave it up to the manufacturer to declare a "minimum rate" at which the pump provides reliable, repeatable performance. This rate should of course be declared in the operation manual, and the notes (this time used properly as hints rather than requirements) can suggest a typical minimum rate of 1mL/hr for general purpose bedside pumps, and highlight the need to assess the risks associated with the user selecting a rate lower than the declared minimum rate.

That might sound a bit wishy washy, but the normal design approach is to provide a reliable range of performance that is wider than what is needed in clinical practice. In other words, the declared minimum rate should already be well below what is needed. As long as that principle is followed, the fact that the pump can still be set below the "minimum rate" is not a significant risk. Only if the manufacturer tries to cheat (in order to make nicer graphs) by selecting a "minimum rate" that is higher than clinical needs does the risk start to become significant. And competition should encourage manufacturers to use a minimum rate that is as low as possible.

Maximum rate

In IEC1, there are no performance tests required at the maximum selectable rate. On the face of it this seems a little weird, since the intermediate rate of 25mL/hr is fairly low compared to maximum rates of 1000mL/hr or more, and performance tests are normally done at minimum, intermediate and maximum settings.

AAMI must have thought the same, and introduced the term “maximum selectable rate”, and added another column to Table 102, requiring tests at the maximum rate for normal volumetric and other types of pumps with continuous flow (excluding ambulatory pumps).

Sounds good? Not in practice. The tests in the standard cover three key aspects - start up profile, short to medium term variability, and long term accuracy. Studying all three parameters at all four points (lowest selectable, minimum, intermediate and maximum selectable, i.e. 0.1, 1.0, 25, 999mL/hr) is something of a nightmare to test. Why? Because the range spans four orders of magnitude. You can't cover that range with a single set up; you need at least two and possibly three precision balances with different ranges. A typical 220g balance used for IEC testing doesn't have the precision for tests at 0.1mL/hr or the range for tests at 999mL/hr.

IEC2 again started to follow AAMI, adding the new defined term “maximum selectable rate”. But again, that’s pretty much as far as it got. It was never added to Table 201.102. Fortunately, unlike the other stuff ups, this one is benign in that it does not leave the test engineer confused or force any unrealistic test. It just defines a term that is never used.

The recommended solution? A rate of 25mL/hr seems a good rate to inspect the flow rate start up profile, trumpet curves and long term accuracy. As a pump gets faster, start up delays and transient effects become less of an issue. This suggests that for higher speeds, the only concern is long term flow rate accuracy. This can be assessed using fairly simple tests that use the same precision balance (typically 220g, 0.1mg resolution). For example, an infusion pump with a 1000mL/hr range could be tested at 100, 500 and 1000mL/hr by measuring the time taken to deliver 100mL, or any volume large enough that the measurement errors are negligible. Flow rate accuracy and pump errors can easily be derived from this test, without needing multiple balances.
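The high-rate test described above reduces to a few lines of arithmetic. The sketch below assumes water with a nominal density of 1.000 g/mL and ignores evaporation and buoyancy corrections; the function and variable names are illustrative, not from the standard.

```python
# Sketch: long term flow rate accuracy from a timed gravimetric delivery,
# assuming water at a nominal 1.000 g/mL (a real test would correct for
# temperature, evaporation and buoyancy).

def flow_rate_error(set_rate_ml_per_hr, delivered_mass_g, elapsed_s,
                    density_g_per_ml=1.000):
    """Return the percentage flow rate error from a timed delivery."""
    delivered_ml = delivered_mass_g / density_g_per_ml
    measured_rate = delivered_ml / (elapsed_s / 3600.0)  # mL/hr
    return 100.0 * (measured_rate - set_rate_ml_per_hr) / set_rate_ml_per_hr

# Example: pump set to 1000 mL/hr delivers 100.5 g in 360 s -> +0.5% error
print(round(flow_rate_error(1000, 100.5, 360), 2))
```

The same function covers each of the 100, 500 and 1000mL/hr points with a single balance, as suggested.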

Sample intervals

IEC1 used a sample interval of 30s for general purpose pumps and 15min for ambulatory, both of which seem reasonable in practice.

AAMI changed this to 10s for general purpose pumps. The reason was probably to allow the true profile to be viewed at the maximum selectable rate, such as 999mL/hr. It also allows for a smoother looking trumpet curve (see below).

While that is reasonable, the drawback is that at low flow rates the background "noise", such as the resolution of the balance, appears three times larger - and this noise is unrelated to the infusion pump's performance. For example, a typical 220g balance with 0.1mg resolution has a rounding error "noise" of 1.2% at a rate of 1mL/hr sampled at 30s. If the sample interval is reduced to 10s, that noise becomes 3.6%, which is highly visible in a flow rate graph, is not caused by the pump, and has no clinical relevance.
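The noise figures quoted above are easy to reproduce. The sketch below assumes water at approximately 1 mg per µL, so the balance's 0.1mg resolution corresponds to 0.1µL of apparent volume per sample.

```python
# Rough check of the balance-resolution "noise" figures quoted above,
# assuming water at ~1 mg per uL of delivered volume.

def rounding_noise_percent(rate_ml_per_hr, sample_interval_s,
                           resolution_mg=0.1):
    """Rounding error of one sample as a % of the mass delivered per sample."""
    delivered_mg = rate_ml_per_hr * 1000.0 * sample_interval_s / 3600.0
    return 100.0 * resolution_mg / delivered_mg

print(round(rounding_noise_percent(1.0, 30), 1))  # 1.2
print(round(rounding_noise_percent(1.0, 10), 1))  # 3.6
```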

Needless to say, this is another AAMI deviation that seems to have been ignored in practice.

IEC2 again appears to have started to follow AAMI, and got as far as deleting the reference to a 30s interval (0.5 min). But, that’s as far as they got - the sentence now reads “Set the sample interval S”, which implies that S is somehow defined elsewhere in the standard. Newbie test engineers unaware of the history could spend hours searching the standard to try and find where or how S is determined. A clue that it is still 0.5 min can be found in Equation (6) which defines j and k as 240 and 120, which only makes sense if S = 0.5 min. With the history it is clear that there was an intention to follow AAMI, but cooler heads may have prevailed and they just forgot to re-insert the original 0.5min.

Recommended solution? The 30s interval actually makes a bumpy looking flow graph, so it would be nice to shift to 10s. The “noise” issue can be addressed by using a running average over 30s, calculated for each 10s interval. For example, if we have weight samples W0, W1, W2, W3, W4, W5, W6 covering the first minute at 10s intervals, the flow rate data can be plotted every 10s using data derived from (W3-W0), (W4-W1), (W5-W2), (W6-W3). That produces a smoother looking graph while still averaging over 30s. It can also produce smoother looking trumpet curves as discussed below.
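As a sketch of the scheme just described (the function name and sample values are illustrative only):

```python
# 30 s moving-average flow rate, plotted every 10 s, as described above.
# weights_mg holds cumulative balance readings in mg at 10 s intervals,
# and ~1 mg of water is taken as ~1 uL.

def flow_ml_per_hr(weights_mg, window=3, interval_s=10.0):
    """Flow rate in mL/hr from a 30 s running window, one point per 10 s."""
    rates = []
    for i in range(window, len(weights_mg)):
        delta_ml = (weights_mg[i] - weights_mg[i - window]) / 1000.0  # W3-W0 etc.
        rates.append(delta_ml * 3600.0 / (window * interval_s))
    return rates

# 1 min of samples (W0..W6) from a pump running at about 1 mL/hr
W = [0.0, 2.8, 5.5, 8.3, 11.1, 13.9, 16.7]
print([round(r, 2) for r in flow_ml_per_hr(W)])  # [1.0, 1.0, 1.01, 1.01]
```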

Trumpet curves

For general pumps, IEC1 requires the analysis to be performed at 2, 5, 11, 19 and 31 min (with similar points for ambulatory adjusted for the slower sample rate). That requires some 469 observation windows to be calculated, with 10 graph points.

AAMI puts this on steroids: first it introduces a 10s sample interval, next it expands the range down to 1min, and finally it requires the calculation to be made for all available observation windows. That results in some 47965 window calculations with 180 graph points.

That said, the AAMI approach is not unreasonable: with software it is easy to do. It is possible to adopt a 10s sample interval from which a smooth trumpet curve is created (from 1 to 31 min as suggested). And, it is unclear why the IEC version uses only 2, 5, 11, 19 and 31min. It is possible that some pumps may have problem intervals hidden by the IEC approach.
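The window counts quoted above can be reproduced with a few lines of code, assuming a 60 min analysis period (120 flow samples at 0.5 min intervals, or 360 at 10 s):

```python
# Count the trumpet curve observation windows for a given sample interval.
# A window of duration p minutes spans round(p / interval) flow samples,
# and can start at any position that keeps it inside the analysis period.

def window_count(n_samples, durations_min, interval_min):
    return sum(n_samples - round(p / interval_min) + 1 for p in durations_min)

# IEC1: 0.5 min samples, windows of 2, 5, 11, 19 and 31 min
print(window_count(120, [2, 5, 11, 19, 31], 0.5))          # 469

# AAMI: 10 s samples, every window duration from 1 to 31 min in 10 s steps
durations = [1 + k / 6 for k in range(181)]
print(window_count(360, durations, 1 / 6))                 # 47965
```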

So, somewhat unusually in this case, the recommended solution could be to adopt the AAMI version.

Back pressure

IEC1 and IEC2 require tests at the intermediate rate to be performed using a back pressure of ±100mmHg. This is easily done by shifting the height of the pump above or below the balance (approximately 1.36m of water = 100mmHg).

AAMI decided to change this to +300mmHg / -100mmHg. The +300mmHg makes some sense in that real world back pressures can exceed 100mmHg. However, it makes the test overly complicated: it is likely to involve a mechanical restriction which will vary with time and temperature and hence needs to be monitored and adjusted during tests of up to 2hrs.

Recommended solution: The impact on accuracy is likely to be linearly related to pressure, so for example if there is a -0.5% change at +100mmHg, we can expect a -1.5% change at +300mmHg. The IEC test (±100mmHg) is a simple method that gives a good indication of whether the pump is influenced by back pressure, and allows direct comparison between different pumps. Users can estimate the error at other pressures from this value. Hence, the test at ±100mmHg as used in the IEC standards makes sense.
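As a minimal sketch of that extrapolation (assuming, as argued above, that the accuracy change scales linearly with back pressure):

```python
# Estimate the accuracy change at a higher back pressure from the
# +/-100 mmHg test result, assuming a linear pressure dependence.

def estimated_error(error_at_100mmhg_pct, pressure_mmhg):
    """Scale the measured +/-100 mmHg error linearly to another pressure."""
    return error_at_100mmhg_pct * pressure_mmhg / 100.0

print(estimated_error(-0.5, 300))  # -1.5
```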

Pump types

In IEC1 and AAMI, there are 5 pump types in the definition of an infusion pump. The types (Type 1, Type 2 etc) are then used to determine the required tests.

In IEC2, the pump types have been relegated to a note, and only 4 types are shown. The original Type 4 (combination pumps) was deleted, and profile pumps, formerly Type 5, are now referred to as Type 4. But in any case … it's a note, and notes are not part of the normative standard.

Unfortunately this again seems to be a case of something half implemented. While the definition was updated, the remainder of the standard continues to refer to these types as if they were defined terms, and even more confusingly, refers to Type 1 to Type 5 according to the definitions in the previous standard. A newbie trying to read the standard would be confused by the use of Type 5, which is completely undefined, and by the fact that references to Type 4 pumps may not fit the informal definition. A monumental stuff up.

In practice though, the terms Type 1 to Type 5 were confusing and made the standard hard to read. It seems likely that the committee relegated them to a note with the intention of replacing the normative references with direct terms such as "continuous use", "bolus", "profile" and "combination" and so on. So in practical terms, users might just ignore the references to "Type" and hand write in the appropriate terms.

Drip rate controlled pumps

Drip rate controlled pumps were covered in IEC1, but appear to have been removed in IEC2. But … not completely - there are lingering references which are, well, confusing.

A possible scenario is that the authors were planning to force drip rate controlled pumps to display the rate in mL/hr by using a conversion factor (mL/drip), effectively making them volumetric pumps and allowing the drip rate test to be deleted. The authors then forgot to delete a couple of references to "drop" (an alternative term for drip), but retained the test for tilting drip chambers.

Another possible explanation is that the authors were planning to outlaw drip rate controlled pumps altogether, and just forgot to delete all references. That seems unlikely as parts of the world still seem to use these pumps (Japan for example).

For this one there is no recommended solution - we really need to find out what the committee was thinking.

Statistical variability

Experience from testing syringe pumps indicates there is a huge amount of variability in the test results from one syringe to another, and from one brand to another, due to variations in "stiction": the stop-start motion of the plunger. And even for peristaltic type pumps, the accuracy is likely to be influenced by the tube's inner diameter, which is also expected to vary considerably in regular production. While this has little influence on the start up profile and trumpet curve shape, it will affect the overall long term accuracy.

Nevertheless, the standard only requires the results from a single "type test" to be put in the operation manual, leading to the question - what data should I put in? As it turns out, competitive pressures are such that manufacturers often use the "best of the best" data in the operation manuals, which is totally unrepresentative of the real world. Their argument is that since everybody does it … well … we have to do it too. And usually the variability comes from the administration set or syringe, which the pump designers feel they are not responsible for.

It is exactly the type of situation where we need standards to force manufacturers to do the right thing - to create a level playing field.

All three standards discuss this issue (Appendix AA.4), but it is an odd discussion since it is disconnected from the normative requirements of the standard: it concludes, for example, that a test on one sample is not enough, yet the normative parts require only one sample.

Recommended solution? It is a big subject and probably needs a more nuanced solution depending on the pump mechanism. Nevertheless it is an important subject, in particular for syringe pumps which do suffer from huge variability in practice. And the committee has had more than 20 years to think about it - time enough to find a solution and put it in the normative part of the standard.

Overall conclusion

One might wonder why IEC 60601-2-24:2012 (EN 60601-2-24:2015) is yet to be adopted in Europe. In fact there are quite a few particular standards in the IEC 60601 series which have yet to be adopted in Europe. One possible reason is that the EU regulators are starting to have a close look at these standards and questioning whether they do, indeed, address the essential requirements. Arguably EN 60601-2-24:2015 does not, so perhaps there is a deliberate decision not to harmonise this and other particular standards.

The FDA, mysteriously, has no reference to either the IEC or AAMI versions - mysteriously because this is a huge subject with lots of effort by the FDA to reduce infusion pump incidents, yet references to the standards, either formally or in passing, seem to be non-existent.

Health Canada does list IEC 60601-2-24:2012, but interestingly there is a comment below the listing that "Additional accuracy testing results for flow rates below 1 ml/h may be required depending on the pump's intended use", suggesting that they are aware of shortcomings in the standard.

Ultimately, it may be that the IEC committees are simply ill-equipped or structurally unsound to provide particular standards for medical use. The IEC is arguably set up well for electrical safety, with heavy representation by test laboratories and many, many users of the standards - meaning that gross errors are quickly weeded out. But for particular standards in the medical field, reliance seems to be largely placed on relatively few manufacturers, with national committees unable to provide plausible feedback and error checking. The result is standards like IEC 60601-2-24:2012 - full of errors and questionable decisions with respect to genuine public concerns. It seems we need a different structure - for example designated test agencies with good experience in the field, charged with reviewing and "pre-market" testing of the standard at the DIS and FDIS stages, with the objective of improving the quality of medical particular standards.

Something needs to change!

IEC 60601-2-25 Clause 201.12.4.103 Input Impedance

The input impedance test is fairly simple in concept but can be a challenge in practice. This article explains the concept, briefly reviews the test, gives typical results for ECGs and discusses some testing issues. 

What is input impedance? 

Measurement of voltage generally requires loading of the circuit in some way. This loading is caused by the input impedance of the meter or amplifier making the measurement. 

Modern multimeters typically use a 10MΩ input for dc measurements and 1MΩ for ac measurements. This high impedance usually has a negligible effect, but if the input impedance is similar to the circuit impedance, significant errors can result. For example, in the circuit shown, the real voltage at Va should be exactly 1.000Vdc, but a meter with 10MΩ input impedance will cause the voltage to fall by about 1.2%, due to the circuit resistance of 240kΩ.

Input impedance can be derived (measured) from the indicated voltage if the circuit is known and the resistances are high enough to make a significant difference relative to the noise and resolution of the measurement system. For example in Figure 1, if the supply voltage, Rs and Ra are known, it is possible to work back from the displayed value (0.9881) and calculate an input impedance of 10MΩ.
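Since Figure 1 is not reproduced here, the sketch below assumes a divider of Rs = Ra = 240kΩ fed from a 2V supply, which reproduces the quoted figures (1.000V unloaded, 0.9881 displayed with a 10MΩ meter); the actual figure may differ.

```python
# Working back from a displayed voltage to the meter's input impedance.
# Assumed circuit (Figure 1 not reproduced): Rs = Ra = 240 kOhm divider
# from a 2 V supply, giving Va = 1.000 V unloaded.

RS = RA = 240e3                 # ohms (assumed circuit values)
V_TH = 1.000                    # unloaded (Thevenin) voltage at Va
R_TH = RS * RA / (RS + RA)      # Thevenin source resistance = 120 kOhm

def displayed(r_in):
    """Voltage indicated by a meter with input impedance r_in."""
    return V_TH * r_in / (R_TH + r_in)

def input_impedance(v_displayed):
    """Invert the loaded divider to recover the meter's input impedance."""
    return R_TH * v_displayed / (V_TH - v_displayed)

print(round(displayed(10e6), 4))                 # 0.9881
print(round(input_impedance(0.9881) / 1e6, 1))   # 10.0 (MOhm)
```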

The test

The test for ECGs is now largely harmonised across the IEC and ANSI/AAMI standards, and uses a test impedance of 620kΩ in parallel with 4.7nF. Although the requirement is written with a limit of 2.5MΩ minimum input impedance, the actual test only requires the test engineer to confirm that the voltage has dropped by 20% or less, relative to the value shown without the test impedance.

ECGs are ac based measurements, so the test is usually performed with a sine wave input signal. Input impedance can also change with frequency, so the IEC standards perform the test at two points: 0.67Hz and 40Hz. The test is also performed with ±300mV offsets, and repeated for each lead electrode. That makes a total of 2 frequencies x 4 conditions (reference value + open + 2 offsets) x 9 electrodes = 72 test conditions for a 12 lead ECG.

Typical results

Most ECGs have the following typical results:   

  • no measurable reduction at 0.67Hz
  • mild to significant reduction at 40Hz, sometimes close to the 20% limit
  • not affected by ±300mV dc offset 

Issues, experience with the test set up and measurement

Although up to 72 measurements can be required (for a 12 Lead ECG), in practice it is reasonable to reduce the number of tests on the basis that selected tests are representative. For example, Lead I, Lead III, V1 could be comprehensively tested, while Lead II, V1 and V5 could be covered by spot checks at 40Hz only, without ±300mV.

In patient monitors with optional 3, 5, and 10 lead cables, it is normal to test the 10 lead cable as representative. However, for the 3 lead cable there can be differences in the hardware design that require it to be separately tested (see this MEDTEQ article on 3-Leads for more details).  

The test is heavily affected by noise. This is a result of the CMRR being degraded due to the high series impedance, or more specifically, the high imbalance in impedance. As this CMRR application note shows, CMRR is heavily dependent on the imbalance impedance. 

An imbalance of 620kΩ is 12 times larger than that used in the CMRR test, so the CMRR is degraded in proportion, by the same factor of 12. This means, for example, that a typical set up with 0.1mm (10µV at 10mm/mV) of mains noise would show 1.2mm of noise once the 620kΩ/4.7nF is in circuit.

For the 0.67Hz test, the noise appears as a thick line. It is possible to consider the noise as an artefact and measure the middle of this thick line (that is, ignore the noise). This is a valid approach, especially as at 0.67Hz there is usually no measurable reduction, so even with the increased measurement error from the noise it is a clear "Pass" result.

However, for the 40Hz test there is no line as such, and the noise is of a similar frequency, resulting in beating that obscures the result. And the result is often close to the limit. As such, the following steps are recommended to minimise the noise:

  • Take extra care with the test environment; check grounding connections between the test circuit, the ECG under test, and the ground plate under the test set up
  • During measurement, touch the ground plate (this has often been very effective)
  • If noise still appears, use a ground plate above the test set up as well (experience indicates this works well)
  • Enable the mains frequency filter; this is best done after at least some effort has been made to reduce the noise using one or more of the methods above, to avoid excessive reliance on the filter
  • Increase to a higher printing speed, e.g. 50mm/s

Note that if the filter is used it should be on for the whole test. Since 40Hz is close to 50Hz, many simple filters have a measurable reduction at 40Hz. Since the test is proportional (relative), having the filter on does not affect the result as long as it is enabled for both the reference and actual measurement (i.e. with and without the test impedance). 

IEC 60601-2-47 (AAMI/ANSI EC 57) Databases

This page contains zip files of the ECG databases referred to in IEC 60601-2-47 (also ANSI/AAMI EC 57) and which are offered free by Physionet. The files can also be downloaded individually from the Physionet ATM and via the database description pages as shown below. The zip files contain the header file (*.hea), the data file (*.dat) and the annotation file (*.atr) for each waveform.

The software for the MECG can load these files individually via the main form button "Get ECG source from file" and the subform function "Physionet (*.hea)". The header file (*.hea) and the data file (*.dat) must be unzipped into the same directory for the function to work. The annotation file (*.atr) is not used by the MECG software; it is intended for use as the reference data when analyzing the output results using WFDB functions such as bxb.

The AHA database is not free and must be purchased from ECRI.

Database | File | Size (MB)
MIT-BIH Arrhythmia Database | MITBIH.zip | 63
European ST-T Database | ESC.zip | 283
MIT-BIH Noise Stress Test Database | NST.zip | 19
Creighton University Ventricular Tachyarrhythmia Database | CU.zip | 5

Note: these databases have been downloaded automatically using software developed by MEDTEQ. There are a large number of files and the original download process required a relatively long period of time. If any files are missing or incomplete, please report to MEDTEQ. Note that the zip files may include waveforms which are excluded in the standard (e.g. MIT-BIH waveforms 102, 104, 107, 217 are normally excluded from the tests). 

IEC 60601-2-34 General information

The following information is transferred from the original MEDTEQ website, originally posted around 2009

This article provides some background for the design and test of IBP (Invasive Blood Pressure) monitoring function, as appropriate to assist in an evaluation to IEC 60601-2-34 (2000).

Key subjects include discussion on whether patient monitors can be tested using simulated signals, and how to deal with accuracy and frequency response tests.


Principle of operation

Sensors

IBP sensors are a bridge type, usually adjusted to provide a sensitivity of 5µV/V/mmHg. This means the output changes by 5µV per 1mmHg, for every 1V of supply. Since most sensors are supplied at 5V, they provide a nominal 25µV/mmHg. A step change of 100mmHg, with a 5V supply, would produce an output of 2.5mV (5µV/V/mmHg x 5V x 100mmHg = 2.5mV).
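The sensitivity arithmetic can be written out as a small helper (the 5µV/V/mmHg and 5V figures are the industry-standard values quoted above; the function name is illustrative):

```python
# Bridge output for an IBP sensor: sensitivity x supply x pressure,
# using the industry-standard 5 uV/V/mmHg sensitivity quoted above.

def sensor_output_mv(pressure_mmhg, supply_v=5.0,
                     sensitivity_uv_per_v_per_mmhg=5.0):
    """Sensor output in mV for a given applied pressure."""
    uv = sensitivity_uv_per_v_per_mmhg * supply_v * pressure_mmhg
    return uv / 1000.0

print(sensor_output_mv(100))  # 2.5 (mV), i.e. 25 uV per mmHg at 5 V
```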

The sensors are not perfectly linear, and start to display some significant "error" above 150mmHg. This error is typically around -1% to -2% at the extreme of 300mmHg (i.e. at full scale the output is slightly lower than expected). Some sensors have internal compensation for this error, and in many cases the manufacturers of patient monitors include some software compensation.

The sensors are temperature sensitive, although not greatly compared to the limits in IEC 60601-2-34. Tests indicate that over a temperature range of 15°C to 35°C the "zero drift" is typically less than 0.5mmHg, and gain variation less than 0.5%.

The sensors also exhibit variations of up to 0.3mmHg depending on orientation, so for accurate measurements they should be fixed in a constant orientation.

Sensors comfortably exceed the 10Hz frequency response limit in IEC 60601-2-34. Step response analysis (using a solenoid valve to release the pressure) found a rise time in the order of 1ms, and a frequency response of around 300-400Hz.

Equipment (patient monitor)

Most patient monitors use a nominal 5V supply to the sensor, but it is rarely exactly 5V. This does not influence the accuracy as most monitors use a ratio measurement, for example by making the supply to the sensor also the supply to the ADC (analogue to digital converter). When providing simulated signals (e.g. for performance testing of the monitor) the actual supply voltage should be measured and used for calculating the simulated signal. MEDTEQ's IBP simulator has a function to do this automatically.

The measurement circuit must be carefully designed as 1mmHg is only 25µV. A differential gain of around 300 is usually required to increase the voltage to a range suitable for ADC measurement, as well as a circuit to provide an offset which allows negative pressures to be measured. IBP systems always include a function to "zero" the sensor. This is required to eliminate residual offsets due to (a) the sensor, (b) the measurement electronics and (c) the "head of water" in the fluid circuit. In practice, the head of water dominates this offset, since every 10cm of water represents around 7mmHg. Offsets associated with the sensor and electronics are usually <3mmHg.

Drift in offset and gain can be expected from electronic circuits, but assuming reasonable quality parts are used, the amount is negligible compared to the sensor. For example, between 15-35°C an AD620 differential amplifier (used in MEDTEQ's precision pressure measurement system) was found to have drift of less than 0.1mmHg, and a gain variation of less than 0.05%.

Because of the very low voltages involved, filtering and/or special sampling rates are often used to remove noise, particularly mains frequency noise (50/60Hz). This filtering and sampling is far more likely to impact the 10Hz frequency response requirement than the frequency response of the sensor.

Basics of pressure

The international unit of pressure is the Pascal, commonly seen as kPa or MPa, since 1Pa is a very small pressure. Due to the prior use of mercury columns to measure blood pressure, the use of mmHg (millimeters of mercury) remains common for blood pressure in medicine. Many patient monitors can select either kPa or mmHg indication. The conversion between kPa and mmHg is not as straightforward as it might appear - whenever a liquid column is used to represent pressure (such as for mmHg), accurate conversion requires both temperature and gravity to be known. It turns out that the "mmHg" commonly used in medicine is that at 0°C and "standard gravity".

The use of the 0°C figure rather than room temperature might be the result of convenience: at this temperature the relationship is almost exactly 1kPa = 7.5mmHg, within 0.01% of the precise figure (7.500615613mmHg/kPa). This means easy conversion, for example 40kPa = 300mmHg.

A water column can also be used as a highly accurate calibration source. To know the exact relationship between the height of water and pressure, you only need to know the temperature (to determine the density of water) and the gravity at the site of measurement. After that, it is only a matter of using the simple relationship P = dgh (density x gravity x height), although care is needed with units.

At 25°C, in Japan (Ise) the ratio for pure water to "standard" mmHg is 13.649mmH2O/mmHg, or 136.5cm/100mmHg (contact MEDTEQ for more details on how to calculate this). Literature indicates that the purity of the water is not critical, and normal treated tap water in most modern cities will probably suffice. To be sure, pure or distilled water should be used, but efforts to establish exactly how pure the water is would be overkill.
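A sketch of that calculation, using the defined mmHg (mercury at 0°C, standard gravity) and assumed values for water density at 25°C and local gravity near Ise; for a real calibration the local gravity must be measured or looked up.

```python
# Water column height equivalent to one "standard" mmHg (mercury at 0 C,
# standard gravity), via P = d*g*h. Density and local gravity values below
# are assumptions for illustration.

RHO_HG_0C = 13595.1                        # kg/m^3, mercury at 0 C
G_STD = 9.80665                            # m/s^2, standard gravity
PA_PER_MMHG = RHO_HG_0C * G_STD * 1e-3     # ~133.322 Pa per mmHg

def mm_water_per_mmhg(rho_water, g_local):
    """Height (mm) of a local water column equal to one standard mmHg."""
    return PA_PER_MMHG / (rho_water * g_local * 1e-3)

# Water at 25 C (~997.05 kg/m^3), gravity near Ise, Japan (~9.797 m/s^2)
print(round(mm_water_per_mmhg(997.05, 9.797), 3))  # 13.649
```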

Testing to IEC 60601-2-34

IEC conundrum: System test, or monitor only?

The IEC 60601 series has a major conflict with medical device regulations, in that the standards are written to test the whole system. In contrast, regulation supports the evaluation of each component of a system as a separate medical device. This reflects the practical reality of manufacturing and clinical use - many medical systems are constructed using parts from different manufacturers, where no individual manufacturer takes responsibility for the complete system, and patient safety is maintained through interface specifications.

The IBP function of a patient monitor is a good example of this, with specifications such as a sensitivity of 5µV/V/mmHg being industry standard. In addition, sensors are designed with a high frequency response, and an insulation barrier to the fluid circuit. Together with the standard sensitivity, this allows the sensor to be used with a wide range of patient monitors (and the monitor with a wide range of sensors).

Thus, following the regulatory approach, standards should allow patient monitors and IBP sensors to be tested separately. This would mean sensors are tested with true pressures, while patient monitors are tested with simulated signals, both using the 5µV/V/mmHg interface specification. Accuracy and frequency response limits would be distributed to ensure an overall system specification is always met.

In fact, part of this approach already exists. In the USA, there is a standard dedicated to IBP sensors (ANSI/AAMI BP 22), which has also largely been adopted for use in Japan (JIS T 3323:2008). This standard requires an accuracy of ±1mmHg ±1% of reading up to 50mmHg, and ±3% of reading from 50 to 300mmHg. Among many tests, it also has tests for frequency response (200Hz), defibrillator protection and leakage current.

In principle, a sensor which complies with ANSI/AAMI BP 22 (herein referred to as BP 22) would be compatible with most patient monitors. Unfortunately, IEC has not followed up and the standard IEC 60601-2-34 is written for the system. Nevertheless, we can derive limits for accuracy for the patient monitor by using both standards:

 

Test point (mmHg)   IEC 60601-2-34 limit (mmHg)   BP 22 limit (mmHg)   Effective patient monitor limit (mmHg)
-45                 ±4                            ±1.5                 ±2.5
-30                 ±4                            ±1.3                 ±2.7
0                   ±4                            ±1                   ±3
30                  ±4                            ±1.3                 ±2.7
60                  ±4                            ±1.6                 ±2.4
150                 ±6                            ±4.5                 ±1.5
240                 ±9.6                          ±7.2                 ±2.4
300                 ±12                           ±9                   ±3
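The effective monitor limits follow from subtracting the BP 22 sensor allowance from the IEC 60601-2-34 system limit (the greater of ±4mmHg or ±4% of reading). A sketch of the arithmetic, noting that the table values are rounded to 0.1mmHg:

```python
# Derivation of the "effective patient monitor limit" column.

def iec_system_limit(p_mmhg):
    """IEC 60601-2-34: greater of ±4 mmHg or ±4 % of reading."""
    return max(4.0, 0.04 * abs(p_mmhg))

def bp22_sensor_limit(p_mmhg):
    """BP 22: ±1 mmHg ±1 % of reading up to 50 mmHg, ±3 % above."""
    p = abs(p_mmhg)
    return 1.0 + 0.01 * p if p <= 50 else 0.03 * p

def monitor_limit(p_mmhg):
    """What remains for the patient monitor if the system limit must hold."""
    return iec_system_limit(p_mmhg) - bp22_sensor_limit(p_mmhg)

for p in (0, 150, 240, 300):
    print(p, round(monitor_limit(p), 2))
```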

There are a few minor complications with this approach: the first is that patient monitors usually only display a resolution of 1mmHg. Ideally for the accuracy test, the manufacturer would enable a special mode which displays 0.1mmHg resolution, or the simulator can be adjusted in 0.1mmHg steps to find the change point. The second is that simulated signals should be accurate to around 0.3mmHg, or 0.1% of full scale; this requires special equipment (MEDTEQ's IBP simulator has a compensated DAC output to provide this accuracy). Finally, many monitors compensate for sensor non-linearity, typically reading high around 200-300mmHg. This compensation improves accuracy, but could be close to or exceed the limits in the table above. Since virtually all sensors exhibit negative errors at high pressure, BP 22 should really be adjusted to limit positive errors above 100mmHg (e.g. change from ±3% to +2%/-3%), which in turn would allow patient monitors a greater positive error (+2%, or +6mmHg at 300mmHg) when tested with simulated signals.

Testing by simulation

In principle all of the performance and alarm tests in IEC 60601-2-34 can be performed using a simulator, which can be constructed using a digital function generator and a simple voltage divider to produce voltages in the range of around -1mV to +8mV. For the tests in the standard, a combination of dc offset and sine wave is required. A digital function generator is recommended for ease of setting and adjustment.

As discussed above, the simulator should have an accuracy equivalent to ±0.3mmHg (±0.1% or ±7.5µV), which can be achieved by monitoring the output with a high accuracy digital multimeter. In addition, the output should be adjusted as appropriate for the actual sensor supply voltage; for example, if the sensor supply is 4.962V, the output should be based on 24.81µV/mmHg, not the nominal 25µV/mmHg.
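The supply-voltage scaling is simple enough to sketch, assuming the industry-standard 5µV/V/mmHg sensitivity described earlier:

```python
# Simulator output scaling for the actual sensor supply voltage.

SENSITIVITY = 5.0  # µV per volt of supply per mmHg (industry standard)

def output_uv(pressure_mmhg, supply_v):
    """Simulator output in µV for a given pressure and measured supply."""
    return SENSITIVITY * supply_v * pressure_mmhg

print(round(output_uv(1, 5.0), 2))    # 25.0 µV/mmHg at the nominal 5 V supply
print(round(output_uv(1, 4.962), 2))  # 24.81 µV/mmHg at a measured 4.962 V
```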

MEDTEQ has developed a low cost IBP simulator which is designed specifically for testing to IEC 60601-2-34, and includes useful features such as automated measurement of and adjustment for the supply voltage, as well as real biological waveforms in addition to sine waves for more realistic testing.

Testing by real pressures

MEDTEQ has tested sensors against both IEC 60601-2-34 and ANSI/AAMI BP 22. For static pressures the test set up is only moderately complicated, with the main problem being creating stable pressures. For dynamic pressures, a system has been developed which provides a fast step change in pressure, to allow measurement of the sensor frequency response as described in BP 22 (although technically not using the 2Hz waveform required by the standard, the result is still the same). Test results normally show a response in excess of 300Hz (15% bandwidth).

Manufacturers have indicated that the mechanical 10Hz set up required by IEC 60601-2-34 has severe complications, and practical set ups exhibit non-linearity which affects the test result. Given that sensors have demonstrated frequency response well above 200Hz, it is clear that patient monitors can be tested with a 10Hz simulated signal. Even for systems that can only be tested as a set, the test should be replaced by a step response test, which is simpler and more reproducible.

51.102.1 Sensitivity, repeatability, non-linearity, drift and hysteresis

(to be completed)

 

 

 

IEC 60601-2-27 Update (2005 to 2011 edition)

In 2011, IEC 60601-2-27 was updated to fit with the 3rd edition (IEC 60601-1:2005). Most of the performance tests are the same, but the opportunity has been taken to tweak some tests and correct some of the errors in the old standard. The following table provides an overview of significant changes, based on a document review only. It is expected that more detail on the changes can be provided after practical experience.

Clause: Change compared to IEC 60601-2-27:2005

201.5.4: Resistors in the test networks and defibrillator tester set ups should be ±1% unless otherwise specified (previously ±2%).

Note: the MEDTEQ SECG system uses 0.5% resistors for the 51k and 620k resistors, and the precision divider (100k/100R) uses 0.05% resistors.

201.7.2.4.101: New requirement: both ends of detachable lead wires shall use the same identifiers (e.g. color code).

201.7.9.2.9.101: Instructions for use: significant new requirements and rewording; each item should be re-verified for compliance.

201.11.8.101: Depletion of battery test:

- technical alarm 5min before shutdown
- shutdown in a safe manner

201.12.1.101: Essential performance tests, general: 51kΩ/47nF not required except for the neutral electrode (previously required for each electrode).

Note: the MEDTEQ SECG system includes a relay-switchable 51k/47n impedance, allowing compliance with both editions.

Some test method "errors" have been corrected:

- Accuracy of signal reproduction: the test starts at 100% and reduces to 10%, rather than starting at 10% and increasing to 100%;
- Input dynamic range: the input signal can be adjusted to 80% of full scale, rather than adjusting the sensitivity;
- Multichannel cross talk: the actual test signal connections and leads to be inspected are fully defined.

201.12.1.101.8: Frequency response test: the mains (ac) filter should be off for the test.

201.12.1.101.4: Input noise test (30µVpp): 10 tests of 10s required, at least 9 must pass (previously only one test was required).

201.12.1.101.9: Gain indicator: new test to verify the accuracy of the gain indicator (input a 1mV signal and verify the indication matches the gain indicator).

201.12.1.101.10: CMRR test: must be performed at both 50Hz and 60Hz.

201.12.1.101.12: Pacemaker indication tests: need to be performed with all modes / filter settings.

201.12.1.101.13: Pacemaker rejection (rate accuracy): if pulse rejection is disabled, an indication is required. Pacemaker tests: the test circuit has been defined (Figure 201.114).

Note: this circuit is already implemented in MEDTEQ SECG equipment.

201.12.1.101.14: Synchronizing pulse for cardioversion (<35ms delay to SIP/SOP): test is more detailed (more test conditions).

201.12.1.101.15: Heart rate accuracy: new test @ 0.15mV (70-120ms), and also with a QRS of 1mV/10ms; in both cases no heart rate shall be indicated.

Note: the most recent software for MEDTEQ SECG includes this function.

201.12.4.101.1: Indications on display:

- filter settings
- selected leads
- gain indicator
- sweep speed

201.15.4.4.101: Indication of battery operation and status is required.

208: Alarms:

- greatly modified, needs a full re-check
- IEC 60601-1-8 needs to be applied in full
- distributed alarm systems: on disconnection, a technical alarm should be made at both sides, and audible alarms turned on in the patient monitor

 

 

IEC 60601-2-25 Clause 201.12.4.102.3.2 - Goldberger and Wilson LEADS

In a major change from the previous edition (IEC 60601-2-51:2003), this standard tests the Goldberger and Wilson LEAD network using CAL waveforms only. There are some concerns with the standard which are outlined below: 

  • The standard does not indicate if the tests must be performed by analogue means, or if digital tests are optionally allowed as indicated in other parts of the standard. It makes sense to apply the test in analogue form, as there is no other test in the standard which verifies the basic accuracy of sensitivity for the complete system (analogue and digital).
     
  • The CAL (and ANE) signals are designed in a way that RA is the reference ground (in the simulation data, RA is always zero; in the circuit recommended in IEC 60601-2-51, RA is actually connected to ground). This means that an error on RA cannot be detected by CAL or ANE signals. The previous standard was careful to test all leads individually, including cases where a signal is provided only to RA (other leads are grounded), ensuring errors on any individual lead would be detected. 
     
  • The allowable limit is 10%. This is a relaxation from IEC 60601-2-51, and conflicts with the requirement statement in Clause 201.12.4.102.3.1 and also with the requirements for voltage measurements in Clause 201.12.1.101.2, all of which use 5%. Furthermore, many EMC tests refer to using CAL waveforms with the criteria from Clause 201.12.1.101.2 (5%), not the 10% which comes from this clause.

A limit of 5% makes sense for diagnostic ECGs, is not difficult with modern electronics and historically has not been an issue. There is no explanation of where the 10% comes from; at a guess, the writers may have been trying to separate basic measurement sensitivity (5%) from the network accuracy (5%). In practice, it makes little sense to separate these out as ECGs don't provide access to the raw data from each lead electrode, only the final result which includes both the sensitivity and the network. As such we can only evaluate the complete system based on inputs (lead electrodes LA, LL, RA etc.) and outputs (displayed LEAD I, II, III etc.).

As mentioned above, there is no other test in IEC 60601-2-25 which verifies the basic sensitivity of the ECG. Although sensitivity errors may become apparent in other tests, it makes sense to establish this first as a system, including the weighting network, before proceeding with other tests. While modern ECGs from quality manufacturers, designed specifically for diagnostic work, generally have little problem meeting 5%, experience indicates that lower quality manufacturers and in particular multipurpose devices (e.g. patient monitors with diagnostic functions) can struggle to meet the basic accuracy requirement for sensitivity.

IEC 60601-2-25 Clause 201.12.4.101 Indication of Inoperable ECG

This test is important but has a number of problems in implementation. To understand the issue and solution clearly, the situation is discussed in three stages - the ECG design aspect that the standard is trying to confirm; the problems with the test in the standard; and finally a proposed solution.

The ECG design issue

Virtually all ECGs will apply some opamp gain prior to the high pass filter which removes the dc offset. This gain stage has the possibility to saturate with high dc levels. The point of saturation varies greatly with each manufacturer, but is usually in the range of 350 - 1000mV. At the patient side a high dc offset is usually caused by poor contact at the electrode site, ranging from an electrode that is completely disconnected through to other issues such as an old gel electrode. 

Most ECGs detect when the signal is close to saturation and trigger a "Leads off" or "Check electrodes" message to the operator. Individual detection needs to be applied to each lead electrode, for both positive and negative voltages; this means there are up to 18 different detection points (LA, RA, LL, V1 - V6). Due to component tolerances, the points of detection often vary by around 20mV (e.g. LA points might be +635mV/-620mV, V3 might be +631mV/-617mV).

If the signal is completely saturated it will appear as a flat-line on the ECG display. However, there is a small region where the signal is visible, but distorted (see Figure 1). Good design ensures the detection occurs prior to any saturation. Many ECGs automatically show a flat line once the "Leads Off" message is indicated, to avoid displaying a distorted signal. 

Problems with the standard

The first problem is the use of a large ±5V offset. This is a conflict within the standard, as Clause 201.12.4.105.2 states that ECGs only need to withstand up to ±0.5V without damage. Modern ECGs use ±3V or less for the internal amplifiers, and applying ±5V could unnecessarily damage the ECG.

This concern also applies to the test equipment (Figure 201.106). If care is not taken, the 5V can easily damage the precision 0.1% resistors in the output divider and internal DC offset components.  

Next, the standard specifies that the voltage is applied in 1V steps. This means it is possible to pass the test even though the equipment fails the requirement. For example, an ECG may start to distort at +500mV, flatline by +550mV, but the designer accidentally sets the "Leads Off" signal at +600mV. In the region of 500-550mV this design can display a distorted signal without any indication, and from 550-600mV the operator is left wondering why a flat line appears. If tested with 1V steps these problem regions would not be detected and a Pass result would be recorded.

Finally, the standard allows distortion up to 50% (a 1mV signal compressed to 0.5mV). This is a huge amount of distortion and there is no technical justification for allowing it, given that it is technically simple to ensure a "Leads Off" message appears well before any distortion. The standard should simply keep the same limit as for normal sensitivity (±5%).

Solution 

In practice, it is recommended that a test engineer start at a 300mV offset and search for the point where the message appears, reduce the offset until the message is cleared, and then slowly increase again up to the point where the message appears, while confirming that no visible distortion occurs (see Figure 2). The test should be performed in both positive and negative directions, and on each lead electrode (RA, LA, LL, V1 to V6). The dc offset function in the Whaleteq SECG makes this test easy to perform (test range up to ±1000mV), but the test is also simple enough that an ad-hoc set up is easily prepared.
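The search procedure can be expressed as a simple loop. In the sketch below, `set_dc_offset_mv` and `leads_off_shown` are hypothetical test-rig hooks, stubbed here with a simulated ECG whose message triggers at +623mV; in a real test the engineer (or rig) would also check for visible distortion at each step.

```python
# Sketch of the recommended search procedure (hypothetical rig hooks).

TRIGGER_MV = 623           # simulated "Leads Off" trigger point
state = {"offset_mv": 0}

def set_dc_offset_mv(mv):
    state["offset_mv"] = mv

def leads_off_shown():
    return state["offset_mv"] >= TRIGGER_MV

def find_trigger_point(start_mv=300, coarse_mv=10, fine_mv=1):
    mv = start_mv
    set_dc_offset_mv(mv)
    # 1. coarse search upward until the message appears
    while not leads_off_shown():
        mv += coarse_mv
        set_dc_offset_mv(mv)
    # 2. reduce until the message clears
    while leads_off_shown():
        mv -= fine_mv
        set_dc_offset_mv(mv)
    # 3. slowly increase again up to the message point
    #    (visible distortion would be checked at each step here)
    while not leads_off_shown():
        mv += fine_mv
        set_dc_offset_mv(mv)
    return mv

print(find_trigger_point())  # 623 for the simulated ECG
```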

Due to the high number of tests, it might be tempting to skip some leads on the basis that some are representative. Unfortunately, experience indicates that manufacturers sometimes deliberately or accidentally miss some points on some leads, or set the operating point to the wrong level, such that distortion is visible prior to the message appearing. As such it is recommended that all lead electrodes are checked. Test engineers can opt for a simple OK/NG record, with the operating points on at least one lead kept for reference. Detailed data on the other leads might be kept only if they are significantly different. For example, some ECGs have very different trigger points for chest leads (V1 - V6).

Due to the nature of electronics, any detectable distortion prior to the "Leads Off" message should be treated with concern, since the point of op-amp saturation is variable. For example, one ECG sample may have 10% distortion at +630mV while another sample might have 35% distortion. Since some limit should apply (it is impossible to detect "no distortion"), it is recommended to use a limit of ±5% relative to a reference measurement taken with no dc offset.

The right leg

The right leg is excluded from the above discussion: in reality the right leg is the source of the dc offset voltage - it provides feedback and attempts to cancel both ac and dc offsets; an open lead or poor electrode causes this feedback to drift towards the internal rail voltage (typically 3V in modern systems). This feedback is via a large resistor (typically 1MΩ) so there is no risk of damage (hint to standards committees - if 5V is really required, it should be via a large resistor).

More research is needed on the possibility, effects and test methods for RL. It is likely that high dc offsets impact CMRR, since if the RL drive is pushed close to the rail voltage, it will feed back a distorted signal preventing proper noise cancellation. At this time, Figure 201.106 is not set up to allow investigation of an offset to RL while providing signals to other electrodes, which is necessary to detect distortion. For now, the recommendation is to test RL to confirm that at least an indication is provided, without confirming distortion.

Figure 1: With dc offset, the signal is at first completely unaffected, before a region of progressive distortion is reached finally ending in flat line on the ECG display. Good design ensures the indication to the operator (e.g. "LEADS OFF") appears well before any distortion

Figure 2: The large steps in the standard fail to verify that the indication to the operator appears before any distortion. Values up to 5V can also be destructive for the ECG under test and test equipment. 

Figure 3: Recommended test method: search for the point where the message is displayed, reduce until the message disappears, then slowly increase again checking for no distortion up to the message indication. Repeat for each lead electrode in both + and - directions.

IEC 60601-2-25 Clause 201.8.5.5.1 Defibrillator Protection

General information on defibrillator testing can be found in this 2009 article copied from the original MEDTEQ website.

One of the significant changes triggered by the IEC 60601-2-25 2011 edition is the inclusion of the defibrillator proof energy reduction test via the general standard (for patient monitors, this test already existed via IEC 60601-2-49). Previously, diagnostic ECGs tended to use fairly low impedance contact to the patient, which helps to improve performance aspects such as noise. The impact of the change is that all ECGs will require higher resistors in series with each lead, as detailed in the above article. The higher resistors should trigger retesting for general performance, at least for a spot check.

Experience from real tests has found that with the normal diagnostic filter (0.05Hz to 150Hz), the baseline can take over 10s to return, exceeding the limit in the standard. Although most systems have automated baseline reset (in effect, shorting the capacitor in an analogue high pass filter, or the digital equivalent), the transients that occur after the main defibrillator pulse can make this difficult for the system to know when the baseline is sufficiently stable to perform a reset.  The high voltage capacitor used for the main defibrillator pulse is likely to have a memory effect causing significant and unpredictable baseline drift well after the main pulse. If a reset occurs during this time, the baseline can easily drift off the screen, and due to the long time constant of the 0.05Hz filter, can take 15-20s to recover. 
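The slow recovery is consistent with the time constant of a first-order 0.05Hz high-pass filter; a quick sketch of the numbers:

```python
import math

# Time constant and settling time of a first-order 0.05 Hz high-pass
# filter, as used for the diagnostic ECG bandwidth (0.05 Hz - 150 Hz).

fc = 0.05                           # Hz, high-pass corner frequency
tau = 1 / (2 * math.pi * fc)        # ~3.2 s time constant
settle_1pct = math.log(100) * tau   # ~14.7 s to settle within 1 %

print(f"tau = {tau:.2f} s, 1% settling = {settle_1pct:.1f} s")
```

This puts a baseline recovery in the order of 15s for a single exponential decay, before even considering the capacitor memory effects described above.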

The usual workaround is that most manufacturers declare in the operation manual that during defibrillation special filters should be used (e.g. 0.67Hz). The issue raises the question of why diagnostic ECGs need to have defibrillator protection, and if so, how this is handled in practice. If defibrillator protection is really necessary, sensible solutions may involve the system automatically detecting a major overload and switching to a different filter for a short period (e.g. for 30s). It is after all an emergency situation: expecting the operator to have read, understood and remembered a line from the operation manual, and to have the time and presence of mind to work through a touch screen menu system to enable a different filter setting while at the same time performing defibrillation on the patient, is a bit of a stretch.


IEC 60601-2-25 CSE database - test experience

The standard IEC 60601-2-25:2011 includes tests to verify the accuracy of interval and duration measurements, such as QRS duration or the P-Q interval.

These tests are separated into the artificial waveforms (the CTS database) and the biological waveforms (the CSE database). The CSE database is available on CD-ROM and must be purchased from the INSERM (price $1500, contact Paul Rubel, prubel.lyon@gmail.com)[1].

In principle, the database tests required by IEC 60601-2-25:2011 should be simple: play the waveform (in digital or analogue form), compare the data from the equipment under test to the reference data. In practice, there are some considerations and complications. This document covers some of the issues associated with the CSE database.  

First, it should be confirmed that the equipment under test actually measures and displays global intervals, rather than intervals for a specific lead. As stated in Annex FF.2 of the standard:

“The global duration of P, QRS and T are physiologically defined by the earliest onset in one LEAD and the latest offset in any other LEAD (wave onset and offset do not necessarily appear at the same time in all LEADS because the activation wavefronts propagate differently).”

Global intervals can be significantly different to lead intervals. This becomes evident from the first record of the database (#001), where the reference for the QRS duration is 127ms, while the QRS on LEAD I is visibly around 100ms. The following image, from the original "CSE Multilead ATLAS" analysis for recording #001, shows why: the QRS onset for Lead III (identified with the line and sample number 139) starts much earlier than for Lead I.

If the equipment under test does not display global intervals, it is not required to test using the CSE database to comply with IEC 60601-2-25.

The next aspect to be considered is whether to use waveforms from the MO1 or MA1 series.

The MO1 series is the original recording, and contains 10s with multiple heart beats. Each heart beat is slightly different, and the reference values are taken only for a specific heart beat (generally the 3rd or 4th beat in the recording). The beat used for analysis can be found from the sample number in the file “MRESULT.xls” on the CD-ROM[2]. The MO1 recordings are intended for manufacturers using the software (digital) route for testing their measurement algorithms. Many ECGs perform the analysis by averaging the results from several beats, raising a potential conflict with the standard since the reference values are for a single beat only. It is likely that the beat to beat variations are small and statistically insignificant in the overall analysis, as the limits in the standard are generous. However manufacturers investigating differences in their results and the reference values may want to check other beats in the recording.

The MO1 files can be played in analogue form but there are two disadvantages: one is the difficulty of aligning the equipment under test with the reference beat; the second is looping discontinuities. For example, Record 001 stops in the middle of a T-wave, and Lead V1 has a large baseline drift. If the files are looped there will be a large transient and the potential for two beats to appear together; the ECG under test will struggle to clear the transient events while attempting to analyze the waveforms. If the files are not looped, the ECG under test may still have trouble: many devices take around 5s to adjust to a new recording, by which time the reference beat has already passed.

The MA1 series overcomes these problems by isolating the selected beat in the MO1 recording, slightly modifying the end to avoid transients, and then stitching the beats together to make a continuous recording of 10 beats. The following image superimposes the MA1 series (red) on top of the MO1 series (purple) for the 4th beat on Lead I. The images are identical except for a slight adjustment at the end of the beat to avoid the transient between beats:

The MA1 series is suitable for analogue and digital analysis. Unlike MO1 files which are fixed at 10.0s, the MA1 files contain 10 whole heart beats, so the length of the file varies depending on the heart rate. For example, record #001 has a heart rate around 63bpm, so the file is 9.5s long. Record 053 is faster at 99bpm, so the file is only 6s long. As the file contains whole heart beats, the file can be looped to allow continuous play without limit. There is no need to synchronize the ECG under test, since every beat is the same and the beat is always the reference beat.
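Since each MA1 file holds exactly 10 whole beats, the file length follows directly from the heart rate:

```python
# MA1 file length: 10 whole beats at the record's heart rate.

def ma1_length_s(heart_rate_bpm, beats=10):
    return beats * 60.0 / heart_rate_bpm

print(round(ma1_length_s(63), 1))  # ~9.5 s (record 001, ~63 bpm)
print(round(ma1_length_s(99), 1))  # ~6.1 s (record 053, ~99 bpm)
```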

The only drawback of the MA1 series is the effect of noise, clearly visible in the above original recording. In a real recording, the noise would be different for each beat, and helps to cancel out errors if averaging is used. For manual analysis (human), the noise is less of a concern as we can visually inspect all leads simultaneously, from this we can generally figure out the global onset even in the presence of noise. Software usually looks at each lead individually and can be easily tricked by the noise. This is one reason why ECGs often average over several beats. Such averaging may not be effective for the MA1 series since the noise on every beat is the same.

Finally, it should be noted that the CSE database contains a large volume of information much of which is irrelevant for testing to IEC 60601-2-25. Sorting through this information can be difficult. Some manufacturers and test labs, for example, have been confused by the file “MRESULTS.xls” and attempted to use the reference data directly. 

In fact, the file "MRESULTS.xls" does not contain the final reference values used in IEC tests. They can be calculated from the raw values (by selective averaging), but to avoid errors it is best to use the official data directly.

Most recent versions of the CD-ROM should contain a summary of the official reference values in three files (all files have the same data, just difference file format):

  • IEC Biological ECGs reference values.pdf
  • IEC Biological ECGs reference values.doc
  • CSE_Multilead_Library_Interval_Measurements_Reference_Values.xls

If these files were not provided in the CD-ROM, contact Paul Rubel (prubel.lyon@gmail.com).

[1] The CSE database MA1 series are embedded in the MEDTEQ/Whaleteq MECG software, and can be played directly without purchasing the CD-ROM. However the CD-ROM is required to access the official reference values.

[2] For record #001, the sample range in MRECORD.xls covers two beats (3rd and 4th). The correct beat is the 4th beat, as shown in the original ATLAS records, and corresponds to the selected beat for MA1 use.

IEC 60601-2-25 Overview of CTS, CSE databases

All ECG databases have two discrete aspects: the digital waveform data, and the reference data. The waveform data is presented to the ECG under test, in either analogue or digital form (as allowed by the standard), and the ECG under test interprets the waveform data to create measured data. This measured data is then compared against the reference data to judge how well the ECG performs. These two aspects (waveform data, reference data) need to be considered separately. This article covers the databases used in IEC 60601-2-25.  

CTS Database

The CTS database consists of artificial waveforms used to test automated amplitude and interval measurements. It is important to note that the standard only applies to measurements that the ECG makes: if no measurements are made, no requirement applies; if only the amplitude of the S wave in V2 is measured, or the duration of the QRS of Lead II, that is all that needs to be tested. In the 2011 edition the CTS database is also used for selected performance tests, some of which need to be applied in analogue form.

All the CAL waveforms are identical for Lead I, Lead II, V1 to V6, with Lead III a flatline, aVR inverted and aVL, aVF both half amplitude, as can be predicted from the ECG Leads relationship. The ANE waveforms are more realistic, with all leads having similar but different waveforms. A key point to note with the ANE20000 waveform is the large S amplitude in V2, which usually triggers high ringing in high order mains frequency filters - more on that on another page.  
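The CAL lead pattern can be confirmed from the standard limb lead relationships (Lead III = Lead II - Lead I, with the Goldberger leads derived from Leads I and II); a sketch:

```python
# Derived leads from Lead I and Lead II. With identical signals on
# I and II (as in the CAL waveforms), III is a flatline, aVR is
# inverted and aVL/aVF are half amplitude.

def derived_leads(lead_i, lead_ii):
    return {
        "III": lead_ii - lead_i,
        "aVR": -(lead_i + lead_ii) / 2,
        "aVL": lead_i - lead_ii / 2,
        "aVF": lead_ii - lead_i / 2,
    }

print(derived_leads(1.0, 1.0))
# {'III': 0.0, 'aVR': -1.0, 'aVL': 0.5, 'aVF': 0.5}
```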

The CTS waveform data is somewhat of a mystery. In 2009, MEDTEQ successfully bought the waveforms from Biosigna for €400, but that organisation is now defunct (the current entity bears no relation). The standard states that the waveforms are part of the CSE database and available from INSERM, but this information is incorrect; INSERM is responsible for CSE only. According to Paul Rubel (INSERM), the CTS database was bought by Corscience, but their website contains no reference to the CTS database, nor where or how to buy it.

Adding to the mystery, in the 2005 edition of IEC 60601-2-25 the CTS reference data was mentioned in the normative text but completely missing from the actual appendixes. The 2011 edition finally added the data, but there are notable errors. Most of the errors are easily detected since they don't follow the lead definitions (for example, data for Lead I, II and III is provided, which must follow the relation Lead III = Lead II - Lead I, but some of the data does not).

Almost certainly, the situation is affected by a moderately wide group of individuals associated with the big manufacturers that are "in the know" and informally share both the waveform and reference data with others that are also "in the know" - otherwise it seems odd that the errors and omissions would persist. Those of us outside the group are left guessing. The situation is probably illegal in some countries - standards and regulations are public property and the ability to verify compliance should not involve secret knocks and winks.

The good news is that thanks to Biosigna, MEDTEQ, and now Whaleteq, have the CTS database embedded in the MECG software. And the reference data is now in the standard. This provides at least one path for determining compliance. We are not permitted to release the digital data.

The experience from actual amplitude tests has been good. Most modern ECGs (software and hardware) are fairly good at picking out the amplitudes of the input waveform and reporting these accurately and with high repeatability. Errors can be quickly determined to be either:

  • mistakes in the reference data (which are generally obvious on inspection, and can be double checked against the displayed waveforms in MECG software);
  • due to differences in definitions between the ECG under test and those used in the standard;
  • due to the unrealistic nature of the test waveforms (for example, CAL50000 with a QRS of 10mVpp still retains a P wave of just 150µV); or
  • an actual error in the ECG under test.

For CTS interval measurements, results are mixed. Intervals are much more difficult for the software, as you need to define what is a corner or an edge (by comparison, a peak is a peak; it needs no separate definition). Add a touch of noise and the whole interval measurement gets messy - which is probably why the standard uses statistical analysis (mean, deviation) rather than focusing on any individual measurement. Given the statistical basis, the recommendation here is to do the full analysis first before worrying about any individual results.
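Because compliance rests on the mean and standard deviation over the whole set of measurements, a first pass can be as simple as the following sketch (Python; the deviation values are hypothetical):

```python
import statistics

# Hypothetical interval deviations (measured minus reference, in ms)
errors_ms = [2.1, -1.4, 0.8, 3.0, -0.5, 1.2, -2.2, 0.4, 1.9, -0.9]

mean_err = statistics.mean(errors_ms)   # systematic bias
sd_err = statistics.stdev(errors_ms)    # spread (sample standard deviation)

print(f"mean = {mean_err:.2f} ms, sd = {sd_err:.2f} ms")
```

Only once the mean or deviation is out of limits is it worth drilling into individual waveforms.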

CSE Database

For the CSE database, the standard is correct to refer to INSERM for both the waveform and reference data. The best contact is Paul Rubel (prubel.lyon@gmail.com). Unlike CTS, the CSE database uses waveforms from real people, and real doctors were involved in measuring the reference data. As such it is reasonable to pay the US$1500 which INSERM requests for both the waveforms and the reference data.

The MECG software already includes the CSE database waveforms, output in analogue form, as allowed under the agreement with Biosigna. However, it is still necessary to buy the database from INSERM to access the digital data and reference data.

More information and experience on the CSE database is provided in this 2014 article.

IEC 60601-2-25 Update Guide (2005 to 2011 edition)

For the second edition of this standard, IEC 60601-2-25 and IEC 60601-2-51 were combined and re-published as IEC 60601-2-25:2011 (Edition 2.0).

The standard has of course been updated to fit with IEC 60601-1:2005 (the 3rd edition). Also, similar to IEC 60601-2-27, the opportunity has been taken to correct some of the errors in requirements and test methods for performance tests that existed in the previous edition. However, compared to the update of IEC 60601-2-27, the changes are far more extensive, making it difficult to apply the new standard in a gap-analysis approach. Experience also indicates that historical data for existing equipment is often of limited quality, so it may in any case be an excellent opportunity to do a full re-test against the new standard.

Despite the updated tests, it seems that significant errors still persist, which is to be expected given the number and complexity of the tests.

The table below provides an overview of corrections, changes and problems found to date in the new standard. This table was compiled during real tests against the new standard.

One major change worth noting is that the requirements for ECG interpretation (the old clause 50.102 in IEC 60601-2-51) have been completely removed from the standard. There is no explanation for this; however, the change is of interest for the CB scheme, since it is now possible to objectively test compliance with all performance tests.

Table: List of changes, corrections and problems in IEC 60601-2-25:2011
(compared to IEC 60601-2-25:1993/A1:1999 + IEC 60601-2-51:2003)

Clause Subject Type Content
201.1.1 Scope Change

The scope statement has been reworded, so for unusual cases it should be checked carefully.

There has been a common mistake that IEC 60601-2-25/IEC 60601-2-51 should not be applied to patient monitors, and a similar mistake can also be expected for this edition. However, the correct interpretation has always been that if the patient monitor provides an ECG record intended for diagnostic purposes, then the diagnostic standard should also be applied.

This would then depend on the intended purpose statement (and contraindications) associated with the patient monitor. However, manufacturers of patient monitors with 12 lead ECG options, with measurements of amplitudes, durations and intervals or automated interpretations might find it difficult to justify a claim of not being for diagnostic purpose.  

201.5.4 Component values Change For test circuits, resistors are now required to be ±1% (previously 2%)
201.6.2 Classification New The ECG applied part must now be Type CF (previously there was no restriction).
201.7.4.101 Detachable lead wires Change Detachable lead wires must be marked at both ends (identifier and/or colour)
201.7.9.2.101 Instructions for use Change

Requirements for the operation manual have been substantially modified in the new standard (see standard for details).

Note: it seems that HF surgery got mentioned twice in item 6) and 12), possibly as a result of combining two standards (IEC 60601-2-25 and IEC 60601-2-51)

201.8.8.5 Defibrillator proof tests Change

Due to the size of the clause, it is difficult to fully detect all changes. However, at least the following changes have been found:

  • The test combinations (Table 201.103) now include 12 lead ECGs (i.e. C1 ~ C6 should also be tested)
  • The energy reduction test is now included (previously not required for diagnostic ECGs)
  • The test with ECG electrodes is now removed

The energy reduction test is a major change: many diagnostic ECGs have no series resistors, which helps to limit noise and improve CMRR. To pass the energy reduction test, ECG lead wires should have at least 1kΩ resistors and preferably 10kΩ (as discussed in the technical article on Defibrillator Tests). With this series resistance, the impact of the ECG gel electrodes is reduced, which is perhaps the reason for making the test with electrodes obsolete. The test result anyhow depended on the type of ECG electrodes, which is often outside the control of the manufacturer, making the test somewhat unrepresentative of the real world.

201.12.1.101 Automated interpretation Change Automated interpretation is now removed from the standard. Note that it is still expected to be covered by regulatory requirements, such as Annex X of the MDD.
201.12.1.101.1.2 Automated amplitude measurements Correction

The limits stated in the requirement have now been corrected to match the test method (5% or 40µV).

The reference values for CAL and ANE waveforms have now been included in the standard (Annex HH). The previous edition stated that these values were there, but they were missing.

Problem

In the Annex HH reference data, the polarity of some S segment values is wrong (CAL 20500, aVL, aVF, and V3 for all of the ANE waveforms). There may be other errors that get revealed with time.

201.12.1.101.3 Automated interval measurements (CAL/ANE) Problem

The requirement statement refers to global measurements (with 17 waveforms, up to 119 measurements), however the compliance statement refers to measurements from each lead (for a 12 lead ECG, up to 1428 measurements if all durations/intervals are measured). Not all ECGs provide global measurements, so this really should be clarified.

Because of this it is also unclear about the removal of 4 outliers "for each measurement". If global measurements are used, this would imply that 4 out of the 17 measurements can be removed from the statistical analysis (which seems a lot). However, if lead measurements are used, this implies 4 out of 204 measurements, which is more reasonable. 
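Whichever interpretation applies, the mechanics of discarding the four worst results before computing the statistics can be sketched as follows (Python; the deviation data is hypothetical):

```python
import statistics

def stats_after_outlier_removal(deviations, n_outliers=4):
    """Drop the n_outliers largest absolute deviations, then return
    (mean, sample standard deviation) of what remains."""
    kept = sorted(deviations, key=abs)[:-n_outliers]
    return statistics.mean(kept), statistics.stdev(kept)

# Hypothetical deviations from reference values (ms) for one interval;
# four gross outliers (30, -25, 40, -35) would otherwise dominate
devs = [1.0, -2.0, 0.5, 30.0, -1.5, 2.5, -25.0, 0.0, 1.2, 40.0, -0.8, -35.0]
mean_d, sd_d = stats_after_outlier_removal(devs)
```

Whether the four removals apply per global measurement or per lead measurement changes only the size of the input list, not the mechanics.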

201.12.4 General test circuit Correction / change

The test circuit has now been corrected and harmonized with IEC 60601-2-27:2011, IEC 60601-2-47 and ANSI/AAMI EC13. Previously the 300mV dc offset was placed in parallel with the test signal, which meant the impedance of the dc supply appeared in parallel with the 100Ω resistor and reduced the test signal. The dc offset is now placed in series, where this problem does not occur.

However, it is noted that for one test the 300mV DC offset is still required to be applied "common mode" using the old circuit.

Also, in the old standard the resistance between RL/N and the test circuit was 100Ω, whereas it is now 51kΩ//47nF. A conservative interpretation is that all tests should be repeated with the new circuit, given the significant change (although experience indicates the results don't change).

201.12.4.101 Indication of inoperable ECG Problem The standard indicates that the test should be performed with 1V steps, up to 5V. However, the point of saturation normally occurs well below 1V (experience indicates 400 - 950mV). This means it is possible to pass the test without meeting the requirement. The standard should instead require the dc voltage to be increased in steps of 5 or 10mV, to ensure that the indication of saturation is provided before the signal amplitude starts to reduce.
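The problem can be illustrated with a crude model of a clipping front end; here the ±0.45V clip level, the 1mVpp test signal and the step sizes are assumptions for illustration only:

```python
def output_peak_to_peak(dc_offset_v, signal_pp_v=0.001, clip_v=0.45):
    """Model a front end that clips at +clip_v: return the surviving
    peak-to-peak output of a small signal riding on a DC offset."""
    top = min(dc_offset_v + signal_pp_v / 2, clip_v)
    bottom = min(dc_offset_v - signal_pp_v / 2, clip_v)
    return max(top - bottom, 0.0)

# Coarse 1V steps as per the standard: the first step is already far
# beyond the clip point, so the amplitude collapse is never observed
coarse = [output_peak_to_peak(v) for v in (0.0, 1.0, 2.0)]

# Fine 10mV steps reveal where the signal actually starts to reduce
fine = [(round(i * 0.01, 2), output_peak_to_peak(i * 0.01))
        for i in range(40, 50)]
```

With fine steps, an indication of saturation can be verified to appear before the amplitude starts to fall, which is the point of the requirement.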
201.12.4.102.3.2 Test of network Change The previous test (application of 2mV and 6mV waveforms to various leads) is now replaced with the CAL and ANE waveforms, with a limit of 10%
Problem

The above change raises interesting points. The first is that one might ask why the test is needed, since the CAL and ANE waveforms have already been tested under 201.12.1.101 (automated amplitude measurements). However, Clause 201.12.1.101 can be done by digital analysis, whereas this test is for the full system including the ECG's hardware. Also, not all ECGs measure all amplitudes.

The test therefore requires the ability to generate the CAL and ANE test signals in analogue form (with laboratory 1% accuracy), which many laboratories may not have.

That said, the test does not really seem to test the networks correctly. As in the old standard, the networks are best tested by providing a signal to one lead electrode only, whereas the CAL/ANE waveforms provide the same signal to all lead electrodes simultaneously, except RA which is grounded. Although some analysis is required, it seems clear that at least part of the lead network cannot be tested by the CAL/ANE waveforms.

Finally, one might ask why there is a 10% limit for the test method while the requirement statement says 5%. The reason could be that the basic measurement function is allowed 5%, while the lead networks add another 5%, giving an overall 10% error. This is a clear relaxation of the previous edition, which seems unnecessary given that modern electronics (and software) easily handle both the measurement and the network well below the 5% in the old standard.

201.12.4.103 Input Impedance Correction

The previous version of the standard had an allowable limit of 18% (for the reduction with 620kΩ in series), but Table 113 effectively imposed a 6% limit. The 6% limit could be met at 0.67Hz, but most ECGs failed at 40Hz (the input impedance changes with frequency).

The new standard now corrected this to a limit of 20%, aligned with IEC 60601-2-27.

The requirement to test with a common mode 300mV to RL has been removed.   

201.12.4.104 Required GAIN Change / Problem

The previous standard included a test of a 1mV step to verify the sensitivity (mm/mV), with a limit of 5%. This test and limit have now been removed, which means there is no objective measurement to verify that a 1mV step corresponds to 10mm on the ECG record. This may or may not be deliberate: it opens the possibility that manufacturers may use a gain of "10mm/mV" in a nominal sense only, with the actual printed record being scaled to fit the report or screen. The classic 5mm pink background grid is then also scaled to give the appearance of 10mm/mV, even though true measurement reveals strange values such as 7mm/mV (on a small screen) or 13mm/mV (on a printed record).

Using the definition, "GAIN" is the "ratio of the amplitude of the output signal to the amplitude of the input signal". The text in 201.12.4.104 refers to the amplitude on the ECG record. Putting these together, it seems the literal interpretation is that 1mV input should be 10mm on the ECG record. Also several tests provide the allowable limits in mm (e.g. CMRR test limit is 10mm), if the outputs are scaled this would make little sense.

But in the absence of any criteria (limits), it is all a bit vague. If scaling is allowed, it should be clearly stated and limited to a suitable range, otherwise it can get confusing (e.g. at "10mm/mV", a 1mV indication should be in the range of 8-14mm). The scaling and reference grid should be accurate to within 5%, although in the digital world this may not need testing beyond a spot check to detect software bugs. Finally, all limits in the standard should be converted to mV or µV as appropriate.

201.12.4.105.1 CMRR Change

The test is the same except that the DC offset is now included in the CMRR test box, and the RL/N lead electrode is no longer required to be switched (test is harmonized with IEC 60601-2-27). Previously, the standard indicated that the dc offset is not required, because it has been tested elsewhere. 

201.12.4.105.3 Filter affecting clinical interpretation New

The standard now requires the ECG report to include an "indication" that the clinical interpretation may be affected by filter settings (if applicable). However, there is no clear statement about what constitutes an acceptable "indication". It could mean text such as "Warning: the interpretation might not be valid due to the use of filters"; on the other hand, it could mean just making sure that the filters used are clearly shown on the record, perhaps adjacent to the interpretation (allowing the user to draw their own conclusion).

What makes the issue more confusing is that some manufacturers might apply the filters only to the printed waveforms, while the interpretation is still performed on the unfiltered data (to ensure that the filters don't mess up the interpretation), or worse, some kind of mixed situation (e.g. only the mains hum filter is allowed for interpretation).

201.12.4.105.3 Notch filter effect (on ANE20000) Change

The allowable limit for ringing in the ST segment has now been increased from 25uV to 50uV.

Test experience indicates that the impact of notch filters for the waveform ANE 20000 on Leads I, II and III, aVR, aVL, aVF is minimal. However, the very large S amplitude on Leads V2 (1.93mV) and V3 (1.2mV) can cause a large amount of ringing in the ST segment, which is probably the reason for the change in limit.

It is possible that previous tests have been limited to inspection of Lead II with the assumption that the ANE20000 waveform is the same for all leads (a mistake which the author has made in the past). In fact, the test should be done with a full 12 lead ECG simulator, with each lead inspected one by one. If the notch filter is applied in hardware (in part or full), the test should be done in analogue form.

201.12.4.106 Baseline, general Change Many of the baseline tests have now been removed, such as temperature drift, stability, writing speed and trace width, presumably because in the modern electronic / digital world these are not worth the effort to test. Most ECGs use a high pass filter and digital sampling, which means there is no real possibility for baseline drift.
201.12.4.106.2 Channel crosstalk Change

The previous edition mixed up leads and lead electrodes (for example, putting a signal on RA/R results in a signal on Leads I, II, aVx, and Vx) so the criteria never made any sense. In practice the test needed to be adapted.

Fortunately, this test has now been corrected and updated to give clear direction on where lead electrodes should be connected and also which leads to inspect for crosstalk. The test is the same as in IEC 60601-2-27:2011.

Problem

In step c) of the compliance test, the standard says to inspect Leads I, II and III, but this appears to be a "cut and paste" typographical mistake. The correct lead is only Lead I (Leads II, III will have a large signal not related to crosstalk). Similarly, in step d) this should be only Lead III. Steps e), f) and g) are all correct.

201.12.4.107.1.1 High frequency response Change

For frequency response, previously all tests A to E were applied, in the new standard only tests (A and E) or (A, B, C and D) are required.

Also the limit for test E has been slightly reduced (made stricter) from -12% to -10%.

201.12.4.107.1.2 Low frequency response Change

The allowable slope has been changed from 250µV/s to 300µV/s, perhaps in recognition that a single pole 0.05Hz high pass filter (typically used in many ECGs) could not pass the 250µV/s limit. Theoretical simulation shows that a 0.05Hz single pole filter produces a slope of 286µV/s.

Problem Minor mistake in the standard: the requirement statement does not include the limit for the slope of 300uV/s. This is however included in the compliance statement.
201.12.4.107.2 Linearity and dynamic range Change / problem

The previous test method used a 1mVpp signal, but required the minimum gain. For an ECG with typical minimum gain of 2.5mm/mV, this meant that the test signal was only 2.5mm, which then conflicted with the diagram.

The new standard corrected this, but introduced the slight mistake of saying "10mV" rather than "10mm"; the test only makes sense if 10mm is used.

201.12.4.108.3.1 Time and event markers Change / problem

It appears as if the authors of the standard were getting a bit tired by this stage.

Both editions of the standard fail to provide a test method, and it is not really clear what to do. The compliance statement is effectively "X shall be accurate to within 2% of X", which makes no sense.

In the latest edition, things have got worse, with the reference to test conditions referring to a clause that has no test conditions (201.12.4.107.3).

In practice one would expect the time markers to be accurate to within 2% when compared against either a reference signal (e.g. 1Hz for time markers of 1s) and/or the printed grid.

Of course, all of this has little impact in the digital world, where crystal accuracy is around 50ppm (software bugs notwithstanding).

201.12.4.109 Pacemaker tests Change

The previous pacemaker tests (51.109.1 and 51.109.2) have been combined and extensively reworked:

  • The requirement statement has been changed to include pacing pulses of 2mV to 250mV and durations 0.5 to 2.0ms
  • The test circuit for pacemaker has been defined
  • the point of measurement of amplitude after the pulse is changed from 50ms to 120ms (3mm)
  • the test with the triangle pulse (or CAL ECGs) is removed
  • the test method now includes a calibration step (item e)) to ensure the 2mV pulse is accurate
  • there is now no requirement to declare the impact of filters
  • (big point) the test is now clearly required for all electrodes, tested one by one as per Table 201.108
Problem

Although the requirement statement refers to 0.1ms, there is no test for this. 

Also, the status of filters is not clear. Most ECGs use hardware or software "blanking" when a pacing pulse is detected, leaving a clean ECG signal for further processing including filters. This means that the filter setting has no impact on how the ECG responds. However, some manufacturers don't use this method, allowing the display to be heavily distorted with the pulse, with the distortion varying greatly depending on the filters used. Ideally, the standard should encourage the former approach, but at least if heavy distortion can occur for some filter settings, this should be declared. 

 

IEC 60601-2-2 Dielectric heating

This article has been transferred from the original MEDTEQ website with minor editorial update.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The mechanism of breakdown at high frequency is different to normal mains frequency dielectric strength tests - it can be thermal, rather than basic ripping apart of electrons from atoms. Also, for mains frequency tests, there tends to be a large margin between the test requirement and what the insulation can really handle, meaning that errors in the test method are not always critical. In contrast, the margin for HF insulation can be slight, and the test method can greatly affect the test result.

HF burns, in part due to insulation failure, continue to be a major area of litigation. Of particular concern is the high fatality rate associated with unintentional internal burns which may go unnoticed.

For those involved in designing or testing HF insulation, it is absolutely critical to have a good understanding of the theory behind HF insulation and what causes breakdown. This article looks into the detail of one of those mechanisms: thermal effects. 


Theory

All insulating materials behave like capacitors. With an ac voltage applied, some current will flow. At 230V 50/60Hz, this current is very small, in the order of 20µA between the conductors of a 2m length of mains cable. But at 300-400kHz the current is nearly 10,000 times higher, easily reaching the order of 10mA at 500Vrms for just a short 10cm of cable.
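The scale of the effect is easy to reproduce from I = V · 2πf · C; the ~100pF/m cable capacitance below is an assumed typical figure, so the results are indicative only:

```python
import math

def capacitive_current_a(v_rms, f_hz, c_farad):
    """RMS current through a pure capacitance: I = V * 2*pi*f * C."""
    return v_rms * 2 * math.pi * f_hz * c_farad

C_PER_M = 100e-12  # assumed conductor-to-conductor capacitance, ~100pF/m

# 2m of mains cable at 230V / 50Hz: order of tens of microamps
i_mains = capacitive_current_a(230, 50, 2 * C_PER_M)

# 10cm of similar cable at 500Vrms / 400kHz: order of 10mA
i_hf = capacitive_current_a(500, 400e3, 0.1 * C_PER_M)
```

The roughly four-orders-of-magnitude jump comes almost entirely from the frequency term.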

All insulating materials will heat up due to the ac electric field. This is called dielectric heating or dipole heating. One way to think of this is to consider the heating to be due to the friction of molecules moving in the electric field. Microwave ovens use this property to heat food, and dielectric heating is used also in industrial applications such as welding plastics. These applications make use of high frequency, usually in the MHz or GHz range.

At 50-60Hz the amount of heating is so small it amounts to a fraction of a fraction of a degree. But again at 300-400kHz the amount of heating can be enough to melt the insulation.

The temperature rise caused by dielectric heating can be estimated from:

dT = 2π V² f ε₀εᵣ δ t / (H D d²)      (K or °C)

Although this is a rather complicated looking formula, it is mostly made up of material-specific parameters that can be found with some research (more details are provided below). To get some feel for what this means, let's put it in a table where voltage and thickness are varied, for a frequency of 400kHz, showing two common materials, PVC and Teflon:

Predicted insulation temperature rise @ 400kHz

 

Voltage        PVC insulation thickness (mm)
(Vrms)      1        0.8      0.6      0.4      0.2
               Temperature rise (K)
 200        0.7      1.1      1.9      4.3     17.3
 400        2.8      4.3      7.7     17.3     69.3
 600        6.2      9.7     17.3     39.0    156.0
 800       11.1     17.3     30.8     69.3    277.3
1000       17.3     27.1     48.1    108.3    433.3
1200       25.0     39.0     69.3    156.0    623.9

 

 Table #1: Because of its high dissipation factor (δ = 0.016), PVC can melt at thicknesses commonly found in insulation. 
For broad HF surgical applications, a thickness of at least 0.8mm is recommended 

 

Voltage        Teflon insulation thickness (mm)
(Vrms)      0.5      0.3      0.1      0.05     0.03
               Temperature rise (K)
 200        0.0      0.1      0.5      2.1      5.9
 400        0.1      0.2      2.1      8.5     23.7
 600        0.2      0.5      4.8     19.2     53.4
 800        0.3      0.9      8.5     34.2     94.9
1000        0.5      1.5     13.3     53.4    148.3
1200        0.8      2.1     19.2     76.9    213.5

 

 Table #2: Teflon has a far lower dissipation factor (less than 0.0002), so even 0.1mm is enough for broad HF surgical applications. 
However, because of Teflon's superior qualities and high cost, insulation thickness is often reduced to around the 0.1mm region

For PVC insulation, these predicted values match well with experimental tests, where a small gauge thermocouple was used as the negative electrode and the temperature monitored during and after the test. For insulation with thickness varying between 0.3mm and 0.5mm, temperatures of over 80°C were recorded at voltages of 900Vrms 300kHz, and increasing the voltage to 1100Vrms resulted in complete breakdown.


Practical testing

As the formula indicates, the temperature rise is a function of voltage squared and an inverse function of thickness squared. This means, for example, that if the voltage is doubled or the thickness is halved, the temperature rise quadruples. Even smaller variations of 10-20% can have a big impact on the test result due to the squared relations.

Because insulation thickness varies considerably in normal wiring, it is possible that one sample may pass while another may not. Although IEC 60601-2-2 and IEC 60601-2-18 do not require multiple samples to be tested, good design practice would dictate testing enough samples to provide confidence, which in turn depends on the margin. For example, if the rated voltage is only 400Vrms and the thickness is 0.8±0.2mm, the high margin means the test is only a formality. On the other hand, if the rating is 1200Vrms and the thickness is 0.8±0.2mm, perhaps 10 samples would be reasonable.

Test labs need to take care that the applied voltage is accurate and stable, which is not an easy task. Most testing is performed using HF surgical equipment as the source, however, these often do not have a stable output. Also, the measurement of voltage at HF is an area not well understood. In general, passive HV probes (such as 1000:1 probes) should not be used, since at 400kHz these probes operate in a capacitive region in which calibration is no longer valid (see here for more discussion) and large errors are common. Specially selected active probes or custom made dividers which have been validated at 400kHz (or the frequency of interest) are recommended.   

Perhaps the biggest impact on the test result is heat sinking. The above formula for temperature rise assumes that none of the heat produced can escape. However, the test methods described in IEC 60601-2-2 and IEC 60601-2-18 do not require the test sample to be thermally insulated. This means some or most of the heat will be drawn away by the metal conductors on either side of the insulation, by normal convection cooling if the sample is tested in an open environment, or by the liquid if the sample is immersed in fluid or wrapped in a saline-soaked cloth.

This heat sinking varies greatly with the test set up. The test in IEC 60601-2-2 (wire wrap test) is perhaps the most severe, but even something as simple as the test orientation (horizontal or vertical) is enough to substantially affect the test result.

Because of these three factors (variations in insulation thickness, applied voltage and heatsinking), bench testing of HF insulation should only be relied on as a backup to design calculations. Test labs should ask the manufacturer for the material properties, and then calculate whether the material is thermally stable at the rated voltage and frequency.

The above formula is repeated here, and the following table provides more details on the parameters needed to estimate the temperature rise. The temperature rise should be combined with ambient (maybe 35°C for the human body) and then compared against the insulation's temperature limit.

 

dT = 2π V² f ε₀εᵣ δ t / (H D d²)      (K or °C)

 

 

Symbol  Parameter  Units  Typical value  Notes

V  Test voltage  Vrms  600 - 1200Vrms  Depends on rating and test standard. Note that ratings with high peak or peak-to-peak values may still have moderate rms voltages. Under IEC 60601-2-2, a rating of 6000Vp would require a test with 1200Vrms.
f  Test frequency  Hz  300 - 400kHz  Depends on rating. Monopolar HF surgical equipment is usually less than 400kHz¹
ε₀  Free space permittivity  F/m  8.85 × 10⁻¹²  Constant
εᵣ  Relative permittivity  unitless  ~2  Does not vary much between materials
δ  Dissipation factor  unitless  0.0001 ~ 0.02  Most important factor; varies greatly with material. Use the 1MHz figures (not 1kHz)
t  Test time  s  30s  IEC 60601-2-2 and IEC 60601-2-18 both specify 30s
H  Specific heat  J/gK  0.8 ~ 1  Does not vary much between materials
D  Density  g/cm³  1.4 ~ 2  Does not vary much between materials
d  Insulation thickness  mm  0.1 ~ 1  Based on material specification; use minimum value

 

 ¹ Dielectric heating also occurs in bipolar applications, but due to the significantly lower voltage, the effect is much less significant.
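As a cross-check, the formula is easy to evaluate directly; the sketch below works in SI units, with PVC-like values chosen from within the typical ranges above (εᵣ = 2, δ = 0.016, H = 0.9 J/gK, D = 1.4 g/cm³ are assumptions, so results are indicative only):

```python
import math

EPS0 = 8.85e-12  # permittivity of free space, F/m

def dielectric_temp_rise(v_rms, f_hz, d_m, eps_r=2.0, diss=0.016,
                         t_s=30.0, h_j_kg_k=900.0, rho_kg_m3=1400.0):
    """Adiabatic temperature rise (K) from dielectric heating:
    dT = 2*pi*V^2*f*eps0*eps_r*delta*t / (H*D*d^2), all in SI units."""
    e_field = v_rms / d_m  # field strength, V/m
    power_density = 2 * math.pi * f_hz * EPS0 * eps_r * diss * e_field ** 2
    return power_density * t_s / (h_j_kg_k * rho_kg_m3)

# 1000Vrms at 400kHz across 1mm of PVC-like material: tens of kelvin,
# consistent with the table above
dt_pvc = dielectric_temp_rise(1000, 400e3, 1e-3)

# Halving the thickness (or doubling the voltage) quadruples the rise
assert abs(dielectric_temp_rise(1000, 400e3, 0.5e-3) / dt_pvc - 4) < 1e-9
```

Remember the result is adiabatic: any heatsinking in the real set-up will reduce the measured temperature, which is exactly why calculation rather than bench testing should be the primary evidence.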

IEC 60601-2-2 201.8.8.3 High Frequency Dielectric Strength

Experience indicates there are two main causes of insulation failure at high frequency: thermal and corona. Both are influenced by the higher frequency of the test waveform, and the effects may not be well appreciated by a test engineer more familiar with mains frequency testing.

Thermal

In addition to high voltage, electrosurgery devices also operate at a relatively high frequency of around 400kHz. At this frequency, surprisingly high currents can flow in the insulation - roughly 8,000 times higher than those at mains frequency. For example, a 50 cm length of cable immersed in saline solution tested at 1000Vrms/400kHz can easily have over 200mA flowing through the insulation, creating a sizeable 200VA load.

Although this VA load is predominately reactive or apparent power (var), insulators are not perfect and a portion will appear as real power in watts. This is determined by the dissipation factor (δ) of the material. Some materials like PVC have a high factor (δ around 0.01-0.03), which means that 1-3% of the total VA load will appear as heat. In the example above, if the test sample were PVC insulation with δ = 0.02, the test would create 4W of heat (200VA × 0.02). That heat can be enough to melt the insulation, causing dielectric breakdown.

The theory is discussed in more detail in this article from the original MEDTEQ website. As the article indicates, key factors are:

  • the rms test voltage, with heat is proportional to Vrms squared

  • thickness of the insulation, with heat inversely proportional to thickness squared

  • material dissipation factor

  • the test set up (availability of heatsinks such as liquids, electrodes)

While it is obvious the authors of IEC 60601-2-2 are aware of thermal effects, the test in the standard seems poorly designed considering these factors. First, the standard concentrates on peak voltage and has fairly weak control of the rms voltage. Secondly, thickness is not expected to be constant, so testing multiple samples may make sense, particularly for thin insulation. And thirdly, the heatsink effect of the set-up should be carefully considered.

In critical situations, good manufacturers will opt for low dissipation factor materials such as Teflon, which has δ ~ 0.001 or 0.1%. This ensures safety in spite of the weakness in the standard. Even so, thin insulation in the order of 100µm can still get hot - keeping in mind heat density is an inverse function of thickness squared (1/d²), which means that half the thickness is four times as hot.

Conversely, very thick insulation can be fine even with high dissipation factors. Thick PVC insulation on a wiring to an active electrode can often be an acceptable solution, and concerns over the test set up need not be considered. 

A test developed by MEDTEQ is to enclose the sample in 50µm copper foil (commonly available) with a thermocouple embedded in the foil and connected to a battery operated digital thermometer. The foil is connected to the negative electrode (prevents any damage to the thermometer), with the active electrode connected to the high voltage.  During the test the thermometer may not read accurately due to the high frequency noise. However, immediately after the test voltage is removed, the thermocouple will indicate if any significant heating occurred. For 300V PVC insulation tested at 1000Vrms/400kHz, this test routinely creates temperatures in the order of 80-100°C, demonstrating that the material is not suitable at high frequency. A similar test can be done by coating the foil in black material, and monitoring with an IR camera.

This test has been useful in discriminating between good and bad materials at high frequency. It is a common rookie mistake for those seeking to break into the active electrode market to reach for materials with high dielectric strength but also high dissipation factors.

The potential for high temperatures also raises a separate point overlooked by the standard: the potential to burn the patient. It may be that real world situations may mitigate the risk, but it would seem theoretically possible that insulation could pass the test while still reaching temperatures far above those which could burn the patient. This again supports a leaning towards low dissipation materials, verified to have low temperatures at high frequency.  

Corona 

Corona is caused when the local electric field exceeds that necessary to break the oxygen bonds in air, around 3kV/mm. In most tests the electric field is not evenly distributed, and there will be areas of much higher fields, typically close to one or both electrodes. For example, electrodes with 2kV separated by 3mm have an average field of just 666V/mm, well below that needed to cause corona. However, if one of the electrodes is a sharp point, the voltage drop occurs mostly around that electrode, causing gradients around the tip above 3kV/mm. This creates a visible corona around that electrode, typically in the form of a purple glow, with ozone as a by-product.
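The arithmetic in this example is trivial but worth keeping to hand when reviewing electrode geometries (sketch; the 3kV/mm onset is the nominal figure quoted above, and the 0.5mm field concentration near a sharp tip is illustrative only):

```python
CORONA_ONSET_V_PER_MM = 3000.0  # nominal breakdown field of air (~3kV/mm)

def average_field_v_per_mm(voltage_v, gap_mm):
    """Average electric field across a gap."""
    return voltage_v / gap_mm

# 2kV across 3mm: the average field (~667V/mm) is well below the onset
assert average_field_v_per_mm(2000, 3) < CORONA_ONSET_V_PER_MM

# But if a sharp tip concentrates most of the drop into the last 0.5mm,
# the local field (~4kV/mm) exceeds the onset and corona can appear
assert average_field_v_per_mm(2000, 0.5) > CORONA_ONSET_V_PER_MM
```

The average field is therefore a poor predictor on its own; geometry dominates.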

In the context of dielectric strength, corona is not normally considered a failure, and standards including IEC 60601-2-2 indicate that it can be ignored. This makes sense for normal electrical safety testing. Firstly, corona is not a full breakdown: it is a local effect, and an arc bridging both electrodes rarely occurs. Secondly, dielectric strength is intended to test solid insulation, not the air. In particular, most dielectric strength tests use voltages many times the rated voltage (e.g. 4000Vrms for 2MOPP @ 240V in IEC 60601-1). This high ratio between the test voltage and the rating is not a safety factor but an ageing test for solid insulation. In that context, corona is genuinely a side effect, not something representative of effects that can be expected at rated voltage.

Unfortunately this is not true for high frequency insulation. Firstly, the test is not an ageing test, which means the test voltage is much closer to the rating, only around 20% above it. That means corona could also occur at real voltages used in clinical situations. Secondly, corona can damage the surface of the insulation, literally burning the top layer. In many applications, such as catheters or endoscopes, the active electrode insulation needs to be thin. If, for example, the insulation is only 100µm thick and corona burns off 50µm, the dielectric strength or thermal limits of the remaining material can be exceeded, leading to complete breakdown. Analysis and experience also indicate that the onset of corona for thin insulation is quite low. Finally, references in the literature and anecdotal evidence suggest that the onset of corona is lower at high frequency, by as much as 30% (i.e. ~2kV/mm).
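A back-of-envelope comparison, using the figures from the text plus an assumed 2kV peak working voltage, illustrates both points: the much lower test/rating ratio at high frequency, and how corona erosion of thin insulation raises the stress on what remains:

```python
# Ratio of test voltage to working voltage: LF ageing-type test
# (IEC 60601-1 example from the text) vs HF test (~20% above rating).
lf_ratio = 4000 / 240
hf_ratio = 1.2
print(f"LF test/rating: {lf_ratio:.1f}x, HF test/rating: {hf_ratio:.1f}x")

# Effect of corona eroding thin insulation (illustrative numbers only:
# assumed 2 kV peak working voltage, 100 um wall reduced to 50 um).
v_peak = 2000
for thickness_um in (100, 50):
    field = v_peak / (thickness_um / 1000)  # stress in V/mm across the layer
    print(f"{thickness_um} um -> {field:,.0f} V/mm")
```

Halving the remaining thickness doubles the electrical stress on the surviving material, which is why corona erosion of thin walls so often ends in complete breakdown.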

From experience, corona is the most common cause of breakdown in actual tests of thin insulation. But the results are not consistent: one sample may survive 30s of corona while another breaks down. The onset of corona also depends on external factors such as temperature, humidity and the shape of the electrodes. In tests with saline-soaked cloth, corona occurs around the edges and boils off the liquid, leading to variable results.

Practical experience suggests that the wire wrap test is the most repeatable. It creates corona at a fairly consistent voltage, with consistent visible damage to the surface of the insulation. Breakdown remains variable, suggesting that a number of samples is required (e.g. 10 samples). Temperature and humidity should be controlled.

Currently IEC 60601-2-2 states that corona can be ignored, allows tests on a single sample, and uses the wire wrap test in certain situations only. In the interests of safety, manufacturers are recommended to consider using the wire wrap test for all active insulation, to test multiple samples, or to take regular samples from production.