I want to understand how medical device companies are currently verifying the software that runs on their devices.
Yes, I know what the standards and guidance documents say. They give a bunch of good ideas and guidelines.
And then they leave it up to the organization to decide.
Here’s an example list of verification techniques from TIR80002-1 “Guidance on the application of ISO 14971 to medical device software”:
There are many methods available to facilitate assurance that RISK CONTROL methods are likely to perform as intended, some more resource-intensive than others. No single method is sufficient. Some of these methods are identified below in Table B.3.
Table B.3 – Methods to facilitate assurance that RISK CONTROL methods are likely to perform as intended
- Design reviews
- Sneak circuit analysis
- Functional testing
- Timing and memory tests
- Boundary value analysis
- Performance testing
- Stress testing
- Statistical testing
- Error guessing
- Thread-based testing
- Use-based testing
- Cluster testing
- Environmental modeling
- Timing simulation
- Use case/user workflows
Testing should account for a variety of types of tests (e.g. stress, boundary, timing, power failure, fault, SOUP failure, etc.) to assure SAFETY-RELATED SOFTWARE is tested under an adequate range of conditions rather than focusing exclusively on requirement-based testing.
This is great and all, but which of these techniques should you use? How much time and effort does each one require? Are there easier ways to achieve the same results?
I want to understand which development and verification techniques give medical device companies the highest ROI in ensuring their software is of sufficiently high quality.
There’s plenty of material out there on what you could do.
I want to help you figure out what you should do.

That starts with understanding what you actually do.
- How many companies actually do unit testing for Class II devices? What's the typical code coverage?
- How many companies perform static analysis? Have automated builds?
- Does anyone run their embedded software on emulators? Do they spend time writing simulators to mimic the behavior of real hardware? How do they do error injection?
I’ll be performing a research project over the next few months to explore this question, and you’ll be hearing about it quite a bit.
Be safe out there, and happy developing.