Ask some people the first thing they need to develop software, and they will answer: “Requirements.”
It makes sense, and it’s true. How can you build something if you don’t know what to build? Writing complete, testable requirements is tricky, and extremely important.
Even more fundamental than requirements, though, is risk.
Thinking about the many medical device software projects I’ve worked on over the years, I realize that most people working on those projects did not worry enough about risk. They got their requirements from someone else, and went about writing software to meet those requirements.
When you read through IEC 62304, the Software Development Process is Clause 5, and it runs nine pages. Software Risk Management is Clause 7, and it’s only two pages long. It seems like an afterthought.
I can tell you from experience that performing a risk analysis on a medical device can be one of the most tedious tasks facing an engineer. It’s a highly technical activity, so it requires full concentration. It can take weeks or even months, and all the while you’re not writing code or designing boards or doing any of the fun things you went into engineering to do.
Because risk analysis isn’t fun, engineers shy away from doing it. It’s often left to the few individuals who happen to have enough system knowledge to accomplish it, and who may have unfortunately been pegged as “that person who takes care of the regulatory stuff.”
It’s too bad. Risk analysis is core to what we do. It’s fundamental to working in an industry where mistakes can hurt and kill people.
If you develop software for a consumer electronic device in a well-run company, you may find many of the same activities that are required by the FDA. There will be an internal wiki with requirements and architecture definitions. There will be source control and a robust configuration management system. There may be an army of people performing comprehensive subsystem and system-level testing.
What you won’t find is a detailed analysis of all the ways that device could possibly cause harm, and methodical mitigation of every one of those risks.
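To make that concrete, here is a toy sketch of what “methodical” means: enumerate hazards, score each by severity and probability, and refuse to ship anything above an acceptability threshold without a documented mitigation. The hazards, scales, and threshold below are invented for illustration; a real program follows a standard such as ISO 14971, not this sketch.

```python
# Illustrative only: hazard list, scoring scales, and threshold are
# made up for this example -- not taken from any standard or project.
from dataclasses import dataclass


@dataclass
class Hazard:
    description: str
    severity: int      # 1 (negligible) .. 5 (catastrophic)
    probability: int   # 1 (improbable) .. 5 (frequent)
    mitigation: str = ""

    @property
    def risk(self) -> int:
        # Simple severity x probability score, common in risk matrices.
        return self.severity * self.probability


ACCEPTABLE_RISK = 6  # hypothetical threshold for this sketch

hazards = [
    Hazard("Pump delivers wrong dose", severity=5, probability=2),
    Hazard("Display backlight fails", severity=1, probability=3),
]

# Every hazard above the threshold must carry a mitigation.
unmitigated = [h for h in hazards
               if h.risk > ACCEPTABLE_RISK and not h.mitigation]
for h in unmitigated:
    print(f"NEEDS MITIGATION: {h.description} (risk {h.risk})")
```

The point is not the arithmetic, which is trivial, but the discipline: nothing leaves the list until its risk is either acceptable or mitigated.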
All devices, medical or consumer, need to fulfill their basic functions.
All devices, medical or consumer, need to be usable, hopefully even easy to use. There are examples from both industries of devices that do this well, and devices that do not.
Methodically finding, evaluating, and mitigating risks is the only thing that makes developing software that is safety-critical different from developing software that is not.
It deserves your attention, and your focus.