I spoke with a colleague recently about how to effectively find different kinds of bugs that occur in embedded software development.
The most fundamental way to classify software bugs is by whether:
- the code does not match the specification, or
- the specification is inadequate or incorrect
Verification and Validation
The terms “verification” and “validation” are often used interchangeably, but they refer to exactly this distinction.
Verification is where you make sure that the code (or some other part of the device) conforms to the specification.
Validation is where you make sure that the device as a whole (which is now verified to meet the spec) actually solves the user need it was originally developed for. If there is a problem at this stage, either your verification wasn’t adequate (you missed some way in which the device does not meet spec), or the spec itself was inadequate or incorrect.
There are classically two ways to find errors in the specification:
- careful review of the spec during the design input phase
- system-level testing during the validation phase
In practice, I’ve seen that errors in the specification are found throughout the development process. The act of writing code forces diligent software engineers to review the requirements carefully, and they often find mistakes and omissions. The requirements document is updated, life goes on.
More worrisome is how to find errors in the software itself, particularly bugs that are rare or difficult to reproduce.
There are several different classifications of software bugs that I find useful to consider:
- business logic bugs
- concurrency bugs (race conditions, priority inversion, deadlocks, starvation)
- system interaction bugs (explosion of possible system states means certain state combinations unhandled)
- hardware interaction bugs (e.g., timing violations, missing `volatile` qualifiers, peripheral errata)
Next time, I’ll talk about what kinds of verification (particularly testing) are best suited to catch each one.