I recently attended the NSA High Confidence Software & Systems (HCSS) Conference and noticed that many tool vendors and researchers working on static and dynamic analysis were using a new term, ‘sound analysis’, meaning ‘no false negatives’. In other words, a ‘sound’ analysis won’t miss any of the types of flaws it is looking for, though it might report some false positives.
Bill Scherlis, one of the speakers, defined this as: “In a sound analysis, as distinct from heuristic analysis, we do not produce false negatives. If there is a defect of a particular variety, our sound analysis will find it. I’ll note that a sound analysis may have false positives. The mathematics generally preclude the possibility of having it both ways. But in practice we don’t get many false positives. But the main point is to avoid false negatives, to not miss a diagnosis. We may occasionally over-diagnose but we will never miss a diagnosis.”
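Scherlis’s distinction can be illustrated with a toy sketch. The checkers and the program representation below are entirely hypothetical (not any vendor’s tool or algorithm): a “sound” checker flags every array access it cannot prove safe, so it never misses a real overflow but may over-diagnose, while a heuristic checker flags only accesses it can prove unsafe, so it never over-diagnoses but can miss real flaws.

```python
# Hypothetical illustration of 'sound' vs. heuristic analysis.
# A "program" is a list of (array_size, index) accesses, where index is
# either a concrete int or the string "unknown" (e.g., from user input).

def sound_check(accesses):
    """Flag every access we cannot PROVE safe: no false negatives,
    but possible false positives on 'unknown' indices that happen to be safe."""
    return [i for i, (size, idx) in enumerate(accesses)
            if not (isinstance(idx, int) and 0 <= idx < size)]

def heuristic_check(accesses):
    """Flag only accesses we can prove unsafe: no false positives,
    but 'unknown' indices pass silently -- possible false negatives."""
    return [i for i, (size, idx) in enumerate(accesses)
            if isinstance(idx, int) and not (0 <= idx < size)]

program = [
    (10, 3),          # access 0: provably safe
    (10, 12),         # access 1: provably out of bounds
    (10, "unknown"),  # access 2: depends on runtime input
]

# sound_check(program) flags accesses 1 and 2 (over-diagnosing access 2);
# heuristic_check(program) flags only access 1, and would miss a real
# overflow if access 2 turned out to be unsafe at runtime.
```

The asymmetry in the last two comments is the whole point of the quote: the sound analysis may occasionally over-diagnose, but it never misses a diagnosis.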
I like this idea, but until now I had never heard any vendor in our space claim sound analysis for anything they look for. At the conference, however, I did see one vendor, Kestrel Technology, that claimed to provide sound analysis for buffer overflows. They define soundness here: http://www.kestreltechnology.com/about/sound.php
So, if you hear of any code-scanning vendor claiming to provide ‘sound’ analysis, I’d be interested in hearing about it. email@example.com
- Dave Wichers
PS: Because Kestrel does ‘sound’ analysis, they are able to report the kinds of positive information I would love to see from a tool. In my presentation, I had the following security facts label (updated from Jeff’s original idea of 5+ years ago). Imagine tools in our space reporting what I have listed in gray … Wouldn’t that be nice :-)
PPS: I don’t think this is an attack ‘against’ the tools. I think there are two points here:
1) Sound analysis vs. the best we can do with the current state of the art (and even sound analysis can improve by producing fewer false positives).
2) And separately, tools reporting what they have found that’s good, rather than just what is bad. However, if tools aren’t doing ‘sound’ analysis, they will be reluctant to report goodness, since they can’t find/report everything that’s relevant (and are thus unsound) :)