Scaling symbolic evaluation for automated verification of systems code with Serval
Scaling symbolic evaluation for automated verification of systems code with Serval, Nelson et al., SOSP’19
Serval is a framework for developing automated verifiers of systems software. It makes for an interesting contrast with the approach Google took with Snap that we looked at last time out. I’m sure that Google engineers do indeed take extreme care when building the low-level networking code that powers Google’s datacenters, but their fundamental design point was to enable frequent releases for fast iteration, feedback on their designs, and yes, early detection of problems.
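Serval builds its verifiers on Rosette, a solver-aided dialect of Racket, but the core idea of verification by symbolic evaluation is easy to convey in miniature. The hypothetical Python sketch below (none of it is Serval’s API; the toy instruction set, `run`, and `abs_program` are invented for illustration) runs an interpreter for a tiny straight-line machine over a symbolic 32-bit register, then asks the Z3 solver to check the resulting expression against a specification for all 2^32 inputs at once:

```python
# Toy illustration of verification via symbolic evaluation, using z3-py.
# (Serval itself is written in Rosette/Racket; nothing here is its real API.)
from z3 import BitVec, If, prove

WIDTH = 32

def run(program, regs):
    """Interpret straight-line code; registers may hold symbolic values."""
    for op, dst, a, b in program:
        if op == "xor":
            regs[dst] = regs[a] ^ regs[b]
        elif op == "sub":
            regs[dst] = regs[a] - regs[b]
        elif op == "sar":  # arithmetic shift right by the constant b
            regs[dst] = regs[a] >> b
        else:
            raise ValueError(f"unknown op: {op}")
    return regs

# Branch-free absolute value, as a compiler might emit it:
#   t = x >> 31; r = (x ^ t) - t
abs_program = [
    ("sar", "t", "x", WIDTH - 1),
    ("xor", "r", "x", "t"),
    ("sub", "r", "r", "t"),
]

x = BitVec("x", WIDTH)            # a symbolic input register
state = run(abs_program, {"x": x})
spec = If(x >= 0, x, -x)          # the specification: signed absolute value
prove(state["r"] == spec)         # Z3 reports "proved": holds for all inputs
```

Serval does this at the scale of real instruction sets (the paper builds verifiers for RISC-V and x86-32, among others), which is exactly where the “scaling symbolic evaluation” of the title starts to bite.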
Formal verification is at the other end of the spectrum. In theory it enables you to eliminate whole classes of problems and vulnerabilities entirely (in practice perfection is still hard to come by), and so it can be especially valuable in security-sensitive situations. But it comes with a very high price tag:
Writing proofs requires a time investment that is usually measured in person-years, and the size of the proofs can be several times, or even more than an order of magnitude, larger than that of the implementation code.
That’s both very expensive and an incredibly long wait for feedback. To invest in formally modelling something, you really…
