Undefined Behavior and CERT’s Vulnerability Note

There were a lot of interesting comments on last week’s post about Apple’s secure coding guide, and I plan to follow up on those in future posts. First, though, I wanted to make a comment of my own on the vulnerability note from cert.org that was referenced both by Apple’s document and by my post.

CERT’s vulnerability note

The vulnerability note’s overview states:

Some C compilers [and C++ compilers – JK] optimize away pointer arithmetic overflow tests that depend on undefined behavior without providing a diagnostic (a warning). Applications containing these tests may be vulnerable to buffer overflows if compiled with these compilers.

Undefined behavior

When your code exhibits undefined behavior, the compiler is not constrained by the language standard and any behavior at all is acceptable (to the standard). That is why we say the behavior is “undefined.” Insert joke on nasal daemons here.

Saying that the compiler is free to generate any code it wants is an obvious way of phrasing this, but it is really looking at it the wrong way around. Looked at from the compiler writer’s perspective a better way of phrasing it would be:

The compiler is free to assume that undefined behavior never happens so no code needs to be generated to handle such cases and no code needs to be generated to test for such cases.

The scary thing is that if you are writing code that actually does encounter undefined behavior, it is extremely unlikely that you’ll be happy with the outcome that results from these optimizations.

So the take-away is that we shouldn’t write code that has undefined behavior. But this isn’t news. That has been standard advice since the beginning of time.  (Which, according to Unix, was 1970-01-01.)

What is new is that modern compilers are increasingly exploiting the freedom granted them in undefined behavior cases: they have become more aggressive about identifying those cases and optimizing out code that would only matter for them.


Note that CERT’s vulnerability note overview points out that a diagnostic isn’t required. This is true, but naive readers might be tempted to think that requiring such a diagnostic would be a good idea. It would not.

Consider a function with a precondition that a pointer parameter be non-null because the pointer will be dereferenced in the function. Do we want the compiler to warn us that it is optimizing out code for the null pointer case? The compiler has no way to distinguish optimizations based on undefined behavior we know about (and are careful to prevent) from optimizations based on undefined behavior that would surprise us.

Requiring the compiler to warn for every undefined-behavior-based optimization would result in an avalanche of false positives, and users would end up silencing all such warnings.

Some Undefined Behaviors are More Equal Than Others

So the problem is that coders are writing code with undefined behavior and they need to fix that, right? Well, not according to CERT:

Application developers and vendors of large codebases that cannot be audited for use of the defective wrapping checks are urged to avoid using compiler implementations that perform the offending optimization. Vendors and developers should carefully evaluate the conditions under which their compiler may perform the offending optimization. In some cases, downgrading the version of the compiler in use or sticking with versions of the compiler that do not perform the offending optimization may mitigate resulting vulnerabilities in applications. [emphasis mine – JK]

That’s right, the problem isn’t that we have code with undefined behavior, the problem is that nasty compilers are using “offending” optimizations.

To give the vulnerability note its due, it does present a coding solution to the example problem, explaining how to fix the issue with better code. But I found the quoted statement both surprising and bothersome. The attitude that it is okay to have broken code, as long as we don’t upgrade our compilers, is hard to swallow.

As compilers mature they generate better code (modulo some regressions), and with each revision the code they generate for you is likely more secure and less prone to subtle bugs. Asking developers to opt out of compiler improvements so that they can avoid fixing broken code makes me suspicious of their commitment to code quality.

Please post comments on Google Plus.