Jarkko Hietaniemi
2015-08-07 12:40:03 UTC
(Re: https://rt.perl.org/Ticket/Display.html?id=125570, please read
through it first)
Attempting a summary, first at the technical level:
(1) At least under gcc, __attribute__((nonnull(...))) does NOT mean
"magically make the code work if NULL is passed in", even for values of
"work" meaning "insert code that assert-fails on NULL". In fact, we
explicitly insert our own ASSERTs (and, via a porting test, enforce
and verify them) to do the assert-fails.
What the attribute *really* means is (quoting Aristotle): "a *promise
from the programmer to the compiler* that the function will never be
called with NULL arguments." And what this means is that any code
that tests the pointer, or goes through it, can be elided by the
optimizer because we just promised, via the attribute, that this
code is pointless. This means that, for example, any sanity checks
testing the argument for NULL-ness can be removed (see the sketch
after point (2)).
As also pointed out by Aristotle, there is no inter-procedural
flow analysis done to see where NULLs might come in. (For a library
like libperl, that would be quite a feat.)
(2) bulk88 pointed out that the exact behavior on badly behaving
pointers (like NULL) is undefined, which is defined (haha) to mean
that the implementation (of compiler, OS, CPU) is free to do whatever
it wants. A fair point, though I don't know what we can do here
portably, beyond adding special code for each platform where we
know exactly what to do.
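
To make point (1) concrete, here is a minimal sketch (not perl source;
the function and its checks are invented for illustration) of what the
attribute licenses the optimizer to do:

    /* Hypothetical example, not perl code. */
    #include <assert.h>
    #include <stddef.h>

    __attribute__((nonnull(1)))
    size_t my_strlen(const char *s)
    {
        /* Because of the nonnull promise above, an optimizing gcc is
         * allowed to treat both of these checks as dead code and drop
         * them; they are not a safety net. */
        assert(s != NULL);
        if (s == NULL)
            return 0;

        size_t n = 0;
        while (s[n] != '\0')
            n++;
        return n;
    }

    int main(void)
    {
        /* gcc's -Wnonnull catches this call only because the NULL is
         * literally visible here; a NULL arriving through another
         * function is not tracked, as noted in (1). */
        return (int)my_strlen(NULL);
    }

Compiling that with and without the attribute at -O2 and comparing the
generated code is an easy way to check whether one's compiler really
does throw the checks away.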
I am in the camp of "the attribute is doing us no good": it should not
be generated under any build configuration. It's not doing what
one would think it's doing, and, as Aristotle also points out, it is
no different from using #ifdef DEBUG/DEBUGGING.
I am also in the even more aggressive camp of "why are we using the
ASSERTs only in DEBUGGING builds?" I mean, is accidentally dereferencing
a NULL somehow more acceptable in production builds? This I expect to be
too extreme a fringe position for most.
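
For the curious, a minimal sketch of what that more aggressive stance
could look like; the macro names are invented, not perl's actual
generated assert machinery:

    #include <stdio.h>
    #include <stdlib.h>

    /* Roughly the status quo: the argument check exists only in
     * DEBUGGING builds and compiles away to nothing otherwise. */
    #ifdef DEBUGGING
    #  define ARG_ASSERT(expr) \
         ((expr) ? (void)0 : (fprintf(stderr, "panic: %s\n", #expr), abort()))
    #else
    #  define ARG_ASSERT(expr) ((void)0)
    #endif

    /* The more aggressive alternative: fail loudly on a bad argument in
     * every build, so a production perl dies with a clear message
     * instead of dereferencing NULL somewhere further down. */
    #define ARG_CHECK(expr) \
        ((expr) ? (void)0 : (fprintf(stderr, "panic: %s\n", #expr), abort()))

    static size_t my_len(const char *s)
    {
        ARG_CHECK(s != NULL);   /* survives every build configuration */
        size_t n = 0;
        while (s[n] != '\0')
            n++;
        return n;
    }

    int main(void)
    {
        printf("%zu\n", my_len("ok"));
        return 0;
    }

The runtime cost is essentially one predictable branch per call;
whether that is worth paying in hot paths is exactly the judgment call
above.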