In our (to be patented) NaN encoding, the payload conveys a very
small error code and a lot of info about what the machine was doing
when the error occurred, for use by the debugger. Due to the nature
of the specific architecture, almost none of that info would be
meaningful on a conventional superscalar architecture that doesn't
do the extensive speculation that we do. Useful NaN info in our
format can survive a speculative narrowing from quad to single.
As for the standard, the payload format is explicitly left
undefined, to admit just such use as ours. Keep your mitts off!
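A purely hypothetical sketch of that kind of packing (the field names and widths below are invented for illustration; the actual encoding described above is proprietary and not public):

```python
# Hypothetical payload layout, invented for illustration only.
# Putting the small error code in the HIGH-order payload bits is what lets
# it survive a narrowing conversion: NaN propagation keeps the most
# significant payload bits when converting to a narrower format.
ERR_BITS = 6    # small error code
CTX_BITS = 40   # machine-state info for the debugger

def pack_payload(err_code: int, context: int) -> int:
    """Pack an error code and debugger context into one payload integer."""
    assert 0 <= err_code < (1 << ERR_BITS)
    assert 0 <= context < (1 << CTX_BITS)
    return (err_code << CTX_BITS) | context

def unpack_payload(payload: int) -> tuple[int, int]:
    """Recover (err_code, context) from a payload integer."""
    return payload >> CTX_BITS, payload & ((1 << CTX_BITS) - 1)

payload = pack_payload(5, 0xDEAD)
print(unpack_payload(payload))  # (5, 57005)
```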
On 4/29/2011 7:31 AM, Charles Stevens wrote:
The payload seems to be provided to allow the implementor to
specify diagnostic information about the operation that produced
it, and I don't find much about how it should be encoded, what its
limits are, and whether those limits are the same regardless of
the width of the format.
It also seems to me that it would be illogical for the implementor
to specify payloads in, say, a decimal128 item that could not, on
capacity grounds, be specified to the same degree of exactness in
a binary32 item.
As I read it, to give two examples, the capacity limits (presuming
a right-justified integer, and setting aside the leading
significand bit that distinguishes quiet from signaling NaNs) are
22 bits (numeric range 1 to 4,194,303) for binary32 and 33 digits
(1 to 999,999,999,999,999,999,999,999,999,999,999) for decimal128.
Even the least of these ought to provide enough variations to allow
the implementor to report whatever he felt was appropriate. Four
million potential "error codes" is a lot of error codes.
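Under one common reading of the binary interchange formats (assumed here: the leading bit of binary32's 23-bit trailing significand is the quiet/signaling flag, leaving 22 payload bits), that capacity can be demonstrated directly:

```python
import math
import struct

QUIET_BIT    = 0x0040_0000  # leading bit of the 23-bit significand field
PAYLOAD_MASK = 0x003F_FFFF  # the remaining 22 bits, as a right-justified int
EXP_ALL_ONES = 0x7F80_0000  # exponent field all ones => infinity or NaN

def make_qnan(payload: int) -> float:
    """Build a binary32 quiet NaN carrying `payload` (right-justified)."""
    assert 0 <= payload <= PAYLOAD_MASK
    bits = EXP_ALL_ONES | QUIET_BIT | payload
    return struct.unpack("<f", struct.pack("<I", bits))[0]

def get_payload(x: float) -> int:
    """Read the 22 payload bits back out of a binary32 value."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return bits & PAYLOAD_MASK

x = make_qnan(12345)
print(math.isnan(x), get_payload(x))  # True 12345
```

(The round trip goes through a Python double; the payload survives on common hardware because widening and narrowing keep the most significant payload bits, and the low bits here are zero.)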
It's also unclear to me how this payload value (or bit-pattern) in
the trailing significand is to be interpreted -- as a bit-pattern
(even for decimal), as a right-justified integer, as a
left-justified integer, as a fractional value, or as a canonical
significand with the implied decimal/binary point after the first
digit/bit. Is this clearly specified somewhere?
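The ambiguity can be made concrete: the same value stored under two of those conventions produces different bit-patterns, and the same stored field decodes to different values under each reading (assuming, purely for illustration, a 22-bit binary32 payload field):

```python
FIELD_BITS = 22   # binary32 payload width, assuming the leading
                  # significand bit is the quiet/signaling flag
code = 12345      # a diagnostic value an implementor might store

# The same code under two candidate conventions gives different fields:
right_justified = code                                       # 0x3039
left_justified = code << (FIELD_BITS - code.bit_length())    # 0x303900

# Conversely, one stored field decodes differently under each reading:
field = right_justified
as_integer  = field                      # right-justified integer: 12345
as_fraction = field / (1 << FIELD_BITS)  # fractional value: ~0.00294

print(f"{right_justified:#x} {left_justified:#x} {as_fraction:.5f}")
```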