
Payload length and interpretation in IEEE Std 754-2008



The payload seems to be provided so that the implementor can supply diagnostic information about the operation that produced the NaN, but I don't find much about how it should be encoded, what its limits are, or whether those limits are the same regardless of the width of the format.  
 
It also seems to me that it would be illogical for the implementor to specify payloads in, say, a decimal128 item that could not, on capacity grounds, be represented to the same degree of exactness in a binary32 item. 
 
As I read it, to give two examples, the capacity limits (presuming a right-justified integer) are 23 bits (numeric range 1 - 8,388,607) for binary32 and 33 digits (1 - 999,999,999,999,999,999,999,999,999,999,999) for decimal128.  Even the lesser of the two ought to provide enough variations to allow the implementor to report whatever he felt was appropriate.  Eight million potential "error codes" is a lot of error codes. 
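 
To make my reading concrete, here is a minimal C sketch of one possible convention for binary32 (the helper names are my own, and nothing here is claimed to be mandated by the standard): the payload is stored as a right-justified integer in the low-order bits of the trailing significand, with the most significant bit of that field kept as the quiet bit.  Note that reserving the quiet bit, as this sketch does, leaves 22 usable bits rather than the full 23. 
 
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
 
    /* Hypothetical convention only: right-justified integer payload in the
       low 22 bits of the binary32 trailing significand; bit 22 of that
       field serves as the quiet bit. */
    static float qnan_with_payload(uint32_t payload)
    {
        uint32_t bits = 0x7F800000u              /* exponent all ones  */
                      | 0x00400000u              /* quiet bit          */
                      | (payload & 0x003FFFFFu); /* 22 payload bits    */
        float f;
        memcpy(&f, &bits, sizeof f);
        return f;
    }
 
    static uint32_t payload_of(float f)          /* assumes f is a NaN */
    {
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);
        return bits & 0x003FFFFFu;               /* right-justified reading */
    }
 
    int main(void)
    {
        printf("%u\n", (unsigned)payload_of(qnan_with_payload(42)));  /* prints 42 */
        return 0;
    }
 
Under that particular reading a binary32 NaN can still carry a bit over four million distinct payloads, so the "plenty of error codes" point stands either way. 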
 
It's also unclear to me how this payload value (or bit-pattern) in the trailing significand is to be interpreted -- as a bit-pattern (even for decimal), as a right-justified integer, as a left-justified integer, as a fractional value, or as a canonic significand with the implied decimal/binary point after the first digit/bit.  Is this clearly specified somewhere? 
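 
To show why the choice matters, here is a second small C sketch (again purely illustrative; I am not claiming any of these is what the standard intends) that reads the same 23-bit trailing-significand pattern three of those ways: 
 
    #include <stdint.h>
    #include <stdio.h>
 
    int main(void)
    {
        uint32_t field = 0x2A0000u;  /* an arbitrary 23-bit trailing-significand pattern */
 
        /* right-justified integer */
        printf("right-justified integer: %u\n", (unsigned)field);   /* 2752512  */
 
        /* fractional value, point to the left of the whole field */
        printf("field / 2^23:            %g\n", field / 8388608.0); /* 0.328125 */
 
        /* implied binary point after the first (leftmost) bit of the field */
        printf("field / 2^22:            %g\n", field / 4194304.0); /* 0.65625  */
 
        return 0;
    }
 
The same bits yield three quite different values, which is the source of my question. 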
 
    -Chuck Stevens
