Re: Alternate floating-point results under directed rounding
Van Snyder wrote:
> On Mon, 2008-11-10 at 04:02 -0800, Arnold Neumaier wrote:
>> Sylvain Pion wrote:
>>> Arnold Neumaier wrote:
>>>> Nobody had suggested that. The question is only whether a
>>>> floating-point +-inf or NaN should be converted into Empty,
>>>> or into a substitute finite point interval, or into a substitute
>>>> interval of positive width.
>>>
>>> Can't it be made an error?
>>> I am not sure how to properly word this in the standard, but for
>>> example one could define "undefined behavior", or raise some error
>>> flag...?
>>
>> Making it an error would perhaps be the best.
>> On the other hand, one may want to be able to trap the error,
>> to be able to do something alternative if the error occurred...
>> That's why I proposed to use the nonstandardNumber flag.
>> I wouldn't mind using a different flag...
> I prefer that IEEE NaN and IEEE inf be transformed to differently
> represented "interval" objects. This allows users to interpret their
> results without wondering whether some information has been lost.
Information is lost whenever a float is converted to an interval and
the history of the float is not known. Thus it is unreasonable to create
special non-intervals (which need extra book-keeping in all operations)
to take care of this situation for inf and NaN but not for other floats.
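The same loss already occurs for perfectly ordinary finite floats; a
small C illustration (nothing here is specific to any proposal):

    #include <stdio.h>

    int main(void) {
        double x = 0.1;       /* the nearest double to 1/10, not 1/10 itself */
        /* The point interval [x,x] built from this float does not contain
           the real number 1/10 the programmer presumably had in mind,
           and nothing stored in x records that history. */
        printf("%.20g\n", x); /* prints 0.10000000000000000555... */
        return 0;
    }
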
> Since IEEE FPA computations with NaN don't destroy NaN, intervals [x,y]
> with either x or y NaN are reasonable representations of the empty set,
> especially since division of an interval containing zero by the point
> interval [0,0] using IEEE FPA naturally produces [NaN,NaN] with invalid
> signaling.
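For reference, the IEEE kernel operations such a division may encounter
behave as follows; a plain C illustration, independent of any particular
interval division formula:

    #include <fenv.h>
    #include <stdio.h>

    int main(void) {
        volatile double zero = 0.0, one = 1.0;  /* volatile: no constant folding */
        double r;

        r = one / zero;                         /* +inf */
        printf("1/0   = %g  divbyzero=%d\n", r, fetestexcept(FE_DIVBYZERO) != 0);

        feclearexcept(FE_ALL_EXCEPT);
        r = zero / zero;                        /* NaN */
        printf("0/0   = %g  invalid=%d\n", r, fetestexcept(FE_INVALID) != 0);

        feclearexcept(FE_ALL_EXCEPT);
        r = zero * (one / zero);                /* 0*inf is NaN */
        printf("0*inf = %g  invalid=%d\n", r, fetestexcept(FE_INVALID) != 0);
        return 0;
    }
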
> A reasonable representation for intval(inf) is [inf,inf]. IEEE FPA
> computations don't destroy inf, so if one wants to think of this as an
> alternative representation of the empty set, one can do so. But it has
> a different quality from sqrt([-1,1]) = [NaN,1].
In my proposal, sqrt([-1,1])=[0,1], with the flag possiblyUndefined raised.
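For concreteness, a minimal C sketch of the two behaviors contrasted
here; the interval type, the flag variable, and the omission of outward
rounding are simplifications of mine, not part of either proposal:

    #include <fenv.h>
    #include <math.h>
    #include <stdio.h>

    /* Hypothetical interval type and flag, for illustration only;
       outward rounding of the bounds is ignored here. */
    typedef struct { double lo, hi; } interval;
    static int possiblyUndefined = 0;

    /* Naive endpoint evaluation: sqrt(-1.0) is NaN and raises invalid. */
    static interval sqrt_naive(interval x) {
        interval r = { sqrt(x.lo), sqrt(x.hi) };   /* [-1,1] -> [NaN,1] */
        return r;
    }

    /* Domain-clipping variant: intersect the argument with [0,inf)
       first and remember that part of it was outside the domain. */
    static interval sqrt_clipped(interval x) {
        if (x.lo < 0.0) { x.lo = 0.0; possiblyUndefined = 1; }
        interval r = { sqrt(x.lo), sqrt(x.hi) };   /* [-1,1] -> [0,1] */
        return r;
    }

    int main(void) {
        interval x = { -1.0, 1.0 };
        interval a = sqrt_naive(x);
        interval b = sqrt_clipped(x);
        printf("naive:   [%g, %g]  invalid=%d\n",
               a.lo, a.hi, fetestexcept(FE_INVALID) != 0);
        printf("clipped: [%g, %g]  possiblyUndefined=%d\n",
               b.lo, b.hi, possiblyUndefined);
        return 0;
    }
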
> If one computes (with variables instead of constants) 0.1*(2*HUGE) one
> gets inf instead of 0.2*HUGE. If inf is then converted to [inf,inf] and
> inf is interpreted as {x|x>HUGE} the correct result isn't in the
> interval -- but it also isn't in the interval [NaN,NaN] so it's
> difficult to produce a definitive argument that one is the better
> representation of this result than the other. The argument I offer is
> that [inf,inf] better represents the process by which the result arose
> than does [NaN,NaN], and this might be useful information.
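The overflow itself is easy to reproduce; a small C sketch, taking HUGE
to be DBL_MAX (my choice, purely for concreteness):

    #include <float.h>
    #include <stdio.h>

    int main(void) {
        double huge = DBL_MAX;                 /* HUGE taken as the largest finite double */
        double computed = 0.1 * (2.0 * huge);  /* 2*HUGE overflows to +inf; 0.1*inf = +inf */
        double intended = 0.2 * huge;          /* ~3.6e307, perfectly finite */
        printf("computed: %g\n", computed);    /* inf */
        printf("intended: %g\n", intended);    /* 3.59539e+307 */
        return 0;
    }
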
The argument I offer is that unless a floating-point number represents
exactly what it is, there is already a semantic error in the use of
intervals. Thus the computed result is immaterial.
It is unreasonable to make the standard cater for semantic errors of
the user.
Rather, the semantics should be simple enough that the correct semantics
can be taught without having to consider many exceptional situations.
This is the case with my proposal.
> If P1788 is to be built atop P754 as much as possible, I see no reason
> for yet another flag, nonstandardNumber, since P754 already has NaN, and
> invalid and overflow signals.
nonstandardNumber is needed anyway when converting a string that does
not represent a valid interval.
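A minimal C sketch of how such a flag could be used during text
conversion; the function name, the accepted syntax, and the placeholder
result are my illustration only, while the flag name is the one under
discussion:

    #include <stdio.h>

    /* Illustrative only: the interval type, function name and accepted
       syntax are mine; only the flag name comes from the discussion. */
    typedef struct { double lo, hi; } interval;
    static int nonstandardNumber = 0;

    /* Accepts strings of the form "[lo,hi]" with lo <= hi; anything else
       raises the flag and yields a placeholder result. */
    static interval text2interval(const char *s) {
        interval r = { 0.0, -1.0 };            /* placeholder (lo > hi) */
        double lo, hi;
        int n = 0;
        if (sscanf(s, " [ %lf , %lf ] %n", &lo, &hi, &n) == 2
            && n > 0 && s[n] == '\0' && lo <= hi) {
            r.lo = lo; r.hi = hi;
        } else {
            nonstandardNumber = 1;             /* string is not a valid interval */
        }
        return r;
    }

    int main(void) {
        text2interval("[1,2]");
        printf("after \"[1,2]\": flag=%d\n", nonstandardNumber);   /* 0 */
        text2interval("[1,2)");                /* not a valid interval */
        printf("after \"[1,2)\": flag=%d\n", nonstandardNumber);   /* 1 */
        return 0;
    }
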
Arnold Neumaier