Re: YES P1788/M0029.02:Level3-InterfaceOnly, *BUT*
Lee Winter's post of 2011-12-30 14:35 UTC has several misconceptions,
I think. John Pryce pointed out some of them already.
On 2*[.8 F.max, .9 F.max], which John and I see as being [F.max, +Inf]:
> That happens to be the IEEE-754 result of outward rounding. But it
> is not the tightest expressible containing interval. The tightest
> expressible containing interval is [+ovr, +ovr].
How exactly do you expect to encode this, and preserve containment,
if not as [F.max,+inf]?
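For concreteness, here is a small C99 sketch (my own illustration, not
anything from Lee's post) of how directed rounding via <fenv.h> yields
exactly that enclosure; it assumes a compiler that honors the FENV_ACCESS
pragma and does not constant-fold the products:

  #include <stdio.h>
  #include <float.h>
  #include <fenv.h>

  #pragma STDC FENV_ACCESS ON

  int main(void)
  {
      volatile double lo = 0.8 * DBL_MAX, hi = 0.9 * DBL_MAX;

      fesetround(FE_DOWNWARD);        /* lower bound: round toward -Inf */
      double rlo = 2.0 * lo;          /* positive overflow -> DBL_MAX   */

      fesetround(FE_UPWARD);          /* upper bound: round toward +Inf */
      double rhi = 2.0 * hi;          /* positive overflow -> +Inf      */

      fesetround(FE_TONEAREST);
      printf("[%g, %g]\n", rlo, rhi); /* [1.79769e+308, inf]            */
      return 0;
  }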
> In the adopted IEEE-754 arithmetic underflows are mandated to be flushed
> to zero, which restriction produces the notorious "signed zeros" ...
Not so -- rounding direction is observed and the result may be F.min,
the smallest (in magnitude) nonzero subnormal.
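A similarly minimal C sketch (mine) showing that a conforming 754
implementation does not flush to zero under directed rounding: the product
below underflows far past the subnormal range, yet round-toward-+Inf still
delivers the smallest positive subnormal rather than zero.

  #include <stdio.h>
  #include <fenv.h>

  #pragma STDC FENV_ACCESS ON

  int main(void)
  {
      volatile double tiny = 1e-300;

      fesetround(FE_UPWARD);
      double up = tiny * tiny;     /* exact value 1e-600 underflows     */
      fesetround(FE_TONEAREST);

      printf("%g\n", up);          /* smallest subnormal, ~4.94e-324,
                                      not 0, because we rounded upward  */
      return 0;
  }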
The point is that the FP number space includes a finite number of points,
including the affine infinities of the two-point compactification of the
Reals. An early draft of 754 also included the single unsigned projective
infinity of the one-point compactification, which (unlike the two-point
version) is compatible with complex-number applications.
The subject of signed zeros and inexact zeros (i.e. underflow) has been
discussed here before: 2008-10-29 to 2008-11-08, 2010-04-22, 2004-04-28,
and, together with overflow vs. Inf, from 2010-09-20 to 2010-09-30.
> Thus an interval with a lower bound of positive overflow is a
> perfectly sensible value. Representing that interval with a lower
> bound of the maximum representable numeric value is just as
> unnecessary, in the sense of excess width, as using an interval with
> the lower bound of one or zero. Many implementations might choose to
> go with the result of IEEE-754 outward rounding, but the standard
> should not prohibit an implementation from using tighter bounds should
> it so choose.
In what sense is [+inf, +inf] tighter than [+F.max, +inf], given that the
lower bound of +inf has to be interpreted as if it were F.max anyway? And
if not, how do you (Lee) expect it to be interpreted?
It's actually worse: when converting [+F.max, +Inf] to a format with a
larger exponent range, the lower bound becomes a genuine lower bound that
can continue to participate in arithmetic. If it were [+Inf, +Inf] that
bit of information would be lost. Here is an example where this is in
fact critical: conversion from binary32 to binary64. If we start with
F.max32, that value survives the conversion just fine. But if we started
with +Inf (+ovr in Lee's terms), its interpretation would change along with
the format: it would now be read as F.max64, which would most likely
grossly break containment.
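A small C illustration of that widening step (variable names are mine):
F.max32 converts to binary64 exactly and keeps working as a lower bound,
whereas a +Inf bound carries no hint that the true bound was anywhere
near F.max32.

  #include <stdio.h>
  #include <float.h>
  #include <math.h>

  int main(void)
  {
      float  lo32 = FLT_MAX;          /* genuine finite lower bound       */
      double lo64 = (double)lo32;     /* exact: ~3.40282e+38, still usable */

      float  ovr32 = INFINITY;        /* "+ovr" encoded as +Inf           */
      double ovr64 = (double)ovr32;   /* still +Inf; re-reading it as
                                         F.max64 would claim a bound far
                                         above anything actually computed */

      printf("%g  %g\n", lo64, ovr64);
      return 0;
  }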
Here are some comments on Lee's post of 2011-12-30 15:50 UTC:
> But in mathematics division by zero does not yield infinity. First
> of all infinity is not a value. It is an abstract concept. And it
> is not part of R. But most importantly mathematical division by a
> mathematical zero does not have a result because it is undefined in
> mathematics.
There are many infinities and many zeros in mathematics -- you have
to know the context. What you say is true in the field of Reals. But
the underlying mathematical model for 754 is not the Reals, but the
two-point compactification thereof, which is not a field. (This loss
of field properties is surely small potatoes compared to the loss of
associativity when rounding is incorporated in the model.)
Even the distinction between Real zero and Integer zero can matter, e.g.
in the definition of the Power function.
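For instance (my example, using C's library; 754-2008 splits the cases
into separate functions), zero raised to an integer zero and zero raised
to a real zero are treated differently:

  #include <stdio.h>
  #include <math.h>

  int main(void)
  {
      /* C's pow() follows the integer-exponent convention: 0**0 == 1.    */
      printf("%g\n", pow(0.0, 0.0));      /* prints 1                     */

      /* 754-2008 separates the cases: pown(0, 0) == 1 (integer exponent)
         but powr(0, 0) is invalid and yields NaN (real exponent, defined
         via exp(y*log(x))).  C has no powr(), hence comment only.        */
      return 0;
  }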
> No, it is arithmetic over IEEE-754 values, thus two levels aleph less
> than the extended reals. Many people _call_ the IEEE-754 extended number
> line R*, but that does not mean the IEEE-754 number line _is_ R*.
Forget about alephs (yet another kind of infinity in mathematics). I don't
know anybody who would call the set of floating-point numbers R* -- it is
clearly a finite subset thereof.
> It would be useful to support unbounded values within 1788. But that
> would require an extension to the IEEE-754 arithmetic to support true
> mathematical infinity as opposed to overflow.
The small adjustments necessary are noted in the Vienna Proposal. It is
not inconceivable that an implementation would have a mode where directed
rounding obeys slightly different rules, such as 0*inf being zero. This
would not harm normal FP arithmetic with round-to-nearest. Perhaps there
would be two additional directed rounding modes instead; machines already
support five or even eight modes when DFP is supported.
> Yes, but overflow exception handling is a fairly ponderous process.
Only because there has been insufficient clamor for lightweight exception
mechanisms. The hangup is not the machinery; it is the fact that this
cannot be expressed well in high-level languages, so manufacturers fear
that nobody would use the mechanism, and hence the development cost is
not justified. This is of course just another vicious circle. There is
one case where I had direct input -- the IBM System Z radix conversion
instruction -- and it has a fast exceptionless over/underflow option
where an unmasked exception is reflected in the in-line condition code
instead of triggering an interruption.
> It is based on an "out of band" representation, which is incompatible
> with sane interval arithmetic. The IEEE-754 exception exists to
> support fragile systems that cannot cope with the limits of the
> underlying arithmetic. ... (rest of rant deleted)
Care to exhibit some "fragile systems"? The implementations I know of
are quite fastidious in dealing with edge cases, and the 754 standard
has made a serious effort to contain these edge cases. There are some
conflicting requirements that left a few warts behind for BFP, but DFP
had no backwards-compatibility hurdles and was able to avoid them.
> The 754 overflow flag itself is awkward to deal with because it is
> sticky. One must explicitly clear the flag before beginning any
> operation whose result one wants to protect.
On the other hand, just as with 1788 decorations, one can let an entire
subroutine go ahead full-steam in an optimistic manner, check the flags,
and then redo exceptional situations in slow motion to detect the point
of failure (if that is of interest). That was the whole point of this
"awkward" mechanism.
> And even with a full
> exception handler there is a limited ability to handle exceptional
> values. For example, an overflow exception handler is passed an
> argument representing the actual result value, but with the exponent
> reduced by a constant amount, thus rendering the result representable.
This is what happens for the basic arithmetic operations, where it works
fine without loss of information, and permits easy support of a wider
effective exponent range.
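A toy worked example of the wrapped-exponent idea (the numbers are mine;
754-1985 specified a bias adjust of 192 for single and 1536 for double): a
true result of 2^1030 overflows binary64, but the handler can be handed
2^(1030-1536), which encodes the result exactly.

  #include <stdio.h>
  #include <math.h>

  int main(void)
  {
      /* Pretend the true result was 2^1030, beyond binary64's maximum
         exponent of 1023.  The trap handler would see it with the
         exponent reduced by the fixed bias adjust of 1536.             */
      double wrapped  = ldexp(1.0, 1030 - 1536);   /* 2^-506, exact     */
      double true_exp = log2(wrapped) + 1536;      /* recovers 1030     */

      printf("wrapped = %g, true exponent = %g\n", wrapped, true_exp);
      return 0;
  }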
> But that kludge does not generalize. It suffices for most of the
> operations specified in IEEE-754, but if the magnitude of the overflow
> exceeds the extra range provided by the adjustment constant, the
> actual value cannot be obtained by the exception handler.
That is why different mechanisms are used. Conversions to a narrower
format return the result rounded to the target precision, but in the
source format, so that the exponent can be adjusted separately. The
reduction operations carry a scale factor around. The radix conversions
(in IBM's implementation) return (on unmasked over/underflow) a value in
the absolute range of 1 to base, with a separate exponent as an int32,
similar to what frexp() would have returned. Note that 754-2008 is less
specific about trapped over/underflow than 754-1985 was, but covers more
operations (e.g. conversions), so the details are left to implementers.
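For readers unfamiliar with frexp(): it reports a finite value as a
fraction plus a separate integer exponent, so even a number at the edge of
the range can be communicated without loss; this is the flavor of result
the radix-conversion mechanism above delivers (a quick C sketch of frexp()
itself, not of the IBM instruction):

  #include <stdio.h>
  #include <float.h>
  #include <math.h>

  int main(void)
  {
      int e;
      double f = frexp(DBL_MAX, &e);   /* fraction in [0.5, 1), exponent
                                          returned separately            */
      printf("DBL_MAX = %.17g * 2^%d\n", f, e);   /* ~1 * 2^1024         */
      return 0;
  }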
> IMHO it is far better to design 1788 to handle those exceptions as a
> matter of course than to require users of 1788 to provide their own
> kludges to handle the limits of 1788.
I thought that's what we were all doing here...
Michel.
---Sent: 2011-12-31 05:38:06 UTC