
Re: The history of the word 'format' in 754...



Dan Zuras wrote:
From: "Nate Hayes" <nh@xxxxxxxxxxxxxxxxx>
To: "Dan Zuras Intervals" <intervals08@xxxxxxxxxxxxxx>
Cc: "P-1788" <stds-1788@xxxxxxxxxxxxxxxxx>
Subject: Re: The history of the word 'format' in 754...
Date: Wed, 20 Oct 2010 01:26:05 -0500

Dan Zuras wrote:
>> . . .
>>

In P1788, we also have the concept of "decorated intervals" which has no
direct analogue in IEEE 754, either. So this also requires some fresh
thinking.

. . .

Nate Hayes

Actually, when I originally suggested decorated intervals my
inspiration was an old 754 implementation in Forth.

As Forth operates only with a stack & has no concept of
global variables, it was necessary for them to tag every
floating-point number with a set of modes (5 bits) & flags
(another 5).  They only implemented 32-bit floating-point
numbers so the combined data structure was 42 bits.

As far as I know this style of implementation was unique.
But it had obvious advantages in that every floating-point
value carried its exception history with it.  And JUST the
exception history for that number.  Whatever else happened
could not contaminate that result.

In the modern world this would also have advantages when it
comes to parallelism.  Independent calculations are freed
from the choke point of any synchronization interlocks
associated with global state.  And the exception state that
is local to any given value is the exception state of THAT
value & no other.

It was with all this in mind that I made the suggestion.

Still, all that having been said, I take your point.

Other than this one unusual implementation, there is no
comparable concept within 754.

So let me suggest some of that 'fresh thinking'.

In 754 a calculation is not considered conforming unless
it sets all flags associated with any applicable exceptions.
That is, an operation takes its operands together with the
exception state & produces the floating-point result
together with the new exception state.  It is the only way
one can correctly interpret the results.

In the Forth implementation all that information would
have been confined to the augmented 42-bit operands & the
augmented 42-bit results.

Let me suggest, again for the sake of correct interpretation
of the results, that the same be made true of intervals.

Now, in the floating-point world exceptions are often
ignored.  Even routinely ignored.  While I am not aware of
any implementation that does not compute the exceptions in
that case, there is nothing that would prevent it.  No one
would ever know the difference.

Let me suggest that the same be true for intervals.  That
is, that decorations may be dispensed with when it is known
that they are being ignored.  Or perhaps when it is known
that they do not change in the course of a calculation.
In this case, bare intervals would be a useful optimization
at level 3 because, at level 2, they are indistinguishable
from decorated intervals that computed their decorations &
ignored them.

However making bare intervals a primary datatype at level
2 creates a situation in which one intentionally computes
WITHOUT the information necessary to correctly interpret
the results.  The analogy would be computing in
floating-point WITHOUT computing the exceptions.  How
would one know if this infinity is the result of an
overflow or a divideByZero?

Right.

This is the desired situation in certain interval algorithms, hence a reason bare intervals should be first-class Level 2 types.




Therefore, let me suggest that the proper place for such
things as bare intervals be at level 3.  There they may be
used by an optimizing compiler without compromising the
assured computing which is the only reason for intervals
to exist in the first place.

I'm sure many applications will be able to take advantage
of them.  In particular, your graphics algorithms seem
well suited to such optimizations.

Most of the graphics algorithms we use are variants on branch-and-bound. What this means is that bare intervals are all that is needed until an exception occurs. At that point, the bare intervals are no longer needed but bare decorations are then required to finish propagating the exceptional information through some lengthy computation.

This is mostly analogous to how an IEEE 754 computation might not check exception flags, but would instead depend on a NaN propagating through a lengthy computation so as to correctly indicate the computation failed for some reason.

Of course, with the bare decorations, there are decoration flags in the payload. So the branch-and-bound algorithm can also see exactly what exception occurred, e.g., "possiblyDefined" vs. "notDefined" etc. It then uses this information to delete boxes from the solution more efficiently.



The compiler could strip out that 17th byte, pack things up
nicely, & run literally bare only.

Right... where "bare only" means either bare interval or bare decoration. Under different circumstances, both will be needed.



But let this particular tool be used by compilers & proof
engines only.  To hand it to the fallible user only invites
error & then mistrust of the standard.


I believe IEEE 1788 should provide the Level 2 model with bare intervals, bare decorations, and decorated intervals and then leave it up to vendors and implementers to decide how "automated" the various combinations of these objects will be at Level 3 and beyond.

As John's recent challenge shows, even when the USER introduces a programming bug, the decorations will ensure the exception does not go unnoticed.

No standard will ever be able to prevent users from writing bad code. But you once wrote a long and eloquent e-mail about how the standard must ensure correct results even under these conditions, lest liability be transferred from users to us, the authors of the standard.

Nate Hayes


That is my suggestion anyway.  How fresh you may think it
to be is up to you.

Yours,

   Dan