
Re: The history of the word 'format' in 754...



Dan Zuras Intervals wrote:
From: "Nate Hayes" <nh@xxxxxxxxxxxxxxxxx>
To: "Dan Zuras Intervals" <intervals08@xxxxxxxxxxxxxx>
Cc: <stds-1788@xxxxxxxxxxxxxxxxx>
Subject: Re: The history of the word 'format' in 754...
Date: Wed, 20 Oct 2010 09:56:08 -0500

Dan Zuras wrote:
. . .

Now, in the floating-point world exceptions are often
ignored.  Even routinely ignored.  While I am not aware of
any implementation that does not compute the exceptions in
that case, there is nothing that would prevent it.  No one
would ever know the difference.

Let me suggest that the same be true for intervals.  That
is, that decorations may be dispensed with when it is known
that they are being ignored.  Or perhaps when it is known
that they do not change in the course of a calculation.
In this case, bare intervals would be a useful optimization
at level 3 because, at level 2, they are indistinguishable
from decorated intervals that computed their decorations &
ignored them.

However making bare intervals a primary datatype at level
2 creates a situation in which one intentionally computes
WITHOUT the information necessary to correctly interpret
the results.  The analogy would be computing in
floating-point WITHOUT computing the exceptions.  How
would one know if this infinity is the result of an
overflow or a divideByZero?
Right.

This is the desired situation in certain interval algorithms, hence
one reason bare intervals should be first-class Level 2 types.
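
For concreteness, the distinction might look like the following C++
sketch at a lower level; the names and the ordering of the decoration
values are illustrative only, not taken from any draft text:

    // Illustrative sketch, not draft text.  A bare interval is just
    // the two endpoints; a decorated interval carries one extra byte.
    enum class Decoration : unsigned char {
        notDefined, possiblyDefined, isSafe    // worst to best
    };

    struct BareInterval      { double lo, hi; };   // 16 bytes
    struct DecoratedInterval {
        BareInterval box;
        Decoration   dec;                          // one extra byte
    };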

	Isn't concern for correct computing the
	point of ALL interval algorithms?

The algorithm designer usually knows best which parts of a
computation must be safe in order that the final result is safe.

It all depends on the circumstances.
Sometimes a lot of unverified computations can be used as heuristics
to tune the verified part.
Sometimes decorations are irrelevant, sometimes they are essential.
And sometimes it pays to switch from bare intervals to bare decorations.

But the latter feature, which Nate Hayes wants to use in a prominent
way, requires that even operations on bare intervals return a
decoration (though not as part of the bare interval), which is
usually ignored but can be used to trigger the switch.

So, as you also write below, we should perhaps think about how to
specify this.
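
One conceivable shape for such a specification, as a sketch only:
each bare-interval operation returns its result together with the
decoration it would have produced on isSafe-decorated arguments.
The pair-returning signature and the function name are my
assumptions, and directed rounding of the endpoints is omitted:

    #include <cmath>
    #include <utility>

    // Sketch: sqrt on a bare interval, also yielding the decoration
    // that isSafe-decorated arguments would have produced.
    std::pair<BareInterval, Decoration> bareSqrt(BareInterval x) {
        if (x.hi < 0.0)                 // entirely outside the domain
            return { BareInterval{ NAN, NAN }, Decoration::notDefined };
        if (x.lo < 0.0)                 // partly outside the domain
            return { BareInterval{ 0.0, std::sqrt(x.hi) },
                     Decoration::possiblyDefined };
        return { BareInterval{ std::sqrt(x.lo), std::sqrt(x.hi) },
                 Decoration::isSafe };
    }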



Most of the graphics algorithms we use are variants on
branch-and-bound. What this means is that bare intervals are all that
is needed until an exception occurs. At that point, the bare intervals
are no longer needed, but bare decorations are then required to finish
propagating the exceptional information through some lengthy
computation.

	And how will you know that an exception has occurred
	on a bare interval?

	You must be counting on some standard exceptional
	behavior in the interval part alone to tell you.
	We have defined no such standard exceptional
	behaviors yet.  Will you be making a motion to
	this effect?

The natural thing is to return separately the decoration that would
be triggered if all bare interval arguments were decorated with isSafe.
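
Combined with the pair-returning sketch above, the branch-and-bound
pattern Nate describes could then read, schematically (root finding
taken as a concrete instance; evalBare and evalDec are hypothetical
hooks into the expression being bounded):

    enum class BoxAction { Discard, Subdivide, KeepIndeterminate };

    // Sketch of the mode switch in one branch-and-bound step.
    // evalBare evaluates the expression over a box with bare
    // intervals and hands back the separate decoration; evalDec
    // propagates bare decorations alone.
    template <class Box, class EvalBare, class EvalDec>
    BoxAction processBox(const Box& b, EvalBare evalBare, EvalDec evalDec) {
        auto [range, dec] = evalBare(b);
        if (dec == Decoration::isSafe) {
            bool mayContainRoot = range.lo <= 0.0 && 0.0 <= range.hi;
            return mayContainRoot ? BoxAction::Subdivide
                                  : BoxAction::Discard;
        }
        // An exception occurred: the interval part is no longer of
        // interest; classify the box by the decoration alone.
        return evalDec(b) == Decoration::notDefined
                   ? BoxAction::Discard
                   : BoxAction::KeepIndeterminate;
    }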


	And when you switch from bare intervals to bare
	decorations where will you be getting the operands
	that contribute to your bare decoration results?

	Without further details, much of this description
	smells of things that belong in the decorations of
	decorated intervals.  At least at level 2, the
	level of abstractions.

Yes, this must (and can) all be decided on level 2.


Of course, with the bare decorations, there are decoration flags in
the payload. So the branch-and-bound algorithm can also see exactly
what exception occurred, e.g., "possiblyDefined" vs. "notDefined",
etc. It then uses this information to delete boxes from the solution
set more efficiently.

	Decoration flags in WHAT payload?

	When have we ever discussed the 'payload' of a bare interval?

	What is it you have in mind here?

What was intended was probably not the payload of a bare interval but
that of a bare decoration, where the implementor has much freedom if
we specify things intelligently.
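
So a bare decoration might be nothing more than the flag itself, with
propagation taking the worst contributing value; how much extra
payload to attach would stay an implementation decision. A sketch:

    #include <algorithm>

    // Sketch: the minimal bare decoration is just the flag; an
    // implementation is free to attach more payload.  Propagation
    // takes the worst of the operand decorations and the decoration
    // raised by the operation itself, relying on the illustrative
    // worst-to-best ordering of the enum above.
    using BareDecoration = Decoration;

    BareDecoration propagate(BareDecoration a, BareDecoration b,
                             BareDecoration opDec) {
        return std::min({ a, b, opDec });
    }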


The compiler could strip out that 17th byte, pack things up
nicely, & run literally bare only.

Right... where "bare only" means either bare interval or bare
decoration.  Under different circumstances, both will be needed.

	The circumstances I suggested were ones in which
	a compiler PROVED no decoration was needed.

	What circumstances are you suggesting?

Bare-only computations with an automatic switch from bare intervals
to bare decorations make sense when operations are involved that are
not everywhere defined. This is not always a matter of compiler
analysis, since whether computations enter the dangerous region
depends on the anticipated range of inputs. In general-purpose code
such as our global optimization systems, this is difficult to ensure,
and we would mostly use the bare-interval mode or the
decorated-interval mode.

But in computer graphics, one evaluates the same handful of
expressions over and over again, maximal speed is essential, and one
can spend time proving by hand that bare-only calculations are always
safe.
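
A typical example of that kind: the sphere f(x,y,z) = x^2 + y^2 +
z^2 - 1 is a polynomial, everywhere defined and continuous, so one
short argument justifies bare-only evaluation once and for all.
Directed rounding is again omitted from the sketch:

    #include <algorithm>

    // Tight square of a bare interval (modulo rounding control).
    BareInterval sqr(BareInterval v) {
        double a = v.lo * v.lo, c = v.hi * v.hi;
        double hi = std::max(a, c);
        double lo = (v.lo <= 0.0 && 0.0 <= v.hi) ? 0.0 : std::min(a, c);
        return { lo, hi };
    }

    // Does the box possibly cut the unit sphere?  Bare intervals
    // suffice: no operation here can raise a domain exception.
    bool mayCutSphere(BareInterval x, BareInterval y, BareInterval z) {
        BareInterval s{ sqr(x).lo + sqr(y).lo + sqr(z).lo - 1.0,
                        sqr(x).hi + sqr(y).hi + sqr(z).hi - 1.0 };
        return s.lo <= 0.0 && 0.0 <= s.hi;
    }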



But let this particular tool be used by compilers & proof
engines only.  To hand it to the fallable user only invites
error & then mistrust of the standard.

I believe IEEE 1788 should provide the Level 2 model with bare
intervals, bare decorations, and decorated intervals and then
leave it up to vendors and implementers to decide how "automated"
the various combinations of these objects will be at Level 3 and
beyond.

	I'm sorry Nate.  I cannot agree with this.

	This is exactly the sort of thing that will lead to
	a standard that does not assure anyone of anything.

The standard assures (or has to assure) correct level 2 behavior,
and this is what matters for safety.

Of course, people can still program unsafely, but this will always be
possible, no matter what a standard requires.





As John's recent challenge shows, even when the USER makes a
programming bug, the decorations will ensure the exception does
not go unnoticed.

	No.

	You have misinterpreted the meaning of John's challenge.

	There was no bug in it.  The crux of the challenge was
	to recognise that when feedback (either from iteration
	or recursion) leads an output to become a new input,
	that new input must be 'wiped' of its computation
	history as it no longer has any meaning in the context
	of an input.

	The 'bug' is to be found in any program that fails to
	realize that.

	John's challenge was to recognise that this was so.


Both you and Nate Hayes are right.

John Pryce's example program had the bug because it did not specify
a wiping step. This is what Nate Hayes analyzed.

But the challenge had asked to "Refine this into code" that works
correctly.
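
In the sketches above, the wiping step is a one-line reset of the
decoration before the value re-enters as an input; the choice of
isSafe as the fresh-input decoration is again illustrative:

    // When feedback turns an output into a new input, its decoration
    // history is meaningless for that new role and must be reset.
    DecoratedInterval wipe(DecoratedInterval x) {
        x.dec = Decoration::isSafe;   // treat as a fresh input
        return x;
    }

    // e.g. inside an iteration:   x = wipe(step(x));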


Arnold Neumaier