converting numbers to intervals
It seems to me, regardless of the operand and result formats, converting
a number to an interval should yield the smallest interval in the result
format containing the number, and signal an exception if the result format
has no interval containing the operand number,
with default result the empty set.
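To make that concrete, here is a minimal sketch of the rule, assuming
binary64 intervals (with infinite endpoints allowed) as the result format;
the function name is mine, and Python 3.9+ is assumed for math.nextafter:

from fractions import Fraction
import math, sys

def decimal_to_interval(text):
    # Tightest binary64 interval enclosing the exact value of a decimal literal.
    x = Fraction(text)            # exact rational value of the literal
    try:
        f = float(x)              # round-to-nearest double
    except OverflowError:
        f = math.inf if x > 0 else -math.inf
    if math.isinf(f):             # magnitude beyond the largest finite double
        if x > 0:
            return (sys.float_info.max, math.inf)
        return (-math.inf, -sys.float_info.max)
    if f == x:                    # literal is exactly representable
        return (f, f)
    if f > x:                     # nearest double lies above: step the lower end down
        return (math.nextafter(f, -math.inf), f)
    return (f, math.nextafter(f, math.inf))

print(decimal_to_interval("0.1"))      # (0.09999999999999999, 0.1)
print(decimal_to_interval("1.0e400"))  # (1.7976931348623157e+308, inf)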
Likewise converting an interval operand to an interval result should yield
the smallest interval in the result format containing the operand interval,
and signal an exception if the result format has no interval containing the
operand interval, with default result as much of the operand interval as
can be represented, possibly empty.
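The same rule for a narrower result format might look like the sketch
below, converting a binary64 interval to a binary32 interval; numpy is used
only for a convenient binary32 type, and the names are again illustrative:

import numpy as np

def f64_interval_to_f32(lo, hi):
    # Smallest binary32 interval containing the binary64 interval [lo, hi].
    lo32 = np.float32(lo)           # round lower endpoint to nearest binary32
    if float(lo32) > lo:            # overshot: step down one binary32 ulp
        lo32 = np.nextafter(lo32, np.float32(-np.inf))
    hi32 = np.float32(hi)           # round upper endpoint to nearest binary32
    if float(hi32) < hi:            # undershot: step up one binary32 ulp
        hi32 = np.nextafter(hi32, np.float32(np.inf))
    return lo32, hi32

print(f64_interval_to_f32(0.1, 0.3))   # endpoints rounded outward to binary32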
From this point of view, "1.0e400" is a specific number represented in decimal,
and is representable as an interval in any result format that can represent
all the reals.
Remember, the definition of an exceptional situation is one where, no matter
what default is chosen, somebody will take legitimate exception to it for
some application, and so there will be a need to enable other behavior in
that case.
The default standard conversion operations should consider
point operands to be points.
A specific APPLICATION might well have reasons to want to consider legacy
data output from point programs
as representing intervals rather than points. In that case, it may
be that "1.0e400" is shorthand for [0.995e400,1.05e400], and the latter
interval is what should be converted to the target interval format.
Likewise a binary datum might be thought of as shorthand for
an interval and converted accordingly.
Perhaps this is what Van was getting at?
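One concrete reading of that shorthand is sketched below: widen the literal
by half a unit in its last displayed digit, then take the tightest binary64
enclosure. The half-digit rule and the names are assumptions of mine, just
to make the idea tangible:

from decimal import Decimal
from fractions import Fraction
import math, sys

def _enclose(x):
    # Tightest binary64 pair (lo, hi) with lo <= x <= hi, for a Fraction x;
    # the same enclosure rule as in the earlier sketch, repeated so this
    # snippet stands alone.
    try:
        f = float(x)
    except OverflowError:
        f = math.inf if x > 0 else -math.inf
    if math.isinf(f):
        if x > 0:
            return (sys.float_info.max, math.inf)
        return (-math.inf, -sys.float_info.max)
    if f == x:
        return (f, f)
    if f > x:
        return (math.nextafter(f, -math.inf), f)
    return (f, math.nextafter(f, math.inf))

def shorthand_to_interval(text):
    # Treat a decimal literal as shorthand for itself +/- half a unit in the
    # last displayed digit, then enclose the widened interval in binary64.
    d = Decimal(text)
    half = Fraction(10) ** d.as_tuple().exponent / 2
    x = Fraction(d)
    return (_enclose(x - half)[0], _enclose(x + half)[1])

print(shorthand_to_interval("1.0"))      # about [0.95, 1.05], rounded outward
print(shorthand_to_interval("1.0e400"))  # (1.7976931348623157e+308, inf)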
Perhaps the standard should also define such conversions, to enable such
applications, but to do so would require some confidence that the right
shorthand interpretation of a point as an interval is being made. If it's
different for each application, then perhaps the standard should not define
one. After all, the usually unknown error bound on computed point data
is typically wider than half an ulp.
And on another topic, one suggestion: avoid global dynamic modes.
They can be an impediment to efficient hardware for
distributed computation. If there are modes
in the standard, specify them in a way that encourages languages to support
them as static (known at compile time) rather than dynamic.
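To make the contrast concrete, a toy sketch in Python: the names are mine,
and the one-ulp widening is deliberately crude rather than a true directed
rounding. The point is only that the statically named variants carry their
mode in the operation itself, so a compiled language can resolve it at
compile time and no mutable mode state is shared across threads or nodes:

import math

# Dynamic: a process-global mode that every operation must consult at run
# time, in the spirit of fesetround-style global state.
_rounding_mode = "down"

def sqrt_dynamic(x):
    r = math.sqrt(x)    # crude one-ulp widening stands in for directed rounding
    if _rounding_mode == "down":
        return math.nextafter(r, -math.inf)
    return math.nextafter(r, math.inf)

# Static: the direction is part of the operation name, fixed at each call
# site, with no mutable mode state shared between threads or nodes.
def sqrt_down(x):
    return math.nextafter(math.sqrt(x), -math.inf)

def sqrt_up(x):
    return math.nextafter(math.sqrt(x), math.inf)

print(sqrt_down(2.0), sqrt_up(2.0))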