Please vote against Motion P1788/0019.01: Explicit/Implicit idatatypes
Ralph Baker Kearfott wrote:
Since Motion 19 has been made by John Pryce and
seconded by Dan Zuras, the discussion period now begins, and
will end after Friday, September 10. I attach the motion.
I hereby submit a motion that P1788 shall support both explicit and
implicit interval formats (now to be renamed interval datatypes,
idatatypes). I believe it is Motion 19. Details are in the attached
paper, which is a revision of the one on the same subject I circulated
in July.
I strongly advise against accepting this motion.
The motion does not add any useful functionality to the future standard.
Worse, the content of the motion, taken seriously in its consequences,
necessarily dilutes the requirements that can be imposed on an
implementation, to the point that little of use can be relied upon.
But a standard should enforce high quality as far as is reasonable;
otherwise there is no point in having one.
More specifically:
1. Section 3/line 10 requires that ''Implementations based on other
representations should be able to conform to the standard''.
According to Motion M0016.01, which requires that at least one infsup
type is supported, a midrad-only implementation cannot conform to
the standard; so the new motion would contradict a motion already passed
if it allowed a midrad-only implementation to conform.
On the other hand, an implementation offering midrad in addition to
infsup will conform to the standard if its infsup part does, if correct
conversions exist, and if the standard is otherwise silent about
non-infsup types (as it should be).
Thus the motion is superfluous; no separate regulation is necessary.
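The conversions mentioned here are easy to sketch. The following Python
fragment (my own illustration, not part of the motion; function names are
mine) shows outward-rounded conversions between the two forms, using
math.nextafter for the directed-rounding steps:

```python
import math

def infsup_to_midrad(lo, hi):
    """Convert [lo, hi] to (mid, rad) with an enclosing radius.

    The midpoint is rounded to nearest; the radius is then widened
    outward by one step so that [mid - rad, mid + rad] still contains
    [lo, hi].  (A real implementation would use directed rounding
    throughout; one nextafter step is a sketch-level substitute.)
    """
    mid = lo + (hi - lo) / 2.0
    rad = max(mid - lo, hi - mid)
    rad = math.nextafter(rad, math.inf)  # outward rounding keeps enclosure
    return mid, rad

def midrad_to_infsup(mid, rad):
    """Convert (mid, rad) to an enclosing [lo, hi]."""
    lo = math.nextafter(mid - rad, -math.inf)  # round down
    hi = math.nextafter(mid + rad, math.inf)   # round up
    return lo, hi
```

Note that the conversion is only cheap in this direction for bounded
intervals; a finite (mid, rad) pair can never represent a half-unbounded
interval such as [0, inf), since mid and rad would both have to be infinite.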
2. The motion as it stands allows a conforming implementation where
FF (open-faced F) consists of only thin and entire intervals, with
obvious hull and operations. But such an implementation is completely
useless.
Thus the motion implies that the standard cannot enforce anything
interesting.
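To see how little such a requirement demands, here is a hypothetical
Python sketch (my own construction, not from the motion) of exactly such
a datatype: FF contains only thin intervals and Entire, every operation
returns a valid enclosure, and yet almost every result collapses to
Entire.

```python
from fractions import Fraction

# Toy idatatype whose set FF contains only thin intervals [x, x]
# and the entire real line.  Every operation below is a correct
# enclosure, so the type is formally sound -- and practically useless.
ENTIRE = (float("-inf"), float("inf"))

def thin(x):
    """The degenerate interval [x, x]."""
    return (x, x)

def hull(points):
    """Tightest member of FF containing the given points."""
    pts = set(points)
    return thin(pts.pop()) if len(pts) == 1 else ENTIRE

def add(u, v):
    """Enclosure of u + v within FF."""
    if u == ENTIRE or v == ENTIRE:
        return ENTIRE
    s = u[0] + v[0]
    # The sum stays thin only if it is exact in floating point;
    # otherwise the only enclosing member of FF is Entire.
    if Fraction(u[0]) + Fraction(v[0]) == Fraction(s):
        return thin(s)
    return ENTIRE
```

Here add(thin(1.0), thin(2.0)) stays thin, but already
add(thin(0.1), thin(0.2)) must return Entire, since the floating-point
sum is inexact.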
3. Section 3.4/line 3: If the idatatype whose support is required is
not infsup, the implementation is useless for applications in global
optimization and constraint satisfaction, the dominant application
of intervals outside interval analysis.
(To make sense of the proposal in Section 5.8, one would need one
explicit idatatype anyway, which can only be infsup, it seems; cf. the
discussion in Section 3.1.)
4. Section 4.1/line 8-9: This ambiguity of the hull is an ugly feature,
made necessary by an attempted compromise with midrad.
5. Section 4.1/line 17-18: Whatever the hull operation agreed on, it
will be extremely difficult to provide tightest operations for all but
addition, subtraction, multiplication and division (and even for the
latter two it won't be easy). But one would like to have it at least
for the most important elementary functions (see Section 4.2 on
reproducibility).
Now the only way I can see to compute exp(xx) for an arbitrary
interval xx, say, is to compute the function value at the endpoints
to sufficiently high accuracy and then to convert the result to
midrad. Thus one needs the infsup representation as an intermediate
step to get a tightest midrad representation, which makes the midrad
representation unnecessarily slow.
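The detour just described can be sketched in a few lines of Python (my
own illustration; it assumes math.exp is accurate to within one ulp, so
that a single outward nextafter step restores enclosure):

```python
import math

def exp_midrad(mid, rad):
    """Midrad enclosure of exp on the interval mid +- rad.

    Sketch of the infsup detour: exp is monotone increasing, so the
    endpoint values bound the range.  math.exp is assumed accurate to
    within 1 ulp; the endpoint subtractions would also need directed
    rounding in a real implementation.
    """
    lo = math.nextafter(math.exp(mid - rad), -math.inf)  # rounded down
    hi = math.nextafter(math.exp(mid + rad), math.inf)   # rounded up
    # Convert the infsup result back to midrad, widening the radius
    # outward so that [new_mid - new_rad, new_mid + new_rad] covers
    # [lo, hi].
    new_mid = lo + (hi - lo) / 2.0
    new_rad = math.nextafter(max(new_mid - lo, hi - new_mid), math.inf)
    return new_mid, new_rad
```

The infsup endpoints lo and hi appear as an unavoidable intermediate
step; a midrad-only datatype pays for them on every elementary function
call.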
On the other hand, if we don't require tightest results uniformly for
all idatatypes, we cannot require them for infsup, which means that a
user will have no guarantee through the standard beyond pure enclosure.
But to have this, a 1-page standard would be sufficient.
6. Apparently, the primary reason to propose the motion is to
accommodate a minority of interval researchers who want a midrad
arithmetic. However, a standard should cater not to a handful of
people but to a large group of users.
I believe there is currently no demand for anything but infsup
interval arithmetic. The far future can be addressed by a subsequent
version of the standard, and all current applications are served
excellently with infsup arithmetic, which exists in a number of
high-quality implementations. It is easy to simulate midrad
with good accuracy (and even with arbitrary precision) in infsup,
should it be required.
On the other hand, a midrad implementation cannot easily simulate
infsup with good accuracy; for example, there is no support for
intervals such as [-10,10^20] or [0,inf], which makes it unsuitable
for global optimization applications on unbounded domains
(frequent already for linear programs).
Worse, there is not a single midrad implementation that I know of
that provides support for more than addition, subtraction,
multiplication and division. Already in this case, there is not
even agreement on the form that the operation should take in exact
arithmetic: centered (Henrici-style) or optimal (equivalent to infsup)?
The only serious use of midrad arithmetic in nontrivial applications
has been in Rump's IntLab package, which uses the centered variant
(on interval matrices only, not on single intervals) and
overestimates widths by a factor of up to 1.5.
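The two exact-arithmetic variants in question can be written down
directly. The following Python sketch (mine; floats stand in for exact
reals, and rounding is ignored) contrasts them on a single pair of
intervals:

```python
def mul_centered(a, s, b, t):
    """Henrici-style centered product of (a +- s) and (b +- t).

    Midpoint a*b, radius |a|*t + |b|*s + s*t.  Always an enclosure
    of the true product range, but possibly a strict overestimate.
    """
    return a * b, abs(a) * t + abs(b) * s + s * t

def mul_optimal(a, s, b, t):
    """Optimal product: the tightest midrad enclosure, obtained here
    via the infsup endpoint products (exact arithmetic assumed)."""
    products = [x * y for x in (a - s, a + s) for y in (b - t, b + t)]
    lo, hi = min(products), max(products)
    return (lo + hi) / 2, (hi - lo) / 2

# Example: (1 +- 1) * (1 +- 1)
#   centered: midpoint 1, radius 3, i.e. the enclosure [-2, 4]
#   optimal:  the true range is [0, 4], i.e. midpoint 2, radius 2
```

In this example the centered width (6) exceeds the optimal width (4) by
exactly the factor 1.5 mentioned above.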
The case for an efficient implementation of single operations that can
compete in speed with infsup and correctly account for rounding errors
has not been made even for these operations -- neither in theory nor in
practice. There is neither a trial implementation nor a theoretical
paper discussing the relevant issues. In fact, most publications on
midrad arithmetic are highly theoretical and assume exact arithmetic
with the four elementary operations only.
7. There was an argument that midrad needs only a few digits for the
radius, and hence can be a memory-saving device. However, there are no
applications in sight that are so large that memory would be an issue;
moreover, a realization of this advantage could only come from a
hardware implementation. But how this could be more efficient
(in terms of speed and chip size) than an infsup version for the
square root, say, is an enigma.
On the other hand, a few-digit midrad arithmetic overestimates
uncertainties unnecessarily. Let e be the smallest nonzero radius
for an interval with midpoint 1. Then the product of n intervals
1+-e, computed sequentially with centered arithmetic and outward
rounding, is 1+-me with m=2n+1, more than twice as wide as the
corresponding infsup interval. With optimal arithmetic, it is only
slightly better, with a radius of m'e, where m'=2n-2, asymptotically
again a factor of 2 off. In view of the assumption that e has only a
few bits, this is a significant factor.
Arnold Neumaier