I said "Once we've defined what
the IEEE 1788 Interval Arithmetic operations should do, exactly,
and how they should interrelate (and note there are steps we should be
doing even before that) . . .". Obviously somebody will wonder
what I meant.
I think the first step the committee
should take fairly soon is to write and agree on a brief description of exactly
what classic Interval Arithmetic is - what a value consists of, what operations
do what and why, why IA is the way it is, and what applications it's good
at or not so good at. Since there are both range and mean-radius
versions, that means two descriptions, including conversions between them.
I hope 1000 words each would do, but readability should win over
brevity.
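For concreteness, here is a minimal sketch (names mine, in Python, nothing
the committee has agreed on) of the two representations and the on-paper
exact conversions between them; note that in binary64 the computed midpoint
and radius would need outward rounding to preserve containment, which this
sketch ignores:

```python
# Hypothetical sketch: the range (infimum-supremum) and mean-radius
# (midpoint-radius) interval representations, with the exact
# paper-arithmetic conversions. Real binary64 code must round the
# midpoint and radius outward to keep the interval containing.

def infsup_to_midrad(lo, hi):
    """[lo, hi]  ->  (mid, rad) with mid = (lo+hi)/2, rad = (hi-lo)/2."""
    return (lo + hi) / 2, (hi - lo) / 2

def midrad_to_infsup(mid, rad):
    """(mid, rad)  ->  [mid - rad, mid + rad]."""
    return mid - rad, mid + rad

print(infsup_to_midrad(1.0, 3.0))   # (2.0, 1.0)
print(midrad_to_infsup(2.0, 1.0))   # (1.0, 3.0)
```

Both conversions happen to be exact for these inputs; in general the
division by 2 and the subtractions can round, which is exactly the kind
of subtlety the descriptions should spell out.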
Then we should do the same for each
of the other kinds of IA: what's different about it and why, what
it's better or worse at (of course there will be disagreements on that),
and any ways in which it conflicts with standard IA. We all need
to understand this, and future users of the standard do too.
Finally we should think about, agree
on and document how the computer aspects would affect that:
- One example is that IEEE 754
binary64 does not have unlimited precision - what are the implications
of that? How should that affect the standard?
- Another is that it does not
have unlimited small or large exponents.
- At the small end that
leads to subnormals - does that matter?
- At the large end, the
exponent limitation means very large values can overflow where doing IA
on paper you can write an arbitrarily large value, and that's complicated
by the 754 overloading of one pair of values to indicate both Overflows
and Infinities. I think the existence of a maximum finite value has
several implications for IA (another separate topic).
- Another example is that 754
binary has two zeros and 754 decimal has hundreds or thousands of zeros.
Does it really matter that they can be distinguished by examining
the bit pattern, when they do compare equal to each other?
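All three of those binary64 behaviours are easy to observe directly; the
following sketch (Python, which exposes binary64) just demonstrates the
facts the list above raises, not any proposed 1788 treatment of them:

```python
import math

# 1. Overflow: the largest finite binary64 is about 1.8e308; going past
#    it yields the same +inf bit pattern that also represents a true
#    infinity - the "overloading" mentioned above.
big = 1.7976931348623157e308          # maximum finite binary64
print(big * 2)                        # inf

# 2. Limited small exponents lead to subnormals: values below ~2.2e-308
#    lose precision gradually instead of flushing straight to zero.
print(5e-324)                         # smallest positive subnormal

# 3. Two zeros: +0.0 and -0.0 compare equal, yet the sign is
#    distinguishable by other means.
print(0.0 == -0.0)                    # True
print(math.copysign(1.0, -0.0))       # -1.0 - the sign survives
```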
Once we've done all that, we're ready
to finalize the 1788 operation definitions (which may differ slightly from
classic Interval Arithmetic operations) and write the standard.
Please don't take this as a criticism
of what anybody has done or not done so far. I also believe any project
needs free thinking and brainstorming before getting down to a more rigorous
approach, that people need to get to know each other, and that's the stage
we're still at for a while longer.
- Ian Toronto
IBM Lab 8200 Warden D2-445 905-413-3411
----- Forwarded by Ian
McIntosh/Toronto/IBM on 14/04/2009 10:52 AM -----
Ian McIntosh/Toronto/IBM
14/04/2009 10:53 AM
To
STDS-1788@xxxxxxxxxxxxxxxxx
cc
Subject
Re: Motion 4: P1788 on non-754?
IEEE 754 isn't perfect but it offers
more "features" (rounding modes, subnormals, infinities, NaNs,
multiple precisions and exponent ranges, binary or decimal base) than other
floating-point designs.
I support what I think is the intent
of this motion, which I believe is for us to assume all that
capability as our starting point. But . . .
Once we've defined what the IEEE 1788
Interval Arithmetic operations should do, exactly, and how they
should interrelate (and note there are steps we should be doing even before
that), and are all in agreement with that, comes the next step: Review
the operations and interactions to determine what floating point requirements
they impose on the underlying hardware+software floating point implementation.
We may, for example, decide that infinities are essential at least
as lower and upper bounds, meaning that if a non-754 system wanted to follow
the standard, it would have to have some hardware or software representation
of infinity. We may decide that using the 754 bit patterns is necessary,
or we may decide that conversion to and from that as a "data interchange
standard representation" is enough, or we may impose no bit pattern
rules at all.
I hope that our standard allows not
just IEEE 754 binary64 (double precision) but also other IEEE 754 formats:
binary32, binary128, binary64x (Intel's 80 bit version and others'
128 bit versions), decimal32, decimal64, decimal128, arbitrary precision
- whatever the implementers and users decide to use. It's likely that
we can allow some non-IEEE formats like PPC "double double" despite
the fact that it sometimes gives more than 106 bits (whenever there are
zero bits between the upper 53 and the lower 53). CELL SPU single
precision and others which lack subnormals may be ok despite losing precision
around zero. CELL SPU single precision may have a bigger problem
in lacking infinities (and NaNs).
All those decisions depend on what functionality
the operations we define need. The requirements are not just a hardware
issue. If the hardware is deficient and one is willing to pay in
speed and code size or function calls, software can fill in. The
question is whether it's practical, but that's a decision for implementers
and users, not for the standard committee. Users may prefer a slow
implementation for the computer they have over having to buy a different
computer.
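As one illustration of software filling in (my example, not a proposal): a
platform with no directed rounding modes can round to nearest and then
widen each bound by one unit in the last place, which is slower and gives
wider intervals than true directed rounding, but preserves containment:

```python
import math

# Hypothetical software fill-in for missing directed rounding: compute
# each bound round-to-nearest, then nudge it outward one ulp with
# nextafter. Since round-to-nearest errs by at most half an ulp, the
# widened interval still contains the exact result.
def add_outward(a_lo, a_hi, b_lo, b_hi):
    """Containing sum of intervals [a_lo, a_hi] + [b_lo, b_hi]."""
    lo = math.nextafter(a_lo + b_lo, -math.inf)   # one ulp down
    hi = math.nextafter(a_hi + b_hi, math.inf)    # one ulp up
    return lo, hi

lo, hi = add_outward(0.1, 0.1, 0.2, 0.2)
print(lo <= 0.3 <= hi)    # True - the true sum 0.3 is contained
```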
I support what I think is the intent
of this motion now, to let us get on with the important issues, but I also
strongly support refining it later when it's time.