Re: Motion 31: V04.2 Revision of proposed Level 1 text
On 2012-02-13 12:16:44 +0100, Ulrich Kulisch wrote:
> Dramatic advances in computer technology, in memory size, and speed
> have been made since 1985. Arithmetic speed has gone from megaflops
> (10^6 flops) to petaflops (10^15 flops).
But there are new ranges of devices targeted at low power rather
than performance. There are also GPUs (which can be used for
general-purpose computation), whose units are simpler than a
conventional processor core's...
> Problems that are dealt with are getting larger and larger. To
> adapt to this situation the revised standard IEEE 754 (2008)
> provides another floating-point format of 128 bits.
And even larger than this one, with the binary{k} formats. All these
are optional, BTW.
> This is not really useful for interval arithmetic. For large
> intervals there is no need to describe the interval bounds with
> more than 30 decimal digits.
If only inf-sup is available, one may need high-precision bounds.
If you mean that inf-sup is not the best format for high-precision
intervals (typically narrow intervals, because large intervals do
not need high precision IMHO), then I agree. This has already been
discussed.
> I think 99% of the users would be satisfied with double precision
> bounds (16 decimal digits). And for small intervals describing each
> bound by 128 bits would be a waste of silicon and computing time. It
> would be better to represent such an interval as [x] := x + X, where
> x is a double precision floating-point number and X is a double
^^^^^^ I think you mean "quadruple" here (binary128)
> precision interval. This would considerably speed up and simplify
> the arithmetic for small quadruple precision intervals.
Actually, this is some form of the already mentioned triplex format.
There are other alternatives, such as mid-rad, where the radius can
be an error in ulps; this is less powerful, but it should be faster
and can be sufficient.
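To make the mid-rad idea concrete, here is a minimal sketch of my own
(the function name and the error-propagation rule are illustrative
assumptions, not a P1788 proposal): an interval is stored as a
midpoint plus a radius counted in ulps, and addition propagates the
radii conservatively.

```python
import math

def midrad_add(m1, e1, m2, e2):
    """Add two mid-rad intervals stored as (midpoint, radius in ulps).

    A deliberately crude sketch: the rounded midpoint addition
    contributes at most half an ulp of new error, so rescaling the
    operands' radii to ulps of the result (rounding up) and adding
    one extra ulp gives a conservative enclosure.
    """
    m = m1 + m2  # midpoint, rounded to nearest
    e = math.ceil((e1 * math.ulp(m1) + e2 * math.ulp(m2))
                  / math.ulp(m)) + 1
    return m, e
```

For instance, midrad_add(1.0, 0, 1.0, 0) returns (2.0, 1): the sum of
two exact points carries only the rounding of the midpoint addition.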
> I am now going to extend this idea. For interval evaluation of an algorithm
> (a sequence of arithmetic operations) in the real number field an old result
> of interval arithmetic says roughly that increasing the precision by k
> digits reduces the error bound by b^-k, i.e., results can always be
> guaranteed to a number of correct digits by using variable precision
> interval arithmetic. Intervallers have dreamed of this kind of
> application for half a century. A /long interval of length n/ is
> defined as a sum [x] := x1
> + x2 + ... + xn + X. Here the xi, i = 1(1)n, n >= 0, are double precision
> floating-point numbers and X is a double precision interval. Arithmetic for
> such multiple precision intervals is defined and can simply be used by
> operator overloading (in C-XSC for instance).
Actually there are alternatives that are faster. On most machines,
an integer-based multiple-precision implementation (e.g. GMP/MPFR)
is faster than a floating-point based one (I just know that on old
SPARCs, this was not true, because such processors did not have a
hardware integer multiplication, just an FP one).
And in this case, instead of being a double-precision interval,
X could just be a small integer giving an error in ulps.
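The floating-point route quoted above (the x1 + ... + xn part of a
long interval) can be sketched with error-free transformations; this
is my own minimal Python sketch, not C-XSC's API, and it leaves out
the trailing interval X.

```python
def two_sum(a, b):
    """Knuth's error-free transformation: a + b == s + e exactly."""
    s = a + b
    bv = s - a
    av = s - bv
    return s, (a - av) + (b - bv)

def long_sum(values):
    """Accumulate binary64 numbers into a list of components whose
    floating-point sum recovers the exact sum of the inputs
    (a Shewchuk-style expansion): each two_sum is exact, and only
    zero error terms are dropped, so no information is lost.
    """
    parts = []
    for x in values:
        kept = []
        for p in parts:
            x, e = two_sum(x, p)
            if e != 0.0:
                kept.append(e)
        kept.append(x)
        parts = kept
    return parts
```

For instance, long_sum([1e100, 1.0, -1e100]) retains the 1.0 that a
naive left-to-right summation loses entirely.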
But all these internals (Level 2 and below) could be hidden from
the user (unless he wants to deal with them). The user should just be
able to have a high-precision interval arithmetic satisfying the FTIA,
in particular (other features such as decorations can be useful,
depending on the application).
> With increasing width of the interval [x] the word length decreases.
> This variable precision interval arithmetic is the interval
> arithmetic equivalent to extending the word length in floating-point
> arithmetic. (See, for instance, my book /Computer Arithmetic and
> Validity/, de Gruyter 2008, or the nice article by Blomquist et al.
> in the proceedings of the Dagstuhl Seminar 2008, Springer LNCS 5492,
> 2009, [5] on the poster).
>
> I think that everybody of P1788 can imagine without going into
> further detail that the central building block for such variable
> precision interval arithmetic is an exact dot product. Waiting until
> another revision of IEEE754 provides it would postpone interval
> arithmetic from being more widely and successfully used perhaps for
> decades.
Three points about that:
1. One can implement an interval dot product without necessarily
having to use a floating-point dot product. In particular if
integer-based multiple precision can be used. I don't disagree
with a specification of an interval dot product (but it shouldn't
be seen as a basic operation; it should be at the same level as
the elementary functions).
2. The goal of a standard is specification of the behavior, not of
the implementation (building blocks). A standard can itself be
seen as providing a set of building blocks for something above
(for instance, IEEE 754 can be used to implement a double-double
arithmetic).
3. Not all users need an interval dot product with correct rounding
(tightest mode).
> > Also, the exact dot product isn't as simple as described with a
> > large exponent range.
>
> Motion 9 says: An implementation shall at least provide it corresponding to
> binary64 if it provides binary floating-point and decimal64 if it provides
> decimal floating-point. ... It may provide more... See my poster for how
> computing the exact dot product in software or hardware. What can be done in
> the cases of larger exponent ranges is discussed in my book.
This is just about a complete arithmetic, not an exact dot product.
Moreover I think that the above doesn't make sense. First, an
implementation doesn't provide floating-point, but intervals. Then,
the corresponding FP types are not necessarily binary64 and
decimal64: they can be binary32 or binary128, for instance.
Providing a complete arithmetic associated with binary64 would be
awkward.
> Let me summarize what has been said: To adapt the computer more to
> the needs of interval arithmetic, two things are needed:
>
> I. fast hardware support for interval arithmetic (easily obtainable on
> existing processors), and
Not on all processors. Some processors do not even provide (hardware)
floating-point; still, IEEE 754 FP can be implemented on them in
software.
> II. a fast and exact multiply and accumulate operation (continued
> addition) or, equivalently, an exact scalar product for the double
> precision format. (For a simple sketch see the poster.)
Point formats corresponding to intervals don't necessarily include
binary64 or decimal64. And this is not a standard on floating-point
arithmetic; by specifying an exact scalar product in FP, you're
doing what IEEE 754 is supposed to do.
> Both are basic arithmetic operations for interval arithmetic and should be
> listed as such in the document.
The dot product is not a basic operation. It isn't even specified in
other common standards (IEEE 754, language ones...).
> Let me make another remark: Arithmetic for extended real intervals (closed
> and connected sets of real numbers) is free of exceptions. So a standard for
> interval arithmetic could be greatly simplified by a stricter
> separation from IEEE 754 floating-point arithmetic (no automatic
> type transfer from floating-point to interval).
I don't disagree with that (if I understand what you mean).
Regards,
--
Vincent Lefèvre <vincent@xxxxxxxxxx> - Web: <http://www.vinc17.net/>
100% accessible validated (X)HTML - Blog: <http://www.vinc17.net/blog/>
Work: CR INRIA - computer arithmetic / AriC project (LIP, ENS-Lyon)