
Re: back to the roots



Bill,

I don't understand your question. If you do not trust your data, you had better not compute at all.

Assume you somehow got an approximate solution of a linear system and want to improve it by a defect correction. You then assume, of course, that the data of the approximate solution are infinitely precise. Even if no digit of the "approximate" solution is correct, you can compute a highly accurate enclosure with Rump's method. Here, too, you assume that the "approximate" solution is infinitely precise.
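One such defect correction step can be sketched in a few lines of Python. This is an illustrative sketch, not Rump's actual algorithm: `fractions.Fraction` stands in for a hardware long accumulator, and `solve` is any approximate solver supplied by the caller.

```python
from fractions import Fraction

def defect_correction_step(A, b, x_approx, solve):
    """One defect correction step for the linear system A*x = b.

    The 'approximate' solution x_approx is treated as infinitely
    precise data; the residual r = b - A*x_approx is accumulated
    without any rounding (Fraction plays the long accumulator),
    then an approximate solver supplies the correction.
    """
    n = len(b)
    r = [float(Fraction(b[i])
               - sum(Fraction(A[i][j]) * Fraction(x_approx[j])
                     for j in range(n)))
         for i in range(n)]
    e = solve(A, r)                      # any approximate solver
    return [x_approx[i] + e[i] for i in range(n)]
```

Iterating this step typically gains correct digits as long as the residual itself is computed exactly; with a residual rounded in working precision, the iteration stalls once the residual is dominated by rounding error.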

Best wishes
Ulrich



On 28.06.2013 19:07, Bill wrote:
Ulrich,

Can you cite one practical real world example of a computation, other than one in pure mathematics, in which all inputs are infinitely precise degenerate intervals?  That might help to motivate your perceived desire for a long accumulator.

With any measured input values, and therefore non-degenerate interval inputs, I know of no such practical applications for a long accumulator.

Cheers,

Bill

On 6/28/13 8:47 AM, Ulrich Kulisch wrote:
Colleagues:

In the old days computers provided the four elementary operations +, -, ×, / and, additionally, a fifth operation: error-free accumulation of numbers and of products of numbers into a long fixed-point register. The technique can be traced back to the early calculating machine of G. W. Leibniz (1693); see the attachment. Later mechanical computers provided several long result registers; see the upper right corner of the poster in the attachment. This most natural fifth operation was not built into the early floating-point computers: it was too expensive for the technology of those days. Later its superior properties were forgotten. Thus floating-point arithmetic is still somehow incomplete.

Today computer technology is extremely powerful. Reintroducing the fifth basic arithmetic operation into the computer is a step that is long overdue. In modern terminology the long result registers represent a new data format “complete”, together with “complete arithmetic”: error-free accumulation of numbers and of products of numbers, summarized by the three letters EDP, the Exact Dot Product. It is high time to reintroduce this basic operation into scientific computation.
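The complete format can be simulated in software. Every binary64 value is an integer times a power of two, so products of doubles can be accumulated in one wide fixed-point integer and rounded exactly once at the end. A minimal Python sketch of this idea (the width 2148 = 2 × 1074 is chosen to cover the full double exponent range down to subnormals; Python's big integers play the role of the register):

```python
def exact_dot(xs, ys):
    """Dot product via a software 'long accumulator'.

    Every double is p/q with q a power of two not exceeding 2**1074,
    so each product of two doubles fits exactly into a fixed-point
    register with 2148 fractional bits.
    """
    SHIFT = 2148                      # 2 * 1074 fractional bits
    acc = 0                           # the complete register
    for x, y in zip(xs, ys):
        px, qx = x.as_integer_ratio()
        py, qy = y.as_integer_ratio()
        # qx*qy divides 2**SHIFT exactly, so this step is error free
        acc += px * py * ((1 << SHIFT) // (qx * qy))
    # one single, correctly rounded conversion at the very end
    return acc / (1 << SHIFT)
```

With xs = [1e16, 1.0, -1e16] and ys = [1.0, 1.0, 1.0], rounding after every operation yields 0.0, while exact_dot returns the true value 1.0.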

I know, of course, that IEEE P1788 is called a standard for interval arithmetic. But we would make a big mistake if we unnecessarily restricted it to the four elementary operations for intervals. The basic technique for computing close bounds must also be considered. The IFIP letters show that many other numerical analysts share this view.

Interval arithmetic brings guarantees and *safe* bounds into computing. These bounds are frequently overly pessimistic. The EDP is the simplest and most general tool for obtaining *close* bounds. It brings accuracy and speed, and it is free of exceptions. A combination of both is what is needed from a modern computer. Interval arithmetic and the EDP form one unit. We did not get the EDP from IEEE 754, so it is most natural that it be provided in IEEE 1788.

In numerical analysis the dot product is ubiquitous. It is a fundamental operation in vector and matrix spaces. In interval arithmetic it appears in the arithmetic for interval vectors and interval matrices over the real and complex numbers, in long or variable precision interval arithmetic (motion 9, [3]), in defect correction methods, and in Rump's method for inverting arbitrarily ill-conditioned matrices, for instance. Through operator overloading, variable precision interval arithmetic is very easy to use. It enables the use of higher precision operations in numerically critical parts of a computation. With it, the result of every arithmetic expression can practically be guaranteed to a number of correct digits. It is the EDP which makes floating-point and interval arithmetic effective. Designing its implementation is easy, not to say simple, with the tools that are available today. The hardware needed for the EDP is comparable to that of a fast multiplier with an adder tree, accepted years ago and now standard technology in every modern processor. The EDP brings a similar speedup for accumulation at comparable cost; see [4]. The IFIP letters require the EDP [1], [2]. Providing it in IEEE P1788 would be a great service to the entire scientific computing community.

Consider, for instance, computing a sum of matrix or matrix-vector products
s := a×b + c×d + e×f. In the conventional way, in floating-point arithmetic, a rounding is done after each multiplication and addition. With the EDP, in contrast, each component of s is computed exactly. The result can then be rounded to double, triple, quadruple, or higher precision, as required by the application. This simple example has numerous applications in verified numerical computing. Many of the advanced applications discussed in Chapter 9 of my book [4], in the books on the XSC-languages [6], [8], [9], [10], and in the corresponding toolbox volumes [7], [11], [12] are based on applications of the EDP.
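The componentwise-exact evaluation of such a sum can be sketched in Python, with `fractions.Fraction` standing in for the long accumulator; the 2×2 matrices in the usage below are illustrative data, chosen so that conventional rounding cancels catastrophically.

```python
from fractions import Fraction

def edp_matrix_sum(pairs):
    """Each component of s = a*b + c*d + e*f computed exactly.

    'pairs' is a list of (X, Y) square-matrix factors; every product
    and partial sum is held exactly (Fraction as accumulator), and
    each component is rounded to double only once at the very end.
    """
    n = len(pairs[0][0])
    s = [[Fraction(0)] * n for _ in range(n)]
    for X, Y in pairs:
        for i in range(n):
            for j in range(n):
                s[i][j] += sum(Fraction(X[i][k]) * Fraction(Y[k][j])
                               for k in range(n))
    return [[float(v) for v in row] for row in s]

# Illustration: with conventional rounding after every operation,
# the (0,0) component 1e16 + 1 - 1e16 collapses to 0.0; computed
# exactly and rounded once, it is 1.0.
I = [[1.0, 0.0], [0.0, 1.0]]
a = [[1e16, 0.0], [0.0, 1.0]]
c = [[1.0, 0.0], [0.0, 1.0]]
e = [[-1e16, 0.0], [0.0, 0.0]]
s = edp_matrix_sum([(a, I), (c, I), (e, I)])
```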

The simplest and fastest way of computing a dot product is to compute it exactly. By pipelining, it can be computed in the time the processor needs to read the data, i.e., it comes with utmost speed. No intermediate roundings and normalizations have to be performed. No intermediate overflow can occur. No error analysis is necessary. The result is always exact and is independent of the order in which the summands are added. It should be applied as often as possible. Any method that merely computes an approximation also has to consider the relative values of the summands, which results in a more complicated method. Rounding the EDP to a lower precision is done, if required, at the very end of the accumulation.
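The order independence can already be seen for plain sums with Python's `math.fsum`, which behaves like rounding an exact accumulation exactly once (fsum covers sums only, not full dot products):

```python
import math

vals = [1e16, 1.0, -1e16, 1.0]

# Naive left-to-right accumulation rounds after every addition,
# so the result depends on the order of the summands:
fwd = 0.0
for v in vals:
    fwd += v                    # -> 1.0
rev = 0.0
for v in reversed(vals):
    rev += v                    # -> 0.0

# An exact accumulation rounded once is order independent:
exact_fwd = math.fsum(vals)             # -> 2.0
exact_rev = math.fsum(reversed(vals))   # -> 2.0
```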

This letter pleads for extending floating-point and interval arithmetic by the EDP as the fifth elementary operation. It combines the advantages of floating-point arithmetic (no scaling requirement) with those of fixed-point arithmetic (fast and exact accumulation of numbers, and of simple products of numbers, even for very long sums). The EDP reintegrates the advantages of fixed-point arithmetic into digital computing and floating-point arithmetic. It is obtained by putting into modern computing a capability that was already available on calculators before the electronic computer came on stage. It computes a central and frequent numerical operation exactly in all cases. In numerical analysis it makes a big difference whether an operation is exact for many or most data, or for all data.

References

[1] The IFIP WG - IEEE 754R letter, dated September 4, 2007.

[2] The IFIP WG - IEEE P1788 letter, dated September 9, 2009.

[3] U. Kulisch, V. Snyder: The Exact Dot Product as Basic Tool for Long Interval Arithmetic.

[4] U. Kulisch: Computer Arithmetic and Validity – Theory, Implementation, and Applications. De Gruyter, Berlin, New York, 2008; second edition 2013.

[5] F. Blomquist, W. Hofschuster, W. Krämer: A Modified Staggered Correction Arithmetic with Enhanced Accuracy and Very Wide Exponent Range. In: A. Cuyt et al. (eds.): Numerical Validation in Current Hardware Architectures, Lecture Notes in Computer Science, vol. 5492, Springer-Verlag, Berlin Heidelberg, 41-67, 2009.

[6] R. Klatte, U. Kulisch, C. Lawo, M. Rauch, A. Wiethoff: C-XSC – A C++ Class Library for Extended Scientific Computing. Springer, Berlin Heidelberg New York, 1993.

[7] R. Hammer, M. Hocks, U. Kulisch, D. Ratz: C++ Toolbox for Verified Computing: Basic Numerical Problems. Springer, Berlin Heidelberg New York, 1995.
http://www2.math.uni-wuppertal.de/~xsc/xsc/cxsc_new.html

[8] R. Klatte, U. Kulisch, M. Neaga, D. Ratz, Ch. Ullrich: PASCAL-XSC – Sprachbeschreibung mit Beispielen. Springer, Berlin Heidelberg New York, 1991.
http://www2.math.uni-wuppertal.de/~xsc/xsc/pxsc_download.html

[9] R. Klatte, U. Kulisch, M. Neaga, D. Ratz, Ch. Ullrich: PASCAL-XSC – Language Reference with Examples. Springer, Berlin Heidelberg New York, 1992.

[10] R. Klatte, U. Kulisch, M. Neaga, D. Ratz, Ch. Ullrich: PASCAL-XSC – Language Reference with Examples. MIR, Moscow, 1995; third edition 2006 (in Russian).

[11] R. Hammer, M. Hocks, U. Kulisch, D. Ratz: Numerical Toolbox for Verified Computing I: Basic Numerical Problems. Springer, Berlin Heidelberg New York, 1993.
http://www2.math.uni-wuppertal.de/~xsc/xsc/pxsc_software.html

[12] R. Lohner, U. Kulisch, W. Krämer: Numerical Toolbox for Verified Computing II: Advanced Numerical Problems. Draft version, Karlsruhe, 1994.
http://www2.math.uni-wuppertal.de/~xsc/literatur/tb2.pdf

For general information on the XSC-languages see http://www.xsc.de/ or
http://www2.math.uni-wuppertal.de/~xsc/xsc-sprachen.html

 




--
Karlsruher Institut für Technologie (KIT)
Institut für Angewandte und Numerische Mathematik
D-76128 Karlsruhe, Germany
Prof. Ulrich Kulisch

Telefon: +49 721 608-42680
Fax: +49 721 608-46679
E-Mail: ulrich.kulisch@xxxxxxx
www.kit.edu
www.math.kit.edu/ianm2/~kulisch/

KIT - Universität des Landes Baden-Württemberg
und nationales Großforschungszentrum in der
Helmholtz-Gesellschaft


