
Re: back to the roots



We are obviously discussing from different platforms. I assumed that the reader of my mails is familiar with the basic ideas of my book:
Computer Arithmetic and Validity - Theory, Implementation, and Applications.
De Gruyter 2008, second edition 2013.

I feel that I have already spent too much time discussing these topics.

With best regards
Ulrich Kulisch


On 02.07.2013 17:41, Richard Fateman wrote:
On 7/2/2013 8:08 AM, Ulrich Kulisch wrote:
On 01.07.2013 14:36, Vincent Lefevre wrote:
On 2013-06-30 12:08:15 +0200, "Neher, Markus (IANM) [IANM is the organizational unit Institut für Angewandte und Numerische Mathematik at KIT]" wrote:
Many problems end up in solving some linear system. Ulrich Kulisch has
already pointed out how solving linear systems benefits from an EDP.
I haven't seen any algorithm using EDP to solve linear systems.

You also need to show why plain multiple-precision dot product (which
could take much less resource thanks to good precision control)
wouldn't be sufficient.

OK, let's be argumentative.
Let me repeat a paragraph of my mail of June 28:
"The simplest and fastest way of computing a dot product is to compute it exactly.
This is clearly not what you mean. Perhaps you mean "the simplest and fastest way of computing AN ACCURATE VALUE
for a dot product is to compute it exactly." I don't know how one goes about proving such a statement. Certainly
"simplicity" is not an important criterion once an algorithm is reduced to a computer procedure. For example, using
a Fast Fourier Transform to multiply polynomials is not "simplest", but in some circumstances it is "fastest".

By pipelining
This is irrelevant, because a particular computer either can or cannot pipeline operations. If pipelining is available, then
other methods can benefit as well.

it can be computed in the time the processor needs to read the data, i.e., it comes with utmost speed.

I think you are saying that you can compute the dot product in time proportional to the length of the input. I am aware of
only one plausible proposed method that is not similar, and it requires sorting. Instead of being O(n) it is O(n log n), but
even then a radix sort on the exponents is sufficient, and that is O(n).
No intermediate roundings and normalizations have to be performed. No intermediate overflow can occur.
I may be mistaken, but I thought it was necessary to check for "carry".
No exceptions are to be dealt with. No error analysis is necessary. The result is always exact. It is independent of the order in which the summands are added. It should be applied as often as possible.
Are you proposing that a scalar multiplication x*y should be done with EDP, and that a scalar a*x+b be done
with EDP, as <a,1>.<x,b>?

Any method that just computes an approximation also has to consider the relative values of the summands.
Do you mean  any method that claims to provide a correctly rounded dot product?  Certainly the obvious sloppy
accumulation in a "DO" loop can just add up the terms without any effort to consider such issues.
This results in a more complicated method. Rounding the EDP to a lower precision is done, if required, at the very end of the accumulation."
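The exact-accumulation scheme quoted above (no intermediate rounding, order-independent result, a single rounding at the very end) can be sketched in software with exact rational arithmetic. This is an illustrative sketch by the editor, not Kulisch's hardware design; the function name `exact_dot` is invented for the example.

```python
from fractions import Fraction

def exact_dot(xs, ys):
    # Every IEEE 754 binary double converts to a Fraction exactly, and
    # Fraction arithmetic is exact, so acc is the mathematically exact
    # dot product: no intermediate rounding, no overflow, and the result
    # is independent of the order in which the summands are added.
    acc = sum(Fraction(x) * Fraction(y) for x, y in zip(xs, ys))
    return float(acc)  # one correctly rounded conversion at the very end
```

For example, with xs = [1e16, 1.0, -1e16] and ys = [1.0, 1.0, 1.0], a naive floating-point loop returns 0.0 (the 1.0 is absorbed into 1e16), while `exact_dot` returns 1.0.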

With respect to simplicity and speed, the multiple-precision dot product with precision control (MPDP) has no chance of winning the game.
Is there a paper or benchmark to demonstrate this?
With respect to resources I also see the EDP in front: no error control is necessary, no exceptions. All that is needed is some local memory (silicon) on the arithmetic unit of about 1/2 KB in the case of IEEE 754 double precision.
This is irrelevant. You are saying that a fast program can be written to run on a computer that (so far as I know)
cannot now be purchased.

On the IBM mainframes we needed only 1/4 of this memory because of the smaller exponent range, and I do not remember a serious application where this was not enough. So at the cost of allowing simple overflow exceptions (which could easily be dealt with by software) the silicon area would still shrink considerably.
Earlier you said there were no overflows...
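The "1/2 KB of local memory" in the exchange above refers to a fixed-point accumulator wide enough to hold any sum of double products without rounding (a few thousand bits). A software model of that idea, using a Python big integer as the accumulator, might look as follows; `accumulator_dot` and the `SHIFT` constant are the editor's names, and the widths are derived from the binary64 exponent range rather than taken from any particular hardware design.

```python
import math
from fractions import Fraction

# Scale factor so every double product becomes a nonnegative shift:
# product exponents for binary64 reach down to about 2*(-1074), and
# each 53-bit significand contributes a further offset of 53.
SHIFT = 2 * 1074 + 2 * 53

def accumulator_dot(xs, ys):
    acc = 0  # one big integer; its value is acc * 2**-SHIFT
    for x, y in zip(xs, ys):
        mx, ex = math.frexp(x)            # x == mx * 2**ex, 0.5 <= |mx| < 1
        my, ey = math.frexp(y)
        ix = int(mx * (1 << 53))          # 53-bit integer significand
        iy = int(my * (1 << 53))
        e = (ex - 53) + (ey - 53)         # x*y == ix*iy * 2**e exactly
        acc += (ix * iy) << (e + SHIFT)   # fixed-point add, no rounding
    return float(Fraction(acc, 1 << SHIFT))  # one rounding at the end
```

The accumulator never exceeds a few thousand bits plus carry room, which is the memory figure being debated; whether that is cheap silicon is the point of contention, not something this sketch settles.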

Instead, would it make more sense to argue  that there should be an operation in the standard to compute
a tight enclosure for the dot product of two vectors of intervals?
Or a tight enclosure for the solution to a linear system with interval coefficients?
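The tight-enclosure operation suggested here, for point vectors, amounts to computing the exact dot product and then rounding outward once. A minimal sketch, assuming Python 3.9+ for `math.nextafter`; the name `dot_enclosure` is invented, and extending it to vectors of intervals would additionally require taking min/max over the endpoint products of each component.

```python
import math
from fractions import Fraction

def dot_enclosure(xs, ys):
    # Exact dot product via rationals, then one outward rounding step:
    # [lo, hi] is the tightest binary64 enclosure of the true value.
    exact = sum(Fraction(x) * Fraction(y) for x, y in zip(xs, ys))
    mid = float(exact)  # correctly rounded to nearest
    lo = mid if Fraction(mid) <= exact else math.nextafter(mid, -math.inf)
    hi = mid if Fraction(mid) >= exact else math.nextafter(mid, math.inf)
    return lo, hi
```

When the exact value is representable (e.g. the [1e16, 1, -1e16] example, whose dot product with [1, 1, 1] is exactly 1), the enclosure degenerates to a point; otherwise lo and hi are adjacent doubles bracketing the true value.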

Or maybe, going further afield but also looking for a "big" solution,  that
1788 should include an operation that would encapsulate a Newton-like
iteration to a zero of a function?  (Which seems an obvious goal for
people using interval arithmetic -- but raises serious issues about how to
accomplish it....)

RJF




Best regards
Ulrich

-- 
Karlsruher Institut für Technologie (KIT)
Institut für Angewandte und Numerische Mathematik
D-76128 Karlsruhe, Germany
Prof. Ulrich Kulisch

Telefon: +49 721 608-42680
Fax: +49 721 608-46679
E-Mail: ulrich.kulisch@xxxxxxx
www.kit.edu
www.math.kit.edu/ianm2/~kulisch/

KIT - Universität des Landes Baden-Württemberg 
und nationales Großforschungszentrum in der 
Helmholtz-Gesellschaft



-- 
Karlsruher Institut für Technologie (KIT)
Institut für Angewandte und Numerische Mathematik
D-76128 Karlsruhe, Germany
Prof. Ulrich Kulisch

Telefon: +49 721 608-42680
Fax: +49 721 608-46679
E-Mail: ulrich.kulisch@xxxxxxx
www.kit.edu
www.math.kit.edu/ianm2/~kulisch/

KIT - Universität des Landes Baden-Württemberg 
und nationales Großforschungszentrum in der 
Helmholtz-Gesellschaft