
Re: Please listen to Ulrich here...





Hey Ulrich,

I have nothing but respect and admiration for the huge contribution you, your colleagues, and your students have made to the foundation of computing with intervals.  We would not be where we are without you.  Additionally, please do not take anything I write personally; it has nothing to do with our personal friendship, which I hope remains solid.  Nevertheless, I simply cannot neglect fundamental issues that, I believe, will set back the movement of computing with intervals into the mainstream of scientific and engineering computing if they are not addressed.

With all of the above said, I believe you misstated my question in your first sentence, below.  This is fundamental.  My question remains: send a practical example of an interval computing problem in which CA and/or the EDP make it possible to compute significantly narrower interval solution bounds than can be computed using double-precision floating-point intervals when all interval inputs have a width of 1 ulp in, let's say, the 5th or 6th decimal digit.  From my perspective, discussions about hardware availability are irrelevant.  What I am concerned about is preventing scientists and engineers from mistakenly entering degenerate interval inputs into interval computations when those inputs are the result of fallible measurements, and, as a consequence, drawing false conclusions from them.

In this connection, the example you have used to demonstrate the utility of a long accumulator is perfect.  Remember the German steam turbine that vibrated itself loose and crashed through a number of buildings and killed people?  You used to show a newspaper picture of the resulting carnage.  Anyway, the problem of spinning up a large steam turbine is interesting because there are certain RPMs at which resonant vibration can occur.  As a result, the strategy for safely spinning up is to approach one of these resonating RPMs and stop to let the temperature stabilize throughout the turbine, because heating it too fast can cause thermal-gradient-induced cracking.  So, after temperature stability is achieved just below a resonating RPM, revolutions are quickly increased to hop over it, and this process is repeated at all the known resonating RPMs.

The problem is to compute the resonating RPMs, which you showed are the roots of a nonlinear function of a number of measured parameters that characterize the relevant physical properties of the turbine.  As I recall, you showed some of these measured values to about 5 decimal digits of accuracy.

For relatively low RPMs, the roots of this nonlinear function are nicely spaced and therefore easy to locate, even with floating-point arithmetic.  However, at higher RPMs, the function becomes chaotic, with apparently many closely spaced roots, when it is computed using floating-point arithmetic.

So, you replicated the function's computation with intervals and a long accumulator to demonstrate that in fact there are only a few well-spaced roots at the high RPMs.  Your claim was that this demonstrates the practical utility of both computing with intervals and a long accumulator.

My problem is that I do not believe you would have seen the same result if you had entered the measured parameters describing the turbine as non-degenerate intervals reflecting their accuracy.  What I suspect is that at the higher RPMs the computed interval bounds on the function would have been so wide that nobody could tell where its roots, and thus the resonating RPMs, are located.  I believe the conclusion that should be drawn from this and other similar computations is that without additional measurement accuracy there is a limit to the accuracy with which these computations can be performed.  In the steam turbine example, I believe it is much safer to know that you do not know where the resonating RPMs are located than to falsely believe that you do.
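The effect can be sketched in a few lines (the numbers are hypothetical, and the minimal interval type below uses exact endpoint arithmetic, ignoring the outward rounding a real interval library would have to provide):

```python
class I:
    """Minimal interval type: exact endpoint arithmetic, no outward
    rounding (which any production interval library would need)."""
    def __init__(self, lo, hi=None):
        self.lo, self.hi = lo, (hi if hi is not None else lo)
    def __add__(self, o): return I(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o): return I(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        ps = (self.lo*o.lo, self.lo*o.hi, self.hi*o.lo, self.hi*o.hi)
        return I(min(ps), max(ps))
    def width(self): return self.hi - self.lo

def widen(v, rel=1e-5):
    # A non-degenerate input: the value is known only to ~5 decimal digits.
    return I(v*(1 - rel), v*(1 + rel))

# A cancellation-prone expression a*c - b*d with a*c very close to b*d,
# as happens when locating closely spaced roots.
a, c, b, d = 1000.003, 999.997, 1000.001, 999.999

point = I(a)*I(c) - I(b)*I(d)                   # degenerate inputs
meas  = widen(a)*widen(c) - widen(b)*widen(d)   # measured inputs
```

With degenerate inputs the result is a single point, but with inputs widened to measurement accuracy the result interval is dozens of units wide and straddles zero, so not even the sign, let alone a root location, can be determined from it.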

If you recall, I asked whether you would replicate the computation with non-degenerate interval inputs and, as I recall, you said that was not the problem in which you were interested.  OK.  However, I believe this is the problem in which engineers and scientists *should* be interested and the one that we, in our standard, should help them to solve.  Such "help" can take the form of requiring extra effort to enter degenerate interval inputs into an interval computation.

I write all of the above, which I'm sure you remember, in the hope that other readers will come to understand and appreciate the importance of this issue.

With best regards and hopes for our continued friendship:

Cheers,

Bill


On 8/29/13 2:40 AM, Ulrich Kulisch wrote:

Dear Bill,

 

I am sorry, but I feel you partly misunderstood my mail. You asked for a practical example of a problem with intervals in the data where an EDP is useful or essential to solve it.

 

In my answer I mentioned large classes of such problems. I also mentioned methods for solving these problems. Just for an easier understanding of these methods, I first described them for problems with degenerate intervals in the data, where they are well established.

 

I also feel sorry that the discussion seems to be concentrating on the implementation of CA and the EDP. I don't need this discussion. CA and the EDP have been provided in the XSC languages for more than 30 years, and there is no open question left.  Several colleagues familiar with these tools voted yes on motion 9 as well as on motion 47. I have repeatedly mentioned that the solutions are simple. This, however, does not mean that they are trivial. Seeing that, and how simple these tools are, requires studying the details. A certain deficit in doing so is what makes the discussion so tough. Remarks like "a long shift of the products is necessarily slow" or "a large carry propagation is slow" come up again and again.

 

I fully agree with you that progress on problems with non-degenerate intervals in the data is needed. I do not at all intend to distract attention from studying other important tasks of interval arithmetic.

 

Let me finally mention another important problem where only degenerate intervals occur. Consider a semilinear elliptic boundary value problem where you do not know whether a solution exists. There are many applications in the technical sciences and in mathematical physics that lead to such problems. In a first step you would compute an approximate solution. Then, in order to apply a mathematical fixed-point theorem that verifies the existence of a solution, you have to establish close bounds for the expected exact solution. One method that does this requires computing a highly accurate guaranteed lower bound for the smallest eigenvalue of the system matrix, which is very large. In such cases you are very happy to have an EDP: it releases you from constantly checking for intermediate cancellations. For details see

S. M. Rump: Solving Algebraic Problems with High Accuracy, pp. 51–120, in

A New Approach to Scientific Computation, edited by U. W. Kulisch and W. L. Miranker, Academic Press 1983, collected papers delivered at a Symposium held at the IBM Research Center at Yorktown Heights on August 3, 1982.

 

I would be very sorry, Bill, to lose you as a friend. For so many years we have fought together for a common goal.

 

With best regards

Ulrich






On 8/27/13 21:28, G. William (Bill) Walster wrote:


Dear Ulrich,

You write: "In the vast majority of these applications the data are floating-point numbers or, as you call them, degenerate intervals."

My point is that in the vast majority of these applications it is a mistake for the inputs to be degenerate intervals.  That is why it is so important to require extra effort to enter degenerate intervals into interval programs, as is the case in Sun's intrinsic compiler implementation.

If anything, I believe concentrating on the implementation of CA and the EDP has distracted much of the interval research community's time, effort, and energy from other, much more important tasks that need to be solved to bring computing with intervals into the mainstream of science and engineering.

Sorry, Ulrich, but that is my firm belief.

With respect and regards,

Bill


On 8/27/13 3:32 AM, Ulrich Kulisch wrote:
On 8/24/13 21:49, G. William (Bill) Walster wrote:
Ian,

As far as I can tell the only time when a case can be made that EDP is 
essential for interval computations is when all interval inputs are 
degenerate and therefore infinitely precise. Otherwise, with interval 
bounds on the accuracy of typical measured data, I don't see the 
requirement for EDP.  I continue to wait to see even one practical 
example thereof.

Cheers,

Bill

Bill,
 
the dot product is a fundamental arithmetic operation in the vector and matrix spaces, as well as in mathematics generally. Computing an EDP is simple, error-free, and fast. If, in a particular application, the active part of the long accumulator is supported by hardware, it comes with the utmost speed. Neither a conventional computation of the dot product in floating-point arithmetic nor the computation of a correctly (or otherwise) rounded dot product can reach this speed.
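The cancellation problem an EDP eliminates can be sketched in a few lines of Python (exact rational accumulation stands in here for a long accumulator; a hardware EDP would of course be far faster):

```python
from fractions import Fraction

def exact_dot(xs, ys):
    # EDP-style evaluation: every product is accumulated without any
    # rounding, and a single rounding happens only at the very end.
    acc = sum(Fraction(x) * Fraction(y) for x, y in zip(xs, ys))
    return float(acc)

# Massive intermediate cancellation: left-to-right floating-point
# accumulation silently loses the 1.0.
x = [1e16, 1.0, -1e16]
y = [1.0, 1.0, 1.0]

naive = x[0]*y[0] + x[1]*y[1] + x[2]*y[2]   # -> 0.0
exact = exact_dot(x, y)                      # -> 1.0
```

The naive sum returns 0.0 because 1e16 + 1.0 rounds back to 1e16 before the cancellation occurs; the exact accumulation recovers the true value 1.0.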

Let me now comment on your mail. I wonder why you care so much about finding a small corner of applications where the EDP might not be useful.

These are typical applications of interval arithmetic: verified solution of systems of linear or nonlinear equations, guaranteed evaluation of polynomials or of arithmetic expressions, computing enclosures of the solution of an ordinary or a partial differential equation, and others. In the vast majority of these applications the data are floating-point numbers or, as you call them, degenerate intervals. Interval arithmetic allows computing close bounds for the solution and even proving the existence of a solution within the computed bounds.

In these and other cases, computing these results follows a general pattern: first an approximate solution is computed; then bounds for the expected exact solution are established (in the case of a partial differential equation this may require computing highly accurate eigen- or singular values, for instance); then a mathematical fixed-point theorem (Banach, Brouwer, or Schauder, depending on the problem) is applied, which verifies the existence of a solution within these bounds; finally, if necessary, an iterative refinement is applied to improve the result. In all these steps an EDP is repeatedly applied, and it is often essential for success. This can already be seen in the case of computing a verified solution of a system of linear equations.
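For the simplest possible instance of this pattern, a single nonlinear equation, the verification step can be sketched with one interval-Newton step (illustrative only: the function f(x) = x^2 - 2 and the starting interval are made up, and exact rational arithmetic stands in for the outward rounding a real implementation would need):

```python
from fractions import Fraction as F

def f(x):
    return x*x - 2          # a root of f is sqrt(2)

def newton_step(lo, hi):
    # One interval-Newton step N(X) = m - f(m)/f'(X) with f'(x) = 2x.
    # Assumes f'(X) = [2*lo, 2*hi] does not contain 0 (true for [1, 2]).
    m = (lo + hi) / 2
    q = sorted([f(m) / (2*lo), f(m) / (2*hi)])   # point over an interval
    return m - q[1], m - q[0]

lo, hi = F(1), F(2)
nlo, nhi = newton_step(lo, hi)

# Strict containment of N(X) in the interior of X proves, by a
# fixed-point argument, that f has a (unique) root inside X.
verified = lo < nlo and nhi < hi     # True: N(X) = [11/8, 23/16]
```

Here the containment test succeeds, so the existence of a root of f (namely sqrt(2)) inside [1, 2] is mathematically verified, not merely approximated.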

Let me now consider problems with non-degenerate intervals in the data. If the bounds are narrow, the computation follows exactly the pattern described above. There is no reason to take less care in computing the approximate solution, in establishing bounds for the exact solution, or in the verification step.
If the bounds on the data grow, the problem may gradually degenerate into an NP-hard problem.

You probably know that I held patents on the EDP in Europe, the US, and Japan between 1981 and 2007. I paid quite some fees to keep them alive over all those years. The XSC languages Pascal-XSC, ACRITH-XSC of IBM, and C-XSC all provide CA and the EDP. The verification methods developed in these languages make heavy use of these tools. See the language descriptions and the corresponding toolbox volumes. References are listed in my mail of August 7, 2013.

The patents you obtained in the US a few years ago do not make use of CA and the EDP. I am absolutely convinced that the methods which use CA and the EDP are superior to those which do not.

With best regards
Ulrich


-- 
Karlsruher Institut für Technologie (KIT)
Institut für Angewandte und Numerische Mathematik
D-76128 Karlsruhe, Germany
Prof. Ulrich Kulisch

Telefon: +49 721 608-42680
Fax: +49 721 608-46679
E-Mail: ulrich.kulisch@xxxxxxx
www.kit.edu
www.math.kit.edu/ianm2/~kulisch/

KIT - Universität des Landes Baden-Württemberg 
und nationales Großforschungszentrum in der 
Helmholtz-Gesellschaft