
Re: The current proposal



Sorry, the version sent was somewhat mutilated.
Here is the correct version:



Siegfried M. Rump wrote:

> Trust me, it is very natural in interval arithmetic to do interval
> calculations with the bounds.

It is very natural in _any_ programming language to make certain
mistakes that one must learn to avoid. This is part of
learning the ropes.

The standard is not made for naive users who don't understand what
they are doing, but to enable good programmers to write good software.


> For a given function F, for example, X=F(interval(A.sup)) should give an
> inclusion of the value of F at the right bound of A.
>
> With the current proposal, this needs a case distinction:
>
>    if A.sup==Inf
>      X = F(interval(realmax,Inf));
>    else
>      X = F(interval(A.sup));
>    end

No. With the Vienna proposal, the user is asked to write
      X = F(isup(A)),
which is easy to use and easy to remember. (One could even have a
compiler issue a warning if instead of isup(A) the construct
interval(A.sup) appears.)
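
A minimal sketch of the intended behaviour of isup, in the notation of
the quoted case distinction above (it simply mirrors that code; the
normative definition is the one in the Vienna proposal):

   function X = isup(A)
   % enclose the supremum of A in an interval, so that F(isup(A))
   % needs no case distinction at the call site
   if A.sup==Inf
     X = interval(realmax,Inf);
   else
     X = interval(A.sup);
   end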

The Vienna proposal discusses this in detail in Remark 2 of
Section 4.1 (and mentions it also in Section 1.5 on semantical
correctness).


> This burden is on the part of the user.

The burden of correct programming has always been and will always be
on the part of the user, no matter what a standard says.


> Even worse, it is likely to be forgotten and to pass unnoticed.

One could recommend that compilers issue a warning when, instead of
isup(A), the construct interval(A.sup) or other semantically dubious
constructs appear.


> And then false results may appear, the worst thing that can happen
> to a verification method.

No verification method is guarded against incorrect programming, unless
the whole program is passed through a formal verifier (which will surely
be the case some time in the future). Then this error will be
spotted unfailingly.

You are also mistaken if you think that your Intlab implementation
does not give rise to false results because of unintended but naively
assumed behavior. Here are a few examples:

1. Writing naively in Intlab
    Z=intersect(infsup(1,2),infsup(3,4))
    cas=isempty(Z)
gives
intval Z =
       NaN
cas =
     0
suggesting a nonempty intersection of [1,2] and [3,4].
Upon figuring out what happens, I discovered that @intval contains two
different routines isempty.m and isempty_.m, and the second version
should have been used.
    Z=intersect(infsup(1,2),infsup(3,4))
    cas=isempty_(Z)
indeed gives
intval Z =
       NaN
cas =
     1
If a user is required to know and remember this (which is _much_ more
likely to result in an error when a naive user gets it wrong), then
there cannot be anything wrong with having to remember to use isup(A)
in your example above.


2. I tried
    A=infsup(-inf,inf);
    AA=infsup(A)
    a=inf;
    C=A+a;
    CC=infsup(C);
    D=CC-CC
and was surprised to get
D =
  Columns 1 through 11
     0     0     0     0     0     0     0     0     0     0     0
  Columns 12 through 22
     0     0     0     0     0     0     0     0     0     0     0
  Columns 23 through 24
     0     0
although afterwards I could of course figure out why this had to be
the result. I had intended to write
    A=infsup(-inf,inf);
    AA=infsup(A)
    a=inf;
    C=A+a;
    CC=infsup(C);
    D=C-C
which gives
intval D =
       NaN
as expected. But
    C=infsup(-inf,0)+1e400;
    casC=isempty_(C)
gives
casC =
     1
suggesting that C is empty, although you had been stressing many
times before that the result of interval plus real should never
be empty.
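
Note that in MATLAB the literal 1e400 already overflows to Inf, so C is
again the result of adding +Inf to an interval:

    a = 1e400;    % exceeds realmax, hence a == Inf
    isinf(a)      % gives 1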


So much for the illusion of verified computation without having
verified the program, and for the burden on the user to ensure the
correctness of programs in the face of documented or
undocumented behavior.

It does not make any sense at all to complicate a standard just to
hide one possible source of naive semantic errors.


Arnold Neumaier