
1/[0,2]=NaI???



Nate Hayes wrote
(in: Re-submission of motion 5: multiple-format arithmetic):

Arnold Neumaier wrote:
Nate Hayes schrieb:
Arnold Neumaier wrote:
Nate Hayes schrieb:

Moore's theorem remains valid with the proposed definition, since
a real expression defines a real function f(x_1, ..., x_n) only
on domains where no divisor is zero.

Therefore, correctly, for xx=[0,1]
    {1/(x^2-x+1) | x in xx}
       subseteq 1/(xx^2-xx+1) = 1/([0,1]-[0,1]+1) = 1/[0,2] = [1/2,inf],
while with your definition, division by an interval containing zero
gives NaI, and we'd get NaI, violating Moore's law since
    {1/(x^2-x+1) | x in xx} subseteq NaI
does not hold.
The reason it "violates Moore's law" is because x = 0 is not in the
domain of the function! That's the whole point of returning NaI...
???

When I evaluate f(x):=1/(x^2-x+1) at x=0,
I get the perfectly reasonable value f=1.

But 1 is not in NaI, violating Moore's law.

Real analysis for { 1/(x^2-x+1) | x in [0,1] } does not lead to NaI. Its
optimal range enclosure is [1,4/3].

Of course, real analysis does not know of NaI.
But you had been claiming that x=0 is not in the domain of the function,
which is false.


But in the example, you compute a non-optimal range enclosure by composition
of arithmetic operations.

Moore's theorem only requires that an enclosure is obtained, and says
nothing about optimality (which holds only in rare cases). With the
definition of the proposed motion, Moore's law always holds.

But with your definition, Moore's law fails in the above case since
1 in NaI is false.


One of the intermediate operations is 1/[0,2].
This is a violation of Moore's law since division by an interval
containing zero is undefined.

The point is that you introduce an unnecessary violation of Moore's law.


In any case, dropping 0 from the domain of the intermediate step 1/[0,2]
gives [1/2,Inf), which is severely pessimistic.

Nearly as pessimistic is the result of 1/(xx^2-xx+1) for xx=[eps,1]
with tiny eps, although no division by an interval containing zero
occurs.

Thus this cannot be an argument for not allowing division by an
interval containing zero.
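
To make this concrete, here is a minimal Python sketch of the naive
evaluation of 1/(xx^2-xx+1), ignoring directed rounding and using an
extended reciprocal in the spirit of the proposal (1/xx is unbounded,
not NaI, when xx contains zero); the names Ival and enclose are merely
illustrative:

    import math

    class Ival:                        # closed interval [lo, hi], sketch only
        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi
        def __add__(self, o): return Ival(self.lo + o.lo, self.hi + o.hi)
        def __sub__(self, o): return Ival(self.lo - o.hi, self.hi - o.lo)
        def sqr(self):                 # range of x^2 over the interval
            c = [self.lo**2, self.hi**2]
            return Ival(0.0 if self.lo <= 0.0 <= self.hi else min(c), max(c))
        def recip(self):               # 1/xx with the extended division
            if self.lo <= 0.0 <= self.hi:
                if self.lo == 0.0: return Ival(1.0/self.hi, math.inf)
                if self.hi == 0.0: return Ival(-math.inf, 1.0/self.lo)
                return Ival(-math.inf, math.inf)
            return Ival(1.0/self.hi, 1.0/self.lo)
        def __repr__(self): return f"[{self.lo}, {self.hi}]"

    def enclose(xx):                   # naive evaluation of 1/(x^2 - x + 1)
        return (xx.sqr() - xx + Ival(1.0, 1.0)).recip()

    print(enclose(Ival(0.0, 1.0)))   # [0.5, inf]: encloses the range [1, 4/3]
    print(enclose(Ival(1e-8, 1.0)))  # about [0.5, 1e16]: nearly as pessimistic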


By the way, this pessimistic estimate is enough to exclude the box
xx=[0,1], yy=[0,10] in constraint propagation for the constraint
     1/(x^2-x+1) + y <= 0,
while your definition erases all information. Since constraint
propagation is one of the major uses of interval methods in
applications, an interval standard that does not allow one to
do constraint propagation optimally is no good.
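
To spell out the exclusion (hypothetical code, assuming the pessimistic
enclosure [1/2,Inf) obtained above):

    f_lo = 0.5             # lower bound of 1/(x^2-x+1) over x in [0,1]
    y_lo = 0.0             # lower bound of y over [0,10]
    if f_lo + y_lo > 0.0:  # then 1/(x^2-x+1) + y <= 0 is infeasible on the box
        print("box xx=[0,1], yy=[0,10] excluded")

With an NaI result for the intermediate 1/[0,2], no lower bound is
available and the box cannot be excluded.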


Modal intervals improve the situation. Monotonicity gives:
    fR(X) := 1/(X*(Dual(X)-1)+1)
over the monotonic domains x \in [0,.5] and x \in [.5,1].

If one uses branching, one can keep the overestimation small with many
methods. And the fact that you branch just at .5 and then get monotonicity
requires extra analysis which is not part of the modal theory.

With the same amount of analysis, one gets here the exact range
without using intervals at all, since in 1D, a monotone function
attains its extrema at a bound. Thus the range is the hull of the
function values at 0, 0.5 and 1.
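
In code this is only (a sketch, exact up to rounding):

    def f(x):
        return 1.0 / (x*x - x + 1.0)

    values = [f(0.0), f(0.5), f(1.0)]   # monotonicity breaks at x = 0.5
    print(min(values), max(values))     # 1.0 and about 4/3: the exact range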

Therefore there is no need for modal arithmetic in this example.

And what do you do for
   f(x) = 1/sum_{i=1:n} (x_i^2-x_i+1)?
Modal intervals now require an exponential amount of work to get
the exact range. So this cannot be regarded as a solution.
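
To give a rough idea of the growth (assuming, as in the 1D example,
that one branches each coordinate at its monotonicity breakpoint 0.5,
which gives 2^n monotone subboxes):

    for n in (2, 10, 20, 40):
        print(n, 2**n)     # 4, 1024, 1048576, 1099511627776 subboxes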


So
    fR([0,.5]) \union fR([.5,1]) = [1,4/3]
is the optimal range enclosure, and there is no NaI due to overestimation.


I believe nothing in this motion and rationale hinders the
implementation of various forms of non-standard intervals --
Kahan, modal, etc. -- as discussed at the end of Vienna/1.2.
I've mentioned before that this is simply not true. If traps or flags
are the only way to obtain an NaI result from an interval operation such
as 1/[-2,3], this is a hindrance to efficient modal interval
implementations.
This is another reason why modal intervals should not be part of
the standard. It makes the latter unnecessarily complicated,
only to introduce an error-prone technique that can be safely
handled only by a tiny minority of users.
I don't agree at all. It opens 1788 to a wider audience by
clarifying and simplifying.
Modal arithmetic is a very dangerous tool that _easily_ leads to
wrong results without a very good understanding of its theory.

I believe it is just a straw-man, Arnold. There have already been
discussions and examples in this forum of how it is a problem with intervals
in general.

I know of only two dangers of standard interval arithmetic:
1. The unprotected conversion of decimal numbers to floats,
   ignoring round-off, and
2. The use of a fixed-point theorem without having checked continuity
   on the whole box.
Both dangers can be avoided with little care on the part of the user,
and in the Vienna Proposal they are avoided as far as possible
by design.
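
Danger 1 can be seen in a small Python sketch (nextafter serves here
as a crude stand-in for outward rounding):

    from fractions import Fraction
    import math

    x = 0.1                                # nearest binary double, not 1/10
    print(Fraction(x) == Fraction(1, 10))  # False: the point interval [x, x]
                                           # does not contain the number 1/10
    lo = math.nextafter(x, -math.inf)      # outward rounding restores
    hi = math.nextafter(x, math.inf)       # containment
    print(Fraction(lo) <= Fraction(1, 10) <= Fraction(hi))   # True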

With modal intervals, these dangers persist but are multiplied by
the possibility of introducing errors by replacing intervals by
their duals without sufficient justification. This danger is much more
serious since there is no easy way to guard against it, and since
the modal theorems are quite subtle to understand correctly.


Classical endpoint analysis is particularly tedious and
error-prone,

???
Please explain why.


but the monotonicity theorems of modal theory simplify that
quite a bit.

Before applying the modal theory, you need to do the monotonicity
analysis first, as in the classical endpoint analysis; so what you
call the tedious part of the latter is just as tedious when modal
intervals are available. (The remainder is only a few lines of code
in the endpoint analysis.)
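
For the example above, once the monotonicity analysis is done, that
remainder looks roughly like this (a sketch):

    def range_of_monotone(f, a, b, increasing):
        # exact range of a function known to be monotone on [a, b]
        return (f(a), f(b)) if increasing else (f(b), f(a))

    f = lambda x: 1.0 / (x*x - x + 1.0)  # increasing on [0,.5], decreasing on [.5,1]
    lo1, hi1 = range_of_monotone(f, 0.0, 0.5, True)
    lo2, hi2 = range_of_monotone(f, 0.5, 1.0, False)
    print(min(lo1, lo2), max(hi1, hi2))  # 1.0 and about 4/3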


Arnold Neumaier