Re: text2interval again
On 03/08/2013 08:39 AM, Richard Fateman wrote:
On 3/7/2013 8:52 PM, Michel Hack wrote:
Richard Fateman wrote:
On 3/7/2013 12:51 AM, Arnold Neumaier wrote:
That is, something like:
set_rounding_mode(negative_infinity)
y:=sin(x)
is not the kind of thing supported anyplace I know of, to get
a lower bound on sin(x). Perhaps you know of such a system?
One can do it in Java and in C++.
Perhaps I am being unrealistic in thinking that intervals can peacefully
coexist with
machine scalars, and the standard should encourage that, not handicap it.
There exist several interval packages (e.g., in Java and C++) using IEEE
directed rounding support to provide rigorous results.
The standard is supposed to unify the different approaches and to remove
some inefficiencies of the current packages that used to need ugly
and/or slow workarounds.
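As an illustration only (this is not text from the proposed standard), here is a minimal C++ sketch of how such packages obtain guaranteed enclosures from IEEE directed rounding. The type and function names are mine, and it assumes a compiler that honours the dynamic rounding mode (e.g. GCC or Clang with -frounding-math, or a compiler supporting #pragma STDC FENV_ACCESS):

#include <cfenv>
#include <cstdio>

struct Interval { double lo, hi; };

// Enclose the exact value of x*y + z between two directed-rounded evaluations.
Interval enclose_muladd(double x, double y, double z) {
    const int saved = std::fegetround();
    std::fesetround(FE_DOWNWARD);
    volatile double lo = x * y + z;   // every operation rounded towards -infinity
    std::fesetround(FE_UPWARD);
    volatile double hi = x * y + z;   // every operation rounded towards +infinity
    std::fesetround(saved);
    return {lo, hi};                  // lo <= exact x*y + z <= hi
}

int main() {
    Interval e = enclose_muladd(0.1, 0.3, 0.7);
    std::printf("[%.17g, %.17g]\n", e.lo, e.hi);
}

Rounding every operation downwards can only decrease the computed value, and rounding upwards can only increase it, so the exact result is bracketed regardless of the signs of the operands; the volatile qualifiers merely discourage the compiler from merging the two syntactically identical expressions.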
But if you insist on doing number parsing as part of the standard, then
"1/10" is a far more explicit and obvious notation than "0.1".
Parsing arbitrary rationals or arbitrary decimal floats makes little difference, either in programming effort or in the resulting speed.
Note that text2interval is supposed to be used mainly for conversion
from human-written text in some program (though it could be exploited
for other tasks).
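To make the intended behaviour concrete, here is a hedged sketch (the names are mine, not the standard's specification) of what a text2interval-style string conversion can look like. It assumes that strtod honours the current rounding mode, which holds on C11 Annex F conforming implementations such as glibc, but is not guaranteed everywhere:

#include <cfenv>
#include <cstdlib>
#include <cstdio>

struct Interval { double lo, hi; };

// Return the tightest double interval enclosing the exact decimal value in s
// (both endpoints coincide when s is exactly representable).
Interval text_to_interval(const char* s) {
    const int saved = std::fegetround();
    std::fesetround(FE_DOWNWARD);
    double lo = std::strtod(s, nullptr);   // largest double <= value of s
    std::fesetround(FE_UPWARD);
    double hi = std::strtod(s, nullptr);   // smallest double >= value of s
    std::fesetround(saved);
    return {lo, hi};
}

int main() {
    Interval x = text_to_interval("0.1");
    std::printf("0.1 lies in [%.17g, %.17g]\n", x.lo, x.hi);
}

A rational literal such as "1/10" would be handled with the same idea: parse numerator and denominator and perform the division once rounded downwards and once rounded upwards.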
If the mathematician was so inclined to prove something about x in
[1/10, 2/10] with a computer, then he (or she) should first make
sure that [0.1,0.2] meant the same as [1/10,2/10].
For a mathematician, these indeed mean precisely the same.
Indeed, interval methods were partly invented so that one can do rigorous mathematics even though a computer with inexact computations is involved. See, e.g., my paper on computer-assisted proofs,
http://www.mat.univie.ac.at/~neum/ms/caps.pdf
Are we going
to protect the mathematician from all possible representation-
related issues in the scalar world outside intervals?
YES -- that's precisely what text2interval() can do: make the program
portable to environments with different underlying representations,
while at the same time allowing the properties of the representation
to be fully exploited.
This goal does not seem objectively satisfiable, unless I am misunderstanding something. It seems to me that programs that work in double-float may fail in single-float: not in the validity of an enclosure, but in the usefulness of the answer, namely its tightness. If your goal is to test whether two intervals intersect, your program may not work the same everywhere.
Yes. Decisions based upon an interval program must have three-way
branches, depending on whether a positive decision, a negative decision,
or no decision is reached. This is not really different from the usual
try-catch mechanism where the inability of a program to give an answer
(since it may fail) is accounted for explicitly.
Not reaching a decision is not counted as a failure in logic, but only
as a failure in deciding a problem. As there are undecidable problems in
mathematics anyway, the failure to decide something is not that severe.
What correctly programmed interval methods (and related methods based on directed rounding) assure, however, is that _whenever_ a decision is arrived at, it is provably correct.
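To make the three-way branching concrete, here is a small sketch in C++ (the names are mine, for illustration): a comparison of two real quantities that are only known through enclosing intervals returns a positive decision, a negative decision, or no decision at all.

#include <cstdio>

struct Interval { double lo, hi; };

enum class Decision { Yes, No, Undecided };

// Is the real number enclosed by a provably less than the one enclosed by b?
Decision certainly_less(Interval a, Interval b) {
    if (a.hi < b.lo)  return Decision::Yes;   // every point of a is below every point of b
    if (a.lo >= b.hi) return Decision::No;    // every point of a is at least every point of b
    return Decision::Undecided;               // the enclosures overlap
}

int main() {
    switch (certainly_less({1.0, 2.0}, {1.5, 3.0})) {
        case Decision::Yes:       std::puts("provably less");     break;
        case Decision::No:        std::puts("provably not less"); break;
        case Decision::Undecided: std::puts("no decision; refine the enclosures"); break;
    }
}

Whenever Yes or No is returned, the answer is mathematically certain; only the Undecided branch reflects the limits of the enclosures used.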
Moreover, for many problems one can restart an algorithm in case of failure with higher and higher precision until one gets the answer. Indeed,
this is done for quite a number of computational geometry algorithms,
where the correct performance depends on making certain branching
decisions in a way guaranteed to be mathematically correct - otherwise
visibly wrong artifacts may be produced.
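A hedged sketch of this filter-and-restart pattern for the classical 2D orientation predicate follows. The integer coordinate type and the 30-bit range restriction are my assumptions, chosen so that the exact restart fits into 64-bit arithmetic, and a compiler honouring the dynamic rounding mode is again assumed:

#include <cfenv>
#include <cstdint>
#include <cstdio>

// Sign of the determinant (b-a) x (c-a): +1 left turn, -1 right turn, 0 collinear.
// Coordinates are assumed to fit into 30 bits (signed), so that the exact
// determinant fits into an int64_t.
int orientation(int32_t ax, int32_t ay, int32_t bx, int32_t by,
                int32_t cx, int32_t cy) {
    // Differences of such integers are exact in double.
    double dx1 = (double)bx - ax, dy1 = (double)by - ay;
    double dx2 = (double)cx - ax, dy2 = (double)cy - ay;

    // Fast path: enclose the determinant with directed rounding.
    const int saved = std::fegetround();
    std::fesetround(FE_DOWNWARD);
    double p1_dn = dx1 * dy2, p2_dn = dy1 * dx2;
    std::fesetround(FE_UPWARD);
    double p1_up = dx1 * dy2, p2_up = dy1 * dx2;
    double hi = p1_up - p2_dn;                 // upper bound, rounded up
    std::fesetround(FE_DOWNWARD);
    double lo = p1_dn - p2_up;                 // lower bound, rounded down
    std::fesetround(saved);
    if (lo > 0) return +1;                     // provably a left turn
    if (hi < 0) return -1;                     // provably a right turn

    // No decision: restart the computation exactly in 64-bit integers.
    int64_t det = ((int64_t)bx - ax) * ((int64_t)cy - ay)
                - ((int64_t)by - ay) * ((int64_t)cx - ax);
    return (det > 0) - (det < 0);
}

int main() {
    std::printf("%d\n", orientation(0, 0, 1, 0, 0, 1));   // prints 1: a left turn
}

The directed-rounding path decides the vast majority of calls cheaply; the exact restart is taken only when the enclosure of the determinant straddles zero, so no visibly wrong branching decision can ever be made.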
Perhaps one can show that in some sense a better representation will
always get
an answer that is at least as good as a worse representation.
There are certain such results, based on inclusion monotony of interval arithmetic (if X is contained in Y, then the interval evaluation of an expression over X is contained in that over Y), and there are other, asymptotic results showing that ultimately arbitrarily accurate bounds are obtainable.
See, e.g., my book on interval methods,
A. Neumaier,
Interval Methods for Systems of Equations,
Encyclopedia of Mathematics and its Applications 37,
Cambridge Univ. Press, Cambridge 1990.
The value is determined by the mathematical problem posed, which is
usually in text format.
I really don't see this the same way you do. There are applications in
which some physical problem starts with some uncertainty. A scientist
will not know the uncertainty exactly and so will add some "slop" to it.
Of course, there are also such applications. In that case the precise bounds would not matter, but even then it is useful to know that the result obtained is correct for the problem exactly as specified.
In contrast, pure floating-point calculations may produce answers that are arbitrarily far from the true answer, sometimes without any warning.
A simple example is the harmless-looking program
x=0.2;
for k=1:30, x=6*x-1; end;
which has the exact result 0.2. But the computed result in double precision floating-point arithmetic is orders of magnitude away: the rounding error made in representing 0.2 in binary is multiplied by 6 in every iteration, hence amplified by a factor of about 6^30, i.e., more than 10^23.
This would be revealed by an interval computation of the form
xx=text2interval('0.2');
for k=1:30, xx=6*xx-1; end;
However, your suggestion to use instead
x=0.2;
xx=num2interval(x,x);
for k=1:30, xx=6*xx-1; end;
would
- either (if num2interval does not move the bounds) give an initial xx that does not contain the correct value, so that no guarantee about the subsequent behavior would be possible;
- or (if num2interval moves both bounds outward by one ulp) approximately double the width of the final result.
The same kind of problem may arise in calculations where it is far less easy to analyse the reasons for this behavior.
The standard should make these things computable reliably and without unnecessarily wide results.
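For readers who want to try the 0.2 example before a library provides text2interval, its effect can be emulated in C++ by enclosing the real number 0.2 between neighbouring doubles and iterating with directed rounding. The one-ulp widening below is my simplification (safe, but slightly wider than the tightest enclosure a real text2interval would return), and the usual caveat about compilers honouring the rounding mode applies:

#include <cfenv>
#include <cmath>
#include <cstdio>

int main() {
    // Plain floating point: drifts far away from the exact fixed point 0.2.
    double x = 0.2;
    for (int k = 0; k < 30; ++k) x = 6.0 * x - 1.0;

    // Enclosure version: [lo, hi] contains the real number 0.2 throughout.
    double lo = std::nextafter(0.2, -1.0);   // a double below 0.2
    double hi = std::nextafter(0.2,  1.0);   // a double above 0.2
    const int saved = std::fegetround();
    for (int k = 0; k < 30; ++k) {
        std::fesetround(FE_DOWNWARD);
        lo = 6.0 * lo - 1.0;                 // stays <= 0.2 (the map is increasing)
        std::fesetround(FE_UPWARD);
        hi = 6.0 * hi - 1.0;                 // stays >= 0.2
    }
    std::fesetround(saved);

    std::printf("plain double : %g\n", x);             // millions away from 0.2
    std::printf("enclosure    : [%g, %g]\n", lo, hi);  // very wide, but contains 0.2
}

The plain double result ends up millions away from 0.2, while the enclosure, although it has become very wide, still contains the exact answer; its width makes the numerical instability of the recursion visible instead of hiding it.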
This affects integers as well as various
approximations to real-number arithmetic. By providing appropriate
primitives we can avoid this issue, to the extent that representation
issues will affect the quality of an implementation (e.g. tightness),
but not the correctness (i.e. containment).
It seems to me that correctness is easily achieved by providing routines
that always return Entire, since
that will enclose any valid answer.
True, but for the standard as currently proposed it can be proved that the overestimation in interval function evaluation tends to zero as the width of the interval goes to zero and the accuracy of the computation increases; this is a prerequisite for successful applications to global optimization.
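To indicate the kind of statement meant (a paraphrase of standard results such as those in the book cited above, not of the draft's wording): for a Lipschitz continuous expression f and its natural interval extension F, evaluated in exact interval arithmetic,
   dist( F(X), range of f over X ) <= c * wid(X)
for some constant c depending on f and the domain, where dist is the Hausdorff distance and wid the interval width; centered forms improve this to O(wid(X)^2), and finite-precision evaluation adds a further term that vanishes as the working precision increases.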
I suppose there is a circumstance in which just
returning Entire (without bothering to compute anything!) is incorrect.
It is never incorrect in an interval context, but it can often be avoided, and it is avoided more often if one does not widen intervals unnecessarily (for example by moving each bound of a round-to-nearest result outward by one ulp, which doubles the width).
That is, if
the program never halts, and you didn't notice that, so returning Entire
was incorrect.
If a program never halts it cannot return Entire, since returning
requires halting.
Anyway, even a better version of correctness is still quite easy at
least for bounded
intervals and rational operations. Quality matters.
I recall, but cannot spot, some discussion of 0.5 vs 0.50.
According to general consensus, these are the same numbers, both in
mathematics and in computer science. So no discussion is needed.
Arnold Neumaier