
Re: Proposal for a new structure of the standard



On 2010-07-13 06:43:25 -0700, Dan Zuras Intervals wrote:
> > * Does IDBar have to be a finite set? This doesn't seem to be really
> >   useful and that would rule out some implementations, such as
> >   intervals of GMP integers, for instance.
> 
> 	Well, two things here.
> 
> 	First, no, it does not have to be a finite set.  It is
> 	just finite in any implementation known or considered &
> 	therefore useful in that sense.
> 
> 	And, second, that includes GMP.  GMP may be able to
> 	represent a great many more numbers.  But not infinitely
> 	many. :-)

According to the *specification*, the numbers are not bounded, thus
the set is infinite. One can deduce obvious bounds from the internal
structure, but I don't think such bounds are of any interest: in
practice, the limit is imposed by the available memory, with (in
general) undefined behavior when the available memory is exhausted.
This is very different from the set of floating-point numbers in
fixed precision, where the limits (range and precision) are
well-known and the specification is clear (though there may still
be implementation limits in practice, e.g. because in general some
memory will be dynamically allocated).

> > * IMHO, the definition of the hull should not be required. It seems
> >   to be useful only for the tightest arithmetic, and the standard
> >   shouldn't require things that are not used.
> 
> 	The definition of the hull is required for uniqueness.
> 	Which would be true whether or not tightest arithmetic
> 	is in effect.

In the paper, the hull is *not* used for the "valid" version, so it
is completely useless when only the "valid" version is available.
And the hull is *not* required for uniqueness: you can define
uniqueness (but uniqueness of what?) via a hull, but there are
other ways to do it.

> 	It is also required for reproducibility.

Ditto, it is *not* required for reproducibility. A fixed algorithm
in a fixed arithmetic (i.e. with no optimizations that can change
the behavior) is sufficient for reproducibility.
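
A minimal sketch of what I mean (my own example, assuming the
compiler honors the FENV_ACCESS pragma): interval addition written
directly on binary64 with directed rounding returns bit-for-bit the
same result on every conforming IEEE 754 platform, and no hull is
involved anywhere.

  #include <stdio.h>
  #include <fenv.h>

  #pragma STDC FENV_ACCESS ON

  typedef struct { double lo, hi; } interval;

  /* A fixed algorithm in a fixed arithmetic (binary64 with
     directed rounding): reproducibility comes from the underlying
     arithmetic, not from a hull requirement. */
  static interval iadd(interval x, interval y)
  {
      interval r;
      int save = fegetround();
      fesetround(FE_DOWNWARD);
      r.lo = x.lo + y.lo;
      fesetround(FE_UPWARD);
      r.hi = x.hi + y.hi;
      fesetround(save);
      return r;
  }

  int main(void)
  {
      interval x = { 0.1, 0.1 };   /* binary64 data, taken as exact */
      interval y = { 0.2, 0.2 };
      interval z = iadd(x, y);
      printf("[%.17g, %.17g]\n", z.lo, z.hi);
      return 0;
  }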

>       Not merely reproducibility among implementations but
> 	reproducibility WITHIN an implementation. For without it, how
> 	is one to know that this interval is the same as one computed
> 	earlier?

With the algorithm that computes it: reproducibility is ensured
by the underlying arithmetic.

> 	Should we, as a standards body, decide & specify the
> 	algorithm used to compute the hull?  Or, if not the
> 	algorithm itself, the properties to be met by a hull
> 	that provides for uniqueness?

Since the hull isn't used for the "valid" version, what's the point?

The obvious use of the hull is just to return the hull of some subset
defined at Level 1: this is exactly what the "tightest" version does.
So, if you do not want the "tightest" version, how would you use the
hull?
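
For illustration (my own sketch with hypothetical names, using
MPFR; link with -lmpfr -lgmp): given the exact (here:
high-precision) bounds of a Level 1 set, the hull just rounds the
infimum down and the supremum up to the target format, which is
precisely the "tightest" enclosure.

  #include <stdio.h>
  #include <mpfr.h>

  typedef struct { double lo, hi; } interval;

  /* Hull of a Level 1 set given by its infimum and supremum:
     round the infimum down and the supremum up to the target
     format.  This is what the "tightest" version returns; a
     "valid" version may return anything wider. */
  static interval hull(mpfr_t inf, mpfr_t sup)
  {
      interval r;
      r.lo = mpfr_get_d(inf, MPFR_RNDD);
      r.hi = mpfr_get_d(sup, MPFR_RNDU);
      return r;
  }

  int main(void)
  {
      mpfr_t a, b;
      mpfr_inits2(200, a, b, (mpfr_ptr) 0);
      mpfr_set_ui(a, 1, MPFR_RNDN);
      mpfr_div_ui(a, a, 3, MPFR_RNDD);   /* lower bound on 1/3 */
      mpfr_set_ui(b, 1, MPFR_RNDN);
      mpfr_div_ui(b, b, 3, MPFR_RNDU);   /* upper bound on 1/3 */

      /* Tightest binary64 enclosure of {1/3}. */
      interval x = hull(a, b);
      printf("[%.17g, %.17g]\n", x.lo, x.hi);

      mpfr_clears(a, b, (mpfr_ptr) 0);
      return 0;
  }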

> > * Should there be a notion of internal and interchange idatatypes?
> 
> 	Perhaps there should.  But that is a discussion for
> 	another day.  It need not touch on the issues here.
> 
> 	John has gone to great trouble to couch this discussion
> 	in terms of level 1 & level 2 notions only.
> 
> 	Issues of representation, whether exchangeable or not,
> 	don't enter into it.

Note: in my question, I meant at Level 2.

> 	Of course interval arithmetic is not designed to detect
> 	bugs in processors.  Or anywhere else for that matter.
> 	Whatever gave you THAT idea?

Then the paragraph on the Pentium bug is meaningless.

> 	But interval arithmetic IS designed to assure the user of
> 	the correctness of the results that are computed with it.
> 	For were it not so why would any user go to all the trouble
> 	needed to use it?  One can always compute faster in some
> 	hardware floating-point.  And one can get more digits more
> 	easily computing with GMP, MPFR, et al.  But neither of
> 	those methods tell you exactly HOW MANY of those digits
> 	are correct, if any.

You can do a static error analysis.
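E.g. (my own sketch; the standard first-order bound from
floating-point error analysis, not something from the paper), for
recursive summation in binary64 with unit roundoff u = 2^-53:

  #include <stdio.h>
  #include <math.h>

  /* Static (a priori) error analysis for recursive summation:
     to first order, |computed - exact| <= (n-1) * u * sum|x_i|,
     with u = 2^-53 for binary64.  This already says how many
     digits are correct, without any interval computation. */
  int main(void)
  {
      double x[] = { 1.0, 1e-16, -0.25, 3.5e8, 7.0 };
      int n = sizeof x / sizeof x[0];
      double s = 0.0, abs_s = 0.0;
      for (int i = 0; i < n; i++) {
          s += x[i];
          abs_s += fabs(x[i]);
      }
      double u = ldexp(1.0, -53);
      double bound = (n - 1) * u * abs_s;   /* first-order bound */
      printf("sum   = %.17g\n", s);
      printf("error <= about %.3g\n", bound);
      return 0;
  }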

> 	Intervals do that for you.
> 
> 	But only if they are trusted to do so.

And you don't need reproducibility/uniqueness/... for that, only
the FTIA (Fundamental Theorem of Interval Arithmetic).

> 	Quality of an implementation goes a long way to gaining
> 	that trust among those who are not trained in formal
> 	interval methods.  That means how tight a result can be
> 	returned by an implementations.  It also means how well
> 	it works in other ways that go towards trust.  Does it
> 	have bugs or not?  Are the answers returned the answers
> 	expected?  Can they be understood?  Do I believe them?
> 	That sort of thing.
> 
> 	Reproducibility is part of that.  If I get one answer on
> 	one machine & another on another, why should I trust
> 	EITHER of them?

Because of the FTIA.

>       It has been suggested that in that case the TRUE answer must,
> 	therefore, be in the intersection of both. But that is only a
> 	valid conclusion if I trust both answers.

There's no reason not to trust them, unless you assume that there may
be a bug in the implementation or a hardware problem. If you assume
that at least one of the answers is correct, you can take the union.
If you don't trust anything, then there's nothing one can do for you.
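
A small sketch of that logic (my own example; the two enclosures
are hypothetical): if both implementations are valid, the exact
value is in the intersection; if only "at least one is correct" is
assumed, only the union can be concluded.

  #include <stdio.h>
  #include <math.h>

  typedef struct { double lo, hi; } interval;

  /* Both enclosures trusted (the FTIA holds for both
     implementations): the exact value is in the intersection.
     Only "at least one is correct" assumed: one can only conclude
     that it is in the union. */
  static interval isect(interval a, interval b)
  {
      interval r = { fmax(a.lo, b.lo), fmin(a.hi, b.hi) };
      return r;   /* empty (lo > hi) would reveal an inconsistency */
  }

  static interval iunion(interval a, interval b)
  {
      interval r = { fmin(a.lo, b.lo), fmax(a.hi, b.hi) };
      return r;
  }

  int main(void)
  {
      /* Hypothetical enclosures of 1/3 from two implementations. */
      interval a = { 0.333,  0.3334 };
      interval b = { 0.3332, 0.3336 };
      interval i = isect(a, b), u = iunion(a, b);
      printf("intersection: [%g, %g]\n", i.lo, i.hi);
      printf("union:        [%g, %g]\n", u.lo, u.hi);
      return 0;
  }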

If you don't trust a given implementation, then you should use two
different implementations. And if you want to detect inconsistencies
between them, they should implement the tightest version.

-- 
Vincent Lefèvre <vincent@xxxxxxxxxx> - Web: <http://www.vinc17.net/>
100% accessible validated (X)HTML - Blog: <http://www.vinc17.net/blog/>
Work: CR INRIA - computer arithmetic / Arénaire project (LIP, ENS-Lyon)