
Re: Version 2.11, Proposal for interval standardization



Arnold Neumaier wrote:
Sylvain Pion wrote:
[...]
Well, I *think* it is indirectly, but explicitly forbidden in your
current proposal, and that's the whole point.  The reason I think
so is that in your proposal, functions are defined in isolation,
and at best they each return the "tightest" accuracy, and they refer
only to their arguments.  So, if x=[1,1], evaluating x/[3,3] *must*
return an interval y which is the tightest result and not a singleton,
and y*[3,3] then *must* return the tightest of this, which is not
a singleton.  So the transformation to [1,1] is not allowed.

The standard is supposed to define interval arithmetic on machines,
not what people can do with it.

Well, this standard will serve, among other things, as a base for building
language standards.
So, my question is: if a C++ implementation performs the transformations
I mentioned, will it still be able to claim that it conforms to IEEE-1788,
or is it violating it?

In my opinion, it still conforms, since the standard says nothing about how the
operations are used. So if the C++ user can access all operations
individually, it conforms to the standard, even when transformations
are made in composite expressions.

We disagree about what the text implies, but we agree on what it should mean.
That's OK with me: we can probably better tackle those wording details later anyway.


Thus we specify single operations, but not how they may be combined, and under which conditions better results are obtained. This is a matter of theory on interval methods, not of a standard.

Theory tells (or does not tell) what gives best results in
infinite-precision interval arithmetic; the standard guarantees that
if you program this using a standard-conforming implementation, you'll get correct enclosures of the theoretical result.

A programming system with elaborate syntax transformation capabilities
can implement some or all of the theory - but then it must still call
routines that do the elementary operations. How to do these (and only
that) must be specified by the standard.

I think a description of valid transformations should be part of IEEE-1788,
just like there are some in IEEE-754 for floating-point arithmetic.

Your section 2.6 is even one of them; I just propose to improve it.

So far, you have not given any concrete arguments against what I proposed
(I mean, in terms of applications where this would be a problem to allow
tighter intervals for expressions if the containment property is maintained).

Well, at least occasionally one wants to be able to control what
happens. So there should be a way in which the user can protect
a piece of code from being transformed.

Let me cite one case from experience. Our current Gloptlab constraint
solving package contains more modern versions of the same feature; so
this is not purely academic.

When, in 1985, I wrote my first interval branch-and-bound program for
enclosing pictures of implicit curves, occasionally a few pixels were
mysteriously missing. I had checked all roundings for correctness,
and was puzzled for a long time.

Until I noticed that the inner product was done automatically
with Kulisch's accurate inner product, which usually is a benefit
in accuracy. In my particular case, however, I had silently relied
on the fact that the inner product was done in the standard way
when doing a subsequent nonstandard inner operation - and that had
caused the failure. Once I knew what happened, it was easy to fix
the bug. But when this happens unexpectedly...


Interesting.
At least it sounds quite rare, so we could still allow
those transformations in the standard.


In C++0x, there is a new facility named "constexpr". This facility allows
one to build compile-time constants of elaborate types such as interval,
and to force constant propagation through some functions, in order
to build such constants (Gabriel Dos Reis knows everything about it).

All this is specific to C++0x, but one could imagine that, if IEEE-1788 specified some additional requirements for constant expressions over intervals,
they would be implemented using this facility, rather than by parsing
some text/string as your section 2.6 mentions.

This would be the right tool if the constant expressions are specified
as ordinary expressions in terms of exact literals.

But the standard should be language independent...

Just to make my point clear, and make sure we agree:
if IEEE-1788 requires being able to parse the text of expressions in
the format specified in 2.6, then I find it not language-level-friendly.
It would be a boring-to-implement and useless requirement (and not
constexpr-friendly for the particular case of C++0x).
Specifying I/O text/literals formats for isolated intervals is fine,
but going to expressions is too much, IMO.  Languages already have
their own ways and syntax to specify expressions; we should just talk
about expressions in a more abstract way than text format in IEEE-1788.


One could also imagine that
the additional precision requirements (tighter intervals) -- if we end up
mandating something like that -- could be done by the compiler transparently,
and required by the C++-interval standard for such constants, maybe.
From a language point of view, this would be much cleaner than requiring
to build special constants from strings representing expressions.

So, my points are still:
- the notion of "text" is not relevant to the tight interval constant
  requirements of applications.
- the possibility of tighter intervals for expressions (combinations of
  functions) should not be restricted to the kind of expressions found in
  section 2.6, and it is useful to generalize it.
- if some application domains have special precision requirements for
  some interval constant expressions, we should maybe _mandate_ something
  (and not in terms of text).

I am against requiring anything here, since optimal accuracy for
arbitrary constant expressions is not easy to achieve (let alone to
prove that it is achieved).

For non-constant expressions, this is even worse...

Agreed.

On the other hand, I'll add a remark on your concern about value-changing
optimizations.

Thanks.

--
Sylvain Pion
INRIA Sophia-Antipolis
Geometrica Project-Team
CGAL, http://cgal.org/
