Ian, Jürgen, P1788
What I get from these replies is that the T -> T operation *should* be mandatory. Ian's comments are important, but they apply to expressions. Those are a language issue, and IMO we
(a) *should not* make requirements about expression evaluation (beyond containment);
(b) *should* make recommendations along the lines Ian proposes.
Actually, thinking about it, I'm less sure about (a).
Suppose we have variables of an interval type T, say inf-sup binary64:
T xx1, xx2, ..., yy
yy = some_expression(xx1, xx2, ...)
- Strict mode (Ian's "mandatory" mode) makes each intermediate operation in "some_expression" be the T-version, i.e. takes T inputs & gives a T result.
- What about an implementation that, by default, uses 80-bit arithmetic inside expressions, i.e. uses T' = inf-sup binary80? I think people would agree this is fine for general purpose work, provided a numerical analyst who wants to control precision closely can switch into strict mode.
- But what about an implementation that, by default, uses 32-bit precision inside expressions, i.e. uses T'' = inf-sup binary32, and just stores into a T-interval at the end? (Maybe so as to use a fast GPU or similar.) I would hesitate to call this standard-conforming. If the user wants 32-bit operations, they should declare the variables accordingly.
So I think the precision of evaluating an expression *shall* be related to the precision of its inputs; whether it should be the least precise of the inputs, or the most precise, I leave to the advice of Ian & others.
[I am assuming 754-conforming types, of the same radix, for which "least" & "most" precise makes sense. T' more precise than T means it is a superset of T. For other types we should make no recommendations or requirements.]
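To make the "strict mode" idea concrete, here is a minimal Python sketch (all names are my own, invented for illustration). Python only exposes round-to-nearest binary64 arithmetic, so each endpoint is widened outward by one ulp with math.nextafter; that gives a valid, containing T -> T operation for T = inf-sup binary64, though one ulp wider than the tightest result a directed-rounding implementation would return.

```python
import math

def add_T(xx, yy):
    """A T -> T interval addition for T = inf-sup binary64.

    Python only offers round-to-nearest arithmetic, so we widen each
    endpoint outward by one ulp with math.nextafter.  This guarantees
    containment of the exact sum, but is up to one ulp wider than the
    tightest enclosure that directed rounding would give.
    """
    lo = math.nextafter(xx[0] + yy[0], -math.inf)  # push lower bound down
    hi = math.nextafter(xx[1] + yy[1], math.inf)   # push upper bound up
    return (lo, hi)

zz = add_T((1.0, 2.0), (0.1, 0.3))
print(zz)  # a strict enclosure of [1.1, 2.3]
```

In strict mode every intermediate operation of "some_expression" would be built from T -> T pieces like this one, so the result type is pinned down by the declarations of the inputs.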
Regards
John Pryce
On 12 Jun 2013, at 13:08, Vincent Lefevre wrote:
I also find this less important for intervals. But note that the
double-rounding problem has a noticeable effect only for rounding
to nearest, not for the directed rounding modes used with inf-sup
interval types. Thus if
(1) T' is a superset of T;
(2) T -> T' tightest operations are implemented;
(3) T' -> T tightest hull is provided (this is required, isn't it?);
then the corresponding T -> T tightest operations are obtained by
composition of (2) and (3), so that I don't see any reason not to
provide these T -> T operations. But for a language not providing
them natively or in a library, saying "to get such an operation,
the programmer must write the composition explicitly" would be
a way to conform to the standard, IMHO, though languages should
define a shorter/simpler way to write these operations.
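Vincent's composition argument rests on the fact that double rounding is harmless for directed rounding but not for round-to-nearest. Here is a small Python sketch of both halves (all helper names are mine; binary32 is simulated with struct, exact values with Fraction, and only positive finite values are handled):

```python
import math
import struct
from fractions import Fraction

def rn32(x):
    """Nearest binary32 to the binary64 value x, returned as a float."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

def f32_step(x, up):
    """Adjacent binary32 above (up=True) or below x; positive finite x only."""
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    return struct.unpack('<f', struct.pack('<I', bits + 1 if up else bits - 1))[0]

def rd64(q):
    """Largest binary64 <= the exact rational q (round toward -inf)."""
    x = float(q)  # correctly rounded to nearest binary64
    return math.nextafter(x, -math.inf) if Fraction(x) > q else x

def rd32(q):
    """Largest binary32 <= the exact rational q (round toward -inf)."""
    y = rn32(rd64(q))  # within one binary32 step of the answer
    return f32_step(y, up=False) if Fraction(y) > q else y

# q = 1 + 2^-24 + 2^-54: just above the midpoint between the
# binary32 neighbours 1 and 1 + 2^-23.
q = Fraction(1) + Fraction(1, 2**24) + Fraction(1, 2**54)

# Round-to-nearest: going through binary64 first loses the sticky
# information, so the binary32 tie then breaks the wrong way.
via_64 = rn32(float(q))
lo, hi = rd32(q), f32_step(rd32(q), up=True)
direct = lo if q - Fraction(lo) <= Fraction(hi) - q else hi  # no tie here
print(via_64, direct, via_64 == direct)  # they differ: double rounding bites

# Directed rounding: rd32 after rd64 agrees with a direct rd32,
# so composing (2) and (3) really does give the tightest T -> T result.
print(rd32(Fraction(rd64(q))) == rd32(q))
```

The directed-rounding case is exactly Vincent's point: the T -> T' tightest operation followed by the T' -> T hull loses nothing, whereas the analogous composition under round-to-nearest can.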
On 10 Jun 2013, at 15:24, Ian McIntosh wrote:
As I said a couple years ago, the standard should allow multiple modes. In the mandatory one, an implementation shall evaluate each expression as literally as possible. In optional modes, it may provide more accuracy (even if slower), more speed (even if less accurate), more debuggability, or other tradeoffs. An implementation using 80 bit extended precision should be allowed and would have a competitive advantage on precision, but if the standard or the program specifies double precision then there must be a way to get that.
- Ian McIntosh IBM Canada Lab Compiler Back End Support and Development
Re: A Level 2 query
Jürgen
On 9 Jun 2013, at 19:58, Jürgen Wolff von Gudenberg wrote:
isn't that the problem with the extended 80-bit format in the early processors?
AFAIK this has only caused some inconsistencies, e.g. due to double rounding: the tightest enclosure is not found.
So my answer is yes [(JDP) I assume you mean it should be a violation of 1788.]
That was in floating point, where inconsistencies (between compilers, or between optimisation levels of one compiler) can be extremely annoying. With intervals you have containment, and the effect, especially if done throughout a long expression, will be to get a tighter enclosure than you "expected". Is that really so annoying? Actually I tend to agree with you and am playing devil's advocate.
John
Am 09.06.2013 17:31, schrieb John Pryce:
...E.g., let T be infsup-binary32, and T' be infsup-binary64, and the operation be subtraction. Then suppose, in obvious notation,
xx_32 - yy_32 always gives zz_64.
In fact, as these are both 754-conforming types of radix 2 -- call these "nice" types -- any combination is allowed (the "typeOf" feature, along the lines of "formatOf" in 754), so the current rules say there shall be an
xx - yy whose inputs may be any combination of nice types, giving result of type T.
Also
xx - yy whose inputs may be any combination of nice types, giving result of type T'.
Suppose the implementation only provides the second of these. (If one wants the first, get it by taking the T-hull explicitly.) Should the standard call this non-conforming?
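A Python sketch of that "second-only" implementation (names are my own; binary32 is simulated with struct, and the binary64 subtraction is widened one ulp for containment since Python lacks directed rounding): the only subtraction provided returns a T' = binary64 result, and the T = binary32 answer is recovered by an explicit hull.

```python
import math
import struct

def f32_step(x, up):
    """Adjacent binary32 above (up=True) or below x; positive finite x only."""
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    return struct.unpack('<f', struct.pack('<I', bits + 1 if up else bits - 1))[0]

def sub_to_64(xx, yy):
    """The only subtraction provided: any inputs, result in T' = binary64.

    Endpoints are widened one binary64 ulp for containment, since
    Python only offers round-to-nearest arithmetic.
    """
    lo = math.nextafter(xx[0] - yy[1], -math.inf)
    hi = math.nextafter(xx[1] - yy[0], math.inf)
    return (lo, hi)

def hull_32(zz):
    """Tightest binary32 hull of a binary64 interval (positive endpoints)."""
    lo = struct.unpack('<f', struct.pack('<f', zz[0]))[0]  # nearest binary32
    if lo > zz[0]:
        lo = f32_step(lo, up=False)  # correct nearest to round-down
    hi = struct.unpack('<f', struct.pack('<f', zz[1]))[0]
    if hi < zz[1]:
        hi = f32_step(hi, up=True)   # correct nearest to round-up
    return (lo, hi)

xx, yy = (3.0, 4.0), (1.0, 1.5)  # binary32-representable endpoints
zz_64 = sub_to_64(xx, yy)        # xx - yy, result of type T'
zz_32 = hull_32(zz_64)           # explicit T-hull recovers the T -> T result
print(zz_64, zz_32)
```

So the first operation is only a hull away from the second; the question is whether the standard should accept "write the hull yourself" as conforming, or insist the T-result version be provided.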