> On 2014 Feb 28, at 14:53, Vincent Lefevre wrote:
>
>> On 2014-02-28 09:19:18 +0100, Guillaume Melquiond wrote:
>>
>>> The last paragraph of 7.5.4 is a bit restrictive too. It might well happen that two different implementations are just incomparable. I am thinking in particular of elementary functions; which one of two implementations is the most accurate presumably depends on the input domain (values close to zero, large values, etc.), since they use different argument reductions.
>>
>> I think that a linear ordering may be useful for the end user, and the current "should" is OK. But IMHO, the ordering should just be informative, and not necessarily linear: accuracy mode 1 > accuracy mode 2 when, in general, f1(X) is included in f2(X), without a guarantee that this is always the case. The idea is to inform the user that he will generally get more accurate results by choosing mode 1 instead of mode 2. The only guarantee is that for inf-sup types (when the hull is unique), the tightest mode gives the most accurate results.
>
> I agree with Guillaume in the sense that two different implementations of an operation phi are often (usually?) incomparable de facto, in the sense that for a general xx, neither of yy1 = phi1(xx) and yy2 = phi2(xx) is guaranteed to be a subset of the other. But a mode is a *named assertion about* accuracy. How many of those is it useful to have? We don't invent one for each implementation of phi.
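The incomparability point above can be made concrete with a toy sketch. The numeric enclosures below are illustrative assumptions, not taken from any real library: phi1 is imagined tight near zero but trivial for large arguments, and phi2 the other way around, so neither result is always a subset of the other.

```python
def subset(a, b):
    """Is the interval a = (lo, hi) contained in the interval b?"""
    return b[0] <= a[0] and a[1] <= b[1]

# Hypothetical enclosures of cos over [0, 1] (exact image: [cos 1, 1]):
yy1_small = (0.54, 1.0)   # phi1: tight near zero
yy2_small = (-1.0, 1.0)   # phi2: trivial near zero

# Hypothetical enclosures of cos over [100, 101] (exact image: about [0.8623, 1]):
yy1_large = (-1.0, 1.0)   # phi1: trivial for large arguments
yy2_large = (0.86, 1.0)   # phi2: tight for large arguments

# Near zero, phi1's result is strictly inside phi2's...
assert subset(yy1_small, yy2_small) and not subset(yy2_small, yy1_small)
# ...but for large arguments the containment goes the other way,
# so neither implementation dominates the other.
assert subset(yy2_large, yy1_large) and not subset(yy1_large, yy2_large)
```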
I understand that there is a difference between "documented" and "actual" accuracy of a function, and I am fine with that. But I don't think it invalidates my point. Let us take an example.
Consider an interval cosine function with an implementation such that the results are optimal enclosures for inputs smaller than 4pi and just noise (that is, [-1;1]) for inputs larger than 2^50. (I know of at least one interval library that somehow behaves like that.) In other words, the function does not satisfy the prerequisites of the "accurate" mode (overestimation by at most one ulp on all inputs), yet for most intents and purposes, it fits the "tight" mode.
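A minimal sketch of the behaviour described above, assuming a bare (lo, hi) pair as the interval representation; the 2^50 cut-off mirrors the example, and the sketch deliberately omits the outward rounding a real library would perform:

```python
import math

# Illustrative threshold beyond which this sketch gives up on argument
# reduction and returns the trivial enclosure, as in the example.
NOISE_THRESHOLD = 2.0 ** 50

def interval_cos(lo, hi):
    """Return an enclosure (min, max) of {cos(x) : lo <= x <= hi}."""
    if abs(lo) >= NOISE_THRESHOLD or abs(hi) >= NOISE_THRESHOLD:
        # Huge arguments: just noise, i.e. the trivial enclosure [-1, 1].
        return (-1.0, 1.0)
    if hi - lo >= 2.0 * math.pi:
        # The input spans a full period, so the image is exactly [-1, 1].
        return (-1.0, 1.0)
    # Otherwise evaluate cos at the endpoints and at the interior extrema
    # of cos, which occur at the multiples of pi inside [lo, hi].
    candidates = [math.cos(lo), math.cos(hi)]
    k = math.ceil(lo / math.pi)
    while k * math.pi <= hi:
        candidates.append(math.cos(k * math.pi))  # approximately +/-1
        k += 1
    # A real implementation would round the bounds outward; this sketch
    # returns the raw floating-point min and max instead.
    return (min(candidates), max(candidates))
```

For small inputs this is essentially tight, while any argument past the threshold yields [-1, 1], which is exactly the "tight in practice but not accurate" profile the example is about.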
So the implementer has three choices:

1. classifying the function in a "not even accurate" mode;
2. improving the quality of the function at the cost of some heavy work;
3. coming up with a new mode, "tight in practice".

For marketing and time-to-market reasons, choices 1 and 2 might not even be under consideration. So the implementer decides to introduce the "tight in practice" mode, which is impossible to order with respect to all the other modes. Indeed, it is worse than "tight", better than "not even accurate", but incomparable with "accurate".
That is it for the example. I believe it is a realistic example, though I am not sure what it tells us.
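The resulting ordering among modes can be encoded as a strict "better than" relation and checked mechanically. The mode names other than "tight" and "accurate" are the hypothetical ones from the example, and the relation itself is just a sketch of the situation described above:

```python
# Direct "strictly better than" facts from the example.
better = {
    ("tight", "tight in practice"),
    ("tight", "accurate"),
    ("tight in practice", "not even accurate"),
    ("accurate", "not even accurate"),
}

def transitive_closure(rel):
    """Naive fixed-point computation of the transitive closure."""
    rel = set(rel)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(rel):
            for (c, d) in list(rel):
                if b == c and (a, d) not in rel:
                    rel.add((a, d))
                    changed = True
    return rel

order = transitive_closure(better)

def comparable(a, b):
    return (a, b) in order or (b, a) in order

# "tight in practice" and "accurate" are related to neither side of each
# other, so the order is partial, not linear:
assert not comparable("tight in practice", "accurate")
# "tight" still dominates everything, by transitivity:
assert ("tight", "not even accurate") in order
```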
Best regards, Guillaume