Re: math function accuracy -- was Re: text2interval again
On Mar 11 2013, Vincent Lefevre wrote:
> On 2013-03-09 08:40:02 -0800, Richard Fateman wrote:
> > Incidentally, I do not see the merit in widening the argument in the
> > spec for "accurate" (page 41). It seems to be saying that for sin(x)
> > evaluated on an interval that looks like [very large argument, very
> > large argument] we don't believe in FTIA [the Fundamental Theorem of
> > Interval Arithmetic].
> No, the only reason is to ease the implementation, and even make it
> practically possible in some cases. Without widening the argument,
> with a large exponent such as what is possible with MPFR on 64-bit
> machines, you could quickly exhaust the available memory without
> getting an answer. Even in binary64, one may want fast sin/cos/tan
> for large arguments.
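To put rough numbers on that: Payne-Hanek-style argument reduction needs
pi to about (exponent + working precision) bits, so the cost is trivial
in binary64 but explodes over MPFR's 64-bit exponent range. A
back-of-the-envelope sketch (my own simplified model, not MPFR's actual
formula):

```python
def reduction_bits(exponent, precision=53):
    # To compute x mod 2*pi for x ~ 2**exponent to `precision` good bits,
    # one needs pi to roughly exponent + precision bits (Payne-Hanek-style
    # argument reduction; a rough model, not MPFR's exact bound).
    return max(exponent, 0) + precision

# binary64: exponents reach 1023, so about a thousand bits of pi -- cheap.
print(reduction_bits(1023))                       # 1076

# MPFR on a 64-bit machine allows exponents near 2**62:
bits = reduction_bits(2**62)
print(bits // 8 // 2**30, "GiB of pi digits")     # 536870912 GiB
```

Half a billion gibibytes of pi just to reduce one argument is the
"exhaust the available memory" scenario in a nutshell.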
Also, whether or not they are specified in 1788, the matter of more
complicated functions is important. There is little practical point
in a specification that is rigorous for 99% of a program and undefined
for a critical 1%. Not all functions have known feasible algorithms
for tightest evaluation.
> > (And it would simplify an implementation even more to replace
> > "required tightest" with "required accurate, recommended tightest"
> > for ALL "tightest", since some systems without access to rounding
> > modes will have to work hard to get tightest, but can be accurate
> > extremely efficiently.)
> Possibly useful for some languages, like XPath (without extensions),
> where all operations are required to be correctly rounded to nearest.
> But I'm not sure whether this is a good idea. The interval type could
> be degraded to an implicit type in such a case.
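For illustration, here is a minimal sketch of the "accurate without
rounding modes" strategy: evaluate with the platform's nearest-rounding
libm, then widen each endpoint outward by one ulp. It assumes the libm
sin is faithfully rounded and that the input interval contains no
extremum of sin; a real implementation must also handle odd multiples
of pi/2 (the function name and structure are mine, not from any spec).

```python
import math

def sin_interval_accurate(lo, hi):
    # "Accurate" (not tightest) enclosure of sin over [lo, hi]:
    # round to nearest, then widen each endpoint outward by one ulp.
    # Assumes libm sin is faithfully rounded and that [lo, hi] contains
    # no extremum of sin (no odd multiple of pi/2); a real implementation
    # must detect that case and return bounds reaching -1 or +1.
    a, b = math.sin(lo), math.sin(hi)
    ylo, yhi = min(a, b), max(a, b)
    return (max(-1.0, math.nextafter(ylo, -math.inf)),
            min(1.0, math.nextafter(yhi, math.inf)))

lo, hi = sin_interval_accurate(0.5, 0.6)   # sin is increasing here
assert lo <= math.sin(0.55) <= hi          # true values are enclosed
```

The price is at most a couple of ulps of tightness per endpoint; the
gain is that no directed rounding or correctly rounded libm is needed,
which is exactly the "accurate extremely efficiently" trade-off above.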
The principal point in such a specification would be to encourage
languages to do that more generally. I am utterly sick of languages
and libraries that say that, as soon as you use certain critical
facilities, all behaviour is undefined.
Inaccuracy is a minor form of undefined behaviour, but the point
stands.
Regards,
Nick Maclaren.