Re: Discussion about accuracy modes
> From: "Nate Hayes" <nh@xxxxxxxxxxxxxxxxx>
> To: "P-1788" <stds-1788@xxxxxxxxxxxxxxxxx>
> Subject: Discussion about accuracy modes
> Date: Tue, 24 Aug 2010 10:34:47 -0500
>
>
> Dear P1788,
>
> There has been quite a bit of discussion (public and private) surrounding
> the topic of accuracy modes for implicit and explicit types.
>
> For example, Vincent has argued (convincingly, I think) that only the "valid"
> accuracy mode should be required for variable-precision types. Motion 19, on
> the other hand, seems to imply that all variable-precision types are
> implicit. This raises the question in my mind: does this also imply that
> _all_ implicit types will necessarily only be required to satisfy "valid"
> accuracy requirements?
Hmm. While we could have a discussion of accuracy modes
if you like, I don't think it has anything to do with
Motion 19.
The distinction between explicit & implicit types that
John proposes has nothing to do with how an interval is
calculated. It has to do with whether or not one can
always find a unique tightest interval to enclose a
desired result. If one can, it is explicit. If one
cannot, it is implicit. That's all.
It is a property of the datatype, not of the computation.
Nor does it follow for me that all of the larger or
variable-precision datatypes must be implicit. I'm
sure many of them are, but they could just as easily
be implemented with the uniqueness required to be
classified as explicit.
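Just to make that concrete, here is a toy sketch in Python
(the grid, the names, and the numbers are mine, purely for
illustration; nothing here comes from Motion 19). For an
inf-sup type over a finite set of numbers there is always a
unique tightest enclosure, obtained by rounding the infimum
down & the supremum up, so such a type is explicit in John's
sense:

    from fractions import Fraction as F

    # Toy finite number format: the multiples of 1/4 in [-4, 4].
    GRID = [F(i, 4) for i in range(-16, 17)]

    def hull(lo, hi):
        """Unique tightest inf-sup enclosure of the real interval
        [lo, hi]: round the infimum down and the supremum up."""
        inf = max(g for g in GRID if g <= lo)
        sup = min(g for g in GRID if g >= hi)
        return inf, sup

    print(hull(F(1, 3), F(2, 3)))   # (1/4, 3/4); no tighter interval of
                                    # this type contains [1/3, 2/3], so
                                    # the inf-sup type over GRID is explicit.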
>
> The reason I ask this question is because more and more I think that _some_
> implicit types, such as mid-rad types with fixed range and precision, could
> possibly satisfy "tightest" accuracy mode, at least for certain basic
> operations such as +, -, *, /, sqrt, and conversions.
Even here, John's definition comes into play. For it
is not a question of how hard one works to compute an
accurate result. It is that for some implicit types
(of which many are mid-rad forms) there exists no
unique tightest interval.
It is a property of the datatype, not of the accuracy
mode.
The problem is that "tightest" is ill-defined in the
implicit case.
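Here is a toy example of what I mean (my own numbers,
nothing official): take a mid-rad format whose midpoint &
radius are both small integers and try to enclose the real
interval [0.5, 2.5]. Two candidates, [-1, 3] and [0, 4], are
both minimal and neither contains the other, so asking for
"the tightest" enclosure picks out no unique interval:

    # Toy mid-rad format: midpoint m and radius r are small integers.
    MIDS = range(-5, 6)
    RADS = range(0, 6)

    def enclosures(lo, hi):
        """All intervals <m, r> of the toy format that contain [lo, hi]."""
        return [(m, r) for m in MIDS for r in RADS
                if m - r <= lo and hi <= m + r]

    cands = enclosures(0.5, 2.5)
    minimal = [c for c in cands
               if not any(d != c and c[0] - c[1] <= d[0] - d[1]
                          and d[0] + d[1] <= c[0] + c[1]
                          for d in cands)]
    print(minimal)   # [(1, 2), (2, 2)], i.e. [-1, 3] and [0, 4]: two
                     # minimal enclosures, neither inside the other,
                     # hence no unique "tightest" result.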
>
> It makes me wonder if the current definition of implicit type in Motion 19
> might not be a little too generalized. For example, perhaps accuracy modes
> should be specified orthogonally to the distinctions of implicit and
I agree that accuracy modes should be specified in a
manner which is orthogonal to the distinction between
explicit & implicit.
And I think they ARE being specified orthogonally at
the moment.
> explicit. In other words, perhaps we should not lump all variable-precision
> types into the "implicit" category (as was the case in early discussion of
> implicit types, e.g., when the more narrow definition of an implicit type
> was that the endpoints of an interval didn't necessarily have an exact
> representation by the underlying level 3 floating-point elements).
OOoo. I think it is sufficient that they do not have
a unique narrowest enclosing interval within a given
datatype. This is a result of the finiteness of a given
datatype, which is a level 2 property independent of its
level 3 representation.
As soon as we require that results be exactly representable
we knock out all representations except the symbolic ones.
For only in the symbolic representations can all of 1/3,
sqrt(2), & {lim(n->oo) f(n) = Pi} be represented exactly.
And if we have a symbolic representation hanging around
sufficient for all of that, we have no need for intervals.
Things will always be computed exactly.
I realize I may have strayed from your intended point here.
But this was the one I could answer. :-)
>
> Attached is a PDF that attempts to illustrate some of these points. Notice
> in this figure that accuracy modes are largely determined by whether the
> interval type has fixed range and precision or not, as opposed to whether
> the interval type is explicit or implicit. Alternatively, we could use the
> proposed definition for implicit type in Motion 19. In that case, some
> mid-rad types (e.g., the ones depicted with fixed range and precision) could
> be "made" explicit by providing some suitable definition of unique hull.
>
> . . .
>
> In any case, these are the thoughts that have been on my mind lately
> regarding these topics. I'm wondering what others think?
>
> Sincerely,
>
> Nate
>
Ah, I think I understand now.
You think of variable-precision interval types as a single
datatype which is a subset of the datatype of potentially
infinite range & precision. This is something Vincent has
named effectively or practically infinite. (I forget his
exact words.)
Yes, if you considered such a thing to be a candidate
datatype for 1788 then it would be implicit.
But it would not be implicit for the same reason as any
of the more finite datatypes. They are implicit because
the finitely many intervals they represent overlap in such
a way as to not have a unique smallest enclosing interval
for some desired results.
But this potentially infinite style of datatype would not
have a smallest enclosing interval because one can always
represent a smaller one. That is, there are infinitely
many narrower intervals, at least up to the limit of your
computer resources.
Thus, practically infinite.
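A small Python illustration of that (dyadic endpoints
standing in for a variable-precision format; the
construction is mine): enclosing the real number 1/3 with
k bits after the point gives an interval of width 2^-k, and
every increase of k gives a strictly narrower enclosure, so
no tightest one exists across the whole family:

    from fractions import Fraction as F

    def dyadic_enclosure(k):
        """Tightest enclosure of 1/3 with endpoints that are
        multiples of 2**-k."""
        scale = 2 ** k
        lo = F(scale // 3, scale)         # infimum rounded down
        hi = F(-((-scale) // 3), scale)   # supremum rounded up (ceiling)
        return lo, hi

    for k in (4, 8, 16):
        lo, hi = dyadic_enclosure(k)
        print(k, hi - lo)   # widths 1/16, 1/256, 1/65536: each precision
                            # admits a strictly narrower enclosure than
                            # the last, up to the limit of your resources.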
Now, while I think we must include variable-precision
intervals, I don't think we should consider them to be
one large datatype for just this reason.
In order to properly specify their behavior I think we
must consider each instantiation of precision & range
(or precisions & ranges, in the case of heterogeneous
mid-rad forms) to be a single datatype. It can be
either explicit or implicit, as is your wont.
Then, when one performs an operation that changes the
precision of the result, it must be considered as an
effective type conversion to another type. It is exact
when moving up, and enclosing but possibly inexact when
moving down.
The result is then specified according to the rules in
the new datatype.
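Sketched in Python (again a toy of my own, with dyadic
precisions standing in for whatever level 3 format is
actually in use), such a conversion might look like this:
rounding outward keeps the result enclosing, moving to a
higher precision loses nothing, and moving to a lower
precision may widen the interval:

    import math
    from fractions import Fraction as F

    def to_precision(lo, hi, k):
        """Convert an interval to the inf-sup type whose endpoints are
        multiples of 2**-k, rounding outward so the result still
        encloses [lo, hi]."""
        scale = 2 ** k
        return (F(math.floor(lo * scale), scale),
                F(math.ceil(hi * scale), scale))

    x = (F(5, 16), F(6, 16))                      # an interval of the k = 4 type
    print(to_precision(*x, 8))                    # moving up: exact, still [5/16, 3/8]
    print(to_precision(F(5, 256), F(7, 256), 4))  # moving down: rounds outward to
                                                  # [0, 1/16], wider than before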
Accuracy modes might enter into it as well but that has
nothing to do with the specifications of the datatype
itself.
Now, I must admit that Vincent & I never finished our
discussion of this issue. Perhaps he does not agree.
If so, we are left with the problem of specifying the
behavior of potentially infinite datatypes.
Either that or eliminating them from the standard
completely.
I would rather not do that.
Your thoughts?
Dan