
Re: Explicit vs Implicit bounds on range and precision



> Date: Wed, 14 Jul 2010 23:04:00 -2000
> To: stds-1788                    <stds-1788@xxxxxxxxxxxxxxxxx>
> From: Michel Hack                          <hack@xxxxxxxxxxxxxx>
> Subject: Re: Explicit vs Implicit bounds on range and precision
> 
> I agree with everything Dan Zuras wrote in his reply,
> BUT I'm afraid he missed my point COMPLETELY:
> 
> >  So you move your entire problem to a 1200 digit precision
> >  type with a 7 digit exponent range together with 100 digits
> >  & 4 exponent digits in the radius.
> 
> The critical part of my post was not even quoted in Dan's reply:

	I apologise for cutting this portion off.

	I am wary of my tendency to ramble on &
	felt I should cut my reply short.

> 
> MH:
> >> DZ: >  But the limits are there all the same.
> >>
> >> Ok, so among format-derived constraints we should include
> >> EXPLICIT bounds on precision and range.
> >>
> >> Many systems do have such controls -- but NOT ALL.
> 
> 
> What I'm concerned about is formats that DON'T HAVE A SPECIFIABLE LIMIT
> to range or precision.  Most such formats are perhaps not relevant to
> the REAL arithmetic that P1788 deals with -- they are either restricted
> to integers, rationals (e.g. represented as pairs of integers, or a
> finite set of Continued Fraction coefficients), or similar entities.
> 
> One such entity that *is* relevant however is Exact Real Arithmetic.
> Typical representations are spigot algorithms that deliver better and
> better rational approximations (here CF representations are ideal),
> effectively using deferred evaluation with backtracking as needed.
> Asking such a system to return the tightest interval leads to infinite
> regress and must thus be avoided.
> 
> In other words, what Dan ASSUMES to be there (an explicitly specified
> bound on precision and/or range) must be REQUIRED to be there for the
> definition of "tightest" to make sense.  Running out of resources does
> not count because, as Vincent pointed out, that leads to non-predictable
> behaviour.

	You are quite correct.  That is one of my
	assumptions.

	You see, I take your point but I don't believe
	it matters.  At least not among the
	variable-precision floating-point types.

	I don't care WHO makes the decision about what
	precision is to be used.  So long as the decision
	is made (even when it is made for the user without
	the user's knowledge or permission) we have
	something upon which to hang a standard.

	However, for things like rational arithmetic,
	continued fractions, symmetric level index, or
	anything else along these lines, I have two
	opinions on the matter.

	The first is that I agree that they do not have
	well specified precisions or, in some cases,
	ranges.

	The second is that I would exclude them from the
	standard, in part because they DON'T have well
	specified computational characteristics.

	Rational arithmetic & continued fractions have
	their uses but I would not bend this standard so
	far as to include them for THIS purpose.

	I am a little less sanguine about SLI.  Its faults
	are more subtle & wrapped up in its failure to be
	scale free.

	No, I would restrict candidate interval datatypes
	to those that are implemented in some kind of
	floating-point arithmetic.

	You may disagree.  Or think I'm being narrow minded.
	Perhaps I am.  But I think we bite off more than we
	can chew by considering them.  And the world will
	not be nourished by that if we choke on 1788 as a
	result.

	As for what you call Exact Real Arithmetic, it
	might more properly be called Delayed Evaluation
	Real Arithmetic, in that it only computes the
	next part when asked to do so.
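
	A minimal sketch of what I mean, in Python, with
	invented names & simple bisection standing in for
	a real spigot algorithm:

	from fractions import Fraction

	def sqrt2_enclosures():
	    # Hypothetical spigot: yields ever-tighter
	    # rational enclosures of sqrt(2), computing
	    # the next refinement only on demand.
	    lo, hi = Fraction(1), Fraction(2)
	    while True:
	        yield lo, hi
	        mid = (lo + hi) / 2
	        if mid * mid < 2:
	            lo = mid
	        else:
	            hi = mid

	stream = sqrt2_enclosures()
	for _ in range(5):        # five refinements, on request
	    lo, hi = next(stream)
	# Asking for the "tightest" enclosure loops forever:
	# every request merely halves the width again.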

	If it is implemented among the rationals I would
	exclude it along with the other rational
	arithmetics.

	But some such systems are implemented among the
	floating-point numbers.  And that is a little
	harder to figure out.

	I'm not sure about that one.

	Off the top of my head it seems to have more in
	common with multi-precision floating-point under
	delayed evaluation than with intervals because,
	unless an interval starts out with a fixed
	non-zero width, the radius can never be computed.

	But I'll have to think about it.

	Come to think of it, some of the other "unspecified
	precision" arithmetics also have the property that
	the radius cannot be computed unless it is non-zero
	to begin with or there is some arbitrary cut-off
	in the computations.  A fixed limitation but not a
	fixed precision.

> 
> Michel.
> 
> P.S.  There is another booboo in Dan's examples:  mid-rad representations
>       where the radius has a smaller exponent range than the midpoint.
>       Unless the radius is RELATIVE this won't work very well, as the
>       only valid enclosure for large midpoints would be Entire.  Trouble
>       is, relative radii have trouble with zero-centered intervals...

	Quite so.  I am still a novice in this field.  I learn
	something new every day.
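
	To make your point concrete -- a hypothetical
	pairing of a binary64 midpoint with a binary32
	radius, purely for illustration:

	import math

	mid = 1.0e300           # binary64 midpoint
	rad = math.ulp(mid)     # ~1.5e284, the smallest radius
	                        # that still encloses anything
	FLT_MAX = 3.4028235e38  # largest finite binary32 value
	print(rad > FLT_MAX)    # True: a binary32 radius
	                        # overflows to +inf, so the only
	                        # valid enclosure here is Entire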

> 
>       I think Arnold's triplex formats can deal with this.
> 
>       That's a completely different topic however.
> ---Sent: 2010-07-15 03:31:59 UTC

	If you mean formats of the form (mid,rad1,rad2),
	which represent [mid-rad1-rad2,mid+rad1+rad2], I
	think of them as more or less equivalent to
	mid-rad forms in which the rad is split into
	higher & lower parts,
	[mid-(rad1+rad2),mid+(rad1+rad2)], in much the
	same way as double-double is often done in
	floating-point.

	And they should have no trouble meeting our specs with
	appropriate level 2 sets.
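
	In code, the split-radius reading is just this
	(hypothetical names; outward rounding of the
	endpoints is omitted from the sketch):

	def triplex_to_infsup(mid, rad1, rad2):
	    # Split-radius reading of (mid, rad1, rad2):
	    # [mid - (rad1 + rad2), mid + (rad1 + rad2)]
	    r = rad1 + rad2
	    return mid - r, mid + r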

	I am also intrigued by a similar form that represents
	[(mid+rad1)-rad2,(mid+rad1)+rad2].  It is ALSO similar
	to double-double forms but increases the precision of
	the midpoint rather than the radius.  It seems more
	useful to me & equally easy to fit into our proposed
	level 2 spec.
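
	And the extended-midpoint reading, again only a
	sketch with invented names:

	def triplex_to_infsup_extmid(mid, rad1, rad2):
	    # Extended-midpoint reading of the same triple:
	    # [(mid + rad1) - rad2, (mid + rad1) + rad2];
	    # rad1 carries extra midpoint precision,
	    # double-double style.
	    m = mid + rad1
	    return m - rad2, m + rad2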

	Both of these gain a little extra precision &
	might find fast implementations that use hardware
	arithmetic for each of their parts, with no
	further need for software arithmetic.
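
	The hardware trick both forms would lean on is
	the classic error-free addition (Dekker's
	Fast2Sum); a sketch:

	def fast_two_sum(a, b):
	    # Dekker's Fast2Sum: assuming |a| >= |b| and
	    # round-to-nearest, s = fl(a + b) and
	    # a + b == s + e exactly, using only ordinary
	    # hardware adds.
	    s = a + b
	    e = b - (s - a)
	    return s, e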

	Understand that there are problems with
	double-doubles in floating-point.  But they
	largely come down to the fact that the spacing of
	representable values varies irregularly with the
	magnitude of the number.  However, in intervals
	this should not be much of a problem, as the
	"effective precision" of an interval type is
	governed more by the radius than by the midpoint.

	But you guys know better.  Perhaps I'm mistaken.

	I'm counting on you to let me know. :-)


				Dan