
Re: dependent and independent intervals, proposal to toss out



On 3/3/2013 10:31 AM, Michel Hack wrote:
Richard Fateman wrote:
Your explanation then says that a string s, say s = "0.1" which is
acceptable as input to the IEEE754 string-to-float conversion is also
acceptable to the text2interval program.  However it seems it does not
necessarily denote the same numeric value in text2interval as in the
IEEE754 standard.  In text2interval("[0.1 ...]"), the number is 1/10.
The IEEE754 standard defines TWO possible results for "0.1", depending
on the current rounding direction.
I am not familiar with the -2008 version, but I expect there would be more possible results:
1.  In decimal radix, 0.1  would be the same value as 1/10
2.  In binary radix, 0.1 could be rounded in each precision (single, double, extended, whatever):
   a.  up
   b.  down
   c.  nearest (which would be one of the above)


In text2interval() the context is known, so (as John Pryce already
mentioned in his reply), one part of the string will be converted
with round-toward-minus, the other part with round-towards-plus,
for an inf-sup type. 
Is this really necessary?  If I create an exact rational interval [1/10,1/3] I do not
expect any rounding whatsoever.  If I create a decimal_text2interval("[0.1, 0.2]")
I do not expect any rounding either.  Perhaps I am mistaken, though?

If I create a binary interval with binary_text2interval("[0.25, 0.5]") where both
endpoints are comfortably exact in binary, then it seems to me there are two
possibilities: (a) the intended interval is exactly as stated, [1/4,1/2], or (b)
it is something else, but the closest representable numbers were 0.25 and 0.5.

If the true numbers were, say, [0.250000...1, 0.499999999...9], but the best that the
input routine could manage was 0.25 and 0.5, then rounding seems appropriate.

So I suppose this is the provenance issue you indicate.

a := readfromstring("0.25")
vs.
a := readfromstring("0.250000000000...1")
vs.
a := readfromstring_rounding_down("0.25")
vs.
a := readfromstring_rounding_down("0.250000...1")
 For a mid-rad type the appropriate roundings
would again be chosen to satisfy containment.

If you convert the different parts of the string to floats separately,
using default rounding mode to nearest for both, and THEN package them
as an interval, you have lost the proper rounding context, and you
will not be able to guarantee both containment and tightness (you could
get one at the expense of the other).
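To make the containment point concrete: in Python (a small sketch of mine, not part of the original mail), the nearest double to the text "0.1" lies above 1/10, so round-to-nearest applied to a lower endpoint already breaks containment:

```python
from fractions import Fraction

# The exact value denoted by the text "0.1"
exact = Fraction(1, 10)

# Round-to-nearest conversion, as a naive string-to-float reader would do
nearest = float("0.1")

# The nearest double to 1/10 lies ABOVE 1/10, so using it as a LOWER
# endpoint would exclude part of the interval: containment is lost.
print(Fraction(nearest) > exact)   # -> True
```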

So perhaps the sentence "Its value is the exact real number it represents."
would be clearer if it said something more like this:
In a private discussion I had suggested that "it represents" could be made
unambiguous by saying "the text string represents", and John accepted it,
even though he thought there was no ambiguity.

One could write it then, in pseudo-code, as
  text2interval(s):=
    look for [A,B];
    return nums2int(read_from_string(A), read_from_string(B))
    ...(other forms omitted)
No, it would be:
      look for [A,B];
      return nums2int(read_from_string_rounding_down(A),
                      read_from_string_rounding_up(B))
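For what it's worth, Michel's version could be sketched for a binary64 inf-sup type like this (my own illustration, not the standard's text; `Fraction` reads the decimal string exactly, and `math.nextafter`, Python 3.9+, supplies the directed step):

```python
import math
from fractions import Fraction

def read_rounding_down(s: str) -> float:
    """Largest double <= the exact value of the decimal string s."""
    x = Fraction(s)              # exact value of the text
    f = float(x)                 # round-to-nearest conversion
    return f if Fraction(f) <= x else math.nextafter(f, -math.inf)

def read_rounding_up(s: str) -> float:
    """Smallest double >= the exact value of the decimal string s."""
    x = Fraction(s)
    f = float(x)
    return f if Fraction(f) >= x else math.nextafter(f, math.inf)

def text2interval(text: str) -> tuple[float, float]:
    """Parse "[A,B]" into the tightest enclosing inf-sup pair."""
    a, b = (p.strip() for p in text.strip().lstrip("[").rstrip("]").split(","))
    return (read_rounding_down(a), read_rounding_up(b))

lo, hi = text2interval("[0.1, 0.2]")
print(Fraction(lo) <= Fraction(1, 10) <= Fraction(1, 5) <= Fraction(hi))  # -> True
```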

I understand this motivation and I was about to comment that of course I agree,
but then on closer thought, it seems to me that both interpretations make sense.

That is, if someone produces a hexadecimally formatted number and
reads it in as an endpoint, why should we diddle it up or down?  Someone
doing this could already diddle it first.  Same for a decimally formatted
number.  There is nothing especially wrong with 0.1d0 as a specification
for a number, as long as you realize it is not 1/10.  If someone computes a
machine number and tells us to use it as an endpoint of an interval
via nums2interval, we believe it, so far as I can tell.

In my own implementation I have two separate functions: (ri 1.0 2.0)
{ri = construct a real interval from inf-sup}
and (widen 1.0 2.0) {actually there is a 3rd optional argument for decoration}.

ri produces an interval which uses exactly the endpoints provided.  These are
numbers in the host system, not read in specially.  The program
widen produces an interval with endpoints bumped down and up.
Widen is really a cheap and crude way to approximate...

(ri  <compute a lower bound on the lower endpoint, perhaps by rounding down appropriately>
     <compute an upper bound ... etc>)

It is cheap because you can just blast away and if you are entitled to assume
the scalar results for lower and upper bounds are within 1 ULP, then bumping
them up and down by 1 ULP gives a valid enclosure. 
It is crude because it may widen the interval too much.
But you know all this.
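A rough Python rendering of the two constructors (my names mirror the Lisp ones; this is a sketch, not the actual Lisp implementation):

```python
import math

def ri(lo: float, hi: float) -> tuple[float, float]:
    """Use exactly the endpoints provided, no bumping."""
    return (lo, hi)

def widen(lo: float, hi: float) -> tuple[float, float]:
    """Bump the endpoints out by one ULP each.  If the scalar bounds are
    within 1 ULP of the true values, the result is a valid (if possibly
    too-wide) enclosure."""
    return (math.nextafter(lo, -math.inf), math.nextafter(hi, math.inf))

a, b = widen(1.0, 2.0)
print(a < 1.0 < 2.0 < b)   # -> True
```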

The question is, if I know an enclosure, should I be able to specify it with text2interval
without someone diddling the endpoints?  Should text2interval do the diddling but
not nums2interval ?   I could, after all, do something like this:

a:= 0.1    // whatever the host system produces
a:= a*(1-epsilon)   //nudge down
....
nums2interval(a, b) ..

as you suggest,

or leave out the rounding or simulated round, as I originally wrote.

I would simply use the equivalent of nums2interval(1/10,1/3).
That's perfectly ok in a system that supports exact rationals.
Have we defined text2interval("[1/10,1/3]")?

(I do not entirely discount the possibility of making a library to do exact
rational interval arithmetic even if the host system has no rational arithmetic.
It's not something that I would expect to be very useful, though perhaps in
some theorem-proving situation it might help.)
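As an illustration of how little such a library needs (a sketch assuming Python's `fractions.Fraction` as the rational type; the helper names are mine):

```python
from fractions import Fraction

# An exact rational interval as a pair of Fractions; no rounding anywhere.
def rat_interval(lo: str, hi: str) -> tuple[Fraction, Fraction]:
    return (Fraction(lo), Fraction(hi))

def mul(x, y):
    """Exact interval product, simplified to the nonnegative case."""
    return (x[0] * y[0], x[1] * y[1])

a = rat_interval("1/10", "1/3")
b = rat_interval("0.1", "0.2")       # decimal text is exact too: 1/10, 1/5
print(mul(a, b) == (Fraction(1, 100), Fraction(1, 15)))   # -> True
```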

In general however, nums2interval(x, y) requires the programmer
to be aware of the provenance of the scalars x and y. 

 It is
expected to be used when parts of a computation are carried out
on scalars with explicit directed rounding, as that sometimes
allows tighter enclosures to be returned by competently-written
library routines.
This makes perfect sense, except that I thought the excuse for text2interval was
that IT would make tighter enclosures possible.  Or was that just the argument
for numbers like pi?

The following is an unrelated question:
Also I'm not sure how arbitrary precision figures into this.
If you put in too many digits does IEEE754-2008 round?
Of course it rounds, and the rounding is specified rather precisely.
But it ignores the number of digits actually provided in favor of the number of
digits supported.  Mathematica, for example, increases the precision for
numbers with more digits presented (if there are more than about 19).


(1) It shall observe the current rounding direction.
(2) When no radix conversion is implied (e.g. decimal->DFP,
    or hex->BFP), any given number of input digits shall be
    converted with correct rounding.
(3) When converting from decimal to binary, it shall support an
    input precision of at least three more decimal digits than
    would be needed for a value-restoring round-trip conversion
    to the widest supported binary format (and it should support
    unbounded input precision).
So there is no limit to the number of digits it will consider?  This is probably not
so hard, since you just need to compute the guard digit, and the actual
precision is fixed...
OK

(4) When converting from decimal to binary with bounded precision,
    and more input digits are given, a first correct rounding (in
    the current rounding mode) shall be made in decimal to the
    stated precision bound, and then this intermediate result shall
    be converted as per (3).
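Item (4) could be sketched with Python's `decimal` module (an illustration only; I assume binary64 as the target, with round-to-nearest-even standing in for the current rounding direction, and `p` as the stated precision bound):

```python
from decimal import Decimal, Context, ROUND_HALF_EVEN

def convert_bounded(s: str, p: int) -> float:
    """First round the decimal text to p significant digits in decimal
    (per item 4), then convert that intermediate result to binary (per 3)."""
    ctx = Context(prec=p, rounding=ROUND_HALF_EVEN)
    intermediate = ctx.create_decimal(s)   # correct decimal rounding to p digits
    return float(intermediate)             # decimal -> binary conversion

# More input digits than the precision bound: rounded in decimal first.
print(convert_bounded("0.123456789123456789123456789", 5))   # -> 0.12346
```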

Michel.
---Sent: 2013-03-03 19:12:43 UTC