
Motion 35 -- New levels 1 and 1a (amendment)



Some comments on Nate's revised "Overflow" document.

Page 3, section 2.1:   the hull of { f(x): x \in X } is simply ...        (1)
                         f(X) \equiv [ min_X f(x),  max_X f(x) ]

        This is true, but "simply" is misleading, as the reason for
        the existence of min_X and max_X is rather subtle: it is
        because the subset X of the natural domain D_f is given
        a priori.
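         As a small illustration (a sketch of my own, not from Nate's
         document; the function name and the choice f(x) = x^2 are
         purely illustrative): when X = [lo, hi] is a closed interval
         inside D_f, the hull (1) exists and can be written down
         exactly by case analysis on monotonicity.

```python
# Hypothetical sketch: the exact hull f(X) = [min_X f, max_X f]
# for f(x) = x**2 on a closed interval X = [lo, hi] within D_f = R.

def square_hull(lo, hi):
    """Exact interval hull of x**2 over [lo, hi]."""
    if hi <= 0.0:                    # f is decreasing on X
        return (hi * hi, lo * lo)
    if lo >= 0.0:                    # f is increasing on X
        return (lo * lo, hi * hi)
    # X straddles 0: minimum at 0, maximum at the wider endpoint
    return (0.0, max(lo * lo, hi * hi))

print(square_hull(-1.0, 2.0))   # (0.0, 4.0)
```

         The min and max here exist precisely because the closed,
         bounded X sits inside D_f from the start.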

Page 3, section 2.2:   "infinite amount of precision" may introduce
         the same kind of difficulties as "unbounded range", because
         this permits a pole to be approached infinitely closely.  So
         I would expect a Level 1a parameterization of the minimal
         distance between distinct points, or the minimal width of a
         non-singleton interval -- so we could describe "underflown"
         results.

Page 4, extension of FTIA into IRbar.  When the interval hull (1) is
         redescribed as  { f(x): x \in X_f } with X_f = X \intersect D_f  (5)
         for *any* X \in Ibar^n, we lose the a priori character of X_f,
         and now inf and sup may no longer exist, so the definition is
         essentially incomplete.

         An interesting function on R^2 to consider is x^2 + (xy-1)^2,
         which is strictly positive but has no minimum: along the
         hyperbola xy = 1 it reduces to x^2, so its infimum is 0 but
         is never attained.  This exposes the issue with infinite
         precision I raised above.  (If this function had a minimum,
         the stationarity conditions would force y^2 = y^2 + 1 -- not
         very likely except in FP (level 2) arithmetic!)
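         One can see the unattained infimum numerically (my own
         sketch, not from the document): following the hyperbola
         xy = 1 drives the function toward 0 without ever reaching it.

```python
# f is strictly positive everywhere, yet its infimum over R^2 is 0.
def f(x, y):
    return x * x + (x * y - 1.0) ** 2

# Along x*y = 1 the second term vanishes and f(x, y) = x^2,
# so f(1/n, n) -> 0 as n grows, while f never equals 0.
vals = [f(1.0 / n, float(n)) for n in (1, 10, 100, 1000)]
print(vals)
```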

Page 5, Proposition 1:  "... as defined in (5)".  But (5) is broken!

Page 7, comments on 754:  I agree that overloading infinity in 754
         has led to some difficulties, but I very much doubt that
         the Level 1a approach would have helped.  If the 754 flags
         are used properly (perhaps possible only in assembly
         language) there is in fact no ambiguity.  This is simply
         not practical, however, for a number of reasons, including
         horrible performance (even when coding optimally in
         assembler) caused by the interleaving of flag and
         computational operations.  But mathematically it is
         possible to resolve the overloading, so that as a model of
         FP arithmetic (in the Level 1 sense) 754 holds together
         much better than one might think (the signed zeros are a
         bigger issue here).

         In any case, this was just a statement of opinion.

Page 8, Table 1:  what do the top-left and bottom-right corner entries
         mean?  There are no [-w, -w] or [+w, +w] overflow families as
         far as I can tell.

Page 8, near bottom:  [1,+w] is a different mathematical object than an
                      unbounded interval, so we may define the bisection
                      point of [1,+w] as a real number.
         "so"?  This is a non-sequitur.  We can say with equal justification
         that we want the midpoint of an unbounded interval to exist, so we
         pick an arbitrary real number.  This is simply "domain completion".

         What *is* different is that Level 1a explicitly introduces a scale
         parameter h, which makes it easier to pick that arbitrary real
         number.  It could still be h, h/2, sqrt(h) or anything else.
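         To make the point concrete (a sketch of my own; the function
         name, the rule labels, and the scale value below are all
         illustrative assumptions, not anything from the document):
         once a scale parameter h is available, each candidate
         "midpoint" of [1, +w] is just a different arbitrary choice.

```python
import math

# Hypothetical candidate bisection points of [lo, +w], given an
# overflow scale h.  Any of these is an equally arbitrary "domain
# completion"; h merely makes the choices easy to write down.
def bisection_point(lo, h, rule="sqrt"):
    if rule == "h":
        return h
    if rule == "half":
        return h / 2.0
    if rule == "sqrt":
        return math.sqrt(lo * h)    # geometric mean of lo and h
    raise ValueError("unknown rule")

for rule in ("h", "half", "sqrt"):
    print(rule, bisection_point(1.0, 1e308, rule))
```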


Page 9, first sentence:  Nobody ever thought that the "overflow" concept
         was too new or not well understood.  The objections have been
         to Nate's approach by means of "overflow families".  THAT is a
         new concept, interesting in its own right, and we are discussing
         its implications right now.

Page 9, bottom third:  "Since smedian2 is a variant of geometric mean..."
         introduces another non-sequitur in my opinion, and I also don't
         think the various approaches to bisection lead to a "snare of
         logical contradictions".  Domain completion requires choices,
         but any contradictions would just arise from trying to address
         incompatible requirements.

Page 10, equation (14).   Why should anybody be surprised?  It is
           perfectly ok to have partial functions -- one simply has
           to deal with them appropriately.  Yes, this may mean
           case-by-case handling, but that is what most programs do.
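           What such case-by-case handling looks like in practice (a
           sketch of my own, not from the document; the function name
           and the None convention for an empty result are
           illustrative): restrict the input to the natural domain
           and branch on whether anything is left.

```python
import math

# Hypothetical handling of a partial function: the hull of sqrt
# over [lo, hi] intersected with its natural domain D_f = [0, inf).
def interval_sqrt(lo, hi):
    lo = max(lo, 0.0)               # intersect X with D_f
    if lo > hi:
        return None                 # X intersect D_f is empty
    return (math.sqrt(lo), math.sqrt(hi))

print(interval_sqrt(-4.0, 9.0))   # (0.0, 3.0)
print(interval_sqrt(-4.0, -1.0))  # None
```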

Page 13, formula (19):  And how is this better?


Michel.
---Sent: 2012-05-17 23:09:46 UTC