Re: [Reliable Computing] abs[x] for intervals? (reason for a standard)
Alan,
In some applications, such as non-smooth optimization, you want
"abs" to be the range of the absolute value, whereas, in other
contexts, you want it to be the "magnitude," that is, the
largest absolute value, rounded up, of any point in the argument
interval. (Additionally, the "mignitude," the smallest absolute
value, rounded down, of any point in the interval, is usually
included, giving a triad of extensions of the absolute value. The
mignitude is useful, for example, in proving diagonal dominance
of a matrix with uncertain entries.)
In my Fortran 90 module INTERVAL_ARITHMETIC, I have ABS and MAG
(and MIG).
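For concreteness, here is a minimal sketch of the magnitude and
mignitude in Python (not the actual Fortran 90 of the module);
intervals are modeled as (lo, hi) pairs with lo <= hi, and the
directed rounding a real implementation would apply is omitted.
The interval-valued "abs" corresponds to the three-case definition
in Alan's message below.

    def mag(lo, hi):
        """Magnitude: the largest absolute value of any point in
        [lo, hi].  A real implementation would round this result up."""
        return max(abs(lo), abs(hi))

    def mig(lo, hi):
        """Mignitude: the smallest absolute value of any point in
        [lo, hi]; zero when the interval contains zero.  A real
        implementation would round this result down."""
        return 0.0 if lo <= 0.0 <= hi else min(abs(lo), abs(hi))

    def row_is_diagonally_dominant(row, i):
        """Hypothetical helper illustrating the diagonal-dominance
        use: row is a list of (lo, hi) entries and i is the index
        of the diagonal entry.  If the mignitude of the diagonal
        exceeds the sum of the magnitudes of the off-diagonal
        entries, every real matrix row contained in the interval
        row is strictly diagonally dominant."""
        off_diagonal = sum(mag(*a) for j, a in enumerate(row) if j != i)
        return mig(*row[i]) > off_diagonal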
Your quandary is a prime example of the usefulness of a STANDARD.
In particular, if there were a standard (in this case, probably,
a programming language standard), there would be no question,
and the user would not be surprised.
This is why I am sending a copy of your message to the IEEE
P1788 working group on the standardization of interval arithmetic.
Best regards,
Ralph Baker Kearfott
Alan Eliasen wrote:
I've received some questions about my implementation of the absolute
value function for intervals of real numbers in my programming language
Frink ( http://futureboy.us/frinkdocs/ ). Frink's behavior follows
the definition of abs[x] given in Ramon Moore's book _Methods and
Applications of Interval Analysis_ (see p. 10, eq. 2.5). The
definition is:
abs[x] = max[ abs[infimum[x]], abs[supremum[x]] ]
This obviously gives a single scalar value (the "magnitude"
described above), not an interval.
When passing intervals to an algorithm that was originally written
with other numerical types in mind (yes, I have the usual cautions
about this), some users have expressed surprise at the result
returned by the abs[x] function. They would expect, for instance,
the absolute value of the interval [-3, 2] to be [0, 3], which is a
reasonable expectation and would allow many more algorithms to work
without modification. In addition, returning an interval rather
than a scalar would allow a "main/middle/best guess" value to be
retained for intervals that already have a "main" value.
So, my questions are:
* What was the original rationale for this definition?
* Is this definition still considered best practice? Have other
texts proposed a different definition?
* If not, what is the currently-accepted best definition? What are
its strengths and weaknesses?
* If you have an implementation of interval arithmetic, what
definition do you use?
* Which definition do you find most appropriate for converting
real-valued algorithms to use intervals?
My proposed definition is broken into three cases, reflecting the
code paths that will be taken for efficiency (a sketch implementing
the cases follows them):
For intervals straddling 0:
abs[x] = [0, max[-infimum[x], supremum[x]]]
For intervals with supremum < 0:
abs[x] = [-supremum[x], -infimum[x]]
For intervals with infimum > 0:
abs[x] = x (i.e. no change necessary)
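A minimal sketch of these three cases in Python (not actual Frink
source), again modeling an interval as a (lo, hi) pair and omitting
the outward rounding of endpoints a careful implementation would
perform:

    def interval_abs(lo, hi):
        """Range of |x| over the interval [lo, hi]."""
        if lo <= 0.0 <= hi:             # straddles zero
            return (0.0, max(-lo, hi))  # e.g. abs([-3, 2]) = [0, 3]
        if hi < 0.0:                    # entirely negative
            return (-hi, -lo)
        return (lo, hi)                 # entirely positive: no change

Note that in every case the result is [mignitude, magnitude] of the
argument, so this range definition is consistent with the scalar
"mag" and "mig" extensions described earlier in the thread.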
--
---------------------------------------------------------------
R. Baker Kearfott, rbk@xxxxxxxxxxxxx (337) 482-5346 (fax)
(337) 482-5270 (work) (337) 993-1827 (home)
URL: http://interval.louisiana.edu/kearfott.html
Department of Mathematics, University of Louisiana at Lafayette
(Room 217 Maxim D. Doucet Hall, 1403 Johnston Street)
Box 4-1010, Lafayette, LA 70504-1010, USA
---------------------------------------------------------------