1)
Yes, it would be desirable to compute the exact range all the time and to avoid overestimation, but it is a known theorem that computing the exact range
of a function under interval uncertainty is NP-hard, even for quadratic functions. So, if we want guaranteed bounds, excess width is inevitable: if we always want an enclosure, i.e., an interval guaranteed to contain the exact range, it is inevitable
that this enclosure will sometimes differ from the exact range.

2)
The original example of x-x when x is [1,3] (and the similar example of x/x) is based on a major misunderstanding of interval computations which is,
unfortunately, rather common: that interval computation means replacing each operation on numbers with the corresponding operation on intervals. This so-called naïve interval computation was never advocated by anyone; the very first book on the subject, Moore's 1966 book, uses
exactly this example of x-x to show that this is NOT the way to go. A proper way, as shown in any textbook or survey on interval computations and implemented in all the packages, is to first check monotonicity, and to use the centered form if the function is
not monotonic. Monotonicity can be checked by applying automatic differentiation to the expression and then using naïve interval computations to find the range of the resulting derivative. If the resulting interval is always non-negative or always non-positive,
the function is monotonic, and its range is easy to compute by simply evaluating the function at the endpoints. When we apply this to x-x, interval computations lead to the range [0,0], NOT to [-2,2]. Same with x/x.
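The recipe in 2) can be sketched in a few lines of Python (the helper names are mine, not from any 1788 package, and the derivative of x-x is written out by hand rather than produced by a real automatic-differentiation tool):

```python
# A minimal sketch contrasting naive interval arithmetic with the
# monotonicity recipe described above, applied to f(x) = x - x on x = [1, 3].

def naive_interval_sub(a, b):
    """Naive interval subtraction: [a.lo - b.hi, a.hi - b.lo]."""
    return (a[0] - b[1], a[1] - b[0])

x = (1.0, 3.0)

# Treating the two occurrences of x as independent intervals overestimates:
naive = naive_interval_sub(x, x)   # (-2.0, 2.0), while the true range is {0}

# The proper recipe: differentiate the expression (automatic differentiation
# would give f'(x) = 1 - 1; here this is written by hand), then evaluate the
# derivative naively over x to enclose its range.
dlo, dhi = naive_interval_sub((1.0, 1.0), (1.0, 1.0))   # enclosure of f'

if dlo >= 0 or dhi <= 0:
    # The derivative's enclosure has constant sign, so f is monotonic:
    # its exact range comes from the values at the interval's endpoints.
    f = lambda t: t - t
    lo, hi = sorted((f(x[0]), f(x[1])))
    exact = (lo, hi)   # (0.0, 0.0): the exact range, with no excess width
```

The same check applied to x/x (derivative enclosure evaluated naively over [1,3]) likewise certifies monotonicity in the naive sense needed here, and endpoint evaluation gives the exact range {1}.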
From: stds-1788@xxxxxxxx [mailto:stds-1788@xxxxxxxx] On Behalf Of Mehran Mazandarani

Dear John, Vladik, Michel, and IC members,

Thank you for your kind reply. As I mentioned, based on Std. 1788-2015 we may obtain results which lead to values that are not actually possible.
Michel said that this is because of the dependency issue, and that

    Moore's arithmetic is indeed a worst-case arithmetic -- but this allows
    it to *guarantee* that the computed result encloses any possible actual
    result.

Then he said:

    When an interval programmer writes a program to compute a particular
    function, which could (in the point-function context) be written as an [...]

1. Using Moore's approach leads to values which may not be possible, i.e., impossible values. So it forces us to analyze, decide, and design systems and processes at too high a cost, and probably with too much complexity, all because of values which are impossible, i.e., values that will never happen.

2. The standard should be based on an approach which assures us of at least some future development. Meanwhile, as you can see in the attachment, by taking advantage of specific knowledge we arrive at what I have termed the Restoration issue. Additionally, we are considering only very simple cases, because the [...]
I recommend rethinking the section of Std. 1788-2015 which deals with the four basic operations, in order to avoid what will mislead us in so many problems.
Comments are welcome. Thank you so much for your kind attention and consideration.

Warmest regards,
Mehran Mazandarani