
Re: Motion on interval flavors



George,

For me, the most important aspect of Kaucher/modal interval operations is
the speed and efficiency advantages they provide over their classical
interval counterparts when realized in hardware.

Linear interpolation
   L(U) = A + U * ( B - A )
is the fundamental operation for evaluating polynomials in the Bezier and
B-spline bases. If
   A=[5,9],     B=[1,2],    and    U=[.2,.3],
then computing the linear interpolation with classical operations yields
   [5,9] + [.2,.3] * ( [1,2] - [5,9] )
       = [5,9] + [.2,.3] * [-8,-3]
       = [5,9] + [-2.4,-0.6]
       = [2.6,8.4]
Considering that the set-of-values result is
   { a + u * ( b - a ) : a in [5,9], b in [1,2], u in [.2,.3] } = [3.8,7.6]
it is easy to see that the operations of a classical interval processor
cannot compute a very good estimate of the exact range.
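To make the bookkeeping explicit, here is a small Python sketch (my own
helper names, not from any library) that reproduces the classical interval
computation above; intervals are (lo, hi) pairs:

```python
# Classical interval arithmetic applied to L(U) = A + U*(B - A).
# This reproduces the overestimated enclosure [2.6, 8.4] from the
# example above (up to floating-point rounding).

def iadd(x, y):
    return (x[0] + y[0], x[1] + y[1])

def isub(x, y):
    return (x[0] - y[1], x[1] - y[0])

def imul(x, y):
    # Classical product: min/max over all endpoint products.
    p = [x[0]*y[0], x[0]*y[1], x[1]*y[0], x[1]*y[1]]
    return (min(p), max(p))

A, B, U = (5.0, 9.0), (1.0, 2.0), (0.2, 0.3)
L = iadd(A, imul(U, isub(B, A)))
print(L)  # approximately (2.6, 8.4), wider than the exact range [3.8, 7.6]
```

The overestimation arises because A appears twice in the expression and the
classical operations treat the two occurrences as independent.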

On the other hand, with Kaucher/modal intervals realized in hardware, one
can compute:
   [5,9] + [.2,.3] * ( [1,2] - Dual( [5,9] ) )
       = [5,9] + [.2,.3] * ( [1,2] - [9,5] )
       = [5,9] + [.2,.3] * ( [-4,-7] )
       = [5,9] + [-1.2,-1.4]
       = [3.8,7.6]
and so in the same number of arithmetic operations (the dual operator is
"free"), the Kaucher/modal interval operations in hardware can compute the
optimal range enclosure. Note, e.g., the required improper (dual) intervals
[9,5], [-4,-7], and [-1.2,-1.4] that Svetoslav talks about.
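The same derivation can be sketched in Python (again, my own helper names;
the multiplication rule below is hedged to cover only the case needed here,
a nonnegative proper left operand):

```python
# Kaucher/modal sketch of L(U) = A + U*(B - Dual(A)).  Intervals are
# (lo, hi) pairs; an "improper" interval has lo > hi.

def dual(x):
    return (x[1], x[0])

def k_add(x, y):
    return (x[0] + y[0], x[1] + y[1])

def k_sub(x, y):
    # x + (-y), where -(c, d) = (-d, -c); same endpoint formula as
    # the classical subtraction.
    return (x[0] - y[1], x[1] - y[0])

def k_mul_pos(x, y):
    # Kaucher product restricted to x = (a, b) with 0 <= a <= b;
    # y may be proper or improper.
    assert 0 <= x[0] <= x[1]
    lo = (x[0] if y[0] >= 0 else x[1]) * y[0]
    hi = (x[1] if y[1] >= 0 else x[0]) * y[1]
    return (lo, hi)

A, B, U = (5.0, 9.0), (1.0, 2.0), (0.2, 0.3)
L = k_add(A, k_mul_pos(U, k_sub(B, dual(A))))
print(L)  # approximately (3.8, 7.6), the optimal enclosure
```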

As discussed in detail in Chapter 6 of
   http://grouper.ieee.org/groups/1788/Material/Hayes_Modal%20Intervals.pdf
one can also compute the optimal bounds by reverse-engineering the modal
interval arithmetic into primitive floating-point operations, i.e., the
linearInt() function that appeared in a version of the Vienna Proposal.
However, one is then stuck with a bunch of floating-point operations. As
I've mentioned many times before, this suffers from tremendous performance
penalties due to the required if-then branching, which can flush the
floating-point processor pipeline, not to mention the required changes of
rounding direction, etc. Moreover, these operations cannot be performed on
the SIMD registers of modern Intel and AMD processors.
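As a rough illustration of the floating-point endpoint style (my own sketch,
not the actual linearInt() code, and ignoring the directed rounding a real
implementation would need): since a + u*(b - a) is multilinear in a, b, u,
its exact range over boxes is attained at endpoint corners, so one can
enumerate them.

```python
from itertools import product

def linear_interp_range(A, B, U):
    # Exact range of { a + u*(b - a) : a in A, b in B, u in U } by
    # corner enumeration; valid because the expression is multilinear.
    vals = [a + u*(b - a) for a, b, u in product(A, B, U)]
    return (min(vals), max(vals))

lo, hi = linear_interp_range((5.0, 9.0), (1.0, 2.0), (0.2, 0.3))
print(lo, hi)  # approximately 3.8 and 7.6
```

Even in this tidy form, the min/max selection over eight candidates is
exactly the kind of data-dependent work that an interval ALU evaluating the
dual-based expression avoids.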

It costs millions of dollars to develop an ASIC, so why would hardware
vendors spend that time and money putting classical interval operations into
hardware when they cannot actually compute good range enclosures but the
Kaucher/modal arithmetic can?

The linear interpolation is one example. Similar examples were studied by
P1788 a long time ago, and the results were compiled in one of Arnold's
papers about improving interval range enclosures. If you look at the
solutions in that paper, the narrow or optimal range enclosures cannot be
computed by the classical interval arithmetic operations; so to compete with
the results of the Kaucher/modal operations, the challengers again had to
resort to algorithms consisting entirely of endpoint analysis with
floating-point operations. However, the results were reported in a manner
that suggests this was somehow a "better" solution, when in fact those
solutions may run many times slower than equivalent Kaucher/modal solutions
if the right hardware is available... these kinds of practical electrical
engineering issues were simply never taken into consideration.

Again, I ask the question... if all the interval range enclosures are
computed by complicated floating-point programs, why would any company spend
the time and money to put classical interval operations into hardware if
hardly anyone will use them?

If P1788 is a standard aimed mainly at specifying the interval operations
that will be implemented in hardware, then IMO it should really focus on the
Kaucher/modal operations, since in so many of the above examples it is clear
that the classical interval operations will simply not be used.

Nate