
Let's gather some evidence!



Sylvain and P1788

On 8 Sep 2009, at 09:59, Sylvain Pion wrote:
Introducing NaI(s) has a cost in the implementation of
basic operations (+, *) due to the propagation rules.
I can see how to propagate the empty set at zero cost
(if implemented as (NaN, NaN)), but I fail to see how
to generalize it for NaI : neither with NaN payloads
(how do you combine them?), nor with a non-NaN as second
member (needs fixups for e.g. the unary minus operator).

I think it is very important that the IA standard be
efficiently implementable with current hardware,
because it is through software implementations that it
has a chance of being adopted first, to later have a chance
of seeing hardware support appear.  So I would need a much
stronger rationale to support NaI(s) at this stage.
The alternatives mentioned for reporting "errors" appear
to be enough for me now (variants of the functions
returning more information besides an interval, "flags"...).

These are important considerations now that we are getting closer to the level 3 issues of choosing representations for special values like Empty and NaI. But I'd like your view on these points:

- With current hardware, how do you envisage "flags" being implemented efficiently?
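One mechanism of the kind being asked about might resemble IEEE 754's sticky exception flags: a thread-local status word that operations set and user code tests and clears. A minimal C++ sketch; all the names here are illustrative, not from P1788 or any draft standard:

```cpp
#include <cassert>

// Illustrative flag bits; not from any actual draft standard.
enum ia_flag { IA_INVALID_CONSTRUCT = 1, IA_POSSIBLY_UNDEFINED = 2 };

// Thread-local sticky flag word, in the style of IEEE 754 / <cfenv>.
thread_local unsigned ia_flags = 0;

inline void ia_raise(unsigned f) { ia_flags |= f; }
inline bool ia_test(unsigned f)  { return (ia_flags & f) != 0; }
inline void ia_clear()           { ia_flags = 0; }

struct interval { double lo, hi; };

// An operation that signals instead of returning an NaI-like value:
// constructing an interval from reversed bounds raises a sticky flag
// and returns some conventional fallback value.
interval make_interval(double lo, double hi) {
    if (!(lo <= hi)) { ia_raise(IA_INVALID_CONSTRUCT); return {0.0, 0.0}; }
    return {lo, hi};
}
```

The performance question is then whether setting and testing such a word in hot loops is cheap enough on current hardware, and whether it interacts badly with vectorisation.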

- How do you envisage "returning more information besides an interval" being implemented? If it is by the mechanism of an extra output argument, so c = a + b becomes c = plus(a,b,flag) as I believe is the scheme in the draft C++ interval standard, how should one remedy the huge inconvenience to the user, who has to rewrite existing function code?
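To make the inconvenience concrete, here is a sketch of the two calling styles. The `plus` signature is my paraphrase of the scheme described above, not a quotation of the draft C++ interval standard:

```cpp
#include <cassert>

struct interval { double lo, hi; };  // directed rounding omitted in this sketch

// Flag-output style: c = a + b must be rewritten as c = plus(a, b, flag),
// and the flag must be threaded through every existing call site.
interval plus(const interval& a, const interval& b, bool& flag) {
    flag = false;                       // would report e.g. an invalid operand
    return {a.lo + b.lo, a.hi + b.hi};
}

// Operator style: existing expression code compiles unchanged,
// which is what a propagating value such as NaI preserves.
interval operator+(const interval& a, const interval& b) {
    return {a.lo + b.lo, a.hi + b.hi};
}
```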

- On 13 October 2008 I reported to P1788 the results of performance comparisons, in C++, of BIAS in various versions (actually just the lowest-level Bias0.c: scalar operations, not vector). One was equivalent to the official distribution, with neither Empty nor NaI. The other had both Empty and NaI (and actually implemented csets, which one would expect to slow it down further).

I ran a serious benchmark test with Nedialkov's VNODE-LP interval ODE code, taking about 150 seconds on my 2.33 GHz Intel Mac. Result: the overhead of the more complicated version was less than 6 percent.

Defects of my comparison:
* It's on only one platform.
* No Empty or NaI values were actually created, because VNODE-LP
  uses a "classical" interval model.
* No vector-optimised operations were used. Even in the linear
  algebra, VNODE-LP calls the Bias0 scalar arithmetic operations.

Moreover, on the more "philosophical" ground, I think the
reasons for introducing NaNs in FP do not hold for IA,
as the IA system is more naturally closed.  Introducing
a cost in all interval operations mostly to handle cases
coming from FP->IA conversions seems like a wrong trade-off
to me.

"IA system is more naturally closed" is fair enough. Against this, the requirement for correctness in IA is _much_ greater. We claim to write code that gives mathematically proven enclosures.

FP->IA conversions _will_ occur. If we don't have NaI, what then? The possible choices (return Empty, return Entire, throw an exception, etc.) were discussed at length in the C++ interval forum, and all were found deficient.
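To fix ideas, here is one way an NaI-aware addition might look if NaI is carried as an explicit tag rather than as a bit pattern. This is a sketch under my own assumptions; P1788 has not chosen a representation, and the names are mine:

```cpp
#include <cassert>
#include <cmath>

// Assumed three-state tag; not a P1788 decision.
enum class kind { common, empty, nai };
struct interval { double lo, hi; kind k; };

// FP->IA conversion: a NaN point value has no interval meaning,
// so under this scheme it converts to NaI rather than Empty or Entire.
interval to_interval(double x) {
    if (std::isnan(x)) return {0.0, 0.0, kind::nai};
    return {x, x, kind::common};
}

// Addition with the propagation rule: NaI dominates, then Empty.
// The extra per-operation branch is exactly the cost at issue.
interval add(const interval& a, const interval& b) {
    if (a.k == kind::nai   || b.k == kind::nai)   return {0.0, 0.0, kind::nai};
    if (a.k == kind::empty || b.k == kind::empty) return {0.0, 0.0, kind::empty};
    return {a.lo + b.lo, a.hi + b.hi, kind::common};
}
```

An explicit tag avoids the unary-minus fixup problem of a (NaN, non-NaN) encoding, at the price of a wider representation and a branch in every operation.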

I think it really is time to have some objective evidence of the performance consequences of various choices on current platforms. Here are some hypotheses that may be worth testing:

H1. On some important platforms, the presence of code for NaI seriously affects speed even when no NaIs are generated.

H2. Same, but when a significant number of NaIs are generated and then propagated through a computation.

H3. H2 may not be true for purely scalar computations, but is true when using interval elementary functions that have been vector-optimised.

Who will take up the challenge? Sylvain, I guess you have the ability to provide such evidence; I am sure your C++ proficiency is many times mine. One could use (variations on) BIAS, or Filib++, etc. I suggest some basic linear algebra would be an appropriate benchmark.
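As a starting point for such evidence, a micro-benchmark could compare a plain interval dot product against the same loop carrying an NaI-style guard. A sketch only: directed rounding, general sign cases, and a real BIAS/Filib++ backend would all be needed for a serious run, and the NaN-based "NaI" encoding here is merely illustrative:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct interval { double lo, hi; };

// Plain interval dot product, restricted to nonnegative intervals to
// keep the sketch short (the general case needs the 4-case multiply).
interval dot_plain(const std::vector<interval>& a,
                   const std::vector<interval>& b) {
    interval s{0.0, 0.0};
    for (size_t i = 0; i < a.size(); ++i) {
        s.lo += a[i].lo * b[i].lo;
        s.hi += a[i].hi * b[i].hi;
    }
    return s;
}

// Same loop with a per-element guard that detects and propagates an
// illustrative NaN-encoded "NaI": the overhead probe for H1/H2.
interval dot_guarded(const std::vector<interval>& a,
                     const std::vector<interval>& b) {
    interval s{0.0, 0.0};
    for (size_t i = 0; i < a.size(); ++i) {
        if (std::isnan(a[i].lo) || std::isnan(b[i].lo))
            return {std::nan(""), std::nan("")};
        s.lo += a[i].lo * b[i].lo;
        s.hi += a[i].hi * b[i].hi;
    }
    return s;
}
```

Timing both versions over many repetitions, on vectors with and without NaI elements, would speak directly to H1 and H2; checking whether the guard inhibits auto-vectorisation would speak to H3.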

Best wishes

John Pryce

=========================================================
P.S. The following may be of interest about the standard distribution of Profil/BIAS.

Notation: lowercase a,b,... means point value, uppercase A,B,... means interval. A v in front means a vector. So a * vB means (point value) times (interval vector), etc.

In Bias1.c -- the basic vector interval operations:

- Add (vR = vA + vB) and subtract (vR = vA - vB) are
  directly coded in vector form.
- Component-wise vector multiply (vR = vA * vB) is
  not provided, nor is divide (vR = vA / vB).
- But interval scalar-times-vector (vR = A * vB) is
  provided and is directly coded in vector form, using
  the nested IF's method.
- Interval vector-over-scalar (vR = vA / B) is
  provided, but is done by repeated calls to the
  scalar divide operation (R = A / B).

Many other flavours are provided, e.g. (point scalar) times (interval vector); also various flavours of vector fused-multiply-add, and dot product.

It would be interesting to see how vector (component-wise) multiply and divide perform if coded using the "min and max of 4 cases" method (as in the Kulisch op-tables) instead of nested IF's, with and without NaIs present.
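For reference, here is what the two methods look like for a single scalar interval multiply, ignoring directed rounding. The function names are mine; in Bias0/Bias1 the sign tests are of course hand-tuned:

```cpp
#include <algorithm>
#include <cassert>

struct interval { double lo, hi; };

// "Min and max of 4 cases" method (as in the Kulisch op-tables):
// branch-free, hence a natural candidate for vectorisation.
interval mul_minmax4(const interval& a, const interval& b) {
    double p1 = a.lo * b.lo, p2 = a.lo * b.hi;
    double p3 = a.hi * b.lo, p4 = a.hi * b.hi;
    return { std::min(std::min(p1, p2), std::min(p3, p4)),
             std::max(std::max(p1, p2), std::max(p3, p4)) };
}

// Nested-IF method: sign tests select one of nine cases, so only
// two multiplications are needed in eight of the nine cases.
interval mul_nested_if(const interval& a, const interval& b) {
    if (a.lo >= 0) {                                           // a >= 0
        if (b.lo >= 0) return {a.lo * b.lo, a.hi * b.hi};      //   b >= 0
        if (b.hi <= 0) return {a.hi * b.lo, a.lo * b.hi};      //   b <= 0
        return {a.hi * b.lo, a.hi * b.hi};                     //   b mixed
    }
    if (a.hi <= 0) {                                           // a <= 0
        if (b.lo >= 0) return {a.lo * b.hi, a.hi * b.lo};      //   b >= 0
        if (b.hi <= 0) return {a.hi * b.hi, a.lo * b.lo};      //   b <= 0
        return {a.lo * b.hi, a.lo * b.lo};                     //   b mixed
    }
    if (b.lo >= 0) return {a.lo * b.hi, a.hi * b.hi};          // a mixed, b >= 0
    if (b.hi <= 0) return {a.hi * b.lo, a.lo * b.lo};          // a mixed, b <= 0
    return { std::min(a.lo * b.hi, a.hi * b.lo),               // both mixed
             std::max(a.lo * b.lo, a.hi * b.hi) };
}
```

The trade-off to measure: the 4-case form always does four multiplications but no branches, while the nested-IF form usually does two multiplications but is branchy, which matters differently in scalar and vector code.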