
RE: Uncertainty is essential?



I definitely agree. This is very much in line with the fact that, in general, especially when special functions are involved, no algorithm can compute the exact range of f(x1,...,xn) -- and even for polynomials, where computing the exact range is theoretically possible, the worst-case computation time grows exponentially (unless P=NP), so even for reasonable-size inputs it can exceed the lifetime of the Universe.

So, when we prove, by using the enclosure, that h(x) =def= g(f(x1,...,xn)) is everywhere defined and/or continuous, the proven fact is true; but when we cannot prove it by our method -- e.g., when g(x) is sqrt(x) and we are not sure whether the actual range of f(x1,...,xn) contains negative numbers, since its enclosure does -- the only thing we can conclude is a warning that h MAY be undefined somewhere.
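
To make this concrete: a minimal Python sketch (hypothetical names, directed rounding omitted for brevity -- not any actual P1788 interface) of the sqrt case just described:

    import math

    def bare_interval_sqrt(lo, hi):
        """sqrt applied to an enclosure [lo, hi] of the range of f."""
        if lo >= 0:
            return (math.sqrt(lo), math.sqrt(hi))  # provably defined on the whole range
        if hi < 0:
            raise ValueError("sqrt undefined on the whole enclosure")
        # The enclosure contains negatives, but the true range of f may not:
        # the only honest signal is "MAY be undefined somewhere".
        print("warning: sqrt possibly undefined somewhere on the range")
        return (0.0, math.sqrt(hi))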

In this sense, probably all the usual exceptions that we generate are warnings. We could try to distinguish the special cases in which they signal a DEFINITE problem, but this is complicated, because it would require not enclosures but the opposite: inner estimates which are guaranteed to lie within the range (like what Kaucher interval arithmetic provides). It is probably too late to add such new ideas to the standard; it is easier to simply explain that all generated exceptions are warnings (as they often are in computing in general; for example, in LaTeX a warning sometimes indicates a real mistake, but what many of our students do is hit Enter, and the compilation often continues perfectly well).
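For illustration, here is how that three-way distinction would look if (hypothetically) we had both an outer enclosure and a Kaucher-style inner estimate of the range of f:

    def classify_sqrt(out_lo, out_hi, in_lo, in_hi):
        """Sketch: [out_lo, out_hi] contains the range of f;
        [in_lo, in_hi] is guaranteed to lie inside that range."""
        if out_lo >= 0:
            return "definitely defined everywhere"    # no warning needed
        if in_lo < 0:
            return "DEFINITELY undefined somewhere"   # range provably hits negatives
        return "warning: MAY be undefined somewhere"  # only the enclosure dips below 0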

________________________________________
From: stds-1788@xxxxxxxx [stds-1788@xxxxxxxx] on behalf of John Pryce [j.d.pryce@xxxxxxxxxx]
Sent: Friday, January 24, 2014 5:45 AM
To: stds-1788
Subject: Uncertainty is essential?

P1788

Thinking about the revision of §7 "Flavors", I've come up against an issue that seems to me to be important, but I've not got my thinking straight on it yet.

The problem seems clearest in a traditional bare-interval system that signals an exception, which might terminate execution, when something nasty happens. So assume that, and assume an interval type T (probably inf-sup) based on IEEE 754 binary64 numbers; call that number format F.

Suppose an interval operation *might* have encountered a nasty point (take this to mean where the function is undefined, so "our" decorated intervals would go from def, or better, to trv). Suppose it's hard to be sure, and the function signals an exception to report its suspicions. What exactly should that exception be asserting to have happened?

The following is just a "for-instance" but illustrates the point well. Take tan(x), which has a singularity at each odd multiple of pi/2. Let
  p = 214112296674652  (probably an old friend of Vincent Lefevre's).
This is very close to s=q*pi/2 (s for singularity), where
  q = 136308121570117.
In fact
  s = 214112296674652.000000000000000259...
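These digits are easy to check, e.g. with a few lines of Python using mpmath (illustration only, not part of the argument):

    from mpmath import mp, mpf, pi

    mp.dps = 40                  # 40 digits: ample to resolve the gap
    q = mpf(136308121570117)     # odd, so q*pi/2 is a singularity of tan
    p = mpf(214112296674652)
    s = q * pi / 2
    print(s)                     # 214112296674652.0000000000000002...
    print(s - p)                 # about 2.6e-16; 1 ulp of p in binary64 is 2^-5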

p is exactly representable in F (is an F-number); so are p-1, p+1. Hence the intervals
   zz = [p-1,p+1], xx = [p-1,p],  yy = [p,p+1]
are exact T-intervals. Since p+-0.5 are F-numbers too, this is also true if T is a mid-rad type.

Suppose we are doing a bisection algorithm. To check that a singularity of tan() lies somewhere in zz is easy in F-arithmetic. But checking which half it's in, xx or yy, is not so easy. Suppose we are working on xx; our best efforts to find whether s is inside xx fail; so we signal an exception E.

Does E tell the programmer "xx contains a singularity"? If it does, then knowing there is only one singularity in zz, she can logically conclude "yy does NOT contain a singularity", which is false as you can see from the numbers above. To avoid making false statements, the only thing E can say is "I am unsure whether xx contains a singularity or not".
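
In code, the only honest version of this test is three-valued. A sketch (hypothetical names; note that even the tightest F-enclosure of s is [p, p+2^-5], since 1 ulp of p is 2^-5):

    def contains_singularity(x_lo, x_hi, s_lo, s_hi):
        """Three-valued test: does [x_lo, x_hi] contain the singularity s,
        given only an F-enclosure [s_lo, s_hi] of s?"""
        if s_hi < x_lo or s_lo > x_hi:
            return False          # enclosure of s misses the interval entirely
        if x_lo <= s_lo and s_hi <= x_hi:
            return True           # enclosure of s lies wholly inside it
        return None               # enclosure straddles an endpoint: signal E

    # With xx = [p-1, p] and the enclosure [p, p + 2**-5], the first two
    # tests both fail, so the result is None: "I am unsure", as above.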

I conclude: exceptions that assert uncertainty are inseparable from bare interval arithmetic, if we are to avoid lying. And this is true in any flavor, for any type T capable of serious computing.

Points:
- For (our) decorated intervals, this is a practical issue but *not an issue of principle*, because they handle uncertainty simply by downgrading a decoration, e.g. from dac to trv (see the sketch after this list).

- I guess (our elementary functions experts would know) that similar examples exist in any precision.

- It might have been a good idea for P1788 to standardise on tanPi(x) = tan(Pi*x), and similarly for other trig functions. But it's too late for that, and probably just shifts the difficulties elsewhere.
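
On the first point above, a rough Python sketch of the downgrade mechanism (glossing the standard's exact decoration rules; only the quality order com > dac > def > trv is assumed):

    import math

    ORDER = {"com": 4, "dac": 3, "def": 2, "trv": 1}

    def decorated_sqrt(lo, hi, deco):
        """Sketch: sqrt downgrades the decoration instead of signalling."""
        weaker = lambda a, b: a if ORDER[a] <= ORDER[b] else b
        if hi < 0:
            return None, "trv"                   # empty result; nothing is proved
        if lo < 0:
            return (0.0, math.sqrt(hi)), "trv"   # possibly undefined: down to trv
        return (math.sqrt(lo), math.sqrt(hi)), weaker(deco, "dac")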

Some in the group have argued that only decorated intervals should exist. But I think most folk want bare intervals to be available, for performance reasons.

Therefore it seems to me that the Flavors clause should build an exception that says "I'm not sure" into the foundations of the standard, for all flavors.

I don't see an essential difference between this and the "IntvlConstructorUnsure" exception we already have, so unless persuaded otherwise I'll submit a motion that the latter be renamed "PossiblyUndefinedOperation" or similar. Similarly, that "IntvlConstructorFails" be renamed "UndefinedOperation", and that these be used by any operation, whether constructor or not, in any flavor, in suitably defined situations.

Jumping the gun, I'll put this into the revised Flavors Clause 7.

John Pryce