On 3/7/2013 8:52 PM, Michel Hack wrote:
> Richard Fateman wrote:
>> On 3/7/2013 12:51 AM, Arnold Neumaier wrote:
>>> This would not be enough, as the error in a computed scalar can be arbitrarily large, depending on how it was computed.
>> If the error in a computed scalar can be arbitrarily large, then no such number whatsoever should be permitted as an endpoint of an interval, ever.
> If the sign of the error is known -- and it can be known when directed rounding is used appropriately -- then this is perfectly ok.

This is pretty unlikely, it seems to me. Most programming environments do not support directed rounding in any convenient or portable fashion, and so few mathematical libraries use it. And if it were used, it would be in internal forms, to achieve a final result that is correctly rounded to nearest. That is, something like

    set_rounding_mode(negative_infinity)
    y := sin(x)

is not the kind of thing supported anyplace I know of as a way to get a lower bound on sin(x). Perhaps you know of such a system? I suspect that most subroutines assume the rounding mode is set to nearest. If it is not, they get some answer, possibly not the right answer. After all, a negative number rounded would be reversed in direction... Or perhaps some system routines set the rounding mode, thereby ignoring any prior specification. Or the rounding mode is reset in some context switch in the operating system, and the user has no real control over it anyway. What DOES happen is that people define libraries that will compute y := sin(x) to within 1/2 ulp or 1 ulp or some other specification. From that one can easily compute a lower bound and an upper bound.
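As a rough illustration of that last point, here is a minimal C++ sketch (not anyone's library code) of turning a nearest-rounded result into an enclosure, under the hypothetical assumption that the library documents sin() as accurate to within 1 ulp:

    // Derive an enclosure of sin(x) from an ordinary round-to-nearest call,
    // ASSUMING a documented error bound of at most 1 ulp (hypothetical --
    // check what your vendor actually promises).
    #include <cmath>
    #include <utility>

    static double step_down(double y, int n) {
        for (int i = 0; i < n; ++i) y = std::nextafter(y, -INFINITY);
        return y;
    }
    static double step_up(double y, int n) {
        for (int i = 0; i < n; ++i) y = std::nextafter(y, +INFINITY);
        return y;
    }

    // With an error of at most 1 ulp, stepping out two representable values
    // in each direction gives a conservative but guaranteed enclosure.
    std::pair<double, double> sin_enclosure(double x) {
        double y = std::sin(x);            // nearest-rounded approximation
        return { step_down(y, 2), step_up(y, 2) };
    }

The resulting bounds are wider than what true directed rounding inside the library could deliver, but containment holds without any rounding-mode support from the environment.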
> That's precisely the point in encapsulating the whole interval constructor instead of letting a program do it piecemeal.

Well, I agree that one can define the standard and provide a reference implementation by creating a high-level assembly language in which the only semantics available is that of a sequence of calls to subroutines like those in INTLIB, e.g. (from http://interval.louisiana.edu/preprints/interval_arithmetic.pdf ):

C     Compute X**4 + X**3 + X --
      CALL POWER(X,4,TMP2)
      CALL POWER(X,3,TMP3)
      CALL ADD(TMP2,TMP3,TMP2)
      CALL ADD(TMP2,X,F)
but then RBK explains that with Fortran 90 he can do

      USE INTERVAL_ARITHMETIC
      TYPE(INTERVAL) X, F
      CALL SIMINI
      X = INTERVAL(1,2)
      F = X**4 + X**3 + X
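The same operator-overloading style is available in other languages as well; a bare-bones C++ analogue (a hypothetical Interval type, not the INTLIB or Fortran 90 module quoted above, with outward widening by nextafter standing in for directed rounding) might look like this:

    #include <cmath>

    struct Interval {
        double lo, hi;
    };

    static double dn(double x) { return std::nextafter(x, -INFINITY); }
    static double up(double x) { return std::nextafter(x, +INFINITY); }

    Interval operator+(Interval a, Interval b) {
        return { dn(a.lo + b.lo), up(a.hi + b.hi) };
    }

    Interval operator*(Interval a, Interval b) {
        // Assumes nonnegative intervals, as in X = [1,2]; a full version
        // would examine the signs of all four endpoint products.
        return { dn(a.lo * b.lo), up(a.hi * b.hi) };
    }

    int main() {
        Interval X{1.0, 2.0};
        Interval F = X*X*X*X + X*X*X + X;   // X**4 + X**3 + X
        // F encloses the exact range [3, 26], slightly widened outward.
        return 0;
    }

This buys ordinary expression syntax, but the bounds are valid rather than tight; that distinction comes up again below.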
And precompilers for interval arithmetic are at least as old as 1976 (Augment). Perhaps I am being unrealistic in thinking that intervals can peacefully coexist with machine scalars, and that the standard should encourage that, not handicap it. Putting such a burden on the language -- calling text2int for all floats -- makes adoption of such a standard substantially less interesting, and would, I think, push most languages into a precompiler stage. Ugh.

But if you insist on doing number parsing as part of the standard, then "1/10" is a far more explicit and obvious notation than "0.1". If the mathematician were so inclined to prove something about x in [1/10, 2/10] with a computer, then he (or she) should first make sure that [0.1,0.2] meant the same as [1/10,2/10]. Are we going to protect the mathematician from all possible representation-related issues in the scalar world outside intervals?

> YES -- that's precisely what text2interval() can do: make the program portable to environments with different underlying representations, while at the same time allowing the properties of the representation to be fully exploited.

This goal does not seem objectively satisfiable, unless I am misunderstanding something. It seems to me that programs that work in double-float may fail in single-float -- not in the validity of an enclosure, but in the usefulness of the answer, namely tightness. If your result is a test to see whether two intervals intersect, your program may not work the same everywhere. Perhaps one can show that, in some sense, a better representation will always get an answer that is at least as good as a worse representation. Certainly that is the purpose.

> The value is determined by the mathematical problem posed, which is usually in text format.

I really don't see this the same way you do. There are applications in which some physical problem starts with some uncertainty. A scientist will not know the uncertainty exactly, and so will add some "slop" to it.

> The purpose of the standard is to define primitives that can be used to construct some programs with provable properties (NOT to guarantee that every program will have provable properties).

Right. But it goes a good deal beyond that.

> What those properties are depends on the application, but containment is the underlying premise.

True.

> There are problems that come from pure mathematics.

It seems to me that a mathematician approaching a computer should understand how numbers are represented in a computer, and how to construct, using integers, etc., any exact numbers he or she needs to have represented in the computer.

> The trouble with this is that different representations have different properties, different ranges, etc., which makes it very difficult to write portable programs. This affects integers as well as various approximations to real-number arithmetic. By providing appropriate primitives we can avoid this issue, to the extent that representation issues will affect the quality of an implementation (e.g. tightness), but not the correctness (i.e. containment).

It seems to me that correctness is easily achieved by providing routines that always return Entire, since that will enclose any valid answer. I suppose there is a circumstance in which just returning Entire (without bothering to compute anything!) is incorrect: if the program never halts and you didn't notice that, then returning Entire was incorrect. Anyway, even a better version of correctness is still quite easy, at least for bounded intervals and rational operations. Quality matters.
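To make the correctness-versus-quality point concrete, here is a crude C++ stand-in for a decimal-endpoint constructor -- an illustration only, not the text2interval() of the proposal -- assuming the platform's strtod performs correctly rounded decimal-to-binary conversion:

    #include <cmath>
    #include <cstdlib>

    struct Interval {
        double lo, hi;
    };

    // Enclose the real number written in the decimal string s.
    Interval decimal_to_interval(const char* s) {
        double y = std::strtod(s, nullptr);      // nearest double to s
        // One outward step in each direction contains the true decimal
        // value when the conversion error is at most 1/2 ulp.
        return { std::nextafter(y, -INFINITY), std::nextafter(y, +INFINITY) };
    }

decimal_to_interval("0.1") does contain 1/10, so containment is easy; but decimal_to_interval("0.5") gets widened even though 0.5 is exactly representable. Closing that gap -- returning the point interval whenever the text denotes a representable number -- is exactly the quality question.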
I recall, but cannot spot, some discussion of 0.5 vs 0.50.

> The Vienna Proposal mentions some literal representations to denote intervals representing uncertain numbers, such as
>
>     12.3_     # represents 12.3 +- 1 ulp, i.e., [12.2,12.4]
>     1.23?e3   # represents 1.23e3 +- 1/2 ulp, i.e., [1225,1235]
>
> So far we have not had any motions that go to that level of detail. It would certainly be ok for text2interval() to support such conventions. But some explicit syntactic denotation would be required; conventions that distinguish 0.5 from 0.50 are just not common enough (and they make it impossible to express exact decimals, except as explicit rationals, which could conflict with a "no expressions please" syntax).

What happens if we add .000 at the end of [2^1023, 2^1023], and it causes us to round down and up? The endpoints of the original zero-width interval would be known to about 307 decimal places, but the endpoints of the widened interval would be known to about 16 decimal places.

> Was the .000 a typo? If not, it would not cause any rounding or widening.

Not a typo. OK, I assume .001 WOULD cause widening -- by a huge amount. Thanks for the clarification. So maybe that should be used. It is not so hard to do.

> In any case, the number of digits of an exact decimal representation of a binary floating-point number (which exists; the maximum is 751 digits for IEEE binary64) is a red herring. It is perhaps mathematically awkward that zero-length intervals are only possible for certain numbers -- but the fact that this set includes all reasonably small integers and a few common fractions (namely the dyadic ones) means that it is possible to take advantage of point intervals in most representations. Lisp's rationals are of course a lot more attractive in this respect, and exact-real arithmetic (via spigot algorithms or deferred functional expansion) would be even more so -- unlike unbounded rationals it might even meet the requirement for a well-defined hull.

Maybe you'd need two arrays of 751+ decimal digits and a software division algorithm. These are fun but not very useful.

> (Now I wandered off the range... sorry.)

Yes. Welcome to the club.

RJF