IEEE 754R minutes from May 23, 2002

Attendance and minutes review

The IEEE 754R committee met May 23, 2002 at Network Appliance. In attendance were:

Mike Cowlishaw clarified that we have about 18 months before IBM would like something to which they can commit. He was unable to comment on whether "commit" meant "commit to silicon." He also clarified the difference between significance arithmetic and the proposed decimal arithmetic with redundant encodings. The proposal admits multiple encodings of the same number, when the normalized representation of that number has trailing zeros. Arithmetic in the proposed system is still correctly rounded: the values of results depend only on values of operands, and not on the representation of those operands. In contrast, significance arithmetic implementations assign meaning to the amount of denormalization in the representation, so that the value of a result depends not only on the values of the operands, but also on the representation of those operands.

Professor Kahan also looked up the citation from his paper "Mathematics Written in Sand," which he mentioned last time. The article was written in 1958 by Daniel DeLury for Mathematics Teacher, 51, pages 521-530.

Draft review

As we often do, we began the draft review with a discussion of table 4. As we decided last time, the table no longer has language bindings. Also from last time, we said that implementations "shall" provide a means for logical negation of predicates. Footnote 6 in the draft which we were reviewing clarified the meaning of "logical negation":

There may appear to be two ways to write the logical negation of a predicate, one using NOT explicitly and the other reversing the relational operator. For example, the logical negation of (X = Y) may be written (NOT X = Y) or (X ?<> Y); in this case both expressions are functionally equivalent to (X != Y). However, this coincidence does not occur for the other predicates. For example, the logical negation of (X < Y) is just NOT (X < Y); the reversed predicate (X ?>= Y) is different in that it does not signal an invalid operation exception when X and Y are unordered.

The question mark (?) modifier is Kahan's preferred syntax for designating non-signaling predicates.

Unfortunately, we were all looking at table 4 and not at the explanatory text. We argued at length whether the "logical negation of a comparison predicate" refers to a complementary predicate with the same signaling behavior as the original, or to what footnote 6 calls the "reversed predicate."
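
The distinction can be sketched in C99, which supplies quiet comparison macros in <math.h>. This is only a hedged illustration of the draft's intent, not of any mandated syntax; plain C leaves it to the implementation whether the built-in relational operators raise invalid on quiet NaNs, though IEC 60559 bindings make them signaling.

    #include <fenv.h>
    #include <math.h>
    #include <stdio.h>

    #pragma STDC FENV_ACCESS ON

    int main(void)
    {
        volatile double x = 0.0, y = NAN;     /* x and y are unordered */

        feclearexcept(FE_INVALID);
        int negated = !(x < y);               /* NOT (x < y): '<' may raise invalid on NaN */
        int raised_n = fetestexcept(FE_INVALID) != 0;

        feclearexcept(FE_INVALID);
        int reversed = !isless(x, y);         /* quiet "?>=": same truth value, never raises */
        int raised_r = fetestexcept(FE_INVALID) != 0;

        printf("NOT (x < y) = %d (invalid: %d);  x ?>= y = %d (invalid: %d)\n",
               negated, raised_n, reversed, raised_r);
        return 0;
    }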

Jim Thomas discussed the points in his May 14 e-mail. First, he discussed the importance of quiet versions of the traditional predicates. If the complete set of predicates, quiet and signaling, is important, it is possible to implement all of them using a small subset of the predicates plus a standard logical negation that does not alter the signaling behavior of the predicate being negated. We might make such a spanning set of predicates a "shall," and leave the remainder as "should." Thomas noted that he has not specified a syntactic preference, and that he considers this a good thing. Riedy asked why we might not require the entire table, if all the predicates in it can be implemented in terms of a few predicates we can require. Thomas objected that requiring the whole table would clutter language designs with little appreciable benefit to users.
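
A hedged illustration of the spanning-set argument (not Thomas's actual proposal): any signaling predicate in the table can be synthesized from a quiet predicate plus an explicit invalid signal for unordered operands, for example in C99:

    #include <fenv.h>
    #include <math.h>
    #include <stdio.h>

    #pragma STDC FENV_ACCESS ON

    /* A signaling "greater than or equal", built from quiet pieces:
       the quiet macro supplies the truth value, and an explicit raise
       supplies the invalid signal a signaling predicate owes on
       unordered operands.                                             */
    static int signaling_ge(double x, double y)
    {
        if (isunordered(x, y))
            feraiseexcept(FE_INVALID);
        return isgreaterequal(x, y);
    }

    int main(void)
    {
        feclearexcept(FE_INVALID);
        int r = signaling_ge(1.0, NAN);
        printf("1 >= NaN: %d, invalid raised: %d\n",
               r, fetestexcept(FE_INVALID) != 0);
        return 0;
    }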

Hough thought Thomas's proposal was basically about which rows of table 4 are required. He offered to try to clarify the point, so that we could return to it next time.

Decimal review

After last month's meeting, several of the participants played with ideas for decimal encodings different from the one Mike Cowlishaw presented. None was as good. There was some concern that licensed IBM implementations already in development might prevent other vendors from developing clever implementations of the encoding. Bob Davis noted that IEEE policy is for any patents to be announced and on the table. Mike Cowlishaw and Eric Schwarz testified that the current approach is simply to convert the representation to BCD for arithmetic. Schwarz stated that the playing field is currently level; IBM's design team has not even sketched notes on a trickier implementation than conversion to BCD.

The committee stated that, unless the DPD encoding is royalty-free, we will not consider it.

We then turned to technical details. Hough observed that nailing down the encoding is likely the most important point from IBM's perspective. Cowlishaw replied that the most important point is actually the length of the fields. Kahan noted that there are reasonable, possibly attractive encodings which have no distinction between the exponent field and the leading digits of the significand. Since the representation will only be pulled from memory and passed through a decoder, the details of how the fields are separated seem to matter little to the programmer. Cowlishaw responded that, while this is true, a representation with strictly demarcated fields is easier to implement in either hardware or software. For instance, if the fields are strictly demarcated, it is possible to read the exponent without decoding the entire number.

Kahan thought that before deciding conclusively whether boundaries between fields should be sharp, we should first decide how many exponent digits there should be. Kahan noted that he cared neither whether the exponent range looks nice in decimal nor whether there is some slop in the exponent encoding for a 16 byte format. Cowlishaw responded that there is a proposal circulating around IBM to increase the number of coefficient digits by one: 9 bits exponent + 16 digits significand (54 bits) for the eight byte encoding, and 16 bits exponent + 34 digits significand (114 bits) for the sixteen byte encoding. Kahan asked whether IBM could stomach an exponent field specified as a number of bits rather than a number of decimal digits, despite issues with printing. Cowlishaw replied that he could stomach that.
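
To make the point about demarcated fields concrete, here is a minimal sketch assuming a purely hypothetical 64-bit layout of sign (1 bit), exponent (9 bits), and packed coefficient (54 bits). This is not the encoding under discussion; it only illustrates that a sharp field boundary lets the exponent be read with a shift and a mask, without decoding the coefficient at all.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical layout only: sign (1) | exponent (9) | coefficient (54). */
    static unsigned exponent_field(uint64_t bits)
    {
        return (unsigned)((bits >> 54) & 0x1FF);   /* no decoding of the 54-bit coefficient */
    }

    int main(void)
    {
        uint64_t d = ((uint64_t)1 << 63)           /* sign bit             */
                   | ((uint64_t)0x123 << 54)       /* exponent field 0x123 */
                   | 0x0123456789ABCULL;           /* packed coefficient   */
        printf("exponent field = 0x%X\n", exponent_field(d));
        return 0;
    }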

Jim Thomas asked how decimal and binary types will mix in languages supporting both. Cowlishaw replied that the answer depends on the history of the language, but for C-based languages, where floating-point constants are historically binary, binary should remain the default. The C# language specification went into some detail on how decimal and binary interact. Jim Thomas clarified that he was more concerned with how an operation with one decimal and one binary operand would behave. Riedy and Schwarz objected that, without a cast, such an operation would be a type error. Kahan saw some similarity between how decimal and binary types might interact and how binary and hexadecimal types interact on the IBM 390G. On that machine, modules are compiled to work with one format or the other. Modules written for binary must do explicit conversions to call hex-based modules, and vice-versa.

Mike Cowlishaw pointed to C# as an example. The C# language has an explicit decimal type; decimal constants end with an m suffix, just as long constants end with an l. An assignment of a decimal number to a binary floating-point variable without an explicit cast is a type error.

Zuras opined that, contingent on intellectual property issues, our efforts to include decimal arithmetic should go ahead. Kahan objected that, from a procedural perspective, it would be simpler to finish a binary floating point standard and then augment it to include decimal, rather than attempting both standards simultaneously. He noted that there are some tricky differences between decimal and binary that could be stumbling blocks. Bob Davis objected that much of our language does not distinguish between decimal and binary. Cowlishaw observed that he has worked extensively with 854, and found it to be very compatible with 754. Kahan remained adamant that, at least initially, we should keep binary and decimal issues separate, and attack decimal when we have binary mostly complete. Hough noted that we already made such a decision once. Zuras suggested that we move on to other topics. Kahan, Zuras, Schwarz, Cowlishaw, and Riedy will continue to discuss format issues via e-mail.

Remainders

Fred Tydeman contacted Jim Thomas regarding a defect report on the remainder operation. Should there be an alternative allowing remainder to return zero when the second argument is zero? The argument in the defect report is that a zero return value makes sense as a limiting case. While the limit as y goes to zero of rem(x, y) exists and is zero, it is arguable whether that is a useful property. The only number satisfying the remainder equation x = k y + r for y = 0 is r = x. So arguments could be made for defining rem(x, 0) as 0, x, or NaN. Nobody in the committee knew of an application where the choice made a difference. Kahan noted that most of the applications of rem he knew involved the period of a periodic function, which is never 0. Furthermore, most of the tricks that used the remainder are no longer actively used, since remainder is now relatively expensive. Kahan thought the important issue is whether someone would write rem(x, 0) intentionally. If nobody would write rem(x, 0) intentionally, then it should raise an invalid exception to let the user know there is a problem.
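
For reference, the behavior being defended is what C99's Annex F already specifies for the remainder function; a quick check, assuming an Annex F conforming implementation:

    #include <fenv.h>
    #include <math.h>
    #include <stdio.h>

    #pragma STDC FENV_ACCESS ON

    int main(void)
    {
        feclearexcept(FE_ALL_EXCEPT);
        double r = remainder(1.0, 0.0);     /* invalid operation: y == 0 */
        printf("remainder(1, 0) = %g, invalid raised: %d\n",
               r, fetestexcept(FE_INVALID) != 0);
        return 0;
    }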

The committee was uniformly opposed to the proposed change. The only question was whether we wished to make additions or corrections to the response. Jim Thomas will await further objections and comments, and, if there is no further debate, will send the committee's response.

On a semi-related note, L. Meadows at Sun agreed to coordinate our efforts with the Fortran standardization efforts.

Alternate exception handling

David Hough presented his proposal for alternate exception handling. The problem with traps in 754 is that they are very hard to use portably. With great effort, people like Doug Priest have worked out how to make the functionality available, if you can figure out how to use it. This is hard, partly because you have to know something about what is going on at a hardware level. Doug Priest had difficulty getting the same interface (libm9x) even to work on both SPARC and x86.

Maybe we should have specified the desired goals, not the implementation. Years ago, Kahan provided a menu of things he thought were needful. Hough took that list, and possibly changed it somewhat:

The last bullet (limit programmability) was added during our discussion. Hough's feeling was that this particular item would probably be no more used than the current trap handlers.

Jason Riedy and Joe Darcy are of the opinion that the print and continue / pause and continue options are part of the debugging environment, and need not concern us.

Presubstitution and counting mode are the hard ones. The reason they are specified is to provide a way of negating the exceptional behavior without slowing things down in the common case. In the proposal, Hough suggests that all items in the list but the last be required, and that the old user programmability be included as a "should." Possibly we could get by without counting mode by providing the most common functionality via functions (or via a functional interface).
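
As a hedged sketch of that functional-interface idea (the names here are invented, not part of any proposal), the most common use of counting mode, long products such as determinants, can be packaged as a function that keeps the running product scaled and counts the exponent separately:

    #include <math.h>
    #include <stdio.h>

    /* Hypothetical sketch: accumulate a product as frac * 2^count, with
       frac renormalized to [0.5, 1) at every step, so intermediate
       overflow and underflow are absorbed into the integer count.      */
    struct scaled { double frac; long count; };

    static struct scaled scaled_product(const double *x, int n)
    {
        struct scaled p = { 1.0, 0 };
        for (int i = 0; i < n; i++) {
            int e;
            p.frac = frexp(p.frac * x[i], &e);
            p.count += e;
        }
        return p;
    }

    int main(void)
    {
        double x[] = { 1e200, 1e200, 1e-300, 2.0 };   /* naive product would overflow */
        struct scaled p = scaled_product(x, 4);
        printf("product = %g * 2^%ld\n", p.frac, p.count);
        return 0;
    }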

Jason Riedy suggested that we should require user programmability, and that perhaps users unwilling to implement the rest of the mechanisms themselves do not want those capabilities.

Dan Zuras thought we should distinguish between in-line options (presubstitution and counting mode) and out-of-line options that require a change in control flow. Kahan agreed that the distinction is important. Zuras later also suggested distinguishing between unexpected exceptions, which indicate bugs, and exceptions which we know will occur occasionally, and for which we have a reasonable response.

Kahan pointed out that hardware designers care about what can happen in the trap handler. If we ask architects for fast support for just the default behavior, presubstitution, and counting mode, perhaps they can do a better job than is possible for general trap handlers. Kahan suggested we think of these traps as having limited scope, similar to how cache misses are handled. As examples of the mechanism he had in mind, Kahan pointed to the handling of cache misses (and gradual underflow in the final versions) in PAL code on the Compaq Alpha.

If traps occur sufficiently frequently, the programmer may be better off inserting branches.

Zuras hoped that Hough would propose trap-handling policies that would allow computer architects to simplify the floating point architecture. In particular, he hoped for a solution in which the hardware could be designed to never need to go out-of-line to handle exceptions. Out-of-line exception handling affects only performance, but that effect can be egregious. For example, Zuras noted that a piece of hardware he worked on once was redesigned so that most of the functionality was 50% faster, but underflows were substantially slower. A loop in one of the SPEC benchmarks underflowed several times; that benchmark ran six times slower on the redesigned hardware.

Darcy asked about the speed of arithmetic with non-finite numbers. He noted that we have an existence proof that non-finite arithmetic can be handled quickly; we also have a proof that people do not care enough to insist on such quick handling. Jim Thomas thought this indicated a more general problem in our discussion: we had not talked about what use the proposed mechanisms have. Hough stated, in brief, that counting mode is used for determinants and large products, and the canonical evil example for presubstitution is an inner loop of a continued fraction computation. The example is evil since the presubstituted value must be recomputed at each iteration. A simpler example of presubstitution is to compute functions which have removable singularities. Thomas asked how we might judge the value of supporting this exception handling menu: we may have examples where people can put the options to use, but that alone does not necessarily mean the options will be widely exercised. Hough responded that he thought most people would not use alternate exception handling.
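
A minimal sketch of the removable-singularity case, emulated here with C99 flags rather than real presubstitution (which would preload the substitute value and avoid the test and branch entirely):

    #include <fenv.h>
    #include <math.h>
    #include <stdio.h>

    #pragma STDC FENV_ACCESS ON

    /* Compute sin(x)/x straight through; if the division signals invalid
       (the 0/0 case at x == 0), substitute the limiting value 1 after
       the fact.                                                          */
    static double sinc(double x)
    {
        feclearexcept(FE_INVALID);
        double r = sin(x) / x;
        if (fetestexcept(FE_INVALID))
            r = 1.0;                  /* presubstituted value for x == 0 */
        return r;
    }

    int main(void)
    {
        printf("sinc(0.1) = %.17g\n", sinc(0.1));
        printf("sinc(0)   = %.17g\n", sinc(0.0));
        return 0;
    }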

Kahan thought the most important exception handling capabilities will be those which permit programmers who create function libraries to develop atomic functions (in the sense that exceptional conditions will be handled entirely internally). For example, complex divide should signal over/underflow only if an element of the result deserves it, and not if there is an exception just because of the algorithm chosen. One of the things that makes good exception handling difficult is this issue of scoping. Zuras noted that the handlers and sticky flags of the 754 standard create a global context, which compiler writers find irritating.
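
The scoping issue can be sketched with C99's environment functions; the algorithm and the "deserved" test below are simplistic stand-ins, but they show the shape: hide the flags raised internally, then merge back only what the delivered result justifies.

    #include <fenv.h>
    #include <float.h>
    #include <math.h>
    #include <stdio.h>

    #pragma STDC FENV_ACCESS ON

    /* |re + i*im| by scaling with the larger magnitude.  The internal
       ratio small/big may underflow even when the result is ordinary;
       that spurious flag is discarded before the caller's environment
       is restored.                                                     */
    static double my_cabs(double re, double im)
    {
        fenv_t env;
        feholdexcept(&env);                      /* save caller's flags, clear ours */

        double big = fmax(fabs(re), fabs(im));
        double small = fmin(fabs(re), fabs(im));
        double t = (big == 0.0) ? 0.0 : small / big;
        double r = big * sqrt(1.0 + t * t);

        if (r >= DBL_MIN || r == 0.0)            /* underflow not deserved by the result */
            feclearexcept(FE_UNDERFLOW);
        feupdateenv(&env);                       /* restore env, re-raise remaining flags */
        return r;
    }

    int main(void)
    {
        feclearexcept(FE_ALL_EXCEPT);
        printf("%g, spurious underflow visible: %d\n",
               my_cabs(1e300, 1e-320), fetestexcept(FE_UNDERFLOW) != 0);
        return 0;
    }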

Zuras noted that almost anything we do will be an improvement, since at the moment almost any exception is accessed in a different way on different platforms. There is no portability. Kahan replied that there is portability, but only if the programmer restricts herself to IEEE default values.

Kahan repeated his desire for limited menus that might allow architects to simplify their designs. Schwarz stated that what would really help in speeding up designs is to allow imprecise traps. He noted that the PowerPC has an imprecise exception mode. The PowerPC is in-order issue, out-of-order execution; in particular, loads and stores can complete out of order with respect to floating-point operations.

Darcy said he liked the functional notation Hough mentioned earlier, since it specifies the capabilities we want and not how they are implemented. Bindel noted that he has been working on a package which provides a functional notation for presubstitution, but which is implemented via macros and trap handlers so that the functional notation incurs no call overhead. Kahan pointed out that Apple's SANE environment provided locutions for manipulating the exception handling environment. Thomas pointed out that it is difficult to get features like SANE-style procentry / procexit analogues and flag support into a language like C99, in part because we lack compelling examples to sell to language committees.

In the interest of discussing transcendental functions before we adjourned, we postponed further discussion of alternate exception handling. Dan Zuras asked whether anyone wanted to work with David Hough on the proposal, and suggested that we spend the majority of the next meeting in continued discussion of alternate exception handling.

Transcendental functions

Markstein presented the proposal by Muller and company for inclusion of transcendental functions in the new standard. At the time of the original standard, many transcendental libraries had errors worse than an ulp, though Kahan pointed out that libm (1981-82) achieved 1 ulp for most of the supported math functions. But in 1986, Gal and Agrawal at IBM published a paper on a library for the IBM 360 which computed all its transcendentals correctly rounded.

We might reasonably insist on errors of at most an ulp. To get within half an ulp, one must face the Table Maker's Dilemma: how many digits must be computed in order to round correctly? For exp(x), it has been proven that 5 million bits are adequate to round correctly in double precision. For several functions, Muller and company have found, by exhaustive search, all the "bad" cases for which more than quad precision is necessary to round correctly. The number of bad cases is small, so bad cases can reasonably be handled by table lookup. The log function has four such bad cases. The trig functions are more difficult.

Muller also asked how many digits are necessary for correct rounding in directed rounding modes. Someone doing interval arithmetic, for instance, might like the nearest values above or below the evaluation of some transcendental. The majority of hard-to-round cases actually come from directed rounding modes.

With results like these, there are certain transcendentals which could conceivably be correctly rounded at reasonable speed (assuming quad runs at reasonable speed). There would be no need to resort to a bignum package for further computation in hard cases, since they could be handled by table lookup. Checking whether quad is necessary is actually the most expensive part of the computation; Markstein says he knows how to compute logs on the Itanium in 40 cycles, but it takes about 12 cycles to check the result. On the RS6K, it takes about 15 cycles.
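
A hedged sketch of the shape of such an implementation follows; the extended evaluation is faked with long double purely so the sketch runs, and the error bound is an assumption rather than a proven one.

    #include <math.h>
    #include <stdio.h>

    #define ERR_BOUND 0x1p-60      /* assumed relative error of the extended step */

    /* Stand-in for a carefully engineered extended evaluation of exp,
       returned as an unevaluated sum hi + lo.                          */
    static void exp_ext(double x, double *hi, double *lo)
    {
        long double y = expl(x);
        *hi = (double)y;
        *lo = (double)(y - (long double)*hi);
    }

    static double exp_hard_case(double x)
    {
        return exp(x);             /* stand-in for the slow path / hard-case table */
    }

    double exp_rn(double x)
    {
        double hi, lo;
        exp_ext(x, &hi, &lo);

        /* The check that dominates the cost: if perturbing the tail by the
           error bound cannot change the rounded sum, hi + lo already rounds
           correctly; otherwise take the slow path.                          */
        double err = fabs(hi) * ERR_BOUND;
        if (hi + (lo + err) == hi + (lo - err))
            return hi + lo;
        return exp_hard_case(x);
    }

    int main(void)
    {
        printf("%.17g\n", exp_rn(1.0));
        return 0;
    }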

Muller proposes four levels for transcendental function libraries:

Thomas noted that two separate issues had been addressed: correct rounding and directed rounding. Kahan agreed that directed rounding of transcendentals was a new issue, but thought it would not be particularly difficult to respect rounding directions if you were always willing to make an error of perhaps 1.1 ulps. He thought the interval community would be happy with an error of 1.1 ulps, as long as cardinal values remained exact. A 1.1 ulp bound would not cause intervals to grow appreciably faster than they would under correct directed rounding. Correct rounding is a separate issue, and Darcy, Kahan, and Thomas all asked whether it would be worth the costs, and who would benefit.

Plamen Koev presented counter-arguments to transcendental standardization. When we perform a floating-point computation, the idea is to produce some relation between the input and the output. For algebraic expressions, we know how to compute exactly what we want, albeit at doubly exponential cost, using Stark's resolution principle. For transcendental functions, we have no such tool: to tell whether a function is correctly rounded, we must prove a theorem. Computing correctly rounded values in every case will be expensive, and is of dubious use except to ensure reproducibility. But reproducibility is already in conflict with performance: consider, for instance, the difference in results between blocked matrix multiplication routines tuned for different cache sizes.

What properties can we impose on computed functions that are necessary in order to reproduce the actual function's behavior? We can't list them all! Sign symmetry and weak monotonicity are obvious targets, but, as Kahan noted, we're unlikely to have weakly monotonic implementations of Bessel functions any time soon. And what about more subtle properties? Take Harry Diamond's theorem, for instance. Suppose f(x) and g(x) are inverses of each other, and F(x) and G(x) are their correctly rounded approximations. Let FG denote the composition of F with G. Then while FG may not equal the identity, FG(FG(FG(x))) = FG(FG(x)), assuming round-to-nearest. With additional hypotheses on f and g, we can prove FG(FG(x)) = FG(x). How would one prove such a property is maintained when working with transcendentals? And would anyone care about it?
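
A quick, non-authoritative probe of that property, taking F = exp and G = log as supplied by the system libm (which need not satisfy the theorem's hypotheses, so failures are possible and would, in a sense, be Koev's point):

    #include <math.h>
    #include <stdio.h>

    /* FG is the composition of the computed exp with the computed log. */
    static double FG(double x) { return exp(log(x)); }

    int main(void)
    {
        for (double x = 0.1; x < 10.0; x += 0.37) {
            double twice  = FG(FG(x));
            double thrice = FG(FG(FG(x)));
            if (twice != thrice)
                printf("property fails at x = %.17g\n", x);
        }
        printf("done\n");
        return 0;
    }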

Kahan noted that monotonicity is a good property, but even that can be a serious headache for many functions. For which functions should we demand such a property? On which subdomains? And once we had an implementation we thought preserved the property, how could we test it? Zuras noted that difficulty of testing is a tough point, since we would like, as part of the standardization process, to provide a set of test cases to verify whether an implementation conforms to the standard. Kahan agreed, and concluded that, while we can recommend libraries of high quality, we're not yet ready to standardize. There is bound to be pressure to standardize from the validation community, but if we mandate exactly reproducible results, those results must be correctly rounded, and it is not clear that we can achieve testable, reasonably efficient implementations that deliver correctly rounded results.
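
A sketch of what even the easiest such test looks like: walk consecutive doubles over one tiny subdomain and check weak monotonicity of the system exp. Multiplying this over every function and every subdomain is exactly the testing burden at issue.

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double x = 1.0, limit = 1.0 + 1e-10;   /* a tiny subdomain only */
        double prev = exp(x);
        unsigned long checked = 0;

        while (x < limit) {
            x = nextafter(x, INFINITY);
            double cur = exp(x);
            if (cur < prev)
                printf("monotonicity violated near x = %.17g\n", x);
            prev = cur;
            checked++;
        }
        printf("checked %lu consecutive arguments\n", checked);
        return 0;
    }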

Kahan repeated Koev's comment about the difficulty of proving correct rounding for transcendentals, and stated that we have good reason to believe such proofs are generally undecidable. Zuras commented that he was unsure this reason applied to floating point numbers, which form a finite (if large) domain.

Bob Davis suggested that we might include "should" statements rather than requirements, but Kahan thought even including "should" statements would be premature. Jim Thomas, Dan Zuras, and Bob Davis asked whether there might be some piece that we could standardize, even if we cannot deal with all transcendentals over their entire domains. Kahan repeated his opinion that there are too many functions, domains, and tradeoffs, and that we have so much on our plates already, that the best we can do is document honestly what has been done.

Darcy noted that the Java specification originally required fdlibm; then fdlibm became the "strict math" library, and the "math" library was only required to meet a certain quality; and there is still talk at Sun about making a "fast math" version of the library which would make no guarantees. On this note, we eventually spoke briefly about benchmarks that might illustrate the advantages of high quality arithmetic and libraries by running slowly with inaccurate versions and quickly otherwise.

Scheduling

The June meeting will be Wednesday, June 19, at Network Appliance. We will be in the Santa Cruz room in building 3. Other upcoming meetings are:
