Minutes from 754R meeting 20 June 2001

David Hough

The sixth meeting of the IEEE 754R revision group was held Wednesday 20 June 2001 at 1:00 pm at Network Appliance, Sunnyvale, Bob Davis chair. Attending were David Bindel, Joe Darcy, Bob Davis, Dick Delp, Eric Feng, David Hough, Jim Hull, David James, Rick James, W Kahan, Ren-Cang Li, Alex Liu, Peter Markstein, Michael Parks, Jason Riedy, David Scott, Jim Thomas, Neil Toda, Dan Zuras.

A mailing list has been established for this work. Send the message "subscribe stds-754" to majordomo@ieee.org to join. Knowledgeable persons with a desire to contribute positively toward a substantially upward-compatible revision are encouraged to subscribe and participate. The official website at http://grouper.ieee.org/groups/754/ contains the 22 Jan draft of 754R in PDF format, but access requires a password.

The next meeting is scheduled for Wednesday July 18, 1-5 pm, Network Appliance, Santa Cruz conference room. Further meeting dates are reserved for August 15, October 18, November 15, December 13, and January 14. David Scott subsequently offered a room at Intel SC12 for Thursday September 13, 1-5 pm. Moreover, a meeting in Berkeley soon would be convenient, so that David Bailey might attend to go over some of the points he made at ARITH.

The draft minutes of the previous meeting were approved, with the addition of a summary of a discussion of options for programmer control of FMA usage, subsequently provided by Darcy.

All typos and approved changes in the draft standard should be forwarded to David James, who is editing the document. It is in FrameMaker format but will also be made available in PDF and ASCII in the private directory of the website.

An extended discussion of the format of the bit-field tables led to agreement to characterize the set of numbers represented in each format, leaving the big/little-endian representation issues for separate discussion. Also, the equations defining numeric floating-point formats should be couched in terms of integers rather than fractions. Kahan will send James a suitable URL for a paper containing such equations.
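For illustration only (the draft's eventual wording may differ), an integer-couched definition of a binary format looks like the following, with the significand M an integer rather than a fraction 1.f:

    \[
      x \;=\; (-1)^{s} \times M \times 2^{\,E - p + 1},
      \qquad 0 \le M < 2^{p},
      \quad E_{\min} \le E \le E_{\max}
    \]

Here p is the precision in bits. The fraction form (-1)^s x 1.f x 2^E describes the same set of numbers, but an integer M makes statements about exact arithmetic and rounding easier to write down.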

ARITH-15

ARITH-15 conference, June 11-13, Vail CO. Hough commented on particularly interesting papers, including one by David Bailey et al. on double-double and quad-double arithmetic; one by Schwarz et al. on decimal floating-point arithmetic; and one by a team at Fujitsu HAL on a chip that provides an UNfused multiply-add with two distinct roundings, which, with two such units and a 1 GHz clock, would provide 4 GFLOPS of floating-point performance.

There was an informal session on standardization. Muller presented a list of issues involved in standardizing transcendentals, and Hough reported on issues considered so far by 754R. There was a fair amount of interest in various topics we've considered, but many suggestions were of the form "why don't you standardize ..., which is helpful for my application," and the suggested features varied widely and were not really general-purpose.

Michael Parks's notes from the ARITH-15 session on IEEE 754R, Q&A:

Shane Story, Intel: Standardize flush-to-zero? Hough: No.

Eric Schwarz, IBM: Include 854-style features in the new 754? Arbitrary radix? Would this conflict with 754R? Eric will send email to the committee. [remind him?] If binary-to-decimal (and back) conversion is part of the standard, then decimal floating-point needs to be specified carefully.

David Matula, SMU: Quad. Why is the leading bit implicit? Hough: Already specified and implemented. The explicit bit was never exploited on x86.

David Bailey, NERSC: Concerned about expression evaluation. Unless done carefully, it prohibits parallel processing, e.g. of different iterations of a loop. Potential disaster. Hough: Use compiler directives for anything other than the default. (If we tackle this, we may never finish.)

Schulte, Lehigh: Support for any precision less than single? e.g. 16 bits for graphics? Hough: Nope, no advocates. Very application-specific.

Unknown attendee: Underflow and signaling/quiet NaNs. Will there be a specific proposal to remove incompatibilities among similar arithmetics? Hough: Yes, rules for underflow will be specified more clearly.

(Parks: Well, we're certainly trying, aren't we? When I was at AMD, more than once we ended up yelling at each other regarding different interpretations of underflow and associated flags in the standard).

Old Issues

Correctly-Rounded Base Conversion: Proposal by Hough.

The discussion of compile-time base conversion needed to be separated somewhat from the rest, making it explicit which provisions cover run-time functions like strtod and which cover manifest constants.
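A minimal C illustration of the distinction (the function name is illustrative, not from the proposal):

    #include <stdlib.h>

    /* Run-time conversion: the string may not exist until execution,
       so strtod must perform correctly rounded base conversion then. */
    double from_input(const char *s)
    {
        return strtod(s, NULL);
    }

    /* Manifest constant: the compiler converts the decimal literal 0.1
       to binary at compile time; this is the separate compile-time case. */
    static const double tenth = 0.1;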

"Largest" supported format should be "widest."

Revised proposal by Hough incorporating these corrections.

Underflow. Although many of us thought that agreement had been reached in May on the principle that the definition of an exception shouldn't depend on whether its trap is enabled, Kahan thought this would be gratuitously inconvenient either to those looking for underflows signifying greater-than-normal rounding error, or to those looking for subnormal numbers in order to be rid of them. So he favors an underflow flag and a subnormal trap - similar to what 754 requires - although few understand how the exception differs in the trapped and untrapped cases. In terms of the quantities

    R1 = the result rounded to the destination precision as though the exponent range were unbounded,
    R2 = the result rounded to the destination precision and exponent range (i.e., denormalized if necessary),

he favored the definition: if the underflow trap is enabled, trap if R1 is less than the minimum normal; if the underflow trap is not enabled, set the underflow flag if R1 != R2.
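A minimal C99 sketch of the untrapped case, assuming a platform whose <fenv.h> reflects the 754 flags (the exact tininess rule applied is, of course, the point at issue above):

    #include <fenv.h>
    #include <stdio.h>

    #pragma STDC FENV_ACCESS ON

    int main(void)
    {
        volatile double tiny = 0x1p-1022;   /* smallest normal double */
        double r;

        feclearexcept(FE_ALL_EXCEPT);
        r = tiny / 3.0;                     /* subnormal, inexact result */
        if (fetestexcept(FE_UNDERFLOW))
            printf("underflow flag set: r = %a\n", r);
        return 0;
    }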

The trapped case, which actually delivers a rounded, scaled result with its exponent biased into the normal range, requires an extra bit to indicate whether the trapping operation was inexact, and another to indicate whether it was rounded up or down. The plausible outcomes of the trapped case are few.
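For reference, the scaling is the one 754-1985 already prescribes (a detail from the 1985 standard, not restated at the meeting): the delivered result is the rounded result scaled by a fixed power of two chosen per format,

    \[
      \text{delivered} \;=\; \operatorname{round}(x) \times 2^{\alpha},
      \qquad \alpha \;=\; 3 \cdot 2^{\,w-2}
    \]

where w is the exponent field width, so alpha = 192 for single (w = 8) and alpha = 1536 for double (w = 11).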

Kahan: Though these possibilities are few, no really adequate trap handlers have been written yet for x86, perhaps because on the x86 they are complicated by the need to deal also with x86 stack overflow/underflow - a non-numerical kind of programming error that was gratuitously classified as an IEEE invalid exception in the first implementation, compatibly propagated ever since, and thus entrenched as an architectural error.

[Sun provides an x86 trap handler in libm9x, distributed with Sun's unbundled compiler products for x86 Solaris. Its implementation was complicated by the x86's lack of an inexact bit for the trapping instruction, and by the failure of certain instructions to update the floating-point environment, making it impossible in some cases to tell whether a trap occurred because an exception arose in an arithmetic operation or because someone explicitly raised a flag and unmasked the corresponding trap at the same time. Thus standardizing more trapping support prior to successful implementation on a variety of platforms would lead to additional errors and omissions.]

Transcendentals: At ARITH, Muller proposed standardizing transcendental function exceptions and standardizing names for various accuracy levels that implementations might claim. Kahan then discussed reasons why he thought standardizing transcendental function values was premature. Hough: note that these reasons might not apply to standardizing the commonest transcendental exceptions of one argument. But algebraic functions of two arguments, such as hypot and pow, and transcendental functions of two arguments, including all complex transcendentals, get extremely complicated very quickly - getting the exceptions correct requires a lot of saving and restoring of flags, which is increasingly expensive relative to the basic arithmetic.
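A sketch of the flag juggling involved, using C99's <fenv.h> (an illustration of the pattern only, not anyone's actual hypot; the rescaling path is elided):

    #include <fenv.h>
    #include <math.h>

    #pragma STDC FENV_ACCESS ON

    /* x*x can overflow even when the true hypotenuse is representable,
       so the spurious flag must be hidden from the caller. */
    double hypot_sketch(double x, double y)
    {
        fenv_t env;
        double r;

        feholdexcept(&env);              /* save caller's flags, clear ours */
        r = sqrt(x * x + y * y);         /* naive; may raise spurious overflow */
        if (fetestexcept(FE_OVERFLOW) && isfinite(x) && isfinite(y)) {
            feclearexcept(FE_OVERFLOW | FE_INEXACT);
            /* a real implementation would rescale by a power of two
               and recompute here */
        }
        feupdateenv(&env);               /* merge surviving flags back */
        return r;
    }

Each such save/clear/merge round trip costs far more than the multiplies and adds it protects, which is the expense noted above.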

It appears that, with comparable care in coding, correctly rounded transcendental functions cost 2-4X more than conventional functions that aim for perhaps 0.53 ulps worst-case error. Eric Feng compared some implementations by Gal and Ng to test Gal's claim of only a 1.25X slowdown, but that claim turned out to hold only for a limited argument range. If the cost could be gotten uniformly down to 1.25X we'd just do it, but the 2-4X penalty seems more typical, and trig(big) and x**y seem especially problematic. Kahan wants to avoid a situation where implementors feel encouraged to pick and choose which parts of the standard to implement.

What about standardizing a weaker spec, like < 0.53 ulp or < 1 ulp? Would we also have to specify monotonicity and sign symmetry? How many useful properties should be standardized? The identity sin(2x) = 2 * sin(x) * cos(x) is going to be satisfied about as well with 0.53 ulp as with correct rounding.
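An easy empirical check of that claim with an ordinary (not correctly rounded) libm - the two sides typically agree to within a few ulps:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double x;
        for (x = 0.1; x < 1.6; x += 0.3) {
            double lhs = sin(2.0 * x);
            double rhs = 2.0 * sin(x) * cos(x);
            printf("x = %.1f  sin(2x) = %.17g  2 sin(x) cos(x) = %.17g\n",
                   x, lhs, rhs);
        }
        return 0;
    }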

Harry Diamond's Theorem: Given a function f, monotone and convex, with inverse t, and computed approximants F and T. Using "o" to signify composition, t o f = 1 (the identity), but H = T o F != 1 exactly. Harry Diamond's theorem for correct rounding to nearest asserts that H o H o H = H o H. Furthermore, H o H = H is true for exp and log, and perhaps more generally too.

But it is not known how to prove the theorem for < 0.53 ulp. What other properties can be lost? [Hough: the theorem seems doubtful for < 0.53 unless monotonicity is also required of the approximations. But there might be specific proofs for specific functions.]
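A small experiment one can run with an ordinary libm, taking f = log and t = exp so that H(x) = exp(log(x)); since a typical libm is not correctly rounded, reaching a fixed point here merely illustrates the iteration and proves nothing, per the caveat above:

    #include <math.h>
    #include <stdio.h>

    /* H = T o F with F = computed log, T = computed exp. */
    static double H(double x) { return exp(log(x)); }

    int main(void)
    {
        double x  = 1.0000000000001234;
        double h1 = H(x), h2 = H(h1), h3 = H(h2);
        printf("x   = %.17g\nH   = %.17g\nHH  = %.17g\nHHH = %.17g\n",
               x, h1, h2, h3);
        /* With correct rounding to nearest, Diamond's theorem asserts
           h3 == h2; for exp/log the minutes note h2 == h1 as well. */
        return 0;
    }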
