[note: I failed to keep up with who said what in all circumstances. Send any corrections to Jason Riedy (filling in for David Bindel).]
The ninth meeting of the IEEE 754R revision group was held Thursday, 18 October 2001 at 1:00 pm PST at Network Appliance, Sunnyvale, Bob Davis chair. Attending were Mike Cowlishaw, Joe Darcy, Bob Davis, Dick Delp, Eric Feng, David Hough, David James, W Kahan, Richard Karpinski, Ren-Cang Li, Alex Liu, Jon Okada, Jason Riedy, David Scott, Jim Thomas, and Dan Zuras. A copy of the meeting attendance roster is available in the private section. Please review it for accuracy.
Due to network problems, the previous meeting's minutes were unavailable. Consensus was to deal with them on the mailing list.
Joe Darcy presented on language support for aspects of IEEE 754 traditionally ignored. His slides are available. A few notable comments during the presentation follow.
Languages may not be able to introduce NaN as a literal; some people already use it as a variable name (acting as a literal). This is similar to C99's problems with _Bool.
Some operations are easily implemented through bit-twiddling, but there are no portable ways to convert floating-point numbers to bitfields or integers. Some common tricks have subtle problems.
One trick is to set up a pointer to a float in memory and then cast that pointer to a pointer to an appropriately sized int. Most languages enforce aliasing rules that an optimizer can use to completely scramble this code.
Another common trick is to use a union type. Some compilers guarantee the layouts will overlap appropriately, but others may make different choices of alignment, etc. for the union members.
Requiring a set of conversion functions may be a good idea.
In C99, there is no portable way to choose the expression evaluation type. Also, long double is not guaranteed to be an IEEE 754 type, even in the extension that requires 754 arithmetic.
W. Kahan asked why C99 chose its, um, interesting typing rules for picking overloaded functions. Jim Thomas responded: "Performance." Take float x, y; sin(x + y);. Expression evaluation may widen the evaluation / storage type of x + y to double. But if the double-precision sin function is chosen, the user may get a sudden, large performance hit on the compiler with the better expression evaluation policy.
There are also some interesting interactions with vararg functions like printf. Consider float x, y; printf("%g\n", x + y);. If x + y is promoted to long double, the stack will not match what printf expects. In K&R C, there are no format codes for single precision; all floating-point arguments had to be promoted to double for such a function.
Borneo: First, decimal literals are converted to the appropriate precision floating-point value. Then they may be widened to the evaluation precision. C99 apparently chose the other order: determine the evaluation precision, then convert. C# has an m suffix to denote an exact decimal literal.
Caller has to twiddle modes appropriately. Callee can assume round-to-nearest. sqrt is an IEEE operator but often hidden as a function call, so these semantics should not apply. There needs to be a permeable specification to allow modes to pass through "calls". Also, which is really the common case: permeable or round-to-nearest?
Why not handle rounding modes in the same way? Answer: Seems to fit the common usage patterns better. (W. Kahan wants to discuss the APL handling of these later.) How do these impact the type signatures and inheritance? Answer: Same as with exceptions. [ed.note: See Giuseppe Castagna's "Covariance and Contravariance: Conflict without a Cause" for more information on the contravariance-covariance issues mentioned.]
Compiler writers need examples from paying customers before they'll even consider caring.
Using StarOffice [ed. note: also OpenOffice]. Things still render differently on different platforms, even in Word. The infinity symbol strongly depends on what font is available, etc.
There will be two levels of document, a working draft and a committee draft. The general process will be the following:
Any tables desired need to be made and sent to the editor for possible inclusion in an appendix. Bob Davis: They could be useful for finding holes in the spec.
Move into a discussion of all previous decisions.
W. Kahan asked why. Everyone at the earliest meetings knew there was a reason, but no one remembered it. This brought up the point of rationale. General decision: There will be a rationale for all changes. These will be included in the list of changes.
Should or shall? The old minutes note should. This will be discussed further as another agenda item.
There is one occurrence of "denormalized" as a verb. This will not be changed to "subnormalized".
Dan Zuras would like to revisit this. Indeed, he did later in the meeting, and will again in the future.
There was a general search for Zuras's proposed text (after a correction by David Scott). It seems to have slipped through the cracks, but has been re-sent to the list for discussion.
This is going back into the mix for later discussion. Some current and active participants were not involved in the original discussion. This is a potentially controversial decision, so it's worth revisiting.
Early meetings decided simply to throw them in. More recent meetings wanted to take the following course:
No one present objected to that course of action.
A person inquired via email about moving the committee meetings around the country or world.
Mike Cowlishaw (who will be traveling from England when he can) also noted that the official IBM rep has never been able to attend, and that he knows a few others who would like to attend.
Bob Davis: it would be difficult to maintain a proficient quorum for voting on issues if attendance varies drastically between meetings.
Dick Delp: this occurred during the original 754 meetings. If an otherwise active contributor could not attend, the committee would try to accommodate them.
Dan Zuras: attendance changes even when the meeting is moved to Berkeley.
In general, no one was really sure where the people who cannot attend are located. Some are in Europe, others in Texas, and yet others along the east coast. Delp suggested sending a request for locations to the list.
The general response: First try to set up a teleconference for those unable to attend. The first step is to move the meeting earlier in the day. If we can start at 9am, people in Germany should be able to participate remotely a bit after dinner. Also, set up a conference call for the next meeting, and discuss the issue through the mailing list.
Consensus seemed to indicate that we should strive to include as many as possible, but we shouldn't overly disrupt progress to do so. Many current attendees cannot afford (or justify to their employers) significant travel. Not being able to form a quorum would stifle any progress.
Bob Davis is being "promoted" to general chair of the MSC group, and thus should not be chairing sub-groups. He detailed the following responsibilities for the position:
The last requirement ruled out almost all attendees. Two have volunteered to become members (or renew membership). Dan Zuras has been nominated for chairman (Hough, Delp seconded), and Alex Liu for vice chairman (lost track while hiding). The vote will be in November.
In the last few minutes, Dan Zuras raised the issue of test vectors and higher precisions. At one point, there was general consensus to suggest increasing precisions in multiples of 3/2 and 2. This came from theoretical and practical considerations. While the theoretical considerations still hold, the practical ones have been questioned. The suggestion was to provide guidance for hardware implementations, but there is no evidence that hardware will support precisions above 128 bits in the future.
Also, limiting the number of higher precisions to two per binade could make test vector generation easier. But it might be possible to generate test vectors for varying precisions automatically. Jason Riedy referred him to some interesting work linked from the further reading page that seems to accomplish exactly this.
W. Kahan noted that it is extremely unlikely that anyone can justify extremely high precision in hardware. Some others mentioned reconfigurable computing, but it wasn't discussed. Cowlishaw pointed out that there is far more demand for decimal support.
Kahan also pointed out that the real problem is in the support library. Transcendental functions for modest precisions (up to maybe 256 bits) will be specialized to the format. Beyond that, more general algorithms will perform as well and be testable.
There followed an exchange about summations. Zuras pointed to Jim Demmel's accurate-summation exercise and noted that increments of less than sqrt(2) would suffice for that algorithm, so why limit increments to sqrt(2)? Kahan noted that for long sums, other algorithms (compensated summation) would perform as well and use only existing precisions. Similar algorithmic changes occur in finite element models, where a dot product may need only slightly more precision. There the precision depends on the condition number of the problem. Too large a condition number signifies problems for which greater precision is merely a bandage.
Kahan prefers the idea of programmers specifying lower bounds on desired precision and then using inquiry functions, as previously noted. Zuras and Kahan agreed to discuss this later.