Draft minutes are attached. Dan, could you please check the last paragraph? I'm not sure I correctly recorded what you said about having two levels of standard conformance.
I have ordered the record of arguments from the latter half of the meeting by topics and themes rather than chronologically.
We began the meeting with a presentation from Jim Thomas, who cautioned the committee not to add new NaN semantics unless there are compelling reasons. Adding NaN semantics imposes a burden on implementers at all levels -- including hardware designers, library maintainers, and compiler writers. Unless those implementers see a clear benefit, they will not support those features, or will provide support which is so slow as to be useless. Slides from Thomas's presentation are available online.
Dan Zuras observed that Thomas's arguments had previously been used to support the notion of a single canonical NaN. At that time, the committee decided to pursue Jeff Kidder's proposal as a compromise. Should we revisit that decision?
W. Kahan outlined the history of the NaN, from Konrad Zuse's idea of an indefinite in the 1940s, to Seymour Cray's inclusion of indefinites in arithmetic, to the semantics initially suggested by Kahan and incorporated into the original 754 standard. Zuse recognized that for non-stop operation, a machine would need an indefinite representation. In that way, the arithmetic would be algebraically closed -- every operation on entities in the system would produce another entity in the system. The German war office refused to fund Zuse's project, but Cray later adopted similar ideas for his machines. The indefinites of the Cray era lacked defined semantics -- the result of a comparison with an indefinite was unpredictable, for example. The 754 quiet NaN remedied this problem. The signaling NaN was a later addition, introduced as a political compromise to support extensions to the arithmetic -- namely UN and OV -- that some committee members at the time thought necessary.
Additional meaning for NaNs is application-specific, and hard to discuss in a general context. An example, though, might be a search routine which returns a specially decorated NaN to indicate that the routine could not find the desired value. In the absence of other meaning, the most interesting information about a NaN is how it was generated. Kahan restated his belief that NaNs should be used to propagate information about the conditions of their creation for the purpose of retrospective diagnostics.
After Kahan's historical note, Jeff Kidder presented his proposal for the treatment of NaNs. Kidder's slides, available online, described the features of this proposal. Kidder proposed that the bits of the NaN significand be used as sticky flags, one for each condition that might generate an invalid exception. When a NaN was generated, some of the flags would be set; operations which received two NaNs as input would OR the flag sets together. For example, a NaN generated from sqrt(-1) + 0/0 would have bits set to indicate that its generation involved both an invalid division and the square root of a negative number. The standard would specify functions to set, read, and test the flag sets for each NaN.
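The OR-merge behavior described above can be sketched in a few lines. This is an illustrative model only: the flag names and bit assignments below are hypothetical, not the ones in Kidder's slides, which these minutes do not reproduce.

```python
# Hypothetical flag assignments; the actual bit meanings in Kidder's
# proposal are not recorded in these minutes.
FLAG_ZERO_DIV_ZERO = 1 << 0   # NaN arose from 0/0
FLAG_SQRT_NEGATIVE = 1 << 1   # NaN arose from sqrt of a negative number
FLAG_INF_MINUS_INF = 1 << 2   # NaN arose from inf - inf

def merge_nan_flags(a: int, b: int) -> int:
    """Under the proposal, an operation on two NaN inputs ORs together
    the sticky flag sets carried in their significands."""
    return a | b

# sqrt(-1) + 0/0: the sum's NaN records both originating conditions.
sqrt_nan_flags = FLAG_SQRT_NEGATIVE
div_nan_flags = FLAG_ZERO_DIV_ZERO
sum_nan_flags = merge_nan_flags(sqrt_nan_flags, div_nan_flags)
assert sum_nan_flags == FLAG_SQRT_NEGATIVE | FLAG_ZERO_DIV_ZERO
```

Because OR is commutative and idempotent, the merged flag set is the same regardless of operand order, which is the commutativity property discussed below.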
Kidder's proposal was motivated by the desire to get the most value for the least hardware cost. In previous meetings, the committee indicated that commutative binary operations with NaN inputs should also be made commutative with respect to information carried in the NaN payload. A natural way to achieve such commutativity would be to return the NaN with the greater significand. However, Kidder observed, the x87 approach of using microcode to implement this behavior was a death knell for high performance; and in a previous meeting Eric Schwarz stated that he would be unwilling to implement in hardware any NaN propagation rule which involved something as complicated as a carry. Kidder also noted that the specific bit meanings he proposed were a superset of the meanings used by SANE for the same purpose.
Kidder's proposal proved controversial. Kahan observed that IBM also used NaN significand bits as flags in one of the IBM 360 series machines. Because there was no authority to ensure consistent use of the bits, though, users created conflicting mappings, and the whole scheme devolved into confusion and disuse. Kahan and Jason both stated a strong preference for a scheme to indicate where a NaN was generated, not simply the type of operation which generated it.
Ivan Godard initially objected not to the encoding, but to the merge mechanism, which he characterized as an elegant overkill. NaNs used to carry diagnostic information will be treated much like compiler error messages -- programmers will fix the first one they see, then ignore the rest and re-run the code. Godard questioned the utility of commutativity in this circumstance, stating that it would be preferable simply to propagate one of the NaNs rather than to try to merge the NaN information.
Jim Thomas thought the proposed mechanism would burden compiler writers, who would have to reason about whether an optimization might violate the semantics of NaN propagation.
After much heated debate and a brief break, Kidder suggested that we use a slight variation on the current standard: an operation on multiple NaN operands shall return one of those operands as a result, though the standard will not specify which; there shall be functions to encode and decode the NaN significand; and there shall be a quieting function. Zuras pointed out that we agreed in previous meetings not to change the body of the text in a way that breaks upward performance compatibility: text which says operations with NaN inputs shall return one of the inputs must go in the appendix, since the 1985 standard only says operations should return one of the input NaNs.
In later discussion, we refined the proposal. An arithmetic operation with multiple NaN operands shall return one of those operands, except that there may be a change in sign. The standard will not say which NaN will propagate, nor which sign will be produced; nor will the standard dictate which NaN the hardware should generate when a NaN is first created by an invalid operation. There shall also be functions to insert and extract the significand of the NaN.
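The proposed insert and extract functions might look like the following sketch for binary64, assuming the payload occupies the 51 significand bits below the quiet bit. The function names and payload layout are illustrative assumptions, not text from the draft.

```python
import math
import struct

QNAN_BITS = 0x7FF8000000000000      # exponent all ones, quiet bit set
PAYLOAD_MASK = 0x0007FFFFFFFFFFFF   # 51 payload bits below the quiet bit

def nan_with_payload(payload: int) -> float:
    """Insert a payload into the significand of a quiet binary64 NaN.
    (Hypothetical name; the draft's function names are not recorded here.)"""
    bits = QNAN_BITS | (payload & PAYLOAD_MASK)
    return struct.unpack("<d", struct.pack("<Q", bits))[0]

def nan_payload(x: float) -> int:
    """Extract the payload bits from a binary64 NaN."""
    bits = struct.unpack("<Q", struct.pack("<d", x))[0]
    return bits & PAYLOAD_MASK

n = nan_with_payload(0x42)
assert math.isnan(n)
assert nan_payload(n) == 0x42
```

Note that this round trip works only so long as the payload survives intermediate operations, which is exactly what the propagation rules above are meant to guarantee.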
This proposal met with mixed success in a straw vote. When taking the proposal as a whole, there were seven votes for and seven against; when taking the proposal without the operations to extract and insert the significand information, there were twelve votes for and four opposed. The exact nature of the payload was the source of most of the controversy that was voiced: Kahan asserted that without such functions, there is little point to distinguishing NaNs; Zuras and Thomas voiced concerns about the portability of the insert and extract functions; and Eric Schwarz expressed concern about including in the standard any constructs which involve bit manipulation more readily carried out in the integer pipeline than in the floating point pipeline. There were a few other concerns: Bindel observed that propagation of NaN information is difficult during conversion from a more to a less precise type, and Darcy objected to the lack of bitwise predictability due to allowing the implementation to choose which NaN to propagate. We will revisit the debate at next month's meeting.
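Bindel's conversion concern comes down to a size mismatch: a binary64 NaN carries 51 payload bits below the quiet bit, while binary32 has room for only 22, so a narrowing conversion cannot preserve an arbitrary payload. The sketch below only demonstrates the mismatch; which bits an implementation keeps is not specified.

```python
import math
import struct

# A binary64 quiet NaN with a 51-bit payload (an arbitrary example value).
bits64 = 0x7FF8000000000000 | 0x123456789ABCD
d = struct.unpack("<d", struct.pack("<Q", bits64))[0]
assert math.isnan(d)

# Packing as "<f" narrows the double to binary32 in C, as a conversion
# instruction would. The result is still a NaN, but at most 22 payload
# bits can survive; which ones do is implementation-defined.
bits32 = struct.unpack("<I", struct.pack("<f", d))[0]
assert (bits32 >> 23) & 0xFF == 0xFF   # exponent all ones: still NaN/inf
assert bits32 & 0x007FFFFF != 0        # nonzero significand: still a NaN
```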
We also revisited the question of signaling NaNs. On the thesis that no compelling portable application of signaling NaNs exists, Hough proposed to allow implementations to have no signaling NaNs. More specifically, he proposed that we deprecate signaling NaNs by including text in the appendix which states that implementations should not have signaling NaNs -- though we will still leave the description of the behavior that signaling NaNs shall have if an implementation provides them. Hough's suggestion met with little immediate resistance.
Joe Darcy expressed twin concerns: first, that we are specifying details outside our scope; and second, that we are providing language designers with insufficient rationale, and so will continue to fail to convince them to support features like floating point flags and modes. What are the killer applications? What are the end-user facilities we wish to support? It seems that much of our discussion is focused at too low a level: for example, an extended-exponent type is easier to explain than counting mode, and may be a more natural concept. As average floating point competence seems to decline over time, language designers will be increasingly undermotivated to do the work to support low-level facilities presented without a sufficient background of rationale and prior use.
Zuras responded that Coonen's papers are an excellent initial source of information, and various others pointed out that the entire purpose of the purple prose in the draft is to document our rationale as we go. Though those who spoke generally agreed that high-level specifications are useful, and that we should try to engage the language community in order to get better support for floating point concepts, there was some confusion over what form of rationale Darcy had in mind.
Before the April meeting, there will be a meeting of the microprocessor standards committee. Dan Zuras requested feedback from the committee on two points. First, it seems unlikely that we will make our December deadline. When should we say we expect the standard to be ready for a vote? The calendar works by quarters, so the options were either next March or next June; it seems likely that we will not be ready until next June. Second, Zuras intends to ask whether it is possible to have a split sense of conformance to the standard: implementers may either conform to the entirety of the new floating point standard; or they may conform at a lower level to the 1985 standard plus the changes we have made in the main text, which do not break upward performance compatibility. It is unclear how such a split level of conformance would be implemented procedurally.