[note: I didn't keep track of the repeated discussions well, so there are likely gaps. Send any corrections to Jason Riedy (filling in for David Bindel).]
[TODO: Update attendance, normal header stuff]
David Bindel was unable to attend due to academic activities, and Jason Riedy could not recall the changes made, so the November minutes remain unapproved.
David Hough proposed that we delay revisiting the scope and purpose; pending discussions may affect our direction. Bob Davis expressed concern over changing the scope and purpose yet again.
After mailing-list conversations, the fused multiply-add (FMA) issue was considered the least controversial, so FMA was bumped to the head of the issue list.
In principle, not a controversial topic. There was a great deal of tiny word-smithing involved, and I didn't record every comment. Dick Delp proved very adept at finding the worst possible parsings.
Peter Markstein asked why we still admonish against operations that produce a lower-precision result. Hough responded that we simply haven't decided what to do. Dan Zuras wondered if we should simply leave discussion of those operators out, or should we proclaim "thou shalt not" have these? Markstein noted that they exist, even if prohibited, so the proclamation would be fruitless. Hough noted that the change log entry is just rationale for the old footnote, as remembered at the previous meeting.
Jim Thomas asked if there will be a bibliography. David Hough responded that it could be an appendix, but someone needs to write it.
Markstein asked if the footnote contradicts the ability for precision control to narrow results. Dan Zuras noted a subtle distinction. Everyone else noted that they didn't understand Dan Zuras. General consensus was to accept the wording for now.
David Hough slipped a proposal into his draft. In an effort to replace all occurrences of 'denormal' with 'subnormal', he created the word 'supernormal'. He rephrased the clauses determining detection of underflow to specify _what_ should be detected rather than _how_ it is to be detected. In doing so, he replaced "denormalization loss" with "supernormal loss" and caused a great deal of consternation. The confusion was resolved when Jim Thomas noted that the root 'normal' was being used for fundamentally different purposes. Using 'extraordinary' and 'ordinary' rather than 'supernormal' and 'normal' made everyone happy.
Rick James wondered if we couldn't remove the labels altogether, but others believed it better to provide names for discussion.
Joe Darcy noted that there is a difference between the fused multiply-add operation and non-explicit contractions of a*b+c. This difference was remembered and forgotten frequently throughout the discussion.
Prof. Kahan reminded the group that the exact result exists in an algebraic setting before rounding occurs. The algebraic completion included in 754 ("over his dead body") includes 0 * Inf => NaN. This result is generated exactly, so 0 * Inf + Inf => NaN + Inf => NaN.
David Scott asked about Inf * 0 + NaN. By the reasoning above, this will generate a new NaN, although it needn't. Confusion between the atomic operator and the expression appeared again, but general opinion was that this should neither return a new NaN nor raise invalid.
Dick Delp noted that multiplication's exceptional case requires commutativity, and that we don't explicitly require it. Need we say the multiplication in fma is commutative? Others responded that the arithmetic produces the commutative exact result correctly rounded, so we don't. But then NaNs appeared, and those break commutativity in most implementations. The solution, for now, is to say both inf * 0 and 0 * inf raise exceptions.
Dan Zuras brought up the often raised point that some people only want the a*b+c form, not +/- a*b +/- c. After much discussion, we decided that we had already agreed on only having a*b+c.
Jason Riedy kept asking for a "shall provide" rather than a "should provide". C99 already requires an fma, so there's little harm done. Alex raised the point that software implementations are much trickier than they appear, and lack of speed kills, but otherwise had no objections.
Dan Zuras asked (many times) if the proposal should be accepted. Jason Riedy pointed out that every single piece has been or will be changed, so accepting it would be silly.
Dan Zuras wants the alternate sign versions included to prevent funny versions (like Power's) that don't round correctly. Peter Markstein noted that Power only gets +/- (a*b+/-c) wrong, and that's because it treats it as two operations.
But Dan wants to round expressions, not operations, and had thought that was what the old standard meant. David Hough brought up the example of z - sqrt(x). Who's responsible for which rounding? Dr. Kahan wanted the example modified to z - sin(t) to make it more confusing. He also mentioned many tricks used to make expressions more amenable to directed roundings.
Dan still wants directed rounding of expressions, but realizes it's more painful than he had thought. Jim Thomas noted that what he really wants is not to be second-guessed by the compiler.
Jim Thomas also asked if fma should be required for all levels of implementation. Different precisions may be provided, but not different operations. Others noted that fma isn't harder than correctly rounded binary-decimal conversions, and that it needn't be implemented in hardware. Embedded systems that don't use fma can simply not link it into the final image.
Much word-twiddling followed; see the updated draft (private area).
There was a great deal of word-twiddling surrounding how the sign of an exact zero is determined. Everyone wanted it to be the same as a*b+c, but getting the phrasing correct seems difficult.
Getting this 'correct' also involved much word-twiddling.
The latter was discussed and somewhat decided many times, but I guess it lives on.
The meeting schedule was firmed up. See the schedule.
International participation seems limited to Mike Cowlishaw, so his schedule is the primary influence for future meeting times.
David Hough proposed moving the recommended functions into the main body and requiring them. C99 requires many of them. Prof. Kahan believes they deserve more thought; lack of thorough thought left them in an appendix previously.
Everyone seems to agree on including the quiet functions, ones that are allowed or required never to signal.
Collected quick notes:
Consensus was to accept this portion and move these functions into the main text.
David Hough also proposed a nextup function, essentially nextafter(x, +Inf). It seems to be the most common case and could be implemented in hardware. Others think it an optimization, like fma(-a,b,c) becoming fnma(a,b,c). Kahan noted that it adds an extra name but no extra meaning.
The predicates isfinite, isinfinite, etc. cover common cases and can be implemented more efficiently than the full classify. Hough had not been aware of hardware classify implementations such as IA64's or IA32's.
Kahan asked if issigned (or signbit) could perhaps return 1 or -1, but others preferred a boolean value.
Jim Thomas noted that C99 has all the proposed ones except issubnormal and issignalling, and he recommended using the C99 names. classify is more difficult to use because there is no standard way to return the results.
Consensus: Add the predicates, and re-examine the classify function later.