IEEE 754: Minutes for 18 April, 2002

[Editorial note: Unless it's in quotes, it's not a quote. It may be really close to a quote, or it could be completely paraphrased. If conversations seem to go in non-causal directions, well, they probably did. I'm sure I've left pieces out, but not much. Comments and corrections to Jason Riedy.]

Dan began the meeting by noting that we're not going to finish by the previously hoped-for deadline. (The editor cannot remember what that deadline was, as it was hopeless anyway.) To expedite matters, he suggested that we allow presentations to finish without objections; people should only ask for clarifications. Prof. Kahan further suggested that people be assigned to individual topics.

[Attendance needs to be recorded by the person who has the attendance sheet.]


Presentation on decimal arithmetic - Mike Cowlishaw

[Ed: Clarifying questions / notes by slide, then the discussion.]

The slides are available.

Page 3

Delp: So 55% + 43% = 98% of commercial databases don't use binary floating-point at all?

Kahan: Note commercial, not scientific and engineering.

Page 4

Cowlishaw: IBM C on mainframes already has a decimal type.

Page 5

Cowlishaw: The intent is to make decimal a well-known data type.

Page 7

Scott: Knowing how many zeros / digits there are is the important part.

Cowlishaw: The numbers-as-labels point is spurious, but people often use it as an example.

Page 8

Scott: How do trailing zeros affect comparisons?

Prof. Kahan suggested delaying that question until he could relate how the Burroughs B5500 handled its registers.

Cowlishaw: C# and Java support both "equals" and "compare". "Equals" is identity with zeros.
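
[Ed: For readers unfamiliar with the distinction, here is a minimal illustration using Python's decimal module, which implements Cowlishaw's arithmetic specification; the C#/Java method names aside, numeric comparison treats 1.0 and 1.00 as equal, while identity-style equality and the total ordering distinguish their trailing zeros.]

    from decimal import Decimal

    a, b = Decimal("1.0"), Decimal("1.00")
    print(a == b)                        # True: numerically equal ("compare")
    print(a.as_tuple() == b.as_tuple())  # False: representations differ ("equals")
    print(a.compare_total(b))            # Decimal('1'): 1.0 sorts after 1.00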

Cowlishaw: "The UK is part of Europe, I suppose."

Page 9

Scott: Unnormalized numbers do the same thing.

Kahan: The 8087 used unnormalized numbers like this, but the committee chose otherwise.

Page 10

Cowlishaw: Problem with A.1: Converting large binary integers back and forth to decimal format is painful. Rounding is also painful, since it requires counting decimal digits.

Cowlishaw: A.2: 28 digits was an alternative, but 31 digits are traditional. Cobol 2002 wants intermediate results carried to 32 digits.

Page 11

Cowlishaw: Chen-Ho: The main snag is that it works in multiples of 3 digits and requires re-encoding for shifts.

Cowlishaw: Arithmetic is performed by expanding to BCD, working, then re-compressing into registers. This has little performance impact.

(He quoted 3 gate delays for Chen-Ho and DPD, 10 for base 1000 (binary coded millennial, BCM).)
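
[Ed: A sketch of the base-1000 idea, for scale; the names here are the editor's. Three decimal digits fit in a 10-bit "declet" because 1000 <= 2^10. BCM gets there by binary conversion, the slower ~10-gate-delay path quoted above; Chen-Ho and DPD reach the same density with digit-wise Boolean logic instead.]

    def bcm_pack(d2, d1, d0):
        """Pack three decimal digits into a 10-bit declet via base 1000."""
        assert all(0 <= d <= 9 for d in (d2, d1, d0))
        return d2 * 100 + d1 * 10 + d0          # 0..999 fits in 10 bits

    def bcm_unpack(declet):
        """Recover the three decimal digits from a 10-bit declet."""
        return declet // 100, (declet // 10) % 10, declet % 10

    assert bcm_unpack(bcm_pack(9, 8, 7)) == (9, 8, 7)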

Page 12

Delp: You're not implying that the bias has the same value as 754's?
Cowlishaw: It can be.

Page 13

Cowlishaw: Plenty of room for discussion.

Cowlishaw: The last line in C.1 is because unnormalized encodings have many zero values.

Kahan: There is another version between C.1 and C.2, where there is a bit of a mix between tag bits and the exponent.

Discussion

Scott, Kahan, Zuras, and others want to use inefficiencies in the exponent and the padding bits to add more digits somewhere.

Everyone wants a 32-bit format. Many real-life values will fit in it nicely, and people want to fit more into memory. Zuras suggested using BCM and mixing bits with the exponent. Cowlishaw would prefer that the same encoding scheme be used across all widths.

Delp brought up adding more rounding modes. Some people may want to round 0.5 down rather than up. The preferred direction depends on which side of the monetary transaction you're on.
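
[Ed: A minimal sketch of the "which way do halves go" issue, using Python's decimal module: the same exact tie lands differently under half-up, half-down, and half-even rounding.]

    from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_DOWN, ROUND_HALF_EVEN

    cents = Decimal("0.01")
    price = Decimal("2.345")    # an exact tie in decimal, unlike binary
    print(price.quantize(cents, rounding=ROUND_HALF_UP))    # Decimal('2.35')
    print(price.quantize(cents, rounding=ROUND_HALF_DOWN))  # Decimal('2.34')
    print(price.quantize(cents, rounding=ROUND_HALF_EVEN))  # Decimal('2.34')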

Zuras: Unnormalized is the biggest can of worms here.

Kahan comment outline:

Zuras: And...

Kahan:

Cowlishaw wanted to avoid arithmetic for now. The user-selected precision is in a control register.

Zuras: Is there an operation to convert 12345.0 to 12345.00?
Cowlishaw: Yes, via a rescale operation in hardware or software.
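
[Ed: In software today the rescale operation is spelled quantize() in Python's decimal module: it forces the exponent to match a pattern, rounding only when digits must be dropped.]

    from decimal import Decimal

    print(Decimal("12345.0").quantize(Decimal("0.01")))    # Decimal('12345.00')
    print(Decimal("12345.678").quantize(Decimal("0.01")))  # Decimal('12345.68')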

Kahan: Including scale can lead to arcane rules for division.
Cowlishaw: Not in this case. Define one of two rules: division normalizes the result, or it computes to the full precision of the operands (normalizing first).
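
[Ed: For reference, the rule in Cowlishaw's specification, as implemented by Python's decimal module, gives exact quotients the "ideal" exponent (the difference of the operands' exponents), so scale flows through division deterministically.]

    from decimal import Decimal

    print(Decimal("14.0") / Decimal("2"))   # Decimal('7.0')
    print(Decimal("14.00") / Decimal("2"))  # Decimal('7.00')
    print(Decimal("1") / Decimal("3"))      # rounded to context precision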

Delp: Considering 14.0 different from 14 is deceptive, but not necessarily wrong.

Kahan: It is a mistake to pander to anomalies that have at best historical reasons. It should suffice to have guard/round/sticky bits to use in the next rounding, the one that specifies a destination.

Zuras: Can we do it with normalized fp only?

Kahan: Rounding to specific digits is the real issue.

Kahan: We need a four-byte format for interactions with people. What proportion of "commercial data" would fit in a four-byte decimal type?
Cowlishaw brought up XML for some disturbing reason.
Kahan: Cray's customers demanded a half-word format.
Scott: Only need load/store of the smaller format.
Riedy: And many applications rely on tight resources like shared memory, communication buffers, etc. Smaller format fits more in.
Kahan: Only need load/store if you maintain the extra info for rounding (g/r/s).

Zuras et al. went on to speculate about the reasons for 854's failure.

Cowlishaw: Decimal floating-point really needs to be fast. Penalties for pure software implementations range from 100x to 1000x.

Cowlishaw then put up results from a telco benchmark showing that 40-70% of its time was spent in decimal arithmetic. The remaining time was I/O, including conversions from text to decimal.
(A description of the benchmark is now available.)

Zuras: Bottom line: What do we have to provide?
Kahan: The right language, and the right arithmetic.
Zuras: As a committee.
Cowlishaw: Three things I'd like to see:

  1. representation
  2. arithmetic (suggested)
  3. linguistics

Kahan: The first and third cannot be separated.
Hough: It's helpful to endorse formats by putting them into a draft.
Zuras: All the considerations are intertwined.

Kahan: Floating-point is not preponderant in the commercial world, and 854 was floating-point. The convention of determining destination by looking at the operands is poor.

Cowlishaw: About layouts: unnormalized numbers can be disallowed, but the first digit cannot reasonably be eliminated.
Kahan: You lose less than 10% of your values if you allow a 0 initial digit. Binary loses 50%.
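
[Ed: The counting behind the 10% vs. 50% remark, as the editor reads it: with an explicit leading digit, the fraction of significand codes whose leading digit is zero is 1/base, independent of length.]

    for base, name in ((10, "decimal"), (2, "binary")):
        codes = base ** 7             # all 7-digit significands
        leading_zero = base ** 6      # those starting with a zero digit
        print(f"{name}: {leading_zero / codes:.0%} of codes redundant")
    # decimal: 10% of codes redundant
    # binary: 50% of codes redundant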

Kahan: Don't include two zero tests. It's too difficult for programmers to always pick the right one.

On the patent

Cowlishaw:

Delp: This has to get through parent committees.
Cowlishaw: At a minimum, do the base requirement for IEEE. Will look into royalty-free licensing.

Hough: Larger issue: Is this the best encoding?

Kahan: The best four-byte format will likely be different. You can certainly get seven decimal digits and better than a +/-99 exponent range, along with special values.
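
[Ed: A back-of-envelope check of that claim; the arithmetic is the editor's, not a proposed encoding. A sign, seven decimal digits, and exponents -99..+99 need just under 32 bits, leaving roughly 7% of the code space for infinities and NaNs.]

    import math

    bits = 1 + math.log2(10 ** 7) + math.log2(199)  # sign + digits + exponents
    print(bits)                                     # ~31.9 bits
    print(1 - (2 * 10**7 * 199) / 2**32)            # ~7.3% of codes left over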

Cowlishaw: The given encodings have other optimal properties. Copying to longer fields requires only padding.

Kahan notes that his document on naming covers many of the points discussed.

Kahan: The IBM 360 had unnormalized arithmetic. It was handy in conversions between hexadecimal and decimal. It was also used for significance arithmetic...

Cowlishaw: A local (UK) lumberyard will take a sheet specifying lengths of lumber. If you write 14, they'll give you a piece from 13.5 to 14.5. If you write 14.0, you'll get a more precise cut.

This led to a long discussion on denoting precision solely through the number of digits. Vocal consensus was that it's a BAD IDEA. You should be explicit about the "measurement quantum." People still want to use significance arithmetic, but many people feel we shouldn't be bound to make their lives easier.

Kahan: A redundant representation can also save on shifts.

Cowlishaw: We have 18 months to 2 years before committing to a specific format.

The topic became finding the minimum requirement we need a specification to satisfy. Two sub-points:

The encoding is a contentious point. The proposed formats support 33 decimal digits, span a large exponent range, and have two wasted bits. People wondered whether there's a good trade-off between a smaller exponent range and more digits, the same trade-off made in binary. The lower bound of digits necessary for many results is 32; the thirty-third is useful for extra computational precision. Kahan's experience has shown that carrying 34 digits gives far superior results for 31-digit data, so it would be nice to cram another digit in.

Pretty much everyone who participated thinks a decimal format is desirable, but it needs to be a good enough format. Hough's criterion was that the format must be good enough that no other format is clearly better in all ways. Everyone recognizes that there will be no one perfect format.

Work on alternate formats is to proceed on the mailing list. Zuras stated his intent to work on a 32-bit format and to cram one more digit into the 64-bit format. Cowlishaw is to look into the need for a 32-bit format, put the benchmark on-line, and get information on the patent problems.


Contact with other language committees - Jim Thomas

Jim Thomas volunteered to work with Fred Tydeman, who has volunteered to be an interface to the C standard committee.

We still need contacts with other committees. Riedy volunteered Hough to get in touch with the Fortran people at Sun.


Draft review - David Hough

Table 4: Remove all hints of language bindings.

Kahan: Do we want language implementors to provide all variations?

Many people noted that some languages (like C99) provide pretty much all the variations already.

Kahan: The statement before the table mentions that an implementation must provide a "means of logically negating" the comparison. Do we mean negating the operator and then applying it, or applying the operator and then negating the result?

The difference is that negating the operator will change the exceptional condition specification, while negating the result keeps the original operator's exceptional conditions.

Zuras: The latter.
Kahan: The former.
Zuras: The latter.
Kahan: The former.

Thomas: Languages should provide the other cases directly. Not to do so would be a step back from C99.

Kahan: We don't need the signalling variants as badly. They were introduced because no one handled NaNs at all.
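
[Ed: Python's decimal module illustrates the quiet/signalling split: compare() stays quiet on a quiet NaN, returning NaN as the unordered result, while compare_signal() raises under the default context.]

    from decimal import Decimal, InvalidOperation

    x, nan = Decimal("1"), Decimal("NaN")
    print(x.compare(nan))        # Decimal('NaN'): unordered, no signal
    try:
        x.compare_signal(nan)
    except InvalidOperation:
        print("signalling comparison raised on the quiet NaN")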

Accepted the changes. Thomas is making note of needed proposals.

Trapped gross under- and overflow:

Hough: This issue depends on exception handling. I suggest accepting this and revisiting the issue once our exception direction is clearer.
Riedy: Next meeting was set aside for exceptions.

nextup / nextdown:

Thomas: The point of having separate functions is that they have different exceptional behavior?

Hough: Yes. Nextafter raising was a mistake, but changing that would be a gratuitous, small change.

Thomas: Why isn't it worth an exception?

Hough: Normally exceptions are related to inexact results. The next* routines are exact.
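
[Ed: A small check of that exactness using Python's decimal module: next_plus() leaves the Inexact flag clear, while an ordinarily rounded operation sets it.]

    from decimal import Decimal, getcontext, Inexact

    ctx = getcontext()
    ctx.clear_flags()
    Decimal("1").next_plus()           # step to the adjacent value
    print(bool(ctx.flags[Inexact]))    # False: the step is exact
    Decimal(1) / Decimal(3)            # a genuinely rounded quotient
    print(bool(ctx.flags[Inexact]))    # True: rounding set the Inexact flag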

Kahan: Consider a zero finder with the secant method. If the secant doesn't change, you want to bump an endpoint. Exceptions can tell you when you've bumped to infinity. But there are other examples where exceptions aren't warranted.

But a change had slipped into nextafter that made all this silly. The accepted change reverts nextafter and removes the second paragraph in nextup.


Next meeting

To cover exceptions. It will be at NetApp in the afternoon.
