The IEEE 754R committee met at UC Berkeley
on May 29, 2003. In attendance were
 Dan Zuras, self
 Joe Darcy, Sun
 Leonard Tsai, self
 Mark Erle, IBM
 Eric Schwarz, IBM
 Jason Riedy, UC Berkeley
 Ren-Cang Li, Univ. of Kentucky
 Jim Thomas, HP
 John Harrison, Intel
 Jeff Kidder, Intel
 Alex Aiken, UC Berkeley
 Yozo Hida, UC Berkeley
 Richard Fateman, UC Berkeley
 W. Kahan, UC Berkeley
 Dick Delp, self
 John Hauser, self
 Peter Markstein, HP
 Peter Tang, Intel
 Mike Cowlishaw, IBM
 Don Senzig, self
 Michael Parks, Sun
Note taker: Jeff Kidder, Intel Corp.
Meeting started at 1:10 (In the grand UCB Tradition).
Mike Cowlishaw (IBM) gave a 20-minute overview of Decimal Datatypes.
 Recommended including the round-half-up rounding mode in the standard.
 A fine point in division, when to stop: at a zero remainder, or after
p digits followed by rounding.
E.g., should 4.0 / 2 be 2 or 2.0?
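Python's decimal module, which implements the General Decimal Arithmetic design Cowlishaw was describing, can be used to check the division convention (a sketch, not part of the minutes; the module's rule is that an exact quotient takes the "ideal" exponent, dividend exponent minus divisor exponent):

```python
from decimal import Decimal

# Exact division stops at a zero remainder; the quotient takes the
# "ideal" exponent: exponent(dividend) - exponent(divisor).
print(Decimal("4.0") / Decimal("2"))    # -1 - 0    = -1, prints 2.0
print(Decimal("4.0") / Decimal("2.0"))  # -1 - (-1) =  0, prints 2

# An inexact quotient is instead carried to p digits and rounded; an
# exact one may need more digits than either source (1/8 = 0.125).
print(Decimal("1") / Decimal("8"))      # prints 0.125
```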
Discussion:
 Not normalizing the exponent
 For some constituency, a number conveys something beyond its value.
Kahan interprets this as a way of providing a datatype for each kind of
fixed-point data.
Call for topics:
 Goal for the day: settle the arithmetic
 Result not normalized
 Result is value + Format
 NaN's and Conversions
 Mixed Operations
 Decimal types in languages
 Expression Evaluation
 What other operations are supported (ILOGB, nextafter) and how
 Names
 If the format matters, how do we get the format
 Exposing bits
 NaNs as results (invalid DPD codes, etc)
 Should infinite results be fully defined
 Should rescale (or other operations) be added to 754r
 New flags/classes needed? Normalized? Extreme exponent?
 String conversion (e.g., "00.0")
Unnormalized Results: (a summary of collected comments)
 Should the IEEE 754r only address the values and not the formats?
 The argument is that this (format) information is needed.
 Cannot just turn it into a language problem.
 One of the applications of interest is database applications.
 Databases could just store (value, format)
 These formats may well be used for other applications (such as dates)
 Want to have the ability to say "here is the format I expect" and
either warn me or coerce it.
 How do we deal with the case (in languages) where someone receives
unexpected results

 Want to catch these errors early?
 How do I add an assertion that the format meets some field
expectation?
 Could provide a normalize instruction for scientific folks
 Can we get the scientific people to decide whether there is a way to
get the right value and still provide the features that IBM is looking
for?
 How do you define the preferred scale factor for sqrt (or cube root,
hcos, ...)?

 Exact reciprocals can have 3 times as many digits as the source (e.g.,
1/8 = 0.125)
 Should discourage formatting
 Some users need to follow rules that don't follow general mathematical
reasoning.
 The role of standardization is to introduce formality, rigor, ... to
prevalent approaches.
Examples:
 1.234 + 3.456 = 4.690
 1.2 * 1.2 = 1.44
 0.0123 + 0.0 = 0.0123
 0.0123 + 0.00000 = 0.01230
 sum(100 numbers with k digits past the decimal point) which total 100
should give 100 with k 0s past '.'
 $1.29 * 10 = $12.90
 rescale(1.23456, 3) = 1.235
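These examples behave as listed in Python's decimal module, which follows the same unnormalized rules (a sketch, not part of the minutes; the $ signs are dropped, and rescale appears under its later name, quantize):

```python
from decimal import Decimal

# Addition takes the smaller (finer) exponent of the operands,
# so trailing zeros in the operands are preserved:
assert str(Decimal("1.234") + Decimal("3.456")) == "4.690"
assert str(Decimal("0.0123") + Decimal("0.0")) == "0.0123"
assert str(Decimal("0.0123") + Decimal("0.00000")) == "0.01230"

# Multiplication adds the exponents:
assert str(Decimal("1.2") * Decimal("1.2")) == "1.44"
assert str(Decimal("1.29") * 10) == "12.90"

# 100 two-digit amounts summing to 100 keep the two digits:
assert str(sum([Decimal("1.00")] * 100, Decimal("0.00"))) == "100.00"

# rescale(1.23456, 3), under the later name "quantize":
assert str(Decimal("1.23456").quantize(Decimal("0.001"))) == "1.235"
```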
Approaches:
 Great danger with self-formatting data; need to mitigate
 Normalize the data all the time, but let the customers get what they
want
 Deal with string manipulation rules
 The standard should only specify the value and not the format
More comments:
 A standard isn't a standard if it doesn't specify what you get (the
format)
 What if we allowed faster versions
 We are trying to reduce not increase implementer choices
 One of the functions that exposes the format is printf
 The proposed rules are at least as sensible as what people do, and
more like what they do than normalizing. But users would still get a
reproducible result. If we have redundant representations, we need a
full specification.
 What do we do when the result is not perfect (differs from target
exponent)?

 Rounding, too many digits, false exponent overflow
 There is more than one "3"
 To formalize the arithmetic we could consider "business values" as
distinct from the real-number values that they map to. This mapping is
often many-to-one.
 We have an ability to avert a great deal of heartache if we fully
specify the result.
 If the standard only specifies the real values (and the sign of zero),
then there will likely be implementations claiming IBM compliance
beyond IEEE 754r compliance.
 Better to talk about "scale" than "format"
 We agree that the real value of a result will depend only on the
values of the inputs, not on the choice of scale.
 What is the nextafter function?
 Is it our intention that printf depend on the scale of the source?
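The "more than one 3" point is visible in Python's decimal module: members of the same cohort compare equal but carry different scales, and string conversion does depend on the scale of the source (a sketch, not the committee's notation):

```python
from decimal import Decimal

three, three_00 = Decimal("3"), Decimal("3.00")

assert three == three_00                 # same real value
assert not three.same_quantum(three_00)  # different scale (exponent)

# printf-style output exposes the scale of the source:
assert str(three) == "3" and str(three_00) == "3.00"
```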
Proposal: specify the scale of the result of an operation as a function
of the value and scale of the sources. The numeric value of an
operation depends only on the numeric values of the sources, the type
of the destination, and the rounding control. The overflow and
underflow thresholds are functions of the destination type, and the
decision to over/underflow depends only on the values of the sources.
A vote was held on the proposal:
 15 for
 1 against
 1 abstained
The proposal was carried.

For sqrt we need to define the scaling. sqrt(1.00) = 1.0. Is sqrt(1.0)
= 1. or 1.0?
 Would like to have a high-performance way of asserting that an
operation has some expected scaling (or other classes).
 God save us from traps
 Should be able to detect a normalized source  might be
 Could have a "rounded" flag or "displaced" flag, set if the result
exponent differs from the intermediate exponent.
 So if someone wants to specify a scaling.
"Rescale":
 Rescale(x, e) can raise "round" (and possibly "inexact") if digits go
off the bottom
 Nobody likes the name
 "Coerce" might be a better name. "Round"? But can also lengthen.
"Format" scale.
 assertScale(x, e) would be a predicate form that wouldn't set global
state. It amounts to a compare-equal on the exponent:
assertScale(x, expected) is like assert(exponent(x) == expected)
 Could also have assertScalesEqual(x, y)
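In Cowlishaw's later specification (and in Python's decimal module) the rescale operation survives under the name quantize, and same_quantum is a flag-free predicate close to the assertScale idea. A sketch, not the committee's final design:

```python
from decimal import Decimal, localcontext, Inexact, Rounded

x = Decimal("1.23456")

# rescale(x, 3): force three digits past the point, rounding as needed.
q = x.quantize(Decimal("0.001"))
assert str(q) == "1.235"

# Digits went off the bottom, so the context flags record it.
with localcontext() as ctx:
    ctx.clear_flags()
    x.quantize(Decimal("0.001"))
    assert ctx.flags[Inexact] and ctx.flags[Rounded]

# assertScale as a predicate that sets no global state:
assert q.same_quantum(Decimal("0.001"))      # exponent(q) == -3
assert not x.same_quantum(Decimal("0.001"))  # exponent(x) == -5
```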
At 4:50 the question of breaking for the day was raised.
Dinner options were considered. 12 or so.