I'm really glad you have those notes, Dan. All my earlier archives were retained at the company when I left Unisys (and gainful employment) in June of 2006, and anything I might have had in my AOL personal E-mail was trashed by AOL about three years ago. The only records I have are the INCITS/J4 papers I submitted.
By the way, as far as I'm concerned, COBOL doesn't have a problem with unnormalized floating-point numbers. As long as the values are accurate, we don't care. Normalize, don't normalize; it's not much of a language issue. We suggest normalization, and the intermediate data items are maintained in normalized format, but as to the values in data, and even to the values in numeric literals, +0.0005e+4 is just as good as +5 in decimal arithmetic.
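To make that concrete: in Mike Cowlishaw's General Decimal Arithmetic model, which Python's decimal module follows, a value has a whole "cohort" of equally valid unnormalized representations, and arithmetic treats all of them as the same value. A minimal sketch (Python here purely as an illustration, not anything COBOL-specific):

```python
from decimal import Decimal

# Two members of the same cohort: equal in value, different in form.
a = Decimal('5')
b = Decimal('5.000')

print(a == b)           # True: the values compare equal
print(str(a), str(b))   # 5 5.000 -- the representations differ
print(b.normalize())    # 5 -- normalization picks the canonical member
```

Whether an implementation stores the normalized or an unnormalized member is an encoding detail; the arithmetic results are the same either way, which is all COBOL cares about.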
You indicate "Alas, by December 2006 the only compromise both sides would consider was having both formats in the standard." That tells me "this is still a raging controversy that is unresolved. COBOL should not make a move until it is." I had retired by that point, and didn't receive any notice or invitation to participate in the IEEE 754 discussions from then on. All of my participation in the COBOL effort since June 2006 has been "pro bono publico". The next thing I heard on the subject was that IEEE Std 754-2008 was out; I obtained a copy of it, and lo and behold, it has both decimal and binary encodings specified for the same format.
It appears from your discussion that the decision to call both encodings the "same format", rather than having two separate formats (e.g., decimalB128 and decimalD128), was also a political compromise within the IEEE 754 working group. It would have been much easier for COBOL had they been handled as separate formats, but I wasn't given the opportunity to raise that point. Had I been invited to participate after my retirement from Unisys, I'd have raised that point and argued for separate denotations in the discussions. But I wasn't. And for COBOL to be forced to note that, say, FLOAT-DECIMAL-34 and FLOAT-DECIMAL-B-34 are, as far as IEEE Std 754-2008 and its working group are concerned, REALLY exactly the same format (decimal128), just interpreted in different ways, is not something that makes sense to me, nor do I think it would make sense in the COBOL world.
Also, I don't think I ever received a copy of a draft containing Peter Tang's proposal, or a copy of the proposal itself. My concern about any binary encoding was the painful experience for COBOL that binary128 encodings do not exactly represent every precise decimal value in the same range, and vice versa, so the very idea appeared to be a compromise distasteful to COBOL. Given a copy of the approved draft in the form of the published standard, I can now see the advantages of both encodings, and can now trust that each can exactly represent decimal values within the permitted magnitude values. I didn't have enough details on the binary encoding to verify to my satisfaction that the representation of decimal values in binary form was indeed specified as exact until I saw the published version.
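The underlying worry is easy to demonstrate. A binary format (binary64 below, standing in for binary128) cannot represent most decimal fractions exactly, whereas a decimal format holds them exactly regardless of whether its bits are laid out as DPD or BID: the encoding changes the bit pattern, not the set of representable values. A small Python illustration:

```python
from decimal import Decimal

# Binary floating point: 0.1 has no exact binary representation,
# so the familiar identity fails.
print(0.1 + 0.2 == 0.3)   # False

# Decimal floating point: the same values are represented exactly,
# whatever the underlying encoding (DPD or BID).
print(Decimal('0.1') + Decimal('0.2') == Decimal('0.3'))   # True
```

This is why discovering that BID is an encoding of decimal values, not a binary approximation of them, resolved the concern.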
Now, remember, I first got a concrete answer on the resolution of the binary/decimal controversy in February 2009. The approval for IEEE Std 754-2008 came about on June 12, 2008 (I don't know when the approved version was actually published and made available for distribution). The original target publication date for the new COBOL standard was July 2008. The COBOL working group was under a lot of pressure to get a standard out MUCH MORE PROMPTLY, given that it had been seventeen years (with two amendments) between the last two (1985 and 2002). We verified that the decimal encoding "still worked" for COBOL, and made the tweaks necessary in the draft to state that that's what COBOL expects and uses. Given the time pressure on us, that was as much as we had the resources to do.
I don't know what the formal review processes are for IEEE, but I can tell you they take a WHOLE LOT of time and energy in ISO/IEC JTC1/SC22. I'll spare you the details, but I can say with reasonable certainty that if a major controversy was still raging in December 2006 in the COBOL standards committees without a CONCRETE, SPECIFIC resolution acceptable to all hands, there is no way we could have gotten a standard published by November 2008. The final review process takes too long.
Again, we are in the final cleanup and approval process for the COBOL standard. The drafts have been publicly available all along. I am not prepared to delay the publication of that standard for a period of years on this point. Are you saying that we should? Are you saying that COBOL's decision to stick, for THIS revision, to the decimal encoding is sufficiently offensive to you, personally, that you will argue that your implementation WILL NOT support it? That strikes me as very much like "But you didn't do it OUR way!!". One CAN choose to support the current COBOL mode. It is not as efficient as you would like, but it can be done, under ARITHMETIC IS STANDARD-DECIMAL. If at some point COBOL supports the binary encodings, and decimal-in-binary arithmetic, you can make use of that, through a clause like ARITHMETIC IS STANDARD-DECIMAL-IN-BINARY, and gain the efficiency you wish.
It looks to me like a set of features parallel to what's in the draft (new USAGEs such as FLOAT-DECIMAL-B-16 and FLOAT-DECIMAL-B-34, a new arithmetic mode specified by something like ARITHMETIC IS STANDARD-DECIMAL-IN-BINARY, and new functions like ENCODE-INTO-BINARY and ENCODE-INTO-DECIMAL) would probably cover all the issues and provide you what you want.
But I know I won't have the resources to complete a proposal to add those to the draft in any reasonable timeframe relative to the approval process. I regret that that meets with your intense displeasure and disapproval, and it's unfortunate that you find it so offensive that I didn't think about dealing with the dichotomy before I (or apparently anyone else in the COBOL effort) knew the formal specification of that dichotomy. But now is not the time to damage or destroy ALL the work (and not just IEEE floats) we've done on the grounds of "woulda, coulda, shoulda". What's there is good, what's there is useful. We can make it better, but absolute and incontrovertible perfection is the purview of the Almighty, not humans or their efforts.
There is a strong possibility that a proposal to AMEND the soon-to-be-published standard (rather than wait for the next revision, as was done with the Intrinsic Function amendment of 1989 to 1985 COBOL) would be not only feasible but well-received, but I don't know how arduous the amendment process is these days. As I have time and resources, I am in fact inclined to get started on just such a proposal, and will be discussing that possibility with the COBOL working group at my next opportunity. I'll be out of commission for a while, right in the middle of this controversy as well as in the middle of the final standardization process, and that means I won't get it done for a while. Much more I cannot do for the immediate future.
> To: charles.stevens@xxxxxxxx
> CC: stds-754@xxxxxxxxxxxxxxxxx; forieee@xxxxxxxxxxxxxx
> From: forieee@xxxxxxxxxxxxxx
> Subject: Re: Two technical questions on IEEE Std 754-2008
> Date: Thu, 24 Feb 2011 15:44:44 -0800
> > From: Charles Stevens <charles.stevens@xxxxxxxx>
> > To: <forieee@xxxxxxxxxxxxxx>
> > CC: IEEE 754 <stds-754@xxxxxxxxxxxxxxxxx>
> > Subject: RE: Two technical questions on IEEE Std 754-2008
> > Date: Thu, 24 Feb 2011 13:21:57 -0700
> > . . .
> > I agree that if we had known back in 2006 or 2007 that this was
> > an issue we needed to consider, we should have done something
> > about it -- most probably separate USAGE clauses for the binary
> > encodings of the two IEEE decimal formats we support, a new mode
> > of arithmetic, and a set of intrinsic functions to convert items
> > from one encoding to another. But we didn't. In fact, the first
> > record I have about "binary encoding" is from February 2009, after
> > the 754 standard was published.
> > . . .
> > -Chuck Stevens
> Oh, Charles,
> You forget who you are talking to. I was the chairman.
> I kept notes of those meetings.
> You WERE there. You DID know.
> You were not there back in 2002 & 2003 when Mike Cowlishaw
> took on the whole of the binary floating-point community &
> convinced them that decimal was not only sensible but necessary.
> After a year & a half of negotiations over the details of
> decimal floating-point, he spoke at a remarkable 2-day meeting
> in Berkeley. During that meeting he took on an overflowing
> room full of binary floating-point experts who started out
> 100% against him. And he beat them down. In the end, we
> all came to an agreement.
> BTW, the sticking point was unnormalized numbers. They have
> been an anathema in the binary world for at least as long as
> Cobol has existed (prior to the IBM 709). Having eliminated
> them in 754-1985, we thought we would commit a great sin
> against the future if we allowed them back into the world.
> But he (like you) was mostly interested in the number format.
> And we (I count myself among them) couldn't care less about
> the format but needed consistent & correct arithmetic.
> When both sides agreed on that, we had our compromise.
> Mike did all of that work primarily for your benefit. You
> were not there at the time so I tell you this now to let you
> know how important his work was in changing the nature of 754
> to accommodate Cobol & the rest of the decimal world.
> Flash forward to 14 July 2005. The meeting was held at HP in
> Palo Alto. There were 18 people in the room. And 5 people on
> the phone. You were one of them.
> One of the remarks I attributed to you was:
> For all of what Chuck outlined, he had to admit that
> there is only one implementation that only partially
> implements 'standard arithmetic'. Perhaps that will
> be improved in the near future.
> I guess nothing much has changed in the last 6 years.
> Anyway, this was the meeting in which Peter Tang outlined
> Intel's plan for a binary encoded form of decimal arithmetic
> (later to be known as BID). It was not the first we had
> heard of it but this was the meeting that dropped the bomb.
> Peter spoke for 45 minutes. Using the results from Mike's
> own benchmarks, he made a good case for why his format was
> a good one for an otherwise binary machine that supported
> decimal arithmetic.
> Mike delivered a rebuttal supporting DPD that also lasted
> 45 minutes.
> At the time we all believed there was room for only one
> format in 754. And the flexibility that Mike demonstrated
> 3 years earlier was no longer available to him. By this
> time he had gone to silicon.
> When Mike was done I spoke for 15 minutes. As I was
> speaking at the time the thrust of my remarks didn't make
> it into my notes but I recall proposing a sort of Millennial
> Encoded Decimal as a compromise. You see, anticipating
> this conflict, I had spent the previous weeks outlining
> the design of circuits that worked with 10x10-->20 bit
> multipliers as the basic unit. I showed that they were
> not significantly harder to decode than DPD & also not
> significantly slower than binary. It was my poor attempt
> at offering a compromise.
> In the end, nobody was interested in my compromise.
> Well, my notes go on to say: "At which point, a very
> lively discussion ensued."
> For those of you who have read my notes in the past, the
> phrase "a lively discussion" is chairman diplo-speak for
> "we yelled at each other". It gets to "very" when we call
> each other names. Fortunately it never got to "spirited".
> That's when "yo momma" gets involved.
> Still we argued well past dinner time. I recall telling
> Peter in a private moment that I really wished he was an
> idiot. And explaining that if he were an idiot I could
> ignore all this crap & go back to the business of the
> standard. But Peter is NOT an idiot & his proposal had
> merit. So did Mike's point of view.
> And there it sat unresolved until, in the fall of 2005,
> it was clear that both sides were intractable & unwilling
> to come to any compromise.
> So, after consulting with the IEEE, in September of 2005
> I declared the revision effort to be at an impasse &
> suspended further meetings until this issue was resolved.
> I asked Jeff Kidder (of Apple at the time) to act as a
> neutral arbiter between the two sides & awaited results.
> I attended these meetings myself. Joe Darcy hosted one
> meeting at Sun in Santa Clara on 16 November 2005. You
> were one of the people on the phone. I attribute a few
> remarks to you. For example:
> Chuck claimed that Cobol can conform to 754 by
> implementing only what his customers need. Cobol
> is not interested in some of the features of 754.
> Our position is that none of the optimizations
> that we might discuss will apply to Cobol due to
> its rigid syntax. Cobol might support both
> Decimal128 & Binary128 (& conversions) but would
> NEVER support mixed radix expressions.
> Perhaps I should have taken that as a warning. I did not.
> You may blame me for that.
> Anyway, both sides were well hardened in their positions
> by this time.
> I had hoped to scare both sides into talking to each
> other. Instead, both sides brought pressure on the IEEE
> to have me removed as chairman. Since both sides did that
> the IEEE must have thought I was doing my job & left me to
> run things as I saw fit.
> Alas, by December 2006 the only compromise both sides
> would consider was having both formats in the standard.
> Which brings us back to where we are today.
> Now, remember, all this was done for you guys. And you
> were there. It all happened before your eyes. We listened
> to your comments. Had you raised any issues of not handling
> NaNs & infinities it would have made a big stink & made it
> into my notes. But there is no such comment there.
> I realize that none of this history is going to change
> your mind. If the Cobol committee decides to ignore the
> work we did on their behalf, that is one thing. But let's
> not pretend that it was foisted on you without your prior
> knowledge. It is simply not so.