
RE: Two technical questions on IEEE Std 754-2008



The original hopes for COBOL were that the current draft would be made the new standard sometime in 2008.  A variety of circumstances (some procedural, some technical) have conspired to delay it another three years or more. 
 
I agree that if we had known back in 2006 or 2007 that this was an issue we needed to consider, we would have done something about it -- most probably separate USAGE clauses for the binary encodings of the two IEEE decimal formats we support, a new mode of arithmetic, and a set of intrinsic functions to convert items from one encoding to another.  But we didn't.  In fact, the first record I have about "binary encoding" is from February 2009, after the 754 standard was published.
 
As it stands now, according to the COBOL FCD, an implementor that chooses to support the decimal formats and standard-decimal arithmetic, but uses binary encoding of decimal floating-point operands as his "native" mode is going to have to convert the COBOL values from the decimal encoding to the binary encoding, and convert the results back to decimal encoding.  An implementor that chooses to support these features, and uses decimal encoding of decimal floating-point operands as his "native" mode isn't going to have to do either.  We don't FORCE the implementor to support any of it.      
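 
To make the conversion burden concrete, here is a minimal Python sketch of the kind of round trip such an implementor faces.  This is a toy model, not the real IEEE 754 declet or bit layouts: the "decimal encoding" is stood in for by a packed-BCD significand and the "binary encoding" by a plain binary integer, and the function names are illustrative only.

```python
# Toy model (NOT real IEEE 754 bit layouts): the "decimal encoding" is
# modeled as a packed-BCD significand (two digits per byte), and the
# "binary encoding" as the same significand held as one binary integer.
# An implementor whose native mode is the binary encoding must make
# this kind of round trip for every COBOL operand and result.

def bcd_to_int(packed: bytes) -> int:
    """Decode a packed-BCD significand (two digits per byte) to an int."""
    value = 0
    for byte in packed:
        hi, lo = byte >> 4, byte & 0x0F
        if hi > 9 or lo > 9:
            raise ValueError("not a valid BCD byte")
        value = value * 100 + hi * 10 + lo
    return value

def int_to_bcd(value: int, ndigits: int) -> bytes:
    """Encode an int significand as packed BCD, zero-padded to ndigits."""
    digits = str(value).zfill(ndigits)
    if len(digits) > ndigits:
        raise ValueError("significand too wide")
    if len(digits) % 2:
        digits = "0" + digits
    return bytes(int(digits[i]) << 4 | int(digits[i + 1])
                 for i in range(0, len(digits), 2))

# Round trip: decimal encoding -> native binary mode -> decimal encoding.
operand = int_to_bcd(1234567, 8)   # COBOL hands over the decimal encoding
native = bcd_to_int(operand)       # convert in: 1234567 as a binary int
result = native * 2                # compute in the native binary mode
back = int_to_bcd(result, 8)       # convert the result out for COBOL
```

The point of the sketch is only that both conversions are pure bookkeeping on the same numeric value; the implementor pays them on the way in and on the way out, but the value is never changed.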
 
In my view, these features provide a CAPABILITY in COBOL that users don't have now, and I think, on the grounds of portability, that implementors would be doing a service to their users by providing it, even when it's not the most efficient mode for their particular hardware.
 
In fact, I would encourage implementors on ANY architecture to try to find a way to conform to the necessary parts of IEEE 754 as a value-add to their user base on behalf of COBOL, regardless of how great the difference between the run-time performance of that platform's "native" mode and the "standard-decimal" mode might be.  Whether to live with that performance difference should be a choice the end user is able to make; it shouldn't be a decision the implementor takes out of his hands.
 
Put a different way, if there were still a direct hardware descendant of the Philco 2000 being produced, and there had been a COBOL implementation on that architecture, I suspect it'd be highly unlikely that that architecture would be particularly efficient at handling decimal128 items of either encoding.  We would ENCOURAGE the implementors of any COBOL compiler intended for that platform to provide support for the new arithmetic modes and the IEEE formats, and if they did that, the users would almost certainly find that their programs ran much less efficiently on the Philco than they did under the Philco's native arithmetic mode as a result.  But they would also find that they would run MUCH faster on ANY machine that supported the IEEE decimal floating-point formats and arithmetic modes, so long as that machine supported EITHER the decimal OR the binary encoding for those decimal formats. 
 
Is the feature as it stands USELESS, and does it HAVE TO BE CHANGED prior to publication?  No, I don't think so.  It's a feature that provides capabilities and portability options that weren't there before.  If we need to enhance it, we can do so later in a revision, or, if somebody can convince the working group that the omission of additional features is a technical ERROR in the standard, we could do so in a corrigendum.
 
I would hope the efficiency questions raised by an implementor's decision to use the binary encoding on his platform when COBOL's using only the decimal encoding are not considered sufficient grounds for that implementor to refuse to provide the feature for his COBOL compilers.  The implementor might disagree, but again, that's between the implementor and the user.  We think having IEEE arithmetic support and associated floating-point formats is better than not having it at all.  Addition of and adjustments for the binary encodings are not practical now, and we didn't find out that it was even an issue to anyone until far too late to consider making such a drastic change.
   
    -Chuck Stevens
 
 
> To: charles.stevens@xxxxxxxx
> CC: stds-754@xxxxxxxxxxxxxxxxx; forieee@xxxxxxxxxxxxxx
> From: forieee@xxxxxxxxxxxxxx
> Subject: Re: Two technical questions on IEEE Std 754-2008
> Date: Thu, 24 Feb 2011 11:12:24 -0800
>
> > To: <khbkhb@xxxxxxxxx>
> > CC: <forieee@xxxxxxxxxxxxxx>, IEEE 754 <stds-754@xxxxxxxxxxxxxxxxx>
> > Subject: RE: Two technical questions on IEEE Std 754-2008
> > Date: Thu, 24 Feb 2011 10:40:24 -0700
> >
> >
> > . . .
> >
> > As a side note: it's been argued that IEEE arithmetic is the same
> > regardless of the encoding.  Offhand, I don't see anything that requires
> > that decimal-format operands to IEEE arithmetic operations be in the same
> > encoding, or that the result of the operation is in that same encoding.
> > It seems to me that the result of adding a decimal-encoded decimal128
> > item to a binary-encoded decimal128 item will NOT be the same as it would
> > if both were encoded in decimal in the first place.  The results are
> > (almost?) always valid numeric values; they're just wrong if the
> > presumption is that the operands and the result are all in the same
> > encoding.  This is the same dilemma COBOL faces if the presumption is
> > that the data in IEEE decimal formats is encoded in decimal; the
> > difference is that COBOL explicitly states that only decimal is
> > permitted to begin with.
> >
> > -Chuck Stevens
> >
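
The mixed-encoding hazard described in the quoted note above can be sketched in a few lines of Python.  Again a toy model, not the real DPD or BID bit layouts: the "decimal encoding" is stood in for by BCD-style nibbles and the "binary encoding" by a plain binary integer, with illustrative names.  Decoding each operand according to its actual encoding gives the same value either way; presuming the wrong encoding gives a perfectly "valid" number that is simply wrong.

```python
# Toy model (not real DPD/BID layouts): the same significand, 25,
# stored two ways.  Reading one encoding's bits as if they were the
# other yields a valid-looking number -- just the wrong one.

def decode_bcd(word: int) -> int:
    """Interpret a machine word as BCD-style nibbles, one digit each."""
    digits = []
    while word:
        digits.append(word & 0xF)
        word >>= 4
    value = 0
    for d in reversed(digits):
        value = value * 10 + d
    return value

a_decimal = 0x25   # 25 in the decimal (BCD-style) encoding
b_binary = 25      # 25 in the binary encoding (the int 25 == 0x19)

# Correct: decode each operand according to its ACTUAL encoding.
correct = decode_bcd(a_decimal) + b_binary            # 25 + 25 == 50

# Wrong: presume both operands use the decimal encoding.
wrong = decode_bcd(a_decimal) + decode_bcd(b_binary)  # 25 + 19 == 44
```

The wrong sum is still a well-formed number, which is exactly the trap: nothing in the bits announces which encoding was intended.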
>
> Charles,
>
> As you seem determined to follow your current path
> I thought I had written my last word on this subject
> but I guess I have another one in me.
>
> This is a false analogy.
>
> It is equivalent to saying that adding a DPD encoded
> Decimal128 to an ASCII string would be dangerous.
> Of course it would. If it hurts, don't do that.
>
> Anytime you import data without knowing its format
> you make such errors. Almost always fatal.
>
> We spent at least 2 years putting decimal arithmetic
> into 754. About a year & a half shoehorning it in
> in the first place. And later, another 6 months or
> a year when the conflict between IBM & Intel came up.
> But we resolved all that.
>
> And we did it for you.
>
> The arithmetic *IS* identical in both DPD & BID. We
> made damn sure of that.
>
> That you don't seem to be interested in using both
> encodings wastes that latter time. That you don't
> seem to be willing to support decimal even to the
> point of changing the fundamentals of the arithmetic
> WRT NaNs & infinities wastes the rest.
>
> I was hoping to persuade you to make your variances
> much more slight (in the form of more Cobol-like
> exception defaults) but even that has gone nowhere.
>
> I'm sure you know far better than I what is best for
> your users. And what you can do in the time you have
> available to you.
>
> But it seems a shame.
>
> Yours,
>
> Dan
