
RE: Update on COBOL position on IEEE decimal floating-point usage



Charles, please point out to the members of your drafting committee that this IEEE 754 mailing list has always been, and still is, open to all.  There's no need for you (especially given your temporary [we hope!] difficulties) to act as a conduit between them and this mailing list.
 
It is especially difficult for people here to interpret the nuances of one 'camp' as against another without them being able to speak for themselves.
 
Mike


From: stds-754@xxxxxxxx [mailto:stds-754@xxxxxxxx] On Behalf Of Charles Stevens
Sent: 05 March 2011 18:21
To: IEEE 754
Subject: Update on COBOL position on IEEE decimal floating-point usage

We've run into a bit of controversy in the COBOL drafting committee on the best way to handle IEEE floating-point formats (particularly the decimal formats) and arithmetic. 
 
One camp wants the implementor to be free to specify any details he wants of the intermediate format used for arithmetic operands and results (at the level of the operation), so long as the results are exactly the same as what IEEE 754 floats would produce, and so long as the fully-defined and not-quite-so-fully-defined USER descriptions of data items conform in the sense that their values are identical.  This applies to both binary and decimal arithmetic: in the former case the results are AS IF they were produced in binary128, and in the latter AS IF they were produced in decimal128 with the encoding the implementor specifies for the intermediate result format, with the not-quite-so-rigorously-defined data items required to match as well. 
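To make that value-level test concrete, here is a rough sketch of what "the same results as decimal128" means, using Python's decimal module as a stand-in for an implementor's internal representation (this is just my illustration, not anything the drafts specify):

    from decimal import Decimal, Context, ROUND_HALF_EVEN

    # A software intermediate configured to decimal128's parameters:
    # 34 significant digits, adjusted exponent range -6143..+6144.
    dec128 = Context(prec=34, Emin=-6143, Emax=6144, rounding=ROUND_HALF_EVEN)

    # Decimal arithmetic represents 0.1 and 0.2 exactly, so the sum is exact...
    print(dec128.add(Decimal("0.1"), Decimal("0.2")))   # 0.3

    # ...whereas a binary format cannot (binary64 shown; binary128 differs
    # only in precision, not in the underlying radix).
    print(0.1 + 0.2)                                     # 0.30000000000000004

The point of the sketch is that such an internal representation is not stored in either IEEE encoding, yet its values and rounding match decimal128 -- which is exactly what the first camp would permit, and what the second camp would tighten by requiring the intermediate itself to be an IEEE format.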
 
Another camp feels that, given the fairly rigorous definition of the IEEE formats, it is more in keeping with the direction given the COBOL committee to require that the intermediate format SHALL BE as specified in IEEE 754 (binary128 when "binary arithmetic" is specified, and decimal128, with either the decimal or the binary encoding, when "decimal arithmetic" is specified).  The implementor gets to specify which encoding is used for intermediate results, and whatever encoding he chooses there is also used for the not-quite-so-fully-defined USER descriptions of decimal floating-point data items; but in all cases the standard would specify rigorously that an IEEE format is used for the intermediate item, and the implementor would only be free to specify which of the two encodings is used there and for the two "general" FLOAT-DECIMAL formats.  
 
For the "fully-defined" formats, the specifications are exactly those of binary32, binary64, binary128, decimal64 with decimal encoding, decimal128 with decimal encoding, decimal64 with binary encoding, and decimal128 with binary encoding in both camps.   
 
The two other formats -- in COBOL terms, FLOAT-DECIMAL-16 and FLOAT-DECIMAL-34 -- are, in the first camp, required to behave AS IF they were IEEE decimal floats with the encoding chosen by the implementor; in the second camp, they are required to behave as if they WERE decimal64 and decimal128 respectively, with the implementor free to choose only the encoding (the binary encodings making them identical to FLOAT-DECIMAL-16-B and FLOAT-DECIMAL-34-B respectively, and the decimal encodings to FLOAT-DECIMAL-16-D and FLOAT-DECIMAL-34-D respectively). 
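In case the practical difference between the two "general" formats isn't obvious to readers outside the committee, it is precision (16 versus 34 significant decimal digits), sketched here with the same Python stand-ins (again, purely illustrative):

    from decimal import Decimal, Context

    # Stand-ins: FLOAT-DECIMAL-16 behaves like decimal64 (16 digits),
    # FLOAT-DECIMAL-34 like decimal128 (34 digits).
    dec64  = Context(prec=16, Emin=-383,  Emax=384)
    dec128 = Context(prec=34, Emin=-6143, Emax=6144)

    print(dec64.divide(Decimal(1), Decimal(3)))    # 0.3333333333333333 (16 digits)
    print(dec128.divide(Decimal(1), Decimal(3)))   # 34 threes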
 
The big difference here is for the "standard-decimal intermediate data item" and for the "not-quite-so-fully-defined" formats, which applies to the decimal floats only.  The first camp wants them to be "whatever the implementor wants, so long as the results are the same as what they'd get with decimal128 with an encoding matching the implementor's choice"; the second camp would require them to be IEEE formats, with the choices limited to the decimal or the binary encoding. 
 
I'm pretty firmly in the second, "rigorous alignment with IEEE 754", camp, on the grounds that it's more fully specified and more consistent with what I recall the direction from WG4 to be when we started the project.  But I'd like to get input from the IEEE community on this.  I'm also not a big fan of introducing major implementor-defined characteristics in an enhancement that was driven by a wish to add rigor to the specification of COBOL arithmetic. 
 
Some features of the latest proposal that may not have been made clear since I last posted on the subject: 
 
1)  There is only one option for decimal arithmetic. 
2)  The implementor specifies the encoding he uses for the intermediate results in decimal arithmetic. 
3)  Data declaration constructs are provided explicitly to cover the seven supported "rigorous" binary and decimal floating-point formats (three binary, two decimal with decimal encoding, and two decimal with binary encoding), regardless of the mode of arithmetic; see the sketch after this list.   
4)  Two "partially-defined" data declaration constructs provide that the encodings for them match the encoding used for intermediate results in the implementation. 
 
What do you think?  In other words, give me a solid reason to believe rigorous conformance to the IEEE formats for COBOL's arithmetic operations is not a good idea.  
 
   -Chuck Stevens
 
PS:  Typing is a bit of a chore with my arm in a tight sling.  I'll do what I can, but response may not be as prompt as I would like. 
