
RE: Implementor support for the binary interchange formats

Thanks, all.  I think your responses have directly answered my question.  Direct "platform" (hardware, microcode, millicode, whatever) support for binary128 arithmetic does exist out there, and that support is not necessarily tied to support for decimal128 arithmetic.  As Bill Klein pointed out privately, binary64 has been around a while and matches the widest basic format of pre-2008 IEEE 754.
Binary64 (or, for that matter, decimal64) is not adequate for COBOL's mathematical precision requirements:  at least 18 digits before 2002, and at least 32 digits thereafter.
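To put a number on that inadequacy (an illustrative aside, not from the original mail): binary64 carries a 53-bit significand, roughly 15.95 decimal digits, so an 18-digit value generally cannot survive a round trip through it.

```python
import math

# binary64 has a 53-bit significand; its effective decimal precision is
# 53 * log10(2), about 15.95 digits -- short of COBOL's 18, let alone 32.
print(f"binary64 ~ {53 * math.log10(2):.2f} decimal digits")

n = 123456789012345678                # an 18-digit integer
assert int(float(n)) != n             # digits are lost going through binary64
```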
With the addition of formal specifications for binary128, an IEEE-compliant arithmetic mode became something that "might be considered"; but given COBOL's primarily decimal focus, the radix conversions involved in specifying a binary floating-point intermediate form represent a serious problem.
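The radix-conversion problem is easy to demonstrate (a minimal sketch using Python's decimal module, not from the original mail): even a value as simple as 0.1 has no exact binary floating-point representation, so converting decimal operands to a binary intermediate form loses information before any arithmetic is even done.

```python
from decimal import Decimal

# 0.1 is exact in any decimal floating-point format, but the nearest
# binary64 value differs from it starting in the 18th significant digit.
stored = Decimal(0.1)    # the binary64 value actually stored for 0.1
exact = Decimal("0.1")   # the true decimal value
print(stored)            # 0.1000000000000000055511151231257827...
assert stored != exact
```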
As mentioned earlier, the 2002 standard introduced the option that a temporary, abstract DECIMAL floating-point item with specific characteristics of precision and exponent range, but otherwise defined by the implementor, be used for arithmetic.  It also introduced the FLOAT-SHORT, FLOAT-LONG, and FLOAT-EXTENDED usages, which I suspect many if not most implementors who chose to support them equated to the earlier IEEE single, double, and extended binary formats.  However, whether they do or do not conform to those specifications is left up to the implementor; there is no reference to IEEE 754 in ISO/IEC 1989:2002.
Using decimal128 for an intermediate numeric form is an improvement over the abstract form, and that format is of primary interest to COBOL for at least these reasons:  1) it is not COBOL's invention; 2) the rules for arithmetic are not COBOL's; 3) the formats and rules are not defined by any particular vendor or hardware supplier at the expense of another, because they are now an industry standard in their own right; and 4) decimal128 exceeds the preexisting "standard intermediate data item" ranges in both precision and magnitude, and is thus adequate to subsume that specification.
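Point 4 can be checked directly (an illustrative sketch, not from the original mail, using Python's decimal module configured to decimal128's parameters from IEEE 754-2008: 34 digits, Emax 6144, Emin -6143): the full 32-digit intermediate range fits in decimal128 with no rounding.

```python
from decimal import Decimal, Context

# A context with decimal128's precision and exponent range.
d128 = Context(prec=34, Emax=6144, Emin=-6143)

x = Decimal("9" * 32)                 # the largest 32-digit intermediate value
assert d128.add(x, Decimal(0)) == x   # representable without any rounding
```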
Doing arithmetic in binary128 rather "came along for the ride" at WG4's request.  Our proposed standard clearly treats binary128 as the "ugly stepchild" of the arithmetic modes compared to decimal128 from COBOL's perspective, precisely because of the inherent precision loss in radix conversions from decimal for noninteger data, and because COBOL's historical focus is decimal data and arithmetic.  Binary128 would be great for FORTRAN, but FORTRAN's focus is different from COBOL's.
The question at hand does not relate to support for the various IEEE formats as used to describe DATA.  It's strictly about the form into which numeric data is converted when used as a numeric operand.  The proposed revision (presuming my latest proposal is accepted) explicitly provides for FLOAT-BINARY-7 as binary32, FLOAT-BINARY-16 as binary64, FLOAT-BINARY-34 as binary128, FLOAT-DECIMAL-D-16 as decimal64 in decimal encoding, FLOAT-DECIMAL-D-34 as decimal128 in decimal encoding, FLOAT-DECIMAL-B-16 as decimal64 in binary encoding, and FLOAT-DECIMAL-B-34 as decimal128 in binary encoding.  Implementors are free to support all, some, or none of these in data declarations.  My personal preference is that implementors provide ALL SEVEN of them.
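The digit tags in those names track each format's effective decimal precision (an illustrative computation, not from the original mail): significand bits times log10(2) for the binary formats, and the exact digit count for the decimal ones.

```python
import math

# Effective decimal precision of the IEEE binary interchange formats:
# significand bits (including the implicit bit) times log10(2).
for name, bits in [("binary32", 24), ("binary64", 53), ("binary128", 113)]:
    print(f"{name}: {bits * math.log10(2):.2f} decimal digits")
```

This prints roughly 7.22, 15.95, and 34.02 digits, hence the FLOAT-BINARY-7 / -16 / -34 tags; decimal64 and decimal128 carry exactly 16 and 34 digits by definition.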
Both IEEE forms of arithmetic are in the current draft of the revision to the COBOL standard.  
The question I've been mulling over, for presentation to the COBOL drafting committee and to the ultimate decision-making committee (WG4), thus becomes tripartite:  1) Should we leave the draft as it is, with binary128 ARITHMETIC as conforming COBOL, along with the strong implication that it's an ugly stepchild from COBOL's standpoint?  2) Should we change the draft to delete the specifications for binary128 ARITHMETIC but explicitly allow the implementor to provide a mode analogous to the IEEE decimal forms, with a suitable reference to binary128 in IEEE Std 754-2008?  Or 3) should we delete the specifications for binary128 ARITHMETIC altogether and make no mention that the possibility of using it ever existed from COBOL's point of view?
    -Chuck Stevens  

Subject: Re: Implementor support for the binary interchange formats
To: hfahmy@xxxxxxxxxxxxxxxxxxxxxxx
CC: charles.stevens@xxxxxxxx; stds-754@xxxxxxxx; stds-754@xxxxxxxxxxxxxxxxx
From: eschwarz@xxxxxxxxxx
Date: Tue, 1 Mar 2011 08:49:04 -0500

IBM zSeries supports binary128 in hardware, as well as binary32, binary64, decimal64, and decimal128. Almost all architected arithmetic operations are executed entirely in hardware, the exceptions being conversions between decimal and binary floating-point formats and a divide-to-integer operation. The binary dataflow is optimized for binary64 and requires multiple passes for binary128, but all under hardware control. The decimal dataflow is wider and supports decimal128 directly.

Eric Schwarz


stds-754@xxxxxxxx wrote on 03/01/2011 04:44:56 AM:

> From: "Hossam A. H. Fahmy" <hfahmy@xxxxxxxxxxxxxxxxxxxxxxx>

> To: Charles Stevens <charles.stevens@xxxxxxxx>
> Cc: marius.cornea@xxxxxxxxx, IEEE 754 <stds-754@xxxxxxxxxxxxxxxxx>
> Date: 03/01/2011 04:51 AM
> Subject: Re: Implementor support for the binary interchange formats
> Sent by: stds-754@xxxxxxxx
> Dear all,

> 2011/2/28 Charles Stevens <charles.stevens@xxxxxxxx>
> I'm not talking about the binary ENCODING (e.g., "decimal128 using
> binary encoding"); already understood there were implementations
> providing that.
> What I'm talking about is "binary128 format" ITSELF, specifically as
> distinct from decimal128 using EITHER encoding. 

> I didn't see an answer to your question yet so here is my answer.
> All the HARDWARE for general-purpose computers that I know of and
> have used does NOT have binary128 directly in the processor. That
> means Intel, AMD, and Sun processors. I am almost sure the same is
> true for IBM, HP, and Fujitsu, but I will leave it to those working
> at such companies or using those platforms to confirm it.
> Most processors (for general-purpose computers) support binary64
> directly in hardware (registers, adders, datapath, rounding
> logic, ...) and can provide binary32 as well, since it is just a
> narrower format. Binary128 is a new feature added in IEEE 754-2008,
> just as decimal64 and decimal128 are new features. Hence, some
> HW providers may opt to support binary128 directly in their future processors.
> However, binary128 may be supported now on these platforms via
> low-level software libraries or microcode that use the underlying
> hardware designed for the 64-bit width and provide the correct
> result, but with a longer execution time than binary64.
> I hope this answers your question.
> With that said, I also note that decimal is not much different. IBM
> and SilMinds are the only two companies that provide hardware units
> for decimal, and both use DPD. As far as I know (Michel, please
> confirm or deny), IBM's HW has direct support for decimal64, while
> decimal128 needs similar low-level SW to give the correct answers.
> SilMinds has direct decimal128 and decimal64 support in their HW accelerator cards.
> --
> Hossam
