
RE: A general comment on COBOL "modes of arithmetic"

Decimal-scaled integers in "display" form have always been part of COBOL, and might even be regarded as characteristic of the language. 
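(For anyone who doesn't read PICTURE clauses, a hypothetical illustration -- the data name is invented: the V below marks an implied decimal point, so the scale is carried in the data description rather than in the stored digits.)

    01  INVOICE-AMT    PIC S9(7)V99  USAGE DISPLAY.
        *> nine decimal digits plus a sign, stored as characters;
        *> the two-place scale exists only in the PICTURE, not in the data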
However, every implementation I'm aware of has had its OWN mechanisms for handling numerics -- in data, external and internal -- in other forms.  Unisys MCP COBOL74 has (and I think COBOL68 had as well) REAL, DOUBLE, BINARY EXTENDED, BINARY TRUNCATED, and a bunch of others, but none of those formats are understandable to anybody else.  This is something users want, and they also want such things to be portable across platforms.  Now, it wouldn't be "pleasing" to IBM customers for REAL to be standardized as identical in format to a Burroughs B6700 single-precision 48-bit floating-point item, any more than it would be "pleasing" to Burroughs customers to have COMP-whatever (is it 4?) standardized as an IBM single-precision 32-bit floating-point item.  Giving one vendor's conventions standardized status without giving ALL vendors' analogous constructs the same status is Not A Good Idea. 
So, the effort at providing support for the IEEE floats is a matter of compliance with a "vendor-neutral" standard that allows the user to handle such data in a platform-independent way.  And the most recent effort I've put forth in that area is to make sure that neither the decimal encoding nor the binary encoding of IEEE decimal floats is handled preferentially.  As the draft stood, the assumption was that the decimal encoding was used and expected; I've resolved that -- there's now no assumption about what the encoding is; the user specifies it as part of the data description.  If he's wrong, it's on him, or on the process that provided him either the erroneous description or the erroneous data.  We provide the mechanisms; the implementor is encouraged to support all of them (regardless of his "preference"). 
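A rough sketch of what that looks like (names invented; exact spellings may differ from the final wording) -- the encoding is stated once and then governs the decimal floating-point items described below it:

    IDENTIFICATION DIVISION.
    PROGRAM-ID. ENCODING-DEMO.
    OPTIONS.
        FLOAT-DECIMAL IS BINARY-ENCODING.   *> or DECIMAL-ENCODING for DPD
    DATA DIVISION.
    WORKING-STORAGE SECTION.
    01  EXCHANGE-RATE  USAGE FLOAT-DECIMAL-34.
        *> an IEEE 754 decimal128 item; which of the two interchange
        *> encodings it carries is declared above, not assumed

The point is that the encoding is declared, so data exchanged with another platform can be described rather than guessed at.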
By the way, the proposed standard provides support for the following rounding modes:  AWAY-FROM-ZERO, NEAREST-AWAY-FROM-ZERO, NEAREST-EVEN, PROHIBITED, TOWARD-GREATER, TOWARD-LESSER, and TRUNCATION, both as defaults for a program and for individual statements that have the ROUNDED phrase.  
If an arithmetic statement has no ROUNDED phrase, the rounding mode is TRUNCATION.  This is the historical default for COBOL in the absence of a ROUNDED phrase. 
If the ROUNDED phrase is specified for a statement without further details, the default is whatever's specified in the DEFAULT ROUNDED MODE clause in the OPTIONS paragraph. 
If there's no DEFAULT ROUNDED MODE explicitly specified, and ROUNDED is specified for an arithmetic statement, the rounding mode is NEAREST-AWAY-FROM-ZERO.  This is the historical default in the presence of the ROUNDED phrase.  The DEFAULT ROUNDED MODE clause is new in the proposed standard, and it's there to support the IEEE rounding mode options. 
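A minimal sketch of how the three cases interact (names and values invented; exact spellings may differ from the final wording):

    IDENTIFICATION DIVISION.
    PROGRAM-ID. ROUNDING-DEMO.
    OPTIONS.
        DEFAULT ROUNDED MODE IS NEAREST-EVEN.
    DATA DIVISION.
    WORKING-STORAGE SECTION.
    01  A  PIC S9(3)V99  VALUE 2.55.
    01  B  PIC S9(3)V9.
    PROCEDURE DIVISION.
        COMPUTE B = A.                                *> no ROUNDED phrase: TRUNCATION, B = 2.5
        COMPUTE B ROUNDED = A.                        *> program default applies: NEAREST-EVEN, B = 2.6
        COMPUTE B ROUNDED MODE IS TOWARD-LESSER = A.  *> explicit mode on the statement, B = 2.5
        STOP RUN.

Without the DEFAULT ROUNDED MODE clause, the second COMPUTE would fall back to NEAREST-AWAY-FROM-ZERO (which happens to give 2.6 here as well).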
    -Chuck Stevens 
> Date: Sat, 26 Feb 2011 20:34:30 -0500
> To: stds-754@xxxxxxxxxxxxxxxxx
> From: hack@xxxxxxxxxxxxxx
> Subject: A general comment on COBOL "modes of arithmetic"
> Frankly, I'm a bit confused by these modes of "arithmetic". I understand
> the notion of identifying the various representations, but it seems to
> me that there is a large common subset of most of these: decimal-scaled
> integers, with the scale (the power of ten) often held separately from
> the significand (and both, initially, in packed decimal, and later in
> binary, but with the SAME arithmetic properties; the scale was often not
> exposed numerically as it was part of the "picture", or field type).
> In other words, decimal fixed-point arithmetic. (Well, that's how I
> remember it. I presented caveats before.)
> The new DFP formats permit the scale to be represented in the same datum as
> the significand, but can that be exploited if the code has to be executable
> with a non-DFP-based arithmetic as well? I guess so, for traditional
> fixed-scale fields. The new floating-point fields would in fact not exploit
> the DFP cohorts, as earlier posts indicated that all floating-point items
> are considered as if normalised.
> If the logical representation remains as decimal-scaled integers, binary128
> should be capable of the same arithmetic as DFP (up to 34 digits) and BCD
> as Packed Decimal (up to 31 digits), right?
> Different arithmetic properties would then show up only when narrower
> formats are used (binary64 and even decimal64), as rounding effects
> become an issue. The new floating-point formats also support a large
> magnitude range with dynamic scaling (and, in the case of DFP, with the
> ability to get exactly the same results as laborious static scaling).
> If this hunch is right, then I understand somewhat the obsession with
> encoding, because the purpose is not really to describe the arithmetic
> but is explicitly intended to describe the representations.
> What continues to confuse me however is why Endianness (and, for character
> fields, the character encoding) is not considered part of the encoding.
> Please -- is there ANYBODY out there who can tell me how this is handled
> in COBOL implementations, for cross-platform import/export of records?
> (One possibility would be that all import/export is done in character
> decimal, for numeric fields, in which case Endianness indeed disappears
> as an issue, and only the Ascii/Ebcdic/... distinction remains, which
> is a horse of another colour. But in that case BID/DPD would not be
> an issue either -- which brings me back to my initial confusion.)
> Michel.
> ---Sent: 2011-02-27 02:21:55 UTC
