
FW: Two technical questions on IEEE Std 754-2008

I inadvertently, and unfortunately, sent this privately to Michel rather than posting it.  Same for a PS to follow shortly.   -Chuck Stevens. 

From: charles.stevens@xxxxxxxx
To: hack@xxxxxxxxxxxxxx
Subject: RE: Two technical questions on IEEE Std 754-2008
Date: Wed, 23 Feb 2011 12:27:02 -0700

FCD is "Final Committee Draft".  It has gone through international ballot, now closed, and we are now responding to the last few remaining international comments in preparation for the WG4 meeting in May, at which we hope to get formal approval for publication.  We have teleconferences scheduled before that meeting, but it's unlikely I'll be able to participate much: I'm going in for major surgery a week from today, and expect to have little, if any, use of my right arm (and I'm right-handed) for the next couple of months.  THAT'S why I'm trying to resolve as many questions and ambiguities as I can in the meantime.
While it is possible to distinguish between the encodings of two decimal128 items WITHIN the implementation, it is not possible to distinguish which encoding it is when it comes from OUTSIDE the implementation.  That's the problem. 
Suppose you've got an 80-byte record on a reel-to-reel tape, written somewhere else, with a decimal128 item taking up the last 16 bytes of that record and the first 64 bytes taken up with meaningful and valid information.  There's nowhere to put an indication of which encoding was used to develop the numeric value in that record.  We can't "tag" data with the encoding if we don't know where it came from.  The guy who built it can't tag it either, because it's already written.  The implementor on the machine reading the tape can't tag it because he doesn't know what the intent of the guy who wrote the tape was.  The same applies to information that may come in as parameters from "outside sources" (including the internet) outside the implementor's complete control.
The implementor can only determine what the encoding is if he has control of both the "sender" and the "receiver",  and the COBOL standard can't require that that be the case.  
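To illustrate the point, here's a rough Python sketch.  The field layouts follow the IEEE 754-2008 decimal128 descriptions, but the encoder and decoders are deliberately simplified (positive values only, and the DPD declet decode handles only the case where all three digits of a declet are 0-7); it is a sketch, not a conforming implementation.  The same 16 bytes come out as two entirely different values depending on which encoding the reader assumes:

```python
BIAS = 6176  # decimal128 exponent bias

def encode_bid(coeff, exp):
    """Encode a small positive coefficient (< 2**113) as decimal128 BID bits."""
    return ((exp + BIAS) << 113) | coeff   # sign bit 0, non-11 combination case

def decode_bid(bits):
    """Decode decimal128 BID bits (non-11 combination case only)."""
    exp = ((bits >> 113) & 0x3FFF) - BIAS
    coeff = bits & ((1 << 113) - 1)
    return coeff, exp

def decode_dpd(bits):
    """Simplified decimal128 DPD decode: small-MSD combination case and
    all-small-digit declets only; full DPD needs the complete declet table."""
    g = (bits >> 110) & 0x1FFFF            # 17-bit combination field
    assert (g >> 15) != 0b11               # small-MSD case only
    exp = (((g >> 15) << 12) | (g & 0xFFF)) - BIAS
    coeff = (g >> 12) & 0b111              # most significant digit
    for i in range(10, -1, -1):            # 11 declets, 10 bits each
        d = (bits >> (10 * i)) & 0x3FF
        assert (d >> 3) & 1 == 0           # all-small-digit declet only
        coeff = coeff * 1000 + (d >> 7) * 100 + ((d >> 4) & 7) * 10 + (d & 7)
    return coeff, exp

bits = encode_bid(1, 0)        # the value 1, encoded in BID
print(decode_bid(bits))        # (1, 0)
print(decode_dpd(bits))        # a wildly different (coefficient, exponent)
```

Nothing in the 128 bits themselves says which interpretation was intended; that is exactly the tape-record problem.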

Current standard (2002) COBOL only has three forms of floating-point formats, all defined by the individual implementor, and thus there's no guarantee of portability.  It has one form of binary whose maximum digit capacity is defined by the standard (subject to implementor interpretation), and three forms whose capacity is only defined in terms of "minimum maxima".  The maximum number of digits that can be represented in any specified form (numeric literals or data items) is 31.  It does NOT have any format (floating-point, fixed-point or integer) with more than 31 digits.  More specifically, it does NOT have representations of the binary128 or decimal128 IEEE formats unless the implementor has chosen to provide them as an implementor extension.  And it does NOT have the capability to handle 34-digit numbers of any stripe UNLESS the implementor had already done so in his implementation of the 2002 standard.  Changing the behavior of an existing implementation of an existing standard is not something implementors are likely to do.  Note that this 31-digit limit in the 2002 standard is up from 18 in the 1985 standard.
The biggest reason we began investigating the IEEE formats is that, as I see it, for the FIRST TIME an industry standard unrelated to COBOL provided a means of encoding numeric information of sufficient precision to cover COBOL's requirements.  2002 COBOL required 31-digit precision everywhere, and if the arithmetic mode was specified by the user as "standard", the intermediate data item could have 32 digits of precision.  34 digits is better, and 34 digits in a form that isn't designed at the behest of COBOL is better still.
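For what it's worth, the digit-capacity claims above (and Michel's 2**113 > 10**34 observation below) are easy to check with exact integer arithmetic; a trivial sketch:

```python
# Largest coefficient representable in decimal128: 34 decimal digits.
max_decimal128_coeff = 10**34 - 1
print(max_decimal128_coeff < 2**113)   # True: 34 digits fit in 113 bits
# The 2002 COBOL digit ceiling, for comparison:
print(len(str(10**31 - 1)))            # 31
```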
The FCD explicitly allows the content of data items to be subnormal.  If they're converted into a standard intermediate data item that uses binary128 or decimal128 formats, they're normalized in the process.  If the content can't be normalized (because it's between the subnormal minimum and the normal minimum for the particular format), a fatal exception condition results. 
I'd expect the same to be true for something like attempting to handle a numeric literal like +0.0000000001e-997 in an arithmetic expression in 2002 COBOL.  The literal doesn't have to be normalized, but the value does, and if the minimum exponent is -999, the attempt at normalization will result in an exponent underflow condition.
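That literal can be sketched with Python's decimal module.  The prec/Emin/Emax values below model the hypothetical format of my example (31 digits, minimum exponent -999), not decimal128; Python flags the result as subnormal rather than raising a COBOL-style fatal exception, but it shows the same thing: the value cannot be normalized within the format's exponent range.

```python
from decimal import Context, Decimal, Subnormal

# A context approximating a 31-digit format with minimum exponent -999
# (illustrative limits, not decimal128's).
ctx = Context(prec=31, Emin=-999, Emax=999)
lit = Decimal("+0.0000000001e-997")    # the value 1e-1007
result = ctx.plus(lit)                 # force the value into the format
print(result)                          # 1E-1007
print(bool(ctx.flags[Subnormal]))      # True: below the normal minimum
```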
    -Chuck Stevens
> Date: Wed, 23 Feb 2011 12:13:34 -0500
> To: stds-754@xxxxxxxxxxxxxxxxx
> From: hack@xxxxxxxxxxxxxx
> Subject: RE: Two technical questions on IEEE Std 754-2008
> Chuck Stevens wrote:
> > in the FCD, we already support FOUR modes of arithmetic:
> (I'm guessing: Future Cobol Directions?)
> One of the things that the 754 WG worked hard on was to ensure that
> DFP arithmetic be IDENTICAL for the two possible encodings, BID and
> DPD. For example, BID could represent slightly larger coefficients
> than DPD (2**113 > 10**34), but such representations are not only
> non-canonical -- they have a defined value (namely zero) that is
> within the same range as possible DPD-encoded values.
> So I think it would be wise to avoid specifying too much at the
> language level. What you should consider (in my opinion) is means
> to tag data fields (not the contents) at the IMPLEMENTATION level
> as to encoding, to be prepared for implementations that might have
> native BID support.
> I'm assuming here that COBOL implementations already have a means
> of recording the type of numeric fields: 31-digit packed decimal,
> Binary128, Decimal128, or some "native" format such as 128-bit binary
> integers. If you could at the IMPLEMENTATION level distinguish the
> two Decimal128 encodings, you would be fully prepared to support
> native internal arithmetic as well as a standard interchange format
> (which would then be DPD-encoded if specified as COBOL Decimal128).
> The re-encoding functions defined by 754-2008 were defined to permit
> environments supporting both encodings to be implemented themselves
> in a portable manner, precisely so as to hide the issue from the vast
> majority of DFP programs.
> > STANDARD-DECIMAL (FCD): Arithmetic is performed using decimal128,
> > content is always normal from the view of the program
> I assume that "from the view of the program" means that you don't rule
> out implementations that take advantage of the ability of DFP to emulate
> fixed-point arithmetic, without the need to record the scale separately
> as is necessary for plain packed decimal. After all, the whole point of
> the 754-2008 specification for DFP being different from 854 was to exploit
> unnormalised representations.
> Michel.
> ---Sent: 2011-02-23 17:51:04 UTC
