
RE: Implementor support for the binary interchange formats



It has been a long time, but I remember that the HP 128-bit FP and the Sun 
128-bit FP were different.  The HP 128-bit float representation had 112 bits 
in the significand plus a "hidden" bit, for 113 bits of precision.  The Sun 
128-bit float representation also had 112 bits in the significand but no 
hidden significand bit, for 112 bits of precision.  Note that the Intel 80-bit 
extended type is like the Sun 128-bit float in that neither has a hidden 
significand bit.

When the Alpha architecture was designed, it included both a 128-bit VAX FP 
data type and a 128-bit IEEE extended data type.  There were no hardware 
instructions for these two floating-point types; implementation was in 
software.  The 128-bit VAX FP had been in VAX hardware since 1980 and included 
1 sign bit, 15 exponent bits, and 112 significand bits plus one additional 
hidden significand bit, for a total of 113 bits of precision.  The Alpha 
128-bit IEEE extended data type was identical to the HP 128-bit float type.  
Alpha chose the HP 128-bit representation over the Sun 128-bit representation 
because it was more compatible with the VAX 128-bit representation, which made 
implementation easier (as well as making the architecture more regular).
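
Here is a minimal C sketch (mine, not anything from the original 
implementations; the 1.5 bit pattern is just an illustrative value) of how 
the field widths above decompose when the 128-bit pattern is held as two 
64-bit halves:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* binary128 bit pattern for 1.5: sign = 0, biased exponent =
         * 16383, top stored-significand bit set.  "hi" holds the sign,
         * the 15 exponent bits, and the top 48 of the 112 stored
         * significand bits; "lo" holds the remaining 64. */
        uint64_t hi = ((uint64_t)16383 << 48) | ((uint64_t)1 << 47);
        uint64_t lo = 0;

        unsigned sign     = (unsigned)(hi >> 63);
        unsigned exponent = (unsigned)((hi >> 48) & 0x7FFF);
        uint64_t frac_hi  = hi & 0xFFFFFFFFFFFFULL; /* top 48 fraction bits */
        uint64_t frac_lo  = lo;                     /* low 64 fraction bits */

        /* For normal numbers the hidden bit is an implied 1, so the
         * significand is 1.fraction: 112 stored bits + 1 hidden bit =
         * 113 bits of precision. */
        printf("sign=%u exp=%u frac=0x%012llx%016llx\n", sign, exponent,
               (unsigned long long)frac_hi, (unsigned long long)frac_lo);
        return 0;
    }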

In the case of big-endian versus little-endian in COBOL, take a look at packed 
decimal on both the VAX and the Alpha.  The two 4-bit nibbles appear within 
each 8-bit byte in big-endian order, while the bytes are stored in memory with 
the most significant digits at the lowest address, so a little-endian register 
load reverses them.  E.g., a 32-bit VAX register containing the packed decimal 
value 12345678 would hold the hex value 0x78563412.  A little-endian Alpha 
64-bit register containing the packed decimal value 1234567890123456 would 
hold the hex value 0x5634129078563412, while a big-endian Alpha 64-bit 
register containing the same packed decimal value would hold 
0x1234567890123456.  This funny little-endian packed decimal representation 
made it possible to exchange COBOL records containing 8-bit characters, 8-bit 
decimal bytes, and 4-bit packed decimal between little-endian and big-endian 
systems.
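
A minimal C sketch of the packing just described (the helper name pack_bcd 
is mine, not a VAX instruction, and the register view assumes a little-endian 
host):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Pack an even-length digit string two digits per byte, high nibble
     * first (big-endian nibble order within each byte). */
    static void pack_bcd(const char *digits, uint8_t *out, size_t nbytes) {
        for (size_t i = 0; i < nbytes; i++)
            out[i] = (uint8_t)(((digits[2*i] - '0') << 4)
                               | (digits[2*i + 1] - '0'));
    }

    int main(void) {
        uint8_t mem[4];
        uint32_t reg;

        pack_bcd("12345678", mem, sizeof mem);  /* mem = 12 34 56 78 */

        /* A little-endian register load reverses the byte order. */
        memcpy(&reg, mem, sizeof reg);
        printf("register = 0x%08X\n", reg);  /* 0x78563412 on little-endian */
        return 0;
    }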

[[[[[ The in-memory representation of VAX floating point is even weirder.  
The VAX 32-bit, 64-bit and 128-bit floating types have sign, exponent, and 
significand fields exactly the same size as IEEE binary32, binary64 and 
binary128.  The values in each of the sign, exponent and significand fields 
are interpreted differently from the corresponding IEEE fields.  However, the 
in-memory layout is very strange.  Break the VAX float number into 16-bit 
pieces in big-endian order; then place the 16-bit pieces in memory in 
little-endian order.  For example, a 32-bit VAX float in memory looks like:


    31                            16 15                            0
    +---------------+---------------+---------------+---------------+
    |     nif-      |    icand      |S|  exponent   |  sig-         |
    +---------------+---------------+---------------+---------------+

and the 64-bit VAX float in memory looks like:  (stretch your mail reading 
window to at least 133 characters)

    63                            48 47                           32 31                           16 15                            0
    +---------------+---------------+---------------+---------------+---------------+---------------+---------------+---------------+
    |     sig2      |     sig1      |     sig4      |     sig3      |     sig6      |     sig5      |S|  exponent   |     sig7      |
    +---------------+---------------+---------------+---------------+---------------+---------------+---------------+---------------+

where sig7 is the most-significant significand byte and sig1 is the 
least-significant significand byte.

You can even call the VAX 32-bit memory format "middle-endian".  The sign bit 
is in bit 15 while the lowest-order significand bit is adjacent to the sign bit 
in bit 16.  This amazing mixed-endian layout was how PDP-11 floating point was 
designed in the early 1970s.  (I wonder if some architect or engineer would 
volunteer to stand up and take credit for this design.) ]]]]]
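
Here is a minimal C sketch of that 16-bit word swap (the "natural" starting 
layout, with the sign in bit 31, is my own framing for illustration, and the 
1.5 encoding assumes the VAX hidden-bit convention):

    #include <stdint.h>
    #include <stdio.h>

    /* Break a 32-bit value into two 16-bit pieces and swap them, turning
     * a "natural" layout (sign in bit 31) into the middle-endian register
     * image in the diagram above (sign in bit 15, low-order significand
     * bits adjacent to it in bits 31..16). */
    static uint32_t vax_f_image(uint32_t natural) {
        return (natural << 16) | (natural >> 16);
    }

    int main(void) {
        /* Natural layout for VAX F-float 1.5: sign = 0, excess-128
         * exponent = 0x81, stored significand = 0x400000. */
        uint32_t natural = (0x81u << 23) | 0x400000u;
        printf("natural = 0x%08X, VAX image = 0x%08X\n",
               natural, vax_f_image(natural)); /* 0x40C00000 -> 0x000040C0 */
        return 0;
    }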

Back from our VAX floating-point diversion.  Consider the choices that 
little-endian systems have made with packed decimal.  Perhaps little-endian 
COBOL records containing IEEE decimal32, decimal64 and decimal128 
floating-point values should follow the VAX FP layout *except* using 8-bit 
chunks instead of 16-bit chunks.  All COBOL systems would put the IEEE sign 
bit and the first 7 bits of exponent in the *zeroth* byte of a character 
string, while the low-order 8 bits of the significand would go into the 
*last* byte (byte 3, 7, or 15 for decimal32, decimal64, or decimal128) of a 
character string.  These character strings containing IEEE decimal floating 
point could be exchanged as COBOL records between big-endian and 
little-endian systems, much as the packed decimal described above can be.
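
A minimal C sketch of that proposal for decimal32 (the bit pattern here is 
an arbitrary placeholder, and decimal64/decimal128 would extend the same 
shifting to 8 or 16 bytes):

    #include <stdint.h>
    #include <stdio.h>

    /* Serialize a decimal32 encoding into a 4-byte character string with
     * the sign bit and leading exponent/combination bits in byte 0 and
     * the low-order significand bits in byte 3.  Shifting (rather than
     * memcpy) gives the same result on big- and little-endian hosts. */
    static void decimal32_to_record(uint32_t enc, uint8_t out[4]) {
        out[0] = (uint8_t)(enc >> 24);
        out[1] = (uint8_t)(enc >> 16);
        out[2] = (uint8_t)(enc >> 8);
        out[3] = (uint8_t)enc;
    }

    int main(void) {
        uint8_t rec[4];
        decimal32_to_record(0x22500001u, rec);  /* arbitrary example pattern */
        printf("record bytes: %02X %02X %02X %02X\n",
               rec[0], rec[1], rec[2], rec[3]);
        return 0;
    }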

At the moment it seems that neither the IEEE FP standard nor the COBOL 
standard specifies an 8-bit character-string layout for the IEEE decimal 
floating-point representations.  It looks like each COBOL implementer can 
make its own decision about how to do this.  Please note that IEEE decimal FP 
on IBM AIX is big-endian and uses the decimal encoding, while IEEE decimal FP 
on the Intel X86/X64 architecture is little-endian and the Intel library uses 
the binary encoding.

--Steve Hobbs

-----Original Message-----
From: stds-754@xxxxxxxx [mailto:stds-754@xxxxxxxx] On Behalf Of Dan Zuras IEEE
Sent: Tuesday, March 01, 2011 3:07 PM
To: Michel Hack
Cc: stds-754; Dan Zuras IEEE
Subject: Re: Implementor support for the binary interchange formats

Date: Tue, 01 Mar 2011 11:34:44 -0500
To: stds-754                      <stds-754@xxxxxxxxxxxxxxxxx>
From: Michel Hack                          <hack@xxxxxxxxxxxxxx>
Subject: RE: Implementor support for the binary interchange formats

Intel people should correct me if needed, but I was under the impression
that Itanium has hardware support for efficient software implementation
of binary128, and was one of the early machines to define what eventually
became the 2008 binary128 standard.

As for COBOL's FLOAT-SHORT, -LONG and -EXTENDED, I suspect they were
mapped to binary32, binary64 and Intel's binary80.  Binary16 has been
defined as a low-precision compact storage format; I know of no uses,
but there may be some in the embedded-system space.

Michel.
---Sent: 2011-03-01 16:45:00 UTC


        Actually, Michel, it is the HP & Sun people who should
        correct you. :-)

        The floating-point type that we called binary128 in the
        2008 standard is identical to the quad floating-point
        type that was used both by HP in their earliest RISC
        machine & Sun in some of theirs.

        I was involved in defining quad for HP & I have become
        friends with those involved in the same thing at Sun.
        There is some friendly dispute as to who came first but
        I like to say that we copied each other.  While I don't
        remember exact dates, I am sure I was working on what
        we called IEEE quad prior to 1988.  Probably as far back
        as 1985 or 1986.  I am less certain about that.  The
        machines ran at 8 MHz if that helps fix the date.

        Both these 128 bit formats & Intel's 80 bit format were
        considered instantiations of 754-1985's Double Extended.
        Intel's being the smallest possible instantiation & ours
        being the smallest instantiation aligned to a 64 bit
        boundary.

        At the time I felt that the 128 bit types naturally made
        more sense.  But over the years I have come to discover
        that making natural sense is overrated.  The 80 bit
        format has turned out to be useful as well.

        Still, we in the 754-2008 revision committee thought it was
        time to bless the 128 bit format as basic with the name
        binary128 lest some future architect decide to play games
        with it.  And we admitted both formats to a class of
        user-defined formats that we also blessed as conforming.

        Alas, it was also necessary to admit Intel's newer 82 bit
        format as acceptable.  And it must be admitted that it
        was first developed at HP before Intel bought it out.

        Still, some people really like to push unnatural to the
        limit. :-)

        Enjoy,

                                   Dan


        P.S. - Oh, as for full hardware support, the first I am
        aware of is a machine that was designed at HP Labs by
        people we brought over from IBM in a computer architecture
        that was eventually to be sold to Intel as Itanium.  We
        are more inbred than most of us like to admit. :-)

