
differences between implementations of IEEE-754 basic operators



Dear friends at IEEE-754,

I would like some information, and possibly confirmation of some facts, about the reproducibility of floating-point computations.

I'm concerned about the possible differences between hardware implementations of IEEE-754. I already know about the problem of programming languages introducing subtle differences between what is written in the source code and what is actually executed at the assembly level. [Mon08] Now, I'm interested in differences between, say, Intel/SSE and PowerPC at the level of individual instructions.

In short: if I tell the FPU, using a single CPU instruction, to compute z := x + y, where x, y and z are double-precision operands (ditto for -, *, / and square root, and for single precision), what differences can I expect between implementations?

From reading the IEEE-754 standard, here are some possible differences:
* NaNs produced as the masked response to an invalid operation contain implementation-defined bits.
* The circumstances in which the denormal trap is triggered, and what data is given to the trap handler, are largely implementation-specific.
* More generally, how trap handlers are set, and when they are triggered, is unspecified.

My understanding is that as long as:
* One uses IEEE-754 single and double precision formats (this leaves out the x87 extended-precision registers).
* One does not set trap handlers except for invalid operation.
* One sets the trap handler for invalid operation to abort the program.
* One does not use the "flush denormals to zero" and similar flags available on certain FPUs (PowerPC).

Then all computations on any IEEE-754 compatible system (let's say, Intel SSE and PowerPC) give exactly the same result: either a trap for invalid operation, or the same non-NaN result.

Am I correct or is there still leeway for differences?

Best regards,

D. Monniaux

[Mon08]
@Article{Monniaux_TOPLAS08,
 author =     {David Monniaux},
 title =     {The pitfalls of verifying floating-point
                 computations},
 journal =     {TOPLAS},
 fjournal =     {ACM Transactions on programming languages and systems},
 year =     2008,
 month = may,
 publisher =    {ACM},
 fpublisher =   {Association for Computing Machinery},
 issn =         {0164-0925},
 volume =       30,
 number =       3,
 pages =        12,
 abstract =     {Current critical systems commonly use a lot of
                 floating-point computations, and thus the testing or
                 static analysis of programs containing
                 floating-point operators has become a
                 priority. However, correctly defining the semantics
                 of common implementations of floating-point is
                 tricky, because semantics may change with many
                 factors beyond source-code level, such as choices
                 made by compilers. We here give concrete examples of
                 problems that can appear and solutions to implement
                 in analysis software.},
 url = {http://hal.archives-ouvertes.fr/hal-00128124/en/},
 doi = {10.1145/1353445.1353446},
pdf = {http://hal.archives-ouvertes.fr/docs/00/28/14/29/PDF/floating-point-article.pdf}
}


