2011/2/28 Charles Stevens <charles.stevens@xxxxxxxx>
I'm not talking about the binary ENCODING (e.g., "decimal128 using binary encoding"); I already understood there were implementations providing that.
What I'm talking about is "binary128 format" ITSELF, specifically as distinct from decimal128 using EITHER encoding.
I didn't see an answer to your question yet, so here is mine.
None of the HARDWARE for general-purpose computers that I know of and have used provides binary128 directly in the processor. That covers Intel, AMD, and Sun processors. I am almost sure the same is true for IBM, HP, and Fujitsu, but I will leave it to those working at such companies, or using those platforms, to confirm.
Most processors for general-purpose computers support binary64 directly in hardware (registers, adders, datapath, rounding logic, ...) and can provide binary32 as well, since it is just a narrower format. Binary128 is a new feature in IEEE 754-2008, just as decimal64 and decimal128 are new features. Hence, some HW providers may opt to support binary128 directly in their future processors.
However, binary128 may be supported today on these platforms via low-level software libraries or microcode that use the underlying 64-bit hardware and produce the correct result, at the cost of a longer execution time compared to binary64.
I hope this answers your question.
With that said, I also note that decimal is not much different. IBM and SilMinds are the only two companies that provide hardware units for decimal, and both use DPD. As far as I know (Michel, please confirm or deny), IBM's HW has direct support for decimal64, while decimal128 needs similar low-level SW to give the correct answers. SilMinds has direct decimal128 and decimal64 support in their HW accelerator cards.