RE: P1788: Punish?
Maybe we already have everything: the "killer app", the HW, and the marketplace.
But instead of using real-number floating-point-precision interval
arithmetic, we should use "set theory" interval arithmetic. Bill Walster,
Principal Investigator for Interval Analysis at Sun Labs, believes that:
"http://research.sun.com/minds/2004-0527/".
Jose A. Muñoz
joseamunoz@xxxxxxxx
-----Original Message-----
From: stds-1788@xxxxxxxx [mailto:stds-1788@xxxxxxxx] On Behalf Of Sylvain
Pion
Sent: Monday, 30 November 2009 19:14
To: stds-1788@xxxxxxxx
Subject: Re: P1788: Punish?
> And you can also pose the question, how will the marketplace punish
> firms who do NOT provide full HW support for P1788 - but rely instead
> on software?
> What is going to drive firms to provide P1788 in HW - with all due
> respect I don't see how P1788 is going to provide that driving force -
> yet.
Good question. Let me give it a try.
Hopefully, I won't write too many trivialities.
I agree that IA doesn't have an outstanding "killer app that drives big
sales". But let's try to turn this into a selling point! :)
The thing is that, as I see it, "one killer app" is rarely the reasoning
behind the integration of a feature in commodity HW. It could be the
motivation when the target is a niche market with dedicated hardware
possibilities, but I don't think that is the concern here.
Let's look at the features now supported by e.g. x86 CPUs:
multi-core, SIMD (vector computations), AES (encryption), profiling
counters, debugging support, OS virtualization, RNGs...
Most of them, if not all, are driven not by one particular application but
by general computing needs. The direction is a balanced CPU. The question
is: what will they do next with the available space on the die? More cores?
OK. But let's not forget that it takes time to parallelize apps, and the
software industry as a whole is slow. More [sequential] functionality is
also useful, and the question is whether IA can be a candidate, and a
competitive one.
At the very least, IA could help more apps make good use of those SIMD units.
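As a concrete illustration (a toy sketch of my own, not anything from the
P1788 draft: the "ival" type, the (-lo, hi) encoding and the helper names are
just assumptions), here is the classic trick that maps an outward-rounded
interval addition onto a single SSE2 vector add, by storing an interval as
(-lo, hi) and keeping the rounding mode at +infinity:

#include <emmintrin.h>   /* SSE2 intrinsics */
#include <fenv.h>
#include <stdio.h>

/* Toy interval type: lane 0 holds -lo, lane 1 holds hi, so that both
   endpoints can be rounded outward by the same upward-rounded operation. */
typedef __m128d ival;

static ival ival_make(double lo, double hi) {
    return _mm_set_pd(hi, -lo);        /* _mm_set_pd(e1, e0): e0 is lane 0 */
}

/* With the rounding mode set to FE_UPWARD:
   lane 0: (-lo_a) + (-lo_b) rounded up == -((lo_a + lo_b) rounded down)
   lane 1:   hi_a  +   hi_b  rounded up
   so one vector add performs the whole outward-rounded interval addition. */
static ival ival_add(ival a, ival b) {
    return _mm_add_pd(a, b);
}

int main(void) {
    fesetround(FE_UPWARD);             /* set once for the whole IA section */
    ival x = ival_make(1.0, 2.0);
    ival y = ival_make(0.1, 0.3);
    double r[2];
    _mm_storeu_pd(r, ival_add(x, y));
    printf("[%.17g, %.17g]\n", -r[0], r[1]);  /* a guaranteed enclosure of x+y */
    fesetround(FE_TONEAREST);
    return 0;
}

Whether fesetround() also drives the SSE rounding mode is platform-dependent
(it does with glibc on x86-64), and the compiler must be told to respect
rounding-mode changes (e.g. -frounding-math), but it shows why today's SIMD
units are already a natural home for IA, and why dedicated support could make
it essentially free.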
To me, IA essentially competes with floating-point arithmetic as a computing
model for the approximation of real functions, and that is how it should be
sold.
FP is a terrible model, as its properties do not compose. I mean, there is
a nice 0.5-ulp error bound for a+b, but as soon as you compute a+b+c you
cannot say anything interesting anymore without a headache.
What do we want with real numbers? Well, it's about quantifying the
approximation that you get for the evaluation of a function.
IA does just that. Sure, you can somehow emulate IA with FP in various
ways, e.g. by doing static error analysis, which is complicated as hell. In
short: it costs a lot in terms of learning, research, and paying experts.
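To make that contrast concrete, here is a minimal scalar C sketch (again a
toy of my own, not a P1788 interface, and assuming the compiler honours
fesetround(), e.g. with -frounding-math): the plain FP sum of a million
terms comes with no error bound unless you do the static analysis by hand,
while the interval sum carries its rigorous enclosure along automatically:

#include <fenv.h>
#include <stdio.h>

/* Toy enclosure type: the exact result is guaranteed to lie in [lo, hi]. */
typedef struct { double lo, hi; } interval;

/* Outward-rounded addition: the result contains x + y for any x in a, y in b.
   (A real library would also handle overflow, NaN, decorations, ...) */
static interval iadd(interval a, interval b) {
    interval r;
    fesetround(FE_DOWNWARD); r.lo = a.lo + b.lo;
    fesetround(FE_UPWARD);   r.hi = a.hi + b.hi;
    fesetround(FE_TONEAREST);
    return r;
}

int main(void) {
    double fp = 0.0;
    interval ia = {0.0, 0.0};
    const interval tenth = {0.1, 0.1};  /* point interval: the double nearest 0.1 */
    for (int i = 0; i < 1000000; ++i) { /* a+b+c+... one million times */
        fp += 0.1;
        ia = iadd(ia, tenth);
    }
    printf("fp = %.17g          (error bound: left as an exercise)\n", fp);
    printf("ia = [%.17g, %.17g] (width %.3g, no analysis needed)\n",
           ia.lo, ia.hi, ia.hi - ia.lo);
    return 0;
}

The interval result bounds the exact sum of the summands actually stored;
getting a comparable guarantee out of the bare FP loop takes exactly the kind
of expert analysis mentioned above.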
The alternative is for the HW to account for the rounding error. Is it time
for this?
The FP model is also dominant in software precisely because IA has so little
support in HW: languages compete on speed, so they only offer types which
they can provide efficiently, and there is a chicken-and-egg problem here if
you are looking for wide adoption of IA before deciding to support it in HW.
It's true that the priority has lately been given to things like graphics and
games, but aren't we reaching the limit of that now?
( An interesting anecdote here: a colleague of mine won a "best paper"
prize at a well-known graphics conference, the prize being the latest nVidia
card. He did not need it, so he tried to sell it on eBay, thinking that
gamers would jump on it, it being the high end of those cards... But no gamer
bought it, since it was complete overkill even for the most demanding
games.
That was already two years ago. )
Now, Intel knows that certification has a price. I read that the Pentium
FDIV bug cost about 500 million dollars, and they have worked hard on proving
their CPUs correct since. Many software areas need certification as well.
Those areas may use software IA now, or maybe they don't but would switch to
it, especially if it were faster thanks to HW support.
It's a bit like HW-assisted encryption : if you don't have it, you wonder
whether you should enable it, so you take some risks, but if you have it "by
default, at basically 0 cost", then you remove the question completely and
you are safe.
At a high level, the selling point is "IA helps make software less buggy",
or "IA helps make efficient and robust software more easily".
Everybody knows that computers have bugs and compute approximately, and
tackling this issue is hopefully a selling point understandable by everyone.
Marketing people can feel free to rebrand IA with some catchy name, the way
AMD used "3DNow!". I'm sure they are good at that :)
Since no chip maker currently ships HW-IA, what can break the status quo?
How did this happen for the first FP co-processors?
And for the first SIMD units?
As for "*full* support", I think it need not be full in HW. It can also be
done progressively, the way SSE/SSE2/.../SSEn introduced more and more types
and functions.
I guess that Intel has ways to evaluate the needs of the various industries,
except maybe that, with time, once the low-hanging fruit has been picked, it
gets harder to choose the next feature (?).
Quantifying the gains and costs is also something we can hardly do here.
I would really like to see a word from Intel or any similarly big player on
the subject (official or not).
My 0.02 Euros.
--
Sylvain