Re: more grist for the mill
- To: David G Hough 754R work <754r@xxxxxxxxxxx>, stds-754@xxxxxxxx
- Subject: Re: more grist for the mill
- From: Ivan Godard <igodard@xxxxxxxxxxx>
- Date: Wed, 08 Jun 2005 23:52:10 -0700
- In-reply-to: <200506090626.j596QAdD019637@server-f.oakapple.net>
- References: <200506090626.j596QAdD019637@server-f.oakapple.net>
- Sender: owner-stds-754@xxxxxxxx
- User-agent: Mozilla Thunderbird 1.0.2 (Windows/20050317)
I just passed the paper on as potentially being of interest to the
group, but I think I differ with your view of it. In many embedded apps
needing wide dynamic value range (e.g. FP) and low data rates there will
be economic constraints which absolutely prohibit FP hardware. That's
just as true now as it was in 1997, and so long as a buck is a buck it will remain
true. The difference between a $2 part and a $0.27 part is significant
when you have unit counts in the millions.
As for 24-bit precision, in many embedded apps the raw data often has
only 5-10 bits of precision. These apps need range rather than
precision, and they need speed only insofar as that word is meaningful on
an 8-bit CPU running at 10 MHz. So the proposals in the paper seem
reasonable to me within that application domain, and many other embedded
domains, and seem likely to remain so.
David G Hough 754R work wrote:
> This paper seems to be a collection of bad ideas, probably bad even for
> embedded real-time control systems.
>
> Optimizing for software-only implementations is a bad design target, since
> most fpops will be executed in hardware. Nowadays that's probably true
> even in automotive embedded systems, though it might not have been in
> 1997 or so, when the paper was probably written.
>
> And using base 8 (or 16) in a short binary format doesn't make much sense,
> since 24 significant bits are barely enough as it is for most purposes, and
> using higher radices effectively throws away one or two bits.