
Re: Do I have a second? Re: Position: That the Standard for Computing with Intervals Have Only One Level 1 Requirement: Containment



On Aug 3 2011, Dan Zuras Intervals wrote:
>
> While I might use less tortured prose, it sounds to me like
> the specification he desires is to return the tightest
> interval that the working precision permits.

> This is a workable & testable specification & is, at least
> for the basic operations, identical to OUR specification.

I disagree that it is either workable or testable, in general.
See later.

	Hmm.
	It served our purpose in 754 well enough.

I am sorry, but that is a matter of opinion, and I think that it
was one of its more serious mistakes.

Inter alia, it has led to an almost complete lack of communication
between the "IEEE floating-point always gives precisely the same
answers" community and many language and library standards, compiler
vendors and numeric programmers and users of their applications.

>  (2) Specify that all the operations we require
>  return the tightest possible results in the
>  precision available.

That more-or-less forbids optimisation, and raises the question of
what the precision available actually is, anyway.  This is coming
back as an important issue with the reinvention of coprocessors
(vector units, GPUs etc.) - they do not always behave exactly
identically to the 'main' CPU.

	Now YOU are confusing me here.

	It neither forbids optimization nor permits untestably
	loose implementations.

It forbids any non-trivial expression rearrangement, including many
cases of common subexpression elimination, for a start.
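The point is easy to demonstrate with plain floating point: reassociating
even a simple sum changes the correctly rounded result, so a "tightest
possible result" rule pins the compiler to the evaluation order written in
the source.  A minimal sketch, assuming IEEE double arithmetic:

```python
# Floating-point addition is not associative, so a "tightest result"
# requirement forbids reassociation.  At 2**53 consecutive doubles are
# spaced 2 apart, so adding 1.0 is absorbed by round-to-nearest-even.
a = 2.0 ** 53
b = 1.0

left_to_right = (a + b) - a   # 2**53 + 1 ties and rounds back to 2**53
rearranged    = (a - a) + b   # algebraically identical rearrangement

print(left_to_right)  # 0.0
print(rearranged)     # 1.0
```

Both expressions are equal as real arithmetic, but only one of the two
rounded results can be "the tightest"; a conforming optimiser could not
substitute one for the other.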

	And there is no question of the precision available.
	It is decided in the declaration of the interval
	objects.  WhereEVER the CPUs lie.

I am sorry, but that makes little sense in either a language context
or a hardware one.  A declaration is more often associated with
the storage representation than with the arithmetic operations.  In
particular, that is true for many or most coprocessors.  Consider
the common and simple example of whether they use hard or soft
underflow; that is NOT associated with the type, but with whether
the operation is passed to the coprocessor or not.
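For concreteness: under IEEE gradual ("soft") underflow, the gap between
two distinct tiny values is representable as a subnormal, so x - y is
zero only when x == y; a unit that flushes subnormals to zero ("hard"
underflow) silently breaks that property for the same declared type.  A
sketch on an IEEE-double host (flush-to-zero itself cannot be portably
triggered from here, so the contrasting result is only described):

```python
import sys

# With gradual underflow, the difference of two distinct tiny doubles
# is itself representable as a subnormal, hence nonzero.
x = sys.float_info.min        # smallest normal double, 2**-1022
y = sys.float_info.min * 1.5  # exactly representable nearby value

diff = y - x                  # exactly 2**-1023: a subnormal

print(diff > 0.0)  # True under gradual underflow; a flush-to-zero
                   # coprocessor would deliver 0.0 instead
```

Which behaviour you get depends on where the operation executes, not on
how x and y were declared - which is the point above.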

>  (3) Specify that all optional operations ALSO
>  return tightest possible results should those
>  operations be chosen to be part of the
>  implementation.  (This leaves it possible for
>  looser implementations of these operations to
>  be hanging around, just not as part of the
>  testable standard.)

That more-or-less means that all serious numeric programs will be
outside the standard, which rather removes the point.

	Well, I must say I don't understand this either.

I have almost never seen a realistic numeric program that used only
the basic arithmetic operations (even including square root).  And
every numeric language since the early 1950s has had 'problematic'
operations, such as trigonometric functions.

Er, what IS the blind sequence for anything non-trivial?  That
approach was taken by C99 for complex numbers, and it was and is
a disaster.  Even with as simple an operation as multiplication
of complex numbers, the 'obvious' code is not the most precise.
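A concrete instance of that last point: the "blind" textbook formula
(ac - bd) + (ad + bc)i can lose the entire result to intermediate
overflow even when the true real part is exactly zero.  A sketch (the
helper naive_mul is hypothetical, not any standard library routine):

```python
import math

def naive_mul(a, b, c, d):
    """The 'obvious' blind sequence for (a+bi)*(c+di)."""
    return (a * c - b * d, a * d + b * c)

# (x + xi)**2 has real part x*x - x*x = 0 exactly, but once x*x
# overflows to inf the blind sequence computes inf - inf = nan.
x = 1e308
re, im = naive_mul(x, x, x, x)

print(math.isnan(re))  # True: the exactly-zero real part is lost
print(math.isinf(im))  # True: 2*x*x genuinely overflows, so inf is
                       # at least the correctly rounded answer here
```

This is why C99's Annex G treatment of complex arithmetic caused the
trouble alluded to above: "the code as written" is simply not a precise
enough specification for anything beyond the basic operations.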

	Well, let's see...

	Off the top of my head:

		ff(xx) = sqrt(xx^2) - abs(xx) + 2.

	As Nate is fond of reminding us, this should return
	the fairly narrow interval [2,2] for all xx.

	And yet, implementing the function as written (i.e.
	blindly) would result in wider intervals which depend
	on xx.

	That's what I mean by "no worse than the blind
	sequence written in the source".

	Such a result should be permitted without excluding
	the possibility that a very clever compiler might
	figure out that [2,2] is the answer in all cases.
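To make that example concrete, here is a toy blind evaluation of ff
(a sketch only: directed rounding is omitted, which is harmless here
because every endpoint is exactly representable).  Over xx = [-1, 1]
the blind sequence yields [1, 3], while the tightest enclosure is [2, 2]:

```python
import math

# Minimal interval helpers; an interval is a (lo, hi) tuple.
# Outward rounding is omitted -- all endpoints below are exact.
def isqr(x):
    lo, hi = x                      # xx^2, handling intervals that
    if lo <= 0.0 <= hi:             # straddle zero
        return (0.0, max(lo * lo, hi * hi))
    return (min(lo * lo, hi * hi), max(lo * lo, hi * hi))

def isqrt(x):
    return (math.sqrt(x[0]), math.sqrt(x[1]))

def iabs(x):
    lo, hi = x
    if lo <= 0.0 <= hi:
        return (0.0, max(-lo, hi))
    return (min(abs(lo), abs(hi)), max(abs(lo), abs(hi)))

def isub(x, y):
    return (x[0] - y[1], x[1] - y[0])

def iaddc(x, c):
    return (x[0] + c, x[1] + c)

def ff(xx):
    # sqrt(xx^2) - abs(xx) + 2, evaluated exactly as written
    return iaddc(isub(isqrt(isqr(xx)), iabs(xx)), 2.0)

print(ff((-1.0, 1.0)))  # (1.0, 3.0): dependency between the two
                        # occurrences of xx is lost by blind evaluation
```

The widening comes from subtracting [0, 1] from [0, 1] as if the two
operands were independent; only a cleverer compiler (or a rewritten
source) recovers [2, 2].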

Yes, I know.  But what IS the "blind sequence" for a complex
multiplication?  If the specification is going to be applicable only
to programs that use nothing but the basic arithmetic operations,
then I don't see that it's going to be much use.


Regards,
Nick Maclaren.