
Re: Proposal for a new structure of the standard



> Date: Tue, 13 Jul 2010 11:44:00 +0200
> From: Vincent Lefevre <vincent@xxxxxxxxxx>
> To: P1788 <stds-1788@xxxxxxxxxxxxxxxxx>
> Subject: Re: Proposal for a new structure of the standard
> 
> Nice work. A few comments:

	Vincent,

	John is on vacation at the moment but I will try to
	answer your questions as best I can.

	It will be my opinion though.  I cannot speak for the
	others.

> 
> * Does IDBar have to be a finite set? This doesn't seem to be really
>   useful and that would rule out some implementations, such as
>   intervals of GMP integers, for instance.

	Well, two things here.

	First, no, it does not have to be a finite set.  It just
	happens to be finite in every implementation known or
	contemplated & is therefore useful in that sense.

	And, second, that includes GMP.  GMP may be able to
	represent a great many more numbers.  But not infinitely
	many. :-)

	Not even close.
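	To make the finiteness point concrete, here is a small
	Python aside (the names are mine, not the standard's):
	for positive binary64 values, the IEEE bit patterns are
	monotone in the value, so one can literally count the
	representable numbers between any two of them.  The same
	counting argument bounds any GMP datatype on a machine
	with finite memory.

```python
import struct

def doubles_between(a: float, b: float) -> int:
    """Count the representable binary64 values in [a, b), using the
    fact that for positive doubles the signed-integer view of the
    bit pattern is monotone in the value."""
    to_bits = lambda x: struct.unpack("<q", struct.pack("<d", x))[0]
    return to_bits(b) - to_bits(a)

print(doubles_between(1.0, 2.0))   # 2**52 = 4503599627370496
```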

> 
> * IMHO, the definition of the hull should not be required. It seems
>   to be useful only for the tightest arithmetic, and the standard
>   shouldn't require things that are not used.

	The definition of the hull is required for uniqueness.
	Which would be true whether or not tightest arithmetic
	is in effect.

	It is also required for reproducibility.  Not merely
	reproducibility among implementations but reproducibility
	WITHIN an implementation.  For without it, how is one to
	know that this interval is the same as one computed
	earlier?  And, if not the same, how is one able to
	discover that fact if both extract the same parameters
	(either midpoint & radius or infimum & supremum)?

	The uniqueness provided by the hull is required just to
	make sure the user knows what is going on.
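	A minimal sketch of what that uniqueness buys, in Python
	(my own toy code, not anything from the draft): round the
	exact infimum down and the exact supremum up to the
	nearest binary64 values.  Both endpoints are then forced,
	so two computations of the same exact interval can never
	disagree.

```python
from fractions import Fraction
import math

def round_down(x: Fraction) -> float:
    """Largest binary64 value <= x (round to nearest, then step
    down one ulp if that overshot)."""
    f = float(x)
    return f if Fraction(f) <= x else math.nextafter(f, -math.inf)

def round_up(x: Fraction) -> float:
    """Smallest binary64 value >= x."""
    f = float(x)
    return f if Fraction(f) >= x else math.nextafter(f, math.inf)

def hull(lo: Fraction, hi: Fraction) -> tuple[float, float]:
    """Tightest binary64 inf-sup enclosure of the exact interval
    [lo, hi].  Each endpoint is uniquely determined, so the result
    is reproducible by construction."""
    return (round_down(lo), round_up(hi))

inf, sup = hull(Fraction(1, 3), Fraction(2, 3))  # brackets [1/3, 2/3]
```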

	But there is another issue here which we need to decide.

	Should we, as a standards body, decide & specify the
	algorithm used to compute the hull?  Or, if not the
	algorithm itself, the properties to be met by a hull
	that provides for uniqueness?

	It is a difficult question.  Made all the more difficult
	by all the cases we must consider:

		(1) The inf-sup hull for tightest possible result.
			(This is likely the only easy case.)

		(2) The inf-sup hull for a looser result.
			How loose is OK & how much is too much?
			And, when loose, how is uniqueness to be
			decided?

		(3) The mid-rad hull for tightest possible result.
			John has already eloquently outlined the
			difficulties surrounding this case.

		(4) The mid-rad hull for a looser result.
			And this case inherits many of the same
			difficulties of both cases (2) & (3) above.

		(5) Any other cases?
			Are we going to support any other styles of
			interval arithmetic, whether known to exist
			today or not?  If so, what problems for the
			definition of a hull are unique to those
			arithmetics?

	We have our work cut out for us however we decide these
	questions.
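	To see why the mid-rad cases are slippery, here is one
	hypothetical mid-rad hull in Python (a sketch under my
	own conventions, not a proposal): take the rounded
	midpoint, then widen the radius until containment is
	certain.  It is valid, but many (mid, rad) pairs cover
	the same interval, so the standard would still have to
	say which pair is THE canonical one.

```python
from fractions import Fraction as F
import math

def midrad_hull(lo: float, hi: float) -> tuple[float, float]:
    """One possible mid-rad hull of the binary64 interval [lo, hi]:
    midpoint rounded to nearest, radius grown (checked exactly with
    rationals) until [mid - rad, mid + rad] truly contains [lo, hi].
    Nothing here makes this choice unique among all valid pairs."""
    mid = lo + (hi - lo) / 2.0
    rad = max(mid - lo, hi - mid)
    # widen by one ulp at a time until containment is certain
    while F(mid) - F(rad) > F(lo) or F(mid) + F(rad) < F(hi):
        rad = math.nextafter(rad, math.inf)
    return (mid, rad)

print(midrad_hull(1.0, 2.0))   # (1.5, 0.5)
```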

>   One question is: do
>   we wish to allow arbitrary precision, where the precision would
>   be determined more or less dynamically? That would make sense
>   with the "valid" version.

	Yes, I believe we must include such things.  I think
	most people agree with that.
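	For illustration, a "valid" (containment-preserving but
	not necessarily tightest) addition at a dynamically
	chosen precision is easy to sketch with Python's decimal
	module; the function and its name are mine, purely by
	way of example:

```python
from decimal import Decimal, ROUND_FLOOR, ROUND_CEILING, localcontext

def add_valid(xlo, xhi, ylo, yhi, digits):
    """Interval addition at a working precision chosen at run time:
    round the lower endpoint toward -infinity and the upper toward
    +infinity, so the exact sum is always enclosed even though the
    result need not be the tightest possible."""
    with localcontext() as ctx:
        ctx.prec = digits            # precision picked dynamically
        ctx.rounding = ROUND_FLOOR
        lo = Decimal(xlo) + Decimal(ylo)
        ctx.rounding = ROUND_CEILING
        hi = Decimal(xhi) + Decimal(yhi)
    return lo, hi

print(add_valid("0.333333", "0.333334", "0.1", "0.1", 4))
# -> (Decimal('0.4333'), Decimal('0.4334'))
```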

> 
> * Should there be a notion of internal and interchange idatatypes?

	Perhaps there should.  But that is a discussion for
	another day.  It need not touch on the issues here.

	John has gone to great trouble to couch this discussion
	in terms of level 1 & level 2 notions only.

	Issues of representation, whether exchangeable or not,
	don't enter into it.

> 
> * I don't think reproducibility is necessarily important (at least
>   for all applications). The standard should not require it, only
>   recommend it. If one really wants a good (but somewhat slower)
>   implementation with the best properties, one should use the
>   tightest arithmetic, and reproducibility would be implied.

	Of course reproducibility is not required for all
	applications.  Just as it is true that reproducibility
	IS required for SOME applications.  So, as a standards
	body, we must define what reproducibility means to both
	the user & the implementer.

	(And, BTW, reproducibility is NOT implied even in the
	tightest arithmetic, as John has pointed out.)

	Your comment really goes to the issue of whether
	reproducibility is to be considered the default or not.

	And the answer to that is: I don't know.  It is part of
	this discussion.  There are valid arguments both ways.

	I have my own opinion, however, which we can discuss if
	you like.  I'll give you a hint though: it may differ
	from yours.

	Largely because I would rather not have Svetoslav weep
	for my sins any longer.  It was too hard on me the last
	time. :-)

> 
>   Note: Interval arithmetic is not designed to detect bugs in the
>   processors. And if the goal is to check the results because of
>   possible bugs somewhere, there may be other (better) ways to check
>   them than rerunning the same program on a different platform.
>   Indeed, by doing that, one will not detect bugs in the program
>   itself. So, one may want to run a different algorithm on a
>   different platform, and since the algorithm is different,
>   reproducibility no longer matters.
> 
> -- 
> Vincent Lefèvre <vincent@xxxxxxxxxx> - Web: <http://www.vinc17.net/>

	Of course interval arithmetic is not designed to detect
	bugs in processors.  Or anywhere else for that matter.
	What ever gave you THAT idea?

	If it was John's mention of the Pentium bug, that was
	merely by analogy.  And probably because I put the idea
	into his head when I used it as an example of incorrect
	hardware during our discussions prior to his paper.

	But interval arithmetic IS designed to assure the user of
	the correctness of the results that are computed with it.
	For were it not so, why would any user go to all the
	trouble needed to use it?  One can always compute faster
	in some hardware floating-point.  And one can get more
	digits more easily computing with GMP, MPFR, et al.  But
	neither of those methods tells you exactly HOW MANY of
	those digits are correct, if any.

	Intervals do that for you.
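	A toy illustration of that guarantee, in Python (crude
	outward rounding of my own devising, one ulp per
	endpoint per operation -- a real library would use
	directed rounding instead):

```python
import math

def add_outward(a, b):
    """Valid (if crude) interval addition in binary64: add with
    round-to-nearest, then bump each endpoint outward by one ulp,
    which certainly covers the single rounding error incurred."""
    return (math.nextafter(a[0] + b[0], -math.inf),
            math.nextafter(a[1] + b[1], math.inf))

# An enclosure of the real number 1/10 (the double 0.1 sits just above it).
tenth = (math.nextafter(0.1, -math.inf), 0.1)

acc = (0.0, 0.0)
for _ in range(10):
    acc = add_outward(acc, tenth)

# The enclosure certifiably brackets the true answer 1, whereas the
# bare float sum 0.1 + ... + 0.1 gives 0.9999999999999999 with no
# indication of how wrong it is.
print(acc[0] < 1.0 < acc[1])   # True
```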

	But only if they are trusted to do so.

	Quality of an implementation goes a long way to gaining
	that trust among those who are not trained in formal
	interval methods.  That means how tight a result can be
	returned by an implementation.  It also means how well
	it works in other ways that go towards trust.  Does it
	have bugs or not?  Are the answers returned the answers
	expected?  Can they be understood?  Do I believe them?
	That sort of thing.

	Reproducibility is part of that.  If I get one answer on
	one machine & another on another, why should I trust
	EITHER of them?  It has been suggested that in that case
	the TRUE answer must, therefore, be in the intersection
	of both.  But that is only a valid conclusion if I trust
	both answers.  If I mistrust one or both merely BECAUSE
	they are different then I can come to the perfectly
	logical conclusion that one or both may be suffering from
	a bug manifesting itself in either one (I know not which)
	or in slightly different ways in both.  How am I, not
	being an expert in the field, to know otherwise?

	Oops.  I see I've spilled a bit of information about my
	opinion on reproducibility.

	Well, take it as my opinion alone.  The others don't
	share it.

	But I have time to convince them.

	And you. :-)

	Take care,

				   Dan