
Re: Motion 52 -- Time to revive the "Expression" subgroup?



On 11/4/2013 5:44 PM, Jean-Pierre Merlet wrote:
> On 11/03/2013 09:57 PM, Michel Hack wrote:
>> One of the issues that has irked me over the years is whether an
>> interval is intended to represent a single uncertain value (as in
>> evaluation of sensitivity to initial conditions), or a range of
>> values (as in constraint propagation).
>>
>> I was reminded of this while examining Clause 6, "Expressions and
>> the functions they define", in the current P1788 Draft 8.1, and the
>> subject of Motion 52, which just reverted to discussion mode for
>> one more week.
>>
>> At the end of Clause 6.1 (c) one of the effects of repeated
>> subexpressions is mentioned:
>>
>> "If the algebraic expression is evaluated naively, such a
>> subexpression is evaluated more than once, which affects efficiency
>> but not the numerics of what is computed."
>>
>> Assuming no side-effects, this may be the only worry for *point*
>> evaluation.  To this I wanted to add:
>>
>> Perhaps more significantly, when evaluated in interval mode,
>> multiple instances of the same variable can lead to excessive
>> widening of the final result; see Clause 6.x below.
>>
>> I even drafted such a Clause 6.x, to be inserted between current
>> 6.3 (The FTIA) and 6.4 (Related Issues):
>>
>> 6.x. The dependency issue.  When the same variable occurs multiple
>> times in an expression, and the expression is evaluated in interval
>> mode, there are two possible interpretations: (a) the interval
>> variable denotes a single uncertain value, or (b) the interval
>> denotes a range of values. In case (a) there would be an implied
>> dependency between two instances of the same variable that is not
>> present in case (b).  The most extreme example might be x-x which
>> would be the singleton [0] when viewed as (a), but would be an
>> interval twice as wide as x when viewed as (b).  Since the
>> evaluator (interpreter, compiler) typically does not know the
>> intent, it must assume view (b) in order to guarantee containment
>> and the FTIA. It also means that it must be possible to control
>> compiler optimizations that turn x-x into 0, x+x into 2*x, or x*x
>> into sqr(x) -- and it explains why the standard requires the
>> presence of a Square function sqr().
>>
>>
>> I was however concerned about this, which is why I would like to
>> open discussion on this point:
>>
>
> I think that by default the compiler should take the expression "as
> is" and provide simplifications only on demand.


I think we cannot depend on a language implementation ignoring
optimization settings, etc.  Thus some systems will 'optimize'
x-x to 0.  So we cannot advocate writing x-x, at least if the answer matters!
> If we let the
> compiler do this kind of job we may end up with inefficient
> calculation (e.g. factoring multivariate polynomials without taking
> into account the ranges of the variables). Hence I have always
> advocated pre-processing the expression (e.g. with a symbolic
> computation tool), which allows, at least in some cases, obtaining
> the best expression in a given context.

I think that transformation to "single-use expressions" by a computer
algebra system is a good idea, but beyond the scope of this
proposed standard.
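To illustrate what such a transformation buys, here is a small sketch
(hypothetical helper functions, intervals as (lo, hi) tuples, rounding
ignored) evaluating f(x) = x - x^2 over x = [0, 1] in three
algebraically equivalent forms with decreasing numbers of occurrences
of x:

```python
# Naive interval operations on (lo, hi) tuples; illustration only.
def sub(a, b):
    return (a[0] - b[1], a[1] - b[0])

def mul(a, b):
    p = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(p), max(p))

def sqr(a):
    lo = 0.0 if a[0] <= 0.0 <= a[1] else min(a[0]**2, a[1]**2)
    return (lo, max(a[0]**2, a[1]**2))

x, one, half, quarter = (0.0, 1.0), (1.0, 1.0), (0.5, 0.5), (0.25, 0.25)

naive  = sub(x, mul(x, x))                 # x - x*x    : x occurs three times
better = mul(x, sub(one, x))               # x*(1 - x)  : x occurs twice
single = sub(quarter, sqr(sub(x, half)))   # 1/4 - (x - 1/2)^2 : x occurs once

print(naive)   # (-1.0, 1.0)
print(better)  # (0.0, 1.0)
print(single)  # (0.0, 0.25) -- the exact range of f over [0, 1]
```

The single-use form achieves the exact range, because with only one
occurrence of x there is no dependency for naive evaluation to lose.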

In reality, I think that "foolproof"  or "compiler-optimization-proof" use of
a library implementing an interval standard will require that higher-level
languages be used to simulate something like machine language
with one operator per line.  For example, instead of a+b+c,

  target1 := interval_double_plus(a,b)
  target2 := interval_double_plus(target1,c)
etc.

Of course a symbolic computation tool, as you suggest, could do this
for you.
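[For concreteness, a runnable sketch of that discipline, with a
hypothetical helper named after the one above; directed rounding is
omitted, though a real implementation would round the lower bound down
and the upper bound up:]

```python
# One operation per line for a + b + c, intervals as (lo, hi) tuples.
# Each step is an opaque library call, so an optimizer has no
# algebraic expression left to refold.
def interval_double_plus(a, b):
    # Illustration only: exact addition, no outward rounding.
    return (a[0] + b[0], a[1] + b[1])

a, b, c = (1.0, 2.0), (0.5, 0.5), (-1.0, 0.0)
target1 = interval_double_plus(a, b)
target2 = interval_double_plus(target1, c)
print(target2)  # (0.5, 2.5)
```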

RJF

> Best
>
> JPM