
Re: Motion 31 draft text V04.4, extra notes



John,

Thank you for the answer.

> I'm not so interested in how easy it is to *define* as how easy it is to *implement*.
> I should hate to implement, in "tightest" mode, something like op: decimal64 X binary32 -> binary64 
> (meaning infsup in each case) let alone operations involving several exotic implicit types.

Yes. The standard has a 4-level structure. Level 1 is the mathematical level,
but we must keep in mind how to implement it at the lower levels.

I want to say a few words about level 4 (bit strings).
We can imagine several kinds of implementation; I want to classify them according to how memory is allocated.
1) Memory is statically assigned to the variable.
   For example, 128 bits are used to represent a pair of Binary64 inf and sup.
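A minimal C sketch of case (1) — the struct layout and the names (infsup_binary64, inf, sup) are my own illustration, not taken from the draft:

```c
#include <stddef.h>

/* Case (1): the interval type is fixed at compile time, so the variable's
   storage is a plain 128-bit struct holding two Binary64 bounds.
   (Hypothetical names, not from the draft.) */
typedef struct {
    double inf;  /* lower bound */
    double sup;  /* upper bound */
} infsup_binary64;

/* The compiler assigns the 128 bits statically; no allocation is needed. */
static const infsup_binary64 unit = { 0.0, 1.0 };  /* the interval [0,1] */
```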
2) Memory consists of a constant-size descriptor and of other memory pieces that are allocated at
   variable initialization. For example, the descriptor could contain:
   "int precision;" precision of the bounds
   "int decoration;" decoration bits
   "int infExp, supExp;" binary exponents of inf and sup
   "limb *infMant, *supMant;" pointers to two pieces of memory with the mantissa bits.
   The "init" operation allocates "infMant" and "supMant" according to the given precision.
   When the variable is used to store the result of an arithmetic operation,
   memory is not reallocated; the infExp, supExp, infMant, supMant bits are modified.
   The "free" operation deallocates the "infMant" and "supMant" pieces of memory.
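Case (2) could look like this in C (a sketch; infsup_var, infsup_init, infsup_free and the limb type are hypothetical names following the descriptor fields above):

```c
#include <stdlib.h>

typedef unsigned long limb;

/* Case (2): a fixed-size descriptor plus mantissa storage allocated once,
   at variable initialization. */
typedef struct {
    int precision;            /* precision of the bounds, in limbs */
    int decoration;           /* decoration bits */
    int infExp, supExp;       /* binary exponents of inf and sup */
    limb *infMant, *supMant;  /* mantissa digits of inf and sup */
} infsup_var;

/* "init": allocate the mantissa pieces for the requested precision. */
int infsup_init(infsup_var *x, int precision) {
    x->precision = precision;
    x->decoration = 0;
    x->infExp = x->supExp = 0;
    x->infMant = calloc(precision, sizeof(limb));
    x->supMant = calloc(precision, sizeof(limb));
    return x->infMant != NULL && x->supMant != NULL;
}

/* Storing a result only rewrites the exponent and mantissa bits;
   "free" releases the mantissa pieces (the descriptor is not heap-allocated). */
void infsup_free(infsup_var *x) {
    free(x->infMant);
    free(x->supMant);
    x->infMant = x->supMant = NULL;
}
```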
3) Memory consists of a fixed-size (say, 64-bit) pointer to another piece of variable size.
   That piece has a fixed-size header and variable-size contents.
   The fixed-size header contains a class tag and hidden fields used for garbage collection.
   The class tag determines what fields are contained in the variable-size contents.
   Examples of class tags and related fields:
   Empty - only an "int decoration;" field;
   InfSupBinary64 - "int decoration; double inf, sup;" a nonempty interval with Binary64 inf and sup;
   InfSupBinary128 - "int decoration; long inf0, inf1, sup0, sup1;" a nonempty interval with software-emulated Binary128 inf and sup;
   InfSupRational - "int decoration; Rational *inf, *sup;" a nonempty interval whose inf and sup are objects of class Rational with variable precision.
   The "init" operation initializes the variable with a null pointer.
   When the variable is used to store the result of an arithmetic operation,
   a new piece of memory is allocated. The size and class tag of the new piece are determined by the result of the operation.
   The old piece becomes garbage.
   The garbage collector finds garbage pieces from time to time and frees them.
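Case (3) might be sketched in C as follows (GC header fields omitted; the tag and struct names follow the examples above but are otherwise my own invention):

```c
#include <stdlib.h>

/* Case (3): every interval value is a pointer to a heap object whose
   fixed-size header carries a class tag; the tag determines the payload.
   (Garbage-collection fields omitted from the header for brevity.) */
typedef enum { CLASS_EMPTY, CLASS_INFSUP_BINARY64 /* , ... */ } class_tag;

typedef struct {
    class_tag tag;    /* which subclass this object is */
    int decoration;   /* decoration bits, present in every subclass */
    /* variable-size payload follows, depending on tag */
} header;

typedef struct {
    header h;
    double inf, sup;  /* bounds, only for CLASS_INFSUP_BINARY64 */
} infsup_b64_obj;

/* Storing a result allocates a fresh object whose size and tag are chosen
   from the result; the previously referenced object becomes garbage. */
header *make_infsup_b64(double inf, double sup) {
    infsup_b64_obj *p = malloc(sizeof *p);
    p->h.tag = CLASS_INFSUP_BINARY64;
    p->h.decoration = 0;
    p->inf = inf;
    p->sup = sup;
    return &p->h;
}
```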
4) Memory consists of a fixed-size part and of an optional additional piece.
   A few bits of the fixed-size part determine whether the other bits
   are fixed-size numbers or a pointer to an additional piece of memory.
   For example, it may be 128 bits of memory with two 64-bit fields.
   If both 64-bit fields are IEEE 754 Binary64 numbers, then it is an inf-sup interval with the default decoration.
   If both of them are IEEE 754 Binary64 NaNs, then a pointer is combined from the payload bits of these NaNs.
   This pointer may point to a piece of memory with one of the objects described in (3).
   When the variable is used to store the result of an arithmetic operation,
   the 128 bits are either filled with two Binary64 numbers or a new piece of memory is allocated.
   The old piece (if there was one) becomes garbage. The garbage collector collects garbage pieces.
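A hedged C sketch of the NaN-payload encoding of case (4) — the particular bit layout (low pointer half in the first NaN payload, high half in the second) is one possible choice, not prescribed by the text:

```c
#include <stdint.h>
#include <string.h>
#include <math.h>

/* Case (4): 128 bits hold either two Binary64 bounds directly, or, when
   both halves are NaNs, a pointer assembled from their payload bits. */
typedef struct { double lo, hi; } box128;

/* Two NaN fields signal "pointer to a heap object as in case (3)". */
int is_boxed_pointer(const box128 *b) {
    return isnan(b->lo) && isnan(b->hi);
}

/* Split the pointer across the payloads of two quiet NaNs. */
box128 box_pointer(void *p) {
    box128 b;
    uint64_t addr = (uint64_t)(uintptr_t)p;
    uint64_t qnan = 0x7ff8000000000000ull;               /* quiet-NaN pattern */
    uint64_t lo = qnan | (addr & 0xffffffffull);          /* low 32 bits  */
    uint64_t hi = qnan | ((addr >> 32) & 0xffffffffull);  /* high 32 bits */
    memcpy(&b.lo, &lo, sizeof lo);
    memcpy(&b.hi, &hi, sizeof hi);
    return b;
}

/* Recombine the pointer from the two payloads. */
void *unbox_pointer(const box128 *b) {
    uint64_t lo, hi;
    memcpy(&lo, &b->lo, sizeof lo);
    memcpy(&hi, &b->hi, sizeof hi);
    return (void *)(uintptr_t)((lo & 0xffffffffull) | ((hi & 0xffffffffull) << 32));
}
```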

The assignment of the interval datatype differs among these cases.

The interval datatype is an attribute of variables in cases (1) and (2):
- statically, by the compiler, in case (1);
- by runtime initialization in case (2).

The interval datatype is an attribute of operations in cases (3) and (4).
Some implementations may expose the subclasses (like InfSupBinary64/InfSupRational) to the user;
some implementations may hide them from the user to keep the API simpler.

The number of supported interval datatypes in cases (3) and (4) may exceed
the number of implementation subclasses.
For example, an implementation with the subclasses from example (3) may support infsup_Decimal64 operations too.
If the result of an operation is empty, it will be represented by the Empty subclass.
If the result of an operation is [0,1], it will be represented by the InfSupBinary64 subclass with bounds 0.0 and 1.0.
If the result of an operation is [0.1,0.2], it will be represented by the InfSupRational subclass with bounds 1/10 and 1/5.

Currently the draft standard forbids the above implementation, because the interval [0,1] does not remember
whether it was obtained by a Binary64, a Decimal64 or a Rational operation.

The implementations (3) and (4) are natural for dynamic languages with garbage collection.
Is it sensible to exclude such implementations from the scope of the standard?
Let me point to the popularity of the Sage library, with its Python-based interface.
http://www.sagemath.org/

> IMHO the only mixed type operations that should be *mandated* by Level 2 are where
> - Types T1, T2 and T are inf-sup types derived from IEEE754 formats
> - of the same radix
> - such that the underlying implementation supports "formatOf" for the
>   corresponding point operations, in the needed rounding modes;
> because these essentially come for free (I believe).

It's one thing
to *mandate* an operation, and
another to make it optional but define its properties if an implementation implements the "tightest" mode of that operation.

I'm sure that operations must satisfy "containment" or be an "interval extension",
but for now I'm not sure that we must require operations to always be "tightest" or return "hull_T" in all modes.
The "tightest" mode is good for reproducibility, but it can't be as fast as a mere "containment" mode on today's hardware.
Maybe we can require "tightest" as a debug mode only.

I would mention here a difficulty of fast implementation of hull_{midrad_F}.
The "hull" must be the same for both the + and * operations.
If we consider "+" only, a good choice of hull is
hull_{midrad_F}([u,v]) = [ mid_F([u,v]) - rad_F([u,v]), mid_F([u,v]) + rad_F([u,v]) ]
Indeed, at Level 1, <m1,r1> + <m2,r2> = [ m1 + m2 - r1 - r2, m1 + m2 + r1 + r2 ].
m = mid_F([ m1 + m2 - r1 - r2, m1 + m2 + r1 + r2 ]) = roundNear(m1 + m2) is easy.
r = rad_F = roundUp(MAX(m1 + m2 + r1 + r2 - m, m - m1 - m2 + r1 + r2)) =
  roundUp(r1 + r2 + abs(m1 + m2 - m)) is a little more tricky, but it is also possible with hardware F operations.

But the same definition of hull_T will be difficult for the "*" operation.
For positive intervals
<m1,r1> * <m2,r2> = [ (m1 - r1)*(m2 - r2), (m1 + r1)*(m2 + r2) ] =
[ m1*m2 + r1*r2 - r1*m2 - r2*m1, m1*m2 + r1*r2 + r1*m2 + r2*m1 ]
m = mid_F([ m1*m2 + r1*r2 - r1*m2 - r2*m1, m1*m2 + r1*r2 + r1*m2 + r2*m1 ]) = roundNear(m1*m2 + r1*r2).
This is not easy if we don't have complete-F arithmetic (or an F-format fma at least).
In any case, INTLAB's midrad arithmetic will be faster, though its multiplication is not the "tightest" hull_{midrad_F}.

So I think that Level 1 can describe T1 x T2 -> T operations,
but the levels below will say which of them are mandatory.

  -Dima

----- Original Message -----
From: j.d.pryce@xxxxxxxxxxxx
To: dmitry.nadezhin@xxxxxxxxxx
Cc: stds-1788@xxxxxxxxxxxxxxxxx
Sent: Wednesday, April 4, 2012, 16:10:50 GMT+04:00 Abu Dhabi, Muscat
Subject: Re: Motion 31 draft text V04.4, extra notes

Dmitry & P1788

At last I'm catching up on recent discussions. 

First let me say I agree with criticisms I've seen of the Motion 13 Comparisons text in §5.6.9. They *need* definitions from first principles in terms of sets, rather than endpoints. If we don't have that then how they act on empty and unbounded inputs is likely to be ad-hoc and possibly contradictory. I'm just now discussing revised text with Vincent.

On 20 Mar 2012, at 11:59, Dmitry Nadezhin wrote:
> 1) p.10. Section 4.1. ... "Level 2 arithmetic normally acts on intervals of a given type
> to produce an interval of the same type."
> 
> Is this important ?
> The level 2 draft defines level 2 interval operations op_T(x, y) = hull_T(op(x, y)).
> So definition
> op: T1 X T2 -> T
> is as easy as definition
> op: T X T -> T

I'm not so interested in how easy it is to *define* as how easy it is to *implement*. I should hate to implement, in "tightest" mode, something like
op: decimal64 X binary32 -> binary64 
(meaning infsup in each case) let alone operations involving several exotic implicit types.

> Could we omit this phrase or append to it something like this :
> "... , but interval operations that act on intervals of types other than the result type are always possible".
But I'd accept this amendment.

IMHO the only mixed type operations that should be *mandated* by Level 2 are where
- Types T1, T2 and T are inf-sup types derived from IEEE754 formats
- of the same radix
- such that the underlying implementation supports "formatOf" for the
  corresponding point operations, in the needed rounding modes;
because these essentially come for free (I believe).

> 2) p.12. Section 5.2. "Excluding Empty is indicated by the notation I, e.g. \underline{I}R or \underline{I}(R), the nonempty closed intervals with
> (finite) real bounds.
> 
> Maybe "Excluding Empty, Entire and semi-infinite intervals ... " is more precise ?

Yes, I'll do that.

> 3)p. 18 Table 2 and p.20. Table 3. "recip" or "inv" ?
> Table 2 names inverse function as "recip" and Table 3 names it "invRev".

Good point. I think I'll go for "recip" since someone, Juergen I think, would like to rename the "reverse" functions as "inverse" functions.

John Pryce