
Re: [P1619-1] NIST Special Publication 800-38D Released



Charles Martin wrote:
> notice that this finalizes GCM
> [...]

I was very disappointed by the new NIST SP 800-38D, specifically
section 8, which deals with key- and IV-management. When we wrote 1619.1,
we spent A LOT of time on these IV-management rules, trying to make
them secure while allowing flexibility of implementations and
architectures. The NIST document, on the other hand, disallows some
perfectly secure choices (apparently for no good reason), and at the
same time opens up at least one security vulnerability that I can see.

Below I list the shortcomings that I see in this new document. I
encourage interested parties to lobby NIST to fix these problems.

-- Shai

* The security vulnerability that I see in section 8.2.1 is this: That
section describes a "deterministic" method for IV-generation, where
the IV is split into a "fixed" field that has to be unique per device
(or more generally per "context") and an "invocation" field that is
essentially a counter.

However, the document does not mandate any particular partition of
the IV field. It "suggests, but does not require" that a 96-bit IV be
partitioned into a left 32-bit fixed field and a right 64-bit invocation
field, and this is only meant "to promote interoperability." One could
therefore end up with two compliant devices (say, from two different
manufacturers) that use different partitions of their IVs, where the
"fixed" field of one device overlaps the "invocation" field of the
other and vice versa, so that their IVs can collide.

The NIST document requires that "The lengths and positions of the fixed
field and the invocation field shall be fixed for each supported IV
length for the life of the key." But having each implementation comply
with this requirement separately is inherently insufficient: a storage
system in which the same key-management subsystem is used with
heterogeneous devices may unintentionally violate the IV-uniqueness
condition anyway.
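
To make the concern concrete, here is a minimal sketch (my own
illustration in Python, not text from either document) of two devices
that each keep their own partition fixed for the life of the key, yet
still emit the same 96-bit IV under the same key:

  # Device A: left 32-bit fixed field, right 64-bit invocation field
  def iv_device_a(fixed32, counter64):
      return fixed32.to_bytes(4, 'big') + counter64.to_bytes(8, 'big')

  # Device B: left 64-bit fixed field, right 32-bit invocation field
  def iv_device_b(fixed64, counter32):
      return fixed64.to_bytes(8, 'big') + counter32.to_bytes(4, 'big')

  # Hypothetical field values; each device is internally compliant.
  iv_a = iv_device_a(fixed32=0x00000001, counter64=0x0000000000000005)
  iv_b = iv_device_b(fixed64=0x0000000100000000, counter32=0x00000005)
  assert iv_a == iv_b   # the same IV, reused under the same key

The values are of course contrived, but nothing in section 8.2.1 rules
this out when the two devices come from different vendors.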


* Beyond the specific problem above, I see a bigger problem with this
document's implicit suggestion that the "deterministic" construction
from 8.2.1 is more secure than the "randomized" construction from 8.2.2.
(The document places an onerous "global requirement" of using a key for
not more than 2^{32} blocks with the construction from 8.2.2, which for
many architectures would simply preclude using it.)

In my view, the "deterministic" construction is inherently weaker than
the "randomized" one, in that it is very vulnerable to configuration
errors. As late as Draft D20 of 1619.1 we had the following warning
about the deterministic method in the appendix that described the "IV
collision" problem (cf. Section B.7.4 in D20, available from
http://grouper.ieee.org/groups/1619/email/msg02013.html):

> It should be noted, however, that this configuration is more sensitive
> to mis-configuration error than the configurations above that use
> randomness as their means for collision avoidance.

I am pretty sure that the chance of a mis-configuration that would lead
to IV collisions is greater than the 2^{-32} bound that the NIST document
advertises as its security goal. (Especially if we consider an attacker
who tries to maliciously cause such mis-configurations.)


* The "global bound" of 2^{32} blocks per key in 8.3 is unnecessarily
strict, precluding some secure implementations for no good reason. Many
(most?) implementations that use randomized IVs would follow the
procedure that is described in the NIST document and in 1619.1, using a
fresh random IV as a starting point and then incrementing this IV for a
while before generating the next random IV. As was explained in detail
in Appendix B.7.2 of 1619.1, this method of "Incrementing a random IV"
could allow many more blocks per key without increasing the probability
of IV collisions.
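
For reference, the procedure I have in mind is roughly the following
sketch (my own code; the 96-bit IV size and the period of 100 blocks
are just example parameters):

  import os

  class RandomIncrementIV:
      # Fresh random IV every `period` blocks; increment in between.
      def __init__(self, period=100, iv_bits=96):
          self.period, self.iv_bits = period, iv_bits
          self.count, self.iv = 0, 0

      def next_iv(self):
          if self.count % self.period == 0:
              # new fresh random starting point
              self.iv = int.from_bytes(os.urandom(self.iv_bits // 8), 'big')
          else:
              # otherwise increment the previous IV (mod 2^iv_bits)
              self.iv = (self.iv + 1) % (1 << self.iv_bits)
          self.count += 1
          return self.iv.to_bytes(self.iv_bits // 8, 'big')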

Using the explanations in 1619.1, one can obtain the following bound:
suppose that we use k-bit IVs, use the same key to encrypt a total of n
blocks, but only use m fresh random IVs (which means that the other n-m
blocks are using IVs that are an increment of one of these "fresh random
IVs"). Then the probability of IV-collision is bounded by

  p <= (m-1)*(2n-m)/2^{k+1}

(At the extreme "trivial" case of m=n, this bound degenerates to the
trivial bound of (n choose 2)/2^k.)

This bound is typically MUCH better than the trivial bound that was
used to derive the limit of 2^{32} blocks in the NIST document: think of
a case where the implementation always encrypts at least 100 blocks for
each fresh random IV (so we have m <= n/100). In this case one can use
the same key for at least 10 times as many blocks without exceeding the
collision-probability goal of 2^{-32}.
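
As a quick sanity check of this claim (my own computation, assuming
k=96, m <= n/100, and comparing against the NIST limit of 2^{32} blocks):

  k, target = 96, 2.0 ** -32

  def bound(n, m):
      # the bound stated above: p <= (m-1)*(2n-m)/2^{k+1}
      return (m - 1) * (2 * n - m) / 2.0 ** (k + 1)

  n = 10 * 2 ** 32                        # ten times the NIST limit
  print(bound(n, n // 100) <= target)     # True: still within 2^{-32}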

It should be noted that allowing a larger bound may make a qualitative
difference, not just a quantitative one: in architectures that serve the
same data key to many devices, it may be infeasible to track key usage
well enough to explicitly enforce any global bound. But if the bound is
large enough, then one obtains an implicit global bound just by weighing
the key lifetime against the average bandwidth.


* The NIST document mandates that when incrementing a random IV with r
random bits, it must be incremented as an r-bit integer. This restriction
too is unnecessarily strict. It is clear that if there is some "global"
bound of 2^t blocks (or even a "local" bound of 2^t blocks before
choosing a new fresh random IV), then incrementing any t-bit sub-field
of the IV would be sufficient. (It is not hard to verify that there is
no harm in different implementations incrementing different sub-fields.)
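
For illustration, incrementing an arbitrary t-bit sub-field amounts to
something like the following sketch (my own code; `pos` is the bit
offset of the sub-field from the right end of the IV):

  def increment_subfield(iv, pos, t):
      mask = ((1 << t) - 1) << pos          # the t-bit sub-field
      field = (iv & mask) >> pos            # current counter value
      field = (field + 1) % (1 << t)        # increment modulo 2^t
      return (iv & ~mask) | (field << pos)  # splice back; rest untouched

The 2^t IVs obtained from one starting IV (the fresh IV plus its first
2^t - 1 increments) are all distinct, no matter which sub-field is
chosen; collisions across different starting IVs are exactly what the
probability bound above accounts for.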