Re: [STDS-802-Privacy] using only 24 bits of random MAC
>> > Keep in mind that the N used to calculate the probability is
>> > the number of unique devices on the switched network. As soon as you
>> > reach a router it doesn't matter if a device on the other side is
>> > using your address. The size of a forwarding table on a switch just
>> > doesn't go up to 300,000. They will _theoretically_ go up to 64k
>> > but in practice they don't. So when people architect their network
>> > they consciously make it so their switches don't melt down.
>> >
>> > We can never say never, but at 1:156,000 it is very unlikely.
I have been studying the same issue. For us at Microsoft, it would be a software architecture question: do we just rely on probabilities, or do we build some special-purpose error-handling procedure?
My experience with software development tells me that you cannot reliably develop special-purpose error handling for extremely rare errors. The reason is obvious. The probabilities above tell us that in all likelihood the error-handling code will never be used, and thus will never be reliably tested. Sure, we can try to force the error condition in the lab, but what you get then is a lab scenario that may or may not emulate what would actually happen. So you end up with the equivalent of the fire exit that nobody ever uses. When there is an actual fire, you find out that the door is rusted and won't open, or that debris has accumulated in the passageway, or whatever.
At that point, the best approach is probably "cultural." Collisions would be just one more reason to avoid building large flat L2 networks. There are others, such as the growth in network-management load to keep track of all the paths while devices move, or the potential for routing instabilities. So we should probably push the message that "big flat L2 networks are a bad idea," and maybe suggest that "anything over 16K devices is probably testing the limits of the technology."
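To put rough numbers on that 16K figure, here is a quick back-of-the-envelope sketch in Python, using the standard birthday approximation. These are my own illustrative numbers, assuming devices randomize only 24 bits of the MAC:

    import math

    # Birthday approximation: probability of at least one collision
    # among n devices drawing uniformly from 2**bits possible values.
    def collision_probability(n: int, bits: int) -> float:
        space = 2 ** bits
        # P(collision) ~= 1 - exp(-n*(n-1) / (2*space))
        return 1.0 - math.exp(-n * (n - 1) / (2.0 * space))

    # With only 24 random bits, a 16K-device flat L2 network is all
    # but guaranteed to see at least one duplicate MAC address.
    print(collision_probability(16 * 1024, 24))   # ~0.9997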
The only way to support these very big flat networks would be to add a form of detection. I definitely would not try to repeat the "duplicate address detection" approach of IPv6 and its reliance on multicast. If we want detection, we need a central point of some kind. One solution would be for proponents of big flat networks to develop such a system: on receiving a Wi-Fi connection request from some device MAC address, check it against a local database and refuse the connection if there is a duplicate. The rare colliding device will see a "connection failed" message, and may naturally retry later, perhaps with a new random MAC. On the device side, this reliance on natural interfaces would avoid the "rusty fire exit" problem.
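A minimal sketch of the kind of central check I have in mind; the names here are hypothetical, not an existing controller API:

    # Hypothetical central registry of MAC addresses currently
    # associated anywhere on the flat L2 network.
    active_macs: set[str] = set()

    def handle_association_request(mac: str) -> bool:
        # Refuse the connection if the MAC is already in use. The
        # colliding device just sees "connection failed" and will
        # naturally retry, presumably with a fresh random MAC.
        if mac in active_macs:
            return False
        active_macs.add(mac)
        return True

    def handle_disassociation(mac: str) -> None:
        # Free the address when the device leaves the network.
        active_macs.discard(mac)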
Oh, and we want the problem to be really rare, so that small L2 networks never have to worry about expensive collision-detection databases. So we should really use 46 bits of randomness.
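The same birthday arithmetic shows the difference 46 bits makes; again, my own illustrative numbers:

    import math

    def collision_probability(n: int, bits: int) -> float:
        # Same birthday approximation as in the earlier sketch.
        return 1.0 - math.exp(-n * (n - 1) / (2.0 * 2 ** bits))

    for n in (1_000, 16_384, 65_536):
        print(f"n={n:6d}  24 bits: {collision_probability(n, 24):.2e}"
              f"  46 bits: {collision_probability(n, 46):.2e}")

    # Even a 64K-device network has only about a 3-in-100,000 chance
    # of any collision with 46 random bits, versus near-certainty
    # with 24 bits.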
-- Christian Huitema