Re: [STDS-802-Privacy] using only 24 bits of random MAC
>> I have been studying the same issue. For us at Microsoft, it would be a software architecture question. Do we just rely on probabilities, or do we build some special-purpose error-handling procedure?
>>
>> My experience with software development tells me that you cannot reliably develop special-purpose error handling for extremely rare errors. The reason is obvious. The probabilities above tell us that in all likelihood the error-handling code will never be used, and thus will never be reliably tested. Sure, we can try to force the error condition in the lab, but what you get then is a lab scenario that may or may not emulate what would actually happen. So you end up with the equivalent of the fire exit that nobody ever uses. When there is an actual fire, you find out that the door is rusted and won't open, or that debris has accumulated in the passageway, or whatever.
>>
>> At that point, the best approach is probably "cultural." Collisions would be just one more reason to avoid building large flat L2 networks.
>
> +1
>> There are others, such as the growth in network-management load needed to keep track of all the paths while devices are moving, or the potential for routing instabilities.
>
> +1
Also, check the various protocols based on broadcast or IP multicast. You need to special-case ARP, DHCP, etc. If you forget one of those, say IPv6 ND, you end up with multicast storms on your flat network. And you cannot possibly know what people are inventing, from variations of multicast DNS, UPNP, local chat applications, NAT discovery, etc.
>>So we should probably push the message that "big flat L2 networks are a bad idea," and maybe suggest that "anything over 16K is probably testing the limits of the technology."
> I'd say it's more like 1K -- sure, 16K is much more like the limit of the technology, but 1K is where things start getting painful. If you say 1K, people will build to 3K - if you say 16K, folks will build to 23K, and then complain when things go Foop.
Yes. I was too optimistic. Once you take the multicast applications into account, things get bad really quickly.
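As a back-of-the-envelope illustration of why those sizes bite, here is a small Python sketch of the standard birthday approximation, assuming each device draws its 24-bit random suffix independently and uniformly (1K and 16K are just the sizes mentioned above):

    import math

    SUFFIX_BITS = 24                 # random bits per MAC in this scenario
    SPACE = 2 ** SUFFIX_BITS         # number of distinct random suffixes

    def collision_probability(n_devices, space=SPACE):
        # Birthday approximation: P(>=1 collision) ~ 1 - exp(-n*(n-1)/(2*space))
        return 1.0 - math.exp(-n_devices * (n_devices - 1) / (2.0 * space))

    for n in (1_000, 16_384):
        print(f"{n:6d} devices: P(collision) ~ {collision_probability(n):.4f}")

That comes out to roughly a 3% chance of at least one collision at 1K devices, and near-certainty at 16K, which seems consistent with the "testing the limits" intuition.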
>>
>> The only way to support these very big flat networks would be to add a form of detection. I definitely would not try to repeat the "duplicate address detection" approach of IPv6 and its reliance on multicast.
> But... but... but... It works so well - what could *possibly* go wrong?!
>> If we want detection, we need a central point of some kind. One solution would be for proponents of big flat networks to develop such a system.
>
>... and justify *why* they want the BFN.
>> The AP gets a Wi-Fi connection request from some device's MAC address, checks it against a local database, and refuses the connection if there is a duplicate. The rare colliding device will see a "connection failed" message, and may naturally retry later, perhaps with a new random MAC. On the device side, this reliance on natural interfaces would avoid the "rusty fire exit" problem.
> This also feels somewhat DADish - it also has the problem(s) of:
> 1: needing an address to use to ask the central device.
> 2: finding the central device.
> 3: having the central device need to replicate state to the backup central device.
> 4: finding the backup central device.
I was thinking more along the lines of something like a central EAP server, used in the back end by the AP, invisible to the clients. The AP gets a connection request from some MAC, adds some magic elements invented by a smart AP manufacturer, checks with the back-end server, and then fails the connection if the central server detects a collision. The client device does not have to know about any of that. It would all be part of some "added value" by those manufacturers who want to sell large L2 networks.
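Purely to make the shape of that concrete, here is a minimal sketch; the class name, the in-memory store, and the "device_hint" parameter (standing in for those magic elements) are all invented for illustration, not a protocol proposal:

    import time

    class CollisionServer:
        """Toy in-memory store of currently active MAC addresses."""

        def __init__(self, entry_lifetime_s=3600):
            self._active = {}            # mac -> (device_hint, last_seen)
            self._lifetime = entry_lifetime_s

        def check_and_register(self, mac, device_hint):
            # Return True to accept the association, False on a suspected
            # collision. "device_hint" stands in for whatever fingerprint the
            # AP adds so the server can tell a reassociating or roaming client
            # apart from a genuinely different device using the same MAC.
            now = time.time()
            # Expire stale entries so departed clients free up their old MACs.
            self._active = {m: (h, t) for m, (h, t) in self._active.items()
                            if now - t < self._lifetime}
            entry = self._active.get(mac)
            if entry is not None and entry[0] != device_hint:
                return False             # same MAC, different device: collision
            self._active[mac] = (device_hint, now)
            return True

The AP would just fail the association on a False result, so the client never sees anything beyond "connection failed" and will typically retry, possibly with a fresh random MAC.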
>>
>> Oh, and we want the problem to be really rare, so that small L2 networks never have to worry about expensive collision-detection databases.
> The important phrase in that sentence is "small L2 networks" - I realize we are wading into religious territory here, but perhaps the primary takeaway is "avoid big flat networks - here is another reason..."?
>> So we should really use 46 bits of randomness.
> Yup. Oh, and the randomness should not be http://xkcd.com/221/
:-) Or that one: http://dilbert.com/strips/comic/2001-10-25/
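For comparison, the same birthday approximation at 46 random bits (again just a rough sketch):

    import math

    # Same approximation as the earlier sketch, now with 46 random bits.
    n, space = 16_384, 2 ** 46
    print(1.0 - math.exp(-n * (n - 1) / (2.0 * space)))   # ~1.9e-06

So even a 16K-device flat network would be down in the "never have to worry about it" range.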
> W
-- Christian Huitema