Precedence on measuring jitter
Can someone please educate me as to where the numbers for jitter at the various
interfaces came from? For example, in Table 52-11, TP2 jitter is specified.
Is it based on actual data from vendors saying they are comfortable with
these values, or is it just a guesstimate?
There is a practical reason why I am asking. As an equipment vendor, we
are required to support the worst-case jitter requirements even if all the
ASIC/EO vendors we know of are doing better, because we need to be
"802.3 compliant". This prevents us from using Ethernet in
non-traditional applications. For instance, if we had more jitter margin,
we could build a line regenerator rather than just a point-to-point
link. So by making the component vendors' job "too easy", you end up
hampering the equipment vendors' ability to apply Ethernet to even more
markets. And if all the vendors we know of are so easily able to meet these
specs, why not make the jitter generation spec at TP2 tighter and the jitter
tolerance spec at TP3 easier?