My question was specifically about the Echo and NEXT cancelers, since
these are filters that are relatively simple and well-understood, and
not about the FFE-DFE, for which there are many different implementation
approaches. I appreciate your detailed answer on the FFE-DFE complexity, but
on my specific EC-NC complexity question, you left me hanging with a
vague "reasonably below 6.7". And by the way, the complexity factor is
45x, not 6.7x, because of the symbol rate increase of 6.7x. So what you
really mean is "reasonably below 45x".
Now that we're talking about the FFE/DFE complexity, let's use your numbers.
If you need a 24-tap EQ @ 833 MHz for 10GBASE-T, compared to a 12-tap
EQ @ 125 MHz for 1000BASE-T, by my calculations that's a 2 x 6.7 ~= 13x
increase in complexity for the FFE/DFE.
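The arithmetic above can be sketched in a few lines. This is only a first-order model of the thread's argument: "complexity" is taken loosely to mean multiply-accumulate operations per second (my reading, not something defined in the discussion), and the rates and tap counts are the numbers quoted in the emails.

```python
# First-order FFE/DFE complexity scaling between two PHY generations:
# (symbol-rate ratio) x (tap-count ratio). Numbers are the thread's
# estimates, not values from any specification.

def complexity_factor(rate_new_mhz, rate_old_mhz, taps_new, taps_old):
    """Rough MAC-operations-per-second scaling factor."""
    return (rate_new_mhz / rate_old_mhz) * (taps_new / taps_old)

# 10GBASE-T (24-tap EQ @ 833 MHz) vs 1000BASE-T (12-tap EQ @ 125 MHz)
factor = complexity_factor(833, 125, 24, 12)
print(f"FFE/DFE complexity factor: ~{factor:.0f}x")  # ~13x
```

With 833/125 ~= 6.7 and a tap ratio of 2, this reproduces the 2 x 6.7 ~= 13x figure.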
It is not possible for me to know the actual number you may have
used as a reference in 1000BASE-T to obtain the x6.7 metric for
the respective filter-order increase in 10GBASE-T.
I simply noticed that the x6.7 figure somehow caught on in
the SG, and has been used so far as an indicator of the filter-order
growth in 833 MHz systems. I see this figure as overly conservative.
If, for example, we assume a 12-tap FFE-DFE in 1000BASE-T, the 6.7
factor would result in an 80-tap FFE-DFE in 10GBASE-T @ 833 MHz.
A very reasonable ISI cancellation could be achieved with just a
24-tap EQ. So a factor of 2 vs. 6.7 would make a substantial
difference for implementation. (Coincidentally, a large, although
quoted as non-optimized, 80-tap FFE-DFE appears in the joint July
vendor presentation.) Simulation shows that in the case of Echo,
NEXT, and FEXT the orders do not scale as favorably as 2; nevertheless,
the respective factors still remain reasonably below 6.7.
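The tap-count arithmetic above can be made explicit. A minimal sketch, assuming the 12-tap 1000BASE-T baseline and the competing x6.7 and x2 growth factors exactly as quoted in the email:

```python
# Tap-count growth under the two competing scaling assumptions
# (all numbers are the discussion's estimates, not spec values).

baseline_taps_1000base_t = 12

taps_if_x67 = round(baseline_taps_1000base_t * 6.7)  # 80-tap FFE-DFE
taps_if_x2 = baseline_taps_1000base_t * 2            # 24-tap FFE-DFE

print(f"x6.7 scaling: {taps_if_x67}-tap FFE-DFE")
print(f"x2   scaling: {taps_if_x2}-tap FFE-DFE")
```

The gap between 80 and 24 taps is the "substantial difference for implementation" being argued.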
Vivek Telang wrote:
I have to disagree. I think the 45x complexity (again, I said this is a
first-order approximation) *does* apply to the cancellers. You
are assuming that the "system resource sharing and advanced DSP
techniques" are not already being used in the 1000BASE-T PHYs that are being
shipped today. Unless I missed something, the techniques that I saw in the
presentations are not unique to the 10GBASE-T system. Considering that
today's 1000BASE-T PHYs have power numbers that are an order of magnitude
lower than those in '99, you should assume that many of these techniques are
already being employed today.
If there was something in the techniques that was specific to the 10GBASE-T
proposal, please point me to it.
I don't dispute that many good techniques employed in 1000BASE-T would be
applicable to 10GBASE-T. In fact, this is what I see in my analysis as well.
DSP feasibility is an important point that would need to be properly
addressed. However, the DSP filter orders do not necessarily follow
the x6.7 rule.
There have been at least two presentations which showed that, with
relatively simple processing in the analog front end, the DSP burden
could be substantially reduced. Further, there would be savings from
system resource sharing and the use of advanced DSP techniques.
Vivek Telang wrote:
To reiterate the point that Dan made yesterday, I would add to
the feasibility list:
DSP feasibility for the above
Note that going from 100BASE-TX to 1000BASE-T, the symbol rate
(125MBaud) didn't change, and the DSP processing period was a relatively
healthy 8ns. In the proposed transition from 1000BASE-T to 10GBASE-T,
the symbol rate goes up from 125MBaud to 833MBaud, which means the DSP
processing period is reduced from 8ns to 1.2ns (a factor of 6.7). Also,
note the double whammy in that the number of cancellation filter
taps (Echo and NEXT) for the same coverage goes *up* by the same
factor (x6.7). So one could argue that, to first order, the canceller
complexity is ~45x that of 1000BASE-T. Of course,
the processes that will be used for 10GBASE-T will be faster than those
used for 1000BASE-T, but it's an issue that requires some thought
and discussion before we're all comfortable.
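The "double whammy" argument above can be sketched numerically. This is a first-order model only, using the thread's rates: if the echo/NEXT cancellers must cover the same impulse-response time span, the tap count scales with the symbol rate, so the multiply-accumulate workload scales roughly with the square of the rate ratio.

```python
# First-order canceller complexity scaling for 1000BASE-T -> 10GBASE-T.
# Rates are the values quoted in the thread (125 MBaud -> 833 MBaud).

RATE_1000BASE_T = 125e6  # baud
RATE_10GBASE_T = 833e6   # baud

rate_ratio = RATE_10GBASE_T / RATE_1000BASE_T  # ~6.7x higher symbol rate
period_old_ns = 1e9 / RATE_1000BASE_T          # 8 ns DSP processing period
period_new_ns = 1e9 / RATE_10GBASE_T           # ~1.2 ns processing period
canceller_factor = rate_ratio ** 2             # ~44-45x, to first order

print(f"symbol-rate ratio: {rate_ratio:.1f}x")
print(f"DSP period: {period_old_ns:.1f} ns -> {period_new_ns:.2f} ns")
print(f"first-order canceller complexity: ~{canceller_factor:.1f}x")
```

The squared dependence is exactly the point: one factor of ~6.7 from the shorter processing period, another from the longer filters needed for the same time-span coverage.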