serdes design vs run length
I was looking back through some notes from Albuquerque and came across some
remarks I had heard, and wanted to bring them up on the reflector.
Some folks have pointed out that while a number of us worry about relative
board cost, relative EMI cost, and relative system cost, we tend to ignore
the relative costs borne by the serdes itself. So, to remedy this, let me
ask the following of those of you who design serdes:
What impact does the encoding, or the run length it allows, have on the
relative cost of a serdes device at 2.5 Gb/s and 3.125 Gb/s for 8b10b,
scrambled, and 64b66b coding? Does the extended run length really drive the
cost, or have we simply not applied newer phase-lock (clock recovery)
techniques that might reduce the run-length impact?
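To make the run-length difference concrete, here is a toy sketch of my own
(not from the Albuquerque notes, and not a real PCS implementation): a
self-synchronous scrambler of the x^58 + x^39 + 1 form, as used with
64b66b, keeps runs short on typical data, but an adversarially chosen
payload can produce a run as long as the full 64-bit block, broken only by
the 2-bit sync header. 8b10b, by contrast, bounds runs at 5 bits by
construction. All function names here are mine.

```python
import random

TAPS = (57, 38)  # 0-indexed state positions for x^58 and x^39


def scramble(bits, state=None):
    """Self-synchronous scrambler x^58 + x^39 + 1 (64b66b-style)."""
    s = list(state) if state else [0] * 58
    out = []
    for b in bits:
        o = b ^ s[TAPS[0]] ^ s[TAPS[1]]
        out.append(o)
        s = [o] + s[:-1]  # shift the *output* bit into the state
    return out


def craft_input_for_output(desired, state=None):
    """Invert the scrambler: find the input that makes it emit `desired`."""
    s = list(state) if state else [0] * 58
    inp = []
    for o in desired:
        inp.append(o ^ s[TAPS[0]] ^ s[TAPS[1]])
        s = [o] + s[:-1]
    return inp


def max_run(bits):
    """Longest run of identical bits in the stream."""
    best = cur = 1
    for a, b in zip(bits, bits[1:]):
        cur = cur + 1 if a == b else 1
        best = max(best, cur)
    return best


if __name__ == "__main__":
    random.seed(0)
    # Typical case: scrambled random payload has short runs.
    payload = [random.randint(0, 1) for _ in range(6400)]
    print("random payload, max run:", max_run(scramble(payload)))

    # Worst case: a crafted payload forces 64 identical scrambled bits,
    # so the run is bounded only by the 66-bit block's sync header.
    worst = craft_input_for_output([0] * 64)
    print("crafted payload, max run:", max_run(scramble(worst)))  # 64
```

The point of the sketch is just that the scrambler's run-length bound is
statistical rather than guaranteed, which is exactly the property the CDR
question above is probing.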
Or are there other serdes issues, favoring one coding scheme over another,
that affect relative cost even more?
Thanks and take care