Figured I would respond to you on this issue.
First, let me address your Terabit Ethernet comment. The reasoning behind my statement can be seen on Page 22 of the HSSG Tutorial (http://www.ieee802.org/3/hssg/public/nov07/HSSG_Tutorial_1107.zip). If we forecast the growth of network aggregation, the need for Terabit is easy to see. Similar conclusions can be drawn from the data at www.Top500.org, which covers the HPC industry.
Two of the participants responded to your question regarding PCS, but their answers are specific to the specification in its current state, so I will try to provide some background. First, the HSSG efforts are very intense, and there was a desire to come up with an architecture that would be scalable to future speeds. Next, as the group was addressing the architecture, it was not clear what the solution would be, today or in the future, so an architecture that could work with different optical / electrical physical specifications and interfaces was desirable. For example, for 100G there was discussion of 10x10, 5x20, 4x25, 2x50, and 1x100 specifications. 20 is the least common multiple of all of these lane counts; every proposed width divides it evenly. Therefore, 20 PCS lanes were chosen, and the PMA sublayer then performs the muxing / demuxing of PCS lanes needed to reach the right interface width at both ends of the PMA sublayer.
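To make the lane arithmetic concrete, here is a small sketch (mine, not from the draft specification) of the multi-lane idea: 66-bit blocks are distributed round-robin across 20 PCS lanes, and the PMA can then combine PCS lanes onto any physical interface whose lane count divides 20 evenly. The function names are hypothetical, and the mux step is simplified to grouping whole blocks rather than the bit-level interleaving a real PMA performs.

```python
# Sketch of 20-lane PCS distribution and PMA muxing (simplified).
# Assumption: blocks are grouped per physical lane; a real PMA
# bit-multiplexes, but the divisibility argument is the same.

PCS_LANES = 20

def distribute(blocks, n_lanes=PCS_LANES):
    """Round-robin 66b blocks across the PCS lanes."""
    lanes = [[] for _ in range(n_lanes)]
    for i, blk in enumerate(blocks):
        lanes[i % n_lanes].append(blk)
    return lanes

def pma_mux(pcs_lanes, n_phys):
    """Combine PCS lanes onto n_phys physical lanes.
    Only works when n_phys evenly divides the PCS lane count,
    which is why 20 (divisible by 10, 5, 4, 2, 1) was attractive."""
    assert len(pcs_lanes) % n_phys == 0
    per_phys = len(pcs_lanes) // n_phys
    return [sum((pcs_lanes[p * per_phys + k] for k in range(per_phys)), [])
            for p in range(n_phys)]

blocks = list(range(40))            # 40 dummy 66b blocks
pcs = distribute(blocks)            # 20 PCS lanes, 2 blocks each
for width in (10, 5, 4, 2, 1):      # every proposed interface width
    phys = pma_mux(pcs, width)      # all divide 20, so all succeed
```

The receiving PMA can undo the muxing and recover the 20 PCS lanes regardless of which physical width was used in between, which is the scalability property the group was after.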
It is very important to note that we now have an architecture that can be scaled to future solutions of different widths and rates.
Hope this helps out. Feel free to contact me in the future.
From: Fritz, Karl
Greetings Task Force Members,
My name is Karl Fritz and I work for the Special Purpose
Processor Development Group at the Mayo Clinic in
However, I do have a question related to the way the signals are grouped for the 100GbE protocol. It appears that the data will be striped across 20 lanes, then muxed down to 10 lanes (for the CAUI interface), and then possibly down to 4 and then 1 (according to the Ethernet Alliance November 2008 Technology Overview document). Given that 10 Gbps serdes exist, why does the standard start at 20 lanes (5 Gbps each)? If this standard is expected to be scalable, it appears things could get rather messy if we want to scale this to Terabit speeds (effectively multiplying all this by 10).
Could somebody enlighten me a bit, or point me to an explanation of why 20 lanes was selected as the standard? Why not go directly to 10 lanes at 10 Gbps?