Joel, my comment was limited to discussing the balancing of channels
only. I'm just trying to point out something that hasn't been clearly
stated: there is a constant need for breakout cables in both copper and
fiber, especially when technologies are first introduced. It's not just
what's next, it's also how you get there. Data centers are rarely gutted
and rebuilt; anything we do will need to be integrated with what's
already there. If they have 10G ports, moving to the next generation
would be facilitated by implementing multiple 10Gbps channels.
2222 Wellington Ct
Lisle, IL 60532
Please look at past data regarding optics
feasibility for the 10km solution space. The optics vendors are suggesting
4by25+FEC as a good fit. I'm pretty uncomfortable pushing that to
McGrath, Jim wrote:
Joel, maybe the 100Gbps target should be 120Gbps. Ribbon fiber cables
and the connectors are x12. Assuming a 10Gbps-per-channel
implementation, this works out very nicely. Many copper cables and
connectors are also already established at x12. The 120Gbps interface
could then be broken out into 3 40Gbps interfaces or 12 10Gbps
interfaces without losing bandwidth.
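The x12 arithmetic above can be sanity-checked with a quick sketch (an illustration only; the 12-lane width and 10Gbps-per-channel rate come from the message, the breakout lane counts are the obvious divisions):

```python
# Sketch of the x12 ribbon-fiber arithmetic described above (illustrative only).
LANES = 12        # ribbon fiber / connector width (x12)
RATE = 10         # Gbps per channel, per the assumed 10Gbps-per-channel lanes

total = LANES * RATE          # aggregate interface rate: 120 Gbps
n40 = LANES // 4              # 40Gbps breakouts, 4 lanes each -> 3
n10 = LANES // 1              # 10Gbps breakouts, 1 lane each -> 12

print(total, n40, n10)        # 120 3 12
# Neither breakout strands any bandwidth:
assert n40 * 40 == total and n10 * 10 == total
```

The same check fails for a 100Gbps target on x12 media, which is the point of the message: 100 does not divide evenly across 12 lanes of 10Gbps.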
If the front end is defined as 100Gbps, expecting the back end to be
40Gbps makes no sense from a system-implementation standpoint. Maybe if
it were 50Gbps, the TM might be easier to implement. But either way,
you would throw away a lot of bandwidth going from a 100 into a 40.
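A back-of-the-envelope check makes the stranded-bandwidth point concrete (the 100/40/50Gbps figures come from the paragraph above; the link-counting logic is my own illustration):

```python
import math

def backend_links(front_gbps, back_gbps):
    """Back-end links needed to carry a front-panel stream, and the
    back-end capacity stranded in the process (illustrative model)."""
    links = math.ceil(front_gbps / back_gbps)
    return links, links * back_gbps - front_gbps

print(backend_links(100, 40))  # (3, 20): 20 Gbps of back-end capacity wasted
print(backend_links(100, 50))  # (2, 0): a 50Gbps back end wastes nothing
```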
If the front end moves toward 4by25+FEC, which it appears to be doing
based on the work so far, then from a system perspective you would use
the same data rate on the back-end side, with perhaps different
signaling. Further, spending another three years on a 40Gbps backplane
standard for such a small gain doesn't seem right. It was pretty
painful the last time around.
You would end up defining 1by40Gbps, 4by10Gbps, 16by3.125Gbps. I just
don't see the ROI.
No one has yet proven that 4by10Gbps LAG doesn't fit the server market
described by Shimon. And actually, I still don't see the market he is
talking about. Regardless of whether you use LAG on the front end or in
an ATCA chassis with multiple LAG connections ... a solution exists
today that works well.
Last, someone still has to design an aggregation box to connect all the
40Gs together and pipe them out as 100Gs. I "know the art", and it is
very costly to do this. But that isn't the problem for me ... we can
all burn the money to supply a market we've seen no data for, or even a
description of ... the real problem for the systems vendor is that we
finish the box in 2010 and have the exact same performance problem we
have today jamming 1G and 10G links into a 10G
I propose that rather than do 40G, we put that effort into working with
802.1 to resolve the perceived problems with LAG. Then, when 100Gbps is
complete, we will have an N-LAG ... or New LAG ... that allows the end
user to create ANY size pipe required for 1G, 10G, and 100G core or
aggregation.
Ali Ghiasi wrote:
Marcus and others,
I'd like to present another point of view in support of a 40Gig MAC.
We currently have the following options on the backplane side:
- KX-4 (XAUI)
- KR (1 lane) 10Gig
The natural next step for backplane Ethernet will be to operate the
KX-4 lanes at 10.3125Gbps.
Regardless of what decision we make in the HSSG, a 40Gig MAC will
Assuming we will define the 40Gig MAC sooner or later, allowing 40Gig
for the front panel becomes even more compelling, especially when
100Gig is not expected to serve applications in the near term. If we
define the 40Gig MAC in the HSSG
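The "run the KX-4 lanes at the KR rate" step above is plain lane arithmetic; a small sketch (the 8b/10b and 64b/66b coding overheads are my assumptions based on the XAUI and KR PHYs, not stated in the message):

```python
# KX-4 today: 4 lanes x 3.125 Gbaud, 8b/10b coded -> 10 Gbps of payload
kx4 = 4 * 3.125 * 8 / 10
# Same 4 lanes at the KR signaling rate of 10.3125 Gbaud, 64b/66b coded
kx4_at_kr = 4 * 10.3125 * 64 / 66
print(kx4, kx4_at_kr)  # 10.0 40.0
```

In other words, reusing the existing 4-lane backplane channel at the KR lane rate lands naturally on a 40Gig MAC rate, which is the point being made.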
> I think it was common sense at the last meeting that the rate that
> service providers and IXPs are looking for is 100 GbE. The discussion
> about 40 GbE is for the *server market*, the classical LAN application
> of Ethernet. In the network space you already have OTU3 and OC-768c
> PoS, so there is no need for another 40G Ethernet interface.
> My personal opinion regarding "broad market potential" is that there
> will be more networks or network types that require 100 GbE; however,
> in terms of volume I could imagine that a 40GbE interface for servers
> will actually produce more volume, even though it is only one type of
> network.
Toshinori Ishii wrote:
I'm another IXP network engineer.
>> 2007/4/5, Henk
>>> Back to 40GE: scaling link aggregation using 10GE for another 3
>>> years will be very hard. The use of 40GE might be of help here if
>>> it would allow for standardized products to become available, say,
>>> second half of 2008.
>>> Is there a way to expedite the standardization process (and
>>> subsequent product development) of a 40GE standard? Within or
>>> outside of the IEEE?
>>> If the answer to the above is "no" then I would say let's not spend
>>> time on anything other than 100GE, so no delay is introduced in the
>>> development of this standard, and get it finished as soon as
>>> possible.
We need 100GE ASAP.