
RE: Long distance links




Hi Paul,

My comments are in context below.

> Dan:
>
> You are looking at the problem from the point of view of a MAC
> talking to a PHY attached to an optical network. I agree that about
> 1 packet time for transmit and 1-2 for receive is required for this
> configuration. The issue is what happens when a MAC delivers a
> 10 Gigabit data stream to a 10 Gigabit PHY which is attached to a
> link which eventually reaches an optical network.

So if I understand this model, we have a 10 Gig link (campus backbone)
that is connected to a campus switch. That switch wants to connect to
a WAN and thus will have a WAN port that operates at 9.58464 Gb/s by
using its XGMII "hold" signal.
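
For a quick sanity check, here is a rough sketch in Python (I am
assuming "hold" simply idles a fraction of XGMII transfer cycles;
9.58464 Gb/s is the OC-192c payload rate):

    # Fraction of XGMII cycles "hold" must idle to throttle a
    # full-rate 10 Gb/s MAC down to the SONET payload rate.
    mac_rate = 10.000e9        # b/s, full-rate MAC/PLS
    wan_rate = 9.58464e9       # b/s, OC-192c payload rate
    hold_fraction = 1 - wan_rate / mac_rate
    print(f"hold asserted ~{hold_fraction:.2%} of the time")   # ~4.15%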

I agree that THAT switch will require buffering to handle the rate
mismatch, but it would need that buffering anyway whenever it has
more than ten Gigabit links feeding it. This is OK.

The WAN PHY will still only need its 1 packet worth of buffering to
deal with the rate conversion at the XGMII though.
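
To put a number on that (a rough sketch, assuming "hold" can only
pause the MAC between frames, and using the 1518-byte maximum
Ethernet frame):

    # Worst-case backlog in the WAN PHY's elasticity buffer: one
    # maximum-size frame streams in at 10 Gb/s while the PHY drains
    # at the SONET payload rate; "hold" then pauses the MAC until
    # the backlog clears.
    frame_bytes = 1518
    mac_rate = 10.000e9        # b/s
    wan_rate = 9.58464e9       # b/s
    backlog = frame_bytes * (1 - wan_rate / mac_rate)
    print(f"backlog after one frame: ~{backlog:.0f} bytes")    # ~63

So one packet of buffering is actually conservative for the XGMII
rate conversion alone.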

> At the optical network a transform is performed to continue the
> link. The device at the juncture must flow control the link to slow
> the data rate to 9.58464 Gb/s. To do so it must have enough buffer
> to cover 2 * delay * bandwidth. Since this buffer depends on delay,
> it has a direct dependence on the link length (making the solution
> scale poorly). A reasonable design point for wide area equipment is
> about 25 msec (typical routers use 200 msec, which is where the
> design point needs to be for general applications).
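
To illustrate how the buffer scales with link length (a sketch using
your ~400 Mbps rate mismatch and assuming light travels at roughly
2e8 m/s in fiber):

    # buffer = 2 * delay * bandwidth grows linearly with the link.
    fiber_speed = 2.0e8        # m/s, approx. speed of light in fiber
    rate_mismatch = 400e6      # b/s, ~(10.000 - 9.58464) Gb/s, rounded
    for km in (100, 1000, 5000):
        delay = km * 1e3 / fiber_speed         # one-way delay, seconds
        buf = 2 * delay * rate_mismatch / 8    # bytes
        print(f"{km:5d} km -> {delay * 1e3:4.1f} ms -> {buf / 1e6:4.2f} Mbytes")
    # 5000 km gives ~25 ms, matching the wide-area design point above.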

Now that the data from that switch has been restricted to the
9.58464 Gb/s rate in the campus switch (via the hold signal) and has
been placed onto the SONET network, aren't we done?
 
> Doing the math on 25 msec gives us 2 * 25 msec * 400 Mbps / 8 bits
> per byte = 2.5 Mbytes. If we recalculate the buffer requirement
> based on the standard router design point of 200 msec we get
> 2 * 200 msec * 400 Mbps / 8 bits per byte = 20 Mbytes.
> 
> Paul
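
For what it's worth, the arithmetic checks out:

    # Verifying the quoted 2 * delay * bandwidth figures.
    rate_mismatch = 400e6              # b/s, as above
    for delay in (25e-3, 200e-3):      # design-point delays, seconds
        buf = 2 * delay * rate_mismatch / 8
        print(f"{delay * 1e3:3.0f} msec -> {buf / 1e6:4.1f} Mbytes")
    # -> 25 msec -> 2.5 Mbytes, 200 msec -> 20.0 Mbytes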

If the campus switch doesn't want to provide 20 Mbytes of buffering
for its 10 Gig WAN port, it can flow-control its downstream links,
just as a 10 Gig campus port would do if it had 24 Gigabit links
feeding it too much data.
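
The arithmetic behind that example (a simplified sketch; it assumes
all 24 feeders offer full line rate at once):

    # With 24 Gigabit links offering load into a 10 Gb/s port, flow
    # control must idle the feeders for the excess fraction of time.
    uplink = 10.0e9            # b/s
    offered = 24 * 1.0e9       # b/s, aggregate from 24 GigE links
    idle_fraction = 1 - uplink / offered
    print(f"feeders paused ~{idle_fraction:.0%} of the time")  # ~58%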

If your point is that by limiting the MAC/PLS rate to 9.58464 Gb/s we
would be able to send data from a building backbone to the campus
switch and then on to the WAN without a rate mismatch, I accept your
point. However, those of us who expect to be aggregating many Gigabit
links see the inherent buffering and flow-control issues as a more
immediate concern, and by dealing with those issues the
10.0000 -> 9.58464 Gb/s rate mismatch is resolved anyway.

Thanks for continuing to illuminate your concerns. I hope my response
helps our understanding converge.

Regards,

Dan