
RE: Long distance links

Paul,

While we may not be coming closer to agreement (or maybe we are?), I
believe we are at least coming closer to understanding.

More in context below...

> >So if I understand this model, we have a 10Gig link (campus backbone)
> >that is connected to a campus switch. That switch wants to connect to
> >a WAN and thus will have a WAN port that operates at 9.58464 Gbps by
> >using its XGMII "hold" signal.
> 
> Provided people build networks to this configuration, it works just
> fine.
> The IEEE has not yet decided to build 2 PHYs. I believe that the WAN
> PHY being talked about does not have a distinct identity from the LAN
> PHY.

This is one point at which we clearly have different perspectives. I
believe that there will be sufficient distinction in cost between a
DWDM laser for the WAN and a (WWDM or serial) solution that is limited
to a few km for the campus. Otherwise, why do we need an XGMII?
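
As a back-of-envelope check on the "hold" mechanism quoted above: to
throttle a 10.000 Gbps MAC down to a 9.58464 Gbps payload rate, the
hold signal has to pause roughly one XGMII transfer cycle in every 24.
A minimal sketch in Python, taking the 9.58464 Gbps figure from the
quote (the cycle accounting is my own illustration, not anything the
committee has specified):

  # Fraction of XGMII transfer cycles that must be "held" to throttle
  # a 10.000 Gbps MAC down to a 9.58464 Gbps WAN payload rate.
  MAC_RATE_GBPS = 10.00000   # full-rate 10 GigE MAC
  WAN_RATE_GBPS = 9.58464    # WAN payload rate quoted above

  hold_fraction = 1.0 - WAN_RATE_GBPS / MAC_RATE_GBPS
  print(f"fraction of cycles held: {hold_fraction:.4%}")
  # -> about 4.15%, i.e. roughly one hold in every 24 cycles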

> Because I don't have a good criterion for distinct identity, I've
> found no reason to believe the committee should build 2 PHYs. My
> assumption is that any PHY developed may run on SMF and may be
> deployed in the wide area. This is what is currently happening with
> 1 GigE.

Actually, there are LX, SX, CX, and 1000BASE-T, not to mention a few
proprietary links for long-haul 1550 nm. There is no reason not to
believe that 10G will follow the paradigm that allows multiple PHYs
for multiple cost/performance domains.
 
> >
> >I agree that THAT switch will require buffering to handle the rate
> >mismatch, but that would be required in the event that it has more
> >than 10 Gigabits worth of links feeding it anyway. This is OK.
> 
> In the configuration I described, it is the transponder/repeater
> located at the junction between the IEEE segment and the DWDM segment
> that requires buffering to rate match. At this juncture there are
> only two ports. One side is the IEEE 10.00 Gbps and the other side is
> the 9.9584640 Gbps DWDM cloud. The buffer size covers only the rate
> mismatch, not the normal overload seen in packet switches. The
> photonic network appears as a new segment in the link between
> switches, not as a separate link.

This looks like a specific implementation restriction. I doubt that
I would implement it that way. 
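
To put a number on the mismatch: a rough sketch, again in Python, of
how fast such a rate-match buffer fills while frames arrive back to
back. The two rates are the ones quoted above; the continuous-burst
model and the 100 us burst length are my own simplifying assumptions.

  # Fill rate of a rate-match buffer between a 10.00 Gbps IEEE segment
  # and a slower DWDM segment, per the configuration described above.
  IN_GBPS  = 10.0000000   # IEEE side
  OUT_GBPS = 9.9584640    # DWDM side, figure as quoted above

  surplus_gbps = IN_GBPS - OUT_GBPS       # net fill rate while bursting
  fill_bytes_per_us = surplus_gbps * 1e9 / 8 / 1e6

  BURST_US = 100.0                        # hypothetical burst length
  need_kb = fill_bytes_per_us * BURST_US / 1e3
  print(f"surplus: {surplus_gbps:.4f} Gbps")
  print(f"fills ~{fill_bytes_per_us:.1f} bytes/us; a {BURST_US:.0f} us "
        f"burst needs ~{need_kb:.2f} kB")

The buffer itself stays small, which is consistent with the point
above that it covers only the rate mismatch and not the normal
overload seen in packet switches.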

Regards,

Dan Dove