
Re: [802.3_400G] 802.3 400Gb/s Ethernet Study Group Logic ad hoc



Chris,

I agree that this is something we need to consider; it was also brought up by Steve Trowbridge during the last meeting.

Thanks, Mark

> 
> Another consideration when exploring PCS architectures is OTN
> compatibility.
> 
> As pointed out in multiple contributions, two primary initial applications are in
> the central office: server to server and server to transport links. The optics
> used for these applications are multi-rate; if we use 40G and 100G as a guide,
> 100G links, for example, support both 100GbE and OTU-4. For 400G we should
> expect that optics will support both 400GbE and OTU-5 rates. This leads to the
> desire to maximize commonality of optical specifications and functionality
> between 400GbE and OTU-5. If the 400GbE PCS requires block muxing, then the
> OTU-5 functionality may be different if OTU-5 retains bit muxing. Further, the
> performance may be different, as the FEC is applied differently to the bit
> streams. This could increase verification and test time.
> 
> Having the PCS definition result in different optics functionality between the
> 400GbE and OTU-5 modes would not make 400GbE OTN-incompatible, but it
> would complicate OTN support.
> 
> Chris
> 
> -----Original Message-----
> From: Mark Gustlin [mailto:mark.gustlin@xxxxxxxxxx]
> Sent: Friday, November 15, 2013 12:33 PM
> To: Chris Cole
> Cc: STDS-802-3-400G@xxxxxxxxxxxxxxxxx
> Subject: RE: [802.3_400G] 802.3 400Gb/s Ethernet Study Group Logic ad hoc
> 
> Chris,
> 
> > One of the topics discussed during this week's 400G SG meeting was the
> > trade-off between PCS and PMA complexity.
> >
> > We faced the same trade-off during 100G SG, and it may be beneficial
> > to go back and look at some of the reasoning that went into the
> > definition of 100G PCS.
> >
> > In particular, Mark Nowell and Gary Nicholl presented several lessons
> > learned from 10G, one of which is to keep the PMD simple.
> >
> >
> > http://www.ieee802.org/3/hssg/public/sep06/nowell_01_0906.pdf#page=19
> 
> In the slides you point to, Gary and Mark talk about the complexities of
> putting a complete PCS sublayer into the module.
> What was presented in my slides this week was the possibility of doing block
> muxing in the module, and only when you have to change widths and want to
> preserve the error detection capability of the RS-FEC in the face of burst
> errors (that is, if the medium you will run across has a high burst error
> probability). Block muxing is a much different level of complexity for the
> PMD than a complete PCS, but of course bit-level muxing is simpler still and
> would be the goal as long as it meets the needs of the PMDs.
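
As a rough illustration of why the muxing granularity matters here, the minimal
sketch below (in Python) muxes four logical lanes onto one physical lane either
bit by bit or in 10-bit RS symbol units, sweeps a single burst across the wire,
and reports the worst-case number of distinct RS-FEC symbols corrupted. The
lane count, symbol size, and burst length are assumed for illustration only and
are not taken from any proposal in this thread.

# Minimal sketch, assumed parameters: how a burst on one muxed lane maps back
# onto 10-bit RS-FEC symbols for bit-level vs. symbol-level (block) muxing.

SYMBOL_BITS = 10     # RS symbol size, e.g. RS(528,514) over GF(2^10)
LANES = 4            # logical lanes muxed onto one physical lane (assumed)
BURST = 40           # burst length in wire bits (assumed)
WIRE_BITS = 4000     # length of the modelled wire stream

def wire_map(granularity):
    """For each wire bit, return (lane, bit index within that lane) when the
    logical lanes are round-robin multiplexed 'granularity' bits at a time."""
    mapping = []
    unit = 0
    while len(mapping) < WIRE_BITS:
        for lane in range(LANES):
            for b in range(granularity):
                mapping.append((lane, unit * granularity + b))
        unit += 1
    return mapping[:WIRE_BITS]

def worst_case_symbols(granularity):
    """Worst case, over burst alignment, of distinct RS symbols hit."""
    mapping = wire_map(granularity)
    worst = 0
    for start in range(WIRE_BITS - BURST):
        hit = {(lane, bit // SYMBOL_BITS) for lane, bit in mapping[start:start + BURST]}
        worst = max(worst, len(hit))
    return worst

print("bit-level muxing,    worst case:", worst_case_symbols(1), "symbols corrupted")
print("symbol-level muxing, worst case:", worst_case_symbols(SYMBOL_BITS), "symbols corrupted")

With these assumed numbers the script reports a worst case of 8 corrupted
symbols for bit-level muxing versus 5 for symbol-aligned muxing, because bit
interleaving splits the burst so that each lane's fragment can straddle a
symbol boundary. That is the burst-error concern behind block muxing when a
width change in the module is unavoidable.
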
> 
> >
> > 100G also offers a similar lesson, where even a simple 10:4 bit
> > gearbox created many complications in the physical layer. The current
> > generation of 4x25G I/O modules is significantly simpler to develop
> > and test.
> 
> Any time you are not changing lane widths or encoding, you can expect a very
> simple module, with just retimers.
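
For context on the 10:4 gearbox point above, here is a minimal sketch (the lane
counts and bit tagging are assumed for illustration; this is not the actual
802.3 gearbox definition) showing how a naive bit-level 10:4 gearbox scatters
each input lane's bits across the output lanes, which hints at why even a
"simple" width conversion needs more than retiming.

# Minimal sketch, assumed parameters: a naive 10:4 bit-level gearbox.

IN_LANES = 10    # e.g. a 10-lane electrical interface (assumed)
OUT_LANES = 4    # e.g. a 4-lane interface (assumed)

def gearbox_10_to_4(in_lanes, bits_per_lane):
    """Serialize 10 input lanes round-robin, then deal the combined stream out
    to 4 output lanes round-robin (pure bit multiplexing, no FIFO model)."""
    serial = [in_lanes[l][b] for b in range(bits_per_lane) for l in range(IN_LANES)]
    out = [[] for _ in range(OUT_LANES)]
    for i, bit in enumerate(serial):
        out[i % OUT_LANES].append(bit)
    return out

# Tag each bit with its source lane to show how the lanes get interleaved.
bits_per_lane = 8
inputs = [[f"L{l}b{b}" for b in range(bits_per_lane)] for l in range(IN_LANES)]
for n, lane in enumerate(gearbox_10_to_4(inputs, bits_per_lane)):
    print(f"out lane {n}:", lane[:10])

Each output lane ends up carrying bits from five different input lanes in a
repeating pattern, so the receive side has to realign and reorder before the
lanes are useful again; a module that keeps lane widths and encoding unchanged
avoids all of that.
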
> 
> >
> > In 400G, we should look for ways to keep PMDs simple and avoid
> > requiring awareness of higher layers in the physical layer.
> 
> I completely agree that we want PMDs to be as simple as feasible.
> Once we make progress on choosing technology for our new PMD objectives,
> we can explore error models of the PMDs and then explore PCS architectures
> that are appropriate for those PMDs.
> 
> Thanks, Mark
>