See below, please.
I know how 10GEPON works. I even got an award for it. (And what luxury - a whole piece of 8.5x11 paper, with color printing, even!)
GK: Sorry, that’s all IEEE could afford. If you expect a more lavish award, you may consider participating in ITU-T J
Let’s not forget that everything in the standard is only a reference implementation. What you actually build is up to you, as long as it behaves as it should. This is particularly true for EPON.
So, for the concern on the “whole data path running at 100G” – come on… that’s not how it would be done.
Now, what’s wrong with saying that there is a 100G MAC that then uses only a subset of its capability? In our view, that’s how you’d build any of these sub-rated things.
GK: True, with Tx throttling above the MAC and Rx gap filling below the MAC, a 100G MAC can support any effective data rate, even 1 Mb/s if you wish. But given that the PAR scope defines a limit on what can be standardized, how would we justify having such a reduced mode in the draft without it being mentioned in the scope? This was discussed on the call, which you missed. We can either take a risk at PAR approval time by adding an intermediate speed, or we can take a risk at Sponsor Ballot time (a much higher risk, in my opinion) if we put such an intermediate rate in the draft without the PAR scope “allowing” it.
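The throttling-above-the-MAC idea can be illustrated with a simple token-bucket shaper. This is only a conceptual sketch, not from any IEEE draft; the class and parameter names are hypothetical. The point is that the MAC itself always runs at its full rate, and the shaper simply holds frames back so the average handed to the MAC never exceeds the PHY's effective capacity (the MAC fills the remainder with idles).

```python
# Illustrative sketch (hypothetical names, not from any standard): a
# token-bucket shaper placed above a MAC, throttling transmission so a
# full-rate MAC never overruns a slower effective PHY capacity.

class TokenBucketShaper:
    def __init__(self, rate_bps, bucket_bits):
        self.rate_bps = rate_bps      # target effective data rate
        self.capacity = bucket_bits   # burst tolerance
        self.tokens = bucket_bits
        self.last_time = 0.0

    def _refill(self, now):
        # Accumulate tokens at the effective rate, capped at bucket size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last_time) * self.rate_bps)
        self.last_time = now

    def try_send(self, frame_bits, now):
        """Return True if the frame may be handed to the MAC now."""
        self._refill(now)
        if self.tokens >= frame_bits:
            self.tokens -= frame_bits
            return True
        return False  # hold the frame; the MAC emits idles meanwhile
```

With this structure, the same full-rate MAC can be shaped to any effective rate at all, which is exactly why the data path itself does not need a dedicated intermediate-speed MAC.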
I think there is a false argument here, that somehow people are going to build a dedicated 50G EPON. Do you think the industry is going to do such incremental advances? I don’t think so, particularly not for the silicon.
GK: Who knows what the future holds? To go from 25G to 100G is a big step. I don’t exclude that intermediate MAC speeds will be standardized and intermediate silicon speeds will be built.
Requiring the silicon to support 100G from day one will do nothing except push silicon availability out by several years.
Another possible approach is to use the existing 40G MAC with a 2x25G PHY. This would nicely take care of all FEC and other EPON overheads, allowing true 40G data rates. But even with this approach, operation at reduced PHY capacity should be supported by the PAR scope, I think.
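A quick back-of-the-envelope check of why 2x25G can carry a true 40G MAC rate. The FEC for an Nx25G EPON had not been chosen at the time, so the RS(255, 223) code (the one used in 10G-EPON) is used below purely as a stand-in; the function name is hypothetical.

```python
# Illustrative arithmetic only: aggregate payload rate of bonded channels
# after the FEC code-rate penalty k/n. RS(255, 223) is used here as a
# stand-in; the actual FEC choice for Nx25G EPON was still open.

def effective_rate_gbps(channels, line_rate_gbps, fec_n, fec_k):
    """Aggregate effective data rate after FEC overhead."""
    return channels * line_rate_gbps * fec_k / fec_n

rate = effective_rate_gbps(channels=2, line_rate_gbps=25.0, fec_n=255, fec_k=223)
# roughly 43.7 Gb/s aggregate -- headroom above a 40 Gb/s MAC data rate
```

Under this stand-in assumption, two 25G channels leave a few Gb/s of margin above 40 Gb/s even after FEC, which is what makes the 40G-MAC-over-2x25G-PHY pairing attractive.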
We expect that the 25G single channel technology will be developed, and that a way to combine them into higher speeds will be developed.
The 100G MAC is a bucket large enough to accommodate four 25G channels, and four is a proven modularity.
And so, anybody who is interested in building any of these sub-rated systems would end up using 100Gb/s switch-ports.
I can say that some of this may depend on exactly how the channel combining is done. Some people may be thinking of “hard bonding” – that is, the designer decides how many channels will be tied together, and once they are combined, any ONU that wants to use those channels must listen to all of them. This is how 100G Ethernet works, and that is fine for point to point. The channels become an indivisible block, and that (maybe) motivates the thinking about a 50G MAC.

I think this is a very poor design choice for PON. The whole point of PON is to allow bandwidth flexibility. It is much better to have a scheme of “soft bonding” – that is, the operator decides which ONUs work on which set of channels, and that assignment can change over time. In addition, it is likely that there will be 1-, 2-, and 4-channel ONUs, all sharing the available channels in an efficient manner.

If one “hard bonds” 2 channels together, then the single-channel ONUs can’t listen to those channels – you’d have to dedicate a single channel to take care of them. And then there is no room for the 4-channel bonded group. We would quickly paint ourselves into a corner.
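The soft-bonding idea can be sketched with a toy model. Everything here is hypothetical (channel count, function names, the least-loaded policy): the OLT assigns each ONU whatever subset of channels it can receive, and 1-, 2-, and 4-channel ONUs coexist on the same four channels with no pre-bonded blocks.

```python
# Toy model of "soft bonding" on a 4-channel PON (all names and the
# assignment policy are hypothetical). The OLT picks, per ONU, the
# least-loaded channels that ONU is capable of receiving; nothing is
# permanently fused into an indivisible block.

CHANNELS = {0, 1, 2, 3}

def soft_assign(onu_capability, load_per_channel):
    """Assign an ONU that can listen to `onu_capability` channels
    (1, 2, or 4) to the currently least-loaded channels."""
    ranked = sorted(CHANNELS, key=lambda ch: load_per_channel[ch])
    chosen = set(ranked[:onu_capability])
    for ch in chosen:
        load_per_channel[ch] += 1
    return chosen

load = {ch: 0 for ch in CHANNELS}
a = soft_assign(1, load)   # single-channel ONU gets one channel
b = soft_assign(2, load)   # 2-channel ONU gets two lightly loaded ones
c = soft_assign(4, load)   # 4-channel ONU still spans all four channels
```

With hard bonding, by contrast, the pairing would be fixed at design time: the single-channel ONU could not use a bonded pair at all, forcing a dedicated channel for it and leaving no room for the 4-channel group.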
GK: Agreed, a protocol should be flexible enough not to waste network resources. Even more, sometimes the OLT may decide to communicate with an ONU on fewer wavelengths than the ONU can support.
So, back to the scope: If the scope is a maximum, then 100G is a fine maximum. If we do the right thing regarding how the 100G MAC gets reduced, all the desired use cases will be supported. And that is what really matters.
10G-EPON uses exactly the same MAC as is used in 10G point-to-point. This MAC runs at exactly 10 Gb/s, no matter what the actual data throughput is. The PHY adds overhead due to FEC, so the effective throughput is lower. The data is throttled above the MAC to make sure it does not overrun the PHY capacity. But the MAC spits out bits (idles if there is no data) at exactly 10 Gb/s. In other words, the data path in 10G-EPON runs at 10 Gb/s.
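The 10G-EPON numbers make this concrete. The MAC always emits exactly 10 Gb/s of data plus idles; the RS(255, 223) stream FEC used in 10G-EPON consumes part of that, so the payload must be throttled down to the effective rate:

```python
# Concrete numbers for the 10G-EPON case described above.

MAC_RATE_GBPS = 10.0      # the MAC always emits exactly 10 Gb/s (data + idles)
FEC_N, FEC_K = 255, 223   # RS(255, 223) stream FEC used in 10G-EPON

effective_gbps = MAC_RATE_GBPS * FEC_K / FEC_N
# ~8.74 Gb/s of payload capacity; the rest of the 10 Gb/s carries FEC
# parity, which is exactly why frames are throttled above the MAC
```

So the data path runs at a fixed 10 Gb/s even though subscribers never see more than about 8.7 Gb/s of it.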
Since the PAR scope is the upper bound and what the project is allowed to cover (as David clarified on the call), the existing scope limits us to only 25G and 100G MACs and nothing else.
If we don’t add a 50G MAC, then we will have the MAC and the entire data path running at either 25 Gb/s or 100 Gb/s, no matter how many wavelengths are activated. This is what we are trying to avoid. We need to allow another generation between 25G and 100G.