After listening to some of the discussion and looking at the possible addition of another reach objective and its implications, I can understand your concern about a train wreck.
In my humble opinion, there is too much focus on the reach objective distances as being engraved in stone. In 802.3ae, there was a 2 km SMF reach objective that was eventually satisfied by the 10GBASE-L PMD. In 802.3aq, the reach objective was reduced from the original 300 m because the task force felt there was risk in actually being able to achieve that reach in worst case conditions. While many IEEE 802.3 standards specify a target reach, many implementations exceed those reaches.
In this case, the 100 m reach for OM3 is a great starting point. It helps the task force focus on developing a low-cost short-reach solution. Once the data is pulled together in the draft and as the task force works through the task force ballot phase, it may be discovered that a reach greater than 100 m could be written into the specification with no impact. Or, maybe there is a decision to reference EDC or FEC as an optional means to permit longer reach or higher performance connections.
There are options, but the task force needs to get the PMDs specified so there is a starting point to some of these other discussions.
From: PETRILLA,JOHN [mailto:john.petrilla@xxxxxxxxxxxxx]
Sent: Wednesday, August 20, 2008 10:23 PM
Subject: [802.3BA] 802.3ba XR ad hoc next step concern
I’m concerned that the proposal to create a new objective is leading us into a train wreck, because I believe it’s very unlikely that 75% of the project members will find it acceptable. This will be frustrating for several reasons; one of them, that almost all of the modules expected to be developed will easily support the desired extended link reaches, is discussed below.
I don’t want to wait until our next phone conference to share this in the hope that we can make use of that time to prepare a proposal for the September interim. I’ll try to capture my thoughts in text in order to save some time and avoid distributing a presentation file to such a large distribution. I may have a presentation by the phone conference.
Optical modules are expected to have either an XLAUI/CLAUI interface or a PMD service interface, PPI. Both are considered.
A previous presentation, petrilla_xr_02_0708, http://ieee802.org/3/ba/public/AdHoc/MMF-Reach/petrilla_xr_02_0708.pdf, has shown that modules with XLAUI/CLAUI interfaces will support 150 m of OM3 and 250 m of OM4. These modules will be selected by equipment implementers primarily because of the commonality of their form factor with other variants, especially LR, and/or because of the flexibility the XLAUI/CLAUI interface offers the PCB designer. Here the extended fiber reach comes at no additional cost or effort. This is also true of PPI modules where FEC is available in the host.
Everyone is welcome to express their forecast of the timing and adoption of XLAUI/CLAUI MMF modules vs baseline MMF modules.
To evaluate the baseline proposal for its extended reach capability, a set of Monte Carlo (MC) simulations was run over assumed component parameter distributions. The Tx distribution characteristics follow. All distributions are Gaussian.
Min OMA, mean = -2.50 dBm, std dev = 0.50 dBm (Baseline value = -3.0 dBm)
Tx tr tf, mean = 33.0 ps, std dev = 2.0 ps (Example value = 35 ps)
RIN(oma), mean = -132.0 dB/Hz, std dev = 2.0 dB (Baseline value = -128 to -132 dB/Hz, Example value = -130 dB/Hz)
Tx Contributed DJ, mean = 11.0 ps, std dev = 2.0 ps (Example value = 13.0 ps)
Spectral Width, mean = 0.45 nm, std dev = 0.05 nm (Baseline value = 0.65 nm).
Baseline values are from Pepeljugoski_01_0508; where no baseline value is available, Example values from petrilla_02_0508 are used.
All of the above, except spectral width, can be included in an aggregate Tx test, permitting less restrictive individual parameter distributions than if each parameter is tested individually. In this example, distributions are chosen such that only the mean and one std dev of the distribution satisfy the target value in the link budget spreadsheet. If the individual parameter were tested directly against this value, the yield loss would be approximately 16%.
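The ~16% figure is the one-sided tail of a Gaussian at one standard deviation. A quick check (a sketch, not part of the original spreadsheet analysis):

```python
from math import erf, sqrt

def tail_fraction(sigmas: float) -> float:
    """One-sided Gaussian tail probability beyond `sigmas` standard deviations."""
    return 0.5 * (1.0 - erf(sigmas / sqrt(2.0)))

# If a parameter's mean sits exactly one std dev inside the spreadsheet
# target, testing each unit individually against that target rejects the
# one-sigma tail of the distribution.
loss = tail_fraction(1.0)
print(f"Per-parameter yield loss: {loss:.1%}")  # ~15.9%, i.e. roughly 16%
```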
The Rx distribution characteristics follow. Again, all distributions are Gaussian.
Unstressed sensitivity, mean = -12.0 dBm, std dev = 0.75 dB (Baseline value = -11.3 dBm)
Rx Contributed DJ, mean = 11.0 ps, std dev = 2.0 ps (Baseline value = 13.0 ps)
Rx bandwidth, mean = 10000 MHz, std dev = 850 MHz (Baseline value = 7500 MHz).
For the Tx MC, only 2% of the combinations would fail the aggregate Tx test.
For the 150 m OM3 MC, only 2% of the combinations would have negative link margin and fail to support the 150 m reach. This is less than the percentage of modules that would have been rejected by the Tx aggregate test and a stressed Rx sensitivity test, and very few would actually be seen in the field.
For the 250 m OM4 MC, only 8% of the combinations would have negative link margin. Here approximately half of these would be due to transmitters and receivers that should have been caught at their respective tests.
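A toy Monte Carlo illustrates how such per-lane failure rates arise; this is only a sketch, since the real spreadsheet combines all the Tx/Rx parameters into ISI, jitter, noise and modal penalties. Here everything except OMA and unstressed sensitivity is folded into a single hypothetical 7.65 dB aggregate channel penalty, a placeholder chosen only so the toy model lands near a ~2% failure rate:

```python
import random

random.seed(1)
N = 100_000

# Hypothetical aggregate channel penalty (dB) standing in for the full
# spreadsheet model (ISI, MPN, RIN, jitter, connector loss). Placeholder
# value, not from the baseline.
CHANNEL_PENALTY_DB = 7.65

fails = 0
for _ in range(N):
    tx_oma = random.gauss(-2.50, 0.50)    # min OMA distribution, dBm
    rx_sens = random.gauss(-12.00, 0.75)  # unstressed sensitivity, dBm
    margin = tx_oma - rx_sens - CHANNEL_PENALTY_DB
    fails += margin < 0

print(f"Fraction of combinations with negative margin: {fails / N:.1%}")  # ~2%
```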
The above analysis is for a single lane. In the case of multiple-lane modules, the module yield loss will increase depending on how tightly the lanes are correlated. Where module yield loss is high, module vendors will adjust the individual parameter distributions such that more than one std dev separates the mean from the spreadsheet target value. This will reduce the proportion of modules failing the extended link criteria. Also, any correlation between lanes means the distribution of shipped modules has fewer marginal lanes than if the lanes were independent.
So while there’s a finite probability that a PPI interface module doesn’t support the desired extended reaches, the odds are overwhelming that it does.
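The module-level odds follow directly from the 2% per-lane figure under the worst-case assumption of fully independent lanes (a sketch; any lane correlation only improves the result):

```python
lane_pass = 0.98   # per-lane probability of supporting the extended reach (1 - 2%)
lanes = 4          # 40G MMF baseline uses four parallel lanes

# Worst case: all lanes must pass and are fully independent.
module_pass = lane_pass ** lanes
print(f"Module-level pass rate: {module_pass:.1%}")  # ~92.2%
```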
Then, with all of one form factor and more than 92% of the other form factor supporting the desired extended reach, the question becomes: what is a rational and acceptable means to take advantage of what is already available? A new objective would enable this but, as stated above, getting a new objective for this is at best questionable. Further, it’s expected that one would test to see that modules meet the criteria for the new objective, set up part numbers, create inventory, etc., and that adds cost. Finally, users, installers, etc. are intelligent and will soon find this out; they will no longer accept any cost premium for modules developed to support extended reach - they will just use a standard module. There’s little incentive to invest in an extended reach module development.
I’ll make a modest proposal: Do nothing – just hook up the link. Do nothing to the standard and when 150 m of OM3 or 250 m of OM4 is desired – just plug in the fiber. The odds are overwhelming that it will work. If something is really needed in the standard, then generate a white paper and/or an informative annex describing the statistical solution.
Even with all the survey results provided to this project, it’s not easy to grasp what to expect for a distribution of optical fiber lengths within a data center and what is gained by extending the reach of the MMF baseline beyond 100 m. Here’s another attempt.
In flatman_01_0108, page 11, there’s a projection for 2012. There, the expected 40G adoption percentage is 30% for Client-to-Access (C-A) links, 30% for Access-to-Distribution (A-D) links, and 20% for Distribution-to-Core (D-C) links. While Flatman does not explicitly provide a relative breakout of link quantities between the C-A, A-D and D-C segments, perhaps one can use his sample sizes as an estimate: 250000 for C-A, 16000 for A-D and 3000 for D-C. Combining these with the above adoption percentages yields an expected link ratio of C-A:A-D:D-C = 750:48:6.
Perhaps Alan Flatman can comment on how outrageous this appears.
This makes D-C, responsible for about 1% of all 40G links, look like a niche. Arguments over covering the last 10% or 20% or 50% of D-C reaches do not seem like time well spent. Even A-D combined with D-C (A-D+D-C) provides only 7% of the total.
Similarly for 100G: the 2012 projected adoption percentages for C-A:A-D:D-C are 10:40:60, and the resulting link ratio is 250:64:18. Here D-C is responsible for 5% of the links and, combined with A-D, generates 25% of the links. Now the last 20% of A-D+D-C represents 5% of the market.
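The arithmetic behind both link ratios can be reproduced directly from the quoted sample sizes and adoption percentages (a sketch reproducing the figures in the two paragraphs above):

```python
# Sample sizes from flatman_01_0108, used as a proxy for relative link counts.
samples = {"C-A": 250_000, "A-D": 16_000, "D-C": 3_000}

# Projected 2012 adoption fractions per segment.
adoption_40g = {"C-A": 0.30, "A-D": 0.30, "D-C": 0.20}
adoption_100g = {"C-A": 0.10, "A-D": 0.40, "D-C": 0.60}

def link_counts(adoption):
    """Weight each segment's sample size by its projected adoption."""
    return {seg: samples[seg] * adoption[seg] for seg in samples}

c40 = link_counts(adoption_40g)    # 75000 : 4800 : 600  ->  750:48:6
c100 = link_counts(adoption_100g)  # 25000 : 6400 : 1800 ->  250:64:18

for name, c in (("40G", c40), ("100G", c100)):
    total = sum(c.values())
    print(name,
          f"D-C share = {c['D-C'] / total:.1%},",
          f"A-D+D-C share = {(c['A-D'] + c['D-C']) / total:.1%}")
```

For 40G this gives a D-C share of about 0.7% (roughly the 1% quoted) and an A-D+D-C share of about 7%; for 100G, about 5% and 25% respectively.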
Since the computer architecture trend points toward shorter link lengths, and since multiple other solutions can support longer lengths (activating FEC, active cross-connects, point-to-point connections, SM fiber, which telecom-centric users prefer anyway, etc.), there is no apparent valid business case for allocating resources to develop an extended reach solution.