
Re: [802.3_100GNGOPTX] Emerging new reach space



Paul
They were talking about square city blocks, which are much larger than 150 x 150 m.

John

Sent from my iPhone

On Nov 19, 2011, at 12:27 PM, "Kolesar, Paul" <PKOLESAR@xxxxxxxxxxxxx> wrote:

> John,
> Interesting input.  But let's put it in the context of actual building size that would require 2km reach, to get a feeling of what this implies.  I will look at two cases: a low-wide building and a high-rise building.
>
> Let's assume a low-wide data center hotel with a footprint of 500m x 500m.  That's 250,000 square meters, an area big enough to hold about 40 football playing fields or, by my estimates, about six entire football stadiums.  In other words, it's really huge.  To go from corner to corner following x-y directions (not line of sight) would consume 500 + 500 = 1000 m, leaving another 1000 m of reach to do the same thing on the second floor, which if done would bring the two ends of the channel on top of each other, separated by a single floor.  But that pathology is an illogical use of channel routing when it would be much more efficient to just drop down one floor, requiring only very short reach capability.  In such large-footprint buildings it makes no sense to restrict the passage between floors to a single location, which is what seems to be required to support 2km needs even in mammoth-footprint buildings.
>
> Now let's consider a tall building with a footprint of 150m x 150m = 22,500 square meters.  Such a building would consume an entire US city block (one tenth of a mile on a side). The maximum reach required to span across a floor is 150 + 150 = 300 m.  Doubling that on a second floor, as in the previous example, consumes 600 m, leaving 1400 m for the vertical dimension.  1400 m would support a building of 350 stories, three and a half times taller than the Empire State Building.  This is clearly another situation that, in my opinion, is rather ridiculous.
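>
> As a sanity check, the corner-to-corner arithmetic for both buildings can
> be reproduced in a few lines (a rough sketch in Python, assuming Manhattan
> x-y routing and an assumed figure of ~4 m per story):
>
>     # Sketch of the worst-case reach arithmetic above, assuming Manhattan
>     # (x-y) routing and ~4 m per building story (an assumed figure).
>     def manhattan_reach(width_m, depth_m, floors_spanned=2):
>         """Corner-to-corner x-y run, repeated on each floor spanned."""
>         return (width_m + depth_m) * floors_spanned
>
>     budget = 2000                          # m, the 2km reach in question
>     low_wide = manhattan_reach(500, 500)   # 2000 m: the whole budget
>     high_rise = manhattan_reach(150, 150)  # 600 m across two floors
>     vertical = budget - high_rise          # 1400 m left for the riser
>     print(low_wide, high_rise, vertical, vertical / 4)  # ~350 stories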
>
> Paul
>
>
> -----Original Message-----
> From: John D'Ambrosia [mailto:jdambrosia@xxxxxxxxxxxxxxx]
> Sent: Saturday, November 19, 2011 10:09 AM
> To: Kolesar, Paul
> Cc: STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx
> Subject: Re: [802.3_100GNGOPTX] Emerging new reach space
>
> Paul
> I actually recall input from HSSG days where such reaches were discussed for data center hotels, large multifloor buildings in cities.
>
> Don't remember the file off the top of my head, but I do remember the data point.
>
> John
>
> Sent from my iPhone
>
> On Nov 19, 2011, at 10:56 AM, "Kolesar, Paul" <PKOLESAR@xxxxxxxxxxxxx> wrote:
>
>> Jeff,
>> I cannot imagine a data center that has 2,000m runs within a building.  Such data centers must actually be multiple buildings, perhaps in a campus or office park, similar to the central offices that have driven 2km into existing specs.  Once we step outside the confines of a single building we are on a rather slippery slope towards nebulous boundary conditions. Where do campus channels end and cross-town channels begin?  I am not saying we should ignore these inputs, but for this exercise we really need to judge the merits of proposals based on channel-coverage-vs-cost trade-off space rather than a particular reach at the outskirts of consideration.  So please quantify the frequency distribution of channel lengths that you gather to permit analysis of the type we need.
>>
>> Paul
>>
>>
>> -----Original Message-----
>> From: Jeffery Maki [mailto:jmaki@xxxxxxxxxxx]
>> Sent: Friday, November 18, 2011 4:24 PM
>> To: STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx
>> Subject: Re: [802.3_100GNGOPTX] Emerging new reach space
>>
>> All,
>>
>> I am gathering data that is defensible.  I'm seeing that some datacenters need reach extension to 2,000 meters.  I have not gotten acceptance for using parallel SMF for even 500 meters.  Parallel MMF is similarly not acceptable.  I see acceptance that 100GBASE-LR4 can be brought to low cost in CFP4/QSFP-type implementations.  Our effort toward a cost-optimized standard for -nR4 needs to be made with the realization that we are bookended by 100GBASE-SR4 QSFP/CFP4 and 100GBASE-LR4 QSFP/CFP4. Of course, if -nR4 can be lower cost than 100GBASE-SR4, then that would be very important to capture.
>>
>> An open question:  Is there no cost to remove from the receiver?  I see presentations focused on the transmitter but none on the receiver.
>>
>> Jeff
>>
>> ....................................
>> Jeffery J. Maki, Ph.D.
>> Distinguished Engineer, Optical
>> Juniper Networks
>>
>>
>> -----Original Message-----
>> From: Jack Jewell [mailto:jack@xxxxxxxxxxxxxx]
>> Sent: Friday, November 18, 2011 12:46 PM
>> To: STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx
>> Subject: Re: [802.3_100GNGOPTX] Emerging new reach space
>>
>> I threw out 2km to err on the high side, to invite discussion - glad to
>> see it, and I'd prefer to target something shorter!
>> While my impression is that IDCs don't want parallel fiber, many of you
>> have been far more in touch with them than I have. If the best duplex SMF
>> solution we come up with comes in at similar cost/power to LR4, I don't know
>> if/how they will respond. I'm looking at duplex SMF solutions much harder
>> than I was at the outset of this discussion.
>> Cheers, Jack
>>
>>
>> On 11/18/11 1:25 PM, "Chris Cole" <chris.cole@xxxxxxxxxxx> wrote:
>>
>>> Ali,
>>>
>>> Very well put. I had actually started proposing 600m as the max reach
>>> objective earlier this year for the reasons you outline, for example at
>>> the EA TEF.
>>>
>>> However, in a number of conversations with end users, I was persuaded
>>> that 800m or even 1000m would totally future-proof a standard that will
>>> be with us for the next decade.
>>>
>>> That's why my proposal for the 100GE-nR4 objective is "minimum reach of
>>> 1km".
>>>
>>> Chris
>>>
>>> -----Original Message-----
>>> From: Ali Ghiasi [mailto:aghiasi@xxxxxxxxxxxx]
>>> Sent: Friday, November 18, 2011 12:21 PM
>>> To: Chris Cole; Jack Jewell
>>> Cc: STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx
>>> Subject: Re: [802.3_100GNGOPTX] Emerging new reach space
>>>
>>> Chris/Jack
>>>
>>> Looking at the link Chris sent on the 10x10 MSA paper, the findings from
>>> Bikash and Vijay of Google are consistent with the reach I showed in my
>>> study of the major IDCs in the US.  The largest data center in the 10x10
>>> MSA paper was 400,000 sq-ft; in my study I estimated the largest IDC in
>>> the US is now 1,000,000 sq-ft.  In my study I only assumed square
>>> buildings, whereas the 10x10 MSA also considered rectangular buildings.
>>> I reported that the longest reach based on a square building would be
>>> about 600 m, but if we assume a rectangular building then the max reach
>>> would be 700 m.  Since we don't actually know the implementation of
>>> these IDCs, it is very likely that some of these large buildings are
>>> partitioned, with the longest links being shorter.
>>> http://www.ieee802.org/3/100GNGOPTX/public/sept11/ghiasi_01_a_0911_NG100GOPTX.pdf
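>>>
>>> For reference, a rough sketch of that geometry (the rectangular aspect
>>> ratio below is an illustrative assumption, not the exact model from my
>>> study):
>>>
>>>     import math
>>>     area_sqm = 1_000_000 * 0.0929        # largest US IDC, sq-ft to sq-m
>>>     side = math.sqrt(area_sqm)           # square building: ~305 m a side
>>>     print(2 * side)                      # x-y corner-to-corner: ~610 m
>>>     aspect = 3                           # illustrative ~3:1 rectangle
>>>     x = math.sqrt(area_sqm / aspect)
>>>     print(x + aspect * x)                # ~704 m, consistent with ~700 m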
>>>
>>> We should focus on the PMD that will deliver the cost, size, and power
>>> with a min reach of 600 m, which is more than sufficient for IDC
>>> applications.  If it happens that we can do 2 km with no penalty, then
>>> great, but let's not set our objective at 2 km.
>>>
>>> I also agree with Jack's statement that duplex SMF is highly desired,
>>> which is consistent with the inputs I have received.  Obviously, if we
>>> can't come up with any duplex PMD that is better than the current
>>> 100GBASE-LR4, then parallel SMF could still fill a gap.
>>>
>>> Thanks,
>>> Ali
>>>
>>>
>>> On Nov 18, 2011, at 11:12 AM, Chris Cole wrote:
>>>
>>>> Jack,
>>>>
>>>> Thank you for continuing to lead the discussion. I am hoping it
>>>> encourages others to jump in with their perspectives, otherwise you will
>>>> be stuck architecting the new standard by yourself with the rest of us
>>>> sitting back and observing.
>>>>
>>>> Your email is also a good prompt to start discussing the specific reach
>>>> objective for 100GE-nR4. Since you mention 2000m reach multiple times in
>>>> your email, can you give a single example of a 2000m Ethernet IDC link?
>>>>
>>>> I am aware of many 150m to 600m links, with 800m mentioned as long-term
>>>> future-proofing, so rounding up to 1000m is already conservative. I
>>>> understand why several IDC operators have asked for 2km; it was the next
>>>> closest existing standard reach above their 500m/600m need; see for
>>>> example page 10 of Donn Lee's March 2007 presentation to the HSSG
>>>> (http://www.ieee802.org/3/hssg/public/mar07/lee_01_0307.pdf). It is very
>>>> clear what the need is, and why 2km is being brought up.
>>>>
>>>> Another example of IDC needs is in a 10x10G MSA white paper
>>>> (http://www.10x10msa.org/documents/10X10%20White%20Paper%20final.pdf),
>>>> where Bikash Koley and Vijay Vusirikala of Google show that their
>>>> largest data center requirements are met by a <500m reach interface.
>>>>
>>>> In investigating the technology for 100GE-nR4, we may find, as Pete
>>>> Anslow has pointed out in NG 100G SG, that the incremental cost for
>>>> going from 1000m to 2000m is negligible. We may then choose to increase
>>>> the standardized reach. However, to conclude today that this is in fact
>>>> where the technology will end up is premature. We should state the reach
>>>> objective to reflect the need, not our speculation about the
>>>> capabilities of yet to be defined technology.
>>>>
>>>> Thank you
>>>>
>>>> Chris
>>>>
>>>> -----Original Message-----
>>>> From: Jack Jewell [mailto:jack@xxxxxxxxxxxxxx]
>>>> Sent: Friday, November 18, 2011 9:38 AM
>>>> To: STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx
>>>> Subject: Re: [802.3_100GNGOPTX] Emerging new reach space
>>>>
>>>> Hello All,
>>>> Thanks for all the contributions to this discussion. Here's a synopsis
>>>> and
>>>> my current take on where it's heading (all in the context of 150-2000m
>>>> links).
>>>> Starting Point: Need for significantly-lower cost/power links over
>>>> 150-2000m reaches has been expressed for several years. Last week in
>>>> Atlanta, four technical presentations on the subject all dealt with
>>>> parallel SMF media. Straw polls of "like to hear more about ___"
>>>> received 41, 48, 55, and 48 votes, the 41 being for one that
>>>> additionally involved new fiber.
>>>> The poll "to encourage more on…duplex SMF PMDs" received 35 votes.
>>>> Another
>>>> straw poll gave strong support for the most-aggressive low-cost target.
>>>> Impressions from discussion and Atlanta meeting: Systems users
>>>> (especially
>>>> the largest ones) are strongly resistant to adopting parallel SMF. (Not
>>>> addressing the reasons for that position, just stating an observation.) LR4
>>>> platform can be extended over duplex SMF via WDM by at least one more
>>>> "factor-4" generation, and probably another (DWDM for latter); PAM and
>>>> line-rate increase may extend duplex-SMF's lifetime yet another
>>>> generation.
>>>> My Current Take: Given a 2-or-3-generation (factor-4; beyond 100GNGOPTX)
>>>> longevity of duplex SMF, I'm finding it harder to make a compelling case
>>>> for systems vendors to adopt parallel SMF for 100GNGOPTX. My current
>>>> expectation is that duplex SMF will be the interconnection medium. My
>>>> ongoing efforts will have more duplex-SMF content. I still think
>>>> parallel
>>>> SMF should deliver lowest cost/power for 100GNGOPTX, and provide an
>>>> additional 1-2 generations of longevity; just don't see system vendors
>>>> ready to adopt it now.
>>>> BUT: What about the Starting Point (above), and the need for
>>>> significantly-lower cost/power?? If a compelling case is to be made for
>>>> an
>>>> alternative to duplex SMF, it will require a very crisp and convincing
>>>> argument for significantly-lower cost/power than LR4 ("fair" comparison
>>>> such as mentioned earlier), or other duplex SMF approaches. Perhaps a
>>>> modified version of LR4 can be developed with lower-cost/power lasers
>>>> that
>>>> doesn't reach 10km. If, for whatever reasons, systems vendors insist on
>>>> duplex SMF, but truly need significantly-lower cost/power, it may
>>>> require
>>>> some compromise, e.g. "wavelength-shifted" SMF, or something else. Would
>>>> Si Photonics really satisfy the needs with no compromise? Without saying
>>>> they won't, it seems people aren't convinced, because we're having these
>>>> discussions.
>>>> Cheers, Jack
>>>>
>>>>
>>>> On 11/17/11 10:23 AM, "Arlon Martin" <amartin@xxxxxxxxxx> wrote:
>>>>
>>>>> Hello Jack,
>>>>> To your first question, yes, we are very comfortable with LAN WDM
>>>>> spacing. That never was a challenge for the technology. We have chosen
>>>>> to
>>>>> perfect reflector gratings because of the combination of small size and
>>>>> great performance. I am not sure exactly what you are asking in your
>>>>> second question. There may be a slightly lower loss to AWGs than
>>>>> reflector gratings. That difference has decreased as we have gained
>>>>> more
>>>>> experience with gratings. For many applications like LR and mR, the
>>>>> much,
>>>>> much smaller size (cost is related to size) of reflector gratings makes
>>>>> them the best choice.
>>>>>
>>>>> Thanks, Arlon
>>>>>
>>>>> -----Original Message-----
>>>>> From: Jack Jewell [mailto:jack@xxxxxxxxxxxxxx]
>>>>> Sent: Thursday, November 17, 2011 6:42 AM
>>>>> To: STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx
>>>>> Subject: Re: [802.3_100GNGOPTX] Emerging new reach space
>>>>>
>>>>> Hi Arlon,
>>>>> Thanks very much for this. You are right; I was referring to thin film
>>>>> filters. My gut still tells me that greater tolerances should accompany
>>>>> wider wavelength spacing. So I'm guessing that your manufacturing
>>>>> tolerances are already "comfortable" at the LAN WDM spacing, and thus
>>>>> the
>>>>> difference is negligible to you. Is that a fair statement? Same could
>>>>> be
>>>>> true for thin film filters. At any rate, LAN WDM appears to have one
>>>>> factor-4 generation advantage over CWDM in this discussion, and it's
>>>>> good
>>>>> to hear of its cost effectiveness. Which brings up the next question.
>>>>> Your
>>>>> data on slide 15 of Chris's presentation referenced in his message
>>>>> shows
>>>>> lower insertion loss for your arrayed waveguide grating (AWG) DWDM filter than
>>>>> for
>>>>> the grating filters. Another factor-of-4 data throughput may be gained
>>>>> in
>>>>> the future via DWDM.
>>>>> Cheers, Jack
>>>>>
>>>>> On 11/16/11 10:51 PM, "Arlon Martin" <amartin@xxxxxxxxxx> wrote:
>>>>>
>>>>>> Hello Jack,
>>>>>> As a maker of both LAN WDM and CWDM filters, I would like to comment
>>>>>> on
>>>>>> the filter discussion. WDM filters can be thin film filters (to which
>>>>>> you
>>>>>> may be referring) but more likely, they are PIC-based AWGs or
>>>>>> PIC-based
>>>>>> reflector gratings. In our experience at Kotura with reflector
>>>>>> gratings
>>>>>> made in silicon, both CWDM and LAN WDM filters work equally well and
>>>>>> are
>>>>>> roughly the same size. It is practical to put 40 or more wavelengths
>>>>>> on a
>>>>>> single chip. We have done so for other applications. There is plenty
>>>>>> of
>>>>>> headroom for more channels when the need arises for 400 Gb/s or 1 Tb/s.
>>>>>> There may be other reasons to select CWDM over LAN WDM, but, in our
>>>>>> experience, filters do not favor one choice over the other.
>>>>>>
>>>>>> Arlon Martin, Kotura
>>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Jack Jewell [mailto:jack@xxxxxxxxxxxxxx]
>>>>>> Sent: Wednesday, November 16, 2011 9:09 PM
>>>>>> To: STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx
>>>>>> Subject: Re: [802.3_100GNGOPTX] Emerging new reach space
>>>>>>
>>>>>> Thanks Chris for your additions.
>>>>>> 1. "CWDM leads to simpler optical filters versus "closer" WDM (LAN
>>>>>> WDM)" - For a given passband transmission and suppression of
>>>>>> adjacent-wavelength signals (assuming use of the same available
>>>>>> optical filter materials), a wider wavelength spacing can be
>>>>>> accomplished with a wider thickness tolerance and usually with fewer
>>>>>> layers. The wider thickness tolerance is basic physics, with which I
>>>>>> won't argue. In this context, I consider "wider thickness tolerance"
>>>>>> as "simpler."
>>>>>> 2. "CWDM leads to lower cost versus "closer" WDM because cooling is
>>>>>> eliminated" - I stated no such thing, though it's a common perception.
>>>>>> Ali
>>>>>> Ghiasi suggested CWDM (implied by basing implementation on
>>>>>> 40GBASE-LR4)
>>>>>> might be lower cost, without citing the cooling issue. Cost is a far
>>>>>> more
>>>>>> complex issue than filter simplicity. You made excellent points
>>>>>> regarding
>>>>>> costs in your presentation cited for point 1, and I cited LAN WDM
>>>>>> (100GBASE-LR4) advantages as "better-suited-for-integration, and
>>>>>> "clipping
>>>>>> off" the highest-temp performance requirement." We must recognize
>>>>>> that at
>>>>>> 1km vs 10km, chirp issues are considerably reduced.
>>>>>> 3. "CWDM is lower power than "closer" WDM power" - I stated no such
>>>>>> thing,
>>>>>> though it's a common perception. I did say "More wavelengths per fiber
>>>>>> means more power per channel," which is an entirely different
>>>>>> statement,
>>>>>> and it's darned hard to argue against the physics of it (assuming same
>>>>>> technological toolkit).
>>>>>> All I stated in the previous message were the advantages of CWDM
>>>>>> (adopted
>>>>>> by 40GBASE-LR4) and LAN WDM (adopted by 100GBASE-LR4), without
>>>>>> favoring
>>>>>> one over the other for 100GbE (remember we're talking ~1km, not 10km).
>>>>>> But
>>>>>> my forward-looking (crude) analysis of 400GbE and 1.6TbE clearly
>>>>>> favors
>>>>>> LAN WDM over CWDM - e.g. "CWDM does not look attractive on duplex SMF
>>>>>> beyond 100GbE," whereas the wavelength range for 400GbE LAN 16WDM over
>>>>>> duplex SMF "is realistic." Quasi-technically speaking Chris, we're on
>>>>>> the
>>>>>> same wavelength (pun obviously intended) :-)
>>>>>> Paul Kolesar stated the gist succinctly: "that parallel fiber
>>>>>> technologies
>>>>>> appear inevitable at some point in the evolution of single-mode
>>>>>> solutions.
>>>>>> So the question becomes a matter of when it is best to embrace them."
>>>>>> [I
>>>>>> would replace "inevitable" with "desirable."] From a module
>>>>>> standpoint,
>>>>>> it's easier, cheaper, lower-power to produce a x-parallel solution
>>>>>> than a
>>>>>> x-WDM one (x is number of channels), and it's no surprise that last
>>>>>> week's
>>>>>> technical presentations (by 3 module vendors and 1 independent) had a
>>>>>> parallel-SMF commonality for 100GNGOPTX. There is a valid argument for
>>>>>> initial parallel SMF implementation, to be later supplanted by WDM,
>>>>>> particularly LAN WDM. With no fiber re-installations.
>>>>>> To very recent messages, we can choose which pain to feel first,
>>>>>> parallel
>>>>>> fiber or PAM, but by 10TbE we're likely to get both - in your face or
>>>>>> innuendo :-)
>>>>>> Jack
>>>>>>
>>>>>>
>>>>>>
>>>>>> On 11/16/11 6:53 PM, "Chris Cole" <chris.cole@xxxxxxxxxxx> wrote:
>>>>>>
>>>>>>> Hello Jack,
>>>>>>>
>>>>>>> You really are on a roll; lots of insightful perspectives.
>>>>>>>
>>>>>>> Let me clarify a few items so that they don't detract from your
>>>>>>> broader ideas.
>>>>>>>
>>>>>>> 1. CWDM leads to simpler optical filters versus "closer" WDM (LAN
>>>>>>> WDM)
>>>>>>>
>>>>>>> This claim may have had some validity in the past, however it has not
>>>>>>> been the case for many years. This claim received a lot of attention
>>>>>>> in
>>>>>>> 802.3ba TF during the 100GE-LR4 grid debate. An example presentation
>>>>>>> is
>>>>>>> http://www.ieee802.org/3/ba/public/mar08/cole_02_0308.pdf, where on
>>>>>>> pages
>>>>>>> 13, 14, 15, and 16 multiple companies showed there is no practical
>>>>>>> implementation difference between 20nm and 4.5nm spaced filters.
>>>>>>> Further,
>>>>>>> this has now been confirmed in practice with 4.5nm spaced LAN WDM
>>>>>>> 100GE-LR4 filters in TFF and Si technologies manufactured with no
>>>>>>> significant cost difference versus 20nm spaced CWDM 40GE-LR4 filters.
>>>>>>>
>>>>>>> If there is specific technical information to the contrary, it would
>>>>>>> be
>>>>>>> helpful to see it as a presentation in NG 100G SG.
>>>>>>>
>>>>>>> 2. CWDM leads to lower cost versus "closer" WDM because cooling is
>>>>>>> eliminated
>>>>>>>
>>>>>>> This claim has some validity at lower rates like 1G or 2.5G, but is
>>>>>>> not
>>>>>>> the case at 100G. This has been discussed at multiple 802.3 optical
>>>>>>> track
>>>>>>> meetings, including as recently as the last NG 100G SG meeting. We
>>>>>>> again
>>>>>>> agreed that the cost of cooling is a fraction of a percent of the
>>>>>>> total
>>>>>>> module cost. Even for a 40GE-LR4 module, the cost of cooling, if it
>>>>>>> had
>>>>>>> to be added for some reason, would be insignificant. Page 4 of the
>>>>>>> above
>>>>>>> cole_02_0308 presentation discusses why that is.
>>>>>>>
>>>>>>> This claim to some extent defocuses from half a dozen other cost
>>>>>>> contributors which are far more significant. Those should be at the
>>>>>>> top
>>>>>>> of the list instead of cooling. Further, if cooling happens to enable a
>>>>>>> technology which greatly reduces a significant cost contributor, then it
>>>>>>> becomes a big plus instead of an insignificant minus.
>>>>>>>
>>>>>>> If there is specific technical information to the contrary, a NG
>>>>>>> 100G SG
>>>>>>> presentation would be a great way to introduce it.
>>>>>>>
>>>>>>> 3. CWDM is lower power than "closer" WDM.
>>>>>>>
>>>>>>> The real difference between CWDM and LAN WDM is that un-cooled is
>>>>>>> lower
>>>>>>> power. However how much lower strongly depends on the specific
>>>>>>> transmit
>>>>>>> optics and operating conditions. In a 100G module context it can be
>>>>>>> 10% to
>>>>>>> 30%. However, for some situations it could be a lot more savings, and
>>>>>>> for
>>>>>>> others even less. No general quantification of the total power
>>>>>>> savings
>>>>>>> can be made; it has to be done on a case by case basis.
>>>>>>>
>>>>>>> Chris
>>>>>>>
>>>>>>> -----Original Message-----
>>>>>>> From: Jack Jewell [mailto:jack@xxxxxxxxxxxxxx]
>>>>>>> Sent: Wednesday, November 16, 2011 3:20 PM
>>>>>>> To: STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx
>>>>>>> Subject: Re: [802.3_100GNGOPTX] Emerging new reach space
>>>>>>>
>>>>>>> Great inputs! :-)
>>>>>>> Yes, 40GBASE-LR4 is the first alternative to 100GBASE-LR4 that comes
>>>>>>> to mind for duplex SMF. Which raises the question: why are they
>>>>>>> different? I can see advantages to either (40G CWDM vs 100G closer
>>>>>>> WDM): uncooled, simple optical filters vs better-suited-for-integration,
>>>>>>> and "clipping off" the highest-temp performance requirement.
>>>>>>> It's constructive to look forward, and try to avoid unpleasant
>>>>>>> surprises
>>>>>>> of "future-proof" assumptions (think 802.3z and FDDI fiber - glad I
>>>>>>> wasn't
>>>>>>> there!). No one likes "forklift upgrades" except maybe forklift
>>>>>>> operators,
>>>>>>> who aren't well-represented here. Data centers are being built, so
>>>>>>> here's
>>>>>>> a chance to avoid short-sighted mistakes. How do we want 100GbE,
>>>>>>> 400GbE
>>>>>>> and 1.6TbE to look (rough guesses at the next generations)? Here are
>>>>>>> 3
>>>>>>> basic likely scenarios, assuming (hate to, but must) 25G electrical
>>>>>>> interface and no electrical mux/demux. Considering duplex SMF,
>>>>>>> 4+4parallel
>>>>>>> SMF, and 16+16parallel SMF:
>>>>>>> Generation   duplex SMF      4+4parallel SMF   16+16parallel SMF
>>>>>>> 100GbE        4WDM            no WDM            dark fibers
>>>>>>> 400GbE       16WDM             4WDM             no WDM
>>>>>>> 1.6TbE       64WDM            16WDM              4WDM
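>>>>>>>
>>>>>>> The WDM counts in the table follow from lane arithmetic (a Python
>>>>>>> sketch, assuming 25G lanes per the assumption above):
>>>>>>>
>>>>>>>     # Lanes at 25 Gb/s each; wavelengths per fiber = lanes / pairs.
>>>>>>>     for rate in (100, 400, 1600):    # GbE generations
>>>>>>>         lanes = rate // 25
>>>>>>>         for pairs in (1, 4, 16):     # duplex, 4+4, 16+16 parallel
>>>>>>>             if lanes < pairs:
>>>>>>>                 tag = "dark fibers"
>>>>>>>             elif lanes == pairs:
>>>>>>>                 tag = "no WDM"
>>>>>>>             else:
>>>>>>>                 tag = f"{lanes // pairs}WDM"
>>>>>>>             print(f"{rate}GbE, {pairs} pair(s): {tag}")
>>>>>>>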
>>>>>>> The above is independent of distances in the 300+ meter range we're
>>>>>>> considering. Yes, there are possibilities of PAM encoding and
>>>>>>> electrical
>>>>>>> interface speed increases. Historically we've avoided the former, and
>>>>>>> the
>>>>>>> latter is expected to bring a factor of 2, at most, for these
>>>>>>> generations.
>>>>>>> Together, they might bring us forward 1 factor-of-4 generation
>>>>>>> further.
>>>>>>> For 40GbE or 100GbE, 20nm-spaced CWDM is nice for 4WDM (4
>>>>>>> wavelengths).
>>>>>>> At
>>>>>>> 400GbE, 16WDM CWDM is a 1270-1590nm stretch, with 16 laser products
>>>>>>> (ouch!). 20nm spacing is out of the question for 64WDM (1.6TbE). CWDM
>>>>>>> does
>>>>>>> not look attractive on duplex SMF beyond 100GbE.
>>>>>>> OTOH, a 100GBASE-LR4-based evolution on duplex SMF, with ~4.5nm
>>>>>>> spacing,
>>>>>>> is present at 100GbE. For 400GbE, it could include the same 4
>>>>>>> wavelengths,
>>>>>>> plus 4-below and 12-above - a 1277.5-1349.5nm wavelength span, which
>>>>>>> is
>>>>>>> realistic. The number of "laser products" is fuzzy, as the same
>>>>>>> epitaxial
>>>>>>> structure and process (except grating spacing) may be used for maybe
>>>>>>> a
>>>>>>> few, but nowhere near all, of the wavelengths. For 1.6TbE 64WDM,
>>>>>>> LR4's
>>>>>>> 4.5nm spacing implies a 288nm wavelength span and a plethora of
>>>>>>> "laser
>>>>>>> products." Unattractive.
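>>>>>>>
>>>>>>> Those spans are just channel count times spacing (a sketch mirroring
>>>>>>> the arithmetic used above, with the grid anchors as quoted):
>>>>>>>
>>>>>>>     # Span ~ (number of channels) x (channel spacing), as used above.
>>>>>>>     def span_nm(n_channels, spacing_nm):
>>>>>>>         return n_channels * spacing_nm
>>>>>>>
>>>>>>>     print(span_nm(16, 20.0))  # CWDM 16WDM: 320nm, the 1270-1590nm stretch
>>>>>>>     print(span_nm(16, 4.5))   # LAN WDM 16WDM: 72nm, ~1277.5-1349.5nm
>>>>>>>     print(span_nm(64, 4.5))   # LAN WDM 64WDM: the 288nm span above
>>>>>>>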
>>>>>>> On a "4X / generational speed increase," 4+4parallel SMF gains one
>>>>>>> generation over duplex SMF and 16+16parallel SMF gains 2 generations
>>>>>>> over
>>>>>>> duplex SMF. Other implementations, e.g. channel rate increase and/or
>>>>>>> encoding, may provide another generation or two of "future
>>>>>>> accommodation."
>>>>>>> The larger the number of wavelengths that are multiplexed, the higher
>>>>>>> the loss that must be absorbed in the laser-to-detector (TPlaser to
>>>>>>> TPdetector) link budget. More wavelengths per fiber means more power
>>>>>>> per
>>>>>>> channel, i.e. more power/Gbps and larger faceplate area. While duplex
>>>>>>> SMF
>>>>>>> looks attractive to systems implementations, it entails
>>>>>>> significant(!!)
>>>>>>> cost implications to laser/transceiver vendors, who may not be able
>>>>>>> to
>>>>>>> bear "cost assumptions," and additional power requirements, which may
>>>>>>> not
>>>>>>> be tolerable for systems vendors.
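>>>>>>>
>>>>>>> To illustrate the budget effect (purely illustrative: the per-stage
>>>>>>> loss figure and the tree mux arrangement are my assumptions, not
>>>>>>> measured data):
>>>>>>>
>>>>>>>     # If each mux/demux stage costs ~0.8 dB (assumed) and stage count
>>>>>>>     # grows ~log2(N) in a tree, extra budget grows with wavelength
>>>>>>>     # count N, and with it the required laser power.
>>>>>>>     import math
>>>>>>>     for n in (4, 16, 64):
>>>>>>>         stages = math.ceil(math.log2(n))
>>>>>>>         extra_db = 2 * stages * 0.8   # mux at TX plus demux at RX
>>>>>>>         print(f"{n} wavelengths: ~{extra_db:.1f} dB extra budget")
>>>>>>>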
>>>>>>> I don't claim to "have the answer," rather attempt to frame the
>>>>>>> question
>>>>>>> pointedly "How do we want to architect the next few generations of
>>>>>>> Structured Data Center interconnects?" Insistence on duplex SMF works
>>>>>>> for
>>>>>>> this-and-maybe-next-generation, then may hit a wall. Installation of
>>>>>>> parallel SMF provides a 1-or-2-generation-gap of "proofing," with
>>>>>>> higher
>>>>>>> initial cost, but with lower power throughout, and pushing back the
>>>>>>> need
>>>>>>> for those abominable "forklift upgrades."
>>>>>>> Jack
>>>>>>>
>>>>>>>
>>>>>>> On 11/16/11 1:00 PM, "Kolesar, Paul" <PKOLESAR@xxxxxxxxxxxxx> wrote:
>>>>>>>
>>>>>>>> Brad,
>>>>>>>> The fiber type mix in one of my contributions in September is all
>>>>>>>> based
>>>>>>>> on cabling that is pre-terminated with MPO (MTP) array connectors.
>>>>>>>> Recall
>>>>>>>> that single-mode fiber represents about 10 to 15% of those channels.
>>>>>>>> Such cabling infrastructure provides the ability to support either
>>>>>>>> multiple 2-fiber or parallel applications by applying or removing
>>>>>>>> fan-outs from the ends of the cables at the patch panels.  The
>>>>>>>> fan-outs
>>>>>>>> transition the MPO terminated cables to collections of LC or SC
>>>>>>>> connectors.  If fan-outs are not present, the cabling is ready to
>>>>>>>> support
>>>>>>>> parallel applications by using array equipment cords.  As far as I
>>>>>>>> am
>>>>>>>> aware this pre-terminated cabling approach is the primary way data
>>>>>>>> centers are built today, and has been in practice for many years.
>>>>>>>> So
>>>>>>>> array terminations are commonly used on single-mode cabling
>>>>>>>> infrastructures.  While that last statement is true, it could leave
>>>>>>>> a
>>>>>>>> distorted impression if I also did not say that virtually the entire
>>>>>>>> existing infrastructure employs fan-outs today simply because
>>>>>>>> parallel applications have not
>>>>>>>> been
>>>>>>>> deployed in significant numbers.  But migration to parallel optic
>>>>>>>> interfaces is a matter of removing the existing fan-outs.  This is
>>>>>>>> what
>>>>>>>> I
>>>>>>>> tried to describe at the microphone during November's meeting.
>>>>>>>>
>>>>>>>> Regards,
>>>>>>>> Paul
>>>>>>>>
>>>>>>>> -----Original Message-----
>>>>>>>> From: Brad Booth [mailto:Brad_Booth@xxxxxxxx]
>>>>>>>> Sent: Wednesday, November 16, 2011 11:34 AM
>>>>>>>> To: STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx
>>>>>>>> Subject: Re: [802.3_100GNGOPTX] Emerging new reach space
>>>>>>>>
>>>>>>>> Anyone have any data on distribution of parallel vs duplex volume
>>>>>>>> for
>>>>>>>> OM3/4 and OS1?
>>>>>>>>
>>>>>>>> Is most SMF duplex (or simplex), given the alignment requirements?
>>>>>>>>
>>>>>>>> It would be nice to have a MMF version of 100G that doesn't require
>>>>>>>> parallel fibers, but we'd need to understand relative cost
>>>>>>>> differences.
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> Brad
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> -----Original Message-----
>>>>>>>> From: Ali Ghiasi [aghiasi@xxxxxxxxxxxx<mailto:aghiasi@xxxxxxxxxxxx>]
>>>>>>>> Sent: Wednesday, November 16, 2011 11:04 AM Central Standard Time
>>>>>>>> To: STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx
>>>>>>>> Subject: Re: [802.3_100GNGOPTX] Emerging new reach space
>>>>>>>>
>>>>>>>> Jack
>>>>>>>>
>>>>>>>> If there is to be another LR4 PMD, the best starting point would be
>>>>>>>> 40GBASE-LR4: look at its cost structure, and build a 40G/100G
>>>>>>>> compatible PMD.
>>>>>>>>
>>>>>>>> We also need to understand the cost difference between parallel MR4
>>>>>>>> and 40GBASE-LR4 (CWDM).  The 40GBASE-LR4 cost could, with time, be
>>>>>>>> assumed identical to that of the new 100G MR4 PMD.  Having this
>>>>>>>> baseline cost, we can then compare its cost with 100GBASE-LR4 and
>>>>>>>> parallel MR4.  The next step is to take into account the higher cable
>>>>>>>> and connector cost associated with a parallel implementation, then
>>>>>>>> identify at what reach it reaches parity with 100G (CWDM) or
>>>>>>>> 100G (LAN-WDM).
>>>>>>>>
>>>>>>>> In the meantime we need to get more direct feedback from end users on
>>>>>>>> whether parallel SMF is even an acceptable solution for reaches of
>>>>>>>> 500-1000 m.
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> Ali
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Nov 15, 2011, at 8:41 PM, Jack Jewell wrote:
>>>>>>>>
>>>>>>>> Thanks for this input Chris.
>>>>>>>> I'm not "proposing" anything here, rather trying to frame the
>>>>>>>> challenge,
>>>>>>>> so that we become better aligned in how cost-aggressive we should
>>>>>>>> be,
>>>>>>>> which guides the technical approach. As for names, "whatever works"
>>>>>>>> :-)
>>>>>>>> It would be nice to have a (whatever)R4, be it nR4 or something
>>>>>>>> else,
>>>>>>>> and
>>>>>>>> an English name to go with it. The Structured Data Center (SDC)
>>>>>>>> links
>>>>>>>> you
>>>>>>>> describe in your Nov2011 presentation are what I am referencing,
>>>>>>>> except
>>>>>>>> for the restriction to "duplex SMF." My input is based on use of any
>>>>>>>> interconnection medium that provides the overall lowest-cost,
>>>>>>>> lowest-power solution, including e.g. parallel SMF.
>>>>>>>> Cost comparisons are necessary but, I agree, tend to be dicey.
>>>>>>>> Present
>>>>>>>> 10GbE costs are much better defined than projected 100GbE NextGen
>>>>>>>> costs,
>>>>>>>> but there's no getting around having to estimate NextGen costs, and
>>>>>>>> specifying the comparison. Before the straw poll, I got explicit
>>>>>>>> clarification that "LR4" did NOT include mux/demux IC's, and
>>>>>>>> therefore
>>>>>>>> did not refer to what is built today. My assumption was a "fair"
>>>>>>>> cost
>>>>>>>> comparison between LR4 and (let's call it) nR4 - at a similar stage of
>>>>>>>> development and market maturity. A relevant stage is during
>>>>>>>> delivery of
>>>>>>>> high volumes (prototype costs are of low relevance). This does NOT
>>>>>>>> imply
>>>>>>>> same volumes. It wouldn't be fair to project ER costs based on SR or
>>>>>>>> copper volumes. I'm guessing these assumptions are mainstream in
>>>>>>>> this
>>>>>>>> group. That would make the 25% cost target very aggressive, and a
>>>>>>>> 50%
>>>>>>>> cost target probably sufficient to justify an optimized solution.
>>>>>>>> Power
>>>>>>>> requirements are a part of the total cost of ownership, and should
>>>>>>>> be
>>>>>>>> considered, but perhaps weren't.
>>>>>>>> The kernel of this discussion is whether to pursue "optimized
>>>>>>>> solutions"
>>>>>>>> vs "restricted solutions." LR4 was specified through great scrutiny
>>>>>>>> and
>>>>>>>> is expected to be a very successful solution for 10km reach over
>>>>>>>> duplex
>>>>>>>> SMF. Interoperability with LR4 is obviously desirable, but would a
>>>>>>>> 1km-spec'd-down version of LR4 provide sufficient cost/power savings
>>>>>>>> over
>>>>>>>> LR4 to justify a new PMD and product development? Is there another
>>>>>>>> duplex
>>>>>>>> SMF solution that would provide sufficient cost/power savings over
>>>>>>>> LR4
>>>>>>>> to
>>>>>>>> justify a new PMD and product development? If so, why wouldn't it be
>>>>>>>> essentially a 1km-spec'd-down version of LR4? There is wide
>>>>>>>> perception
>>>>>>>> that SDC's will require costs/powers much lower than are expected
>>>>>>>> from
>>>>>>>> LR4, so much lower that its solution is a major topic in HSSG. So
>>>>>>>> far,
>>>>>>>> it looks to me like an optimized solution is probably warranted. But
>>>>>>>> I'm
>>>>>>>> not yet convinced of that, and don't see consensus on the issue in
>>>>>>>> the
>>>>>>>> group, hence the discussion.
>>>>>>>> Cheers, Jack
>>>>>>>>
>>>>>>>> From: Chris Cole
>>>>>>>> <chris.cole@xxxxxxxxxxx<mailto:chris.cole@xxxxxxxxxxx>>
>>>>>>>> Reply-To: Chris Cole
>>>>>>>> <chris.cole@xxxxxxxxxxx<mailto:chris.cole@xxxxxxxxxxx>>
>>>>>>>> Date: Tue, 15 Nov 2011 17:33:17 -0800
>>>>>>>> To: <STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx<mailto:STDS-802-3-100GNGOPTX@LISTSERV.IEEE.ORG>>
>>>>>>>> Subject: Re: [802.3_100GNGOPTX] Emerging new reach space
>>>>>>>>
>>>>>>>> Hello Jack,
>>>>>>>>
>>>>>>>> Nice historical perspective on the new reach space.
>>>>>>>>
>>>>>>>> Do I interpret your email as proposing to call the new 150m to 1000m
>>>>>>>> standard 100GE-MR4? ☺
>>>>>>>>
>>>>>>>> One of the problems in using today’s 100GE-LR4 cost as a comparison
>>>>>>>> metric for new optics is that there is at least an order of
>>>>>>>> magnitude
>>>>>>>> variation in the perception of what that cost is. Given such a wide
>>>>>>>> disparity in perception, 25% can either be impressive or inadequate.
>>>>>>>>
>>>>>>>> What I had proposed as reference baselines for making comparisons are
>>>>>>>> 10GE-SR (VCSEL based TX), 10GE-LR (DFB laser based TX) and 10GE-ER
>>>>>>>> (EML
>>>>>>>> based TX) bit/sec cost. This not only allows us to make objective
>>>>>>>> relative comparisons but also to decide if the technology is
>>>>>>>> suitable
>>>>>>>> for
>>>>>>>> widespread adoption by using rules of thumb like 10x the bandwidth
>>>>>>>> (i.e. 100G) at 4x the cost (i.e. 40% of the 10GE-nR cost per bit/sec)
>>>>>>>> at similar high volumes.
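>>>>>>>>
>>>>>>>> As a sketch of that rule of thumb (costs normalized to 10GE-SR = 1.0;
>>>>>>>> the baseline ratios are illustrative placeholders, not real prices):
>>>>>>>>
>>>>>>>>     # 10x the bandwidth at 4x the cost: a 100G module is compelling
>>>>>>>>     # at ~40% of its 10G baseline's cost per bit/sec.
>>>>>>>>     baselines = {"10GE-SR": 1.0, "10GE-LR": 3.0, "10GE-ER": 8.0}
>>>>>>>>     for name, cost in baselines.items():
>>>>>>>>         target = 4 * cost               # 100G module cost target
>>>>>>>>         per_bit = target / (10 * cost)  # vs baseline per bit/sec
>>>>>>>>         print(f"{name}: target {target:.1f} units, {per_bit:.0%} per bit")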
>>>>>>>>
>>>>>>>> Using these reference baselines, in order for the new reach space
>>>>>>>> optics
>>>>>>>> to be compelling, they must have a cost structure that is
>>>>>>>> referenced to
>>>>>>>> a
>>>>>>>> fraction of 10GE-SR (VCSEL based) cost, NOT referenced to a
>>>>>>>> fraction of
>>>>>>>> 10GE-LR (DFB laser based) cost. Otherwise, the argument can be made
>>>>>>>> that
>>>>>>>> 100GE-LR4 will get to a fraction of 10GE-LR cost, at similar
>>>>>>>> volumes,
>>>>>>>> so
>>>>>>>> why propose something new.
>>>>>>>>
>>>>>>>> Chris
>>>>>>>>
>>>>>>>> From: Jack Jewell [mailto:jack@xxxxxxxxxxxxxx]
>>>>>>>> Sent: Tuesday, November 15, 2011 3:06 PM
>>>>>>>> To: STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx<mailto:STDS-802-3-100GNGOPTX@LISTSERV.IEEE.ORG>
>>>>>>>> Subject: [802.3_100GNGOPTX] Emerging new reach space
>>>>>>>>
>>>>>>>> Following last week's meetings, I think the following is relevant to
>>>>>>>> frame our discussions of satisfying data center needs for low-cost
>>>>>>>> low-power interconnections over reaches in the roughly 150-1000m
>>>>>>>> range.
>>>>>>>> This is a "30,000ft view," without getting overly specific.
>>>>>>>> Throughout GbE, 10GbE, 100GbE and into our discussions of 100GbE
>>>>>>>> NextGenOptics, there have been 3 distinct spaces, with solutions
>>>>>>>> optimized for each: Copper, MMF, and SMF. With increasing data
>>>>>>>> rates,
>>>>>>>> both copper and MMF specs focused on maintaining minimal cost, and
>>>>>>>> their
>>>>>>>> reach lengths decreased. E.g. MMF reach was up to 550m in GbE, then
>>>>>>>> 300m
>>>>>>>> in 10GbE (even shorter reach defined outside of IEEE), then
>>>>>>>> 100-150m in
>>>>>>>> 100GbE. MMF reach for 100GbE NextGenOptics will be even shorter
>>>>>>>> unless
>>>>>>>> electronics like EQ or FEC are included. Concurrently, MMF solutions
>>>>>>>> have
>>>>>>>> become attractive over copper at shorter and shorter distances. Both
>>>>>>>> copper and MMF spaces have "literally" shrunk. In contrast, SMF
>>>>>>>> solutions
>>>>>>>> have maintained a 10km reach (not worrying about the initial 5km
>>>>>>>> spec
>>>>>>>> in
>>>>>>>> GbE, or 40km solutions). To maintain the 10km reach, SMF solutions
>>>>>>>> evolved from FP lasers, to DFB lasers, to WDM with cooled DFB
>>>>>>>> lasers.
>>>>>>>> The
>>>>>>>> 10km solutions increasingly resemble longer-haul telecom solutions.
>>>>>>>> There is an increasing cost disparity between MMF and SMF solutions.
>>>>>>>> This
>>>>>>>> is an observation, not a questioning of the reasons behind these
>>>>>>>> trends.
>>>>>>>> The increasing cost disparity between MMF and SMF solutions is
>>>>>>>> accompanied by rapidly-growing data center needs for links longer
>>>>>>>> than
>>>>>>>> MMF can accommodate, at costs less than 10km SMF can accommodate.
>>>>>>>> This
>>>>>>>> has the appearance of the emergence of a new "reach space," which
>>>>>>>> warrants its own optimized solution. The emergence of the new reach
>>>>>>>> space
>>>>>>>> is the crux of this discussion.
>>>>>>>> Last week, a straw poll showed heavy support for "a PMD supporting a
>>>>>>>> 500m
>>>>>>>> reach at 25% the cost of 100GBASE-LR4" (heavily favored over
>>>>>>>> targets of
>>>>>>>> 75% or 50% the cost of 100GBASE-LR4). By heavily favoring the most
>>>>>>>> aggressive low-cost target, this vote further supports the need for
>>>>>>>> an
>>>>>>>> "optimized solution" for this reach space. By "optimized solution" I
>>>>>>>> mean
>>>>>>>> one which is free from constraints, e.g. interoperability with other
>>>>>>>> solutions. Though interoperability is desirable, an interoperable
>>>>>>>> solution is unlikely to achieve the cost target. In the 3 reach
>>>>>>>> spaces
>>>>>>>> discussed so far, there is NO interoperability between copper/MMF,
>>>>>>>> MMF/SMF, or copper/SMF. Copper, MMF and SMF are optimized
>>>>>>>> solutions. It
>>>>>>>> will likely take an optimized solution to satisfy this "mid-reach"
>>>>>>>> space
>>>>>>>> at the desired costs. To repeat: This has the appearance of the
>>>>>>>> emergence
>>>>>>>> of a new "reach space," which warrants its own optimized solution.
>>>>>>>> Since
>>>>>>>> the reach target lies between "short reach" and "long reach,"
>>>>>>>> "mid-reach" is a reasonable term.
>>>>>>>> Without discussing specific technical solutions, it is noteworthy
>>>>>>>> that
>>>>>>>> all 4 technical presentations last week for this "mid-reach" space
>>>>>>>> involved parallel SMF, which would not interoperate with either
>>>>>>>> 100GBASE-LR4, MMF, or copper. They would be optimized solutions, and
>>>>>>>> interest in their further work received the highest support in straw
>>>>>>>> polls. Given the high-density environment of datacenters, a solution
>>>>>>>> for
>>>>>>>> the mid-reach space would have most impact if its operating power
>>>>>>>> was
>>>>>>>> sufficiently low to be implemented in a form factor compatible with
>>>>>>>> MMF
>>>>>>>> and copper sockets.
>>>>>>>> Cheers, Jack
>>>>
>>>