Re: Long distance links
I agree with your view of the attachment to the DWDM network. This is quite
a complicated issue since we need to consider both a new (standard) DWDM
installation (very clean) and DWDM on existing installed base where we may
need converter equipment to shift standard frequencies to those used in the
installed base. The installations I've seen (e.g. for trans-ocean submarine
cables) tend to be bulky, complicated, and inherently expensive. We want
to isolate this complexity from the standard product.
From: Paul Bottorff <pbottorf@xxxxxxxxxxxxxxxxxx>
To: DOVE,DANIEL J (HP-Roseville,ex1) <dan_dove@xxxxxxxxxxxxxx>; HSSG
Date: Thursday, September 02, 1999 1:54 PM
Subject: RE: Long distance links
>I also think we are getting closer to understanding. A few comments.
>At 05:49 PM 9/1/99 -0600, DOVE,DANIEL J (HP-Roseville,ex1) wrote:
>>While we may not be coming closer to agreement (or maybe we are?) I
>>believe we are at least coming closer to understanding.
>>More in context below...
>>> >So if I understand this model, we have a 10Gig link (campus backbone)
>>> >that is connected to a campus switch. That switch wants to connect to
>>> >a WAN and thus will have a WAN port that operates at 9.58464 by using
>>> >its XGMII "hold" signal.
>>> Provided people build networks to this configuration, it works just
>>> fine. The IEEE has not yet decided to build 2 PHYs. I believe that
>>> the WAN PHY being talked about does not have a distinct identity
>>> from the LAN PHY.
>>This is one point at which we clearly have different perspectives. I
>>believe that there will be sufficient distinction in cost between a
>>DWDM laser for the WAN, and a (WWDM or serial) solution that is
>>limited to a few Km for the campus. Otherwise, why do we need an XGMII?
>I agree that a PHY which included a DWDM laser would have a distinct
>identity. However, I don't believe this interface is the current topic of
>standardization. How I see the system being built is that the DWDM network
>will be terminated in a shelf which provides 10 GigE access ports. On one
>side of the shelf will be IEEE standard 10 GigE; on the other side will be
>a DWDM photonic network. The device in the middle at the
>demarcation point will be a transponder/repeater. For a router to access
>the photonic network it will attach a 10 GigE interface to the photonic
>network access port.
>A typical 10 GigE WAN link which attaches to a photonic network would be
>built using 3 or more link segments. If you refer to my slides from
>Montreal the 5th slide provides a picture of such a network. The link
>segments which attach from the router to the photonic network need to
>provide the 9.58464 Gbps data rate, since this is all the data the photonic
>network can carry for historical reasons. The PHYs in the router do not
>have DWDM photonics.
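As a quick sanity check on that 9.58464 Gbps figure (my own arithmetic; it assumes the WAN rate comes from carrying Ethernet in an STS-192c/OC-192 SONET payload, which the thread does not spell out):

```python
# Back-of-the-envelope derivation of the 9.58464 Gbps payload rate.
# Assumption (mine, not from the thread): the DWDM network carries an
# STS-192c/OC-192 SONET signal and Ethernet rides in its payload.

STS1_RATE_GBPS = 0.05184          # 51.84 Mbps per STS-1
N = 192                           # OC-192

line_rate = N * STS1_RATE_GBPS    # 9.95328 Gbps line rate

# An STS-192c frame is 9 rows x (192 * 90) columns of bytes.
total_cols = N * 90               # 17280 columns total
transport_overhead_cols = N * 3   # 576 columns of section + line overhead
path_overhead_cols = 1            # 1 column of path overhead
fixed_stuff_cols = N // 3 - 1     # 63 columns of fixed stuff in an STS-Nc SPE

payload_cols = (total_cols - transport_overhead_cols
                - path_overhead_cols - fixed_stuff_cols)   # 16640
payload_rate = line_rate * payload_cols / total_cols

print(f"line rate    = {line_rate:.5f} Gbps")    # 9.95328
print(f"payload rate = {payload_rate:.5f} Gbps") # 9.58464
```

The payload-to-line ratio works out to exactly 26/27, which is why 9.95328 x 26/27 = 9.58464.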
>>> Because I don't have a good criterion for distinct identity,
>>> I've found no reason to believe the committee should build 2 PHYs.
>>> My assumption is that any PHY developed may run on SMF and may be
>>> deployed in the wide area. This is what is currently happening
>>> with 1 GigE.
>>Actually, there are LX, SX, CX, and 1000BASE-T, not to mention a few
>>proprietary links for long-haul 1550 nm. There is no reason to doubt
>>that 10G will follow the paradigm that allows multiple
>>PHYs for multiple cost/performance domains.
>Access to the photonic network described above can (and will in some cases)
>be less than 100 meters. It may use 850, 900, 1300, or 1550 nm lasers. It
>may be serial or CWDM. Finally, it may use a different encoding than the DWDM
>network (though I dislike this).
>>> >I agree that THAT switch will require buffering to handle the rate
>>> >mismatch, but that would be required in the event that it has more
>>> >than 10 Gigabit links feeding it anyway. This is OK.
>>> In the configuration I described, it is the buffer located at the
>>> junction between the IEEE segment and the DWDM segment that performs
>>> the rate matching. At this juncture there are only two ports. One side
>>> is the IEEE 10.00 Gbps segment and the other side is the 9.58464 Gbps
>>> DWDM cloud. The buffer size covers only the rate mismatch, not the
>>> normal overload seen in packet switches. The photonic network appears
>>> as a new segment in the link between switches, not as a separate link.
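To put rough numbers on that rate-matching buffer (my own arithmetic; the frame size and the no-flow-control burst model are illustrative assumptions, not anything agreed in the thread):

```python
# Rough sizing of the rate-matching buffer at the 10 GigE / DWDM junction.
# Assumptions (mine): standard max Ethernet frame, and a sender that bursts
# at full LAN rate with no flow control during the frame.

LAN_RATE = 10.00e9       # bps, IEEE 10 GigE side
WAN_RATE = 9.58464e9     # bps, DWDM/SONET payload side

mismatch = 1 - WAN_RATE / LAN_RATE
print(f"rate mismatch: {mismatch:.4%}")   # ~4.15% of LAN bandwidth

# While a maximum-size frame arrives at LAN rate and drains at WAN rate,
# the buffer must absorb the difference for the frame's duration.
MAX_FRAME_BITS = 1518 * 8                 # standard max Ethernet frame
backlog_bits = MAX_FRAME_BITS * mismatch
print(f"backlog per max frame: {backlog_bits:.0f} bits "
      f"(~{backlog_bits / 8:.0f} bytes)")

# Equivalently, an XGMII-style "hold" signal would need to pause the
# sender on about 4.15% of transfer cycles to match the rates over time.
```

The per-frame backlog is only tens of bytes, which is consistent with the point that this buffer covers the rate mismatch rather than switch-style congestion.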
>>This looks like a specific implementation restriction. I doubt that
>>I would implement it that way.
>Paul A. Bottorff, Director Switching Architecture
>Enterprise Solutions Technology Center
>Nortel Networks, Inc.
>4401 Great America Parkway
>Santa Clara, CA 95052-8185
>Tel: 408 495 3365 Fax: 408 495 1299 ESN: 265 3365