RE: WWDM vs. 10Gb/s serial
- To: firstname.lastname@example.org, "'BRIAN_LEMOFF@HP-PaloAlto-om16.om.hp.com'" <BRIAN_LEMOFF@HP-PaloAlto-om16.om.hp.com>
- Subject: RE: WWDM vs. 10Gb/s serial
- From: "Cornejo, Edward (Edward)" <email@example.com>
- Date: Fri, 7 May 1999 12:41:49 -0400
- Cc: firstname.lastname@example.org, email@example.com, firstname.lastname@example.org, email@example.com, firstname.lastname@example.org
- Sender: email@example.com
Sorry to jump in so late, but I've been traveling and was unable to pick up
email for a while.
From a component and electronics perspective, I also disagree that we are
6-7 years out from a low cost solution. In the proposal I made, I clearly
stated that these were uncooled devices that were not much different than
what we use for 2.5Gbps today. The area that needed further work was in the
electronics and also some of the specs. We believe that with the right
penalty allocations the F-P will satisfy 2km, and the 1.3 DFB 20km. And this
would work over existing single mode fiber. Granted there is an awful lot of
62.5µm MMF installed, but probably there is a bunch of dark single mode
fiber installed in the campus backbones also where 10GbE is most likely to
reside at the onset. To discuss this more intelligently, we need to perform a
new fiber survey as was suggested at our last meeting.
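The penalty-allocation claim above (F-P to 2km, 1.3µm DFB to 20km over SMF) is just power-budget arithmetic. A rough sketch, where every number is an illustrative assumption and none comes from the actual proposal:

```python
def link_margin(tx_dbm, rx_sens_dbm, length_km, atten_db_per_km,
                connector_db, penalty_db):
    """Remaining link margin after fiber loss, connector loss, and penalties."""
    loss_db = length_km * atten_db_per_km + connector_db + penalty_db
    return (tx_dbm - rx_sens_dbm) - loss_db

# Illustrative numbers only (assumed, not from the proposal): 0.4 dB/km SMF
# attenuation at 1310nm, 1.5 dB of connector loss, -19 dBm receiver
# sensitivity, and 3 dB allocated to dispersion/noise penalties.
fp_margin = link_margin(tx_dbm=-4, rx_sens_dbm=-19, length_km=2,
                        atten_db_per_km=0.4, connector_db=1.5, penalty_db=3.0)
dfb_margin = link_margin(tx_dbm=-1, rx_sens_dbm=-19, length_km=20,
                         atten_db_per_km=0.4, connector_db=1.5, penalty_db=3.0)
print(round(fp_margin, 1), round(dfb_margin, 1))
```

With these assumed figures both links close with margin to spare; the real question, as noted above, is how the penalties get allocated in the spec.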
One question I have on the WWDM approach is DMD. Since HP is proposing
1300nm for 62.5µm fiber, are there any DMD issues? And if the part is offset
internally, how will the same part work over SMF?
The area of 10gig electronics was discussed quite extensively at our last
meeting with various companies talking about SiGe and other processes that
will make the cost of these solutions quite attractive. The cost estimates
that I've seen for 10gig electronics are quite aggressive and are within the
realm of what 2.5Gig is today.
Likewise, the cost of lasers has dramatically dropped over the last several
years. In fact, today a 1000LX part costs at most 1.5 times what the 1000SX
does, and the gap continues to close rapidly. I would even venture to say
that if the volumes were equal between SX and LX, the price differential
would be even smaller.
I agree with Brian that it is too early to choose one approach over the
other. We need to discuss requirements, feasibility, performance,
reliability, and future proofing before we make any choices.
Lucent Technologies ME-Opto
> Sent: Thursday, May 06, 1999 9:08 PM
> To: firstname.lastname@example.org
> Cc: email@example.com; firstname.lastname@example.org;
> email@example.com; firstname.lastname@example.org; email@example.com
> Subject: WWDM vs. 10Gb/s serial
> I will try to respond to some of Bryan Gregory's remarks regarding
> CWDM vs. 10-Gb serial. By the way, I will refer to it as WWDM
> (SpectraLAN is HP's implementation of WWDM), since CWDM is apparently
> used to refer to 400-GHz spaced telecom systems, and this has caused
> some confusion among some people on this reflector.
> First, let me say that I agree that long-term (say 6-10 years out) a
> low-cost 10-Gb/s serial solution may be the simplest and lowest cost
> solution. That having been said, I think that with today's technology
> (and for several years out) WWDM will be the lowest cost and most
> useful technology for 10-GbE LAN applications.
> Fiber: A 4 x 2.5-Gb/s WWDM module in the 1300nm band should still
> support useful distances of up to 300m on the installed base of 62.5
> micron core fiber. The SpectraLAN approach, like 1000LX, will
> simultaneously support multimode and single mode applications (up to
> 10-km) with a single transceiver. All 10-Gb/s serial approaches that
> have been proposed (excluding multilevel logic) will require new fiber
> to be installed in premises applications.
> Laser Cost: At 2.5-Gb/s, low-cost uncooled, unisolated DFB lasers can
> be used with no side-mode suppression requirement (double-moded lasers
> okay) up to 10km. These lasers are readily available today in die form
> at costs not that much higher than the FP lasers used in 1000LX.
> RIN and jitter requirements at 2.5-Gb/s are MUCH easier to realize with
> high yield and low-cost electrical packaging than they are at 10-Gb/s
> (not to mention 12.5 Gbaud). Optical isolation will probably be required
> to achieve the necessary noise and linewidth requirements for a 10-km
> serial link (Lucent presented an unisolated FP solution for 1km. The
> solution they showed for a 10km uncooled DFB link required isolation).
> Given this, I believe that the 4 lasers required for WWDM will be many
> times lower cost than the single laser required for serial.
> Optical Packaging Cost: The 1000LX standard has forced transceiver
> vendors to develop low-cost automated alignment and precision die attach
> processes for aligning edge-emitting lasers to single-mode fiber. In our
> WWDM solution, we are leveraging such a system to robotically assemble
> and align our 4 lasers and MUX in a fast, low-cost process. On the Rx
> side, only multimode alignment tolerances are required to align the
> demux to the detector array and glue it into place. The mux and demux
> optics are low-cost parts (many times lower cost than a micro-optical
> assembly). The mux is a simple, unpolished, unpigtailed, silica
> waveguide chip (several hundred devices on a standard 4" wafer). The
> demux is an injection-molded plastic optical part, requiring minimal
> assembly. This may sound complicated, but it is not expensive. As we get
> further into the standards discussions, we'll provide more details that
> should help convince the skeptics that this is a realistic and low-cost
> solution.
> Electronics: WWDM at 2.5-Gb/s per channel works with existing low-cost
> Si electronics. 10-Gb/s serial Tx and Rx IC's will require processes at
> least 4 times faster. Add to this the tighter jitter and noise
> requirements, the poorer performance of dielectric circuit boards, the
> higher laser bandwidth requirements (required to push relaxation
> oscillation frequencies 4 times further out), and you have a difficult
> electrical problem to solve. The cost associated with the electronics
> and electrical packaging is likely to be much higher than that for 4ch
> WWDM for several years.
> Scalability: Bryan made a good point that a 10-Gb/s serial solution
> adopted now could be combined with WWDM later to provide even higher
> capacity (e.g. 40 Gb/s). Why not adopt the WWDM (4 x 2.5 Gb/s) approach
> now, when 10-Gb/s lasers and electronics are still very expensive, and
> then in a few years increase the channel rate to 10-Gb/s? Either
> solution for 10-GbE is scalable to 40-Gb/s when it is combined with the
> other.
> Eye-safety: The proposed power budget for SpectraLAN meets the Class 1
> eye-safety requirement by a comfortable margin. At 1550nm it would be
> even better, but increased fiber dispersion and the lack of suitable
> fiber in the LAN make this a more difficult option. It should be noted
> that 4 lasers means 6-dB less eye-safe power available per laser, but
> at 4 times the speed, for a given IC process, a typical receiver will
> be less sensitive by at least 6 dB, negating the eye-safety advantage
> inherent in the serial approach.
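For what it's worth, the 6-dB figures above are just decibel arithmetic (a sketch; the 4x receiver-sensitivity scaling is the rule of thumb cited in the text, not a measurement):

```python
import math

def db(power_ratio):
    """Express a power ratio in decibels."""
    return 10 * math.log10(power_ratio)

# Splitting a fixed eye-safe power budget across 4 lasers costs
# 10*log10(4) ~ 6 dB per laser.
per_laser_penalty_db = db(4)

# Rule of thumb from the text: for a given IC process, a receiver running
# at 1/4 the bit rate is roughly 4x (~6 dB) more sensitive, since noise
# scales with bandwidth.
sensitivity_gain_db = db(4)

print(round(per_laser_penalty_db, 2))  # the two 6-dB effects cancel
```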
> "Inherent Simplicity": A serial approach is "inherently simple".
> question which we must answer over the coming year is which approach
> the most practical sense from a performance and cost perspective,
> given the
> technologies that are available today.
> I hope I have at least provided a few reasons why 4x2.5-Gb/s WWDM might
> be better than a 10 Gb/s serial approach, at least in the near-term.
> There is still a lot to be learned, a lot to be demonstrated, and an
> awful lot of discussion to be had before one solution is chosen over
> another.
> -Brian Lemoff
> ______________________________ Reply Separator
> Subject: Re: 1310nm vs. 1550nm -> Eye Safety + Attenuation
> Author: Non-HP-bgregory (firstname.lastname@example.org) at HP-PaloAlto,mimegw2
> Date: 5/6/99 9:32 AM
> In response to Bill's email... regarding the EDFA issue, I'd imagine
> that this would only be used in a small number of cases with a serial
> 10GbE approach. I don't think it needs to be a core concern of the
> group, but in some dark fiber trunking applications it can be useful.
> I am most concerned about wavelengths vs. eye safety, and wavelengths
> vs. fiber attenuation. This could end up being a real killer. Four
> lasers @ 850nm or 1310nm put out quite a bit of light in an eye
> sensitive range. As I remember, four lasers at 1550nm offer a lot
> more margin. A single source at 1550nm could be very strong and
> meet the eye safe requirements. This increase in power combined with
> lower fiber attenuation would reduce some of the link distance
> problems that we're bound to run into.
> Also, long term I can't see how [4 lasers and an optical mux] + [4
> photodiodes and an optical de-mux] would be better than a single
> source and photodiode. There is a lot of difficult packaging involved
> in the CWDM approach. I think the CWDM solution offers a quicker time
> to market because most of that technology is available today. But long
> term a single 10 Gb source (uncooled DFB without isolator) has a lot
> of advantages. It is intrinsically much simpler. I think the board
> layout and chip-sets will eventually support this as well. If the
> standard wanted to be able to scale beyond 10 gigs, even the serial
> 10Gb solution could allow further CWDM scaling.
> Bryan Gregory
> ______________________________ Reply Separator
> Subject: RE: 1310nm vs. 1550nm window for 10GbE
> Author: "Bill St. Arnaud" <email@example.com> at INTERNET
> Date: 5/6/99 10:38 AM
> Hmmm. I just assumed that 802.3 HSSG would be looking at 1550 solutions
> as well as 1310 and 850.
> I agree with you: on longer haul links it makes a lot more sense to
> operate at 1550.
> I am not a big fan of EDFA pumping. It significantly raises the overall
> system cost. It only makes sense in very dense wavelength long haul
> systems typically deployed by carriers.
> CWDM with 10xGbE transceivers should be significantly cheaper. That is
> another reason why I think there will be a big market for 10xGbE, with
> all those transceivers every 30-80km on a CWDM system. However there is
> a tradeoff. There is greater probability of laser failure with many
> transceivers, and the need for many spares. I figure somewhere between
> 4-8 wavelengths on a CWDM system with transceivers is the breakpoint
> where it is probably more economical to go to DWDM with EDFA. Also, an
> EDFA is protocol and bit rate transparent.
> An EDFA will ..(edited)..... But EDFA window is very small, so wavelength
> spacing is very tight requiring expensive filters and very stable,
> temperature compensated lasers at each repeater site. Also laser power
> has to be carefully maintained within 1 dB, otherwise you will get gain
> tilt in the EDFAs. Loss of a single laser can throw the whole system
> off, which is why you need SONET protection switching. But companies
> are developing feedback techniques to adjust power on remaining lasers
> to solve this problem.
> A single 10xGbE transceiver will .(edited)....??? Probably less. So 6
> 10xGbE transceivers will equal one EDFA. No problems with gain tilt. If
> you lose one laser you only lose that channel, not the whole system.
> Protection switching is not as critical, etc.
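Bill's breakpoint argument reduces to a simple cost crossover. A sketch with hypothetical unit costs, where only the rough 6:1 EDFA-to-transceiver ratio comes from his note:

```python
def site_cost(n_channels, transceiver_cost, edfa_cost):
    """Cost of one repeater site: per-channel transceivers vs. one shared EDFA."""
    return n_channels * transceiver_cost, edfa_cost

# Hypothetical units: one EDFA costs about as much as 6 transceivers
# (Bill's ratio); the absolute figures are invented for illustration.
TRANSCEIVER, EDFA = 1.0, 6.0

breakeven = next(n for n in range(1, 100)
                 if site_cost(n, TRANSCEIVER, EDFA)[0]
                 > site_cost(n, TRANSCEIVER, EDFA)[1])
print(breakeven)  # first channel count at which the shared EDFA is cheaper
```

Under this assumed ratio the crossover lands at 7 channels, consistent with the 4-8 wavelength breakpoint estimated above, before even counting the spares and gain-tilt considerations he raises.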
> Bill St Arnaud
> Director Network Projects