
Re: [HSSG] BER Objective




Well,

maybe we should then distinguish (again) between

a) the 300 m short-reach data-center application w/o FEC, for low latency
b) the long-reach WAN application for WDM backbones, with FEC

For b), latency wouldn't be such an issue because the time of flight
is much higher anyway ...
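
To put that in rough numbers, a minimal sketch in Python (the ~5 ns/m fiber delay, the 80 km span length, and the 2040-bit RS(255,239)-sized block are illustrative assumptions, not values from this thread):

# Illustrative latency comparison: fiber time of flight vs. the time one
# FEC block takes to arrive at the line rate.  All parameters below are
# assumptions for illustration only.
def fiber_delay_us(length_m, ns_per_m=5.0):        # ~5 ns/m group delay
    return length_m * ns_per_m / 1e3

def block_arrival_us(block_bits=255 * 8, rate_bps=100e9):
    return block_bits / rate_bps * 1e6             # serialization time only

print(f"300 m data-center link : {fiber_delay_us(300):9.3f} us time of flight")
print(f"80 km WAN span         : {fiber_delay_us(80e3):9.3f} us time of flight")
print(f"One 2040-bit FEC block : {block_arrival_us():9.4f} us at 100 Gb/s")
print("(decoder processing adds more, but remains small next to a WAN flight time)")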

Marcus


Roger Merel wrote:

Unfortunately, FEC is not an option, as these applications are even more sensitive to latency, and FEC adds latency.

 


From: Mike Dudek [mailto:mike.dudek@xxxxxxxxxxxxx]
Sent: Tuesday, August 29, 2006 2:36 PM
To: STDS-802-3-HSSG@xxxxxxxxxxxxxxxxx
Subject: Re: [HSSG] BER Objective

 

Am I being naive here, or could the applications that require the super-low error rates include some FEC, without burdening all the physical layer applications with a requirement for an extremely low error rate?  Although present applications may be expected to be the high-end users that require the extremely low error rates, if we have broad market potential, that segment of the market is going to shrink as a percentage in the future.  Other standards have included FEC; why not this one?  (10GEPON is looking at pretty strong FEC.)
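
As a rough illustration of that trade-off, a sketch (not from the thread) of how much an outer Reed-Solomon code relaxes the raw line BER, assuming independent bit errors and using the G.709-style RS(255,239) code purely as an example:

# RS(255,239) over 8-bit symbols corrects up to t = 8 symbol errors per
# codeword.  Under an i.i.d. bit-error assumption, the probability that a
# codeword is uncorrectable is a proxy for the post-FEC error floor.
from math import comb

def uncorrectable_prob(pre_fec_ber, n=255, t=8, bits_per_symbol=8):
    p_sym = 1.0 - (1.0 - pre_fec_ber) ** bits_per_symbol   # symbol error prob
    return sum(comb(n, j) * p_sym**j * (1.0 - p_sym)**(n - j)
               for j in range(t + 1, n + 1))

for ber in (1e-3, 1e-4, 1e-5):
    print(f"pre-FEC BER {ber:.0e} -> P(uncorrectable codeword) "
          f"{uncorrectable_prob(ber):.2e}")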

 

Regards,

Mike Dudek
Director Transceiver Engineering
Picolight Inc
1480 Arthur Avenue
Louisville
CO 80027
Tel  303 530 3189 x7533.
mike.dudek@xxxxxxxxxxxxx

 


From: Trowbridge, Stephen J (Steve) [mailto:sjtrowbridge@xxxxxxxxxx]
Sent: Tuesday, August 29, 2006 2:50 PM
To: STDS-802-3-HSSG@xxxxxxxxxxxxxxxxx
Subject: Re: [HSSG] BER Objective

Jugnu,

Looking further into the thread, I think there are at least two separate questions:

 

- What BER level is feasible to test/verify? Can it be tested directly, or only through extrapolation?

 

- What BER is needed for the service?

 

I had reacted initially to the statement about errors being "few and far between" and the discussion about lengthy tests being required, almost in the same breath as proposing BER of 10^-10, which made no sense.

 

Other discussion seems to revolve around whether this BER is good enough for the service. From the discussion, I agree that if your 100 Gbit/s is an aggregate of a huge number of much smaller flows, we can consider BER in terms of the percentage of corrupted packets and required retransmissions, and not have to strive for lower BER as we go to higher bit rates. But if the interface is used because the customer needs a small number (perhaps only one, in a supercomputer environment) of very large flows, and stopping to retransmit might idle some very large processors, this is a different situation entirely.
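
For the packet-percentage view, a quick sketch (assumptions, not from the thread: independent bit errors, 1500-byte frames, 100 Gb/s line rate):

def frame_error_ratio(ber, frame_bytes=1500):
    bits = frame_bytes * 8
    return 1.0 - (1.0 - ber) ** bits                 # P(frame has >= 1 bit error)

def errored_frames_per_s(ber, frame_bytes=1500, rate_bps=100e9):
    frames_per_s = rate_bps / (frame_bytes * 8)      # ignores preamble/IPG
    return frames_per_s * frame_error_ratio(ber, frame_bytes)

for ber in (1e-10, 1e-12, 1e-15):
    print(f"BER {ber:.0e}: {frame_error_ratio(ber):.2e} of frames errored, "
          f"~{errored_frames_per_s(ber):.2e} errored frames/s at 100 Gb/s")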

 

Probably good to try to separate the two questions above for the discussion.

Regards,

Steve

 


From: OJHA,JUGNU [mailto:jugnu.ojha@xxxxxxxxxxxxx]
Sent: Tuesday, August 29, 2006 2:14 PM
To: Trowbridge, Stephen J (Steve); STDS-802-3-HSSG@xxxxxxxxxxxxxxxxx
Subject: RE: [HSSG] BER Objective

Steve,

 

I was trying to understand whether 1 error per second is a lot worse from a performance point of view than 1 error every ten seconds.  Your point about the extra test time is valid for the foreseeable future, where these interfaces are only used for high-end applications.  

 

Regards,

Jugnu

 


From: Trowbridge, Stephen J (Steve) [mailto:sjtrowbridge@xxxxxxxxxx]
Sent: Tuesday, August 29, 2006 1:11 PM
To: OJHA,JUGNU; STDS-802-3-HSSG@xxxxxxxxxxxxxxxxx
Subject: RE: [HSSG] BER Objective

 

Jugnu,

I'm confused. I see a proposal to use a BER requirement of 10^-10 and these words about errors being "few and far between", yet 100 Gbit/s is 10^11 bits/s. An average of 10 errors per second doesn't match my idea of "few and far between", and I doubt this would be acceptable to most users who will pay the kind of money this kind of interface is likely to cost.

 

I am curious whether a really high-end interface like this makes a lengthier test more feasible (i.e., you couldn't afford a 10-minute test on an interface you expected to sell for $10, but for an infrastructure interface like this one, some extra testing time might not be a significant portion of the interface cost).
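
Some back-of-envelope numbers for both points (a sketch, not from the email; the 95%-confidence rule of observing about 3/BER error-free bits is an assumption of a standard error-counting test):

def mean_s_between_errors(ber, rate_bps):
    return 1.0 / (ber * rate_bps)

def test_time_s(ber_target, rate_bps, confidence_factor=3.0):
    # ~3/BER error-free bits gives ~95% confidence the true BER meets the target
    return confidence_factor / (ber_target * rate_bps)

for ber in (1e-10, 1e-12, 1e-13, 1e-15):
    print(f"BER {ber:.0e} at 100 Gb/s: one error every "
          f"{mean_s_between_errors(ber, 100e9):.3g} s, "
          f"~{test_time_s(ber, 100e9):.3g} s to verify at 95% confidence")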

Regards,

Steve

 


From: OJHA,JUGNU [mailto:jugnu.ojha@xxxxxxxxxxxxx]
Sent: Tuesday, August 29, 2006 11:48 AM
To: STDS-802-3-HSSG@xxxxxxxxxxxxxxxxx
Subject: Re: [HSSG] BER Objective

Roger, 

 

I understand that test time is the issue.  The point I’m getting at (and which I’ve always wondered about) is, if the errors are so few and far between that it takes so long to find them, how much impact can they really be having on the system/network performance?  That is, are we being too demanding with the BER requirements?

 

Jugnu

 


From: Roger Merel [mailto:roger@xxxxxxxxxxx]
Sent: Tuesday, August 29, 2006 10:44 AM
To: STDS-802-3-HSSG@xxxxxxxxxxxxxxxxx
Subject: Re: [HSSG] BER Objective

 

It’s not hard to measure, just time-consuming.  If one wants to keep optics affordable, one needs manufacturing test time to be under a few minutes, not over 10 minutes.

 

Though my position is that a 1E-15 BER is not required; 1E-13 at most.

 


From: OJHA,JUGNU [mailto:jugnu.ojha@xxxxxxxxxxxxx]
Sent: Tuesday, August 29, 2006 10:37 AM
To: STDS-802-3-HSSG@xxxxxxxxxxxxxxxxx
Subject: Re: [HSSG] BER Objective

 

All of this raises the following question:  If this is so hard to measure, how much impact can it really have in the real world?  Why not back the BER requirement off to 1E-10? 

 

Regards,

Jugnu

 


From: Petar Pepeljugoski [mailto:petarp@xxxxxxxxxx]
Sent: Monday, August 28, 2006 8:03 PM
To: STDS-802-3-HSSG@xxxxxxxxxxxxxxxxx
Subject: Re: [HSSG] BER Objective

 


I agree with Howard. It is impractical and expensive to test for very low BERs - the specs should be written so that the power budget is capable of achieving BER = 1e-15, yet the testing can be some kind of accelerated measurement at a less stringent BER, with the 1e-15 value derived by curve extrapolation.

However, as with any extrapolation of test results, one has to be careful, so in this case it will be the manufacturers' responsibility to guarantee BER = 1e-15.  
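
A minimal sketch of that kind of accelerated test (the measurement values and the signal-independent Gaussian-noise model are illustrative assumptions, not anything from this thread):

# Measure BER at two received-power points where errors come quickly,
# convert each BER to a Gaussian Q value, treat Q as linear in optical
# power (valid for signal-independent noise), and extrapolate to the
# power at which the model predicts BER = 1e-15.
from statistics import NormalDist

def ber_to_q(ber):
    # BER = Q-function(Q)  =>  Q = -Phi^{-1}(BER)
    return -NormalDist().inv_cdf(ber)

# Hypothetical measurements: (received power in mW, measured BER)
meas = [(0.010, 1e-7), (0.013, 1e-9)]

(p1, b1), (p2, b2) = meas
q1, q2 = ber_to_q(b1), ber_to_q(b2)
slope = (q2 - q1) / (p2 - p1)              # Q per mW
p_target = p1 + (ber_to_q(1e-15) - q1) / slope

print(f"Predicted receive power for BER 1e-15: {p_target * 1e3:.1f} uW "
      f"(extrapolated, so margin and a manufacturer guarantee still matter)")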

Regards,

Petar Pepeljugoski
IBM Research
P.O.Box 218 (mail)
1101 Kitchawan Road, Rte. 134 (shipping)
Yorktown Heights, NY 10598

e-mail: petarp@xxxxxxxxxx
phone: (914)-945-3761
fax:        (914)-945-4134

From: Howard Frazier [mailto:hfrazier@xxxxxxxxxxxx]
Sent: Monday, August 28, 2006 5:39 PM
To: STDS-802-3-HSSG@xxxxxxxxxxxxxxxxx
Subject: Re: [HSSG] BER Objective

For the 100 Mbps EFM fiber optic links (100BASE-LX10 and 100BASE-BX10)
we specified a BER requirement of 1E-12, consistent with the BER requirement
for gigabit links. We recognized that this would be impractical to test in a
production environment, so we defined a means to extrapolate a BER of 1E-12
by testing to a BER of 1E-10 with an additional 1 dB of attenuation.  See
58.3.2 and 58.4.2.
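
For a sense of the Gaussian-noise arithmetic behind that style of extrapolation (a model sketch, not the normative 58.3.2/58.4.2 procedure):

from math import log10
from statistics import NormalDist

def q_for_ber(ber):
    return -NormalDist().inv_cdf(ber)

def optical_delta_db(ber_easy, ber_hard):
    # Q scales linearly with received optical power for signal-independent
    # noise, so the power delta in optical dB is 10*log10 of the Q ratio.
    return 10.0 * log10(q_for_ber(ber_hard) / q_for_ber(ber_easy))

print(f"1e-10 -> 1e-12: {optical_delta_db(1e-10, 1e-12):.2f} dB more power needed")
print(f"1e-12 -> 1e-15: {optical_delta_db(1e-12, 1e-15):.2f} dB more power needed")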
 
Howard Frazier
Broadcom Corporation


From: Roger Merel [mailto:roger@xxxxxxxxxxx]
Sent: Monday, August 28, 2006 1:54 PM
To: STDS-802-3-HSSG@xxxxxxxxxxxxxxxxx
Subject: Re: [HSSG] BER Objective


David,
 
Prior to 10G, the BER standard (for optical communications) was set at 1E-10 (155M-2.5G).  At 10G, the BER standard was revised to 1E-12.  For unamplified links, the difference between 1E-12 and 1E-15 is only a difference of 1 dB in power delivered to the PD.  However, the larger issue is one of margin and testability (the duration required to reliably verify 1E-15 at 10G is impractical as a factory test on every unit), especially since we’d want to spec the worst-case product distribution at worst-case path loss (cable + connector loss) and at EOL, with margin.  Thus, in reality, all products ship from the factory at BOL with a BER of 1E-15, and in fact nearly all will continue to deliver 1E-15 for their entire life under their actual operating conditions and with their actual cable losses.
 
Thus, if by “design target” you mean worst case on worst case, with margin, to be assured at EOL on every factory unit, then this is overkill.  I might be willing to entertain a 1E-13 BER, as this would imply the same number of errors per second on an absolute basis, irrespective of the number of bits being passed; verifying it takes the same time in the factory as verifying 1E-12 at 10G (although this is in fact a real cost burden which adversely affects product economics).  However, this would not substantially change the reality of the link budget.  It would also make for a sensible policy for future bit error rate specs (should there be future “Still-Higher-Speed” SGs).
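
A quick check of that scaling argument (a sketch with assumed rates, not from the email; it uses the same ~3/BER error-counting rule of thumb as the test-time sketch earlier in the thread):

def errors_per_s(ber, rate_bps):
    return ber * rate_bps

def test_time_s(ber, rate_bps, confidence_factor=3.0):
    return confidence_factor / (ber * rate_bps)

for rate_bps, ber in ((10e9, 1e-12), (100e9, 1e-13)):
    print(f"{rate_bps / 1e9:.0f} Gb/s at BER {ber:.0e}: "
          f"{errors_per_s(ber, rate_bps):.0e} errors/s, "
          f"~{test_time_s(ber, rate_bps):.0f} s error-counting test")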
 
-Roger
 
 


 



From: Martin, David (CAR:Q840)
Sent: Friday, August 25, 2006 12:22 PM
To: STDS-802-3-HSSG@xxxxxxxxxxxxxxxxx
Subject: BER Objective

 
During the discussion on Reach Objectives there didn’t appear to be any mention of corresponding BER.
 
Recall the comments from the floor during the July meeting CFI regarding how 10GigE has been used more as infrastructure than as typical end-user NICs, and that the application expectation for 100GigE would be similar.
 
Based on that view, I’d suggest a BER design target of (at least) 1E-15. That has been the de facto expectation from most carriers since the introduction of OC-192 systems.
 
The need for strong FEC (e.g., G.709 RS), lighter FEC (e.g., BCH-3), or none at all would then depend on various factors, like the optical technology chosen for each of the target link lengths.

...Dave

David W. Martin
Nortel Networks

dwmartin@xxxxxxxxxx
+1 613 765 2901 (esn 395)
~~~~~~~~~~~~~~~~~~~~

 


-- 
___________________________
Marcus Duelk
Bell Labs / Lucent Technologies
Data Optical Networks Research

Crawford Hill HOH R-237
791 Holmdel-Keyport Road
Holmdel, NJ 07733, USA
fon +1 (732) 888-7086
fax +1 (732) 888-7074