
[8023-10GEPON] Optical Overload Ad-Hoc announcement



Dear All, 

I was tasked with leading the discussion regarding optical damage / overload
issues. 

I think there are three sub-items that all relate to this issue:
1. What values should be used for the optical damage levels for the optics?

2. What dynamic performance can be expected from strong-to-weak burst
reception (the Treceiver_settling question)? 

3. What about limiting the rate-of-attack of the burst Tx (Ton/Toff)?

We don't have much time, since formal comments must be submitted by April
4th.  So, below, I have put down my own initial thoughts on these topics.  I
invite all to reply with their comments as soon as possible.

1. What values should be used for optical damage levels? 
Clause 52 sets the precedent of placing the damage level 1dB over the
Maximum Receive level.  If we look at the absolute level for the 10G LX
optics, that is 0dBm, which admittedly is getting pretty strong.

For some of our optics, the math would put the Overload+1 level at 0dBm,
which is probably where it should stay.  However, for other optics, the
formula puts the damage level considerably lower.  I'd expect that damage
would not be a problem for levels lower than -3dBm (just a seat-of-the-pants
sort of judgment).  

But maybe the best approach is to take the damage level out of the main
table and just put it in a footnote (just as the 10G point-to-point clause
did).
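
Just to make the arithmetic explicit, here is a trivial Python sketch of
that clause 52 precedent (the receive levels used are placeholders, not
proposed numbers):

    # Clause 52 precedent: damage level = Maximum Receive level + 1dB.
    def damage_level_dbm(max_receive_dbm):
        return max_receive_dbm + 1.0

    for max_rx_dbm in (-1.0, -4.0):   # hypothetical maximum receive levels
        print(f"max Rx {max_rx_dbm:+.1f} dBm -> "
              f"damage {damage_level_dbm(max_rx_dbm):+.1f} dBm")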

2. What dynamic performance can be expected from strong-to-weak burst
reception (the Treceiver_settling question)? 

The Nagahori presentation gives us very useful data.  Let me illustrate it
in the following way:  From Nagahori page 7, we can see that a tau/T of 210
results in an error curve that has zero penalty at the higher bit error
rates we are working at.  (There are signs of an error floor, but it
appears at 1E-10, so we don't care.)  T, in our case, is 97 ps.  So, the
data says that setting tau to 20ns is OK.
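
To double-check that arithmetic, here is a quick Python sketch (the 97 ps
bit period is the one quoted above):

    # tau/T arithmetic from the Nagahori data (page 7).
    T = 97e-12      # bit period, ~97 ps at the 10.3125 Gb/s line rate
    ratio = 210     # tau/T value that shows zero penalty
    tau = ratio * T
    print(f"tau = {tau * 1e9:.1f} ns")   # -> 20.4 ns, so 20ns is fine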

Suppose we want to tolerate 20 dB of dynamic range burst to burst.  This
means that we need to set the time constant of the AC-coupling to be at
least 5 times shorter than the burst-to-burst time, since e^5 = 148 exceeds
the factor of 100 that 20dB represents.  With tau = 20ns, the burst-to-burst
time then needs to be at least 100ns.  So far, we are not seeing any
problems.  (By the way, the value of 100ns is what I put forward in
3av_0801_effenberger_3-page4.)
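
The underlying calculation, as a sketch (recall that 20dB of optical
dynamic range is a factor of 100 in power):

    import math

    # AC-coupling recovery: how many time constants does it take to
    # discharge the residual of a burst that was 20dB (100x) stronger?
    n = math.log(10 ** (20.0 / 10))   # ln(100) = 4.6, so 5 tau suffices
    tau = 20e-9                       # from the tau/T result above
    print(f"need {n:.1f} tau -> gap >= {5 * tau * 1e9:.0f} ns")  # 100ns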

I also think that real circuits will need to allocate time for control of
the pre-amplifier stage (setting of the APD bias and/or the TIA impedance).
This should take no more than an additional 100ns.

So, this leaves us with a requirement of 200ns, which gives a 2x safety
margin below the proposed Treceiver_settling value of 400ns.
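
In other words, the budget works out as:

    # Settling-time budget from the two contributions above.
    t_ac = 5 * 20e-9     # AC-coupling recovery: 100ns
    t_ctrl = 100e-9      # APD bias / TIA impedance adjustment: 100ns
    total = t_ac + t_ctrl
    print(f"total {total * 1e9:.0f} ns; "
          f"margin {400e-9 / total:.0f}x under 400 ns")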

Thus, I don't see any reason to change the value from the 400ns used in 1G
EPON.  While it is true that Treceiver_settling will likely need
to be longer than T_cdr, setting the maximum values of both at 400ns will
not preclude any implementations.  I fully expect that real systems will
actually do much better than both of these limits.  

3. What about limiting the rate-of-attack of the burst Tx (Ton/Toff)?
I went to talk with my optical front-end expert, and he explained the latest
results that we've been seeing.  The whole motivation of our concern is the
large 20dB dynamic range that we are targeting in PON systems.  The problem
is that the receiver is normally in the maximum gain condition, and then a
strong burst comes in that threatens to overload the circuit.  

Initially, we were concerned that the APD and the TIA would be most
sensitive to high burst transients.  However, this seems not to be the case.
The APD gain may be self-limiting (saturating), and this helps to limit the
signal to some extent.  So, damage to that part of the circuit seems
unlikely.   

However, there still is a problem, and that is that the second stage
amplifier (the one that is driven by the TIA) tends to get overloaded by the
strong bursts. (This is understandable, since the signal has received more
gain by this point.)  This prevents the output signal from being useful (for
control as well as for the actual signal), and the recovery from overload is
not well behaved.  So, we'd like to avoid that.  

The simplest way to prevent transient overload is to reduce either the APD
gain (by reducing its bias) or the TIA impedance.  Either of these methods
is essentially a control loop, and it will have a characteristic speed.
The setting of that speed is bounded in both directions just like the
AC-coupling speed, and a value of 20ns is good.  Given a control speed of
20ns, the loop will respond only that fast to input transients.  We can
therefore reduce the excursion of the control system output by limiting the
"time constant" of the input signal to be similar to that of the control
loop, as the sketch below illustrates.  This is why we suggest a 'rise time'
on the order of 20ns.
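
To illustrate the point, here is a small simulation sketch (a first-order
loop with illustrative numbers; not a model of any real circuit):

    import math

    # A first-order control loop (time constant 20ns) tracking a burst
    # envelope that turns on either as a step or with a ~20ns rise time.
    # The peak of (input - loop output) stands in for the transient
    # excursion that hits the post-TIA stage.
    tau_loop, dt = 20e-9, 0.1e-9
    for label, tau_rise in (("step", 1e-12), ("20ns rise", 20e-9)):
        y, peak = 0.0, 0.0
        for i in range(4000):                        # 400ns of time
            x = 1.0 - math.exp(-i * dt / tau_rise)   # burst envelope
            y += (dt / tau_loop) * (x - y)           # loop response
            peak = max(peak, x - y)
        print(f"{label}: peak excursion = {peak:.2f}")
    # -> ~1.0 for the step, ~0.37 when the rise matches the loop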

I was wrong in extending this to also specifying a 'fall time' - there is no
need for controlling the trailing edge, at least, not strictly.  The reason
is that the receiver will 'know' when the burst is over, so it should be
able to manage its withdrawal symptoms.  (Note that this implies that the Rx
has certain feedback paths, such as when the CDR declares loss of lock.)  

So, that's the reason why we should consider having a controlled turn-on for
the transmitter.  

As for specifying it, the currently suggested text (a Minimum Ton) is not
good. We should rather specify a maximum rate of power increase.  Since we
are ramping from essentially zero to Pmax in about 20ns, I would suggest
setting the maximum rate of power increase to be Pmax(mW)/10ns.  This
allows for some non-linear power curve (e.g., an exponential approach to
Pmax), since it provides a margin factor of 2 over the straight-line value.
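
Checking the numbers on that:

    # Proposed limit: maximum rate of power increase = Pmax(mW)/10ns.
    Pmax = 1.0              # mW, illustrative value
    limit = Pmax / 10e-9    # the proposed maximum slope
    line = Pmax / 20e-9     # straight-line ramp, 0 to Pmax in 20ns
    print(f"limit / straight-line = {limit / line:.0f}x")   # -> 2x
    # An exponential turn-on, P(t) = Pmax * (1 - exp(-t/tau)), is
    # steepest at t = 0 with slope Pmax/tau, so any tau >= 10ns fits.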

Regards,
Frank E.