RE: New thread on EMI
In general we agree, but we differ in emphasis. Furthermore, we are
discussing the EMI issue in general terms, not a specific design.
Nevertheless, we can try to extract some useful direction.
Clocks are not so simple that you can easily hide them between layers.
Usually there are multiple clocks, and often a clock will feed many
different chips. The board placement determines how the multiple clocks are
routed to meet all the circuit design rules. Before you worry about EMI,
you have to worry about whether the board will function and meet a BER of
10^-12. For a LAN board with I/O, MAC, PHY, and MDI, there are many design
rules to satisfy: I/O requirements, high-frequency requirements, timing
races, skew, clock length versus data length, crosstalk, power design,
real-estate restrictions, etc. You are lucky if you can meet most of the
design-rule requirements on the first try. Often you need many manual
interventions to finally meet them all. Only then may you be able to
optimize some routes for EMI.
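As a rough sense of scale (my arithmetic, using the figures above, not a
number from this thread): a 10 Gb/s link held to a BER of 10^-12 is allowed,
on average, about one bit error every 100 seconds.

```python
# Mean time between errors for a link at a given bit rate and BER target.
# The 10 Gb/s rate and the 1e-12 BER follow the figures discussed above.

def mean_seconds_between_errors(bit_rate_hz: float, ber: float) -> float:
    """Average seconds per bit error = 1 / (bits per second * errors per bit)."""
    return 1.0 / (bit_rate_hz * ber)

if __name__ == "__main__":
    t = mean_seconds_between_errors(10e9, 1e-12)
    print(f"about {t:.0f} s between errors")
```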
However, the EMI emissions present at the clock frequencies do not come
from the clocks alone; a large portion comes from all the data lines.
Therefore, even if you can hide the clocks between shielding planes, you
still have many data lines all over the PC board radiating EMI.
The dominant EMI emissions are always clock-frequency related, and the
emission from a single data line -- such as IDLE on a serial link -- is
always far lower.
It is true that the EMI compliance test does not require all-"1" and
all-"0" patterns (nor a repetitive IDLE) to produce the worst-case noise
amplitude. I did not suggest it for the EMI test; I mentioned it only as an
example of a parallel-data test. However, a board performance test should
pass it.
I know many optical-transceiver vendors perform EMI tests on their
transceiver modules. We have the choice of using transceivers that pass the
EMI test to avoid transceiver EMI problems. All we need is to make sure the
EMI design is correctly implemented in the overall equipment design.
In an electrical interface, the EMI problem is quite different. Anything
can leak, including IDLE and clock-related signals, if the EMI shield is
not correctly implemented at the bulkhead and cable. The short copper
cables for 10GbE equipment should probably be kept inside a cabinet.
I am not in favor of scrambling the IDLE signal, because a scrambled IDLE
signal cannot be used to debug a system by studying its waveforms. After
power-up, the IDLE signal is continuously sent out from a SERDES; through
the transmitter, cable, and receiver, it returns to the SERDES without any
help from debugging software. Comparing the waveforms of the transmitted
IDLE and the received IDLE is a very convenient, powerful tool for
debugging a system in the field. Anything scrambled becomes fuzzy and
cannot be used for accurate waveform diagnosis.
Furthermore, scrambling the IDLE is not free; it comes with other concerns.
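To make the debugging concern concrete, here is a sketch (mine, with an
illustrative LFSR polynomial, not any scrambler proposed for 10GbE): an
additive scrambler XORed onto a repetitive 10-bit IDLE-like pattern destroys
the short period, so the received waveform can no longer be compared against
the transmitted one by inspection; exact recovery requires knowing the
scrambler seed and state.

```python
# Additive (frame-synchronous) scrambler sketch: a 7-bit Fibonacci LFSR
# (x^7 + x^6 + 1, chosen only for illustration) XORed with a repetitive
# 10-bit "IDLE-like" pattern. The scrambled stream loses the period-10
# structure, but XORing the same keystream again restores it exactly.

def lfsr_bits(seed: int, n: int):
    """Generate n bits from a 7-bit LFSR with feedback taps x^7 and x^6."""
    state = seed & 0x7F
    out = []
    for _ in range(n):
        bit = ((state >> 6) ^ (state >> 5)) & 1   # taps: bit 7 XOR bit 6
        out.append(bit)
        state = ((state << 1) | bit) & 0x7F
    return out

def xor_streams(a, b):
    return [x ^ y for x, y in zip(a, b)]

idle = [0, 0, 1, 1, 1, 1, 1, 0, 1, 0] * 10        # repetitive 10-bit pattern
key = lfsr_bits(seed=0x55, n=len(idle))
scrambled = xor_streams(idle, key)
descrambled = xor_streams(scrambled, key)          # needs the same seed/state

print(scrambled[:10] != scrambled[10:20])          # period-10 structure gone
print(descrambled == idle)                         # exact recovery with the key
```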
Edward S. Chang
NetWorth Technologies, Inc.
[mailto:owner-stds-802-3-hssg@xxxxxxxx] On Behalf Of DOVE,DANIEL J
Sent: Thursday, March 30, 2000 11:48 AM
Subject: RE: New thread on EMI
> The worst signals are always those clocks and synchronous data from their
> associated synchronous circuits. In GbE, for example, there are serial
> clock, transmit byte clock, receive byte clock, I/O clock, and other logic
> clocks. Those clocks are high frequency with sharp rise/fall edges (high
> frequency components). When a synchronous clock switches, all other
> associated circuits also switch to provide multiple synchronous noises,
> which enhance the EMI amplitude many times more than a single signal -- IDLE.
I have to agree with some of your points, and disagree with others.
First off, yes, the clocks and their fast edges represent a continuous
and therefore serious concern with regard to EMI (emissions). However,
a condition like an 'all zero' or 'all one' transition on a bus, while
it does generate a much larger spike, is irrelevant for EMI testing
because the tests involve averaging of spectral components over a period
of time in the range of many milliseconds. An instantaneous surge that
lasts for 100ps will be averaged out by the many milliseconds where such
a surge does not exist. This does not mean we can disregard buses, but
it says that the spurious issues of "all ones" or "all zeroes" can be
disregarded from an emissions perspective. Obviously they are of concern
with regard to ground-bounce and signal integrity issues.
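The averaging argument can be put in numbers (my arithmetic, using the
durations mentioned above; the 10 ms window is an illustrative value): under
average detection, a transient's contribution scales with its duty factor in
the measurement window, so a 100 ps surge inside a 10 ms window is suppressed
by a factor of 10^8 in amplitude.

```python
import math

# Suppression of a brief surge under average detection: the averaged
# amplitude contribution scales with the duty factor, i.e. the ratio of
# spike duration to averaging window.
spike_s = 100e-12        # 100 ps surge, as in the text
window_s = 10e-3         # ~10 ms averaging window (illustrative)

duty = spike_s / window_s                  # duty factor of the surge
suppression_db = 20 * math.log10(duty)     # amplitude ratio expressed in dB

print(f"duty factor = {duty:.1e}, suppression = {suppression_db:.0f} dB")
```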
> Especially, if all parallel bits (for example, a 64 bit-wide PCI bus)
> switch the same data pattern (all "0" and all "1") at the same time, the
> EMI radiation level will far exceed any EMI level generated by a single
> signal. The occasional IDLE signal is much weaker than those clocks and
> their associated synchronous signals.
On the other hand, I disagree with your premise that IDLE is "occasional"
because it is reasonable for EMC testing to actually test the system
while it is continuously IDLE or at a very low utilization. While many
customers use their equipment to its fullest capacity, many others buy their
equipment, install it, then proceed to under-utilize it with the intention
of providing future capacity. FCC and CISPR require that we test in a
reasonable customer configuration. So when we test, we test with maximum and
minimum utilization levels. We do not test (EMC) with a network that has
nothing but "all one" or "all zero" data however, as that would be an
artificial construction that is not reasonable to expect in a real network.
We do perform such tests for margin and system reliability tests though.
> The IDLE signal in the 8B/10B code is alternately reversing the polarity
> every 10 bits as any other 8B/10B data pattern does. The only unique thing
> about IDLE is "REPETITIVE" during the idle period. If "repetitive" is the
> reason for EMI concern, then how are we going to deal with clocks which
> are REPETITIVE all the time, and have much stronger EMI level.
Now I will address why a repetitive IDLE signal is more of an issue
than a clock. The simple fact is, the clocks are usually kept as far
from the bulkhead as possible. In general, I keep them in the center
or back of the board and distributed in an inner layer where they are
shielded by ground planes.
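The quoted point about 8B/10B balance can be checked numerically. A small
sketch using the K28.5 comma code-group (the basis of the GbE /I/ ordered
sets; the two 10-bit encodings below are the standard 8B/10B values): each
form has a disparity of +-2, and the encoder alternates them with the running
disparity, so the repeated stream stays DC-balanced.

```python
# 8B/10B running-disparity sketch: repeating K28.5 alternates its RD- and
# RD+ encodings, so every 20 transmitted bits contain exactly ten ones.

K28_5 = {-1: "0011111010",   # sent when running disparity is negative
         +1: "1100000101"}   # sent when running disparity is positive

def disparity(word: str) -> int:
    """Ones minus zeros in a 10-bit code group (+2, 0, or -2 in 8B/10B)."""
    return word.count("1") - word.count("0")

def repeat_k28_5(n_words: int, rd: int = -1) -> str:
    """Transmit n_words K28.5 code groups, flipping RD after each +-2 word."""
    stream = []
    for _ in range(n_words):
        word = K28_5[rd]
        stream.append(word)
        if disparity(word) != 0:      # nonzero disparity flips the running RD
            rd = -rd
    return "".join(stream)

stream = repeat_k28_5(4)
print(stream.count("1"))   # 20 ones in 40 bits: the stream is DC-balanced
```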
The IDLE signal, when in electrical form, will by necessity be sent
out to the front-plane of the board where it will remain in electrical
form even into the transceiver module until it is converted to an
optical signal. Since lasers are not differential by nature, the signal
will have to be converted to a single-ended electrical voltage to drive
the laser. At this point, it becomes an EMI concern. It is close to the
bulkhead, single-ended, high-frequency and repetitive.
Optical manufacturers have done many things to reduce the impact of the
8B10B IDLE on Gigabit Ethernet, and we should try to ensure that we don't
make their job harder with 10 Gigabit Ethernet.
HP ProCurve Networks