
Re: Why scrambling may be bad - SYNC attacks


Scrambling seems like a good idea to me from the perspective of overall
economic cost and simplicity. Code efficiency is a major system-cost issue
in the wide area, since raising the transmission frequency reduces the
reach, requiring more amplifiers, repeaters, and huts to house them. For
instance, the reduction of span length (for the same powers & RBER) due to
dispersion on NDSF varies as 1/bitrate^2 for externally modulated lasers.
Thus 80 km @ 10G becomes 53 km @ 12.5G. Of course each laser and cable
combination needs its own analysis.
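The 1/bitrate^2 scaling above is easy to check with a few lines. This is a sketch of the pure scaling law only; the 53 km figure in the post presumably folds in laser and cable specifics, since the bare formula gives roughly 51 km:

```python
# Dispersion-limited span length on NDSF scales as 1/bitrate^2 for
# externally modulated lasers (per the post; a rough model only).
def scaled_reach(reach_km, old_gbps, new_gbps):
    """Re-scale a dispersion-limited span when the line rate changes."""
    return reach_km * (old_gbps / new_gbps) ** 2

# 80 km at 10 GBd re-scaled to the 12.5 GBd needed by an 8B/10B code:
print(round(scaled_reach(80.0, 10.0, 12.5), 1))  # 51.2 km by pure scaling
```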

In the LAN, code efficiency is less important since cable is effectively
free and amplifiers are not required. Nevertheless, 1 Gigabit Ethernet had
great trouble making the 500 m spec for multimode fiber. Reductions in
reach below the building-wiring standards are expensive for LAN
applications.

In the wide area the economics vary depending on the type of application. I
believe there are three fundamentally different types of wide-area
applications. First, we have dark-fiber applications like your ZDSF
application. Second, we have dark wavelengths on DWDM systems whose other
wavelengths carry a mix of data and TDM. Third, we have applications where
10 GigE would be carried over a SONET multiplexer.



Paul A. Bottorff
Director Switching Architecture, BAL
Nortel Networks, Inc.

At 09:18 PM 5/10/99 -0400, Bill St. Arnaud wrote:
>The IETF and other bodies have been wrestling with the scrambling issue for
>some time, particularly in relation to POS. The concern is SYNC attacks,
>in which somebody deliberately sends repeating data fields that are the inverse
>of the scrambling code.
>The IETF recognized it as a serious problem a long time ago and that is why
>they recommended an X^43 XOR code in the POS standard.  But it has now been
>recognized that it may be insufficient and still susceptible to SYNC
>attacks with partial synchronization and/or the new proposed mega packets.
>The ITU is looking at a "state based" scrambling system which changes state
>every few seconds.  This of course requires distribution of scrambling
>codes, public key distribution across network nodes, etc. - a management
>nightmare.
>8b/10b is terribly inefficient but it is simple and works. And it is the
>overall economic cost, not coding efficiency, that will drive the appropriate
>choice.
>I fully endorse Occam's rule of networking: "The simplest network solution
>is probably the best network solution"
>Bill St Arnaud
>Director Network Projects
>> -----Original Message-----
>> From: owner-stds-802-3-hssg@xxxxxxxxxxxxxxxxxx
>> [mailto:owner-stds-802-3-hssg@xxxxxxxxxxxxxxxxxx]On Behalf Of Paul
>> Bottorff
>> Sent: Monday, May 10, 1999 5:34 PM
>> To: Ed Grivna; stds-802-3-hssg@xxxxxxxx
>> Subject: RE: WWDM vs. 10Gb/s serial
>> Ed:
>> Ethernet has been non-deterministic since inception. Even though some
>> probability exists of generating a long sequence of zeros causing PLL
>> failure, the chances can be low enough to fit within the data error rate. A
>> string of 70 zeros has only a 1 / (2^70) chance of occurring. On a
>> scrambled 10 GigE link such a zero string would happen once every
>> 3000 years or so.
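The arithmetic behind that estimate is easy to reproduce. This back-of-envelope sketch assumes the scrambler output is effectively random, so each bit position starts an all-zero 70-bit run with probability 2^-70:

```python
# Mean time between 70-bit all-zero runs on a 10 Gb/s scrambled link,
# assuming each bit position starts such a run with probability 2**-70.
BITS_PER_SECOND = 10e9
SECONDS_PER_YEAR = 365.25 * 24 * 3600

prob_per_bit = 2.0 ** -70
mean_seconds = 1.0 / (prob_per_bit * BITS_PER_SECOND)
mean_years = mean_seconds / SECONDS_PER_YEAR
print(round(mean_years))  # roughly 3700 years -- the "3000 years or so" above
```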
>> Of course the PLLs must be much more rigid for the scrambler solutions,
>> allowing the PLL to hold lock over short periods of imbalance. Since there
>> is no longer a need for Ethernet to have rapid phase lock, the more rigid
>> PLL with a long acquisition time should not present a design problem.
>> The problem of transparency can be solved in a variety of ways.
>> One example
>> is to use a variation of the HEC check algorithm to create a
>> <length><type><check> field for delimiting both frames and special
>> sequences like idle and management. The time required for these algorithms
>> to settle on a frame can also be relatively fast if the <check>
>> sequence is
>> large.
>> The overhead of 8B/10B is 25%, not 20%, which translates to as much as a 33%
>> reduction in reach. Scrambling is a perfect solution to reduce the cost of
>> data networks especially in the wide area. In the local area
>> scrambling can
>> also help contain the transmission frequencies improving the distance of
>> any given technology.
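Paul's 25%-versus-20% distinction is just a matter of which base you divide by, as a quick sketch shows:

```python
# 8B/10B sends 10 line bits for every 8 data bits.  The 2 extra bits are
# 25% of the payload but 20% of the line rate -- both figures are "right".
data_bits, line_bits = 8, 10
extra = line_bits - data_bits
overhead_vs_payload = extra / data_bits   # 2/8  = 0.25
overhead_vs_line = extra / line_bits      # 2/10 = 0.20

# A 10 Gb/s payload therefore needs a 12.5 GBd line:
line_rate_gbd = 10 * line_bits / data_bits
print(overhead_vs_payload, overhead_vs_line, line_rate_gbd)  # 0.25 0.2 12.5
```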
>> Paul
>> At 01:54 PM 5/10/99 -0500, Ed Grivna wrote:
>> >
>> >Paul A. Bottorff wrote:
>> >
>> >>
>> >> Bill:
>> >>
>> >> I certainly agree that the PHY must provide fast failure detection (<10
>> >> msec). In addition, it would be nice if the PHY layer could inform the
>> >> transmitting end of failures.
>> >>
>> >> 10 GigE has a broad range of uses for both LAN and WAN
>> applications. The
>> >> design tradeoffs for photonics in the WAN and LAN are
>> different. In the WAN
>> >> a major component of cost is the optical reach. The data which
>> I've seen
>> >> indicates that transmission frequency very significantly
>> affects reach. To
>> >> get the lowest cost solutions 10 GigE should move away from group-code
>> >> systems like 8B/10B into scrambler code systems. Scrambling
>> provides an NRZ
>> >> efficient line encode giving 10 gigabits of data at 10 gigabaud.
>> >
>> >While scrambling is an efficient way of dealing with data, it is also
>> >non-deterministic in how that data is handled.  Given any scrambling
>> >polynomial and an uncontrolled data stream, it is always possible to
>> >zero out the scrambler.  Once this happens, there are NO transitions
>> >in the serial stream until new data containing 1's is present to
>> >re-seed the scrambler.
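A minimal sketch makes both Bill's SYNC attack and Ed's zero-out concrete. The x^43 + 1 self-synchronous scrambler used for POS transmits t[n] = d[n] XOR t[n-43], so a sender who can predict the 43 bits of line state can choose d[n] = t[n-43] and force every transmitted bit to zero. This illustrates the principle only, not the exact POS framing:

```python
import random

TAPS = 43  # x^43 + 1 self-synchronous scrambler, as used for POS

def scramble(data, state):
    """state: last TAPS transmitted bits, oldest first; returns line bits."""
    state = list(state)
    out = []
    for d in data:
        t = d ^ state[0]            # t[n] = d[n] XOR t[n-43]
        out.append(t)
        state = state[1:] + [t]     # slide the 43-bit window
    return out

def killer_payload(state, n):
    """Choose d[n] = t[n-43] so every transmitted bit comes out zero."""
    state = list(state)
    d = []
    for _ in range(n):
        d.append(state[0])
        state = state[1:] + [0]     # the line bit we just forced
    return d

random.seed(1)
state = [random.randint(0, 1) for _ in range(TAPS)]

honest = scramble([random.randint(0, 1) for _ in range(400)], state)
attack = scramble(killer_payload(state, 400), state)

print(sum(honest), sum(attack))  # honest looks balanced; attack is all zeros
```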
>> >
>> >The SMPTE-259 serial interface recognized this problem a long time ago
>> >and requires the characters in the video field to be between 004h
>> >and 3FBh.  Even with these limitations, they can wind up with long
>> >repeating patterns of 19 zero bits followed by a single 1-bit (or 19
>> >one bits followed by a single 0-bit).  These signals have a very high
>> >DC content (making it difficult to send through an AC-coupled channel).
>> >An alternate pattern that they see consists of an alternating 20
>> >0-bits followed by 20 1-bits (which generates a square wave).  These
>> >are all referred to as the SMPTE pathological patterns (see ANSI/SMPTE
>> >RP-178-1996).  They are nasty to handle, both for the interface
>> >circuitry AND for the PLLs.
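The two pathological patterns can be quantified in a few lines. This is a sketch of why each one is nasty; RP 178 defines the actual test signals:

```python
# SMPTE pathological patterns: one is badly DC-unbalanced, the other is
# balanced but has very few transitions -- both stress AC coupling and PLLs.
pat_a = ([0] * 19 + [1]) * 50        # 19 zeros then a single one
pat_b = ([0] * 20 + [1] * 20) * 25   # 20-zero / 20-one square wave

def ones_density(bits):
    return sum(bits) / len(bits)

def transition_density(bits):
    return sum(a != b for a, b in zip(bits, bits[1:])) / (len(bits) - 1)

print(ones_density(pat_a), transition_density(pat_a))  # DC-heavy: 5% ones
print(ones_density(pat_b), transition_density(pat_b))  # balanced, few edges
```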
>> >
>> >Since there is no control over the content of the data field in Ethernet
>> >packets, and since many records are padded with trailing zeros,
>> >I feel that the usage of a scrambler would not be conducive to good
>> >engineering practices for THIS application.  As the data rate is
>> increased,
>> >the time penalty and system-level impact required to recover from
>> >data errors becomes much more significant.  If the scrambler ever did
>> >zero out, the PLL would most surely drift by at least one bit.
>> >
>> >Once that happens, framing must occur again to start processing data.
>> >But framing is no longer simple either.  Unlike a block code (such as
>> >8B/10B) where not all characters are valid and extra characters are
>> >present for in-band signalling, scrambled codes use all possible
>> >characters.  This carries numerous implications.
>> >
>> >First, framing must be done using combinations of data characters.
>> >In telecom environments they identify a specific sequence of data
>> >characters and the specific period in which they must occur.  Once these
>> >characters are found numerous times in that location, framing is
>> >declared to be achieved.  Unfortunately, nothing prevents these specific
>> >characters from being part of the data field, where they may also
>> >occur on the same boundaries.
>> >
>> >Since Ethernet is also not based on fixed-length records or records that
>> >are sent on a continuous basis on the same boundary, performing framing
>> >with a scrambled coding will be quite difficult.  Because it requires
>> >multiple recognitions of the pattern to validate the framing, it also
>> >takes a lot of time.  All this to handle errors that are GUARANTEED
>> >to happen because of the uncontrolled nature of the data stream contents.
>> >
>> >This makes the handling of errors much more onerous on the system.  It's
>> >bad enough that the lost data may have to be resent, but the system
>> >level impact is much more than just the time necessary to re-transmit.
>> >802.3z chose a block coded interface for multiple reasons, including those
>> >listed here.  Yes there is a penalty in symbol rate to deal with this,
>> >but generally the 20% adder is significantly easier to handle than the
>> >baggage that comes with a scrambled interface.
>> >
>> >Changing to a scrambler also makes the optics much more
>> difficult
>> >to design and more expensive to build.  Most cannot handle even
>> a limited
>> >imbalance in the data stream.  They are usually AC-coupled to limit the
>> >noise gain in the receiver. The clock/data recovery PLLs are also more
>> >difficult and must be much more stable.
>> >
>> >
>> >
>> >Regards,
>> >
>> >Ed Grivna
>> >Cypress Semiconductor
>> >elg@xxxxxxxxxxx
>> >
>> >>
>> >> Paul
>> >>
>> >> Paul A. Bottorff
>> >> Director Switching Architecture, Bay Architecture Lab
>> >> Nortel Networks, Inc
>> >> pbottorf@xxxxxxxxxxxxxxxxxx
>> >>
>> >
>> >