
RE: Comments On LAPS




Dave,
 
It could be, though one wouldn't expect that to happen at such a high
loss rate unless the design allowance for frame growth was very poor
or the frames were constructed fairly maliciously.
 
I am still interested in an answer to the question I asked:
 
What are we trying to accomplish with our comments?
 
Regards,
Pat
 
-----Original Message-----
From: David Martin [mailto:dwmartin@xxxxxxxxxxxxxxxxxx]
Sent: Wednesday, October 25, 2000 10:25 AM
To: stds-802-3-etholaps@xxxxxxxx
Subject: RE: Comments On LAPS




Pat, 

It's possible that the packet loss Roy refers to arises due 
to the byte stuffing mechanism, where occasionally a 'highly byte 
stuffed' HDLC frame spills a buffer. It could be argued that 
such a problem is a design issue. 

...Dave 

-----Original Message----- 
From: pat_thaler@xxxxxxxxxxx [mailto:pat_thaler@xxxxxxxxxxx] 
Sent: Wednesday, October 25, 2000 8:29 AM 
To: rabynum@xxxxxxxxxxxxxx; pat_thaler@xxxxxxxxxxx; Martin, David 
[SKY:1I63-M:EXCH]; stds-802-3-etholaps@xxxxxxxx 
Subject: RE: Comments On LAPS 


Roy, 

I have no doubt that a packet loss of 5% will have severe performance 
effects. However, the points David made do not explain such a loss rate. 
It seems most likely that much of the loss you are seeing is because of 
the particular designs rather than because of the encapsulation. 

Regards, 
Pat Thaler 

-----Original Message----- 
From: Roy Bynum [mailto:rabynum@xxxxxxxxxxxxxx] 
Sent: Sunday, October 22, 2000 3:09 PM 
To: pat_thaler@xxxxxxxxxxx; dwmartin@xxxxxxxxxxxxxxxxxx; 
stds-802-3-etholaps@xxxxxxxx 
Subject: RE: Comments On LAPS 



Pat, 

The points that David has made are valid.  I have been building PPP/HDLC 
and POS IP data networks for years.  The effects indicated in all of these 
points are empirically observed over IP networks that use Packet Over 
SONET/SDH (POS) links.  This is a primary reason that the Internet as it is 
currently implemented has non-deterministic data jitter and latency. 

The empirical data loss in the Internet (POS) backbone transport is as much 
as 5%.  99% of the data loss is due to packets being dropped in the PPP-to-
SONET gearbox with no indication of why.  Over-design of bandwidth 
utilization is normalized at a minimum of 20% under loaded conditions.  The 
design bandwidth utilization of non-converged redundant network links is 
40% per link, in order to allow for convergence traffic loading and still 
have a 20% overhead to account for data expansion due to byte stuffing. 
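
One way to read those design numbers (this is just an illustrative reading, 
not an exact engineering rule): 

    # Two redundant POS links, each planned at no more than 40% utilization.
    per_link_util = 0.40
    # After a failure/convergence event, one link carries both loads:
    converged_util = 2 * per_link_util      # 0.80
    # Headroom left on the surviving link for byte-stuffing expansion:
    headroom = 1.0 - converged_util         # 0.20
    print(converged_util, headroom)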

As a side issue, ironically, TDM sub-rate PPP link interfaces do not have 
the data loss that is observed in POS interfaces.  The nominal data packet 
loss is about 1%.  Bandwidth utilization design parameters are the same for 
TDM sub-rate links as they are for POS interfaces. 

I believe that the reason TDM sub-rate interfaces have a lower packet 
failure rate is that the minor clock tolerance adjustment of the self-
synchronizing data within the TDM payload is at bit-level granularity.  The 
clock tolerance adjustment of the self-synchronizing data within a SONET 
OC-Nc or SDH STM-Nc SPE is at byte-level granularity. 


At 05:56 PM 10/20/00 -0600, pat_thaler@xxxxxxxxxxx wrote: 

>David, 
> 
>Like most LAN people, I'm not a great fan of byte stuffing; however, some 
>of the points you make in items 2 and 3 seem very stretched and reduce 
>credibility. 
> 
>Frame loss effect of frame expansion - The maximum expansion of a frame is 
>to twice its length. For a 1518 byte frame, a relatively bad bit error 
>rate of 10^-9, and assuming that errors are uncorrelated, the probability 
>of losing the frame due to a bit error is: 
> 
>   1 - (Pgoodbit)^bits ~= 0.0012% 
> 
>where Pgoodbit is the probability that a bit is not errored = 1 - BER 
>      and bits = 1518 * 8 
> 
>Doubling the number of bits changes this to 0.0024%. A factor of 2 is not 
>going to make a significant difference to throughput, and most of the time 
>(unless someone is maliciously creating expanding frames) the factor is a 
>lot less than 2. 
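> 
>As a quick check of that arithmetic, a minimal sketch (the frame size, BER 
>and doubling are just the example values above): 
> 
>    # Probability of at least one bit error in a frame of f_bytes,
>    # assuming independent bit errors at the given BER.
>    def frame_loss_prob(f_bytes, ber=1e-9):
>        return 1 - (1 - ber) ** (f_bytes * 8)
>
>    print(frame_loss_prob(1518))   # ~1.2e-5, i.e. ~0.0012%
>    print(frame_loss_prob(3036))   # worst-case doubled frame, ~0.0024%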
> 
>What bothers me more is item 3. A bit hit in a frame will trash the frame 
>whether it creates a false 
>delimiter, damages a delimiter or just changes data. This is true for all 
>the existing Ethernet codings. 
>Once the error ends and a delimiter is received, the following frames will 
>be received successfully. 
> 
>Since a hit on the length field of the protocol proposed for GFP by Nortel 
>and Lucent causes frame 
>sync to be lost and a bunch of frames can then be lost until sync has been 
>regained, it seems that 
>this might not be a topic you would want to introduce. It really isn't a 
>weakness for LAPS but a case 
>can be made for it as a weakness for GFP. 
> 
>The point about service impacting effects does have validity, at least 
>where the frame queuing decision is made disconnected from the LAPS 
>transmission time. If there is a device with separate queues for different 
>service levels and it gives the LAPS a frame at a time, only choosing the 
>next frame when the last one is being finished, this shouldn't produce 
>much delay. The worst case for that is a lower service level frame 
>delaying a higher service level frame for twice the maximum packet size 
>because of expansion. If, however, the queuing mechanism is disconnected 
>from knowledge of LAPS transmission time, then the effect you are talking 
>about does occur. If, for instance, a switch is feeding a stream into an 
>Ethernet link which is then converted into a LAPS stream and the Sonet 
>data rate is less than twice the Ethernet link's rate, expansion can 
>introduce as much delay as there is FIFO provided, and too much expansion 
>can overrun the FIFO, causing packet loss. 
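> 
>As a rough illustration of that FIFO case (the rates and FIFO depth below 
>are hypothetical, chosen only to show the mechanism): 
> 
>    # Ethernet feeds the converter at eth_rate while the LAPS/Sonet side
>    # drains at sonet_rate (both in bytes/s). With a stuffing expansion
>    # factor, the FIFO fills whenever eth_rate * expansion > sonet_rate.
>    def seconds_to_overrun(fifo_bytes, eth_rate, sonet_rate, expansion):
>        fill_rate = eth_rate * expansion - sonet_rate
>        return float('inf') if fill_rate <= 0 else fifo_bytes / fill_rate
>
>    # e.g. 100 Mb/s Ethernet into an STM-1 payload (~149.76 Mb/s) with
>    # worst-case 2x stuffing and a 64 KiB FIFO:
>    print(seconds_to_overrun(64 * 1024, 100e6 / 8, 149.76e6 / 8, 2.0))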
> 
>What is our objective here? 
> 
>My understanding is that the LAPS is almost sure to get approval at this 
>point. 
> 
>LAPS does have some weaknesses and if I was constrained to a byte stuffing 
>approach, I would have 
>done it a bit differently. 
> 
>It would be better to scramble and then delimit, because then the scheme 
>would not be subject to expansion by unlucky decisions in applications or 
>malicious packet content. Frankly, the long tail of the distribution 
>doesn't bother me as long as packet loss due to it is significantly below 
>packet loss due to the design bit error rate. Any of these scrambled 
>rather than block coded transmission schemes has a distribution for run 
>lengths of successive 1's or 0's with a similar very long, very skinny 
>tail. That distribution can continue on to infinity, but as long as the 
>probability of a run long enough that the receiver makes bit errors is 
>low, it doesn't matter how long the tail is. 
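> 
>A back-of-envelope for that run-length tail (treating the scrambled stream 
>as effectively random bits, which is the usual working assumption): 
> 
>    # For random bits, the chance that a given position starts a run of
>    # at least k identical bits is about 2^-k, so the expected number of
>    # such runs per second falls off geometrically with k.
>    def long_runs_per_second(k, line_rate_bps):
>        return line_rate_bps * 2.0 ** (-k)
>
>    # e.g. runs of 60 or more identical bits on a ~2.5 Gb/s stream
>    print(long_runs_per_second(60, 2.5e9))   # ~2e-9 per second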
> 
>Also, I think they put the LSB of each Ethernet byte into the LSB of the 
>Sonet byte, and Sonet sends MSB first, so they reduce the burst error 
>protection of the CRC code - a mistake that was also made in 100BASE-T, so 
>it isn't the end of the world - but flipping the byte as it was put into 
>Sonet would have been preferable. 
> 
>Are we trying to 
>    generate comments about what should be changed to improve the LAPS 
>    spec; 
>    generate a justification that will cause the LAPS spec to fail to get 
>    approval; 
>    propose criteria for when LAPS should be used versus GFP and 
>    10GBASE-W; or 
>    something else? 
> 
>Regards, 
>Pat 
> 
>-----Original Message----- 
>From: David Martin [mailto:dwmartin@xxxxxxxxxxxxxxxxxx] 
>Sent: Friday, October 20, 2000 12:05 PM 
>To: stds-802-3-etholaps 
>Subject: Comments On LAPS 
> 
> 
> 
> 
>All, 
> 
>Some comments on LAPS (Draft X.86, April 2000) to get the ball rolling. 
> 
>Background 
> 
>LAPS is a modified version of PPP with the following similarities: 
> 
> 
>        *       Uses the same HDLC-like frame 
>        *       Uses the same byte-stuffing / flag pattern delineation 
>                mechanism 
>        *       Supports only point-to-point Layer 2 topology (i.e. no 
>                address/label fields) 
> 
>Differences wrt PPP: 
> 
> 
>        *       Uses a much-simpler version of Link Control Protocol (no 
>                'Protocol' field, so no LCP frames; only 2 states instead 
>                of "16 events, 12 actions, and 11 LCP frame formats") 
> 
>        *       Uses the 'Address' field to identify among IPv4, IPv6, 
>                etc. (PPP fixes it as FF). 
> 
>The simplified LCP is laudable, but the similarities to PPP/HDLC/SDH mean 
>that LAPS shares the same drawbacks in throughput performance and the 
>service effects of flag/byte-stuffing delineation. The following 
>elaborates on these issues. 
> 
>Comments 
> 
>Packet based traffic generally requires received frames to be error-free. 
>Any frames lost due to bit errors within the frame payload or due to loss 
>of frame delineation will usually trigger a request for re-transmission at 
>a higher layer. The re-transmission requests will then generate more 
>traffic. This positive feedback mechanism makes frame loss performance an 
>important parameter for packet-based traffic in general. 
> 
>Three aspects of throughput are discussed: deterministic versus 
>statistical behaviour, the effect of frame inflation on throughput, and 
>the effect of delineation performance on throughput. 
> 
>1.      Deterministic vs Statistical Throughput 
> 
>All currently defined Ethernet physical layers provide a deterministic 
>throughput capacity. The throughput capacity is independent of the data 
>contents. This is an important attribute, since it permits predictable 
>performance. 
> 
>For byte-oriented LAPS, frame delineation uses a simple flag mechanism: a 
>unique one-byte pattern is used to detect both the beginning and end of 
>each frame. To ensure the flag pattern is unique, any occurrences of it 
>must be removed from the data prior to encapsulation. This is done by 
>replacing each occurrence of the flag pattern with a sequence of two 
>bytes: a special 'escape' byte, followed by a slightly modified version of 
>the flag pattern. Because of its special meaning, the 'escape' character 
>must also be 'escaped'. Consequently, the frame length of the payload is 
>inflated in a non-deterministic manner. Since two byte patterns are 
>replaced by pairs of bytes, the probability that a random data byte will 
>be 'escaped' is p = 2/256 = 1/128. 
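> 
>A minimal sketch of that escaping step (0x7E as the flag and 0x7D as the 
>escape byte; the XOR-with-0x20 'slight modification' is the usual 
>HDLC/PPP convention and is assumed here): 
> 
>    FLAG, ESC = 0x7E, 0x7D
>
>    def byte_stuff(payload: bytes) -> bytes:
>        # Replace each flag/escape occurrence with ESC followed by the
>        # original byte XORed with 0x20.
>        out = bytearray()
>        for b in payload:
>            if b in (FLAG, ESC):
>                out += bytes((ESC, b ^ 0x20))
>            else:
>                out.append(b)
>        return bytes(out)
>
>    # A payload made entirely of flag bytes doubles in length:
>    print(len(byte_stuff(bytes([FLAG]) * 1500)))   # 3000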
> 
>For a frame F bytes long prior to 'escaping', the average frame length 
>after 'escaping' will be:    F' = F + m bytes 
> 
>where   F       is the un-escaped frame length in bytes 
>        m       is the mean of the distribution of the number of bytes 
>                that must be escaped in the original F-byte frame 
> 
>Assuming random data, the number of bytes 'escaped' per F-byte frame 
>follows a binomial distribution with: 
> 
>probability of 'success':       p = 2/256 = 1/128 
>probability of 'failure':       q = 1 - p = 127/128 
>number of trials:               n = F 
>mean of the distribution:       m (mu) = np = F/128 bytes 
>standard deviation:             s (sigma) = sqrt(npq) = sqrt(127*F)/128 
> 
>The tail of this distribution is extremely long: it reaches zero only 
>after a potential doubling of the frame size. 
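> 
>Plugging in a 1518-byte frame (the usual maximum untagged Ethernet frame) 
>as a quick numerical check, under the same random-data assumption: 
> 
>    import math
>
>    F = 1518
>    p = 2 / 256
>    mean = F * p                          # ~11.9 escaped bytes on average
>    sigma = math.sqrt(F * p * (1 - p))    # ~3.4 bytes
>    print(mean, sigma)
>    # The worst case (every byte escaped, frame doubled) sits hundreds of
>    # standard deviations out, which is why the tail is so thin for random
>    # data - and why non-random or malicious data matters: it simply does
>    # not follow this distribution.
>    print((F - mean) / sigma)             # ~440 sigma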
> 
>Non-Random Data 
> 
>The assumption of random data could easily be invalidated by applications 
>that happen to produce the escaped octet values more frequently than a 
>random process would. 
> 
>Vulnerability to Emulation Attacks 
> 
>The inflation problem is aggravated by the possibility of emulation 
>attacks. Malicious users can generate frames with a high density of octet 
>values which must be 'escaped'. This definitely invalidates the assumption 
>of random data and skews the distribution towards the worst case of a 
>doubling of frame size. 
> 
>Service-Impacting Effects of Non-Deterministic Inflation 
> 
>The non-deterministic inflation imposed by LAPS byte stuffing could make 
>tightly controlled frame delay variation (with acceptable absolute delays) 
>very difficult if not impossible to achieve. The quality of frame-based 
>real-time services, such as voice-over-IP, would suffer as a consequence. 
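> 
>For a feel of the magnitude involved, a small sketch (the payload rate is 
>an example value; the frame doubling is the worst case described above): 
> 
>    # Extra serialization delay if a maximum-size frame doubles under
>    # stuffing, at an STM-1 payload rate of ~149.76 Mb/s:
>    extra_bits = 1518 * 8
>    payload_rate = 149.76e6                # bits per second
>    print(extra_bits / payload_rate)       # ~8.1e-5 s, i.e. roughly 81 us
>                                           # of added delay variation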
> 
> 
>2.      LAPS Frame Inflation: Effect on Throughput 
> 
>The throughput capacity of an error-free link is inversely related to the 
>overhead required by a given encapsulation mechanism. The inflation 
>introduced by LAPS reduces throughput in a non-deterministic manner, by 
>adding more overhead in an error-free environment. 
> 
>Once errors are introduced, throughput is diminished in two ways: random 
>errors occurring within the frame; and frames lost through loss of 
>delineation. 
> 
>For frames on a link with a given BER, the probability of errors within 
>the frame is a function of the frame length: the longer the frame, the 
>higher the probability of an error occurring in the frame. As seen above, 
>LAPS inflation can significantly increase the frame length. This degrades 
>the throughput by providing a larger target for random errors to hit. 
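> 
>Putting rough numbers on that (a sketch, assuming uncorrelated bit errors 
>at an example BER of 10^-9 and using the mean inflation m = F/128 derived 
>above): 
> 
>    def loss_prob(f_bytes, ber=1e-9):
>        # Probability of at least one bit error in an f_bytes-long frame.
>        return 1 - (1 - ber) ** (f_bytes * 8)
>
>    F = 1518
>    print(loss_prob(F))            # ~1.21e-5 with no stuffing
>    print(loss_prob(F + F / 128))  # ~1.22e-5 at the average inflation
>    print(loss_prob(2 * F))        # ~2.43e-5 at the worst-case doubling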
> 
>3.      LAPS Delineation Performance: Effect on Throughput 
> 
>The second factor affecting throughput on a link with non-zero BER is 
>frame delineation. It is also affected by the choice of encapsulation. 
>LAPS frames rely on error-free matches to the flag pattern to indicate the 
>start and end of every frame. This leads to an unnecessarily high 
>probability of lost frames due to loss of delineation. This in turn 
>contributes to a lower absolute throughput than could be realized with a 
>more robust delineation mechanism. 
> 
>More robust delineation mechanisms are possible that are designed to be 
>error-tolerant. This allows them to coast through random bit errors that 
>would defeat a flag delineation mechanism. This would reduce the number of 
>frames lost due to delineation failures. 
> 
>Conclusion 
> 
>Mapping approaches for Ethernet over SONET/SDH which do not use flag-based 
>delineation are preferable. 
> 
> 
>David W. Martin & Tim Armstrong 
>            Nortel Networks 
> 
>=========================