


I think your efficiency computation is a little simplistic. The calculation
only applies to overload conditions, when there is always enough data in the
queue to fill the assigned burst. Under normal conditions, however, it is more
difficult to predict how much of the assigned bandwidth a CPE will be able to
use if bandwidth is assigned on a static basis. On the other hand, if bandwidth
is assigned based on requests, this is not an issue. In our presentation we
tried to highlight these issues and give an initial estimate of performance
(both efficiency and delay) based on simulation results.

Below is a more detailed response to your computation and an explanation of the
general case.

glen.kramer@xxxxxxxxxxxx wrote:

> Dear Xu,
> I think I know what confused you in the presentation as I got several
> similar questions.
> Timeslot is not an analog to a cell. While from slide 4 in the
> presentation you may conclude that one timeslot is only large enough to hold
> one maximum-size packet, that is not the case.  The timeslot in our example
> was 125 us, which equals 15625 byte times.  Then you can see that in the
> worst case it will have 1518 + 4 (VLAN) + 8 (preamble) + 12 (IPG) - 1 = 1541
> bytes of unused space at the end of the timeslot (assuming there is data to
> be sent and no fragmentation).  With a realistic packet size distribution
> (like the one presented by Broadcom), the average unused portion of the
> timeslot is only about 570 bytes.  That gives a channel efficiency of 96%,
> or, accounting for 8 us guard bands, 90%.

This efficiency calculation considers only the efficiency impact of the last
packet in the burst transmission. If the system does not allow fragmenting
packets, there will be wasted bandwidth at the end of the burst whenever the
packet at the head of the queue does not fit in the remaining space. As you
say, the size of this gap is on average equal to the average packet size. This
means that the lack of fragmentation costs 5-10% in efficiency. So I agree with
you that this is a small penalty compared to the complexity involved in
implementing fragmentation (i.e. buffering and SAR). This is why we did not
mention fragmentation in our presentation.
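For illustration, a minimal Monte Carlo sketch of this tail gap is below. The
trimodal packet mix and the on-the-wire sizes (including preamble and IPG) are
assumptions of mine, not the Broadcom distribution; the sketch only shows the
mechanism:

    import random

    SLOT_BYTES = 15625                    # 125 us slot at 1 Gbit/s
    SIZES = [84, 614, 1542]               # assumed on-the-wire sizes (incl. preamble+IPG)
    WEIGHTS = [0.6, 0.2, 0.2]             # assumed trimodal mix

    def avg_tail_waste(trials=100_000):
        """Average bytes left unused at the end of a slot without fragmentation."""
        waste, pending = 0, None          # pending = head-of-line frame carried over
        for _ in range(trials):
            used = 0
            while True:
                pkt = pending or random.choices(SIZES, WEIGHTS)[0]
                pending = None
                if used + pkt > SLOT_BYTES:   # frame doesn't fit: wait for next slot
                    pending = pkt
                    waste += SLOT_BYTES - used
                    break
                used += pkt
        return waste / trials

    print(f"average tail gap: {avg_tail_waste():.0f} bytes")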

However, the key factor in the efficiency is how often the CPE has enough data
to fill the entire burst. This depends more on the burstiness and arrival
process of packets and less on the packet size distribution. If not enough
bandwidth is assigned to a CPE, the queues build up and there are always enough
packets to fill any bandwidth assigned. This is the case where your computation
applies. However, this overload condition is not a good operating point for the
system: latency is too high to allow any interactivity, so it could work only
for best-effort data. Hence, in general we have to be able to adapt the
bandwidth assignment so that the queues are used only to mitigate the
burstiness of the traffic. In this case it is difficult to predict the
efficiency of the system with a simple calculation. To give an initial
estimate, we showed two examples in our presentation, one for video traffic and
another with a model corresponding to the current mix of traffic in residential
cable systems.  As you can see there, the saturation point of the system (i.e.
maximum efficiency) is reached earlier than this simple calculation indicates.
The reason is that there is much more left over in the burst than just the last
packet. Note that the two lines in the plots assume no fragmentation, hence the
actual difference is a measure of this leftover in the burst.
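A rough sketch of this effect: with bursty (batch) arrivals and a static grant,
the queue is often too shallow to fill the whole burst, so the assigned
bandwidth goes partly unused below saturation. All numbers and the batch-arrival
model below are illustrative, not taken from our simulations:

    import math
    import random

    GRANT = 15625            # bytes granted to the CPE each cycle
    CYCLES = 50_000

    def poisson(lam):
        """Sample a Poisson variate (Knuth's method; fine for small lambda)."""
        l, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= random.random()
            if p <= l:
                return k
            k += 1

    def utilization(load, burst=10, pkt=590):
        """Long-run fraction of the static grant actually used at a given load."""
        rate = load * GRANT / (burst * pkt)    # mean number of bursts per cycle
        queue = used = 0
        for _ in range(CYCLES):
            queue += poisson(rate) * burst * pkt   # batch (bursty) arrivals
            sent = min(queue, GRANT)               # fill the burst as far as possible
            queue -= sent
            used += sent
        return used / (CYCLES * GRANT)

    for load in (0.3, 0.6, 0.9, 1.1):
        print(f"offered load {load:.1f} -> grant utilization {utilization(load):.0%}")

Below saturation the grant goes partly unused; only at overload does the
utilization approach what the static calculation predicts.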

These two plots were chosen just as examples; there are many more situations to
consider. An interesting one is when closed-loop (TCP-like) traffic models are
considered. In this case the traffic generation is driven by the response of
the system: if acks cannot be transmitted, no more data is generated. The
queues never build up; instead, the service simply degrades due to the lack of
bandwidth at the appropriate time. We can provide additional simulation results
for these cases. Hence the assumption that packets will eventually build up in
the queue does not always apply.
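As an illustration of this closed-loop effect, the toy model below caps the
sender at a fixed window of unacknowledged bytes. The window, grant, and RTT
values are assumed for illustration, not taken from our simulations: when
grants are scarce, the source stalls rather than building a deep queue.

    WINDOW = 8 * 1518          # max unacknowledged bytes the source may have
    GRANT = 3000               # bytes the CPE may send per cycle (deliberately scarce)
    RTT = 4                    # cycles before an ack comes back

    queue = in_flight = 0
    acks = [0] * RTT           # acks in transit back to the source
    for cycle in range(12):
        # the source only generates enough data to fill its window
        queue += max(0, WINDOW - in_flight - queue)
        sent = min(queue, GRANT)               # limited by the upstream grant
        queue -= sent
        in_flight += sent
        acks.append(sent)                      # this data is acked RTT cycles later
        in_flight -= acks.pop(0)
        print(f"cycle {cycle:2d}: queue={queue:5d}  in_flight={in_flight:5d}")

The queue stays bounded by the window; throughput, not backlog, is what suffers.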

Our presentation and this description try to motivate the need for DBA in the
system design. While the actual algorithms don't need to be standardized, the
interface between HE and CPE for the adaptation must be included.
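To make the kind of interface we mean concrete, here is a hypothetical sketch.
The message fields and the proportional sizing policy are invented for
illustration; only the report/grant exchange itself would be standardized,
while the scheduling algorithm would remain vendor-specific.

    from dataclasses import dataclass

    @dataclass
    class Report:           # CPE -> HE: upstream bandwidth request
        cpe_id: int
        queue_bytes: int    # current backlog in the CPE queue

    @dataclass
    class Grant:            # HE -> CPE: upstream transmission window
        cpe_id: int
        start_time: int     # when the CPE may begin transmitting (byte times)
        length: int         # how many bytes it may send

    def schedule(reports, cycle_bytes=15625 * 16, min_grant=1600):
        """One illustrative DBA pass: split the cycle in proportion to backlog."""
        total = sum(r.queue_bytes for r in reports) or 1
        grants, t = [], 0
        for r in reports:
            length = max(min_grant, cycle_bytes * r.queue_bytes // total)
            grants.append(Grant(r.cpe_id, start_time=t, length=length))
            t += length + 1000               # 8 us guard band in byte times
        return grants

    for g in schedule([Report(1, 40000), Report(2, 4000), Report(3, 0)]):
        print(g)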


> DBA is a separate question.  While it may be important for an ISP to have
> DBA capabilities in their system, I believe it will not be part of the 802.3
> standard.  But a good solution would provide mechanisms for equipment
> vendors to implement DBA.  These mechanisms may include, for example, an
> ability to assign multiple timeslots to one ONU or to have timeslots of
> variable size. The grant/request approach tries to achieve the same by
> having a variable grant size.

> Having small timeslots will not solve QOS either.  Breaking packets into
> fixed small segments allows efficient memory access and a cut-through
> operation of a switch, where small packets are not blocked behind long
> ones (and it assumes that short packets have higher QOS requirements).  In
> such a distributed system as EFM is trying to address (distances in excess
> of 10 km), the gain of cutting through is negligible compared to propagation
> delay, or even to the time interval before an ONU can transmit in a
> time-sharing access mode (be that TDMA or the grant/request method).
> Thank you,
> Glen
> -----Original Message-----
> From: xu zhang [mailto:zhangxu72@xxxxxxxxx]
> Sent: Thursday, July 12, 2001 7:01 PM
> To: glen.kramer@xxxxxxxxxxxx
> Cc:
> Subject: EPON TDMA
> hi, glen:
>  I have seen your presentation about EPON TDMA in the
> PHY; it helped me a lot to understand your EPON system.
> We developed the first APON system in China. When
> I think of the TDMA of EPON, I think that though the
> uplink data rate is 1 Gbit/s, when shared by 16 or 32
> users it is still not enough, so a dynamic bandwidth
> allocation (DBA) protocol must be a requirement,
> especially when taking care of QoS performance. In a
> DBA protocol, in order to achieve high performance the
> time slot needs to be small, so why not divide the
> Ethernet packet into 64 bytes per slot, as is often
> done in Ethernet switches when storing packets in SRAM?
> best regards
> xu zhang