
Re: [EFM-P2MP] MPCP: Report message

We need your help in 802.3ae!  It turns out that those specs may make the 10GE transceivers more expensive than SONET.
I hope not.
Richard Brand

vincent.bemmel@xxxxxxxxxxxx wrote:

Hi David,

Re: item 1... It makes me a little nervous when you say that the EFM transceiver costs the same as the FSAN-compliant one... isn't that exactly what we're trying to avoid? Economics is one of EPON's claims to fame. I hope we can learn from FSAN: we should avoid putting aggressive requirements on the optics and ending up with a costly solution that no one can afford. We should be able to come up with a simple solution that works effectively with inexpensive optics.

Thanks,
Vincent
-----Original Message-----
From: David Levi [mailto:david@xxxxxxxxxxxxxx]
Sent: Monday, February 11, 2002 3:18 AM
To: vincent.bemmel@xxxxxxxxxxxx;;
Subject: RE: [EFM-P2MP] MPCP: Report message


I just want to clarify some things concerning burst-mode transceivers.

1. An FSAN-compliant 1.25 Gbps D/S and U/S transceiver with 3 or 12 bytes (several nsec) of overhead costs the same as a transceiver with 10 usec or 1 usec of overhead. This is because a laser driver ASIC that can handle a short turn-on delay costs the same as a continuous-mode laser driver (LD); in fact, the same LD that supports continuous mode will probably support burst mode with a turn-on delay of several nsec.

2. An FSAN-compliant transceiver costs the same as an 802.3ah transceiver. The transceiver cost lies in the optics (BiDi, laser diode, PIN), and definitely not in the laser driver ASIC, and an 802.3ah transceiver uses the same optical components (a two- or three-port BiDi) as an FSAN transceiver.

3. 8 bytes of overhead, which represents a turn-on delay of several nsec, provides almost 150 Mbps more bandwidth over the link than a 1 usec turn-on delay. A turn-on/off of 1 usec is 156 bytes; times 64 for TDM voice per ONU this results in 10000 bytes, + 64*156 for DBA + 70 (for 1500 bytes per slot)*156 = 31000 bytes. 31000 bytes in 1 msec is around 200 Mbps. 8 OH bytes only eat 64 Mbps.

The price of an end-to-end 8-byte OH is the same as that of a 156-byte OH.
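As a quick sanity check of the burst-overhead arithmetic above (a sketch, not part of the original mail; the line rate, 1 ms cycle, and slot counts of 64 TDM + 64 DBA + 70 data bursts are assumptions taken from David's example, and the mail's own figures appear to use slightly different assumptions):

```python
# Sanity check of the burst-overhead arithmetic above. Assumptions:
# 1.25 Gbps line rate, 1 ms scheduling cycle, and the burst counts
# from David's example (64 TDM slots, 64 DBA slots, 70 data slots).
LINE_RATE_BPS = 1.25e9
CYCLE_S = 1e-3

def overhead_mbps(oh_bytes_per_burst, bursts_per_cycle):
    """Bandwidth consumed by per-burst overhead, in Mbps."""
    oh_bytes = oh_bytes_per_burst * bursts_per_cycle
    return oh_bytes * 8 / CYCLE_S / 1e6

# 1 usec of turn-on/off at 1.25 Gbps is ~156 bytes per burst.
bytes_per_usec = LINE_RATE_BPS / 8 * 1e-6   # 156.25 bytes
bursts = 64 + 64 + 70                        # TDM + DBA + data slots

print(overhead_mbps(bytes_per_usec, bursts))  # ~247 Mbps ("around 200 Mbps")
print(overhead_mbps(8, bursts))               # ~12.7 Mbps for an 8-byte OH
```

The exact figures depend on the assumed slot counts, but the order-of-magnitude gap between microsecond and nanosecond turn-on delays is the point.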



-----Original Message-----

From: [] On Behalf Of vincent.bemmel@xxxxxxxxxxxx

Sent: Friday, February 08, 2002 2:20 AM


Subject: RE: [EFM-P2MP] MPCP: Report message


I'd like to follow up on some of the discussions re: scheduling and REPORTs.

So far we have identified two approaches to assigning timeslots:

1. Static timeslot assignments -- no REPORTs are generated by ONUs for BW.

The OLT sets up a static timeslot scheduling plan, and generates GATEs for each ONU. Two options:

- Each ONU receives a grant for a fixed timeslot at a time, or
- Each ONU receives a grant for multiple fixed timeslots for a period of time (multi-cycle grant).

A TDM scheme is best implemented through this approach. For jitter/delay sensitive TDM traffic, some of us strongly believe multi-cycle granting is the best way to go.

2. Dynamic timeslot assignments -- REPORTs may be used to provide the DBA feedback loop.

The OLT now dynamically schedules timeslots, and generates GATEs for each ONU accordingly. Each ONU receives a grant for a fixed timeslot at a time. It may optionally communicate its BW needs to the OLT via REPORTs.

Notice that DBA designs are proprietary, and may desire a wide range of parameters in the feedback loop. Fortunately this is (and should remain) out of our scope. However, even in its simplest form, upstream BW efficiency is negatively affected in order to support REPORTs.

Here is an example (assuming 64-Byte REPORTs):

In order to have ONUs responsive with a granularity of 1 ms, the REPORT overhead for a 64-split is 64 x 64B / (0.001 s) = 32.8 Mbps, plus timeslot framing overhead. As Bob pointed out, the latter can easily be on the order of microseconds:

E.g., 3 usec = 600% overhead on a 64-Byte (0.512 usec) frame.

So *very* conservatively we can easily eat up > 20% of the upstream BW just to make DBA work. (That would be > 60% BW overhead if the framing overhead was 10 usec.)

Note: the numbers may look slightly better (statistically) with upstream piggybacking of REPORTs in the case of higher upstream traffic loads.
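The overhead estimate above can be reproduced as follows (a sketch under assumed parameters: 1 Gbps upstream, a 64-ONU split, 64-byte REPORTs, and a 1 ms polling cycle, per Vincent's example):

```python
# Sketch of the upstream REPORT-overhead estimate above. Assumptions:
# 1 Gbps upstream, 64-ONU split, 64-byte REPORTs, 1 ms polling cycle.
UPSTREAM_BPS = 1e9
SPLIT = 64
REPORT_BYTES = 64
CYCLE_S = 1e-3

def dba_overhead_fraction(framing_us):
    """Fraction of upstream BW consumed by REPORTs plus per-slot framing."""
    report_bps = SPLIT * REPORT_BYTES * 8 / CYCLE_S               # 32.8 Mbps
    framing_bps = SPLIT * framing_us * 1e-6 / CYCLE_S * UPSTREAM_BPS
    return (report_bps + framing_bps) / UPSTREAM_BPS

print(dba_overhead_fraction(3))    # ~0.22  (> 20% with 3 usec framing)
print(dba_overhead_fraction(10))   # ~0.67  (> 60% with 10 usec framing)
```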

The tendency will be to reduce these numbers by putting tighter constraints on the lasers and clock recovery mechanisms a la FSAN. This equates to more expensive components.

Now consider a solution, based on DBA, that includes TDM services... The ONU now has to do the following:

- wait for a GATE poll
- send a REPORT (requesting TDM BW)
- wait for a GATE (w/ details of the timeslot to transmit TDM data)
- wait for the actual time to send its data...

... and repeat this for EVERY TIMESLOT. The overhead needed to manage this mechanism is significant, to say the least.
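The per-timeslot handshake above can be sketched as a simple message count (GATE and REPORT are the MPCP message names; the loop itself is illustrative only, not a protocol definition):

```python
# Illustrative sketch of the per-timeslot handshake described above.
# GATE/REPORT are MPCP message names; this counting loop is only an
# illustration of the control traffic, not a protocol implementation.
def tdm_over_dba(num_slots):
    """Count control messages needed to serve `num_slots` TDM timeslots."""
    messages = 0
    for _ in range(num_slots):
        messages += 1  # OLT -> ONU: GATE poll
        messages += 1  # ONU -> OLT: REPORT (requesting TDM BW)
        messages += 1  # OLT -> ONU: GATE (timeslot details)
        # ... then the ONU waits for its slot and sends the TDM data
    return messages

print(tdm_over_dba(64))  # 192 control messages per cycle for a 64-split
```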

Fortunately, today we are faced with a 'glut' of bandwidth, and dynamic bandwidth provisioning is not a requirement. Furthermore, burdening the upstream bandwidth with constant requests for bandwidth is not warranted.

We have the opportunity to introduce a more cost-effective approach. That translates to a solution that will win acceptance in the marketplace, and that is good for all of us.

Static mapping of upstream bandwidth is sufficient, and TDM traffic can be transported expediently. It means a less complex OLT and ONU.

The nature of the optics will limit the number of ONUs to 64. Once again, this isn't a 2000-modem DOCSIS system, nor does it exhibit the dynamics of a cellular phone system, so the standard Aloha-based algorithms are not required. This is not to say that they should be dismissed, but it is important that multiple methods of bandwidth scheduling be allowed in the protocol.


-----Original Message-----

From: Horne, David M [mailto:david.m.horne@xxxxxxxxx]

Sent: Wednesday, February 06, 2002 7:55 PM

To: 'vincent.bemmel@xxxxxxxxxxxx'; dolors@xxxxxxxxxxxx;


Cc: yoshi@xxxxxxxxxxxxxx; onn.haran@xxxxxxxxxxx;

Subject: RE: [EFM-P2MP] MPCP: Report message

re: "Especially if the intention is to have the OLT be responsive to ONU ... in near real-time"

Can you define "near real time"? 6000-7000 timestamps could elapse just from transit delay. How would we put a bound on processing and scheduling delay at the OLT if we are not specifying the scheduling algorithm?

There are many ways to address the provision of upstream bandwidth requests, but no one wants any level of complexity or contention slots. That pretty much means waste is a given. This isn't necessarily a bad thing if it saves complexity and doesn't encroach on needed bandwidth. We would have to show quantitatively that the waste is excessive, and that will be hard to do, given that unused reservations don't follow any regular pattern.


-----Original Message-----

From: Dolors Sala [mailto:dolors@xxxxxxxxxxxx]

Sent: Wednesday, February 06, 2002 2:48 PM

To: ariel.maislos@xxxxxxxxxxx

Cc: Osamu Yoshihara; onn.haran@xxxxxxxxxxx; Stds-802-3-Efm-P2mp

Subject: Re: [EFM-P2MP] MPCP: Report message


We have several requests on the table. Vincent is really interested in supporting TDM-like services well. Osamu is concerned with TCP data traffic.

My argument is that if you have to support both in the same system (which is a requirement by some service providers), the MAC client will be using more than one priority queue. In this case, Osamu's dual request parameter is not enough.

If we assume that more than one priority queue is used, trying to avoid fragmentation in a grant only makes sense if the actual frames that generated the request are the ones transmitted in the corresponding grant. Otherwise, there is a size mismatch, and the indication of a max and min size does not help with it.

I am not sure we want to go to the detail of trying to avoid this fragmentation. But if we want to enable this capability, I think we should specify a general approach. It is equally painful to send an additional parameter upstream...

Note that sending several consecutive requests would allow the ONU to indicate frame boundaries to the OLT without additional parameters. If we are only concerned with the single-queue case, then we do not need the parameter.

In summary, when the discussion relates to the scheduling algorithm there are a lot of things that can be done. Passing the appropriate information helps tailor particular implementations to one or another application. The only information missing in the system is the priority in the grant. We have discussed this before, but we had not decided about it yet. We should probably make that decision together with this request. I think this parameter is a more general solution for Osamu's request. But I may be interpreting wrong what he is suggesting. Osamu, ...

We can discuss more during the call,


Ariel Maislos wrote:



I do not think the concept of per-priority granting is appropriate. It looks too much like DOCSIS service-flows to me.

What you are suggesting makes priorities meaningless. By definition, granting priority 4 while there is a pending packet of priority 1 (1 is higher than 4) will transmit the packet having priority 1. Behaving otherwise makes priorities meaningless, and looks like DOCSIS service-flows, not like 802.1p.

The original presentation by Yoshihara-san tried to minimize fragmentation by granting exactly up to a FRAME BOUNDARY. This was suggested in the context of a flat FIFO queue.

I would suggest the appropriate way to do this is by using an additional field in the report (optional of course) that will give the number of bytes, ending at a frame boundary, waiting in the queue. This number will be lower than a set threshold, while the threshold can be set statically or dynamically.
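As a sketch of the optional field described above (all names are hypothetical; the actual REPORT format was still under discussion in this thread), the ONU would report the largest frame-aligned byte count below a threshold alongside its total backlog:

```python
# Hypothetical sketch of the proposal above: report, per queue, the
# largest cumulative byte count under a threshold that ends exactly on
# a frame boundary. Names are illustrative, not from the MPCP draft.
def frame_boundary_bytes(frame_sizes, threshold):
    """Largest cumulative byte count <= threshold ending on a frame boundary."""
    total = 0
    boundary = 0
    for size in frame_sizes:   # frames waiting in the queue, head first
        total += size
        if total <= threshold:
            boundary = total
    return boundary

queue = [1500, 1500, 64, 1500]
print(sum(queue))                         # total backlog: 4564 bytes
print(frame_boundary_bytes(queue, 3200))  # frame-aligned report: 3064 bytes
```

A grant of 3064 bytes would then carry the first three frames with no left-over space, which is the fragmentation-avoidance goal.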




> -----Original Message-----


> [] On Behalf Of Dolors


Sent: Sunday, February 03, 2002 5:53 PM

To: Osamu Yoshihara

Cc: onn.haran@xxxxxxxxxxx; Stds-802-3-Efm-P2mp

Subject: Re: [EFM-P2MP] MPCP: Report message




I think you are bringing up a good point here. I think we can unify the two request types by modifying the interpretation of the request field.

If I interpret you correctly, you are looking for a requesting mechanism that exactly matches the frames to be sent (and avoids left-over space in the grant). Since the ONU is the one sending the request and knows the size of the packets in its queue, it is the one that can compute exactly how much bandwidth is needed to send a certain set of packets. Therefore I think we can achieve what you want by saying "the request indicates the amount of bandwidth (bytes) needed for a given priority". The ONU can compute the number by adding up, for each packet, all the bandwidth components (the size of the packet, plus the interframe gap, plus the PHY overhead). This way, the request exactly corresponds to the grant size needed to transmit the packets the ONU had selected to request bandwidth for.
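The computation Dolors describes can be sketched like this (the 12-byte interframe gap and 8-byte preamble used as PHY overhead are assumptions for illustration; the actual per-frame overhead in EPON was still being defined):

```python
# Sketch of the per-packet request computation described above.
# Assumed overheads (illustrative only): 12-byte interframe gap and
# an 8-byte preamble as the PHY overhead per frame.
IFG_BYTES = 12
PHY_OVERHEAD_BYTES = 8

def request_bytes(frame_sizes):
    """Bytes to request so the grant exactly fits the selected frames."""
    return sum(size + IFG_BYTES + PHY_OVERHEAD_BYTES for size in frame_sizes)

print(request_bytes([64, 1500, 1500]))  # 3124 bytes for three queued frames
```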


The buffer threshold in your presentation could be an ONU policy parameter. The ONU will decide how many packets to consider in a particular request based on a policy and a (potential) maximum request size (due to the limit of the field size).


I think to achieve what you want, you also need the OLT to specify the priority of the grant. And this information should be sent up to the MAC client at the ONU when the grant arrives. Right now we have provision for passing grant information from the MAC Control to the MAC client, and this would be an example.


Would this do it?




Osamu Yoshihara wrote:


> > Onn,

> >

> > Thank you for the summary.

> > I have one suggestion about REPORT content.

> >

> > I'd like to allow the ONU to send 2 or more REQUESTs (number of bytes
> > requested for queue) per priority queue in a REPORT frame.

> > For example, "minimum number of bytes requested for queue #n" and

> > "maximum number of bytes requested for queue #n".

> >

> > I made a presentation about this suggestion in Austin.

> >



> >

> > We as NTT intend to use the EPON system mainly for TCP/IP data traffic.
> > To yield high TCP throughput, low upstream delay is necessary, and
> > to keep upstream delay low, a short grant cycle is required.
> > When only "size of total buffered frames" is conveyed per priority
> > queue, the OLT can't always allocate the exact requested bytes, because
> > the granted bandwidth per ONU becomes smaller when many ONUs transmit
> > REQUESTs. In this case bandwidth can't be allocated efficiently because
> > the bandwidth wastage per ONU (maximum wastage per ONU is the maximum
> > MAC frame size) is not negligible when the grant cycle is short.
> > To address this issue, we suggested conveying "frame boundary information"
> > for high priority traffic in addition to "size of total buffered frames".
> >

> > The OLT receives the two types of request information, and chooses the
> > appropriate one as the granted bandwidth by a proprietary DBA algorithm.
> > For example, when many ONUs transmit REQUESTs, "minimum number of bytes
> > requested for queue" is chosen as the "frame boundary information". In this
> > case there is no bandwidth wastage except the guardband, because the next
> > ONU can transmit data frames just after the previous ONU finishes
> > transmitting.

> >

> > Sending multiple REQUESTs, "size of total buffered frames" and
> > "frame boundary information", is quite useful to yield high TCP
> > throughput, low upstream delay, and high bandwidth efficiency for
> > high priority traffic.
> > And it is also useful for TDM traffic because low delay can be achieved.
> > (page 17 of ...)
> >

> > Osamu Yoshihara

> > NTT

> >

> > 2002.01.31 Onn Haran wrote:

> > >The following presentation summarizes report message suggestion.

> > >

> > >Onn.
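Osamu's dual-request idea above (total backlog plus a frame-aligned minimum) can be sketched as a selection rule at the OLT; the fallback policy and all names here are illustrative assumptions, since the actual DBA algorithm is proprietary:

```python
# Sketch of the dual-request idea: each ONU reports a minimum
# (frame-boundary) and maximum (total backlog) byte count, and the OLT
# falls back to the frame-aligned minimum when demand exceeds capacity.
# The selection rule and names are illustrative, not the real DBA.
def grant_sizes(requests, capacity):
    """requests: list of (min_bytes, max_bytes); returns granted bytes per ONU."""
    if sum(mx for _, mx in requests) <= capacity:
        return [mx for _, mx in requests]   # light load: grant full backlog
    return [mn for mn, _ in requests]       # heavy load: frame-aligned minimum

light = grant_sizes([(3064, 4564), (1520, 1520)], capacity=8000)
heavy = grant_sizes([(3064, 4564), (1520, 1520)], capacity=5000)
print(light)  # [4564, 1520] -- full requests fit
print(heavy)  # [3064, 1520] -- grants end on frame boundaries, no wastage
```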