You need three pieces of information to determine
buffer size requirements: (1) the delay from the RS to the MDI, (2) the MDI
to MDI delay (media delay), and (3) the time from the arrival of a PAUSE frame
at the MDI to the last packet output at the MDI. The media delay is often
the dominant factor, but not necessarily.
Please refer to Annex 31B, 31B.3.7 (draft
"At operating speeds of 10 Gb/s and above, a
station shall not begin to transmit a (new) frame more than forty pause_quantum
bit times after the reception of a valid PAUSE frame that contains a non-zero
value of pause_time, as measured at the MDI".
"In addition to DTE and MAC Control delays, system
designers should take into account the delay of the link segment when designing
devices that implement the PAUSE operation (see Clause 29)."
The delay constraints in the various clauses cover
the 40 pause_quantum reaction time limit and the RS to MDI propagation
delay. The second quoted passage indicates that it is the
responsibility of the system designer to add appropriate buffering to account
for media delay. If you are designing for long-haul links, you will
require significantly more buffering to support PAUSE.
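To make the point about long-haul links concrete, here is a back-of-the-envelope sketch (not from the thread or the draft, apart from the 40-pause_quantum reaction limit quoted above): the line rate, fiber propagation delay, and RS-to-MDI budget below are assumed round numbers for illustration.

```python
# Illustrative PAUSE buffer-sizing arithmetic at 10 Gb/s.
# All figures are assumptions for this sketch, except the 40-pause_quantum
# reaction limit quoted above (one pause_quantum = 512 bit times).

LINE_RATE = 10e9              # b/s
FIBER_DELAY_PER_KM = 5e-6     # s/km, roughly 5 us/km for fiber (assumed)
LINK_KM = 40                  # long-haul example from the thread
MAX_FRAME_BITS = 1518 * 8     # one maximum-sized untagged frame
PAUSE_REACTION_BITS = 40 * 512  # 40 pause_quanta, measured at the MDI
RS_TO_MDI_BITS = 8192         # assumed RS-to-MDI delay budget, in bit times

# Traffic keeps arriving for one full round trip of the link, plus the PHY
# delays in each direction, the partner's allowed reaction time, and the
# frames already committed to the wire at each MAC.
round_trip_bits = 2 * LINK_KM * FIBER_DELAY_PER_KM * LINE_RATE
buffer_bits = (round_trip_bits          # data in flight both ways
               + 2 * RS_TO_MDI_BITS     # PHY delay in each direction
               + PAUSE_REACTION_BITS    # partner's allowed reaction time
               + 2 * MAX_FRAME_BITS)    # frames mid-transmission at each MAC

print(f"round trip: {round_trip_bits / 1e6:.1f} Mbit")
print(f"buffer:     {buffer_bits / 1e6:.2f} Mbit (~{buffer_bits / 8 / 1024:.0f} KiB)")
```

With these assumed numbers the media delay dwarfs every other term, which is exactly the point about long-haul links above.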
(formerly the Microelectronics Group of Lucent)
----- Original Message -----
Sent: Friday, December 22, 2000 1:02
Subject: RE: delay constraints from XGMII
You said: "The only MAC requirement I can think of
for bounding the delay is for buffer sizing for 802.3x flow
control. That has been the factor for keeping the delay tables
in various clauses."
Since the delay for the flow control seems to be greater than 1 Mbit,
the delay constraints of 256/212/272 bits in the various clauses
are negligible compared to the delay of 40 km of fiber.
Correct me if I am wrong.
The maximum delay occurs when MAC(1) is informed about
the need for stop_rx from the higher layer just at the beginning of a
maximum-sized frame transmission, and similarly, MAC(2) gets the PAUSE just
at the beginning of a maximum-sized frame transmission.
However, as already noted here, this is negligible
compared to the delay of 40 km of fiber, which is more than 1 Mbit.
Boaz Shahar, MystiCom.
As far as I see, the time from the point of pause command
assertion by MAC(1) until this pause command is actually executed by
MAC(2) also depends on the transmitter status of MAC(1) and
MAC(2). Let us take an example: MAC(1) is in the process of
transmitting a frame of about 1500 bytes when it sees the pause command
assertion. MAC(1) first completes its frame transmission and only then
transmits the PAUSE frame. Similarly, when MAC(2) is in the process of
transmitting a frame of about 1500 bytes and it receives the
PAUSE frame, it first completes its frame transmission before
executing the PAUSE. So this delay is not just the linear
transfer delay between the MACs.
Am I right, Boaz?
Since there is no collision, the only effect of
the MAC-to-MAC delay is on the size of the buffer which stores packets from
the point of pause command assertion by MAC(1) until this pause
command is actually executed by MAC(2). This is linear with the
transfer delay between the MACs.
Boaz Shahar, MystiCom.
Can you clarify what exactly you mean by the delay
from XGMII to XGMII (MAC reaction time to
the flow control)?
Boaz, as you say: "The only MAC requirement I can
think of for bounding the lower layer delay is
for buffer sizing for 802.3x flow control."
Which buffer are you talking about, and how does it relate
to the delay constraints?
From: THALER,PAT (A-Roseville,ex1) [mailto:pat_thaler@xxxxxxxxxxx]
Sent: Friday, December 15, 2000 12:58 AM
To: Boaz Shahar; HSSG
Subject: RE: delay constraints from XGMII to XAUI
If there is a compatibility interface such as XAUI,
then one needs to define how much delay is on
either side of it. Therefore one needs to spec at least:
XGMII to XGMII (MAC reaction time to the flow control)
XGMII to XAUI
XAUI to XBSI
XBSI to MDI
The point of compatibility interfaces is to define an
interface so that independently designed devices can be compatible when mated at that
interface. Therefore, a budget can't be specified for the total
without providing a breakdown with respect to
the compatibility interfaces. If a
compatibility interface is not physically instantiated, then the
device only needs to meet the total, but it is up to
the device how to allocate that total.
For instance, if a device had an XGMII and an MDI,
then it would have to meet the sum of the
three delays: XGMII to XAUI, XAUI to XBSI and XBSI to MDI.
One way to specify this is to specify the delay for
each sublayer and say that the total between
compatibility interfaces needs to meet the total for
the sublayers between them.
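The budgeting rule just described can be sketched as follows; the per-sublayer budget numbers below are invented for illustration and are not values from any draft.

```python
# Sketch of per-sublayer delay budgeting: a device that absorbs several
# sublayers behind one pair of exposed compatibility interfaces must meet
# the SUM of the budgets between them, however it allocates that internally.
# All budget values are hypothetical.

SUBLAYER_BUDGET_BITS = {         # hypothetical per-sublayer budgets, in bit times
    ("XGMII", "XAUI"): 512,
    ("XAUI", "XBSI"): 512,
    ("XBSI", "MDI"): 1024,
}

def budget_between(start, end, order=("XGMII", "XAUI", "XBSI", "MDI")):
    """Total allowed delay between two exposed compatibility interfaces."""
    i, j = order.index(start), order.index(end)
    return sum(SUBLAYER_BUDGET_BITS[(order[k], order[k + 1])] for k in range(i, j))

# A device exposing only XGMII and MDI may partition internally as it likes,
# as long as its total stays within the summed budget:
print(budget_between("XGMII", "MDI"))  # prints 2048
```

This mirrors the XGMII-and-MDI example in the message: the internal partition is unconstrained, only the sum across the hidden sublayers matters.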
Since this delay is only important for the flow
control, I recommend we
choose values that are
easy to meet and specify them per sublayer. No matter
what we do, our delays will be long in bit times compared to lower
speeds because our paths are so wide.
There is no reason to tweak the delays to
shave a few byte times.
From: Boaz Shahar [mailto:boazs@xxxxxxxxxxxx]
Sent: Thursday, December 14, 2000 12:43 AM
Subject: RE: delay constraints from XGMII to XAUI
I think that the standard
should constrain only the total delay from the
MDI to the XGMII, and leave the internal partitioning to the implementation.
> -----Original Message-----
> From: Rich Taborek [mailto:rtaborek@xxxxxxxxxxxxx]
> Sent: Thursday, December 14, 2000 12:18
> To: HSSG
Subject: Re: delay constraints from XGMII to XAUI
> Just to clarify, the MDI to XGMII delay
> includes the XAUI to
> XGMII delay.
> > The only MAC requirement I can think of for bounding the
> > lower layer delay
> > is for buffer sizing for 802.3x flow control.
> > That has been the factor for
> > keeping the delay tables in various clauses.
> > --Bob Grow
> > -----Original Message-----
> > From: Steven Shen [mailto:ss_shen@xxxxxxxxxxxxxxxxx]
> > Sent: Wednesday, December 13, 2000 11:58
> > To:
> > Subject: delay constraints from XGMII to XAUI
> > Hi all:
> > In D2.0, Table 48.5 defines the MDI to XGMII delay
> > constraints to be
> > UI. I wonder: is there any delay constraint on XAUI to XGMII?
> > thanks
> > best regards
> > Steven Shen
> > Silicon Bridge Inc.
> Richard Taborek
> Chief Technology
> 2500-5 Augustine
> Santa Clara, CA