
Re: [HSSG] Jumbo Thursday



Pat and Joel

Both of you are right in regard to the XAUI FIFO.  Pat is speaking generically, in that a XAUI-like limitation can be overcome, and
Joel is speaking from his experience, where he has seen FIFO collisions due to jumbo frames.

As an example, an IB frame can be as long as 4608 bytes, where the data payload is 4096 bytes.  Using +/-100 PPM
clocks, one may send, under worst case, an uninterrupted 5 Kbyte frame.  Clocks with 50 PPM tolerance are widely available
with very little premium.
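
As a rough illustration of that budget (the 200 PPM worst-case offset below assumes two +/-100 PPM clocks at opposite extremes, and the frame sizes are just examples), the bits of elastic-FIFO slack consumed over one uninterrupted frame can be sketched as:

def fifo_drift_bits(frame_bytes, total_ppm_offset):
    # Bits gained or lost in an elastic FIFO while one frame is in flight
    # at the given total frequency offset between the two clocks.
    return frame_bytes * 8 * total_ppm_offset / 1_000_000

# Two +/-100 PPM clocks may differ by up to 200 PPM; +/-50 PPM clocks by 100 PPM.
for frame_bytes, ppm in [(4608, 200), (5 * 1024, 200), (5 * 1024, 100)]:
    print(f"{frame_bytes} byte frame at {ppm} PPM offset -> "
          f"{fifo_drift_bits(frame_bytes, ppm):.1f} bits of drift")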

With a more sophisticated PLL, as in the case of XFI, one can still get good jitter performance without any limitation on the
frame size.

Thanks,
Ali

Pat Thaler wrote:
Joel,
 
You still haven't explained the problem you are claiming XAUI has. It is possible that some people built implementations that have problems with passing jumbos but I don't think anything in the standard's definition of XAUI makes it unable to handle jumbo frames. I'm not questioning that you have seen some instances in the lab of PHYs that don't handle jumbos. If there is something in the standard for XAUI that is a problem for jumbos and you want us to avoid it next time, you will have to be more specific about the mechanism. What in your understanding specifically is different from what I describe?
 
Regards,
Pat


From: Joel Goergen [mailto:joel@xxxxxxxxxxxxxxxxxxx]
Sent: Friday, August 11, 2006 11:36 AM
To: Pat Thaler
Cc: STDS-802-3-HSSG@xxxxxxxxxxxxxxxxx
Subject: Re: [HSSG] Jumbo Thursday

Pat,

My understanding of a XAUI SERDES implementation is slightly different from what you describe.  Having designed and tested this case across corners in the lab, I have to disagree with you in part.

I agree with your last point ... I still like my own words, but yours works just fine.

-joel

Pat Thaler wrote:
Joel,
 
I don't understand your point about XAUI. There is no reason that a XAUI implementation has to have a problem with jumbo frames. Jumbos shouldn't affect alignment at all. All four lanes for XAUI are transmitted with the same clock, and once alignment has been gained, it should stay. Jumbo frames mean that clock skew builds up more bit accumulation or depletion in the FIFO during a packet, but at 9 KByte that shouldn't be enough to be a problem for an implementation.
 
Worst case clock skew is 0.02%. Over the course of 9 Kbytes that is a difference of 14.7 bits between the fastest rate the frame could be transferred and the slowest rate the frame could be transferred.  Minimum transmitted interpacket gap is 9 bytes (because of aligning the start), but the average IPG is 12 bytes, so every IPG should offer a chance to delete a column of idles. Multiple PHY sublayers may be deleting or inserting idles, so it would be best to design an implementation to be able to wait for multiple packets to drop an idle in case other sublayers decide they want to delete at about the same moment, but at 14.7 bits of accumulation per frame that doesn't require many additional bits of FIFO - significantly less than the amount of FIFO required for the deskew. This calculation is the same regardless of whether the interface is single lane or multi-lane.
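
A quick check of that arithmetic (an illustrative sketch; the 4-byte idle column assumes one idle character on each of the four lanes):

FRAME_BYTES      = 9 * 1024      # 9 KByte jumbo frame
CLOCK_SKEW       = 0.0002        # 0.02% worst-case clock difference
IDLE_COLUMN_BITS = 4 * 8         # one idle character per lane, four lanes (assumed)

drift_bits = FRAME_BYTES * 8 * CLOCK_SKEW
print(f"drift per 9 KByte frame: {drift_bits:.1f} bits")            # ~14.7 bits
print(f"one deleted idle column recovers {IDLE_COLUMN_BITS} bits")
print(f"so roughly one deletion every {IDLE_COLUMN_BITS / drift_bits:.1f} "
      f"jumbo frames keeps the FIFO balanced")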
 
I don't think there is anything inherent in a multi-lane PHY that is hostile to handling jumbos. All PHYs have to deal with the clock skew issue if they are using asynchronous clocks to pass the data.
 
I have no problem with the concept of designing PHYs so that compliant parts can be jumbo compatible to enable the proprietary use of jumbo packets. That may be what you meant by "at a minimum provision for it". What I don't want to do is to add jumbos to the standard.
 
Regards,
Pat


From: Joel Goergen [mailto:joel@xxxxxxxxxxxxxxxxxxx]
Sent: Friday, August 11, 2006 10:45 AM
To: STDS-802-3-HSSG@xxxxxxxxxxxxxxxxx
Subject: [HSSG] Jumbo Thursday

Wow, I guess it really doesn't pay to take a day off!  Yesterday should go down as 'Jumbo Thursday'.

Rather than respond to individuals, I thought I would sum it all up and hit each item.

1. Latency
2. Betrayal and inappropriate use of the process
3. Kevin, Brad, Geoff, Shimon, and maybe Howard say 'no' to jumbos in HSSG
4. Research Community asks for clarification of the question ... to jumbo or not to jumbo ...

1. Latency: Do jumbos add latency in a store-and-forward implementation?
To some degree, I agree with Pat on this issue.  In a small-scale system or merchant silicon application, latency is a problem with jumbo frames and does increase with usage.
In a high-density core or edge implementation with specific memory techniques deployed in a network application engineered for jumbo frames, no .... any latency added is negligible.  Check out available statistics for high-end core implementations.
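
For a sense of scale (an illustrative sketch; the 1518/9018-byte frame sizes and the per-hop model are assumptions), the latency a store-and-forward hop adds is just the serialization delay of the frame:

def serialization_delay_us(frame_bytes, rate_bps):
    # Store-and-forward: the whole frame must arrive before it can be sent on,
    # so the added latency per hop is frame_bits / line_rate.
    return frame_bytes * 8 / rate_bps * 1e6

for rate_gbps in (10, 100):
    rate = rate_gbps * 1e9
    std, jumbo = (serialization_delay_us(b, rate) for b in (1518, 9018))
    print(f"{rate_gbps} Gb/s: 1518 B = {std:.2f} us, 9018 B = {jumbo:.2f} us, "
          f"added = {jumbo - std:.2f} us per hop")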

2. Betrayal and inappropriate use of the process
WOW ... not sure where that came from.  I thought this was a forum for discussion around HSSG.  Are we going to accuse everyone of this?  If we are, it sure is going to make things difficult.

3. Kevin, Brad, Geoff, Shimon, and maybe Howard say 'no' to jumbos in HSSG
Good for you all!  It so happens that I agree with you in principle. 

I didn't ask the question to be the one that brings it up every few years.  I asked the question for very relevant reasons ... not the least of which is that in every customer technical discovery I am involved in, the RFQ always contains "Jumbo Support ... check YES or NO".

I also worry about compatibility ... I don't want to see anyone have to redesign systems or silicon because of what I perceive as a documentation procedure.

4. Research Community asks for clarification of the question ... to jumbo or not to jumbo ...
My original question:
My question is not whether to support jumbos, because we all already do ... my question is: should we finally spec it out?  I think we should, at a minimum, provision for it. Menachem, Michael, and Marcus ... thanks for giving me the benefit of the doubt and asking me to expand on my point of view.  As a research scientist, I greatly appreciate it!

In 10Gbps and higher speeds, there are many of us that need to support jumbo frames for various customer engineered applications.  In some ways, it's like MPLS in that most don't need it but still want it or want to use it in some application specific area in their network.  Shimon alluded to this in his comments.

If I look at the data pipe from the input of the PMD to the input of the NPU for 10Gbps implementations, there are four basic interfaces used: XAUI, XFI, XGMII, and SFI.  The last three interfaces have no physical implications or limitations for jumbo frames.  XAUI (no disrespect intended, as it is a great interface and I fully support it), because of the alignment concept, may drop a frame after the fifth extended frame when deployed in an async environment.  The XAUI interface was never intended to support extended frames, but it can be used to do so when one is careful with the clock distribution.  But it doesn't always work in all extended frame implementations.
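
Purely as a toy model of that failure mode (all numbers below are made-up and illustrative, not taken from any real XAUI design): if idle deletion is not possible on every inter-packet gap, drift from back-to-back extended frames can build up and overrun a shallow elastic FIFO within a handful of frames:

PPM_OFFSET       = 200        # assumed worst-case offset between the two clocks
FRAME_BYTES      = 16 * 1024  # assumed extended frame length
IDLE_COLUMN_BITS = 32         # one idle character per lane, four lanes
DELETE_EVERY_N   = 2          # assume only every 2nd gap offers a deletion chance
FIFO_SLACK_BITS  = 64         # assumed headroom left in the elastic FIFO

drift_per_frame = FRAME_BYTES * 8 * PPM_OFFSET / 1_000_000
occupancy = 0.0
for frame in range(1, 11):
    occupancy += drift_per_frame                 # drift accumulated during the frame
    if occupancy > FIFO_SLACK_BITS:
        print(f"FIFO overrun while receiving extended frame {frame}")
        break
    if frame % DELETE_EVERY_N == 0:              # a gap where an idle may be deleted
        occupancy = max(0.0, occupancy - IDLE_COLUMN_BITS)
    print(f"after frame {frame}: {occupancy:.1f} bits of accumulated drift")
else:
    print("no overrun within 10 frames")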

My question comes about because there are many people within HSSG considering an architecture composed of multiple Nx10Gbps links in some phy aggregation concept to give us higher speeds.  I'm not naive here.  A serial 100Gbps pipe, end to end, is not likely for a few years ... just like XFI took a bit of work before its debut.  What I worry about is that 'IF' we deploy a XAUI-like concept across this phy aggregation concept, none of us will be able to supply extended frames to our customers in the fashion we do so today - out of bounds of the standard, but easy to do because three of the four interfaces available allow for it.

I'm not so much interested in specifying jumbo frames as I am in an objective or motion that allows for an interface implementation that does not preclude the use of jumbo frames.  No plot, no hidden agenda ... I just want those of us that have to deploy frame extension to have a mechanism to do so.

-joel






