
Re: [HSSG] Topics for Consideration



Ali,
 
I'd like to understand where you got the 5M port count for GbE when 10GbE started.  From my recollection, 10GbE started with the CFI in November 1998 which was the same year 802.3z was ratified and prior to the ratification of 802.3ab (1000BASE-T).  And 1000BASE-T devices saw limited deployment after the standard was ratified due to lack of shipping product.
 
Was 10GbE started too early?  Given the market hype at the time and the lack of crystal balls, it wasn't.  Hindsight is 20/20, so it is easy to make judgements now about past decisions on when to start the effort.  There's market demand now for something faster than 10GbE and given the time it takes to do a project like this, in my humble opinion, HSSG is starting at the right time.
 
Cheers,
Brad


From: Ali Ghiasi [mailto:aghiasi@xxxxxxxxxxxx]
Sent: Tuesday, August 08, 2006 6:24 PM
To: STDS-802-3-HSSG@xxxxxxxxxxxxxxxxx
Subject: Re: [HSSG] Topics for Consideration

Mike


Please see my comments below

Mike Bennett wrote:
Ali,

Ali Ghiasi wrote:
Mike and Jugnu

You both have some very good points. On one hand we have high-performance computing demanding the highest bandwidth
achievable, but on the other hand Jugnu brings up the fundamental requirement for a successful Ethernet standard: "mass market potential".
I would say cost-effectively achievable.  Bell Labs demonstrated multi-terabit transmission in 2000, but that system wasn't/isn't a product: we can't buy it (or afford it, if it were a real product), nor do we need it now, for that matter.
I expect this was some type of DWDM network, which one can even apply to 10Gig Ethernet right now.
Mike also mentions this project will take 3.5-5 years.  IEEE projects traditionally have not taken this long, which would mean we are
too early. 
I have to disagree.  If you include the time spent by study groups, several projects fall within that range.  I expect this study group to take a year.  Add 2.5-3.5 years to get through 5 drafts of a standard and again we're right in the range I've stated. 

Based on input I've seen from end users, taken from a survey prior to the CFI, I believe we've started just in time. 
When we started 10Gig E, the volume for 1Gig E had already exceeded 5 million ports.  In the HSSG meeting several
people indicated we started 10Gig E too early; the best estimate for 2006 10Gig volume shipment is <10% of the
1Gig E volume at the time 10Gig E was started.  If there is such a huge market for 100Gig E, then the current market for
10Gig E should be 10x as large.

For example, when 802.3ae was started in 1999, 10Gig lasers/modulators had already existed for more than
15 years. 
Then 10G should not have been so "expensive" nor slow on market uptake since we reused existing components, right?   I'm not quite sure what your point is.
The bubble of 1999/2000 created the marketing requirement for 10Gig E at a time when the basic components for
10Gig had already existed for some time.  The real issue was cost: many more would have used 10Gig if the
cost had been just a little bit more than 1Gig, so this brings us to need vs. nice-to-have.

I have listed the dilemmas we are facing:
    - Implementing 100 Gig in the near term means Nx10Gig
Having not seen a single presentation regarding possible solutions to the problem, I wouldn't be so sure this is the only cost-effective way to implement 100G (if that's the speed we include in our objectives).
    - If we implement 100Gig in a few years, the right answer might be Nx25Gig
and it might be something else.  I don't see your point.
    - Carriers want to leverage their existing DWDM layer, which means baud rates in the 9.95-12.5 Gig range

    - If LAG is implemented, why not allow N to be 4?
You must have heard the numerous complaints by now from the people who actually have to live with operating and troubleshooting Link Aggregation.  Link-layer aggregation is an unacceptable option.

When I referred to LAG I didn't mean existing LAG, but rather a more efficient method, something between the MAC and
PCS layers.



    - Operation with different widths
    - Backward compatibility with XAUI, LX4?
    - The greatest bandwidth demands (100+Gig) are on VSR links <50 m, but the longer reaches (>10 km)
    may be able to live with 4x10Gig.
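To make the baud-rate arithmetic in the bullets above concrete, here is a small back-of-the-envelope calculation.  It assumes 64b/66b line coding (the 66/64 overhead used by 10GBASE-R); the 4- and 10-lane counts are just illustrative examples from the discussion, not proposals.

```python
# Back-of-the-envelope lane-rate arithmetic for a 100 Gb/s MAC rate,
# assuming 64b/66b line coding (66/64 overhead) as in 10GBASE-R.
# The lane counts below are illustrative, not proposed objectives.

MAC_RATE_GBPS = 100.0
LINE_RATE_GBPS = MAC_RATE_GBPS * 66 / 64   # 103.125 Gb/s after 64b/66b

for lanes in (4, 10):
    per_lane = LINE_RATE_GBPS / lanes
    print(f"{lanes} lanes -> {per_lane:.5f} GBd per lane")
```

Ten lanes lands at 10.3125 GBd per lane, inside the 9.95-12.5 Gig window mentioned above; four lanes would need roughly 25.8 GBd serdes, which matches the Nx25Gig option.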

All this means we should either define some sort of scalable architecture, or just define a LAG method and
not define any PMDs!
I think it's a bit premature to come to this conclusion, but it makes for lively discussion.

Mike

Thanks,
Ali


Jugnu,

OJHA,JUGNU wrote:

Mike,

It seems more reasonable to me to consider finally decoupling the physical pipe size from the rigid hierarchy used in the past.  Why not simply define a scalable interface that allows inverse multiplexing (physical layer aggregation – not the type of aggregation you have described, which sounds like the current LAG) of an arbitrary (within some bounds, obviously) number of physical channels (10G) into a single logical link?  The SONET/SDH and Digital Wrapper/OTN world already has mechanisms to do this (VCAT, LCAS), and dynamically to boot.  
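As a rough illustration of the inverse-multiplexing idea described above, here is a toy sketch: a frame is striped round-robin across N physical lanes and reassembled into one logical stream on the far side.  The lane count, block size, and striping rule are arbitrary assumptions for illustration only, not anything VCAT/LCAS or 802.3 defines.

```python
# Toy sketch of physical-layer inverse multiplexing: deal a frame out
# across N lanes in round-robin blocks, then reassemble.  Illustrative
# only -- block size and striping rule are arbitrary assumptions.

def stripe(frame: bytes, lanes: int, block: int = 8) -> list:
    """Split a frame into block-sized chunks and deal them across lanes."""
    chunks = [frame[i:i + block] for i in range(0, len(frame), block)]
    out = [[] for _ in range(lanes)]
    for i, chunk in enumerate(chunks):
        out[i % lanes].append(chunk)
    return out

def reassemble(striped: list) -> bytes:
    """Inverse of stripe(): interleave the lane queues back into a frame."""
    frame = b""
    iters = [iter(lane) for lane in striped]
    progressed = True
    while progressed:
        progressed = False
        for it in iters:
            chunk = next(it, None)
            if chunk is not None:
                frame += chunk
                progressed = True
    return frame

payload = bytes(range(100))
assert reassemble(stripe(payload, lanes=4)) == payload
```

A dynamic scheme in the spirit of LCAS would additionally let lanes be added or removed from the group in service; this sketch only shows the static striping concept.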

This would provide a much more flexible, scalable solution to customers. 

On the surface, it seems to me that with flexibility comes complexity, which leads to higher cost.  It's also not clear to me how the physical layer aggregation you propose translates to a port on a switch.  Would I have to buy an N x 10G transceiver?  Would it be a WDM-like transceiver?  Would this work with a single fiber-pair or multi-strand?  What would the relative incremental cost be (in percentages, not monetary units) to scale up?   Also, are you proposing that this would scale beyond 100G?  If so, how far?  You mention boundaries - I'm curious what you think the upper bound would be.  I hope you're planning to present something at the interim, as it would help me understand what you're really proposing and how that compares to other ideas.

Regards,

Mike

In particular, it would allow them to grow capacity on any given link as needed, instead of having to install 10x10G channels up front.  Further, when they hit 100G, they wouldn’t be stuck until some other solution is defined – they could continue to grow.  

Respectfully,

Jugnu Ojha

Avago Technologies


From: Mike Bennett [mailto:mjbennett@xxxxxxx]
Sent: Wednesday, August 02, 2006 12:21 PM
To: STDS-802-3-HSSG@xxxxxxxxxxxxxxxxx
Subject: Re: [HSSG] Topics for Consideration

John, et al.,

>During our first meeting, I anticipate spending a lot of time focusing on objectives.  At the closing plenary I highlighted two issues / objectives that the SG would have to consider:
>
>    Tradition of 10x leap in speed

I think the speed increase has to be 10x.  The standards development process will take at least 3.5 to 4 years to complete.  Anything less than 100G will force people who are currently aggregating 10G links to continue to use aggregation, only with fewer, higher-speed, and more expensive links.   End users prefer using a single link over aggregating physical-layer links into a logical link because of the complications that come with aggregation.  The data in the CFI presentation was just a sample of cases in which network operators were aggregating 10G links to accommodate the demand on their networks.  There will be many more by 2011 (when I expect there would be 'real' products on the market).

>    Multiple Reach Targets
>
>It was also presented that the focus of this effort wasn’t for a desktop application, and that the cost model needs to be considered.

I believe we need to adjust the cost model in such a way that it is aligned with the ecosystem.  It is unreasonable, in my opinion, to expect a 10x/3x model to apply to systems designed for wide-area/metro-area networks.  I also think it's short-sighted to ignore the rest of the ecosystem and develop Ethernet only in the part of the ecosystem where the original cost model applies.

Regards,

Mike

-- 
Michael J. Bennett
Sr. Network Engineer
LBLnet Services Group
Lawrence Berkeley Laboratory
Tel. 510.486.7913


