
Re: [802.3BA] Longer OM3 Reach Objective



"Let the market decide" was how we ended up with 100BASE-TX, instead of 100BASE-T4, 100BASE-T2, or 100BASE-VG.  The 802.3 working group did a poor job of making tough decisions and minimizing the number of options to be presented to the industry.
 
What a mess.
 
But I think 100BASE-TX is the most widely deployed of the various 802.3 interfaces.  There have been a few billion shipped so far.
 
KB
 


From: Brad Booth [mailto:bbooth@AMCC.COM]
Sent: Monday, March 24, 2008 3:19 PM
To: STDS-802-3-HSSG@LISTSERV.IEEE.ORG
Subject: Re: [802.3BA] Longer OM3 Reach Objective

"Let the market decide" is a really, really bad way to write a standard.  The IEEE 802.3 working group has done a very good job of making tough decisions and minimizing the number of options to be presented to the industry.  To create a reach objective that can only be satisfied by one implementation is a poor choice as it reduces the ability of component vendors to compete based upon their respective implementation strategies.  As the current objective is written, the reach is achievable with limiting and linear TIA's and may be achievable with lower cost components.
 
Just my 2 cents,
Brad
 


From: Ali Ghiasi [mailto:aghiasi@BROADCOM.COM]
Sent: Monday, March 24, 2008 4:58 PM
To: STDS-802-3-HSSG@LISTSERV.IEEE.ORG
Subject: Re: [802.3BA] Longer OM3 Reach Objective

Petar

Thanks for sending the pointer to the TOP500 list; I do see the server at TJW.

In November 2007, 2 systems appeared in the TOP500 list.

Rank  System                             Procs   Memory (GB)  Rmax (GFlops)  Rpeak (GFlops)  Vendor
8     BGW (eServer Blue Gene Solution)   40960   N/A          91290          114688          IBM

They did not show a picture or say how big the server is, but based on your remarks it is small enough to fit in a modest room.

I assume the intra-links within the Blue Gene might be proprietary or IB.  What do the cluster's intra-links have to do
with the Ethernet network connection?

I assume some of the users in the TJW lab may still want to connect to this server over higher speed Ethernet, and very likely you will need
links longer than 100 m.  In addition, higher speed Ethernet may be used to cluster several Blue Gene systems for failover,
redundancy, disaster tolerance, or higher performance, which will require links longer than 100 m.

We are both in agreement that parallel ribbon fiber will provide the highest density in the near future.  The module form factors with a gearbox
will be 3-4x larger.  Here is a rough estimate of bandwidth per mm of linear faceplate for several form factors:

  Speed    Media Sig.  Form Factor                     Bandwidth (Gb/mm)
  10GbE    1x10G       SFP+ (SR/LR/LRM/Cu)             1.52  (assumes stacked cages)
  40GbE    4x10G       QSFP (SR or direct attach)      4.37  (assumes stacked cages)
  40GbE    TBD         XENPAK, if assumed (LR)         0.98
  100GbE   10x10G      CSFP (SR or direct attach)      3.85  (proposed connector is already stacked)
  100GbE   4x25G       CFP (LR)                        1.23
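
For what it's worth, the figures in the table reduce to a trivial calculation: density = (rows of stacked cages) x (aggregate rate) / (faceplate pitch per port).  A minimal Python sketch follows; the pitch values in it are my own assumptions, back-calculated to roughly reproduce the numbers above, not figures from this thread or from any MSA:

# Rough faceplate bandwidth density (Gb/s per mm of linear faceplate).
# Pitch values are illustrative assumptions only.
form_factors = [
    # (name, aggregate Gb/s, stacked rows, assumed faceplate pitch in mm)
    ("SFP+   (10GbE,  1x10G)",   10, 2, 13.2),
    ("QSFP   (40GbE,  4x10G)",   40, 2, 18.3),
    ("XENPAK (40GbE,  LR)",      40, 1, 41.0),
    ("CSFP   (100GbE, 10x10G)", 100, 1, 26.0),  # proposed connector already stacked
    ("CFP    (100GbE, 4x25G)",  100, 1, 81.0),
]

for name, rate_gbps, rows, pitch_mm in form_factors:
    density = rows * rate_gbps / pitch_mm  # Gb/s per mm of faceplate
    print(f"{name:25s} {density:4.2f} Gb/mm")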

As you can see, the form factors that allow you to go beyond 100 m will be several times larger and not compatible
with the higher-density solutions based on nx10G.  Linear nx10G, as given in
http://www.ieee802.org/3/ba/public/jan08/ghiasi_02_0108.pdf
can extend the reach to 300 m on OM3 fiber and relax the transmitter and jitter budget.

You have stated strongly that you see no need for more than 100 m, but we have also heard from others who stated
there is a need for MMF reaches beyond 100 m, especially if you have to change the form factor to go beyond
100 m!  Like FC and SFP+, we can define a limiting option for 100 m and a linear option for 300 m, and
let the market decide.

Thanks,
Ali

Petar Pepeljugoski wrote:
Frank,

You are missing my point. Even the best-case statistic, no matter how you twist it in your favor, is based on distances from yesterday. New servers are much smaller and require shorter interconnect distances. I wish you could come and see the room where the current #8 on the TOP500 list of supercomputers sits (Rpeak ~114 TFlops); maybe you'll understand then.

Instead of trying to design something that uses more power and goes unnecessarily long distances, we should focus our efforts on designing energy-efficient, small-footprint, cost-effective modules.

Regards,

Petar Pepeljugoski
IBM Research
P.O.Box 218 (mail)
1101 Kitchawan Road, Rte. 134 (shipping)
Yorktown Heights, NY 10598

e-mail: petarp@us.ibm.com
phone: (914)-945-3761
fax:        (914)-945-4134



Frank Chang <ychang@VITESSE.COM>

03/14/2008 09:23 PM
Please respond to
Frank Chang <ychang@VITESSE.COM>

To
STDS-802-3-HSSG@LISTSERV.IEEE.ORG
cc

Subject
Re: [802.3BA] Longer OM3 Reach Objective

Petar;
 
Depending on the source of link statistics, the 100m OM3 reach objective actually covers from 70% to 90% of the links, so 100m is not even close to 95% coverage.
 
Regards
Frank

From: Petar Pepeljugoski [mailto:petarp@US.IBM.COM]
Sent: Friday, March 14, 2008 5:09 PM
To: STDS-802-3-HSSG@listserv.ieee.org
Subject: Re: [802.3BA] Longer OM3 Reach Objective


Hello Jonathan,


While I am sympathetic with your view of the objectives, I disagree and oppose changing the current reach objective of 100m over OM3 fiber.


From my previous standards experience, I believe that all the difficulties arise in the last 0.5 dB or 1 dB of the power budget (as well as the jitter budget). It is worthwhile to ask module vendors how much their yield would improve if they were given 0.5 or 1 dB back. That last fraction of a dB is responsible for most yield hits, making products much more expensive.
I believe that selecting specifications that penalize 95% of the customers to benefit 5% is a wrong design point.
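
As a back-of-the-envelope illustration of why fractions of a dB matter (my own sketch, not data from this thread): using the 3.5 dB/km cabled-fiber attenuation figure from the existing 850 nm 10GBASE-SR budget, and ignoring the dispersion/ISI penalties that actually dominate beyond 100m, the extra fiber loss alone for longer reach targets is:

# Extra 850 nm fiber attenuation, relative to a 100 m budget, for longer
# reach targets.  3.5 dB/km is the cabled-fiber figure used in 10GBASE-SR
# budgets; dispersion/ISI penalties are deliberately ignored here.
atten_db_per_km = 3.5

for reach_m in (100, 150, 200, 300):
    extra_db = atten_db_per_km * (reach_m - 100) / 1000.0
    print(f"{reach_m:3d} m target: +{extra_db:.2f} dB of fiber loss vs. a 100 m budget")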


You make another point - that larger data centers have higher bandwidth needs. While it is true that bandwidth needs increase, what you fail to mention is that distance needs today are shorter than in previous server generations, since processing power is much more densely packed than before.


I believe that 100m is more than sufficient to address our customers' needs.  


Sincerely,


Petar Pepeljugoski
IBM Research
P.O.Box 218 (mail)
1101 Kitchawan Road, Rte. 134 (shipping)
Yorktown Heights, NY 10598

e-mail: petarp@us.ibm.com
phone: (914)-945-3761
fax:        (914)-945-4134

Jonathan Jew <jew@j-and-m.com>

03/14/2008 01:32 PM
Please respond to
jew@j-and-m.com


To
STDS-802-3-HSSG@LISTSERV.IEEE.ORG
cc

Subject
[802.3BA] Longer OM3 Reach Objective


I am a consultant with over 25 years of experience in data center
infrastructure design and data center relocations, including in excess of 50
data centers totaling 2 million+ sq ft.  I am currently engaged in data
center projects for one of the two top credit card processing firms and one
of the two top computer manufacturers.

I'm concerned about the 100m OM3 reach objective, as it does not cover an
adequate fraction (>95%) of backbone (access-to-distribution and
distribution-to-core switch) channels for most of my clients' data centers.


Based on a review of my current and past projects, I expect that a 150m or
longer reach objective would be more suitable.  It appears that some of the
data presented by others to the task force, such as Alan Flatman's Data
Centre Link Survey, supports my impression.

There is a pretty strong correlation between the size of my clients' data
centers and the early adoption of new technologies such as higher speed LAN
connectivity.   It also stands to reason that larger data centers have
higher bandwidth needs, particularly at the network core.

I strongly encourage you to consider a longer OM3 reach objective than 100m.

Jonathan Jew
President
J&M Consultants, Inc
jew@j-and-m.com

co-chair BICSI data center standards committee
vice-chair TIA TR-42.6 telecom administration subcommittee
vice-chair TIA TR-42.1.1 data center working group (during development of
TIA-942)
USTAG representative to ISO/IEC JTC 1 SC25 WG3 data center standard ad hoc