
Re: 10,000,000,000 bps is the right choice for the next Ethernet data rate




Howard,

I would like to take exception to some of what you write here.  I have
been implementing LAN and WAN systems for several years.  I have had to
compare the cost of the interfaces as well as other costs of
implementation and ownership.  Please take my experience as a guide on
some of these points.

Section "A" is an expression of the "tradition" of 802.3 bandwidths.
Other than a clean signal rate boundary, there was no specific reason
for the original Ethernet to be exactly 10 Mbps.  802.3 adopted this
rate because it was convenient.  From what I can gather, the continuance
of modulo-10 Mb signal rates has become a tradition of 802.3; it was not
a technical limitation of physics.  Even with the original Ethernet, the
actual data rate was not 10 Mbps.  Slot time, interframe gap, and other
overhead reduced the actual data rate, just as they do for other
protocols.  Insisting on "traditional" solutions would actually restrict
competition and technology development, and would cost a lot more to
implement in some environments.

As for a 4% difference making the difference in a sale, you may be
right.  There are a lot of unscrupulous salespeople in our industry.  I
just turned down a sales support job because I wanted to be able to
maintain my personal ethics.  This is where the market, not the
engineers, will make the difference.  Look at the history of VHS versus
Betamax.  Betamax has been touted as the superior technology, but VHS
was easier to sell.  From a service provider standpoint, the additional
potential functionality and leveraged existing technology of ~9.58 Gbps
is easier to sell.

Under section "B" you talk about a "nice to do" issue.  Have you sat
down and calculated the link overhead for each of the 96 ports and
subtracted it from the fully saturated bandwidth produced by those
ports?  With only one link on a ~9.58 Gb data rate interface, I am sure
that it can keep up with 96 100BaseT ports.  There would be more of a
concern about keeping up with ten GbE ports, but you did not mention
that.
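The calculation suggested above can be sketched in Python.  This is a
back-of-the-envelope check under two assumptions not stated in the
original post: the worst case is minimum-size (64-byte) frames, and the
~9.58 Gbps link carries the frames without Ethernet preamble/IFG
overhead (as a SONET-style payload would).

```python
# Worst-case aggregate load from 96 saturated 100BASE-T ports, with
# per-frame overhead (7B preamble + 1B SFD + 12B interframe gap)
# subtracted, compared against a ~9.58464 Gbps payload rate.
FRAME_BITS = 64 * 8                 # minimum Ethernet frame: 512 bits
OVERHEAD_BITS = (7 + 1 + 12) * 8    # preamble + SFD + IFG: 160 bits
SLOT_BITS = FRAME_BITS + OVERHEAD_BITS   # 672 bits of wire time per frame

port_rate = 100e6
ports = 96

frames_per_port = port_rate / SLOT_BITS          # ~148,809 frames/s
payload_per_port = frames_per_port * FRAME_BITS  # ~76.19 Mbps of frames

aggregate = ports * payload_per_port             # ~7.31 Gbps of frames
print(f"aggregate frame bits from {ports} ports: {aggregate/1e9:.3f} Gbps")
print(f"fits in 9.58464 Gbps payload: {aggregate < 9.58464e9}")
```

With minimum-size frames the 96 ports deliver roughly 7.31 Gbps of
actual frame data, well under 9.58464 Gbps, which supports the claim
above; the margin shrinks as frames get larger, but never disappears
under these assumptions.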

Under section "C" you cite the cost of ATM as a reflection of a
SONET-based PHY.  The cost of an ATM interface in routers and other
ATM-terminating equipment is more a reflection of the segmentation and
reassembly (SAR) processing costs.  Most of the latency of ATM
interfaces is also in the SAR processing.  To compare interfaces that
have a SAR function with those that do not, compare the cost of an OC12
interface in a router, such as one from Cisco, with one in an ATM-only
switch, such as one from Fore Systems.  An ATM cell-switch-only
interface is much less expensive than a SAR interface.  For equivalent
bandwidth, an ATM cell-switch-only interface is much closer to GbE costs
than an ATM SAR interface is.  For the cell-switch-only interface, the
higher cost is the additional processing required for the larger number
of switched data blocks (cells) in ATM compared to GbE.  The additional
cost of the SONET-like PHY is negligible for a given bandwidth, fiber
type, and wavelength.

Under section "D" you have concerns about 10GbE looking like Packet over
SONET (POS).  It will not.  In the first place, POS is an HDLC/PPP link
for L3-only switching; 10GbE is for L2 switching.  Different
functionalities will produce different interface types.  True, the L3
routers will not be able to leverage the cost savings of 10GbE as much
as the L2 switches will.  At the price that one of the L3-only router
vendors is asking for the OC192C interface, there may not be much market
for POS at that rate.

Personally, I tend to lean toward a parallel PHY for initial LAN
deployment.  I think that, long term, the ability to leverage technology
with a serial ~9.58 Gbps PHY will overtake the parallel PHY.  This is a
personal opinion based on observing the technology and market for
several years.

Thank you,
Roy Bynum,
MCI WorldCom


Howard Frazier Wrote:
______________________________________________________

10,000,000,000 bps is the right choice for the next Ethernet data rate
because:

A) it is exactly 10 times faster than Gigabit Ethernet.  This is more
important than many people recognize.  In the switch business, products
are evaluated (in part) on their packet forwarding rate, and on the
fraction of wire speed performance they can achieve.  At 1 Gbps, an
ideal switch forwards minimum size packets at a rate of 1,488,095
packets per second.  This is ten times faster than a 100 Mbps Ethernet
port (148,809 pps) and 100 times faster than a 10 Mbps Ethernet port
(14,880 pps).  These numbers have been ingrained in the heads of
customers, testers, writers, designers, managers, salesmen, indeed any
one who has ever gotten their hands on an Ethernet switch.  To these
people, 10 Gbps means that their switch should forward 14,880,952
packets per second.

This is a very strong point of competition between vendors, and a very
common metric for comparison.  Therefore, the customers have strongly
held expectations, because every sales guy they have ever met and every
comparison they have ever read stresses this figure.  A 4% difference
between products can, and does, make the difference in a sale.

If we adopt a signaling rate which does not yield a maximum small
packet forwarding rate of 14,880,952 packets per second, we will fail
to meet customers' expectations.  Thus, we will start out with a
negative image to overcome.
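These figures all follow from the 672-bit wire occupancy of a
minimum-size frame (64-byte frame plus 8-byte preamble/SFD and 12-byte
interframe gap), and the 4% figure can be verified the same way.  A
short Python check of the arithmetic:

```python
# Wire-speed forwarding rates for minimum-size (64-byte) frames.
# Each frame occupies 64B frame + 8B preamble/SFD + 12B IFG = 672 bits.
SLOT_BITS = (64 + 8 + 12) * 8

def wire_speed_pps(bits_per_second):
    """Maximum minimum-size frames per second at a given line rate."""
    return int(bits_per_second // SLOT_BITS)

for rate in (10e6, 100e6, 1e9, 10e9):
    print(f"{rate/1e6:>8.0f} Mbps -> {wire_speed_pps(rate):>10,} pps")

# A 9.584640 Gbps signaling rate falls short of the expected figure:
expected = wire_speed_pps(10e9)        # 14,880,952 pps
actual = wire_speed_pps(9.58464e9)     # 14,262,857 pps
print(f"shortfall: {1 - actual/expected:.2%}")   # ~4.15%
```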

B) A "10 Gig" switch port that actually runs at only 9.584640 Gbps won't
quite keep up with 96 fast ethernet ports.  96 turns out to be a nice
number of ports, because it is a multiple of 24, which is the maximum
number of ports you can line up across the front of a line card in a
chassis that is mounted in a 19" rack.  Thus, you find a lot of 24, and
even 48 port line cards out in the world.

C) I am very concerned about adopting a SONET signaling rate and the
associated scrambler and framing logic, because my experience with SONET
framing chips is that they are very big and very expensive.  ATM adopted
SONET framing for the physical layer, even in the LAN environment,
precisely because the ATM proponents wanted "seamless connectivity"
between the LAN and the WAN.  There was no other reason to use SONET
framing in the LAN.  However, the SONET physical layer chips for
ATM were extremely costly, so ATM host adapters and switch ports were
very costly, and this is one of the factors that prevented ATM from
gaining any serious penetration in the LAN market.

The argument has been made that, however many gates it takes to do
SONET, it can't be any worse than 1000BASE-T.  This sounds compelling,
until you consider that the silicon represents most of the cost of a
1000BASE-T physical layer.  With a SONET based 10 gigabit Ethernet
physical layer, you will incur the cost of a complicated piece of
silicon, PLUS the cost of the optics.

D) I am also concerned that simply adopting a data rate of 9.584640 Gbps
and the scrambling polynomial and the frame structure is not the end of
the story.  Once we set off down the road towards SONET compatibility,
we will wind up with something that looks more like POS than Ethernet,
and we don't need to write a standard for POS.

E) I think that there are other choices.  In the LAN, I see no reason
why we can't use a 10,000,000,000 bps data rate (which I will
henceforth shorten to 10 Gbps).  For WAN connectivity, I think that a
device can be built that will perform rate conversion between 10 Gbps
and 9.584640E9 bps.  This device would need a relatively small amount
of buffering, and it would need a mechanism to slow down the transmit
side of the 10 Gbps MAC.  This mechanism could be 802.3x frame based
flow control.  Since the data stream to the receiving side of the 10
Gbps MAC is carrying only 9.584640E9 bps worth of Ethernet frames,
there is plenty of bandwidth available to send Pause frames from the
rate converter to the 10 Gbps MAC.  Thus, a MAC designed to run at 10
Gbps in the LAN will be throttled back to 9.584640E9 bps when connected
to the WAN through one of these rate converters.
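The rate-converter idea described above can be sketched as a simple
simulation.  This is an illustrative model only: the buffer size,
watermarks, and time step are arbitrary assumptions, and real 802.3x
Pause operation works in quantized pause_time units rather than the
instantaneous on/off throttle modeled here.

```python
# Sketch of a 10 Gbps -> 9.58464 Gbps rate converter: the WAN side
# drains at the lower rate, the LAN MAC fills at the higher rate, and
# the converter asserts Pause (802.3x flow control) at a high-water
# mark and releases it at a low-water mark.
LAN_RATE = 10e9
WAN_RATE = 9.58464e9
STEP = 1e-6                      # 1 microsecond simulation steps
BUFFER_LIMIT = 512 * 1024 * 8    # assumed 512 KB buffer, in bits
HIGH_WATER = 0.75 * BUFFER_LIMIT
LOW_WATER = 0.25 * BUFFER_LIMIT

buffered = 0.0
paused = False
overflowed = False

for _ in range(100_000):         # 100 ms of simulated time
    if not paused:
        buffered += LAN_RATE * STEP               # MAC sends at full rate
    buffered = max(0.0, buffered - WAN_RATE * STEP)  # WAN side drains
    if buffered > BUFFER_LIMIT:
        overflowed = True
    if buffered >= HIGH_WATER and not paused:
        paused = True            # send Pause: throttle the 10 Gbps MAC
    elif buffered <= LOW_WATER and paused:
        paused = False           # release Pause: resume transmission

print(f"buffer ever overflowed: {overflowed}")
```

Because the rates differ by only ~4%, the buffer fills slowly while
unpaused and drains quickly while paused, so even a modest buffer keeps
the converter from overflowing, consistent with the claim that only a
relatively small amount of buffering is needed.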

Howard Frazier