
Re: [HSSG] 40G MAC Rate Discussion


Shimon Muller wrote:
> Med,
> Excellent questions. See in-line:
>> 1) Application: 40GE versus 4x10G LAG:
>> What application(s) do you think will require a 40GE single pipe
>> and cannot be addressed by a 4x 10GE LAG?
> Any application where round-trip latency (rather than bulk throughput)
> is of the essence. These are request-response types of applications, such
> as Oracle. Both 40GE and a 4x10GE LAG may be able to handle the same
> number of transactions; however, from a single user's perspective, the
> 40GE network will respond four times faster. This has a direct correlation
> to the application's performance.

With all respect, I question this logic. My understanding is that the
time spent processing a packet in the host protocol stack (memory
copies, context switches, and so on) is an order of magnitude greater
than the propagation time on the wire. I'm happy to be wrong if someone
has a reference, but I seem to remember that host protocol stacks take
longer to process packets than network transceivers, fibers and switches
do, so in a data center environment with short transmission distances,
host packet processing time dominates application-to-application latency.

Secondly, if this logic were correct, a 100G interface would be much
better still than a 40G interface: the signaling rate is higher at 100G,
so even if the host bus ran at only 40G, a frame would spend less time
on the wire at 100G.
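For scale, here is a back-of-envelope sketch of per-frame serialization
delay (my own arithmetic, not anyone's measurements): even at 10G, a
full-size frame occupies the wire for only about a microsecond.

```python
# Back-of-envelope serialization delay for a full-size Ethernet frame.
# Illustrative arithmetic only; the rates and frame size are assumptions.
FRAME_BITS = 1500 * 8  # full-size Ethernet payload frame, in bits

for rate_gbps in (10, 40, 100):
    t_us = FRAME_BITS / (rate_gbps * 1e9) * 1e6  # microseconds on the wire
    print(f"{rate_gbps:>3}G: {t_us:.2f} us per 1500-byte frame")
```

This prints roughly 1.20 us at 10G, 0.30 us at 40G and 0.12 us at 100G;
kernel stack costs are commonly quoted in the tens of microseconds, so the
sub-microsecond difference between rates would be lost in the noise.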

Is there data showing a significant decrease in transaction latency
between 1G and 10G host interfaces?  One might be able to extrapolate
from that....



>> A few people have already mentioned that 4x LAG is manageable.
>> My understanding is that high-end servers are going multi-core/multi-CPU;
>> I would imagine that network I/Os (flows) would fit well within the 4x
>> 10GE model.
> Correct, but only if you get uniform spreading of the flows. And how do
> you guarantee that? That's the "manageable" dilemma that applies to
> both carrier and server environments, regardless of how many links you
> have in the LAG. There is essentially no good way to control this at
> the network management level.
> For network environments where a link carries lots (millions) of flows,
> you can rely on statistical multiplexing to get good spreading of flows.
> I would imagine that this happens more often on carrier links than on
> server links, since carriers aggregate many more users. Apparently it
> is a problem for both.
> Finally, in the database example that I described earlier (Oracle), you
> typically get a few dozen flows at best, and the spreading is very poor.
>> 2) Relative cost:
>> What is the expected relative cost versus 10GE ports (or a 4x 10GE LAG),
>> and in what time frame? Given that the idea is that 40GE will fill the gap
>> between 10G and 100G for servers (say between 2010-2015?).
> I will let the optical vendors speak for themselves, but what I have
> heard is that a 40Gb QSFP solution will be 2x the cost of a current 10Gb
> SFP in 2008. By 2010 I would expect it to be similar to SFP.
> On the MAC side it should be essentially free, since I expect quad 10Gb
> NICs to be the highest-volume NICs by 2010, and adding a 1x40Gb
> capability will be trivial.
> Regards,
> Shimon.
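
Shimon's point about poor spreading with only a few dozen flows is easy
to demonstrate with a toy simulation. This is my own sketch, not anything
from the discussion: it models the LAG hash as a uniform random
assignment of each flow to one of four links and compares the busiest
link's load against the ideal even share.

```python
import random
from collections import Counter

def avg_imbalance(num_flows, num_links=4, trials=200, seed=1):
    """Mean ratio of the busiest link's load to the ideal even share,
    modeling the LAG hash as a uniform random flow-to-link mapping."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        counts = Counter(rng.randrange(num_links) for _ in range(num_flows))
        total += max(counts.values()) / (num_flows / num_links)
    return total / trials

# A few dozen flows (the database example) spread far less evenly
# than a link carrying many thousands of aggregated flows.
print(f"24 flows:    busiest link ~{avg_imbalance(24):.2f}x its fair share")
print(f"10000 flows: busiest link ~{avg_imbalance(10_000):.2f}x its fair share")
```

With 24 flows the busiest of the four links typically carries well over
its fair share, while with thousands of flows statistical multiplexing
evens things out, which matches Shimon's carrier-versus-server intuition.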

--
Eli Dart                                         Office: (510) 486-5629
ESnet Network Engineering Group                  Fax:    (510) 486-6712
Lawrence Berkeley National Laboratory
PGP Key fingerprint = C970 F8D3 CFDD 8FFF 5486 343A 2D31 4478 5F82 B2B3