
Re: [802.3EEESG] On the topic of transition time (was Re: [802.3EEESG] Comments on our work from Vern Paxson)



AV applications (which should fit under "realtime streaming") occur
quite broadly, and in spaces where EEE could provide real power
savings, for example in the home, where one might be distributing audio
or video. When a data transfer starts up and causes the link to kick to
a higher speed, that transition shouldn't disrupt the AV signal. If EEE
isn't compatible with residential audio and video, its default setting
is likely to be power management off, and few users will actively turn
it on. Therefore, I think it is important to ensure that EEE is not
disruptive to AV.

-----Original Message-----
From: Ken Christensen [mailto:christen@CSE.USF.EDU] 
Sent: Tuesday, June 19, 2007 8:21 AM
To: STDS-802-3-EEE@listserv.ieee.org
Subject: Re: [802.3EEESG] On the topic of transition time (was Re:
[802.3EEESG] Comments on our work from Vern Paxson)

Hello all, if I may weigh in...  There is no doubt that there will be
applications for which EEE is not suitable (and should be disabled).
Such applications may include latency-sensitive cluster computing and
some realtime streaming applications.  But will these be a majority of
applications?  Vern's comments may still hold for the majority of
cases.  Over time, applications seem to have become more robust to loss
and delay, not less.  As bandwidth, processing, and memory capacity
have all increased, the need for fine-grained control (classic "QoS")
has diminished.

In a previous email to this list, Geoff suggested a control packet to
maintain a soft state of "link power management = off".  Did I
understand this correctly?  I think that remote management of the power
state of links and devices will become an interesting problem area for
many protocol standards.  DMTF looked at this in 2004.  Should such
power management control occur at the 802.3 level?
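To make the "soft state" idea concrete, here is a minimal sketch of how a
receiver might hold such a state, assuming the generic soft-state semantics
(the state persists only while refresh control packets keep arriving and
expires otherwise). The class name, method names, and timeout value are
assumptions for illustration, not Geoff's actual proposal or anything in
802.3.

    import time

    REFRESH_TIMEOUT_S = 3.0  # assumed expiry interval for the soft state

    class EeeSoftState:
        """Illustrative soft state for 'link power management = off'."""

        def __init__(self):
            self.last_refresh = None  # time of last refresh control packet

        def on_control_packet(self):
            # Called when a "power management = off" control packet arrives;
            # each arrival refreshes the soft state.
            self.last_refresh = time.monotonic()

        def eee_enabled(self):
            # EEE stays disabled only while the soft state is fresh; if the
            # refreshes stop, the state expires and EEE resumes.
            if self.last_refresh is None:
                return True  # no request ever seen: EEE operates normally
            return (time.monotonic() - self.last_refresh) > REFRESH_TIMEOUT_S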

For speed of packet transmission, there are two factors: transmission
(serialization) time and end-to-end latency.  For small packets, the
latter may dominate.  For a 64-byte packet, the transmission time at
100 Mb/s is about 5 microseconds and at 1 Gb/s it is about 0.5
microseconds... is there any protocol (or operating system or device)
that is sufficiently sensitive to tell the difference between 0.5 and 5
microseconds?
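As a rough sanity check of those numbers, here is a minimal sketch (Python)
that computes serialization time from frame size and link speed, counting
frame bits only and ignoring preamble and inter-frame gap; the function name
is just for illustration.

    def transmission_time_us(frame_bytes, link_bps):
        """Time to serialize a frame onto the wire, in microseconds."""
        return frame_bytes * 8 / link_bps * 1e6

    for speed_name, bps in [("100 Mb/s", 100e6), ("1 Gb/s", 1e9)]:
        print(f"64-byte frame at {speed_name}: "
              f"{transmission_time_us(64, bps):.2f} us")
    # 64-byte frame at 100 Mb/s: 5.12 us
    # 64-byte frame at 1 Gb/s:   0.51 us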

Thanks for letting me say a few words.

Regards,

Ken Christensen
Department of Computer Science and Engineering
University of South Florida
Phone: (813) 974-4761
Web: http://www.csee.usf.edu/~christen