
Re: [8023-POEP] Presentations for the ad hoc meeting today




Hi Steve,

 

Thanks for the quick analysis.

 

Your end result makes sense: the main reason we did classification in 802.3af was to save power supply costs.

Your results support that.

 

The 802.3at objectives require enhancements to classification, especially its granularity, since system vendors were not happy with the current situation.

I expect (as shown in previous presentations) that Power Management will reduce system cost dramatically.

 

We are all aware that we need to find the point at which 802.3at gets further improvement for a negligible investment in an increased number of classes.

 

I'll check our database for testing costs (compared to IEEE 802.3af) and power supply costs, and review your analysis, which looks reasonable.

Yair

 


From: owner-stds-802-3-poep@xxxxxxxx [mailto:owner-stds-802-3-poep@xxxxxxxx] On Behalf Of Steve Robbins
Sent: Friday, March 03, 2006 2:06 AM
To: STDS-802-3-POEP@xxxxxxxxxxxxxxxxx
Subject: Re: [8023-POEP] Presentations for the ad hoc meeting today

 

 

_____________________________________________
From: Steve Robbins [mailto:steven_robbins@xxxxxxxxxxxxx]
Sent: Thursday, March 02, 2006 10:24 AM
To: 'STDS-802-3-POEP@xxxxxxxxxxxxxxxxx'
Subject: RE: [8023-POEP] Presentations for the ad hoc meeting today

Guys,

During today's classification ad hoc meeting, there was a lot of discussion about the cost of increasing classification granularity.

I think that looking at this from an economic viewpoint is probably a very good idea.  Below is my rough attempt to estimate costs.  I welcome comments and other methods.

1.      This is all about PSE costs.  PD costs are outside the scope of my analysis.
2.      I can’t address PSE development costs (non-recurring engineering), since nobody will give any info on that.  So the following only looks at production costs.

3.      Increasing the number of classes will increase the cost of manufacturing testing.
a.      Assume the cost of testing is linearly related to the number of test cases.
b.      Let Ctotal= Total cost of producing a PSE, including material and testing.
c.      Let Ctest=Total cost of testing a PSE during manufacture.  That’s PoE tests plus all other non-PoE tests.
d.      Let Nclass=Number of test cases required (under Af) to verify the PSE's ability to recognize classification signatures.

e.      Let Ntotal=Total number of manufacturing test cases for a PSE (under Af).  That's the total for PoE and non-PoE tests.

f.      Let Nat=Number of class signatures proposed under At.  (There are 4 classes under Af, including class 0.)
g.      Then the cost increase = (Ctest/Ctotal)*(Nclass/Ntotal)*(Nat/4)
h.      From informal discussions, I gather (Ctest/Ctotal)=5 to 10%, and (Nclass/Ntotal) is approx 5%.  But feel free to plug in your own numbers.

i.      Therefore, if we let Nat=30, the cost of the system goes up approx 1.9% to 3.8%.  (The first sketch after this list reproduces the arithmetic.)
4.      In theory, more class granularity would reduce wasted power, and lower material cost by allowing a smaller main power supply to be used.

a.      Let Cps=Cost of main power supply in an Af-PSE
b.      Let Waf=Worst-case wasted power ratio due to limited granularity of present class scheme (Af).  Example: If a PD needs 6.5W then it presents a class 3 signature (see Table 33-10), and the PSE must allocate 15.4W (see Table 33-3).  Therefore worst-case Waf=15.4/6.5=2.37.

c.      Let Wat=Worst-case wasted power ratio due to granularity with new scheme (At).  For 30 classes (Nat=30) on an exponential scale covering 2W to 100W, I’ve calculated Wat=1.15.

d.      The relative cost of the smaller power supply is approx (Cps/Ctotal)*(Wat/Waf).
e.      From informal discussions, I gather (Cps/Ctotal) is approx 50%.  (I don’t know if that was for a midspan or endspan.)

f.      Then the new supply would cost approx (50%)*(1.15/2.37) = 24% of total system cost, for a savings of approx 50% - 24% = 26%.  (The second sketch below checks this.)
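
To make the item-3 arithmetic easy to check, here is a minimal Python sketch of the test-cost model.  The 5-10% and 5% ratios are the informal estimates from 3.h, not measured data:

# Test-cost model from item 3.g: classification test cost scales
# linearly with the number of class test cases.
def test_cost_ratio(n_at, ctest_over_ctotal, nclass_over_ntotal, n_af=4):
    # Treats the whole At classification test cost as the increase,
    # per item 3.g; close enough when Nat >> 4.
    return ctest_over_ctotal * nclass_over_ntotal * (n_at / n_af)

for ctest_over_ctotal in (0.05, 0.10):   # informal 5% to 10% estimate
    print(f"{test_cost_ratio(30, ctest_over_ctotal, 0.05):.1%}")
# Prints 1.9% and 3.8%, matching item 3.i for Nat=30.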

So, about 4% extra test cost could facilitate a roughly 26% reduction in material cost, or a net reduction of about 22%.  Obviously, this doesn't include the extra material cost associated with the new class protocol (be it ping-pong or time-based).
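
And a companion sketch for item 4 and the net figure.  The exponential spacing function is just one way to reproduce the 1.15 figure (I'm assuming 30 levels spaced evenly on a log scale), and the 50% and 4% inputs are the informal estimates quoted above:

def worst_case_waste(n_classes, p_min=2.0, p_max=100.0):
    # Class levels spaced exponentially from p_min to p_max; the worst
    # case is a PD needing just over a class boundary, so the waste
    # ratio equals the step between adjacent levels.
    return (p_max / p_min) ** (1.0 / (n_classes - 1))

w_af = 15.4 / 6.5              # ~2.37, the class 3 example in 4.b
w_at = worst_case_waste(30)    # ~1.14, rounded to 1.15 in 4.c
cps  = 0.50                    # Cps/Ctotal, informal estimate (4.e)

new_ps_cost = cps * (w_at / w_af)   # ~24% of system cost (4.d)
ps_savings  = cps - new_ps_cost     # ~26% of system cost
net_savings = ps_savings - 0.04     # minus ~4% extra test cost
print(f"{new_ps_cost:.0%}  {ps_savings:.0%}  {net_savings:.0%}")
# Prints 24%  26%  22%.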

I don't know if 30 is the optimal number of classes; it's just one number the ad hoc group was throwing around.

I realize this is a crude analysis, but then engineering economics usually is.  Can anyone come up with better equations, or more realistic numbers to plug in?

Can anyone come up with a reasonable analysis that shows no cost savings by increasing class granularity?

Steve