
Re: [BP] Simulations for EIT



Hi Pat,

 

I wasn’t talking about 90%. I was talking about 99.99+%. If the worst-worst case were at 10 sigma, would that be overdesign? Can you even calculate 10 sigma on a hand calculator? :)
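
Actually, it is easy enough in double precision, assuming a plain Gaussian tail; a quick Python check:

# One-sided Gaussian tail probability P(X > n*sigma), via the
# complementary error function; nothing beyond the standard library.
import math

for n_sigma in (3, 6, 10):
    p = math.erfc(n_sigma / math.sqrt(2)) / 2
    print(f"{n_sigma:2d} sigma: P = {p:.2e}")
# 10 sigma comes out near 7.6e-24.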

 

I use all sorts of distributions in my multi-variable assessments. It turns out that with 5 variables the impact of the distribution choice is small, and with 10 or more variables it is irrelevant. It’s the “Central Limit Theorem” at work, more or less.
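
A quick Monte Carlo sketch shows why, with uniform terms standing in for an arbitrary non-normal choice (each term scaled to unit variance so the comparison is fair):

# Compare a far quantile of an n-term budget built from uniform terms
# against the same budget built from normal terms.
import numpy as np

rng = np.random.default_rng(1)
N = 500_000
q = 0.999  # 99.9th percentile of the total budget

for n in (5, 10, 20):
    uni = rng.uniform(-1, 1, (N, n)).sum(axis=1) / np.sqrt(n / 3)
    gau = rng.standard_normal((N, n)).sum(axis=1) / np.sqrt(n)
    print(f"n={n:2d}: uniform terms {np.quantile(uni, q):.3f}, "
          f"normal terms {np.quantile(gau, q):.3f}")
# The two quantiles converge as n grows; the distribution choice washes out.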

 

However, I do understand what you are saying. There are statistical variations and there are blocking conditions. Our challenge is to determine which are the blocking conditions. Is drive voltage blocking or stochastic?

 

Perhaps we can get consensus on this at the April 19th meeting, if you like.

 

... Rich

 

 

 


From: Pat Thaler [mailto:pthaler@BROADCOM.COM]
Sent: Friday, March 17, 2006 1:03 PM
To: STDS-802-3-BLADE@listserv.ieee.org
Subject: Re: [BP] Simulations for EIT

 

Rich,

 

One can't assume that implementations are done such that the limits in the spec are at the 3 sigma points with the mean about the middle. Implementations won't necessarily be built to the average of the spec. For instance, with past standards I've seen cases of interoperability problems where one vendor chose to put their transmit level close to the minimum with a tight standard deviation (trying to optimize for emissions, power consumption, etc.) and another vendor put their input sensitivity near the maximum. Another example is that sometimes one or more of the specs is particularly difficult to achieve, and designs have an average value that is very close to the limit. A vendor may even take a yield hit and sort out those chips that are over the limit, so occurrence at or near the limit value happens much more often than 3 sigma would predict.
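
To put rough, made-up numbers on the first scenario: say the spec limits just touch, vendor A centers its transmit level 2 sigma above the TX minimum, and vendor B centers its required input 2 sigma below the RX maximum:

# Hypothetical illustration: each vendor looks ~98% compliant against
# its own limit, yet a noticeable fraction of random pairings has no margin.
import numpy as np

rng = np.random.default_rng(2)
N = 1_000_000
sigma = 1.0
tx_min = rx_max = 0.0                            # spec limits just touch

tx = rng.normal(tx_min + 2 * sigma, sigma, N)    # vendor A hugs the TX minimum
rx = rng.normal(rx_max - 2 * sigma, sigma, N)    # vendor B hugs the RX maximum

print(f"vendor A compliant: {np.mean(tx > tx_min):.2%}")
print(f"vendor B compliant: {np.mean(rx < rx_max):.2%}")
print(f"pairings with no margin: {np.mean(tx < rx):.3%}")  # about 0.2%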

 

In past standards I've worked on, we have usually assumed that implementations could be running at the limits, because we don't have a basis for assuming a distribution. Where we have applied distributions, it has been to channel characteristics for which we had some basis for believing there would be a distribution - e.g. not all channels will have the worst crosstalk situation.

 

Also, a typical backplane system will have a lot of links. If 90% of them work, the system still has a problem. We have to do a good deal better than that.
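
Assuming independent links, the arithmetic compounds quickly (the link counts here are just examples):

# A system works only if every link works: P(system) = p ** n_links.
for p in (0.90, 0.999, 0.99999):
    for n in (48, 200):
        print(f"per-link p = {p}: {n:3d} links -> P(system) = {p**n:.3g}")
# At p = 0.90, even a 48-link system almost never comes up clean.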

 

Pat

 


From: Mellitz, Richard [mailto:richard.mellitz@INTEL.COM]
Sent: Thursday, March 16, 2006 4:20 PM
To: STDS-802-3-BLADE@listserv.ieee.org
Subject: Re: [BP] Simulations for EIT

Ya know… I just did a statistical analysis of the probability of units failing the return loss we just voted on. Under some assumptions I made, I came up with 2.5 units per 1000 that would fail RL and still pass. I heard folks thought this might be something like 90%. The design question is what quality level is acceptable. I understand this from a business perspective because I can relate it to cost. I don't know how to apply it to standards work. Maybe it does mean that all limits must work at worst case regardless of the likelihood. Maybe this is a Pandora's box too. I think our cost is not dollars but delay in producing a workable standard.
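
For scale, 2.5 per 1000 is roughly what you get when a limit sits about 2.8 sigma from the mean of a normal margin (the normality is just an assumption for the arithmetic):

# One-sided normal tail: failure rate for a limit k sigma from the mean.
from math import erfc, sqrt

k = 2.81  # limit about 2.81 sigma out
p_fail = erfc(k / sqrt(2)) / 2
print(f"{1000 * p_fail:.1f} failures per 1000 units")  # ~2.5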

 

The 80% Joel was talking about referred to design engineers; it is not a statistic of a design’s quality. If I put ±3 sigma on all our limits in the spec, I think we are more in the 99.9+% quality range right now.
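
One way to see the headroom: if n independent terms each sit at their 3 sigma limit simultaneously, that worst-worst case lands 3*sqrt(n) sigmas out on the combined (RSS) distribution:

# Worst-case stack of n terms, each at its 3-sigma limit, expressed in
# sigmas of the statistical sum: 3*n / sqrt(n) = 3*sqrt(n).
from math import erfc, sqrt

for n in (5, 10):
    k = 3 * sqrt(n)
    p = erfc(k / sqrt(2)) / 2
    print(f"n={n:2d}: stack at {k:.1f} sigma, P = {p:.1e}")
# For n = 10, the worst-worst case sits near 9.5 sigma.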

 

The task at hand was to determine whether the informative channel spec sufficiently predicts confidence related to the EIT receiver test. That's why we used the term "confidence" and not "limit." Remember, that is also why we chose to make the channel informative. We showed we couldn't constrain all three (tx, channel, rx) and still create a reasonable and marketable solution. So in light of that, I believe we should constrain the analysis to what is reasonable. Maybe we should do it both ways and discuss what is reasonable at the April 19 meeting.

 

… Rich