From: Ernst_Wahl [wahl@cool.agere.com]
Sent: Wednesday, November 20, 2002 7:14 AM
To: stds-1450-4@majordomo.ieee.org
Subject: stds-1450.4: Regarding the 10/24/2002 FlowDiagram

Don,

Thanks for expressing your take on the state of development in a most thoughtful way. I'd like to respond:

> Thanks to Jim and to Ernie and others for the productive discussion
> today regarding the block diagrams as posted on 10/24. I found this very
> helpful in understanding some of the more subtle aspects of that
> model. I considered this discussion as intended to help understanding of
> that model, rather than as "is that what we should standardize on?".

A model is useful as a foundation for developing a syntax. I favor that approach over developing a syntax and then trying to divine from it what the model is. Nonetheless, I expect interplay between the definition of the model and the associated syntax during development. I think we need to standardize on a model, and certainly this is one that has been proposed as a basis.

> Here are some of my thoughts related to its applicability to
> standardization.
>
> 1. Inheritance. I consider inheritance (in the object-oriented sense) a
> very powerful concept, and I very much like the approach where a
> FlowNode calls a "test" and that "test" can be either a test method
> or a subflow. However, STIL.4 will be specifying a syntax, and it is
> less clear to me how inheritance is represented in syntax - or even
> if it is necessary or appropriate to do so. Inheritance could be
> considered an implementation aspect, and therefore not be explicitly
> identified in the language.
> For example, in looking at the STIL data
> model (Annex B - informative) of 1450-1999, I see inheritance
> (generalization - represented by a small upwards-pointing triangle
> with lines coming out each vertex) being used in places where there
> isn't much/any polymorphism (such as Pattern Statement), or where
> there is an invented base class (SignalRef) that doesn't otherwise
> appear in the language. Also, what I thought was an example of
> inheritance (PatternBurst ISA Pattern) didn't show up that way. So,
> I'm not saying inheritance is the wrong model; I'm just questioning
> whether it is a concept that will be carried forward explicitly into
> the syntax.

With respect to the proposed model, inheritance would be carried into the syntax for building user-defined test methods from primitives supplied by the standard. Where inheritance is not carried into the syntax, it remains useful for explaining the model, whether or not software implementers choose to use it.

> 2. It is interesting to me how similar a Test Flow Node is to a Test -
> but that they are related to each other in the inheritance model. As
> a designer I've learned that "similar but different" often indicates
> that perhaps more refactoring of the model is necessary/possible to
> make things that are similar actually be the same. I'm not saying
> anything is wrong here, only that more thinking may be warranted.

I would expect software vendors to implement as they see fit. Even though there are many HAS-A similarities between a FlowNode and a Test, I fail to see an IS-A relationship, and I would therefore be hesitant to use the word "inheritance" in describing the model or the syntax. If there is a benefit that I can't presently see, I'd be motivated to follow this track.

> 3. Regarding tests resolving to just a Pass/Fail. (This was still an
> open issue from earlier discussions.)
> I'm hoping we can find a way
> where the syntax is nice and simple and obvious when a test returns
> pass/fail, but that scales easily to other cases that can arise (such
> as an error return, a test that returns only a measurement, or one that
> returns one of N states). Certainly I agree that just Pass/Fail probably
> covers >90% of the situations, and that the N-ports capability at the
> enclosing FlowNode could cover the remaining cases; I'm just not
> convinced yet that that is the best way to factor this problem.

Without a doubt, we have to decide how to deal with exceptions, e.g., "divide by zero" or "can't make a measurement". Do we complicate and clutter the flow by using a test's Pass/Fail result as we normally do, or do we provide a back-door mechanism? Here's a case for both: I don't want the flow to explicitly handle every place a "divide by zero" can occur, but I would like to control the flow explicitly when a binary search method "can't make a measurement" due to faulty end points. We are very close to the line between a test specification language and a test implementation language here.

> 4. In my mind, I group the "Actions" (i.e., Pre-Action, Pass-Action,
> Fail-Action, N-Action) somewhat differently. One group is what I'd
> call flow-control actions (such as the by-pass capability, binning,
> selecting a port); these have direct execution semantics. In another
> group, I think there is a need for user-controlled actions
> (user code) that would include saving values into a variable, but
> also things like logging to a file or opening/closing a relay on the
> loadboard. I don't think we'll be able to define what those are, but
> I think we can/should accommodate those by placing some restrictions
> on where they can occur. Perhaps "only in a test method" is a
> sufficient restriction, but that seems overly limiting to me.

In a perfect world, I would try to keep what is currently labeled as Actions as flow-control actions.
The current repertoire of actions is SkipIf (pre-test action only), conditional and unconditional forms of Assign (pre- and/or post-test action), Bin (post-test action only), and grouped actions. There's already a crack in my perfect world: there's nothing to prevent a user from assigning a value to a variable that doesn't control the test flow. Allowing a variable to be assigned the return value of a user-defined function is a new wrinkle, but seems reasonable to me. Until we understand why it's too limiting, I would be tempted to have all tester-resource-manipulating statements carried out under the auspices of an instantiated test method. It seems reasonable to have a "testmethod" that does nothing but specify or load tester resources, including closing relays. Data logging I would also tie to specific test methods; e.g., the "testmethod" that loads tester resources may not have data to log. Again, we are treading the line between a test specification language and a test implementation language, i.e., how far do we want to go in describing a test implementation language?

> 5. I'm intrigued with the by-pass capability. It seems like a good idea
> and easy to implement. However, I note that it is not a capability of
> other systems I'm familiar with, and I don't remember any
> problems arising in those systems where bypassing would have been the
> solution (I guess I'm questioning how essential it is). I do wonder
> if it might lead to some more subtle issues - such as how to scope
> data produced by a test method so that it becomes invalid (or
> defaulted?) if bypassed, or if there are any customer-user-code issues
> that might arise.

You'll find this type of capability on the Credence, for example, in the form of an execution bits mask.
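To make the execution-bits idea concrete, here is a minimal sketch in Python of how such a bypass mask behaves. This is purely illustrative: STIL.4 has no such syntax yet, and every name here (FlowNode, exec_bits, run_flow, the WAFER/FINAL bits) is invented for the example, not taken from Credence or from the proposed model.

```python
# Hypothetical sketch of an execution-bits bypass mask. All names are
# invented for illustration; this is not STIL.4 or Credence syntax.

WAFER = 0b01   # run this node in the wafer-probe environment
FINAL = 0b10   # run this node at final (packaged-part) test

class FlowNode:
    def __init__(self, name, test, exec_bits):
        self.name = name            # node label in the flow
        self.test = test            # callable returning True (pass) / False (fail)
        self.exec_bits = exec_bits  # environments in which this node executes

    def run(self, env_mask):
        # Bypass the node entirely when its bits don't intersect the
        # current environment mask; report it as "skipped".
        if not (self.exec_bits & env_mask):
            return (self.name, "skipped")
        return (self.name, "pass" if self.test() else "fail")

def run_flow(nodes, env_mask):
    return [node.run(env_mask) for node in nodes]

flow = [
    FlowNode("continuity", lambda: True, WAFER | FINAL),
    FlowNode("wafer_only_leakage", lambda: True, WAFER),
]

# At final test, the wafer-only node is bypassed "in one motion" --
# the environment restriction travels with the node itself.
results = run_flow(flow, FINAL)
```

The point of the sketch is the maintenance property described above: the "do this test only in the wafer environment" condition is part of the node, so adding or removing the node never leaves orphaned skip logic elsewhere in the flow.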
In addition to its utility during development and debugging, mentioned by Jim O'Reilly, it is also a useful maintenance construct; i.e., you can add or remove a test to or from a flow in one motion, including the part that says "do this test only in the wafer environment". Other testers implement test-skipping logic as statements surrounding the test instead of as part of it. Because of differing scoping rules on various target testers, I try to keep things global.

> 6. I've found that applications engineers generally have wanted to be
> able to call more than one test method in sequence (from other
> systems' equivalents of the same Test or TestFlowNode). We've
> discussed this before, and I haven't found much support for this
> concept, but I look at what our users are doing and see that this
> feature is being used quite extensively. So I do wonder what impacts
> that would have if added to the model.

A more concrete example would be useful here, maybe a formal test case. Certainly, we have the notion of test methods with multiple personalities, e.g., production and characterization (linear, binary search). There is nothing to prevent us from defining test methods that perform more than one task, although I would expect the standard to provide primitives which could be used to build more complex multi-task test methods.

> -DVO-

-----------------------------------------------------------------------
Ernie Wahl              Agere Systems               Tel: 610.712.6720
Computer-Aided Test     22F-218B, 555 Union Blvd    Fax: 610.712.4235
Allentown, PA, 18109    Email: ejwahl@agere.com
-----------------------------------------------------------------------