Concepts in Test Flow
---------------------

Gordon Robinson, Third Millennium Test Solutions.

There are several different concepts I see in the test flow examples and discussions. I'm going to try to identify them in the hope that we'll see a way forward. I'll use the term "test" for the main entry in the flow structure, hoping we all understand roughly the same thing. "PatternExec with Levels" is a good starting point for the simple digital tests. I'll also refer to "systems" and "people" fairly interchangeably when expressing different points of view or methods seen in implemented systems.

Overall Program Structure
-------------------------

Until the Flow work (re)started, STIL was a purely declarative statement of some of the test information. Some in the STIL community see STIL as containing "most" of the test program information, and the flow extensions are seen as the place to make STIL "complete" and able to express the entire test program. Something tells me that STIL will never be complete as a test program language, and that even if STIL expresses 99.9% of the data that moves between the design and tester space, the remaining 0.1% will be both tricky and extremely important. That doesn't mean we should hold back from trying to standardize enough to get the 99.9% really smooth. But I'll try hard to avoid STIL turning into a general programming language in order to handle "a bit more". I much prefer that STIL provide a body of "named things" that can be manipulated in true procedural languages. There is a good reason that 99% of the programming languages ever implemented are now dead: most were hard for even their designers to like.

There is no industry consensus that I know of on the real scope of a "test program". Some people think of probe and final test as the same program, others that they are different programs with common pieces. There's disagreement over whether characterization is the same or a different program. And should load board checkers be part of the program or not? I don't believe that we will succeed in standardizing answers to all these questions, but we may standardize how to express whatever preference an organization wants to use.

Independent Development of Pieces
---------------------------------

Within the whole process of test program creation, there are many different tool suites and styles of test development. Almost any real test program today contains a mix of all of the following:

1. Automatically generated patterns (e.g. from TetraMax or FastScan).
2. Verification results converted to tester patterns (e.g. using Fluence TDS).
3. Hand-crafted patterns (mixed-signal tests, continuity tests, weird stuff).
4. Cut/paste pieces from earlier programs.

One of the subtle requirements on our languages is the need to integrate pieces from diverse tool sets like this, without each tool having to understand the conventions used by the others. STIL actually does quite well on this measure, certainly compared with many tester-specific languages. Examples of the features that enable this are the ability to have named collections of SignalGroups and Spec variables, so that we avoid "all creating tools must agree on variable names and signal group names", or the even worse "we use a tool to take all of the pieces and change them to use the unified name set".
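To make that point concrete, here is a minimal sketch in plain Python (not STIL syntax; the tool and group names are invented for illustration) of how collections keyed by an owning name let independently developed pieces coexist:

    # Each tool delivers its signal groups under its own collection name,
    # so no tool needs to know, or avoid, the names the other tools chose.

    class Program:
        def __init__(self):
            # (collection, name) -> list of signals; a name only needs
            # to be unique within the collection that one tool owns
            self.signal_groups = {}

        def add_signal_groups(self, collection, groups):
            for name, signals in groups.items():
                self.signal_groups[(collection, name)] = signals

    prog = Program()
    # An ATPG tool and a hand-crafted mixed-signal piece both define a
    # group called "all_inputs" without colliding.
    prog.add_signal_groups("atpg", {"all_inputs": ["A", "B", "SCAN_IN"]})
    prog.add_signal_groups("mixed_signal", {"all_inputs": ["VINP", "VINN"]})

    print(prog.signal_groups[("atpg", "all_inputs")])  # ['A', 'B', 'SCAN_IN']

The point is only the naming discipline: integration never requires a renaming pass over anybody's files.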
There are some ways in which STIL doesn't do as well: there aren't well-understood conventions for keeping STIL data in multiple, independently developed files, and there is a hidden concept (or several) in the area of "Timings and Patterns that should be handled compatibly". One can get clues about that compatibility from the set of PatternBursts and PatternExecs, but more explicitness would probably simplify implementations.

Extensibility
-------------

A simple fact of life is that we will never see all of the test methods standardized (OK, not within the active careers of those participating now). New test methods are always being developed, and personally I hope that continues for many more years. This implies, at least to my tiny mind, that we will never have a syntax, schema, or whatever for all of the test method information that actually gets used. We cannot "lock in" the data for test methods in our flow syntax. This is one reason why I like "links" as in web technology, where one place references another without needing to embed the syntax of the destination.

Tests Pass, Fail, or What?
--------------------------

Some systems have a strong concept that each test in a flow either "passes" or "fails". Other systems allow several shades of pass or fail, and may even be non-judgmental, just identifying several different exits from each test. Personally I'm a strong believer in tests producing pass/fail decisions. That seems to be the main concept in so many test execution systems that I see it as fundamental.

Several Flows to Handle Several Events
--------------------------------------

Many views of a test program have several flows, intended to handle different events. Some applications of this concept include:

1. Flows for probe and package test.
2. Flows for initializing the program.
3. Flows for characterization.
4. Flows for load board checkout and custom calibration.
5. Flows for debugging this one part.
6. Flows invoked for powerdown of a device when testing stops.

Is a Flow a Sequence, a Tree, a Graph, a State Machine, or a Turing Machine?
----------------------------------------------------------------------------

Many flows are (close to) simple linear sequences of tests. Sometimes a flow allows "some" branching, so that it's a tree structure (no reconvergence after branching). Other flow structures allow acyclic or even cyclic graph structures. And some would like them to be state machine capable, or even Turing equivalent.

Group Nodes in Flows
--------------------

Many systems allow a "group" test concept to express hierarchy and convenience in the flow structure. A flow may be treated as just a special type of group. Some systems allow a group node to set up some instrument conditions (such as power supply settings) for use by the tests in the group.

Flow, Stopping, Binning
-----------------------

Implemented test systems have shown many ways of mixing the concepts of test flow, stopping testing as soon as the answer is known well enough, and binning strategies.

Which Test to Run Next
----------------------

Within a flow, there are mechanisms that determine which test to run next. The simplest mechanism identifies one test to go to if this test passes, and another if this test fails. Multiple shades of pass/fail give different next tests as the destination. Systems may have simpler handling of the "next" test in sequence, whether in a graphical view, in tabular formats for flows, or in the chosen syntax.
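As a deliberately minimal sketch of that simplest mechanism, here is a pass/fail next-test table in plain Python; the test names are invented, and None stands for reaching a terminal node:

    # Each flow entry names one destination per outcome; None means stop.
    flow = {
        "continuity": {"on_pass": "leakage", "on_fail": None},
        "leakage":    {"on_pass": "func",    "on_fail": None},
        "func":       {"on_pass": None,      "on_fail": None},
    }

    def run_flow(flow, start, run_test):
        """run_test(name) -> True for pass, False for fail."""
        name = start
        while name is not None:
            passed = run_test(name)
            print(name, "PASS" if passed else "FAIL")
            name = flow[name]["on_pass" if passed else "on_fail"]

    # Example run in which only "leakage" fails: the flow stops there.
    run_flow(flow, "continuity", run_test=lambda name: name != "leakage")

Multiple shades of pass/fail generalize the table to one destination per exit, and whether the destinations are restricted to form a sequence, a tree, or a general graph is exactly the structural question raised above.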
Facts Determined When Testing
-----------------------------

Decisions about the next test to run, binning, stopping, and varying test behavior can often be described in terms of the "facts" established when testing. These facts can include whether particular tests passed or failed, measurements falling in named ranges (very low, low, normal, high, real high), or values in variables produced by test algorithms.

When to Stop Testing
--------------------

One view of when to stop testing is that a terminal node is reached in the structure of the flow. Systems that use "disqualify bins" may stop testing as soon as all "passing" bins have been disqualified. Systems often provide a "Stop/Continue on Fail" option to cause more testing to take place.

Which Bin the Device Goes into
------------------------------

Some systems identify the "bin" with the terminal node in a flow execution structure. Some systems disqualify bins when tests fail. Some systems allow binning to be specified in a decision-table type of structure, where the pass/fail and other facts from various tests can be combined.

Making Tests Vary Behavior on Different Paths
---------------------------------------------

When the same test can be run several times in one flow, there needs to be some behavioral difference between the runs. This could arise from things that happen in other tests obeyed along the path. For instance, power supply setup tests may be used along the path; these establish different conditions under which the "same" test runs.

Keep Separate Concerns Separate
-------------------------------

I believe that we need to keep a strong separation of concerns, so that we can successfully standardize some pieces, yet not everything, and have a standard that gets used as intended. In particular, I believe that the flow "language" should "link to" or "reference" the tests run in that flow. We should never allow ourselves to get into the syntactic gridlock of having to standardize almost everything in order to get anything usable. I also believe that flow and binning, while related, are separate concerns and should be defined separately.
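To illustrate that separation, here is a minimal sketch in plain Python in which the flow side only records facts, and binning is a separate decision table evaluated over those facts; every test and bin name is invented for illustration:

    facts = {}  # established while testing, e.g. {"leakage": "fail"}

    def record(test_name, passed):
        facts[test_name] = "pass" if passed else "fail"

    # Binning as a decision table, evaluated in order; first match wins.
    bin_table = [
        (lambda f: f.get("continuity") == "fail",        "BIN_7_CONTACT"),
        (lambda f: f.get("leakage") == "fail",           "BIN_5_LEAKAGE"),
        (lambda f: all(v == "pass" for v in f.values()), "BIN_1_GOOD"),
    ]

    def assign_bin(facts):
        for predicate, bin_name in bin_table:
            if predicate(facts):
                return bin_name
        return "BIN_9_OTHER"  # no rule matched

    record("continuity", True)
    record("leakage", False)
    print(assign_bin(facts))  # BIN_5_LEAKAGE

A disqualify-bin system inverts the same idea: each recorded fact removes bins from a candidate set, and testing can stop as soon as no "passing" bin remains.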