Phone Conference P1450.1 Working Doc Subgroup

Thurs August 14, 10:00 am PDT

 

Attendees:

Greg Maston (chair & scribe)

Bruce Kaufman

Peter Wohl

Tom Bartlestein

Doug Sprague

 

Documents

 

   http://grouper.ieee.org/groups/1450/dot1/p1450.1-D16-review-resolution.pdf  (open issues)

   Document for Tom Bartlestein (see below)

 

Agenda

  1. ITC meeting

  2. Doc clarification

  3. Fail Feedback


Meeting discussion

 

Topic 1- ITC:

  A face-to-face meeting has been proposed for ITC, Mon Sep 29, with .3 in the morning and .1 in the afternoon, roughly 1-5pm. There has been limited response to this meeting so far (Doug unavailable, Peter in the afternoon, Greg and Tony will be there). A goal for this face-to-face meeting is to approve the draft in existence at that time for balloting.

 

A concern was raised about acceptance by .6, and Greg stated that the expectation is that we would clear all .6 issues before this meeting.

 

Both Bruce and Tom indicated tentative availability, but their ITC attendance has yet to be made official. Assuming they attend ITC, they would be available.

 

Topic 2:

Tom identified a concern with the clarity of the current proposal, specifically the \j and \m constructs. His primary confusion was with the justification for these constructs. His *perception* was that they exist primarily to support bidirectional handling, but that was not stated, and the application examples were not sufficiently clear on the motivation.

 

Peter identified that the original motivation for these constructs

came from bidirectional handling.

 

After Greg started to review section 18.2, he took [AI1] to rewrite this section. It is entirely missing any syntax discussion of these constructs, and needs more explanation. It is also inconsistent with definitions in Annex H that were derived in a STIL dot1 meeting years ago.

 

Tom also identified that this construct is difficult to understand because of the separation of all its components (\j in pattern data, WFCMap in Signals). He pointed out that Annex H may address this particular issue; Annex H should probably be referenced from section 18.2.

 

Topic 3:

  Peter summarized the previous discussions on the cross-referencing

mechanism between pattern data and fail information, with the

following, hopefully not too badly reworded, generalization:

  The current proposal is a simple construct that does not impose

  proper usage. The STIL generator needs to be aware of where it

  places these constructs, in order to get back informative fail data,

  and the STIL fail-detecting tool needs to apply the proper

  interpretation principles on this data to generate appropriate

  information.

 

Fundamentally, the principle of operation for this construct is a "label" (not (necessarily) a STIL label, just the term "label") that is defined at "important points", and a concatenation process that appends labels through procedure and macro calls. There is also a default label at the start of the patterns.
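The concatenation principle can be sketched as follows. This is an illustrative Python sketch only; the class, the "." separator, and the label names are assumptions for illustration, not anything defined by the draft:

```python
class LabelContext:
    """Tracks the concatenated label as execution enters and leaves
    procedure and macro calls (illustrative sketch, not draft syntax)."""

    def __init__(self, default_label="pattern_start"):
        # A default label is in effect at the start of the patterns.
        self.stack = [default_label]

    def enter(self, label):
        # Entering a procedure or macro call appends its label.
        self.stack.append(label)

    def leave(self):
        # Returning from the call removes the innermost label.
        self.stack.pop()

    def current(self):
        # The concatenated label sequence identifies the current point.
        return ".".join(self.stack)


ctx = LabelContext()
ctx.enter("load_unload")
ctx.enter("shift_proc")
print(ctx.current())  # pattern_start.load_unload.shift_proc
```

A fail reported against this concatenated label plus a cycle count locates the failure relative to that point in the call structure.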

 

The issue of whether to use STIL labels or use the X-ref statement was

discussed later.

 

A fail-logging system could report a failure that occurred, for

example, 10,000 cycles from the start of the patterns (using the

default label at the start of the patterns as a reference), or it

could identify the failure as occurring 10 cycles after the last-seen

concatenated set of labels. Both are valid locations, and both may

have useful applications. But there needs to be co-ordination between

the fail-logging system and the fail-interpreter so the interpreter

gets the information necessary. That co-ordination is defined next:

 

The requirement that the fail-logging system reports AT LEAST the

failure relative to the LAST SEEN LABEL is the key expectation to what

the fail-interpreter is expected to support. The fail-reporting

mechanism may report additional labels as well, but the last-seen

label and cycle-count from that label is required.

 

A concern was raised about failing during loops. If a label is placed

inside a loop, then you only see the number of cycles offset from the

last time the vector associated with that label was

executed. Therefore, the *recommendation* was made to NOT place labels

inside loops. However, placing the label just before the loop will

cause the number of cycles since that label to be counted. This does

still require some interpretation on the fail-reader side (cycle 15 of

a 3-cycle loop would be the 5th iteration through that loop), but the

data is available to make that interpretation.
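The interpretation the fail-reader performs for a label placed just before a loop can be sketched as arithmetic. The helper below is hypothetical (not part of the proposal); cycle counts are taken as 1-based to match the "cycle 15 of a 3-cycle loop" example:

```python
def loop_iteration(cycle_offset, loop_length):
    """Map a 1-based cycle offset, counted from a label placed just before
    a loop, to the loop iteration it falls in and the cycle within that
    iteration. Illustrative helper; assumes the label immediately
    precedes the loop body."""
    iteration = (cycle_offset - 1) // loop_length + 1
    cycle_in_iteration = (cycle_offset - 1) % loop_length + 1
    return iteration, cycle_in_iteration


# Cycle 15 of a 3-cycle loop is the 5th iteration, at its 3rd cycle.
print(loop_iteration(15, 3))  # (5, 3)
```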

 

Fail data contains the concatenated label sequence, the cycle count

after that label, and the failing signal.

 

Some discussion occurred about the limitation around test-data

transforms; by definition, these labels are effective only in the STIL

data they are defined with. If that data is transformed, then the

transforming tool is responsible for supporting reverse-mapping of any

transform operations, to allow the labels to be interpreted in the

incoming context.

 

A question was raised about tagging inside a Shift construct, and the

same context as the Loop was presented: ideally the label placement is

before the Shift, to allow the cycle-count to be correlated with the

data-offset and ultimately mapped to a corresponding register that

contains the unexpected value. The point was raised that if the last

label was placed 2 Vectors before the Shift started, then the

interpreting tool needs to be aware of where that label was placed

during generation, and compensate accordingly for that label

placement.

 

A request was made to carefully and fully define the term "cycle offset" which we are so liberally bandying about here.

 

Peter presented another usage of this data, a perspective provided by Jason Doege, who wasn't present at this meeting: the other application of this data is to support test-translation processes where it is necessary to know (or have identified) pattern-unit boundaries. Peter believes that this labeling strategy addresses this issue as well.

 

There are some recommendations of where these labels should occur:

 - at the start of a discrete integral test unit aka a "pattern unit".

 - at the start of a load/unload process (notably when scan data

 starts to be shifted outside the context of a Shift statement)

 - at the end of a load/unload process (notably when scan data

 continues to be shifted outside the context of a Shift statement)

 - at the start of a capture operation.

 

The following context was added to this as well: in contexts that require "timed tests", notably a clock "release" and a clock "capture" operation, these labels may be appropriate to identify these distinct operations as well.

 

Doug raised some more concerns about transforms, for instance where a single Vector statement gets broken into 3 separate Vectors at test. The reverse-mapping operation needs to be maintained. Peter identified that this might even be accomplished by the tool performing the transformation, by generating labels in the transformed data that indicate the transform effects, and supporting re-mapping back to the original label by understanding these constructs.
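One way a transforming tool could retain the reverse mapping is to record, for each original Vector, how many test cycles it became. The sketch below is purely illustrative (the draft does not define this mechanism); it builds such a map so a failing test cycle can be traced back to its original vector:

```python
def build_reverse_map(expansion):
    """expansion[i] = number of test cycles the i-th original Vector
    became (e.g. 3 when a single Vector was broken into 3 Vectors at
    test). Returns a list mapping each test cycle back to the index of
    its original vector. Illustrative sketch only."""
    reverse = []
    for original_index, cycle_count in enumerate(expansion):
        reverse.extend([original_index] * cycle_count)
    return reverse


# First original Vector expanded to 3 cycles, second kept as 1 cycle:
reverse_map = build_reverse_map([3, 1])
print(reverse_map)     # [0, 0, 0, 1]
print(reverse_map[2])  # test cycle 2 maps back to original vector 0
```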

 

Doug requested a cohesive example of all of this, which Greg recorded

as [AI2] to Tony and Greg.

 

At this point, Tom's proposal on identifying the

fails-collected-region was reviewed. Greg indicated that the labels as

they've been discussed should be sufficient to represent the

PU_IDENTIFIER in Tom's email, to mark the region of the test over

which fail data has been collected.

 

Peter started a topic on label uniqueness ---
  The recommendation is that the distance from the last-seen label is important. But there are some contexts where additional label-distance information may be useful as well.

 

For instance, consider a single ATPG "pattern_unit" that is looped at the tester a number of times, and fails are reported against all iterations of that looped pattern. In this case, the pattern will have labels internal to this outer loop operation (placed by the ATPG generation), and the ATPG tool will need these labels --- even though they are inside a loop --- to perform its diagnosis on that data [since this loop was added as a post-generation transform, the ATPG cannot know about it]. HOWEVER, the user may want to ALSO have the failures reported against a label OUTSIDE of this loop, in order to identify which iterations of the loop actually failed. Since the user ought to know how large this pattern loop is, the user can calculate which iterations failed.
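The calculation the user would perform against the outside-loop label can be sketched as below. The helper is hypothetical; a 0-based cycle offset from the outside-loop label and a user-known pattern-unit length are assumed:

```python
def failing_iteration(cycle_offset, pattern_unit_length):
    """Given a fail's 0-based cycle offset from a label OUTSIDE the
    tester-applied pattern loop, and the (user-known) length in cycles
    of one pattern-unit iteration, recover which loop iteration failed
    and the cycle within that iteration. Illustrative sketch only."""
    iteration, cycle_in_unit = divmod(cycle_offset, pattern_unit_length)
    return iteration, cycle_in_unit


# A 1000-cycle pattern unit looped at the tester: a fail at offset 2345
# from the outside-loop label lands in iteration 2, cycle 345.
print(failing_iteration(2345, 1000))  # (2, 345)
```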

 

This is one example of where multiple sets of label-offset information

per failure may be useful.

 

Peter requested that the fail statement support multiple labels-and-offsets to address this behavior, with the recommendation or requirement that the last-seen label always be present.

 

Peter identified that we do not want to restrict definitions to require either single labels, or the presence of unique labels in the flow: because labels are concatenated, what may appear to be the same label at different locations may well be presented as a unique sequence of concatenated information from the run-environment.

 

As a consequence of these considerations, Peter voiced a preference to

use the X statement for labeling these behaviors rather than

stretching the current label constructs.

 

After Peter's presentation of his debugging scenario above, which violated the earlier statement about placing labels in loops, the following statement (or words to this effect) was requested to be inserted into the presentation of this behavior: "A label inside a loop is ambiguous or may result in incompletely defined information, and should be applied with caution."

 

The question of the option for multiple fail reports was

considered. The Working Group agreed that the fail reports should

follow more-or-less standard naming conventions for STIL blocks -

there may be a single unnamed fail-report block, or multiple named

fail-report blocks in one STIL context. For instance, multiple blocks

could contain the results of failed devices across one wafer or lot...

 

After a bit more discussion on whether to use labels or the X statement, in which Greg recommended using the X statement to avoid conflicts with current STIL implementations that made assumptions on label behavior based on current (1450-1999) definitions, the Working Group agreed to use the X statement to support this construct. Peter requested that this construct be presented as an "advanced labeling construct", and that the definition not be limited to fail-feedback contexts, as the power of the construct is likely to find application in other labeling contexts (for instance, Jason's test-translation needs).

 

Doug then raised a new perspective on this issue. He raised the concern about using this mechanism to report not just fail data, but a full "dump" of all states during a scan-unload operation, to feed back to a diagnosis tool. The concern is: is there redundant information in the current fail-feedback construct that ought to be more concisely represented if the tester were running in a mode that was just dumping the states out?

 

Tom extended this notion to not be limited to a single signal, but a collection of signals --- in fact, perhaps, all the signals at test. The consideration was to perhaps apply SignalGroups to the fail data, which Peter was apprehensive about, as the Groups could be general signal-reference expressions, and in this context could contain variables that might even change value as the dump (patterns) progresses.

 

This was left as [AI3] for all members of the Working Group to

consider and generate proposals to address in the next meeting.

 


Next meeting

Next phone meeting Aug 28.

 

AIs

new

[AI1] Greg augment section 18.2 to contain syntax and semantics.

[AI2] Tony (w/Greg) work up a complete representation of this proposal.

[AI3] Working Group - consider the consequences of large amounts of

      fail data and identify proposals if there is concern here.

 

old

[AI1] Greg - generate examples requested at 6/27 meeting

[AI2] Tony - Work with Paul Reuter to see if any additional syntax or

interpretations are required to support the lock-step implementation as

currently being envisioned by the P1450.6 working group.


Document for Tom Bartlestein

PatternFailReport {
        Pattern PAT_NAME;
        PatternBurst PAT_BURST_NAME;
        PatternExec PAT_EXEC_NAME;
        FailData { // ...as before
        } // end of fail data
        FailsCollected {
                ( CollectedFrom PU_IDENTIFIER CollectedTo PU_IDENTIFIER; )*
                ( CollectedFrom PU_IDENTIFIER {
                        (Loop loop_number;)*
                        (Macro macro_name; )*
                        (Measure measure_number; )*
                        (Procedure proc_name; )*
                   }
                  CollectedTo PU_IDENTIFIER {
                        (Loop loop_number;)*
                        (Macro macro_name; )*
                        (Measure measure_number; )*
                        (Procedure proc_name; )*
                   }
                )
        } // end of fail data collected info
} // end of PatternFailReport

 

The PU_IDENTIFIER in CollectedFrom indicates the start of a pattern

range where the test equipment was potentially collecting failure data,

if failures occurred.  The CollectedTo PU_IDENTIFIER indicates the last

measure for which failure data was potentially collected.  The details

of the syntax of PU_IDENTIFIER and its attributes are exactly the same

as the PU_IDENTIFIERs which appear in the FailData block itself.