TestSockets:
A Framework for System-On-Chip Designs

White Paper, Version 0.4

by

Bernd K. Koenemann (LogicVision, Inc.)
and
Kenneth D. Wagner (S3, Inc.)

April 1997


1. Objectives

This public white paper is meant to initiate a discussion and thought process leading to an industry-wide technical framework and associated specification language for how to integrate pre-designed cores, macros, or megacells (Note: in the following we will use "core" as the representative term for all) with pre-defined test requirements and test data into a testable System-On-Chip design.

The objectives of an industry-wide framework and specification language in this area are

The white paper is submitted to the IEEE P1500 working group as a "suggestion for a proposal". The working group will have to decide whether or not pursuing standardization of such an information model is worthwhile, and whether or not the described architecture can serve as a viable starting point.

Please understand that this document is a work in progress.

2. Background

Deep sub-micron semiconductor technologies enable the integration of complex pre-designed system components together with user-defined functionality within a single chip. The result is a System-On-Chip design.

In a traditional Application Specific Integrated Circuit (ASIC), the user builds the desired functions essentially out of small, pre-built gate-level building blocks from a cell library. The test requirements for each cell typically are encapsulated in an explicit or implicit gate-level fault model (e.g., stuck-at faults). The objective of fault-oriented test generation (whether automatic or manual) is to construct test sequences for the complete design, which locally deliver the test conditions implied by the fault model (e.g., a logic '1' is locally needed to test for a stuck-at-'0' fault), and which make any resulting faulty behavior visible to the tester (by sensitizing one or more paths from the fault location to an observable point).

The System-On-Chip paradigm stipulates the widespread use of much larger and more complex pre-designed building blocks (embedded cores or macros) in addition to the relatively simple gate-level ASIC cells. Such cores come in a wide variety of flavors, ranging from "soft" cores delivered in some higher-level synthesizable Hardware Description Language (HDL) all the way to "hard" cores that are fully implemented down to the physical mask layers. From a test development point of view, cores can be mergeable or non-mergeable. Mergeable cores are "merged" with other mergeable cores and user-defined functions. The merged core loses its identity and becomes indistinguishable from the other pieces. The composite structure is tested as one unit. Non-mergeable cores, by contrast, retain their own identity and are tested separately from the rest of the logic. These non-mergeable cores come with specific test requirements and/or pre-defined test patterns rather than a gate-level representation and associated explicit fault model that could be merged with the ASIC-style user-defined functions for conventional test generation.

Notwithstanding the importance of guidelines for dealing with mergeable cores, the P1500 working group has tentatively decided to focus on the non-mergeable variety. In light of this decision, we restrict the following discussion to how such non-mergeable cores can be handled from a chip-level testing point of view.

In reality, the issue is neither new nor radical. Embedded memory cores (RAM/ROM) have been used quite extensively for several years now, at least at the high end of the ASIC spectrum. Memory cores generally are not treated as gate-level structural entities, but as "monolithic" building blocks to whose inputs very specific test pattern sequences must be delivered, and whose outputs must be monitored for correct responses. The main problem to deal with is that after embedding into a complex design, the core Inputs/Outputs (core I/Os) are not directly accessible from the chip periphery. In that sense, testing embedded memories is no different from testing other embedded cores. The generic objective is to develop and implement access methods that make it possible to deliver pre-defined stimulus/measure events to the I/Os of the embedded core, and to generate comprehensive tests for the rest of the chip in the presence of the core.

The industry has responded to the core test challenge with mostly proprietary approaches that logically "un-embed" the core for testing. Probably the best known method is to insert multiplexing logic around each core such that, in a special test mode, each core input can be controlled from a corresponding chip input and each core output can be observed at a corresponding chip output. The multiplexing logic establishes a simple test pattern translation method by replacing each reference to a core I/O in a core test pattern with a corresponding reference to a chip I/O (I/O translation map) combined with control signals that establish the proper test configuration. Over the years, numerous proprietary improvements of this basic method have been developed and used. The improvements, for example, allow for sensitizing existing logic paths rather than adding explicit multiplexers, extending the access paths through other combinational logic cores (transparent mode) or across register boundaries (clocked access), exploiting the scan cells in scan-design networks to provide serial access, and/or testing several cores simultaneously. To take advantage of some of the additional options, it becomes necessary to augment the simple I/O reference translation with more sophisticated protocol translation capabilities for test pattern mapping. For example, when using scan cells as corresponding virtual I/Os, each associated stimulus and measure command in the core test patterns must be translated into a corresponding stimulus/measure command at the appropriate offset in a chip level scan operation.
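To illustrate the basic I/O translation map concept, the following Python sketch shows how a hypothetical pattern translation utility might rewrite core-level pattern references into chip-level references. The pin names, control signals, and dictionary representation are illustrative assumptions, not part of any proposed format.

# Hypothetical sketch of simple I/O reference translation: each reference to a
# core I/O in a core test pattern is replaced by the corresponding chip I/O.

io_map = {"a0": "PAD_12", "a1": "PAD_13", "b": "PAD_27"}       # core I/O -> chip I/O
config_controls = {"TEST_MODE": "1", "CORE_SEL": "0"}          # signals that establish the test configuration

def translate_pattern(core_pattern):
    """Rewrite one core-level pattern (core I/O -> value) into a chip-level
    pattern (chip I/O -> value), adding the control values that establish
    the multiplexed test configuration."""
    chip_pattern = dict(config_controls)
    for core_io, value in core_pattern.items():
        chip_pattern[io_map[core_io]] = value
    return chip_pattern

core_test = [{"a0": "1", "a1": "0", "b": "L"},   # L = expect low at the core output
             {"a0": "0", "a1": "1", "b": "H"}]   # H = expect high
chip_test = [translate_pattern(step) for step in core_test]
print(chip_test[0])   # {'TEST_MODE': '1', 'CORE_SEL': '0', 'PAD_12': '1', 'PAD_13': '0', 'PAD_27': 'L'}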

Un-embedding cores for test has certain implications. Multiplexing core I/Os to chip I/Os conceptually may seem to enable the application of arbitrary test sequences exactly as if the core were a stand-alone entity. This, however, is not always entirely true. For instance, if the delays along and skew between the access paths cannot be carefully controlled and/or taken into account, it may not be possible to guarantee proper signal timings at the core boundary. That is, the core tests either have to be sufficiently robust with respect to timing, or the access paths must meet certain timing requirements dictated by the core test timing constraints. The differences between the stand-alone test sequence and that seen by an embedded core can be even more striking if some of the other access methods are used. For example, if a scan cell is used as a corresponding virtual chip input for supplying data to a core input, a chip-level scan operation has to be inserted in the chip-level test program whenever a change of stimulus value on the core boundary is required. As a result, the core input could experience significant signal switching as stimulus values are scanned through the associated scan chain before the desired value finally reaches its destination. To be successful in this scenario, either the core test program must be insensitive to more or less arbitrary signal switching between stimulus events on the core input in question (which may be possible for certain data inputs, but probably not for clocks or control signals), or it must be possible to hold the scan cell data outputs stable during scan. Serializing test access via scan reduces the amount of wiring required between the macro I/Os and corresponding chip I/Os, but may result in unacceptably long test sequences. We also must be careful not to overly complicate the protocol translation by inserting several levels of sequential control into the access scheme.
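The protocol translation needed for scan-based access can be sketched in the same spirit. The Python fragment below, with an assumed chain layout and illustrative names, turns a single stimulus change on a core input into a chip-level scan load whose bit string places the value at the appropriate offset; note that the core input observes every intermediate value that passes through its scan cell while the string is shifted in.

scan_chain = ["cell_0", "cell_1", "cell_2", "cell_3"]    # order in which bits are shifted in
core_input_to_cell = {"a1": "cell_2"}                     # scan cell acting as virtual chip input for core input a1

def stimulus_to_scan_load(core_input, value, chain_state):
    """Return the bit string to shift into the chain so that 'value' ends up in
    the scan cell feeding 'core_input'; all other cells keep their previous
    contents. While shifting, the core input sees the intermediate values."""
    load = list(chain_state)
    offset = scan_chain.index(core_input_to_cell[core_input])
    load[offset] = value
    return "".join(load)

state = "0000"
state = stimulus_to_scan_load("a1", "1", state)   # one chip-level scan operation
print(state)                                      # '0010' -> cell_2 now drives core input a1 with '1'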

To make a long story short, not all test sequences are created equal. There can be significant differences between the types of access constraints associated with different core I/Os. It is vital that the core developer understands and anticipates the resulting constraints on the environment in which the core can be tested after it is embedded (i.e., the available access methods and path types, and their respective implications), and that the core tests are validated in the context of the anticipated environment. Likewise, each core user must know exactly what specific restrictions/permissions are associated with each individual core I/O such that an appropriate access method and path can be implemented and verified at the chip level. Finally, tools are needed that help the core users construct and/or validate access paths, and that can perform the appropriate test pattern translation steps (i.e., I/O reference and protocol translations) for successfully embedding pre-defined core tests into a chip-level test program.

The methods discussed above focus on how to access an embedded core and supply pre-defined tests to it. They do not address what happens to other cores and/or the user-defined functions while the core under test is accessed and tested, nor do they answer the question of how the chip, including the user-defined functions, is tested as a whole. A comprehensive chip-level test approach, clearly, cannot stop at testing each embedded core one at a time, but has to address those other issues as well. For example, it should be possible to clearly define and specify how a core should be protected while another core is tested (e.g., to prevent bus conflicts), how safeguards can be enabled that may have been built in to prevent the core from interfering with the test of another core (e.g., the core should not draw any steady-state current while an Iddq test is applied to another core), and what features are available with a core to enable the testing of the surrounding functions (e.g., the ability to observe core inputs and control core outputs).

3. What Is the Purpose of a Test Socket?

Although the intent is to be as flexible as possible, some fundamental assumptions must be made. One unilateral assumption made for the following is that the chip-level design, including the use of cores, is defined in terms of a structural Hardware Description Language (HDL). To fit into that scheme, we suggest that the test models used for representing the cores also should be structural in nature. Hence, it is proposed to use a special kind of structural entity, called Test Socket, as a universally recognizable container for all test information related to the non-mergeable aspects of a core.

Integration and test of pre-designed embedded cores typically requires the implementation of test access and support structures around the cores. The access structures have to be compatible with the target test environment anticipated by the core provider for the core tests. Each core should also have a mechanism for protecting itself while other cores or surrounding functions are tested, and for preventing the core from adversely affecting the test of such other functions. In addition, the cores should include features that support the test of surrounding functions (user-defined and/or interconnect). Finally, the cores may come with sets of pre-defined test patterns.

Currently, the industry has no common mechanism for specifying any of the above mentioned features. However, a core user generally needs to understand these features to successfully integrate the core into a chip together with other cores and user-defined functions, and to generate a comprehensive test for the complete chip.

We introduce a special structural entity called Test Socket Block as a universal repository for information related to accessing and controlling embedded cores such that each core and the chip containing them can be properly tested, debugged, and diagnosed. The proposal is to define dedicated attributes and description constructs that can be used in a Test Socket Block description to, among other things, specify the test access requirements for each core Terminal, specify how the core is to be protected while other functions are tested, identify the features that support the test of surrounding functions, and attach pre-defined test data to the core.

This preliminary document makes no attempt to formally and comprehensively define all aspects of a Test Socket description language (e.g., based on a Hardware Description Language like VHDL or Verilog). Instead, we introduce a tentative architecture and a small sample subset of possible basic description features for starting a focused discussion.

4. Test Socket Overview

A Test Socket Block, as introduced in the previous chapter, is described by a special set of language constructs. It is anticipated that the special language features eventually will be based on one of the industry-standard Hardware Description Languages (in the same way that, for example, the Boundary Scan Description Language BSDL is an extension of VHDL). For now, however, we will use a less formal pseudo-code approach as a medium for introducing and shaping the basic description features.

The overview is somewhat structured like a tutorial that gradually builds up some key concepts with the help of examples. Along the way we will be developing a small subset of pseudo-code constructs that, hopefully, help to make the discussion more precise. The subset clearly is far from complete, and does not delineate the full scope of a robust solution.

4.1 Defining a Test Socket

This section informally introduces an initial set of basic Test Socket features to provide an overview for the reader. It should be understood that each of the features touched on in this section will have to be discussed and evaluated before it can become part of a true proposal.

4.1.1 Test Socket Basics

A Test Socket is envisioned to be a dedicated and universally recognizable type of structural block in a structural Hardware Description Language (HDL; e.g., VHDL or Verilog) model representing the test view of a core (core test model). The core test model itself is assumed to be a "conventional" structural HDL representation (that is, a list of structural blocks, and nets connecting the blocks) of the core intended for chip-level test generation. One or more Test Socket Blocks can be included in the model. In some cases (e.g., the core provider wants to model the core completely as a black box), the Test Socket Block may be the only block in the model.

Some core test models, however, may also contain a shell of "regular" (e.g., gate-level) structural blocks and nets. It is assumed that a Test Socket Block is made recognizable as such by an attribute keyword, say "TestSocket", attached to the block. This keyword indicates that this is a special type of block whose purpose in life is to be a container for test-related information in the form of a Test Socket Description. Otherwise, the Test Socket Block externally looks like any other structural block.

The core test model, including any Test Socket Blocks, is expected to be integrated (merged) with other cores and user-defined functions into a chip-level test model. The regular blocks and nets in the core model shell may completely "disappear" into the overall chip-level model. The Test Socket Blocks, however, remain easily recognizable representatives of each core's identity and special test requirements.

4.1.2 Test Socket Block Name and Terminal Definition

Test Socket Blocks must be defined in some library before they can be used (instantiated) together with other blocks in a structural model. We begin our overview by defining an example Test Socket Block that then can be instantiated in a core test model. The core test model in turn gets integrated into a chip test model. As mentioned, we are using home-grown pseudo-code and diagrams for this purpose.

The Test Socket Description, which defines the properties and characteristics of the Socket, has several sections. It begins with a Block/Terminal Definition Section, which specifies the name of the Test Socket Block and its Terminals, organized into named Groups, with a Class Attribute and a default Mode Attribute assigned to each Terminal.

Rather than beginning with a formal definition of proposed Attributes, we will use a simple Test Socket Block example to introduce the basic features. Figure 4-1 shows pseudo-code for the Block/Terminal Definition Section of a Test Socket Block example:

TestSocket(RedDogOne) is
 Define Group(AllPins) as
  Begin Terminals(AllPins)
   Terminal(a0) as Class(In) in Mode(Assert:In,Logic,X),
   Terminal(a1) as Class(In) in Mode(Assert:In,Logic,X),
   Terminal(a2) as Class(In) in Mode(Assert:In,Logic,X),
   Terminal(a3) as Class(In) in Mode(Assert:In,Logic,X),
   Terminal(a4) as Class(In) in Mode(Assert:In,Logic,1),
   Terminal(b) as Class(Out) in Mode(Assert:Out,Logic,X),
   Terminal(c) as Class(BiDiT) in Mode(Assert:Out,Logic,X),
   Terminal(d) as Class(Out) in Mode(Assert:Out,Logic,0);
  End Terminals(AllPins);
 End Group(AllPins);

Figure 4-1: Block/Terminal Definition Section Example

Note: the pseudo-code used in this document should not be misunderstood as suggesting a particular formal language syntax for a proposed Test Socket HDL. We will use such pseudo-code throughout this document largely as an informal tool to clarify the intended meaning of examples.

The pseudo-code assigns the name "RedDogOne" to the Test Socket Block. The Terminals of the Test Socket Block are defined in groups. The grouping construct is primarily provided for convenience (e.g., a memory core provider could define groups for control inputs, address inputs, data inputs/outputs, and power pins). RedDogOne has eight Terminals named a0, a1, a2, a3, a4, b, c, and d. The meaning of the Class Attributes used in the example is as follows:

 Class(In):    the Terminal is an input of the Test Socket Block,
 Class(Out):   the Terminal is an output of the Test Socket Block,
 Class(BiDiT): the Terminal is bi-directional, driven by an implicit three-state driver inside the core.

The Mode Attributes assigned in the Terminal Definition Section define a default "behavior" that should be used for the Terminals of Test Socket Block instances within a core and/or chip-level test model, as long as no specific Configuration is invoked by the test environment (the purpose and basic features of Configurations will be explained in the next section). The Mode Attribute is written in the form "Mode-Type:Mode-Value". Only a single Mode-Type is used in the pseudo-code in Figure 4-1:

 Assert: a static value is associated with the Terminal; for input Terminals the value must be supplied by the core environment, and for output Terminals the value is driven by the core.

Additional Mode-Types will be introduced later on. The Mode-Value defines the signal nature and actual value assigned to the Terminal: it consists of a Signal-Direction (In or Out), a Signal-Type (Logic, i.e., a static logic signal, in this example), and a Signal-Value (logic '0', logic '1', or X for don't care/unknown).

The resulting default "behavior" defined for Test Socket Block RedDogOne is that of a black box, except for Terminals a4 and d. All input Terminals except a4 are don't cares, while output Terminal b and the bi-directional Terminal c both assert unknown logic values. Only Terminal d drives a known value, namely a logic '0'. The core examples used later on illustrate how a specific value assignment can be utilized for customizing the default behavior of a core.

The default Mode Attributes also establish the boundary requirements for the core when the chip is in its normal (non-test) operating mode.

4.1.3 Test Socket Usage Contexts and Configurations

Designing, testing, debugging, and diagnosing System-On-Silicon (SOS) chips is a complex multi-step process. The test-related requirements at each step along the way may be vastly different even for the same core. For example, testing and diagnosing the core itself requires a thorough exercise of the core. On the other hand, it may be advisable to put the same core into a protected stand-by state when testing other cores or the rest of the chip. The Test Socket concept should make it possible to express such diverse requirements in a convenient way.

To that effect, we introduce the notion of multiple Configurations within a single Test Socket. Each Configuration customizes what is expected from the environment and how the core behaves during various steps of an overall test methodology.

Configurations are specified in one or more Configuration Definition Sections within the Test Socket Description. Each Configuration Definition Section includes the name of the Configuration, a Context Attribute that indicates when the Configuration is intended to be used, and a set of Mode Attribute Overrides for selected Terminals.

Figure 4-2 adds two Configurations to Test Socket Block RedDogOne from Figure 4-1:

TestSocket(RedDogOne) is
    etc.
    etc.
 Define Configuration(SafeState) for Context(External) as
  Begin Overrides(SafeState)
   Use Terminal(a0) in Mode(Assert:In,Logic,1),
   Use Terminal(a4) in Mode(Assert:In,Logic,X),
   Use Terminal(c) in Mode(Assert:In,Logic,X),
   Use Terminal(d) in Mode(Assert:Out,Logic,1);
  End Overrides(SafeState);
 End Configuration(SafeState);
 Define Configuration(TestMe) for Context(Internal) as
  Begin Overrides(TestMe)
   Use Terminal(a0) in Mode(Assert:In,Logic,0),
   Use Terminal(a4) in Mode(Assert:In,Logic,0),
   Use Terminal(a1) in Mode(Probe:In,Timed,Pi),
   Use Terminal(b) in Mode(Probe:Out,Static,PoOrScancell),
   Use Terminal(c) in Mode(Probe:Out,Timed,Po),
   Use Terminal(d) in Mode(Assert:Out,Logic,0);
  End Overrides(TestMe);
 End Configuration(TestMe);
End TestSocket(RedDogOne);

Figure 4-2: Pseudo-Code Example with Multiple Test Socket Configurations

The first Configuration, called "SafeState", is specified for use with a Context called "External". The External Context indicates that this Configuration is intended to be used whenever something other than the core itself (another core or the surrounding user-defined functions) is being tested.

SafeState only overrides the Mode Attributes of four Socket Terminals. The meaning of the new Mode-Values is largely self-explanatory.

The "In,Logic,1", hence, tells us that the core environment will have to provide a static logic '1' to the net connected to Terminal a0 to comply with the requirements for SafeState. The "In,Assert,LogicX" assigned to the BiDi Terminal c indicates that the BiDi is in input-only mode for Configuration SafeState, and that any external logic value is valid for the net connected to the Terminal. This is important to know when trying to avoid three-state conflicts in the surrounding chip functions. The default assignment of a fixed logic '1' for Terminal a4 has been replaced with a don't care, to make sure a4 can be used as a scan mode signal in this Configuration.

Terminals whose Mode Attribute is not overridden retain their default Mode Attribute. In other words, Terminal b of RedDogOne still drives an unknown value, and input Terminals a1, a2, and a3 are don't cares.

The second Configuration, called "TestMe", is defined for a Context named "Internal". The Internal Context indicates that this Configuration is intended to be used when the core itself is the target of testing.

TestMe defines new Mode Attributes for some of the Socket Terminals. One new Mode-Type is introduced in TestMe:

 Probe: the Terminal is a Virtual Probe Point through which sequenced test data (stimulus or response values) must be transported between the core and a qualifying test resource.

Access from a tester to an embedded Test Socket generally is accomplished by sensitizing a path through intervening functions between the respective Socket Terminal and a qualifying test resource that can provide test stimulus values or measure test response values using a known protocol. A qualifying test resource, for example, could be a chip-level Primary I/O, a physical probe point, a scan cell, or a particular Built-In Self-Test (BIST) circuit I/O, depending on the target test methodology and the type of test data associated with the Terminal.

The Mode-Value Attribute provides additional information about specific requirements for the access mechanism. The following Mode-Values are used in TestMe:

 Timed,Pi:            the stimulus data are timing-sensitive and the Terminal must be connected to a chip-level Primary Input,
 Static,PoOrScancell: the Terminal carries static response data that can be captured at a chip-level Primary Output or in a scan cell,
 Timed,Po:            the response data are timing-sensitive and the Terminal must be connected to a chip-level Primary Output.

The default Mode Attributes and their Overrides defined in the Configurations determine different modes for how a Test Socket instance interacts with other block instances in a core test model and, eventually, in the test model of a chip using the core.
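As a sketch of how a tool might evaluate this, the following Python fragment resolves the effective Mode Attribute of each Terminal of RedDogOne by applying the Overrides of an invoked Configuration on top of the defaults from Figures 4-1 and 4-2; the dictionary encoding is an assumption made purely for illustration.

# Effective Mode of a Terminal: the Override from the invoked Configuration,
# if any, otherwise the default Mode Attribute from the Terminal Definition.
defaults = {"a0": "Assert:In,Logic,X", "a1": "Assert:In,Logic,X",
            "a2": "Assert:In,Logic,X", "a3": "Assert:In,Logic,X",
            "a4": "Assert:In,Logic,1", "b":  "Assert:Out,Logic,X",
            "c":  "Assert:Out,Logic,X", "d": "Assert:Out,Logic,0"}

configurations = {
    "SafeState": {"a0": "Assert:In,Logic,1", "a4": "Assert:In,Logic,X",
                  "c":  "Assert:In,Logic,X", "d": "Assert:Out,Logic,1"},
    "TestMe":    {"a0": "Assert:In,Logic,0", "a4": "Assert:In,Logic,0",
                  "a1": "Probe:In,Timed,Pi", "b": "Probe:Out,Static,PoOrScancell",
                  "c":  "Probe:Out,Timed,Po", "d": "Assert:Out,Logic,0"},
}

def effective_modes(configuration=None):
    """Default Modes with the Overrides of the invoked Configuration applied."""
    modes = dict(defaults)
    if configuration is not None:
        modes.update(configurations[configuration])
    return modes

print(effective_modes("SafeState")["b"])   # 'Assert:Out,Logic,X' -- not overridden, default retained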

4.2 Test Socket Blocks and Core Test Models

The purpose of Test Sockets is to complement potentially incomplete structural models that represent a core for testing purposes. The structural test model delivered with the core, for example, could consist of a simple scannable boundary shell that does not reveal any of the possibly very complex internal functions inside the core, together with a Test Socket Block that encapsulates all necessary information on how the core internals can be tested, as well as what should be done with the core, and how it behaves, when something else is tested. It must be assumed that the structural test model accurately represents the I/O behavior only under certain conditions (for instance, in an external test mode), while under different conditions (e.g., functional test of the core internals) the I/O behavior is controlled by the invisible internal functions. This section describes how such ambivalent behavior can be represented by incorporating a Test Socket Block into the structural test model of a core.

4.2.1 Embedding a Test Socket Block in a Core Test Model

The interaction between an instance of a Test Socket Block and other blocks in a structural core test model is determined by the default and Configuration-specific Mode Attributes, and by how the Test Socket Block Terminals are interfaced with the rest of the model. How this works is best illustrated by an example.

Figure 4-3 shows a simple core test model:

Figure 4-3: Diagram of Scannable Core Test Model with Test Socket Block.

The test model for core XYZ combines an instance of Test Socket RedDogOne with a simple scannable shell model (it does not matter whether this represents an explicit boundary-scan structure provided with the core or some 'outer layer' of functional scan cells carved out of the core; since the example has logic between core input oscar and the nearest scan flip-flop, it is not a dedicated boundary-scan design). We anticipate that the combination of a simple shell model together with a Test Socket that encapsulates all information pertaining to complex internal functions will be a useful and probably fairly common application scenario.

4.2.2 Configuration-Specific Core Behavior

Figure 4-4 illustrates the externally visible characteristics of core XYZ when the default Mode Attributes are active:

Figure 4-4: Diagrammatic View of Core XYZ with Default Mode Attributes

Lines and blocks that are at a fixed value or do not contribute to the externally visible behavior have been de-emphasized in the figure. The default Mode Attributes in RedDogOne assert a logic '0' at Terminal d. This inhibits the explicit three-state driver such that the bi-directional net named jo is totally controlled by the bi-directional Socket Terminal c. Without the '0' on Terminal d, there would be a potential bus-conflict between the explicit three-state driver and the implicit three-state driver behind Terminal c (which is of Class BiDiT). The '0' at d also makes sure that core output net amy in default mode is controlled by the Test Socket. The scan cells, as a result, are not observable and, hence, do not contribute to the externally visible behavior. In other words, in default mode, the core essentially behaves as a black box.

The effective behavior of the core can be changed drastically by invoking different Test Socket Configurations. Figure 4-5 depicts the effective behavior of core XYZ as seen by the chip-level environment when Configuration SafeState is invoked for this core.

Figure 4-5: Diagrammatic View of Core XYZ for Configuration SafeState

The dashed lines in the figure represent signals carrying a fixed value because of the "Assert:In,Logic,1" on Terminal a0. The dotted lines indicate connections that do not contribute to the externally visible behavior (Inputs a1 through a3 are don't cares by default, and Terminal b is effectively disconnected because the fixed values switch the multiplexer that drives core output amy to the other leg). The Mode Attribute "Assert:In,Logic,X" switches the BiDi Terminal c into input-only mode, and makes Terminal c a don't care. As a result, the behavior seen outside of the core for inputs fred, mary, and oscar, as well as for the output amy and the bi-directional net jo, is now completely determined by the scannable shell model. The Context Attribute in the Test Socket indicates that this Configuration should be used whenever another core or other chip-level functions are tested, but not for testing the core itself.

The situation is quite different for Configuration TestMe, which is depicted in Figure 4-6:

Figure 4-6: Diagrammatic View of Core XYZ for Configuration TestMe

The output behavior of the core is now controlled by the Test Socket rather than the shell model. Socket Terminal b is in Probe mode, and the multiplexer is set such that Terminal b drives core output net amy with static data that can be captured at a chip level Primary Output (PO) or in a scan cell. The explicit three-state driver in the test model is inhibited by the '0' asserted at Terminal d, and Terminal c effectively controls the bi-directional net jo. The Mode Attributes on Terminals a1 and c indicate that when Configuration TestMe is invoked for this core at the chip level, then net mary must be connected to a chip-level Primary Input (PI), and net jo must be connected to a chip-level Primary Output (PO). The assumed driver behind Terminal c is in drive mode.

To summarize, the simple test model with an embedded Test Socket Block establishes three distinct flavors of behavior for the core. The default core behavior is that of a black box. The behavior intended for testing functions outside of the core requires asserting a logic '1' at core input paul, which enables the scannable boundary model. Finally, to test the core internals, a logic '0' must be asserted at core input paul, and appropriate connections must be established between qualifying test resources and Socket Terminals a1, b, and c for the transfer of test data.

4.3 Test Data Sequences

Thus far we have introduced a mostly structural view of the core test requirements, with very limited capabilities for explicit value assignments (a single value can be assigned to a Terminal by using the Mode-Type Assert). In this chapter we will introduce some basic data sequencing capabilities.

4.3.1 Embedded Test Data Sequence Basics

One express purpose of Test Sockets is to permit the embedding of pre-defined core tests into a chip-level test program. Also, it may be necessary to apply particular input data sequences to properly initialize a core. Both of these desired functions imply the ability to somehow attach pre-defined test data sequences to a Test Socket. Unfortunately, the industry uses many different formats for expressing test data, and we have elected at this time not to get into a debate over which particular format (if any) should be used.

On the other hand, we would like to demonstrate some of the benefits of having simple sequence definition capabilities available as part of the Test Socket description. As we did with the other parts of a Test Socket Description, we will introduce a simple pseudo-language to discuss some fundamental issues related to data sequences, and to show their utility by example. The capabilities of this pseudo-language are not intended to be a preview of the full scope and power of a true test data sequence description language (like, for example, the evolving Standard Test Interface Language, STIL).

The pseudo-language and examples developed below are restricted to static data sequences. That is, no waveform timing specification capabilities are included. The underlying model is that of an ordered sequence of virtual pattern Steps. A Step is an abstract unit within which certain assignments of test Events can be made to Terminals whose Mode-Type permits this. A Step could be interpreted as the cycle of a very simple virtual stored-pattern tester. This virtual tester is able to drive one logic state per channel or measure one logic state per channel within a cycle. It is assumed that the sequence description pseudo-code can be interpreted, or compiled and executed, by test pattern translation utilities to generate a two-dimensional array of stored-pattern Events. The array is indexed by Step number in one direction and by the Terminals to which assignments are made in the other direction. This is illustrated in Figure 4-7:

Step   Terminal T1   Terminal T2  ...  Terminal Tn
 1       V(1,T1)       V(1,T2)    ...    V(1,Tn)
 2       V(2,T1)       V(2,T2)    ...    V(2,Tn)
                                  etc.
 m       V(m,T1)       V(m,T2)    ...    V(m,Tn)

Figure 4-7: Two-Dimensional Array of Value Assignments

V(Step,Terminal-Name) is an abbreviation for the Event value assigned to this Terminal for the Sequence Step. Each Event is characterized by an event type (drive or measure) and an event value (logic 0, 1, high-Z, or don't care).
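As an illustration of this array, the short Python sketch below models V(Step, Terminal) as a lookup into a list of Steps, each holding only the Events explicitly assigned in that Step; the Event representation is an assumption for illustration only.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Event:
    kind: str     # "drive" or "measure"
    value: str    # "0", "1", "Z", or "X"

steps = [
    {"a1": Event("drive", "0")},      # Step 1
    {"a1": Event("drive", "1")},      # Step 2
    {"c":  Event("measure", "0")},    # Step 3
]

def V(step: int, terminal: str) -> Optional[Event]:
    """Event assigned to 'terminal' in Sequence Step 'step' (1-based), if any."""
    return steps[step - 1].get(terminal)

print(V(2, "a1"))   # Event(kind='drive', value='1')
print(V(3, "a1"))   # None -- no explicit Event for a1 in Step 3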

4.3.2 Sequence Definition Section

The short pseudo-code statements listed in Figure 4-8 define a particular sequence of test data that should be applied to each instance of Test Socket RedDogOne:

Define Sequence(RunBIST) as
/* First invoke the appropriate Configuration, */
Invoke Configuration(TestMe);
 Begin Section(RunIt)
  For I=1 to 1024
/* apply positive pulse to clock input; */
   Add Step() with Event(a1:In,Logic,0);
   Add Step() with Event(a1:In,Logic,1);
   Add Step() with Event(a1:In,Logic,0);
/* monitor bit c after each pulse; */
   Add Step() with Event(c:Out,Logic,0);
  Next I
 End Section(RunIt);
 Begin Section(CheckCompletion)
/* make sure that clock is off; */
  Add Step() with Event(a1:In,Logic,0);
/* verify that bit b is set. */
  Add Step(CheckStatus) with Event(b:Out,Logic,1);
 End Section(CheckCompletion);
End Sequence(RunBIST);

Figure 4-8: Data Sequence Pseudo-Code Example

The Sequence Definition Section assigns ordered sequences of Events (an Event asserts a stimulus or response value) to Terminals with a Mode-Type that is compatible with sequenced data. "Probe" is the only such Mode-Type introduced thus far. Mode-Type "Probe" has only been used for Overrides in Configuration TestMe. Hence, the sequence definition must invoke that Configuration before sequence Event assignments can be made.

This is done by a statement of the form "Invoke Configuration(Configuration-Name)". TestMe is the only Configuration that is invoked in RunBIST. After TestMe has been invoked, sequence Events can be assigned to the three Probe Terminals a1, b, and c established by Configuration TestMe (see Figure 4-2).

Events are defined within named Sequence Sections, which are bracketed by statements of the form "Begin Section(Section-Name)" and "End Section(Section-Name);". Sequence RunBIST is broken into two Sections named "RunIt" and "CheckCompletion". The Events themselves are defined in statements of the form "Add Step([Step-Name]) with [Event(Terminal-Name:Event-Type), ]Event(Terminal-Name:Event-Type);". This statement creates a new sequence Step with an optional Step-Name. Each Step contains a list of one or more Events. The example only uses a single Event per Step. An Event is assigned to a specific Terminal by name. The Event-Type defines a Signal-Direction, Signal-Type, and Signal-Value, which essentially are interpreted as in Terminal Mode-Type Assert.

Events with Signal-Direction In are persistent until explicitly changed by a new Event for the same Terminal. That is, if a Terminal does not get an Event assigned to it in a Sequence Step, then the previous Signal-Type and Signal-Value must remain asserted. Events with Signal-Direction Out, by contrast, are not persistent. If no particular Event is specified for this Terminal in the next Sequence Step, then the associated output Terminal returns to a don't care state. (Note: the don't care state drives an unknown value for simulation purposes, but there is no associated tester measure command. This is different from an explicitly asserted unknown value, which implies a measure command with an unknown expect value that must be masked by the tester.)
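A small Python sketch of these persistence rules, under the assumption that Steps are represented as dictionaries mapping a Terminal to a (direction, value) pair, might look as follows; it expands a fragment resembling the end of Sequence RunBIST and shows the drive value on a1 persisting into the Step that only measures b.

def expand(steps, terminals):
    """Per Step, the value each Terminal is driven to or measured for, applying
    the rule that drive (In) Events persist and measure (Out) Events do not."""
    held = {t: None for t in terminals}      # last asserted drive value per Terminal
    rows = []
    for step in steps:
        row = {}
        for t in terminals:
            if t in step:
                direction, value = step[t]
                if direction == "In":
                    held[t] = value          # persists into later Steps
                row[t] = (direction, value)
            elif held[t] is not None:
                row[t] = ("In", held[t])     # previous drive value remains asserted
            else:
                row[t] = None                # output returns to the don't care state
        rows.append(row)
    return rows

run_bist_tail = [{"a1": ("In", "0")},        # make sure that the clock is off
                 {"b":  ("Out", "1")}]       # verify that bit b is set
for row in expand(run_bist_tail, ["a1", "b"]):
    print(row)
# The second Step still shows a1 driven to '0' even though only b receives an explicit Event.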

To avoid any ambiguities in interpreting the Sequence Definition, we restrict what can be done within each Sequence Step: the Events in a single Sequence Step must all have the same Signal-Direction, that is, they must either all be drive (In) Events or all be measure (Out) Events.

In other words, no mixing of input and output events is allowed within a single Sequence Step.

Please understand that this language is intended for demonstration purposes only, not as a model for a true test sequence language.

4.4 Integrating Cores Into a Chip-Level Test Model

The re-use of pre-designed intellectual property in the form of soft (synthesizable) or fixed core functions is an important factor for meeting System-On-Chip design productivity goals. While soft cores generally can be merged with and tested as an indistinguishable part of the user-defined functions on the chip, this is not necessarily the case for fixed cores. Fixed cores tend to retain a separate identity and often come with very specific pre-defined test requirements and test patterns. The Test Socket concept offers a generic, tool- and design-independent mechanism for specifying such special requirements and patterns.

In this section, we will use a simple SOS chip design example to illustrate how multiple cores can be integrated into a chip-level design, and how the Test Sockets within the Core Test Models can be used to configure each core appropriately for each phase of a chip-level test flow and to guide the construction of the chip-level test access and support logic.

4.4.1 Creating Chip-Level Functionality by Instantiating Cores

Figure 4-9 depicts a chip-level design using two instances of Core XYZ:

Figure 4-9: Diagram of Chip-Level Functional Core Use

The figure shows how two instances of macro XYZ are used to implement some intended functionality. No scan or other test support has been implemented at the chip level yet. The core-level scan inputs are temporarily tied to logic '1' in the functional model, matching the default requirement for the core. Although we show a schematic view, the design could equally well have been specified at the Register-Transfer Level (RTL) for synthesis. It is assumed that some form of simulation model is available for the core functions (which may or may not include any test-related features), such that the overall chip functionality can be validated. The test-related information is represented by the structural test models for each core.

4.4.2 Creating a Chip-Level Test Environment and Test Model

The core test model and associated Test Socket Descriptions contain crucial information about how the cores should be handled for testing. That by itself, however, is not sufficient. The cores and their tests must fit into an overall chip-level test methodology.

It is clear that a comprehensive chip test in the presence of cores will require at least two distinct test phases: one to test the core internals and another one to test the rest of the chip. In many practical cases more than two test phases will be required as different test types (e.g., stuck-at test, Iddq, parametric, etc.) and different sub-groupings of simultaneously tested cores get involved. The Test Socket Descriptions contain detailed specifications that the chip-level test environment must establish at the core boundaries when applying tests to that particular core or when testing something else. In addition to the core-level information itself we need to define a chip-level test methodology framework into which the core-level tests can be integrated. Within this context, it is necessary to understand how to deal with multiple cores and their respective Test Sockets.

4.4.2.1 Dependencies/Interactions/Hierarchies Between Test Sockets

Test Sockets as defined in this document are empty structural blocks. That is, no hierarchical nesting of Test Sockets is possible. On the other hand, many chip designs like the one illustrated in Figure 4-9 will contain multiple cores and multiple Test Sockets. Each Test Socket in turn can define several configurations and possibly multiple test sequences. The question is how multiple configurations and test sequences can coexist and interact.

The Test Socket itself is core-specific and can represent certain generic constraints associated with using this particular core, but it has no knowledge about the specific characteristics of the overall chip design. Among the generic core-level constraints we anticipate attributes that, for example, specify whether the core always must be tested by itself or can be tested in parallel with other cores. Other attributes would, for example, specify the anticipated power consumption associated with testing the core and, thus, make it possible to consider the overall power-consumption when constructing a chip-level test flow. The Context attribute associated with each Test Socket Configuration also conveys some information about how Configurations coexist. For example, the generic External Context-type for Configuration SafeState introduced in Test Socket RedDogOne indicates that this Configuration should be used whenever any other function (other core or user-defined function) on the chip is tested. We further envision additional generic Context-types that indicate core-specific dependencies. For instance, it may be necessary to initialize the core before anything else can be done with it, or a particular Configuration can only be invoked from another specific Configuration. Such "pecking orders" between the Configurations of a particular Test Socket could be indicated by constructs like a generic Initialization Context-type, and/or by additional statements controlling how to enter or exit a particular Configuration.
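To make the anticipated core-level constraint attributes more concrete, the Python sketch below shows how a chip-level tool could use a test-alone flag and an estimated test power figure to group cores into test phases under a power budget; the attribute names, the third core, and the greedy packing are illustrative assumptions only.

cores = {"XYZ.1": {"test_alone": False, "test_power_mw": 120},
         "XYZ.2": {"test_alone": False, "test_power_mw": 120},
         "MEM.1": {"test_alone": True,  "test_power_mw": 40}}   # hypothetical third core
POWER_BUDGET_MW = 200

def group_into_phases(cores, budget):
    """Cores flagged test_alone get their own phase; the others are packed
    greedily into phases whose summed test power stays within the budget."""
    phases, current, used = [], [], 0
    for name, attrs in cores.items():
        if attrs["test_alone"]:
            phases.append([name])
            continue
        if current and used + attrs["test_power_mw"] > budget:
            phases.append(current)
            current, used = [], 0
        current.append(name)
        used += attrs["test_power_mw"]
    if current:
        phases.append(current)
    return phases

print(group_into_phases(cores, POWER_BUDGET_MW))
# [['XYZ.1'], ['MEM.1'], ['XYZ.2']] -- the two XYZ instances together exceed the budget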

4.4.2.2 Defining the Chip-Level Test Flow

It is assumed that chip-level test assembly tools (called Test Flow Manager in the following) will be available to help the user establish the order in which tests should be assembled (e.g., perform Iddq test first, etc.), and what the global chip-level test environment should look like (e.g., no scan, partial scan, full scan, BIST, etc.) during each test phase. The Test Flow Manager also will permit the user to control other trade-offs like testing compatible cores in parallel versus one after the other.

The chip-level design example contains two instances of the same core type. Each core instance has an associated Test Socket with two Configurations. The user, with help from the Test Flow Manager, must decide whether to test the cores in parallel or one at a time, and in which order to test the cores and the rest of the chip. Figure 4-10 shows a possible chip-level test flow that tests each core by itself:

+--------------+------------------+-----------------+--------------+
|     Test     |  Socket Config.  |  Socket Config. |    Global    |
|     Phase    |    for XYZ.1     |    for XYZ.2    |     Mode     |
+--------------+------------------+-----------------+--------------+
|     Normal   |     Default      |     Default     |    Normal    |
|     Core 1   |     TestMe       |    SafeState    |     Scan     |
|     Core 2   |    SafeState     |      TestMe     |     Scan     |
| Rest of Chip |    SafeState     |    SafeState    |     Scan     |
+--------------+------------------+-----------------+--------------+

Figure 4-10: Chip-Level Test Flow for Testing Each Core Separately

The necessary Configurations for supporting this flow are easily extracted from the Context attributes in the Test Sockets. By default, we assign a dummy Test Phase called Normal to represent the functional (non-test) mode of operation. In this mode of operation each core is in its Default Configuration, and the effective chip structure must look as depicted in Figure 4-9. To test each core one at a time, we must configure the selected core for the Internal Context (Configuration TestMe) while the other core is configured for the External Context (Configuration SafeState). Finally, for testing the rest of the chip, both cores must be configured for the External Context (Configuration SafeState). The user has further decided that the chip-level test environment should be scan-based for all test phases.
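The derivation of such a flow table from the Context attributes can be sketched in a few lines of Python; the dictionary form of the table and the helper below are illustrative assumptions.

sockets = ["XYZ.1", "XYZ.2"]
context_to_config = {"Internal": "TestMe", "External": "SafeState"}

def phase_row(cores_under_test):
    """Configuration for each Socket instance in one test phase: Internal for
    the core(s) under test, External for everything else."""
    return {s: context_to_config["Internal" if s in cores_under_test else "External"]
            for s in sockets}

flow = {"Normal":       {s: "Default" for s in sockets},
        "Core 1":       phase_row({"XYZ.1"}),
        "Core 2":       phase_row({"XYZ.2"}),
        "Rest of Chip": phase_row(set())}
for phase, row in flow.items():
    print(phase, row)
# Reproduces the Socket Configuration columns of Figure 4-10.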

Figure 4-11 shows a different test flow that tests both cores simultaneously.

+--------------+------------------+-----------------+--------------+
|     Test     |  Socket Config.  |  Socket Config. |    Global    |
|     Phase    |    for XYZ.1     |    for XYZ.2    |     Mode     |
+--------------+------------------+-----------------+--------------+
|     Normal   |     Default      |     Default     |    Normal    |
|   Both Cores |     TestMe       |     TestMe      |     Scan     |
| Rest of Chip |    SafeState     |    SafeState    |     Scan     |
+--------------+------------------+-----------------+--------------+

Figure 4-11: Chip-Level Test Flow for Testing Both Cores Simultaneously

To accomplish this, both cores must be configured for the Internal Context (Configuration TestMe) at the same time.

4.4.2.3 Implementing a Chip-Level Test Support Architecture

The test flow tables in Figure 4-10 and Figure 4-11 in conjunction with the Test Socket descriptions define the chip-level test environment that must be provided for each test phase. It is anticipated that chip-level test assembly tools will offer analysis and synthesis features that can analyze the chip-level environment, help generate any required additional support logic, and facilitate the integration of a chip-level test program flow.

Clearly, there are many different ways to implement and optimize the stipulated test flows. The suggested Test Socket description method itself does not specify or embrace a particular chip-level support architecture. The generic architecture illustrated in Figure 4-12 shows only one of many possible approaches.

Figure 4-12: Generic Chip-Level Test Support Architecture

The generic architecture draws some inspiration from the widely used board-level In-Circuit-Test (ICT) concept. In the board world, physical probe points are accessed via a physical bed-of-nails fixture. Boundary scan has replaced some of the physical fixturing with electronic means by implementing a dedicated Test Resource (Boundary Scan Cell) behind each pin inside the chips. Boundary Scan Cells, thus, can be viewed as parts of a virtualized bed-of-nails fixture concept that extends and complements physical bed-of-nails fixtures. In the chip world, the physical bed-of-nails fixture is completely replaced by an electronic "Test Adapter". Being on-chip, the Test Adapter can take full advantage of active circuitry made possible by the chip medium. That is, explicit wires can be replaced by more area-efficient logic path-sensitization, gating, and multiplexing techniques. In addition, internal scan cells (including those in the logic shell that comes with a core) and/or other types of test support circuits can be used to bring on-chip Test Resources closer to the test targets at minimal transistor and wiring overhead.

The example core test model used in this white paper combines a structural gate-level logic shell model with a structural Test Socket Block. By doing that, we essentially make the core test model itself compatible and "mergeable" with the rest of the chip-level test model. The logic shell becomes part of the chip-level logic, and the Test Socket Block becomes a structural block in the merged chip-level logic model. However, being a special type of block, the Test Socket Block remains a clearly identifiable representative of the original core with its unique test requirements and pre-defined test data.

The purpose of the Test Adapter is to control the different test phases and to logically connect compatible Test Resources (e.g., chip I/O pins or scan cells for the example) to the Virtual Probe Points (i.e., Test Socket Block Terminals with a Mode-Type of Probe) associated with certain Test Socket Block Configurations.

The Configuration Section in the Test Socket Block descriptions specifies the Virtual Probe Points for each Configuration, and defines which type of Test Resource (e.g., chip I/O or scan cell) is permissible for each individual Virtual Probe Point. The Test Adapter provides the necessary multiplexing functions that connect each core-level Virtual Probe Point to a matching Test Resource. If the Test Resource is a scan cell, the Test Adapter may even provide the scan cell, if no suitable functional scan cell is available. As mentioned earlier, the logic shell of the core itself could include viable Test Resources. In addition, the Test Adapter is expected to contain test phase control and decode logic that asserts the necessary boundary conditions (e.g., values expected at Test Socket Block Input Terminals with a Mode-Type of Assert) to establish the appropriate Test Socket Block Configurations.
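As an illustration of this matching step, the Python sketch below assigns a compatible Test Resource to each Virtual Probe Point of Configuration TestMe; the resource names, the translation of "PoOrScancell" into a set of permitted types, and the first-fit choice are assumptions for illustration.

probe_points = {               # Probe Terminal -> permitted Test Resource types (from Configuration TestMe)
    "a1": {"Pi"},
    "b":  {"Po", "Scancell"},
    "c":  {"Po"},
}
available = [("PAD_3", "Pi"), ("PAD_9", "Po"), ("PAD_11", "Po"), ("SC_42", "Scancell")]

def assign_resources(probe_points, available):
    """First-fit assignment of compatible Test Resources to Virtual Probe Points."""
    free = list(available)
    assignment = {}
    for terminal, allowed in probe_points.items():
        match = next((r for r in free if r[1] in allowed), None)
        if match is None:
            raise RuntimeError(f"no free Test Resource for Probe Terminal {terminal}")
        assignment[terminal] = match
        free.remove(match)
    return assignment

print(assign_resources(probe_points, available))
# {'a1': ('PAD_3', 'Pi'), 'b': ('PAD_9', 'Po'), 'c': ('PAD_11', 'Po')}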

Given the above scenario, the following key requirements have to be addressed during the construction of the test support logic:

  1. State variables representing the test phases must be defined, and decode logic must be generated to assert the required input states (Test Socket input Terminals with Mode-Type Assert) at the Test Socket boundaries for each test phase (a sketch of this derivation follows the list).
  2. Gating/multiplexing ("virtual fixturing") logic must be generated and controlled by the test phase decode logic such that each Test Socket Terminal with the Mode-Type Probe for the current test phase is logically connected to a matching Test Resource (e.g., chip I/O or scan cell). If necessary or desired, new resources (chip I/Os or scan cells) must be created to satisfy the access requirements. The gating/multiplexing logic must be compatible with the signal type specified for the Probe point (e.g., if the Probe point is capable of high-Z states, then the path must be capable of propagating high-Z).
  3. The user has selected scan as the global chip-level test methodology for all test phases (except the normal functional mode). That means the test synthesis tool must implement a valid chip-level scan infrastructure for each test phase. If scan cells are used as Test Resources, care may have to be taken to separate their clocking from the clocking of the logic elements under test.
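The following Python sketch illustrates the first requirement: given the test flow of Figure 4-10 and the Assert values implied by each Configuration of RedDogOne, it derives the constant values that the test phase decode logic must assert at each Socket instance; the tabular result format is an assumption.

assert_in = {                       # Configuration -> values required at Assert-In Terminals
    "Default":   {"a4": "1"},
    "SafeState": {"a0": "1"},
    "TestMe":    {"a0": "0", "a4": "0"},
}
flow = {                            # test phase -> Configuration per Socket instance (Figure 4-10)
    "Normal":       {"XYZ.1": "Default",   "XYZ.2": "Default"},
    "Core 1":       {"XYZ.1": "TestMe",    "XYZ.2": "SafeState"},
    "Core 2":       {"XYZ.1": "SafeState", "XYZ.2": "TestMe"},
    "Rest of Chip": {"XYZ.1": "SafeState", "XYZ.2": "SafeState"},
}

def decode_table(flow, assert_in):
    """(phase, instance, terminal) -> constant value to be asserted by the decode logic."""
    return {(phase, inst, term): val
            for phase, configs in flow.items()
            for inst, conf in configs.items()
            for term, val in assert_in[conf].items()}

for key, val in sorted(decode_table(flow, assert_in).items()):
    print(key, "->", val)
# e.g. ('Core 1', 'XYZ.1', 'a0') -> '0' while ('Core 1', 'XYZ.2', 'a0') -> '1'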

5. Summary

There are many more details and issues that we would have liked to address in this white paper, but had to omit because of limited time. Hence, we apologize that the document appears to come to a somewhat abrupt end here.

However, we hope that it offers at least a glimpse of a possible approach that could be useful for the industry. At the end of the day, no matter what methodology or style of testing is anticipated for a core, the final requirement will come down to accessing a subset of embedded core pins and transporting test data between those core pins and corresponding compatible chip-level test data sources/sinks. This is one common denominator that is shared by all methodologies, and the approach outlined in the white paper builds on this commonality. We hope this white paper can contribute to the foundation of an industry-standard information model that can capture all relevant core-level test access and control requirements, enabling the industry to move ahead with commercial support for a chip-level integration framework by taking advantage of this information.