From: Gordon Robinson [Gordon_Robinson@3mts.com]
Sent: Wednesday, July 24, 2002 3:32 PM
To: 'Micek Tom-ra1370'; 'STIL. 4'; 'don.organ@inovys.com'; David Dowding (E-mail)
Subject: RE: stds-1450.4: RE: 1450.4 multisite status
Let me comment on some of the multisite issues, with the danger that this may add to people's confusion with a difficult topic before it resolves things.
I try to take a few extremely strong stances about multisite:
- When testing multisite, each device ends up in the same bin as it does when tested alone.
- When testing multisite, each device produces the same datalog results as when tested alone.
- When testing multisite, each device experiences the same set of test steps as when tested alone.
- The user specifies what happens for testing the device as if testing it alone, and very little else for multisite beyond load board connections.
I regard it as OK for a device to experience different time delays "between steps" when being tested multisite. Different stimulus of any form gets me concerned.
I think we're in trouble when a device has to go through new power-up sequences or be reinitialized whenever it gets enabled after being disabled.
There are quite a few general approaches to how multisite testing is performed:
- Have a complete set of tester resources (processor, sequencer, pins, instruments) for each site. The system is a collection of single-site testers. This method is used for some memory testers, and handles issues like match very well. All sites progress through the flow independently at their own natural pace.
- Have a single thread of activity and run all sites "lockstep". This is the common method where the system has (or behaves like) a single digital sequencer. The system has to choose which step in the flow to run next, and enable and disable sites as it does so. When in trouble, run the sites through a step one at a time. This is the style that Tom's note seems to be assuming, particularly when discussing "pass-priority" and "fail-priority".
- In that model I've heard plenty of advocates for each priority strategy. Let the user choose which they believe gives them fewest problems; there's a rough sketch of that choice right after this list.
- Have a thread for each site within a test process, allow each to run "sort-of" independently, and to come together and run a step lockstep style where appropriate.
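
To make the lockstep style and the priority choice concrete, here is a minimal sketch in Python. The flow graph, the step names, and the dummy run_step() are all made up for illustration; nothing here is defined by 1450.4. The only point is that a single thread picks which step runs next, enables just the sites that need it, and lets the user's pass- or fail-priority setting decide the order when sites diverge.

    def run_step(step, sites):
        """Placeholder for running one flow step on the enabled sites.

        Returns the subset of sites that passed; in a real system the
        sequencer runs here with the other sites disabled."""
        return {s for s in sites if s % 2 == 0 or step == "contact"}  # dummy results

    # Flow graph: step -> (next step on pass, next step on fail); "bin ..." is terminal.
    FLOW = {
        "contact": ("func", "bin contact-fail"),
        "func": ("speed", "bin func-fail"),
        "speed": ("bin good", "bin speed-fail"),
    }

    def run_lockstep(site_ids, priority="pass"):
        work = [("contact", set(site_ids))]   # (step, sites waiting for that step)
        bins = {}
        while work:
            step, sites = work.pop(0)
            if step.startswith("bin"):
                for s in sites:
                    bins[s] = step            # each device ends up in its own bin
                continue
            passed = run_step(step, sites)    # other sites are disabled for this step
            failed = sites - passed
            pass_next, fail_next = FLOW[step]
            branches = [(pass_next, passed), (fail_next, failed)]
            if priority == "fail":
                branches.reverse()            # fail-priority: run failing sites first
            work = [b for b in branches if b[1]] + work
        return bins

    print(run_lockstep([0, 1, 2, 3], priority="pass"))

In this toy version the priority setting only changes the order the branches get run; each device still sees the same steps and lands in the same bin either way, which is the property I care about.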
The hard issues of multisite occur when the test program has to deal with situations that challenge its model:
- Running sites independently, but there are some expensive instruments that can't be dedicated, and so sites need to be scheduled for when they can use them.
- Conditional pattern issues with lockstep operation. I've seen some approaches to simple match loops such as having "prematch" and "postmatch" fragments that get spliced so that each site matches independently while the others do either pre or post. We may want to consider adding such blocks to the STIL Pattern language; there's a rough sketch of the splicing idea after this list.
- Making sure that switching matrices are used correctly when they can switch things to several sites.
- Making sure that activities on one site really don't disturb others. E.g., if a DPS channel is shared between sites, is it OK for an enabled site to change the setting while another site is disabled?
- Loadboard instrumentation the test system doesn't even realize is instrumentation!
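
On the prematch/postmatch idea, here is a minimal sketch of the splicing, assuming made-up fragment names and a trivial Site record (this is illustration only, not STIL syntax). Under lockstep operation one site at a time gets the real match loop; sites that haven't reached their match yet run their prematch fragment, and sites that have already matched run their postmatch fragment.

    from dataclasses import dataclass

    @dataclass
    class Site:
        site_id: int
        matched: bool = False   # has this site already completed its match?

    def splice_for(matching_site, all_sites):
        """What each site executes during one lockstep pass of the match region."""
        plan = {}
        for site in all_sites:
            if site is matching_site:
                plan[site.site_id] = "match loop"
            elif site.matched:
                plan[site.site_id] = "postmatch fragment"
            else:
                plan[site.site_id] = "prematch fragment"
        return plan

    # Each enabled site takes its turn matching independently of the others.
    sites = [Site(0), Site(1), Site(2)]
    for current in sites:
        print(splice_for(current, sites))
        current.matched = True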
I don't view datalogs as a simple stream, but as a separate stream for each site. How that's treated visually is a UI design issue.
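
As a tiny illustration only (the class and names below are made up, not any existing datalog format), keeping one stream per site means a single-site view of the log looks the same as it would when that device is tested alone:

    from collections import defaultdict

    class MultisiteDatalog:
        def __init__(self):
            self.streams = defaultdict(list)   # site id -> that site's records

        def log(self, site, record):
            self.streams[site].append(record)  # each site keeps its own stream

        def site_view(self, site):
            return list(self.streams[site])    # same records the device would log alone

    dlog = MultisiteDatalog()
    dlog.log(0, "contact pass")
    dlog.log(1, "contact fail")
    dlog.log(0, "func pass")
    print(dlog.site_view(0))   # ['contact pass', 'func pass']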
I hope I've stirred up a few things!
Gordon