From: owner-stds-1450-4@majordomo.ieee.org on behalf of Gordon Robinson [Gordon_Robinson@3mts.com]
Sent: Thursday, July 25, 2002 7:10 AM
To: 'don.organ@inovys.com'; Gordon Robinson; 'Micek Tom-ra1370'; 'STIL. 4'; David Dowding (E-mail)
Subject: RE: stds-1450.4: RE: 1450.4 multisite status
The Credence Kalos flash memory tester is an example of the "complete tester per site" approach. It's also the only tester good-looking enough to get a spread in "Wired" magazine! I've heard that some Agilent flash memory testers also have that architecture.
I've heard from flash memory gurus that flash testing is not sensible without a completely separate digital sequencer etc. per site, because flash test patterns have an extremely high density of conditional, response-dependent branching (which STIL is not exactly strong at handling).
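For reference, about the only response-dependent construct in the base STIL pattern language is MatchLoop, which just repeats vectors until the expected states appear. A minimal sketch, with invented signal names:

    Pattern flash_poll {
        W wft1;
        // Repeat this vector up to 256 times, until RDY
        // actually returns the expected H state.
        MatchLoop 256 {
            V { CE=0; RDY=H; }
        }
        // Proceed once matched (or the count runs out).
        V { CE=1; RDY=X; }
    }

That's a far cry from the dense, arbitrary branching that flash patterns want.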
There is also a trend I've observed toward architectures with a sequencer and pins per board that can be lockstepped together, so that a system can have a serious number of separate digital instruments if needed.
I'm fine with adding "optional pieces to guide multisite handling". I've tried to express "idealism" principles. Multisite gets into trouble when the idealism breaks down.
Should these be UserKeywords? I can easily imagine that each tester will need its own workarounds.
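For example, a tester-specific workaround might ride along under a user keyword, so other tools can skip it (the keyword and its contents here are invented):

    STIL 1.0;
    // Declaring the keyword lets conforming parsers that don't
    // understand it skip over the statement that uses it.
    UserKeywords MultiSiteWorkaround;

    // Hypothetical per-tester guidance; other tools ignore it.
    MultiSiteWorkaround {
        SerializeSites leakage_test;
    }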
I believe strongly that disabling/enabling is not a test step and should be "invisible" to the device except for time passing.
I'd also like to mention something I don't want us to copy, from Credence's Agile system. Agile has "conditional" stages, where the flow branches. These are not available in multisite programs. Instead, multisite flow is handled by using the "execution flags" that are kept for each site. These flags were originally intended to allow the operator to set one of a few options, and particular tests would be obeyed or skipped based on those flag settings. The flags then started to get set/unset by the test stages themselves, and used as a flow mechanism. That was the only way allowed for multisite conditional flows. This does have the advantage of placing all of the "fail-priority" or "pass-priority" in the user's hands, because the sequence of stages is used, and if one branch is placed first in sequence, it will happen first.
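To illustrate the mechanism (this is invented pseudo-syntax, not Agile's actual language and not STIL):

    Stage contact_test {
        OnFail { Set retest_flag; }   // a stage sets a per-site flag
    }
    Stage retest {
        RunIf retest_flag;            // obeyed or skipped per site
    }
    // Priority falls out of stage order: whichever branch is
    // placed first in the sequence happens first on every site.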
Gordon
Wow! Great! Comments are inline below.
-DVO-
Let me comment on some of the multisite issues, with the danger that this may add to people's confusion with a difficult topic before it resolves things.
I try to take a couple of extremely strong stances about multisite:
- When testing multisite, each device ends up in the same bin as it does when tested alone
- When testing multisite, each device produces the same datalog results as when tested alone
[DVO] This could be interpreted in two ways. I agree that the information in the datalog should be the same. However, one site's datalog information may be interspersed with another's. I'm told that's how STDF is.
- When testing multisite, each device experiences the same set of test steps as when tested alone
[DVO] Does the disabling/enabling of a site count as a "test step"?
- The user specifies what happens for testing the device as if testing it alone, and very little else for multisite beyond load board connections
[DVO] This is a fundamental goal that (I think) we all agree with. However, in practice, I've seen that many test engineers prefer some additional control over some of the multisite issues - generally with an eye toward increased throughput, or due to considerations relative to the device. The way I reconcile this is by agreeing with your statement, but saying that additional optional syntax may be added to the STIL file by a test engineer to address multisite issues.
I regard it as OK for a device to experience different time delays "between steps" when being tested multisite.
Different stimulus of any form gets me concerned.
I think we're in trouble when a device has to go through new power-up sequences or be reinitialized whenever it gets enabled after being disabled.
[DVO] I'm not a test engineer, but different devices have different requirements. I'm told that some must keep a clock going (or else they burn up!) and that some of these "keep-alive" patterns require support from the pattern sequencer, and many testers can't maintain this "keep-alive" pattern on some sites while executing the AC tests on other sites. So the alternative chosen is to power down these devices for the periods in which they are disabled. Perhaps such devices aren't great candidates for multisite testing.
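For illustration, the keep-alive fragment itself is trivial in today's STIL (signal name invented); the hard part is that the sequencer must keep it running on disabled sites while other sites run real tests:

    Pattern keep_alive {
        W clk_wft;
        // Free-running clock so the device is never left idle;
        // a disabled site would be parked on this loop.
        park: V { CLK=P; }
        Goto park;
    }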
There are quite a few general approaches to how multisite testing is performed.
- Have a complete set of tester resources, processor, sequencer, pins, instruments for each site. The system is a collection of single-site testers. This method is used for some memory testers, and handles issues like match very well. All sites progress through the flow independently at their own natural pace.
[DVO] This would make our job easy! Are there many testers like this? This is the opposite of the multisite DRAM tester I was introduced to several years ago. That system had just a single sequencer and a single set of programmable levels which were shared across all sites.
- Have a single thread of activity and run all sites "lockstep". This is the common method where the system has (or behaves like) a single digital sequencer. The system has to choose which step in the flow to run next, and enable and disable sites as it does so. When in trouble, run the sites through a step one at a time. This is the style that Tom's note seems to be assuming, particularly when discussing "pass-priority" and "fail-priority".
- In that model I've heard plenty of advocates for each priority strategy. Let the user choose which they believe gives them the fewest problems.
- Have a thread for each site within a test process, allow each to run "sort of" independently, and to come together and run a step lockstep-style where appropriate.
[DVO] We probably want a programming model that makes few assumptions and could be implemented with any of the above approaches. The 2nd bullet is the one I'm most familiar with - and it is the one raised in issue #2 (disabling/re-enabling) of the working list. A sketch of that model follows.
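Purely as illustration (invented pseudo-syntax - this is not 1450.4; we haven't defined any of this yet):

    Flow lockstep_example {
        Step contact   { Sites All; }         // all sites run together
        Step ac_test   { Sites Enabled; }     // failed sites disabled
        Step ac_retest { Sites OneAtATime; }  // "when in trouble"
    }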
The hard issues of multisite occur when the test program has to deal with situations that challenge its model.
- Running sites independently, but there are some expensive instruments that can't be dedicated, and so sites need to be scheduled for when they can use them.
- Conditional pattern issues with lockstep operation. I've seen some approaches to simple match loops, such as having "prematch" and "postmatch" fragments that get spliced so that each site matches independently while the others do either pre or post. We may want to consider adding such blocks to the STIL Pattern language (a strawman sketch appears below).
- Making sure that switching matrices are used correctly when they can switch things to several sites.
- Making sure that activities on one site really don't disturb others. E.g. if a DPS channel is shared between sites, is it OK for an enabled site to change the setting while another site is disabled?
[DVO] Let's call this "isolation". Should 1450.4 make explicit assumptions about a tester's capabilities in this regard?
- Loadboard instrumentation the test system doesn't even realize is instrumentation!
[DVO] Bullets #1, #3, and maybe #4 are issues that I think are owned by the ATE vendor and don't require much consideration from us. It is the ATE vendor's responsibility to execute STIL.4 on their tester, and if their tester has limitations such as those listed, then they should be responsible for resolving them. I don't think STIL.4 should need any syntax to address those issues.
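As a strawman for the prematch/postmatch blocks mentioned above (entirely hypothetical syntax; nothing like this exists in STIL today):

    Pattern shared_match {
        W wft1;
        // Hypothetical: what a site runs while waiting for
        // other sites that haven't started matching yet.
        PreMatch { V { CLK=P; RDY=X; } }
        MatchLoop Infinite {
            V { CLK=P; RDY=H; }
        }
        // Hypothetical: what a matched site runs until the
        // last site catches up.
        PostMatch { V { CLK=P; RDY=X; } }
    }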
I don't view datalogs as a simple stream, but as a separate stream for each site. How that's treated visually is a UI design issue.
I hope I've stirred up a few things!
[DVO] From reading your email, I think the following are issues we should consider adding to the working list (remembering that the list is not of things we must address, but is of things we should consider addressing):
- Syntax for the loadboard connections. 1450 doesn't have a construct for defining the mapping from Signals to tester resources - but I think this is in the 1450.3 charter. Should we consider adding a multisite-capable mapping (strawman sketch at the end of this list)? Or at least participate in 1450.3 enough to see that they do it in a way we like?
- prematch and postmatch for match mode.
- Isolation assumptions/requirements. This is touched on in working list issue #2 (disabling/re-enabling), but maybe this warrants further consideration.
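A strawman for that mapping (invented syntax, and really 1450.3's territory; all names and channels are made up):

    // Hypothetical multisite-capable mapping from STIL Signals
    // to tester resources, one block per site.
    SiteMap {
        Site 1 { CLK -> "tester.pin07"; DATA -> "tester.pin08"; }
        Site 2 { CLK -> "tester.pin23"; DATA -> "tester.pin24"; }
    }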
Gordon