Scan Design

Scan design [1] is one of the most effective structured DfT solutions. It enables controlling and observing any internal state of the circuit. The scan design converts every flip flop into a fully accessible scan cell by adding a multiplexer that selects either the output of the previous scan cell or the corresponding output of the combinational circuit to update each scan cell. All the scan cells, namely the registers (flip flops), are linked together to form a chain, in which the first scan cell is driven by an input pin and the last scan cell drives an output pin. The scan design is illustrated in Fig. 6.1. If all the registers have the scan property, the design is considered full scan; otherwise, it is partial scan. While in the normal mode the chip performs its functional operations, the test mode of the scan design supports two different modes: the shift mode and the capture mode. The scan enable signal is used to switch between these two modes. In the shift mode, the test stimulus is shifted into the scan chain through the scan input pin, while the test response is observed through the scan output pin one bit at a time. Shifting the test stimulus necessitates keeping the shift mode active until the whole pattern is shifted in. In the capture mode, the test stimulus already shifted into the scan cells is applied to the combinational logic circuit, and the test response is then captured in the same scan cells. The captured test response can be observed while shifting in a new stimulus pattern. As a result, a sequential logic circuit can be treated as a combinational circuit, in which each flip flop acts as an input and an output at the same time. Therefore, the test quality is improved.

Fig. 6.1 Scan design
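The shift/capture protocol above can be sketched in a few lines of Python. This is an illustrative bit-level model under simplifying assumptions (a single chain modeled as a list, the combinational logic as a placeholder function, clocking abstracted away), not a hardware description:

```python
# Minimal bit-level model of one scan chain. Cell 0 is driven by the
# scan input pin; the last cell drives the scan output pin.

def shift_in(chain, stimulus):
    """Shift mode (scan enable = 1): each cycle, every cell takes the
    value of the previous cell, and cell 0 takes the next stimulus bit."""
    for bit in stimulus:
        chain = [bit] + chain[:-1]
    return chain

def capture(chain, comb_logic):
    """Capture mode (scan enable = 0): each cell loads the combinational
    response computed from the current chain contents."""
    return comb_logic(chain)

def shift_out(chain, length):
    """Observe the response one bit at a time from the last cell
    (a new stimulus could be shifted in simultaneously)."""
    observed = []
    for _ in range(length):
        observed.append(chain[-1])
        chain = [0] + chain[:-1]
    return observed

# Example: a 4-cell chain; the combinational block is a toy bitwise NOT.
chain = shift_in([0, 0, 0, 0], [1, 0, 1, 1])
chain = capture(chain, lambda c: [1 - b for b in c])
response = shift_out(chain, 4)
```

Note that after shifting, the first stimulus bit sits in the last cell, which is why test patterns are loaded in reverse order in practice.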

For larger designs with a tremendous number of flip flops, shifting each test stimulus through a single scan chain results in a long test application time. A scan chain can be divided into many chains of shorter length, as in Fig. 6.2, which can be accessed simultaneously. The length of the longest chain represents the scan depth. A group of scan cells at an equal distance from the input/output pins is denoted as a scan slice. Increasing the number of chains reduces the scan depth and, thus, the test application time, at the cost of additional channels and pins that are connected to the scan chains. Thus, there is a trade-off between the test time and the test cost.

Fig. 6.2 Basic scan architecture: an example with 7 scan chains with a scan depth of 4

S.M. Saeed et al.
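This trade-off can be made concrete with a back-of-the-envelope model. The numbers and the load/unload-overlap assumption below are illustrative, not taken from the chapter:

```python
import math

def scan_depth(num_flops, num_chains):
    """Scan depth = length of the longest chain, assuming the flops
    are balanced across the chains."""
    return math.ceil(num_flops / num_chains)

def shift_cycles(num_flops, num_chains, num_patterns):
    """Shift-dominated test time: loading one pattern takes 'depth'
    cycles, and unloading a response overlaps with the next load,
    so only the final response adds an extra 'depth' cycles."""
    depth = scan_depth(num_flops, num_chains)
    return (num_patterns + 1) * depth

# More chains -> smaller depth -> fewer cycles, at the cost of pins/channels.
flops, patterns = 100_000, 10_000
for chains in (1, 8, 64):
    print(chains, "chains:", shift_cycles(flops, chains, patterns), "cycles")
```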

6.2.1.1 Test Data Compression

Although the scan design enhances testability, the test cost increases dramatically for complex designs due to the long test time and the large tester memory requirement. To ensure high test quality, a large number of test stimulus and response patterns must be stored, occupying a large space on the external tester's storage; the storage capacity should be expanded to accommodate the larger number of patterns. The limited bandwidth and number of tester channels available to transfer the test data between the tester and the chip increase the number of test cycles and, thus, the overall test time. The test time can be reduced either by reducing the number of test patterns or by increasing the number of channels. However, the former results in fault coverage loss, while the latter is too costly to implement.

Test data compression [2-4] has been developed to address the problem of large test data volume and long test time. Two components are added to the basic scan architecture: the stimulus decompressor and the response compactor. A stimulus decompressor expands a small number of tester channels into a much larger number of internal scan chains. A response compactor collects the responses from a large number of internal scan chains and feeds a small number of tester channels, as illustrated in Fig. 6.3. Scan depth is reduced due to the increased number of internal scan chains while retaining the number of channels. As a result, the number of clock cycles for loading test stimuli and unloading the test responses is reduced, resulting in a reduction in the overall test time. Furthermore, since the size of each test vector is determined by the number of dedicated tester channels, the required tester storage is also reduced, resulting in a reduction in the overall test data volume. Therefore, test data compression reduces the test cost.

Fig. 6.3 Test data compression

Test Stimulus Compression

Each test vector targets a specific set of faults. Only some bits of a test vector are utilized to activate and propagate the fault effects, while the remaining bits are left unspecified; these are referred to as don't-care bits. Test pattern generation tools can randomly specify these bits as 0's and 1's. A decompressor exploits the high density of don't-care bits in a stimulus (test pattern) to compress the test stimuli.
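The random-fill step can be sketched as follows, assuming patterns are given as strings with 'x' marking don't-care bits (a common convention, but an assumption here):

```python
import random

def care_density(pattern):
    """Fraction of specified (care) bits in a pattern string;
    in practice this fraction is typically very low."""
    return sum(b in "01" for b in pattern) / len(pattern)

def random_fill(pattern, seed=0):
    """Replace each don't-care bit ('x') with a random 0 or 1, as an
    ATPG tool might; care bits are preserved."""
    rng = random.Random(seed)
    return "".join(b if b in "01" else str(rng.randrange(2)) for b in pattern)

p = "x1xx0xxx"          # only 2 of the 8 bits are care bits
filled = random_fill(p)
```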

While adding a stimulus decompressor into the scan architecture reduces the test data volume, this scan architecture can degrade the test quality. The stimulus decompressor introduces correlation among the bits delivered to the chains, which depends on the decompressor structure. As a result, a stimulus decompressor may be unable to deliver the desired test pattern; if a test pattern does not comply with the correlation induced by the decompressor, the test pattern is said to be unencodable. Faults that can only be detected by unencodable test patterns may remain untested in the presence of a stimulus decompressor. The internal structure of the decompressor determines the correlation and, thus, the delivery constraints.

A stimulus decompressor can be either sequential, such as a Linear-Feedback Shift Register (LFSR), or combinational, such as fan-out and XOR-based decompressors [5]. An LFSR randomly generates the test pattern. Fan-out decompressors introduce correlation in the form of repeated bits within a slice fragment, whereas XOR-based decompressors introduce linear correlation among the bits delivered into the scan cells. As shown in Fig. 6.4a, any 0-1 conflict within a slice fragment results in an unencodable pattern for the fan-out decompressor, as such a pattern fails to comply with the expected correlation. For XOR-based decompressors, the encodability of patterns is determined by solving a system of linear equations. Figure 6.4b provides an example of an unencodable pattern by highlighting the bits that result in unsolvable linear equations. In this figure, x's denote don't-care bits.
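Both encodability checks can be sketched in Python. The fragment layout and the bitmask encoding of the XOR equations below are illustrative assumptions; a real decompressor fixes these details in hardware:

```python
def fanout_encodable(slice_bits, fanout):
    """Fan-out decompressor: one channel bit feeds 'fanout' adjacent
    chains, so every fragment of a slice must be free of 0-1 conflicts
    ('x' is compatible with anything)."""
    for i in range(0, len(slice_bits), fanout):
        frag = [b for b in slice_bits[i:i + fanout] if b != "x"]
        if "0" in frag and "1" in frag:
            return False
    return True

def xor_encodable(equations):
    """XOR decompressor: each care bit imposes one GF(2) equation —
    the XOR of the channel bits in 'mask' must equal 'val'. Gaussian
    elimination keeps one pivot row per lowest set bit; a fully
    eliminated row demanding 1 means the system is inconsistent,
    i.e. the pattern is unencodable."""
    pivots = {}
    for mask, val in equations:
        while mask:
            p = mask & -mask              # lowest set bit
            if p not in pivots:
                pivots[p] = (mask, val)   # new pivot row
                break
            pmask, pval = pivots[p]
            mask ^= pmask                 # eliminate the pivot bit
            val ^= pval
        else:
            if val:                       # 0 == 1: unsolvable
                return False
    return True

# Fan-out, fanout of 2: "01" in one fragment is a conflict.
ok_fanout  = fanout_encodable("00x1", 2)
bad_fanout = fanout_encodable("01xx", 2)

# Two channels c0, c1 feeding cells c0, c1, c0^c1: requiring all three
# care bits to be 1 is linearly inconsistent.
bad_xor = xor_encodable([(0b01, 1), (0b10, 1), (0b11, 1)])
```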

Typical test application procedures include a second phase, in which unencodable patterns are applied serially by bypassing the decompressor [6]. As this second phase delivers no compression, every pattern applied in it degrades the overall compression level attained. Targeting an aggressive compression level by increasing the number of internal scan chains can reduce the test data volume per pattern in the first phase due to the reduced scan depth. Yet, having to apply more patterns in the serial phase may offset the compression benefits of the first phase. A predictive analysis can help the designer select the best possible configuration for a given compression technique at an early design stage, in order to find the balance between the test cost and the test quality.
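The offsetting effect can be quantified with a simple model. All numbers are illustrative, and the formula is an assumption (balanced chains, serial patterns stored at full size), not the chapter's analysis:

```python
def total_test_data(num_cells, ratio, patterns, unencodable):
    """Total stored bits: encodable patterns are compressed by 'ratio'
    (internal chains / tester channels); the 'unencodable' ones are
    applied serially at full size, i.e. with no compression."""
    encodable = patterns - unencodable
    return encodable * num_cells // ratio + unencodable * num_cells

# A more aggressive ratio shrinks each compressed pattern, but if it
# makes more patterns unencodable, the serial phase can wipe out the gain:
modest     = total_test_data(100_000, 10, 1000, 10)    # 1% unencodable
aggressive = total_test_data(100_000, 100, 1000, 150)  # 15% unencodable
```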

Test Response Compaction

While a stimulus decompressor reduces the test data volume for the input stimuli, output responses can be similarly compressed by a response compactor. However, a response compactor may degrade observability. Some information is lost due to compaction, which can affect the observability and the fault coverage of the circuit. Some fault effects that were observed in the original circuit may be masked due to output response compaction. The main underlying reasons are unknown values and fault aliasing. Fault aliasing refers to the situation where multiple fault effects mask each other. An example of fault aliasing is illustrated in Fig. 6.5, where the fault effects of f1 cancel each other upon getting compacted. Unknown values can be captured in the scan cells for many reasons, such as uninitialized memory and bus contention. An unknown value, denoted by x, can mask the fault effects in the presence of a response compactor. In Fig. 6.5, f2 is undetected, as its effect goes through the compactor along with an x. Although fault aliasing is a problem, the biggest concern is the unknown values due to their severe impact. Unknown values can be either static or dynamic [7, 8]. Static unknown values are discovered at design time at the outputs of un-modeled blocks (e.g., memory (RAM)) or bus contentions. Dynamic unknown values appear later, after the design stage, due to timing problems, the impact of operating parameters (voltage, temperature), and defects caused during manufacturing.
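Both masking mechanisms can be demonstrated with a toy XOR tree compacting a single four-cell slice; the response strings are hypothetical, not the ones in Fig. 6.5:

```python
def xor_compact(slice_bits):
    """Compact one scan slice through an XOR tree: any captured 'x'
    corrupts the output, and an even number of flipped bits cancels."""
    if "x" in slice_bits:
        return "x"
    return str(sum(b == "1" for b in slice_bits) % 2)

good    = "0000"   # fault-free response of one slice
f1_resp = "0110"   # f1 flips two cells -> effects cancel in the XOR tree
f2_resp = "100x"   # f2 flips one cell, but an unknown is also captured

aliased = xor_compact(f1_resp) == xor_compact(good)  # True: f1 goes undetected
masked  = xor_compact(f2_resp)                       # 'x': f2 unobservable
```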


Fig. 6.4 Encodability problem. a Fan-out. b XOR decompressor


Fig. 6.5 The effect of XOR-compactor on fault coverage: fault cancellation and fault masking

Sequential compaction circuitries [9, 10], such as multiple-input signature registers (MISRs), can be utilized to compress the scan responses into a signature that is observed at the end of the test application process. The output vectors of the internal scan chains are compressed during different clock cycles to produce a signature that represents the output response of a certain pattern. A typical MISR consists of flip flops and XOR gates connected together into a register. A MISR not only compresses a long scan-out sequence in the absence of unknown values, but also minimizes the aliasing impact on the fault coverage. However, one or more unknown values will corrupt the corresponding signature. Also, it is difficult to directly identify, from the obtained MISR signature, the location of the scan cell that captured a fault effect.
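A minimal MISR model follows, assuming a toy 4-bit register with feedback taps at stages 3 and 2 (a hypothetical choice; practical MISRs use primitive polynomials):

```python
def misr_step(state, taps, slice_bits):
    """One MISR clock: shift the register with LFSR feedback (XOR of
    the tap stages) and XOR each incoming scan-out bit into its stage."""
    feedback = 0
    for t in taps:
        feedback ^= state[t]
    shifted = [feedback] + state[:-1]
    return [s ^ b for s, b in zip(shifted, slice_bits)]

def signature(responses, taps, n):
    """Compact a whole scan-out sequence (one slice per cycle) into an
    n-bit signature that is observed once, at the end of the test."""
    state = [0] * n
    for slice_bits in responses:
        state = misr_step(state, taps, slice_bits)
    return state

# Three cycles of scan-out from 4 internal chains, folded into 4 bits.
sig = signature([[1, 0, 1, 1], [0, 1, 0, 0], [1, 1, 1, 0]], taps=(3, 2), n=4)
```

Note that the model has no representation for x: a single unknown bit would have to propagate into the state and corrupt every subsequent signature value, which is exactly the MISR weakness described above.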

Combinational compaction solutions [11, 12], mostly XOR-based, are also utilized for response compaction. Every slice in the scan architecture is compacted independently. Unknown values may mask some of the captured bits in the same clock cycle, depending on the tolerance of the space compactor to unknown values per shift. However, the space compactor is susceptible to aliasing and offers lower compaction levels than the time compactor.

Regardless of the compaction method, unknown values can be handled in a variety of ways to achieve high fault coverage. Multiple XOR trees can be constructed that propagate the unknown values to the corresponding compressed response outputs, while scan cells connected to different compressed response outputs remain observable. Furthermore, DfT hardware can block unknown values before they reach the scan cells [13]. It is also possible to mask the unknown values before they reach the compactor [3, 14]. The response compactor can also be constructed to adapt to the varying density of unknown values in the response patterns. For XOR-based compactors, for instance, the fan-out of scan chains to XOR trees within the compactor can be adjusted per pattern/region/slice to minimize the corruption impact of the unknown values in a cost-effective way [15, 16].
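Masking ahead of the compactor can be sketched as follows; the per-slice mask granularity and the four-chain setup are illustrative assumptions:

```python
def apply_mask(slice_bits, mask):
    """Per-chain mask gates ahead of the compactor: a masked chain's
    bit is forced to a known 0, so a captured 'x' cannot corrupt the
    XOR tree (the masked cell's fault effects are lost for this shift)."""
    return ["0" if m else b for b, m in zip(slice_bits, mask)]

def xor_compact(slice_bits):
    """XOR-tree space compaction of one slice; 'x' corrupts the output."""
    if "x" in slice_bits:
        return "x"
    return str(sum(b == "1" for b in slice_bits) % 2)

raw  = ["1", "x", "0", "1"]                # chain 1 captured an unknown
mask = [False, True, False, False]         # block only the X-capturing chain
out  = xor_compact(apply_mask(raw, mask))  # chains 0, 2, 3 stay observable
```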

 