A Fault Detection and Tolerance Tradeoff Evaluation Methodology for VLSI Systems
Fault-tolerant architectures have traditionally relied upon system-level protocols for fault detection and recovery. However, the increasing density and pin counts of VLSI devices provide new opportunities for incorporating on-chip fault detection and tolerance features. Also supporting this trend is the more frequent use of on-chip redundancy for defect tolerance (i.e., yield improvement). With appropriate error detection and soft reconfiguration capabilities, redundant circuitry not needed for “defect tolerance” can be used to support “fault tolerance” at the system level. Incorporating such features on-chip generally reduces system-level performance, and designers are seldom willing to compromise system speed and functionality at the VLSI chip level to implement fault tolerance. This tradeoff between speed and functionality on one hand and fault tolerance on the other is especially difficult to navigate because chip and system designers have had no quantitative methods for assessing the impact of fault tolerance on their designs. Hence the need arises to quantitatively assess the performance attributes of fault tolerance techniques and their associated detection and recovery mechanisms. Through such an assessment, the optimal techniques that meet system requirements without excessive area or throughput penalties can be identified. This paper describes a framework and methodology for performing quantitative cost-benefit tradeoff analysis of fault tolerance techniques at the VLSI chip level.
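To make the kind of tradeoff analysis described above concrete, the sketch below shows one minimal way such a cost-benefit evaluation could be framed. It is an illustrative assumption, not the paper's actual methodology: the technique names, the overhead and coverage numbers, and the additive penalty function are all hypothetical placeholders chosen for demonstration.

```python
from dataclasses import dataclass

@dataclass
class Technique:
    """Hypothetical cost model for one on-chip fault tolerance technique."""
    name: str
    area_overhead: float   # added area, as a fraction of the base design
    delay_overhead: float  # added critical-path delay, as a fraction
    fault_coverage: float  # fraction of modeled faults detected or tolerated

def penalty(t: Technique, area_weight: float = 0.5) -> float:
    """Collapse area and throughput penalties into one scalar cost.

    The weighted sum is an assumed stand-in for whatever cost metric a
    real methodology would derive from system requirements.
    """
    return area_weight * t.area_overhead + (1.0 - area_weight) * t.delay_overhead

def best_technique(candidates: list[Technique], required_coverage: float) -> Technique | None:
    """Return the lowest-penalty technique meeting the coverage requirement."""
    feasible = [t for t in candidates if t.fault_coverage >= required_coverage]
    return min(feasible, key=penalty) if feasible else None

if __name__ == "__main__":
    # Illustrative numbers only -- not taken from the paper.
    candidates = [
        Technique("parity prediction",     area_overhead=0.10, delay_overhead=0.05, fault_coverage=0.80),
        Technique("duplication + compare", area_overhead=1.05, delay_overhead=0.10, fault_coverage=0.98),
        Technique("TMR",                   area_overhead=2.10, delay_overhead=0.15, fault_coverage=0.99),
    ]
    choice = best_technique(candidates, required_coverage=0.95)
    if choice is not None:
        print(f"selected: {choice.name}, penalty = {penalty(choice):.2f}")
```

Even this toy model captures the central point of the abstract: once detection and recovery mechanisms are characterized quantitatively (here, by coverage and overhead fractions), selecting among them becomes a constrained optimization rather than a matter of designer intuition.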