A Case Study of Testing Strategy for AI SoC

Recent advances in artificial intelligence (AI) have become a driving force behind technological revolution and industrial transformation, leading to broad economic and social development. Application-specific AI SoCs are being developed at many companies to accelerate data-intensive AI computation. These chips present new challenges in designing and implementing Design-For-Test (DFT) logic. In this paper, we share our experience implementing DFT for our AI SoC. To achieve lower power and higher bandwidth, the design uses a high-speed SerDes PHY with a lower threshold voltage that consumes many SoC pins. This negatively impacts DFT and ATPG because few reusable I/Os remain to serve as scan test channels. We present our solutions and the tradeoffs made to optimize DFT silicon area overhead, test cost, test coverage, and pre-silicon verification run time, together with ready-to-use silicon bring-up methodologies.
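The pin-scarcity problem described above can be illustrated with a first-order scan test time model. The sketch below is not from the paper; it is a generic back-of-the-envelope calculation, with all parameter values (flop count, pattern count, compression ratio, channel counts) chosen purely for illustration, showing why losing reusable I/Os as scan channels inflates shift time under EDT-style test compression:

```python
# Hedged sketch: first-order scan shift-cycle estimate. All numbers are
# illustrative assumptions, not figures from the paper.

def max_chain_length(total_flops: int, channels: int, compression_ratio: int) -> int:
    """With EDT-style compression, `channels` external scan channels fan out
    to channels * compression_ratio internal chains; flops split evenly."""
    internal_chains = channels * compression_ratio
    return -(-total_flops // internal_chains)  # ceiling division

def scan_shift_cycles(patterns: int, chain_length: int, capture_cycles: int = 1) -> int:
    """Cycles to apply `patterns` patterns: each shifts through the longest
    internal chain, plus a few capture cycles per pattern."""
    return patterns * (chain_length + capture_cycles)

# Illustrative comparison: a SerDes PHY consuming pins drops the available
# scan channels from 16 to 4 on an otherwise identical design.
flops, patterns, ratio = 2_000_000, 10_000, 100
for ch in (16, 4):
    cycles = scan_shift_cycles(patterns, max_chain_length(flops, ch, ratio))
    print(f"{ch} channels -> {cycles:,} shift cycles")
```

With these assumed numbers, cutting the channels from 16 to 4 quadruples the longest internal chain, and shift time scales accordingly; this is the kind of test-cost pressure that motivates the channel sharing and broadcasting tradeoffs the paper discusses.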
