Machine Learning in Logic Circuit Diagnosis

Shawn Blanton - Carnegie Mellon University, US

Abstract:

In this seminar, we examine representative work on the use of ML in diagnosis from the Advanced Chip Testing Laboratory of Carnegie Mellon University. We partition the work into three categories, namely, pre-diagnosis, during-diagnosis, and post-diagnosis. Pre-diagnosis is concerned with any activities performed before diagnosis is deployed. Examples of pre-diagnosis activities include classic work such as diagnostic ATPG and DFT for increasing testability. Pre-diagnosis activities that use ML include, for example, test reordering and optimizing the collection of test-response data. In addition, there is work that predicts the outcome of diagnosis so that compute resources can be allocated more effectively. During-diagnosis involves the algorithms themselves (e.g., cause-and-effect analysis, path tracing) and the underlying technologies they rely on, such as fault simulation. The use of ML in during-diagnosis activities generally involves learning while diagnosis executes. For example, a k-nearest neighbor model is created, evolved, and used during on-chip diagnosis to improve diagnostic outcomes. Finally, post-diagnosis includes all the activities that occur after diagnosis execution. These approaches usually involve volume diagnosis (i.e., using the results of many diagnoses) to improve diagnostic resolution, that is, to reduce the number of possible failure locations (within the netlist and/or the layout). Post-diagnosis is an ideal application for ML given the abundance of well-structured, labeled data from prior diagnoses, physical failure analyses, and precise fault simulations. The bulk of the work discussed lies in post-diagnosis and focuses on improving localization and behavior identification.
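
To make the during-diagnosis idea concrete, the following is a minimal sketch of a k-nearest neighbor model that classifies failure signatures and is refit as newly confirmed results arrive. The feature encoding, labels, metric, and update policy are assumptions chosen for illustration; they are not the ACTL on-chip implementation.

# Illustrative sketch only: a k-NN model that scores new failure signatures
# during diagnosis. All data here is synthetic and the encoding is assumed.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical training set: each row is a fixed-length failure signature
# (e.g., a bit vector of failing outputs per test pattern); each label is a
# defect class confirmed by later physical failure analysis.
X_train = np.random.randint(0, 2, size=(200, 64))
y_train = np.random.choice(["bridge", "open", "stuck-at"], size=200)

knn = KNeighborsClassifier(n_neighbors=5, metric="hamming")
knn.fit(X_train, y_train)

# While diagnosis executes, an unresolved signature is classified and the
# class probabilities can be used to re-rank fault candidates.
new_signature = np.random.randint(0, 2, size=(1, 64))
print(knn.predict(new_signature), knn.predict_proba(new_signature))

# "Evolving" the model is represented here by refitting on the augmented
# data set (scikit-learn's k-NN has no incremental-update API).
X_train = np.vstack([X_train, new_signature])
y_train = np.append(y_train, knn.predict(new_signature))
knn.fit(X_train, y_train)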
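For the post-diagnosis case, the sketch below trains a classifier on hypothetical volume-diagnosis features to prune a candidate list and thereby tighten diagnostic resolution. The feature set, labeling rule, and score threshold are stand-ins, not the specific techniques discussed in the seminar.

# Illustrative sketch only: a post-diagnosis classifier over volume-diagnosis
# data used to prune candidate defect locations. Features and labels are
# hypothetical placeholders for prior diagnoses, PFA results, and simulations.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical per-candidate features: fraction of failing patterns explained,
# fraction of passing patterns contradicted, layout critical area, etc.
# Label: 1 if physical failure analysis confirmed the candidate, else 0.
X = rng.random((1000, 4))
y = (X[:, 0] > 0.7).astype(int)  # placeholder labeling rule

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# For a new diagnosis report, score each candidate and keep only those the
# model considers likely to be the real defect, shrinking the candidate list
# (i.e., improving diagnostic resolution).
candidates = rng.random((12, 4))
scores = model.predict_proba(candidates)[:, 1]
kept = candidates[scores > 0.5]
print(f"kept {len(kept)} of {len(candidates)} candidates")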