Lectures

The continued scaling of semiconductor devices, together with technological innovations, allows designers to pack more than 20 billion transistors on a single die of a state-of-the-art AI chip. At the same time, the explosion of automotive ICs is reshaping the quality requirements of manufacturing test as well as functional safety. Testing designs of this complexity poses a significant challenge: not only must increasingly stringent quality requirements be satisfied, but the cost of test and design cycles is also subject to strong competitive pressure.

This lecture will focus on some of the advances shaping the test industry to address design and process changes. In particular, we will present state-of-the-art design-for-test (DFT) methodologies and practices for high-quality low-cost manufacturing test.

There is a growing number of applications, such as automotive electronics, that require system-level test. In addition to scan-based testing and issues related to at-speed testing in a scan environment, the lecture will cover guidelines for the design of built-in self-testable cores, techniques for random-pattern testability, as well as BIST architectures for random logic. The lecture will also highlight, from a DFT perspective, methods deployed both to control test power dissipation and to reduce the negative impact of unknown states on test quality.

Finally, we will illustrate applications and discuss future trends of DFT technology.

  1. Introduction – current design trends and quality requirements
    1. Design characteristics, semiconductor technology trends
    2. Quality and productivity requirements
    3. Challenges facing logic test
    4. Defects and fault models, including defect based test and cell-aware test
    5. Test quality – how is it measured
  2. Scan-based designs
    1. Logic-to-pin ratio, circuit complexity, test generation time
    2. Scan cells, multiplexing of data flip-flops, impact on performance, modes of operation, generic scan path, parallel scan chains
    3. Test application time, safe scan shifting, volume of test data
  3. At-speed scan-based test
    1. Single clock domain - single capture, over-testing, speed of loading, timing in capture window, frequency to reduce power and constraints on BIST controller
    2. Frequency to reduce test time, double capture, launch from a semi-legal state, slow scan enable
    3. Multiple clock domains, at-speed testing within and between clock domains, clock suppression, hold states, multiple frequencies - single capture.
    4. On-Chip Controllers for at-speed test.
  4. Embedded test compression for deterministic test
    1. Analysis of requirements
    2. Basic architecture
    3. Compression schemes
    4. LFSR reseeding, solving linear equations
    5. Stimuli generators – different types and their properties, linear dependencies and phase shifters
    6. On-chip compactors of test responses, time compactors, space compactors, finite memory compactors.
    7. Handling of X states
    8. Power management in shift and capture
  5. Logic BIST – implementation and industrial practices
    1. Requirements for system test, functional safety, ISO 26262
    2. STUMPS LBIST architecture
    3. BIST-ready cores
    4. Test point insertion
    5. Generation of pseudorandom test patterns, test sequence aperiodicity, structural and linear dependencies, driving large numbers of scan chains
    6. Generators of pseudorandom sequences, linear feedback shift registers (LFSRs).
    7. Signature analyzers, multiple-input signature registers (MISRs), parallel data acquisition, multiple error injections, unknown (X) states and their sources, X states and signatures, X-masking schemes, X-bounding logic
    8. At-speed test
    9. LBIST controllers
  6. Hybrid schemes – EDT and LBIST
    1. Dual generators/decompressors
    2. Signature registers/compactors
    3. Power management in shift and capture
    4. Test scheduling
    5. Clock gating, low-transition test pattern generators
    6. IJTAG and pattern retargeting for EDT and LBIST
    7. Applications
  7. Hierarchical test
    1. Need for hierarchical test methodology
    2. DFT implementation to facilitate hierarchical test
    3. Re-usable test patterns
    4. Pattern retargeting
    5. Bandwidth management and test buses
    6. Applications
  8. Conclusions
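The pseudorandom pattern generators covered in Section 5 can be given a concrete flavor with a minimal Fibonacci LFSR sketch; the 4-bit generator and the primitive polynomial x^4 + x + 1 are chosen here purely for illustration:

```python
def lfsr_states(seed, taps, nbits):
    """Fibonacci LFSR: shift left, feeding back the XOR of the tapped bits."""
    state, states = seed, []
    while True:
        states.append(state)
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << nbits) - 1)
        if state == seed:      # the sequence is periodic; stop after one full cycle
            return states

# Primitive polynomial x^4 + x + 1 -> feedback taps at bit positions 3 and 0.
# A primitive polynomial yields a maximal-length sequence of 2^4 - 1 = 15 states;
# the all-zero state is excluded, since it would lock the generator forever.
states = lfsr_states(seed=0b0001, taps=(3, 0), nbits=4)
print(len(states))   # 15 distinct nonzero states
```

In a scan-BIST setting such a register would be widened and combined with a phase shifter to drive many scan chains, which is exactly where the structural and linear dependencies listed above come into play.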
Janusz Rajski
Mentor, A Siemens Business

The steady march of Moore's Law in semiconductors has enabled the creation of ever more complex systems with electronics playing a central role. As a result, thorough testing of individual components is no longer adequate to ensure overall system performance, quality, and reliability. The rising importance of system-level test (SLT) as a supplement to traditional structural component test has gained wide recognition recently. In this lecture, we start by examining where SLT fills the gap in conventional testing through which marginal defects escape. In many cases, system failures are the result of complex software and hardware component interactions leading to abnormal scenarios not attributable to simple and single root causes. Though SLT has shown superiority in catching marginal defects, current ad hoc practices leave much room for improvement in SLT's cost-effectiveness. We explore the idea of creating a more methodical approach to SLT built on the principles of testability, system modeling, and data analytics. Putting the method into practice will rely on links to the design verification and machine learning domains. We will also discuss the required advances in test equipment capabilities to match the SLT methodology. The aim is to stimulate new ideas and directions for future research in system-based testing as upcoming 5G/IoT/AI-based applications penetrate ever deeper and more pervasively into our daily lives.

  1. Overview of testing challenges posed by rising system complexity
  2. Current fault models and why marginal defects escape
  3. Current SLT practices from test development to tester execution
  4. Developing a methodology to achieve cost-effective SLT
  5. System-level fault modeling, simulation and test generation
  6. Adaptive SLT via data analytics to lower SLT cost
  7. Test equipment advances to support SLT methodology
  8. Looming test challenges in upcoming 5G/IoT/AI-based systems
Harry Chen
MediaTek

Test generation is one of the central tasks in testing. While classical algorithms are reaching their limits, new approaches based on formal proof techniques have emerged. In this presentation, the classical approaches to Automatic Test Pattern Generation (ATPG) are first briefly reviewed. Then, the lecture gives an overview of the development of solver engines over the past two decades. It is shown how the core techniques work and what the core paradigms behind a successful automatic engine are. Not only Boolean techniques, like BDDs and SAT, are presented, but also extensions to word-level descriptions for decision diagrams and solver engines exploiting additional theories. It is shown how ATPG can be formulated in terms of SAT. Hybrid semi-formal techniques, i.e. combinations of SAT-based ATPG with classical approaches, are also considered and evaluated on benchmark circuits. Finally, if time permits, the generality of the approach based on formal techniques is shown by also considering scenarios in the context of verification and debugging.

  1. Test Basics
    1. Fault models
    2. Test generation – classical approaches
  2. Formal Proof Techniques
    1. Boolean
      1. Decision diagrams
      2. Satisfiability
    2. Word-level
      1. Extensions of decision diagrams
      2. SMT solvers
  3. SAT-based Test Generation
    1. Encoding
    2. Algorithms
    3. Hybrid approaches
    4. Experiments
  4. Formal Techniques along the Design Flow
    1. Verification
    2. Debugging
  5. Conclusions
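To make the SAT formulation of ATPG concrete, here is a minimal, dependency-free sketch: a two-gate circuit with a stuck-at fault is encoded as CNF via a miter (good output XOR faulty output), and a tiny backtracking SAT solver finds the test pattern. The circuit, fault, and solver are invented for illustration; production flows use industrial CNF encodings and far more sophisticated solvers.

```python
def solve(clauses, assign=None):
    """Minimal backtracking SAT solver (no unit propagation, for clarity).
    Literals are signed ints; returns a {var: bool} model or None."""
    if assign is None:
        assign = {}
    remaining = []
    for cl in clauses:
        if any(assign.get(abs(l)) == (l > 0) for l in cl):
            continue                       # clause already satisfied
        rest = [l for l in cl if abs(l) not in assign]
        if not rest:
            return None                    # clause falsified -> conflict
        remaining.append(rest)
    if not remaining:
        return assign                      # all clauses satisfied
    lit = remaining[0][0]
    for val in (lit > 0, lit <= 0):        # branch on both polarities
        model = solve(remaining, {**assign, abs(lit): val})
        if model is not None:
            return model
    return None

# Circuit: n1 = a AND b; out = n1 OR d.  Fault: n1 stuck-at-0 (faulty out = d).
# Variables: 1=a, 2=b, 3=d, 4=n1, 5=good_out, 6=faulty_out.
clauses = [
    [-4, 1], [-4, 2], [4, -1, -2],   # n1 <-> a AND b   (Tseitin encoding)
    [5, -4], [5, -3], [-5, 4, 3],    # good_out <-> n1 OR d
    [-6, 3], [6, -3],                # faulty_out <-> d (n1 forced to 0)
    [5, 6], [-5, -6],                # miter: good_out XOR faulty_out
]
model = solve(clauses)
# The only satisfying assignment sets a=1, b=1, d=0: the test pattern that
# distinguishes the faulty circuit from the good one.
```

Any satisfying assignment of the miter CNF is, by construction, a test pattern for the targeted fault; an UNSAT result would prove the fault untestable (redundant).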
Rolf Drechsler
University of Bremen/DFKI Bremen

There is growing analog content in today's ICs in light of increasing IoT, automotive, and wireless communication applications. Testing analog ICs to meet the quality and reliability requirements imposed by the application has become a challenging task. The traditional approach in industry is to directly measure the performances promised in the datasheet. In many cases, this straightforward and easily interpretable approach is not economically affordable, does not deliver the low DPPM demanded by safety-critical applications, and may not even be applicable in the context of SoCs. This lecture will give an overview of current test practices and challenges, and will focus on cutting-edge approaches, including integrated test and machine learning-based approaches.

  • Part I: Introduction and Motivation
    • Failure mechanisms
    • Test objectives: post-manufacturing test, post-manufacturing tuning, on-line/in-field test
    • Current approaches in industry
    • Test cost considerations
  • Part II: Integrated Test Approaches
    • Test access mechanisms
    • Design-for-Test (DfT) and Built-in Self-Test (BIST)
    • Oscillation-based test
    • Loop-back test
    • Sensor-based test
    • Non-intrusive sensors
    • Temperature sensors
    • On-chip test stimulus generation and response analysis for data converters
    • Jitter estimators for phase locked loops
    • Defect-oriented test
  • Part III: Machine Learning-Based and Statistical Test Approaches
    • Alternate test
    • Test compaction
    • Fault diagnosis
    • Post-manufacturing tuning
    • Wafer-level spatial correlation modeling
    • Outlier detection
    • Adaptive test
    • Neuromorphic BIST
    • Practical recommendations: feature extraction, feature selection, dataset preparation, generalization assessment, etc.
  • Part IV: Test metrics
    • Defect coverage metrics estimation
    • Fast defect simulation
    • Parametric test metrics estimation
    • Fast Monte Carlo analysis
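As a flavor of the parametric test metrics estimation in Part IV, the sketch below runs a toy Monte Carlo analysis: a performance with Gaussian process variation is "measured" with additive noise, and test escapes and yield loss are estimated by counting pass/fail disagreements. All distributions and limits here are invented for illustration.

```python
import random

random.seed(42)                      # fixed seed for a reproducible estimate
N = 100_000
spec_lo, spec_hi = 0.9, 1.1          # datasheet spec on the true performance p

escapes = yield_loss = 0
for _ in range(N):
    p = random.gauss(1.0, 0.05)      # process variation of the performance
    m = p + random.gauss(0.0, 0.01)  # measurement noise added by the tester
    good = spec_lo <= p <= spec_hi   # does the part truly meet its spec?
    passes = spec_lo <= m <= spec_hi # test applies the same limits to noisy m
    if passes and not good:
        escapes += 1                 # defective part shipped (test escape)
    if good and not passes:
        yield_loss += 1              # good part rejected (yield loss)

escape_rate, yield_loss_rate = escapes / N, yield_loss / N
```

Fast Monte Carlo techniques covered in the lecture aim to estimate exactly these small rates with far fewer samples than such brute-force simulation requires.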
Haralampos-G. Stratigopoulos
Sorbonne Université, CNRS, LIP6

This course focuses on test strategies that are based on defect-oriented modelling rather than traditional fault models. The two approaches are compared and contrasted. Layout-based extraction is emphasized, and reporting of coverage based on defect critical area rather than fault counting is introduced. Recent advances such as cell-aware test, bridging models, and cell-neighborhood analysis are described. Also discussed are IDDQ testing and the enhancements necessary to ensure the reliability of screened devices, such as outlier screening and stress.

  • Metrics and definitions
    • causes of circuit failures
    • circuit element, component, defects, faults
    • fault modelling
    • fault coverage
    • hard versus latent defects
    • test escapes and DPPM
    • reliability metrics
  • Traditional fault models
    • stuck-at
    • transition
    • bridging
    • path delay
    • stuck-open
    • analog behavior (voltage and IDDQ)
    • problems with traditional models
      • likelihood of occurrence
      • internal cell defects
  • Defect oriented modelling
    • Critical area for defects
      • definition
      • methods for calculation
      • concept of saturation
    • cell-aware – beyond port faults
      • cell layout extraction
      • SPICE simulation of defects
      • required inputs for defect detection
      • static and delay tests
    • interconnect layout extraction
      • bridges
      • opens
    • extension to cell neighborhoods
      • cell configurations
      • multiple analog characterizations
    • ATPG for interconnect faults
      • 4-way bridging model
      • 2-pattern test for opens
  • Fault coverage
    • what is it used for?
    • how is it calculated?
    • factors affecting coverage numbers
      • collapsing
      • redundant logic
    • limitations of traditional coverage
    • use of total critical area
    • fault coverage and quality
      • how much coverage is enough?
      • theoretical approaches
      • challenges in chip-level coverage from multiple test sets
      • effect of distribution of non-detected defects
  • Traditional vs Defect Based Models
    • Recent results on extraction of bridges, opens, cell neighbors
    • Weighted versus unweighted coverages
    • High volume production test results
  • IDDQ Testing
    • general principles
    • detection of defects that are undetectable using voltage-based tests
    • tester implementation
    • dealing with process variability
  • Enhancements for Reliability
    • Outlier Screening
      • Dealing with parametric faults
      • Part average testing – static vs dynamic
      • Nearest neighbor residuals
    • Voltage stress
      • The bathtub curve
      • The concept of acceleration
      • Limitations on stress voltage
      • Dynamic versus enhanced stress
    • Burn-in
      • Temperature acceleration
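The weighted-versus-unweighted coverage distinction discussed above can be shown with a toy fault list; the faults and critical-area values below are hypothetical:

```python
# Hypothetical fault list: (fault, detected by the test set?, critical area in um^2)
faults = [
    ("net12 bridge",  True,  4.0),   # large critical area -> likely defect site
    ("cell7 open",    True,  0.5),
    ("net3 open",     False, 3.0),
    ("cell2 bridge",  False, 0.5),
]

# Unweighted coverage: plain fault counting, every fault counts the same.
unweighted = sum(det for _, det, _ in faults) / len(faults)

# Weighted coverage: each fault contributes its critical area, so the metric
# tracks the probability that a randomly placed spot defect is detected.
total_area = sum(area for _, _, area in faults)
weighted = sum(area for _, det, area in faults if det) / total_area

print(unweighted, weighted)   # 0.5 vs 0.5625
```

Because the detected bridge dominates the total critical area, the weighted number exceeds the fault count; with the undetected faults on large nets the ordering would reverse, which is why the two metrics can rank the same test set differently.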
Peter Maxwell
ON Semiconductor

This lecture will explain the circuitry and equipment, both on-chip and off-chip, that is used to test, debug, and diagnose integrated circuits. The discussion of on-chip circuitry includes everything from decades-old Design-For-Testability methods like scan testing to recently standardized IEEE 1687 embedded instruments for on-line monitoring. Likewise, we'll examine traditional external hardware like Automated Test Equipment and bench instruments and how they are being augmented with internal chip features. In addition to these high-volume manufacturing applications, we will review the infrastructure that is uniquely utilized for post-silicon debug and validation tasks as applied to hardware, software, and firmware. We will also study the mechanics of doing failure diagnosis as a gateway into yield enhancement by exploiting big data and machine learning techniques. We'll conclude with a forward-looking view of the direction in which all this infrastructure seems to be evolving.

  • Infrastructure overview
    • End-to-end design and manufacturing flow
    • Test equipment and flow
    • Debug equipment and flow
    • Diagnosis equipment and flow
  • Test
    • DFT infrastructure
    • Embedded instruments
    • Describing DFT infrastructure with IEEE 1687 ICL
    • Writing and retargeting portable tests with IEEE 1687 PDL
    • Dealing with analog circuits via IEEE P1687.2
    • Interfacing to chips without IEEE 1149.1 TAPs via IEEE P1687.1
  • Debug
    • DFD and DFV infrastructure
    • Debug and validation instruments
    • External hardware: Logic Analyzer, Protocol analyzer
    • Describing DFD/DFV infrastructure with IEEE 1687/1687.2 ICL
    • Hardware vs. Software vs. Firmware debug
    • Debug vs. Security conflict
  • Diagnosis
    • Logic Diagnosis
    • Cell-aware diagnosis
    • Layout-aware diagnosis
    • Physical failure analysis
    • Volume diagnosis and yield learning
    • Big data applied to diagnosis
    • Machine learning applied to diagnosis
  • Future vision
    • Adaptive test in a disaggregated supply chain environment
    • Sharing data and protecting data
    • Universal infrastructure
Jeff Rearick
AMD Senior Fellow