Dependability and Security of Advanced AI Systems

Muhammad Shafique - NYU Abu Dhabi, UAE

Abstract:

Modern Machine Learning (ML) and Artificial Intelligence (AI) approaches, such as Deep Neural Networks (DNNs), have improved tremendously over the past years, achieving high accuracy on tasks like image classification, object detection, natural language processing, and medical data analytics. These DNNs are deployed in a wide range of applications from the Smart Cyber-Physical Systems (CPS) and Internet of Things (IoT) domains, on resource-constrained devices subjected to unpredictable and harsh operating scenarios, thereby requiring dependable AI solutions. Moreover, in the era of growing cyber-security threats, the intelligent features of smart CPS and IoT systems face new types of attacks, requiring novel design principles for robust ML/AI.

In my research labs at New York University and TU Wien, I have been extensively investigating the foundations for next-generation energy-efficient, dependable, and secure AI/ML computing systems, addressing the above-mentioned challenges across the hardware and software stacks. This lecture will present design challenges and hardware/software techniques for building dependable and secure AI systems, leveraging optimizations at different software and hardware layers and at different design stages (e.g., design-time vs. run-time approaches). These techniques provide crucial steps towards enabling the wide-scale deployment of dependable and secure embedded AI systems, such as UAVs, autonomous vehicles, robotics, IoT healthcare/wearables, and Industrial IoT.

Syllabus:

  1. AI/ML for Smart Cyber Physical Systems (CPS): Introduction and Design Challenges
    • ML applications for Smart CPS and challenges
    • Accelerator-based AI Systems
    • Introduction to different dependability threats for AI Systems
    • Introduction to different security threats for AI Systems
  2. Dependability for AI Systems
    • Resilience analysis
    • Techniques for mitigating manufacturing defects (e.g., stuck-at faults)
    • Techniques for mitigating soft errors
    • Techniques for mitigating aging
  3. Security for AI Systems
    • Threat models
    • Backdoor attacks
    • Adversarial attacks
    • Model Stealing attacks
    • Defenses
  4. Conclusion
    • Summary of problems and solutions
    • Key takeaway messages
    • Open Research Challenges