Safe Systems

Software failures can have deadly consequences, but building software that is both reliable and high-performance remains an unsolved problem. This is especially true for learning-enabled AI systems. State-of-the-art machine learning models commonly behave incorrectly on unexpected or adversarial inputs, and AI systems deployed in the real world can violate norms of privacy and fairness that we expect human decision-makers to follow. At the same time, it is usually impossible to reason about the correctness of machine learning models using traditional software debugging and development techniques. Overcoming these challenges through a synthesis of ideas from formal methods, probabilistic reasoning, and machine learning is a central objective of our research.

One dimension of our work on this topic concerns methods for analyzing the safety and robustness of machine learning models [IEEE S&P 2018; PLDI 2019]; a minimal sketch of this kind of analysis appears after this paragraph. The other dimension concerns techniques that integrate such analysis into the learning loop of intelligent systems [NeurIPS 2020]. Open challenges include scaling these methods to larger models, reducing the burden of formal specification needed for safety assurance, finding effective tradeoffs between safety and performance, and discovering algorithms that bring together symbolic and statistical methods for safety assurance.
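To make the first of these dimensions concrete, the sketch below shows interval bound propagation, a simple abstract-interpretation-style analysis in the spirit of the robustness certification work cited above. The network representation, function names, and toy example are illustrative assumptions, not code from the cited papers.

# Minimal sketch of interval-based robustness certification, assuming a small
# fully connected ReLU network given as explicit weight matrices and bias vectors.
# All names here are illustrative, not the published tools.
import numpy as np

def propagate_box(lower, upper, weights, biases):
    """Soundly propagate the input box [lower, upper] through a ReLU network."""
    for i, (W, b) in enumerate(zip(weights, biases)):
        center = (lower + upper) / 2.0
        radius = (upper - lower) / 2.0
        center = W @ center + b          # affine layer applied to the box center
        radius = np.abs(W) @ radius      # worst-case growth of the box radius
        lower, upper = center - radius, center + radius
        if i < len(weights) - 1:         # ReLU is monotone, so clamping the bounds is sound
            lower, upper = np.maximum(lower, 0.0), np.maximum(upper, 0.0)
    return lower, upper

def certify_robust(x, epsilon, weights, biases, true_label):
    """Return True only if every input within L-infinity distance epsilon of x
    is provably classified as true_label (a sound but incomplete check)."""
    lower, upper = propagate_box(x - epsilon, x + epsilon, weights, biases)
    return all(lower[true_label] >= upper[j]
               for j in range(len(lower)) if j != true_label)

# Toy usage: a 2-input, 2-hidden-unit, 2-class network.
weights = [np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([[1.5, -0.5], [-1.0, 1.0]])]
biases = [np.zeros(2), np.zeros(2)]
print(certify_robust(np.array([1.0, 0.2]), 0.05, weights, biases, true_label=0))

Because the box bounds are sound over-approximations, a True answer is a proof of local robustness, while a False answer may simply reflect imprecision; tighter abstract domains and optimization-based refinements trade analysis cost for precision.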


Selected Publications

Anderson, Greg; Verma, Abhinav; Dillig, Isil; Chaudhuri, Swarat

Neurosymbolic Reinforcement Learning with Formally Verified Exploration

In: Larochelle, Hugo; Ranzato, Marc'Aurelio; Hadsell, Raia; Balcan, Maria-Florina; Lin, Hsuan-Tien (Ed.): Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020.


Anderson, Greg; Pailoor, Shankara; Dillig, Isil; Chaudhuri, Swarat

Optimization and Abstraction: A Synergistic Approach for Analyzing Neural Network Robustness

In: Proceedings of the 40th ACM SIGPLAN Conference on Programming Language Design and Implementation, PLDI 2019, Phoenix, AZ, USA, June 22-26, 2019, pp. 731–744, 2019.


Gehr, Timon; Mirman, Matthew; Drachsler-Cohen, Dana; Tsankov, Petar; Chaudhuri, Swarat; Vechev, Martin T.

AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation

In: 2018 IEEE Symposium on Security and Privacy, SP 2018, Proceedings, 21-23 May 2018, San Francisco, California, USA, pp. 3–18, 2018.
