
Safe Systems

Software failures can have deadly consequences, yet building software that is both reliable and performant remains an unsolved problem. This is especially true of learning-enabled AI systems. State-of-the-art machine learning models routinely behave incorrectly on unexpected or adversarial inputs, and AI systems deployed in the real world can violate norms of privacy and fairness that we expect human decision-makers to follow. At the same time, it is usually impossible to reason about the correctness of machine learning models using traditional software debugging and development techniques. Overcoming these challenges through a synthesis of ideas from formal methods, probabilistic reasoning, and machine learning is a central objective of our research.

One aspect of our work on this topic concerns methods for analyzing the safety and robustness of machine learning models [IEEE S&P 2018; PLDI 2019]. Techniques that integrate such analysis into the learning loop of intelligent systems [NeurIPS 2020] form the other dimension. Open challenges include scaling these methods to larger models, reducing the burden of formal specification needed for safety assurance, finding effective tradeoffs between safety and performance, and discovering algorithms that bring together symbolic and statistical methods for safety assurance.
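
To make the flavor of such analysis concrete, the sketch below (purely illustrative, not our lab's tooling) certifies local robustness of a small fully connected ReLU network by propagating interval bounds through its layers, in the spirit of abstract-interpretation-based analyses such as AI2; the function and variable names are hypothetical.

    # Minimal sketch, assuming a ReLU network given as a list of (W, b) layer pairs.
    # Interval bounds are propagated soundly through each layer; if the lower bound of
    # the true class's logit exceeds the upper bound of every other logit, then every
    # input within eps of x (in the L-infinity norm) is provably classified correctly.
    import numpy as np

    def interval_affine(lo, hi, W, b):
        # Split W into positive and negative parts so the output interval stays sound.
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

    def certify_linf_robustness(layers, x, eps, true_label):
        lo, hi = x - eps, x + eps
        for i, (W, b) in enumerate(layers):
            lo, hi = interval_affine(lo, hi, W, b)
            if i < len(layers) - 1:                      # ReLU on all hidden layers
                lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
        return all(lo[true_label] > hi[j] for j in range(len(lo)) if j != true_label)

The analysis is conservative: it may fail to certify some robust inputs, but it never certifies a non-robust one, which is what makes this style of reasoning safe to embed inside a training loop.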


Selected Publications

Anderson, Greg; Verma, Abhinav; Dillig, Isil; Chaudhuri, Swarat

Neurosymbolic Reinforcement Learning with Formally Verified Exploration Inproceedings

In: Larochelle, Hugo; Ranzato, Marc'Aurelio; Hadsell, Raia; Balcan, Maria-Florina; Lin, Hsuan-Tien (Ed.): Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020.


Anderson, Greg; Pailoor, Shankara; Dillig, Isil; Chaudhuri, Swarat

Optimization and abstraction: a synergistic approach for analyzing neural network robustness Inproceedings

In: Proceedings of the 40th ACM SIGPLAN Conference on Programming Language Design and Implementation, PLDI 2019, Phoenix, AZ, USA, June 22-26, 2019, pp. 731–744, 2019.


Gehr, Timon; Mirman, Matthew; Drachsler-Cohen, Dana; Tsankov, Petar; Chaudhuri, Swarat; Vechev, Martin T.

AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation Inproceedings

In: 2018 IEEE Symposium on Security and Privacy, SP 2018, Proceedings, 21-23 May 2018, San Francisco, California, USA, pp. 3–18, 2018.



Interpretable Systems

To trust a system, you need to understand it. However, in learning-enabled systems, interpretability is often at odds with learning performance. For example, deep neural networks can learn efficiently but are opaque black boxes. On the other hand, linear models or shallow decision trees are more interpretable but do not perform well on complex tasks.

Our lab has introduced programmatic interpretability, a new way around this conflict. Here, one learns models represented as programs in neurosymbolic domain-specific languages [ICML 2018; NeurIPS 2020]. These languages are designed to be interpretable by specific groups of users while remaining more expressive than traditional “shallow” models. Our other goals include the synthesis of programmatic explanations of local decisions made by more complex models, the inference of human-comprehensible properties of models through program analysis, and a systematic exploration of the tradeoffs between interpretability and model performance.
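
As a toy illustration (not one of our published languages), the sketch below defines a miniature DSL of programmatic policies built from linear expressions and if-then-else branching; the example controller and its feature names are made up.

    # A toy DSL for programmatic policies: programs are small expression trees that a
    # domain expert can read, while their numeric constants can still be tuned by learning.
    from dataclasses import dataclass
    from typing import List, Union

    @dataclass
    class Linear:                     # w . obs + b
        w: List[float]
        b: float

    @dataclass
    class IfThenElse:                 # if cond(obs) > 0 then then_branch else else_branch
        cond: Linear
        then_branch: "Program"
        else_branch: "Program"

    Program = Union[Linear, IfThenElse]

    def act(program: Program, obs: List[float]) -> float:
        # Interpret a program on an observation vector to produce a control action.
        if isinstance(program, Linear):
            return sum(wi * oi for wi, oi in zip(program.w, obs)) + program.b
        branch = program.then_branch if act(program.cond, obs) > 0 else program.else_branch
        return act(branch, obs)

    # Hypothetical controller over features [speed, curvature]: brake hard above 25 m/s,
    # otherwise steer proportionally to curvature.
    policy = IfThenElse(cond=Linear(w=[1.0, 0.0], b=-25.0),
                        then_branch=Linear(w=[0.0, 0.0], b=-1.0),
                        else_branch=Linear(w=[0.0, 0.8], b=0.0))
    print(act(policy, [30.0, 0.1]))   # speed > 25, so the policy brakes: -1.0

Unlike a neural policy, the learned artifact here is an object a user can inspect, verify, and edit directly.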


Selected Publications

Shah, Ameesh; Zhan, Eric; Sun, Jennifer J.; Verma, Abhinav; Yue, Yisong; Chaudhuri, Swarat

Learning Differentiable Programs with Admissible Neural Heuristics Inproceedings

In: Larochelle, Hugo; Ranzato, Marc'Aurelio; Hadsell, Raia; Balcan, Maria-Florina; Lin, Hsuan-Tien (Ed.): Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020.


Verma, Abhinav; Murali, Vijayaraghavan; Singh, Rishabh; Kohli, Pushmeet; Chaudhuri, Swarat

Programmatically Interpretable Reinforcement Learning Inproceedings

In: Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, pp. 5052–5061, 2018.



Data-Efficient Learning

Humans are frequently able to learn new skills from just a few examples. In contrast, modern learning algorithms can be tremendously data-hungry. We have been exploring ways to overcome this shortcoming of machine learning through a combination of symbolic and statistical techniques.

As a concrete example, some of our recent work uses program synthesis to automatically compose previously learned neural library modules. The composite models are then fine-tuned on new tasks, and this fine-tuning requires far fewer examples than learning from scratch. Our longer-term goals include scaling such compositional program synthesis to larger libraries and much larger modules (think GPT-3), and discovering libraries in an unsupervised manner.
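
A minimal sketch of the idea, with entirely hypothetical module names and pure-Python stand-ins for pre-trained neural modules: enumerate short, well-typed compositions of library modules, keep the one that best fits a handful of examples, and (in a real system) fine-tune it on the new task.

    from itertools import permutations

    # Library of previously learned modules: name -> (input type, output type, function).
    # The lambdas stand in for pre-trained neural modules.
    LIBRARY = {
        "detect_digit":    ("image",  "digit",  lambda x: x["digit"]),
        "digit_to_onehot": ("digit",  "vector", lambda d: [1.0 if i == d else 0.0 for i in range(10)]),
        "sum_vector":      ("vector", "scalar", lambda v: sum(v)),
    }

    def well_typed(pipeline, in_type, out_type):
        # A pipeline type-checks if adjacent modules agree on types end to end.
        t = in_type
        for name in pipeline:
            mi, mo, _ = LIBRARY[name]
            if mi != t:
                return False
            t = mo
        return t == out_type

    def run(pipeline, x):
        for name in pipeline:
            x = LIBRARY[name][2](x)
        return x

    def synthesize(examples, in_type, out_type, max_len=3):
        # Enumerate short, well-typed compositions; return the one with the lowest error
        # on the few-shot examples (the candidate a real system would then fine-tune).
        best, best_err = None, float("inf")
        for k in range(1, max_len + 1):
            for pipeline in permutations(LIBRARY, k):
                if well_typed(pipeline, in_type, out_type):
                    err = sum(abs(run(pipeline, x) - y) for x, y in examples)
                    if err < best_err:
                        best, best_err = pipeline, err
        return best

    few_shot = [({"digit": 3}, 1.0), ({"digit": 7}, 1.0)]
    print(synthesize(few_shot, "image", "scalar"))
    # -> ('detect_digit', 'digit_to_onehot', 'sum_vector')

Type-directed pruning is what keeps the search tractable: most candidate compositions are rejected before any training data is touched.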


Selected Publications

Valkov, Lazar; Chaudhari, Dipak; Srivastava, Akash; Sutton, Charles; Chaudhuri, Swarat

HOUDINI: Lifelong Learning as Program Synthesis Inproceedings

In: Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, 3-8 December 2018, Montréal, Canada, pp. 8701–8712, 2018.

Links | BibTeX


Bridging Abstraction, Learning, and Reasoning

One of our key goals is to find new ways of automating complex, procedural tasks through a combination of abstraction, automated reasoning, and machine learning. Writing code and proving theorems are two examples of such tasks. Classical methods for these problems rely on rule-based search; in contrast, recent learning-based approaches treat code and proofs as text to which models of natural language can be directly applied. Our work aims to offer the best of both worlds, for example, by exposing machine learning models to the formal semantics of proofs and code, and by guiding rule-based search with learned language models.
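
The sketch below (a generic best-first search, not a description of any specific system of ours) shows one way learned guidance and rule-based search fit together: the symbolic side supplies the legal rules and checks the goal, while a learned scoring model only decides which candidate to expand next. All names and signatures are illustrative.

    import heapq

    def guided_search(initial_state, applicable_rules, apply_rule, is_goal, score, budget=10_000):
        # Best-first search over proof states or partial programs. score(state, rule) is a
        # stand-in for a learned model (e.g., a language model over proofs or code) that
        # estimates how promising it was to apply `rule` to reach `state`.
        frontier = [(0.0, 0, initial_state, [])]        # (priority, tie-breaker, state, rule trace)
        tick = 0
        while frontier and budget > 0:
            _, _, state, trace = heapq.heappop(frontier)
            budget -= 1
            if is_goal(state):
                return trace                             # sequence of rules reaching the goal
            for rule in applicable_rules(state):         # only sound, rule-based successors
                tick += 1
                successor = apply_rule(state, rule)
                heapq.heappush(frontier, (-score(successor, rule), tick, successor, trace + [rule]))
        return None                                      # budget exhausted without a proof/program

Because successors come only from the symbolic rules, any result returned is correct by construction; the learned model can only change how quickly it is found.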

We are also interested in integrating such procedural reasoning with perception. Specifically, we are working to build agents that can abstract sensory inputs using learnable perception modules and then act on these abstract inputs using learning-enabled reasoning modules. Through collaboration with roboticists, we seek to deploy such agents on physical robots.
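
A deliberately simplified sketch of this architecture, with stand-in components and made-up feature names: a perception module (in practice a learned network) abstracts raw sensor readings into symbols, and a reasoning module acts on those symbols.

    def perceive(raw_obs):
        # Stand-in for a learnable perception module (e.g., a CNN over camera/depth input)
        # that abstracts raw sensory data into a small symbolic state.
        return {"obstacle_ahead": raw_obs["depth_ahead"] < 0.5,
                "at_goal": raw_obs["goal_distance"] < 0.1}

    def reason(symbols):
        # Stand-in for a learning-enabled reasoning module operating on the abstract state.
        if symbols["at_goal"]:
            return "stop"
        return "turn_left" if symbols["obstacle_ahead"] else "move_forward"

    def agent_step(raw_obs):
        return reason(perceive(raw_obs))

    print(agent_step({"depth_ahead": 0.3, "goal_distance": 2.0}))   # -> "turn_left"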


Selected Publications

Wen, Yeming; Mukherjee, Rohan; Chaudhari, Dipak; Jermaine, Chris

Neural Program Generation Modulo Static Analysis Inproceedings

In: Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 2021.


Murali, Vijayaraghavan; Qi, Letao; Chaudhuri, Swarat; Jermaine, Chris

Neural Sketch Learning for Conditional Program Generation Inproceedings

In: 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings, 2018.
