Automated Reasoning

Another area of interest is automated reasoning, i.e., algorithms for proving and disproving mathematical statements. In certain cases, solving reasoning tasks is an end in itself. In other cases, automated reasoning is a tool for establishing that a system is safe or inferring useful facts that can aid learning and program synthesis. In either case, the computational complexity of exploring spaces of proofs and counterexamples is a formidable barrier. As with program synthesis, we approach this challenge by complementing classical automated reasoning techniques with contemporary statistical learning methods.
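As a toy illustration of why this search is hard (this is a textbook exercise, not one of our published methods), the sketch below proves or disproves a propositional formula by exhaustive enumeration: the loop over all 2^n truth assignments is exactly the combinatorial barrier that learned guidance aims to tame. The formulas at the end are standard examples chosen for illustration.

```python
from itertools import product

def check_validity(formula, variables):
    """Brute-force theorem proving for propositional logic: a formula is
    valid iff it is true under every assignment. The loop visits all 2^n
    assignments, which is the combinatorial barrier described above."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if not formula(assignment):
            return False, assignment  # disproof: a concrete counterexample
    return True, None                 # proof by exhaustion

implies = lambda a, b: (not a) or b

# Peirce's law ((p -> q) -> p) -> p is valid ...
peirce = lambda v: implies(implies(implies(v["p"], v["q"]), v["p"]), v["p"])
# ... while p -> q alone is not, and the checker returns a counterexample.
weak = lambda v: implies(v["p"], v["q"])
```

Checking `weak` yields the falsifying assignment p = True, q = False; scaling this idea to richer logics is where statistical guidance over the proof search becomes essential.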

Program Synthesis

Program synthesis is the problem of automatically discovering programs from specifications of their intended functionality. Such specifications can take various forms, including noisy input-output examples, desired program trajectories, behavioral constraints written in first-order or temporal logic, and visual or textual descriptions of the program’s goals. Program synthesis cuts across most of our lab’s research. In some cases, we are motivated by the potential of program synthesis to make programming more productive. In other cases, our goal is to synthesize programs that model, and help us better understand, the real world.

In either case, one faces two basic issues. First, a program synthesizer must search a space of programs that grows combinatorially. Second, specifications are often ambiguous or incomplete, so not every program that meets a specification is interesting. Over the years, we have developed many approaches to these challenges. For example, some of our methods prune the search space of programs using automated deduction techniques. Others guide a discrete search over programs using learned statistical models. Yet others relax the space of programs into a continuous space that can be explored with scalable gradient-based optimization.
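A minimal sketch of one classical baseline — bottom-up enumerative synthesis from input-output examples with observational-equivalence pruning — is shown below. This is an illustration of the general technique, not one of our systems, and the tiny DSL (the input x, the constants 1 and 2, and the operators +, -, *) is invented for the example.

```python
import itertools

def synthesize(examples, max_rounds=3):
    """Bottom-up enumerative synthesis over a tiny arithmetic DSL.
    Candidate programs that behave identically on the examples are pruned,
    which tames (but does not eliminate) the combinatorial explosion."""
    inputs = [x for x, _ in examples]
    terminals = [("x", lambda x: x), ("1", lambda x: 1), ("2", lambda x: 2)]
    ops = [("+", lambda a, b: a + b),
           ("-", lambda a, b: a - b),
           ("*", lambda a, b: a * b)]
    pool, seen = [], set()

    def consider(expr, f):
        outs = tuple(f(x) for x in inputs)
        if outs in seen:              # observationally equivalent: prune
            return None
        seen.add(outs)
        pool.append((expr, f))
        return expr if all(f(x) == y for x, y in examples) else None

    for expr, f in terminals:
        if (found := consider(expr, f)):
            return found
    for _ in range(max_rounds):       # grow expressions, smallest first
        for (sa, fa), (sb, fb) in itertools.product(list(pool), repeat=2):
            for so, fo in ops:
                g = lambda x, fa=fa, fb=fb, fo=fo: fo(fa(x), fb(x))
                if (found := consider(f"({sa} {so} {sb})", g)):
                    return found
    return None
```

For instance, `synthesize([(1, 3), (2, 5)])` returns an expression equivalent to 2x + 1. Note that the examples underdetermine the target — any program agreeing on the two points is accepted — which is precisely the ambiguity problem described above.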


Selected Publications

Wen, Yeming; Mukherjee, Rohan; Chaudhari, Dipak; Jermaine, Chris

Neural Program Generation Modulo Static Analysis Inproceedings

In: Neural Information Processing Systems (NeurIPS), 2021.


Shah, Ameesh; Zhan, Eric; Sun, Jennifer J.; Verma, Abhinav; Yue, Yisong; Chaudhuri, Swarat

Learning Differentiable Programs with Admissible Neural Heuristics Inproceedings

In: Larochelle, Hugo; Ranzato, Marc'Aurelio; Hadsell, Raia; Balcan, Maria-Florina; Lin, Hsuan-Tien (Ed.): Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020.


Verma, Abhinav; Le, Hoang Minh; Yue, Yisong; Chaudhuri, Swarat

Imitation-Projected Programmatic Reinforcement Learning Inproceedings

In: Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, 8-14 December 2019, Vancouver, BC, Canada, pp. 15726–15737, 2019.


Murali, Vijayaraghavan; Qi, Letao; Chaudhuri, Swarat; Jermaine, Chris

Neural Sketch Learning for Conditional Program Generation Inproceedings

In: 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings, 2018.


Feser, John K.; Chaudhuri, Swarat; Dillig, Isil

Synthesizing data structure transformations from input-output examples Inproceedings

In: Proceedings of the 36th ACM SIGPLAN Conference on Programming Language Design and Implementation, Portland, OR, USA, June 15-17, 2015, pp. 229–239, 2015.


Probabilistic Programming

Another running theme is probabilistic programming, in which programs are used to represent complex, structured probability distributions. We are especially interested in using such programs to unite logical and probabilistic reasoning, to perform complex generative modeling, and to carry out causal inference and discovery. Our research studies a wide variety of technical problems in probabilistic programming, including the design of probabilistic programming languages; methods for inferring probabilities, independence relationships, and the effects of interventions and counterfactuals in probabilistic programs; and algorithms for learning the structure and parameters of probabilistic programs.
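To make the idea concrete, here is a minimal sketch (using the standard textbook rain/sprinkler example, not a model from our work): an ordinary program whose random choices make it denote a distribution, together with rejection sampling — one of the simplest inference methods — to estimate a posterior by conditioning the program on its outputs. All probabilities in the model are made up for illustration.

```python
import random

def model():
    """A probabilistic program: a generative process written as ordinary
    code. Running it once draws one sample from the distribution it
    denotes. The numeric probabilities are illustrative."""
    rain = random.random() < 0.2
    sprinkler = random.random() < (0.01 if rain else 0.4)
    wet = rain or sprinkler or random.random() < 0.05
    return {"rain": rain, "sprinkler": sprinkler, "wet": wet}

def infer(model, evidence, query, n=100_000):
    """Rejection sampling: estimate P(query | evidence) by running the
    program many times and keeping only runs consistent with the
    evidence."""
    kept = [s for s in (model() for _ in range(n))
            if all(s[k] == v for k, v in evidence.items())]
    return sum(s[query] for s in kept) / len(kept)

# Posterior P(rain | wet) via conditioning the program on its output:
# infer(model, {"wet": True}, "rain")  # approx. 0.37 under these numbers
```

Rejection sampling scales poorly as evidence becomes unlikely, which is one reason the design of inference algorithms is itself a central research problem in this area.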


Selected Publications

Anklesaria, Samuel; Smith, Calvin; Chaudhuri, Swarat

Deep Generative Logic Programming in Sherlog Journal Article


Neurosymbolic Programming

A key theme in our research is the use of neurosymbolic programs, i.e., models constructed through the composition of neural networks and traditional symbolic code. The neural modules in such a program facilitate efficient learning, while the symbolic components allow the program to use human domain knowledge and also be human-comprehensible. Our research studies a wide variety of challenges in neurosymbolic programming, including the design of language abstractions that allow neural and symbolic modules to interoperate smoothly, methods for analyzing the safety and performance of neurosymbolic programs, and algorithms for learning the structure and parameters of neurosymbolic programs from data.
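The sketch below is a deliberately tiny illustration of this idea, not one of our systems: a symbolic if-then-else skeleton whose branch condition contains a learnable parameter. Relaxing the crisp branch with a sigmoid gate makes the whole program differentiable, so the parameter can be fit from data; a numeric central-difference gradient stands in for backpropagation, and the threshold rule and data are invented for the example.

```python
import math

def neurosymbolic_program(x, theta):
    """A symbolic skeleton, 'if x > theta then 1.0 else 0.0', relaxed
    with a sigmoid gate so the program is differentiable in theta."""
    gate = 1.0 / (1.0 + math.exp(-(x - theta)))   # soft branch condition
    return gate * 1.0 + (1.0 - gate) * 0.0        # blend the two branches

def fit(data, theta=0.0, lr=0.1, epochs=300):
    """Fit the program's parameter to labeled data by gradient descent.
    A central-difference gradient stands in for backpropagation through
    the relaxed program."""
    loss = lambda t: sum((neurosymbolic_program(x, t) - y) ** 2
                         for x, y in data)
    eps = 1e-4
    for _ in range(epochs):
        grad = (loss(theta + eps) - loss(theta - eps)) / (2 * eps)
        theta -= lr * grad
    return theta

# Data generated by the crisp rule "x > 2.0"; fitting recovers a theta
# near 2, after which the learned program can be read off symbolically.
data = [(x / 10, float(x / 10 > 2.0)) for x in range(-50, 51)]
```

After training, the result is not an opaque network but a one-parameter symbolic rule a human can inspect — a small instance of the interpretability benefit described above.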


Selected Publications

Anderson, Greg; Verma, Abhinav; Dillig, Isil; Chaudhuri, Swarat

Neurosymbolic Reinforcement Learning with Formally Verified Exploration Inproceedings

In: Larochelle, Hugo; Ranzato, Marc'Aurelio; Hadsell, Raia; Balcan, Maria-Florina; Lin, Hsuan-Tien (Ed.): Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020.


Shah, Ameesh; Zhan, Eric; Sun, Jennifer J.; Verma, Abhinav; Yue, Yisong; Chaudhuri, Swarat

Learning Differentiable Programs with Admissible Neural Heuristics Inproceedings

In: Larochelle, Hugo; Ranzato, Marc'Aurelio; Hadsell, Raia; Balcan, Maria-Florina; Lin, Hsuan-Tien (Ed.): Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020.


Verma, Abhinav; Le, Hoang Minh; Yue, Yisong; Chaudhuri, Swarat

Imitation-Projected Programmatic Reinforcement Learning Inproceedings

In: Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, 8-14 December 2019, Vancouver, BC, Canada, pp. 15726–15737, 2019.


Cheng, Richard; Verma, Abhinav; Orosz, Gábor; Chaudhuri, Swarat; Yue, Yisong; Burdick, Joel

Control Regularization for Reduced Variance Reinforcement Learning Inproceedings

In: Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, pp. 1141–1150, 2019.


Ellis, Kevin; Chaudhuri, Swarat; Polozov, Oleksandr; Yue, Yisong

Neurosymbolic Programming Book

In: Foundations and Trends in Programming Languages, 2018.


Valkov, Lazar; Chaudhari, Dipak; Srivastava, Akash; Sutton, Charles; Chaudhuri, Swarat

HOUDINI: Lifelong Learning as Program Synthesis Inproceedings

In: Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, 3-8 December 2018, Montréal, Canada, pp. 8701–8712, 2018.
