Methods


Neurosymbolic Programming

A key theme in our research is the use of neurosymbolic programs, i.e., models constructed through the composition of neural networks and traditional symbolic code. The neural modules in such a program facilitate efficient learning, while the symbolic components allow the program to use human domain knowledge and also be human-comprehensible. Our research studies a wide variety of challenges in neurosymbolic programming, including the design of language abstractions that allow neural and symbolic modules to interoperate smoothly, methods for analyzing the safety and performance of neurosymbolic programs, and algorithms for learning the structure and parameters of neurosymbolic programs from data.
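
To make the idea concrete, here is a minimal, hypothetical sketch in Python (not code from our systems; all names and constants are invented): a stubbed "neural" perception module produces features, and a short symbolic rule, which encodes domain knowledge and is easy to read, turns them into a decision.

def neural_perception(image):
    # Stand-in for a trained neural network; in a real system this would
    # be learned from data. It estimates the distance (m) and relative
    # speed (m/s) of the nearest obstacle in the image.
    return {"distance": 4.0, "speed": 1.5}

def symbolic_policy(features):
    # Human-readable symbolic component: brake when the estimated
    # time-to-collision falls below a threshold of 3 seconds.
    ttc = features["distance"] / max(features["speed"], 1e-6)
    return "brake" if ttc < 3.0 else "cruise"

def neurosymbolic_program(image):
    # The neurosymbolic program is the composition of the two modules.
    return symbolic_policy(neural_perception(image))

print(neurosymbolic_program(image=None))  # -> "brake"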



Probabilistic Programming

Another running theme is probabilistic programming, in which programs are used to represent complex, structured probability distributions. We are especially interested in using such programs to unite logical and probabilistic reasoning, to perform complex generative modeling, and to support causal inference and discovery. Our research studies a wide variety of technical problems in probabilistic programming, including the design of probabilistic programming languages; the development of methods to infer probabilities, independence relationships, and the effects of interventions and counterfactuals in probabilistic programs; and the derivation of algorithms for learning the structure and parameters of probabilistic programs.
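
As a toy illustration (a deliberately simple model and a deliberately naive inference method, both invented for this example), the Python sketch below writes a generative model as an ordinary program and conditions on an observation by rejection sampling:

import random

def model():
    # A probabilistic program: running it forward samples from the
    # joint distribution it denotes.
    rain = random.random() < 0.2       # prior: P(rain) = 0.2
    sprinkler = random.random() < 0.5  # prior: P(sprinkler) = 0.5
    wet = rain or sprinkler            # deterministic consequence
    return rain, wet

def p_rain_given_wet(num_samples=100_000):
    # Inference by rejection sampling: keep only the runs consistent
    # with the observation (wet grass). Practical systems use far more
    # efficient inference algorithms.
    kept = rainy = 0
    for _ in range(num_samples):
        rain, wet = model()
        if wet:
            kept += 1
            rainy += rain
    return rainy / kept

print(p_rain_given_wet())  # approximately 0.33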



Program Synthesis

Program synthesis is the problem of automatically discovering programs from specifications of their intended functionality. Such specifications can take various forms, including noisy input-output examples, desired program trajectories, behavioral constraints written in first-order or temporal logic, and visual or textual descriptions of the program’s goals. Program synthesis cuts across most of our lab’s research. In some cases, we are motivated by the potential of program synthesis to make programming more productive. In other cases, our goal is to synthesize programs that model, and help us better understand, the real world.
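
The following toy Python sketch (its DSL, examples, and search strategy are all invented for illustration) shows the simplest form of the problem: enumerating a small space of expressions until one is consistent with a set of input-output examples.

OPS = {
    "x + c": lambda x, c: x + c,
    "x * c": lambda x, c: x * c,
    "c - x": lambda x, c: c - x,
}

def synthesize(examples, max_const=10):
    # Enumerative synthesis: try every operator template and constant
    # until one agrees with all of the given examples.
    for template, op in OPS.items():
        for c in range(-max_const, max_const + 1):
            if all(op(x, c) == y for x, y in examples):
                return template.replace("c", str(c))
    return None  # no program in the DSL fits the specification

print(synthesize([(1, 8), (2, 9), (10, 17)]))  # -> "x + 7"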



Automated Reasoning

Another area of interest is automated reasoning, i.e., algorithms for proving and disproving mathematical statements. In certain cases, solving reasoning tasks is an end in itself. In other cases, automated reasoning is a tool for establishing that a system is safe or inferring useful facts that can aid learning and program synthesis. In either case, the computational complexity of exploring spaces of proofs and counterexamples is a formidable barrier. As with program synthesis, we approach this challenge by complementing classical automated reasoning techniques with contemporary statistical learning methods.
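
As a toy example of the proofs-and-counterexamples view (pure brute force, written for this page rather than drawn from our papers), the sketch below decides the validity of a propositional formula: a falsifying assignment is a counterexample, and exhausting all assignments constitutes a proof.

from itertools import product

def imp(a, b):
    # Material implication: a -> b.
    return (not a) or b

def is_valid(formula, variables):
    # Brute-force search over all truth assignments. Real provers
    # replace this loop with cleverer, and increasingly learned, search.
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if not formula(env):
            return False, env  # counterexample found
    return True, None          # no counterexample: the formula is valid

# Peirce's law: ((p -> q) -> p) -> p.
peirce = lambda e: imp(imp(imp(e["p"], e["q"]), e["p"]), e["p"])
print(is_valid(peirce, ["p", "q"]))  # -> (True, None)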


Research Goals


Bridging Abstraction, Learning, and Reasoning

One of our key goals is to find new ways of automating complex, procedural tasks through a combination of abstraction, automated reasoning, and machine learning. Writing code and proving theorems are two examples of such tasks. Classical methods for these problems are based on rule-based search. In contrast, recent learning-based approaches treat code and proofs as text to which models of natural language can be directly applied. Our work aims to offer the best of the two worlds, for example, by exposing machine learning models to the formal semantics of proofs and code, and by guiding rule-based searches using learned language models.
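
Here is a schematic Python sketch of the second idea (all components are hypothetical stand-ins): a classical rule-based search whose expansion order is decided by a learned scoring model rather than a hand-written heuristic.

import heapq

def learned_score(state):
    # Stand-in for a trained model (e.g., a language model over proof
    # states) estimating how promising a state is; lower is better.
    return abs(state - 42)

def rules(state):
    # Symbolic successor function: the rules of the search space.
    return [state + 1, state * 2]

def guided_search(start, is_goal, budget=10_000):
    # Best-first search: the learned score orders the frontier, while
    # the symbolic rules guarantee only legal moves are explored.
    frontier = [(learned_score(start), start)]
    seen = {start}
    while frontier and budget > 0:
        _, state = heapq.heappop(frontier)
        if is_goal(state):
            return state
        for nxt in rules(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (learned_score(nxt), nxt))
        budget -= 1
    return None

print(guided_search(1, lambda s: s == 42))  # -> 42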



Data-Efficient Learning

Humans are frequently able to learn new skills from just a few examples. In contrast, modern learning algorithms can be tremendously data-hungry. We have been exploring ways to overcome this shortcoming of machine learning through a combination of symbolic and statistical techniques.



Interpretable Systems

To trust a system, you need to understand it. However, in learning-enabled systems, interpretability is often at odds with learning performance. For example, deep neural networks can learn efficiently but are opaque black boxes. On the other hand, linear models or shallow decision trees are more interpretable but do not perform well on complex tasks. Our lab has introduced programmatic interpretability, a new way around this conflict.
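
The sketch below illustrates the flavor of a programmatic policy (the task and the constants are invented; in practice, the program structure and thresholds would be found by program synthesis): the learned artifact is a short program a human can read, audit, and edit.

def programmatic_policy(angle, angular_velocity):
    # A pole-balancing policy as a one-line rule: push in the direction
    # the pole is falling, anticipating its motion.
    if angle + 0.5 * angular_velocity > 0.0:
        return "push_right"
    return "push_left"

print(programmatic_policy(angle=0.05, angular_velocity=-0.2))  # push_left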



Safe Systems

Software failures can have deadly consequences, but building software that is reliable and also has high performance remains an unsolved problem. This is especially so for learning-enabled AI systems. State-of-the-art machine learning models commonly behave incorrectly on unexpected or adversarial inputs. Also, AI systems deployed in the real world can often violate norms of privacy and fairness that we expect human decision-makers to follow. At the same time, it is usually impossible to reason about the correctness of machine learning models using traditional software debugging and development techniques. Overcoming these challenges through a synthesis of ideas from formal methods, probabilistic reasoning, and machine learning is a central objective of our research.


Applications


Software Engineering

Many aspects of everyday software engineering are repetitive; today, developers commonly perform these tasks with guidance from other developers on forums like Stack Overflow. Program synthesis systems, such as the ones developed in our work, can potentially automate away many of these repetitive tasks. By doing so, they can allow expert software engineers to focus on the more creative aspects of their work and enable novice programmers to accomplish far more complex tasks than they can today.



Scientific Discovery

The acceleration of scientific discovery is among the most exciting promises of modern AI. However, algorithms that discover new scientific hypotheses and guide experiments must not only perform well but also produce interpretable outputs. This makes many of our methods a natural fit for this space. In particular, with collaborators in behavioral neuroscience, we have recently been applying our methods to the analysis of animal behavior. Specifically, we have used neurosymbolic program synthesis to discover interpretable classifiers and clusters for behaviors, and to model how different human experts diverge when annotating behaviors.



Robotics and Cyber-Physical Systems

When designing policies or controllers for autonomous embodied systems, factors such as safety and data efficiency become paramount. For both low-level control and high-level planning problems, the standard practice has been to leverage symbolic domain knowledge (e.g., the governing equations of motion for the system, or an automaton representation of the high-level states) to design structured models that have certifiable guarantees, good generalization, or both. An emerging research direction is to automatically learn or discover the structure of the symbolic knowledge, which can be viewed as an instance of neurosymbolic programming.
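
As a schematic sketch of such a structured controller (every name here is an invented placeholder), the Python below pairs a symbolic automaton over high-level modes with stubbed learned low-level controllers; the automaton skeleton is the part one can reason about formally, and discovering it automatically is the neurosymbolic learning problem.

LEARNED_CONTROLLERS = {
    # Stand-ins for trained low-level policies, one per mode.
    "approach": lambda obs: 0.8,
    "grasp": lambda obs: 0.1,
}

TRANSITIONS = {
    # (mode, event) -> next mode: the symbolic, analyzable skeleton.
    ("approach", "near_object"): "grasp",
    ("grasp", "object_held"): "done",
}

def step(mode, obs, event=None):
    # Advance the automaton on the event, then let the current mode's
    # learned controller choose the low-level action.
    mode = TRANSITIONS.get((mode, event), mode)
    action = LEARNED_CONTROLLERS.get(mode, lambda obs: 0.0)(obs)
    return mode, action

mode, action = step("approach", obs=None, event="near_object")
print(mode, action)  # -> grasp 0.1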
