One of our key goals is to find new ways of automating complex, procedural tasks through a combination of abstraction, automated reasoning, and machine learning. Writing code and proving theorems are two examples of such tasks. Classical methods for these problems rely on rule-based search. In contrast, recent learning-based approaches treat code and proofs as text to which models of natural language can be directly applied. Our work aims to offer the best of both worlds, for example, by exposing machine learning models to the formal semantics of proofs and code, and by guiding rule-based searches using learned language models.
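The idea of guiding a rule-based search with a learned model can be sketched in a few lines. The code below is a toy illustration, not any specific system described above: the "rules" are two arithmetic moves over integers, and `learned_score` is a hand-written stand-in for what would, in practice, be a trained model ranking candidate states.

```python
import heapq

def learned_score(state):
    # Stand-in for a learned model: lower scores are explored first.
    # Here the heuristic is just distance of the running total from the goal.
    return abs(state["total"] - state["goal"])

def guided_search(start, goal, moves, max_expansions=1000):
    """Best-first search over a toy state space, ordered by a learned score.

    The rules (`moves`) define which successor states are legal;
    the model (`learned_score`) decides which to explore first.
    """
    init = {"total": start, "goal": goal, "path": []}
    frontier = [(learned_score(init), 0, init)]
    tick = 1  # tiebreaker so heapq never compares dicts
    seen = set()
    while frontier and max_expansions > 0:
        _, _, state = heapq.heappop(frontier)
        if state["total"] == goal:
            return state["path"]
        if state["total"] in seen:
            continue
        seen.add(state["total"])
        max_expansions -= 1
        for name, fn in moves:
            nxt = {"total": fn(state["total"]), "goal": goal,
                   "path": state["path"] + [name]}
            heapq.heappush(frontier, (learned_score(nxt), tick, nxt))
            tick += 1
    return None  # budget exhausted without reaching the goal

# The "symbolic" part: legal moves in this toy domain.
MOVES = [("+3", lambda t: t + 3), ("*2", lambda t: t * 2)]
```

Replacing `learned_score` with a neural scorer while keeping the search loop unchanged is exactly the kind of interface that lets learning and rule-based reasoning cooperate.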
We are also interested in integrating such procedural reasoning with perception. Specifically, we are working to build agents that abstract raw sensory inputs into compact representations using learnable perception modules, and then act on those representations using learning-enabled reasoning modules. Through collaboration with roboticists, we seek to deploy such agents on physical robots.
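The perception-then-reasoning architecture can be sketched as a two-stage pipeline. Everything below is a hypothetical illustration of the interface, not a description of any deployed system: `perceive` stands in for a learnable perception module (here a hand-written threshold on a distance reading), and the rule table stands in for a reasoning module.

```python
def perceive(sensor_reading):
    """Abstract a raw distance reading (meters) into a symbolic state.

    In a real agent this would be a learned module (e.g. a neural
    network over camera or lidar input); thresholds are a stand-in.
    """
    if sensor_reading < 0.5:
        return "obstacle_near"
    if sensor_reading < 2.0:
        return "obstacle_ahead"
    return "clear"

# Reasoning module: maps abstract states to actions. A learning-enabled
# version could be trained, verified, or synthesized over the same interface.
RULES = {
    "obstacle_near": "stop",
    "obstacle_ahead": "slow_down",
    "clear": "go",
}

def act(raw_readings):
    """Full pipeline: raw sensor input -> abstract state -> action."""
    return [RULES[perceive(r)] for r in raw_readings]
```

The key design point is the narrow symbolic interface between the two stages: either module can be swapped or retrained independently, and the reasoning side can be inspected or verified without touching the perception side.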