
Interpretable Systems

To trust a system, you need to understand it. In learning-enabled systems, however, interpretability is often at odds with learning performance. Deep neural networks, for example, learn complex tasks effectively but are opaque black boxes; linear models and shallow decision trees are easier to interpret but perform poorly on complex tasks.

Our lab has introduced programmatic interpretability, an approach that sidesteps this conflict by learning models represented as programs in neurosymbolic domain-specific languages [ICML 2018; NeurIPS 2020]. These languages are designed to be interpretable by specific groups of users while remaining more expressive than traditional “shallow” models. Related goals include synthesizing programmatic explanations of local decisions made by more complex models, inferring human-comprehensible properties of models through program analysis, and systematically exploring the tradeoff between interpretability and model performance.
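
To make the idea concrete, the following Python sketch shows what a programmatic model might look like in a toy domain-specific language. The DSL combinators (`If`, `Affine`, `Const`) and the example program are illustrative assumptions, not the languages or learning algorithms from the cited papers; in practice, the program structure and parameters would be found by program synthesis and parameter tuning rather than written by hand.

```python
# Toy illustration of a programmatic model: the "model" is a short program in a
# small DSL, so a person can read its decision logic directly.
# The DSL below is hypothetical and far simpler than a real neurosymbolic language.

from dataclasses import dataclass
from typing import List


@dataclass
class Affine:
    """Linear scoring atom: w . x + b, with directly readable coefficients."""
    weights: List[float]
    bias: float

    def __call__(self, x: List[float]) -> float:
        return sum(w * xi for w, xi in zip(self.weights, x)) + self.bias


@dataclass
class Const:
    """Constant atom: always returns the same value."""
    value: float

    def __call__(self, x: List[float]) -> float:
        return self.value


@dataclass
class If:
    """Branching combinator: evaluate `then` when cond(x) > 0, else `other`."""
    cond: Affine
    then: object
    other: object

    def __call__(self, x: List[float]) -> float:
        return self.then(x) if self.cond(x) > 0 else self.other(x)


# A "learned" model written out as a program. Every branch condition and
# coefficient is visible, so the model can be read and audited like code.
model = If(
    cond=Affine(weights=[1.0, -0.5], bias=-0.2),  # is feature 0 large relative to feature 1?
    then=Affine(weights=[0.8, 0.1], bias=0.0),    # score used on that branch
    other=Const(0.0),                             # default score otherwise
)

if __name__ == "__main__":
    print(model([1.0, 0.4]))  # condition holds -> 0.84
    print(model([0.1, 0.9]))  # condition fails -> 0.0
```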

