Efficient Bayesian inverse reinforcement learning via conditional kernel density estimation

Abstract

Inverse reinforcement learning (IRL) methods attempt to recover the reward function of an agent by observing its behavior. Given the large amount of uncertainty in the underlying reward function, it is often useful to model this function probabilistically rather than estimating a single reward function. However, existing Bayesian approaches to IRL use a $Q$-value function to approximate the likelihood, leading to a computationally intractable and inflexible framework. Here, we introduce kernel density Bayesian IRL (KD-BIRL), a method that uses kernel density estimation to approximate the likelihood. This enables efficient posterior inference over the reward function given a sequence of agent observations. Using both linear and nonlinear reward functions in a Gridworld environment, we demonstrate that the KD-BIRL posterior centers around the true reward function and that our method is more efficient than existing approaches.
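To illustrate the general idea of likelihood approximation via conditional kernel density estimation, below is a minimal sketch, not the paper's implementation. It assumes a training set of state-action pairs generated under known reward parameters, isotropic Gaussian kernels, a flat prior, and a simple grid of candidate rewards; all function names, bandwidths, and dimensions are illustrative.

```python
import numpy as np

def gaussian_kernel(sq_dist, bandwidth):
    """Isotropic Gaussian kernel evaluated at a squared distance."""
    return np.exp(-0.5 * sq_dist / bandwidth**2)

def conditional_kde_likelihood(obs_sa, train_sa, train_rewards, query_reward,
                               h_sa=0.5, h_r=0.5):
    """Approximate p(state-action pair | reward) as a conditional KDE:
    the ratio of a joint KDE over (state-action, reward) to a marginal KDE over reward."""
    w_r = gaussian_kernel(np.sum((train_rewards - query_reward) ** 2, axis=1), h_r)
    w_sa = gaussian_kernel(np.sum((train_sa - obs_sa) ** 2, axis=1), h_sa)
    return np.sum(w_sa * w_r) / (np.sum(w_r) + 1e-12)

def log_posterior(demo_sa, train_sa, train_rewards, candidate_reward):
    """Unnormalized log posterior: flat prior plus per-observation log likelihoods."""
    return sum(
        np.log(conditional_kde_likelihood(sa, train_sa, train_rewards,
                                          candidate_reward) + 1e-12)
        for sa in demo_sa
    )

# Toy usage with 2-D state-action features and 2-D reward parameters (synthetic data).
rng = np.random.default_rng(0)
train_sa = rng.normal(size=(200, 2))        # pairs generated under known rewards
train_rewards = rng.normal(size=(200, 2))   # reward parameters that produced them
demo_sa = rng.normal(size=(10, 2))          # observed expert demonstrations

candidates = rng.normal(size=(50, 2))       # candidate reward parameters
scores = [log_posterior(demo_sa, train_sa, train_rewards, c) for c in candidates]
map_estimate = candidates[int(np.argmax(scores))]  # crude MAP over the candidate grid
```

In a full treatment, the candidate grid would typically be replaced by an MCMC or variational sampler over reward parameters, but the KDE-based likelihood evaluation stays the same.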

Publication
Symposium on Advances in Approximate Bayesian Inference