Published on May 20, 2024
UoE RL Reading Group | 10 December 2021
Speaker: Mohamad H. Danesh (NUS)
Title: Re-understanding finite-state representations of recurrent policy networks
Authors: Mohamad H. Danesh, Anurag Koul, Alan Fern, Saeed Khorram
In: International Conference on Machine Learning, pp. 2388-2397. PMLR, 2021
Abstract: We introduce an approach for understanding control policies represented as recurrent neural networks. Recent work has approached this problem by transforming such recurrent policy networks into finite-state machines (FSM) and then analyzing the equivalent minimized FSM. While this led to interesting insights, the minimization process can obscure a deeper understanding of a machine's operation by merging states that are semantically distinct. To address this issue, we introduce an analysis approach that starts with an unminimized FSM and applies more-interpretable reductions that preserve the key decision points of the policy. We also contribute an attention tool to attain a deeper understanding of the role of observations in the decisions. Our case studies on 7 Atari games and 3 control benchmarks demonstrate that the approach can reveal insights that have not been previously noticed.
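To make the idea of a finite-state view of a recurrent policy concrete, here is a minimal illustrative sketch, not the authors' actual pipeline: it discretizes an RNN's hidden state (by sign, a simplifying assumption; the underlying work uses learned quantization) and records the resulting state-transition table. All names and the toy policy are hypothetical.

```python
# Illustrative sketch (NOT the paper's exact method): viewing a recurrent
# policy as a finite-state machine by discretizing its hidden state.
# Assumption: we binarize each hidden unit by sign; the actual approach
# uses learned quantization of hidden states and observations.
import numpy as np

rng = np.random.default_rng(0)

# Toy recurrent policy: h' = tanh(Wh @ h + Wx @ x), action = argmax(Wa @ h')
H, X, A = 4, 2, 2                      # hidden size, obs size, action count
Wh = rng.normal(size=(H, H)) * 0.5
Wx = rng.normal(size=(H, X))
Wa = rng.normal(size=(A, H))

def step(h, x):
    h_new = np.tanh(Wh @ h + Wx @ x)
    action = int(np.argmax(Wa @ h_new))
    return h_new, action

def quantize(h):
    # Map the continuous hidden state to a discrete "memory state".
    return tuple((h > 0).astype(int))

# Roll the policy out on random binary observations and record transitions:
# (discrete_state, discrete_obs) -> (next_discrete_state, action)
transitions = {}
h = np.zeros(H)
for _ in range(500):
    x = rng.choice([0.0, 1.0], size=X)
    s = quantize(h)
    h, a = step(h, x)
    transitions[(s, tuple(x))] = (quantize(h), a)

states = {s for (s, _) in transitions} | {s for (s, _) in transitions.values()}
print(f"extracted FSM: {len(states)} states, {len(transitions)} transitions")
```

The extracted machine is typically far smaller than the continuous hidden space; the talk's point is that *how* one then reduces this machine matters, since aggressive minimization can merge semantically distinct states.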
Link: https://arxiv.org/abs/2006.03745
Bio: Mohamad is a visiting researcher at the Adaptive Computing Laboratory (AdaComp) at the National University of Singapore (NUS), working with Panpan Cai and Prof. David Hsu. At AdaComp, he works on integrating planning and learning for decision-making under uncertainty. Before that, he obtained a master's degree from Oregon State University, advised by Prof. Alan Fern; his master's thesis focused on explainable and robust RL agents. In this talk, he will present their work on understanding policies represented by RNNs.