Mapping Instructions and Visual Observations to Actions with Reinforcement Learning

Dipendra Misra, John Langford, Yoav Artzi


Abstract
We propose to directly map raw visual observations and text input to actions for instruction execution. While existing approaches assume access to structured environment representations or use a pipeline of separately trained models, we learn a single model to jointly reason about linguistic and visual input. We use reinforcement learning in a contextual bandit setting to train a neural network agent. To guide the agent’s exploration, we use reward shaping with different forms of supervision. Our approach does not require intermediate representations, planning procedures, or training different models. We evaluate in a simulated environment, and show significant improvements over supervised learning and common reinforcement learning variants.
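
The abstract describes training a neural agent with policy-gradient reinforcement learning in a contextual bandit setting, where exploration is guided by reward shaping. The following is a minimal sketch of one such update, not the authors' implementation: it assumes pre-extracted visual and instruction feature vectors, and the layer sizes, action-space size, and shaped_reward function are placeholders.

    # Minimal sketch (not the authors' code) of a contextual-bandit
    # policy-gradient update with a shaped reward.
    import torch
    import torch.nn as nn

    class InstructionPolicy(nn.Module):
        """Maps a visual feature vector and an instruction vector to action logits."""
        def __init__(self, img_dim=512, text_dim=128, hidden=256, num_actions=16):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(img_dim + text_dim, hidden),
                nn.ReLU(),
                nn.Linear(hidden, num_actions),
            )

        def forward(self, img_feat, text_feat):
            return self.net(torch.cat([img_feat, text_feat], dim=-1))

    def shaped_reward(action):
        # Placeholder: a real implementation would combine the task reward
        # with a shaping term (e.g. a distance-to-goal potential).
        return torch.rand(())

    policy = InstructionPolicy()
    optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

    # One contextual-bandit update: observe a context, sample a single action,
    # receive an immediate (shaped) reward, and take a policy-gradient step.
    img_feat = torch.randn(1, 512)   # stand-in for CNN features of the observation
    text_feat = torch.randn(1, 128)  # stand-in for an RNN encoding of the instruction

    logits = policy(img_feat, text_feat)
    dist = torch.distributions.Categorical(logits=logits)
    action = dist.sample()
    reward = shaped_reward(action)

    loss = -(reward * dist.log_prob(action)).mean()  # REINFORCE-style objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

In the contextual bandit setting assumed here, each update uses a single sampled action and its immediate reward rather than a full trajectory return.
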
Anthology ID:
D17-1106
Volume:
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing
Month:
September
Year:
2017
Address:
Copenhagen, Denmark
Editors:
Martha Palmer, Rebecca Hwa, Sebastian Riedel
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
1004–1015
URL:
https://aclanthology.org/D17-1106
DOI:
10.18653/v1/D17-1106
Cite (ACL):
Dipendra Misra, John Langford, and Yoav Artzi. 2017. Mapping Instructions and Visual Observations to Actions with Reinforcement Learning. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1004–1015, Copenhagen, Denmark. Association for Computational Linguistics.
Cite (Informal):
Mapping Instructions and Visual Observations to Actions with Reinforcement Learning (Misra et al., EMNLP 2017)
PDF:
https://aclanthology.org/D17-1106.pdf
Attachment:
D17-1106.Attachment.pdf
Code:
clic-lab/blocks