Subramanian Ramamoorthy


2023

Interactive Acquisition of Fine-grained Visual Concepts by Exploiting Semantics of Generic Characterizations in Discourse
Jonghyuk Park | Alex Lascarides | Subramanian Ramamoorthy
Proceedings of the 15th International Conference on Computational Semantics

Interactive Task Learning (ITL) concerns learning about unforeseen domain concepts via natural interactions with human users. The learner faces a number of significant constraints: learning should be online, incremental and few-shot, as it is expected to perform tangible belief updates right after novel words denoting unforeseen concepts are introduced. In this work, we explore a challenging symbol grounding task—discriminating among object classes that look very similar—within the constraints imposed by ITL. We demonstrate empirically that more data-efficient grounding results from exploiting the truth-conditions of the teacher’s generic statements (e.g., “Xs have attribute Z.”) and their implicatures in context (e.g., as an answer to “How are Xs and Ys different?”, one infers that Ys lack attribute Z).

2017

Grounding Symbols in Multi-Modal Instructions
Yordan Hristov | Svetlin Penkov | Alex Lascarides | Subramanian Ramamoorthy
Proceedings of the First Workshop on Language Grounding for Robotics

As robots begin to cohabit with humans in semi-structured environments, the need arises to understand instructions involving rich variability—for instance, learning to ground symbols in the physical world. Realistically, this task must cope with small datasets consisting of a particular user’s contextual assignment of meaning to terms. We present a method for processing a raw stream of cross-modal input—i.e., linguistic instructions, visual perception of a scene and a concurrent trace of 3D eye tracking fixations—to produce a segmentation of objects with a corresponding association to high-level concepts. To test our framework we present experiments in a table-top object manipulation scenario. Our results show that our model learns the user’s notion of colour and shape from a small number of physical demonstrations, generalising to identify physical referents for novel combinations of those words.

2014

A Generative Model for User Simulation in a Spatial Navigation Domain
Aciel Eshky | Ben Allison | Subramanian Ramamoorthy | Mark Steedman
Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics