Tangible Objects for the Acquisition of Multimodal Interaction Patterns

Ronnie Taib, Natalie Ruiz


Abstract
Multimodal user interfaces offer more intuitive interaction for end users, but usually only through predefined input schemes. This paper describes a user experiment on multimodal interaction pattern identification, using head gesture and speech inputs for a 3D graph manipulation task. We show that a direct mapping between head gestures and the 3D object predominates; however, even for such a simple task, inputs vary greatly between users and do not exhibit any clustering pattern. Moreover, despite the high expressiveness of linguistic modalities, speech commands tend to draw on a limited vocabulary: we observed a common set of verb and adverb compounds in a majority of users. We conclude by recommending that multimodal user interfaces be individually customisable or adaptive to users' interaction preferences.
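For illustration only, the following minimal Python sketch (not the authors' system; all names, angles, and the command set are assumptions) shows the two interaction styles the abstract describes: a direct one-to-one mapping from head rotation to the 3D object's rotation, and a small closed vocabulary of verb + adverb speech compounds.

    # Hypothetical sketch of direct head-gesture mapping plus a limited
    # verb + adverb speech vocabulary. Not the paper's implementation.
    from dataclasses import dataclass

    @dataclass
    class Pose:
        yaw: float    # degrees, left/right head turn
        pitch: float  # degrees, up/down head nod
        roll: float   # degrees, head tilt

    def map_head_to_object(head: Pose, gain: float = 1.0) -> Pose:
        """Direct mapping: the object mirrors the head gesture, optionally scaled."""
        return Pose(head.yaw * gain, head.pitch * gain, head.roll * gain)

    # Illustrative closed vocabulary of verb + adverb compounds, of the kind
    # the study found shared across a majority of users.
    COMMANDS = {
        ("rotate", "left"):  Pose(-15.0, 0.0, 0.0),
        ("rotate", "right"): Pose(+15.0, 0.0, 0.0),
        ("tilt",   "up"):    Pose(0.0, +15.0, 0.0),
        ("tilt",   "down"):  Pose(0.0, -15.0, 0.0),
    }

    def apply_speech(obj: Pose, verb: str, adverb: str) -> Pose:
        """Apply a recognised speech command as a relative rotation."""
        delta = COMMANDS.get((verb, adverb))
        if delta is None:
            return obj  # out-of-vocabulary utterance: ignore
        return Pose(obj.yaw + delta.yaw, obj.pitch + delta.pitch, obj.roll + delta.roll)

    if __name__ == "__main__":
        obj = map_head_to_object(Pose(yaw=10.0, pitch=-5.0, roll=0.0))
        obj = apply_speech(obj, "rotate", "left")
        print(obj)  # Pose(yaw=-5.0, pitch=-5.0, roll=0.0)

In a customisable or adaptive interface of the kind the paper recommends, the command table and mapping gain would be per-user preferences rather than fixed constants.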
Anthology ID:
L06-1136
Volume:
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)
Month:
May
Year:
2006
Address:
Genoa, Italy
Editors:
Nicoletta Calzolari, Khalid Choukri, Aldo Gangemi, Bente Maegaard, Joseph Mariani, Jan Odijk, Daniel Tapias
Venue:
LREC
Publisher:
European Language Resources Association (ELRA)
URL:
http://www.lrec-conf.org/proceedings/lrec2006/pdf/244_pdf.pdf
Cite (ACL):
Ronnie Taib and Natalie Ruiz. 2006. Tangible Objects for the Acquisition of Multimodal Interaction Patterns. In Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06), Genoa, Italy. European Language Resources Association (ELRA).
Cite (Informal):
Tangible Objects for the Acquisition of Multimodal Interaction Patterns (Taib & Ruiz, LREC 2006)
PDF:
http://www.lrec-conf.org/proceedings/lrec2006/pdf/244_pdf.pdf