CHI Workshop on Designing Speech and Language Interactions 2014
Speech and natural language remain our most natural forms of interaction; yet the HCI community has been very timid about focusing its attention on designing and developing spoken language interaction techniques. While significant effort is being spent on, and progress made in, speech recognition, synthesis, and natural language processing, there is now sufficient evidence that many real-life applications using speech technologies do not require 100% accuracy to be useful. This is particularly true if such systems are designed with complementary modalities that better support their users or enhance the systems' usability. Many recent commercial applications, especially in the mobile space, are already tapping into the increased interest in and need for natural user interfaces by enabling speech interaction in their products.
This multidisciplinary, one-day workshop will bring together interaction designers, usability researchers, and general HCI practitioners to analyze the opportunities and directions to take in designing more natural interactions based on spoken language, and to look at how we can leverage recent advances in speech processing to gain widespread acceptance of speech and natural language interaction. Our goal is to create, through an interdisciplinary dialogue, momentum for increased research and collaboration in:
- Formally framing the challenges to the widespread adoption of speech and natural language interaction,
- Taking concrete steps toward developing a framework of user-centric design guidelines for speech- and language-based interactive systems, grounded in good usability practices, and
- Establishing directions to take and identifying further research opportunities in designing more natural interactions that make use of speech and natural language.
Topics
- Human factors and usability issues of imperfect speech- and language-based systems
- Meaningful evaluations of speech-based systems such as speech summarization, machine translation, synthetic speech, etc.
- Designing natural language-based mobile interfaces, such as embodied conversational agents or applications for facilitating access to large multimedia repositories (e.g. meetings, video archives)
- Improved accessibility through speech and language processing
- Multimodal interfaces that combine speech with other input modalities for increased usability and robustness
- Speech applications that go beyond lexical recognition in novel ways (e.g. signal analysis for health diagnostics, learning analytics)
- Speech as an interface tool for building usable applications for illiterate or semi-literate populations
- Pervasive, augmented reality, or mixed-reality immersive systems enhanced with audio interactions
Location
The DSLI-14 workshop will be held in conjunction with CHI 2014 in Toronto, on 26 April 2014.
Dates
- Submission of position papers: 17 January 2014
- Notification of acceptance: 10 February 2014
- Workshop: 26 April 2014
Submission
Position papers should be no more than 4 pages long, in the ACM SIGCHI Archival format, and include a brief statement from the author(s) justifying their interest in the workshop's topic. Summaries of research already presented are welcome if they contribute to the multidisciplinary goals of the workshop (e.g. speech processing research in clear need of HCI expertise). Submissions will be reviewed according to:
- Fit with the workshop topic
- Potential to contribute to the workshop goals
- A demonstrated track record of research in the workshop area (HCI or speech processing, with an interest in both areas)
Please submit workshop papers to: dsli2014-submissions@cs.toronto.edu. For all other enquiries about the workshop, please contact us at: dsli2014@cs.toronto.edu.
Program committee
- Cosmin Munteanu, National Research Council Canada and University of Toronto
- Matthew Aylett, CereProc Inc.
- Stephen Brewster, University of Glasgow
- Nicolas d'Alessandro, University of Mons
- Matt Jones, Swansea University
- Sharon Oviatt, Incaa Designs
- Gerald Penn, University of Toronto
- Steve Whittaker, University of California at Santa Cruz