Chia-Ying Lee

Also published as: Chia-ying Lee


2015

Unsupervised Lexicon Discovery from Acoustic Input
Chia-ying Lee | Timothy J. O’Donnell | James Glass
Transactions of the Association for Computational Linguistics, Volume 3

We present a model of unsupervised phonological lexicon discovery—the problem of simultaneously learning phoneme-like and word-like units from acoustic input. Our model builds on earlier models of unsupervised phone-like unit discovery from acoustic data (Lee and Glass, 2012), and unsupervised symbolic lexicon discovery using the Adaptor Grammar framework (Johnson et al., 2006), integrating these earlier approaches using a probabilistic model of phonological variation. We show that the model is competitive with state-of-the-art spoken term discovery systems, and present analyses exploring the model’s behavior and the kinds of linguistic structures it learns.

2013

Joint Learning of Phonetic Units and Word Pronunciations for ASR
Chia-ying Lee | Yu Zhang | James Glass
Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing

2012

A Nonparametric Bayesian Approach to Acoustic Model Discovery
Chia-ying Lee | James Glass
Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Applications of GPC Rules and Character Structures in Games for Learning Chinese Characters
Wei-Jie Huang | Chia-Ru Chou | Yu-Lin Tzeng | Chia-Ying Lee | Chao-Lin Liu
Proceedings of the ACL 2012 System Demonstrations

2010

Hemispheric Processing of Chinese Polysemy in Disyllabic Verb/Noun Compounds: An Event-Related Potential Study
Chih-ying Huang | Chia-ying Lee
Proceedings of the NAACL HLT 2010 First Workshop on Computational Neurolinguistics

Visually and Phonologically Similar Characters in Incorrect Simplified Chinese Words
Chao-Lin Liu | Min-Hua Lai | Yi-Hsuan Chuang | Chia-Ying Lee
Coling 2010: Posters

Collecting Voices from the Cloud
Ian McGraw | Chia-ying Lee | Lee Hetherington | Stephanie Seneff | Jim Glass
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

The collection and transcription of speech data is typically an expensive and time-consuming task. Voice over IP and cloud computing are poised to greatly reduce this impediment to research on spoken language interfaces in many domains. This paper documents our efforts to deploy speech-enabled web interfaces to large audiences over the Internet via Amazon Mechanical Turk, an online marketplace for work. Using the open source WAMI Toolkit, we collected corpora in two different domains which collectively constitute over 113 hours of speech. The first corpus contains 100,000 utterances of read speech and was collected by asking workers to record street addresses in the United States. For the second task, we collected conversations with FlightBrowser, a multimodal spoken dialogue system. The resulting FlightBrowser corpus contains 10,651 utterances comprising 1,113 individual dialogue sessions from 101 distinct users. The aggregate time spent collecting the data for both corpora was just under two weeks. At times, our servers were logging audio from workers at rates faster than real time. We describe the process of collecting and transcribing these corpora and provide an analysis of the advantages and limitations of this data collection method.

2009

意見持有者辨識之研究 (A Study on the Identification of Opinion Holders) [In Chinese]
Chia-Ying Lee | Lun-Wei Ku | Hsin-Hsi Chen
Proceedings of the 21st Conference on Computational Linguistics and Speech Processing

Identification of Opinion Holders
Lun-Wei Ku | Chia-Ying Lee | Hsin-Hsi Chen
International Journal of Computational Linguistics & Chinese Language Processing, Volume 14, Number 4, December 2009