Yuxin Huang


2023

pdf bib
Non-parallel Accent Transfer based on Fine-grained Controllable Accent Modelling
Linqin Wang | Zhengtao Yu | Yuanzhang Yang | Shengxiang Gao | Cunli Mao | Yuxin Huang
Findings of the Association for Computational Linguistics: EMNLP 2023

Existing accent transfer works rely on parallel data or speech recognition models. This paper focuses on the practical application of accent transfer and aims to implement accent transfer using non-parallel datasets. Two challenges arise: disentangling speech representations and modeling accents. In our accent modeling transfer framework, we address these problems with two proposed methods. First, we learn the suprasegmental information associated with tone to finely model the accents in terms of tone and rhythm. Second, we propose to use mutual information learning to disentangle the accent features and control the accent of the generated speech at inference time. Experiments show that the proposed framework attains superior performance to the baseline models in terms of accentedness and audio quality.

pdf bib
Layer-wise Fusion with Modality Independence Modeling for Multi-modal Emotion Recognition
Jun Sun | Shoukang Han | Yu-Ping Ruan | Xiaoning Zhang | Shu-Kai Zheng | Yulong Liu | Yuxin Huang | Taihao Li
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Multi-modal emotion recognition has gained increasing attention in recent years due to its widespread applications and the advances in multi-modal learning approaches. However, previous studies primarily focus on developing models that exploit the unification of multiple modalities. In this paper, we propose that maintaining modality independence is beneficial for model performance. Following this principle, we construct a dataset and devise a multi-modal transformer model. The new dataset, the CHinese Emotion Recognition dataset with Modality-wise Annotations, abbreviated as CHERMA, provides uni-modal labels for each individual modality, and multi-modal labels for all modalities jointly observed. The model consists of uni-modal transformer modules that learn representations for each modality, and a multi-modal transformer module that fuses all modalities. All the modules are supervised by their corresponding labels separately, and information flows uni-directionally from the uni-modal modules to the multi-modal module. The supervision strategy and the model architecture guarantee that each individual modality learns its representation independently, while the multi-modal module aggregates all information. Extensive empirical results demonstrate that our proposed scheme outperforms state-of-the-art alternatives, corroborating the importance of modality independence in multi-modal emotion recognition. The dataset and codes are available at https://github.com/sunjunaimer/LFMIM
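The separate supervision and uni-directional information flow described above can be sketched in a few lines. This is an illustrative stand-in only (plain linear maps replace the transformer modules; all names such as `encoder` and `W_fuse` are ours, not from the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W):
    # stand-in for a uni-modal transformer module: one linear layer + tanh
    return np.tanh(x @ W)

# toy inputs for three modalities, each a single 4-dim feature vector
modalities = ["text", "audio", "video"]
x = {m: rng.normal(size=(1, 4)) for m in modalities}
W_uni = {m: rng.normal(size=(4, 8)) for m in modalities}
W_head = {m: rng.normal(size=(8, 3)) for m in modalities}  # per-modality heads
W_fuse = rng.normal(size=(3 * 8, 3))                       # multi-modal head

# each uni-modal module is supervised by its own (uni-modal) label
h = {m: encoder(x[m], W_uni[m]) for m in modalities}
uni_logits = {m: h[m] @ W_head[m] for m in modalities}

# the multi-modal module only *reads* the uni-modal representations:
# information flows uni-directionally from uni-modal to multi-modal
fused = np.concatenate([h[m] for m in modalities], axis=1)
multi_logits = fused @ W_fuse
```

In training, each `uni_logits[m]` would be matched against its uni-modal label and `multi_logits` against the joint label, so no gradient path lets one modality's loss rewrite another modality's encoder through the fusion module.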

pdf bib
融合多粒度特征的缅甸语文本图像识别方法(Burmese Language Recognition Method Fused with Multi-Granularity Features)
Enyu He (何恩宇) | Rui Chen (陈蕊) | Cunli Mao (毛存礼) | Yuxin Huang (黄于欣) | Shengxiang Gao (高盛祥) | Zhengtao Yu (余正涛)
Proceedings of the 22nd Chinese National Conference on Computational Linguistics

Burmese is a low-resource language of Southeast Asia, and Burmese text image recognition is of great significance for downstream tasks such as Burmese machine translation. Since Burmese is a typical character-combining language, multiple characters can be nested within a single receptive field. Existing Burmese recognition methods operate mainly at the character granularity, so during decoding some characters may fail to be recognized correctly, producing locally garbled output. Considering that Burmese has special character-combination rules, this paper proposes a Burmese text image recognition method that fuses multi-granularity features: sequences are modeled at both the finer character granularity and the coarser character-cluster granularity, and the two granularity feature sequences are fused before decoding. Experimental results show that the method effectively alleviates garbled recognition output and, on a manually constructed dataset, improves recognition accuracy by 2.4% over a "VGG16+BiLSTM+Transformer" baseline, reaching 97.35%.

pdf bib
相似音节增强的越汉跨语言实体消歧方法(Similar syllable enhanced cross-lingual entity disambiguation for Vietnamese-Chinese)
Yujuan Li (李裕娟) | Ran Song (宋燃) | Cunli Mao (毛存礼) | Yuxin Huang (黄于欣) | Shengxiang Gao (高盛祥) | Shan Lu (陆杉)
Proceedings of the 22nd Chinese National Conference on Computational Linguistics

Cross-lingual entity disambiguation finds, for a mention in a source-language sentence, the corresponding entity in the target language, and provides important support for cross-lingual NLP tasks. Existing cross-lingual entity disambiguation methods achieve good results on resource-rich languages but perform poorly on low-resource ones, and Vietnamese–Chinese is a typical low-resource pair. Moreover, Chinese and Vietnamese are non-cognate languages with substantial differences, making cross-lingual representation difficult; existing methods are therefore hard to apply to Vietnamese–Chinese entity disambiguation. In fact, Chinese and Vietnamese share similar syllable characteristics, which can strengthen Vietnamese–Chinese cross-lingual entity representations. To better fuse syllable features, we propose a similar-syllable-enhanced Vietnamese–Chinese cross-lingual entity disambiguation method, mitigating the poor performance caused by Vietnamese–Chinese data scarcity and language divergence. Experiments show that the proposed method outperforms existing entity disambiguation methods, improving R@1 by 5.63%.

2022

pdf bib
PaddleSpeech: An Easy-to-Use All-in-One Speech Toolkit
Hui Zhang | Tian Yuan | Junkun Chen | Xintong Li | Renjie Zheng | Yuxin Huang | Xiaojie Chen | Enlei Gong | Zeyu Chen | Xiaoguang Hu | Dianhai Yu | Yanjun Ma | Liang Huang
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: System Demonstrations

PaddleSpeech is an open-source all-in-one speech toolkit. It aims at facilitating the development and research of speech processing technologies by providing an easy-to-use command-line interface and a simple code structure. This paper describes the design philosophy and core architecture of PaddleSpeech to support several essential speech-to-text and text-to-speech tasks. PaddleSpeech achieves competitive or state-of-the-art performance on various speech datasets and implements the most popular methods. It also provides recipes and pretrained models to quickly reproduce the experimental results in this paper. PaddleSpeech is publicly available at https://github.com/PaddlePaddle/PaddleSpeech.

pdf bib
融入音素特征的英-泰-老多语言神经机器翻译方法(English-Thai-Lao multilingual neural machine translation fused with phonemic features)
Zheng Shen (沈政) | Cunli Mao (毛存礼) | Zhengtao Yu (余正涛) | Shengxiang Gao (高盛祥) | Linqin Wang (王琳钦) | Yuxin Huang (黄于欣)
Proceedings of the 21st Chinese National Conference on Computational Linguistics

Multilingual neural machine translation is an effective way to improve translation quality for low-resource languages. Because character sets differ greatly across languages, existing methods struggle to obtain a unified word representation. Thai and Lao are low-resource languages with phonemic similarity, and exploiting language similarity can narrow semantic distance. We therefore propose a multilingual word representation learning method that fuses phonemic features: (1) a phoneme feature representation module and a Thai–Lao text representation module obtain, via a cross-attention mechanism, Thai–Lao text representations fused with phonemic features, narrowing the semantic distance between Thai and Lao; (2) in the fine-tuning stage, parameter differentiation yields language-pair-specific training parameters, alleviating the over-generalization caused by joint training. On the ALT dataset, the proposed method improves over the baseline model by 0.97 and 0.99 BLEU in the Thai–English and Lao–English translation directions, respectively.
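The cross-attention fusion in step (1) can be sketched as follows: queries come from the text side and keys/values from the phoneme side, so each text position gathers phonemic information. This is a minimal generic sketch of cross-attention, not the paper's implementation; all names and dimensions are our own:

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_attention(text_h, phon_h):
    """Fuse phoneme features into text representations.

    Queries are the text states, keys/values are the phoneme states,
    so each text token attends over the whole phoneme sequence.
    """
    d = text_h.shape[-1]
    scores = text_h @ phon_h.T / np.sqrt(d)   # (T_text, T_phon)
    attn = softmax(scores)                    # rows sum to 1
    return attn @ phon_h                      # (T_text, d)

rng = np.random.default_rng(1)
text_h = rng.normal(size=(5, 16))   # 5 text-token states
phon_h = rng.normal(size=(9, 16))   # 9 phoneme-frame states
fused = cross_attention(text_h, phon_h)
```

A full transformer layer would add learned query/key/value projections and a residual connection around this operation; the sketch keeps only the attention core.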

2021

pdf bib
基于阅读理解的汉越跨语言新闻事件要素抽取方法(News Events Element Extraction of Chinese-Vietnamese Cross-language Using Reading Comprehension)
Enchang Zhu (朱恩昌) | Zhengtao Yu (余正涛) | Chengxiang Gao (高盛祥) | Yuxin Huang (黄宇欣) | Junjun Guo (郭军军)
Proceedings of the 20th Chinese National Conference on Computational Linguistics

News event element extraction aims to extract the elements that describe the topic event in news text, such as times, locations, persons, and organization names. Traditional event element extraction methods perform poorly on low-resource languages and have difficulty modeling the semantics of long texts. This paper therefore proposes a reading-comprehension-based method for Chinese–Vietnamese cross-lingual news event element extraction. The method first uses a key-sentence retrieval module to filter noisy sentences from long news texts, and then uses a cross-lingual reading comprehension model to transfer knowledge from the resource-rich language to Vietnamese, improving Vietnamese news event element extraction. Experiments on a self-built Chinese–Vietnamese bilingual news event element extraction dataset demonstrate the effectiveness of the method.
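The key-sentence filtering step can be illustrated with a simple lexical-overlap ranker. This is only a hypothetical stand-in for the paper's retrieval module (the function name and scoring rule are ours), shown to make the pipeline concrete:

```python
def key_sentences(sentences, query, k=2):
    """Rank sentences by token overlap with the query and keep the top k.

    A toy stand-in for a key-sentence retrieval module that filters
    noisy sentences from a long news article before element extraction.
    """
    q = set(query.lower().split())
    scored = sorted(sentences,
                    key=lambda s: len(q & set(s.lower().split())),
                    reverse=True)
    return scored[:k]

doc = [
    "The ceremony took place in Hanoi on Monday.",
    "Unrelated advertising text about shopping deals.",
    "Officials from both countries attended the ceremony.",
]
top = key_sentences(doc, "where did the ceremony take place", k=2)
```

The retained sentences would then be fed, together with a question per event element, to the cross-lingual reading comprehension model.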

pdf bib
融合多粒度特征的低资源语言词性标记和依存分析联合模型(A Joint Model with Multi-Granularity Features of Low-resource Language POS Tagging and Dependency Parsing)
Sha Lu (陆杉) | Cunli Mao (毛存礼) | Zhengtao Yu (余正涛) | Chengxiang Gao (高盛祥) | Yuxin Huang (黄于欣) | Zhenhan Wang (王振晗)
Proceedings of the 20th Chinese National Conference on Computational Linguistics

POS tagging and dependency parsing for low-resource languages play an important role in advancing low-resource NLP tasks. For low-resource word embeddings, prior work has not fully exploited character- and subword-level information, so models cannot use features at different granularities. We therefore propose a word representation that fuses multi-granularity features: separate language models obtain character-, subword-, and word-level semantic information, and the three granularities of embeddings are concatenated to enrich the semantics and mitigate the poor parser performance caused by scarce annotated data. We further train the POS tagging and dependency parsing models jointly so that they can share knowledge, reducing the propagation of POS tagging errors into dependency parsing. Taking Thai and Vietnamese as the research objects, on the Penn Treebank-style datasets the proposed method clearly improves UAS, LAS, and POS over the baseline models.
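The concatenation of the three granularities can be sketched directly. This is a minimal sketch assuming each granularity is first pooled to a fixed-size vector (the pooling choice and names are ours; the paper obtains each level from its own language model):

```python
def mean_pool(vectors):
    # average a list of equal-length vectors into one vector
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def multi_granularity_embedding(char_vecs, subword_vecs, word_vec):
    """Concatenate pooled character-, subword-, and word-level embeddings."""
    return mean_pool(char_vecs) + mean_pool(subword_vecs) + word_vec

emb = multi_granularity_embedding(
    char_vecs=[[1.0, 1.0], [3.0, 3.0]],   # two character embeddings
    subword_vecs=[[0.0, 2.0]],            # one subword embedding
    word_vec=[5.0, 5.0],                  # word-level embedding
)
```

The resulting vector (here 6-dimensional) is what the joint tagger/parser would consume in place of a single-granularity word embedding.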

2020

pdf bib
基于跨语言双语预训练及Bi-LSTM的汉-越平行句对抽取方法(Chinese-Vietnamese Parallel Sentence Pair Extraction Method Based on Cross-lingual Bilingual Pre-training and Bi-LSTM)
Chang Liu (刘畅) | Shengxiang Gao (高盛祥) | Zhengtao Yu (余正涛) | Yuxin Huang (黄于欣) | Congcong You (尤丛丛)
Proceedings of the 19th Chinese National Conference on Computational Linguistics

Chinese–Vietnamese parallel sentence pair extraction is an important way to alleviate the scarcity of Chinese–Vietnamese parallel corpora. Parallel sentence pair extraction can be cast as a sentence-similarity classification task in a shared semantic space, whose core is aligning the bilingual semantic spaces. Traditional alignment methods rely on large-scale bilingual parallel corpora, which are relatively difficult to obtain for a low-resource language such as Vietnamese. To address this, we propose a Chinese–Vietnamese parallel sentence pair extraction method based on cross-lingual bilingual pre-training with a seed dictionary and a Bi-LSTM (Bi-directional Long Short-Term Memory). Pre-training requires only large amounts of Chinese and Vietnamese monolingual text and a Chinese–Vietnamese seed dictionary, which maps the two languages into a common semantic space for word alignment. A Bi-LSTM and a CNN (Convolutional Neural Network) then extract global and local sentence features, respectively, to maximize the semantic correlation between Chinese–Vietnamese sentence pairs. Experimental results show that our model improves F1 by 7.1%, outperforming the baseline model.
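The global/local feature pairing can be illustrated with simple stand-ins: mean pooling in place of the Bi-LSTM's global view, and a max over sliding n-gram windows in place of the CNN's local view, scored with cosine similarity. All names and the scoring rule are ours, not the paper's classifier:

```python
import math

def global_feature(seq):
    # stand-in for the Bi-LSTM: mean over the token vectors
    n = len(seq)
    return [sum(v[i] for v in seq) / n for i in range(len(seq[0]))]

def local_feature(seq, n=2):
    # stand-in for the CNN: element-wise max over n-gram window sums
    windows = [[sum(seq[j][i] for j in range(k, k + n))
                for i in range(len(seq[0]))]
               for k in range(len(seq) - n + 1)]
    return [max(w[i] for w in windows) for i in range(len(seq[0]))]

def pair_score(a, b):
    # cosine similarity between concatenated global+local features
    fa = global_feature(a) + local_feature(a)
    fb = global_feature(b) + local_feature(b)
    dot = sum(x * y for x, y in zip(fa, fb))
    na = math.sqrt(sum(x * x for x in fa))
    nb = math.sqrt(sum(x * x for x in fb))
    return dot / (na * nb)

s = [[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]]  # toy token vectors for one sentence
```

In the paper the two feature views come from learned encoders over the shared cross-lingual embeddings, and the pair score feeds a parallel/non-parallel classifier rather than a raw cosine threshold.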

pdf bib
基于拼音约束联合学习的汉语语音识别(Chinese Speech Recognition Based on Pinyin Constraint Joint Learning)
Renfeng Liang (梁仁凤) | Zhengtao Yu (余正涛) | Shengxiang Gao (高盛祥) | Yuxin Huang (黄于欣) | Junjun Guo (郭军军) | Shuli Xu (许树理)
Proceedings of the 19th Chinese National Conference on Computational Linguistics

Current speech recognition models already achieve good results on phonographic writing systems such as English and French. Chinese, however, is a typical logographic script: Chinese characters have no direct correspondence with sounds, but Pinyin, as the phonetic notation of character pronunciation, is inherently inter-convertible with the characters. We therefore use Pinyin as a decoding constraint in Chinese speech recognition, introducing an inductive bias closer to the speech signal. Within a multi-task learning framework, we propose a Pinyin-constraint joint learning method for Chinese speech recognition: end-to-end character-level speech recognition is the main task and Pinyin recognition is an auxiliary task; through a shared encoder, both the character and Pinyin recognition results serve as supervision signals, strengthening the encoder's representation of Chinese speech. Experimental results show that, compared with the baseline model, the proposed method achieves better recognition, reducing the word error rate (WER) by 2.24 percentage points.
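The shared-encoder multi-task setup can be sketched as one encoder feeding two output heads whose losses are summed. This is a generic multi-task sketch, not the paper's model; the toy dimensions, labels, and the auxiliary weight `lam` are our own assumptions:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(2)
x = rng.normal(size=(1, 10))            # one toy acoustic feature frame
W_enc = rng.normal(size=(10, 16))       # shared encoder weights
W_char = rng.normal(size=(16, 50))      # character-vocabulary head
W_pinyin = rng.normal(size=(16, 30))    # pinyin-vocabulary head

h = np.tanh(x @ W_enc)                  # shared encoder output
p_char = softmax(h @ W_char)            # main-task distribution
p_pinyin = softmax(h @ W_pinyin)        # auxiliary-task distribution

# joint objective: both supervision signals flow back into the shared encoder
char_label, pinyin_label = 7, 3         # toy target indices
lam = 0.3                               # auxiliary-task weight (our choice)
loss = -np.log(p_char[0, char_label]) - lam * np.log(p_pinyin[0, pinyin_label])
```

Because both cross-entropy terms depend on `h`, gradient updates to `W_enc` are shaped by Pinyin supervision as well as character supervision, which is the intended constraint.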

pdf bib
Towards Understanding Gender Bias in Relation Extraction
Andrew Gaut | Tony Sun | Shirlyn Tang | Yuxin Huang | Jing Qian | Mai ElSherief | Jieyu Zhao | Diba Mirza | Elizabeth Belding | Kai-Wei Chang | William Yang Wang
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

Recent developments in Neural Relation Extraction (NRE) have made significant strides towards Automated Knowledge Base Construction. While much attention has been dedicated towards improvements in accuracy, there have been no attempts in the literature to evaluate social biases exhibited in NRE systems. In this paper, we create WikiGenderBias, a distantly supervised dataset composed of over 45,000 sentences, including a 10% human-annotated test set, for the purpose of analyzing gender bias in relation extraction systems. We find that when extracting spouse-of and hypernym (i.e., occupation) relations, an NRE system performs differently depending on the gender of the target entity. However, such disparity does not appear when extracting relations such as birthDate or birthPlace. We also analyze how existing bias mitigation techniques, such as name anonymization, word embedding debiasing, and data augmentation, affect the NRE system in terms of maintaining test performance and reducing biases. Unfortunately, because NRE models rely heavily on surface-level cues, we find that existing bias mitigation approaches have a negative effect on NRE. Our analysis lays the groundwork for future work on quantifying and mitigating bias in NRE.
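Of the mitigation techniques mentioned, name anonymization is the simplest to make concrete: entity names are replaced with neutral placeholders so the model cannot condition on gendered surface cues. A minimal sketch (the function name and placeholder strings are ours, for illustration):

```python
def anonymize_names(sentence, entity1, entity2):
    """Replace the two entity mentions with neutral placeholders.

    Name anonymization removes surface-level gender cues (the names
    themselves) before the sentence reaches the relation extractor.
    """
    return (sentence.replace(entity1, "E1")
                    .replace(entity2, "E2"))

s = anonymize_names("Mary is married to John.", "Mary", "John")
```

As the abstract notes, precisely because NRE models lean on such surface cues, removing them tends to cost test performance alongside reducing measured bias.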

2019

pdf bib
Mitigating Gender Bias in Natural Language Processing: Literature Review
Tony Sun | Andrew Gaut | Shirlyn Tang | Yuxin Huang | Mai ElSherief | Jieyu Zhao | Diba Mirza | Elizabeth Belding | Kai-Wei Chang | William Yang Wang
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

As Natural Language Processing (NLP) and Machine Learning (ML) tools rise in popularity, it becomes increasingly vital to recognize the role they play in shaping societal biases and stereotypes. Although NLP models have shown success in modeling various applications, they propagate and may even amplify gender bias found in text corpora. While the study of bias in artificial intelligence is not new, methods to mitigate gender bias in NLP are relatively nascent. In this paper, we review contemporary studies on recognizing and mitigating gender bias in NLP. We discuss gender bias based on four forms of representation bias and analyze methods recognizing gender bias. Furthermore, we discuss the advantages and drawbacks of existing gender debiasing methods. Finally, we discuss future studies for recognizing and mitigating gender bias in NLP.