Quang Thuy Ha

Also published as: Quang-Thuy Ha


2023

Solving Label Variation in Scientific Information Extraction via Multi-Task Learning
Dong Pham | Xanh Ho | Quang Thuy Ha | Akiko Aizawa
Proceedings of the 37th Pacific Asia Conference on Language, Information and Computation

Self-MI: Efficient Multimodal Fusion via Self-Supervised Multi-Task Learning with Auxiliary Mutual Information Maximization
Cam-Van Nguyen Thi | Ngoc-Hoa Nguyen Thi | Duc-Trong Le | Quang-Thuy Ha
Proceedings of the 37th Pacific Asia Conference on Language, Information and Computation

2021

UETrice at MEDIQA 2021: A Prosper-thy-neighbour Extractive Multi-document Summarization Model
Duy-Cat Can | Quoc-An Nguyen | Quoc-Hung Duong | Minh-Quang Nguyen | Huy-Son Nguyen | Linh Nguyen Tran Ngoc | Quang-Thuy Ha | Mai-Vu Tran
Proceedings of the 20th Workshop on Biomedical Language Processing

This paper describes a system developed for the multiple-answer summarization challenge in the MEDIQA 2021 shared task, collocated with the BioNLP 2021 Workshop. We propose an extractive summarization architecture based on several scores and state-of-the-art techniques. We also present our novel prosper-thy-neighbour strategies to improve performance. Our model proved effective, achieving the best ROUGE-1/ROUGE-L scores and finishing as the shared-task runner-up by ROUGE-2 F1 score (among 13 participating teams).

2019

A Richer-but-Smarter Shortest Dependency Path with Attentive Augmentation for Relation Extraction
Duy-Cat Can | Hoang-Quynh Le | Quang-Thuy Ha | Nigel Collier
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)

To extract the relationship between two entities in a sentence, two common approaches are (1) using their shortest dependency path (SDP) and (2) using an attention model to capture a context-based representation of the sentence. Each approach suffers from its own disadvantage of either missing or redundant information. In this work, we propose a novel model that combines the advantages of these two approaches: it builds on the core information in the SDP, enhanced with information selected by several attention mechanisms with kernel filters, namely RbSP (Richer-but-Smarter SDP). To exploit the representation behind the RbSP structure effectively, we develop a combined deep neural model with an LSTM network on word sequences and a CNN on the RbSP. Experimental results on the SemEval-2010 dataset demonstrate improved performance over competitive baselines. The data and source code are available at https://github.com/catcd/RbSP.
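As background for the abstract above: the shortest dependency path between two entities is found by treating the sentence's dependency parse as an undirected graph and searching for the shortest route between the two entity tokens. A minimal sketch, assuming a toy parse supplied as head–dependent index pairs (this is illustrative background only, not the released RbSP code):

```python
from collections import deque

def shortest_dependency_path(edges, source, target):
    """Return the shortest path between two tokens in a dependency parse.

    edges: iterable of (head, dependent) token-index pairs; the parse is
    treated as an undirected graph, so BFS finds the shortest path.
    Returns a list of token indices from source to target, or None.
    """
    adjacency = {}
    for head, dep in edges:
        adjacency.setdefault(head, set()).add(dep)
        adjacency.setdefault(dep, set()).add(head)
    queue = deque([[source]])
    visited = {source}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == target:
            return path
        for neighbour in adjacency.get(node, ()):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None

# Toy parse of "The burst has been caused by pressure":
# caused(4) is the root; burst(1) and pressure(6) are the entity tokens.
edges = [(4, 1), (4, 3), (4, 6), (1, 0), (6, 5)]
print(shortest_dependency_path(edges, 1, 6))  # → [1, 4, 6]
```

The resulting token sequence (here burst → caused → pressure) is the SDP that the paper's model enriches with attention-selected context.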

2012

An Experiment in Integrating Sentiment Features for Tech Stock Prediction in Twitter
Tien Thanh Vu | Shu Chang | Quang Thuy Ha | Nigel Collier
Proceedings of the Workshop on Information Extraction and Entity Analytics on Social Media Data

2006

Vietnamese Word Segmentation with CRFs and SVMs: An Investigation
Cam-Tu Nguyen | Trung-Kien Nguyen | Xuan-Hieu Phan | Le-Minh Nguyen | Quang-Thuy Ha
Proceedings of the 20th Pacific Asia Conference on Language, Information and Computation