A BERT-based Universal Model for Both Within- and Cross-sentence Clinical Temporal Relation Extraction

Chen Lin, Timothy Miller, Dmitriy Dligach, Steven Bethard, Guergana Savova


Abstract
Classic methods for clinical temporal relation extraction focus on relational candidates within a sentence. In contrast, the breakthrough Bidirectional Encoder Representations from Transformers (BERT) model is trained on large quantities of arbitrary spans of contiguous text rather than on sentences. In this study, we aim to build a sentence-agnostic framework for the task of CONTAINS temporal relation extraction. We establish a new state-of-the-art result for the task, 0.684F for in-domain (0.055-point improvement) and 0.565F for cross-domain (0.018-point improvement), by fine-tuning BERT and pre-training domain-specific BERT models on sentence-agnostic temporal relation instances with WordPiece-compatible encodings, and by augmenting the labeled data with automatically generated “silver” instances.
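
The sketch below illustrates, in broad strokes, what a sentence-agnostic CONTAINS classifier of this kind can look like: a relation instance is an arbitrary contiguous span of text (possibly crossing a sentence boundary) in which the event and time arguments are marked with special tags that are added to the tokenizer vocabulary so WordPiece does not split them, and BERT is fine-tuned to classify the relation. This is a minimal, hypothetical sketch using the Hugging Face transformers library; the tag names, label set, example text, and hyperparameters are assumptions for illustration, not the authors' exact setup.

```python
# Minimal sketch (assumed setup, not the authors' exact pipeline):
# fine-tune BERT for CONTAINS relation classification on a
# sentence-agnostic instance whose arguments span two sentences.
import torch
from transformers import BertTokenizerFast, BertForSequenceClassification

ARG_TAGS = ["<e>", "</e>", "<t>", "</t>"]            # assumed argument markers
LABELS = ["NONE", "CONTAINS", "CONTAINS-1"]           # assumed label set

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
# Register the markers so WordPiece keeps them as single tokens.
tokenizer.add_special_tokens({"additional_special_tokens": ARG_TAGS})

model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS))
model.resize_token_embeddings(len(tokenizer))         # account for the new tags

# A hypothetical cross-sentence instance: the time and event arguments
# occur in different sentences, so the input is an arbitrary text span.
text = ("She was started on <t> a ten-day course of antibiotics </t> . "
        "Repeat imaging showed <e> resolution of the infiltrate </e> .")
enc = tokenizer(text, truncation=True, max_length=128, return_tensors="pt")

# One toy gradient step; a real run would iterate over gold + silver data.
model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
label = torch.tensor([LABELS.index("CONTAINS")])
loss = model(**enc, labels=label).loss                # cross-entropy over labels
loss.backward()
optimizer.step()
```
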
Anthology ID:
W19-1908
Volume:
Proceedings of the 2nd Clinical Natural Language Processing Workshop
Month:
June
Year:
2019
Address:
Minneapolis, Minnesota, USA
Editors:
Anna Rumshisky, Kirk Roberts, Steven Bethard, Tristan Naumann
Venue:
ClinicalNLP
Publisher:
Association for Computational Linguistics
Pages:
65–71
URL:
https://aclanthology.org/W19-1908
DOI:
10.18653/v1/W19-1908
Bibkey:
Cite (ACL):
Chen Lin, Timothy Miller, Dmitriy Dligach, Steven Bethard, and Guergana Savova. 2019. A BERT-based Universal Model for Both Within- and Cross-sentence Clinical Temporal Relation Extraction. In Proceedings of the 2nd Clinical Natural Language Processing Workshop, pages 65–71, Minneapolis, Minnesota, USA. Association for Computational Linguistics.
Cite (Informal):
A BERT-based Universal Model for Both Within- and Cross-sentence Clinical Temporal Relation Extraction (Lin et al., ClinicalNLP 2019)
PDF:
https://aclanthology.org/W19-1908.pdf
Data
MIMIC-III