Open Information Extraction on Scientific Text: An Evaluation

Paul Groth, Mike Lauruhn, Antony Scerri, Ron Daniel Jr.

Abstract
Open Information Extraction (OIE) is the task of the unsupervised creation of structured information from text. OIE output is often used as a starting point for a number of downstream tasks, including knowledge base construction, relation extraction, and question answering. While OIE methods aim to be domain independent, they have been evaluated primarily on newspaper, encyclopedic, or general web text. In this article, we evaluate the performance of OIE on scientific texts originating from 10 different disciplines. To do so, we apply two state-of-the-art OIE systems and judge their output using a crowd-sourcing approach. We find that OIE systems perform significantly worse on scientific text than on encyclopedic text. We also provide an error analysis and suggest areas of work to reduce errors. Our corpus of sentences and judgments is made available.
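To make the task concrete: an OIE system takes a sentence and emits schema-free (subject, relation, object) tuples, with no predefined vocabulary of relations. The Python sketch below illustrates only the shape of that output. It is a deliberately naive, pattern-based toy, not a reimplementation of the systems evaluated in the paper; the function name, the small verb list, and the single sentence pattern it handles are assumptions made purely for this example. Real OIE systems (e.g., Stanford Open IE or MinIE) cover far more constructions.

import re

def toy_oie(sentence):
    """Extract one (subject, relation, object) triple from sentences
    of the simple form '<noun phrase> <verb> <noun phrase>.'
    Illustrative toy only; the verb list is an arbitrary assumption."""
    pattern = re.compile(
        r"^(?P<subj>.+?)\s+"                              # subject: words before the verb
        r"(?P<rel>is|are|was|were|causes|inhibits|binds)\s+"  # relation: one verb from a toy list
        r"(?P<obj>.+?)\.?$"                               # object: the rest, trailing period dropped
    )
    m = pattern.match(sentence)
    return (m.group("subj"), m.group("rel"), m.group("obj")) if m else None

if __name__ == "__main__":
    print(toy_oie("Aspirin inhibits platelet aggregation."))
    # -> ('Aspirin', 'inhibits', 'platelet aggregation')

Scientific sentences are rarely this simple, which is precisely the gap the paper measures: long noun phrases, nested clauses, and domain-specific constructions make extractions far harder to get right than on encyclopedic text.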
Anthology ID: C18-1289
Volume: Proceedings of the 27th International Conference on Computational Linguistics
Month: August
Year: 2018
Address: Santa Fe, New Mexico, USA
Editors: Emily M. Bender, Leon Derczynski, Pierre Isabelle
Venue: COLING
Publisher: Association for Computational Linguistics
Pages: 3414–3423
URL: https://aclanthology.org/C18-1289
Cite (ACL): Paul Groth, Mike Lauruhn, Antony Scerri, and Ron Daniel Jr. 2018. Open Information Extraction on Scientific Text: An Evaluation. In Proceedings of the 27th International Conference on Computational Linguistics, pages 3414–3423, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Cite (Informal): Open Information Extraction on Scientific Text: An Evaluation (Groth et al., COLING 2018)
PDF: https://aclanthology.org/C18-1289.pdf