BIONLP 2022 @ ACL 2022
The 21st BioNLP workshop, associated with the ACL SIGBIOMED special interest group, is co-located with ACL 2022.
IMPORTANT DATES
- March 7, 2022: Workshop Paper Due Date
- Submission site: https://www.softconf.com/acl2022/BioNLP2022
- March 28, 2022: Notification of Acceptance
- April 10, 2022: Camera-ready papers due
- BioNLP 2022 Workshop at ACL, May 26, 2022, Dublin, Ireland
BioNLP 2022 Program
All times are Ireland timezone (GMT+1)
09:00–09:10  Opening remarks

09:10–10:30  Session 1: Question Answering, Discourse Structure and Clinical Applications (onsite oral presentations)

09:10–09:30  Explainable Assessment of Healthcare Articles with QA
  Alodie Boissonnet (1), Marzieh Saeidi (2), Vassilis Plachouras (2), Andreas Vlachos (1)
  (1) University of Cambridge; (2) Facebook

09:30–09:50  A sequence-to-sequence approach for document-level relation extraction
  John Giorgi (1), Gary Bader (1), Bo Wang (2)
  (1) University of Toronto; (2) School of Artificial Intelligence, Jilin University

09:50–10:10  Position-based Prompting for Health Outcome Generation
  Micheal Abaho (1), Danushka Bollegala (2), Paula Williamson (1), Susanna Dodd (1)
  (1) University of Liverpool; (2) University of Liverpool/Amazon

10:10–10:30  How You Say It Matters: Measuring the Impact of Verbal Disfluency Tags on Automated Dementia Detection
  Shahla Farzana, Ashwin Deshpande, Natalie Parde (University of Illinois at Chicago)

10:30–11:00  Coffee Break

11:00–12:30  Hybrid Poster Session 1

- Data Augmentation for Biomedical Factoid Question Answering
  Dimitris Pappas, Prodromos Malakasiotis, Ion Androutsopoulos (Athens University of Economics and Business)
- Slot Filling for Biomedical Information Extraction
  Yannis Papanikolaou, Marlene Staib, Justin Grace, Francine Bennett (Healx Ltd)
- Automatic Biomedical Term Clustering by Learning Fine-grained Term Representations
  Sihang Zeng, Zheng Yuan, Sheng Yu (Tsinghua University)
- BioBART: Pretraining and Evaluation of A Biomedical Generative Language Model
  Hongyi Yuan (1), Zheng Yuan (1), Ruyi Gan (2), Jiaxing Zhang (2), Yutao Xie (2), Sheng Yu (1)
  (1) Tsinghua University; (2) International Digital Economy Academy
- Incorporating Medical Knowledge to Transformer-based Language Models for Medical Dialogue Generation
  Usman Naseem (1), Ajay Bandi (2), Shaina Raza (3), Junaid Rashid (4), Bharathi Raja Chakravarthi (5)
  (1) University of Sydney; (2) Northwest Missouri State University, USA; (3) University of Toronto, Canada; (4) Kongju National University, South Korea; (5) National University of Ireland Galway
- Memory-aligned Knowledge Graph for Clinically Accurate Radiology Image Report Generation
  Sixing Yan (Hong Kong Baptist University)
- Simple Semantic-based Data Augmentation for Named Entity Recognition in Biomedical Texts
  Uyen Phan (1) and Nhung Nguyen (2)
  (1) VNUHCM-University of Science; (2) The University of Manchester
- Auxiliary Learning for Named Entity Recognition with Multiple Auxiliary Biomedical Training Data
  Taiki Watanabe (1), Tomoya Ichikawa (2), Akihiro Tamura (2), Tomoya Iwakura (3), Chunpeng Ma (1), Tsuneo Kato (2)
  (1) Fujitsu Ltd.; (2) Doshisha University; (3) Fujitsu
- SNP2Vec: Scalable Self-Supervised Pre-Training for Genome-Wide Association Study
  Samuel Cahyawijaya, Tiezheng Yu, Zihan Liu, Xiaopu Zhou, Tze Wing Mak, Yuk Yu Ip, Pascale Fung (The Hong Kong University of Science and Technology, Hong Kong, China)
- Biomedical NER using Novel Schema and Distant Supervision
  Anshita Khandelwal, Alok Kar, Veera Chikka, Kamalakar Karlapalem (International Institute of Information Technology)
- Improving Supervised Drug-Protein Relation Extraction with Distantly Supervised Models
  Naoki Iinuma, Makoto Miwa, Yutaka Sasaki (Toyota Technological Institute)
- Named Entity Recognition for Cancer Immunology Research Using Distant Supervision
  Hai-Long Trieu (1), Makoto Miwa (2), Sophia Ananiadou (3)
  (1) National Institute of Advanced Industrial Science and Technology; (2) Toyota Technological Institute; (3) University of Manchester
- Intra-Template Entity Compatibility based Slot-Filling for Clinical Trial Information Extraction
  Christian Witte and Philipp Cimiano (Bielefeld University)
- Pretrained Biomedical Language Models for Clinical NLP in Spanish
  Casimiro Pio Carrino, Joan Llop, Marc Pàmies, Asier Gutiérrez-Fandiño, Jordi Armengol-Estapé, Joaquín Silveira-Ocampo, Alfonso Valencia, Aitor Gonzalez-Agirre, Marta Villegas (Barcelona Supercomputing Center)
- Zero-Shot Aspect-Based Scientific Document Summarization using Self-Supervised Pre-training
  Amir Soleimani (1), Vassilina Nikoulina (2), Benoit Favre (3), Salah Ait Mokhtar (2)
  (1) University of Amsterdam; (2) Naver Labs Europe; (3) Aix Marseille Univ, Université de Toulon, CNRS, LIS, Marseille, France
- Few-Shot Cross-lingual Transfer for Coarse-grained De-identification of Code-Mixed Clinical Texts
  Saadullah Amin (1), Noon Pokaratsiri Goldstein (2), Morgan Wixted (3), Alejandro Garcia-Rudolph (4), Catalina Martínez-Costa (5), Guenter Neumann (1)
  (1) DFKI & Saarland University; (2) DFKI; (3) Saarland University; (4) Institut Guttmann; (5) University of Murcia
- VPAI_Lab at MedVidQA 2022: A Two-Stage Cross-modal Fusion Method for Medical Instructional Video Classification
  Bin Li (1), Yixuan Weng (2), Fei Xia (3), Bin Sun (1), Shutao Li (1)
  (1) Hunan University; (2) Institute of Automation, Chinese Academy of Sciences; (3) National Laboratory of Pattern Recognition, Institute of Automation, and University of Chinese Academy of Sciences, Beijing, China

12:30–14:00  Lunch Break

14:00–15:00  Summarization and text mining (onsite oral presentations)

14:00–14:20  GenCompareSum: a hybrid unsupervised summarization method using salience
  Jennifer Bishop, Qianqian Xie, Sophia Ananiadou (University of Manchester)

14:20–14:40  BioCite: A Deep Learning-based Citation Linkage Framework for Biomedical Research Articles
  Sudipta Singha Roy and Robert E. Mercer (The University of Western Ontario)

14:40–15:00  Low Resource Causal Event Detection from Biomedical Literature
  Zhengzhong Liang, Enrique Noriega-Atala, Clayton Morrison, Mihai Surdeanu (The University of Arizona)

15:00–15:30  Coffee Break

15:30–17:00  Hybrid Poster Session 2

- Overview of the MedVidQA 2022 Shared Task on Medical Video Question-Answering
  Deepak Gupta and Dina Demner-Fushman (National Library of Medicine, NIH)
- Inter-annotator agreement is not the ceiling of machine learning performance: Evidence from a comprehensive set of simulations
  Russell Richie (1), Sachin Grover (1), Fuchiang Tsui (2)
  (1) Children's Hospital of Philadelphia; (2) Children's Hospital of Philadelphia and University of Pennsylvania
- Conversational Bots for Psychotherapy: A Study of Generative Transformer Models Using Domain-specific Dialogues
  Avisha Das (1), Salih Selek (2), Alia Warner (2), Xu Zuo (1), Yan Hu (1), Vipina Kuttichi Keloth (1), Jianfu Li (1), W. Zheng (1), Hua Xu (1)
  (1) School of Biomedical Informatics, UTHealth; (2) McGovern Medical School, UTHealth
- BanglaBioMed: A Biomedical Named-Entity Annotated Corpus for Bangla (Bengali)
  Salim Sazzed (Old Dominion University)
- BEEDS: Large-Scale Biomedical Event Extraction using Distant Supervision and Question Answering
  Xing David Wang, Ulf Leser, Leon Weber (Humboldt-Universität zu Berlin)
- Data Augmentation for Rare Symptoms in Vaccine Side-Effect Detection
  Bosung Kim and Ndapa Nakashole (University of California, San Diego)
- ICDBigBird: A Contextual Embedding Model for ICD Code Classification
  George Michalopoulos (1), Michal Malyska (2), Nicola Sahar (3), Alexander Wong (1), Helen Chen (1)
  (1) University of Waterloo; (2) University of Toronto; (3) Semantic Health
- Doctor XAvIer: Explainable Diagnosis on Physician-Patient Dialogues and XAI Evaluation
  Hillary Ngai (1) and Frank Rudzicz (2)
  (1) Vector Institute for Artificial Intelligence; (2) Vector Institute for Artificial Intelligence and University of Toronto
- DISTANT-CTO: A Zero Cost, Distantly Supervised Approach to Improve Low-Resource Entity Extraction Using Clinical Trials Literature
  Anjani Dhrangadhariya (1) and Henning Müller (2)
  (1) HES-SO Valais-Wallis; (2) HES-SO
- Improving Romanian BioNER Using a Biologically Inspired System
  Maria Mitrofan (1) and Vasile Pais (2)
  (1) RACAI; (2) Research Institute for Artificial Intelligence, Romanian Academy
- EchoGen: Generating Conclusions from Echocardiogram Notes
  Liyan Tang (1), Shravan Kooragayalu (2), Yanshan Wang (2), Ying Ding (1), Greg Durrett (3), Justin Rousseau (1), Yifan Peng (4)
  (1) University of Texas at Austin; (2) University of Pittsburgh; (3) UT Austin; (4) Cornell Medicine
- Quantifying Clinical Outcome Measures in Patients with Epilepsy Using the Electronic Health Record
  Kevin Xie (1), Brian Litt (2), Dan Roth (1), Colin Ellis (2)
  (1) University of Pennsylvania; (2) Perelman School of Medicine, University of Pennsylvania
- Comparing Encoder-Only and Encoder-Decoder Transformers for Relation Extraction from Biomedical Texts: An Empirical Study on Ten Benchmark Datasets
  Mourad Sarrouti, Carson Tao, Yoann Mamy Randriamihaja (Sumitovant Biopharma)
- Utility Preservation of Clinical Text After De-Identification
  Thomas Vakili (1) and Hercules Dalianis (2)
  (1) Department of Computer and Systems Sciences, Stockholm University; (2) DSV/Stockholm University
- Horses to Zebras: Ontology-Guided Data Augmentation and Synthesis for ICD-9 Coding
  Matúš Falis (1), Hang Dong (2), Alexandra Birch (1), Beatrice Alex (1)
  (1) The University of Edinburgh; (2) Oxford University
- Towards Automatic Curation of Antibiotic Resistance Genes via Statement Extraction from Scientific Papers: A Benchmark Dataset and Models
  Sidhant Chandak (1), Liqing Zhang (2), Connor Brown (2), Lifu Huang (2)
  (1) Indian Institute of Technology Kanpur; (2) Virginia Tech
- Model Distillation for Faithful Explanations of Medical Code Predictions
  Zach Wood-Doughty, Isabel Cachola, Mark Dredze (Johns Hopkins University)
- Towards Generalizable Methods for Automating Risk Score Calculation
  Jennifer J Liang (1), Eric Lehman (2), Ananya Iyengar (3), Diwakar Mahajan (1), Preethi Raghavan (1), Cindy Y. Chang (4), Peter Szolovits (2)
  (1) IBM Research; (2) MIT; (3) Northeastern University; (4) Brigham and Women's Hospital
- DoSSIER at MedVidQA 2022: Text-based Approaches to Medical Video Answer Localization Problem
  Wojciech Kusa (1), Georgios Peikos (2), Óscar Espitia (3), Allan Hanbury (1), Gabriella Pasi (4)
  (1) TU Wien; (2) University of Milano-Bicocca; (3) University of Milano Bicocca; (4) Università degli Studi di Milano Bicocca
Submission Types & Requirements
As in previous years, BioNLP 2022 is open to two types of submissions: long and short papers. Please follow the ACL formatting guidelines (https://acl-org.github.io/ACLPUB/formatting.html) and use the official templates: https://github.com/acl-org/acl-style-files
Overleaf templates: https://www.overleaf.com/project/5f64f1fb97c4c50001b60549
WORKSHOP OVERVIEW AND SCOPE
The BioNLP workshop, associated with the ACL SIGBIOMED special interest group, has established itself as the primary venue for presenting foundational research in language processing for the biological and medical domains. Despite, or perhaps because of, reaching maturity, the field of biomedical NLP continues to grow stronger. BioNLP welcomes and encourages inclusion and diversity. The workshop encompasses the breadth of the domain, bringing together researchers in bio- and clinical NLP from all over the world, and will continue to present work on a broad range of topics in NLP.
BioNLP 2022 is particularly interested in work on the detection and mitigation of bias, on BioNLP research in languages other than English (especially under-represented languages), and on health disparities.
Other active areas of research include, but are not limited to:
- Entity identification and normalization (linking) for a broad range of semantic categories;
- Extraction of complex relations and events;
- Discourse analysis;
- Anaphora/coreference resolution;
- Text mining / Literature based discovery;
- Summarization;
- Text simplification;
- Question Answering;
- Resources and strategies for system testing and evaluation;
- Infrastructures and pre-trained language models for biomedical NLP / Processing and annotation platforms;
- Development of synthetic data;
- Translating NLP research into practice;
- Getting reproducible results.
Program Committee
- Sophia Ananiadou, National Centre for Text Mining and University of Manchester, UK
- Saadullah Amin, Saarland University, Germany
- Emilia Apostolova, Anthem, Inc., USA
- Eiji Aramaki, University of Tokyo, Japan
- Timothy Baldwin, University of Melbourne, Australia
- Spandana Balumuri, National Institute of Technology Karnataka, India
- Steven Bethard, University of Arizona, USA
- Robert Bossy, Inrae, Université Paris Saclay, France
- Berry de Bruijn, National Research Council Canada
- Leonardo Campillos-Llanos, Centro Superior de Investigaciones Científicas - CSIC, Spain
- Kevin Bretonnel Cohen, University of Colorado School of Medicine, USA
- Fenia Christopoulou, Huawei Noah's Ark lab, UK
- Brian Connolly, Ohio, USA
- Mike Conway, University of Utah, USA
- Manirupa Das, Amazon, USA
- Surabhi Datta, The University of Texas Health Science Center at Houston, USA
- Dina Demner-Fushman, US National Library of Medicine
- Dmitriy Dligach, Loyola University Chicago, USA
- Kathleen C. Fraser, National Research Council Canada
- Travis Goodwin, US National Library of Medicine
- Natalia Grabar, CNRS, U Lille, France
- Cyril Grouin, LIMSI - CNRS, France
- Tudor Groza, EMBL-EBI
- Deepak Gupta, US National Library of Medicine
- Sam Henry, Christopher Newport University, USA
- William Hogan, UCSD, USA
- Kexin Huang, Stanford University, USA
- Brian Hur, University of Melbourne, Australia
- Richard Jackson, AstraZeneca
- Antonio Jimeno Yepes, IBM, Melbourne Area, Australia
- Sarvnaz Karimi, CSIRO, Australia
- Nazmul Kazi, Montana State University, USA
- Won Gyu KIM, US National Library of Medicine
- Ari Klein, University of Pennsylvania, USA
- Roman Klinger, University of Stuttgart, Germany
- Andre Lamurias, Aalborg University, DK
- Majid Latifi, National College of Ireland
- Alberto Lavelli, FBK-ICT, Italy
- Robert Leaman, US National Library of Medicine
- Lung-Hao Lee, National Central University, Taiwan
- Ulf Leser, Humboldt-Universität zu Berlin, Germany
- Diwakar Mahajan, IBM Thomas J. Watson Research Center, USA
- Mark-Christoph Müller, Heidelberg Institute for Theoretical Studies, Germany
- Claire Nédellec, INRA, Université Paris-Saclay, FR
- Guenter Neumann, DFKI, Saarland, Germany
- Aurelie Neveol, LIMSI - CNRS, France
- Mariana Neves, Hasso-Plattner-Institute at the University of Potsdam, Germany
- Yifan Peng, Weill Cornell Medical College, USA
- Francisco J. Ribadas-Pena, Universidade de Vigo, Spain
- Anthony Rios, The University of Texas at San Antonio, USA
- Angus Roberts, King's College London, UK
- Kirk Roberts, The University of Texas Health Science Center at Houston, USA
- Roland Roller, DFKI, Germany
- Mourad Sarrouti, Sumitovant Biopharma, Inc., USA
- Mario Sänger, Humboldt-Universität zu Berlin, Germany
- Diana Sousa, Universidade de Lisboa, Portugal
- Michael Spranger, Sony, Tokyo, Japan
- Peng Su, University of Delaware, USA
- Madhumita Sushil, University of California, San Francisco, USA
- Karin Verspoor, RMIT University, Melbourne, Australia
- Roger Wattenhofer, ETH Zurich, Switzerland
- Leon Weber, Humboldt Universität Berlin, Germany
- Nathan M. White, James Cook University, Australia
- Davy Weissenbacher, University of Pennsylvania, USA
- W John Wilbur, US National Library of Medicine
- Amelie Wührl, University of Stuttgart, Germany
- Dongfang Xu, Harvard University, USA
- Shweta Yadav, University of Illinois Chicago, USA
- Jingqing Zhang, Imperial College London, UK
- Ayah Zirikly, Johns Hopkins University, USA
- Pierre Zweigenbaum, LIMSI - CNRS, France
SHARED TASK: MedVidQA 2022
The first challenge on Medical Video Question Answering (MedVidQA) is co-located with the BioNLP 2022 workshop. MedVidQA focuses on providing relevant segments of videos as answers to health-related questions. Medical videos may provide the best possible answers to many first aid, medical emergency, and medical education questions. Please check the challenge website for details on the tasks, datasets, and submission guidelines: https://medvidqa.github.io
Organizers
- Dina Demner-Fushman, US National Library of Medicine
- Kevin Bretonnel Cohen, University of Colorado School of Medicine
- Sophia Ananiadou, National Centre for Text Mining and University of Manchester, UK
- Jun-ichi Tsujii, National Institute of Advanced Industrial Science and Technology, Japan
Dual submission policy
Papers may NOT be submitted to the BioNLP 2022 workshop if they are or will be concurrently submitted to another meeting or publication.