BIONLP 2023 and Shared Tasks @ ACL 2023
The 22nd BioNLP workshop associated with the ACL SIGBIOMED special interest group is co-located with ACL 2023
IMPORTANT DATES
- April 24, 2023: Workshop Paper Due Date.
- Submission site for the workshop only: https://softconf.com/acl2023/BioNLP2023/
- Submission site for the SHARED TASKS only: https://softconf.com/acl2023/BioNLP2023-ST
- May 29, 2023: Notification of Acceptance
- June 6, 2023: Camera-ready papers due
- June 12, 2023: Pre-recorded video due
Videos are optional. The instructions below apply to the video only, not to the final paper submission. Videos should not exceed 10 minutes.
Instructions:
https://docs.google.com/presentation/d/1STKSZ22v3ucS9smfDfhREQhwRB9_bIwu7mnVYKUq7A8/edit?usp=sharing
Form (linked in SLIDE 4) https://acl2023workshops.paperform.co/
- BioNLP 2023 Workshop at ACL, July 13, 2023, Toronto, Canada
VISA Information
ACL organizers are processing the requests.
Please see the instructions here: https://2023.aclweb.org/blog/visa-info/
Poster size:
All posters should be A0, orientation: Portrait.
BioNLP 2023: Program TENTATIVE
Thursday July 13, 2023 | |
8:30–8:40 | Opening remarks |
Session 1: Evaluating speech, models and literature-related tasks | |
8:40–9:00 | Evaluating and Improving Automatic Speech Recognition using Severity Ryan Whetten and Casey Kennington, Boise State University |
9:00–9:20 | Is the ranking of PubMed similar articles good enough? An evaluation of text similarity methods for three datasets Mariana Neves, Ines Schadock, Beryl Eusemann, Gilbert Schönfelder, Bettina Bert, Daniel Butzke, German Federal Institute for Risk Assessment |
9:20–9:40 | BIOptimus: Pre-training an Optimal Biomedical Language Model with Curriculum Learning for Named Entity Recognition Vera Pavlova and Mohammed Makhlouf, rttl.ai |
9:40–10:00 | Promoting Fairness in Classification of Quality of Medical Evidence Simon Suster1, Timothy Baldwin2, Karin Verspoor3, 1University of Melbourne, 2MBZUAI, 3RMIT University |
10:00–10:30 | BioLaySumm 2023 Shared Task: Lay Summarisation of Biomedical Research Articles Tomas Goldsack1, Zheheng Luo2, Qianqian Xie2, Carolina Scarton1, Matthew Shardlow3, Sophia Ananiadou2, Chenghua Lin1, 1University of Sheffield, 2University of Manchester, 3Manchester Metropolitan University |
10:30–11:00 | Coffee Break |
Session 2: Clinical Language Processing | |
11:00–11:40 | Invited Talk: Dementia Detection from Speech: New Developments and Future Directions Speaker: Kathleen Fraser |
11:40–12:10 | Overview of the Problem List Summarization (ProbSum) 2023 Shared Task on Summarizing Patients' Active Diagnoses and Problems from Electronic Health Record Progress Notes Yanjun Gao1, Dmitriy Dligach2, Timothy Miller3, Majid Afshar1, 1University of Wisconsin, 2Loyola University Chicago, 3Boston Children's Hospital and Harvard Medical School |
12:10–12:40 | Overview of the RadSum23 Shared Task on Multi-modal and Multi-anatomical Radiology Report Summarization Jean-Benoit Delbrouck, Maya Varma, Pierre Chambon, Curtis Langlotz, Stanford University |
12:40–13:00 | RadAdapt: Radiology Report Summarization via Lightweight Domain Adaptation of Large Language Models Dave Van Veen1, Cara Van Uden1, Maayane Attias1, Anuj Pareek1, Christian Bluethgen1, Malgorzata Polacin2, Wah Chiu1, Jean-Benoit Delbrouck1, Juan Zambrano Chaves1, Curtis Langlotz1, Akshay Chaudhari1, John Pauly1, 1Stanford University, 2Stanford University, ETH Zurich |
13:00–14:30 | Lunch |
14:00–17:45 | Onsite Poster Session 1 |
How Much do Knowledge Graphs Impact Transformer Models for Extracting Biomedical Events? Laura Zanella and Yannick Toussaint, LORIA, Université de Lorraine | |
DISTANT: Distantly Supervised Entity Span Detection and Classification Ken Yano1, Makoto Miwa2, Sophia Ananiadou3, 1The National Institute of Advanced Industrial Science and Technology, 2Toyota Technological Institute, 3University of Manchester | |
Event-independent temporal positioning: application to French clinical text Nesrine Bannour1, Bastien Rance2, Xavier Tannier3, Aurélie Névéol1, 1Université Paris Saclay, CNRS, LISN, 2INSERM, centre de Recherche des Cordeliers, Université Paris Cité, Sorbonne Paris Cité, AP-HP, HEGP, HeKa, Inria Paris, 3Sorbonne Université, Inserm, LIMICS | |
AliBERT: A Pre-trained Language Model for French Biomedical Text Aman Berhe1, Guillaume Draznieks2, Vincent Martenot2, Valentin Masdeu2, Lucas Davy2, Jean-Daniel Zucker3, 1SU/IRD UMMISCO & Quinten, 2Quinten, 3SU/IRD, UMMISCO | |
Building a Corpus for Biomedical Relation Extraction of Species Mentions Oumaima El Khettari, Solen Quiniou, Samuel Chaffron, Nantes Université - LS2N | |
Automated Extraction of Molecular Interactions and Pathway Knowledge using Large Language Model, Galactica: Opportunities and Challenges Gilchan Park1, Byung-Jun Yoon1, Xihaier Luo1, Vanessa López-Marrero1, Patrick Johnstone1, Shinjae Yoo2, Francis Alexander1, 1Brookhaven National Laboratory, 2BNL | |
Automatic Glossary of Clinical Terminology: a Large-Scale Dictionary of Biomedical Definitions Generated from Ontological Knowledge François Remy, Kris Demuynck, Thomas Demeester, Ghent University - imec | |
Resolving Elliptical Compounds in German Medical Text Niklas Kämmer1, Florian Borchert1, Silvia Winkler1, Gerard de Melo2, Matthieu-P. Schapranow1, 1Hasso Plattner Institute, University of Potsdam, 2HPI/University of Potsdam | |
End-to-end clinical temporal information extraction with multi-head attention Timothy Miller1, Steven Bethard2, Dmitriy Dligach3, Guergana Savova1, 1Boston Children's Hospital and Harvard Medical School, 2University of Arizona, 3Loyola University Chicago | |
Intermediate Domain Finetuning for Weakly Supervised Domain-adaptive Clinical NER Shilpa Suresh, Nazgol Tavabi, Shahriar Golchin, Leah Gilreath, Rafael Garcia-Andujar, Alexander Kim, Joseph Murray, Blake Bacevich, Ata Kiapour, Musculoskeletal Informatics Group, Boston Children's Hospital, Harvard Medical School | |
Biomedical Language Models are Robust to Sub-optimal Tokenization Bernal Jimenez Gutierrez, Huan Sun, Yu Su, The Ohio State University | |
BioNART: A Biomedical Non-AutoRegressive Transformer for Natural Language Generation Masaki Asada1 and Makoto Miwa2, 1National Institute of Advanced Industrial Science and Technology, 2Toyota Technological Institute | |
Can Social Media Inform Dietary Approaches for Health Management? A Dataset and Benchmark for Low-Carb Diet Skyler Zou, Xiang Dai, Grant Brinkworth, Pennie Taylor, Sarvnaz Karimi, CSIRO | |
Hospital Discharge Summarization Data Provenance Paul Landes1, Aaron Chase2, Kunal Patel1, Sean Huang2, Barbara Di Eugenio1, 1University of Illinois at Chicago, 2Vanderbilt University | |
Evaluation of ChatGPT on Biomedical Tasks: A Zero-Shot Comparison with Fine-Tuned Generative Transformers Israt Jahan1, Md Tahmid Rahman Laskar2, Chun Peng1, Jimmy Huang1, 1York University, 2Dialpad Inc. | |
Zero-Shot Information Extraction for Clinical Meta-Analysis using Large Language Models David Kartchner1,3, Selvi Ramalingam2, Irfan Al-Hussaini3, Olivia Kronick3, Cassie Mitchell3, 1Enveda Biosciences, 2Emory University, 3Georgia Institute of Technology | |
Good Data, Large Data, or No Data? Comparing Three Approaches in Developing Research Aspect Classifiers for Biomedical Papers Shreya Chandrasekhar, Chieh-Yang Huang, Ting-Hao Huang, Penn State University | |
15:30–16:00 | Coffee Break |
14:30–17:45 | Virtual Session 1 |
Multi-Source (Pre-)Training for Cross-Domain Measurement, Unit and Context Extraction Yueling Li1, Sebastian Martschat1, Simone Paolo Ponzetto2, 1BASF SE, 2University of Mannheim | |
Gaussian Distributed Prototypical Network for Few-shot Genomic Variant Detection Jiarun Cao, Niels Peek, Andrew Renehan, Sophia Ananiadou, University of Manchester | |
Boosting Radiology Report Generation by Infusing Comparison Prior Sanghwan Kim1, Farhad Nooralahzadeh2, Morteza Rohanian2, Koji Fujimoto3, Mizuho Nishio3, Ryo Sakamoto3, Fabio Rinaldi4, Michael Krauthammer2, 1ETH Zürich, 2University of Zurich, 3Kyoto University Graduate School of Medicine, 4IDSIA, Swiss AI Institute | |
Using Bottleneck Adapters to Identify Cancer in Clinical Notes under Low-Resource Constraints Omid Rohanian, Hannah Jauncey, Mohammadmahdi Nouriborji, Vinod Kumar, Bronner P. Gonçalves, Christiana Kartsonaki, ISARIC Clinical Characterisation Group, Laura Merson, David Clifton, University of Oxford | |
Zero-shot Temporal Relation Extraction with ChatGPT Chenhan Yuan, Qianqian Xie, Sophia Ananiadou, University of Manchester | |
Sentiment-guided Transformer with Severity-aware Contrastive Learning for Depression Detection on Social Media Tianlin Zhang, Kailai Yang, Sophia Ananiadou, University of Manchester | |
Exploring Drug Switching in Patients: A Deep Learning-based Approach to Extract Drug Changes and Reasons from Social Media Mourad Sarrouti, Carson Tao, Yoann Mamy Randriamihaja, Sumitovant Biopharma | |
An end-to-end neural model based on cliques and scopes for frame extraction in long breast radiology reports Perceval Wajsburt1 and Xavier Tannier2, 1Sorbonne Université, 2Sorbonne Université, Inserm, LIMICS | |
Large Language Models as Instructors: A Study on Multilingual Clinical Entity Extraction Simon Meoni1, Éric De la Clergerie2, Théo Ryffel3, 1Arkhn/INRIA, 2Inria, 3Arkhn | |
ADEQA: A Question Answer based approach for joint ADE-Suspect Extraction using Sequence-To-Sequence Transformers Vinayak Arannil, Tomal Deb, Atanu Roy, Amazon | |
Privacy Aware Question-Answering System for Online Mental Health Risk Assessment Prateek Chhikara, Ujjwal Pasupulety, John Marshall, Dhiraj Chaurasia, Shweta Kumari, University of Southern California | |
Multiple Evidence Combination for Fact-Checking of Health-Related Information Pritam Deka, Anna Jurek-Loughrey, Deepak P, Queen's University Belfast | |
Comparing and combining some popular NER approaches on Biomedical tasks Harsh Verma, Sabine Bergler, Narjesossadat Tahaei, Concordia University | |
Extracting Drug-Drug and Protein-Protein Interactions from Text using a Continuous Update of Tree-Transformers Sudipta Singha Roy and Robert E. Mercer, The University of Western Ontario | |
Augmenting Reddit Posts to Determine Wellness Dimensions impacting Mental Health Chandreen Liyanage1, Muskan Garg2, Vijay Mago1, Sunghwan Sohn2, 1Lakehead University, 2Mayo Clinic | |
Distantly Supervised Document-Level Biomedical Relation Extraction with Neighborhood Knowledge Graphs Takuma Matsubara, Makoto Miwa, Yutaka Sasaki, Toyota Technological Institute | |
Biomedical Relation Extraction with Entity Type Markers and Relation-specific Question Answering Koshi Yamada, Makoto Miwa, Yutaka Sasaki, Toyota Technological Institute | |
Biomedical Document Classification with Literature Graph Representations of Bibliographies and Entities Ryuki Ida, Makoto Miwa, Yutaka Sasaki, Toyota Technological Institute | |
WeLT: Improving Biomedical Fine-tuned Pre-trained Language Models with Cost-sensitive Learning Ghadeer Mobasher1,2, Wolfgang Müller2, Olga Krebs2, Michael Gertz1, 1Heidelberg University, 2Heidelberg Institute for Theoretical Studies – HITS gGmbH | |
Exploring Partial Knowledge Base Inference in Biomedical Entity Linking Hongyi Yuan1, Keming Lu2, Zheng Yuan3, 1Tsinghua University, 2University of Southern California, 3Alibaba Group | |
14:00–17:45 | Onsite Shared Task Poster Session |
GRASUM at BioLaySumm Task 1: Background Knowledge Grounding for Readable, Relevant, and Factual Biomedical Lay Summaries Domenic Rosati, scite | |
Team:PULSAR at ProbSum 2023:PULSAR: Pre-training with Extracted Healthcare Terms for Summarising Patients' Problems and Data Augmentation with Black-box Large Language Models Hao Li1, Yuping Wu1, Viktor Schlegel2, Riza Batista-Navarro1, Thanh-Tung Nguyen3, Abhinav Ramesh Kashyap2, Xiao-Jun Zeng1, Daniel Beck4, Stefan Winkler5, Goran Nenadic1, 1University of Manchester, 2ASUS AICS, 3ASUS, 4University of Melbourne, 5National University of Singapore | |
CUED at ProbSum 2023: Hierarchical Ensemble of Summarization Models Potsawee Manakul, Yassir Fathullah, Adian Liusie, Vyas Raina, Vatsal Raina, Mark Gales, University of Cambridge | |
shs-nlp at RadSum23: Domain-Adaptive Pre-training of Instruction-tuned LLMs for Radiology Report Impression Generation Sanjeev Kumar Karn1, Rikhiya Ghosh2, Kusuma P2, Oladimeji Farri2, 1Siemens, 2Siemens Healthineers | |
CSIRO Data61 Team at BioLaySumm Task 1: Lay Summarisation of Biomedical Research Articles Using Generative Models Mong Yuan Sim1, Xiang Dai2, Maciej Rybinski3, Sarvnaz Karimi3, 1The University of Adelaide, 2CSIRO Data61, 3CSIRO | |
KU-DMIS-MSRA at RadSum23: Pre-trained Vision-Language Model for Radiology Report Summarization Gangwoo Kim1, Hajung Kim1, Lei Ji2, Seongsu Bae3, Chanhwi Kim4, Mujeen Sung1, Hyunjae Kim1, Kun Yan5, Eric Chang6, Jaewoo Kang1, 1Korea University, 2MSRA, 3KAIST, 4Korea University, DMIS, 5Beihang University, 6Kingtex | |
IKM_Lab at BioLaySumm Task 1: Longformer-based Prompt Tuning for Biomedical Lay Summary Generation Yu-Hsuan Wu, Ying-Jia Lin, Hung-Yu Kao, National Cheng Kung University | |
MDC at BioLaySumm Task 1: Evaluating GPT Models for Biomedical Lay Summarization Oisín Turbitt, Robert Bevan, Mouhamad Aboshokor, Medicines Discovery Catapult | |
14:30–17:45 | Virtual Shared Task Poster Session |
TALP-UPC at ProbSum 2023: Fine-tuning and Data Augmentation Strategies for NER Neil Torrero, Gerard Sant, Carlos Escolano, Universitat Politècnica de Catalunya | |
Team Converge at ProbSum 2023: Abstractive Text Summarization of Patient Progress Notes Gaurav Kolhatkar, Aditya Paranjape, Omkar Gokhale, Dipali Kadam, Pune Institute Of Computer Technology | |
nav-nlp at RadSum23: Abstractive Summarization of Radiology Reports using BART Finetuning Sri Macharla, Ashok Madamanchi, Nikhilesh Kancharla, IIT Roorkee | |
APTSumm at BioLaySumm Task 1: Biomedical Breakdown, Improving Readability by Relevancy Based Selection A.S. Poornash, Atharva Deshmukh, Archit Sharma, Sriparna Saha, Indian Institute of Technology Patna | |
LHS712EE at BioLaySumm 2023: Using BART and LED to summarize biomedical research articles Quancheng Liu, Xiheng Ren, V.G.Vinod Vydiswaran, University of Michigan | |
ISIKSumm at BioLaySumm Task 1: BART-based Summarization System Enhanced with Bio-Entity Labels Cağla Colak and İlknur Karadeniz, Işık University | |
DeakinNLP at ProbSum 2023: Clinical Progress Note Summarization with Rules and Language Models Ming Liu1, Dan Zhang1, Weicong Tan2, He Zhang3, 1Deakin University, 2Monash University, 3CNPIEC KEXIN LTD | |
ELiRF-VRAIN at BioNLP Task 1B: Radiology Report Summarization Vicent Ahuir Esteve, Encarna Segarra, Lluís Hurtado, Valencian Research Institute for Artificial Intelligence, Universitat Politècnica de València | |
SINAI at RadSum23: Radiology Report Summarization Based on Domain-Specific Sequence-To-Sequence Transformer Model Mariia Chizhikova, Manuel Díaz-Galiano, L. Alfonso Ureña-López, M. Teresa Martín-Valdivia, University of Jaén | |
KnowLab at RadSum23: comparing pre-trained language models in radiology report summarization Jinge Wu1, Daqian Shi2, Abul Hasan1, Honghan Wu1, 1University College London, 2University of Trento | |
e-Health CSIRO at RadSum23: Adapting a Chest X-Ray Report Generator to Multimodal Radiology Report Summarisation Aaron Nicolson, Jason Dowling, Bevan Koopman, CSIRO | |
UTSA-NLP at RadSum23: Multi-modal Retrieval-Based Chest X-Ray Report Summarization Tongnian Wang, Xingmeng Zhao, Anthony Rios, University of Texas at San Antonio | |
VBD-NLP at BioLaySumm Task 1: Explicit and Implicit Key Information Selection for Lay Summarization on Biomedical Long Documents Phuc Phan, Tri Tran, Hai-Long Trieu, VinBigData, JSC | |
NCUEE-NLP at BioLaySumm Task 2: Readability-Controlled Summarization of Biomedical Articles Using the PRIMERA Models Chao-Yi Chen, Jen-Hao Yang, Lung-Hao Lee, National Central University | |
Pathology Dynamics at BioLaySumm: the trade-off between Readability, Relevance, and Factuality in Lay Summarization Irfan Al-Hussaini, Austin Wu, Cassie Mitchell, Georgia Institute of Technology | |
IITR at BioLaySumm Task 1: Lay Summarization of BioMedical articles using Transformers Venkat Praneeth Reddy, Pinnapu Reddy Harshavardhan Reddy, Karanam Sai Sumedh, Raksha Sharma, Indian Institute of Technology, Roorkee | |
17:45–18:00 | Closing remarks |
BioNLP 2023 Invited Talk
Title: Dementia Detection from Speech: New Developments and Future Directions
Abstract: Diagnosing and treating dementia is a pressing concern as the global population ages. A growing number of publications in NLP tackle the question of whether we can use speech and language analysis to automatically detect signs of this devastating disease. However, the field of NLP has changed rapidly since the task was first proposed. In this talk, Dr. Kathleen Fraser will summarize the foundational approaches to dementia detection from speech, and then review how current approaches are building on and improving over the earlier work. Dr. Fraser will present several areas that she believes are promising future directions, and discuss preliminary work from her group specifically on the topic of multimodal machine learning for remote cognitive assessment.
Bio: Dr. Kathleen Fraser is a computer scientist in the Digital Technologies Research Centre at the National Research Council Canada. Her research focuses on the use of natural language processing (NLP) in healthcare applications, as well as assessing and mitigating social bias in artificial intelligence systems. Dr. Fraser received her PhD in computer science from the University of Toronto in 2016, and subsequently completed a post-doc at the University of Gothenburg, Sweden. She was named an MIT Rising Star in Electrical Engineering and Computer Science, and was awarded the Governor General's Gold Academic Medal in 2017. She also co-founded the start-up Winterlight Labs, later acquired by Cambridge Cognition. She has been a research officer at the National Research Council since 2018 and also holds a position as adjunct professor at Carleton University.
WORKSHOP OVERVIEW AND SCOPE
The BioNLP workshop associated with the ACL SIGBIOMED special interest group has established itself as the primary venue for presenting foundational research in language processing for the biological and medical domains. The workshop has been held every year since 2002 and continues to grow stronger. BioNLP welcomes and encourages work on languages other than English, as well as inclusion and diversity. BioNLP truly encompasses the breadth of the domain and brings together researchers in bio- and clinical NLP from all over the world. The workshop will continue to present work on a broad and interesting range of topics in NLP. Interest in biomedical language has broadened significantly due to the COVID-19 pandemic and continues to grow: as access to information becomes easier and more people generate and access health-related text, it becomes clearer that only language technologies can enable and support adequate use of biomedical text.
BioNLP 2023 will be particularly interested in language processing that supports DEIA (Diversity, Equity, Inclusion and Accessibility). Work on the detection and mitigation of bias and misinformation continues to be of interest. Research on languages other than English (particularly under-represented languages) and on health disparities is always of interest to BioNLP.
Other active areas of research include, but are not limited to:
- Tangible results of biomedical language processing applications;
- Entity identification and normalization (linking) for a broad range of semantic categories;
- Extraction of complex relations and events;
- Discourse analysis;
- Anaphora/coreference resolution;
- Text mining / Literature based discovery;
- Summarization;
- Text simplification;
- Question Answering;
- Resources and strategies for system testing and evaluation;
- Infrastructures and pre-trained language models for biomedical NLP (Processing and annotation platforms);
- Development of synthetic data & data augmentation;
- Translating NLP research into practice;
- Getting reproducible results.
SUBMISSION INSTRUCTIONS
Two types of submissions are invited: full (long) papers and short papers.
Submission site for the workshop only: https://softconf.com/acl2023/BioNLP2023/
Shared task participants' reports should be submitted at https://softconf.com/acl2023/BioNLP2023-ST.
The reports on the shared task participation will be reviewed by the task organizers.
Publication chairs for the tasks:
- 1A: Yanjun Gao
- 1B: Jean Benoit Delbrouck
- 2: Chenghua Lin, Tomas Goldsack
Full (long) papers should not exceed eight (8) pages of text, plus unlimited references. Final versions of full papers will be given one additional page of content (up to 9 pages) so that reviewers' comments can be taken into account. Full papers are intended to be reports of original research.
BioNLP aims to be the forum for interesting, innovative, and promising work involving biomedicine and language technology, whether or not it yields high performance at the moment. This by no means precludes our interest in and preference for mature results, strong performance, and thorough evaluation. Both types of research, and combinations thereof, are encouraged.
Short papers may consist of up to four (4) pages of content, plus unlimited references. Upon acceptance, short papers will still be given up to five (5) content pages in the proceedings. Appropriate short paper topics include preliminary results, application notes, descriptions of work in progress, etc.
Electronic Submission
Submissions must be electronic and in PDF format, using the Softconf START conference management system at https://softconf.com/acl2023/BioNLP2023/
We strongly recommend consulting the ACL Policies for Submission, Review, and Citation: https://2023.aclweb.org/calls/main_conference/ and using ACL LaTeX style files tailored for this year's conference. Submissions must conform to the official style guidelines: https://2023.aclweb.org/calls/style_and_formatting/
Submissions need to be anonymous.
Dual submission policy: papers may NOT be submitted to the BioNLP 2023 workshop if they are or will be concurrently submitted to another meeting or publication.
Program Committee
- Sophia Ananiadou, National Centre for Text Mining and University of Manchester, UK
- Emilia Apostolova, Anthem, Inc., USA
- Eiji Aramaki, University of Tokyo, Japan
- Saadullah Amin, Saarland University, Germany
- Steven Bethard, University of Arizona, USA
- Olivier Bodenreider, US National Library of Medicine
- Robert Bossy, Inrae, Université Paris Saclay, France
- Leonardo Campillos-Llanos, Consejo Superior de Investigaciones Científicas - CSIC, Spain
- Kevin Bretonnel Cohen, University of Colorado School of Medicine, USA
- Brian Connolly, Ohio, USA
- Mike Conway, University of Melbourne, Australia
- Manirupa Das, Amazon, USA
- Berry de Bruijn, National Research Council, Canada
- Dina Demner-Fushman, US National Library of Medicine
- Bart Desmet, National Institutes of Health, USA
- Dmitriy Dligach, Loyola University Chicago, USA
- Kathleen C. Fraser, National Research Council Canada
- Travis Goodwin, Amazon Web Services (AWS), Seattle, Washington, USA
- Natalia Grabar, CNRS, U Lille, France
- Cyril Grouin, Université Paris-Saclay, CNRS
- Tudor Groza, EMBL-EBI
- Deepak Gupta, US National Library of Medicine
- William Hogan, UCSD, USA
- Thierry Hamon, LIMSI-CNRS, France
- Richard Jackson, AstraZeneca
- Antonio Jimeno Yepes, IBM, Melbourne Area, Australia
- Sarvnaz Karimi, CSIRO, Australia
- Nazmul Kazi, University of North Florida, USA
- Roman Klinger, University of Stuttgart, Germany
- Anna Koroleva, Omdena
- Majid Latifi, Department of Computer Science, University of York, York, UK
- Andre Lamurias, Aalborg University, Denmark
- Alberto Lavelli, FBK-ICT, Italy
- Robert Leaman, US National Library of Medicine
- Lung-Hao Lee, National Central University, Taiwan
- Ulf Leser, Humboldt-Universität zu Berlin, Germany
- Timothy Miller, Boston Children's Hospital and Harvard Medical School, USA
- Claire Nedellec, French National Institute of Agronomy (INRA)
- Guenter Neumann, German Research Center for Artificial Intelligence (DFKI)
- Mariana Neves, Hasso-Plattner-Institute at the University of Potsdam, Germany
- Nhung Nguyen, National Centre for Text Mining, University of Manchester, UK
- Aurélie Névéol, CNRS, France
- Amandalynne Paullada, University of Washington School of Medicine
- Yifan Peng, Weill Cornell Medical College, USA
- Laura Plaza, Universidad Nacional de Educación a Distancia
- Francisco J. Ribadas-Pena, University of Vigo, Spain
- Anthony Rios, The University of Texas at San Antonio, USA
- Kirk Roberts, The University of Texas Health Science Center at Houston, USA
- Roland Roller, DFKI, Germany
- Mourad Sarrouti, Sumitovant Biopharma, Inc., USA
- Diana Sousa, University of Lisbon, Portugal
- Peng Su, University of Delaware, USA
- Madhumita Sushil, University of California, San Francisco, USA
- Mario Sänger, Humboldt Universität zu Berlin, Germany
- Andrew Taylor, Yale University School of Medicine, USA
- Karin Verspoor, RMIT University, Australia
- Leon Weber, Humboldt Universität Berlin, Germany
- Nathan M. White, James Cook University, Australia
- Dustin Wright, University of Copenhagen, Denmark
- Amelie Wührl, University of Stuttgart, Germany
- Dongfang Xu, Harvard University, USA
- Jingqing Zhang, Imperial College London, UK
- Ayah Zirikly, Johns Hopkins Whiting School of Engineering, USA
- Pierre Zweigenbaum, LIMSI - CNRS, France
Organizers
- Kevin Bretonnel Cohen, University of Colorado School of Medicine
- Dina Demner-Fushman, US National Library of Medicine
- Sophia Ananiadou, National Centre for Text Mining and University of Manchester, UK
- Jun-ichi Tsujii, National Institute of Advanced Industrial Science and Technology, Japan
SHARED TASKS 2023
Shared Tasks on Summarization of Clinical Notes and Scientific Articles
The first task focuses on Clinical Text.
Task 1A. Problem List Summarization
Codalab competition for Problem List Summarization Evaluation: https://codalab.lisn.upsaclay.fr/competitions/12388
Test Set Release: https://physionet.org/content/bionlp-workshop-2023-task-1a/1.1.0/
The deadline for registration is March 1st, after which no further registrations will be accepted.
Automatically summarizing patients' main problems from the daily care notes in the electronic health record can help mitigate information and cognitive overload for clinicians and provide augmented intelligence via computerized diagnostic decision support at the bedside. The task of Problem List Summarization aims to generate a list of diagnoses and problems in a patient's daily care plan using input from the provider's progress notes during hospitalization. This task aims to promote NLP model development for downstream applications in diagnostic decision support systems that could improve efficiency and reduce diagnostic errors in hospitals. The training set contains 768 hospital daily progress notes and 2,783 diagnoses, and a new set of 300 daily progress notes annotated by physicians serves as the test set. The annotation methods and annotation quality have been reported previously. The goal of this shared task is to attract future research efforts in building NLP models for real-world decision support applications, where a system generating relevant and accurate diagnoses will assist the healthcare providers' decision-making process and improve the quality of care for patients.
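As a starting point, the following is a minimal, hypothetical baseline sketch (not part of the official task materials): it prompts a generic Hugging Face sequence-to-sequence model to draft a candidate problem list from a single progress note. The model name, prompt wording, and generation settings are illustrative assumptions, and the example note is made up.

```python
# Hypothetical baseline sketch for Task 1A (NOT part of the official task materials).
# It prompts a generic Hugging Face seq2seq checkpoint to draft a problem list from a note.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "google/flan-t5-base"  # placeholder checkpoint; any seq2seq model can be swapped in

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def draft_problem_list(progress_note: str, max_new_tokens: int = 128) -> str:
    """Generate a candidate list of diagnoses/problems from one daily progress note."""
    prompt = "List the patient's active diagnoses and problems:\n" + progress_note
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=1024)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

if __name__ == "__main__":
    note = "Subjective: ... Objective: ... Assessment and Plan: ..."  # made-up placeholder note
    print(draft_problem_list(note))
```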
Shared Task 1A Registration: https://forms.gle/yp6TKD66G8KGpweN9
Please join our Google discussion group for important updates: https://groups.google.com/g/bionlp2023problemsumm
Full Task 1A Details at:
https://physionet.org/content/bionlp-workshop-2023-task-1a/1.0.0/
Important Dates:
- Registration started: January 13th, 2023
- Release of training and validation data: January 13th, 2023
- Registration stops: March 1, 2023
- Release of test data: April 13th, 2023
- System submission deadline: April 20th, 2023
- System papers due date: April 28th, 2023
- Notification of acceptance: June 1st, 2023
- Camera-ready system papers due: June 6, 2023
- BioNLP Workshop Date: July 13th, 2023
Task 1A Organizers:
- Majid Afshar, Department of Medicine University of Wisconsin - Madison.
- Yanjun Gao, University of Wisconsin Madison.
- Dmitriy Dligach, Department of Computer Science at Loyola University Chicago.
- Timothy Miller, Boston Children’s Hospital and Harvard Medical School.
Task 1B. Radiology report summarization
Radiology report summarization is a growing area of research. Given the Findings and/or Background sections of a radiology report, the goal is to generate a summary (called an Impression section) that highlights the key observations and conclusions of the radiology study.
The research area of radiology report summarization currently faces an important limitation: most research is carried out on chest X-rays. To address this limitation, we propose two datasets:
- A shared summarization task that includes six different modalities and anatomies, totalling 79,779 samples, based on the MIMIC-III database.
- A shared summarization task on chest X-ray radiology reports with images and a brand-new out-of-domain test set from Stanford.
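Because both settings are summarization tasks, a common way to sanity-check system output during development is ROUGE overlap between the generated and reference Impression sections. The snippet below is a hypothetical illustration using the rouge-score package with invented example strings; it is not the official evaluation script.

```python
# Hypothetical illustration (NOT the official evaluation script): scoring a generated
# Impression section against the reference with ROUGE via the rouge-score package.
from rouge_score import rouge_scorer

reference = "No acute cardiopulmonary abnormality."          # invented reference Impression
prediction = "No acute cardiopulmonary process identified."  # invented system output

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
scores = scorer.score(reference, prediction)
for name, result in scores.items():
    # Each entry holds precision, recall, and F-measure for one ROUGE variant.
    print(f"{name}: P={result.precision:.3f} R={result.recall:.3f} F1={result.fmeasure:.3f}")
```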
Full Task 1B details at:
https://vilmedic.app/misc/bionlp23/sharedtask
Task 1B Organizers:
- Jean-Benoit Delbrouck, Stanford University.
- Maya Varma, Stanford University.
Task 2. Lay Summarization of Biomedical Research Articles
Biomedical publications contain the latest research on prominent health-related topics, ranging from common illnesses to global pandemics. This can often result in their content being of interest to a wide variety of audiences including researchers, medical professionals, journalists, and even members of the public. However, the highly technical and specialist language used within such articles typically makes it difficult for non-expert audiences to understand their contents.
Abstractive summarization models can be used to generate a concise summary of an article, capturing its salient points using words and sentences that aren't used in the original text. As such, these models have the potential to help broaden access to highly technical documents when trained to generate summaries that are more readable, containing more background information and less technical terminology (i.e., a "lay summary").
This shared task focuses on the abstractive summarization of biomedical research articles, with an emphasis on controllability and catering to non-expert audiences. Through this task, we aim to help foster increased research interest in controllable summarization that helps broaden access to technical texts and progress toward more usable abstractive summarization models in the biomedical domain.
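Since readability for non-expert audiences is central to the task, participants may want to track the surface readability of their outputs during development. The snippet below is a hypothetical illustration using the textstat package with invented example texts; it is not an official task metric.

```python
# Hypothetical illustration (NOT an official task metric): tracking surface readability of a
# candidate lay summary with the textstat package. The example texts are invented.
import textstat

abstract = ("We investigate the modulation of pro-inflammatory cytokine expression "
            "via NF-kB signalling in murine macrophages.")
lay_summary = "We studied how certain immune cells in mice switch inflammation on and off."

for label, text in [("abstract", abstract), ("lay summary", lay_summary)]:
    # Lower grade level and higher reading ease indicate more accessible text.
    grade = textstat.flesch_kincaid_grade(text)
    ease = textstat.flesch_reading_ease(text)
    print(f"{label}: Flesch-Kincaid grade = {grade:.1f}, reading ease = {ease:.1f}")
```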
For more information on Task 2, see:
- Main site: https://biolaysumm.org/
- CodaLab page - subtask 1: https://codalab.lisn.upsaclay.fr/competitions/9541
- CodaLab page - subtask 2: https://codalab.lisn.upsaclay.fr/competitions/9544
Detailed descriptions of the motivation, the tasks, and the data are also published in:
- Goldsack, T., Zhang, Z., Lin, C., Scarton, C.. Making Science Simple: Corpora for the Lay Summarisation of Scientific Literature. EMNLP 2022.
- Luo, Z., Xie, Q., Ananiadou, S.. Readability Controllable Biomedical Document Summarization. EMNLP 2022 Findings.
Task 2 Organizers:
- Chenghua Lin, Deputy Director of Research and Innovation in the Computer Science Department, University of Sheffield.
- Sophia Ananiadou, Turing Fellow, Director of the National Centre for Text Mining and Deputy Director of the Institute of Data Science and AI at the University of Manchester.
- Carolina Scarton, Computer Science Department at the University of Sheffield.
- Qianqian Xie, National Centre for Text Mining (NaCTeM).
- Tomas Goldsack, University of Sheffield.
- Zheheng Luo, the University of Manchester.
- Zhihao Zhang, Beihang University.