AudioCaps: Generating Captions for Audios in The Wild

Chris Dongjoo Kim, Byeongchang Kim, Hyunmin Lee, Gunhee Kim


Abstract
We explore the problem of Audio Captioning: generating natural language descriptions for any kind of audio in the wild, a task that has been surprisingly unexplored in previous research. We contribute a large-scale dataset of 46K audio clips paired with human-written text, collected via crowdsourcing on the AudioSet dataset. Our thorough empirical studies not only show that our collected captions are indeed faithful to the audio inputs but also reveal which forms of audio representation and captioning models are effective for audio captioning. From extensive experiments, we also propose two novel components that help improve audio captioning performance: the top-down multi-scale encoder and aligned semantic attention.
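The abstract names two architectural components, a top-down multi-scale encoder and aligned semantic attention, without spelling out how they work. As a rough orientation only, the sketch below shows what a generic multi-scale audio encoder paired with an attention-based caption decoder can look like in PyTorch; the class names, layer sizes, and pooling choices here are illustrative assumptions, not the architecture proposed in the paper.

```python
# Illustrative sketch only (not the paper's exact model): a two-scale audio
# encoder over log-mel frames plus a GRU decoder with additive attention.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiScaleAudioEncoder(nn.Module):
    """Encodes a log-mel spectrogram at two temporal scales and fuses them."""

    def __init__(self, n_mels=64, hidden=256):
        super().__init__()
        self.fine = nn.GRU(n_mels, hidden, batch_first=True, bidirectional=True)
        self.coarse = nn.GRU(n_mels, hidden, batch_first=True, bidirectional=True)

    def forward(self, mel):                       # mel: (B, T, n_mels)
        fine, _ = self.fine(mel)                  # (B, T, 2*hidden)
        pooled = F.avg_pool1d(mel.transpose(1, 2), kernel_size=4).transpose(1, 2)
        coarse, _ = self.coarse(pooled)           # (B, T//4, 2*hidden)
        # Upsample coarse features back to T steps and concatenate with fine ones.
        coarse = F.interpolate(coarse.transpose(1, 2), size=fine.size(1)).transpose(1, 2)
        return torch.cat([fine, coarse], dim=-1)  # (B, T, 4*hidden)


class AttnCaptionDecoder(nn.Module):
    """GRU decoder with additive attention over encoder time steps."""

    def __init__(self, vocab_size, enc_dim=1024, emb=256, hidden=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.gru = nn.GRUCell(emb + enc_dim, hidden)
        self.attn = nn.Linear(enc_dim + hidden, 1)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, enc, tokens):               # enc: (B, T, enc_dim), tokens: (B, L)
        B, T, _ = enc.shape
        h = enc.new_zeros(B, self.gru.hidden_size)
        logits = []
        for t in range(tokens.size(1)):
            # Score each encoder step against the current decoder state.
            scores = self.attn(torch.cat([enc, h.unsqueeze(1).expand(-1, T, -1)], dim=-1))
            ctx = (F.softmax(scores, dim=1) * enc).sum(dim=1)   # (B, enc_dim)
            h = self.gru(torch.cat([self.embed(tokens[:, t]), ctx], dim=-1), h)
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)          # (B, L, vocab_size)


if __name__ == "__main__":
    mel = torch.randn(2, 80, 64)                   # 2 clips, 80 frames, 64 mel bins
    enc = MultiScaleAudioEncoder()(mel)
    dec = AttnCaptionDecoder(vocab_size=5000)
    logits = dec(enc, torch.randint(0, 5000, (2, 12)))
    print(logits.shape)                            # torch.Size([2, 12, 5000])
```

The two-GRU encoder stands in for the multi-scale idea (fine frame-level and coarser pooled context), and the additive attention stands in for attending over acoustic features during decoding; the paper's actual top-down multi-scale encoder and aligned semantic attention differ in detail and should be consulted directly.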
Anthology ID:
N19-1011
Volume:
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)
Month:
June
Year:
2019
Address:
Minneapolis, Minnesota
Editors:
Jill Burstein, Christy Doran, Thamar Solorio
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
119–132
URL:
https://aclanthology.org/N19-1011
DOI:
10.18653/v1/N19-1011
Cite (ACL):
Chris Dongjoo Kim, Byeongchang Kim, Hyunmin Lee, and Gunhee Kim. 2019. AudioCaps: Generating Captions for Audios in The Wild. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 119–132, Minneapolis, Minnesota. Association for Computational Linguistics.
Cite (Informal):
AudioCaps: Generating Captions for Audios in The Wild (Kim et al., NAACL 2019)
PDF:
https://aclanthology.org/N19-1011.pdf
Video:
https://aclanthology.org/N19-1011.mp4
Data:
AudioCaps, AudioSet, Flickr30k, MSR-VTT