Question Answering (State of the art)

== Answer Sentence Selection ==

The task of answer sentence selection is designed for the open-domain question answering setting. Given a question and a set of candidate sentences, the task is to select the sentence that contains the exact answer and sufficiently supports that answer. Systems are evaluated with Mean Average Precision (MAP) and Mean Reciprocal Rank (MRR), computed over the ranking of candidate sentences they produce for each question.
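A minimal sketch of how MAP and MRR can be computed for this setup is shown below; the function and variable names are illustrative rather than taken from any of the cited systems, and the standard TREC-style evaluation scripts may handle edge cases (such as questions with no correct candidate) differently.

<syntaxhighlight lang="python">
from typing import List, Tuple


def average_precision(relevance: List[int]) -> float:
    # `relevance` lists the candidate sentences in the order the system
    # ranked them: 1 if the sentence answers the question, 0 otherwise.
    hits, precision_sum = 0, 0.0
    for rank, rel in enumerate(relevance, start=1):
        if rel:
            hits += 1
            precision_sum += hits / rank  # precision at this cutoff
    return precision_sum / hits if hits else 0.0


def reciprocal_rank(relevance: List[int]) -> float:
    # Inverse rank of the first correct sentence (0 if none is correct).
    for rank, rel in enumerate(relevance, start=1):
        if rel:
            return 1.0 / rank
    return 0.0


def map_mrr(all_relevance: List[List[int]]) -> Tuple[float, float]:
    # Average the per-question scores over all questions to get MAP and MRR.
    n = len(all_relevance)
    mean_ap = sum(average_precision(r) for r in all_relevance) / n
    mean_rr = sum(reciprocal_rank(r) for r in all_relevance) / n
    return mean_ap, mean_rr


# Two toy questions whose candidates are already sorted by system score.
print(map_mrr([[0, 1, 0, 1], [1, 0, 0]]))  # (0.75, 0.75)
</syntaxhighlight>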


{| class="wikitable sortable"
! Algorithm
! Reference
! MAP
! MRR
|-
| Punyakanok (2004)
| Wang et al. (2007)
| 0.419
| 0.494
|-
| Cui (2005)
| Wang et al. (2007)
| 0.427
| 0.526
|-
| Wang (2007)
| Wang et al. (2007)
| 0.603
| 0.685
|-
| H&S (2010)
| Heilman and Smith (2010)
| 0.609
| 0.692
|-
| W&M (2010)
| Wang and Manning (2010)
| 0.595
| 0.695
|-
| Yao (2013)
| Yao et al. (2013)
| 0.631
| 0.748
|-
| S&M (2013)
| Severyn and Moschitti (2013)
| 0.678
| 0.736
|-
| Shnarch (2013) - Backward
| Shnarch (2013)
| 0.686
| 0.754
|-
| Yih (2013) - LCLR
| Yih et al. (2013)
| 0.709
| 0.770
|-
| Yu (2014) - TRAIN-ALL bigram+count
| Yu et al. (2014)
| 0.711
| 0.785
|-
| W&N (2015) - Three-Layer BLSTM+BM25
| Wang and Nyberg (2015)
| 0.713
| 0.791
|-
| Feng (2015) - Architecture-II
| Tan et al. (2015)
| 0.711
| 0.800
|-
| S&M (2015)
| Severyn and Moschitti (2015)
| 0.746
| 0.808
|-
| W&I (2015)
| Wang and Ittycheriah (2015)
| 0.746
| 0.820
|-
| Tan (2015) - QA-LSTM/CNN+attention
| Tan et al. (2015)
| 0.728
| 0.832
|}


== References ==

* Aliaksei Severyn and Alessandro Moschitti. 2013. [http://www.aclweb.org/anthology/D13-1044.pdf Automatic Feature Engineering for Answer Selection and Extraction]. In EMNLP 2013.
* Lei Yu, Karl Moritz Hermann, Phil Blunsom, and Stephen Pulman. 2014. [http://arxiv.org/pdf/1412.1632v1.pdf Deep Learning for Answer Sentence Selection]. In NIPS Deep Learning Workshop.
* Di Wang and Eric Nyberg. 2015. [http://www.aclweb.org/anthology/P15-2116 A Long Short-Term Memory Model for Answer Sentence Selection in Question Answering]. In ACL 2015.
* Minwei Feng, Bing Xiang, Michael R. Glass, Lidan Wang, and Bowen Zhou. 2015. [http://arxiv.org/abs/1508.01585 Applying Deep Learning to Answer Selection: A Study and an Open Task]. In ASRU 2015.
* Aliaksei Severyn and Alessandro Moschitti. 2015. [http://disi.unitn.it/~severyn/papers/sigir-2015-long.pdf Learning to Rank Short Text Pairs with Convolutional Deep Neural Networks]. In SIGIR 2015.
* Zhiguo Wang and Abraham Ittycheriah. 2015. [http://arxiv.org/abs/1507.02628 FAQ-based Question Answering via Word Alignment]. In eprint arXiv:1507.02628.
* Ming Tan, Cicero dos Santos, Bing Xiang, and Bowen Zhou. 2015. [http://arxiv.org/abs/1511.04108 LSTM-Based Deep Learning Models for Nonfactoid Answer Selection]. In eprint arXiv:1511.04108.

[[Category:State of the art]]