== Answer Sentence Selection ==

The task of answer sentence selection is designed for the open-domain question answering setting. Given a question and a set of candidate sentences, the task is to choose the correct sentence that contains the exact answer and can sufficiently support the answer choice.

* [http://cs.stanford.edu/people/mengqiu/data/qg-emnlp07-data.tgz QA Answer Sentence Selection Dataset]: labeled sentences using TREC QA track data, provided by [http://cs.stanford.edu/people/mengqiu/ Mengqiu Wang] and first used in [http://www.aclweb.org/anthology/D/D07/D07-1003.pdf Wang et al. (2007)].
{| class="wikitable"
! Algorithm
! Reference
! MAP
! MRR
|-
| Punyakanok (2004)
| Wang et al. (2007)
| 0.419
| 0.494
|-
| Cui (2005)
| Wang et al. (2007)
| 0.427
| 0.526
|-
| Wang (2007)
| Wang et al. (2007)
| 0.603
| 0.685
|-
| H&S (2010)
| Heilman and Smith (2010)
| 0.609
| 0.692
|-
| W&M (2010)
| Wang and Manning (2010)
| 0.595
| 0.695
|-
| Yao (2013)
| Yao et al. (2013)
| 0.631
| 0.748
|-
| S&M (2013)
| Severyn and Moschitti (2013)
| 0.678
| 0.736
|-
| Shnarch (2013) - Backward
| Shnarch (2013)
| 0.686
| 0.754
|-
| Yih (2013) - LCLR
| Yih et al. (2013)
| 0.709
| 0.770
|-
| Yu (2014) - TRAIN-ALL bigram+count
| Yu et al. (2014)
| 0.711
| 0.785
|-
| W&I (2015)
| Wang and Ittycheriah (2015)
| 0.746
| 0.820
|}
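
The MAP (mean average precision) and MRR (mean reciprocal rank) columns above are rank-based retrieval metrics computed over each question's ranked candidate list. As a minimal illustrative sketch (not the official trec_eval implementation; the function names and the label-list input format are assumptions made here for illustration), the two metrics can be computed as follows:

<pre>
# Minimal sketch: MAP and MRR over ranked answer candidates.
# Assumed input format (illustrative, not from the official scripts):
# one list of gold labels per question, sorted by the system's score,
# where 1 marks a sentence that correctly answers the question.

def average_precision(labels):
    """Average precision for one ranked candidate list."""
    hits, precision_sum = 0, 0.0
    for rank, label in enumerate(labels, start=1):
        if label:
            hits += 1
            precision_sum += hits / rank  # precision at each correct hit
    return precision_sum / hits if hits else None

def reciprocal_rank(labels):
    """Reciprocal rank of the first correct candidate."""
    for rank, label in enumerate(labels, start=1):
        if label:
            return 1.0 / rank
    return None

def evaluate(ranked_label_lists):
    """MAP and MRR, skipping questions with no correct candidate."""
    aps = [v for v in map(average_precision, ranked_label_lists) if v is not None]
    rrs = [v for v in map(reciprocal_rank, ranked_label_lists) if v is not None]
    return sum(aps) / len(aps), sum(rrs) / len(rrs)

# Two toy questions, candidates already sorted by model score:
print(evaluate([[0, 1, 0, 1], [1, 0, 0]]))  # -> (0.75, 0.75)
</pre>

The sketch skips questions with no correct candidate; published numbers may differ in how such questions are handled.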
  
 
== References ==

* Yih, Wen-tau and Chang, Ming-Wei and Meek, Christopher and Pastusiak, Andrzej. 2013. [http://research.microsoft.com/pubs/192357/QA-SentSel-Updated-PostACL.pdf Question Answering Using Enhanced Lexical Semantic Models]. In ACL 2013.

* Severyn, Aliaksei and Moschitti, Alessandro. 2013. [http://www.aclweb.org/anthology/D13-1044.pdf Automatic Feature Engineering for Answer Selection and Extraction]. In EMNLP 2013.

* Yu, Lei and Hermann, Karl Moritz and Blunsom, Phil and Pulman, Stephen. 2014. [http://arxiv.org/pdf/1412.1632v1.pdf Deep Learning for Answer Sentence Selection]. In NIPS Deep Learning Workshop.

* Wang, Zhiguo and Ittycheriah, Abraham. 2015. [http://arxiv.org/abs/1507.02628 FAQ-based Question Answering via Word Alignment]. In eprint arXiv:1507.02628.

[[Category:State of the art]]
