Question Answering (State of the art)

From ACL Wiki
== Answer Sentence Selection ==

The task of answer sentence selection is designed for the open-domain question answering setting. Given a question and a set of candidate sentences, the task is to choose the correct sentence that contains the exact answer and can sufficiently support the answer choice.

* [http://cs.stanford.edu/people/mengqiu/data/qg-emnlp07-data.tgz QA Answer Sentence Selection Dataset]: labeled sentences using TREC QA track data, provided by [http://cs.stanford.edu/people/mengqiu/ Mengqiu Wang] and first used in [http://www.aclweb.org/anthology/D/D07/D07-1003.pdf Wang et al. (2007)]. 
  
  
{| class="wikitable"
! Algorithm
! Reference
! [http://en.wikipedia.org/wiki/Mean_average_precision MAP]
! [http://en.wikipedia.org/wiki/Mean_reciprocal_rank MRR]
|-
| Punyakanok (2004)
| Wang et al. (2007)
| 0.419
| 0.494
|-
| Cui (2005)
| Wang et al. (2007)
| 0.427
| 0.526
|-
| Wang (2007)
| Wang et al. (2007)
| 0.603
| 0.685
|-
| H&S (2010)
| Heilman and Smith (2010)
| 0.609
| 0.692
|-
| W&M (2010)
| Wang and Manning (2010)
| 0.595
| 0.695
|-
| Yao (2013)
| Yao et al. (2013)
| 0.631
| 0.748
|-
| S&M (2013)
| Severyn and Moschitti (2013)
| 0.678
| 0.736
|-
| Shnarch (2013) - Backward
| Shnarch (2013)
| 0.686
| 0.754
|-
| Yih (2013) - LCLR
| Yih et al. (2013)
| 0.709
| 0.770
|}
 
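MAP and MRR are rank-based evaluation measures: each system ranks the candidate sentences for a question, and the metrics reward placing correct sentences near the top. A minimal illustration (not part of the original page; function names and the toy data are invented for the example):

```python
# Toy illustration of MAP and MRR over ranked candidate sentences.
# For each question, `labels` lists the candidates in ranked order,
# with 1 marking a correct sentence and 0 an incorrect one.

def average_precision(labels):
    """Average precision for one ranked candidate list."""
    hits, total = 0, 0.0
    for rank, label in enumerate(labels, start=1):
        if label:
            hits += 1
            total += hits / rank  # precision at each correct hit
    return total / hits if hits else 0.0

def reciprocal_rank(labels):
    """1 / rank of the first correct candidate, or 0 if none."""
    for rank, label in enumerate(labels, start=1):
        if label:
            return 1.0 / rank
    return 0.0

def map_mrr(all_labels):
    """Mean the per-question scores over the whole test set."""
    n = len(all_labels)
    map_score = sum(average_precision(l) for l in all_labels) / n
    mrr_score = sum(reciprocal_rank(l) for l in all_labels) / n
    return map_score, mrr_score

# Two questions: the first system ranks the correct sentence second,
# the second ranks it first.
print(map_mrr([[0, 1, 0], [1, 0, 0]]))  # (0.75, 0.75)
```

Systems in the table are scored this way over the TREC QA test questions, so higher MAP/MRR means correct answer sentences are ranked higher on average.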
== References ==

* Vasin Punyakanok, Dan Roth, and Wen-Tau Yih. 2004. [http://cogcomp.cs.illinois.edu/papers/PunyakanokRoYi04a.pdf Mapping dependencies trees: An application to question answering]. In Proceedings of the 8th International Symposium on Artificial Intelligence and Mathematics, Fort Lauderdale, FL, USA.
* Hang Cui, Renxu Sun, Keya Li, Min-Yen Kan, and Tat-Seng Chua. 2005. [http://ws.csie.ncku.edu.tw/login/upload/2005/paper/Question%20answering%20Question%20answering%20passage%20retrieval%20using%20dependency%20relations.pdf Question answering passage retrieval using dependency relations]. In Proceedings of the 28th ACM-SIGIR International Conference on Research and Development in Information Retrieval, Salvador, Brazil.
* Wang, Mengqiu and Smith, Noah A. and Mitamura, Teruko. 2007. [http://www.aclweb.org/anthology/D/D07/D07-1003.pdf What is the Jeopardy Model? A Quasi-Synchronous Grammar for QA]. In EMNLP-CoNLL 2007.
* Heilman, Michael and Smith, Noah A. 2010. [http://www.aclweb.org/anthology/N10-1145 Tree Edit Models for Recognizing Textual Entailments, Paraphrases, and Answers to Questions]. In NAACL-HLT 2010.
* Wang, Mengqiu and Manning, Christopher. 2010. [http://aclweb.org/anthology//C/C10/C10-1131.pdf Probabilistic Tree-Edit Models with Structured Latent Variables for Textual Entailment and Question Answering]. In COLING 2010.
* E. Shnarch. 2013. Probabilistic Models for Lexical Inference. Ph.D. thesis, Bar Ilan University.
* Yao, Xuchen and Van Durme, Benjamin and Callison-Burch, Chris and Clark, Peter. 2013. [http://www.aclweb.org/anthology/N13-1106.pdf Answer Extraction as Sequence Tagging with Tree Edit Distance]. In NAACL-HLT 2013.
* Yih, Wen-tau and Chang, Ming-Wei and Meek, Christopher and Pastusiak, Andrzej. 2013. [http://research.microsoft.com/pubs/192357/QA-SentSel-Updated-PostACL.pdf Question Answering Using Enhanced Lexical Semantic Models]. In ACL 2013.
* Severyn, Aliaksei and Moschitti, Alessandro. 2013. [http://www.aclweb.org/anthology/D13-1044.pdf Automatic Feature Engineering for Answer Selection and Extraction]. In EMNLP 2013.

[[Category:State of the art]]

Revision as of 16:34, 21 January 2014