SAT Analogy Questions (State of the art)
- SAT = Scholastic Aptitude Test
- 374 multiple-choice analogy questions; 5 choices per question
- SAT questions collected by Michael Littman, available from Peter Turney
- introduced in Turney et al. (2003) as a way of evaluating algorithms for measuring relational similarity (see the scoring sketch after this list)
- Algorithm = name of algorithm
- Reference = source for algorithm description and experimental results
- Type = general type of algorithm: corpus-based, lexicon-based, hybrid
- Correct = percent of the 374 questions that the given algorithm answered correctly
- 95% confidence = confidence interval calculated using the Binomial Exact Test (see the interval sketch after the table below)
- table rows sorted in order of increasing percent correct
- VSM = Vector Space Model
- LRA = Latent Relational Analysis
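
The usual evaluation procedure on these questions (see, e.g., Turney and Littman 2005): for each of the 374 questions, the algorithm rates the relational similarity between the stem word pair and each of the 5 choice pairs, then answers with the highest-scoring choice. A minimal sketch of that harness in Python, where relational_similarity is a hypothetical stand-in for whichever measure (VSM, LRA, ...) is being evaluated:

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

WordPair = Tuple[str, str]
Similarity = Callable[[WordPair, WordPair], float]  # relational similarity of two word pairs

@dataclass
class SATQuestion:
    stem: WordPair           # e.g. ("mason", "stone")
    choices: List[WordPair]  # the 5 candidate pairs
    answer: int              # index of the correct choice

def answer_question(q: SATQuestion, relational_similarity: Similarity) -> int:
    # Pick the choice pair whose relation is most similar to the stem pair.
    scores = [relational_similarity(q.stem, choice) for choice in q.choices]
    return scores.index(max(scores))

def percent_correct(questions: List[SATQuestion], relational_similarity: Similarity) -> float:
    # The "Correct" column: share of the 374 questions answered correctly.
    hits = sum(answer_question(q, relational_similarity) == q.answer for q in questions)
    return 100.0 * hits / len(questions)
```

The algorithms in the table differ only in how relational_similarity is computed.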
Algorithm | Reference | Type | Correct | 95% confidence
---|---|---|---|---
KNOW-BEST | Veale (2004) | lexicon-based | 43.0% | 38.0-48.2%
VSM | Turney and Littman (2005) | corpus-based | 47.1% | 42.2-52.5%
PERT | Turney (2006a) | corpus-based | 53.5% | 48.5-58.9%
LRA | Turney (2006b) | corpus-based | 56.1% | 51.0-61.2%
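
The Binomial Exact Test interval mentioned above corresponds to the Clopper-Pearson construction, so the 95% confidence column can be checked from the raw counts. A minimal sketch, assuming SciPy and an assumed count of about 210 of 374 correct (roughly 56.1%, i.e. approximately the LRA row; the underlying counts are not given in the table):

```python
from scipy.stats import beta

def exact_binomial_ci(correct: int, total: int, confidence: float = 0.95):
    # Clopper-Pearson ("Binomial Exact Test") confidence interval for a proportion.
    alpha = 1.0 - confidence
    lower = 0.0 if correct == 0 else beta.ppf(alpha / 2, correct, total - correct + 1)
    upper = 1.0 if correct == total else beta.ppf(1.0 - alpha / 2, correct + 1, total - correct)
    return lower, upper

# Assumed example count: about 210 of 374 correct (~56.1%).
low, high = exact_binomial_ci(210, 374)
print(f"{210 / 374:.1%} correct, 95% CI {low:.1%}-{high:.1%}")
```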
Turney, P.D., Littman, M.L., Bigham, J., and Shnayder, V. (2003). Combining independent modules to solve multiple-choice synonym and analogy problems. Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP-03), Borovets, Bulgaria, pp. 482-489.
Turney, P.D., and Littman, M.L. (2005). Corpus-based learning of analogies and semantic relations. Machine Learning, 60 (1-3), 251-278. http://arxiv.org/abs/cs.LG/0508103
Turney, P.D. (2006a). Expressing implicit semantic relations without supervision. Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics (Coling/ACL-06), Sydney, Australia, pp. 313-320. http://arxiv.org/abs/cs.CL/0607120
Turney, P.D. (2006b). Similarity of semantic relations. Computational Linguistics, 32 (3), 379-416. http://arxiv.org/abs/cs.CL/0608100
Veale, T. (2004). WordNet sits the SAT: A knowledge-based approach to lexical analogy. Proceedings of the 16th European Conference on Artificial Intelligence (ECAI 2004), pp. 606–612, Valencia, Spain. http://afflatus.ucd.ie/Papers/ecai2004.pdf