Analogy (State of the art)
- see also: State of the art
Analogy tasks
A proportional analogy holds between two word pairs: a:a* :: b:b* (a is to a* as b is to b*). For example, Tokyo is to Japan as Paris is to France.

With pair-based methods, given a:a* :: b:?, the task is to find b*.

With set-based methods, the task is to find b* given a set of other pairs (excluding b:b*) that hold the same relation as b:b*.
In NLP, analogies (Mikolov's "linguistic regularities"[1]) are interpreted broadly, as essentially any "similarities between pairs of words"[2], not just semantic ones.
See Church's (2017)[3] analysis of Word2Vec, which argues that the Google analogy dataset is not as challenging as the SAT (Scholastic Aptitude Test) dataset. Inspection shows that the SAT analogies are all semantic (not syntactic) and involve relatively complex relations. The SemEval-2012 Task 2 website includes a taxonomy of semantic relations derived from human analysis of GRE (Graduate Record Exam) analogies by researchers at ETS (Educational Testing Service).
Available analogy datasets (ordered by date)
| Dataset | Reference | Number of questions | Number of relations | Dataset link | List of state-of-the-art results | Comments |
|---|---|---|---|---|---|---|
| SAT | Turney et al. (2003)[4] | 374 | ~374 | available on request from Peter Turney | SAT Analogy Questions (State of the art) | multiple-choice format: select the correct answer out of 5 proposed alternatives |
| LRME | Turney (2008)[5] | 20 maps; 140 pairs | ~140 | JAIR LRME dataset | JAIR Analogical Mapping (State of the art) | 20 analogical mapping problems, 10 from science and 10 from common metaphors |
| SemEval 2012 Task 2 | Jurgens et al. (2012)[6] | 3,218 | 79 | SemEval2012-Task2 | SemEval-2012 Task 2 (State of the art) | ranking format: rank the degree to which a relation applies |
| MSR | Mikolov et al. (2013a)[1] | 8,000 | 8 | MSR | Syntactic Analogies (State of the art) | syntactic (i.e. morphological) questions only |
| Google | Mikolov et al. (2013b)[7] | 19,544 | 15 | original link deprecated; copy hosted @TensorFlow | Google analogy test set (State of the art) | unbalanced: 8,869 semantic and 10,675 syntactic questions, with 20-70 pairs per category; the country:capital relation makes up over 50% of all semantic questions; relations in the syntactic part are largely the same as in MSR |
| BATS | Gladkova et al. (2016)[8] | 99,200 | 40 | BATS | Bigger analogy test set (State of the art) | balanced across 4 types of relations: inflectional and derivational morphology, encyclopedic and lexicographic semantics; 10 relations of each type with 50 unique source pairs per relation; multiple correct answers allowed where applicable |
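Most of these datasets are distributed as plain text. The Google set, for instance, ships as a single file (commonly named questions-words.txt) in which lines starting with ":" introduce a relation category and every other line holds the four words of one question. A minimal loader sketch, assuming that format; the filename is illustrative and should be checked against your copy:

```python
from collections import defaultdict

def load_analogy_questions(path):
    """Parse the four-words-per-line analogy format used by the Google set."""
    sections = defaultdict(list)
    current = None
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            if line.startswith(":"):          # e.g. ": capital-common-countries"
                current = line[1:].strip()
            else:
                a, a_star, b, b_star = line.split()
                sections[current].append((a, a_star, b, b_star))
    return sections

# Hypothetical local copy of the Google test set:
questions = load_analogy_questions("questions-words.txt")
print({section: len(qs) for section, qs in questions.items()})
```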
Methods for solving analogies
Pair-based methods for solving analogies

- vector offset a.k.a. 3CosAdd [9]
- 3CosMul (a multiplicative variant of the vector offset) [2]
- Linzen (2016) [10] analyzes these and several further variations of the offset method
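Both pair-based methods can be sketched in a few lines of numpy. The vectors below are random stand-ins, so the outputs are meaningless; with pretrained, L2-normalized embeddings, man:king :: woman:? is the canonical query. Following common practice, the three query words are excluded from the candidate set, a design choice whose consequences Linzen (2016) [10] examines:

```python
import numpy as np

# Toy embedding table; in practice E would hold pretrained vectors
# (word2vec, GloVe, ...), L2-normalized so a dot product equals a cosine.
vocab = ["king", "queen", "man", "woman", "paris", "france"]
rng = np.random.default_rng(0)
E = rng.normal(size=(len(vocab), 50))
E /= np.linalg.norm(E, axis=1, keepdims=True)
idx = {w: i for i, w in enumerate(vocab)}

def solve_3cosadd(a, a_star, b):
    # argmax_x cos(x, a* - a + b), excluding the three query words.
    target = E[idx[a_star]] - E[idx[a]] + E[idx[b]]
    scores = E @ (target / np.linalg.norm(target))
    scores[[idx[a], idx[a_star], idx[b]]] = -np.inf
    return vocab[int(np.argmax(scores))]

def solve_3cosmul(a, a_star, b, eps=1e-3):
    # argmax_x cos(x, b) * cos(x, a*) / (cos(x, a) + eps); cosines are
    # shifted to [0, 1], as in common implementations, to stay positive.
    ca, cas, cb = ((E @ E[idx[w]] + 1) / 2 for w in (a, a_star, b))
    scores = cb * cas / (ca + eps)
    scores[[idx[a], idx[a_star], idx[b]]] = -np.inf
    return vocab[int(np.argmax(scores))]

print(solve_3cosadd("man", "king", "woman"))  # "queen" with real embeddings
print(solve_3cosmul("man", "king", "woman"))
```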
Set-based methods for solving analogies
- 3CosAvg (vector offset averaged over multiple pairs) [11]
- LRCos (supervised learning of the target class + cosine similarity to the b word) [11].
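The set-based methods fit in a few lines as well. The sketch below reuses E, vocab, and idx from the previous snippet; the negative-sampling scheme for the LRCos classifier is a simplification of the procedure in the paper:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def solve_3cosavg(train_pairs, b):
    # Average the a -> a* offset over the example pairs, add it to b,
    # and return the nearest neighbour.
    offset = np.mean([E[idx[t]] - E[idx[s]] for s, t in train_pairs], axis=0)
    scores = E @ (E[idx[b]] + offset)
    scores[idx[b]] = -np.inf
    return vocab[int(np.argmax(scores))]

def solve_lrcos(train_pairs, b):
    # LRCos: score(x) = P(x in target class) * cos(x, b).  The classifier
    # is trained with the a* words as positives and (here, simplistically)
    # all remaining vocabulary words as negatives.
    pos = sorted({idx[t] for _, t in train_pairs})
    neg = [i for i in range(len(vocab)) if i not in pos]
    X = np.vstack([E[pos], E[neg]])
    y = np.array([1] * len(pos) + [0] * len(neg))
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    scores = clf.predict_proba(E)[:, 1] * (E @ E[idx[b]])
    scores[idx[b]] = -np.inf
    return vocab[int(np.argmax(scores))]

# With a single example pair, 3CosAvg reduces to the plain vector offset:
print(solve_3cosavg([("man", "king")], "woman"))
```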
Pair-pattern matrix for solving analogies
- LRA (latent relational analysis) [12].
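Full LRA mines joining patterns from a large corpus, generates alternate word pairs via synonyms, and applies careful weighting before the SVD. The toy sketch below, with invented counts and so purely illustrative, shows only the core representation: a pair-pattern matrix smoothed by truncated SVD, with relational similarity measured as the cosine between pair vectors:

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Rows are word pairs; columns correspond to joining patterns mined from
# text, e.g. "X cuts Y", "X works with Y", "X capital of Y", "X is in Y".
# The counts here are invented for illustration.
pairs = ["mason:stone", "carpenter:wood", "tokyo:japan", "paris:france"]
M = np.array([[12., 30.,  0.,  0.],
              [ 9., 25.,  0.,  1.],
              [ 0.,  0., 40., 55.],
              [ 0.,  0., 35., 60.]])

# Log-weight the counts and smooth with a truncated SVD.
U = TruncatedSVD(n_components=2, random_state=0).fit_transform(np.log1p(M))

# Relational similarity between two pairs = cosine of their row vectors.
sim = cosine_similarity(U)
np.fill_diagonal(sim, -np.inf)
for i, p in enumerate(pairs):
    print(f"{p:16s} -> most relationally similar: {pairs[int(np.argmax(sim[i]))]}")
```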
Dual-Space method for solving analogies
- Dual-Space [13].
Issues with evaluating word embeddings on the analogy task
There is interplay between the chosen embedding, its parameters, the particular relations [8], and the method used to solve the analogies [10][11]. Analogies that one method fails to solve may be solved by another method on the same embedding. Therefore, results obtained with different methods should be taken as a way to explore or describe an embedding rather than to evaluate it.
Notes
1. Mikolov, T., Yih, W., & Zweig, G. (2013). Linguistic Regularities in Continuous Space Word Representations. In HLT-NAACL (pp. 746–751). Retrieved from http://www.aclweb.org/anthology/N13-1#page=784
2. Levy, O., Goldberg, Y., & Ramat-Gan, I. (2014). Linguistic Regularities in Sparse and Explicit Word Representations. In CoNLL (pp. 171–180). Retrieved from http://anthology.aclweb.org/W/W14/W14-1618.pdf
3. Church, K. W. (2017). Word2Vec. Natural Language Engineering, 23(1), 155-162. DOI: https://doi.org/10.1017/S1351324916000334
4. Turney, P., Littman, M. L., Bigham, J., & Shnayder, V. (2003). Combining independent modules to solve multiple-choice synonym and analogy problems. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (pp. 482-489). Retrieved from http://nparc.cisti-icist.nrc-cnrc.gc.ca/npsi/ctrl?action=rtdoc&an=8913366
5. Turney, P. D. (2008). The latent relation mapping engine: Algorithm and experiments. Journal of Artificial Intelligence Research (JAIR), 33, 615-655. Retrieved from http://jair.org/papers/paper2693.html
6. Jurgens, D. A., Turney, P. D., Mohammad, S. M., & Holyoak, K. J. (2012). SemEval-2012 Task 2: Measuring degrees of relational similarity. In Proceedings of the First Joint Conference on Lexical and Computational Semantics (*SEM) (pp. 356–364). Montréal, Canada: Association for Computational Linguistics. Retrieved from http://dl.acm.org/citation.cfm?id=2387693
7. Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient estimation of word representations in vector space. In Proceedings of the International Conference on Learning Representations (ICLR).
8. Gladkova, A., Drozd, A., & Matsuoka, S. (2016). Analogy-based detection of morphological and semantic relations with word embeddings: what works and what doesn't. In Proceedings of the NAACL-HLT SRW (pp. 47–54). San Diego, California: ACL. Retrieved from https://www.aclweb.org/anthology/N/N16/N16-2002.pdf
9. Mikolov, T., Yih, W., & Zweig, G. (2013). Linguistic Regularities in Continuous Space Word Representations. In HLT-NAACL (pp. 746–751). Retrieved from http://www.aclweb.org/anthology/N13-1#page=784
10. Linzen, T. (2016). Issues in evaluating semantic spaces using word analogies. In Proceedings of the First Workshop on Evaluating Vector Space Representations for NLP. ACL. Retrieved from http://anthology.aclweb.org/W16-2503
11. Drozd, A., Gladkova, A., & Matsuoka, S. (2016). Word embeddings, analogies, and machine learning: beyond king - man + woman = queen. In Proceedings of COLING 2016 (pp. 3519–3530). Osaka, Japan: ACL. Retrieved from https://www.aclweb.org/anthology/C/C16/C16-1332.pdf
12. Turney, P. D. (2006). Similarity of semantic relations. Computational Linguistics, 32(3), 379-416. Retrieved from http://www.mitpressjournals.org/doi/abs/10.1162/coli.2006.32.3.379
13. Turney, P. D. (2012). Domain and function: A dual-space model of semantic relations and compositions. Journal of Artificial Intelligence Research (JAIR), 44, 533-585. Retrieved from http://jair.org/papers/paper3640.html