TY - CHAP
A1 - Aga, Rosa Tsegaye
A1 - Wartena, Christian
A1 - Drumond, Lucas
A1 - Schmidt-Thieme, Lars
T1 - Learning thesaurus relations from distributional features
T2 - LREC 2016, Tenth International Conference on Language Resources and Evaluation
N2 - In distributional semantics, words are represented by aggregated context features. The similarity of words can be computed by comparing their feature vectors. Thus, we can predict whether two words are synonymous or similar with respect to some other semantic relation. We show on six different datasets of pairs of similar and non-similar words that a supervised learning algorithm trained on feature vectors representing pairs of words outperforms cosine similarity between vectors representing single words. We compared different methods of constructing a feature vector that represents a pair of words, and we show that simple methods such as pairwise addition or multiplication give better results than a recently proposed method that combines different types of features. The semantic relation we consider is the relatedness of terms in thesauri for intellectual document classification, so our findings can be applied directly to the maintenance and extension of such thesauri. To the best of our knowledge, this relation has not been considered before in the field of distributional semantics.
KW - distributional semantics
KW - thesauri
KW - context vectors
KW - supervised machine learning
KW - semantics
Y1 - 2016
UR - https://nbn-resolving.org/urn:nbn:de:bsz:960-opus4-10894
SN - 978-2-9517408-9-1
DO - 10.25968/opus-1089
SP - 2071
EP - 2075
ER -