Journal "Software Engineering"
A journal on theoretical and applied science and technology
ISSN 2220-3397
Issue 1, 2021
In this paper, a number of issues related to the use of word embeddings for classifying text documents are considered. Word embedding models have been a popular object of theoretical research since the mid-1970s. The development of these models, including the transition from count-based models to predictive ones, was driven by the need to reduce the large computational costs arising in their practical use. This problem remains relevant today. When solving practical problems, one often has to forgo building word embeddings independently and instead rely on ready-made solutions. As a result, post-processing of existing word embeddings, for example reducing their dimensionality, becomes an urgent problem. Recently, a number of works have investigated an unusual application of principal component analysis to the dimensionality reduction of word embeddings: projections are removed not onto the last but onto the first principal components. It turns out that for the problems of word similarity and word analogy, this approach can increase the accuracy of the solution. The experimental results presented in this work show that this effect is not observed in document classification. Removing projections onto the first principal directions decreases classification accuracy, whereas the traditional approach, which removes projections onto the last principal directions, gives the best result in most cases.
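The post-processing scheme discussed above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the function name and the parameter `d` (the number of leading principal components to remove) are illustrative assumptions:

```python
import numpy as np

def remove_top_components(embeddings, d=2):
    """Post-process word embeddings by centering them and removing
    their projections onto the first d principal components.
    (Illustrative sketch; names and defaults are assumptions.)"""
    # Center the embedding matrix (rows = words, columns = dimensions)
    mu = embeddings.mean(axis=0)
    X = embeddings - mu
    # Principal directions via SVD of the centered matrix
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    top = Vt[:d]  # first d principal directions (orthonormal rows)
    # Subtract each vector's projection onto those directions
    return X - X @ top.T @ top

rng = np.random.default_rng(0)
E = rng.normal(size=(1000, 50))        # toy stand-in for real embeddings
E2 = remove_top_components(E, d=2)     # vectors orthogonal to the removed directions
```

The traditional variant, by contrast, would keep only the first principal directions and discard the last ones, which is ordinary PCA-based dimensionality reduction.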