Indexed on: 12 Jul '18. Published on: 12 Jul '18. Published in: arXiv - Computer Science - Computation and Language
More than 80% of today's data is unstructured in nature, and these unstructured datasets evolve over time. A large part of this evolving unstructured data consists of text documents generated by media outlets, scholarly articles in digital libraries, and social media. Vector space models have been developed to analyze text documents using data mining and machine learning algorithms. While ample vector space models exist for text data, vector-based representations still fail to capture the evolution of text corpora over time. The advent of word embeddings has given a way to create a contextual vector space, but existing embeddings do not yet model the temporal aspects of the feature space adequately. Including the time aspect in the feature space yields a vector for every natural language element, such as a word or entity, at every timestamp. Such temporal word vectors make it possible to track how the meaning of a word changes over time, in terms of changes in its neighborhood. Moreover, a time-reflective text representation paves the way to a new set of text analytic abilities involving time series over text collections. In this paper, we present the potential benefits of a time-reflective vector space model for temporal text data that is able to capture short- and long-term changes in the meaning of words. We compare our approach with the limited literature on dynamic embeddings, and we present qualitative and quantitative evaluations using semantic evolution tracking as the target application.
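The core idea of tracking semantic change through a word's changing neighborhood can be illustrated with a minimal sketch. The snippet below is not the paper's method: it assumes hypothetical, toy per-timestamp embedding spaces (hand-picked 3-dimensional vectors) and simply lists a target word's nearest neighbors by cosine similarity in each timestamped space, so that a shift in neighborhood across timestamps becomes visible.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def neighbors(word, embeddings, k=2):
    # Return the k words most similar to `word` within one
    # timestamped embedding space (a dict of word -> vector).
    target = embeddings[word]
    scored = [(other, cosine(target, vec))
              for other, vec in embeddings.items() if other != word]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [w for w, _ in scored[:k]]

# Hypothetical per-timestamp embedding spaces; the vectors are
# illustrative values only, not learned embeddings.
spaces = {
    2000: {
        "apple":    np.array([0.9, 0.1, 0.0]),
        "fruit":    np.array([0.8, 0.2, 0.1]),
        "orchard":  np.array([0.7, 0.3, 0.0]),
        "computer": np.array([0.1, 0.9, 0.2]),
    },
    2010: {
        # "apple" has drifted toward the technology region of the space.
        "apple":    np.array([0.2, 0.9, 0.1]),
        "fruit":    np.array([0.8, 0.2, 0.1]),
        "orchard":  np.array([0.7, 0.3, 0.0]),
        "computer": np.array([0.1, 0.9, 0.2]),
    },
}

for year in sorted(spaces):
    print(year, neighbors("apple", spaces[year]))
```

With these toy vectors, the nearest neighbors of "apple" shift from fruit-related words in 2000 to "computer" in 2010, which is the kind of neighborhood change a time-reflective representation is meant to expose. A real pipeline would replace the hand-picked vectors with embeddings trained per time slice and aligned into a common space.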