Indexed on: 18 Feb '14
Published on: 18 Feb '14
Published in: Computer Science - Computation and Language
There are two main approaches to the distributed representation of words: low-dimensional deep learning embeddings and high-dimensional distributional models, in which each dimension corresponds to a context word. In this paper, we combine the two approaches by learning embeddings from distributional-model vectors rather than from the one-hot vectors standardly used in deep learning. We show that the combined approach performs better on a word relatedness judgment task.
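Since the abstract is terse about how distributional vectors replace one-hot inputs, here is a minimal sketch of the idea. It builds a toy word-word co-occurrence matrix and learns dense embeddings by training a small linear autoencoder on the row-normalized distributional vectors; the corpus, the autoencoder, and all parameter values are illustrative assumptions, not the paper's actual model or hyperparameters.

```python
import numpy as np

# Hypothetical toy corpus and vocabulary (illustrative only).
corpus = ["the cat sat on the mat".split(),
          "the dog sat on the rug".split()]
vocab = sorted({w for sent in corpus for w in sent})
idx = {w: i for i, w in enumerate(vocab)}
V = len(vocab)

# Step 1: build high-dimensional distributional vectors
# (co-occurrence counts within a +/-1 word window).
C = np.zeros((V, V))
for sent in corpus:
    for i, w in enumerate(sent):
        for j in range(max(0, i - 1), min(len(sent), i + 2)):
            if j != i:
                C[idx[w], idx[sent[j]]] += 1.0

# Row-normalize so each word is represented by its
# context-word distribution rather than raw counts.
X = C / np.maximum(C.sum(axis=1, keepdims=True), 1.0)

# Step 2: learn low-dimensional embeddings from the
# distributional vectors instead of one-hot vectors.
# A linear autoencoder trained by gradient descent stands
# in for the embedding learner here (an assumption).
d = 4                                        # embedding dimensionality
rng = np.random.default_rng(0)
W_enc = rng.normal(scale=0.1, size=(V, d))   # V -> d projection
W_dec = rng.normal(scale=0.1, size=(d, V))   # d -> V reconstruction

lr = 0.1
for _ in range(500):
    H = X @ W_enc                            # dense embeddings
    R = H @ W_dec                            # reconstruction of X
    G = 2.0 * (R - X) / X.size               # grad of mean squared error
    grad_dec = H.T @ G
    grad_enc = X.T @ (G @ W_dec.T)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

embeddings = X @ W_enc                       # one d-dim vector per word
print(embeddings.shape)                      # (V, d)
```

The key contrast with standard deep learning embeddings is the input: each word enters the learner as its full distributional context vector rather than as a one-hot indicator.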