• Stars: 3
• Rank: 3,943,206 (Top 79%)
• Language: Jupyter Notebook
• Created: over 3 years ago
• Updated: over 3 years ago


Repository Details

Meaningful sentences are built from expressive words. Information about words, their meanings, and their origins is provided by widely available dictionaries, but dictionary entries are written for the convenience of human readers, not for machines. Knowledge-base datasets such as FB15K and DeepDive are therefore used to combine this traditional lexical information with modern computing more effectively. To exploit such datasets, NLP (Natural Language Processing) approaches from the areas of word and graph embeddings are used to extract the data into a form suitable for tasks such as recommendation. In this work we compare Word2vec and Node2vec, two such approaches, on the MovieLens dataset for recommendation, using a technique that computes movie similarities for recommendation by leveraging item features. These methods are widely used in web search and content-based recommendation. Both Word2vec and Node2vec use users' previous interactions with items, together with item features, to compute low-dimensional embeddings of movies. Specifically, the movie features are injected into the model as side information to regularize the movie embeddings. We show that the resulting movie representations lead to better performance on recommendation tasks on the public MovieLens dataset. Word2vec is used for word embeddings, while Node2vec is used for graph embeddings. We examine both neural models with respect to their training and sampling strategies for recommendation.
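As a rough illustration of the comparison described above (not the notebook's actual code), the sketch below builds both kinds of movie embeddings from MovieLens interaction data: a Word2vec (item2vec-style) model trained on each user's chronologically ordered movie sequence via gensim, and a Node2vec model trained on a movie co-occurrence graph via the community node2vec package. The file path ratings.csv, the hyperparameters, and the example movie ID are illustrative assumptions.

```python
# Minimal sketch: Word2vec vs. Node2vec movie embeddings on MovieLens.
# Assumes a MovieLens-style ratings.csv with columns userId, movieId, rating, timestamp,
# and the packages gensim, networkx, and node2vec (pip install node2vec).
import pandas as pd
import networkx as nx
from gensim.models import Word2Vec
from node2vec import Node2Vec

ratings = pd.read_csv("ratings.csv")  # illustrative path

# --- Word2vec: each user's chronologically ordered movies form one "sentence" ---
sequences = (
    ratings.sort_values("timestamp")
    .groupby("userId")["movieId"]
    .apply(lambda ids: [str(m) for m in ids])
    .tolist()
)
w2v = Word2Vec(sequences, vector_size=64, window=5, min_count=1, sg=1, workers=4)

# --- Node2vec: movies are nodes; edges link movies consumed consecutively by a user ---
graph = nx.Graph()
for _, movies in ratings.sort_values("timestamp").groupby("userId")["movieId"]:
    ids = [str(m) for m in movies]
    graph.add_edges_from(zip(ids[:-1], ids[1:]))

n2v = Node2Vec(graph, dimensions=64, walk_length=20, num_walks=10, workers=4)
n2v_model = n2v.fit(window=5, min_count=1)

# Compare nearest neighbours of one movie under each embedding
# (movieId 1 is Toy Story in the standard MovieLens dumps).
movie_id = "1"
print("Word2vec neighbours:", w2v.wv.most_similar(movie_id, topn=5))
print("Node2vec neighbours:", n2v_model.wv.most_similar(movie_id, topn=5))
```

The two models differ mainly in how training examples are sampled: Word2vec draws context windows directly from user interaction sequences, while Node2vec samples biased random walks over the item graph before feeding them to the same skip-gram objective, which is the training/sampling distinction the description refers to.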