• Stars: 114
• Rank: 306,311 (Top 7%)
• Language: Python
• Created: about 7 years ago
• Updated: over 1 year ago


Repository Details

Code for "Effective Dimensionality Reduction for Word Embeddings", and its earlier version.

Accepted at the NIPS 2017 LLD Workshop and published at the 4th Workshop on Representation Learning for NLP (RepL4NLP), ACL 2019.

Abstract: Word embeddings have become the basic building blocks for several natural language processing and information retrieval tasks. Pre-trained word embeddings are used in several downstream applications as well as for constructing representations for sentences, paragraphs and documents. Recently, there has been an emphasis on further improving the pre-trained word vectors through post-processing algorithms. One such area of improvement is the dimensionality reduction of the word embeddings. Reducing the size of word embeddings through dimensionality reduction can improve their utility in memory constrained devices, benefiting several real-world applications. In this work, we present a novel algorithm that effectively combines PCA based dimensionality reduction with a recently proposed post-processing algorithm, to construct word embeddings of lower dimensions. Empirical evaluations on 12 standard word similarity benchmarks show that our algorithm reduces the embedding dimensionality by 50%, while achieving similar or (more often) better performance than the higher dimension embeddings.
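The combination described in the abstract (the post-processing algorithm of Mu & Viswanath applied around a PCA projection, which is why PPA+PCA and PCA+PPA appear below as baselines) can be sketched with NumPy. This is an illustrative re-implementation, not the repository's algo.py; the helper names and the choice of d = 7 dominant directions to remove are assumptions:

```python
import numpy as np

def ppa(X, d=7):
    """Post-Processing Algorithm (Mu & Viswanath): subtract the mean
    vector, then remove each embedding's projection onto the top-d
    principal components (the dominant directions)."""
    X = X - X.mean(axis=0)
    # rows of Vt are the principal directions of the centered matrix
    U = np.linalg.svd(X, full_matrices=False)[2][:d]  # (d, dim)
    return X - (X @ U.T) @ U

def reduce_embeddings(X, new_dim, d=7):
    """Post-process, project down to new_dim with PCA, post-process again."""
    X = ppa(X, d)
    X = X - X.mean(axis=0)
    V = np.linalg.svd(X, full_matrices=False)[2][:new_dim]  # (new_dim, dim)
    return ppa(X @ V.T, d)  # PCA projection, then a second post-processing pass

# toy run: 1000 random 300-dim "embeddings" reduced to 150 dims
X = np.random.randn(1000, 300)
Y = reduce_embeddings(X, 150)
print(Y.shape)  # (1000, 150)
```

On real embeddings, X would be the matrix of pre-trained GloVe or FastText vectors rather than random data.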

The word-vector evaluation code is used directly from https://github.com/mfaruqui/eval-word-vectors.

Run the script algo.py (the embedding file location is currently hardcoded) to reproduce the algorithm and its evaluation on the word-similarity benchmarks.
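Since the script expects an embedding file on disk, a minimal loader for the usual GloVe-style plain-text format (one word per line followed by its vector components) might look like this; the function name is illustrative, not taken from the repository:

```python
import numpy as np

def load_embeddings(path):
    """Read a GloVe-style text file: each line is a word followed by
    its space-separated vector components."""
    words, vecs = [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            words.append(parts[0])
            vecs.append([float(x) for x in parts[1:]])
    return words, np.array(vecs)

# demo with a tiny two-word file
with open("toy_vectors.txt", "w", encoding="utf-8") as f:
    f.write("hello 0.1 0.2 0.3\nworld 0.4 0.5 0.6\n")
words, X = load_embeddings("toy_vectors.txt")
print(words, X.shape)  # ['hello', 'world'] (2, 3)
```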

Similarly, results for the other baselines can be reproduced: PCA (pca_simple.py), PPA+PCA (ppa_pca.py), and PCA+PPA (pca_ppa.py).

To run the algorithm and the baselines (as in the paper), download the embedding files (GloVe, FastText) and set the file locations as required in the code.

The code will generate a modified word embedding file that is half the size of the original embeddings and evaluate it on 12 word-similarity datasets.
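Keeping the output in the same plain-text format means the evaluation scripts can read the reduced file unchanged; a hypothetical writer (not the repository's own code) could be:

```python
import numpy as np

def save_embeddings(path, words, X):
    """Write embeddings in GloVe-style text format:
    one word and its vector components per line."""
    with open(path, "w", encoding="utf-8") as f:
        for word, vec in zip(words, X):
            f.write(word + " " + " ".join("%.6f" % v for v in vec) + "\n")

# demo: write two 2-dim vectors
save_embeddings("reduced_vectors.txt",
                ["hello", "world"],
                np.array([[0.1, 0.2], [0.3, 0.4]]))
```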

The sentence-vector evaluation is based on SentEval (https://github.com/facebookresearch/SentEval). The generated embedding file can be used directly, following the SentEval bow.py example.
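The bow.py example builds a sentence vector by averaging the vectors of its words (bag-of-words). The core idea, sketched with a hypothetical helper rather than SentEval's actual batcher code, is:

```python
import numpy as np

def bow_sentence_vector(tokens, word_vecs, dim):
    """Bag-of-words sentence embedding: average the vectors of
    in-vocabulary tokens; sentences with no known tokens fall
    back to a zero vector."""
    vecs = [word_vecs[t] for t in tokens if t in word_vecs]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

# toy vocabulary of 2-dim vectors; "a" is out-of-vocabulary
word_vecs = {"good": np.array([1.0, 0.0]), "movie": np.array([0.0, 1.0])}
v = bow_sentence_vector(["a", "good", "movie"], word_vecs, 2)
print(v)  # [0.5 0.5]
```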

The algorithm can be used to generate embeddings of any dimensionality, not necessarily half the original size.

Another paper that partially uses this code is "On Dimensional Linguistic Properties of the Word Embedding Space".

More Repositories

1. Megalodon: Various ML/DL resources organised at a single place. (185 stars)
2. DNI-tensorflow: DNI (Decoupled Neural Interfaces using Synthetic Gradients) implementation with TensorFlow. (Python, 29 stars)
3. long-tailed: Code for "On Long-Tailed Phenomena in NMT". (Python, 10 stars)
4. dlp: Code for "On Dimensional Linguistic Properties of the Word Embedding Space". (Python, 7 stars)
5. Multilabel-Classification-using-NN: Multilabel classification using a neural network. (Jupyter Notebook, 4 stars)
6. Finding-Memo: Code for "Extractive Memorization in Constrained Sequence Generation Tasks". (Python, 4 stars)
7. hallucinations: Code for "The Curious Case of Hallucinations in Neural Machine Translation". (3 stars)
8. weightnorm-pytorch: Weight normalization in PyTorch. (Python, 2 stars)
9. 701-Project: Quora dataset, semantic textual similarity. (Python, 2 stars)
10. Google-Search-Chatbot: A rudimentary chatbot powered by Google Search. (HTML, 2 stars)
11. TextGAN: Improving text generation with adversarial objectives. (Python, 1 star)
12. Keras-Pointer-Network: Implementation of a Pointer Network in Keras. [Incomplete] (Python, 1 star)
13. evanet-iccv19: Code and models for the ICCV'19 paper "Evolving Space-Time Neural Architectures for Videos". (Python, 1 star)
14. blindspots: Seq2Seq blindspots. (PLSQL, 1 star)
15. Python-Latent-Semantic-Analysis (1 star)
16. deep-learning-keras-tensorflow-pyss2016: Deep Learning tutorial with Keras and TensorFlow; invited speaker @ Python San Sebastian 2016. (Jupyter Notebook, 1 star)
17. text-classification-wv: Text classification using word vectors. (Python, 1 star)
18. Eazy-Search: Personalized distraction-free search homepage for daily use. (HTML, 1 star)