

Deep Learning for Multi-Label Text Classification


This repository is my research project, and it is also a study of TensorFlow and deep learning (FastText, CNN, LSTM, etc.).

The main objective of the project is to solve the multi-label text classification problem with deep neural networks. In this setting, each data label is a multi-hot vector such as [0, 1, 0, ..., 1, 1].
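For example, a record labeled with classes 1 and 3 in a 6-class task maps to [0, 1, 0, 1, 0, 0]. A minimal sketch of the conversion (`to_multi_hot` and `num_classes` are illustrative names, not the repository's API):

```python
import numpy as np

def to_multi_hot(labels_index, num_classes):
    """Convert a list of label indices into a multi-hot vector."""
    vec = np.zeros(num_classes, dtype=np.float32)
    vec[labels_index] = 1.0
    return vec

print(to_multi_hot([1, 3], num_classes=6))  # [0. 1. 0. 1. 0. 0.]
```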

Requirements

  • Python 3.6
  • TensorFlow 1.15.0
  • TensorBoard 1.15.0
  • scikit-learn 0.19.1
  • NumPy 1.16.2
  • Gensim 3.8.3
  • tqdm 4.49.0

Project

The project structure is below:

```
.
β”œβ”€β”€ Model
β”‚   β”œβ”€β”€ test_model.py
β”‚   β”œβ”€β”€ text_model.py
β”‚   └── train_model.py
β”œβ”€β”€ data
β”‚   β”œβ”€β”€ word2vec_100.model.* [Need Download]
β”‚   β”œβ”€β”€ Test_sample.json
β”‚   β”œβ”€β”€ Train_sample.json
β”‚   └── Validation_sample.json
β”œβ”€β”€ utils
β”‚   β”œβ”€β”€ checkmate.py
β”‚   β”œβ”€β”€ data_helpers.py
β”‚   └── param_parser.py
β”œβ”€β”€ LICENSE
β”œβ”€β”€ README.md
└── requirements.txt
```

Innovation

Data part

  1. Supports both Chinese and English text (you can use jieba or nltk for word segmentation).
  2. Supports your own pre-trained word vectors (you can use gensim).
  3. Adds embedding visualization based on TensorBoard (you need to create metadata.tsv first; see the sketch after this list).
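A minimal sketch of creating metadata.tsv for the TensorBoard embedding projector, assuming a `word2idx` dict that maps each token to its row in the embedding matrix (the names are illustrative, not the repository's API):

```python
import os

def write_metadata(word2idx, out_dir):
    """Write one token per line, ordered by embedding-matrix row index."""
    path = os.path.join(out_dir, "metadata.tsv")
    with open(path, "w", encoding="utf-8") as f:
        for word in sorted(word2idx, key=word2idx.get):
            f.write(word + "\n")
    return path
```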

Model part

  1. Adds the correct L2 loss calculation operation.
  2. Adds gradient clipping to prevent gradient explosion (items 2 and 3 are sketched after this list).
  3. Adds exponential learning-rate decay.
  4. Adds a new highway layer (which is useful, judging by model performance).
  5. Adds a batch-normalization layer.
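A minimal sketch of items 2 and 3 in TensorFlow 1.15 (hyperparameter values are illustrative):

```python
import tensorflow as tf

def build_train_op(loss, lr=1e-3, decay_steps=500, decay_rate=0.95, clip_norm=5.0):
    global_step = tf.train.get_or_create_global_step()
    # Exponential learning-rate decay (item 3).
    learning_rate = tf.train.exponential_decay(
        lr, global_step, decay_steps, decay_rate, staircase=True)
    optimizer = tf.train.AdamOptimizer(learning_rate)
    # Gradient clipping by global norm (item 2).
    grads, tvars = zip(*optimizer.compute_gradients(loss))
    grads, _ = tf.clip_by_global_norm(grads, clip_norm)
    return optimizer.apply_gradients(zip(grads, tvars), global_step=global_step)
```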

Code part

  1. You can train the model from scratch or restore it from a checkpoint in train.py.
  2. You can predict labels via a threshold or via top-K in train.py and test.py (see the sketch after this list).
  3. Calculates the evaluation metrics AUC & AUPRC.
  4. test.py can create a prediction file that includes the predicted values and predicted labels of the test-set data.
  5. Adds other useful data-preprocessing functions in data_helpers.py.
  6. Uses logging to record the whole run (parameter display, model-training info, etc.).
  7. checkmate.py can save the best n checkpoints, whereas tf.train.Saver only keeps the last n.
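A minimal sketch of the two prediction modes, given per-class sigmoid scores (function names are illustrative, not the repository's API):

```python
import numpy as np

def predict_by_threshold(scores, threshold=0.5):
    """Indices of all labels whose score reaches the threshold."""
    return [np.flatnonzero(row >= threshold).tolist() for row in scores]

def predict_by_topk(scores, k=3):
    """Indices of the k highest-scoring labels per sample."""
    return [np.argsort(row)[::-1][:k].tolist() for row in scores]

scores = np.array([[0.9, 0.2, 0.7, 0.4]])
print(predict_by_threshold(scores))   # [[0, 2]]
print(predict_by_topk(scores, k=2))   # [[0, 2]]
```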

Data

See the data format in the /data folder, which includes the sample data files (a loading sketch follows the field list below). For example:

{"testid": "3935745", "features_content": ["pore", "water", "pressure", "metering", "device", "incorporating", "pressure", "meter", "force", "meter", "influenced", "pressure", "meter", "device", "includes", "power", "member", "arranged", "control", "pressure", "exerted", "pressure", "meter", "force", "meter", "applying", "overriding", "force", "pressure", "meter", "stop", "influence", "force", "meter", "removing", "overriding", "force", "pressure", "meter", "influence", "force", "meter", "resumed"], "labels_index": [526, 534, 411], "labels_num": 3}
  • "testid": just the id.
  • "features_content": the word segment (after removing the stopwords)
  • "labels_index": The label index of the data records.
  • "labels_num": The number of labels.

Text Segment

  1. Use the nltk package if you are dealing with English text data (both options are sketched after this list).

  2. Use the jieba package if you are dealing with Chinese text data.
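A minimal sketch of both segmentation paths (the sentences are illustrative; the repository itself stores already-segmented words, as in the sample above):

```python
from nltk.tokenize import word_tokenize  # English; requires nltk.download("punkt")
import jieba                             # Chinese

print(word_tokenize("Pore water pressure metering device."))
print(list(jieba.cut("ε­”ιš™ζ°΄εŽ‹εŠ›θ?‘ι‡θ£…η½?")))
```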

Data Format

This repository can be applied to other (text classification) datasets in two ways:

  1. Convert your dataset into the same format as the sample.
  2. Modify the data-preprocessing code in data_helpers.py.

Which way you choose depends on your data and task.

πŸ€” Before you open a new issue about the data format, please check data_sample.json and read the other open issues first, because someone may have asked the same question already.

Pre-trained Word Vectors

You can download the Word2vec model file (dim=100). Make sure it is unzipped and placed under the /data folder.

You can pre-train your own word vectors (based on your corpus) in many ways:

  • Use the gensim package (see the sketch after this list).
  • Use the GloVe tools.
  • You can even use a fastText network.
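For example, a minimal sketch of pre-training and reloading word vectors with gensim (paths, corpus, and parameters are illustrative; gensim 3.x uses `size`, later renamed `vector_size`):

```python
from gensim.models import Word2Vec

# Toy corpus: a list of token lists (use your own segmented corpus).
sentences = [["pore", "water", "pressure"], ["force", "meter"]]

model = Word2Vec(sentences, size=100, min_count=1)  # dim=100, as in /data
model.save("data/word2vec_100.model")

model = Word2Vec.load("data/word2vec_100.model")
print(model.wv["water"].shape)  # (100,)
```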

Usage

See Usage.

Network Structure

FastText
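For reference, the core of a FastText-style multi-label model is an averaged bag of word embeddings followed by a per-class sigmoid. A minimal TensorFlow 1.15 sketch (names and shapes are illustrative, not the repository's exact graph):

```python
import tensorflow as tf

def fasttext_logits(input_ids, vocab_size, embed_dim, num_classes):
    # input_ids: [batch, time] int32 token ids.
    embedding = tf.get_variable("embedding", [vocab_size, embed_dim])
    embedded = tf.nn.embedding_lookup(embedding, input_ids)  # [B, T, D]
    pooled = tf.reduce_mean(embedded, axis=1)                # [B, D]
    return tf.layers.dense(pooled, num_classes)              # [B, C]

# Multi-label training treats each class as an independent sigmoid:
# loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
#     labels=multi_hot_labels, logits=logits))
```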

TextANN

References:

  • Personal ideas πŸ™ƒ

TextCNN

TextRNN

Warning: the model is usable but not finished yet πŸ€ͺ!

TODO

  1. Add BN-LSTM cell unit.
  2. Add attention.

TextCRNN

References:

  • Personal ideas πŸ™ƒ

TextRCNN

References:

  • Personal ideas πŸ™ƒ

TextHAN

TextSANN

Warning: the model is usable but not finished yet πŸ€ͺ!

TODO

  1. Add attention penalization loss.
  2. Add visualization.

About Me

ι»„ε¨οΌŒRandolph

SCU SE Bachelor; USTC CS Ph.D.

Email: [email protected]

My Blog: randolph.pro

LinkedIn: randolph's linkedin