Simple Transformers

Transformers for Classification, NER, QA, Language Modelling, Language Generation, T5, Multi-Modal, and Conversational AI

This library is based on the Transformers library by HuggingFace. Simple Transformers lets you quickly train and evaluate Transformer models. Only 3 lines of code are needed to initialize, train, and evaluate a model.

Supported Tasks:

  • Information Retrieval (Dense Retrieval)
  • (Large) Language Models (Training, Fine-tuning, and Generation)
  • Encoder Model Training and Fine-tuning
  • Sequence Classification
  • Token Classification (NER)
  • Question Answering
  • Language Generation
  • T5 Model
  • Seq2Seq Tasks
  • Multi-Modal Classification
  • Conversational AI


Setup

With Conda

  1. Install the Anaconda or Miniconda package manager.
  2. Create a new virtual environment and install packages.
$ conda create -n st python pandas tqdm
$ conda activate st

With CUDA:

$ conda install "pytorch>=1.6" cudatoolkit=11.0 -c pytorch

Without CUDA:

$ conda install pytorch cpuonly -c pytorch
  3. Install simpletransformers.
$ pip install simpletransformers
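
To confirm that the environment is set up correctly, a quick sanity check can be run before training anything (a minimal sketch; it assumes the steps above completed without errors):

$ python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
$ python -c "import simpletransformers; print('simpletransformers OK')"

If CUDA was installed, the first command should print True alongside the PyTorch version.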

Optional

  1. Install Weights and Biases (wandb) for tracking and visualizing training in a web browser.
$ pip install wandb

Usage

All documentation is now live at simpletransformers.ai

Simple Transformer models are built with a particular Natural Language Processing (NLP) task in mind. Each such model comes equipped with features and functionality designed to best fit the task it is intended to perform. The high-level process of using Simple Transformers models follows the same pattern:

  1. Initialize a task-specific model
  2. Train the model with train_model()
  3. Evaluate the model with eval_model()
  4. Make predictions on (unlabelled) data with predict()

However, the models necessarily differ from one another to ensure that each is well suited to its intended task. The key differences are typically in the input/output data formats and in any task-specific features/configuration options. These can all be found in the documentation section for each task.

The currently implemented task-specific Simple Transformer models, along with their task, are given below.

Task                                                        Model
Binary and multi-class text classification                  ClassificationModel
Conversational AI (chatbot training)                         ConvAIModel
Language generation                                          LanguageGenerationModel
Language model training/fine-tuning                          LanguageModelingModel
Multi-label text classification                              MultiLabelClassificationModel
Multi-modal classification (text and image data combined)    MultiModalClassificationModel
Named entity recognition                                     NERModel
Question answering                                           QuestionAnsweringModel
Regression                                                   ClassificationModel
Sentence-pair classification                                 ClassificationModel
Text representation generation                               RepresentationModel
Document retrieval                                           RetrievalModel
  • Please refer to the relevant section in the docs for more information on how to use these models.
  • Example scripts can be found in the examples directory.
  • See the Changelog for up-to-date changes to the project.

A quick example

from simpletransformers.classification import ClassificationModel, ClassificationArgs
import pandas as pd
import logging


logging.basicConfig(level=logging.INFO)
transformers_logger = logging.getLogger("transformers")
transformers_logger.setLevel(logging.WARNING)

# Preparing train data
train_data = [
    ["Aragorn was the heir of Isildur", 1],
    ["Frodo was the heir of Isildur", 0],
]
train_df = pd.DataFrame(train_data)
train_df.columns = ["text", "labels"]

# Preparing eval data
eval_data = [
    ["Theoden was the king of Rohan", 1],
    ["Merry was the king of Rohan", 0],
]
eval_df = pd.DataFrame(eval_data)
eval_df.columns = ["text", "labels"]

# Optional model configuration
model_args = ClassificationArgs(num_train_epochs=1)

# Create a ClassificationModel
model = ClassificationModel(
    "roberta", "roberta-base", args=model_args
)

# Train the model
model.train_model(train_df)

# Evaluate the model
result, model_outputs, wrong_predictions = model.eval_model(eval_df)

# Make predictions with the model
predictions, raw_outputs = model.predict(["Sam was a Wizard"])
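
The same four-step pattern carries over to the other models in the table above; only the data format changes. As an illustrative sketch for named entity recognition (the tiny training frame below is hypothetical, and it assumes the default CoNLL-style label set and the bert-base-cased checkpoint):

from simpletransformers.ner import NERModel, NERArgs
import pandas as pd

# NERModel expects one word per row, grouped into sentences by sentence_id
train_data = [
    [0, "Aragorn", "B-PER"],
    [0, "travelled", "O"],
    [0, "to", "O"],
    [0, "Rohan", "B-LOC"],
    [1, "Frodo", "B-PER"],
    [1, "stayed", "O"],
    [1, "home", "O"],
]
train_df = pd.DataFrame(train_data, columns=["sentence_id", "words", "labels"])

# Same lifecycle as the classification example: init, train, predict
model_args = NERArgs(num_train_epochs=1)
model = NERModel("bert", "bert-base-cased", args=model_args)

model.train_model(train_df)
predictions, raw_outputs = model.predict(["Gimli marched to Erebor"])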

Experiment Tracking with Weights and Biases

  • Weights and Biases makes it incredibly easy to keep track of all your experiments. Check out the Colab example.
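
In Simple Transformers, enabling tracking is typically just a matter of naming a W&B project in the model args (a minimal sketch; "st-demo" is a hypothetical project name, and it assumes you have already run wandb login):

from simpletransformers.classification import ClassificationModel, ClassificationArgs

# With wandb_project set, training metrics are logged to Weights and Biases
model_args = ClassificationArgs(
    num_train_epochs=1,
    wandb_project="st-demo",  # hypothetical project name
)
model = ClassificationModel("roberta", "roberta-base", args=model_args)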

Current Pretrained Models

For a list of pretrained models, see Hugging Face docs.

The model_types available for each task can be found under their respective section. Any pretrained model of that type found in the Hugging Face docs should work. To use any of them, set the correct model_type and model_name when creating the model.
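
Concretely, switching architectures in the quick example above only means changing that one pair of values (a sketch; it assumes these checkpoints are available on the Hugging Face Hub):

from simpletransformers.classification import ClassificationModel

# Same task, different architectures: only model_type and model_name change
model = ClassificationModel("distilbert", "distilbert-base-uncased")
model = ClassificationModel("xlnet", "xlnet-base-cased")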


Contributors ✨

Thanks goes to these wonderful people (emoji key: 💻 code, 📖 documentation, 💬 answering questions, 🤔 ideas, 🚧 maintenance, 🐛 bug reports):


  • hawktang: 💻
  • Mabu Manaileng: 💻
  • Ali Hamdi Ali Fadel: 💻
  • Tovly Deutsch: 💻
  • hlo-world: 💻
  • huntertl: 💻
  • Yann Defretin: 💻 📖 💬 🤔
  • Manuel: 📖 💻
  • Gilles Jacobs: 📖
  • shasha79: 💻
  • Mercedes Garcia: 💻
  • Hammad Hassan Tarar: 💻 📖
  • Todd Cook: 💻
  • Knut O. Hellan: 💻 📖
  • nagenshukla: 💻
  • flaviussn: 💻 📖
  • Marc Torrellas: 🚧
  • Adrien Renaud: 💻
  • jacky18008: 💻
  • Matteo Senese: 💻
  • sarthakTUM: 📖 💻
  • djstrong: 💻
  • Hyeongchan Kim: 📖
  • Pradhy729: 💻 🚧
  • Iknoor Singh: 📖
  • Gabriel Altay: 💻
  • flozi00: 📖 💻 🚧
  • alexysdussier: 💻
  • Jean-Louis Queguiner: 📖
  • aced125: 💻
  • Laksh1997: 💻
  • Changlin_NLP: 💻
  • jpotoniec: 💻
  • fcggamou: 💻 📖
  • guy-mor: 🐛 💻
  • Cahya Wirawan: 💻
  • BjarkePedersen: 💻
  • tekkkon: 💻
  • Amit Garg: 💻
  • caprone: 🐛
  • Ather Fawaz: 💻
  • Santiago Castro: 📖
  • taranais: 💻
  • Pablo N. Marino: 💻 📖
  • Anton Kiselev: 💻 📖
  • Alex: 💻
  • Karthik Ganesan: 💻
  • Zhylko Dima: 💻
  • Jonatan Kłosko: 💻
  • sarapapi: 💻 💬
  • Abdul: 💻
  • James Milliman: 📖
  • Suraj Parmar: 📖
  • KwanHong Lee: 💬
  • Erik Fäßler: 💻
  • Thomas Søvik: 💬
  • Gagandeep Singh: 💻 📖
  • Andrea Esuli: 💻
  • DM2493: 💻
  • Nick Doiron: 💻
  • Abhinav Gupta: 💻
  • Martin H. Normark: 📖
  • Mossad Helali: 💻
  • calebchiam: 💻
  • Daniele Sartiano: 💻
  • tuner007: 📖
  • xia jiang: 💻
  • Hendrik Buschmeier: 📖
  • Mana Borwornpadungkitti: 📖
  • rayline: 💻
  • Mehdi Heidari: 💻
  • William Roe: 💻
  • Álvaro Abella Bascarán: 💻
  • Brett Fazio: 📖
  • Viet-Tien: 💻
  • Bisola Olasehinde: 💻 📖
  • William Chen: 📖
  • Reza Ebrahimi: 📖
  • gabriben: 📖
  • Prashanth Kurella: 💻
  • dopc: 💻
  • Tanish Tyagi: 📖 💻
  • kongyurui: 💻
  • Andrew Lensen: 💻
  • jinschoi: 💻
  • Le Nguyen Khang: 💻
  • Jordi Mas: 📖
  • mxa: 💻
  • MichelBartels: 💻
  • Luke Tudge: 📖
  • Saint: 💻
  • deltaxrg: 💻 📖
  • Fortune Adekogbe: 💻

This project follows the all-contributors specification. Contributions of any kind welcome!

If you should be on this list but you aren't, or you are on the list but don't want to be, please don't hesitate to contact me!


How to Contribute

How to Update Docs

The latest version of the docs is hosted on GitHub Pages. If you want to help document Simple Transformers, the steps to edit the docs are below. The docs are built with the Jekyll library; refer to the Jekyll website for a detailed explanation of how it works.

  1. Install Jekyll:
$ gem install bundler jekyll
  2. Visualize the docs on your local computer: in your terminal, cd into the docs directory of this repo, then serve the Jekyll docs locally and browse to http://localhost:4000 (or whatever URL you see in the console).
$ cd simpletransformers/docs
$ bundle exec jekyll serve
  3. Edit and visualize changes: all the section pages of the docs can be found under the docs/_docs directory. Edit any file following the markdown format and refresh the browser tab to visualize the changes.

Acknowledgements

None of this would have been possible without the hard work by the HuggingFace team in developing the Transformers library.

Icon for the Social Media Preview made by Freepik from www.flaticon.com