Overview | Tutorials | Examples | Installation | FAQ | API Docs | How to Cite
Welcome to ktrain
a "Swiss Army knife" for machine learning
News and Announcements
- 2023-05-11
- ktrain 0.37.x is released and supports Generative Question-Answering using OpenAI models like GPT-3.5-turbo by wrapping the LangChain and Paper-QA packages. Ask questions of a large text corpus and receive answers with citations to the documents in which they were found. See the example notebook for more information.
import os
os.environ['OPENAI_API_KEY'] = 'ENTER YOUR OPENAI API KEY HERE'
from ktrain.text.qa import GenerativeQA
genqa = GenerativeQA()
genqa.add_doc(text=a_string_containing_text_of_your_document)
print(genqa.query('What is ktrain?'))
- 2023-04-21
- ktrain 0.36.x is released and adds a simple wrapper for sentiment analysis. See the example notebook for more information.
# Example: Sentiment Analysis
from ktrain.text.sentiment import SentimentAnalyzer
classifier = SentimentAnalyzer()
texts = ['I got a promotion today.', 'My appointment is at 3:30.', 'There were cost overruns.']
result = classifier.predict(texts)
# OUTPUT:
#[{'POSITIVE': 0.9021117091178894},
# {'NEUTRAL': 0.9110478758811951},
# {'NEGATIVE': 0.743671715259552}]
- 2023-04-01
- ktrain 0.35.x is released and supports Generative AI using an instruction-fine-tuned version of GPT-J that can run on your own machine. See the example notebook for more information. Supply prompts in the form of instructions for what you want the model to do:
# Example: Generative AI in ktrain
from ktrain.text.generative_ai import GenerativeAI
model = GenerativeAI() # needs at least 16GB of GPU memory
prompt = """Extract the names of people in the supplied sentences. Here is an example:
Sentence:
Paul Newman is a great actor.
People:
Paul Newman
Sentence:
I like James Gandolfini's acting.
People:"""
print(model.execute(prompt))
# OUTPUT:
# James Gandolfini
- 2023-03-30
- ktrain 0.34.x is released and supports fast LexRank-based text summarization.
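Usage looks roughly like the following (a minimal sketch, not official documentation: the import path follows the text.summarization.core module listed under Installation, and some_long_document is a placeholder string; requires pip install sumy):
# Example: LexRank-based extractive summarization (sketch)
from ktrain.text.summarization.core import LexRankSummarizer
ts = LexRankSummarizer()  # LexRank is extractive and fast: no GPU or training required
some_long_document = "..."  # placeholder: text of the document to summarize
summary = ts.summarize(some_long_document)
print(summary)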
Overview
ktrain is a lightweight wrapper for the deep learning library TensorFlow Keras (and other libraries) to help build, train, and deploy neural networks and other machine learning models. Inspired by ML framework extensions like fastai and ludwig, ktrain is designed to make deep learning and AI more accessible and easier to apply for both newcomers and experienced practitioners. With only a few lines of code, ktrain allows you to easily and quickly:
- employ fast, accurate, and easy-to-use pre-canned models for text, vision, graph, and tabular data:
text data:
- Text Classification: BERT, DistilBERT, NBSVM, fastText, and other models [example notebook]
- Text Regression: BERT, DistilBERT, Embedding-based linear text regression, fastText, and other models [example notebook]
- Sequence Labeling (NER): Bidirectional LSTM with optional CRF layer and various embedding schemes such as pretrained BERT and fasttext word embeddings and character embeddings [example notebook]
- Ready-to-Use NER models for English, Chinese, and Russian with no training required [example notebook]
- Sentence Pair Classification for tasks like paraphrase detection [example notebook]
- Unsupervised Topic Modeling with LDA [example notebook]
- Document Similarity with One-Class Learning: given some documents of interest, find and score new documents that are thematically similar to them using One-Class Text Classification [example notebook]
- Document Recommendation Engines and Semantic Searches: given a text snippet from a sample document, recommend documents that are semantically-related from a larger corpus [example notebook]
- Text Summarization: summarize long documents - no training required [example notebook]
- Extractive Question-Answering: ask a large text corpus questions and receive exact answers using BERT [example notebook]
- Generative Question-Answering: ask a large text corpus questions and receive answers with citations using OpenAI models [example notebook]
- Easy-to-Use Built-In Search Engine: perform keyword searches on large collections of documents [example notebook]
- Zero-Shot Learning: classify documents into user-provided topics without training examples [example notebook]
- Language Translation: translate text from one language to another [example notebook]
- Text Extraction: Extract text from PDFs, Word documents, etc. [example notebook]
- Speech Transcription: Extract text from audio files [example notebook]
- Universal Information Extraction: extract any kind of information from documents by simply phrasing it in the form of a question [example notebook]
- Keyphrase Extraction: extract keywords from documents [example notebook]
- Sentiment Analysis: easy-to-use wrapper to pretrained sentiment analysis [example notebook]
- Generative AI with GPT: Provide instructions to a lightweight ChatGPT-like model running on your own machine to solve various tasks. Model was fine-tuned on the Alpaca instruction dataset (CC BY-NC 4.0) [example notebook]
vision data:
- image classification (e.g., ResNet, Wide ResNet, Inception) [example notebook]
- image regression for predicting numerical targets from photos (e.g., age prediction) [example notebook]
- image captioning with a pretrained model [example notebook]
- object detection with a pretrained model [example notebook]
graph data:
- node classification with graph neural networks (GraphSAGE) [example notebook]
- link prediction with graph neural networks (GraphSAGE) [example notebook]
tabular data:
- tabular classification (e.g., Titanic survival prediction) [example notebook]
- tabular regression (e.g., predicting house prices) [example notebook]
- causal inference using meta-learners [example notebook]
- estimate an optimal learning rate for your model given your data using a Learning Rate Finder
- utilize learning rate schedules such as the triangular policy, the 1cycle policy, and SGDR to effectively minimize loss and improve generalization
- build text classifiers for any language (e.g., Arabic Sentiment Analysis with BERT, Chinese Sentiment Analysis with NBSVM)
- easily train NER models for any language (e.g., Dutch NER)
- load and preprocess text and image data from a variety of formats
- inspect data points that were misclassified and provide explanations to help improve your model
- leverage a simple prediction API for saving and deploying both models and data-preprocessing steps to make predictions on new raw data
- export models to ONNX and TensorFlow Lite with built-in support (see the sketch after this list and the example notebook for more information)
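Exporting a trained model looks roughly like this (a minimal sketch: it assumes a learner and preproc produced by one of the training examples below, that the relevant onnx/tflite dependencies are installed, and the output paths are placeholders):
import ktrain
# bundle the trained model and its preprocessing steps into a Predictor
# (`learner` and `preproc` come from a prior training run; see Examples below)
predictor = ktrain.get_predictor(learner.model, preproc)
predictor.export_model_to_tflite('/tmp/model.tflite')  # TensorFlow Lite export
predictor.export_model_to_onnx('/tmp/model.onnx')      # ONNX export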
Tutorials
Please see the following tutorial notebooks for a guide on how to use ktrain in your projects:
- Tutorial 1: Introduction
- Tutorial 2: Tuning Learning Rates
- Tutorial 3: Image Classification
- Tutorial 4: Text Classification
- Tutorial 5: Learning from Unlabeled Text Data
- Tutorial 6: Text Sequence Tagging for Named Entity Recognition
- Tutorial 7: Graph Node Classification with Graph Neural Networks
- Tutorial 8: Tabular Classification and Regression
- Tutorial A1: Additional tricks, which covers topics such as previewing data augmentation schemes, inspecting intermediate output of Keras models for debugging, setting global weight decay, and use of built-in and custom callbacks.
- Tutorial A2: Explaining Predictions and Misclassifications
- Tutorial A3: Text Classification with Hugging Face Transformers
- Tutorial A4: Using Custom Data Formats and Models: Text Regression with Extra Regressors
Some blog tutorials and other guides about ktrain are shown below:
- ktrain: A Lightweight Wrapper for Keras to Help Train Neural Networks
- Text Classification with Hugging Face Transformers in TensorFlow 2 (Without Tears)
- Build an Open-Domain Question-Answering System With BERT in 3 Lines of Code
- Finetuning BERT using ktrain for Disaster Tweets Classification by Hamiz Ahmed
- Indonesian NLP Examples with ktrain by Sandy Khosasi
Examples
Using ktrain on Google Colab? See these Colab examples:
- text classification: a simple demo of Multiclass Text Classification with BERT
- text classification: a simple demo of Multiclass Text Classification with Hugging Face Transformers
- sequence-tagging (NER): NER example using transformer word embeddings
- question-answering: End-to-End Question-Answering using the 20newsgroups dataset
- image classification: image classification with Cats vs. Dogs
Tasks such as text classification and image classification can be accomplished easily with only a few lines of code.
Example: Text Classification of IMDb Movie Reviews Using BERT [see notebook]
import ktrain
from ktrain import text as txt
# load data
(x_train, y_train), (x_test, y_test), preproc = txt.texts_from_folder('data/aclImdb', maxlen=500,
preprocess_mode='bert',
train_test_names=['train', 'test'],
classes=['pos', 'neg'])
# load model
model = txt.text_classifier('bert', (x_train, y_train), preproc=preproc)
# wrap model and data in ktrain.Learner object
learner = ktrain.get_learner(model,
train_data=(x_train, y_train),
val_data=(x_test, y_test),
batch_size=6)
# find good learning rate
learner.lr_find() # briefly simulate training to find good learning rate
learner.lr_plot() # visually identify best learning rate
# train using 1cycle learning rate schedule for 3 epochs
learner.fit_onecycle(2e-5, 3)
Example: Classifying Images of Dogs and Cats Using a Pretrained ResNet50 model [see notebook]
import ktrain
from ktrain import vision as vis
# load data
(train_data, val_data, preproc) = vis.images_from_folder(
datadir='data/dogscats',
data_aug=vis.get_data_aug(horizontal_flip=True),
train_test_names=['train', 'valid'],
target_size=(224,224), color_mode='rgb')
# load model
model = vis.image_classifier('pretrained_resnet50', train_data, val_data, freeze_layers=80)
# wrap model and data in ktrain.Learner object
learner = ktrain.get_learner(model=model, train_data=train_data, val_data=val_data,
workers=8, use_multiprocessing=False, batch_size=64)
# find good learning rate
learner.lr_find() # briefly simulate training to find good learning rate
learner.lr_plot() # visually identify best learning rate
# train using triangular policy with ModelCheckpoint and implicit ReduceLROnPlateau and EarlyStopping
learner.autofit(1e-4, checkpoint_folder='/tmp/saved_weights')
Example: Sequence Labeling for Named Entity Recognition using a randomly initialized Bidirectional LSTM-CRF model [see notebook]
import ktrain
from ktrain import text as txt
# load data
(trn, val, preproc) = txt.entities_from_txt('data/ner_dataset.csv',
sentence_column='Sentence #',
word_column='Word',
tag_column='Tag',
data_format='gmb',
use_char=True) # enable character embeddings
# load model
model = txt.sequence_tagger('bilstm-crf', preproc)
# wrap model and data in ktrain.Learner object
learner = ktrain.get_learner(model, train_data=trn, val_data=val)
# conventional training for 1 epoch using a learning rate of 0.001 (Keras default for Adam optimizer)
learner.fit(1e-3, 1)
Example: Node Classification on Cora Citation Graph using a GraphSAGE model [see notebook]
import ktrain
from ktrain import graph as gr
# load data with supervision ratio of 10%
(trn, val, preproc) = gr.graph_nodes_from_csv(
'cora.content', # node attributes/labels
'cora.cites', # edge list
sample_size=20,
holdout_pct=None,
holdout_for_inductive=False,
train_pct=0.1, sep='\t')
# load model
model=gr.graph_node_classifier('graphsage', trn)
# wrap model and data in ktrain.Learner object
learner = ktrain.get_learner(model, train_data=trn, val_data=val, batch_size=64)
# find good learning rate
learner.lr_find(max_epochs=100) # briefly simulate training to find good learning rate
learner.lr_plot() # visually identify best learning rate
# train using triangular policy with ModelCheckpoint and implicit ReduceLROnPlateau and EarlyStopping
learner.autofit(0.01, checkpoint_folder='/tmp/saved_weights')
Example: Text Classification with Hugging Face Transformers on 20 Newsgroups Dataset Using DistilBERT [see notebook]
# load text data
categories = ['alt.atheism', 'soc.religion.christian','comp.graphics', 'sci.med']
from sklearn.datasets import fetch_20newsgroups
train_b = fetch_20newsgroups(subset='train', categories=categories, shuffle=True)
test_b = fetch_20newsgroups(subset='test', categories=categories, shuffle=True)
(x_train, y_train) = (train_b.data, train_b.target)
(x_test, y_test) = (test_b.data, test_b.target)
# build, train, and validate model (Transformer is wrapper around transformers library)
import ktrain
from ktrain import text
MODEL_NAME = 'distilbert-base-uncased'
t = text.Transformer(MODEL_NAME, maxlen=500, class_names=train_b.target_names)
trn = t.preprocess_train(x_train, y_train)
val = t.preprocess_test(x_test, y_test)
model = t.get_classifier()
learner = ktrain.get_learner(model, train_data=trn, val_data=val, batch_size=6)
learner.fit_onecycle(5e-5, 4)
learner.validate(class_names=t.get_classes()) # class_names must be string values
# Output from learner.validate()
# precision recall f1-score support
#
# alt.atheism 0.92 0.93 0.93 319
# comp.graphics 0.97 0.97 0.97 389
# sci.med 0.97 0.95 0.96 396
#soc.religion.christian 0.96 0.96 0.96 398
#
# accuracy 0.96 1502
# macro avg 0.95 0.96 0.95 1502
# weighted avg 0.96 0.96 0.96 1502
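Continuing the example above, the fitted model and its preprocessing can be wrapped in a Predictor for inference and deployment (a brief sketch; the input sentence and save path are illustrative):
# wrap the trained model and preprocessing into a Predictor
predictor = ktrain.get_predictor(learner.model, preproc=t)
# make predictions on raw text (returns one of the four class names)
predictor.predict('Jesus Christ is the central figure of Christianity.')
# save the Predictor and reload it later for deployment
predictor.save('/tmp/my_20news_predictor')
reloaded = ktrain.load_predictor('/tmp/my_20news_predictor')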
Example: Tabular Classification for Titanic Survival Prediction Using an MLP [see notebook]
import ktrain
from ktrain import tabular
import pandas as pd
train_df = pd.read_csv('train.csv', index_col=0)
train_df = train_df.drop(['Name', 'Ticket', 'Cabin'], axis=1)
trn, val, preproc = tabular.tabular_from_df(train_df, label_columns=['Survived'], random_state=42)
learner = ktrain.get_learner(tabular.tabular_classifier('mlp', trn), train_data=trn, val_data=val)
learner.lr_find(show_plot=True, max_epochs=5) # estimate learning rate
learner.fit_onecycle(5e-3, 10)
# evaluate held-out labeled test set
tst = preproc.preprocess_test(pd.read_csv('heldout.csv', index_col=0))
learner.evaluate(tst, class_names=preproc.get_classes())
Additional examples can be found here.
Installation
- Make sure pip is up-to-date with: pip install -U pip
- Install TensorFlow 2 if it is not already installed (e.g., pip install tensorflow)
- Install ktrain: pip install ktrain
The above should be all you need on Linux systems and cloud computing environments like Google Colab and AWS EC2. If you are using ktrain on a Windows computer, you can follow these more detailed instructions that include some extra steps.
Supported TensorFlow Versions: ktrain should currently support any version of TensorFlow at or above v2.3 (i.e., pip install "tensorflow>=2.3"). However, if using tensorflow>=2.11, then you must only use legacy optimizers such as tf.keras.optimizers.legacy.Adam. The newer tf.keras.optimizers.Optimizer base class is not supported at this time. For instance, when using TensorFlow 2.11 and above, please use tf.keras.optimizers.legacy.Adam() instead of the string "adam" in model.compile. ktrain does this automatically when using out-of-the-box models (e.g., models from the transformers library).
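For example, when compiling a custom Keras model yourself under TensorFlow 2.11 and above, this looks something like the following (a minimal sketch with a placeholder model):
import tensorflow as tf
# placeholder model for illustration
model = tf.keras.Sequential([tf.keras.layers.Dense(2, activation='softmax')])
# pass a legacy optimizer object rather than the string "adam"
model.compile(loss='categorical_crossentropy',
              optimizer=tf.keras.optimizers.legacy.Adam(learning_rate=1e-3),
              metrics=['accuracy'])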
Additional Notes About Installation
- Some optional, extra libraries used for some operations can be installed as needed. (Notice that ktrain uses forked versions of the eli5 and stellargraph libraries in order to support TensorFlow 2.)
# for graph module:
pip install https://github.com/amaiya/stellargraph/archive/refs/heads/no_tf_dep_082.zip
# for text.TextPredictor.explain and vision.ImagePredictor.explain:
pip install https://github.com/amaiya/eli5-tf/archive/refs/heads/master.zip
# for tabular.TabularPredictor.explain:
pip install shap
# for text.zsl (ZeroShotClassifier), text.summarization, text.translation, text.speech:
pip install torch
# for text.speech:
pip install librosa
# for tabular.causal_inference_model:
pip install causalnlp
# for text.summarization.core.LexRankSummarizer:
pip install sumy
# for text.kw.KeywordExtractor:
pip install textblob
# for text.qa.generative_qa:
pip install paper-qa
- ktrain purposely pins to a lower version of transformers to include support for older versions of TensorFlow. If you need a newer version of transformers, it is usually safe for you to upgrade transformers, as long as you do it after installing ktrain.
- As of v0.30.x, TensorFlow installation is optional and only required if training neural networks. Although ktrain uses TensorFlow for neural network training, it also includes a variety of useful pretrained PyTorch models and sklearn models, which can be used out-of-the-box without having TensorFlow installed, as summarized in this table:
Feature | TensorFlow | PyTorch | Sklearn |
---|---|---|---|
training any neural network (e.g., text or image classification) | ✓ | | |
End-to-End Question-Answering (pretrained) | ✓ | ✓ | |
QA-Based Information Extraction (pretrained) | ✓ | ✓ | |
Zero-Shot Classification (pretrained) | | ✓ | |
Language Translation (pretrained) | | ✓ | |
Summarization (pretrained) | | ✓ | |
Speech Transcription (pretrained) | | ✓ | |
Image Captioning (pretrained) | | ✓ | |
Object Detection (pretrained) | | ✓ | |
Sentiment Analysis (pretrained) | | ✓ | |
Topic Modeling (sklearn) | | | ✓ |
Keyphrase Extraction (textblob/nltk/sklearn) | | | ✓ |
As noted above, end-to-end question-answering and information extraction in ktrain can be used with either TensorFlow (using framework='tf') or PyTorch (using framework='pt').
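Selecting the backend looks roughly like this (a minimal sketch: it assumes a document index was built beforehand with SimpleQA's indexing utilities, and INDEXDIR is a placeholder path):
from ktrain.text.qa import SimpleQA
INDEXDIR = '/tmp/myindex'  # placeholder: location of a previously built document index,
                           # e.g., via SimpleQA.initialize_index and SimpleQA.index_from_list
qa = SimpleQA(INDEXDIR, framework='pt')  # framework='tf' selects the TensorFlow backend
answers = qa.ask('What causes computer vision syndrome?')
qa.display_answers(answers[:5])  # top answers with references to source documents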
How to Cite
Please cite the following paper when using ktrain:
@article{maiya2020ktrain,
title={ktrain: A Low-Code Library for Augmented Machine Learning},
author={Arun S. Maiya},
year={2020},
eprint={2004.10703},
archivePrefix={arXiv},
primaryClass={cs.LG},
journal={arXiv preprint arXiv:2004.10703},
}
Creator: Arun S. Maiya
Email: arun [at] maiya [dot] net