• Archived on 24 Sep 2020
• Stars: 450
• Rank: 97,143 (top 2%)
• Language: Python
• License: Apache License 2.0
• Created: almost 6 years ago
• Updated: about 5 years ago

Repository Details

🔡 Token level embeddings from BERT model on mxnet and gluonnlp

Bert Embeddings

[Deprecated] Thank you for checking out this project. Unfortunately, I don't have time to maintain it anymore. If you are interested in maintaining this project, please create an issue and let me know.

BERT, published by Google, is a new way to obtain pre-trained language model word representations. Many NLP tasks benefit from BERT's representations to reach state-of-the-art (SOTA) results.

The goal of this project is to obtain token embeddings from BERT's pre-trained model. This way, instead of building and fine-tuning an end-to-end NLP model, you can build your own model on top of the token embeddings.

This project is implemented with @MXNet. Special thanks to the @gluon-nlp team.

Install

pip install bert-embedding
# If you want to run on a GPU machine, install `mxnet-cu92` (the CUDA 9.2 build) instead.
pip install mxnet-cu92
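
After installing, a quick import check confirms that both the package and its MXNet backend are available (a hypothetical one-liner, not from the project docs):

python -c "import bert_embedding, mxnet; print(mxnet.__version__)"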

Usage

from bert_embedding import BertEmbedding

bert_abstract = """We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers.
 Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations by jointly conditioning on both left and right context in all layers.
 As a result, the pre-trained BERT representations can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. 
BERT is conceptually simple and empirically powerful. 
It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE benchmark to 80.4% (7.6% absolute improvement), MultiNLI accuracy to 86.7 (5.6% absolute improvement) and the SQuAD v1.1 question answering Test F1 to 93.2 (1.5% absolute improvement), outperforming human performance by 2.0%."""
sentences = bert_abstract.split('\n')
bert_embedding = BertEmbedding()
result = bert_embedding(sentences)

If you want to use a GPU, import mxnet and set the context:

import mxnet as mx
from bert_embedding import BertEmbedding

...

ctx = mx.gpu(0)
bert = BertEmbedding(ctx=ctx)
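
If you are not sure whether a GPU is present, you can fall back to the CPU. A minimal sketch, assuming an MXNet version that provides mx.context.num_gpus():

import mxnet as mx
from bert_embedding import BertEmbedding

# use the first GPU when one is available, otherwise the CPU
ctx = mx.gpu(0) if mx.context.num_gpus() > 0 else mx.cpu()
bert = BertEmbedding(ctx=ctx)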

The result is a list of tuples, one per sentence, each containing (tokens, token embeddings).

For example:

first_sentence = result[0]

first_sentence[0]
# ['we', 'introduce', 'a', 'new', 'language', 'representation', 'model', 'called', 'bert', ',', 'which', 'stands', 'for', 'bidirectional', 'encoder', 'representations', 'from', 'transformers']
len(first_sentence[0])
# 18


len(first_sentence[1])
# 18
first_sentence_embeddings = first_sentence[1]
first_sentence_embeddings[1]  # embedding of the second token, 'introduce'
# array([ 0.4805648 ,  0.18369392, -0.28554988, ..., -0.01961522,
#        1.0207764 , -0.67167974], dtype=float32)
first_sentence_embeddings[1].shape
# (768,)
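
Since every element of result follows the same (tokens, embeddings) structure, you can iterate over all sentences at once; a short sketch:

for tokens, token_embeddings in result:
    print(len(tokens), len(token_embeddings), token_embeddings[0].shape)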

OOV

There are three ways to handle OOV (out-of-vocabulary) words: avg (the default), sum, and last. The strategy is passed when encoding, as shown below.

...
bert_embedding = BertEmbedding()
bert_embedding(sentences, 'sum')
...
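
To see how the strategies differ, you can encode the same sentence with each one and compare the resulting vectors. A minimal sketch, reusing the positional argument shown above (the example sentence is arbitrary):

for oov_way in ('avg', 'sum', 'last'):
    tokens, token_embeddings = bert_embedding(['tokenization of rare words'], oov_way)[0]
    print(oov_way, token_embeddings[0][:3])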

Available pre-trained BERT models

|                 | book_corpus_wiki_en_uncased | book_corpus_wiki_en_cased | wiki_multilingual | wiki_multilingual_cased | wiki_cn |
|-----------------|-----------------------------|---------------------------|-------------------|-------------------------|---------|
| bert_12_768_12  | ✓                           | ✓                         | ✓                 | ✓                       | ✓       |
| bert_24_1024_16 | x                           | ✓                         | x                 | x                       | x       |

Example of using the large pre-trained BERT model from Google

from bert_embedding import BertEmbedding

bert_embedding = BertEmbedding(model='bert_24_1024_16', dataset_name='book_corpus_wiki_en_cased')
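
The name bert_24_1024_16 encodes 24 layers, a hidden size of 1024, and 16 attention heads, so this model yields 1024-dimensional embeddings. A quick check (the example sentence is arbitrary):

tokens, token_embeddings = bert_embedding(['BERT is conceptually simple.'])[0]
token_embeddings[0].shape
# (1024,)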

