• Stars: 403
• Rank: 103,561 (Top 3%)
• Language: Jupyter Notebook
• License: MIT License
• Created: over 5 years ago
• Updated: over 4 years ago


Repository Details

BERT-NER (ner-bert) with Google BERT, https://github.com/google-research.

0. Papers

There are two solutions based on this architecture.

  1. BSNLP 2019 ACL workshop: solution and paper for the multilingual shared task.
  2. The second-place solution for the Dialogue AGRR-2019 task, with paper.

Description

This repository contains a solution to the NER task based on a PyTorch reimplementation of Google's TensorFlow repository for the BERT model, which was released together with the paper BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding by Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova.

This implementation can load any pre-trained TensorFlow checkpoint for BERT (in particular Google's pre-trained models).
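
If you start from a TensorFlow checkpoint rather than a ready PyTorch one, it has to be converted to PyTorch weights first. The snippet below is a minimal sketch that assumes the pytorch-pretrained-bert conversion helper which this reimplementation builds on; all paths are placeholders.

# Minimal sketch (assumption): convert a TensorFlow BERT checkpoint to PyTorch
# weights with the pytorch-pretrained-bert helper; replace the placeholder paths.
from pytorch_pretrained_bert.convert_tf_checkpoint_to_pytorch import convert_tf_checkpoint_to_pytorch

convert_tf_checkpoint_to_pytorch(
    "/path/to/bert_model.ckpt",    # TensorFlow checkpoint, e.g. Google's multilingual BERT
    "/path/to/bert_config.json",   # matching BERT config file
    "/path/to/pytorch_model.bin"   # where the converted PyTorch weights are written
)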

The old version is in the "old" branch.

2. Usage

2.1 Create data

from modules.data import bert_data

# Placeholder paths to the train/valid dataframes; idx2labels_path is where the
# label vocabulary is stored.
train_df_path = "/path/to/train.csv"
valid_df_path = "/path/to/valid.csv"

data = bert_data.LearnData.create(
    train_df_path=train_df_path,
    valid_df_path=valid_df_path,
    idx2labels_path="/path/to/vocab",
    clear_cache=True
)

2.2 Create model

from modules.models.bert_models import BERTBiLSTMAttnCRF

# BERT encoder + BiLSTM + attention + CRF head, sized to the label vocabulary.
model = BERTBiLSTMAttnCRF.create(len(data.train_ds.idx2label))

2.3 Create Learner

from modules.train.train import NerLearner

num_epochs = 100

# t_total is the total number of optimization steps: epochs * batches per epoch.
learner = NerLearner(
    model, data, "/path/for/save/best/model",
    t_total=num_epochs * len(data.train_dl))
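
Before predicting, train the learner so that a best-model checkpoint is saved to the path given above. The call below is a minimal sketch assuming NerLearner exposes a fit method with an epochs argument; check the repository code for the exact signature.

# Minimal sketch (assumed signature): train and keep the best checkpoint.
learner.fit(epochs=num_epochs)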

2.4 Predict

from modules.data.bert_data import get_data_loader_for_predict

# Restore the best checkpoint saved during training and run inference on new data.
learner.load_model()
dl = get_data_loader_for_predict(data, df_path="/path/to/df/for/predict")
preds = learner.predict(dl)

2.5 Evaluate

from sklearn_crfsuite.metrics import flat_classification_report
from modules.analyze_utils.utils import bert_labels2tokens, voting_choicer
from modules.analyze_utils.plot_metrics import get_bert_span_report
from modules.analyze_utils.main_metrics import precision_recall_f1

# Map BERT sub-token predictions and gold labels back to word-level tokens.
pred_tokens, pred_labels = bert_labels2tokens(dl, preds)
true_tokens, true_labels = bert_labels2tokens(dl, [x.bert_labels for x in dl.dataset])

# Token-level classification report.
tokens_report = flat_classification_report(true_labels, pred_labels, digits=4)
print(tokens_report)

# Overall precision/recall/F1.
results = precision_recall_f1(true_labels, pred_labels)

3. Results

We did not perform a hyperparameter search and obtained the following results.

Model | Data set | Dev F1 tok | Dev F1 span | Test F1 tok | Test F1 span
OURS:
M-BERTCRF-IO | FactRuEval | - | - | 0.8543 | 0.8409
M-BERTNCRF-IO | FactRuEval | - | - | 0.8637 | 0.8516
M-BERTBiLSTMCRF-IO | FactRuEval | - | - | 0.8835 | 0.8718
M-BERTBiLSTMNCRF-IO | FactRuEval | - | - | 0.8632 | 0.8510
M-BERTAttnCRF-IO | FactRuEval | - | - | 0.8503 | 0.8346
M-BERTBiLSTMAttnCRF-IO | FactRuEval | - | - | 0.8839 | 0.8716
M-BERTBiLSTMAttnNCRF-IO | FactRuEval | - | - | 0.8807 | 0.8680
M-BERTBiLSTMAttnCRF-fit_BERT-IO | FactRuEval | - | - | 0.8823 | 0.8709
M-BERTBiLSTMAttnNCRF-fit_BERT-IO | FactRuEval | - | - | 0.8583 | 0.8456
BERTBiLSTMCRF-IO | CoNLL-2003 | 0.9629 | - | 0.9221 | -
B-BERTBiLSTMCRF-IO | CoNLL-2003 | 0.9635 | - | 0.9229 | -
B-BERTBiLSTMAttnCRF-IO | CoNLL-2003 | 0.9614 | - | 0.9237 | -
B-BERTBiLSTMAttnNCRF-IO | CoNLL-2003 | 0.9631 | - | 0.9249 | -
Current SOTA:
DeepPavlov-RuBERT-NER | FactRuEval | - | - | - | 0.8266
CSE | CoNLL-2003 | - | - | 0.931 | -
BERT-LARGE | CoNLL-2003 | 0.966 | - | 0.928 | -
BERT-BASE | CoNLL-2003 | 0.964 | - | 0.924 | -

More Repositories

1. Kandinsky-2 (Jupyter Notebook, 2,699 stars): Kandinsky 2 — multilingual text2image latent diffusion model
2. ru-gpts (Python, 2,045 stars): Russian GPT3 models
3. ru-dalle (Jupyter Notebook, 1,638 stars): Generate images from texts, in Russian
4. ghost (Python, 1,030 stars): A new one-shot face swap approach for image and video domains
5. ru-dolph (Jupyter Notebook, 242 stars): RUDOLPH: One Hyper-Tasking Transformer can be creative as DALL-E and GPT-3 and smart as CLIP
6. Real-ESRGAN (Python, 201 stars): PyTorch implementation of the Real-ESRGAN model
7. mgpt (Jupyter Notebook, 194 stars): Multilingual Generative Pretrained Model
8. KandinskyVideo (Python, 140 stars): KandinskyVideo — multilingual end-to-end text2video latent diffusion model
9. ru-clip (Jupyter Notebook, 126 stars): CLIP implementation for the Russian language
10. ruGPT3_demos (121 stars)
11. sage (Jupyter Notebook, 101 stars): SAGE: Spelling correction, corruption and evaluation for multiple languages
12. deforum-kandinsky (Python, 100 stars): Kandinsky x Deforum — generating short animations
13. digital_peter_aij2020 (Jupyter Notebook, 66 stars): Materials of the AI Journey 2020 competition dedicated to the recognition of Peter the Great's manuscripts, https://ai-journey.ru/contest/task01
14. music-composer (Python, 62 stars)
15. ru-prompts (Python, 54 stars)
16. fusion_brain_aij2021 (Jupyter Notebook, 47 stars): Creating multimodal multitask models
17. model-zoo (44 stars): NLP model zoo for Russian
18. gigachat (Python, 43 stars): A library for accessing GigaChat
19. OCR-model (Python, 42 stars): An easy-to-run OCR model pipeline based on CRNN and CTC loss
20. augmentex (Python, 40 stars): Augmentex — a library for augmenting texts with errors
21. StackMix-OCR (Jupyter Notebook, 37 stars)
22. MoVQGAN (Jupyter Notebook, 35 stars): MoVQGAN, a model for image encoding and reconstruction
23. MERA (Jupyter Notebook, 31 stars): MERA (Multimodal Evaluation for Russian-language Architectures), a new open benchmark for evaluating foundation models for the Russian language
24. tuned-vq-gan (Jupyter Notebook, 28 stars)
25. ReadingPipeline (Python, 23 stars): Text reading pipeline that combines segmentation and OCR models
26. htr_datasets (Jupyter Notebook, 23 stars): Repository containing our datasets for the HTR (handwritten text recognition) task
27. fbc3_aij2023 (Jupyter Notebook, 20 stars)
28. mineral-recognition (Python, 19 stars)
29. DigiTeller (18 stars)
30. fbc2_aij2022 (Python, 16 stars): FusionBrain Challenge 2.0: creating a multimodal multitask model
31. combined_solution_aij2019 (Python, 15 stars): AI Journey 2019: Combined Solution
32. railway_infrastructure_detection_aij2021 (Python, 13 stars): AI Journey Contest 2021: AITrain
33. no_fire_with_ai_aij2021 (Jupyter Notebook, 13 stars): AI Journey Contest 2021: NoFireWithAI
34. SEGM-model (Python, 11 stars): An easy-to-run semantic segmentation model based on Unet
35. ControlledNST (Jupyter Notebook, 8 stars): An implementation of Neural Style Transfer in PyTorch
36. kandinsky3-diffusers (Python, 5 stars)
37. mchs-wildfire (Jupyter Notebook, 4 stars): Wildfire classification competition
38. no_flood_with_ai_aij2020 (Jupyter Notebook, 4 stars): Materials of the AI Journey 2020 competition dedicated to flood forecasting on the Amur River, https://ai-journey.ru/contest/task02
39. paper_persi_chat (Jupyter Notebook, 1 star): PaperPersiChat: Scientific Paper Discussion Chatbot using Transformers and Discourse Flow Management
40. Zoom_In_Video_Kandinsky (Jupyter Notebook, 1 star): Framework for creating zoom-in / zoom-out videos based on Kandinsky inpainting