  • Stars: 241
  • Rank: 167,643 (Top 4%)
  • Language: Python
  • Created: almost 4 years ago
  • Updated: 11 months ago


Repository Details

[CVPR 2021] Rethinking Text Segmentation: A Novel Dataset and A Text-Specific Refinement Approach


This is the repository hosting the TextSeg dataset and the code for TexRNet from the following paper:

Xingqian Xu, Zhifei Zhang, Zhaowen Wang, Brian Price, Zhonghao Wang, and Humphrey Shi, Rethinking Text Segmentation: A Novel Dataset and A Text-Specific Refinement Approach (arXiv link)

Note:

[2021.04.21] So far, our dataset has been partially released with images and semantic labels. Since many people may request the dataset for OCR or non-segmentation tasks, please stay tuned; we will release the full dataset as soon as possible.

[2021.06.18] Our dataset is now fully released. To download the data, please send a request email to [email protected] and tell us which school you are affiliated with. Please be aware that the released dataset is version 2, and the annotations differ slightly from those in the paper. In order to provide the most accurate dataset, we went through a second round of quality assurance, in which we fixed some faulty annotations and made them more consistent across the dataset. Since our TexRNet in the paper does not use OCR and character instance labels (i.e. word- and character-level bounding polygons and character-level masks), we will not release the older version of these labels. However, we do release the retroactive semantic_label_v1.tar.gz so that researchers can reproduce the results in the paper. For more details about the dataset, please see below.

Introduction

Text in the real world is extremely diverse, yet current text datasets do not reflect such diversity very well. To bridge this gap, we propose TextSeg, a large-scale, finely annotated, and multi-purpose text dataset collecting scene and design text with six types of annotations: word- and character-wise bounding polygons, masks, and transcriptions. We also introduce the Text Refinement Network (TexRNet), a novel text segmentation approach that adapts to the unique properties of text, e.g. non-convex boundaries and diverse textures, which often impose burdens on traditional segmentation models. TexRNet refines the results of a common segmentation approach via key-feature pooling and attention, so that wrongly activated text regions can be adjusted. We also introduce trimap and discriminator losses that yield significant improvements on text segmentation.
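The following is a rough, simplified PyTorch sketch of the key-feature pooling and attention idea described above. It is not the authors' released implementation; the function name, the residual fusion at the end, and the tensor shapes are illustrative assumptions only.

import torch
import torch.nn.functional as F

def refine_logits(feat, logits):
    # feat:   (B, C, H, W) backbone features
    # logits: (B, K, H, W) initial per-class scores from the base segmenter
    B, C, H, W = feat.shape
    K = logits.shape[1]
    prob = F.softmax(logits, dim=1)                         # (B, K, H, W)

    # Key-feature pooling: class-probability-weighted average of features,
    # giving one "key" feature vector per class.
    prob_flat = prob.view(B, K, -1)                         # (B, K, HW)
    feat_flat = feat.view(B, C, -1)                         # (B, C, HW)
    keys = torch.bmm(prob_flat, feat_flat.transpose(1, 2))  # (B, K, C)
    keys = keys / (prob_flat.sum(-1, keepdim=True) + 1e-6)

    # Attention: the similarity of every pixel to each class key re-scores
    # the pixels, so wrongly activated text regions can be adjusted.
    attn = torch.bmm(keys, feat_flat).view(B, K, H, W)      # (B, K, H, W)
    return logits + attn                                    # illustrative fusion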

TextSeg Dataset

Image Collection

Annotation

Download

Our dataset (TextSeg) is for academic use only and cannot be used in any commercial project or research. To download the data, please send a request email to [email protected] and tell us which school you are affiliated with.

A full download should contain these files:

  • image.tar.gz contains 4024 images.
  • annotation.tar.gz contains the labels corresponding to the images. These four types of files are included:
    • [dataID]_anno.json contains all word- and character-level transcriptions and bounding polygons.
    • [dataID]_mask.png contains all character masks. Character mask label values are ordered from 1 to n. Label value 0 means background, 255 means ignore.
    • [dataID]_maskeff.png contains all character masks with effect.
    • Adobe_Research_License_TextSeg.txt license file.
  • semantic_label.tar.gz contains all word-level (semantic-level) masks. It contains:
    • [dataID]_maskfg.png, in which 0 means background, 100 means word, 200 means word-effect, and 255 means ignore. ([dataID]_maskfg.png can also be generated from [dataID]_mask.png and [dataID]_maskeff.png; see the loading sketch after this list.)
  • split.json contains the official train/val/test split.
  • [Optional] semantic_label_v1.tar.gz contains the old version of the labels used in our paper. One can download it to reproduce our paper results.
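Below is a minimal loading sketch for one sample, assuming the archives are extracted into the current directory. The data ID is a hypothetical placeholder, and no particular key layout is assumed for the annotation JSON.

import json
import numpy as np
from PIL import Image

data_id = "sample_0001"  # hypothetical data ID

# Word- and character-level transcriptions and bounding polygons.
with open(f"{data_id}_anno.json") as f:
    anno = json.load(f)

# Semantic mask: 0 = background, 100 = word, 200 = word-effect, 255 = ignore.
maskfg = np.array(Image.open(f"{data_id}_maskfg.png"))
text_fg = np.isin(maskfg, (100, 200)).astype(np.uint8)  # binary text-foreground mask
ignore = (maskfg == 255)

# Character instance masks: values 1..n are character instances,
# 0 = background, 255 = ignore.
char_mask = np.array(Image.open(f"{data_id}_mask.png"))
instance_ids = np.unique(char_mask)
instance_ids = instance_ids[(instance_ids > 0) & (instance_ids < 255)]
num_chars = len(instance_ids)

# Official train/val/test split.
with open("split.json") as f:
    split = json.load(f)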

TexRNet Structure and Results

In this table, we report the performance of our TexRNet on five text segmentation datasets, including ours.

Method                       | TextSeg (Ours)  | ICDAR13 FST     | COCO_TS         | MLT_S           | Total-Text
                             | fgIoU   F-score | fgIoU   F-score | fgIoU   F-score | fgIoU   F-score | fgIoU   F-score
DeeplabV3+                   | 84.07   0.914   | 69.27   0.802   | 72.07   0.641   | 84.63   0.837   | 74.44   0.824
HRNetV2-W48                  | 85.03   0.914   | 70.98   0.822   | 68.93   0.629   | 83.26   0.836   | 75.29   0.825
HRNetV2-W48 + OCR            | 85.98   0.918   | 72.45   0.830   | 69.54   0.627   | 83.49   0.838   | 76.23   0.832
Ours: TexRNet + DeeplabV3+   | 86.06   0.921   | 72.16   0.835   | 73.98   0.722   | 86.31   0.830   | 76.53   0.844
Ours: TexRNet + HRNetV2-W48  | 86.84   0.924   | 73.38   0.850   | 72.39   0.720   | 86.09   0.865   | 78.47   0.848

To run the code

Set up the environment

conda create -n texrnet python=3.7
conda activate texrnet
pip install -r requirement.txt

To eval

First, make the following directories to hold pre-trained models, dataset, and running logs:

mkdir ./pretrained
mkdir ./data
mkdir ./log

Second, download the models from this link. Move those downloaded models to ./pretrained.

Third, make sure that ./data contains the data. A sample root directory for TextSeg would be ./data/TextSeg.

Lastly, evaluate the model and compute fgIoU/F-score with the following command:

python main.py --eval --pth [model path] [--hrnet] [--gpu 0 1 ...] --dsname [dataset name]

Here is the sample command to eval a TexRNet_HRNet on TextSeg with 4 GPUs:

python main.py --eval --pth pretrained/texrnet_hrnet.pth --hrnet --gpu 0 1 2 3 --dsname textseg

The program will store results and execution log in ./log/eval.
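For reference, here is a minimal sketch, not the repository's evaluation code, of how fgIoU and F-score can be computed from a predicted and a ground-truth binary text mask; the function name and the ignore-value handling are assumptions.

import numpy as np

def fgiou_and_fscore(pred, gt, ignore_value=255):
    # pred, gt: 2-D integer arrays with 1 = text foreground, 0 = background;
    # gt may additionally contain ignore_value pixels, which are excluded.
    valid = gt != ignore_value
    pred_fg = (pred == 1) & valid
    gt_fg = (gt == 1) & valid

    inter = np.logical_and(pred_fg, gt_fg).sum()
    union = np.logical_or(pred_fg, gt_fg).sum()
    fgiou = 100.0 * inter / max(union, 1)          # foreground IoU in percent

    precision = inter / max(pred_fg.sum(), 1)
    recall = inter / max(gt_fg.sum(), 1)
    fscore = 2 * precision * recall / max(precision + recall, 1e-8)
    return fgiou, fscore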

To train

Similarly, these directories need to be created:

mkdir ./pretrained
mkdir ./pretrained/init
mkdir ./data
mkdir ./log

Second, we use multiple pre-trained models for training. Download these initial models from this link. Move those models to ./pretrained/init. Also, make sure that ./data contains the data.

Lastly, execute the training code with the following command:

python main.py [--hrnet] [--gpu 0 1 ...] --dsname [dataset name] [--trainwithcls]

Here is the sample command to train a TexRNet_HRNet on TextSeg with classifier and discriminator losses using 4 GPUs:

python main.py --hrnet --gpu 0 1 2 3 --dsname textseg --trainwithcls

The training configs, logs, and models will be stored in ./log/texrnet_[dsname]/[exid]_[signature].

Bibtex

@article{xu2020rethinking,
  title={Rethinking Text Segmentation: A Novel Dataset and A Text-Specific Refinement Approach},
  author={Xu, Xingqian and Zhang, Zhifei and Wang, Zhaowen and Price, Brian and Wang, Zhonghao and Shi, Humphrey},
  journal={arXiv preprint arXiv:2011.14021},
  year={2020}
}

Acknowledgements

The directory ./hrnet_code is directly copied from the official HRNet GitHub repository (link). Credit for the HRNet code belongs to the HRNet authors, and users should follow their terms of use.

More Repositories

1. OneFormer (Jupyter Notebook, 1,461 stars): OneFormer: One Transformer to Rule Universal Image Segmentation, arXiv 2022 / CVPR 2023
2. Versatile-Diffusion (Python, 1,300 stars): Versatile Diffusion: Text, Images and Variations All in One Diffusion Model, arXiv 2022 / ICCV 2023
3. Neighborhood-Attention-Transformer (Python, 1,037 stars): Neighborhood Attention Transformer, arXiv 2022 / CVPR 2023; Dilated Neighborhood Attention Transformer, arXiv 2022
4. Prompt-Free-Diffusion (Python, 727 stars): Prompt-Free Diffusion: Taking "Text" out of Text-to-Image Diffusion Models, arXiv 2023 / CVPR 2024
5. Matting-Anything (Python, 607 stars): Matting Anything Model (MAM), an efficient and versatile framework for estimating the alpha matte of any instance in an image with flexible and interactive visual or linguistic user prompt guidance
6. Compact-Transformers (Python, 492 stars): Escaping the Big Data Paradigm with Compact Transformers, 2021 (train your Vision Transformers in 30 minutes on CIFAR-10 with a single GPU!)
7. Cross-Scale-Non-Local-Attention (Python, 401 stars): PyTorch code for our paper "Image Super-Resolution with Cross-Scale Non-Local Attention and Exhaustive Self-Exemplars Mining" (CVPR 2020)
8. Pyramid-Attention-Networks (Python, 382 stars): [IJCV] Pyramid Attention Networks for Image Restoration: new SOTA results on multiple image restoration tasks: denoising, demosaicing, compression artifact reduction, super-resolution
9. NATTEN (CUDA, 333 stars): Neighborhood Attention Extension. Bringing attention to a neighborhood near you!
10. Smooth-Diffusion (Python, 305 stars): Smooth Diffusion: Crafting Smooth Latent Spaces in Diffusion Models, arXiv 2023 / CVPR 2024
11. VCoder (Python, 259 stars): VCoder: Versatile Vision Encoders for Multimodal Large Language Models, arXiv 2023 / CVPR 2024
12. Agriculture-Vision (199 stars): [CVPR 2020 & 2021 & 2022 & 2023] Agriculture-Vision Dataset, Prize Challenge and Workshop: a joint effort with many great collaborators to bring the Agriculture and Computer Vision/AI communities together to benefit humanity!
13. Self-Similarity-Grouping (Python, 188 stars): Self-similarity Grouping: A Simple Unsupervised Cross Domain Adaptation Approach for Person Re-identification (ICCV 2019, Oral)
14. FcF-Inpainting (Jupyter Notebook, 174 stars): [WACV 2023] Keys to Better Image Inpainting: Structure and Texture Go Hand in Hand
15. Decoupled-Classification-Refinement (Python, 167 stars): Revisiting RCNN: On Awakening the Classification Power of Faster RCNN (ECCV 2018)
16. Convolutional-MLPs (Python, 163 stars): [Preprint] ConvMLP: Hierarchical Convolutional MLPs for Vision, 2021
17. 3D-Point-Cloud-Learning (131 stars)
18. CuMo (Python, 130 stars): CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts
19. Forget-Me-Not (Python, 107 stars): Forget-Me-Not: Learning to Forget in Text-to-Image Diffusion Models, 2023
20. VMFormer (Python, 106 stars): [Preprint] VMFormer: End-to-End Video Matting with Transformer
21. Semi-Supervised-Transfer-Learning (Jupyter Notebook, 101 stars): [CVPR 2021] Adaptive Consistency Regularization for Semi-Supervised Transfer Learning
22. SGL-Retinal-Vessel-Segmentation (Jupyter Notebook, 101 stars): [MICCAI 2021] Study Group Learning: Improving Retinal Vessel Segmentation Trained with Noisy Labels, with new SOTA on both DRIVE and CHASE_DB1
23. StyleNAT (Python, 97 stars): a new flexible and efficient image generation framework that sets new SOTA on FFHQ-256 with FID 2.05, 2022
24. Unsupervised-Domain-Adaptation-with-Differential-Treatment (Python, 88 stars): [CVPR 2020] Differential Treatment for Stuff and Things: A Simple Unsupervised Domain Adaptation Method for Semantic Segmentation
25. Text2Video-Zero-sd-webui (Python, 79 stars)
26. GFR-DSOD (Python, 65 stars): Improving Object Detection from Scratch via Gated Feature Reuse (BMVC 2019)
27. SH-GAN (Python, 62 stars): [WACV 2023] Image Completion with Heterogeneously Filtered Spectral Hints
28. VIM (Python, 54 stars)
29. UltraSR-Arbitrary-Scale-Super-Resolution (53 stars): [Preprint] UltraSR: Spatial Encoding is a Missing Key for Implicit Image Function-based Arbitrary-Scale Super-Resolution, 2021
30. Any-Precision-DNNs (Python, 44 stars): Any-Precision Deep Neural Networks (AAAI 2021)
31. Horizontal-Pyramid-Matching (Python, 39 stars): Horizontal Pyramid Matching for Person Re-identification (AAAI 2019)
32. Pseudo-IoU-for-Anchor-Free-Object-Detection (Python, 30 stars): Pseudo-IoU: Improving Label Assignment in Anchor-Free Object Detection
33. Human-Object-Interaction-Detection (25 stars)
34. CompFeat-for-Video-Instance-Segmentation (19 stars): CompFeat: Comprehensive Feature Aggregation for Video Instance Segmentation (AAAI 2021)
35. Diffusion-Driven-Test-Time-Adaptation-via-Synthetic-Domain-Alignment (Python, 17 stars): Everything to the Synthetic: Diffusion-driven Test-time Adaptation via Synthetic-Domain Alignment
36. OneFormer-Colab (Python, 13 stars): [Colab Demo Code] OneFormer: One Transformer to Rule Universal Image Segmentation
37. DiSparse-Multitask-Model-Compression (Jupyter Notebook, 13 stars): [CVPR 2022] DiSparse: Disentangled Sparsification for Multitask Model Compression
38. Interpretable-Visual-Reasoning (9 stars): [ICCV 2021] Interpretable Visual Reasoning via Induced Symbolic Space
39. Mask-Selection-Networks (6 stars): [CVPR 2021] YouTube-VIS 2021 3rd place and [CVPR 2020] DAVIS 2020 winner; code for mask-selection-based methods
40. Activity-Recognition (5 stars)
41. Boosted-Dynamic-Networks (Python, 4 stars): Boosted Dynamic Neural Networks, AAAI 2023
42. Aneurysm-Segmentation-with-Multi-Teacher-Pseudo-Labels (1 star)