
Pytorch ReID

Strong, Small, Friendly

Python 3.6+ | MIT License

A tiny, friendly, strong baseline code for Object-reID (based on pytorch) since 2017.


Tutorial


Features

We now support:

Training

  • Running the code on Google Colab with Free GPU. Check Here (Thanks to @ronghao233)
  • DG-Market (10x large synthetic dataset from Market, CVPR 2019 Oral)
  • Swin Transformer / EfficientNet / HRNet
  • ResNet/ResNet-ibn/DenseNet
  • Circle Loss, Triplet Loss, Contrastive Loss, Sphere Loss, Lifted Loss, Arcface, Cosface and Instance Loss
  • Float16 to save GPU memory based on apex
  • Part-based Convolutional Baseline (PCB)
  • Random Erasing
  • Linear Warm-up

Testing

  • TensorRT
  • Pytorch JIT
  • Fuse Conv and BN layer into one Conv layer (see the sketch after this list)
  • Multiple Query Evaluation
  • Re-Ranking (CPU & GPU Version)
  • Visualize Training Curves
  • Visualize Ranking Result
  • Visualize Heatmap
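
As a rough illustration of the Conv + BN fusion listed above, the sketch below folds BatchNorm statistics into the preceding convolution at inference time. It is a minimal, hedged example; the helper name fuse_conv_bn is hypothetical and not the repo's own API.

import torch
import torch.nn as nn

def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Fold the BatchNorm statistics into the preceding Conv (inference only)."""
    fused = nn.Conv2d(conv.in_channels, conv.out_channels,
                      kernel_size=conv.kernel_size, stride=conv.stride,
                      padding=conv.padding, dilation=conv.dilation,
                      groups=conv.groups, bias=True)
    # y = gamma * (W*x + b - mean) / sqrt(var + eps) + beta
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
    fused.weight.data = conv.weight.data * scale.reshape(-1, 1, 1, 1)
    conv_bias = conv.bias.data if conv.bias is not None else torch.zeros_like(bn.running_mean)
    fused.bias.data = (conv_bias - bn.running_mean) * scale + bn.bias.data
    return fused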

Here we provide the hyperparameters and architectures that were used to generate the results. Some of them (e.g., the learning rate) are far from optimal. Do not hesitate to change them and see the effect.

P.S. With a similar structure, we reached Rank@1=87.74% mAP=69.46% with MatConvNet (batchsize=8, dropout=0.75). You may refer to Here. Different frameworks need to be tuned in different ways.

Some News

12 Aug 2023 The Large Person Language Model is now available at Here (accepted by ACM MM'23). You are welcome to check it out.

19 Mar 2023 We host a special session at the IEEE Intelligent Transportation Systems Conference (ITSC), covering object re-identification and point cloud topics. The paper deadline is May 15, 2023, and notification is on June 30, 2023. Please select the session code "w7r4a" during submission. More details can be found at the Special Session Website.

9 Mar 2023 Market-1501 is now in 3D. Please check our single-image 2D-to-3D reconstruction work at https://github.com/layumi/3D-Magic-Mirror.

2022 News

7 Sep 2022 We support SwinV2.

24 Jul 2022 Market-HQ is released with super-resolution quality from 128*64 to 512*256. Please check it at https://github.com/layumi/HQ-Market

14 Jul 2022 Added adversarial training, enabled by python train.py --name ftnet_adv --adv 0.1 --aiter 40.

1 Feb 2022 Sped up the inference process by about 10 seconds by removing the cat function in test.py.
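
A minimal sketch of the idea behind this change, assuming a generic feature extractor and data loader (the names and feature dimension below are illustrative, not the exact code in test.py): preallocate one feature matrix and fill it batch by batch instead of calling torch.cat after every batch.

import torch

def extract_features(model, dataloader, feat_dim=512, device='cuda'):
    """Write each batch into a preallocated matrix instead of concatenating per batch."""
    n = len(dataloader.dataset)
    features = torch.zeros(n, feat_dim)
    idx = 0
    model.eval()
    with torch.no_grad():
        for imgs, _ in dataloader:
            out = model(imgs.to(device)).cpu()
            features[idx:idx + out.size(0)] = out
            idx += out.size(0)
    return features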

1 Feb 2022 Added a TensorRT demo (the fast inference speed may depend on a GPU with the latest RT Cores).

2021 News

30 Dec 2021 We added support for new losses, including ArcFace loss, CosFace loss and instance loss. The hyper-parameters are still being tuned.

3 Dec 2021 We added support for four losses, including triplet loss, contrastive loss, sphere loss and lifted loss. The hyper-parameters are still being tuned.

1 Dec 2021 We support EfficientNet/HRNet.

15 Sep 2021 We support ResNet-ibn from ECCV2018 (https://github.com/XingangPan/IBN-Net).

17 Aug 2021 We support running code on Google Colab with free GPU. Please check it out at https://github.com/layumi/Person_reID_baseline_pytorch/tree/master/colab .

14 Aug 2021 We have supported training with DG-Market for regularization via Self-supervised Memory Learning. You only need to download/unzip the dataset and add --DG to train the model.

12 Aug 2021 We have supported the transformer-based Swin model via --use_swin. The basic performance is 92.73% Rank@1 and 79.71% mAP.

23 Jun 2021 Attack your re-ID model via the query! They are not as robust as you expected! Check the code at Here.

5 Feb 2021 We have supported Circle loss (CVPR20 Oral). You can try it by simply adding --circle.

11 January 2021 On the Market-1501 dataset, we accelerate the re-ranking processing from 89.2s to 9.4ms with one K40m GPU, facilitating real-time post-processing. The pytorch implementation can be found in GPU-Re-Ranking.

2020 News

11 June 2020 People live in a 3D world. We release a new person re-ID codebase, Person Re-identification in the 3D Space, which conducts representation learning in 3D space. You are welcome to check it out.

30 April 2020 We have applied this code to the AICity Challenge 2020, yielding the 1st Place Submission to the re-id track 🚗. Check it out here.

01 March 2020 We released a new image retrieval dataset, called University-1652, for drone-view target localization and drone navigation 🚁. It has a similar setting to person re-ID. You are welcome to check it out.

2019 News

07 July 2019: I added some new functions, such as --resume, an auto-augmentation policy and an acos loss, to the developing branch, and rewrote the save and load functions. I haven't tested these functions thoroughly, but some of them are worth a try. If you are new to this repo, I suggest you stay with the master branch.

01 July 2019: My CVPR19 paper is online. It uses this baseline repo as the teacher model to provide pseudo labels for the generated images, which are then used to train a better student model. You are welcome to check out the open-source code here.

03 Jun 2019: Testing with multi-scale inputs is added. You can use --ms 1,0.9 when extracting features. It could slightly improve the final result.
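
Conceptually, multi-scale testing resizes the input to each requested scale, extracts one feature per scale, and averages them. A hedged sketch (the interpolation mode and the final L2 normalization are assumptions, not necessarily the repo's exact implementation):

import torch
import torch.nn.functional as F

def multi_scale_feature(model, img, scales=(1.0, 0.9)):
    """Average features extracted at several input scales, e.g. --ms 1,0.9."""
    feats = []
    with torch.no_grad():
        for s in scales:
            x = img if s == 1.0 else F.interpolate(
                img, scale_factor=s, mode='bilinear', align_corners=False)
            feats.append(model(x))
    feat = torch.stack(feats).mean(dim=0)
    return F.normalize(feat, dim=1)  # L2-normalize before matching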

20 May 2019: Linear warm-up is added. You can also warm up the first K epochs with --warm_epoch K. If K <= 0, there will be no warm-up.
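
Linear warm-up simply ramps the learning rate from a small value up to the base value over the first K epochs. A minimal sketch of such a schedule (an illustration, not the exact multiplier curve used in train.py):

def warmup_lr(optimizer, base_lr, epoch, warm_epoch):
    """Linearly ramp the learning rate over the first `warm_epoch` epochs."""
    if warm_epoch <= 0 or epoch >= warm_epoch:
        factor = 1.0
    else:
        factor = (epoch + 1) / warm_epoch
    for group in optimizer.param_groups:
        group['lr'] = base_lr * factor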

2018 & 2017 News

What's new: FP16 has been added. It can be used by simply adding --fp16. You need to install apex and update your PyTorch to 1.0.

Float16 could save about 50% of GPU memory usage without an accuracy drop. Our baseline can be trained with only 2GB of GPU memory.

python train.py --fp16

What's new: Visualizing ranking result is added.

python prepare.py
python train.py
python test.py
python demo.py --query_index 777

What's new: Multiple-query Evaluation is added. The multiple-query result is about Rank@1=91.95% mAP=78.06%.

python prepare.py
python train.py
python test.py --multi
python evaluate_gpu.py

What's new: PCB is added. You may use --PCB to use this model. It can achieve around Rank@1=92.73% mAP=78.16%. I used a GPU (P40) with 24GB memory. You may try a smaller batch size and a smaller learning rate (for stability), for example --batchsize 32 --lr 0.01 --PCB.

python train.py --PCB --batchsize 64 --name PCB-64
python test.py --PCB --name PCB-64

What's new: You may try evaluate_gpu.py to conduct a faster evaluation with GPU.

What's new: You may apply --use_dense to use DenseNet-121. It reaches around Rank@1=89.91% mAP=73.58%.

What's new: Re-ranking is added to evaluation. The re-ranked result is about Rank@1=90.20% mAP=84.76%.

What's new: Random Erasing is added to training.

What's new: I added some code to generate training curves. The figure will be saved into the model folder during training.

Trained Model

I re-trained several models, and the results may differ from the original ones. Just for a quick reference, you may use these models directly. The download link is Here.

Methods | Rank@1 | mAP | Reference
EfficientNet-b4 | 85.78% | 66.80% | python train.py --use_efficient --name eff; python test.py --name eff
ResNet-50 + adv defense | 87.77% | 69.83% | python train.py --name adv0.1_40_w10_all --adv 0.1 --aiter 40 --warm 10 --train_all; python test.py --name adv0.1_40_w10_all
ConvNeXt | 88.98% | 71.35% | python train.py --use_convnext --name convnext; python test.py --name convnext
ResNet-50 (fp16) | 88.03% | 71.40% | python train.py --name fp16 --fp16 --train_all
ResNet-50 | 88.84% | 71.59% | python train.py --train_all
ResNet-50-ibn | 89.13% | 73.40% | python train.py --train_all --name res-ibn --ibn
DenseNet-121 | 90.17% | 74.02% | python train.py --name ft_net_dense --use_dense --train_all
DenseNet-121 (Circle) | 91.00% | 76.54% | python train.py --name ft_net_dense_circle_w5 --circle --use_dense --train_all --warm_epoch 5
HRNet-18 | 90.83% | 76.65% | python train.py --use_hr --name hr18; python test.py --name hr18
PCB | 92.64% | 77.47% | python train.py --name PCB --PCB --train_all --lr 0.02
PCB + DG | 92.70% | 78.31% | python train.py --name PCB_DG --PCB --train_all --lr 0.02 --DG; python test.py --name PCB_DG
ResNet-50 (all tricks) | 91.83% | 78.32% | python train.py --warm_epoch 5 --stride 1 --erasing_p 0.5 --batchsize 8 --lr 0.02 --name warm5_s1_b8_lr2_p0.5
ResNet-50 (all tricks+Circle) | 92.13% | 79.84% | python train.py --warm_epoch 5 --stride 1 --erasing_p 0.5 --batchsize 8 --lr 0.02 --name warm5_s1_b8_lr2_p0.5_circle --circle
ResNet-50 (all tricks+Circle+DG) | 92.13% | 80.13% | python train.py --warm_epoch 5 --stride 1 --erasing_p 0.5 --batchsize 8 --lr 0.02 --name warm5_s1_b8_lr2_p0.5_circle_DG --circle --DG; python test.py --name warm5_s1_b8_lr2_p0.5_circle_DG
DenseNet-121 (all tricks+Circle) | 92.61% | 80.24% | python train.py --warm_epoch 5 --stride 1 --erasing_p 0.5 --batchsize 8 --lr 0.02 --name dense_warm5_s1_b8_lr2_p0.5_circle --circle --use_dense; python test.py --name dense_warm5_s1_b8_lr2_p0.5_circle
HRNet-18 (all tricks+Circle+DG) | 92.19% | 81.00% | python train.py --use_hr --name hr18_p0.5_circle_w5_b16_lr0.01_DG --lr 0.01 --batch 16 --DG --erasing_p 0.5 --circle --warm_epoch 5; python test.py --name hr18_p0.5_circle_w5_b16_lr0.01_DG
Swin (224x224) | 92.75% | 79.70% | python train.py --use_swin --name swin; python test.py --name swin
SwinV2 (all tricks+Circle 256x128) | 92.93% | 82.99% | python train.py --use_swinv2 --name swinv2_p0.5_circle_w5_b16_lr0.03 --lr 0.03 --batch 16 --erasing_p 0.5 --circle --warm_epoch 5; python test.py --name swinv2_p0.5_circle_w5_b16_lr0.03 --batch 32
Swin (all tricks+Circle 224x224) | 94.12% | 84.39% | python train.py --use_swin --name swin_p0.5_circle_w5 --erasing_p 0.5 --circle --warm_epoch 5; python test.py --name swin_p0.5_circle_w5
Swin (all tricks+Circle+b16 224x224) | 94.00% | 85.21% | python train.py --use_swin --name swin_p0.5_circle_w5_b16_lr0.01 --lr 0.01 --batch 16 --erasing_p 0.5 --circle --warm_epoch 5; python test.py --name swin_p0.5_circle_w5_b16_lr0.01
Swin (all tricks+Circle+b16+DG 224x224) | 94.00% | 85.36% | python train.py --use_swin --name swin_p0.5_circle_w5_b16_lr0.01_DG --lr 0.01 --batch 16 --DG --erasing_p 0.5 --circle --warm_epoch 5; python test.py --name swin_p0.5_circle_w5_b16_lr0.01_DG
  • More training iterations may lead to better results.
  • Swin costs more GPU memory to run (an 11G GPU is needed).
  • The hyper-parameter of DG-Market (--DG) is not tuned. Better hyper-parameters may lead to better results.

Different Losses

I do not optimize the hyper-parameters. You are free to tune them for better performance.

Methods | Rank@1 | mAP | Reference
CE | 92.01% | 79.31% | python train.py --warm_epoch 5 --stride 1 --erasing_p 0.5 --batchsize 32 --lr 0.08 --name warm5_s1_b32_lr8_p0.5_100 --total 100; python test.py --name warm5_s1_b32_lr8_p0.5_100
CE + Sphere [Paper] | 92.01% | 79.39% | python train.py --warm_epoch 5 --stride 1 --erasing_p 0.5 --batchsize 32 --lr 0.08 --name warm5_s1_b32_lr8_p0.5_sphere100 --sphere --total 100; python test.py --name warm5_s1_b32_lr8_p0.5_sphere100
CE + Triplet [Paper] | 92.40% | 79.71% | python train.py --warm_epoch 5 --stride 1 --erasing_p 0.5 --batchsize 32 --lr 0.08 --name warm5_s1_b32_lr8_p0.5_triplet100 --triplet --total 100; python test.py --name warm5_s1_b32_lr8_p0.5_triplet100
CE + Lifted [Paper] | 91.78% | 79.77% | python train.py --warm_epoch 5 --stride 1 --erasing_p 0.5 --batchsize 32 --lr 0.08 --name warm5_s1_b32_lr8_p0.5_lifted100 --lifted --total 100; python test.py --name warm5_s1_b32_lr8_p0.5_lifted100
CE + Instance [Paper] | 92.73% | 81.11% | python train.py --warm_epoch 5 --stride 1 --erasing_p 0.5 --batchsize 32 --lr 0.08 --name warm5_s1_b32_lr8_p0.5_instance100_gamma64 --instance --ins_gamma 64 --total 100; python test.py --name warm5_s1_b32_lr8_p0.5_instance100_gamma64
CE + Contrast [Paper] | 92.28% | 81.42% | python train.py --warm_epoch 5 --stride 1 --erasing_p 0.5 --batchsize 32 --lr 0.08 --name warm5_s1_b32_lr8_p0.5_contrast100 --contrast --total 100; python test.py --name warm5_s1_b32_lr8_p0.5_contrast100
CE + Circle [Paper] | 92.46% | 81.70% | python train.py --warm_epoch 5 --stride 1 --erasing_p 0.5 --batchsize 32 --lr 0.08 --name warm5_s1_b32_lr8_p0.5_circle100 --circle --total 100; python test.py --name warm5_s1_b32_lr8_p0.5_circle100
CE + Contrast + Sphere | 92.79% | 82.02% | python train.py --warm_epoch 5 --stride 1 --erasing_p 0.5 --batchsize 32 --lr 0.08 --name warm5_s1_b32_lr8_p0.5_cs100 --contrast --sphere --total 100; python test.py --name warm5_s1_b32_lr8_p0.5_cs100
CE + Contrast + Triplet (Long) | 92.61% | 82.01% | python train.py --warm_epoch 5 --stride 1 --erasing_p 0.5 --batchsize 24 --lr 0.062 --name warm5_s1_b24_lr6.2_p0.5_contrast_triplet_133 --contrast --triplet --total 133; python test.py --name warm5_s1_b24_lr6.2_p0.5_contrast_triplet_133
CE + Contrast + Circle (Long) | 92.19% | 82.07% | python train.py --warm_epoch 5 --stride 1 --erasing_p 0.5 --batchsize 24 --lr 0.08 --name warm5_s1_b24_lr8_p0.5_contrast_circle133 --contrast --circle --total 133; python test.py --name warm5_s1_b24_lr8_p0.5_contrast_circle133
CE + Contrast + Sphere (Long) | 92.84% | 82.37% | python train.py --warm_epoch 5 --stride 1 --erasing_p 0.5 --batchsize 24 --lr 0.06 --name warm5_s1_b24_lr6_p0.5_contrast_sphere133 --contrast --sphere --total 133; python test.py --name warm5_s1_b24_lr6_p0.5_contrast_sphere133

Model Structure

You may learn more from model.py. We add one linear layer (bottleneck), one batch-norm layer and a ReLU.
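
In sketch form, the classifier head described above looks roughly like the block below (the layer sizes, dropout rate and class count are illustrative; see model.py for the exact implementation):

import torch.nn as nn

class ClassBlock(nn.Module):
    """Bottleneck (Linear) -> BatchNorm -> ReLU -> classifier; sizes are illustrative."""
    def __init__(self, input_dim=2048, class_num=751, bottleneck_dim=512, droprate=0.5):
        super().__init__()
        self.bottleneck = nn.Sequential(
            nn.Linear(input_dim, bottleneck_dim),
            nn.BatchNorm1d(bottleneck_dim),
            nn.ReLU(inplace=True),
            nn.Dropout(p=droprate),
        )
        self.classifier = nn.Linear(bottleneck_dim, class_num)

    def forward(self, x):
        f = self.bottleneck(x)      # the re-ID feature used at test time
        return self.classifier(f)   # class logits used for the ID loss during training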

Prerequisites

  • Python 3.6+
  • GPU Memory >= 6G
  • Numpy
  • Pytorch 0.3+
  • timm (pip install timm) for Swin Transformer, which requires Pytorch > 1.7.0
  • pretrainedmodels via pip install pretrainedmodels
  • [Optional] apex (for float16)

(Some reports found that updating numpy restores the right accuracy. If you only get 50~80% Top-1 accuracy, just try it.) We have successfully run the code with numpy 1.12.1 and 1.13.1.

Getting started

Installation

git clone https://github.com/pytorch/vision
cd vision
python setup.py install
  • [Optional] You may skip this step. Install apex from the source:
git clone https://github.com/NVIDIA/apex.git
cd apex
python setup.py install --cuda_ext --cpp_ext

Because pytorch and torchvision are ongoing projects, note that our code is tested with Pytorch 0.3.0/0.4.0/0.5.0/1.0.0 and Torchvision 0.2.0/0.2.1.

Dataset & Preparation

Download the Market-1501 dataset from [Google] or [Baidu], or use the command line:

pip install gdown 
pip install --upgrade gdown #!!important!!
gdown 0B8-rUzbwVRk0c054eEozWG9COHM

Preparation: Put the images with the same id in one folder. You may use

python prepare.py

Remember to change the dataset path to your own path.
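
Conceptually, the preparation step groups images by person ID. A simplified sketch of that grouping (prepare.py does more than this, e.g. building the separate train/val/query/gallery folders, so treat this only as an illustration):

import os
import shutil

def group_by_id(src_dir, dst_dir):
    """Copy every image into a sub-folder named after its person ID (first field of the file name)."""
    for name in os.listdir(src_dir):
        if not name.endswith('.jpg'):
            continue
        pid = name.split('_')[0]  # e.g. '0002' from '0002_c1s1_000451_03.jpg'
        os.makedirs(os.path.join(dst_dir, pid), exist_ok=True)
        shutil.copyfile(os.path.join(src_dir, name), os.path.join(dst_dir, pid, name))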

Furthermore, you can also test our code on the DukeMTMC-reID dataset (GoogleDriver or BaiduYun, password: bhbh), or use the command line:

gdown 1jjE85dRCMOgRtvJ5RQV9-Afs-2_5dY3O

Our baseline code does not reach such high results on DukeMTMC-reID: Rank@1=64.23%, mAP=43.92%. The hyperparameters need to be tuned.

  • [Optional] DG-Market is a generated pedestrian dataset of 128,307 images for training a robust model.

Train

Train a model by

python train.py --gpu_ids 0 --name ft_ResNet50 --train_all --batchsize 32  --data_dir your_data_path

--gpu_ids which gpu to run.

--name the name of model.

--data_dir the path of the training data.

--train_all using all images to train.

--batchsize batch size.

--erasing_p random erasing probability.

Train a model with random erasing by

python train.py --gpu_ids 0 --name ft_ResNet50 --train_all --batchsize 32  --data_dir your_data_path --erasing_p 0.5
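
For reference, random erasing is also available as a standard torchvision transform (transforms.RandomErasing, applied after ToTensor). A minimal sketch of a training pipeline using it; the resize and normalization values are illustrative, not necessarily the exact settings in train.py:

import torchvision.transforms as T

train_transform = T.Compose([
    T.Resize((256, 128)),
    T.RandomHorizontalFlip(),
    T.ToTensor(),
    T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    T.RandomErasing(p=0.5, scale=(0.02, 0.33), ratio=(0.3, 3.3)),  # p matches --erasing_p 0.5
])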

Test

Use the trained model to extract features by

python test.py --gpu_ids 0 --name ft_ResNet50 --test_dir your_data_path  --batchsize 32 --which_epoch 59

--gpu_ids which gpu to run.

--batchsize batch size.

--name the dir name of trained model.

--which_epoch select the i-th model.

--test_dir the path of the testing data.

Evaluation

python evaluate.py

It will output Rank@1, Rank@5, Rank@10 and mAP results. You may also try evaluate_gpu.py to conduct a faster evaluation with GPU.

For the mAP calculation, you can also refer to the C++ code for the Oxford Buildings dataset. We use the triangle mAP calculation (consistent with the original Market-1501 code).
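
For reference, a simplified sketch of the average precision for a single query using the trapezoidal ("triangle") precision-recall area mentioned above; junk-image handling and other details of evaluate.py are omitted:

import numpy as np

def average_precision(ranked_good):
    """AP for one query. `ranked_good` is a boolean array over the ranked gallery,
    True where the gallery image shares the query identity (junk images removed)."""
    hits = np.where(ranked_good)[0]
    if hits.size == 0:
        return 0.0
    ap = 0.0
    for i, pos in enumerate(hits):             # i: hits seen so far, pos: 0-based rank
        d_recall = 1.0 / hits.size
        precision = (i + 1) / (pos + 1)
        old_precision = i / pos if pos != 0 else 1.0
        ap += d_recall * (precision + old_precision) / 2
    return ap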

re-ranking

python evaluate_rerank.py

It may take more than 10GB of memory to run, so run it on a powerful machine if possible.

It will output Rank@1, Rank@5, Rank@10 and mAP results.

Tips

Note the format of the camera ID and the number of cameras.

For some datasets, e.g., MSMT17, there are more than 10 cameras. You need to modify prepare.py and test.py to read the double-digit camera ID.
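
As an illustration of what needs to change, here is a hedged sketch that parses the person ID and a one- or two-digit camera ID from Market-style file names; the regex is an assumption, so adapt it to the naming rules of the target dataset:

import re

def parse_pid_camid(filename):
    """Extract person ID and camera ID from names like '0002_c1s1_000451_03.jpg',
    where the camera number may have one or two digits (e.g. '..._c12...')."""
    match = re.search(r'^(-?\d+)_c(\d+)', filename)
    if match is None:
        raise ValueError('Unrecognized file name: %s' % filename)
    return int(match.group(1)), int(match.group(2))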

For some vehicle re-ID datasets, e.g., VeRi, you also need to modify prepare.py and test.py, since they use different naming rules. See #107 (sorry, it is in Chinese).

Citation

The following paper uses and reports the result of the baseline model. You may cite it in your paper.

@article{zheng2019joint,
  title={Joint discriminative and generative learning for person re-identification},
  author={Zheng, Zhedong and Yang, Xiaodong and Yu, Zhiding and Zheng, Liang and Yang, Yi and Kautz, Jan},
  journal={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2019}
}

The following papers may be the first two to use the bottleneck baseline. You may cite them in your paper.

@inproceedings{DBLP:journals/corr/SunZDW17,
  author    = {Yifan Sun and
               Liang Zheng and
               Weijian Deng and
               Shengjin Wang},
  title     = {SVDNet for Pedestrian Retrieval},
  booktitle = {ICCV},
  year      = {2017}
}

@article{hermans2017defense,
  title={In Defense of the Triplet Loss for Person Re-Identification},
  author={Hermans, Alexander and Beyer, Lucas and Leibe, Bastian},
  journal={arXiv preprint arXiv:1703.07737},
  year={2017}
}

Basic Model

@article{zheng2018discriminatively,
  title={A discriminatively learned CNN embedding for person reidentification},
  author={Zheng, Zhedong and Zheng, Liang and Yang, Yi},
  journal={ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM)},
  volume={14},
  number={1},
  pages={13},
  year={2018},
  publisher={ACM}
}

@article{zheng2020vehiclenet,
  title={VehicleNet: Learning Robust Visual Representation for Vehicle Re-identification},
  author={Zheng, Zhedong and Ruan, Tao and Wei, Yunchao and Yang, Yi and Mei, Tao},
  journal={IEEE Transactions on Multimedia (TMM)},
  year={2020}
}

Related Repos

  1. Pedestrian Alignment Network
  2. 2stream Person re-ID
  3. Pedestrian GAN
  4. Language Person Search
  5. DG-Net
  6. 3D Person re-ID
