CornerNet-Lite: Training, Evaluation and Testing Code

Code for reproducing results in the following paper:

CornerNet-Lite: Efficient Keypoint Based Object Detection
Hei Law, Yun Teng, Olga Russakovsky, Jia Deng
arXiv:1904.08900

Getting Started

Software Requirement

  • Python 3.7
  • PyTorch 1.0.0
  • CUDA 10
  • GCC 4.9.2 or above

Installing Dependencies

Please first install Anaconda and create an Anaconda environment using the provided package list conda_packagelist.txt.

conda create --name CornerNet_Lite --file conda_packagelist.txt --channel pytorch

After you create the environment, please activate it.

source activate CornerNet_Lite
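
With the environment active, you can quickly confirm that the versions match the requirements above. A minimal sanity check (it assumes only that the conda packages installed correctly):

import sys
import torch

# compare these against the versions listed under Software Requirement
print("Python :", sys.version.split()[0])
print("PyTorch:", torch.__version__)
print("CUDA   :", torch.version.cuda, "| GPU available:", torch.cuda.is_available())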

Compiling Corner Pooling Layers

Compile the C++ implementation of the corner pooling layers (GCC 4.9.2 or above is required).

cd <CornerNet-Lite dir>/core/models/py_utils/_cpools/
python setup.py install --user
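
If the build succeeds, the compiled extensions should be importable from Python. A hedged check; the module names below are an assumption based on the sources under _cpools and may differ in your build:

import torch  # the extensions link against PyTorch, so import it first

# assumed extension names; adjust if your build registers different ones
import top_pool, bottom_pool, left_pool, right_pool
print("corner pooling extensions loaded")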

Compiling NMS

Compile the NMS code, which is originally from Faster R-CNN and Soft-NMS.

cd <CornerNet-Lite dir>/core/external
make

Downloading Models

In this repo, we provide models for three detectors: CornerNet-Saccade, CornerNet-Squeeze and CornerNet.

Put the CornerNet-Saccade model under <CornerNet-Lite dir>/cache/nnet/CornerNet_Saccade/, the CornerNet-Squeeze model under <CornerNet-Lite dir>/cache/nnet/CornerNet_Squeeze/, and the CornerNet model under <CornerNet-Lite dir>/cache/nnet/CornerNet/. (Note that the directory names for CornerNet-Saccade and CornerNet-Squeeze use underscores instead of dashes.)
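
The expected cache directories can be created ahead of time. A small sketch using only the paths given above (run it from <CornerNet-Lite dir>):

import os

# one directory per detector; note the underscores in the names
for name in ("CornerNet_Saccade", "CornerNet_Squeeze", "CornerNet"):
    os.makedirs(os.path.join("cache", "nnet", name), exist_ok=True)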

Note: The CornerNet model is the same as the one in the original CornerNet repo. We just ported it to this new repo.

Running the Demo Script

After downloading the models, you should be able to use the detectors on your own images. We provide a demo script demo.py to test if the repo is installed correctly.

python demo.py

This script applies CornerNet-Saccade to demo.jpg and writes the results to demo_out.jpg.

In the demo script, the default detector is CornerNet-Saccade. You can modify the demo script to test different detectors. For example, if you want to test CornerNet-Squeeze:

#!/usr/bin/env python

import cv2
from core.detectors import CornerNet_Squeeze
from core.vis_utils import draw_bboxes

# load CornerNet-Squeeze and run it on a single image
detector = CornerNet_Squeeze()
image    = cv2.imread("demo.jpg")

bboxes = detector(image)
image  = draw_bboxes(image, bboxes)
cv2.imwrite("demo_out.jpg", image)

Using CornerNet-Lite in Your Project

It is also easy to use CornerNet-Lite in your project. You will need to rename the directory from CornerNet-Lite to CornerNet_Lite; Python module names cannot contain dashes, so the import would fail otherwise.

Your project
│   README.md
│   ...
│   foo.py
│
└───CornerNet_Lite
│
└───directory1
│
└───...

In foo.py, you can easily import CornerNet-Saccade by adding:

import cv2

from CornerNet_Lite import CornerNet_Saccade

def foo():
    cornernet = CornerNet_Saccade()
    # CornerNet_Saccade is ready to use

    image  = cv2.imread('/path/to/your/image')
    bboxes = cornernet(image)

If you want to train or evaluate the detectors on COCO, please move on to the following steps.

Training and Evaluation

Installing MS COCO APIs

mkdir -p <CornerNet-Lite dir>/data
cd <CornerNet-Lite dir>/data
git clone git@github.com:cocodataset/cocoapi.git coco
cd <CornerNet-Lite dir>/data/coco/PythonAPI
make install

Downloading MS COCO Data

  • Download the training/validation split we use in our paper from here (originally from Faster R-CNN)
  • Unzip the file and place annotations under <CornerNet-Lite dir>/data/coco
  • Download the images (2014 Train, 2014 Val, 2017 Test) from here
  • Create 3 directories, trainval2014, minival2014 and testdev2017, under <CornerNet-Lite dir>/data/coco/images/
  • Copy the training/validation/testing images to the corresponding directories according to the annotation files
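
To confirm the annotations are in place after these steps, the COCO API installed earlier can load a split directly. A minimal check (the annotation filename below is illustrative; use whichever files came with the split):

from pycocotools.coco import COCO

# point this at one of the annotation files placed under data/coco
coco = COCO("data/coco/annotations/instances_minival2014.json")
print(len(coco.getImgIds()), "images in this split")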

To train and evaluate a network, you will need to create a configuration file, which defines the hyperparameters, and a model file, which defines the network architecture. The configuration file should be in JSON format and placed in <CornerNet-Lite dir>/configs/. Each configuration file should have a corresponding model file in <CornerNet-Lite dir>/core/models/; i.e., if there is a <model>.json in <CornerNet-Lite dir>/configs/, there should be a <model>.py in <CornerNet-Lite dir>/core/models/. There is one exception, which we will mention later.
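
The pairing convention is easy to check programmatically. A small sketch (only the file pairing is asserted here; the keys inside the JSON differ per model):

import json
import os

model = "CornerNet_Saccade"

# each configs/<model>.json should have a matching core/models/<model>.py
cfg_path = os.path.join("configs", model + ".json")
net_path = os.path.join("core", "models", model + ".py")
assert os.path.exists(cfg_path), cfg_path + " is missing"
assert os.path.exists(net_path), net_path + " is missing"

with open(cfg_path) as f:
    cfg = json.load(f)
print("hyperparameter sections:", list(cfg.keys()))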

Training and Evaluating a Model

To train a model:

python train.py <model>

We provide the configuration files and the model files for CornerNet-Saccade, CornerNet-Squeeze and CornerNet in this repo. Please check the configuration files in <CornerNet-Lite dir>/configs/.

To train CornerNet-Saccade:

python train.py CornerNet_Saccade

Please adjust the batch size in CornerNet_Saccade.json to accommodate the number of GPUs that are available to you.

To evaluate the trained model:

python evaluate.py CornerNet_Saccade --testiter 500000 --split <split>

If you want to test different hyperparameters during evaluation and do not want to overwrite the original configuration file, you can do so by creating a configuration file with a suffix (<model>-<suffix>.json). There is no need to create <model>-<suffix>.py in <CornerNet-Lite dir>/core/models/.
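
For instance, a suffixed copy can be derived from the base file and edited in place (the eval suffix and the edit step are hypothetical):

import json
import shutil

# derive configs/CornerNet_Saccade-eval.json from the base configuration
shutil.copyfile("configs/CornerNet_Saccade.json",
                "configs/CornerNet_Saccade-eval.json")

with open("configs/CornerNet_Saccade-eval.json") as f:
    cfg = json.load(f)

# ... adjust the evaluation hyperparameters here ...

with open("configs/CornerNet_Saccade-eval.json", "w") as f:
    json.dump(cfg, f, indent=4)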

To use the new configuration file:

python evaluate.py <model> --testiter <iter> --split <split> --suffix <suffix>

We also include a configuration file for CornerNet under the multi-scale setting, CornerNet-multi_scale.json, in this repo.

To use the multi-scale configuration file:

python evaluate.py CornerNet --testiter <iter> --split <split> --suffix multi_scale
