Verilog Generator of Neural Net Digit Detector for FPGA
This project trains a neural net to detect dark digits on a light background, then converts the net to a Verilog HDL representation using several techniques that reduce the FPGA resources required and increase processing speed. The code is production-ready for use in a real device and can easily be extended to detect other objects with a different neural net structure.
Requirements
Python 3.5, Tensorflow 1.4.0, Keras 2.1.3
How to run:
- python r01_train_neural_net_and_prepare_initial_weights.py
- python r02_rescale_weights_to_use_fixed_point_representation.py
- python r03_find_optimal_bit_for_weights.py
- python r04_verilog_generator_grayscale_file.py
- python r05_verilog_generator_neural_net_structure.py
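The rescaling step (r02) and the bit-width search (r03) boil down to representing each float weight as a signed fixed-point integer. A minimal sketch of that idea, assuming a simple round-and-clip policy (the function name and rounding details are hypothetical, not the repository's implementation):

```python
import math

def to_fixed_point(weights, total_bits=8):
    """Convert float weights to signed fixed-point integers.

    Hypothetical sketch of the idea behind r02/r03; the repository
    scripts implement their own rescaling and bit-width search.
    """
    max_abs = max(abs(w) for w in weights)
    # Bits needed for the integer part (none if all |w| < 1)
    int_bits = max(0, math.ceil(math.log2(max_abs))) if max_abs >= 1 else 0
    frac_bits = total_bits - 1 - int_bits  # one bit reserved for the sign
    scale = 1 << frac_bits
    lo, hi = -(1 << (total_bits - 1)), (1 << (total_bits - 1)) - 1
    # Round each weight to the nearest representable value, then clip
    return [min(hi, max(lo, round(w * scale))) for w in weights], frac_bits

q, frac = to_fixed_point([0.75, -0.5, 0.126], total_bits=8)
# q == [96, -64, 16]; each weight is represented as q[i] / 2**frac
```

The bit-width search in r03 then amounts to trying several values of `total_bits` and picking the smallest one whose quantization error does not hurt accuracy.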
Verilog code is already included in the repository in the ''verilog'' folder. It contains everything you need, including all code to interact with the camera and the screen. The neural net Verilog description is located in the ''verilog/code/neuroset'' folder.
Neural net structure
Device
To recreate the device you need 3 components:
- De0Nano board (~80$)
- Camera OV7670 (~7$)
- Display ILI9341 (~7$)
Connection of components
- Connect pins with the same name
- Pins marked 'x' are not used
- You can see our connection variant in the photo below
- Detailed guide on how to use the project in Altera Quartus.
Demo video with detection
Notes
- You can change the constant num_conv = 2 in r05_verilog_generator_neural_net_structure.py to 1, 2 or 4 convolutional blocks working in parallel. More blocks require more LEs on the FPGA but increase overall speed.
- A comparison table for different weight bit widths and numbers of convolution blocks is below (red rows: unable to synthesize due to Cyclone IV limitations).
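To see why more parallel convolution blocks raise throughput, note that a layer's cycle count scales roughly inversely with num_conv. A back-of-the-envelope sketch (the function and numbers are purely illustrative; real timing depends on the generated pipeline):

```python
def estimated_conv_cycles(output_pixels, num_conv):
    """Roughly estimate cycles for one convolutional layer when its
    output pixels are split across num_conv parallel blocks.

    Purely illustrative; not taken from the repository.
    """
    assert num_conv in (1, 2, 4)
    return -(-output_pixels // num_conv)  # ceiling division

# Doubling the number of blocks roughly halves the layer latency
print([estimated_conv_cycles(28 * 28, n) for n in (1, 2, 4)])
# -> [784, 392, 196]
```

The price is area: each extra block replicates the multiply-accumulate hardware, which is why the largest configurations in the comparison table fail to fit on the Cyclone IV.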
Related project
A similar project, but with a more complicated and widely used neural net: MobileNet (v1). It uses a different set of devices but has a similar code structure. It runs fast (>40 FPS) with much better accuracy than this project, and is suitable for most real-time image classification tasks.
Citation
You can find a detailed description of the method in our paper (or the unpaywalled preprint). If you find this work useful, please consider citing:
@inproceedings{solovyev2019fixed,
title={Fixed-point convolutional neural network for real-time video processing in FPGA},
author={Solovyev, Roman and Kustov, Alexander and Telpukhov, Dmitry and Rukhlov, Vladimir and Kalinin, Alexandr},
booktitle={2019 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (EIConRus)},
pages={1605--1611},
year={2019},
organization={IEEE}
}