• This repository has been archived on 19/Jun/2019
  • Stars: 106
  • Rank: 325,871 (top 7%)
  • Language: C
  • Created: over 8 years ago
  • Updated: over 8 years ago

Repository Details

Tool to compress trained Caffe weights

Caffe Model Compression

This is a Python tool for compressing trained Caffe weights. For AlexNet, it achieves a 17x compression rate (~233 MB down to ~14 MB). The idea comes from Deep Compression. This work does not implement pruning or Huffman coding, but it does implement K-means-based quantization to compress the weights of the convolutional and fully connected layers. One contribution of this work is using OpenMP to accelerate the K-means processing.
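The K-means quantization step can be sketched as follows. This is a minimal NumPy illustration of the general technique (each float weight is replaced by a 1-byte index into a small shared codebook, with centroids initialised linearly over the weight range as in Deep Compression); it is not the repository's OpenMP-accelerated implementation, and all function names here are illustrative.

```python
import numpy as np

def quantize_weights(weights, n_clusters=256, n_iter=10):
    """Compress a float weight array to uint8 cluster indices plus a codebook."""
    flat = weights.ravel().astype(np.float64)
    # Linear initialisation: spread centroids evenly over the weight range.
    centroids = np.linspace(flat.min(), flat.max(), n_clusters)
    for _ in range(n_iter):
        # Assign every weight to its nearest centroid.
        idx = np.abs(flat[:, None] - centroids[None, :]).argmin(axis=1)
        # Move each centroid to the mean of the weights assigned to it.
        for k in range(n_clusters):
            members = flat[idx == k]
            if members.size:
                centroids[k] = members.mean()
    # With n_clusters <= 256, each index fits in one byte.
    return idx.astype(np.uint8), centroids, weights.shape

def dequantize_weights(idx, centroids, shape):
    """Reconstruct an approximate weight array from indices and codebook."""
    return centroids[idx].reshape(shape)
```

With 256 clusters, each 4-byte float weight is stored as a 1-byte index, so the per-layer storage shrinks roughly 4x before any further encoding; the accuracy cost is bounded by how closely the 256 centroids cover the weight distribution.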


#### Dependencies

  • Python/Numpy
  • Caffe

#### Authors

#### How to Build

```shell
cd quantz_kit
./build.sh
```

#### How to Use It

```python
caffe_model_compress    # function to compress a model
caffe_model_decompress  # function to decompress a model
```
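The exact signatures of `caffe_model_compress` and `caffe_model_decompress` are defined in the repository itself. As a hedged illustration of why the quantized format is so much smaller (1-byte indices plus a short float codebook instead of 4-byte floats per weight), here is a self-contained sketch of one possible on-disk layout; the file format and all names are hypothetical, not the tool's actual format.

```python
import numpy as np

def save_compressed(path, idx, codebook, shape):
    # Persist the 1-byte cluster indices and the small float codebook.
    # (Hypothetical layout for illustration, using NumPy's .npz container.)
    np.savez_compressed(path, idx=idx, codebook=codebook, shape=np.array(shape))

def load_compressed(path):
    # Rebuild the approximate weight array: look up each index in the codebook.
    data = np.load(path)
    return data["codebook"][data["idx"]].reshape(tuple(data["shape"]))
```

For a layer with N weights and a 256-entry codebook, this stores roughly N bytes plus 2 KB of codebook, versus 4N bytes of raw float32 weights.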