• Stars 4
• Rank 3,304,323 (Top 66%)
• Language VHDL
• License MIT License
• Created about 3 years ago
• Updated over 2 years ago

Repository Details

Design and implementation of a pipelined Bfloat16 Floating-Point Arithmetic Unit in VHDL. The unit performs addition, subtraction, multiplication, division, and fused multiply-add/subtract operations. Bfloat16 is a 16-bit floating-point format developed at Google and currently used in its Tensor Processing Units (TPUs). Because it keeps the same dynamic range as single precision in half the bits, the Bfloat16 format is well suited to Machine Learning applications that work well with low-precision representations of data. This Bfloat16 unit will be used to add custom RISC-V floating-point instructions to a RISC-V processor that could serve as a hardware accelerator for Machine Learning applications. The design will also be tested on an FPGA and possibly modified to achieve optimal performance. Work is still in progress.
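
As a rough illustration (a hypothetical sketch, not the repository's actual code), the VHDL package below spells out the standard bfloat16 field layout of 1 sign bit, 8 exponent bits, and 7 mantissa bits, plus a truncating conversion from an IEEE-754 single-precision bit pattern; the conversion works because a bfloat16 value is simply the upper 16 bits of a float32 word.

library ieee;
use ieee.std_logic_1164.all;

package bfloat16_sketch is
  -- Bfloat16 layout: bit 15 = sign, bits 14..7 = exponent, bits 6..0 = mantissa.
  subtype bfloat16_t is std_logic_vector(15 downto 0);
  constant SIGN_BIT : integer := 15;
  constant EXP_MSB  : integer := 14;
  constant EXP_LSB  : integer := 7;

  -- Truncating conversion from a single-precision bit pattern (ignores rounding,
  -- which a real arithmetic unit would have to handle).
  function float32_to_bfloat16(f : std_logic_vector(31 downto 0)) return bfloat16_t;
end package bfloat16_sketch;

package body bfloat16_sketch is
  function float32_to_bfloat16(f : std_logic_vector(31 downto 0)) return bfloat16_t is
  begin
    -- Bfloat16 keeps float32's sign, its full 8-bit exponent, and the top 7 mantissa bits.
    return f(31 downto 16);
  end function float32_to_bfloat16;
end package body bfloat16_sketch;

Going the other direction, appending 16 zero bits to a bfloat16 value yields a float32 with the same value, which is part of why the format is convenient for mixed-precision Machine Learning hardware.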