PyTorch implementation of "Differentiable Soft Quantization: Bridging Full-Precision and Low-Bit Neural Networks"
Updated Jan 2, 2020 - Python
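Since the entry above only names the paper, here is a minimal PyTorch sketch of the tanh-based soft quantization function that DSQ is built around. The function name, the fixed clipping bounds, and the default `alpha` are illustrative assumptions: the paper learns the clipping range and the curvature during training and follows the soft function with a standard rounding step, which this sketch omits.

```python
import math
import torch

def dsq_quantize(x, lower, upper, num_bits=2, alpha=0.2):
    """Tanh-based soft quantization in the style of the DSQ paper.

    x            : tensor of weights or activations
    lower, upper : clipping range [l, u] (learned in the paper, fixed here)
    num_bits     : target bit-width
    alpha        : curvature parameter in (0, 1); smaller values push the
                   soft curve closer to the hard staircase quantizer
    """
    levels = 2 ** num_bits - 1                # number of quantization intervals
    delta = (upper - lower) / levels          # interval width
    x = torch.clamp(x, lower, upper)

    # interval index i and midpoint m_i = l + (i + 0.5) * delta
    i = torch.clamp(torch.floor((x - lower) / delta), 0, levels - 1)
    m_i = lower + (i + 0.5) * delta

    # sharpness k and scale s = 1 / tanh(0.5 * k * delta), so phi spans (-1, 1)
    k = math.log(2.0 / alpha - 1.0) / delta
    s = 1.0 / math.tanh(0.5 * k * delta)
    phi = s * torch.tanh(k * (x - m_i))

    # map phi from (-1, 1) back into the i-th interval of the original range
    return lower + delta * (i + 0.5 * (phi + 1.0))
```

As `alpha` approaches 0 the soft curve approaches the hard staircase quantizer, which is what makes the transition from full precision to low bit-widths differentiable.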
QAT (quantization-aware training) for classification with MQBench
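The entry above refers to quantization-aware training. Rather than quote MQBench's API from memory, here is a library-agnostic sketch of the fake-quantization pass that QAT inserts into a classification model during training; the class names, the signed 8-bit range, and the per-tensor max-based scale are illustrative assumptions, not MQBench's actual interface.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FakeQuantSTE(torch.autograd.Function):
    """Uniform fake quantization with a straight-through estimator: round to
    the integer grid in the forward pass, pass gradients through unchanged
    wherever the input was not clipped."""

    @staticmethod
    def forward(ctx, x, scale, qmin, qmax):
        q = torch.clamp(torch.round(x / scale), qmin, qmax)
        ctx.save_for_backward(x, scale)
        ctx.qmin, ctx.qmax = qmin, qmax
        return q * scale                      # dequantize back to float

    @staticmethod
    def backward(ctx, grad_output):
        x, scale = ctx.saved_tensors
        mask = (x >= ctx.qmin * scale) & (x <= ctx.qmax * scale)
        return grad_output * mask.to(grad_output.dtype), None, None, None

class QATConv2d(nn.Conv2d):
    """Conv layer whose weights are fake-quantized to signed 8 bits."""

    def forward(self, x):
        # simple symmetric per-tensor scale; real tools use observers instead
        scale = (self.weight.detach().abs().max() / 127.0).clamp_min(1e-8)
        w_q = FakeQuantSTE.apply(self.weight, scale, -128, 127)
        return F.conv2d(x, w_q, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)
```

Swapping a classifier's `nn.Conv2d` layers for `QATConv2d` and training as usual approximates what a QAT toolkit wires up automatically; toolkits such as MQBench additionally handle activation quantization, calibration, and backend-specific quantizer settings.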