Exploring the Differences in Pruning Methods for Convolutional Neural Networks
Synopsis
With the rising computational and memory costs of deep neural networks, there is growing effort to reduce the size of these models, especially when deployment on resource-constrained devices is the goal. New methods of compressing neural networks are constantly being developed with the goal of minimizing the drop in accuracy. In this paper we focus on pruning techniques as a means of compression. We present a comparison of different pruning criteria and analyze the resulting loss in accuracy for a simple non-iterative pruning procedure. We also compare how these criteria behave when applied to different convolutional neural network architectures.
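To illustrate the kind of procedure the synopsis describes, below is a minimal NumPy sketch of one-shot (non-iterative) filter pruning using the L1-norm magnitude criterion. This is only an example of one common criterion, not necessarily the exact criteria or procedure compared in the paper; the function name and interface are hypothetical.

```python
import numpy as np

def prune_filters_l1(weights, prune_ratio):
    """One-shot filter pruning by L1-norm magnitude (illustrative sketch).

    weights: conv-layer kernel of shape (out_channels, in_channels, kH, kW)
    prune_ratio: fraction of filters to zero out in a single pass
    Returns the pruned weights and the indices of the removed filters.
    """
    n_filters = weights.shape[0]
    n_prune = int(n_filters * prune_ratio)
    pruned = weights.copy()
    if n_prune == 0:
        return pruned, np.array([], dtype=int)
    # Criterion: L1 norm of each filter's weights
    norms = np.abs(weights).reshape(n_filters, -1).sum(axis=1)
    # Zero out the filters with the smallest norms (non-iterative: one pass)
    pruned_idx = np.argsort(norms)[:n_prune]
    pruned[pruned_idx] = 0.0
    return pruned, pruned_idx
```

Other criteria studied in the pruning literature (e.g. L2 norm, activation-based, or gradient-based scores) can be swapped in by replacing the `norms` computation, which is what makes criterion comparisons like the one in this paper straightforward to set up.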
License

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.