Exploring the Differences in Pruning Methods for Convolutional Neural Networks

Authors

Romanela Lajić
University of Ljubljana, Faculty of Computer and Information Science
Peter Peer
University of Ljubljana, Faculty of Computer and Information Science
https://orcid.org/0000-0001-9744-4035
Žiga Emeršič
University of Ljubljana, Faculty of Computer and Information Science
https://orcid.org/0000-0002-3726-9404

Synopsis

With the rising computational and memory cost of deep neural networks, there is growing effort to reduce the size of these models, especially when deployment on resource-constrained devices is the goal. New methods for compressing neural networks are constantly being developed with the goal of minimizing the drop in accuracy. In this paper, we focus on pruning techniques as a means of compression. We present a comparison of different pruning criteria and analyze the loss in accuracy for the case of a simple non-iterative pruning procedure. We compare cases in which these criteria are applied to different convolutional neural network architectures.

Author Biographies

Romanela Lajić, University of Ljubljana, Faculty of Computer and Information Science

Ljubljana, Slovenia. E-mail: romanela.lajic@fri.uni-lj.si

Peter Peer, University of Ljubljana, Faculty of Computer and Information Science

Ljubljana, Slovenia. E-mail: peter.peer@fri.uni-lj.si

Žiga Emeršič, University of Ljubljana, Faculty of Computer and Information Science

Ljubljana, Slovenia. E-mail: ziga.emersic@fri.uni-lj.si

Published

March 6, 2025

How to Cite

(Ed.). (2025). Exploring the Differences in Pruning Methods for Convolutional Neural Networks. In ROSUS 2025 - Računalniška obdelava slik in njena uporaba v Sloveniji 2025: Zbornik 19. strokovne konference (Vol. 19, pp. 31-40). University of Maribor Press. https://press.um.si/index.php/ump/catalog/book/957/chapter/258