Structured Filter Pruning Applied to Mask R-CNN: A Path to Efficient Image Segmentation

Authors

  • Yannik Frühwirth, Duale Hochschule Baden-Württemberg

DOI:

https://doi.org/10.26034/lu.akwi.2024.4673

Keywords:

Pruning, Filter Pruning, Mask R-CNN, Image Segmentation, Two-Stage Detectors

Abstract

The study investigates how the efficiency of two-stage object detection systems can be improved using advanced machine learning methods. Despite their high accuracy, these systems suffer from over-parameterization and high computational demands. A pruning method tailored to Mask R-CNN is developed that reduces model complexity and memory footprint without compromising performance. A global kernel-level pruning strategy removes redundant parameters after training, with filter importance identified via the L1 norm; accuracy remains stable at pruning rates of up to 40%. With a compression rate of 1.25 and an IoU of 0.72, the results show that the approach can improve the efficiency of neural networks without reducing their detection accuracy, thereby contributing to the discussion on AI optimization.
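To make the pruning criterion concrete, the following is a minimal sketch of global L1-norm filter pruning applied after training. It assumes PyTorch and torchvision's Mask R-CNN implementation (torchvision >= 0.13 for the weights argument) and, for brevity, only zeroes out the weakest filters instead of structurally removing them as the study's method does; names such as prune_weakest are illustrative and not taken from the paper.

import torch
import torchvision

def global_l1_filter_ranking(model):
    # Collect one importance score per convolutional filter across the network.
    scores = []
    for name, module in model.named_modules():
        if isinstance(module, torch.nn.Conv2d):
            # L1 norm of each output filter; weight shape: (out, in, kH, kW)
            norms = module.weight.detach().abs().sum(dim=(1, 2, 3))
            scores.extend((n.item(), name, i) for i, n in enumerate(norms))
    return sorted(scores)  # ascending: globally weakest filters first

def prune_weakest(model, ratio=0.4):
    # Zero out the weakest fraction of filters (soft pruning for illustration;
    # structural removal would additionally shrink the stored model).
    scores = global_l1_filter_ranking(model)
    modules = dict(model.named_modules())
    with torch.no_grad():
        for _, name, idx in scores[: int(len(scores) * ratio)]:
            modules[name].weight[idx].zero_()

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
prune_weakest(model, ratio=0.4)  # accuracy reportedly stable up to ~40%

If the compression rate is defined as the ratio of original to pruned model size, the reported value of 1.25 implies that the pruned network retains roughly 80% (1/1.25) of the original parameters.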


Published

2025-01-09

Issue

Section

Fundamentals