Research Article Open Access

A Conceptual Framework for Efficient and Sustainable Pruning Techniques in Deep Learning Models

Nyalalani Smarts1, Rajalakshmi Selvaraj1 and Venumadhav Kuthadi1
  • 1 Department of Computing and Informatics, School of Pure and Applied Sciences, Botswana International University of Science and Technology, Palapye, Botswana

Abstract

This research paper proposes a conceptual framework and an optimization algorithm for pruning deep learning models, focusing on key challenges such as model size, computational efficiency, inference speed, and sustainable technology development. The framework aims to guide the transition from large neural networks to sparse, efficient models, highlighting the benefits of pruning for the scalability and applicability of the pruned models. The proposed framework focuses on reducing model size, optimizing training schedules, and facilitating efficient deployment on real-world devices. The framework was developed in stages: reviewing critical research concepts, identifying relationships between those concepts, and designing the pruning framework. Furthermore, this study introduces a new multi-objective optimization algorithm that formalizes the trade-offs among accuracy, computational cost, inference time, and energy consumption in the pruning process. Our experiments demonstrate the method's effectiveness in achieving notable model compression while preserving competitive performance on sentiment analysis and linguistic acceptability tasks using the Stanford Sentiment Treebank (SST-2) and Corpus of Linguistic Acceptability (CoLA) datasets. A BERT Base model pruned to 25 million parameters achieved an accuracy of 96.3% and an F1-score of 95.2% on the SST-2 dataset. On the CoLA dataset, the pruned model achieved an F1-score of 82.3% and a Matthews correlation coefficient of 56%. This framework, along with the algorithm, serves as a reference for researchers and practitioners, who can select a suitable pruning approach for deep learning models based on their specific application requirements.
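The multi-objective trade-off described in the abstract can be sketched as a weighted-sum scalarization over the four objectives. The function below is an illustrative assumption: the weights, normalization scheme, and objective names are hypothetical and do not reproduce the paper's exact formulation.

```python
def pruning_objective(accuracy, flops, latency, energy,
                      weights=(1.0, 0.3, 0.3, 0.4)):
    """Score a pruned-model candidate: reward accuracy, penalize cost.

    The three cost terms are assumed to be normalized to [0, 1]
    relative to the unpruned baseline (1.0 = baseline cost).
    A higher score indicates a better accuracy/cost trade-off.
    """
    w_acc, w_flops, w_lat, w_energy = weights
    return (w_acc * accuracy
            - w_flops * flops
            - w_lat * latency
            - w_energy * energy)

# Compare two hypothetical candidates: the dense baseline versus a
# pruned model that trades a little accuracy for large cost savings.
dense_score = pruning_objective(accuracy=0.970, flops=1.00,
                                latency=1.00, energy=1.00)
pruned_score = pruning_objective(accuracy=0.963, flops=0.25,
                                 latency=0.40, energy=0.30)
# Under these illustrative weights, the pruned model scores higher.
```

In practice, such a scalarized score could be used to rank candidate sparsity levels or pruning masks during search; a Pareto-front formulation is a common alternative when a single weighting is hard to justify.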

Journal of Computer Science
Volume 21 No. 5, 2025, 1113-1128

DOI: https://doi.org/10.3844/jcssp.2025.1113.1128

Submitted On: 14 October 2024 Published On: 26 April 2025

How to Cite: Smarts, N., Selvaraj, R., & Kuthadi, V. (2025). A Conceptual Framework for Efficient and Sustainable Pruning Techniques in Deep Learning Models. Journal of Computer Science, 21(5), 1113-1128. https://doi.org/10.3844/jcssp.2025.1113.1128


Keywords

  • Deep Language Models
  • Pruning
  • Efficiency
  • Pretrained Language Models
  • Sustainable Technology Development