Image Classification Model Selector

How to Cite

Arulanand, N., D. Kamalraj, and B. Krishna Teja. 2023. “Image Classification Model Selector”. Journal of Innovative Image Processing 4 (4): 299-315. https://doi.org/10.36548/jiip.2022.4.007.

Keywords

  • Image classification
  • automated image classification model selection
  • automated image classification

Abstract

Image classification is a subfield of computer vision in which a digital system assigns a category to an entire image. Deep Learning (DL) models are widely used for image classification; however, creating DL models is resource-intensive, time-consuming, and requires extensive knowledge of the DL domain. Google Teachable Machine (GTM) is a website that outputs a trained model for a given dataset; however, GTM uses only the MobileNet model and does not balance the image dataset, which affects the model's accuracy. This paper proposes a tool that automates the steps of building and training an image classification model, so that no extensive DL knowledge is required to use it. The tool automates image data pre-processing, model building, model training, and model testing, and outputs the best model for the given image classification dataset based on test accuracy. The tool is tested on two datasets, each in a balanced and an unbalanced version: a custom construction dataset and the Minet dataset. Both datasets are also used to train models on the GTM website. Owing to the automated pre-processing steps, the average increase in accuracy is 14.55% on the construction dataset and 3.91% on the Minet dataset. Compared with the GTM models, the tool produced a model with 8.33% higher accuracy on the construction dataset and a model with 14.07% higher accuracy on the Minet dataset. The models trained by the proposed tool thus achieve better accuracy than the models obtained using GTM, and the image classification model selector facilitates the creation of an effective image classification model for a target dataset.
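The selection loop the abstract describes (balance the dataset, train several candidate architectures, keep the one with the highest test accuracy) can be sketched in plain Python. This is only an illustration of the idea, not the paper's implementation: the function names `balance_by_oversampling` and `select_best_model`, and the `train`/`evaluate` hooks, are hypothetical; the actual tool operates on image files and DL models such as MobileNet, Xception, and EfficientNet.

```python
import random
from collections import defaultdict

def balance_by_oversampling(samples, seed=0):
    """Duplicate minority-class samples until every class has as many
    samples as the largest class. `samples` is a list of (image, label)
    pairs; simple random oversampling stands in for the paper's
    pre-processing step."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for image, label in samples:
        by_label[label].append(image)
    target = max(len(images) for images in by_label.values())
    balanced = []
    for label, images in by_label.items():
        extra = [rng.choice(images) for _ in range(target - len(images))]
        balanced.extend((image, label) for image in images + extra)
    return balanced

def select_best_model(candidates, train, evaluate):
    """Build, train, and evaluate each candidate architecture, and
    return the one with the best test accuracy.

    candidates: mapping of name -> zero-argument model factory.
    train, evaluate: pipeline-supplied callables (hypothetical hooks);
    evaluate must return a test-accuracy score."""
    best_name, best_acc, best_model = None, -1.0, None
    for name, build in candidates.items():
        model = build()
        train(model)
        acc = evaluate(model)
        if acc > best_acc:
            best_name, best_acc, best_model = name, acc, model
    return best_name, best_acc, best_model
```

Because the selector only ranks candidates by the score `evaluate` returns, swapping in a different metric (e.g. F1 on an imbalanced test set) requires no change to the loop itself.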

References

P. Yogendra Prasad, Dumpa Prasad, D. Naga Malleswari, Monali N. Shetty, Neha Gupta. “Implementation of Machine Learning Based Google Teachable Machine in Early Childhood Education” (2022).

Paula Branco, Luís Torgo, Rita P. Ribeiro. “A Survey of Predictive Modelling under Imbalanced Distributions” (2015).

Mingxing Tan, Quoc V. Le. “EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks” (2019).

Sotiris Kotsiantis, D. Kanellopoulos, P. Pintelas. “Handling Imbalanced Datasets: A Review.” GESTS International Transactions on Computer Science and Engineering (2005): 25-36.

Nitesh V. Chawla, Kevin W. Bowyer, Lawrence O. Hall, W. Philip Kegelmeyer. “SMOTE: Synthetic Minority Over-sampling Technique” (2002).

Karl Weiss, Taghi M. Khoshgoftaar, DingDing Wang. “A Survey of Transfer Learning” (2016). https://doi.org/10.1186/s40537-016-0043-6.

Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam. “MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications” (2017).

Andrew Howard, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, Mingxing Tan, Weijun Wang, Yukun Zhu, Ruoming Pang, Vijay Vasudevan, Quoc V. Le, Hartwig Adam. “Searching for MobileNetV3” (2019).

François Chollet. “Xception: Deep Learning with Depthwise Separable Convolutions” (2017).

C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, A. Rabinovich. “Going Deeper with Convolutions” (2015).

C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, Z. Wojna. “Rethinking the Inception Architecture for Computer Vision” (2015).

C. Szegedy, S. Ioffe, V. Vanhoucke. “Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning” (2016).

Mingxing Tan, Quoc V. Le. “EfficientNetV2: Smaller Models and Faster Training” (2021).

Mohammad Hossin, M. N. Sulaiman. “A Review on Evaluation Metrics for Data Classification Evaluations.” International Journal of Data Mining & Knowledge Management Process 5 (2015): 01-11. https://doi.org/10.5121/ijdkp.2015.5201.