Detection of Indonesian Traffic Signs Using Transfer Learning
DOI: https://doi.org/10.62951/modem.v4i1.827

Keywords: Computer Vision, Deep Learning, Indonesian Traffic Signs, Transfer Learning, YOLOv5

Abstract
This study aims to develop an Indonesian traffic sign detection system using a transfer learning approach to improve road safety and traffic efficiency. The dataset was obtained from Kaggle and consists of 2,100 images across 21 traffic sign classes. The research stages include data collection, preprocessing to reduce noise and normalize image brightness, object detection using YOLOv5, and classification based on transfer learning with ResNet, VGG-16, and MobileNet architectures. Model performance was evaluated using accuracy, precision, recall, and F1-score metrics. Experimental results indicate that the YOLOv5 model is capable of detecting traffic sign objects; however, the classification performance remains relatively low, with a mean Average Precision (mAP) value of 0.17. These findings suggest that further optimization is required in data preprocessing, dataset quality, and model parameter tuning to achieve better performance. This study demonstrates that transfer learning has significant potential for developing computer vision-based traffic sign detection systems, although further improvements are necessary to ensure robustness under real-world Indonesian traffic conditions.
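The abstract mentions preprocessing to normalize image brightness but does not specify the method. One common approach, shown here as an illustrative sketch rather than the authors' actual pipeline, is to rescale pixel intensities so every image reaches the same mean brightness; the `target_mean` value of 128 is an assumption chosen for this example.

```python
import numpy as np

def normalize_brightness(img: np.ndarray, target_mean: float = 128.0) -> np.ndarray:
    """Rescale pixel intensities so the image's mean brightness matches target_mean.

    This is one simple normalization scheme (assumed here for illustration);
    the paper does not state which method was used.
    """
    current_mean = img.mean()
    if current_mean == 0:  # avoid division by zero on all-black images
        return img.copy()
    scaled = img.astype(np.float64) * (target_mean / current_mean)
    # Clip back into the valid 8-bit range before converting
    return np.clip(scaled, 0, 255).astype(np.uint8)

# Example: a uniformly dark 4x4 grayscale patch is brightened to the target mean
dark = np.full((4, 4), 40, dtype=np.uint8)
bright = normalize_brightness(dark)
print(bright.mean())  # 128.0
```

Scaling by the mean preserves relative contrast within the image, which matters for traffic signs whose color and shape carry the class information.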
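The evaluation metrics named in the abstract (precision, recall, F1-score) are computed per class from true-positive, false-positive, and false-negative counts. A minimal pure-Python sketch, using hypothetical class names for three of the 21 sign classes:

```python
def per_class_prf(y_true, y_pred):
    """Compute (precision, recall, F1) for each class label.

    precision = TP / (TP + FP), recall = TP / (TP + FN),
    F1 = harmonic mean of precision and recall.
    """
    classes = sorted(set(y_true) | set(y_pred))
    scores = {}
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        scores[c] = (precision, recall, f1)
    return scores

# Toy labels (class names are illustrative, not the dataset's actual labels)
y_true = ["stop", "stop", "yield", "limit", "limit"]
y_pred = ["stop", "yield", "yield", "limit", "stop"]
print(per_class_prf(y_true, y_pred)["stop"])  # (0.5, 0.5, 0.5)
```

Macro-averaging these per-class scores over all 21 classes gives a single summary number; the mAP of 0.17 reported in the abstract is the analogous detection-side summary, averaging per-class average precision over the precision-recall curve.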
License
Copyright (c) 2026 Modem : Jurnal Informatika dan Sains Teknologi.

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.


