Self-Supervised Learning for Real-World Data Scarcity in Industrial AI

Authors

  • Swetha Ravipudi, Lucid Motors, USA
  • Praveen Kumar Dora Mallareddi, Dollar General Corp, USA
  • Aarthi Anbalagan, Microsoft Corporation, USA

Keywords:

self-supervised learning, contrastive learning, masked autoencoders

Abstract

Self-supervised learning (SSL) has emerged as a transformative approach to mitigating data scarcity in industrial artificial intelligence (AI), where acquiring large labelled datasets is exceptionally expensive. This study explores advanced SSL techniques, including contrastive learning, masked autoencoders, and transformer-based pretraining, to enhance defect detection, predictive maintenance, and smart manufacturing. By utilising SSL, deep learning models can extract robust representations from unlabeled industrial data, improving generalization and adaptability across diverse manufacturing environments.
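For illustration only, the sketch below shows a minimal SimCLR-style contrastive pretraining step on a batch of unlabeled images, the kind of representation learning the abstract describes for industrial data. It is not the authors' implementation: the Encoder class, nt_xent_loss function, and the noise-based augmentations are simplified placeholders, and a practical pipeline would substitute a production backbone (e.g., a ResNet or Vision Transformer) and domain-appropriate augmentations before fine-tuning on a small labelled set for defect detection or predictive maintenance.

# Minimal SimCLR-style contrastive pretraining step on unlabeled images (illustrative sketch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    # Tiny CNN backbone plus projection head; a real system would use e.g. a ResNet or ViT.
    def __init__(self, feat_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.projector = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, feat_dim))

    def forward(self, x):
        return self.projector(self.backbone(x))

def nt_xent_loss(z1, z2, temperature=0.5):
    # NT-Xent: each embedding's positive is the other augmented view of the same image;
    # every other embedding in the batch acts as a negative.
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)      # (2N, D), unit-norm
    sim = z @ z.t() / temperature                           # pairwise cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))              # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

encoder = Encoder()
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)
images = torch.rand(16, 3, 64, 64)                          # stand-in for an unlabeled batch
view1 = images + 0.05 * torch.randn_like(images)            # placeholder "augmentations";
view2 = images + 0.05 * torch.randn_like(images)            # real pipelines use crops, flips, etc.
loss = nt_xent_loss(encoder(view1), encoder(view2))
loss.backward()
optimizer.step()
print(f"contrastive loss: {loss.item():.4f}")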




Published

17-01-2025

How to Cite

[1]
Swetha Ravipudi, Praveen Kumar Dora Mallareddi, and Aarthi Anbalagan, “Self-Supervised Learning for Real-World Data Scarcity in Industrial AI”, Newark J. Hum. Centric AI Robot Inter., vol. 5, pp. 1–41, Jan. 2025, Accessed: Apr. 29, 2025. [Online]. Available: https://njhcair.org/index.php/publication/article/view/22