Self-Supervised Waste Image Classification Model Considering Intra-Class Diversity 


Vol. 14, No. 5, pp. 343-351, May 2025
https://doi.org/10.3745/TKIPS.2025.14.5.343


  Abstract

In this paper, we propose ReCLNet (Reconstructed Patch Contrastive Learning Network), a self-supervised learning model that integrates MAE (Masked Autoencoder) and CL (Contrastive Learning) to address the challenge of generalizing waste image classification, which is often hindered by high intra-class visual diversity and complex backgrounds. To resolve the representation interference between reconstruction and contrastive learning observed in previous combined models, ReCLNet uses reconstructed patches for contrastive learning and adopts a dual-encoder architecture with shared weights, thereby ensuring both representation alignment and training consistency. Experimental results show that ReCLNet achieves lower loss and higher classification accuracy than TMAC (Transformer-based MAE using Contrastive learning), MAE, CL, and supervised learning-based models. Furthermore, it demonstrates robust and generalized representation learning even in environments with high intra-class variability. These results suggest that ReCLNet has strong potential not only for real-world automated waste classification systems but also for a wide range of applications that require the understanding of complex visual information.
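The pipeline described above (mask and reconstruct patches, then contrast the reconstructed view against the original view through shared-weight encoders) can be sketched in miniature. This is a hedged NumPy toy, not the authors' implementation: the linear "encoder" and "decoder", the 75% mask ratio, and the InfoNCE temperature are illustrative assumptions standing in for the trained Transformer components.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    """Stand-in for the shared-weight encoder: linear map + L2 normalization."""
    z = x @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

# Toy batch: 4 images, each flattened to 16 "patches" of dimension 8
B, P, D, H = 4, 16, 8, 5
patches = rng.normal(size=(B, P, D))

# 1) MAE-style step: mask 75% of patches and "reconstruct" them
#    (a random linear decoder stands in for the trained MAE decoder)
mask = rng.random((B, P)) < 0.75
W_dec = rng.normal(size=(D, D))
recon = patches.copy()
recon[mask] = patches[mask] @ W_dec  # hypothetical reconstruction

# 2) Dual-encoder step: both views pass through the SAME weights W_enc,
#    mirroring the shared-weight design described in the abstract
W_enc = rng.normal(size=(D, H))
z_orig = encode(patches.mean(axis=1), W_enc)   # view 1: original patches
z_recon = encode(recon.mean(axis=1), W_enc)    # view 2: reconstructed patches

# 3) InfoNCE contrastive loss: (orig_i, recon_i) pairs are positives,
#    all other pairings in the batch serve as negatives
tau = 0.1
logits = z_orig @ z_recon.T / tau
logits -= logits.max(axis=1, keepdims=True)    # numerical stability
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
loss = -np.log(probs[np.arange(B), np.arange(B)]).mean()
```

Because the two views share one set of encoder weights, gradients from the contrastive objective and the reconstruction objective update a single representation, which is the mechanism the abstract credits for avoiding interference between the two losses.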

  Cite this article

[IEEE Style]

H. Choi, J. Yang, N. Moon, J. Kim, "Self-Supervised Waste Image Classification Model Considering Intra-Class Diversity," The Transactions of the Korea Information Processing Society, vol. 14, no. 5, pp. 343-351, 2025. DOI: https://doi.org/10.3745/TKIPS.2025.14.5.343.

[ACM Style]

Hyuksoon Choi, Jinhwan Yang, Nammee Moon, and Jinah Kim. 2025. Self-Supervised Waste Image Classification Model Considering Intra-Class Diversity. The Transactions of the Korea Information Processing Society, 14, 5, (2025), 343-351. DOI: https://doi.org/10.3745/TKIPS.2025.14.5.343.