MITC: A Memory-Efficient Self-supervised Vision Transformer for Encrypted Traffic Classification via Selective Input Compression 


Vol. 14,  No. 10, pp. 746-755, Oct.  2025
https://doi.org/10.3745/TKIPS.2025.14.10.746


  Abstract

In this paper, we address two problems of existing state-of-the-art encrypted traffic classification models, namely low robustness and high computational complexity, both of which stem from their use of encrypted payloads as training data, and we propose the Memory Improved Traffic Classifier (MITC) to resolve them. By relying only on the header features of the IP and TCP/UDP layers, MITC maintains consistent performance even when the encryption algorithm changes, and it achieves results comparable to those of prior models while reducing the input size and computational complexity relative to the existing vision transformer model. MITC contributes to research on AI-based encrypted traffic classification by offering a realistic solution that can be deployed in actual network environments.
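To make the selective input compression idea concrete, the following is a minimal sketch (not the authors' implementation) of how a packet's encrypted payload can be discarded while keeping only the IP and TCP/UDP header bytes as a small fixed-length model input. The vector length MAX_LEN, the normalization to [0, 1], and the function name compress_packet are illustrative assumptions; the paper's exact feature layout may differ.

```python
# Sketch of selective input compression: keep only IP + TCP/UDP header bytes,
# drop the encrypted payload, and produce a fixed-length normalized vector.
import struct

MAX_LEN = 64  # assumed fixed per-packet input length (not specified in the abstract)

def compress_packet(raw: bytes) -> list[float]:
    """Extract IP and TCP/UDP header bytes from a raw IPv4 packet and
    return a zero-padded vector of length MAX_LEN, scaled to [0, 1]."""
    ihl = (raw[0] & 0x0F) * 4             # IPv4 header length in bytes
    proto = raw[9]                        # protocol field: 6 = TCP, 17 = UDP
    ip_header = raw[:ihl]

    if proto == 6:                        # TCP: data offset gives header length
        data_offset = (raw[ihl + 12] >> 4) * 4
        l4_header = raw[ihl:ihl + data_offset]
    elif proto == 17:                     # UDP: fixed 8-byte header
        l4_header = raw[ihl:ihl + 8]
    else:
        l4_header = b""

    headers = (ip_header + l4_header)[:MAX_LEN]
    padded = headers + b"\x00" * (MAX_LEN - len(headers))
    return [b / 255.0 for b in padded]

# Usage example: a hand-built 20-byte IPv4 header followed by an 8-byte UDP header.
if __name__ == "__main__":
    ip = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 28, 1, 0, 64, 17, 0,
                     bytes([10, 0, 0, 1]), bytes([10, 0, 0, 2]))
    udp = struct.pack("!HHHH", 5353, 5353, 8, 0)
    vec = compress_packet(ip + udp)
    print(len(vec), vec[:8])
```

Because the payload never enters the model, the input stays compact regardless of packet size, which is what allows the memory and computation savings described above.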


  Cite this article

[IEEE Style]

T. Kim, C. Kim, J. M. Youn, "MITC: A Memory-Efficient Self-supervised Vision Transformer for Encrypted Traffic Classification via Selective Input Compression," The Transactions of the Korea Information Processing Society, vol. 14, no. 10, pp. 746-755, 2025. DOI: https://doi.org/10.3745/TKIPS.2025.14.10.746.

[ACM Style]

Tae-yun Kim, Chan-hyung Kim, and Jonghee M. Youn. 2025. MITC: A Memory-Efficient Self-supervised Vision Transformer for Encrypted Traffic Classification via Selective Input Compression. The Transactions of the Korea Information Processing Society, 14, 10, (2025), 746-755. DOI: https://doi.org/10.3745/TKIPS.2025.14.10.746.