Compressed LLM-Based Secure Coding Support 


Vol. 14, No. 6, pp. 397-399, Jun. 2025
https://doi.org/10.3745/TKIPS.2025.14.6.397


  Abstract

As the importance of security in the IT industry continues to grow, secure coding has become essential. However, small-scale enterprises often struggle to fully implement secure coding guidelines due to a lack of specialized security personnel. This study proposes a compressed Large Language Model (LLM)-based approach to support secure coding, enabling cost-effective adoption of secure coding guidelines. The proposed method leverages an open-source model fine-tuned on a security dataset for domain-specific learning and applies the Low-Rank Adaptation (LoRA) technique to optimize training efficiency in a single GPU environment. Additionally, it incorporates BF16 conversion and 8-bit GGUF quantization to reduce model size, ensuring operability in standard computing environments. Through experiments, we evaluated the proposed model’s ability to detect security vulnerabilities and generate improved code suggestions. The results demonstrate superior performance over the baseline model in key evaluation metrics, including F1 score and BLEU score.
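
The paper's full text is not reproduced here, but the pipeline the abstract outlines (LoRA fine-tuning in BF16 on a single GPU, followed by adapter merging ahead of 8-bit GGUF quantization) might look roughly like the minimal sketch below, assuming Hugging Face transformers and peft. The base model name, LoRA hyperparameters, and file paths are illustrative assumptions, not the authors' actual settings.

```python
# Hypothetical sketch (not the authors' code): LoRA fine-tuning in BF16
# on a single GPU, then merging the adapters so the checkpoint can be
# converted to GGUF and quantized to 8 bits with llama.cpp.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

BASE = "codellama/CodeLlama-7b-hf"  # placeholder; the paper's base model is not named here

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(
    BASE,
    torch_dtype=torch.bfloat16,  # BF16 roughly halves memory vs. FP32
    device_map="auto",
)

# LoRA: freeze the base weights and train only small rank-r update
# matrices injected into the attention projections.
lora_cfg = LoraConfig(
    r=8,                                  # illustrative rank
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # illustrative target layers
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # typically well under 1% of all weights

# ... fine-tune on the security dataset with a standard Trainer loop ...

# Fold the trained adapters back into the base weights and save a single
# BF16 checkpoint for export.
merged = model.merge_and_unload()
merged.save_pretrained("secure-coding-llm-bf16")
tokenizer.save_pretrained("secure-coding-llm-bf16")

# The saved checkpoint can then be converted to GGUF and quantized to
# 8 bits (Q8_0) with llama.cpp's conversion and llama-quantize tools,
# yielding a model that runs in standard computing environments.
```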

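The two reported metrics could likewise be computed along the following lines, assuming the sacrebleu and scikit-learn packages; the labels and sentences below are invented purely for illustration and are not the paper's data.

```python
# Hypothetical illustration of the paper's two metrics: F1 for
# vulnerability detection and BLEU for generated fix suggestions.
from sacrebleu import corpus_bleu
from sklearn.metrics import f1_score

# Toy detection labels (1 = vulnerable): F1 balances precision and recall.
y_true = [1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 0]
print("F1:", f1_score(y_true, y_pred))

# Toy reference fix vs. model suggestion: BLEU scores n-gram overlap.
references = [["Use parameterized queries instead of string concatenation."]]
hypotheses = ["Use parameterized queries rather than concatenating strings."]
print("BLEU:", corpus_bleu(hypotheses, references).score)
```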

  Cite this article

[IEEE Style]

C. W. Lee and J. Heo, "Compressed LLM-Based Secure Coding Support," The Transactions of the Korea Information Processing Society, vol. 14, no. 6, pp. 397-399, 2025. DOI: https://doi.org/10.3745/TKIPS.2025.14.6.397.

[ACM Style]

Chan Woo Lee and Junyoung Heo. 2025. Compressed LLM-Based Secure Coding Support. The Transactions of the Korea Information Processing Society 14, 6 (2025), 397-399. DOI: https://doi.org/10.3745/TKIPS.2025.14.6.397.